In this monograph, dimensionality reduction methods and reflective data processing are investigated with respect to their ability to produce high-precision recommendations and to cope with highly unpredictable data sparsity. The reported research is oriented towards constructing a processing model that provides higher-quality recommendations than state-of-the-art collaborative and content-based filtering methods while being no more computationally complex. The results of the theoretical study have been evaluated, according to a well-established methodology, on publicly available data sets and in scenarios reflecting the so-called find-good-items task (rather than low-error rating prediction). Based on the presented analysis and experimental results, the author concludes that vector-space recommendation techniques and dimensionality reduction methods may be combined in a way that preserves high recommendation quality regardless of the amount of heterogeneous data processed.