This work addresses the development of linguistic and algorithmic software for building a modern system that extracts key content information by applying a broad class of mathematical and linguistic methods for the logical and analytical processing of large character arrays. As part of this research, a generalized scheme for processing arrays of journalistic texts in the media sublanguage is developed; trends are identified in the agreement between the results of natural-language text processing performed by a computer and by a human; and a text model is proposed as a composite of formal models of its components, based on the integration of statistical and formal linguistic methods. An algorithm is developed for extracting elements of meaning from an array of texts on a limited topic; it comprises a block of primary semantic processing, a block for indexing and ranking concepts, a block for establishing relationships, a block for identifying the thematic unit, a block for establishing pairwise co-occurrence, a block for constructing a semantic network, and a block for synthesizing information from the network.
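To make the block structure of such a pipeline concrete, the following is a minimal illustrative sketch, not the authors' algorithm: all function names are hypothetical, concept ranking is reduced to simple corpus frequency, pairwise co-occurrence is counted within whole texts, and the relationship-establishment and thematic-unit blocks are omitted for brevity.

```python
from collections import Counter
from itertools import combinations

def primary_semantic_processing(texts):
    """Primary semantic processing block: tokenize and crudely normalize each text."""
    return [[tok.lower().strip(".,;:!?") for tok in text.split()] for text in texts]

def index_and_rank_concepts(tokenized_texts):
    """Concept indexing and ranking block: rank candidate concepts by corpus frequency."""
    counts = Counter(tok for toks in tokenized_texts for tok in toks)
    return [concept for concept, _ in counts.most_common()]

def establish_pair_occurrence(tokenized_texts, concepts, top_n=50):
    """Pairwise co-occurrence block: count how often top-ranked concepts share a text."""
    top = set(concepts[:top_n])
    pair_counts = Counter()
    for toks in tokenized_texts:
        present = sorted(top.intersection(toks))
        pair_counts.update(combinations(present, 2))
    return pair_counts

def build_semantic_network(pair_counts, min_count=2):
    """Semantic network block: keep sufficiently frequent pairs as weighted edges."""
    network = {}
    for (a, b), count in pair_counts.items():
        if count >= min_count:
            network.setdefault(a, {})[b] = count
            network.setdefault(b, {})[a] = count
    return network

def synthesize_information(network):
    """Synthesis block: order concepts by total edge weight and list their neighbors."""
    ranked = sorted(network.items(), key=lambda kv: sum(kv[1].values()), reverse=True)
    return [(concept, sorted(neigh, key=neigh.get, reverse=True)) for concept, neigh in ranked]

if __name__ == "__main__":
    corpus = [
        "The election campaign focused on the economy and taxes.",
        "Economy and taxes dominated the campaign debate.",
        "The debate touched on the economy, not on taxes.",
    ]
    tokens = primary_semantic_processing(corpus)
    concepts = index_and_rank_concepts(tokens)
    pairs = establish_pair_occurrence(tokens, concepts)
    network = build_semantic_network(pairs)
    for concept, neighbors in synthesize_information(network)[:5]:
        print(concept, "->", neighbors)
```

In a fuller system each block would be replaced by the corresponding linguistic and statistical procedures described above (morphological normalization, relation extraction, thematic-unit identification), while the overall staged architecture would remain the same.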