Authors: Rudi Cilibrasi (CWI), Paul M. B. Vitanyi (CWI, University of Amsterdam, National ICT of Australia)
Comments: 29 pages, 10 figures
Subj-class: Computation and Language; Artificial Intelligence; Databases; Information Retrieval; Learning
ACM-class: I.2.4; I.2.7

We propose a new method to extract semantic knowledge from the World Wide Web for both supervised and unsupervised learning, using the Google search engine in an unconventional manner. The approach is novel in its unrestricted problem domain, simplicity of implementation, and manifestly ontological underpinnings. We give evidence of elementary learning of the semantics of concepts, in contrast to most prior approaches. The method works as follows: the World Wide Web is the largest database on earth, and it induces a probability mass function, the Google distribution, via page counts for combinations of search queries. This distribution allows us to tap the latent semantic knowledge on the web. Shannon's coding theorem is used to establish a code length associated with each search query. Viewing this mapping as a data compressor, we connect to earlier work on Normalized Compression Distance. We give applications in (i) unsupervised hierarchical clustering, demonstrating the ability to distinguish between colors and numbers, and to distinguish between 17th-century Dutch painters; (ii) supervised concept learning by example, using Support Vector Machines, demonstrating the ability to understand electrical terms, religious terms, and emergency incidents, and by conducting a massive experiment in understanding WordNet categories; and (iii) matching of meaning, in an example of automatic English-Spanish translation.

http://www.arxiv.org/abs/cs.CL/0412098
http://www.cnews.ru/newtop/index.shtml?2005/01/28/173708
Openmeta: The Model of Language as a System of Questions
http://www.livejournal.com/community/openmeta/37584.html?thread=540624#t540624
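As a rough illustration of the distance measure the abstract describes, here is a minimal Python sketch of the Normalized Google Distance (NGD) computed from page counts. The page counts, the `ngd` helper, and the total-index-size constant `N` are all illustrative stand-ins, not the authors' code; in the actual method the counts come from live search-engine queries.

```python
import math

# Total number of pages indexed by the search engine.
# Placeholder value; the paper uses the index size Google reported at the time.
N = 8_000_000_000

def ngd(fx: int, fy: int, fxy: int, n: int = N) -> float:
    """Normalized Google Distance between terms x and y.

    fx  -- page count for searches on x alone
    fy  -- page count for searches on y alone
    fxy -- page count for pages containing both x and y
    n   -- total number of indexed pages

    NGD(x, y) = (max(log fx, log fy) - log fxy)
                / (log n - min(log fx, log fy))

    The ratio of log differences is independent of the logarithm base.
    """
    lx, ly, lxy = math.log(fx), math.log(fy), math.log(fxy)
    return (max(lx, ly) - lxy) / (math.log(n) - min(lx, ly))

# Hypothetical counts: two terms that co-occur frequently get a small
# distance; unrelated terms (rarely co-occurring) get a distance near 1.
print(ngd(fx=46_700_000, fy=12_200_000, fxy=2_630_000))
```

Pairwise NGD values over a set of terms yield a distance matrix, which is what the unsupervised hierarchical clustering and the SVM-based concept learning mentioned in the abstract operate on.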