So, how can we teach a computer to speak and think in a quite general way, i.e. without any hardwired lexical or semantic rules? By AWT theory, concepts form density gradients in causal space, so we can detect them in the same way as density fluctuations inside a gas: by the mutual distance of words in the text (i.e. by proximity analysis). The detection of the meaning of a sentence would then not be based on lexical analysis, but on an intuitive approach which compares the mutual distances of words in the sentence against a database covering as large a context as possible.

For example, the phrase "blue water" has a meaning for us because of the relative proximity of these words in many sentences we have already met - such a connection of words makes sense to us quite intuitively, while the phrase "sharp water" does not. Semantic analysis can therefore be based on proximity analysis: comparing the mutual distances of words in a sentence against a large semantic database by an "each with each other" (all-pairs) principle.
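The proximity idea above can be sketched classically: count how often word pairs appear close together in a corpus, then score a phrase by the familiarity of its pairs. This is only a toy illustration under assumed parameters (the corpus, the window size of 3 words, and the scoring function are all my inventions, not part of the original text):

```python
from collections import Counter
from itertools import combinations

# Toy corpus standing in for the "large semantic database" of the text.
corpus = [
    "the blue water of the lake was calm",
    "we swam in the cold blue water",
    "she drank a glass of clear water",
    "the sharp knife cut the blue rope",
]

WINDOW = 3  # maximum mutual distance (in words) still counted as "close"

def cooccurrence_counts(sentences, window=WINDOW):
    """Count how often each unordered word pair appears within `window`
    positions of each other -- the all-pairs ("each with each other")
    proximity analysis described above."""
    counts = Counter()
    for sentence in sentences:
        words = sentence.split()
        for (i, a), (j, b) in combinations(enumerate(words), 2):
            if j - i <= window and a != b:
                counts[frozenset((a, b))] += 1
    return counts

def proximity_score(phrase, counts):
    """Score a phrase by how often its word pairs were seen close together."""
    words = phrase.split()
    return sum(counts[frozenset(p)] for p in combinations(words, 2))

counts = cooccurrence_counts(corpus)
print(proximity_score("blue water", counts))   # seen close together: score 2
print(proximity_score("sharp water", counts))  # never seen close: score 0
```

On this toy corpus, "blue water" scores 2 (the pair occurs closely in two sentences) while "sharp water" scores 0, mirroring the intuitive contrast drawn in the text.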

Such a comparison would require a fast parallel computer, because the von Neumann architecture of contemporary computers is not well adapted to this approach. But an analog neural network based on entangled states of quantum waves (qubits) could be used, because here every density fluctuation interacts with all the others at the same moment. Such a computer could learn to detect the meaning of sentences just by reading a large amount of meaningful text, i.e. by using sentences in context whose meaning is considered valid a priori. This analysis of text corresponds to the learning phase of common neural networks.
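The learning phase described above can be illustrated with a minimal classical, sequential stand-in (nothing here models the quantum/parallel machine itself; the class, window size, and familiarity metric are all illustrative assumptions): the model simply reads sentences taken as meaningful a priori, records which word pairs occur close together, and then judges new phrases by how familiar their pairs are.

```python
from collections import Counter
from itertools import combinations

class ProximityModel:
    """Sketch of the 'learning phase': read meaningful text, record which
    word pairs occur close together, then judge new phrases by the
    fraction of their word pairs already seen during learning."""

    def __init__(self, window=3):
        self.window = window        # maximum word distance counted as "close"
        self.pairs = Counter()      # the accumulated "semantic database"

    def learn(self, sentence):
        """Update pair counts from one sentence of a-priori meaningful text."""
        words = sentence.split()
        for (i, a), (j, b) in combinations(enumerate(words), 2):
            if j - i <= self.window and a != b:
                self.pairs[frozenset((a, b))] += 1

    def familiarity(self, phrase):
        """Fraction of the phrase's word pairs seen during learning."""
        words = phrase.split()
        pairs = [frozenset(p) for p in combinations(words, 2) if len(set(p)) == 2]
        if not pairs:
            return 0.0
        return sum(1 for p in pairs if self.pairs[p] > 0) / len(pairs)

model = ProximityModel()
for s in ["the blue water was cold", "blue water filled the bay"]:
    model.learn(s)

print(model.familiarity("blue water"))   # 1.0: pair seen during learning
print(model.familiarity("sharp water"))  # 0.0: pair never seen
```

Here the "training" is nothing more than accumulating co-occurrence counts, which matches the text's point that reading a large body of valid sentences plays the role of the learning phase of a common neural network.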