Based on what I know of recent developments in artificial intelligence (AI), large language models (LLMs) are a type of AI trained on massive corpora of text. They can generate text, translate languages, and answer questions in a way that is often indistinguishable from a human.
However, one thing that has been bugging me is the question, “Do LLMs really understand language in the same way that humans do?”

I am no expert in linguistics, epistemology, cognitive psychology, or philosophy, so I cannot really offer an informed view on how humans understand language and give meaning to words and sentences. All my philosophy and cognitive/learning psychology classes were more than two decades ago, and admittedly I have not kept abreast of developments in those fields since. So I guess I am not in a position to answer definitively.
However, from the definition of what LLMs are (AI based on statistical models, trained on massive sets of existing text to determine which combinations or sequences of words are likely to make sense in response to a query), it seems to me that LLMs do not understand language in the same way that humans do. Nor do they assign and derive meaning from words as we humans do.
When an LLM outputs a word or sentence in response to a prompt, it is not because it understands the meaning of the words or sentences in that prompt; rather, it treats the prompt as a pattern of words and predicts, based on its training data, the most probable sequence of words to follow it.
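To make that idea concrete, here is a deliberately tiny sketch, in Python, of the statistical intuition behind it: a bigram model that counts which word tends to follow which in a toy corpus and then “responds” by picking the most frequent continuation. This is only my own illustration of the general idea; real LLMs are neural networks trained on vastly larger corpora and much longer contexts, and none of the names or data below come from any actual system.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the "massive sets of existing text".
corpus = (
    "the cat sat on the mat . "
    "the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the word that most often followed `word` in the toy corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(most_likely_next("the"))  # -> 'cat' (the most frequent continuation here)
print(most_likely_next("sat"))  # -> 'on'
```

The point of the toy is that nothing in it “knows” what a cat or a mat is; it only tracks which words tend to co-occur, which is the statistical spirit, if not the scale or sophistication, of what LLMs do.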
This is very different from how humans understand language. Humans have a deep grasp of the meaning of words and sentences, using language to communicate thoughts and ideas, to reason and solve problems, and to create art and literature. LLMs do not have this level of understanding. We also learn not merely from text databases and corpora, but from social and personal, individual experiences with words and meaning. We are not limited by the “data” we are fed; in fact, we have some control over the experiences we seek out to “feed” into our meaning-making with words and language.
At least, that is how I understand it.
Could LLMs ever be like humans in understanding and assigning meaning to words and language?
I cannot say. My quick Google search suggests it could be possible, since LLMs are still at an early stage of development and no one really knows definitively what the next step is.
(These are half-baked thoughts of mine, which sometimes come to me whilst in the shower, or having coffee, or reading some article on the web, or just simply sitting down pondering about life and its multi-facetedness.)