Other species can communicate, of course - but a parrot speaking human words is merely mimicry, because only humans can use language.
Other 'minds' can communicate too - but while there is a lot of promise in AI for the future of education, for example, only humans can use language in the way humans do.
Only humans can code-switch: jumping between different registers, different voices and different languages. Only humans enjoy wordplay - malapropisms, spoonerisms and nonsense words. Humans love ambiguity - and ambiguity is everywhere in English!
The problems around the relationship between language and thought are clearly something only humans are struggling with. It does seem, though, that language shapes the way we think - which is not the case with parrots or ChatGPT.
Another issue is that no matter how much 'input' we get when learning a language, there will always be the poverty of the stimulus - in other words, "there is an enormous gap between the linguistic stimuli to which children are exposed and the rich linguistic knowledge they attain".
These are questions for theories of language learning and teaching - behaviourism vs nativism - and they lead us to Chomsky, language acquisition, and what makes us human.
And the latest issue of Philosophy Now magazine takes a look at this:
Contemporary linguists and philosophers have formalized Descartes’ and Cordemoy’s observations about human linguistic possibility into three conditions:
Stimulus-Freedom: Humans can produce new expressions that lack any one-to-one relationship with their environments. Generally, stimuli in a human’s local environment appear to elicit utterances, but not cause them. If human language use is not affixed in some determinate, predictable fashion to stimuli, then language use is not directly caused by situations. Among other things, this means that meaningful expressions can be generated about environments far-removed from the local context in which the person speaks; or even about imaginary contexts. The contrast with animal communication is striking here. For animals, communication is restricted to the local context of its use. Human language use is, in sharp contrast, detachable: a pillar of the human intellect may be the ability to detach oneself from the circumstances in which cognitive resources are deployed without reliance on stimuli to do so. In other words, we can think for ourselves.
Unboundedness: Human language use is not confined to a pre-sorted list of words, phrases, or sentences (as it is with LLMs). Instead, there’s no fixed set of utterances humans can produce. This is the infinite productivity of human language – the unlimited combination and re-combination of finite elements into new forms that convey new, independent meanings.
Appropriateness to Circumstance: That human language use is stimulus-free can be revealing when we reflect that utterances are routinely appropriate to the situations in which they are made and coherent to others who hear them. If human language use is both stimulus-free and not caused by situations, this means its relation to one’s environment must be the more obscure relation of appropriateness. Indeed, language use “is recognized as appropriate by other participants in the discourse situation who might have reacted in similar ways and whose thoughts, evoked by this discourse, correspond to those of the speaker” (Language and Problems of Knowledge, Noam Chomsky, 1988).
Only when all three conditions are simultaneously present does language use take on its special human character. As Chomsky summarized it:
“man has a species-specific capacity, a unique type of intellectual organization which cannot be attributed to peripheral organs or related to general intelligence and which manifests itself in what we may refer to as the ‘creative aspect’ of ordinary language use – its property being both unbounded in scope and stimulus-free. Thus Descartes maintains that language is available for the free expression of thought or for appropriate response in any new context and is undetermined by any fixed association of utterances to external stimuli or physiological states (identifiable in any noncircular fashion)”
(Noam Chomsky, Cartesian Linguistics, 1966).
It is a distinctively human trait to use language in a manner that is simultaneously stimulus-free, unbounded, yet appropriate and coherent to others. Such language use is neither determined (by a stimulus) nor random (inappropriate). This ability enables people to deploy their intellectual resources to any problem, or to create new problems altogether, putting our shared cognitive capacities to use across contexts at will. It is little wonder then why Descartes assigns such importance to language use in his test for other minds. A being that uses language in only one or two of the three ways described can be explained in mechanical terms – but the presence of all three is something that modern technology lacks the tools to create.
Rescuing Mind from the Machines | Issue 168 | Philosophy Now
Or, to put it another way:
The article discusses the misconceptions of "AGI doomers" - those who believe that a future artificial general intelligence (AGI) system could rebel against humanity. The author argues that the key flaw in the doomer's argument is their failure to recognize the "creative aspect of language use" (CALU) - the uniquely human ability to use language in a stimulus-free, unbounded, yet appropriate and coherent manner. The author contends that this extra-mechanical nature of human language and cognition is a critical characteristic that current AI systems lack, and that the doomer's assumption of an AGI system spontaneously developing such capabilities is unfounded.

What the AGI Doomers Misunderstand - Aili
This is quite a debate:
Do large language models (LLMs) use language creatively? Ample intellectual content has been produced recently over whether LLMs generate text sufficiently novel to be considered “creative” or merely synthesize creatively human-generated content without a distinctive contribution of their own. It is one dimension of a highly complex debate that is unfolding over the nature of both LLMs and human intelligence.
This saga has seen contributions from thinkers in a diversity of disciplines, including computer science, robotics, cognitive science, philosophy, and even national security. A notable flashpoint is linguist Noam Chomsky’s fiery critique of ChatGPT and LLMs in The New York Times. This controversial piece illuminates stark divides between scientific approaches to the nature of the human mind, natural and artificial intelligence (AI), and how engineering makes use (or doesn’t) of these notions.
Chomsky’s NYT piece spurred tremendous debates on this subject, as he highlighted his belief that “Intelligence consists not only of creative conjectures but also of creative criticism.” The discourse which has sprung up in the wake of this and other pieces surrounds familiar arguments about the utility of generative linguistics, the role of cognitive science in AI, and even broader matters such as the emergent theory of mind capabilities in LLMs.
I find myself frustrated and baffled. This is good because otherwise, I may not have written this article. But the reasons are not stellar: Chomsky’s rigid communication style has prevented him from leveraging some of the fascinating features of his own linguistic work in a direct and explicit manner to assess LLMs’ capabilities. Conversely, machine learning researchers have so thoroughly indulged in the euphoria of the field’s recent (and real) advancements that they frequently lack the will required to assess whether human cognition is as straightforward as it seems.
I attempt to remedy this here. Where Chomsky’s approach to the mind and the tradition of generative linguistics broadly are brought into AI, they have focused intensely on familiar arguments like the poverty of the stimulus and the innateness of linguistic knowledge or principles. I instead highlight what is known in the rationalist tradition in philosophy and cognitive science as the “creative aspect of language use,” or CALU.
Do Large Language Models Have Minds Like Ours? | Towards AI