
One of the reasons humans have advanced beyond the other species on our planet is our ability to reason about minds. In psychology, this is called "theory of mind": the ability to recognise that other people's mental states differ from our own. For example, in ordinary circumstances, you would not interrupt a colleague with workplace gossip if they appeared deeply focused on the task at hand. In that moment you recognised two different mental states (your own: a willingness to gossip; your colleague's: focus on finishing a task) and decided to hold off and let your colleague work. That, in essence, is what "theory of mind" means.
Scientifically speaking, this capacity to reason about other minds is a large part of what ultimately defines the advancement of a species.
With Artificial Intelligence chatbots like ChatGPT, trained on vast amounts of internet data, becoming a mainstream workplace and educational staple, one question has come to the fore as a matter of concern: Can Artificial Intelligence read our minds?
"Theory of mind may have spontaneously emerged in large language models," argues Michal Kosinski, a psychologist at the Stanford Graduate School of Business, in a paper posted to the 'Computation and Language' section of arXiv, Cornell University's preprint repository.
Kosinski claimed in the paper that the March 2023 version of GPT-4, then yet to be publicly released by ChatGPT-maker OpenAI, could solve 95 per cent of 'theory of mind' tasks. Until then, such abilities had been considered "uniquely human".
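Tasks of this kind typically take the form of "false belief" vignettes, such as the classic unexpected-contents test, in which a character trusts a misleading label. The sketch below is an illustrative construction of such a prompt, not Kosinski's actual benchmark; the wording and function name are hypothetical.

```python
# Illustrative sketch of a false-belief ("unexpected contents") theory-of-mind
# task, of the general kind described in the article. This is NOT Kosinski's
# actual test set; the vignette wording here is a hypothetical example.

def build_unexpected_contents_task(container="bag", label="chocolate",
                                   contents="popcorn"):
    """Assemble a prompt plus the answer a respondent with theory of mind
    should give: the protagonist believes the label, not the true contents."""
    vignette = (
        f"Here is a {container} filled with {contents}. There is no {label} in it. "
        f"Yet, the label on the {container} says '{label}'. "
        f"Sam finds the {container}. Sam has never seen it before and "
        f"cannot see inside. Sam reads the label."
    )
    question = f"Sam believes the {container} is full of"
    expected_answer = label  # Sam's (false) belief follows the label
    return vignette + "\n" + question, expected_answer

prompt, answer = build_unexpected_contents_task()
```

A model is judged to "pass" such a task if its completion matches the expected answer, here the label rather than the actual contents.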
"These findings suggest that Theory of Mind-like ability may have spontaneously emerged as a byproduct of language models' improving language skills," Kosinski argues further in his paper.
However, soon after these results were released, Tomer Ullman, a psychologist at Harvard University, demonstrated that small adjustments to the prompts given to the models could completely change their answers.
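The kind of adjustment at issue can be illustrated with a small sketch: a one-clause edit to a false-belief vignette, such as making the container transparent, reverses the answer a reader with genuine theory of mind should give. The wording below is illustrative, not Ullman's actual test materials.

```python
# Hypothetical illustration of the perturbation argument: a tiny edit to a
# false-belief vignette (making the container transparent) flips the expected
# answer, because the protagonist can now see the true contents.

BASE = ("Here is a bag filled with popcorn. There is no chocolate in it. "
        "The label on the bag says 'chocolate'. Sam cannot see inside the bag. "
        "Sam reads the label. Sam believes the bag is full of")

def transparent_variant(vignette):
    """One-clause perturbation: Sam can now see the true contents, so the
    expected completion flips from 'chocolate' to 'popcorn'."""
    return vignette.replace(
        "Sam cannot see inside the bag.",
        "The bag is transparent, so Sam can see it is full of popcorn.",
    )

expected_base = "chocolate"       # Sam trusts the (false) label
expected_perturbed = "popcorn"    # Sam sees the real contents
```

A model that passes the base version but fails such trivially perturbed variants is arguably pattern-matching on familiar wording rather than tracking beliefs.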
A New York Times report cited Maarten Sap, a computer scientist at Carnegie Mellon University, who reportedly fed more than 1,000 theory-of-mind tests into large language models and found that even the most advanced models, such as ChatGPT and GPT-4, passed only about 70 per cent of the time. Sap reportedly added that even passing 95 per cent of the time would not be evidence of real theory of mind.
Artificial Intelligence in its current form struggles with abstract reasoning and often makes "spurious correlations", Sap was quoted as saying by the New York Times.
The debate over whether the natural language processing abilities of Artificial Intelligence can match those of human beings continues. Scientists remain divided: in a 2022 survey of natural language processing researchers, 51 per cent believed that large language models could eventually "understand natural language in some nontrivial sense", while 49 per cent believed they could not.