NEW DELHI: Artificial Intelligence (AI) could replace or change the nature of social science research, scientists from the University of Waterloo and the University of Toronto in Canada, and Yale University and the University of Pennsylvania in the US, have said in an article. “What we wanted to explore in this article is how social science research practices can be adapted, even reinvented, to harness the power of AI,” said Igor Grossmann, professor of psychology at Waterloo.
Large language models (LLMs), of which ChatGPT and Google Bard are examples, are increasingly capable of simulating human-like responses and behaviours, having been trained on vast amounts of text data, their article published in the journal Science said.
This, they said, offered novel opportunities for testing theories and hypotheses about human behaviour at great scale and speed.
The goals of social science research, they said, involve obtaining generalised representations of the characteristics of individuals, groups, cultures, and their dynamics.
With the advent of advanced AI systems, the scientists said, the landscape of data collection in the social sciences, which traditionally relies on methods such as questionnaires, behavioural tests, observational studies, and experiments, may shift.
“AI models can represent a vast array of human experiences and perspectives, possibly giving them a higher degree of freedom to generate diverse responses than conventional human participant methods, which can help to reduce generalisability concerns in research,” said Grossmann.
“LLMs might supplant human participants for data collection,” said Philip Tetlock, professor of psychology at the University of Pennsylvania.
“In fact, LLMs have already demonstrated their ability to generate realistic survey responses concerning consumer behaviour.
“Large language models will revolutionize human-based forecasting in the next 3 years,” said Tetlock.
Tetlock also said that in serious policy debates, it wouldn’t make sense for humans unassisted by AIs to venture probabilistic judgments.
“I put a 90 per cent chance on that. Of course, how humans react to all of that is another matter,” said Tetlock.
Studies using simulated participants could generate novel hypotheses that could then be confirmed in human populations, the scientists said, even as opinions are divided on the feasibility of this application of AI.
The scientists cautioned, however, that LLMs are often trained to exclude the socio-cultural biases that exist among real-life humans, meaning that sociologists using AI in this way would not be able to study those biases.
Researchers will need to establish guidelines for the governance of LLMs in research, said Dawn Parker, a co-author on the article from the University of Waterloo.
“Pragmatic concerns with data quality, fairness, and equity of access to the powerful AI systems will be substantial,” Parker said.
“So, we must ensure that social science LLMs, like all scientific models, are open-source, meaning that their algorithms and ideally data are available to all to scrutinize, test, and modify.
“Only by maintaining transparency and replicability can we ensure that AI-assisted social science research truly contributes to our understanding of human experience,” said Parker.