At a global summit on Artificial Intelligence (AI) in the military domain, delegates from nearly 100 countries, including the United States, China, and Ukraine, discussed the responsible military use of AI. A majority of the attending nations signed a non-binding agreement affirming that humans, not AI, should make critical decisions regarding the use of nuclear weapons.
The REAIM summit
The two-day ‘Responsible AI in the Military Domain (REAIM)’ summit held in Seoul wrapped up with a non-binding declaration called the “Blueprint for Action.” It emphasises the necessity of maintaining human control in decisions concerning nuclear weapons deployment.
The non-binding agreement says it is essential to “maintain human control and involvement for all actions… concerning nuclear weapons employment”.
It adds that AI applications in the military “must be applied in accordance with applicable national and international law”.
“AI applications should be ethical and human-centric,” it adds.
The summit also noted that there was a need for “further discussions… for clear policies and procedures”.
However, the declaration stopped short of outlining sanctions or consequences for any violations of these principles.
China, despite attending the summit, did not sign the declaration, even though it is not legally binding.
Russia, an ally of China, was not invited because of its invasion of Ukraine.
Artificial intelligence in the military
AI is already used in military operations for tasks such as reconnaissance, surveillance, and analysis.
It also has the potential to autonomously select targets, as illustrated by “Lavender,” an AI-based tool reportedly used by Israel in the war in Gaza against Hamas.
The Lavender system is said to mark suspected operatives in the military wings of Hamas and Palestinian Islamic Jihad (PIJ) as potential bombing targets, including low-ranking individuals.
The software analyses data collected through mass surveillance on most of Gaza’s 2.3 million residents, assessing and ranking the likelihood of each person’s involvement in the military wing of Hamas or PIJ.
Each individual is given a rating from 1 to 100, indicating the assessed likelihood of being a militant.
As per reports, even though the system has an error rate of roughly 10 per cent, its outputs were treated “as if it were a human decision”.
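To see why a 10 per cent error rate is significant, consider the base-rate arithmetic. The Python sketch below is purely illustrative: the population size, the assumed share of actual militants, and the symmetric way the error is applied are hypothetical assumptions; only the roughly 10 per cent error rate comes from the reporting.

```python
def expected_outcomes(population: int, base_rate: float,
                      error_rate: float) -> tuple[float, float]:
    """Expected (true positives, false positives), under the simplifying,
    hypothetical assumption that the error rate applies symmetrically as a
    per-person misclassification probability."""
    militants = population * base_rate
    civilians = population - militants
    true_positives = militants * (1 - error_rate)   # correctly flagged
    false_positives = civilians * error_rate        # wrongly flagged
    return true_positives, false_positives

# Hypothetical inputs: 100,000 scored people, a 1 per cent actual
# membership rate, and the reported ~10 per cent error rate.
tp, fp = expected_outcomes(100_000, base_rate=0.01, error_rate=0.10)
print(f"expected true positives:  {tp:.0f}")   # ~900
print(f"expected false positives: {fp:.0f}")   # ~9,900
```

Under these invented numbers, wrongly flagged people would outnumber correctly flagged ones by roughly eleven to one, which is why treating such outputs as final decisions is consequential.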
(With inputs from agencies)