Amazon, Microsoft may be putting world at risk of killer AI, says report
France24.com - 22/08/2019 - 01:33
Amazon, Microsoft and Intel are among leading tech companies that could
spearhead a global AI arms race, according to a report that surveyed major
players from the sector about their stance on lethal autonomous weapons.
Dutch NGO Pax ranked 50 companies by three criteria: whether they were
developing technology that could be relevant to killer robots, whether they
were working on related military projects, and whether they had committed to
abstaining from such work in the future.
"Why are companies like Microsoft and Amazon not denying that they're
currently developing these highly controversial weapons, which could decide to
kill people without direct human involvement?" said Frank Slijper, lead
author of the report published Monday.
The use of AI to allow weapon systems to autonomously select and attack
targets has sparked ethical debates in recent years, with critics warning that
such weapons would jeopardize international security and herald a third
revolution in warfare, after gunpowder and the atomic bomb.
A panel of government experts debated policy options regarding lethal
autonomous weapons at a meeting of the United Nations Convention on Certain
Conventional Weapons in Geneva on Tuesday and Wednesday -- though it has been
difficult so far to achieve an international consensus.
Google, which last year published guiding principles eschewing AI for use
in weapons systems, was among seven companies found to be engaging in
"best practice" in the analysis, which spanned 12 countries, as was
Japan's SoftBank, best known for its humanoid Pepper robot.
Twenty-two companies were of "medium concern," while 21 fell into
a "high concern" category, notably Amazon and Microsoft, which are both
bidding for a $10 billion Pentagon contract to provide the cloud infrastructure
for the US military.
"Autonomous weapons will inevitably become scalable weapons of mass
destruction, because if the human is not in the loop, then a single person can
launch a million weapons or a hundred million weapons," Stuart Russell, a
computer science professor at the University of California, Berkeley told AFP
on Wednesday.
"The fact is that autonomous weapons are going to be developed by
corporations, and in terms of a campaign to prevent autonomous weapons from
becoming widespread, they can play a very big role," he added.
The development of AI for military purposes has triggered debate and
protests within the industry: last year Google declined to renew a Pentagon
contract called Project Maven, which used machine learning to distinguish people
and objects in drone videos.
It also dropped out of the running for Joint Enterprise Defense
Infrastructure (JEDI), the cloud contract that Amazon and Microsoft are hoping
to bag.
The report noted that Microsoft employees had also voiced their opposition
to a US Army contract for an augmented reality headset, HoloLens, aimed at
"increasing lethality" on the battlefield.
- What they might look like -
According to Russell, "anything that's currently a weapon, people are
working on autonomous versions, whether it's tanks, fighter aircraft, or
submarines."
Israel's Harpy is an autonomous drone that already exists,
"loitering" in a target area and selecting sites to hit.
More worrying still are new categories of autonomous weapons that don't yet
exist -- these could include armed mini-drones like those featured in the 2017
short film "Slaughterbots."
"With that type of weapon, you could send a million of them in a
container or cargo aircraft -- so they have destructive capacity of a nuclear
bomb but leave all the buildings behind," said Russell.
Using facial recognition technology, the drones could "wipe out one
ethnic group or one gender, or using social media information you could wipe
out all people with a political view."
The European Union in April published guidelines for how companies and
governments should develop AI, including the need for human oversight, working
towards societal and environmental wellbeing in a non-discriminatory way, and
respecting privacy.
Russell argued it was essential to take the next step in the form of an
international ban on lethal AI, which could be summarized as "machines that
can decide to kill humans shall not be developed, deployed, or used."
© 2019 AFP