The news broke last week: following pressure from some of its employees, Google had decided to abandon Project Maven, its project to equip the Pentagon with super-advanced facial recognition technologies. Sundar Pichai, Google's CEO, also took the opportunity to publish the company's new ethical guidelines. This 8,000-word document lays out the situations in which the company proposes to limit the use of artificial intelligence. Emphasising the importance of this mini charter, Pichai stressed that it was a set of "concrete standards that will actively govern our research and product development and will impact our business decisions".

A true distillation of "Don't be evil", Google's founding motto, the charter reaffirms that the company will not pursue technologies that cause harm, or that have the potential to do so, and explicitly will not use artificial intelligence for weapons. AI should be socially beneficial, avoid creating or reinforcing unfair bias, and be tested for safety; it should also protect private data, uphold high standards of scientific excellence, and be accountable to people. Following from this, Google asserts its intention to terminate any use of AI technology when it becomes aware that it violates these principles.

After selling off Boston Dynamics, a company that spread panic across the web with its videos of legged robots jumping and running through the forest, this appears to be Google's next practical gesture aimed at limiting the technological advances linked to the development of artificial intelligence. With this move the Californian giant is turning words into actions, its founders having sounded the alarm some time ago alongside other key figures such as Bill Gates, Elon Musk and the late Stephen Hawking.
Google is certainly not the only one raising questions about the technological development of AI. The "Eye in the Sky" project has become a reference point. Cambridge University researchers have developed an algorithm that can identify movements as "attacks" or "violence" in real time. The practical implementation involves mounting a camera on a Drone Surveillance System (DSS) to monitor crowd movements. The researchers designed the system for drones used by law enforcement agencies for surveillance. They explain how a ScatterNet Hybrid Deep Learning (SHDL) network enables the drone to compare the situations it views against others in order to detect violence. It can distinguish movements such as strangling, punching, kicking, shooting and stabbing. The latest version of the system is 89% accurate, and this accuracy decreases as the number of individuals being observed grows (the more people in view, the less precise the observations). So far, the experiments (with a minimum of two and a maximum of ten individuals) have only been carried out on scenarios played by actors.

Responding to questions from The Register, Amarjot Singh, one of the study's co-authors, says the images the system captures will not be kept in the cloud for any purpose other than those of the software itself. Another problem relates to "false positive" misinterpretations: overzealous or wrongly programmed software could, for example, mistakenly send out an alert about certain sporting events. Another, perhaps much more serious, possibility is that this surveillance system could fall into the hands of an ill-intentioned government, a possibility acknowledged by Singh, who points out: "The system [could potentially] be used to identify and track individuals who the government thinks is violent but in reality might not be […] The designer of the [final] system decides what is 'violent' which is one concern I can think of." He acknowledged that it is quite possible to misuse a system like this, but noted that programming it requires a huge amount of data and advanced programming knowledge, and he hoped that monitoring would be put in place to prevent abuse of the technology. He plans to test the project soon at two music festivals, and also to monitor India's national borders; if the results are conclusive, he hopes to market his surveillance device.
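To make the shape of such a pipeline more concrete, here is a minimal sketch in Python of how a frame-by-frame analysis loop of this kind might be organised: detect the people in a frame, estimate each person's pose, then classify that pose as one of several violent actions or as neutral. The function names (detect_people, classify_pose, analyse_frame) and the placeholder implementations are illustrative assumptions for this article, not the authors' actual SHDL code.

```python
# Minimal sketch of a frame-by-frame violence-detection loop, loosely following
# the pipeline described above: detect people, estimate each person's pose,
# then classify the pose as one of several "violent" actions or as neutral.
# The helper functions below are placeholders standing in for the real
# detection, pose-estimation and SHDL classification models.

from dataclasses import dataclass
from typing import List, Tuple

VIOLENT_CLASSES = {"strangling", "punching", "kicking", "shooting", "stabbing"}


@dataclass
class Detection:
    person_id: int
    keypoints: List[Tuple[float, float]]  # (x, y) body joints for this person
    label: str                            # predicted action, e.g. "punching" or "neutral"


def detect_people(frame) -> List[List[Tuple[float, float]]]:
    """Placeholder: return estimated body keypoints for each person in the frame."""
    return []  # a real implementation would run a person detector + pose estimator here


def classify_pose(keypoints: List[Tuple[float, float]]) -> str:
    """Placeholder: map one person's body keypoints to an action label."""
    return "neutral"  # a real implementation would run the trained classifier here


def analyse_frame(frame) -> List[Detection]:
    """Run the per-frame pipeline: one pose classification per detected person."""
    detections = []
    for person_id, keypoints in enumerate(detect_people(frame)):
        detections.append(Detection(person_id, keypoints, classify_pose(keypoints)))
    return detections


def frame_contains_violence(frame) -> bool:
    """True if at least one detected person is classified as acting violently."""
    return any(d.label in VIOLENT_CLASSES for d in analyse_frame(frame))
```

Because the classification is done person by person, it is easy to see why the reported accuracy would drop as the crowd grows: each additional individual is one more opportunity for occlusion and misclassification.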
These two examples show the extent to which the future of artificial intelligence is riddled with ethical problems. On the one hand, a global corporate giant issues a charter and decides to put an outright brake on its "superpowers" by suspending this use of its AI, judging that a state should not be able to exploit such technological efficiency. On the other, student researchers are willing to do anything to push their technology to the limit and bring it to market, while aware that it could fall into the hands of an unscrupulous state, or even of individuals.
All things considered, it is reassuring that it is the companies and researchers themselves who are aware of these problems, raising the issues and taking precautions. It is not a huge leap to compare the current situation with the early days of genetic engineering in the 1970s, when researchers met at the Asilomar conference and began to consider the possible consequences of their actions should genetically modified bacteria ever spread outside the laboratory. A similar awareness may be emerging among the main players seeking to develop AI applications. A few weeks ago, in an editorial, we mentioned the Villani report, named after the French mathematician and politician, which examined the future of AI in Europe and argued that ethics should be a prerequisite for any further development of AI on the "old continent". At the time we discussed the trap that this kind of exhortation can fall into. While we may be reassured that key players in the AI field are themselves asking ethical questions about their own applications and the possible consequences of their development, it might, on the other hand, be counterproductive to pose these questions first, especially as an abstract principle, when we know that global competition is heating up [1]. It is clear from our two examples that the "misuse" of AI is as likely to be the work of states as of individuals, and we also know that it is fantasy to think you can call a halt to technological development: you need only look at the speed at which countries like China are adopting facial recognition technologies to be convinced of this. In addition, the number of "positive applications" of AI to date seems to far outweigh the harmful uses, and from a risk-benefit point of view it seems clear that we should not cut off our noses to spite our faces by depriving ourselves of these advances (see our assessment of precision agriculture).
So let us take reassurance from the fact that awareness of the "potential risks" of AI is uppermost in the minds of the key players who work with it daily. We will have to wager that the collective intelligence arising from the meeting of all the interested parties is the best guarantee of the beneficial use of AI's super-powers, and that a balance will emerge from the confrontation between the various power blocs involved.
[1] At the #AIForHumanity conference, Antoine Petit, the head of the CNRS [the French National Center for Scientific Research], unveiled the Villani Commission report and gave a warning: "We … should not specialise in ethical questions while, at the same time, the US and China do business and create jobs. Let's not let it appear that this is a world of homogeneous ethical values!" This leads the economist Philippe Silberzahn to say: "In placing AI at the service of ethics, the report makes two mistakes: on the one hand it gives itself no chance to think properly about the ethics of AI because we will be thinking in a vacuum – we can only think by doing – and on the other hand it condemns France to look on from the sidelines."