The Ethical Resolve Blog

Skynet starts with data mining: thinking through the ethics of AI

I was recently interviewed by John C. Havens at Mashable about the creation of data ethics and AI ethics committees.

The ethics of artificial intelligence is becoming a much more concrete public discussion, particularly with the recent open letter advocating for a ban on autonomous weapons systems. The letter, organized by the Future of Life Institute and signed by more than 10,000 people, including many AI researchers and prominent tech leaders, calls for an international ban on autonomous weapons systems that can operate without meaningful human input.

This follows on the heels of major media attention earlier in the year to Bill Gates, Elon Musk and Stephen Hawking arguing that artificial super-intelligence poses a future existential threat to humanity (all of whom also signed another FLI open letter). Hawking told the BBC that, “The development of full artificial intelligence could spell the end of the human race.” There are reasons to be skeptical of some of this fear, not least of which is the definitional problem of getting a handle on what counts as AI, whether it would ever have the generalized, incredibly plastic intelligence of human bio-brains, or whether it could maintain machine bodies without humans. (My favorite semi-serious reason for doubting is Baratunde Thurston’s point that if AI looked like human intelligence in aggregate, it would spend all day taking cat pictures and trying to sell the rest of us stuff.)

While it is certainly important to take a long-run view and get ahead of major risks that rise to the level of existential threats, it is perhaps more critical to locate actionable decision points and establish community standards for making sound decisions. While opposing autonomous weapons gets attention and money, the real work of AI ethics is (hopefully) going to be done through collaborative, dialog-heavy decisions about immediate concerns.

We tend to want to think about the ethics of emerging technologies as if there were some switch we could toggle on or off. We act as if, at some future moment, there will be a single binary decision where we choose whether or not to turn on the robot that will lead to an AI apocalypse, and we therefore imagine that ethics exists to tell us not to do it. But such a decision will never present itself, and it is a mistake to treat ethical decision-making as if it will. Instead, we should anticipate a long series of decisions through which values get deeply embedded in the systems we design. This should encourage us to emphasize coordinated efforts that require collaboration, openness, transparency and longevity. Our process for ethical decision-making should fit with, and fit in, the design process.

The public discussion of AI ethics seems to under-appreciate the extent to which AI systems, especially systems that engage with human sociality, are profoundly data-intensive endeavors. The amount of testing and data collection necessary to get a good predictive grip on human behavioral preferences will likely be enormous. Our fear of autonomous weapons (sure, something to plausibly be afraid of) tends to push us toward seeing AI as something embodied in a discrete robot that could do us harm. But Skynet is not just the Terminator; more than anything, it is a ubiquitous sensor network constantly gobbling up data. The more pressing question is how we should handle the dilemmas posed by the massive amounts of data collected, analyzed and used for the predictive activities necessary to support the future of AI.
