
Google researchers highlight real-world risks of AI

We hear a lot about artificial intelligence (AI) these days, but in the telecoms world the term is thrown about rather casually, as though it’s simply a really advanced algorithmic tool to make Siri more useful.

AI is typically discussed in the context of big data, the IoT and augmented/virtual reality apps. At the upcoming ITU Telecom World in Bangkok, the Leadership Summit and Forum – which features a digital economy theme – lumps AI in with other topics such as smart sustainable cities, the role of taxation in driving connectivity, fostering SMEs and digital financial services.

In the IT world, AI is taken more seriously – and treated with far more caution, perhaps too much. Famously, Elon Musk once described AI as “potentially more dangerous than nukes”, while the likes of Bill Gates, Stephen Hawking and Nick Bostrom have warned that AI poses serious dangers if it’s developed with no regard for the potential risks.

But when you look at the actual progress of AI, the risks are less about (say) Skynet declaring war on humanity and more about the practical realities of software agents or Roombas making decisions – and about the ability of humans to correct wrong decisions, or even to teach machines when they’ve made bad ones.

To that end, researchers from Google, the Musk-backed research lab OpenAI, Stanford University and UC Berkeley have released a paper addressing AI risks that are more grounded in the reality of how AI will be used in the coming years – such as smart cleaning robots.

Google Research’s Chris Olah explains in this blog post:

While possible AI safety risks have received a lot of public attention, most previous discussion has been very hypothetical and speculative. We believe it’s essential to ground concerns in real machine learning research, and to start developing practical approaches for engineering AI systems that operate safely and reliably.

The paper touches on five main problems that AI researchers should focus on, using the example of an AI-enabled cleaning robot:

  • Avoiding Negative Side Effects: How can we ensure that an AI system will not disturb its environment in negative ways while pursuing its goals, e.g. a cleaning robot knocking over a vase because it can clean faster by doing so?
  • Avoiding Reward Hacking: How can we avoid gaming of the reward function? For example, we don’t want this cleaning robot simply covering over messes with materials it can’t see through.
  • Scalable Oversight: How can we efficiently ensure that a given AI system respects aspects of the objective that are too expensive to be frequently evaluated during training? For example, if an AI system gets human feedback as it performs a task, it needs to use that feedback efficiently because asking too often would be annoying.
  • Safe Exploration: How do we ensure that an AI system doesn’t make exploratory moves with very negative repercussions? For example, maybe a cleaning robot should experiment with mopping strategies, but clearly it shouldn’t try putting a wet mop in an electrical outlet.
  • Robustness to Distributional Shift: How do we ensure that an AI system recognizes, and behaves robustly, when it’s in an environment very different from its training environment? For example, heuristics learned for a factory workfloor may not be safe enough for an office.
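
To make the first of those problems a little more concrete, here is a minimal, purely illustrative Python sketch – not taken from the paper – of one common framing: penalise the agent’s reward for any change to the environment that isn’t part of its task, so knocking over the vase is no longer “free”. The function name, the dictionary-based state description and the impact_weight parameter are all invented for illustration.

```python
# Hypothetical sketch of an "impact penalty" for avoiding negative side effects.
# The task reward pays the robot for cleaning; the penalty charges it for every
# object it changed that has nothing to do with cleaning (e.g. a broken vase).

def shaped_reward(task_reward, state_before, state_after, impact_weight=0.5):
    """Combine the task reward with a crude side-effect penalty.

    state_before / state_after are toy dicts describing objects in the room;
    anything that changed other than the floor's dirtiness counts as a side effect.
    """
    side_effects = sum(
        1 for obj, status in state_after.items()
        if status != state_before.get(obj) and obj != "floor_dirt"
    )
    return task_reward - impact_weight * side_effects


# The robot cleans the floor but breaks the vase along the way:
before = {"floor_dirt": "dirty", "vase": "intact"}
after = {"floor_dirt": "clean", "vase": "broken"}
print(shaped_reward(task_reward=1.0, state_before=before, state_after=after))  # 0.5
```

The point of the sketch is simply that the shortcut (break the vase, clean faster) now costs the robot reward – though, as the paper discusses, defining what counts as a “side effect” in general is exactly the hard part.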

The paper is available here.

See also: this article from Wired, which offers some extra color on possible AI mishaps that won’t destroy the planet but could still cause real problems.