Apple Intelligence Could Have Easily Prevented the Luigi Mangione Fake News Saga

Last week, Apple Intelligence produced a piece of misinformation about Luigi Mangione via its notification summary feature. The AI falsely claimed that the suspect in the murder of UnitedHealthcare CEO Brian Thompson had taken his own life.

This error, while unfortunate, is not particularly surprising; AI systems make these kinds of mistakes all the time. What is notable is that Apple failed to prevent it, because it was easily avoidable.

AI Blunders: Amusing or Hazardous

Generative AI systems often produce remarkable results, but it's important to remember that they have no true intelligence, and that leads to some notable blunders.

Many of these mistakes are funny. An AI at a McDonald's drive-through kept adding Chicken McNuggets to an order until it reached 260 of them; Google once claimed that geologists recommend eating one rock per day, and also suggested using glue to stop cheese sliding off pizza; and Microsoft bizarrely recommended a food bank as a tourist attraction.

But there have also been examples of dangerous AI advice. These include an AI-authored book on mushroom foraging that suggested tasting mushrooms as a way to identify poisonous species; mapping apps that directed people toward wildfires rather than away from them; and the Boeing flight-control system implicated in two crashes that killed 346 people.

Embarrassing Instances

The Apple Intelligence summary of a BBC News article was neither funny nor harmful, but it was certainly embarrassing.

Apple Intelligence, which recently launched in the UK, uses AI to summarize and group notifications. One such AI-generated summary falsely suggested that BBC News had published a story claiming Luigi Mangione, the man arrested in connection with the murder of UnitedHealthcare CEO Brian Thompson in New York, had taken his own life. That claim is false.

This isn't the first incident of its kind; a previous Apple Intelligence summary incorrectly stated that Israeli Prime Minister Benjamin Netanyahu had been arrested, when the actual report was that the ICC had issued an arrest warrant for him.

Avoidable Misinformation

It's impossible to eliminate errors like these entirely; mistakes are inherent to generative AI systems.

In the case of Apple's news notification summaries, the problem is compounded: headlines are by nature brief and incomplete, and Apple's AI then tries to condense those already-condensed snippets even further. It's not surprising that this sometimes produces serious misreadings.

Although Apple cannot entirely eliminate these errors, it could reduce the risk on particularly sensitive topics. For instance, it could flag notifications containing keywords such as "killing," "shot," "shooter," "death," and so on for human review before they are pushed, as sketched below.
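To illustrate the idea, here's a minimal sketch in Swift of what such a keyword gate might look like. Everything in it is hypothetical: the keyword list, the SummaryGate type, and the review-queue concept are illustrative assumptions, not Apple's actual implementation.

```swift
import Foundation

// Hypothetical gate that holds AI summaries on sensitive topics for human review.
// The keyword list and types are illustrative, not Apple's actual API.

enum SummaryDisposition {
    case deliverImmediately
    case holdForHumanReview(matchedKeywords: [String])
}

struct SummaryGate {
    // Keywords drawn from the article's examples; a real list would be
    // far larger and likely localized per region.
    let sensitiveKeywords: Set<String> = [
        "killing", "shot", "shooter", "death", "suicide", "murder"
    ]

    func evaluate(_ summary: String) -> SummaryDisposition {
        // Lowercase and split on non-letter characters so "Shooter," still matches.
        let words = summary.lowercased()
            .components(separatedBy: CharacterSet.letters.inverted)
        let matches = words.filter { sensitiveKeywords.contains($0) }
        return matches.isEmpty
            ? .deliverImmediately
            : .holdForHumanReview(matchedKeywords: matches)
    }
}

// Usage: a summary like this would be held back rather than pushed.
let gate = SummaryGate()
switch gate.evaluate("Suspect in CEO killing shot himself, BBC reports") {
case .deliverImmediately:
    print("Push the summary immediately")
case .holdForHumanReview(let keywords):
    print("Hold for human review; matched keywords: \(keywords)")
}
```

A production system would presumably use stemming or an on-device classifier rather than exact word matches, but even a crude gate like this would have caught the Mangione summary.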

In this case, the misstep was merely embarrassing, but it's easy to see how a mistake on a sensitive subject could spark outrage. Imagine a summary that appears to blame the victims of a violent crime or disaster.

Human oversight would mean extra work for the Apple News team, but an around-the-clock review system could be staffed by just a handful of employees working in shifts: covering 24 hours a day, seven days a week amounts to 168 hours, or roughly four to five full-time positions per review seat. For a company of Apple's size, that seems a minor investment compared to the potential repercussions of a PR debacle around a nascent feature.

Image by Jorge Franganillo on Unsplash
