Apple’s AI Feature Faces Backlash Over Misinformation in News Alerts
Apple's artificial intelligence (AI) feature for summarizing notifications on iPhones has come under fire for generating inaccurate news alerts, raising concerns about the technology's potential to spread misinformation.

Last week, Apple’s AI feature inaccurately summarized a BBC News notification regarding the PDC World Darts Championship semi-final. The notification falsely claimed that British darts player Luke Littler had already won the championship. The error occurred a day before the actual final, which Littler did eventually win.

Shortly after that incident, Apple Intelligence, the tech giant’s AI system, generated another false notification claiming that tennis legend Rafael Nadal had come out as gay.

The BBC has been attempting to address the issue with Apple since December. The broadcaster raised concerns after the AI feature created a misleading headline suggesting that Luigi Mangione, a man arrested for the murder of UnitedHealthcare CEO Brian Thompson, had died by suicide—a claim that was entirely false.

In response to the growing backlash, Apple told the BBC on Monday that it is working on an update to address the problem. The proposed fix will include clarifications indicating when the text in a notification has been generated by Apple Intelligence. Currently, these AI-generated notifications appear as though they originate directly from the source.

“Apple Intelligence features are in beta, and we are continuously improving them based on user feedback,” Apple said in a statement. The company also encouraged users to report any “unexpected notification summaries.”

Apple’s notification summarization issues have impacted other news organizations as well. In November, the AI system falsely notified users that Israeli Prime Minister Benjamin Netanyahu had been arrested. This error was flagged on the social media platform Bluesky by Ken Schwencke, a senior editor at ProPublica.

CNBC has reached out to the BBC and the New York Times for comments on Apple’s proposed solution to the misinformation issue.

AI and the Challenge of Misinformation

Apple promotes its AI-generated notification summaries as a tool to help users manage their lock screens by condensing multiple notifications into a single alert. However, the feature has been plagued by what AI experts call “hallucinations” — instances where AI produces false or misleading information.

Ben Wood, chief analyst at CCS Insight, noted, “Apple is not alone in facing challenges with AI-generated content. We’ve seen many examples of AI confidently delivering inaccuracies, or ‘hallucinations.’”

According to Wood, the problem stems from Apple’s attempt to condense notifications into short summaries. This process sometimes results in mischaracterizations of events that are presented as factual. “Apple had the added complexity of compressing content into very short summaries, which led to erroneous messages,” he said.

Generative AI systems, like the one used by Apple, rely on extensive training data to generate responses. When the system lacks sufficient information, however, it may still produce an answer—sometimes resulting in inaccurate or fabricated claims.

Apple has not specified a timeline for resolving the bug but stated that a fix is expected “in the coming weeks.” Meanwhile, rivals are closely observing how Apple addresses the issue to avoid similar pitfalls in their AI systems.
