WhatsApp Meta AI Optional Feature Sparks Backlash Over Removal Restrictions

WhatsApp’s new Meta AI feature is causing a stir, as users express frustration over the assistant’s permanent presence in the app. Despite WhatsApp describing the tool as “entirely optional,” it cannot be removed—raising concerns over user control and data privacy.

The new Meta AI assistant, represented by a persistent blue circle with pink and green accents, now sits at the bottom right of users’ Chats screens. Tapping it opens a chatbot built to answer questions and offer suggestions. But many users are voicing frustration that the feature can’t be removed, calling into question how “optional” it really is.

In a statement to the BBC, WhatsApp said it views the AI feature similarly to other permanent elements in the app like Channels and Status, adding, “We think giving people these options is a good thing and we’re always listening to feedback from our users.”

The controversy mirrors the backlash Microsoft faced over its now-adjusted Recall feature, which was initially always-on. The pressure led Microsoft to allow users to disable it—something WhatsApp has yet to offer for Meta AI.

Limited Rollout, But Visible Presence

Not all users can access the feature yet. According to Meta, the AI assistant is only available in select countries during this initial rollout. If you don’t see the blue circle or the new “Ask Meta AI or Search” bar at the top of your screen, it may not be live in your region yet.

The Meta AI assistant is also being integrated into Instagram and Facebook Messenger, and is powered by Meta’s Llama 4, the company’s latest large language model.

Upon first use, WhatsApp presents a detailed disclaimer explaining the assistant is “optional” and offers help with ideas, questions, and learning. But critics argue that the inability to disable or hide the feature contradicts Meta’s claims.

Concerns Around Accuracy and Privacy

In user tests, Meta AI performed well on simple queries, such as delivering a detailed weather report for Glasgow. However, it stumbled by providing a follow-up link that directed to Charing Cross in London, not the similarly named area in Glasgow, highlighting ongoing issues with AI hallucination and location-based accuracy.

Privacy experts are sounding the alarm about data usage and transparency. AI and privacy advisor Dr. Kris Shrishak criticized Meta for “exploiting its existing market” and warned that users are being turned into test subjects for Meta’s AI tools. “No one should be forced to use AI,” he told the BBC.

He also raised ethical concerns about Meta’s data practices, especially in light of ongoing legal action. The Atlantic recently reported that Meta may have trained its Llama AI models on millions of pirated books and research papers from sites like LibGen, sparking lawsuits from author groups in the UK and globally.

WhatsApp and Meta’s Response

Meta has not commented on The Atlantic’s investigation but claims Meta AI only interacts with content users explicitly share with it, while all other WhatsApp chats remain end-to-end encrypted.

Still, the UK’s Information Commissioner’s Office (ICO) says it is monitoring the use of AI in WhatsApp closely:

“Organisations who want to use people’s personal details to train or use generative AI models need to comply with all their data protection obligations.”

Meta’s own guidelines caution users to avoid sharing sensitive personal information, suggesting that anything shared with the AI could be retained or analyzed.

As AI becomes more deeply embedded in messaging apps, user pushback over lack of control, unclear data usage, and privacy risks is becoming harder for companies like Meta to ignore. For now, WhatsApp’s “optional” AI assistant may be here to stay—whether users want it or not.
