The issue of privacy in the era of Artificial Intelligence is once again at the center of public debate, after viral allegations suggested that Google was using the content of users’ personal emails to train its AI model, Gemini. The technology giant responded this weekend, dismissing the claims as “misleading” and clarifying the technical distinction between its productivity tools and the training of its language models.
The controversy originated in an article published, and later corrected, by the security company Malwarebytes. The initial report suggested that enabling Gmail’s “Smart Features” implied automatic consent to the use of private data in the development of Gemini. The news spread quickly across social media, fueled by growing fears that personal data is being fed, unchecked, into large language models (LLMs).
In a statement to The Verge, Google spokesperson Jenny Thomson said: “These reports are misleading. We haven’t changed anyone’s settings, Gmail’s Smart Features have been around for many years, and we don’t use Gmail content to train our Gemini AI model.”
What caused the confusion
Malwarebytes ultimately issued a correction, acknowledging that Google’s recent documentation overhaul had misled its analysts. The security company noted that the vague language Google uses around the term “smart” (Smart Features) led many to assume a direct connection to generative AI, at a time when Gemini is being integrated into almost all of the brand’s products.
The “Smart Features” in question actually refer to processing algorithms that have operated in Gmail for several years. These tools automatically filter unsolicited mail (spam), categorize messages into the “Promotions” and “Social” tabs, and suggest text as users type (Smart Compose). According to the company’s technical documentation, these processes are distinct from the training of foundational AI models.
Lawsuit deepens mistrust
The incident comes at a delicate time for Google: the controversy coincides with a proposed class-action lawsuit filed in federal court in San José, California.
The lawsuit alleges that Google gave Gemini improper access to Gmail, Chat, and Meet data without users’ explicit and informed consent. It argues that the company turned privacy into an opt-out system, enabled by default, forcing users to hunt for obscure settings to protect their information, in alleged violation of state privacy statutes.
User control options
Despite Google’s assurances that the data is not used to “teach” its AI, Gmail’s settings allow users to disable algorithmic processing of their messages.
Privacy experts note that the “Smart features and personalization” section can be found in Gmail’s general settings menu. Although Google says the option should be off by default for new users, anecdotal reports suggest that many people find it active in their accounts. Disabling it prevents content analysis for automatic features, though at the cost of a more manual, less filtered email experience.
This episode serves as a reminder of the constant tension between the convenience of modern digital tools and the opacity of the terms of service that govern them.