Enhanced memory: new point of contention and controversy in the AI market

OpenAI and Google are investing in improved memory capacity for AI assistants, enabling more personalized and contextual responses.
OpenAI and Google seem to agree on one thing: for their AI assistants to move beyond being simple chatbots and become true digital companions, the ability to remember what was said is essential. And last week, it was the turn of Google's Gemini to take the lead by activating what is called "enhanced memory."
Enhanced memory in AI refers to the ability to remember and utilize information from past interactions to provide more personalized and contextual responses, allowing conversations to continue and adapt to user preferences. It may seem like a minor technical detail, but this functionality completely changes the way we interact with AI.
With the new feature, initially available only in English and only to Gemini Advanced subscribers on the Google One AI Premium plan, Gemini can now recall past interactions and use that context to provide more relevant responses. This means users can continue previous conversations without starting from scratch or searching for past threads. Users can also ask Gemini to summarize past discussions and build on existing projects.
“Whether you’re asking a question about something that’s already been discussed or asking Gemini to summarize a previous conversation, Gemini now uses information from relevant chats to craft a response,” Google says.
ChatGPT already lets users turn on memory, but it does not yet offer the improved version [already the subject of speculation, including screenshots, on social media]. Gemini also already had a memory feature; the difference now is that the AI can reference past discussions, as noted above.
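To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python of the general pattern such a feature relies on: summaries of past chats are stored, those relevant to a new message are retrieved, and they are injected as context so the model can pick the conversation back up. The class names and the keyword-overlap retrieval are assumptions made for illustration only; they do not reflect how Gemini or ChatGPT actually implement memory.

```python
# Conceptual illustration only: a toy "conversation memory" layer.
# Nothing here reflects Google's or OpenAI's real implementation; the
# storage scheme and the retrieval heuristic are assumptions made to
# show the general pattern described in the article.

from dataclasses import dataclass, field


@dataclass
class MemoryStore:
    """Keeps short summaries of past chats and finds the relevant ones."""
    summaries: list[str] = field(default_factory=list)

    def remember(self, summary: str) -> None:
        # After a chat ends, a condensed summary is stored instead of the
        # full transcript (cheaper to search and to re-inject later).
        self.summaries.append(summary)

    def recall(self, new_message: str) -> list[str]:
        # Naive relevance check: keyword overlap between the new message
        # and each stored summary. Real systems would use embeddings or
        # a search index instead.
        words = set(new_message.lower().split())
        return [s for s in self.summaries if words & set(s.lower().split())]

    def forget_all(self) -> None:
        # User-controlled deletion, analogous to clearing chat history.
        self.summaries.clear()


def build_prompt(memory: MemoryStore, new_message: str) -> str:
    """Prepend relevant past context so the model can 'continue' a topic."""
    context = memory.recall(new_message)
    header = "\n".join(f"Previously discussed: {c}" for c in context)
    return f"{header}\n\nUser: {new_message}" if header else f"User: {new_message}"


if __name__ == "__main__":
    memory = MemoryStore()
    memory.remember("User is planning a research project on renewable energy.")
    prompt = build_prompt(memory, "Can we pick up the renewable energy project?")
    # The model now sees the earlier project without the user re-explaining it.
    print(prompt)
```

The forget_all method in the sketch hints at the point discussed below: whatever the storage scheme, the user needs a way to review and wipe what has been remembered.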
Benefits and caution

The idea of an assistant that remembers your preferences and projects is appealing to anyone who uses generative AI for recurring tasks.
Think of a student who can resume research without having to re-explain the context, or a professional who delegates tasks to an assistant without repeating instructions. Even in corporate use, this memory can be an advantage for keeping a history of requests and refinements across long projects.
As you can see, the promised possibilities are enormous, but along with convenience comes concern: to what extent are these interactions protected?
In announcing the new feature, in addition to promising to roll it out to more users and languages, Google emphasized that users have control over what information is stored. Chat history can be reviewed, deleted, or kept for a period of time determined by the user via the "Gemini Apps Activity" setting in the profile menu. Gemini will also visually indicate when it is using information from previous conversations to formulate responses.
In this regard, AI assistants' memory can be seen as a double-edged sword. On the one hand, it makes the technology more efficient and intuitive to use. On the other, it raises questions about data security and, in some cases, about this information being used to train models without explicit consent. In other words, more than just a product addition, memory will certainly be a topic of further debate and controversy surrounding generative AI.
The challenge now is to strike a balance between convenience and security. If Big Tech companies really want to earn and convey trust in their AIs, they will need to ensure that memory works as a resource in the user's service, not just another source of data collection.
As regular readers of this newsletter know, the evolution of AI is evident, and memory could be a game changer. The question is who will manage to deliver the best experience without compromising user privacy.
After all, in a booming market, an AI that does not generate trust (real, not artificial...) completely undermines the work of attracting, winning over and retaining new users. And what was supposed to be an indispensable tool ends up as a vague memory...
(*) Alexandre Gonçalves is a journalist, founder of agenteINFORMA – digital content and products, and editor of the newsletters agenteGPT (insights on the ChatGPT user experience) and agenTV3 (which monitors and curates news about the implementation of TV 3.0 in Brazil).