#idea: AI should not just have a bigger and better memory, but a more personalizable one.

One of the big opportunities in AI is better memory. By giving models a "memory", they can learn more about us, and then deliver better results, over time. This is great, and I'm a fan. But along with building bigger and better memory, there's an opportunity to also make the memory more customizable.

Here's a simple (naive) implementation. Imagine that in ChatGPT or Claude, there was a way to enter "Memory Instructions". Example: "Remember things I tell you to improve future responses, especially related to business/work. DON'T remember things I reveal about health or anything related to my family. Don't keep memories around for longer than a year unless they are factual and do not change. The world changes quickly and my thinking or position on a topic may change too."

It's a bit like having a real personal assistant. You may have them open any mail that is a bill or junk mail -- but not personal mail. You may want them to remember that you prefer an aisle seat when traveling, but not that you binge-watched a guilty-pleasure TV show last week.

The idea is to have some degree of control over what the system remembers. This would help people get more comfortable with the idea of the AI having memory and context.
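To make the "Memory Instructions" idea concrete, here is a minimal sketch of what such a policy could look like under the hood. Everything in it (the MemoryPolicy class, the topic labels, the one-year retention window) is hypothetical illustration, not a feature of ChatGPT or Claude.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Hypothetical "Memory Instructions" policy -- not an actual ChatGPT/Claude feature.
@dataclass
class MemoryPolicy:
    remember_topics: set = field(default_factory=lambda: {"business", "work"})
    forget_topics: set = field(default_factory=lambda: {"health", "family"})
    max_age: timedelta = timedelta(days=365)  # expire non-factual memories after a year

    def should_store(self, topic: str) -> bool:
        """Store only memories the user has opted into, never blocked topics."""
        return topic in self.remember_topics and topic not in self.forget_topics

    def should_keep(self, stored_at: datetime, is_stable_fact: bool) -> bool:
        """Keep stable facts indefinitely; let opinions and preferences expire."""
        return is_stable_fact or (datetime.now() - stored_at) < self.max_age


# Example: the assistant checks the policy before writing anything to memory.
policy = MemoryPolicy()
print(policy.should_store("business"))  # True  -> safe to remember
print(policy.should_store("health"))    # False -> never remembered
```

The specific fields don't matter; the point is that the user, not the vendor, decides what gets written down and how long it lives.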
How AI Memory Affects Personalization
Summary
AI memory is the ability of artificial intelligence systems to retain and recall information over time, which helps create more personalized user experiences. By understanding user preferences and storing relevant data, AI with memory can customize its responses and interactions to meet individual needs.
- Set memory boundaries: Define what information the AI can and cannot remember, ensuring it aligns with your preferences and privacy needs.
- Review and manage data: Regularly check and update the stored memory settings to ensure the AI retains only what’s necessary and helpful for your goals.
- Use memory strategically: Teach the AI specific details about your preferences or tasks to enhance its ability to provide tailored recommendations and support.
-
ChatGPT recently added a feature called Memory. Over time, the AI can gradually learn about your job, role, and preferences, potentially improving your interactions. But if you're going to keep this feature enabled, I recommend vigilant monitoring. Here's how I recommend you manage it:

1️⃣ Monitor All Additions: Whenever Memory updates, ChatGPT will notify you in the chat. Always review these additions to ensure they are something that will help, not hinder, your workflow across all the various ways you use ChatGPT. You can delete any specific item you don't want it to remember.

2️⃣ Settings Overview: Navigate to Settings > Personalization > Memory. This is where you can toggle the feature on or off. Click the "Manage" button to view and delete items in memory. If you keep Memory turned on, regularly clean up this list so it holds only relevant information.

3️⃣ Active Management: You can actively tell ChatGPT what you want it to remember about you. To do this, start a new chat and explicitly tell it the things you want it to remember (this is essentially adding them to Memory yourself, though ChatGPT will probably paraphrase what you give it when it stores it).

💡 Pro Tip: Evaluate whether Memory enhances your personal use of the tool. If it proves distracting or irrelevant, don't hesitate to turn it off. I have deleted the vast majority of what ChatGPT has stored about me in Memory because it's not relevant to most things I do. I'm considering turning the feature off for myself because I'm not sure it really enhances what I do, and it adds work to keep it "clean." (But before I do, I'm doing a little more experimenting.) For other users, though, I can see where this feature is really valuable.

Keep in mind that Memory does not mean ChatGPT remembers entire conversations - it does NOT. It pulls specific things about you from conversations that it thinks are relevant to inform its responses in the future, making it more personalized over time. For me, it remembers things like the fact that I call it George Paul Thomas IV (or "George" for short) as a nickname, and that I find it appalling when emails or other communications start with "I hope this email finds you well" or anything similar. It does NOT remember the email it wrote that prompted me to tell it never to use that kind of annoying greeting again.

This Memory functionality can dramatically affect how your chats come out (just like Custom Instructions do), so definitely keep an eye on it if you keep it turned on! I'm curious: how are you finding the Memory feature? Have you kept it enabled or decided to switch it off? Have you found any creative ways to use it? Would love to know your thoughts!
-
Agent memory is crucial for engaging, personalized conversations. 🧠 Without it, Large Language Models (LLMs) struggle to maintain coherent, long-term dialogues, hindering their effectiveness in applications like customer service and virtual assistants. Existing memory systems often fall short due to rigid memory granularity and fixed retrieval mechanisms, leading to fragmented representations and insufficient adaptation.

Introducing Reflective Memory Management (RMM), a novel approach designed to overcome these limitations. 🚀 RMM significantly enhances long-term dialogue memory through two key innovations:

• Prospective Reflection: dynamically summarizes interactions into a topic-based memory bank, optimizing memory organization for effective future retrieval.
• Retrospective Reflection: refines retrieval through online reinforcement learning, using LLM-generated attribution signals to learn from past retrieval mistakes and adapt to diverse contexts and user patterns.

RMM enables LLMs to maintain a more nuanced and adaptable memory, leading to more coherent dialogues. It achieves over 10% accuracy improvement on the LongMemEval and MSC datasets compared to baselines without memory management, and over 5% improvement over existing personalized dialogue agents.

Paper: https://lnkd.in/gpHExq75
Authors: Zhen Tan, Jun Yan, I-Hung Hsu, Rujun Han, Zifeng Wang, Long Le, Yiwen Song, Yanfei Chen, Hamid Palangi, George Lee, Anand Iyer, Tianlong Chen, Huan Liu, Chen-Yu Lee, Tomas Pfister

#LLMs #AI #NLP #RAG #MachineLearning #ConversationalAI #MemoryManagement
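For intuition, here is a toy sketch of the topic-based memory bank idea behind Prospective Reflection. This is not the paper's implementation: the real system uses LLM calls for summarization and a retriever refined with reinforcement learning (the Retrospective Reflection step, omitted here), while infer_topic() and summarize() below are hypothetical stand-ins.

```python
from collections import defaultdict

class TopicMemoryBank:
    """Toy topic-keyed memory bank in the spirit of Prospective Reflection."""

    def __init__(self):
        self.bank = defaultdict(list)  # topic -> list of summary strings

    def prospective_reflection(self, user_turn: str, agent_turn: str) -> None:
        """After each exchange, store a compact summary under its topic."""
        topic = infer_topic(user_turn)             # e.g. "travel", "general"
        self.bank[topic].append(summarize(user_turn, agent_turn))

    def retrieve(self, query: str, k: int = 3) -> list:
        """Fetch the k most recent summaries for the query's topic."""
        return self.bank[infer_topic(query)][-k:]


def infer_topic(text: str) -> str:
    # Stand-in for an LLM topic classifier.
    return "travel" if "flight" in text.lower() else "general"

def summarize(user_turn: str, agent_turn: str) -> str:
    # Stand-in for an LLM summarizer.
    return f"user: {user_turn[:60]} | agent: {agent_turn[:60]}"


memory = TopicMemoryBank()
memory.prospective_reflection("I prefer aisle seats on long flights.", "Noted!")
print(memory.retrieve("Book me a flight to Tokyo"))
```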
-
AI doesn't remember you. That's a problem.

Most people are surprised to learn this, but modern generative AI models - LLMs like GPT - don't have memory. Yes, they generate responses to our questions, but the models themselves are stateless. They don't remember past conversations. They don't even remember the last thing you asked in the current conversation. They don't know who you are. And they have no way to retain information unless you feed it to them every. single. time.

If you've played with these chatbots, you probably think I'm lying because this doesn't align with your experience. That's because engineers have designed systems that store all your conversations in databases so the LLM can recall what it needs at the right moment. But this creates a big challenge for engineers building AI applications: How do you give AI memory? How do you make sure the right memory is recalled? How do you condense long conversations into key points in memory without losing important information?

Mem0 is an open source tool designed to solve the memory problem in LLM-based applications. Every time a user sends a message and gets a response, Mem0 stores both - building a long-term memory. Mem0 does this smartly by creating a graph of the important information: what the conversation was about, what the user seems to care about, what the agent is doing, etc. It distills signal from noise, and it exposes an easy-to-use API: you send Mem0 the inputs and outputs, and it does the rest behind the scenes.

Why does this matter? Because once AI has memory, we can build much better experiences. Think about:
🎓 An AI tutor that knows what you've already learned, so it can skip the basics and focus on what you're struggling with.
✈️ A travel assistant that knows your past preferences.
🧪 A research agent that doesn't keep forgetting what you've already told it.

This isn't needed for every AI app - but for the ones that benefit from context and personalization, a good memory tool is a game-changer.
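A minimal sketch of that pattern, based on the add/search calls shown in Mem0's public README; exact signatures and return shapes vary by version, so check the current docs, and note that Memory() needs an LLM/embedding backend configured (e.g. an OpenAI API key) to actually run.

```python
from mem0 import Memory

memory = Memory()

# After every exchange, hand Mem0 the user message and the assistant reply.
memory.add(
    [
        {"role": "user", "content": "I prefer aisle seats and hate red-eye flights."},
        {"role": "assistant", "content": "Got it: aisle seats, no red-eyes."},
    ],
    user_id="alice",
)

# Later, before answering a new request, pull back whatever is relevant.
hits = memory.search("Book a flight to Tokyo for Alice", user_id="alice")

# Older versions return a list, newer ones a dict with a "results" key.
results = hits["results"] if isinstance(hits, dict) else hits
for hit in results:
    print(hit["memory"])  # e.g. "Prefers aisle seats"
```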