Glen
April 16, 2026
Memory for AI agents is built on a single assumption, and the entire industry is quietly wrong about it.
Today’s memory for agents is single-tenant by design. Every agent runs inside its own private thread, with its own private history and its own narrow slice of the world, and nothing it learns ever reaches anyone else. This is how nearly every major memory product has been built. Look at Zep, Letta, Mem0, Supermemory, and the rest of the field. Underneath all of them sits the same premise: memory is the property of a single user.
That premise is not random. It persists for two reasons. The first is that it mimics nature. We have a long-standing habit of building AI in our own image, and each person carries their own private memories, so when we sit down to design memory for a machine, human memory is the first template we reach for. Single-tenant memory is the most natural reflection of it. The second is that the alternative is genuinely hard. Organizations are hierarchical, multi-author, permission-laden, and full of conflicts, contradictions, and opinions that shift over time. Turning that into working software is a much taller order than giving one agent its own scratchpad. Neither reason is foolish. Together, though, they produce a design that is fundamentally at odds with how organizations actually operate.
Companies are not piles of isolated threads. They run on shared understanding, accumulated across dozens or hundreds of people and tens of thousands of interactions, carried forward by the organization itself even as individuals come and go. Giving every agent its own private memory does not make an organization intelligent. It just multiplies the number of isolated minds working in parallel, each one rediscovering what somebody else already figured out down the hall. What makes a senior employee valuable is not that they are smarter than a new hire. It’s that they carry context from every project they have touched, every decision they have seen made, and every argument that was had and quietly resolved before anyone thought to write it down. That accumulation of remembered context is what turns a group of people into a winning company, and it is exactly what we are asking agents to do without. We are asking them to do the work of a senior employee while carrying none of the memory that makes a senior employee possible in the first place.
The industry has started to notice, and a wave of AI memory products has appeared. Almost all of them are storage layers dressed up as cognition. They give an agent a cleaner way to retrieve what it was told yesterday, which makes that one agent less forgetful. That is a meaningful improvement at the level of a single conversation, but it does nothing to make an organization intelligent. The mistake underneath all of it is treating memory as a retrieval problem in the first place. A company’s knowledge is not a pile of transcripts, or a folder of documents, or any one artifact you can point to. It is all of that fused into a single working understanding, carried forward by the organization itself, and that understanding has never been well represented in any digital form. That is what actually has to be built, and nothing in the current wave is trying to build it.
The field is also fighting the wrong fight. The current battle is being waged on retrieval benchmarks: who has better precision, who has better recall, who scores higher on LoCoMo, who can stuff more context into a single prompt. The engineering is real, but the ground is shifting underneath it. Models keep getting better. Context windows keep getting longer. Embeddings keep getting cheaper. And retrieval itself is closing in on the point where it is simply good enough. To the average commuter, the difference between a car that hits sixty in two seconds and one that hits it in one-point-nine is invisible. Both get you to work. The last tenth is a benchmark story, not a real-world one, and memory is heading the same way. Within a year or two, the marginal gains these companies are squeezing out of their retrieval pipelines will be invisible to anyone who is not staring directly at a benchmark. The moats built on those gains will quietly evaporate. We have no interest in building a company whose edge collapses the moment the underlying models catch up, or the moment retrieval stops being a meaningful axis of competition at all. We want to build something structurally different, designed from the ground up for organizations, where the defensibility comes not from a slightly better retrieval algorithm but from a real understanding of how a company actually thinks, remembers, and decides.
The next leap is not a better memory for one agent. It is a shared memory for many. The moment every agent in a company draws from the same understanding of the world the company has built, the walls between roles begin to dissolve. Your intern’s agent writes copy as sharp as anything marketing ships, because it has access to the same positioning, the same past campaigns, and the same lessons from what worked and what did not. Your hiring manager’s agent follows your engineering team’s conventions and processes as faithfully as your senior engineers do, because it can see them the same way they can. Your support agent handles objections like your AEs. Your finance agent writes like your founders. None of these agents are individually smarter. The gap between a great employee and an average one has always been access to context more than raw intelligence. When every agent reads from the same memory, that gap quietly closes. Agents stop contradicting each other because they are looking at the same picture. Work stops being redone because the organization remembers it was already done. Decisions stop being re-argued because the argument and its resolution are already part of what the company now knows.
This is less abstract than it sounds. A VP onboarding a new hire no longer has to block off an hour to walk them through the history of the company. She shares the relevant memories with the new hire’s agent and lets the agent do the teaching, available on demand, as many times as the new hire needs it, with none of the context lost in translation. A chief of staff stepping in after a key employee leaves does not have to reconstruct the black box that employee left behind. The company’s Salesforce conventions, the way reporting was built, the naming schemes nobody bothered to write down, all of it sits in the memories that employee accumulated, and the chief of staff’s agent can pull from them to keep things moving without missing a beat. A sales leader who has spent years coaching reps one call at a time on what a good discovery sounds like, how to handle pricing objections, how to read the current market, stops having to repeat himself. His coaching becomes memory the reps’ agents can draw from, layered on top of the Gong and Fathom transcripts of each rep’s own calls, so every rep gets live Q&A and coaching from their agent that is grounded in what the manager actually believes.
All of this only matters more as agents become the default interface for work. The more humans step back from clicking through apps, and the more they operate through agents that take actions across every tool a company uses, the more the quality of that work depends on the memory sitting behind it. In that world, when an employee leaves, their knowledge does not leave with them, because it was never really theirs to take in the first place. It was the company’s all along. Silos stop existing because there is nothing left to silo, and meetings that existed only to get everyone on the same page become unnecessary, because the page is already shared.
This is organizational memory.
It is what a company has that a collection of individuals does not, and it is why a ten-year-old company cannot be rebuilt by a team reading its public documentation. It is the residue of ten thousand small decisions, the unwritten lessons of every experiment that failed, and the muscle memory of how this particular group of people ships, sells, hires, and decides. Humans transmit it imperfectly, through meetings, onboarding, and the slow osmosis of working alongside each other for years. Agents get none of that osmosis, and if we want them to operate at the level of a senior employee, we have to build the substrate that a senior employee learns from. Companies need a memory. Not a file cabinet.
Glen is exactly that. Not another single-tenant memory dressed up for teams. Not a better retrieval layer. Not a smarter vector database. Glen is a working understanding of how a company operates, built from the ground up for organizations rather than retrofitted for them. Glen understands the social fabric of the company: the relationships between the people inside it, which memories belong to whom, and which ones should stay scoped to a single person, a single team, or the whole organization. Privacy is not an afterthought or a permissions layer bolted on top. It is a first-class concern, because a memory system that leaks across roles is one no organization can trust enough to actually use. And Glen gets sharper the longer you run on it, because every interaction teaches it something new about how your company actually operates. When a VP onboards someone, when a chief of staff picks up where a departed employee left off, when a sales leader coaches a team, none of that knowledge stays trapped inside a single conversation or a single person. It becomes part of the company itself, and every agent that runs there draws from it.
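To make the scoping idea concrete: one way to picture memory that "belongs to" a person, a team, or the whole organization is a record tagged with a visibility scope, filtered at recall time against whoever is asking. This is an illustrative sketch only, not Glen's actual data model or API; every name and shape here is an assumption made for the example.

```python
# Illustrative sketch of scoped organizational memory.
# Names (Memory, Principal, Scope, recall) are hypothetical, not Glen's API.
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Scope(Enum):
    PERSON = "person"  # visible only to its author
    TEAM = "team"      # visible to members of one team
    ORG = "org"        # visible to every agent in the company


@dataclass(frozen=True)
class Memory:
    text: str
    author: str
    scope: Scope
    team: Optional[str] = None  # set when scope is TEAM


@dataclass(frozen=True)
class Principal:
    """The person (or their agent) asking to recall memories."""
    user: str
    teams: frozenset


def visible(memory: Memory, principal: Principal) -> bool:
    """Privacy as a first-class check: every recall passes through here."""
    if memory.scope is Scope.ORG:
        return True
    if memory.scope is Scope.TEAM:
        return memory.team in principal.teams
    return memory.author == principal.user  # Scope.PERSON


def recall(store: list, principal: Principal) -> list:
    """Return only the memories this principal is allowed to see."""
    return [m for m in store if visible(m, principal)]
```

The design choice the sketch tries to capture is that scope is attached to the memory itself, so leakage across roles is impossible by construction rather than prevented by a bolted-on permissions layer: an intern's agent recalling from the same store as a senior engineer's simply gets a smaller slice.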
Once that memory exists, the compounding begins. Every interaction teaches the organization something. Every agent that joins inherits everything that came before. The longer a company runs on Glen, the smarter it gets. That is not a feature curve. That is a moat. The companies that build on a substrate like this will be operating at a level the rest of the industry has not yet noticed is possible. That is the future worth aiming at. We think it starts here.