
What the Heck is LangChain Memory? And Why Should You Care?

Allan Alfonso
2 min read · Jan 24, 2025

Large Language Models (LLMs) are like Dory from Finding Nemo.

They forget. This is a big issue when you’re building a conversational application such as a chatbot. LLMs only respond to the prompt they receive. Anything before that is forgotten.

Memory solves this problem by providing the history of the conversation to the LLM.

LLMs are Stateless

The only ‘state’ LLMs recognize is their current interaction.

The LLM doesn’t store any data from previous interactions. Data from previous interactions needs to be stored separately. It can be stored in something as simple as a variable or as complex as a database.
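The idea can be sketched in a few lines of plain Python. The `call_llm` function below is a hypothetical stand-in for any stateless chat model API; the point is that the "memory" is nothing more than a variable living outside the model, resent with every turn.

```python
def call_llm(messages):
    """Stand-in for a real LLM call: it sees ONLY the messages passed in."""
    # A real model would generate a reply; here we report how much context
    # it received, to show that all state lives outside the model.
    return f"(reply based on {len(messages)} messages)"

# The "memory" is just a variable holding every prior message.
history = []

def chat(user_input):
    history.append({"role": "user", "content": user_input})
    reply = call_llm(history)  # the full history is sent on every turn
    history.append({"role": "assistant", "content": reply})
    return reply

chat("My name is Allan.")
chat("What is my name?")  # answerable only because history was resent
print(len(history))       # 4 — the conversation lives in our variable
```

Swap the list for a database table and you have persistent memory; the model itself never changes.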

Knowing that LLMs are stateless is critical to building LLM applications.

Memories have Different Costs

The simplest form of chatbot memory is sending the full conversation history as context with every request.

The longer the conversation, the more tokens required, and the higher the cost. Much like how humans don’t need the full history to understand the context of a conversation, LLMs can also understand the context of a conversation with a subset of messages. By sending only a subset of messages or a summary of messages, cost can be reduced.
