Intro

In Computer Science, just like in human cognition, there are different levels of memory:

  • Primary Memory (like RAM) is the active, temporary memory used for reasoning and decision-making on current tasks. It holds the information you are currently working with. It’s fast but volatile, meaning it loses data when the power is off.
  • Secondary Memory (like physical storage) refers to long-term storage of learned knowledge that is not immediately active in working memory. It’s not always accessed during real-time decision-making but can be retrieved when needed. Therefore, it is slower but more persistent.
  • Tertiary Memory (like backups of historical data) refers to archival memory, where information is stored for backup purposes and disaster recovery. It’s characterized by high capacity and low cost, but slow access times, so it’s accessed only rarely.

AI Agents can leverage all of these types of memory. First, they use Primary Memory to handle your current question. Then, they can access Secondary Memory to bring in knowledge from recent conversations. And, if needed, they might even retrieve older information from Tertiary Memory.

In this tutorial, I’m going to show how to build an AI Agent with memory across multiple sessions. I will present some useful Python code that can be easily applied in other similar cases (just copy, paste, run) and walk through every line of code with comments so that you can replicate this example (link to full code at the end of the article).

Setup

Let’s start by setting up Ollama (pip install ollama==0.5.1), a library that lets you run open-source LLMs locally, without needing cloud-based services, giving you more control over data privacy and performance. Since everything runs locally, no conversation data leaves your machine.
First of all, you need to download Ollama from the website. 

Then, open a terminal and run the command below to download the selected LLM. I’m going with Alibaba’s Qwen, as it’s both smart and light.
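
ollama pull qwen2.5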

After the download is completed, you can move on to Python and start writing code.

import ollama
llm = "qwen2.5"

Let’s test the LLM:

stream = ollama.generate(model=llm, prompt='''what time is it?''', stream=True)
for chunk in stream:
    print(chunk['response'], end='', flush=True)

Database

An Agent with multi-session memory is an Artificial Intelligence system that can remember information from one interaction to the next, even if those interactions happen at different times or over separate sessions. For example, a personal assistant AI that remembers your daily schedule and preferences, or a customer support Bot that knows your issue history without needing you to re-explain each time.

Basically, the Agent needs to access the chat history. Based on how old the past conversations are, this could be classified as Secondary or Tertiary Memory.

Let’s get to work. We can store conversation data in a vector database, which is the best solution for efficiently storing, indexing, and searching unstructured data. Currently, one of the most used vector dbs is Microsoft’s Azure AI Search, while a leading open-source option is ChromaDB, which is useful, easy, and free.

After a quick pip install chromadb==0.5.23, you can interact with the db using Python in three different ways (a short sketch of each follows the list below):

  • chromadb.Client() to create a db that stays temporarily in memory without occupying physical space on disk.
  • chromadb.PersistentClient(path) to save and load the db from your local machine.
  • chromadb.HttpClient(host='localhost', port=8000) to connect to a Chroma server running in client-server mode.
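
For reference, here is a minimal sketch of the three options (the folder name "chroma_db" is just an example, and the HttpClient variant assumes a Chroma server is already running locally):

import chromadb

## ephemeral: lives in memory and disappears when the process ends
client_memory = chromadb.Client()

## persistent: saves and loads the db from a local folder
client_disk = chromadb.PersistentClient(path="chroma_db")

## client-server: connects to a Chroma server exposed over HTTP
client_http = chromadb.HttpClient(host="localhost", port=8000)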

When storing documents in ChromaDB, data are saved as vectors so that one can search with a query-vector to retrieve the closest matching records. Please note that, if not specified otherwise, the default embedding function is a sentence transformer model (all-MiniLM-L6-v2).

import chromadb

## connect to db
db = chromadb.PersistentClient()

## check existing collections
db.list_collections()

## select a collection
collection_name = "chat_history"
collection = db.get_or_create_collection(name=collection_name, 
    embedding_function=chromadb.utils.embedding_functions.DefaultEmbeddingFunction())

To store your data, first you need to extract the chat and save it as one text document. In Ollama, there are 3 roles in the interaction with an LLM (a minimal example follows the list below):

  • system — used to pass core instructions to the model on how the conversation should proceed (i.e. the main prompt)
  • user — used for the user’s questions, and also for memory reinforcement (e.g. “remember that the answer must have a specific format”)
  • assistant — it’s the reply from the model (i.e. the final answer)
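
For example, a minimal message list touching all three roles could look like this (the contents below are placeholders, not taken from an actual chat):

messages = [
    {"role":"system", "content":"You are an intelligent assistant."},
    {"role":"user", "content":"Remember that the answer must be short."},
    {"role":"assistant", "content":"Understood, I will keep my answers short."}
]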

Ensure that each document has a unique id; Chroma will not generate one for you, so here I’ll simply derive it from the collection count. One important thing to mention is that you can add additional information as metadata (e.g. title, tags, links). It is optional but very useful, as metadata enrichment can significantly enhance document retrieval. For instance, here, I’m going to use the LLM to summarize each document into a few keywords.

from datetime import datetime

def save_chat(lst_msg, collection):
    print("--- Saving Chat ---")
    ## extract chat
    chat = ""
    for m in lst_msg:
        chat += f'{m["role"]}: <<{m["content"]}>>' +'\n\n'
    ## get idx
    idx = str(collection.count() +1)
    ## generate info
    p = "Describe the following conversation using only 3 keywords separated by a comma (for example: 'finance, volatility, stocks')."
    tags = ollama.generate(model=llm, prompt=p+"\n"+chat)["response"]
    dic_info = {"tags":tags,
                "date": datetime.today().strftime("%Y-%m-%d"),
                "time": datetime.today().strftime("%H:%M")}
    ## write db
    collection.add(documents=[chat], ids=[idx], metadatas=[dic_info])
    print(f"--- Chat num {idx} saved ---","\n")
    print(dic_info,"\n")
    print(chat)
    print("------------------------")

We need to start and save a chat to see it in action.

Run basic Agent

To start, I shall run a very basic LLM chat (no tools needed) to save the first conversation in the database. During the interaction, I am going to mention some important information that is not part of the LLM’s knowledge base and that I want the Agent to remember in the next session.

prompt = "You are an intelligent assistant, provide the best possible answer to user's request."
messages = [{"role":"system", "content":prompt}]

while True:    
    ## User
    q = input('🙂 >')
    if q == "quit":
        ### save chat before quitting
        save_chat(lst_msg=messages, collection=collection)
        break
    messages.append( {"role":"user", "content":q} )
   
    ## Model
    agent_res = ollama.chat(model=llm, messages=messages, tools=[])
    res = agent_res["message"]["content"]
   
    ## Response
    print("👽 >", f"\x1b[1;30m{res}\x1b[0m")
    messages.append( {"role":"assistant", "content":res} )

At the end, the conversation was saved with enriched metadata.
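
If you want to double-check what was written to the database, you can read the record back. Here I’m assuming this was the first chat ever saved, so its id is "1":

## inspect the saved conversation and its metadata
print(collection.get(ids=["1"], include=["documents", "metadatas"]))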

Tools

I want the Agent to be able to retrieve information from previous conversations. Therefore, I need to provide it with a Tool to do so. To put it another way, the Agent must perform Retrieval-Augmented Generation (RAG) over the chat history. RAG is a technique that combines retrieval and generative models, augmenting the LLM’s knowledge with facts fetched from external sources (in this case, ChromaDB).

def retrieve_chat(query:str) -> str:
    res_db = collection.query(query_texts=[query])["documents"][0][0:10]
    history = ' '.join(res_db).replace("\n", " ")
    return history
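
To see what the Agent will receive, you can call the function manually (the query below is just a made-up example). It returns a single string concatenating the closest-matching chats from the database:

print(retrieve_chat(query="user preferences mentioned in previous chats"))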

tool_retrieve_chat = {'type':'function', 'function':{
  'name': 'retrieve_chat',
  'description': 'When your knowledge is NOT enough to answer the user, you can use this tool to retrieve the chat history.',
  'parameters': {'type': 'object', 
                 'required': ['query'],
                 'properties': {
                    'query': {'type':'string', 'description':'Input the user question or the topic of the current chat'},
}}}}

After fetching data, the AI must process all the information and give the final answer to the user. Sometimes, it can be more effective to treat the “final answer” as a Tool. For example, if the Agent does multiple actions to generate intermediate results, the final answer can be thought of as the Tool that integrates all of this information into a cohesive response. By designing it this way, you have more customization and control over the results.

def final_answer(text:str) -> str:
    return text

tool_final_answer = {'type':'function', 'function':{
  'name': 'final_answer',
  'description': 'Returns a natural language response to the user',
  'parameters': {'type': 'object', 
                 'required': ['text'],
                 'properties': {'text': {'type':'string', 'description':'natural language response'}}
}}}

We’re finally ready to test the Agent and its memory.

dic_tools = {'retrieve_chat':retrieve_chat, 
             'final_answer':final_answer}

Run Agent with memory

I shall add a couple of utility functions for Tool usage and for running the Agent.

def use_tool(agent_res:dict, dic_tools:dict) -> dict:
    ## default values in case the model calls a tool that isn't available
    res, t_name, t_inputs = '', '', ''
    ## use tool
    if agent_res["message"].tool_calls is not None:
        for tool in agent_res["message"].tool_calls:
            t_name, t_inputs = tool["function"]["name"], tool["function"]["arguments"]
            if f := dic_tools.get(t_name):
                ### calling tool
                print('🔧 >', f"\x1b[1;31m{t_name} -> Inputs: {t_inputs}\x1b[0m")
                ### tool output
                t_output = f(**tool["function"]["arguments"])
                print(t_output)
                ### final res
                res = t_output
            else:
                print('🤬 >', f"\x1b[1;31m{t_name} -> NotFound\x1b[0m")      
    ## don't use tool
    else:
        res = agent_res["message"].content
        t_name, t_inputs = '', ''
    return {'res':res, 'tool_used':t_name, 'inputs_used':t_inputs}

When the Agent is trying to solve a task, I want to keep track of the Tools that have been used and the results it gets. The model should try each Tool only once, and the iteration shall stop only when the Agent is ready to give the final answer.

def run_agent(llm, messages, available_tools):
    ## use tools until final answer
    tool_used, local_memory = '', ''
    while tool_used != 'final_answer':
        ### use tool
        try:
            agent_res = ollama.chat(model=llm, messages=messages, tools=[v for v in available_tools.values()])
            dic_res = use_tool(agent_res, dic_tools)
            res, tool_used, inputs_used = dic_res["res"], dic_res["tool_used"], dic_res["inputs_used"]
        ### error
        except Exception as e:
            print("⚠️ >", e)
            res = f"I tried to use {tool_used} but it didn't work. I will try something else."
            print("👽 >", f"\x1b[1;30m{res}\x1b[0m")
            messages.append( {"role":"assistant", "content":res} )       
        ### update memory
        if tool_used not in ['','final_answer']:
            local_memory += f"\n{res}"
            messages.append( {"role":"user", "content":local_memory} )
            available_tools.pop(tool_used)
            if len(available_tools) == 1:
                messages.append( {"role":"user", "content":"now activate the tool final_answer."} ) 
        ### tools not used
        if tool_used == '':
            break
    return res

Let’s start a new interaction, and this time I want the Agent to activate all the Tools, for retrieving and processing old information.

prompt = '''
You are an intelligent assistant, provide the best possible answer to user's request. 
You must return natural language response.
When interacting with a user, first you must use the tool 'retrieve_chat' to remember previous chats history.  
'''
messages = [{"role":"system", "content":prompt}]

while True:
    ## User
    q = input('🙂 >')
    if q == "quit":
        ### save chat before quitting
        save_chat(lst_msg=messages, collection=collection)
        break
    messages.append( {"role":"user", "content":q} )
   
    ## Model
    available_tools = {"retrieve_chat":tool_retrieve_chat, "final_answer":tool_final_answer}
    res = run_agent(llm, messages, available_tools)
   
    ## Response
    print("👽 >", f"\x1b[1;30m{res}\x1b[0m")
    messages.append( {"role":"assistant", "content":res} )

I gave the Agent a task not directly correlated to the topic of the last session. As expected, the Agent activated the Tool and looked into previous chats. Now, it will use the “final answer” to process the information and respond to me.

Conclusion

This article has been a tutorial to demonstrate how to build AI Agents with Multi-Session Memory from scratch using only Ollama. With these building blocks in place, you are already equipped to start developing your own Agents for different use cases.

Full code for this article: GitHub

I hope you enjoyed it! Feel free to contact me for questions and feedback or just to share your interesting projects.

👉 Let’s Connect 👈

(All images, unless otherwise noted, are by the author)
