My Opinion on LLM With Episodic Memory

About a week ago, I listened to a Lex Fridman Podcast episode with Charan Ranganath and took notes on it.

I immediately thought these ideas could be applied to LLMs, as many of the points C.R. made are quite interesting, such as:

  • Similar neurons are used by the brain when predicting the future and remembering the past
  • Humans don’t replay the past; we imagine what the past could’ve been by taking bits and pieces
  • “You don’t want to remember more, but better” (remembering things at a higher level of abstraction; cramming vs. actually knowing)
    • Maybe there’s something here
  • He suggested Internal Models of Events
  • Forgetting and Retrieval failure

Internal Models of Events

Internal Models of Events are formed from both semantic and episodic memory at particular points of high prediction error. Those points are when it’s maximally optimal to encode an episodic memory.
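One way this idea might translate to an LLM system is a surprise-gated memory buffer: measure how unlikely the model found what actually happened, and store an episode only when that surprise crosses a threshold. The sketch below is purely illustrative; the class name, threshold value, and interface are my own assumptions, not something from the podcast or any existing library.

```python
import math

class EpisodicStore:
    """Hypothetical sketch: keep only moments whose prediction error was high."""

    def __init__(self, threshold: float):
        self.threshold = threshold  # assumed surprise cutoff, in nats
        self.episodes = []          # list of (context, observed, surprise)

    def maybe_encode(self, context: str, observed: str, prob_of_observed: float) -> float:
        # Surprise = negative log-likelihood of what actually happened.
        surprise = -math.log(max(prob_of_observed, 1e-12))
        if surprise >= self.threshold:
            self.episodes.append((context, observed, surprise))
        return surprise

store = EpisodicStore(threshold=2.0)
# Model predicted "sat" with p=0.9 -> low surprise, not stored.
store.maybe_encode("the cat", "sat", 0.9)
# Model assigned p=0.01 to "exploded" -> high surprise, stored as an episode.
store.maybe_encode("the cat", "exploded", 0.01)
print(len(store.episodes))  # → 1
```

This mirrors the claim above: mundane, well-predicted events are absorbed into the semantic model, while the surprising ones get set aside as discrete episodes.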

Thoughts

  • wait, do humans store episodic memories for training while sleeping?