Echo: A Large Language Model with Temporal Episodic Memory

πŸ› οΈ Code   |   πŸ“š Dataset   |   🧠 7B Model   |   🧠 72B Model


πŸš€ Welcome to Echo!

Echo is a cutting-edge large language model (LLM) designed with temporal episodic memory, enabling advanced reasoning and context retention. We invite the community to explore, use, and cite our work!
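A minimal usage sketch is shown below. It assumes the released checkpoints load through the standard Hugging Face `transformers` causal-LM API; the repository id in the snippet is a placeholder, so substitute the actual 7B or 72B model repository linked above.

```python
# Minimal sketch of loading and querying Echo with Hugging Face Transformers.
# "Echo-LLM/Echo-7B" is a hypothetical repo id -- replace it with the real one.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Echo-LLM/Echo-7B"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # spread weights across available GPUs/CPU
    torch_dtype="auto",  # use the dtype stored in the checkpoint
)

prompt = "Yesterday we discussed the project deadline. What did we agree on?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```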


πŸ“– Citation

If you use Echo in your research or applications, please cite us:

```bibtex
@misc{liu2025echolargelanguagemodel,
  title={Echo: A Large Language Model with Temporal Episodic Memory},
  author={WenTao Liu and Ruohua Zhang and Aimin Zhou and Feng Gao and JiaLi Liu},
  year={2025},
  eprint={2502.16090},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2502.16090},
}
```

🌟 Get Involved

We welcome contributions, feedback, and collaboration from the community. Feel free to open an issue or pull request on our GitHub repository!


Empowering AI with memory. Echo: Remember, Reason, Respond.
