Echo: A Large Language Model with Temporal Episodic Memory
Code | Dataset | 7B Model | 72B Model
Welcome to Echo!
Echo is a cutting-edge large language model (LLM) designed with temporal episodic memory, enabling advanced reasoning and context retention. We invite the community to explore, use, and cite our work!
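Below is a minimal inference sketch using the Hugging Face Transformers library. The repository id and prompt are placeholders for illustration only (no official checkpoint name is given on this page), and Echo's episodic-memory interface may expose additional options beyond plain text prompting.

```python
# Hypothetical quickstart: "your-org/Echo-7B" is a placeholder repo id,
# not an official release name. Substitute the actual Echo checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/Echo-7B"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Example prompt that exercises recall of earlier conversational context.
prompt = "Last week I told you my dog's name is Biscuit. What is my dog's name?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```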
Citation
If you use Echo in your research or applications, please cite us:
@misc{liu2025echolargelanguagemodel,
      title={Echo: A Large Language Model with Temporal Episodic Memory},
      author={WenTao Liu and Ruohua Zhang and Aimin Zhou and Feng Gao and JiaLi Liu},
      year={2025},
      eprint={2502.16090},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.16090},
}
Get Involved
We welcome contributions, feedback, and collaboration from the community. Feel free to open issues or pull requests on our GitHub!
Empowering AI with memory. Echo: Remember, Reason, Respond.