Qiannan committed
Commit e953ce1 · verified · 1 Parent(s): f5807b6

Update README.md

Files changed (1): README.md (+3 -3)
README.md CHANGED
@@ -13,7 +13,7 @@ tags:

  <div align="center">

- <h1 style="font-size: 2.8em; margin-bottom: 0.5em;">师承万象教育大模型(MuduoLLM)</h1>
+ <h1 style="font-size: 2.8em; margin-bottom: 0.5em;">师承万象基础教育大模型(MuduoLLM)</h1>
  <h2 style="font-size: 1.8em; color: #666; margin-top: 0;">传承木铎金声,智启教育未来<br>Inheriting Wisdom, Inspiring Future Education</h2>

  [![GitHub](https://img.shields.io/badge/GitHub-MuduoLLM-blue)](https://github.com/ERC-ITEA/MuduoLLM)
@@ -22,7 +22,7 @@ tags:

  # 简介 | Introduction

- 师承万象大模型(MuduoLLM)是北京师范大学和北京世纪好未来教育科技有限公司共同研发的首个紧扣新课标知识体系的基础教育大模型,确保所学知识内容与基础教育课程标准高度契合,精准对接学生核心素养培育与教师专业成长需求。在应用层面,基础教育大模型深度融合新课标理念,实现探究启发式智能答疑、素养导向型智能出题、情境沉浸式教案生成,从知识传授转向核心素养培育,助力培养全面发展时代新人。同时,师承万象大模型是当前性能表现较为突出的开源基础教育大模型之一,为开发者提供了可进一步优化的空间。
+ 师承万象基础教育大模型(MuduoLLM)是北京师范大学和北京世纪好未来教育科技有限公司共同研发的首个紧扣新课标知识体系的基础教育语言大模型,确保所学知识内容与基础教育课程标准高度契合,精准对接学生核心素养培育与教师专业成长需求。在应用层面,基础教育大模型深度融合新课标核心知识和育人理念,具备知识理解型智能解题、启发引导式智能答疑、情境创设型智能出题和素养导向型教案生成等教育能力,从知识传授转向核心素养培育,助力培养全面发展时代新人。同时,师承万象基础教育大模型是当前性能表现较为突出的开源基础教育大模型之一,为开发者提供了可进一步优化的空间。

  MuduoLLM is the educational large language model jointly developed by Beijing Normal University and TAL Education Group, tightly integrated with the new curriculum standards knowledge system. It ensures that the knowledge content aligns perfectly with basic education curriculum standards, precisely meeting the needs of student core competency cultivation and teacher professional development. At the application level, the model deeply integrates new curriculum concepts, enabling inquiry-based intelligent Q&A, competency-oriented question generation, and immersive lesson plan creation, shifting from knowledge transmission to core competency cultivation, helping to nurture well-rounded individuals for the new era. Additionally, MuduoLLM is one of the most outstanding open-source educational large language models, providing developers with room for further optimization.

@@ -31,7 +31,7 @@ MuduoLLM is the educational large language model jointly developed by Beijing No

  - **Base Architecture**: [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)
  - **Parameters**: 14 billion (14B)
- - **Training Data**: Approximately 400GB of educational domain text data, including question generation, Q&A, and lesson plans
+ - **Training Data**: Approximately 1TB of general and educational domain text data, including question generation, Q&A, and lesson plans
  - **Training Methods**:
    - Domain-specific Pretraining: Injecting educational domain-specific corpora to enhance semantic understanding
    - Supervised Fine-Tuning (SFT): Targeted optimization for educational scenarios (question generation/Q&A/lesson plan generation)
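
For readers who want to try the model described by the updated card, below is a minimal usage sketch with Hugging Face `transformers`. It assumes a Hub repo id of `ERC-ITEA/MuduoLLM` (mirroring the GitHub organization; the actual repo id is not stated on this page) and assumes the model keeps the chat template of its Qwen2.5-14B-Instruct base; the sample prompt is illustrative only.

```python
# Minimal sketch: querying MuduoLLM via transformers.
# Assumptions: the Hub repo id below and that the model inherits the
# Qwen2.5-14B-Instruct chat template from its base architecture.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ERC-ITEA/MuduoLLM"  # assumption: verify the actual Hub repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype="auto",   # follow the checkpoint's native precision
    device_map="auto",    # place the 14B weights across available GPUs
)

# An education-flavored request matching the card's question-generation scenario.
messages = [
    {"role": "user",
     "content": "Write one fraction word problem suitable for grade 5 math, with the answer."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Because the base model uses the Qwen2.5 chat format, `apply_chat_template` inserts the required special tokens; `device_map="auto"` is only a convenience for fitting the 14B weights onto whatever accelerators are present.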