ChineseWebText 2.0: Large-Scale High-quality Chinese Web Text with Multi-dimensional and fine-grained information
This directory contains the ChineseWebText 2.0 dataset, together with a new tool-chain, MDFG-tool, for constructing large-scale, high-quality Chinese datasets with multi-dimensional and fine-grained information. The ChineseWebText 2.0 code is publicly available on GitHub (here).
ChineseWebText 2.0
We have released the latest and largest Chinese dataset, ChineseWebText 2.0, which consists of 3.8 TB of data. Each text in the dataset is accompanied by a quality score, single-label and multi-label domain tags, and a toxicity label and score, enabling LLM researchers to select data according to their desired quality thresholds.
Data Example
```json
{
  "text": "近日,黑龙江省高校校报协会第十四届学术年会暨校报工作交流研讨会在东北农业大学举行。我校10件新闻作品喜获2项一等奖,2项二等奖,6项三等奖……",
  "domain": {
    "single_label": "news",
    "multi_label": ["news", "education"]
  },
  "toxicity": {
    "label": 0,
    "score": 1.0347155694034882e-05
  },
  "quality_score": 0.96044921875
}
```
"text": [string] Text content of data sample.
"single_label": [string] The highest probability label generated by the domain classification model.
"multi_label": [list] All labels generated by the domain classification model with probabilities higher than the threshold.
"label": [int] Toxicity label generated by the toxicity classification model.
"score": [float] Toxicity score generated by the toxicity classification model; samples with scores exceeding 0.99 are categorised as toxic.
"quality_score": [float] Quality score generated by the quality evaluation model.
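As a concrete illustration, the per-record fields above can be used to select data by quality and toxicity thresholds. The sketch below assumes records stored as JSON lines; the field names follow the data example above, while the threshold values are illustrative, not recommendations from the dataset authors.

```python
import json

def select_records(lines, min_quality=0.9, max_toxicity=0.99):
    """Yield records whose quality_score is high enough and whose
    toxicity score stays below the chosen cutoff.

    `lines` is an iterable of JSON strings shaped like the data
    example above; both thresholds are illustrative placeholders.
    """
    for line in lines:
        rec = json.loads(line)
        if (rec["quality_score"] >= min_quality
                and rec["toxicity"]["score"] < max_toxicity):
            yield rec

# Illustrative record mimicking the schema of the data example.
sample = json.dumps({
    "text": "...",
    "domain": {"single_label": "news", "multi_label": ["news"]},
    "toxicity": {"label": 0, "score": 1.0e-05},
    "quality_score": 0.96,
})
kept = list(select_records([sample]))
```

In practice the same generator can be pointed at a gzip-compressed shard, reading line by line to avoid loading the full 3.8 TB into memory.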
MDFG-tool
Introduction
We introduce a new toolchain, MDFG-tool (see Figure 1). We begin with the coarse-grained filtering module, which applies rule-based methods to clean the data, focusing on criteria such as text length and sensitive words to ensure data quality. After cleaning, we evaluate the text quality using a BERT-based model. This process generates a quality score, and by selecting an appropriate threshold, we can extract high-quality text data that meets our needs. Next, we use FastText for both single-label and multi-label classification of the cleaned data. Meanwhile, we conduct toxicity assessment. The FastText model is used to filter out toxic content and assign toxicity scores to each text. This scoring system allows researchers to set thresholds for identifying and selecting harmful texts for further training.
Figure 1: The pipeline of MDFG-tool.
Data Analysis
Removal Rate for Different Stages
In order to provide a high-level overview of the preparation and preprocessing stages, the following figure shows the processing workflow and the removal rate of each step. It details the proportion of data removed relative to the previous step and the absolute percentage of data remaining from the original collection, helping readers track the various processing stages from the raw data to the high-quality dataset.
After collecting raw data from various sources, we initially obtain an original Chinese dataset totaling 6.6 TB. However, because some sources contain a significant amount of irrelevant and noisy content, a manual sampling analysis is performed during the preparation stage: if irrelevant text accounts for more than 50% of a source, all data from that source is discarded. As a result, a substantial portion of the data is removed during the preparation stage, retaining only 67.68% of the original dataset. In the preprocessing stage, four rule-based steps are applied to filter the remaining data. First, the Data Length step removes overly short texts to ensure that each text contains sufficient informational content. Next, the Character Proportion step eliminates texts with a high percentage of noisy characters, such as English, Traditional Chinese characters, or other irrelevant symbols. Finally, the Sensitive Words step and the Deduplication step remove toxic content and duplicate texts from the dataset. After the preprocessing stage, we obtain a high-quality Chinese text dataset totaling 3.8 TB. In the next stage, each text in this dataset is enriched with fine-grained annotations, including a quality score, domain labels, a toxicity score, and a toxicity label.
Figure 2: The proportion of data removed from the originally collected data in each processing step. The gray bars represent the proportion of data removed in each step relative to the data remaining before that step, while the other colored bars represent the retained data and its proportion relative to the originally collected data.
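The four rule-based preprocessing steps can be sketched as simple predicates applied to each text. The thresholds below (minimum length, maximum non-Chinese character ratio, sensitive-word list) are placeholders for illustration, not the actual values used by MDFG-tool.

```python
import hashlib

MIN_LENGTH = 50            # placeholder: minimum character count
MAX_NOISE_RATIO = 0.3      # placeholder: max fraction of non-CJK characters
SENSITIVE_WORDS = {"example_banned_word"}  # placeholder word list

def is_cjk(ch):
    # Basic CJK Unified Ideographs block; a real filter would cover more ranges.
    return "\u4e00" <= ch <= "\u9fff"

def passes_rules(text, seen_hashes):
    if len(text) < MIN_LENGTH:                        # Data Length step
        return False
    noise = sum(1 for ch in text if not is_cjk(ch)) / len(text)
    if noise > MAX_NOISE_RATIO:                       # Character Proportion step
        return False
    if any(w in text for w in SENSITIVE_WORDS):       # Sensitive Words step
        return False
    digest = hashlib.md5(text.encode("utf-8")).hexdigest()
    if digest in seen_hashes:                         # Deduplication step
        return False
    seen_hashes.add(digest)
    return True

seen = set()
texts = ["短文",
         "这是一段足够长的中文示例文本" * 5,
         "这是一段足够长的中文示例文本" * 5]  # exact duplicate
kept = [t for t in texts if passes_rules(t, seen)]
```

Here the short text fails the length check and the duplicate is dropped by the hash-based deduplication, so only one text survives.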
Data Quality Distribution
Figure 3: The Data Analysis on Quality Evaluation.
Quality Distribution To investigate the quality distribution, we calculate the data proportions across different quality score ranges from our ChineseWebText 2.0 dataset. Figure 3(a) shows the proportion of data across different quality score intervals. The data is primarily concentrated in the mid-range score intervals ([0.2, 0.4)), each contributing approximately 18%. Additionally, a significant proportion lies within the high-quality interval ([0.9, 1.0)), reflecting the presence of high-quality content in the dataset. In contrast, the lowest interval ([0.1, 0.2)) contains only a minimal fraction, indicating a limited amount of poor-quality data. Note that the quantity of quality scores in the range [0, 0.1) is zero, so this interval has been omitted. This quality distribution provides a valuable reference for LLM researchers, enabling them to select data based on desired quality thresholds.
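The interval proportions of the kind shown in Figure 3(a) can be reproduced with a simple binning routine over the quality scores. The demo scores below are illustrative, not drawn from the actual dataset.

```python
def quality_bins(scores, width=0.1):
    """Return the fraction of scores falling in each
    [k*width, (k+1)*width) interval over [0, 1]."""
    nbins = round(1 / width)
    counts = [0] * nbins
    for s in scores:
        # Clamp a score of exactly 1.0 into the top bin.
        counts[min(int(s / width), nbins - 1)] += 1
    return [c / len(scores) for c in counts]

demo = [0.22, 0.25, 0.38, 0.95, 0.96]  # illustrative scores
props = quality_bins(demo)
```

Summing the proportions above a chosen bin index then gives the share of data that survives a given quality threshold.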
Human Acceptance Evaluation To validate the consistency between quality evaluation and human judgments, Figure 3(b) displays human acceptance rates across different score intervals, showing a clear positive trend: higher scores correlate with higher acceptance rates. Specifically, the highest score interval ([0.5, 1.0)) achieves an acceptance rate exceeding 90%, while the lowest interval ([0.1, 0.2)) still maintains an acceptance rate of 80%. This trend highlights the overall high quality of the data.
In summary, the dataset is primarily concentrated in the mid-quality range, with higher scores strongly correlating to greater human acceptance. This alignment underscores the dataset's potential for high-quality applications, where consistency in human-like quality is essential.
Domain Distribution
To investigate the distribution of our dataset across different domains, in this section, we conduct an in-depth analysis of the data distribution across eleven distinct domains: book, dialogue, education, encyclopedia, finance, law, math, medicine, news, technology, and general. This analysis considers two perspectives: the overall domain distribution and the quality-related domain distribution, providing comprehensive insights into the dataset's composition across different domains.
Overall Domain Distribution
As illustrated in Figure 4, the sample counts and corresponding proportions vary considerably across domains. The Encyclopedia, General, and News domains dominate the dataset, comprising 33.43%, 32.63%, and 28.01% of the data, respectively. In contrast, the Math domain has the smallest share at 0.55%, yet it still includes over 8 million samples. A complementary bar chart provides a more intuitive visualization of the distribution. This comprehensive domain distribution enables LLM researchers to select suitable datasets, facilitating the enhancement of a model's knowledge and capabilities in specific domains.
Figure 4: Data Distribution Across Different Domains.
Quality-Related Domain Distribution In order to explore the domain distribution across different quality intervals, we perform an analysis focusing on the quality-related domain distribution. Specifically, we calculate the proportions of various domains within each quality interval. As shown in Figure 5, this table provides a detailed breakdown of domain proportions across different quality intervals. From the results, we observe that the distribution of domain data within each quality interval aligns closely with their overall distribution in the dataset. Based on the proportions in Figure 5, researchers can filter domain-specific data within targeted quality intervals, enabling the extraction of higher-quality domain-specific data subsets.
Figure 5: Table of Domain Distribution Across Quality Levels
Data Toxicity Analysis
Figure 6: The Distribution of Toxicity. A threshold of 0.99 was established, and samples with scores exceeding 0.99 were categorised as toxic.
During the training procedure of LLMs, toxic data introduces harmful knowledge and information, which may lead the model to generate toxic outputs. In this section, we analyze the toxicity distribution within our dataset, depicted in Figure 6. In this figure, a higher toxicity score indicates greater toxicity. It is evident that the majority of the data in our dataset has a toxicity score of 0.0, signifying non-toxic, high-quality data. These non-toxic texts comprise 97.41% of the dataset.
Additionally, through manual analysis of the toxicity scores, we determine that data with scores above 0.99 can be classified as toxic. By applying this empirical threshold, we filter our dataset and obtain a 3.16 GB toxic text subset comprising 1,632,620 samples. In Figure 7, we compare this subset with other publicly available toxicity datasets. In this comparison, OffensEval 2019, AbusEval, HatEval, RAL-E, and ToxiGen are English toxicity datasets, while COLD, ToxiCN, SWSR, and CDial-Bias are Chinese toxicity datasets. The OffensEval 2019, AbusEval, and HatEval datasets are derived from Twitter and focus on the analysis of offensive language, abusive language, and hate speech, respectively. The RAL-E dataset, sourced from a banned Reddit community, is a large-scale, unannotated English dataset. In contrast, ToxiGen is a toxicity dataset generated using GPT-3, targeting multiple groups. The COLD, SWSR, CDial-Bias, and ToxiCN datasets are collected from Chinese social media platforms including Zhihu, Weibo, and Tieba, with each dataset focusing on different groups. Compared to these datasets, ours features the largest collection of toxicity data, and each text carries a toxicity score, providing researchers with a valuable resource to better optimize and evaluate LLMs' safety.
Figure 7: Table of Comparison of Different Toxicity Datasets.
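Applying the empirical 0.99 threshold described above amounts to a one-line filter over the per-record toxicity scores. The records below are illustrative stand-ins, not real dataset entries.

```python
TOXICITY_THRESHOLD = 0.99  # empirical threshold from the analysis above

records = [  # illustrative records following the dataset schema
    {"text": "a", "toxicity": {"label": 0, "score": 1.2e-05}},
    {"text": "b", "toxicity": {"label": 1, "score": 0.995}},
]

# Scores strictly above the threshold form the toxic subset;
# everything else is retained as non-toxic training data.
toxic_subset = [r for r in records if r["toxicity"]["score"] > TOXICITY_THRESHOLD]
non_toxic = [r for r in records if r["toxicity"]["score"] <= TOXICITY_THRESHOLD]
```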
Citation
Please cite the paper if you use the data or code in this repo.
@misc{zhang2024chinesewebtext20largescalehighquality,
title={ChineseWebText 2.0: Large-Scale High-quality Chinese Web Text with Multi-dimensional and fine-grained information},
author={Wanyue Zhang and Ziyong Li and Wen Yang and Chunlin Leng and Yinan Bai and Qianlong Du and Chengqing Zong and Jiajun Zhang},
year={2024},
eprint={2411.19668},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2411.19668},
}