Commit History

40a88e8 Feat: Add sharegpt multirole (#1137)
2ea70eb ORPO (#1419)
05bcc9e Train parameters exclusively in specific ranges (#1390)
b7d8a7d Add Glaive conversation format support (#1365)
4d09b42 plain input/output prompt strategy w/o chat templates (#1346)
0001862 run tests again on Modal (#1289) [skip ci]
6b3b271 fix for protected model_ namespace w pydantic (#1345)
0f985e1 more fixes 20240228 (#1342) [skip ci]
cc3cebf Pydantic 2.x cfg (#1239)
5894f0e make mlflow optional (#1317)
8430db2 Scheduler implementation of Continual Pre-Training of Large Language Models: How to (re)warm your model? (#1273)
c7cf381 Pretrain transforms (#1261)
8c2e05a relora: magnitude pruning of the optimizer (#1245)
00568c1 support for true batches with multipack (#1230)
25e037f Support for additional_special_tokens (#1221) [skip ci]
4cb7900 Peft lotfq (#1222)
af29d81 ADD: warning if hub_model_id is set but not any save strategy (#1202)
98b4762 Feat/chatml add system message (#1117)
814aee6 Phi2 multipack (#1173)
5439707 Feat(test): Add tests for alpaca chatml prompt tokenizer (#1088)
e799e08 Falcon embeddings (#1149) [skip docker]
2ce5c0d Deprecate max packed sequence len (#1141)
6910e6a Multipack simplify for Mixtral (#1142)
8487b97 Add `layers_to_transform` for `lora_config` (#1118) (xzuyn)
0865613 Enable or disable bf16 support based on availability (#1116) (Simon Hällqvist)