Commit History
cc3cebf  Pydantic 2.x cfg (#1239)
5894f0e  make mlflow optional (#1317)
fac2d98  Add MPS support (#1264)
5698943  simplify haldning for newer multipack patches so they can be added in a single place (#1270)
73f1bda  Fix bug preventing model_kwargs being injected (#1262)  (Zac Brannelly)
8c2e05a  relora: magnitude pruning of the optimizer (#1245)
00568c1  support for true batches with multipack (#1230)
c67fb71  Peft deepspeed resume (#1227)
25e037f  Support for additional_special_tokens (#1221) [skip ci]
8608d80  Fix typo (#1231) [skip ci]  (xhedit)
4cb7900  Peft lotfq (#1222)
8da1633  Revert "run PR e2e docker CI tests in Modal" (#1220) [skip ci]
36d053f  run PR e2e docker CI tests in Modal (#1217) [skip ci]
e923e62  more checks and fixes for deepspeed and fsdp (#1208) [skip ci]
98b4762  Feat/chatml add system message (#1117)
08719b9  fix(log): improve warning to clarify that lora_modules_to_save expect a list (#1197)
54d2ac1  Mixtral fixes 20240124 (#1192) [skip ci]
814aee6  Phi2 multipack (#1173)
e799e08  Falcon embeddings (#1149) [skip docker]
eaaeefc  jupyter lab fixes (#1139) [skip ci]
f5a828a  Qwen2 (#1166)
fccb542  make sure the model config loader respects the model_revision too (#1160) [skip-ci]
2ce5c0d  Deprecate max packed sequence len (#1141)
6910e6a  Multipack simplify for Mixtral (#1142)
8487b97  Add `layers_to_transform` for `lora_config` (#1118)  (xzuyn)