---
license: mit
datasets:
  - nsarrazin/lichess-games-2023-01
pipeline_tag: text-generation
tags:
  - chess
---

A 231M-parameter base model trained on Lichess games from January 2023 that ended in checkmate (games won on time were filtered out). It is trained on sequences of UCI moves, so inference is straightforward:

```py
from transformers import GPT2LMHeadModel, AutoTokenizer

# Load the model in eval mode along with its tokenizer
model = GPT2LMHeadModel.from_pretrained("nsarrazin/chessformer").eval()
tokenizer = AutoTokenizer.from_pretrained("nsarrazin/chessformer")

# The input is a single space-separated string of UCI moves played so far
moves = " ".join(["e2e4", "e7e5", "d2d4", "d7d5"])
model_inputs = tokenizer(moves, return_tensors="pt")

# Generate one new token and decode only it: the predicted next move
gen_tokens = model.generate(**model_inputs, max_new_tokens=1)[0]
next_move = tokenizer.decode(gen_tokens[-1])
print(next_move)  # d4e5
```
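For reference, a UCI move is the origin square followed by the destination square, with an optional promotion letter (`q`, `r`, `b`, `n`), e.g. `e2e4` or `e7e8q`; castling is written as a king move such as `e1g1`. Since this is a base language model, a generated move is not guaranteed to be well-formed or legal, so callers may want to validate the output. Below is a minimal syntactic check (not part of this repo, just an illustration; for full legality checking a library like `python-chess` would be needed):

```py
import re

# UCI move: origin square + destination square + optional promotion piece.
# This only checks the syntax, not whether the move is legal on a board.
UCI_MOVE = re.compile(r"[a-h][1-8][a-h][1-8][qrbn]?")

def is_uci_move(move: str) -> bool:
    return UCI_MOVE.fullmatch(move) is not None

assert is_uci_move("e2e4")       # pawn push
assert is_uci_move("e7e8q")      # promotion to queen
assert not is_uci_move("O-O")    # SAN castling; UCI writes it as e1g1
assert not is_uci_move("e2e9")   # rank out of range
```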