---
base_model:
- wwe180/Llama3-35B-lingyang-v1
library_name: transformers
tags:
- mergekit
- merge
license:
- other
---

# The model is experimental, so results cannot be guaranteed.

## Merge Details

### Merge Method

This model was merged using the passthrough merge method, with [wwe180/Llama3-35B-lingyang-v1](https://huggingface.co/wwe180/Llama3-35B-lingyang-v1) as the base (an illustrative configuration sketch is shown at the end of this card).

### Models Merged

The following models were included in the merge:

* [wwe180/mergekit-passthrough-bklxjmn](https://huggingface.co/wwe180/mergekit-passthrough-bklxjmn) + [wave-on-discord/llama-3-70b-no-robots-adapter](https://huggingface.co/wave-on-discord/llama-3-70b-no-robots-adapter)

## 💻 Usage

```python
# Install dependencies (notebook-style directive; use `pip install -U transformers accelerate` outside a notebook)
!pip install -qU transformers accelerate

import torch
import transformers
from transformers import AutoTokenizer

model = "wwe180/Llama3-35B-lingyang-v1"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt with the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Generate a response
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
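
### Configuration (illustrative)

The mergekit configuration used to produce this merge is not published in the card. For orientation only, a passthrough merge is normally described by a YAML config that stacks layer slices from the source models; the sketch below is a hypothetical illustration, with a placeholder layer range and dtype rather than the actual recipe:

```yaml
# Hypothetical sketch of a mergekit passthrough config -- NOT the actual recipe for this model
slices:
  - sources:
      - model: wwe180/Llama3-35B-lingyang-v1
        layer_range: [0, 40]   # placeholder layer range
merge_method: passthrough
dtype: bfloat16                # placeholder dtype
```

A config like this is typically applied with mergekit's `mergekit-yaml` command; the exact slices used for this model would have to come from its author.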