# Model Card for Llama-3.2-3B-Instruct-Bitcoin-Analyst-v2
This repository contains a specialized version of `meta-llama/Llama-3.2-3B-Instruct`, fine-tuned to function as a Bitcoin and cryptocurrency market analyst. The model is the result of a multi-stage "continuation training" process, making it well suited to understanding and responding to complex instructions in the financial domain.
## Model Details

### Model Description
This model is a Causal Language Model (CLM) based on the Llama 3.2 3B Instruct architecture. It was developed through a sequential fine-tuning process to enhance its knowledge and instruction-following capabilities for topics related to Bitcoin, blockchain technology, and financial markets.
The training procedure involved three main stages (a code sketch of this merge-and-continue pipeline follows the list):

- Initial Specialization: The base model was first merged with a high-performing LoRA adapter (`tahamajs/llama-3.2-3b-instruct-bitcoin-analyst-perfect`) to provide a strong foundation of domain-specific knowledge.
- Continuation Training: A new LoRA adapter was then trained on top of this already-specialized model using the `tahamajs/bitcoin-llm-finetuning-dataset`.
- Final Merge: The final model available here is the result of merging the second adapter, combining the knowledge from all stages into a single model.
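For illustration, a minimal sketch of what this merge-and-continue pipeline can look like with `transformers` and `peft` is shown below. This is an assumed recipe based on the description above, not the published training script; the adapter directory and output path in stage 3 are hypothetical.

```python
# Illustrative sketch of the merge -> continue-train -> merge pipeline
# (assumed recipe, not the exact training script).
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Stage 1: load the base model and merge the first LoRA adapter into it
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-3B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
stage1 = PeftModel.from_pretrained(
    base, "tahamajs/llama-3.2-3b-instruct-bitcoin-analyst-perfect"
).merge_and_unload()

# Stage 2: train a new LoRA adapter on top of `stage1`
# (see Training Hyperparameters below for the configuration).

# Stage 3: merge the second adapter and save a standalone model.
# "new-adapter-dir" and "analyst-v2" are hypothetical paths.
final = PeftModel.from_pretrained(stage1, "new-adapter-dir").merge_and_unload()
final.save_pretrained("analyst-v2")
```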
- Developed by: tahamajs
- Model type: Causal Language Model, Instruction-Tuned
- Language(s) (NLP): English (en)
- License: Llama 3.2 Community License Agreement
- Finetuned from model: `meta-llama/Llama-3.2-3B-Instruct`
### Model Sources

- Repository: `tahamajs/llama-3.2-3b-instruct-bitcoin-analyst-perfect_v2`
## Uses

### Direct Use
This model is intended for direct use as an instruction-following chatbot for topics related to Bitcoin and cryptocurrency. It can be used for question answering, analysis, and explanation of complex financial and technical concepts. For best results, prompts should be formatted using the Llama 3 chat template.
### Out-of-Scope Use

This model is not a financial advisor and should not be used for making investment decisions. Its knowledge is limited to its training data, and it may produce inaccurate or outdated information. It is not designed for general-purpose conversation outside its specialized domain.
## Bias, Risks, and Limitations
This model inherits the limitations of the base Llama 3.2 model and the biases present in its training data. In the financial domain, there is a risk of generating overly optimistic or pessimistic statements that could be misinterpreted as financial advice. Users should be aware of these risks and verify any factual information independently.
### Recommendations
Users should critically evaluate all outputs from this model, especially when they pertain to financial metrics or price predictions. We recommend clearly stating to any end-users that the text is generated by an AI and is not a substitute for professional financial advice.
## How to Get Started with the Model

Use the code below to load and run the model with the `transformers` library.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Use the ID of this repository
model_id = "tahamajs/llama-3.2-3b-instruct-bitcoin-analyst-perfect_v2"

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Use the Llama 3 chat template for instruction following
messages = [
    {"role": "user", "content": "What is the Bitcoin halving and what is its expected impact on the price?"},
]

# Apply the chat template and tokenize
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Generate a response
outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

# Decode and print only the newly generated tokens
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
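If GPU memory is limited, the same checkpoint can also be loaded in 4-bit via `bitsandbytes`. This is a minimal sketch, assuming a CUDA GPU with `bitsandbytes` installed; it reuses `model_id` from the snippet above.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization, roughly matching the QLoRA setup used in training
model = AutoModelForCausalLM.from_pretrained(
    model_id,  # defined in the snippet above
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
    device_map="auto",
)
```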
## Training Details

### Training Data

The second stage of fine-tuning was performed on `tahamajs/bitcoin-llm-finetuning-dataset`. This dataset contains instruction-response pairs related to Bitcoin, market analysis, and blockchain technology.
### Training Procedure

#### Preprocessing
The training data was formatted with the Llama 3 chat template, using the following structure for each example:

```text
<|begin_of_text|><|start_header_id|>user<|end_header_id|>

{instruction}
{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>
```
The loss was calculated only on the assistant's response tokens.
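A minimal sketch of how such response-only masking is typically implemented: prompt tokens receive the label `-100`, which PyTorch's cross-entropy ignores, so only the assistant's answer contributes to the loss. The exact training script is not published; `build_example` is a hypothetical helper, and TRL's completion-only collator achieves the same effect.

```python
# Hypothetical helper showing response-only loss masking. Here prompt_text
# is everything up to and including the assistant header, and answer_text
# is "{output}<|eot_id|>".
def build_example(tokenizer, prompt_text, answer_text):
    prompt_ids = tokenizer(prompt_text, add_special_tokens=False)["input_ids"]
    answer_ids = tokenizer(answer_text, add_special_tokens=False)["input_ids"]
    return {
        "input_ids": prompt_ids + answer_ids,
        # -100 masks the prompt tokens out of the cross-entropy loss
        "labels": [-100] * len(prompt_ids) + answer_ids,
    }
```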
#### Training Hyperparameters

- Training regime: `bf16` mixed precision with QLoRA
- LoRA `r`: 16
- LoRA `alpha`: 32
- LoRA `dropout`: 0.1
- LoRA `target_modules`: `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj`
- `learning_rate`: 1e-4
- `num_train_epochs`: 1
- `per_device_train_batch_size`: 1
- `gradient_accumulation_steps`: 8
- `optimizer`: `paged_adamw_32bit`
- `lr_scheduler_type`: `cosine`
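With `per_device_train_batch_size=1` and `gradient_accumulation_steps=8`, the effective batch size is 8. Below is a sketch of how these hyperparameters map onto the usual `peft`/`bitsandbytes`/`transformers` configuration objects; it assumes the standard QLoRA recipe rather than the exact script, and `"outputs"` is a hypothetical directory.

```python
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

# QLoRA: 4-bit NF4 base weights with bf16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA settings listed above
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

# Optimization settings listed above ("outputs" is a hypothetical directory)
training_args = TrainingArguments(
    output_dir="outputs",
    learning_rate=1e-4,
    num_train_epochs=1,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    optim="paged_adamw_32bit",
    lr_scheduler_type="cosine",
    bf16=True,
)
```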
#### Training Loss
The training loss shows a clear downward trend, indicating that the model successfully learned from the new data.
## Environmental Impact
- Hardware Type: Not specified
- Hours used: Not specified
- Cloud Provider: Not specified
- Compute Region: Not specified
- Carbon Emitted: Not estimated
## Technical Specifications

### Model Architecture and Objective
This is a decoder-only transformer based on the Llama 3.2 architecture. It was fine-tuned using a causal language modeling objective.
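Concretely, the causal LM objective is next-token prediction: a cross-entropy over the vocabulary with targets shifted one position left. A minimal sketch of the computation follows; Hugging Face models apply this shift internally when `labels` are passed.

```python
import torch.nn.functional as F

def causal_lm_loss(logits, input_ids):
    # logits: (batch, seq_len, vocab); the target at position t is token t+1
    shift_logits = logits[:, :-1, :]
    shift_labels = input_ids[:, 1:]
    return F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
    )
```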
### Compute Infrastructure

#### Software
- PyTorch
- Transformers
- PEFT (v0.17.0)
- TRL
- BitsAndBytes
## Model Card Authors
tahamajs
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0