Dataset: Tesslate/Rust_Dataset
How to use Daemontatox/FerrisMind with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="Daemontatox/FerrisMind")
messages = [
{"role": "user", "content": "Who are you?"},
]
pipe(messages)
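Generation options can be passed per call; a minimal sketch with illustrative sampling values:
pipe(messages, max_new_tokens=256, do_sample=True, temperature=0.7)
# Load model directly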
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Daemontatox/FerrisMind")
model = AutoModelForCausalLM.from_pretrained("Daemontatox/FerrisMind")
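# Optionally pass device_map="auto" (requires the accelerate package) and
# torch_dtype="auto" to from_pretrained to auto-place the model and select a dtype.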
messages = [
{"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
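To print tokens as they are generated instead of waiting for the full completion, transformers provides TextStreamer; a minimal sketch reusing the inputs above:
from transformers import TextStreamer
streamer = TextStreamer(tokenizer, skip_prompt=True)
model.generate(**inputs, streamer=streamer, max_new_tokens=40)
How to use Daemontatox/FerrisMind with vLLM: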
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "Daemontatox/FerrisMind"
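# The server listens on port 8000 by default; pass --port to change it.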
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "Daemontatox/FerrisMind",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
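Because the endpoint is OpenAI-compatible, the official openai Python client works as well; a minimal sketch (the api_key is a placeholder, since a local vLLM server does not check it by default):
from openai import OpenAI
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="Daemontatox/FerrisMind",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)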
How to use Daemontatox/FerrisMind with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "Daemontatox/FerrisMind" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "Daemontatox/FerrisMind",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
# Alternatively, run the SGLang server in Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "Daemontatox/FerrisMind" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API) as shown above.
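The same endpoint can also be called from Python; a minimal sketch using the requests package, assuming the server from either launch option above is listening on port 30000:
import requests
resp = requests.post(
    "http://localhost:30000/v1/chat/completions",
    json={
        "model": "Daemontatox/FerrisMind",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
How to use Daemontatox/FerrisMind with Unsloth Studio: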
# Linux/macOS: install Unsloth Studio
curl -fsSL https://unsloth.ai/install.sh | sh
# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for Daemontatox/FerrisMind to start chatting

# Windows (PowerShell): install Unsloth Studio
irm https://unsloth.ai/install.ps1 | iex
# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for Daemontatox/FerrisMind to start chatting

# Hosted (no setup required)
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for Daemontatox/FerrisMind to start chatting
# Or load the model programmatically with the unsloth Python package:
pip install unsloth
from unsloth import FastModel
model, tokenizer = FastModel.from_pretrained(
model_name="Daemontatox/FerrisMind",
max_seq_length=2048,
)
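Unsloth returns transformers-compatible objects, so generation can follow the same pattern as in the Transformers section; a sketch with an illustrative prompt:
messages = [{"role": "user", "content": "Write a Rust function that reverses a string."}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
How to use Daemontatox/FerrisMind with Docker Model Runner: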
docker model run hf.co/Daemontatox/FerrisMind
FerrisMind is a fine-tuned variant of Qwen3 Coder Flash, specialized for Rust programming. It was trained with GRPO in an attempt to mimic hybrid thinking and apply it to coding instruct models. It is optimized for idiomatic Rust, as in the example below:
// Example: Async file reader in idiomatic Rust
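// Assumes tokio = { version = "1", features = ["full"] } in Cargo.toml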
use tokio::fs::File;
use tokio::io::{self, AsyncReadExt};
#[tokio::main]
async fn main() -> io::Result<()> {
let mut file = File::open("example.txt").await?;
let mut contents = String::new();
file.read_to_string(&mut contents).await?;
println!("File content: {}", contents);
Ok(())
}
Base model: Qwen/Qwen3-Coder-30B-A3B-Instruct