Intel/neural-chat-7b-v3-1

Tags: Text Generation · Transformers · PyTorch · Safetensors · English · mistral · LLMs · Intel · conversational · Eval Results · text-generation-inference
Community discussions: 21

A new version of Mistral is out! I'd love to see a new neural chat

#20 opened about 1 year ago by
rombodawg

Model not found when using OVModelForCausalLM

#17 opened over 1 year ago by
thirdrock

How to load the PyTorch shards only, and not the safetensors, so that only the PyTorch model is loaded onto the GPU from Hugging Face?

#16 opened over 1 year ago by
bilwa99

Add extra `metadata` in `README.md`

#15 opened over 1 year ago by
alvarobartt

Other benchmarks, such as MT-Bench and/or AlpacaEval

2
#14 opened over 1 year ago by
alvarobartt

About DROP results within the `lm-eval-harness`

4
#13 opened over 1 year ago by
alvarobartt

Request: DOI

1
#12 opened over 1 year ago by
Sintayew4

Potential ways to reduce inference latency on CPU cluster?

2
#11 opened over 1 year ago by
TheBacteria

Add base_model metadata

#9 opened over 1 year ago by
davanstrien

Context Length

πŸ‘ 1
2
#7 opened over 1 year ago by
mrfakename

Free and ready to use neural-chat-7B-v3-1-GGUF model as OpenAI API compatible endpoint

πŸ‘ 🀝 3
#6 opened over 1 year ago by
limcheekin

What is the difference between Intel/neural-chat-7b-v3-1 and Intel/neural-chat-7b-v3?

πŸ‘ 1
3
#3 opened over 1 year ago by
Ichsan2895

Prompt Template?

πŸ‘ 4
13
#1 opened over 1 year ago by
fakezeta
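The prompt-template question in #1 comes up often with this model. A minimal sketch of the instruction format, assuming the `### System:` / `### User:` / `### Assistant:` layout documented on the Intel/neural-chat-7b-v3-1 model card, could look like this (the `build_prompt` helper is hypothetical, for illustration only):

```python
def build_prompt(system: str, user: str) -> str:
    """Format a single-turn prompt in the neural-chat style:
    a system block, a user block, then an open assistant block
    for the model to complete."""
    return (
        f"### System:\n{system}\n"
        f"### User:\n{user}\n"
        f"### Assistant:\n"
    )


prompt = build_prompt(
    "You are a helpful assistant.",
    "What is the capital of France?",
)
print(prompt)
```

The resulting string would be passed to the tokenizer as-is; the model is expected to generate its reply after the trailing `### Assistant:` line.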