Update README.md
README.md
@@ -21,7 +21,7 @@ We believe the future of AI is open. That’s why we’re sharing our latest mod
 - **Use or build optimized foundation models**, including Llama, Mistral, Qwen, Gemma, DeepSeek, and others, tailored for performance and accuracy in real-world deployments.
 - **Customize and fine-tune models for your workflows**, from experimentation to production, with tools and frameworks built to support reproducible research and enterprise AI pipelines.
 - **Maximize inference efficiency across hardware** using production-grade compression and optimization techniques like quantization (FP8, INT8, INT4), structured/unstructured sparsity, distillation, and more, ready for cost-efficient deployments with vLLM.
-- **Confidently deploy validated models** on Red Hat AI products.
+- **Confidently deploy [validated models](http://www.redhat.com/en/products/ai/validated-models)** on Red Hat AI products.
 
 🔗 **Explore relevant open-source tools**:
 - [**vLLM**](https://github.com/vllm-project/vllm) – Serve large language models efficiently across GPUs and environments.