arxiv:2506.00019

Amadeus-Verbo Technical Report: The powerful Qwen2.5 family models trained in Portuguese

Published on May 20
Abstract

The report details the creation of Amadeus Verbo, a suite of large language models for Brazilian Portuguese, intended to show how readily foundation models can be fine-tuned for open-source development when data and resources are available.

AI-generated summary

This report describes the development of Amadeus Verbo, a family of large language models for Brazilian Portuguese. To cover diverse use cases, Amadeus Verbo includes base-tuned, merged, and instruction-tuned models at 0.5B, 1.5B, 3B, 7B, 14B, 32B, and 72B parameters. The main objective is to show how easily foundation models can be fine-tuned to democratize the open-source development of Brazilian Portuguese LLMs when data and resources are available. All Amadeus-Verbo models are available on Hugging Face at https://huggingface.co/collections/amadeusai/amadeus-verbo-qwen25-67cf2e7aae69ce2b3bcdcfda.


Models citing this paper: 21

Datasets citing this paper: 0

Spaces citing this paper: 0

Collections including this paper: 1