---
base_model: unsloth/gemma-2-2b-it-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** vinimuchulski
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2-2b-it-bnb-4bit

This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
---
# Function Calling Agent with LangChain and a Custom Prompt

This project implements a LangChain-based agent with a custom prompt for function calling, using the `GEMMA-2-2B-it-GGUF-function_calling` model hosted on Hugging Face.

## Description

The code builds an agent that combines custom tools with a language model to answer questions through a structured thought-and-action loop. It includes a custom tool (`get_word_length`) that computes the length of a word, and a modified ReAct prompt that guides the agent's reasoning.
## Prerequisites

- Python 3.8+
- Required libraries:

```bash
pip install langchain langchain-ollama
```
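Besides the Python packages, the agent expects the GGUF model to be served by a local Ollama instance. Assuming Ollama is installed and running, the model can be pulled directly from Hugging Face by its repository tag:

```shell
# Pull the GGUF model referenced by the script below (requires a running Ollama daemon)
ollama pull hf.co/vinimuchulski/GEMMA-2-2B-it-GGUF-function_calling:latest
```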
## Code

The main script:
```python
from langchain.agents import AgentExecutor, create_react_agent, tool
from langchain_ollama.llms import OllamaLLM
from langchain.prompts import PromptTemplate

# Define the model (served locally through Ollama)
MODEL = "hf.co/vinimuchulski/GEMMA-2-2B-it-GGUF-function_calling:latest"
llm = OllamaLLM(model=MODEL)

# Create a custom tool
@tool
def get_word_length(word: str) -> int:
    """Return the length of a word."""
    return len(word)

# Define the custom ReAct prompt
custom_react_prompt = PromptTemplate(
    input_variables=["input", "agent_scratchpad", "tools", "tool_names"],
    template="""Answer the following questions as best you can. You have access to the following tools:

{tools}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action, formatted as a string
Observation: the result of the action
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Example:

Question: What is the length of the word "hello"?
Thought: I need to use the get_word_length tool to calculate the length of the word "hello".
Action: get_word_length
Action Input: "hello"
Observation: 5
Thought: I now know the length of the word "hello" is 5.
Final Answer: 5

Begin!

Question: {input}
Thought: {agent_scratchpad}""",
)

# Configure the tools and render their names/descriptions for the prompt
tools = [get_word_length]
tools_str = "\n".join(f"{t.name}: {t.description}" for t in tools)
tool_names = ", ".join(t.name for t in tools)

# Create the agent
agent = create_react_agent(
    llm=llm,
    tools=tools,
    prompt=custom_react_prompt.partial(tools=tools_str, tool_names=tool_names),
)

# Create the executor
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    handle_parsing_errors=True,
)

# Test the agent
question = "What is the length of the word PythonDanelonAugustoTrajanoRomanovCzarVespasianoDiocleciano?"
response = agent_executor.invoke({"input": question})
print(response)
```
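Under the hood, the executor reads the model's `Action` / `Action Input` lines and dispatches them to the matching tool, feeding the result back as an `Observation`. The following is a minimal, stdlib-only sketch of that parsing-and-dispatch step (a simplified illustration, not LangChain's actual parser), which also shows why `handle_parsing_errors=True` is useful when the model deviates from the format:

```python
import re

def parse_react_step(text):
    """Extract the tool name and input from an Action / Action Input pair."""
    action = re.search(r"Action:\s*(.+)", text)
    action_input = re.search(r"Action Input:\s*(.+)", text)
    if not (action and action_input):
        # LangChain would surface this as a parsing error for the executor to handle
        raise ValueError("Could not parse Action/Action Input")
    return action.group(1).strip(), action_input.group(1).strip().strip('"')

# Hypothetical tool registry mirroring the get_word_length tool above
TOOLS = {"get_word_length": lambda word: len(word)}

step = 'Thought: I need the word length.\nAction: get_word_length\nAction Input: "hello"'
name, arg = parse_react_step(step)
observation = TOOLS[name](arg)
print(observation)  # prints 5
```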