---
base_model: unsloth/gemma-2-2b-it-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- gguf
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** vinimuchulski
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2-2b-it-bnb-4bit

This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

---

# Function-Calling Agent with LangChain and a Custom Prompt

This project implements a LangChain-based agent with a custom prompt for function calling, using the `GEMMA-2-2B-it-GGUF-function_calling` model hosted on Hugging Face.

## Description

The code builds an agent that combines custom tools with a language model to answer questions through a structured thought-and-action loop. It includes a custom tool (`get_word_length`) that returns the length of a word, and a modified ReAct prompt that guides the agent's reasoning.

## Prerequisites

- Python 3.8+
- Required libraries:

```bash
pip install langchain langchain-ollama
```

## Code

Here is the main script:

```python
from langchain.agents import AgentExecutor, create_react_agent, tool
from langchain.prompts import PromptTemplate
from langchain_ollama.llms import OllamaLLM

# Model served locally through Ollama
MODEL = "hf.co/vinimuchulski/GEMMA-2-2B-it-GGUF-function_calling:latest"
llm = OllamaLLM(model=MODEL)


# Create a custom tool
@tool
def get_word_length(word: str) -> int:
    """Returns the length of a word."""
    return len(word)


# Define the custom ReAct prompt
custom_react_prompt = PromptTemplate(
    input_variables=["input", "agent_scratchpad", "tools", "tool_names"],
    template="""Answer the following questions as best you can. You have access to the following tools:

{tools}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action, formatted as a string
Observation: the result of the action
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Example:
Question: What is the length of the word "hello"?
Thought: I need to use the get_word_length tool to calculate the length of the word "hello".
Action: get_word_length
Action Input: "hello"
Observation: 5
Thought: I now know the length of the word "hello" is 5.
Final Answer: 5

Begin!

Question: {input}
Thought: {agent_scratchpad}""",
)

# Configure the tools and render them as text for the prompt
tools = [get_word_length]
tools_str = "\n".join(f"{t.name}: {t.description}" for t in tools)
tool_names = ", ".join(t.name for t in tools)

# Create the agent
agent = create_react_agent(
    tools=tools,
    llm=llm,
    prompt=custom_react_prompt.partial(tools=tools_str, tool_names=tool_names),
)

# Create the executor
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    handle_parsing_errors=True,
)

# Test the agent
question = "What is the length of the word PythonDanelonAugustoTrajanoRomanovCzarVespasianoDiocleciano?"
response = agent_executor.invoke({"input": question})
print(response)
```
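
## Running

Before running the script, the model referenced by `MODEL` must be available to a local Ollama server. A minimal setup sketch, assuming Ollama is already installed and that the script above is saved as a hypothetical `agent.py`:

```bash
# Assumption: Ollama is installed and its local server is running.
# Pull the GGUF model from Hugging Face through Ollama's hf.co integration:
ollama pull hf.co/vinimuchulski/GEMMA-2-2B-it-GGUF-function_calling:latest

# Run the agent script (hypothetical filename):
python agent.py
```

With `verbose=True`, the executor prints the intermediate Thought/Action/Observation steps before returning the final answer.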