{
"cells": [
{
"cell_type": "markdown",
"id": "0faf5297",
"metadata": {},
"source": [
"# SmolLM Preview Demo\n",
"\n",
"This notebook demonstrates how to use the model for various tasks, including reasoning, tool use, and more. Similar to the Qwen3 models, the model is capable of responding with either extended thinking or not (controlled by `/think` or `/no_think` system message). This is a preview of the model's capabilities and has several limitations we are currently working on:\n",
"\n",
"* Does not know it's identity is SmolLM\n",
"* Tool use only works without extended thinking\n",
"* Multi-turn reasoning consistency depends on the domain (better for math and code than for general chat)\n",
"* The conversational vibe needs improvement"
]
},
{
"cell_type": "markdown",
"id": "7a44bcee",
"metadata": {},
"source": [
"## Environment\n",
"\n",
 Note you must use">
"> Note: you must use `transformers@main` to load the model until v4.53.0 is released.\n",
"\n",
"- Platform: Linux-5.15.0-1048-aws-x86_64-with-glibc2.31\n",
"- Python version: 3.11.11\n",
"- TRL version: 0.19.0.dev0+7e434fa\n",
"- PyTorch version: 2.6.0\n",
"- accelerator(s): NVIDIA H100 80GB HBM3\n",
"- Transformers version: 4.53.0.dev0\n",
"- Accelerate version: 1.7.0\n",
"- Accelerate config: not found\n",
"- Datasets version: 3.6.0\n",
"- HF Hub version: 0.33.0\n",
"- bitsandbytes version: 0.46.0\n",
"- DeepSpeed version: 0.17.0\n",
"- Diffusers version: 0.33.1\n",
"- Liger-Kernel version: 0.5.10\n",
"- LLM-Blender version: 0.0.2\n",
"- OpenAI version: 1.55.2\n",
"- PEFT version: 0.15.2\n",
"- vLLM version: 0.8.5.post1"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "32bc7087",
"metadata": {},
"outputs": [],
"source": [
"# uncomment and run\n",
"# %pip uninstall transformers -y\n",
"# %pip install git+https://github.com/huggingface/transformers.git"
]
},
{
"cell_type": "markdown",
"id": "01718c21",
"metadata": {},
"source": [
"## Imports"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "f7027e5a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"INFO 06-26 10:13:43 [__init__.py:239] Automatically detected platform cuda.\n"
]
}
],
"source": [
"from vllm import LLM, SamplingParams"
]
},
{
"cell_type": "markdown",
"id": "2e4d3e42",
"metadata": {},
"source": [
"## Load model"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f1ae58ad",
"metadata": {},
"outputs": [],
"source": [
"model, rev = \"smol-explorers/intermediate-sft-v3\", \"v18.00-step-000018000\"\n",
"llm = LLM(model=model, revision=rev, gpu_memory_utilization=0.8)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "87289e8d",
"metadata": {},
"outputs": [],
"source": [
"sampling_params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=8192)"
]
},
{
"cell_type": "markdown",
"id": "9df6308f",
"metadata": {},
"source": [
"## Helper functions"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "048c5048",
"metadata": {},
"outputs": [],
"source": [
"def respond(query, system_message=None, tools=None):\n",
" \"\"\"\n",
" Respond to a query with an optional system message.\n",
" \"\"\"\n",
" messages = []\n",
" if system_message:\n",
" messages.append({\"role\": \"system\", \"content\": system_message})\n",
" messages.append({\"role\": \"user\", \"content\": query})\n",
" \n",
" response = llm.chat(messages, sampling_params=sampling_params, tools=tools)\n",
" print(response[0].outputs[0].text)"
]
},
{
"cell_type": "markdown",
"id": "5e7a7d92",
"metadata": {},
"source": [
"## Extended vs no extended thinking\n",
"\n",
"By default, the model will respond with extended thinking. If you wish to disable this, include `/no_think` in the system message."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "b5161c8c",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"INFO 06-26 10:14:51 [chat_utils.py:397] Detected the chat template content format to be 'string'. You can set `--chat-template-content-format` to override this.\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "583e92a64dc24983891ca935960a1b65",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Processed prompts: 0%| | 0/1 [00:00, ?it/s, est. speed input: 0.00 toks/s, output: 0.00 toks/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Okay, so I need to figure out the capital of France. Let me start by recalling what I know. I remember that France is a country in Europe, and I think its capital is a major city there. But wait, I'm not entirely sure. Let me think. There's Paris, which I've heard mentioned a lot in the context of France. For example, I know that Paris is known for landmarks like the Eiffel Tower and the Louvre Museum. But could there be another city considered the capital?\n",
"\n",
"Hmm, maybe I should verify this. I think in some countries, the capital might be a smaller city, but France is one of the larger countries. I don't recall hearing about another city being called the capital. Let me try to remember any other cities in France. There's Lyon, Marseille, but those are more known for their industries or cultural significance. \n",
"\n",
"Wait, maybe I should check my memory against some historical facts. France has a long history, and its capital has probably been the same for a long time. I think during the French Revolution, Paris was the capital. Before that, it was Paris as well, I believe. So even in the past, Paris was the capital. \n",
"\n",
"But just to be thorough, I should consider if there's any other possible contender. For instance, maybe the capital is a different city during different periods? I don't think so. The current capital is definitely Paris. Let me also think about the administrative structure. France is divided into regions and departments, but the capital is the seat of the government. The government is based in Paris, so that's the capital. \n",
"\n",
"I might also recall that the official name of the city is Paris, and it's the capital of the Île-de-France region. The region is one of the 13 administrative regions in France. So, yes, that makes sense. \n",
"\n",
"Wait, could there be a confusion with another city? For example, in some languages, the name might be different, but the capital is still Paris. For instance, in French, the name is still Paris. \n",
"\n",
"I think I'm pretty confident here. The capital of France is Paris. But just to be 100% sure, maybe I should look up a fact or two. Wait, I can't actually look things up right now, but based on my existing knowledge, I'm pretty sure. So the answer should be Paris.\n",
"\n",
"\n",
"The capital of France is **Paris**. It is the largest city in France and serves as the seat of the French government, housing key institutions like the Eiffel Tower, the Louvre Museum, and the National Assembly of France. Paris is also the capital of the Île-de-France region, one of France's 13 administrative regions.\n"
]
}
],
"source": [
"# default mode is extended thinking\n",
"respond(\"What is the capital of France?\")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "69afb03b",
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "6eef86d90d324388a50cb295460199d6",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Processed prompts: 0%| | 0/1 [00:00, ?it/s, est. speed input: 0.00 toks/s, output: 0.00 toks/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"The capital of France is Paris.\n"
]
}
],
"source": [
"# disable extended thinking\n",
"respond(\"What is the capital of France?\", system_message=\"/no_think\")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "8d7ef7ca",
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "5b9af9049a8e469587d17b6dfa5a4f13",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Processed prompts: 0%| | 0/1 [00:00, ?it/s, est. speed input: 0.00 toks/s, output: 0.00 toks/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"The capital o' France be Paris, me hearty!\n"
]
}
],
"source": [
"# no thinking with a persona\n",
"respond(\"What is the capital of France?\", system_message=\"Always respond like a pirate. /no_think\")"
]
},
{
"cell_type": "markdown",
"id": "04ed3cec",
"metadata": {},
"source": [
"Under the hood, the system message is inserted within a structured system prompt that is wrapped in the model's chat template. Here's an example:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "66583cc5",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<|im_start|>system\n",
"## Metadata\n",
"\n",
"Knowledge Cutoff Date: June 2025\n",
"Today Date: 26 June 2025\n",
"Reasoning Mode: /no_think\n",
"\n",
"## Custom Instructions\n",
"\n",
"Always respond like a pirate.\n",
"\n",
"## Tools\n",
"\n",
"### XML Tools\n",
"\n",
"None\n",
"\n",
"### Python Tools\n",
"\n",
"None\n",
"\n",
"<|im_end|>\n",
"<|im_start|>user\n",
"What is the capital of France?<|im_end|>\n",
"<|im_start|>assistant\n",
"\n",
"\n",
"\n",
"\n"
]
}
],
"source": [
"print(llm.get_tokenizer().apply_chat_template([\n",
" {\"role\": \"system\", \"content\": \"Always respond like a pirate. /no_think\"}, \n",
" {\"role\": \"user\", \"content\": \"What is the capital of France?\"}\n",
" ], tokenize=False, add_generation_prompt=True))"
]
},
{
"cell_type": "markdown",
"id": "690c1462",
"metadata": {},
"source": [
"Without the `/no_think` suffix, the reasoning model is set to `/think` in the metadata:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "cbaa1679",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<|im_start|>system\n",
"## Metadata\n",
"\n",
"Knowledge Cutoff Date: June 2025\n",
"Today Date: 26 June 2025\n",
"Reasoning Mode: /think\n",
"\n",
"## Custom Instructions\n",
"\n",
"Always respond like a pirate.\n",
"\n",
"## Tools\n",
"\n",
"### XML Tools\n",
"\n",
"None\n",
"\n",
"### Python Tools\n",
"\n",
"None\n",
"\n",
"<|im_end|>\n",
"<|im_start|>user\n",
"What is the capital of France?<|im_end|>\n",
"<|im_start|>assistant\n",
"\n"
]
}
],
"source": [
"print(llm.get_tokenizer().apply_chat_template([\n",
" {\"role\": \"system\", \"content\": \"Always respond like a pirate.\"}, \n",
" {\"role\": \"user\", \"content\": \"What is the capital of France?\"}\n",
" ], tokenize=False, add_generation_prompt=True))"
]
},
{
"cell_type": "markdown",
"id": "483e0067",
"metadata": {},
"source": [
"## Examples"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "cbe9470a",
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "e4a1a53aa3554203a8848cd4fff88a97",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Processed prompts: 0%| | 0/1 [00:00, ?it/s, est. speed input: 0.00 toks/s, output: 0.00 toks/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
" Okay, so I need to figure out how many Rs are in the word \"strawberry.\" Let me start by writing down the word and then going through each letter one by one. Sometimes it's easy to miss letters, especially if they're in the middle or at the end. Let me see... The word is S-T-R-A-W-B-E-R-R-Y. Hmm, let me break it down letter by letter.\n",
"\n",
"Starting with the first letter: S. That's a S, so no R there. Next is T. Still no R yet. Third letter is R. Oh, there's one R. Let me count that. So first R is at position 3. Then the next letters are A, W, B, E. No Rs in those. Then the next letters after E are R and R again. Wait, so after E comes R (position 7), and then another R (position 8), right? Let me check again: S-T-R-A-W-B-E-R-R-Y. Yeah, the letters after E are R (7th letter) and R (8th letter), and then Y is the last one. So that's two more Rs. So total Rs would be the first R (position 3) plus the two more at positions 7 and 8. That makes 3 Rs in total?\n",
"\n",
"Wait, hold on. Let me write them out with their positions to make sure I didn't miscount.\n",
"\n",
"1: S\n",
"\n",
"2: T\n",
"\n",
"3: R\n",
"\n",
"4: A\n",
"\n",
"5: W\n",
"\n",
"6: B\n",
"\n",
"7: E\n",
"\n",
"8: R\n",
"\n",
"9: R\n",
"\n",
"10: Y\n",
"\n",
"Wait a second, maybe I miscounted the letters. Let me count the letters in \"strawberry\" again. Let's see:\n",
"\n",
"S (1), T (2), R (3), A (4), W (5), B (6), E (7), R (8), R (9), Y (10). So there are 10 letters total. So the Rs are at positions 3, 8, and 9. That's three Rs. So the answer should be 3?\n",
"\n",
"But maybe I'm missing something. Let me check again. Let's spell it out slowly: S-T-R-A-W-B-E-R-R-Y. So after the first R, the next letters are A, W, B, E, then R, R, Y. So yes, that's three Rs. Wait, but maybe I should double-check if there's another R somewhere else. Let me write each letter with its position:\n",
"\n",
"1: S\n",
"\n",
"2: T\n",
"\n",
"3: R\n",
"\n",
"4: A\n",
"\n",
"5: W\n",
"\n",
"6: B\n",
"\n",
"7: E\n",
"\n",
"8: R\n",
"\n",
"9: R\n",
"\n",
"10: Y\n",
"\n",
"Yep, that's three Rs. So the answer is 3? Hmm, but I feel like sometimes people might miscount the letters. Let me confirm the spelling of strawberry. S-T-R-A-W-B-E-R-R-Y. Yep, that's correct. So three Rs. Okay, I think that's right. But maybe I should think if there's another way to approach this. For example, maybe using a different method like counting the number of times the letter R appears in the word. Alternatively, maybe I can use a tool or a different approach, but since I can't actually use a tool right now, I'll stick to my own counting.\n",
"\n",
"Alternatively, maybe I can split the word into parts. Let's see: \"straw\" and \"berry.\" \"Straw\" has an R, and \"berry\" has two Rs. So that's 1 + 2 = 3 Rs. That confirms it again. So \"straw\" is S-T-R-A-W, so one R, and \"berry\" is B-E-R-R-Y, so two Rs. So total of three. That seems consistent. Okay, so I think the answer is three Rs in the word strawberry.\n",
"\n",
"\n",
"The word **strawberry** contains **3** instances of the letter **R**. Here's the breakdown:\n",
"\n",
"1. **S-T-R-A-W** (first part of the word) has 1 **R**.\n",
"2. **B-E-R-R-Y** (second part of the word) has 2 **R**s.\n",
"3. Total: \\(1 + 2 = 3\\) **R**s.\n",
"\n",
"**Answer:** \\boxed{3}\n"
]
}
],
"source": [
"respond(\"How many rs are there in the word strawberry?\")"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "36c991bd",
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "386eb5d0a30e4947bb6f50b953cc02dd",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Processed prompts: 0%| | 0/1 [00:00, ?it/s, est. speed input: 0.00 toks/s, output: 0.00 toks/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
" Okay, so I need to figure out which number is larger between 9.11 and 9.9. Hmm, let me start by recalling how numbers are compared. I think it's similar to comparing whole numbers, but with decimals. The key here is to line up the decimal points and then compare each corresponding digit from left to right. \n",
"\n",
"First, let me write both numbers down to make sure I have them right: 9.11 and 9.9. Both have a 9 in the units place, which is the first digit after the decimal point. The first number, 9.11, has a 1 in the hundredths place, and the second number, 9.9, has a 9 in the tenths place. Wait, actually, hold on. Let me clarify the places. In decimal numbers, the first digit after the decimal is the tenths, the second is hundredths, the third is thousandths, and so on. \n",
"\n",
"So, for 9.11, breaking it down: the 9 is in the ones place, then the decimal point, then 1 in the tenths, 1 in the hundredths. For 9.9, it's 9 in the ones place, decimal, then 9 in the tenths place. So comparing them, since both have 9 in the ones place, we can focus on the tenths place first. \n",
"\n",
"In 9.11, the tenths place is 1, and in 9.9, it's 9. Since 9 is larger than 1, that means 9.9 is bigger than 9.11. Wait, is that right? Let me double-check. \n",
"\n",
"Another way to think about it: 9.9 is the same as 9.90, right? So if I write both numbers with the same number of decimal places, 9.11 vs. 9.90. Now, comparing the tenths place first: 9 vs. 1. Since 9 is greater than 1, the number with 9 in the tenths place is larger. So 9.90 is larger than 9.11, which confirms the initial thought. \n",
"\n",
"Alternatively, maybe convert them both to fractions to see? Let's see. 9.11 is equal to 911/100, and 9.9 is 99/10. To compare these fractions, I can convert them to have the same denominator. 99/10 is equal to 990/100. So 911/100 vs. 990/100. Since 990 is greater than 911, 9.9 is indeed larger. \n",
"\n",
"Hmm, so both methods confirm that 9.9 is larger. Wait, but just to make sure I didn't mix up tenths and hundredths. Let me write them vertically:\n",
"\n",
"9.11\n",
"9.90\n",
"\n",
"Starting from the left, the ones place is the same (9 vs 9). Then the tenths place: 1 vs 9. Since 9 is bigger, the second number is larger. So yeah, 9.9 is bigger. \n",
"\n",
"Alternatively, if I think about money, like dollars and cents. 9.9 would be $9.90 and 9.11 is $9.11. Obviously, 9 dollars and 90 cents is more than 9 dollars and 11 cents. \n",
"\n",
"Is there any chance I might have misread the numbers? Let me check again. The problem says 9.11 versus 9.9. Yep, that's correct. The first has two decimal places, the second has one. But when you write them with the same number of decimal places, like 9.90, it's clearer. \n",
"\n",
"Another way: subtract the two numbers. Let's see, 9.9 minus 9.11. 9.9 minus 9 is 0.9, and then subtract the remaining 0.11, so 0.9 - 0.11 = 0.79. Since the result is positive, that means 9.9 is larger. \n",
"\n",
"So all these different methods are pointing to the same conclusion. 9.9 is larger than 9.11. I think that's solid. \n",
"\n",
"Wait, but just to ensure I haven't overlooked anything. Let me think of another angle. Suppose I have two numbers where the integer parts are the same, so the difference must come from the decimal parts. The decimal part of 9.9 is 0.9, and that of 9.11 is 0.11. Since 0.9 is greater than 0.11, the number with the larger decimal part is larger. \n",
"\n",
"Alternatively, if the decimal parts were the same, then we would look at the next digit. But here, the decimal parts differ in the tenths place, so that's the determining factor. \n",
"\n",
"I think I've covered all the possible ways to compare these numbers. So the answer should be 9.9 is larger. \n",
"\n",
"**Final Answer**\n",
"The larger number is \\boxed{9.9}.\n",
"\n",
"\n",
"To determine which number is larger between 9.11 and 9.9, we compare them digit by digit starting from the leftmost digit.\n",
"\n",
"1. Both numbers have 9 in the ones place, so we compare the tenths place next.\n",
"2. The tenths place of 9.11 is 1, and the tenths place of 9.9 is 9.\n",
"3. Since 9 is greater than 1, the number with 9 in the tenths place (9.9) is larger.\n",
"\n",
"To confirm, converting both numbers to fractions:\n",
"- \\(9.11 = \\frac{911}{100}\\)\n",
"- \\(9.9 = \\frac{99}{10} = \\frac{990}{100}\\)\n",
"\n",
"Comparing the fractions:\n",
"- \\(\\frac{911}{100}\\) vs. \\(\\frac{990}{100}\\)\n",
"- Since \\(990 > 911\\), \\(\\frac{990}{100}\\) is larger.\n",
"\n",
"Thus, the larger number is \\(\\boxed{9.9}\\).\n"
]
}
],
"source": [
"respond(\"Which number is larger, 9.11 or 9.9?\")"
]
},
{
"cell_type": "markdown",
"id": "73b363dc",
"metadata": {},
"source": [
"## Tools\n",
"\n",
"To use tools, we follow the OpenAI spec and populate the \"XML tools\" part of the system prompt as follows:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "5ce18731",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<|im_start|>system\n",
"## Metadata\n",
"\n",
"Knowledge Cutoff Date: June 2025\n",
"Today Date: 26 June 2025\n",
"Reasoning Mode: /think\n",
"\n",
"## Custom Instructions\n",
"\n",
"You are a helpful AI assistant named SmolLM, trained by Hugging Face. Your role as an assistant involves thoroughly exploring questions through a systematic thinking process before providing the final precise and accurate solutions. This requires engaging in a comprehensive cycle of analysis, summarizing, exploration, reassessment, reflection, backtracking, and iteration to develop well-considered thinking process. Please structure your response into two main sections: Thought and Solution using the specified format: Thought section Solution section. In the Thought section, detail your reasoning process in steps. Each step should include detailed considerations such as analysing questions, summarizing relevant findings, brainstorming new ideas, verifying the accuracy of the current steps, refining any errors, and revisiting previous steps. In the Solution section, based on various attempts, explorations, and reflections from the Thought section, systematically present the final solution that you deem correct. The Solution section should be logical, accurate, and concise and detail necessary steps needed to reach the conclusion.\n",
"\n",
"## Tools\n",
"\n",
"### XML Tools\n",
"\n",
"You may call one or more functions to assist with the user query.\n",
"\n",
"You are provided with function signatures within XML tags:\n",
"\n",
"\n",
"{\"type\": \"function\", \"function\": {\"name\": \"get_current_temperature\", \"description\": \"Gets the temperature at a given location.\", \"parameters\": {\"type\": \"object\", \"properties\": {\"location\": {\"type\": \"string\", \"description\": \"The location to get the temperature for\"}}, \"required\": [\"location\"]}}}\n",
"\n",
"\n",
"For each function call, return a json object with function name and arguments within XML tags.\n",
"\n",
"### Python Tools\n",
"\n",
"None\n",
"\n",
"<|im_end|>\n",
"<|im_start|>user\n",
"Hey, what's the weather like in Paris right now?<|im_end|>\n",
"<|im_start|>assistant\n",
"\n"
]
}
],
"source": [
"def get_current_temperature(location: str):\n",
" \"\"\"\n",
" Gets the temperature at a given location.\n",
"\n",
" Args:\n",
" location: The location to get the temperature for\n",
" \"\"\"\n",
" return 22.0 # bug: Sometimes the temperature is not 22. low priority\n",
"\n",
"tools = [get_current_temperature] \n",
"\n",
"chat = [\n",
" {\"role\": \"user\", \"content\": \"Hey, what's the weather like in Paris right now?\"}\n",
"]\n",
"\n",
"tool_prompt = llm.get_tokenizer().apply_chat_template(\n",
" chat, \n",
" tools=tools,\n",
" add_generation_prompt=True,\n",
" tokenize=False\n",
")\n",
"print(tool_prompt)"
]
},
{
"cell_type": "markdown",
"id": "da5aab4d",
"metadata": {},
"source": [
"Currently tool use is only support without extended thinking, so you must include `/no_think` in the system message:"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "d79e0d4a",
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "fe925087de054388b6a99bff92b5e526",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Processed prompts: 0%| | 0/1 [00:00, ?it/s, est. speed input: 0.00 toks/s, output: 0.00 toks/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"{\"type\": \"function\", \"function\": {\"name\": \"get_current_temperature\", \"description\": \"Gets the temperature at a given location.\", \"parameters\": {\"type\": \"object\", \"properties\": {\"location\": {\"type\": \"string\", \"description\": \"The location to get the temperature for\"}, \"units\": {\"type\": \"string\", \"description\": \"The unit of measurement for the temperature, either 'fahrenheit' or 'celsius'.\"}}, \"required\": [\"location\", \"units\"]}}}\n",
"\n"
]
}
],
"source": [
"respond(\"Hey, what's the weather like in Paris right now?\", tools=tools, system_message=\"/no_think\")"
]
},
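{
"cell_type": "markdown",
"id": "3f9c2b17",
"metadata": {},
"source": [
"The tool call above is plain JSON, so it can be routed back to the matching Python function. Here's a minimal sketch of such a dispatcher; the helper name, the regex-based extraction, and the `arguments` key are assumptions for illustration, not an official API:\n",
"\n",
"```python\n",
"import json\n",
"import re\n",
"\n",
"def get_current_temperature(location: str):\n",
"    # Stub tool mirroring the one defined earlier in the notebook\n",
"    return 22.0\n",
"\n",
"def dispatch_tool_call(text, tools):\n",
"    # Pull the first JSON object out of the model's reply and route it\n",
"    # to the matching Python function by name (hypothetical helper).\n",
"    registry = {fn.__name__: fn for fn in tools}\n",
"    match = re.search(r'{.*}', text, re.DOTALL)\n",
"    if match is None:\n",
"        return None\n",
"    call = json.loads(match.group(0)).get('function', {})\n",
"    return registry[call['name']](**call.get('arguments', {}))\n",
"```\n",
"\n",
"In a full agent loop you would append the returned value as a `tool` message and call the model again."
]
},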
{
"cell_type": "code",
"execution_count": null,
"id": "d132f70a",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "trl-internal",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}