.gitattributes CHANGED
@@ -40,9 +40,3 @@ assets/tuto_open_hands/agent_working.png filter=lfs diff=lfs merge=lfs -text
40
  assets/tuto_open_hands/build_app.png filter=lfs diff=lfs merge=lfs -text
41
  assets/tuto_open_hands/agent_prompting.png filter=lfs diff=lfs merge=lfs -text
42
  assets/tuto_open_hands/app_ui.png filter=lfs diff=lfs merge=lfs -text
43
- assets/images_example/example_mistral_common_1.png filter=lfs diff=lfs merge=lfs -text
44
- assets/images_example/example_mistral_common_2.png filter=lfs diff=lfs merge=lfs -text
45
- assets/images_example/example_mistral_common_3.png filter=lfs diff=lfs merge=lfs -text
46
- assets/images_example/example_mistral_common_res_1.png filter=lfs diff=lfs merge=lfs -text
47
- assets/images_example/example_mistral_common_res_2.png filter=lfs diff=lfs merge=lfs -text
48
- assets/images_example/example_mistral_common_res_3.png filter=lfs diff=lfs merge=lfs -text
 
40
  assets/tuto_open_hands/build_app.png filter=lfs diff=lfs merge=lfs -text
41
  assets/tuto_open_hands/agent_prompting.png filter=lfs diff=lfs merge=lfs -text
42
  assets/tuto_open_hands/app_ui.png filter=lfs diff=lfs merge=lfs -text
 
README.md CHANGED
@@ -35,7 +35,7 @@ extra_gated_description: >-
35
  pipeline_tag: text2text-generation
36
  ---
37
 
38
- # Devstral-Small-2505
39
 
40
  Devstral is an agentic LLM for software engineering tasks built through a collaboration between [Mistral AI](https://mistral.ai/) and [All Hands AI](https://www.all-hands.dev/) 🙌. Devstral excels at using tools to explore codebases, editing multiple files, and powering software engineering agents. The model achieves remarkable performance on SWE-Bench, positioning it as the #1 open source model on this [benchmark](#benchmark-results).
41
 
@@ -107,7 +107,6 @@ The model can also be deployed with the following libraries:
107
  - [`mistral-inference`](https://github.com/mistralai/mistral-inference): See [here](#mistral-inference)
108
  - [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers)
109
  - [`LMStudio`](https://lmstudio.ai/): See [here](#lmstudio)
110
- - [`llama.cpp`](https://github.com/ggml-org/llama.cpp): See [here](#llama.cpp)
111
  - [`ollama`](https://github.com/ollama/ollama): See [here](#ollama)
112
 
113
 
@@ -322,6 +321,7 @@ from mistral_common.protocol.instruct.messages import (
322
  )
323
  from mistral_common.protocol.instruct.request import ChatCompletionRequest
324
  from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
 
325
  from huggingface_hub import hf_hub_download
326
  from transformers import AutoModelForCausalLM
327
 
@@ -372,7 +372,7 @@ You can serve the model locally with [LMStudio](https://lmstudio.ai/).
372
  * Download [LM Studio](https://lmstudio.ai/) and install it
373
  * Install the `lms` CLI by running `~/.lmstudio/bin/lms bootstrap`
374
  * In a bash terminal, run `lms import devstralQ4_K_M.gguf` in the directory where you've downloaded the model checkpoint (e.g. `mistralai/Devstral-Small-2505_gguf`)
375
- * Open the LMStudio application, click the terminal icon to get into the developer tab. Click select a model to load and select Devstral Q4 K M. Toggle the status button to start the model, in setting toggle Serve on Local Network to be on.
376
  * On the right tab, you will see an API identifier, which should be `devstralq4_k_m`, and an API address under API Usage. Take note of this address; we will use it in the next step.
377
 
378
  Launch OpenHands
@@ -394,23 +394,6 @@ docker run -it --rm --pull=always \
394
  Click “see advanced settings” on the second line.
395
  In the new tab, toggle Advanced on. Set the custom model to `mistral/devstralq4_k_m` and the Base URL to the API address we got from LM Studio in the last step. Set the API Key to `dummy`. Click “Save Changes”.
396
 
397
- ### llama.cpp
398
-
399
- Download the weights from huggingface:
400
-
401
- ```
402
- pip install -U "huggingface_hub[cli]"
403
- huggingface-cli download \
404
- "mistralai/Devstral-Small-2505_gguf" \
405
- --include "devstralQ4_K_M.gguf" \
406
- --local-dir "mistralai/Devstral-Small-2505_gguf/"
407
- ```
408
-
409
- Then run Devstral using the llama.cpp CLI.
410
-
411
- ```bash
412
- ./llama-cli -m Devstral-Small-2505_gguf/devstralQ4_K_M.gguf -cnv
413
- ```
414
 
415
  ### Ollama
416
 
@@ -418,34 +401,4 @@ You can run Devstral using the [Ollama](https://ollama.ai/) CLI.
418
 
419
  ```bash
420
  ollama run devstral
421
- ```
422
-
423
- ### Example: Understanding Test Coverage of Mistral Common
424
-
425
- We can start the OpenHands scaffold and link it to a repo to analyze test coverage and identify badly covered files.
426
- Here we start with our public `mistral-common` repo.
427
-
428
-
429
- After the repo is mounted in the workspace, we give the following instruction
430
- ```
431
- Check the test coverage of the repo and then create a visualization of test coverage. Try plotting a few different types of graphs and save them to a png.
432
- ```
433
- The agent will first browse the code base to check test configuration and structure.
434
-
435
- ![Repo Exploration](assets/images_example/example_mistral_common_1.png)
436
-
437
- Then it sets up the testing dependencies and launches the coverage test:
438
-
439
- ![Repo Exploration](assets/images_example/example_mistral_common_2.png)
440
-
441
- Finally, the agent writes necessary code to visualize the coverage.
442
- ![Repo Exploration](assets/images_example/example_mistral_common_3.png)
443
-
444
- At the end of the run, the following plots are produced:
445
- ![Repo Exploration](assets/images_example/example_mistral_common_res_1.png)
446
- ![Repo Exploration](assets/images_example/example_mistral_common_res_2.png)
447
- ![Repo Exploration](assets/images_example/example_mistral_common_res_3.png)
448
-
449
-
450
-
451
-
 
35
  pipeline_tag: text2text-generation
36
  ---
37
 
38
+ # Devstral-Small-2505
39
 
40
  Devstral is an agentic LLM for software engineering tasks built through a collaboration between [Mistral AI](https://mistral.ai/) and [All Hands AI](https://www.all-hands.dev/) 🙌. Devstral excels at using tools to explore codebases, editing multiple files, and powering software engineering agents. The model achieves remarkable performance on SWE-Bench, positioning it as the #1 open source model on this [benchmark](#benchmark-results).
41
 
 
107
  - [`mistral-inference`](https://github.com/mistralai/mistral-inference): See [here](#mistral-inference)
108
  - [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers)
109
  - [`LMStudio`](https://lmstudio.ai/): See [here](#lmstudio)
 
110
  - [`ollama`](https://github.com/ollama/ollama): See [here](#ollama)
111
 
112
 
 
321
  )
322
  from mistral_common.protocol.instruct.request import ChatCompletionRequest
323
  from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
324
+ from mistral_common.tokens.tokenizers.tekken import SpecialTokenPolicy
325
  from huggingface_hub import hf_hub_download
326
  from transformers import AutoModelForCausalLM
327
 
 
372
  * Download [LM Studio](https://lmstudio.ai/) and install it
373
  * Install the `lms` CLI by running `~/.lmstudio/bin/lms bootstrap`
374
  * In a bash terminal, run `lms import devstralQ4_K_M.gguf` in the directory where you've downloaded the model checkpoint (e.g. `mistralai/Devstral-Small-2505_gguf`)
375
+ * Open the LMStudio application, click the terminal icon to get into the developer tab. Click “select a model to load” and select Devstral Q4 K M. Toggle the status button to start the model. In Settings, toggle Serve on Local Network on.
376
  * On the right tab, you will see an API identifier, which should be `devstralq4_k_m`, and an API address under API Usage. Take note of this address; we will use it in the next step.
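
  Once the server is running, the address under API Usage exposes LM Studio's OpenAI-compatible endpoints. A minimal sketch of the request shape (assumptions: the common default base URL `http://localhost:1234/v1`, the `devstralq4_k_m` identifier from the step above, and a hypothetical `build_chat_request` helper shown only for illustration — substitute the address your LM Studio window actually shows):

  ```python
  import json

  def build_chat_request(base_url: str, model: str, prompt: str):
      # Construct an OpenAI-compatible chat completion request for the
      # local LM Studio server; base_url is the address under "API Usage".
      url = f"{base_url}/chat/completions"
      body = json.dumps({
          "model": model,
          "messages": [{"role": "user", "content": prompt}],
      })
      return url, body

  url, body = build_chat_request(
      "http://localhost:1234/v1", "devstralq4_k_m",
      "Write a unit test for a FizzBuzz function.",
  )
  print(url)
  # To actually send it (server must be running):
  # import urllib.request
  # req = urllib.request.Request(url, data=body.encode(),
  #                              headers={"Content-Type": "application/json"})
  # print(urllib.request.urlopen(req).read().decode())
  ```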
377
 
378
  Launch OpenHands
 
394
  Click “see advanced settings” on the second line.
395
  In the new tab, toggle Advanced on. Set the custom model to `mistral/devstralq4_k_m` and the Base URL to the API address we got from LM Studio in the last step. Set the API Key to `dummy`. Click “Save Changes”.
396
 
 
397
 
398
  ### Ollama
399
 
 
401
 
402
  ```bash
403
  ollama run devstral
404
+ ```
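
  Beyond the interactive CLI, Ollama also serves an HTTP API, by default on `localhost:11434`; `/api/generate` is its one-shot completion endpoint. A sketch that builds such a request without sending it (the port and payload fields follow Ollama's documented defaults; verify them against your installation):

  ```python
  import json
  import urllib.request

  # Build (but do not send) a one-shot completion request for the local
  # Ollama server. "stream": False asks for a single JSON response
  # instead of a stream of partial results.
  payload = {
      "model": "devstral",
      "prompt": "Explain what a git rebase does.",
      "stream": False,
  }
  req = urllib.request.Request(
      "http://localhost:11434/api/generate",
      data=json.dumps(payload).encode(),
      headers={"Content-Type": "application/json"},
  )
  print(req.full_url)
  # With `ollama run devstral` already done and the server running:
  # print(json.loads(urllib.request.urlopen(req).read())["response"])
  ```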
 
assets/images_example/example_mistral_common_1.png DELETED

Git LFS Details

  • SHA256: ae2a75203fe94099fd2705196320407c15b5054364d2cede377810eec7080890
  • Pointer size: 131 Bytes
  • Size of remote file: 148 kB
assets/images_example/example_mistral_common_2.png DELETED

Git LFS Details

  • SHA256: 36b9c05118f205d25099582b52ec654fb76ecb13b62ebc4bcc5437167f38f2f6
  • Pointer size: 131 Bytes
  • Size of remote file: 151 kB
assets/images_example/example_mistral_common_3.png DELETED

Git LFS Details

  • SHA256: 648b3927fbbe330ecf88151c62c18142c07281891f7cf1fc75c3d9ad35862b92
  • Pointer size: 131 Bytes
  • Size of remote file: 147 kB
assets/images_example/example_mistral_common_res_1.png DELETED

Git LFS Details

  • SHA256: 4f73c29c89582f856d07920534fb9fb4916fe42e8a4dfe8f6cb5ae05870f6f8f
  • Pointer size: 131 Bytes
  • Size of remote file: 356 kB
assets/images_example/example_mistral_common_res_2.png DELETED

Git LFS Details

  • SHA256: ee919cb5db996ee42202d7f22b15f92add75267fbd88f4ad618328a0ae064a95
  • Pointer size: 131 Bytes
  • Size of remote file: 213 kB
assets/images_example/example_mistral_common_res_3.png DELETED

Git LFS Details

  • SHA256: bf3b2513d1d28374986701f7ff031757b30f965f2419e0a88f188964830c371f
  • Pointer size: 131 Bytes
  • Size of remote file: 321 kB