shimmyshimmer committed · Commit e0f6e3f (verified) · 1 Parent(s): 775cc7b

Update README.md

Files changed (1): README.md (+8 -129)
README.md CHANGED
@@ -125,11 +125,12 @@ Now you can start a new conversation with the agent by clicking on the plus sign
 
 
 The model can also be deployed with the following libraries:
-- [`LMStudio (recommended for quantized model)`](https://lmstudio.ai/): See [here](#lmstudio)
-- [`vllm (recommended)`](https://github.com/vllm-project/vllm): See [here](#vllm)
-- [`ollama`](https://github.com/ollama/ollama): See [here](#ollama)
+- [`LMStudio (recommended for quantized model)`](https://lmstudio.ai/): See [here](#lmstudio-recommended-for-quantized-model)
+- [`vllm (recommended)`](https://github.com/vllm-project/vllm): See [here](#vllm-recommended)
 - [`mistral-inference`](https://github.com/mistralai/mistral-inference): See [here](#mistral-inference)
 - [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers)
+- [`ollama`](https://github.com/ollama/ollama): See [here](#ollama)
+
 
 
 ### OpenHands (recommended)
@@ -267,7 +268,7 @@ Make sure you install [`vLLM >= 0.8.5`](https://github.com/vllm-project/vllm/rel
 pip install vllm --upgrade
 ```
 
-Doing so should automatically install [`mistral_common >= 1.5.4`](https://github.com/mistralai/mistral-common/releases/tag/v1.5.4).
+Doing so should automatically install [`mistral_common >= 1.5.5`](https://github.com/mistralai/mistral-common/releases/tag/v1.5.5).
 
 To check:
 ```
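
The snippet the README runs after "To check:" is truncated in this hunk. A minimal sketch of one way to verify the installed `mistral_common` version (an illustrative check, not necessarily the exact command from the README) is:

```python
# Sketch: print the installed mistral_common version; expect 1.5.5 or newer after the upgrade.
from importlib.metadata import version

print(version("mistral_common"))
```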
@@ -315,7 +316,7 @@ messages = [
         "content": [
             {
                 "type": "text",
-                "text": "Write a function that computes fibonacci in Python.",
+                "text": "<your-command>",
             },
         ],
     },
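
For context, this hunk only changes the `"text"` field of the example payload. A minimal sketch of the surrounding request to an OpenAI-compatible vLLM server could look like the following; the URL, token, and model id are placeholders and not taken from this diff:

```python
# Hypothetical sketch of the full request; URL, token, and model id are assumptions.
import json

import requests

url = "http://localhost:8000/v1/chat/completions"  # assumed local vLLM server
headers = {"Content-Type": "application/json", "Authorization": "Bearer <token>"}

data = {
    "model": "<model-id>",  # placeholder for the served Devstral checkpoint
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "<your-command>"},
            ],
        },
    ],
}

response = requests.post(url, headers=headers, data=json.dumps(data))
print(response.json()["choices"][0]["message"]["content"])
```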
@@ -327,96 +328,6 @@ response = requests.post(url, headers=headers, data=json.dumps(data))
 print(response.json()["choices"][0]["message"]["content"])
 ```
 
-<details>
-<summary>Output</summary>
-
-Certainly! The Fibonacci sequence is a series of numbers where each number is the sum of the two preceding ones, usually starting with 0 and 1. Here's a simple Python function to compute the Fibonacci sequence:
-
-### Iterative Approach
-This approach uses a loop to compute the Fibonacci number iteratively.
-
-```python
-def fibonacci(n):
-    if n <= 0:
-        return "Input should be a positive integer."
-    elif n == 1:
-        return 0
-    elif n == 2:
-        return 1
-
-    a, b = 0, 1
-    for _ in range(2, n):
-        a, b = b, a + b
-    return b
-
-# Example usage:
-print(fibonacci(10))  # Output: 34
-```
-
-### Recursive Approach
-This approach uses recursion to compute the Fibonacci number. Note that this is less efficient for large `n` due to repeated calculations.
-
-```python
-def fibonacci_recursive(n):
-    if n <= 0:
-        return "Input should be a positive integer."
-    elif n == 1:
-        return 0
-    elif n == 2:
-        return 1
-    else:
-        return fibonacci_recursive(n - 1) + fibonacci_recursive(n - 2)
-
-# Example usage:
-print(fibonacci_recursive(10))  # Output: 34
-```
-
-### Memoization Approach
-This approach uses memoization to store previously computed Fibonacci numbers, making it more efficient than the simple recursive approach.
-
-```python
-def fibonacci_memo(n, memo={}):
-    if n <= 0:
-        return "Input should be a positive integer."
-    elif n == 1:
-        return 0
-    elif n == 2:
-        return 1
-    elif n in memo:
-        return memo[n]
-
-    memo[n] = fibonacci_memo(n - 1, memo) + fibonacci_memo(n - 2, memo)
-    return memo[n]
-
-# Example usage:
-print(fibonacci_memo(10))  # Output: 34
-```
-
-### Dynamic Programming Approach
-This approach uses an array to store the Fibonacci numbers up to `n`.
-
-```python
-def fibonacci_dp(n):
-    if n <= 0:
-        return "Input should be a positive integer."
-    elif n == 1:
-        return 0
-    elif n == 2:
-        return 1
-
-    fib = [0, 1] + [0] * (n - 2)
-    for i in range(2, n):
-        fib[i] = fib[i - 1] + fib[i - 2]
-    return fib[n - 1]
-
-# Example usage:
-print(fibonacci_dp(10))  # Output: 34
-```
-
-You can choose any of these approaches based on your needs. The iterative and dynamic programming approaches are generally more efficient for larger values of `n`.
-
-</details>
-
 
 ### Mistral-inference
 
@@ -450,39 +361,7 @@ You can run the model using the following command:
 mistral-chat $HOME/mistral_models/Devstral --instruct --max_tokens 300
 ```
 
-If you prompt it with "Write me a unique and efficient function that computes fibonacci in Python", the model should generate something along the following lines:
-
-<details>
-<summary>Output</summary>
-
-Certainly! A common and efficient way to compute Fibonacci numbers is by using memoization to store previously computed values. This avoids redundant calculations and significantly improves performance. Below is a Python function that uses memoization to compute Fibonacci numbers efficiently:
-
-```python
-def fibonacci(n, memo=None):
-    if memo is None:
-        memo = {}
-
-    if n in memo:
-        return memo[n]
-
-    if n <= 1:
-        return n
-
-    memo[n] = fibonacci(n - 1, memo) + fibonacci(n - 2, memo)
-    return memo[n]
-
-# Example usage:
-n = 10
-print(f"Fibonacci number at position {n} is {fibonacci(n)}")
-```
-
-### Explanation:
-
-1. **Base Case**: If `n` is 0 or 1, the function returns `n` because the Fibonacci sequence starts with 0 and 1.
-2. **Memoization**: The function uses a dictionary `memo` to store the results of previously computed Fibonacci numbers.
-3. **Recursive Case**: For other values of `n`, the function recursively computes the Fibonacci number by summing the results of `fibonacci(n - 1)` and `fibonacci(n)`
-
-</details>
+You can then prompt it with anything you'd like.
 
 ### Ollama
 
@@ -532,7 +411,7 @@ tokenized = tokenizer.encode_chat_completion(
     ChatCompletionRequest(
         messages=[
            SystemMessage(content=SYSTEM_PROMPT),
-           UserMessage(content="Write me a function that computes fibonacci in Python."),
+           UserMessage(content="<your-command>"),
        ],
    )
 )
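
Since this hunk shows only the `messages` list inside `encode_chat_completion`, a minimal self-contained sketch of the surrounding `mistral-common` usage is given below; the tokenizer file path and system prompt are assumptions for illustration, not values from this diff:

```python
# Hypothetical sketch around the hunk above; the tokenizer path and SYSTEM_PROMPT are assumptions.
from mistral_common.protocol.instruct.messages import SystemMessage, UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer

SYSTEM_PROMPT = "You are a helpful software engineering assistant."  # placeholder

# Load the tokenizer file downloaded alongside the model weights (path is an assumption).
tokenizer = MistralTokenizer.from_file("tekken.json")

tokenized = tokenizer.encode_chat_completion(
    ChatCompletionRequest(
        messages=[
            SystemMessage(content=SYSTEM_PROMPT),
            UserMessage(content="<your-command>"),
        ],
    )
)

print(len(tokenized.tokens))  # number of prompt tokens ready for generation
```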
 
125
 
126
 
127
  The model can also be deployed with the following libraries:
128
+ - [`LMStudio (recommended for quantized model)`](https://lmstudio.ai/): See [here](#lmstudio-recommended-for-quantized-model)
129
+ - [`vllm (recommended)`](https://github.com/vllm-project/vllm): See [here](#vllm-recommended)
 
130
  - [`mistral-inference`](https://github.com/mistralai/mistral-inference): See [here](#mistral-inference)
131
  - [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers)
132
+ - [`ollama`](https://github.com/ollama/ollama): See [here](#ollama)
133
+
134
 
135
  ### OpenHands (recommended)
136
 
 
268
  pip install vllm --upgrade
269
  ```
270
 
271
+ Doing so should automatically install [`mistral_common >= 1.5.5`](https://github.com/mistralai/mistral-common/releases/tag/v1.5.5).
272
 
273
  To check:
274
  ```
 
316
  "content": [
317
  {
318
  "type": "text",
319
+ "text": "<your-command>",
320
  },
321
  ],
322
  },
 
328
  print(response.json()["choices"][0]["message"]["content"])
329
  ```
330
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
331
 
332
  ### Mistral-inference
333
 
 
361
  mistral-chat $HOME/mistral_models/Devstral --instruct --max_tokens 300
362
  ```
363
 
364
+ You can then prompt it with anything you'd like.
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
365
 
366
  ### Ollama
367
 
 
411
  ChatCompletionRequest(
412
  messages=[
413
  SystemMessage(content=SYSTEM_PROMPT),
414
+ UserMessage(content="<your-command>"),
415
  ],
416
  )
417
  )