czczup committed
Commit 4dcf984
1 Parent(s): 584ab2d

Update README.md

Files changed (2)
  1. README.md +5 -7
  2. configuration.json +1 -0
README.md CHANGED
@@ -81,7 +81,7 @@ The training pipeline for a single model in InternVL 2.5 is structured across th

  We introduce a progressive scaling strategy to align the vision encoder with LLMs efficiently. This approach trains with smaller LLMs first (e.g., 20B) to optimize foundational visual capabilities and cross-modal alignment before transferring the vision encoder to larger LLMs (e.g., 72B) without retraining. This reuse skips intermediate stages for larger models.

- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/AVb_PSxhJq1z2eUFNYoqQ.png)
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64006c09330a45b03605bba3/UoNUyS7ctN5pBxNv9KnzH.png)

  Compared to Qwen2-VL's 1.4 trillion tokens, InternVL2.5-78B uses only 120 billion tokens—less than one-tenth. This strategy minimizes redundancy, maximizes pre-trained component reuse, and enables efficient training for complex vision-language tasks.
@@ -164,7 +164,7 @@ As shown in the following figure, from InternVL 1.5 to 2.0 and then to 2.5, the

  ### Video Understanding

- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/uD5aYt2wNYL94Xn8MOVih.png)
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64006c09330a45b03605bba3/tcwH-i1qc8H16En-7AZ5M.png)

  ## Evaluation on Language Capability
@@ -510,10 +510,10 @@ Many repositories now support fine-tuning of the InternVL series models, includi

  ### LMDeploy

- LMDeploy is a toolkit for compressing, deploying, and serving LLM, developed by the MMRazor and MMDeploy teams.
+ LMDeploy is a toolkit for compressing, deploying, and serving LLMs & VLMs.

  ```sh
- pip install lmdeploy>=0.5.3
+ pip install lmdeploy>=0.6.4
  ```

  LMDeploy abstracts the complex inference process of multi-modal Vision-Language Models (VLM) into an easy-to-use pipeline, similar to the Large Language Model (LLM) inference pipeline.
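For reference, a minimal single-image use of that pipeline might look like the sketch below. It assumes `lmdeploy>=0.6.4` as installed above (note that the version specifier usually needs quoting in a shell, e.g. `pip install "lmdeploy>=0.6.4"`); the model ID and image URL are illustrative.

```python
# Minimal single-image sketch of the LMDeploy VLM pipeline described above.
# Assumes lmdeploy>=0.6.4 and network access; model ID and image URL are illustrative.
from lmdeploy import pipeline, TurbomindEngineConfig
from lmdeploy.vl import load_image

model = 'OpenGVLab/InternVL2_5-1B'
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=8192))
response = pipe(('describe this image', image))
print(response.text)
```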
@@ -537,8 +537,6 @@ If `ImportError` occurs while executing this case, please install the required d

  When dealing with multiple images, you can put them all in one list. Keep in mind that multiple images will lead to a higher number of input tokens, and as a result, the size of the context window typically needs to be increased.

- question = 'Describe this video in detail.'
-
  ```python
  from lmdeploy import pipeline, TurbomindEngineConfig
  from lmdeploy.vl import load_image
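For reference, a multi-image call in that style might look like the following sketch. The image URLs and prompt are placeholders, and `session_len` is raised because several images consume more input tokens.

```python
# Multi-image sketch: pass the images as one list and enlarge the context window.
# Image URLs are placeholders; IMAGE_TOKEN marks where each image is inserted.
from lmdeploy import pipeline, TurbomindEngineConfig
from lmdeploy.vl import load_image
from lmdeploy.vl.constants import IMAGE_TOKEN

model = 'OpenGVLab/InternVL2_5-1B'
pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384))

image_urls = [
    'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg',
    'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/det.jpg',
]
images = [load_image(url) for url in image_urls]
# Numbering the images in the prompt lets the model refer to them separately.
prompt = f'Image-1: {IMAGE_TOKEN}\nImage-2: {IMAGE_TOKEN}\ndescribe these two images'
response = pipe((prompt, images))
print(response.text)
```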
@@ -602,7 +600,7 @@ print(sess.response.text)
  LMDeploy's `api_server` enables models to be easily packed into services with a single command. The provided RESTful APIs are compatible with OpenAI's interfaces. Below are an example of service startup:

  ```shell
- lmdeploy serve api_server OpenGVLab/InternVL2_5-1B --backend turbomind --server-port 23333
+ lmdeploy serve api_server OpenGVLab/InternVL2_5-1B --server-port 23333
  ```

  To use the OpenAI-style interface, you need to install OpenAI:
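For reference, once that `api_server` is running, the OpenAI-compatible endpoint can be queried with the official `openai` client. A minimal sketch, assuming the server listens on port 23333 and using a placeholder image URL:

```python
# Sketch of querying the OpenAI-compatible endpoint exposed by `lmdeploy serve api_server`.
# Assumes `pip install openai`; api_key can be any non-empty string unless the server enforces one.
from openai import OpenAI

client = OpenAI(api_key='YOUR_API_KEY', base_url='http://0.0.0.0:23333/v1')
model_name = client.models.list().data[0].id
response = client.chat.completions.create(
    model=model_name,
    messages=[{
        'role': 'user',
        'content': [
            {'type': 'text', 'text': 'describe this image'},
            {'type': 'image_url', 'image_url': {'url': 'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg'}},
        ],
    }],
    temperature=0.8,
)
print(response.choices[0].message.content)
```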
 
configuration.json ADDED
@@ -0,0 +1 @@
+ {"framework": "pytorch", "task": "other"}