Update README.md
Cephalo provides a robust framework for multimodal interaction and understanding, including the development of complex generative pipelines to create 2D and 3D renderings of material microstructures as input for additive manufacturing methods.

This version of Cephalo, lamm-mit/Cephalo-Phi-3-vision-128k-4b-beta, is based on the Phi-3-Vision-128K-Instruct model. The model was trained on a combination of scientific text-image and text-only data. The model has a context length of 128,000 tokens. For further details, see: https://huggingface.co/microsoft/Phi-3-vision-128k-instruct.

### Chat Format

Given the nature of the training data, the Cephalo-Phi-3-vision-128k-4b-beta model is best suited for a single image input with prompts using the chat format. You can provide the prompt with a single image using the following generic template:

```markdown
<|user|>\n<|image_1|>\n{prompt}<|end|>\n<|assistant|>\n
```
```python
import requests

from transformers import AutoModelForCausalLM
from transformers import AutoProcessor

model_id = "lamm-mit/Cephalo-Phi-3-vision-128k-4b-beta"

model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cuda", trust_remote_code=True, torch_dtype="auto")
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

question = "What is shown in this image, and what is the relevance for materials design? Include a discussion of multi-agent AI."

messages = [
    {"role": "user", "content": f"<|image_1|>\n{question}"},
]

url = "https://d2r55xnwy6nx47.cloudfront.net/uploads/2018/02/Ants_Lede1300.jpg"
```
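The `messages` list is serialized by the processor's chat template into the generic prompt format shown above. As a rough sketch of that mapping (the helper below is hypothetical and only illustrates the string layout; in the actual pipeline the conversion is handled by the processor's chat template, e.g. `processor.tokenizer.apply_chat_template`):

```python
# Hypothetical illustration of how a messages list maps onto the Phi-3
# chat format; the real conversion is done by the processor's chat template.
def to_phi3_prompt(messages):
    parts = []
    for m in messages:
        # Each turn is wrapped as <|role|>\n{content}<|end|>\n
        parts.append(f"<|{m['role']}|>\n{m['content']}<|end|>\n")
    parts.append("<|assistant|>\n")  # generation prompt for the model's reply
    return "".join(parts)

question = "What is shown in this image?"
messages = [{"role": "user", "content": f"<|image_1|>\n{question}"}]
prompt = to_phi3_prompt(messages)
# → "<|user|>\n<|image_1|>\nWhat is shown in this image?<|end|>\n<|assistant|>\n"
```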
Sample output:

<small>Image by [Vaishakh Manohar](https://www.quantamagazine.org/the-simple-algorithm-that-ants-use-to-build-bridges-20180226/)</small>

<pre style="white-space: pre-wrap;">
The image shows a group of red ants (Solenopsis invicta) climbing over a vertical wooden post. The ants are using their long legs and antennae to navigate the rough surface of the post, which is covered in small hairs. The relevance for materials design is that the ants' ability to climb over rough surfaces can inspire the development of new materials with improved adhesion and grip properties. The ants' hairs are made of a protein called cuticular, which is known for its strong adhesive properties. By studying the structure and properties of these hairs, researchers can gain insights into how to design materials with similar properties.

Multi-agent AI refers to the use of multiple agents, such as ants, to perform tasks or solve problems. In this case, the ants are working together to climb over the post, which requires coordination and communication between the individual ants. This can inspire the development of multi-agent AI systems that can work together to solve complex problems.

Overall, the image of red ants climbing over a wooden post can provide valuable insights into materials design and multi-agent AI. By studying the ants' behavior and the properties of their hairs, researchers can develop new materials with improved adhesion and grip properties, and design multi-agent AI systems that can work together to solve complex problems.
</pre>