Update app.py
app.py CHANGED
@@ -9,26 +9,9 @@ from threading import Thread
 from typing import Generator


-MODEL_PATH = "fancyfeast/
-TITLE = "<h1><center>
+MODEL_PATH = "fancyfeast/260kxqt2-1199872-llava"
+TITLE = "<h1><center>EXPERIMENTAL MODEL 260kxqt2-1199872</center></h1>"
 DESCRIPTION = """
-<div>
-<p>🧪🧪🧪 This is an experiment to see how well JoyCaption Alpha Two can learn to answer questions about images and follow instructions.
-I've only finetuned it on 600 examples, so it is **highly experimental, very weak, broken, and volatile**. But for training on only 600 examples,
-I thought it was performing surprisingly well and wanted to share.</p>
-<p>**This model cannot see any chat history.**</p>
-<p>🧐💬📸 Unlike JoyCaption Alpha Two, you can ask this finetune questions about the image, like "What is he holding in his hand?", "Where might this be?",
-and "What are they wearing?". It can also follow instructions, like "Write me a poem about this image",
-"Write a caption but don't use any ambiguous language, and make sure you mention that the image is from Instagram.", and
-"Output JSON with the following properties: 'skin_tone', 'hair_style', 'hair_length', 'clothing', 'background'." Remember that this was only finetuned on
-600 VQA/instruction examples, so it is _very_ limited right now. Expect it to frequently fall back to its base behavior of just writing image descriptions.
-Expect accuracy to be lower. Expect glitches. Despite that, I've found that it will follow most queries I've tested it with, even outside its training,
-with enough coaxing and re-rolling.</p>
-<p>🚨🚨🚨 If the "Help improve JoyCaption" box is checked, the _text_ query you write will be logged and I _might_ use it to help improve JoyCaption.
-It does not log images, user data, etc.; only the text query. I cannot see what images you send, and frankly, I don't want to. But knowing what kinds of instructions
-and queries users want JoyCaption to handle will help guide me in building JoyCaption's VQA dataset. This dataset will be made public. As always, the model itself is completely
-public and free to use outside of this space. And, of course, I have neither control over nor access to what HuggingFace, which is graciously hosting this space, collects.</p>
-</div>
 """

 PLACEHOLDER = """
@@ -98,7 +81,7 @@ def chat_joycaption(message: dict, history, temperature: float, top_p: float, ma
     convo = [
         {
             "role": "system",
-            "content": "You are a helpful
+            "content": "You are JoyCaption, a helpful AI assistant with vision capabilities.",
         },
         {
             "role": "user",