FroggyQc committed on
Commit e2126fa
1 Parent(s): 6e909c2

Upload folder using huggingface_hub

Files changed (3)
  1. README.md +24 -7
  2. app.py +62 -0
  3. requirements.txt +3 -0
README.md CHANGED
@@ -1,12 +1,29 @@
  ---
- title: Tinyllama Chat Gradio
- emoji: 👀
- colorFrom: red
- colorTo: pink
+ title: tinyllama_chat_gradio
+ app_file: app.py
  sdk: gradio
  sdk_version: 4.19.2
- app_file: app.py
- pinned: false
  ---
- 
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ ## Tinyllama Chatbot Implementation with Gradio
+ 
+ We offer an easy way to interact with TinyLlama. This guide explains how to set up a local Gradio demo for a chatbot using TinyLlama.
+ (A demo is also available on the Hugging Face Space [TinyLlama/tinyllama_chatbot](https://huggingface.co/spaces/TinyLlama/tinyllama-chat) or on [Colab](https://colab.research.google.com/drive/1qAuL5wTIa-USaNBu8DH35KQtICTnuLsy?usp=sharing).)
+ 
+ ### Requirements
+ * Python>=3.8
+ * PyTorch>=2.0
+ * Transformers>=4.35.0
+ * Gradio>=4.13.0
+ 
+ ### Installation
+ `pip install -r requirements.txt`
+ 
+ ### Usage
+ 
+ `python TinyLlama/chat_gradio/app.py`
+ 
+ * After running it, open the local URL displayed in your terminal in your web browser. (If the app runs on a remote server, use SSH local port forwarding: `ssh -L [local port]:localhost:[remote port] [username]@[server address]`.)
+ * Interact with the chatbot by typing questions or commands.
+ 
+ **Note:** The chatbot's performance may vary with your system's hardware. Ensure your system meets the requirements above for an optimal experience.
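Before launching the full Gradio app, it can be worth a quick smoke test that the checkpoint loads and generates on your hardware. A minimal sketch, not part of this commit, reusing the model ID and the hand-rolled prompt format from app.py below:

```python
# Smoke test (a sketch, not part of this commit): load the same checkpoint
# app.py uses and generate a short greedy reply.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
model = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

# Same prompt format that app.py's predict() builds by hand.
prompt = "\n<|user|>:Hello, who are you?\n<|assistant|>:"
inputs = tokenizer(prompt, return_tensors="pt").to(device)
output = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

If this prints a coherent reply, the Gradio app should run on the same machine.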
app.py ADDED
@@ -0,0 +1,62 @@
+ import gradio as gr
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from transformers import StoppingCriteria, StoppingCriteriaList, TextIteratorStreamer
+ from threading import Thread
+ 
+ # Load the tokenizer and model from the Hugging Face model hub.
+ tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
+ model = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
+ 
+ # Use CUDA if available for an optimal experience.
+ device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
+ model = model.to(device)
+ 
+ 
+ # Custom stopping criterion for the model's text generation.
+ class StopOnTokens(StoppingCriteria):
+     def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
+         stop_ids = [2]  # Token IDs at which generation should stop (2 is </s> for this tokenizer).
+         for stop_id in stop_ids:
+             if input_ids[0][-1] == stop_id:  # Check whether the last generated token is a stop token.
+                 return True
+         return False
+ 
+ 
+ # Generate model predictions, streaming tokens as they are produced.
+ def predict(message, history):
+     history_transformer_format = history + [[message, ""]]
+     stop = StopOnTokens()
+ 
+     # Format the conversation history into the model's prompt format.
+     messages = "</s>".join(["</s>".join(["\n<|user|>:" + item[0], "\n<|assistant|>:" + item[1]])
+                             for item in history_transformer_format])
+     model_inputs = tokenizer([messages], return_tensors="pt").to(device)
+     streamer = TextIteratorStreamer(tokenizer, timeout=10., skip_prompt=True, skip_special_tokens=True)
+     generate_kwargs = dict(
+         model_inputs,
+         streamer=streamer,
+         max_new_tokens=1024,
+         do_sample=True,
+         top_p=0.95,
+         top_k=50,
+         temperature=0.7,
+         num_beams=1,
+         stopping_criteria=StoppingCriteriaList([stop])
+     )
+     t = Thread(target=model.generate, kwargs=generate_kwargs)
+     t.start()  # Run generation in a separate thread so tokens can be streamed.
+     partial_message = ""
+     for new_token in streamer:
+         partial_message += new_token
+         if '</s>' in partial_message:  # Stop streaming once the stop token appears.
+             break
+         yield partial_message
+ 
+ 
+ # Set up the Gradio chat interface.
+ gr.ChatInterface(predict,
+                  title="Tinyllama_chatBot",
+                  description="Ask Tiny llama any questions",
+                  examples=['How to cook a fish?', 'Who is the president of US now?']
+                  ).launch(share=True)  # Launch the web interface with a public share link.
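A note on the prompt construction in `predict`: the history is joined by hand with `</s>` separators and `<|user|>`/`<|assistant|>` tags. As an alternative sketch, not what this commit does, tokenizers in transformers >= 4.34 expose `apply_chat_template`, which builds the prompt from the chat template bundled with the checkpoint; the resulting string may differ slightly from the hand-rolled format above, so treat it as an alternative rather than a drop-in equivalent:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

def build_prompt(message, history):
    # Convert Gradio's [[user, assistant], ...] pairs into chat-template messages.
    chat = []
    for user_msg, assistant_msg in history:
        chat.append({"role": "user", "content": user_msg})
        chat.append({"role": "assistant", "content": assistant_msg})
    chat.append({"role": "user", "content": message})
    # add_generation_prompt=True appends the assistant header so the model answers next.
    return tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```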
requirements.txt ADDED
@@ -0,0 +1,3 @@
+ torch>=2.0
+ transformers>=4.35.0
+ gradio>=4.13.0