saucam committed
Commit c90d9dc · verified · 1 Parent(s): d2eabdf

Update README.md

Files changed (1)
  1. README.md +53 -0
README.md CHANGED
@@ -72,4 +72,57 @@ Setting `pad_token_id` to `eos_token_id`:128001 for open-end generation.
 What is a large language model?<|im_end|>
 <|im_start|>assistant
 A large language model (LLM) is a deep neural network that is trained to predict the next word in a sequence of text. LLMs are typically trained on large amounts of text data and can be used for a variety of tasks such as language translation, text completion, and question answering. They are often used to generate human-like text and are becoming increasingly popular in natural language processing applications. The LLM uses a transformer architecture, which consists of multiple layers of neural networks that are trained to process and understand the relationships between words in a sentence. The transformer architecture is designed to handle long sequences of text and is capable of capturing the context of a word within a sentence. This allows the LLM to generate coherent and grammatically correct text that is similar to human writing. LLMs are typically trained on a large corpus of text data and can be fine-tuned for specific tasks by retraining on smaller datasets that are relevant to the task at hand. This allows the LLM to adapt to the specific requirements of a particular application and improve its performance. The LLM can be used to generate text in a variety of formats, including natural language, code, and even mathematical expressions. It can also be used to translate text from one language to another, generate summaries of
+ ```
+
+ Preserves the capability to use a whole document as context:
+
+ ```
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ import torch
+
+ model_id = "saucam/PowerBot-8B"
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
+
+ messages = [
+     {"role": "user", "content": "what is the percentage change of the net income from Q4 FY23 to Q4 FY24?"}
+ ]
+
+ # The document the model should use as context when answering
+ document = """NVIDIA (NASDAQ: NVDA) today reported revenue for the fourth quarter ended January 28, 2024, of $22.1 billion, up 22% from the previous quarter and up 265% from a year ago.\nFor the quarter, GAAP earnings per diluted share was $4.93, up 33% from the previous quarter and up 765% from a year ago. Non-GAAP earnings per diluted share was $5.16, up 28% from the previous quarter and up 486% from a year ago.\nQ4 Fiscal 2024 Summary\nGAAP\n| $ in millions, except earnings per share | Q4 FY24 | Q3 FY24 | Q4 FY23 | Q/Q | Y/Y |\n| Revenue | $22,103 | $18,120 | $6,051 | Up 22% | Up 265% |\n| Gross margin | 76.0% | 74.0% | 63.3% | Up 2.0 pts | Up 12.7 pts |\n| Operating expenses | $3,176 | $2,983 | $2,576 | Up 6% | Up 23% |\n| Operating income | $13,615 | $10,417 | $1,257 | Up 31% | Up 983% |\n| Net income | $12,285 | $9,243 | $1,414 | Up 33% | Up 769% |\n| Diluted earnings per share | $4.93 | $3.71 | $0.57 | Up 33% | Up 765% |"""
+
+ def get_formatted_input(messages, context):
+     """Assemble the prompt: system message, context document, then the conversation."""
+     system = "System: This is a chat between a user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions based on the context. The assistant should also indicate when the answer cannot be found in the context."
+     instruction = "Please give a full and complete answer for the question."
+
+     # Only apply this instruction to the first user turn
+     for item in messages:
+         if item['role'] == "user":
+             item['content'] = instruction + " " + item['content']
+             break
+
+     conversation = '\n\n'.join(["User: " + item["content"] if item["role"] == "user" else "Assistant: " + item["content"] for item in messages]) + "\n\nAssistant:"
+     formatted_input = system + "\n\n" + context + "\n\n" + conversation
+
+     return formatted_input
+
+ formatted_input = get_formatted_input(messages, document)
+ tokenized_prompt = tokenizer(tokenizer.bos_token + formatted_input, return_tensors="pt").to(model.device)
+
+ # Stop generation at either the EOS token or the Llama-3 end-of-turn token
+ terminators = [
+     tokenizer.eos_token_id,
+     tokenizer.convert_tokens_to_ids("<|eot_id|>")
+ ]
+
+ outputs = model.generate(input_ids=tokenized_prompt.input_ids, attention_mask=tokenized_prompt.attention_mask, max_new_tokens=128, eos_token_id=terminators)
+
+ # Decode only the newly generated tokens, skipping the prompt
+ response = outputs[0][tokenized_prompt.input_ids.shape[-1]:]
+ print(tokenizer.decode(response, skip_special_tokens=True))
+ ```
+
+ ```
+ Downloading shards: 100%|████████████████████████████████| 2/2 [00:00<00:00, 12.71it/s]
+ Loading checkpoint shards: 100%|████████████████████████████████| 2/2 [00:08<00:00,  4.05s/it]
+ Setting `pad_token_id` to `eos_token_id`:128001 for open-end generation.
+ The percentage change of the net income from Q4 FY23 to Q4 FY24 is 769%. This is calculated by taking the difference between the two net incomes ($12,285 million and $1,414 million) and dividing it by the net income from Q4 FY23 ($1,414 million), then multiplying by 100 to get the percentage change. So, the formula is ((12,285 - 1,414) / 1,414) * 100 = 769%.
 ```
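
The 769% figure in the generated answer is consistent with the quoted press-release numbers. As a quick sanity check (an illustrative snippet, not part of the committed README):

```
# Net income, in $ millions, from the NVIDIA Q4 FY24 press release quoted above
q4_fy23 = 1414
q4_fy24 = 12285

pct_change = (q4_fy24 - q4_fy23) / q4_fy23 * 100
print(f"{pct_change:.1f}%")  # 768.7%, i.e. "up 769%" after rounding
```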