lambda-technologies-limited committed on
Commit caf4f0d · verified · 1 Parent(s): 5aaafe3

Update README.md

Files changed (1)
  1. README.md +60 -76
README.md CHANGED
@@ -140,94 +140,78 @@ Created by: Future Technologies Limited
  - **Ethical and Responsible Use:** Users must ensure that the model is not exploited for harmful purposes such as generating misleading content, deepfakes, or offensive imagery. Ethical guidelines and responsible practices should be followed in all use cases.

  ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]
 
  ## Training Details

-
-
- #### Preprocessing [optional]
-
- [More Information Needed]


  #### Training Hyperparameters

  - **Training regime:** bf16 mixed precision <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
- -
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
 
  ## Environmental Impact

- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
  - **Hardware Type:** Nvidia A100 GPU
  - **Hours used:** 45k+
  - **Cloud Provider:** Future Technologies Limited
  - **Compute Region:** Rajasthan, India
- - **Carbon Emitted:** 0 (Powered by clean Solar Energy with no harmful or polluting machines used. Environmentally sustainable and eco-friendly!)
-
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]
-
- ## Model Card Contact
-
- [More Information Needed]
 
  - **Ethical and Responsible Use:** Users must ensure that the model is not exploited for harmful purposes such as generating misleading content, deepfakes, or offensive imagery. Ethical guidelines and responsible practices should be followed in all use cases.

  ## How to Get Started with the Model
+ **Prerequisites:**
+ - **Install the necessary libraries:**
+ ```
+ pip install transformers diffusers torch Pillow huggingface_hub
+ ```
+ - **Code to use the model:**
+
+ ```
+ from diffusers import DiffusionPipeline
+ import torch
+
+ # Your Hugging Face access token (needed if the repository is gated or private)
+ API_TOKEN = "your_hugging_face_api_token"
+
+ # Model repository on Hugging Face
+ model_name = "future-technologies/Floral-High-Dynamic-Range"
+
+ # Load the diffusion pipeline, with error handling for the download/load step
+ try:
+     pipe = DiffusionPipeline.from_pretrained(model_name, token=API_TOKEN)
+     pipe.to("cuda" if torch.cuda.is_available() else "cpu")
+ except Exception as e:
+     print(f"Error loading pipeline: {e}")
+     raise SystemExit(1)
+
+ # Example prompt for image generation
+ prompt = "A futuristic city skyline with glowing skyscrapers during sunset, reflecting the light."
+
+ # Generate the image
+ try:
+     result = pipe(prompt)
+     image = result.images[0]
+ except Exception as e:
+     print(f"Error generating image: {e}")
+     raise SystemExit(1)
+
+ # Save and display the image
+ try:
+     image.save("generated_image.png")
+     image.show()
+ except Exception as e:
+     print(f"Error saving or displaying image: {e}")
+     raise SystemExit(1)
+
+ print("Image generation and saving successful!")
+ ```
 
  ## Training Details

+ The **Floral High Dynamic Range (LIGM)** model was trained on a diverse dataset of over 1 billion high-quality images covering a wide range of visual styles and content. Training focused on capturing intricate features, dynamic lighting, and complex scenes, which allows the model to produce detailed, realistic images with strong creative range.

  #### Training Hyperparameters

  - **Training regime:** bf16 mixed precision <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
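The bf16 regime above refers to training; the card does not say how to run inference. As a minimal sketch only, loading the pipeline in bfloat16 at inference time is a common way to cut GPU memory use. The `torch_dtype` choice and the prompt here are illustrative assumptions, not part of the original card:

```
import torch
from diffusers import DiffusionPipeline

# Illustrative sketch: load the pipeline weights in bfloat16, mirroring the bf16
# training regime and roughly halving memory versus fp32. Assumes a bf16-capable
# GPU such as the A100 listed under Environmental Impact.
pipe = DiffusionPipeline.from_pretrained(
    "future-technologies/Floral-High-Dynamic-Range",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

image = pipe("A dew-covered rose at sunrise, high dynamic range lighting").images[0]
image.save("rose_hdr.png")
```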
  ## Environmental Impact

  - **Hardware Type:** Nvidia A100 GPU
  - **Hours used:** 45k+
  - **Cloud Provider:** Future Technologies Limited
  - **Compute Region:** Rajasthan, India
+ - **Carbon Emitted:** 0 (training was powered entirely by clean solar energy, making it environmentally sustainable and eco-friendly)
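For scale, a rough back-of-the-envelope estimate of the energy behind the zero-carbon claim. The per-GPU power draw used here is an assumption about typical A100 board power, not a figure reported in the card:

```
# Illustrative estimate only; the card states "45k+" GPU hours and solar power,
# but does not report per-GPU power draw or total energy.
gpu_hours = 45_000          # "Hours used: 45k+"
power_kw_per_gpu = 0.4      # assumed A100 board power (~400 W)
energy_mwh = gpu_hours * power_kw_per_gpu / 1000
print(f"Roughly {energy_mwh:.0f} MWh of compute energy, supplied here by solar power")
```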