alancucki committed · Commit 0e4ee54 · verified · 1 Parent(s): 95afeae

Update README.md

Files changed (1): README.md +4 −7
README.md CHANGED
@@ -50,17 +50,14 @@ Llama-2-7B-DMC-4x uses a model embedding size of 4096, 32 attention heads, MLP i
 
 ## Software Integration
 **Runtime Engine(s):**
-* [Not Applicable (N/A)]
+* Not Applicable (N/A)
 
 The model weights are distributed in bfloat16 format. However, it could be converted to other formats in order to run on other hardware microarchitectures.
 
-**Supported Hardware Microarchitecture Compatibility:** <br>
-* [NVIDIA Ampere] <br>
-* [NVIDIA Hopper] <br>
-* [NVIDIA Blackwell] <br>
+**Supported Hardware Microarchitecture Compatibility:** Nvidia Ampere and newer GPUs.<br>
 
-**[Preferred/Supported] Operating System(s):** <br>
-* [Linux] <br>
+**Supported Operating System(s):** <br>
+* Linux <br>
 
 ## Model Version(s)
 Llama 2 7B DMC 4x v1.0
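
The unchanged context line above states that the bfloat16 weights can be converted to other formats for hardware without native bfloat16 support. Below is a minimal sketch of such a conversion in plain PyTorch, assuming the weights are available as a single `torch.save`-style checkpoint; the file names are placeholders, not part of the model card, and the actual distributed checkpoint layout may require Megatron-LM tooling instead.

```python
# Minimal sketch: cast a bfloat16 checkpoint to float16 for GPUs
# without native bfloat16 support. File names are placeholders and
# assume a flat state_dict of tensors.
import torch

state_dict = torch.load("llama2_7b_dmc_4x.pt", map_location="cpu")

converted = {
    name: tensor.to(torch.float16) if tensor.is_floating_point() else tensor
    for name, tensor in state_dict.items()
}

torch.save(converted, "llama2_7b_dmc_4x_fp16.pt")
```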