edumunozsala/llama-2-7b-int4-GPTQ-python-code-20k

Tags: Text Generation · Transformers · PyTorch · code · llama · llama-2 · gptq · quantization · text-generation-inference · 4-bit precision
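The tags above describe a GPTQ 4-bit quantized Llama-2-7B checkpoint intended for Python code generation with the Transformers library. Below is a minimal loading sketch, not taken from the model card; it assumes recent `transformers` with `optimum` and `auto-gptq` installed (so the 4-bit GPTQ weights can be loaded) plus `accelerate` for device placement, and the prompt is only a placeholder, so check the README for the exact template.

```python
# Minimal sketch (assumptions: transformers with optimum + auto-gptq installed
# to handle the GPTQ 4-bit weights, and accelerate for device_map="auto").
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "edumunozsala/llama-2-7b-int4-GPTQ-python-code-20k"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# Placeholder prompt; the model card may define a specific instruction template.
prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```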
Files and versions
1 contributor · History: 5 commits
Latest commit: SFconvertbot, "Adding `safetensors` variant of this model" (a406798, over 1 year ago)
  • .gitattributes
    1.52 kB
    initial commit over 1 year ago
  • README.md
    3.19 kB
    Upload README.md over 1 year ago
  • config.json
    1.13 kB
    Upload LlamaForCausalLM over 1 year ago
  • generation_config.json
    132 Bytes
    Upload LlamaForCausalLM over 1 year ago
  • model.safetensors
    3.9 GB
    LFS
    Adding `safetensors` variant of this model over 1 year ago
  • pytorch_model.bin
    3.9 GB
    LFS
    Detected Pickle imports (4): "torch.IntStorage", "torch.HalfStorage", "torch._utils._rebuild_tensor_v2", "collections.OrderedDict" (see the safetensors loading note after this file list)
    Upload LlamaForCausalLM over 1 year ago
  • special_tokens_map.json
    434 Bytes
    Upload tokenizer over 1 year ago
  • tokenizer.json
    1.84 MB
    Upload tokenizer over 1 year ago
  • tokenizer_config.json
    732 Bytes
    Upload tokenizer over 1 year ago
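Because the repository ships both a pickled `pytorch_model.bin` (with the pickle imports detected above) and a `model.safetensors` variant, one option is to ask Transformers for the safetensors weights explicitly. A minimal sketch, assuming the `safetensors` package and the GPTQ runtime dependencies noted earlier are installed:

```python
# Minimal sketch: prefer the safetensors weights over the pickled .bin so no
# pickle deserialization runs at load time.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "edumunozsala/llama-2-7b-int4-GPTQ-python-code-20k",
    use_safetensors=True,   # load model.safetensors instead of pytorch_model.bin
    device_map="auto",      # requires the accelerate package
)
```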