BHbean's Collections
  • LoRA
  • OS for LLM
  • LLM Training Systems
  • Survey
  • MoE LLM Systems
  • LLM resource-constrained Inference
  • New LLM Algorithms
  • LLM Internal Mechanism
  • Prompt Engineering
  • parallelism
  • KV Cache Compression
  • LLM reasoning systems
  • Speculative Decoding

LLM resource-constrained Inference

updated about 24 hours ago

  • PRIMA.CPP: Speeding Up 70B-Scale LLM Inference on Low-Resource Everyday Home Clusters

    Paper • 2504.08791 • Published Apr 7 • 133

  • 70% Size, 100% Accuracy: Lossless LLM Compression for Efficient GPU Inference via Dynamic-Length Float

    Paper • 2504.11651 • Published Apr 15 • 28