huggingface / hf_transfer
☆468 · Updated last month
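For context on the repo being compared: hf_transfer is an opt-in, Rust-based download accelerator for `huggingface_hub`. A minimal sketch of turning it on, assuming `hf_transfer` is installed alongside `huggingface_hub` (the model ID below is a placeholder, not from this page):

```python
import os

# Opt in to the hf_transfer backend. The flag must be set before
# huggingface_hub performs its first download, or it is ignored.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

# With the flag set, ordinary huggingface_hub calls use the accelerated path:
# from huggingface_hub import snapshot_download
# snapshot_download("gpt2")  # needs `pip install hf_transfer` and network access

print(os.environ["HF_HUB_ENABLE_HF_TRANSFER"])  # → 1
```

The download calls are commented out because they require network access; only the environment-variable switch is exercised here.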
Alternatives and similar repositories for hf_transfer
Users interested in hf_transfer are comparing it to the libraries listed below.
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆302 · Updated last year
- Advanced Quantization Algorithm for LLMs and VLMs, with support for CPU, Intel GPU, CUDA and HPU. Seamlessly integrated with Torchao, Tra… ☆490 · Updated this week
- Low-Rank adapter extraction for fine-tuned transformer models ☆171 · Updated last year
- ☆722 · Updated 2 weeks ago
- ☆536 · Updated 9 months ago
- ☆536 · Updated 7 months ago
- ☆157 · Updated 10 months ago
- Scalable toolkit for efficient model alignment ☆807 · Updated last week
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answe… ☆151 · Updated last year
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated 7 months ago
- Official implementation of Half-Quadratic Quantization (HQQ) ☆818 · Updated this week
- A PyTorch quantization backend for Optimum ☆946 · Updated 2 weeks ago
- Implementation of DoRA ☆294 · Updated last year
- Scalable and robust tree-based speculative decoding algorithm ☆345 · Updated 4 months ago
- A bagel, with everything. ☆320 · Updated last year
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" ☆275 · Updated last year
- Module, Model, and Tensor Serialization/Deserialization ☆234 · Updated last week
- A repository for research on medium-sized language models ☆497 · Updated last month
- Muon is Scalable for LLM Training ☆1,059 · Updated 2 months ago
- Comparison of Language Model Inference Engines ☆217 · Updated 5 months ago
- Minimalistic large language model 3D-parallelism training ☆1,898 · Updated last week
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆199 · Updated 10 months ago
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models ☆137 · Updated 10 months ago
- prime-rl is a codebase for decentralized async RL training at scale ☆318 · Updated this week
- Batched LoRAs ☆343 · Updated last year
- The RunPod worker template for serving our large language model endpoints. Powered by vLLM. ☆318 · Updated 3 weeks ago
- OpenAI-compatible API for the TensorRT-LLM Triton backend ☆208 · Updated 10 months ago
- Inference code for Mistral and Mixtral hacked up into the original Llama implementation ☆371 · Updated last year
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware ☆727 · Updated 8 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆258 · Updated 10 months ago