huggingface / hf_transfer
☆472 · Updated 2 months ago
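hf_transfer is a Rust-backed transfer accelerator for the Hugging Face Hub. It is not called directly; huggingface_hub picks it up when an environment variable is set before import. A minimal usage sketch (the repo id is only an example):

```python
# Requires: pip install huggingface_hub hf_transfer
import os

# Must be set before huggingface_hub is imported, since the flag is
# read at import time.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import snapshot_download

# Example repo id (illustrative); large-file transfers now go
# through the Rust backend.
local_dir = snapshot_download(repo_id="gpt2")
print(local_dir)
```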
Alternatives and similar repositories for hf_transfer
Users interested in hf_transfer are comparing it to the libraries listed below.
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆304 · Updated last year
- Official implementation of Half-Quadratic Quantization (HQQ) ☆837 · Updated last week
- ☆542 · Updated 10 months ago
- Inference code for Mistral and Mixtral hacked up into the original Llama implementation ☆371 · Updated last year
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆198 · Updated 11 months ago
- A bagel, with everything. ☆321 · Updated last year
- Implementation of DoRA (weight-decomposed low-rank adaptation; sketched after this list) ☆294 · Updated last year
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models".☆277Updated last year
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware.☆732Updated 9 months ago
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" (the ternary quantizer is sketched after this list) ☆154 · Updated 8 months ago
- Minimalistic large language model 3D-parallelism training ☆1,942 · Updated this week
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs (a minimal lookup layer is sketched after this list). Conceptually, spars… ☆339 · Updated 6 months ago
- ☆520 · Updated 7 months ago
- ☆541 · Updated 7 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆261 · Updated 11 months ago
- For releasing code related to compression methods for transformers, accompanying our publications ☆431 · Updated 5 months ago
- ☆543 · Updated 6 months ago
- This is our own implementation of 'Layer Selective Rank Reduction' ☆239 · Updated last year
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models (the similarity-based scoring is sketched after this list) ☆238 · Updated last year
- [ACL 2024] Progressive LLaMA with Block Expansion. ☆505 · Updated last year
- Low-rank adapter extraction for fine-tuned transformer models (an SVD-based recipe is sketched after this list) ☆173 · Updated last year
- A library for easily merging multiple LLM experts and efficiently training the merged LLM ☆483 · Updated 10 months ago
- OpenDiLoCo: An Open-Source Framework for Globally Distributed Low-Communication Training ☆508 · Updated 5 months ago
- Load multiple LoRA modules simultaneously and automatically switch to the appropriate combination of LoRA modules to generate the best answe… ☆155 · Updated last year
- Serving multiple LoRA fine-tuned LLMs as one ☆1,066 · Updated last year
- Python bindings for ggml ☆141 · Updated 9 months ago
- Large Context Attention ☆716 · Updated 5 months ago
- Scalable toolkit for efficient model alignment ☆818 · Updated 3 weeks ago
- Implementation of the LongRoPE paper: "Extending LLM Context Window Beyond 2 Million Tokens" ☆137 · Updated 11 months ago
- Efficient LLM Inference over Long Sequences ☆378 · Updated 3 weeks ago
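For the DoRA entry above, a minimal sketch of the idea, assuming a PyTorch nn.Linear base layer: the frozen weight is split into a magnitude vector and a direction, and only the direction receives a LoRA-style low-rank update. The class name, rank, and row-wise normalization convention are illustrative choices, not the reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DoRALinear(nn.Module):
    """Sketch of DoRA: decompose W0 into magnitude m and direction,
    then update the direction with a low-rank delta B @ A."""

    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.w0 = base.weight.detach()                 # frozen W0, (out, in)
        self.bias = base.bias
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        # Magnitude starts at the row-wise norm of W0, so the initial
        # effective weight reproduces W0 exactly (B is zero).
        self.m = nn.Parameter(self.w0.norm(dim=1, keepdim=True))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        v = self.w0 + self.B @ self.A                  # updated direction
        v = v / v.norm(dim=1, keepdim=True)            # unit-norm rows
        return F.linear(x, self.m * v, self.bias)
```

Only m, A, and B are trained; the base weight stays frozen, which is what keeps the adapter small.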
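For "The Era of 1-bit LLMs" entry, the paper's absmean quantizer maps weights onto the ternary set {-1, 0, +1} with one per-tensor scale; a small sketch (the function name is mine):

```python
import torch

def absmean_ternary(w: torch.Tensor, eps: float = 1e-5):
    """BitNet b1.58-style absmean quantization: scale by the mean
    absolute weight, round, and clip to {-1, 0, +1}."""
    scale = w.abs().mean().clamp(min=eps)   # per-tensor scale
    w_q = (w / scale).round().clamp(-1, 1)  # ternary weights
    return w_q, scale

w = torch.randn(4, 8)
w_q, scale = absmean_ternary(w)
w_approx = w_q * scale  # dequantized approximation of w
```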
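For the memory-layers entry, a minimal trainable key-value lookup, assuming a flat key table scored against every token and mixed by a top-k softmax. Real memory layers use product keys so the full score matrix is never formed; that optimization is omitted here for clarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KVMemoryLayer(nn.Module):
    """Trainable key-value memory: each token scores a key table and
    mixes its top-k values. (Product-key variants avoid scoring every
    slot; this flat version does not.)"""

    def __init__(self, dim: int, num_slots: int = 4096, topk: int = 8):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_slots, dim) * 0.02)
        self.values = nn.Parameter(torch.randn(num_slots, dim) * 0.02)
        self.topk = topk

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim)
        scores = x @ self.keys.t()                    # (batch, seq, slots)
        top, idx = scores.topk(self.topk, dim=-1)     # (batch, seq, k)
        weights = F.softmax(top, dim=-1)
        picked = self.values[idx]                     # (batch, seq, k, dim)
        return (weights.unsqueeze(-1) * picked).sum(dim=-2)
```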
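For the redundant-layer-block entry, one common scoring recipe (a sketch under my own assumptions, not necessarily that repo's exact method) measures how little a block of n consecutive layers changes the hidden states: blocks whose output stays nearly parallel to their input are the first pruning candidates.

```python
import torch
import torch.nn.functional as F

def most_redundant_block(hidden_states: list[torch.Tensor], n: int = 4):
    """hidden_states: per-layer activations of shape (tokens, dim),
    captured during a forward pass. Scores each depth-n block by the
    mean cosine similarity between its input and output; the highest
    similarity marks the block that changes the representation least."""
    scores = []
    for i in range(len(hidden_states) - n):
        sim = F.cosine_similarity(hidden_states[i],
                                  hidden_states[i + n], dim=-1).mean()
        scores.append((i, sim.item()))
    return max(scores, key=lambda t: t[1])  # (start_layer, similarity)
```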
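For the low-rank adapter extraction entry, the standard recipe is a truncated SVD of the weight delta between the fine-tuned and base checkpoints; the function name and rank below are illustrative.

```python
import torch

def extract_lora(w_base: torch.Tensor, w_tuned: torch.Tensor, rank: int = 16):
    """Factor the fine-tuning delta into a rank-r adapter (B, A) via
    truncated SVD, so that w_base + B @ A approximates w_tuned."""
    delta = (w_tuned - w_base).float()
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    B = u[:, :rank] * s[:rank]   # (out, r); singular values folded into B
    A = vh[:rank, :]             # (r, in)
    return B, A
```

Applied per weight matrix, this recovers an adapter that can be loaded like an ordinary LoRA at the chosen rank.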