huggingface / hf_transfer
☆541 · Updated 3 months ago
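hf_transfer is a Rust-based download/upload backend for huggingface_hub. A minimal sketch of opting in, using the environment variable documented by huggingface_hub (the repo/file in the commented download command is a hypothetical choice for illustration):

```shell
# hf_transfer is not used by default; install it next to huggingface_hub
# (shown as a comment since it needs network access):
#   pip install huggingface_hub hf_transfer
# huggingface_hub only routes transfers through hf_transfer when this
# environment variable is set:
export HF_HUB_ENABLE_HF_TRANSFER=1
# Subsequent downloads then use the Rust backend, e.g.:
#   huggingface-cli download gpt2 config.json
```

Note that hf_transfer mainly helps on very high-bandwidth connections; on ordinary links the default Python downloader is usually sufficient.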
Alternatives and similar repositories for hf_transfer
Users interested in hf_transfer are comparing it to the libraries listed below.
- Inference code for Mistral and Mixtral hacked up into original Llama implementation ☆371 · Updated 2 years ago
- Module, Model, and Tensor Serialization/Deserialization ☆286 · Updated 5 months ago
- Official implementation of Half-Quadratic Quantization (HQQ) ☆912 · Updated last month
- ☆592 · Updated last year
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆316 · Updated 2 years ago
- batched loras ☆349 · Updated 2 years ago
- Gemma 2 optimized for your local machine. ☆378 · Updated last year
- FRP Fork ☆177 · Updated 10 months ago
- Beyond Language Models: Byte Models are Digital World Simulators ☆334 · Updated last year
- Implementation of DoRA ☆306 · Updated last year
- A repository for research on medium-sized language models. ☆531 · Updated 8 months ago
- xet client tech, used in huggingface_hub ☆403 · Updated this week
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆751 · Updated last year
- OpenDiLoCo: An Open-Source Framework for Globally Distributed Low-Communication Training ☆562 · Updated last year
- Comparison of Language Model Inference Engines ☆239 · Updated last year
- Inference code for Persimmon-8B ☆412 · Updated 2 years ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆371 · Updated last year
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆155 · Updated last year
- DFloat11 [NeurIPS '25]: Lossless Compression of LLMs and DiTs for Efficient GPU Inference ☆603 · Updated 2 months ago
- Reference implementation of Megalodon 7B model ☆529 · Updated 8 months ago
- Official inference library for pre-processing of Mistral models ☆849 · Updated last week
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆201 · Updated last year
- ☆577 · Updated last year
- OpenAI-compatible API for the TensorRT-LLM Triton backend ☆220 · Updated last year
- Scalable and robust tree-based speculative decoding algorithm ☆366 · Updated last year
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆280 · Updated 2 years ago
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention… ☆294 · Updated last year
- A benchmark for emotional intelligence in large language models ☆398 · Updated last year
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answe… ☆157 · Updated last year
- A bagel, with everything. ☆326 · Updated last year