pbelcak / UltraFastBERT
The repository for the code of the UltraFastBERT paper
☆514 · Updated 7 months ago
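UltraFastBERT replaces dense feedforward blocks with fast feedforward (FFF) layers: a binary tree of node neurons routes each token to a single small leaf MLP, so only a logarithmic fraction of the layer's neurons is evaluated per token. The following is a minimal, hypothetical PyTorch sketch of that routing idea; the class name, arguments, and hard-routing loop are illustrative assumptions, not the repository's actual API.

```python
import torch
import torch.nn as nn

class FastFeedForward(nn.Module):
    """Hypothetical sketch: route each token through a binary tree of node
    neurons to a single small leaf MLP (the fast-feedforward idea)."""

    def __init__(self, dim: int, depth: int, leaf_hidden: int):
        super().__init__()
        self.depth = depth                        # tree depth; 2**depth leaves
        self.num_leaves = 2 ** depth
        # one scalar decision score per internal tree node
        self.node = nn.Linear(dim, self.num_leaves - 1)
        # each leaf is a tiny two-layer MLP, stored as batched weight tensors
        self.w1 = nn.Parameter(torch.randn(self.num_leaves, dim, leaf_hidden) * dim ** -0.5)
        self.w2 = nn.Parameter(torch.randn(self.num_leaves, leaf_hidden, dim) * leaf_hidden ** -0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim); hard routing as done at inference time
        scores = self.node(x)                     # (batch, num_leaves - 1)
        idx = torch.zeros(x.shape[0], dtype=torch.long, device=x.device)
        for _ in range(self.depth):
            # read the current node's score and branch left/right
            go_right = (torch.gather(scores, 1, idx.unsqueeze(1)).squeeze(1) > 0).long()
            idx = 2 * idx + 1 + go_right          # heap-style child index
        leaf = idx - (self.num_leaves - 1)        # leaf index in [0, num_leaves)
        # apply only the selected leaf MLP to each token
        h = torch.relu(torch.einsum("bd,bdh->bh", x, self.w1[leaf]))
        return torch.einsum("bh,bhd->bd", h, self.w2[leaf])


# Example: a layer with 2**4 = 16 leaves evaluates exactly one leaf MLP per token.
layer = FastFeedForward(dim=64, depth=4, leaf_hidden=32)
out = layer(torch.randn(8, 64))                   # -> shape (8, 64)
```

At inference time each token descends the tree in `depth` comparisons and touches only one leaf MLP, which is where the paper's claimed speed-up over a dense feedforward layer comes from.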
Related projects
Alternatives and complementary repositories for UltraFastBERT
- Official implementation of Half-Quadratic Quantization (HQQ) ☆701 · Updated last week
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆350 · Updated 8 months ago
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆675 · Updated 7 months ago
- Official PyTorch implementation of QA-LoRA ☆117 · Updated 8 months ago
- A repository for log-time feedforward networks ☆216 · Updated 7 months ago
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆537 · Updated 6 months ago
- Batched LoRAs ☆336 · Updated last year
- Ungreedy subword tokenizer and vocabulary trainer for Python, Go & JavaScript ☆551 · Updated 4 months ago
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" ☆262 · Updated last year
- Inference code for Persimmon-8B ☆416 · Updated last year
- A bagel, with everything. ☆312 · Updated 7 months ago
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆649 · Updated 3 months ago
- Code related to compression methods for transformers, accompanying our publications ☆372 · Updated last month
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆173 · Updated 4 months ago
- Code for fine-tuning the Platypus family of LLMs using LoRA ☆623 · Updated 9 months ago
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆196 · Updated 6 months ago
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" ☆435 · Updated 6 months ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆178 · Updated 3 months ago
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆811 · Updated this week
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention… ☆280 · Updated 6 months ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1GPU + 1Day ☆252 · Updated last year
- Minimalistic large language model 3D-parallelism training ☆1,260 · Updated this week
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333 ☆1,045 · Updated 10 months ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers ☆415 · Updated 11 months ago
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated last month
- Serving multiple LoRA-finetuned LLMs as one ☆984 · Updated 6 months ago