Batched LoRAs
☆350 · Updated Sep 6, 2023
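The core trick behind batched LoRAs is giving each example in a batch its own low-rank adapter, so one shared base model can serve many fine-tunes in a single forward pass. A minimal sketch of that idea in PyTorch (shapes and names are illustrative assumptions, not BLoRA's actual API):

```python
import torch

# Minimal sketch of batched LoRA inference (illustrative, not BLoRA's API):
# every example in the batch carries its own low-rank adapter (A_i, B_i),
# applied alongside the shared base weight in one pass.

batch, seq, d_model, rank = 4, 16, 512, 8

W = torch.randn(d_model, d_model) / d_model**0.5  # shared base projection
A = torch.randn(batch, rank, d_model) * 0.01      # per-example LoRA down-projection
B = torch.zeros(batch, d_model, rank)             # per-example LoRA up-projection (zero-init)

x = torch.randn(batch, seq, d_model)

base = x @ W.T                                    # shared path: (batch, seq, d_model)
low = torch.einsum("bsd,brd->bsr", x, A)          # project each example into its own rank-r space
delta = torch.einsum("bsr,bdr->bsd", low, B)      # expand back; each row uses its own adapter
y = base + delta

print(y.shape)  # torch.Size([4, 16, 512])
```

For small ranks, the two extra batched matmuls add little overhead on top of the shared projection, which is what makes mixing many fine-tunes in one batch attractive.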
Alternatives and similar repositories for BLoRA
Users interested in BLoRA are comparing it to the libraries listed below.
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters (☆1,899 · Updated Jan 21, 2024)
- Serving multiple LoRA-finetuned LLMs as one (☆1,144 · Updated May 8, 2024)
- Generate textbook-quality synthetic LLM pretraining data (☆509 · Updated Oct 19, 2023)
- ☆415 · Updated Nov 2, 2023
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" (☆280 · Updated Nov 3, 2023)
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads (☆2,710 · Updated Jun 25, 2024)
- ☆274 · Updated Oct 31, 2023
- YaRN: Efficient Context Window Extension of Large Language Models (☆1,673 · Updated Apr 17, 2024)
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights (☆2,913 · Updated Sep 30, 2023)
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition (☆669 · Updated Jul 22, 2024)
- ☆45 · Updated Oct 13, 2023
- An Efficient "Factory" to Build Multiple LoRA Adapters (☆372 · Updated Feb 13, 2025)
- ☆94 · Updated Oct 5, 2023
- 🤖 A PyTorch library of curated Transformer models and their composable components (☆894 · Updated Apr 17, 2024)
- Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines (☆197 · Updated May 6, 2024)
- Adaptive Inter-Class Similarity Distillation for Semantic Segmentation (MTAP 2025) (☆29 · Updated Nov 14, 2025)
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding (☆1,315 · Updated Mar 6, 2025)
- Multipack distributed sampler for fast padding-free training of LLMs (☆204 · Updated Aug 10, 2024)
- This repository contains code and tooling for the Abacus.AI LLM Context Expansion project. Also included are evaluation scripts and bench… (☆600 · Updated Nov 17, 2023)
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes (☆83 · Updated Sep 10, 2023)
- Minimalistic large language model 3D-parallelism training (☆2,579 · Updated Feb 19, 2026)
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) (☆209 · Updated May 20, 2024)
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers (☆137 · Updated Mar 14, 2024)
- ☆198 · Updated Feb 9, 2024
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… (☆226 · Updated Sep 18, 2025)
- Simplex Random Feature attention, in PyTorch (☆76 · Updated Oct 10, 2023)
- Salesforce open-source LLMs with 8k sequence length (☆725 · Updated Jan 31, 2025)
- Mixture of Experts (MoE) techniques for enhancing LLM performance through expert-driven prompt mapping and adapter combinations (☆12 · Updated Feb 11, 2024)
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… (☆2,175 · Updated Oct 8, 2024)
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs (☆3,728 · Updated May 21, 2025)
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks (☆2,915 · Updated this week)
- Data cleaning and curation for unstructured text (☆329 · Updated Aug 6, 2024)
- Customizable implementation of the self-instruct paper (☆1,049 · Updated Mar 7, 2024)
- Official code for ReLoRA, from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" (☆473 · Updated Apr 21, 2024)
- Convert all of libgen to high-quality markdown (☆255 · Updated Dec 13, 2023)
- ☆1,027 · Updated Jan 4, 2024
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks (☆7,196 · Updated Jul 11, 2024)
- Fine-tune Mistral-7B on 3090s, A100s, and H100s (☆724 · Updated Oct 11, 2023)
- Let's make sand talk (☆588 · Updated Oct 17, 2023)