batched loras
☆351 · Updated Sep 6, 2023 (2 years ago)
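Judging by the repository name and the comparison set below (S-LoRA, multi-LoRA inference servers), BLoRA's focus is applying several LoRA adapters to one batch in a single forward pass. Below is a minimal PyTorch sketch of that general idea only; it is not BLoRA's implementation, and all shapes and names are hypothetical.

```python
# Illustrative sketch of "batched LoRAs": each row of a batch is routed through
# a different LoRA adapter in one forward pass. Not BLoRA's actual code.
import torch

n_adapters, rank, d_in, d_out = 3, 8, 64, 64
base = torch.nn.Linear(d_in, d_out, bias=False)   # frozen base projection

# One (A, B) low-rank pair per adapter; the LoRA delta for input x is x @ A @ B.
A = torch.randn(n_adapters, d_in, rank) * 0.01
B = torch.zeros(n_adapters, rank, d_out)          # B starts at zero, as in LoRA

x = torch.randn(4, 16, d_in)                      # (batch, seq, d_in)
adapter_ids = torch.tensor([0, 1, 2, 0])          # adapter assigned to each row

# Gather each row's adapter and apply base output + per-row LoRA delta at once.
A_b, B_b = A[adapter_ids], B[adapter_ids]         # (batch, d_in, rank), (batch, rank, d_out)
y = base(x) + torch.bmm(torch.bmm(x, A_b), B_b)   # (batch, seq, d_out)
```

Several of the projects listed below (e.g. S-LoRA and the multi-LoRA inference servers) build on a similar per-request adapter-routing idea, typically with custom batching kernels rather than plain `torch.bmm`.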
Alternatives and similar repositories for BLoRA
Users that are interested in BLoRA are comparing it to the libraries listed below.
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,907 · Updated Jan 21, 2024 (2 years ago)
- Serving multiple LoRA fine-tuned LLMs as one ☆1,152 · Updated May 8, 2024 (last year)
- An Efficient "Factory" to Build Multiple LoRA Adapters ☆375 · Updated Feb 13, 2025 (last year)
- Generate textbook-quality synthetic LLM pretraining data ☆509 · Updated Oct 19, 2023 (2 years ago)
- ☆415 · Updated Nov 2, 2023 (2 years ago)
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆668 · Updated Jul 22, 2024 (last year)
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,722 · Updated Jun 25, 2024 (last year)
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,690 · Updated Apr 17, 2024 (last year)
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆281 · Updated Nov 3, 2023 (2 years ago)
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆2,915 · Updated Sep 30, 2023 (2 years ago)
- Multipack distributed sampler for fast padding-free training of LLMs ☆207 · Updated Aug 10, 2024 (last year)
- 🤖 A PyTorch library of curated Transformer models and their composable components ☆895 · Updated Apr 17, 2024 (last year)
- ☆94 · Updated Oct 5, 2023 (2 years ago)
- ☆45 · Updated Oct 13, 2023 (2 years ago)
- ☆275 · Updated Oct 31, 2023 (2 years ago)
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,327 · Updated Mar 6, 2025 (last year)
- Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines ☆197 · Updated May 6, 2024 (last year)
- This repository contains code and tooling for the Abacus.AI LLM Context Expansion project. Also included are evaluation scripts and benchmarks. ☆601 · Updated Nov 17, 2023 (2 years ago)
- ☆50 · Updated Mar 14, 2024 (2 years ago)
- Comprehensive analysis of the difference in performance between QLoRA, LoRA, and full fine-tunes. ☆83 · Updated Sep 10, 2023 (2 years ago)
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆209 · Updated May 20, 2024 (last year)
- Customizable implementation of the self-instruct paper. ☆1,050 · Updated Mar 7, 2024 (2 years ago)
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts. ☆226 · Updated Sep 18, 2025 (6 months ago)
- Minimalistic large language model 3D-parallelism training ☆2,644 · Updated Apr 7, 2026 (last week)
- Let's make sand talk ☆588 · Updated Oct 17, 2023 (2 years ago)
- ☆198 · Updated Feb 9, 2024 (2 years ago)
- Just a bunch of benchmark logs for different LLMs ☆121 · Updated Jul 28, 2024 (last year)
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" ☆472 · Updated Apr 21, 2024 (last year)
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers ☆138 · Updated Mar 14, 2024 (2 years ago)
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆3,745 · Updated May 21, 2025 (10 months ago)
- Salesforce open-source LLMs with 8k sequence length. ☆726 · Updated Jan 31, 2025 (last year)
- Fine-tune Mistral-7B on 3090s, A100s, H100s ☆724 · Updated Oct 11, 2023 (2 years ago)
- Mixture-of-Experts (MoE) techniques for enhancing LLM performance through expert-driven prompt mapping and adapter combinations. ☆12 · Updated Feb 11, 2024 (2 years ago)
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,208 · Updated Jul 11, 2024 (last year)
- ☆1,027 · Updated Jan 4, 2024 (2 years ago)
- Large Language Model Text Generation Inference ☆10,830 · Updated Mar 21, 2026 (3 weeks ago)
- Simplex Random Feature attention, in PyTorch ☆76 · Updated Oct 10, 2023 (2 years ago)
- Accessible large language models via k-bit quantization for PyTorch. ☆8,107 · Updated this week
- [ICLR 2024] Lemur: Open Foundation Models for Language Agents ☆557 · Updated Oct 28, 2023 (2 years ago)