hydrallm / llama-moe-v1
☆95 · Updated last year
Alternatives and similar repositories for llama-moe-v1
Users interested in llama-moe-v1 are comparing it to the repositories listed below.
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes ☆81 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆77 · Updated last year
- Spherical merge of PyTorch/HF-format language models with minimal feature loss (see the SLERP sketch after this list) ☆123 · Updated last year
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆188 · Updated 9 months ago
- Code repository for the c-BTM paper ☆106 · Updated last year
- Multi-Domain Expert Learning ☆66 · Updated last year
- A repository for transformer critique learning and generation ☆89 · Updated last year
- Experiments on speculative sampling with Llama models ☆126 · Updated last year
- ☆120 · Updated 8 months ago
- Pre-training code for Amber 7B LLM ☆166 · Updated last year
- ☆92 · Updated last year
- batched loras ☆343 · Updated last year
- ☆87 · Updated last year
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆116 · Updated 11 months ago
- ☆72 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆143 · Updated 8 months ago
- An implementation of Self-Extend, which expands the context window via grouped attention (see the position-remapping sketch after this list) ☆119 · Updated last year
- Just a bunch of benchmark logs for different LLMs ☆118 · Updated 10 months ago
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts ☆220 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆202 · Updated last year
- Self-Alignment with Principle-Following Reward Models ☆161 · Updated 3 weeks ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆114 · Updated 2 years ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆257 · Updated 10 months ago
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆102 · Updated 2 years ago
- Functional Benchmarks and the Reasoning Gap ☆86 · Updated 8 months ago
- Token-level adaptation of LoRA matrices for downstream task generalization ☆14 · Updated last year
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding ☆171 · Updated 4 months ago
- This is our own implementation of "Layer-Selective Rank Reduction" ☆238 · Updated last year
- TART: A plug-and-play Transformer module for task-agnostic reasoning ☆196 · Updated last year
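
For the spherical-merge entry above, the core idea is SLERP: interpolate between two checkpoints' weight tensors along the great circle connecting them rather than the straight line, which tends to preserve each model's features better than plain averaging. A minimal per-tensor sketch, assuming flattened weights; the `slerp` helper is illustrative, not the repo's actual API:

```python
import torch

def slerp(w_a: torch.Tensor, w_b: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors (illustrative sketch)."""
    a = w_a.flatten().float()
    b = w_b.flatten().float()
    # Angle between the two weight vectors, via their normalized dot product.
    cos_omega = (a / (a.norm() + eps)) @ (b / (b.norm() + eps))
    omega = torch.arccos(torch.clamp(cos_omega, -1.0, 1.0))
    # Fall back to plain linear interpolation when the vectors are nearly parallel.
    if omega.abs() < 1e-4:
        return (1 - t) * w_a + t * w_b
    so = torch.sin(omega)
    out = (torch.sin((1 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
    return out.reshape(w_a.shape).to(w_a.dtype)
```

In practice a merge tool would apply this per parameter tensor across two state dicts, with `t` controlling how far the result sits from model A toward model B.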
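For the Self-Extend entry, the trick behind grouped attention is a remapping of relative positions: distances inside a small neighbor window stay exact, while more distant positions are bucketed by floor division so the model never sees a relative distance larger than what it was pretrained on. A rough sketch of that remapping; the `self_extend_positions` helper and its window/group values are assumptions for illustration:

```python
import torch

def self_extend_positions(seq_len: int, window: int, group: int) -> torch.Tensor:
    """Remap relative positions for grouped attention (illustrative sketch).

    Positions closer than `window` keep their exact relative distance; positions
    beyond it share grouped positions via floor division, shifted so the two
    regimes line up at the window boundary.
    """
    rel = torch.arange(seq_len)              # relative distance to the current token
    grouped = window + (rel - window) // group
    return torch.where(rel < window, rel, grouped)

# Example: a 16-token lookback with a neighbor window of 4 and group size 4.
print(self_extend_positions(16, window=4, group=4))
# tensor([0, 1, 2, 3, 4, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 6])
```

The maximum remapped distance is `window + (seq_len - window) // group`, which is why a short-context model can attend over a much longer sequence without retraining.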