SkunkworksAI / hydra-moe
☆415 · updated Nov 2, 2023
Alternatives and similar repositories for hydra-moe
Users interested in hydra-moe are comparing it to the repositories listed below.
- ☆95 · updated Jul 26, 2023
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… ☆226 · updated Sep 18, 2025
- Customizable implementation of the self-instruct paper. ☆1,050 · updated Mar 7, 2024
- batched loras ☆349 · updated Sep 6, 2023
- ☆45 · updated Oct 13, 2023
- ☆273 · updated Oct 31, 2023
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,657 · updated Mar 8, 2024
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆73 · updated May 26, 2024
- Full finetuning of large language models without large memory requirements ☆94 · updated Sep 22, 2025
- Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆91 · updated Feb 27, 2024
- Generate textbook-quality synthetic LLM pretraining data ☆509 · updated Oct 19, 2023
- Code for paper: "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆396 · updated Feb 24, 2024
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,705 · updated Jun 25, 2024
- A collection of modular datasets generated by GPT-4: General-Instruct, Roleplay-Instruct, Code-Instruct, and Toolformer ☆1,630 · updated Sep 15, 2023
- Go ahead and axolotl questions ☆11,289 · updated this week
- ☆63 · updated Sep 23, 2024
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆279 · updated Nov 3, 2023
- ☆717 · updated Mar 6, 2024
- Tools for merging pretrained large language models. ☆6,783 · updated Jan 26, 2026
- ☆593 · updated Aug 23, 2024
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answe… ☆159 · updated Feb 9, 2024
- Run evaluation on LLMs using the HumanEval benchmark ☆427 · updated Sep 12, 2023
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,668 · updated Apr 17, 2024
- Entropy Based Sampling and Parallel CoT Decoding ☆3,436 · updated Nov 13, 2024
- Inference code for Mistral and Mixtral hacked up into original Llama implementation ☆369 · updated Dec 9, 2023
- ☆22 · updated Aug 27, 2023
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,897 · updated Jan 21, 2024
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,316 · updated Mar 6, 2025
- A library for squeakily cleaning and filtering language datasets. ☆49 · updated Jul 10, 2023
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,891 · updated May 3, 2024
- Self-Alignment with Principle-Following Reward Models ☆169 · updated Sep 18, 2025
- Advanced Ultra-Low Bitrate Compression Techniques for the LLaMA Family of LLMs ☆110 · updated Jan 11, 2024
- clean up your LLM datasets ☆114 · updated May 30, 2023
- Multipack distributed sampler for fast padding-free training of LLMs ☆204 · updated Aug 10, 2024
- Token-level adaptation of LoRA matrices for downstream task generalization. ☆15 · updated Apr 14, 2024
- Comprehensive analysis of the differences in performance between QLoRA, LoRA, and full fine-tunes. ☆83 · updated Sep 10, 2023
- Run inference on the replit-3B code instruct model using CPU ☆160 · updated Jul 5, 2023
- LLMs built upon Evol Instruct: WizardLM, WizardCoder, WizardMath ☆9,477 · updated Jun 7, 2025
- ☆74 · updated Sep 5, 2023