☆415, updated Nov 2, 2023
Alternatives and similar repositories for hydra-moe
Users interested in hydra-moe are comparing it to the libraries listed below.
- ☆95, updated Jul 26, 2023
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts. (☆226, updated Sep 18, 2025)
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization". (☆91, updated Feb 27, 2024)
- Full finetuning of large language models without large memory requirements. (☆94, updated Sep 22, 2025)
- Customizable implementation of the self-instruct paper. (☆1,052, updated Mar 7, 2024)
- ☆45, updated Oct 13, 2023
- ☆274, updated Oct 31, 2023
- CUDA and Triton implementations of Flash Attention with SoftmaxN. (☆73, updated May 26, 2024)
- Batched LoRAs. (☆351, updated Sep 6, 2023)
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models. (☆1,667, updated Mar 8, 2024)
- Generate textbook-quality synthetic LLM pretraining data. (☆509, updated Oct 19, 2023)
- Load multiple LoRA modules simultaneously and automatically switch to the appropriate combination of LoRA modules to generate the best answer. (☆159, updated Feb 9, 2024)
- Token-level adaptation of LoRA matrices for downstream task generalization. (☆15, updated Apr 14, 2024)
- Go ahead and axolotl questions. (☆11,508, updated this week)
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads. (☆2,722, updated Jun 25, 2024)
- A library for squeakily cleaning and filtering language datasets. (☆50, updated Jul 10, 2023)
- ☆719, updated Mar 6, 2024
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes. (☆83, updated Sep 10, 2023)
- A collection of modular datasets generated by GPT-4: General-Instruct, Roleplay-Instruct, Code-Instruct, and Toolformer. (☆1,631, updated Sep 15, 2023)
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees". (☆397, updated Feb 24, 2024)
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". (☆281, updated Nov 3, 2023)
- ☆22, updated Aug 27, 2023
- Tools for merging pretrained large language models. (☆6,895, updated Mar 15, 2026)
- ☆63, updated Sep 23, 2024
- ☆74, updated Sep 5, 2023
- ☆599, updated Aug 23, 2024
- Multipack distributed sampler for fast padding-free training of LLMs. (☆206, updated Aug 10, 2024)
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. (☆8,922, updated May 3, 2024)
- YaRN: Efficient Context Window Extension of Large Language Models. (☆1,686, updated Apr 17, 2024)
- ☆868, updated Dec 8, 2023
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding. (☆1,324, updated Mar 6, 2025)
- Latent Large Language Models. (☆19, updated Aug 24, 2024)
- Self-Alignment with Principle-Following Reward Models. (☆170, updated Sep 18, 2025)
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters. (☆1,903, updated Jan 21, 2024)
- Simplex Random Feature attention, in PyTorch. (☆76, updated Oct 10, 2023)
- llama.cpp with the BakLLaVA model describing what it sees. (☆379, updated Nov 8, 2023)
- LLMs built on Evol Instruct: WizardLM, WizardCoder, WizardMath. (☆9,478, updated Jun 7, 2025)
- Clean up your LLM datasets. (☆113, updated May 30, 2023)
- Run evaluation on LLMs using the human-eval benchmark. (☆429, updated Sep 12, 2023)
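Many of the repositories above (batched LoRAs, S-LoRA, the multi-adapter loaders, the token-level adaptation work) build on the same LoRA primitive: a frozen weight matrix plus a trainable low-rank update. A minimal NumPy sketch of that core idea, with illustrative names and shapes that do not correspond to any listed repo's API:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # model dimension and LoRA rank (r << d)

W = rng.normal(size=(d, d))          # frozen base weight
A = rng.normal(size=(d, r)) * 0.01   # trainable down-projection
B = np.zeros((r, d))                 # trainable up-projection, zero-initialized

def lora_forward(x, scale=1.0):
    # Base path plus low-rank update: x @ (W + scale * A @ B),
    # computed without ever materializing the d x d update matrix.
    return x @ W + scale * (x @ A) @ B

x = rng.normal(size=(1, d))
# Because B starts at zero, the adapted model initially matches the base model.
assert np.allclose(lora_forward(x), x @ W)
```

Serving systems like those listed above exploit the fact that only `A` and `B` (2·d·r parameters) differ per adapter, so many adapters can share one copy of `W` in memory.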