☆415 · Nov 2, 2023 · Updated 2 years ago
Alternatives and similar repositories for hydra-moe
Users that are interested in hydra-moe are comparing it to the libraries listed below.
- ☆95 · Jul 26, 2023 · Updated 2 years ago
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts. ☆226 · Sep 18, 2025 · Updated 6 months ago
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆91 · Feb 27, 2024 · Updated 2 years ago
- Full finetuning of large language models without large memory requirements ☆94 · Sep 22, 2025 · Updated 6 months ago
- Customizable implementation of the self-instruct paper. ☆1,050 · Mar 7, 2024 · Updated 2 years ago
- ☆45 · Oct 13, 2023 · Updated 2 years ago
- ☆275 · Oct 31, 2023 · Updated 2 years ago
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆73 · May 26, 2024 · Updated last year
- batched loras ☆351 · Sep 6, 2023 · Updated 2 years ago
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,675 · Mar 8, 2024 · Updated 2 years ago
- Generate textbook-quality synthetic LLM pretraining data ☆509 · Oct 19, 2023 · Updated 2 years ago
- Load multiple LoRA modules simultaneously and automatically switch to the appropriate combination of LoRA modules to generate the best answer ☆159 · Feb 9, 2024 · Updated 2 years ago
- Token-level adaptation of LoRA matrices for downstream task generalization. ☆15 · Apr 14, 2024 · Updated 2 years ago
- Go ahead and axolotl questions ☆11,688 · Updated this week
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,722 · Jun 25, 2024 · Updated last year
- A library for squeakily cleaning and filtering language datasets. ☆50 · Jul 10, 2023 · Updated 2 years ago
- ☆719 · Mar 6, 2024 · Updated 2 years ago
- Comprehensive analysis of the difference in performance of QLoRA, LoRA, and full finetunes. ☆83 · Sep 10, 2023 · Updated 2 years ago
- A collection of modular datasets generated by GPT-4: General-Instruct, Roleplay-Instruct, Code-Instruct, and Toolformer ☆1,636 · Sep 15, 2023 · Updated 2 years ago
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆397 · Feb 24, 2024 · Updated 2 years ago
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆281 · Nov 3, 2023 · Updated 2 years ago
- ☆22 · Aug 27, 2023 · Updated 2 years ago
- ☆63 · Sep 23, 2024 · Updated last year
- Tools for merging pretrained large language models. ☆6,973 · Mar 15, 2026 · Updated last month
- ☆74 · Sep 5, 2023 · Updated 2 years ago
- ☆602 · Aug 23, 2024 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆207 · Aug 10, 2024 · Updated last year
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,933 · May 3, 2024 · Updated last year
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,690 · Apr 17, 2024 · Updated last year
- ☆868 · Dec 8, 2023 · Updated 2 years ago
- Latent Large Language Models ☆19 · Aug 24, 2024 · Updated last year
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,327 · Mar 6, 2025 · Updated last year
- Self-Alignment with Principle-Following Reward Models ☆170 · Sep 18, 2025 · Updated 6 months ago
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,907 · Jan 21, 2024 · Updated 2 years ago
- Simplex Random Feature attention, in PyTorch ☆76 · Oct 10, 2023 · Updated 2 years ago
- llama.cpp with the BakLLaVA model, describing what it sees ☆379 · Nov 8, 2023 · Updated 2 years ago
- LLMs built upon Evol Instruct: WizardLM, WizardCoder, WizardMath ☆9,471 · Jun 7, 2025 · Updated 10 months ago
- clean up your LLM datasets ☆113 · May 30, 2023 · Updated 2 years ago
- Run evaluation on LLMs using the human-eval benchmark ☆430 · Sep 12, 2023 · Updated 2 years ago