serp-ai / Parameter-Efficient-MoE
Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks
☆31 · Updated 11 months ago
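The repo's technique, parameter-efficient sparsity crafting, upcycles a dense transformer FFN into a Mixture-of-Experts layer while training only a small fraction of the parameters. Below is a minimal sketch of that idea under my own assumptions (the class names, adapter rank, and routing details are illustrative, not the repo's actual API): each expert shares the frozen dense weights and adds a small trainable low-rank adapter, gated by a trainable top-k router.

```python
# Minimal sketch (not the repo's actual code): each "expert" reuses the frozen
# dense FFN weights plus a small trainable low-rank adapter; a trainable router
# picks top-k experts per token. PEMoELayer, rank, num_experts are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRAAdapter(nn.Module):
    """Low-rank delta applied on top of a frozen linear layer."""
    def __init__(self, d_in, d_out, rank=8):
        super().__init__()
        self.A = nn.Linear(d_in, rank, bias=False)
        self.B = nn.Linear(rank, d_out, bias=False)
        nn.init.zeros_(self.B.weight)  # start as a no-op delta

    def forward(self, x):
        return self.B(self.A(x))

class PEMoELayer(nn.Module):
    def __init__(self, dense_ffn: nn.Linear, num_experts=4, top_k=2, rank=8):
        super().__init__()
        self.dense = dense_ffn                      # frozen, shared by all experts
        for p in self.dense.parameters():
            p.requires_grad = False
        d_in, d_out = dense_ffn.in_features, dense_ffn.out_features
        self.adapters = nn.ModuleList(
            LoRAAdapter(d_in, d_out, rank) for _ in range(num_experts)
        )
        self.router = nn.Linear(d_in, num_experts)  # trainable gate
        self.top_k = top_k

    def forward(self, x):                           # x: (tokens, d_in)
        gate = F.softmax(self.router(x), dim=-1)    # (tokens, num_experts)
        weights, idx = gate.topk(self.top_k, dim=-1)
        weights = weights / weights.sum(-1, keepdim=True)
        out = self.dense(x).clone()                 # shared frozen path
        for slot in range(self.top_k):
            for e, adapter in enumerate(self.adapters):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * adapter(x[mask])
        return out
```

Only the adapters and the router receive gradients, so the trainable parameter count stays in the LoRA regime while the layer gains per-token expert specialization.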
Alternatives and similar repositories for Parameter-Efficient-MoE
Users interested in Parameter-Efficient-MoE are comparing it to the libraries listed below
- 5X faster, 60% less memory QLoRA finetuning ☆21 · Updated 11 months ago
- entropix-style sampling + GUI ☆26 · Updated 6 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna (see the GRPO sketch after this list) ☆53 · Updated 3 months ago
- ☆66 · Updated 11 months ago
- ☆48 · Updated 6 months ago
- ☆20 · Updated last year
- Model REVOLVER, a human-in-the-loop model-mixing system ☆33 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning AI's PyTorch Lightning suite ☆33 · Updated last year
- Data preparation code for the CrystalCoder 7B LLM ☆44 · Updated last year
- Lego for GRPO ☆28 · Updated last month
- ☆73 · Updated last year
- ☆27 · Updated last year
- Using multiple LLMs for ensemble forecasting ☆16 · Updated last year
- GPT-2 small trained on phi-like data ☆66 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca that aims to be the trainer for all large language models ☆69 · Updated last year
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated last week
- RWKV-7: Surpassing GPT ☆84 · Updated 5 months ago
- Pre-training code for the CrystalCoder 7B LLM ☆54 · Updated last year
- Never forget anything again! Combines AI and intelligent tooling for a local knowledge base to track, catalogue, annotate, and plan for you… ☆37 · Updated last year
- Simple GRPO scripts and configurations ☆58 · Updated 3 months ago
- look how they massacred my boy ☆63 · Updated 6 months ago
- GPT-4-level conversational QA trained in a few hours ☆61 · Updated 8 months ago
- ☆49 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs ☆42 · Updated 11 months ago
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on extends the model's context limit ☆63 · Updated last year
- Repo hosting code and materials related to speeding up LLM inference using token merging ☆36 · Updated last year
- A toolkit for fine-tuning, inference, and evaluation of GreenBitAI's LLMs ☆84 · Updated 2 months ago
- ☆30 · Updated 10 months ago
- Nexusflow function-call, tool-use, and agent benchmarks ☆19 · Updated 5 months ago
- My implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated (see the sparsification sketch below) ☆32 · Updated 9 months ago
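The GRPO entries above share one core trick: replacing a learned value critic with group-relative reward normalization over several sampled completions per prompt. A minimal sketch of that computation, assuming a PyTorch setting; the function name and the weighted two-signal reward example are my own illustration, not code from any linked repo.

```python
# Group-relative advantages: sample G completions per prompt, score each with a
# reward function, then normalize rewards within the group (no value critic).
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-6):
    """rewards: (num_prompts, G), one row of G sampled completions per prompt."""
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Illustrative weighted sum of two reward signals (weights 0.7 / 0.3 assumed)
rewards = 0.7 * torch.tensor([[1.0, 0.0, 0.5, 1.0]]) \
        + 0.3 * torch.tensor([[0.2, 0.9, 0.4, 0.1]])
adv = group_relative_advantages(rewards)  # (1, 4), zero mean within the group
```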
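For the Q-Sparse entry, the paper's central operation is keeping only the top-K activations per token and using a straight-through estimator so gradients still reach the zeroed entries. A minimal sketch under that description; the class name and shapes are illustrative assumptions, not the linked implementation.

```python
# Top-K activation sparsification with a straight-through backward pass.
import torch

class TopKSparsify(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, k):
        # zero all but the k largest-magnitude entries along the last dim
        topk = x.abs().topk(k, dim=-1)
        mask = torch.zeros_like(x)
        mask.scatter_(-1, topk.indices, 1.0)
        return x * mask

    @staticmethod
    def backward(ctx, grad_out):
        # straight-through estimator: pass the gradient unchanged
        return grad_out, None

x = torch.randn(2, 8, requires_grad=True)
y = TopKSparsify.apply(x, 2)  # each row keeps its 2 largest-magnitude values
```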