tianyi-lab / MoE-Embedding
Code for "Your Mixture-of-Experts LLM Is Secretly an Embedding Model For Free"
☆34 · Updated 3 weeks ago
Related projects
Alternatives and complementary repositories for MoE-Embedding
- The code implementation of MAGDi: Structured Distillation of Multi-Agent Interaction Graphs Improves Reasoning in Smaller Language Models…☆30 · Updated 9 months ago
- DSBench: How Far Are Data Science Agents from Becoming Data Science Experts?☆34 · Updated 3 weeks ago
- ☆57 · Updated last month
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment☆46 · Updated 2 months ago
- ☆40 · Updated this week
- This is the official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data"☆17 · Updated 8 months ago
- A repository for research on medium-sized language models.☆74 · Updated 5 months ago
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite.☆33 · Updated 8 months ago
- ☆26 · Updated 4 months ago
- ☆41 · Updated last month
- Minimal implementation of the "Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models" paper (arXiv 2401.01335)☆28 · Updated 8 months ago
- ☆21 · Updated last month
- Codebase for Instruction Following without Instruction Tuning