[ICLR 2025 Oral] "Your Mixture-of-Experts LLM Is Secretly an Embedding Model For Free"
☆92 · Oct 15, 2024 · Updated last year
Alternatives and similar repositories for MoE-Embedding
Users interested in MoE-Embedding are comparing it to the libraries listed below.
- [ICML 2025] Code for "R2-T2: Re-Routing in Test-Time for Multimodal Mixture-of-Experts" · ☆19 · Mar 10, 2025 · Updated last year
- [ICLR 2026] Fast-Slow Toolpath Agent with Subroutine Mining for Efficient Multi-turn Image Editing · ☆33 · Feb 6, 2026 · Updated 3 months ago
- [COLM 2025] "C3PO: Critical-Layer, Core-Expert, Collaborative Pathway Optimization for Test-Time Expert Re-Mixing" · ☆20 · Apr 9, 2025 · Updated last year
- Code for the paper "Optimizing Length Compression in Large Reasoning Models" · ☆28 · Oct 20, 2025 · Updated 6 months ago
- Cost-Sensitive Toolpath Agent for Multi-turn Image Editing · ☆31 · Mar 26, 2025 · Updated last year
- Code repo for the paper "Learning More Effective Representations for Dense Retrieval through Deliberate Thinking Before Searc…" · ☆27 · Mar 2, 2025 · Updated last year
- AdaRFT: Efficient Reinforcement Finetuning via Adaptive Curriculum Learning · ☆56 · Jun 13, 2025 · Updated 10 months ago
- [ICLR 2025] Making LLMs More Effective with Hierarchical Mixture of LoRA Experts · ☆31 · Oct 9, 2025 · Updated 6 months ago
- Enable Next-sentence Prediction for Large Language Models with Faster Speed, Higher Accuracy and Longer Context · ☆42 · Aug 16, 2024 · Updated last year
- ☆20 · Apr 8, 2025 · Updated last year
- UFT: Unifying Supervised and Reinforcement Fine-Tuning · ☆29 · Jun 30, 2025 · Updated 10 months ago
- [ACL 2025 Findings] Implicit Reasoning in Transformers is Reasoning through Shortcuts · ☆18 · Mar 11, 2025 · Updated last year
- 🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from the Perspective of Mixture-of-Experts with Post-Training · ☆93 · Dec 3, 2024 · Updated last year
- Use contrastive learning to train a large language model (LLM) as a retriever · ☆12 · Jul 19, 2024 · Updated last year
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning · ☆33 · Jun 2, 2023 · Updated 2 years ago
- ☆67 · Aug 14, 2025 · Updated 8 months ago
- Official code for the paper "WALL-E: World Alignment by NeuroSymbolic Learning Improves World Model-based LLM Agents" · ☆57 · Dec 3, 2025 · Updated 5 months ago
- [SIGIR 2024] Pre-training with Bag-of-Word Prediction for Dense Passage Retrieval · ☆18 · Feb 29, 2024 · Updated 2 years ago
- [COLM 2025] Missing Premise Exacerbates Overthinking: Are Reasoning Models Losing Critical Thinking Skill? · ☆37 · Jun 5, 2025 · Updated 11 months ago
- Dataset used to evaluate skill-extraction systems based on the ESCO skills taxonomy · ☆17 · Jul 18, 2024 · Updated last year
- Code for the Piccolo embedding model from SenseTime · ☆144 · May 21, 2024 · Updated last year
- Set-Encoder: Permutation-Invariant Inter-Passage Attention for Listwise Passage Re-Ranking with Cross-Encoders · ☆18 · May 23, 2025 · Updated 11 months ago
- [ICML 2022] "Identity-Disentangled Adversarial Augmentation for Self-Supervised Learning" · ☆10 · Jul 24, 2022 · Updated 3 years ago
- The code implementation of Symbolic-MoE · ☆46 · Sep 2, 2025 · Updated 8 months ago
- ☆70 · Jun 16, 2022 · Updated 3 years ago
- A Multi-domain Benchmark for Personalized Search Evaluation · ☆12 · Sep 7, 2023 · Updated 2 years ago
- Official PyTorch implementation of CD-MOE · ☆12 · Mar 18, 2026 · Updated last month
- Official implementation of "Vector-ICL: In-context Learning with Continuous Vector Representations" (ICLR 2025) · ☆23 · Jun 2, 2025 · Updated 11 months ago
- Code repository for the paper "MrT5: Dynamic Token Merging for Efficient Byte-level Language Models" · ☆58 · Sep 25, 2025 · Updated 7 months ago
- [NAACL 2025] RuleR: Improving LLM Controllability by Rule-based Data Recycling · ☆14 · Sep 27, 2025 · Updated 7 months ago
- GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embeddings · ☆45 · Mar 6, 2024 · Updated 2 years ago
- Implementation for the ACL 2024 paper "Meta-Task Prompting Elicits Embeddings from Large Language Models" · ☆12 · Jul 25, 2024 · Updated last year
- ☆11 · Nov 23, 2024 · Updated last year
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment · ☆404 · Apr 29, 2024 · Updated 2 years ago
- This PyTorch package implements MoEBERT: From BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022) · ☆114 · May 2, 2022 · Updated 4 years ago
- CLIP-MoE: Mixture of Experts for CLIP · ☆58 · Oct 10, 2024 · Updated last year
- [EMNLP 2024] Tree of Problems: Improving Structured Problem Solving with Compositionality · ☆19 · Mar 4, 2025 · Updated last year
- Personalized Image Generation with Large Multimodal Models · ☆15 · May 13, 2025 · Updated 11 months ago
- ✨✨VITA: Towards Open-Source Interactive Omni Multimodal LLM · ☆11 · Jun 16, 2025 · Updated 10 months ago