tianyi-lab / MoE-Embedding
[ICLR 2025 Oral] "Your Mixture-of-Experts LLM Is Secretly an Embedding Model For Free"
☆90 · Oct 15, 2024 · Updated last year
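The headline paper's premise is that an MoE LLM's router activations can serve as text embeddings without any extra training. The sketch below is a self-contained toy illustration of that idea only, not the authors' actual method: the router weights are random stand-ins, token states are random vectors, and mean pooling plus L2 normalization are assumptions chosen for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

D, E = 16, 8                        # hidden size, number of experts (toy values)
W_gate = rng.normal(size=(D, E))    # stand-in router weights (random, not a real model)

def routing_weights(tokens):
    """Softmax router scores per token: (T, D) -> (T, E)."""
    logits = tokens @ W_gate
    exp = np.exp(logits - logits.max(axis=-1, keepdims=True))  # stable softmax
    return exp / exp.sum(axis=-1, keepdims=True)

def moe_embedding(tokens):
    """Mean-pool per-token routing weights into one vector, then L2-normalize."""
    g = routing_weights(tokens).mean(axis=0)
    return g / np.linalg.norm(g)

# Two toy "sentences" as random token hidden states.
a = rng.normal(size=(5, D))
b = rng.normal(size=(7, D))

sim = float(moe_embedding(a) @ moe_embedding(b))  # cosine similarity in [-1, 1]
```

In a real MoE model, `tokens` would be the hidden states feeding each MoE layer's gate, and the paper's actual recipe for combining routing weights (and hidden states) across layers may differ from this simple mean pooling.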
Alternatives and similar repositories for MoE-Embedding
Users interested in MoE-Embedding are comparing it to the repositories listed below.
- [ICML 2025] Code for "R2-T2: Re-Routing in Test-Time for Multimodal Mixture-of-Experts" ☆19 · Mar 10, 2025 · Updated 11 months ago
- [ICLR 2026] Fast-Slow Toolpath Agent with Subroutine Mining for Efficient Multi-turn Image Editing ☆29 · Feb 6, 2026 · Updated last week
- Code for paper: Optimizing Length Compression in Large Reasoning Models ☆27 · Oct 20, 2025 · Updated 3 months ago
- [COLM 2025] "C3PO: Critical-Layer, Core-Expert, Collaborative Pathway Optimization for Test-Time Expert Re-Mixing" ☆20 · Apr 9, 2025 · Updated 10 months ago
- A full-stack AI-powered business intelligence tool for non-experts, featuring serverless backend processing and a secure Streamlit fronte… ☆25 · Jan 6, 2026 · Updated last month
- ☆20 · Apr 8, 2025 · Updated 10 months ago
- The open-source materials for paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity". ☆30 · Nov 12, 2024 · Updated last year
- ☆29 · May 22, 2025 · Updated 8 months ago
- This is the code repo for our paper "Learning More Effective Representations for Dense Retrieval through Deliberate Thinking Before Searc… ☆27 · Mar 2, 2025 · Updated 11 months ago
- [NeurIPS'25] ColorBench: Can VLMs See and Understand the Colorful World? A Comprehensive Benchmark for Color Perception, Reasoning, and R… ☆30 · Sep 27, 2025 · Updated 4 months ago
- The code implementation of Symbolic-MoE ☆46 · Sep 2, 2025 · Updated 5 months ago
- Official PyTorch implementation of CD-MOE ☆12 · Mar 29, 2025 · Updated 10 months ago
- A project that applies generative AI to marketing strategy for e-commerce customer segmentation ☆12 · Mar 19, 2024 · Updated last year
- ☆11 · Aug 26, 2024 · Updated last year
- [NAACL'25] "Revealing the Barriers of Language Agents in Planning" ☆13 · Jun 22, 2025 · Updated 7 months ago
- AdaRFT: Efficient Reinforcement Finetuning via Adaptive Curriculum Learning ☆54 · Jun 13, 2025 · Updated 8 months ago
- CLIP-MoE: Mixture of Experts for CLIP ☆55 · Oct 10, 2024 · Updated last year
- The official implementation of HybridNorm: Towards Stable and Efficient Transformer Training via Hybrid Normalization ☆18 · Mar 7, 2025 · Updated 11 months ago
- Various agents from all of the top agent frameworks to integrate into swarms! Langchain, Griptape, CrewAI, and more! ☆18 · Dec 22, 2025 · Updated last month
- [ACL 2024 Findings] Light-PEFT: Lightening Parameter-Efficient Fine-Tuning via Early Pruning ☆13 · Sep 2, 2024 · Updated last year
- ☆20 · Oct 22, 2025 · Updated 3 months ago
- [ICCV 2025] Diffusion Curriculum (DisCL) ☆17 · Sep 26, 2025 · Updated 4 months ago
- [NAACL'25 🏆 SAC Award] Official code for "Advancing MoE Efficiency: A Collaboration-Constrained Routing (C2R) Strategy for Better Expert… ☆14 · Feb 4, 2025 · Updated last year
- Fast and memory-efficient exact attention ☆18 · Jan 23, 2026 · Updated 3 weeks ago
- ☆29 · May 24, 2024 · Updated last year
- 🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training ☆91 · Dec 3, 2024 · Updated last year
- ☆56 · Nov 6, 2024 · Updated last year
- ☆16 · Dec 9, 2023 · Updated 2 years ago
- Dataset used to evaluate Skill Extraction systems based on the ESCO skills taxonomy. ☆17 · Jul 18, 2024 · Updated last year
- ☆19 · Jun 4, 2025 · Updated 8 months ago
- An LLM-powered Slack bot that uses an OpenAI-compatible API backend (LlamaCPP, Ollama, etc.) ☆12 · Updated this week
- KV Cache Steering for Inducing Reasoning in Small Language Models ☆46 · Jul 24, 2025 · Updated 6 months ago
- ☆21 · Jul 21, 2025 · Updated 6 months ago
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning ☆33 · Jun 2, 2023 · Updated 2 years ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆42 · Dec 29, 2025 · Updated last month
- ☆35 · May 16, 2025 · Updated 8 months ago
- Set-Encoder: Permutation-Invariant Inter-Passage Attention for Listwise Passage Re-Ranking with Cross-Encoders ☆18 · May 23, 2025 · Updated 8 months ago
- Use contrastive learning to train a large language model (LLM) as a retriever ☆12 · Jul 19, 2024 · Updated last year
- A Wikipedia-based summarization dataset ☆14 · Mar 27, 2023 · Updated 2 years ago