tianyi-lab / MoE-Embedding
Code for "Your Mixture-of-Experts LLM Is Secretly an Embedding Model For Free"
☆43 · Updated 4 months ago
Alternatives and similar repositories for MoE-Embedding:
Users interested in MoE-Embedding are comparing it to the libraries listed below.
- The code implementation of MAGDi: Structured Distillation of Multi-Agent Interaction Graphs Improves Reasoning in Smaller Language Models… ☆31 · Updated last year
- Codebase for Instruction Following without Instruction Tuning ☆33 · Updated 4 months ago
- ☆59 · Updated this week
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆54 · Updated 5 months ago
- ☆13 · Updated 2 months ago
- Code for the paper "Harnessing Webpage UIs for Text-Rich Visual Understanding" ☆46 · Updated 2 months ago
- Codebase accompanying the Summary of a Haystack paper. ☆74 · Updated 4 months ago
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models ☆107 · Updated 9 months ago
- Long Context Extension and Generalization in LLMs ☆48 · Updated 4 months ago
- Codes and datasets for the paper Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Ref… ☆41 · Updated this week
- ☆74 · Updated last month
- ☆23 · Updated 4 months ago
- [ICLR 2025] InstructRAG: Instructing Retrieval-Augmented Generation via Self-Synthesized Rationales ☆71 · Updated last week
- Aioli: A unified optimization framework for language model data mixing ☆20 · Updated 3 weeks ago
- DSBench: How Far Are Data Science Agents from Becoming Data Science Experts? ☆43 · Updated 3 months ago
- Code for EMNLP 2024 paper "Learn Beyond The Answer: Training Language Models with Reflection for Mathematical Reasoning" ☆52 · Updated 4 months ago
- ☆60 · Updated last week
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆46 · Updated last year
- Minimal implementation of the paper "Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models" (arXiv:2401.01335) ☆29 · Updated 11 months ago
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective ☆58 · Updated 3 months ago
- Source code of our paper "PairDistill: Pairwise Relevance Distillation for Dense Retrieval", EMNLP 2024 Main. ☆21 · Updated 2 months ago
- Official implementation of Hierarchical Context Merging: Better Long Context Understanding for Pre-trained LLMs (ICLR 2024). ☆38 · Updated 6 months ago
- ☆48 · Updated 3 months ago
- [ICLR 2025] SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights ☆52 · Updated this week
- Official implementation for "Law of the Weakest Link: Cross Capabilities of Large Language Models" ☆42 · Updated 4 months ago
- Official implementation of "Gemini in Reasoning: Unveiling Commonsense in Multimodal Large Language Models" ☆35 · Updated last year
- Improving Text Embedding of Language Models Using Contrastive Fine-tuning ☆59 · Updated 6 months ago
- A curated list of the role of small models in the LLM era ☆90 · Updated 4 months ago
- Public code repo for paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ☆98 · Updated 4 months ago