The code for "MoPE: Mixture of Prefix Experts for Zero-Shot Dialogue State Tracking"
☆19 · Updated Jan 25, 2025
Alternatives and similar repositories for MoPE-DST
Users interested in MoPE-DST are comparing it to the repositories listed below.
- Official Implementation of "Learning to Refuse: Towards Mitigating Privacy Risks in LLMs" ☆10 · Updated Dec 13, 2024
- [COLING 2025] NesTools: A Dataset for Evaluating Nested Tool Learning Abilities of Large Language Models ☆18 · Updated Jan 18, 2025
- Solving Token Gradient Conflict in Mixture-of-Experts for Large Vision-Language Model ☆13 · Updated Feb 11, 2025
- 🍼 Official implementation of Dynamic Data Mixing Maximizes Instruction Tuning for Mixture-of-Experts ☆41 · Updated Sep 29, 2024
- [ICML 2025] Code for "R2-T2: Re-Routing in Test-Time for Multimodal Mixture-of-Experts" ☆19 · Updated Mar 10, 2025
- Code and data for "Timo: Towards Better Temporal Reasoning for Language Models" (COLM 2024) ☆26 · Updated Oct 23, 2024
- Official Implementation of "Probing Language Models for Pre-training Data Detection" ☆20 · Updated Dec 4, 2024
- A query graph generation method for complex Chinese questions, together with a Chinese knowledge graph question answering dataset containing multiple types of complex questions ☆18 · Updated Mar 16, 2023
- Code for "M4: Multi-Proxy Multi-Gate Mixture of Experts Network for Multiple Instance Learning in Histopathology Image Analysis" ☆14 · Updated Mar 31, 2025
- For new students who have just joined an NLP group ☆27 · Updated Nov 4, 2017
- CRAI is a multimodal large language model based on the Mixture of Experts (MoE) architecture, supporting text and image cross-modal tasks… ☆16 · Updated Apr 29, 2025
- MoE-Visualizer is a tool designed to visualize the selection of experts in Mixture-of-Experts (MoE) models. ☆16 · Updated Apr 8, 2025
- A general human-AI interaction platform. ☆17 · Updated Jan 18, 2025
- Scaling Laws for Mixture of Experts Models ☆15 · Updated Feb 25, 2025
- ☆17 · Updated Nov 26, 2024
- Mamba R1 represents a novel architecture that combines the efficiency of Mamba's state space models with the scalability of Mixture of Ex… ☆25 · Updated Oct 13, 2025
- Data pre-processing and training code on Open-X-Embodiment with PyTorch ☆11 · Updated Jan 20, 2025
- 🌳 CED: Catalog Extraction from Documents ☆16 · Updated Jul 30, 2023
- The source code and dataset mentioned in the paper "Seal-Tools: Self-Instruct Tool Learning Dataset for Agent Tuning and Detailed Benchmar… ☆54 · Updated Nov 5, 2024
- ☆42 · Updated Oct 23, 2025
- 🪞 A powerful toolkit for almost all Information Extraction tasks. ☆124 · Updated Apr 21, 2025
- Mixture-of-Experts Multimodal Variational Autoencoder ☆15 · Updated Jul 3, 2025
- [ICLR 2025] Drop-Upcycling: Training Sparse Mixture of Experts with Partial Re-initialization ☆24 · Updated Oct 5, 2025
- A summarized list of papers on conversational recommender systems and related areas. ☆15 · Updated Sep 19, 2023
- [NAACL'25 🏆 SAC Award] Official code for "Advancing MoE Efficiency: A Collaboration-Constrained Routing (C2R) Strategy for Better Expert… ☆16 · Updated Feb 4, 2025
- Official code for "Efficient Residual Learning with Mixture-of-Experts for Universal Dexterous Grasping" (ICLR 2025) ☆28 · Updated Oct 25, 2025
- Official repository for "Reasoning in the Dark: Interleaved Vision-Text Reasoning in Latent Space" ☆18 · Updated Jan 27, 2026
- Official implementation for KDD 2025 paper "GraphLoRA: Structure-Aware Contrastive Low-Rank Adaptation for Cross-Graph Transfer Learning" ☆22 · Updated Jul 10, 2025
- Prototype of MegaScale-Infer: Serving Mixture-of-Experts at Scale with Disaggregated Expert Parallelism ☆30 · Updated Apr 4, 2025
- DeepTrace: A lightweight, scalable real-time diagnostic and analysis tool for distributed training tasks. ☆18 · Updated Nov 4, 2025
- ☆17 · Updated Apr 9, 2025
- [CVPR 2025] Lifelong Knowledge Editing for Vision Language Models with Low-Rank Mixture-of-Experts ☆23 · Updated Jun 22, 2025
- Virtual machines for every use case on DigitalOcean • AdGet dependable uptime with 99.99% SLA, simple security tools, and predictable monthly pricing with DigitalOcean's virtual machines, called Droplets.
- Code for "Hierarchical Time-Aware Mixture of Experts for Multi-Modal Sequential Recommendation" (WWW 2025) ☆30 · Updated Jun 17, 2025
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆61 · Updated Feb 7, 2025
- ☆18 · Updated Sep 5, 2024
- Official Code for ACL 2023 paper: "Ethicist: Targeted Training Data Extraction Through Loss Smoothed Soft Prompting and Calibrated Confid… ☆23 · Updated May 8, 2023
- LibMoE: A Library for Comprehensive Benchmarking of Mixture of Experts in Large Language Models ☆49 · Updated this week
- Implementation of the ICLR 2025 paper "Mixture of Experts Made Personalized: Federated Prompt Learning for Vision-Language Models" ☆29 · Updated Apr 2, 2025
- ☆95 · Updated Oct 8, 2023