microsoft / MoPQ
☆12 · Updated 3 years ago
Alternatives and similar repositories for MoPQ
Users interested in MoPQ are comparing it to the libraries listed below.
- Inference framework for MoE layers based on TensorRT, with Python bindings ☆41 · Updated 4 years ago
- Official code for "Binary embedding based retrieval at Tencent" ☆43 · Updated last year
- This package implements THOR: Transformer with Stochastic Experts ☆65 · Updated 4 years ago
- Retrieval with Learned Similarities (http://arxiv.org/abs/2407.15462, WWW'25 Oral) ☆51 · Updated 6 months ago
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆40 · Updated last year
- [KDD'22] Learned Token Pruning for Transformers ☆101 · Updated 2 years ago
- Summary of systems papers/frameworks/code/tools for training or serving large models ☆57 · Updated last year
- ☆19 · Updated last year
- A memory-efficient DLRM training solution using ColossalAI ☆106 · Updated 2 years ago
- This PyTorch package implements MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022) ☆112 · Updated 3 years ago
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) ☆111 · Updated 7 months ago
- Implements BERT in pure C++ ☆36 · Updated 5 years ago
- Odysseus: Playground of LLM Sequence Parallelism ☆78 · Updated last year
- Repository of the LV-Eval benchmark ☆70 · Updated last year
- Block-sparse movement pruning ☆81 · Updated 4 years ago
- ☆74 · Updated 2 years ago
- ☆140 · Updated last year
- A MoE implementation for PyTorch, [ATC'23] SmartMoE ☆71 · Updated 2 years ago
- Are Intermediate Layers and Labels Really Necessary? A General Language Model Distillation Method; GKD: A General Knowledge Distillation… ☆32 · Updated 2 years ago
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode; faster than ZeRO/ZeRO++/FSDP ☆98 · Updated last year
- 🌱 DreamerGPT (梦想家): instruction fine-tuning of Chinese large language models ☆51 · Updated 2 years ago
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models (https://arxiv.org/abs/2204.00408) ☆197 · Updated 2 years ago
- Code for the preprint "Cache Me If You Can: How Many KVs Do You Need for Effective Long-Context LMs?" ☆47 · Updated 3 months ago
- Manages the vllm-nccl dependency ☆17 · Updated last year
- Implementation of a Quantized Transformer Model ☆19 · Updated 6 years ago
- An experiment on Dynamic NTK Scaling RoPE ☆64 · Updated last year
- ACL 2024 | LooGLE: Long Context Evaluation for Long-Context Language Models ☆187 · Updated last year
- Code for the Findings of EMNLP 2021 paper "EfficientBERT: Progressively Searching Multilayer Perceptron …" ☆33 · Updated 2 years ago
- Method to improve BERT inference time; an implementation of the paper "PoWER-BERT: Accelerating BERT Inference via Pro…" ☆62 · Updated last month
- ☆20 · Updated 6 months ago