microsoft / MoPQ
☆12 · Updated 3 years ago
Alternatives and similar repositories for MoPQ
Users interested in MoPQ are comparing it to the libraries listed below.
- Official code for "Binary embedding based retrieval at Tencent" ☆43 · Updated last year
- Retrieval with Learned Similarities (http://arxiv.org/abs/2407.15462, WWW'25 Oral) ☆48 · Updated 4 months ago
- Inference framework for MoE layers based on TensorRT with Python binding ☆41 · Updated 4 years ago
- Summary of system papers/frameworks/codes/tools on training or serving large models ☆57 · Updated last year
- [KDD'22] Learned Token Pruning for Transformers ☆99 · Updated 2 years ago
- Repository of LV-Eval Benchmark ☆70 · Updated last year
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆40 · Updated last year
- Code for the preprint "Cache Me If You Can: How Many KVs Do You Need for Effective Long-Context LMs?" ☆44 · Updated last month
- Odysseus: Playground of LLM Sequence Parallelism ☆77 · Updated last year
- Distributed DataLoader for PyTorch based on Ray ☆24 · Updated 3 years ago
- ☆74 · Updated 2 years ago
- This package implements THOR: Transformer with Stochastic Experts. ☆65 · Updated 3 years ago
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆123 · Updated 7 months ago
- ☆19 · Updated last year
- Manages vllm-nccl dependency ☆17 · Updated last year
- ☆20 · Updated last year
- BERT implemented in pure C++ ☆36 · Updated 5 years ago
- A memory efficient DLRM training solution using ColossalAI ☆106 · Updated 2 years ago
- Block Sparse movement pruning ☆81 · Updated 4 years ago
- An Experiment on Dynamic NTK Scaling RoPE ☆64 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆69 · Updated 2 years ago
- Dynamic Context Selection for Efficient Long-Context LLMs ☆40 · Updated 3 months ago
- Train LLMs (bloom, llama, baichuan2-7b, chatglm3-6b) with DeepSpeed pipeline mode. Faster than ZeRO/ZeRO++/FSDP. ☆98 · Updated last year
- ☆20 · Updated 4 months ago
- Source code of the paper "KVSharer: Efficient Inference via Layer-Wise Dissimilar KV Cache Sharing" ☆28 · Updated 10 months ago
- HNSW implemented in Python ☆69 · Updated 6 years ago
- A MoE impl for PyTorch, [ATC'23] SmartMoE ☆67 · Updated 2 years ago
- Vocabulary Parallelism ☆21 · Updated 5 months ago
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod… ☆29 · Updated last year
- ☆11 · Updated 2 years ago