inclusionAI / asystem-astate
☆32 · Updated last month
Alternatives and similar repositories for asystem-astate
Users interested in asystem-astate are comparing it to the libraries listed below.
- Research prototype of PRISM, a cost-efficient multi-LLM serving system with flexible time- and space-based GPU sharing. ☆54 · Updated 5 months ago
- ☆73 · Updated 4 months ago
- A NCCL extension library, designed to efficiently offload GPU memory allocated by the NCCL communication library. ☆87 · Updated last month
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆264 · Updated last month
- ☆340 · Updated 3 weeks ago
- Stateful LLM Serving ☆95 · Updated 10 months ago
- High performance Transformer implementation in C++. ☆148 · Updated last year
- Scalable long-context LLM decoding that leverages sparsity by treating the KV cache as a vector storage system. ☆119 · Updated 3 weeks ago
- ☆130 · Updated last year
- ☆83 · Updated 3 months ago
- ☆164 · Updated 6 months ago
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank ☆67 · Updated last year
- ☆75 · Updated last year
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable ☆208 · Updated last year
- [Archived] For the latest updates and community contribution, please visit: https://github.com/Ascend/TransferQueue or https://gitcode.co… ☆13 · Updated 2 weeks ago
- KV cache store for distributed LLM inference ☆389 · Updated 2 months ago
- Allow torch tensor memory to be released and resumed later (a generic release/resume sketch appears after this list). ☆207 · Updated 2 weeks ago
- DeepXTrace is a lightweight tool for precisely diagnosing slow ranks in DeepEP-based environments. ☆91 · Updated 2 weeks ago
- [NeurIPS 2025] ClusterFusion: Expanding Operator Fusion Scope for LLM Inference via Cluster-Level Collective Primitive ☆66 · Updated last month
- Utility scripts for PyTorch (e.g. Make Perfetto show some disappearing kernels, Memory profiler that understands more low-level allocatio… ☆82 · Updated 4 months ago
- High-performance distributed data shuffling (all-to-all) library for MoE training and inference ☆109 · Updated last month
- A high-performance RL training-inference weight synchronization framework, designed to enable second-level parameter updates from trainin… ☆129 · Updated last month
- A framework for generating realistic LLM serving workloads ☆98 · Updated 3 months ago
- Nex Venus Communication Library ☆72 · Updated 2 months ago
- A lightweight design for computation-communication overlap (a stream-overlap sketch appears after this list). ☆213 · Updated last week
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit ☆92 · Updated this week
- Building the Virtuous Cycle for AI-driven LLM Systems ☆140 · Updated this week
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆134 · Updated last year
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆123 · Updated last month
- Preview Code for Continuum Paper ☆30 · Updated 3 weeks ago
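
For the "release and resume" entry above, here is a minimal, hypothetical sketch of the general idea using only standard PyTorch storage calls. It is not that library's API; the `release`/`resume` helpers are illustrative names, and the sketch assumes a CUDA tensor that owns its storage.

```python
import torch

# Hypothetical sketch: free a CUDA tensor's memory by shrinking its storage,
# keeping a pinned host copy, then re-allocate and restore it later.
# Not the API of the repository listed above; helper names are illustrative.
def release(tensor: torch.Tensor) -> torch.Tensor:
    """Copy contents to pinned host memory and free the device allocation."""
    host_copy = torch.empty(tensor.shape, dtype=tensor.dtype, pin_memory=True)
    host_copy.copy_(tensor)
    tensor.untyped_storage().resize_(0)   # releases the CUDA memory block
    return host_copy

def resume(tensor: torch.Tensor, host_copy: torch.Tensor) -> None:
    """Re-allocate the device storage and copy the saved contents back."""
    nbytes = host_copy.numel() * host_copy.element_size()
    tensor.untyped_storage().resize_(nbytes)
    tensor.copy_(host_copy, non_blocking=True)

if __name__ == "__main__" and torch.cuda.is_available():
    x = torch.randn(1024, 1024, device="cuda")
    saved = release(x)   # GPU memory backing x is now freed
    resume(x, saved)     # x is usable again with its original contents
```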
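
Similarly, for the computation-communication overlap entry, here is an assumed illustration of the basic pattern using two CUDA streams in plain PyTorch, with an asynchronous device-to-host copy standing in for a communication call; it does not reflect that repository's actual design.

```python
import torch

# Assumed illustration of compute/communication overlap with CUDA streams;
# the async D2H copy below stands in for a communication call.
assert torch.cuda.is_available()

x = torch.randn(4096, 4096, device="cuda")
w = torch.randn(4096, 4096, device="cuda")
host_buf = torch.empty(x.shape, dtype=x.dtype, pin_memory=True)

copy_stream = torch.cuda.Stream()
copy_stream.wait_stream(torch.cuda.current_stream())  # x must be produced first

with torch.cuda.stream(copy_stream):
    # "Communication": asynchronous device-to-host transfer on its own stream.
    host_buf.copy_(x, non_blocking=True)

# Compute proceeds on the default stream, overlapping with the copy above.
y = w @ x

# Rejoin the streams before host_buf or x is reused.
torch.cuda.current_stream().wait_stream(copy_stream)
torch.cuda.synchronize()
print(y.shape, host_buf.shape)
```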