linkedin / fmchisel
fmchisel: Efficient Compression and Training Algorithms for Foundation Models
☆81 · Updated 2 months ago
Alternatives and similar repositories for fmchisel
Users interested in fmchisel are comparing it to the libraries listed below.
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆375 · Updated this week
- LLM Serving Performance Evaluation Harness ☆82 · Updated 10 months ago
- An early-stage research expert-parallel load balancer for MoE models based on linear programming. ☆491 · Updated 2 months ago
- [ICLR2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆136 · Updated last year
- ☆48 · Updated last year
- KV cache compression for high-throughput LLM inference ☆149 · Updated 11 months ago
- A minimal implementation of vLLM. ☆66 · Updated last year
- Accelerating MoE with IO and Tile-aware Optimizations ☆542 · Updated last week
- [ICLR2025 Spotlight] MagicPIG: LSH Sampling for Efficient LLM Generation ☆245 · Updated last year
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆248 · Updated 8 months ago
- Miles is an enterprise-facing reinforcement learning framework for large-scale MoE post-training and production workloads, forked from an… ☆744 · Updated this week
- Autonomous GPU Kernel Generation via Deep Agents ☆217 · Updated this week
- Efficient Long-context Language Model Training by Core Attention Disaggregation ☆79 · Updated 3 weeks ago
- Code for data-aware compression of DeepSeek models ☆68 · Updated last month
- Cataloging released Triton kernels. ☆287 · Updated 4 months ago
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆251 · Updated this week
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆261 · Updated last month
- [ICLR'25] Fast Inference of MoE Models with CPU-GPU Orchestration ☆255 · Updated last year
- torchcomms: a modern PyTorch communications API ☆321 · Updated this week
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable ☆207 · Updated last year
- Ship correct and fast LLM kernels to PyTorch ☆132 · Updated last week
- Easy, Fast, and Scalable Multimodal AI ☆92 · Updated this week
- Distributed MoE in a Single Kernel [NeurIPS '25] ☆183 · Updated this week
- Efficient LLM Inference over Long Sequences ☆393 · Updated 6 months ago
- Perplexity GPU Kernels ☆553 · Updated 2 months ago
- PyTorch library for cost-effective, fast, and easy serving of MoE models ☆275 · Updated 3 months ago
- LLM KV cache compression made easy ☆799 · Updated last week
- 🔥 LLM-powered GPU kernel synthesis: Train models to convert PyTorch ops into optimized Triton kernels via SFT+RL. Multi-turn compilation… ☆112 · Updated 2 months ago
- ☆45 · Updated 10 months ago
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆306 · Updated 7 months ago