Implementation for IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024).
☆25 · Feb 22, 2026 · Updated last month
Alternatives and similar repositories for IceFormer
Users that are interested in IceFormer are comparing it to the libraries listed below.
- This is the CUDA GPU implementation + Python interface (using PyTorch) of DCI. The paper can be found at https://arxiv.org/abs/1512.00442… · ☆13 · Dec 20, 2023 · Updated 2 years ago
- Longitudinal Evaluation of LLMs via Data Compression · ☆33 · May 29, 2024 · Updated last year
- Benchmark tests supporting the TiledCUDA library. · ☆18 · Nov 19, 2024 · Updated last year
- ☆14 · Mar 22, 2024 · Updated 2 years ago
- A high-throughput and memory-efficient inference and serving engine for LLMs · ☆17 · Jun 3, 2024 · Updated last year
- [ICML '25] Official code for the paper "Occult: Optimizing Collaborative Communication across Experts for Accelerated Parallel MoE Training an… · ☆13 · Apr 17, 2025 · Updated 11 months ago
- Visual Concept Connectome · ☆15 · Jun 23, 2024 · Updated last year
- Optimizing the Deployment of Tiny Transformers on Low-Power MCUs · ☆35 · Sep 2, 2024 · Updated last year
- Step-by-step SGEMM optimization with CUDA · ☆22 · Mar 23, 2024 · Updated 2 years ago
- Mediapipe 0.10.1 with CUDA GPU support, Python libs · ☆10 · Dec 1, 2023 · Updated 2 years ago
- ☆13 · Jan 7, 2025 · Updated last year
- ☆72 · Mar 26, 2025 · Updated last year
- Running inference on the ZeroSCROLLS benchmark · ☆22 · Apr 18, 2024 · Updated last year
- Whisper in TensorRT-LLM · ☆17 · Sep 21, 2023 · Updated 2 years ago
- Multiple GEMM operators constructed with CUTLASS to support LLM inference. · ☆20 · Aug 3, 2025 · Updated 8 months ago
- QAQ: Quality Adaptive Quantization for LLM KV Cache · ☆53 · Mar 27, 2024 · Updated 2 years ago
- An open-source time series library for Python implementing the Matrix Profile · ☆23 · Jul 31, 2018 · Updated 7 years ago
- Loop Nest - Linear algebra compiler and code generator · ☆20 · Oct 22, 2022 · Updated 3 years ago
- ☆16 · Mar 13, 2023 · Updated 3 years ago
- Official code for the paper "HEXA-MoE: Efficient and Heterogeneous-Aware MoE Acceleration with Zero Computation Redundancy" · ☆15 · Mar 6, 2025 · Updated last year
- ☆310 · Jul 10, 2025 · Updated 9 months ago
- An efficient spatial accelerator enabling hybrid sparse attention mechanisms for long sequences · ☆32 · Mar 7, 2024 · Updated 2 years ago
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs · ☆123 · Jul 4, 2025 · Updated 9 months ago
- Official repository of "Distort, Distract, Decode: Instruction-Tuned Model Can Refine its Response from Noisy Instructions", ICLR 2024 Sp… · ☆21 · Mar 7, 2024 · Updated 2 years ago
- Beyond KV Caching: Shared Attention for Efficient LLMs · ☆20 · Jul 19, 2024 · Updated last year
- ☆12 · Mar 21, 2024 · Updated 2 years ago
- The source code and dataset mentioned in the paper Seal-Tools: Self-Instruct Tool Learning Dataset for Agent Tuning and Detailed Benchmar… · ☆53 · Nov 5, 2024 · Updated last year
- A Python package mapping 2D coordinates to colors based on different 2D color maps. · ☆16 · Dec 4, 2025 · Updated 4 months ago
- Segment Anything (SAM) at Home web app using Gradio · ☆14 · Aug 7, 2023 · Updated 2 years ago
- Standalone Flash Attention v2 kernel without libtorch dependency · ☆113 · Sep 10, 2024 · Updated last year
- ☆39 · Oct 21, 2025 · Updated 5 months ago
- Proteus: A High-Throughput Inference-Serving System with Accuracy Scaling · ☆12 · Mar 7, 2024 · Updated 2 years ago
- ☆119 · May 19, 2025 · Updated 10 months ago
- Quantized Attention on GPU · ☆44 · Nov 22, 2024 · Updated last year
- FP8 flash attention implemented with the cutlass library on the Ada architecture · ☆82 · Aug 12, 2024 · Updated last year
- [ECCV 2024] Efficient Inference of Vision Instruction-Following Models with Elastic Cache · ☆43 · Jul 26, 2024 · Updated last year
- ☆14 · Apr 9, 2021 · Updated 5 years ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. · ☆46 · Jun 11, 2025 · Updated 10 months ago
- Python script for controlling the debug JTAG port of RISC-V cores · ☆15 · Mar 27, 2021 · Updated 5 years ago