Implementation for IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024).
☆ 25 · Feb 22, 2026 · Updated 2 months ago
Alternatives and similar repositories for IceFormer
Users that are interested in IceFormer are comparing it to the libraries listed below.
- Load and run Llama from safetensors files in C ☆ 15 · Oct 24, 2024 · Updated last year
- Accelerate a multi-head attention transformer model using HLS for FPGA ☆ 12 · Dec 7, 2023 · Updated 2 years ago
- Longitudinal Evaluation of LLMs via Data Compression ☆ 33 · May 29, 2024 · Updated last year
- Debug the ncnn source code directly on iOS, for easier understanding and analysis ☆ 13 · Apr 8, 2019 · Updated 7 years ago
- ☆ 15 · Mar 22, 2024 · Updated 2 years ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆ 17 · Jun 3, 2024 · Updated last year
- [ICML'25] Official code for the paper "Occult: Optimizing Collaborative Communication across Experts for Accelerated Parallel MoE Training an…" ☆ 13 · Apr 17, 2025 · Updated last year
- Optimizing the Deployment of Tiny Transformers on Low-Power MCUs ☆ 35 · Sep 2, 2024 · Updated last year
- SGEMM optimization with CUDA, step by step ☆ 22 · Mar 23, 2024 · Updated 2 years ago
- ☆ 72 · Mar 26, 2025 · Updated last year
- Running inference on the ZeroSCROLLS benchmark ☆ 22 · Apr 18, 2024 · Updated 2 years ago
- Whisper in TensorRT-LLM ☆ 17 · Sep 21, 2023 · Updated 2 years ago
- Multiple GEMM operators constructed with cutlass to support LLM inference ☆ 20 · Aug 3, 2025 · Updated 9 months ago
- QAQ: Quality Adaptive Quantization for LLM KV Cache ☆ 54 · Mar 27, 2024 · Updated 2 years ago
- Loop Nest - Linear algebra compiler and code generator ☆ 20 · Oct 22, 2022 · Updated 3 years ago
- ☆ 33 · May 26, 2024 · Updated last year
- ☆ 16 · Mar 13, 2023 · Updated 3 years ago
- ☆ 311 · Jul 10, 2025 · Updated 9 months ago
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆ 125 · Jul 4, 2025 · Updated 10 months ago
- Beyond KV Caching: Shared Attention for Efficient LLMs ☆ 20 · Jul 19, 2024 · Updated last year
- ☆ 18 · Jan 27, 2025 · Updated last year
- The source code and dataset from the paper "Seal-Tools: Self-Instruct Tool Learning Dataset for Agent Tuning and Detailed Benchmar…" ☆ 54 · Nov 5, 2024 · Updated last year
- Standalone Flash Attention v2 kernel without a libtorch dependency ☆ 113 · Sep 10, 2024 · Updated last year
- Implementation of RefinedetLite ☆ 29 · Dec 27, 2019 · Updated 6 years ago
- An implementation that supports yolov5s, yolov5m, yolov5l, and yolov5x ☆ 34 · Jun 22, 2022 · Updated 3 years ago
- [KDD21] Deep Learning Embeddings for Data Series Similarity Search ☆ 20 · Aug 5, 2021 · Updated 4 years ago
- ☆ 40 · Oct 21, 2025 · Updated 6 months ago
- Proteus: A High-Throughput Inference-Serving System with Accuracy Scaling ☆ 12 · Mar 7, 2024 · Updated 2 years ago
- Dumpy: A Compact and Adaptive Index for Large Data Series Collections (SIGMOD'23) ☆ 13 · Dec 12, 2023 · Updated 2 years ago
- ☆ 119 · May 19, 2025 · Updated 11 months ago
- Quantized Attention on GPU ☆ 44 · Nov 22, 2024 · Updated last year
- FP8 flash attention implemented with the cutlass library on the Ada architecture ☆ 81 · Aug 12, 2024 · Updated last year
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) ☆ 46 · Feb 28, 2026 · Updated 2 months ago
- Python script for controlling the debug JTAG port of RISC-V cores ☆ 15 · Mar 27, 2021 · Updated 5 years ago
- Nuclei AI Library Optimized for RISC-V Vector ☆ 15 · Oct 15, 2025 · Updated 6 months ago
- ☆ 35 · Dec 22, 2025 · Updated 4 months ago
- CUDA implementation of autoregressive linear attention, incorporating the latest research findings ☆ 46 · May 23, 2023 · Updated 2 years ago
- ☆ 20 · Mar 22, 2021 · Updated 5 years ago
- Code for the paper "SirLLM: Streaming Infinite Retentive LLM" ☆ 60 · May 28, 2024 · Updated last year