aredden / torch-bnb-fp4
Faster PyTorch bitsandbytes 4-bit FP4 nn.Linear ops
☆24 · Updated 10 months ago
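For context, the op this project speeds up is the 4-bit FP4 quantized linear layer from bitsandbytes. Below is a minimal baseline sketch using the standard `bitsandbytes` `Linear4bit` layer with `quant_type="fp4"`; torch-bnb-fp4's own drop-in API is not shown here, and the layer sizes and dtypes are illustrative assumptions.

```python
import torch
import bitsandbytes as bnb

# Baseline: a standard bitsandbytes 4-bit FP4 linear layer.
# torch-bnb-fp4 targets this same op with faster kernels.
layer = bnb.nn.Linear4bit(
    4096, 4096, bias=False,       # illustrative layer sizes
    compute_dtype=torch.float16,  # matmul computed in fp16
    quant_type="fp4",             # 4-bit floating-point quantization
)
layer = layer.cuda()              # moving to GPU quantizes the weights

x = torch.randn(1, 4096, dtype=torch.float16, device="cuda")
y = layer(x)                      # dequantize + fp16 matmul under the hood
print(y.shape)                    # torch.Size([1, 4096])
```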
Alternatives and similar repositories for torch-bnb-fp4:
Users interested in torch-bnb-fp4 are comparing it to the libraries listed below.
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆40 · Updated last year
- ☆83 · Updated 7 months ago
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆43 · Updated 6 months ago
- A library for unit scaling in PyTorch ☆118 · Updated last month
- Repository for CPU Kernel Generation for LLM Inference ☆25 · Updated last year
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆67 · Updated 7 months ago
- ☆157 · Updated last year
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆75 · Updated this week
- Fast low-bit matmul kernels in Triton ☆187 · Updated last week
- PyTorch half-precision GEMM lib w/ fused optional bias + optional relu/gelu ☆47 · Updated last month
- Experiment using Tangent to autodiff Triton ☆74 · Updated 11 months ago
- QuIP quantization ☆48 · Updated 10 months ago
- Boosting 4-bit inference kernels with 2:4 sparsity ☆64 · Updated 4 months ago
- This repository contains the experimental PyTorch native float8 training UX ☆219 · Updated 5 months ago
- Triton implementation of the HyperAttention algorithm ☆46 · Updated last year
- ☆56 · Updated 3 months ago
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory-efficient Transformers. ☆44 · Updated last year
- Triton kernels for Flux ☆17 · Updated 2 weeks ago
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆38 · Updated 10 months ago
- ☆52 · Updated last week
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs ☆219 · Updated this week
- ☆45 · Updated last year
- Here we will test various linear attention designs. ☆58 · Updated 8 months ago
- ☆96 · Updated 4 months ago
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆64 · Updated this week
- Code for "Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes" ☆27 · Updated 9 months ago
- ☆107 · Updated 3 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆64 · Updated 7 months ago
- ☆75 · Updated 6 months ago
- Patch convolution to avoid large GPU memory usage of Conv2D ☆81 · Updated 7 months ago