catid / cuda_float_compress
Python package for compressing floating-point PyTorch tensors
☆13 · Updated last year
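This page does not document cuda_float_compress's own API, so the sketch below is a hypothetical stand-in: it round-trips a float32 PyTorch tensor through zlib on the CPU purely to make the use case concrete. The compressor choice and tensor size are assumptions for illustration, not the library's actual interface, which is CUDA-accelerated.

```python
# Illustrative only: a lossless CPU round trip with zlib, NOT the
# cuda_float_compress API (assumed stand-in for demonstration).
import zlib

import torch

x = torch.randn(1_000_000, dtype=torch.float32)  # sample tensor

raw = x.numpy().tobytes()             # serialize tensor to raw bytes
packed = zlib.compress(raw, level=6)  # generic lossless byte compression

print(f"{len(raw)} -> {len(packed)} bytes "
      f"({len(raw) / len(packed):.2f}x)")  # near 1.0x on random floats

# decompress and verify an exact round trip
restored = torch.frombuffer(bytearray(zlib.decompress(packed)),
                            dtype=torch.float32)
assert torch.equal(x, restored)
```

As the printed ratio suggests, generic byte-level compressors gain little on floating-point data; closing that gap with bit-level, GPU-accelerated schemes is the niche this category of library targets.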
Alternatives and similar repositories for cuda_float_compress
Users interested in cuda_float_compress are comparing it to the libraries listed below.
- ☆47 · Updated last year
- Latent Large Language Models ☆19 · Updated last year
- Aana SDK is a powerful framework for building AI-enabled multimodal applications. ☆53 · Updated 3 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆130 · Updated last year
- MPI Code Generation through Domain-Specific Language Models ☆14 · Updated last year
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated last year
- Official code for "SWARM Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient" ☆147 · Updated 2 years ago
- ☆66 · Updated 8 months ago
- ☆52 · Updated last year
- Port of Facebook's LLaMA model in C/C++ ☆22 · Updated 2 years ago
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on extends the model's context limit ☆63 · Updated 2 years ago
- ☆63 · Updated last year
- Training hybrid models for dummies. ☆29 · Updated last month
- ☆50 · Updated last year
- ☆62 · Updated 2 years ago
- GoldFinch and other hybrid transformer components ☆45 · Updated last year
- GitHub repo for Peifeng's internship project ☆13 · Updated 2 years ago
- Make triton easier ☆49 · Updated last year
- ☆70 · Updated last year
- Scaling is a distributed training library and installable dependency designed to scale up neural networks, with a dedicated module for tr… ☆66 · Updated 3 weeks ago
- Data preparation code for CrystalCoder 7B LLM ☆45 · Updated last year
- ☆63 · Updated 10 months ago
- Code repository for the paper - "AdANNS: A Framework for Adaptive Semantic Search" ☆65 · Updated 2 years ago
- ScalarLM - a unified training and inference stack ☆94 · Updated 3 weeks ago
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆73 · Updated last year
- [WIP] Better (FP8) attention for Hopper ☆32 · Updated 9 months ago
- train with kittens! ☆63 · Updated last year
- Data preparation code for Amber 7B LLM ☆93 · Updated last year
- [ICML 2023] "Outline, Then Details: Syntactically Guided Coarse-To-Fine Code Generation", Wenqing Zheng, S P Sharan, Ajay Kumar Jaiswal, … ☆43 · Updated 2 years ago
- Linear Attention Sequence Parallelism (LASP) ☆87 · Updated last year