catid / cuda_float_compress
Python package for compressing floating-point PyTorch tensors
☆13 · Updated last year
Alternatives and similar repositories for cuda_float_compress
Users interested in cuda_float_compress are comparing it to the libraries listed below.
- Latent Large Language Models ☆19 · Updated last year
- Official code for "SWARM Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient" ☆149 · Updated 2 years ago
- Port of Facebook's LLaMA model in C/C++ ☆21 · Updated 2 years ago
- ☆92 · Updated last week
- ☆71 · Updated last year
- Demonstration that finetuning a RoPE model on sequences longer than its pre-training length extends the model's context limit ☆63 · Updated 2 years ago
- Docker image for NVIDIA GH200 machines - optimized for vLLM serving and HF Trainer finetuning ☆53 · Updated 11 months ago
- ☆63 · Updated last year
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated last year
- ☆52 · Updated last year
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆131 · Updated last year
- MPI Code Generation through Domain-Specific Language Models ☆14 · Updated last year
- ☆50 · Updated last year
- [ICML 2023] "Outline, Then Details: Syntactically Guided Coarse-To-Fine Code Generation", Wenqing Zheng, S P Sharan, Ajay Kumar Jaiswal, … ☆43 · Updated 2 years ago
- Code repository for the paper "AdANNS: A Framework for Adaptive Semantic Search" ☆66 · Updated 2 years ago
- ☆13 · Updated 2 years ago
- ☆62 · Updated 2 years ago
- ☆18 · Updated last year
- ☆39 · Updated 3 years ago
- A collection of lightweight interpretability scripts to understand how LLMs think ☆89 · Updated 2 weeks ago
- Simplex Random Feature attention, in PyTorch ☆75 · Updated 2 years ago
- ☆47 · Updated 2 years ago
- ☆34 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆70 · Updated 2 years ago
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP ☆141 · Updated 4 months ago
- Utilities for Training Very Large Models ☆58 · Updated last year
- A lightweight, user-friendly data-plane for LLM training. ☆38 · Updated 4 months ago
- ☆40 · Updated last year
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆73 · Updated last year
- Linear Attention Sequence Parallelism (LASP) ☆88 · Updated last year