catid / cuda_float_compress
Python package for compressing floating-point PyTorch tensors
☆13 · Updated last year
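For orientation, below is a minimal round-trip sketch of how a CUDA float-compression package like this is typically driven from PyTorch. The module name follows the repository name, but the `compress`/`decompress` entry points and the `error_bound` parameter are assumptions for illustration only; consult the repository README for the actual API.

```python
# Hypothetical usage sketch: the function names and the error_bound
# parameter are assumed for illustration, not the package's confirmed API.
import torch
import cuda_float_compress  # assumed import name, matching the repo

# A float32 tensor on the GPU, e.g. a gradient or activation buffer.
x = torch.randn(1024, 1024, device="cuda", dtype=torch.float32)

# Compress to a compact byte buffer; lossy float compressors usually
# expose some error-bound knob (name assumed here).
blob = cuda_float_compress.compress(x, error_bound=1e-4)

# Decompress and check the reconstruction error of the round trip.
y = cuda_float_compress.decompress(blob)
print("max abs error:", (x - y).abs().max().item())
```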
Alternatives and similar repositories for cuda_float_compress
Users interested in cuda_float_compress are comparing it to the libraries listed below.
- Official code for "SWARM Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient" ☆149 · Updated 2 years ago
- Latent Large Language Models ☆19 · Updated last year
- ☆92 · Updated last week
- ☆40 · Updated last year
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆131 · Updated last year
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on extends the model's context limit ☆63 · Updated 2 years ago
- ☆47 · Updated 2 years ago
- Cerule - A Tiny Mighty Vision Model ☆68 · Updated 2 months ago
- train with kittens! ☆63 · Updated last year
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP ☆141 · Updated 4 months ago
- ☆24 · Updated last year
- ☆18 · Updated last year
- Training hybrid models for dummies. ☆29 · Updated 3 months ago
- ☆22 · Updated last year
- Simplex Random Feature attention, in PyTorch ☆75 · Updated 2 years ago
- Samples of good AI-generated CUDA kernels ☆99 · Updated 8 months ago
- A collection of lightweight interpretability scripts to understand how LLMs think ☆89 · Updated 2 weeks ago
- DeMo: Decoupled Momentum Optimization ☆198 · Updated last year
- ☆50 · Updated last year
- ☆63 · Updated last year
- Advanced Ultra-Low Bitrate Compression Techniques for the LLaMA Family of LLMs ☆110 · Updated 2 years ago
- A tree-based prefix cache library that allows rapid creation of looms: hierarchical branching pathways of LLM generations ☆77 · Updated 11 months ago
- ☆34 · Updated last year
- ☆52 · Updated last year
- Standalone command-line tool for compiling Triton kernels ☆20 · Updated last year
- ☆27 · Updated last year
- A pipeline that uses API calls to convert unstructured data into structured training data, model-agnostically ☆32 · Updated last year
- inference code for mixtral-8x7b-32kseqlen ☆105 · Updated 2 years ago
- MPI Code Generation through Domain-Specific Language Models ☆14 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆70 · Updated 2 years ago