catid / cuda_float_compress
Python package for compressing floating-point PyTorch tensors
☆13 · Updated last year
Alternatives and similar repositories for cuda_float_compress
Users interested in cuda_float_compress are comparing it to the libraries listed below.
- Latent Large Language Models ☆19 · Updated last year
- Demonstration that fine-tuning a RoPE model on sequences longer than its pre-training length extends the model's context limit ☆63 · Updated 2 years ago
- Port of Facebook's LLaMA model in C/C++ ☆22 · Updated 2 years ago
- Official code for "SWARM Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient" ☆148 · Updated 2 years ago
- ☆39 · Updated 3 years ago
- ☆52 · Updated last year
- Simple high-throughput inference library ☆154 · Updated 7 months ago
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆104 · Updated 7 months ago
- Training hybrid models for dummies. ☆29 · Updated 2 months ago
- Code repository for the paper "AdANNS: A Framework for Adaptive Semantic Search" ☆66 · Updated 2 years ago
- ☆62 · Updated 2 years ago
- A tree-based prefix cache library that allows rapid creation of looms: hierarchical branching pathways of LLM generations. ☆77 · Updated 10 months ago
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated last year
- Very minimal (and stateless) agent framework ☆44 · Updated 11 months ago
- Fork of Flame repo for training of some new stuff in development ☆19 · Updated this week
- Make Triton easier ☆49 · Updated last year
- A collection of lightweight interpretability scripts to understand how LLMs think ☆74 · Updated last week
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆70 · Updated 2 years ago
- ☆18 · Updated last year
- Cerule - A Tiny Mighty Vision Model ☆68 · Updated last month
- new optimizer ☆20 · Updated last year
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP ☆141 · Updated 3 months ago
- inference code for mixtral-8x7b-32kseqlen ☆104 · Updated 2 years ago
- ☆50 · Updated last year
- LLMs as Collaboratively Edited Knowledge Bases ☆46 · Updated last year
- ☆63 · Updated last year
- RWKV-7: Surpassing GPT ☆103 · Updated last year
- [ICML 2023] "Outline, Then Details: Syntactically Guided Coarse-To-Fine Code Generation", Wenqing Zheng, S P Sharan, Ajay Kumar Jaiswal, … ☆43 · Updated 2 years ago
- GoldFinch and other hybrid transformer components ☆45 · Updated last year
- Framework-Agnostic RL Environments for LLM Fine-Tuning ☆40 · Updated last month