abacusai / gh200-llm
Docker image for NVIDIA GH200 machines - optimized for vLLM serving and HF Trainer fine-tuning
☆36 · Updated last week
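For context, serving on a machine like this with vLLM's offline Python API typically looks like the minimal sketch below; the model name and sampling settings are illustrative placeholders, not configuration shipped with this image.

```python
# Minimal vLLM offline-inference sketch. The model name and sampling
# parameters are placeholders for illustration, not values taken from
# the gh200-llm image itself.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")  # any HF-hosted model
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["What makes the GH200 good for LLM serving?"], params)
for out in outputs:
    print(out.outputs[0].text)  # generated completion for each prompt
```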
Alternatives and similar repositories for gh200-llm:
Users interested in gh200-llm are comparing it to the repositories listed below.
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆117 · Updated 3 months ago
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆44 · Updated this week
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated 4 months ago
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆268 · Updated last year
- Simple implementation of Speculative Sampling in NumPy for GPT-2; the accept/reject rule is sketched after this list. ☆91 · Updated last year
- Manage scalable open LLM inference endpoints in Slurm clusters ☆253 · Updated 7 months ago
- Data preparation code for Amber 7B LLM ☆85 · Updated 9 months ago
- Experiments on speculative sampling with Llama models ☆125 · Updated last year
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" ☆59 · Updated 4 months ago
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… ☆119 · Updated 6 months ago
- Code repository for the c-BTM paper ☆105 · Updated last year
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆218 · Updated last month
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆57 · Updated last month
- ☆49 · Updated 11 months ago
- KernelBench: Can LLMs Write GPU Kernels? - Benchmark with Torch -> CUDA problems ☆214 · Updated this week
- ☆192 · Updated 2 months ago
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… ☆147 · Updated last month
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆225 · Updated this week
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes. ☆82 · Updated last year
- QuIP quantization ☆51 · Updated 11 months ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆30 · Updated this week
- Some common Huggingface transformers in maximal update parametrization (µP) ☆79 · Updated 2 years ago
- ☆113 · Updated 5 months ago
- RWKV-7: Surpassing GPT ☆79 · Updated 3 months ago
- PB-LLM: Partially Binarized Large Language Models ☆151 · Updated last year
- PyTorch building blocks for the OLMo ecosystem ☆67 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆260 · Updated 4 months ago
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆109 · Updated 3 months ago
- ☆42 · Updated last year
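Several entries above (the NumPy Speculative Sampling repo and the Llama speculative-sampling experiments) build on the same core accept/reject rule. Below is a minimal, self-contained NumPy sketch of that single step; the draft and target distributions are made-up toy values, not real model outputs.

```python
# Toy sketch of one speculative-sampling accept/reject step over a small
# vocabulary. p_draft/p_target are stand-in distributions for illustration.
import numpy as np

rng = np.random.default_rng(0)

p_draft = np.array([0.5, 0.3, 0.2])    # draft model's next-token distribution
p_target = np.array([0.2, 0.5, 0.3])   # target model's distribution, same position

x = rng.choice(len(p_draft), p=p_draft)          # token proposed by the draft model

if rng.random() < min(1.0, p_target[x] / p_draft[x]):
    token = x                                    # accept the draft token
else:
    residual = np.maximum(p_target - p_draft, 0.0)
    residual /= residual.sum()                   # renormalized rejection distribution
    token = rng.choice(len(residual), p=residual)

print("sampled token id:", token)
```

On acceptance the draft token is kept essentially for free; on rejection, resampling from the residual distribution keeps the overall output distribution identical to sampling from the target model alone, which is what makes the speedup lossless.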