abacusai / gh200-llm
Docker image for NVIDIA GH200 machines, optimized for vLLM serving and Hugging Face Trainer fine-tuning
☆38 · Updated last month
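Since the image targets vLLM serving on GH200 nodes, a minimal sketch of offline inference with vLLM's Python API may help illustrate the intended workload; the model name and sampling settings below are assumptions for illustration, not part of the image itself.

```python
# Minimal vLLM offline-inference sketch (assumes vllm is installed in the image
# and the chosen model fits on the GH200's GPU memory; the model name is illustrative).
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct")   # load weights onto the GPU
params = SamplingParams(temperature=0.7, max_tokens=64)  # basic sampling configuration
outputs = llm.generate(["Summarize what a GH200 superchip is."], params)
print(outputs[0].outputs[0].text)                        # text of the first completion
```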
Alternatives and similar repositories for gh200-llm:
Users interested in gh200-llm are comparing it to the libraries listed below.
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆125 · Updated 4 months ago
- Collection of autoregressive model implementations ☆83 · Updated last month
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆272 · Updated last year
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆59 · Updated 2 months ago
- Code repository for the c-BTM paper ☆106 · Updated last year
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆155 · Updated 5 months ago
- ☆49 · Updated last year
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆226 · Updated 2 months ago
- ☆76 · Updated 8 months ago
- Experiment of using Tangent to autodiff triton ☆78 · Updated last year
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… ☆124 · Updated 7 months ago
- Repository for Sparse Finetuning of LLMs via modified version of the MosaicML llmfoundry ☆40 · Updated last year
- Comprehensive analysis of difference in performance of QLora, Lora, and Full Finetunes. ☆82 · Updated last year
- Load compute kernels from the Hub ☆107 · Updated this week
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆54 · Updated this week
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆35 · Updated this week
- train with kittens! ☆55 · Updated 5 months ago
- ring-attention experiments ☆128 · Updated 5 months ago
- ☆195 · Updated 3 months ago
- Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆81 · Updated last year
- ☆47 · Updated last week
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" ☆59 · Updated 5 months ago
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… ☆217 · Updated 11 months ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆186 · Updated 7 months ago
- Experiments on speculative sampling with Llama models ☆125 · Updated last year
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated 8 months ago
- Repo hosting codes and materials related to speeding LLMs' inference using token merging. ☆35 · Updated 11 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆253 · Updated 8 months ago
- Train your own SOTA deductive reasoning model ☆81 · Updated 3 weeks ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆168 · Updated 2 months ago