davisyoshida / jax-gptq
JAX implementation of the GPTQ quantization algorithm
☆10 · Updated 2 years ago
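For context on the subject repo: GPTQ quantizes a layer's weights one column at a time, using second-order (Hessian) information from calibration inputs to spread each column's rounding error over the columns not yet quantized. Below is a minimal sketch of that core update rule in JAX. It is an illustration of the algorithm's inner loop only, not jax-gptq's actual API; `gptq_quantize` and its arguments are hypothetical names, and the blocking and Cholesky reformulation used by the full algorithm are omitted.

```python
import jax.numpy as jnp

def gptq_quantize(W, H, bits=4, damp=0.01):
    """Simplified GPTQ sketch (hypothetical, not jax-gptq's API):
    per-column round-to-nearest quantization with error propagation
    through the inverse Hessian.

    W: (out_features, in_features) weight matrix
    H: (in_features, in_features) Hessian proxy, e.g. 2 * X @ X.T
       accumulated over calibration inputs X
    """
    qmax = 2 ** (bits - 1) - 1
    n = W.shape[1]
    # Dampen the diagonal so the inverse is well-conditioned.
    Hinv = jnp.linalg.inv(H + damp * jnp.mean(jnp.diag(H)) * jnp.eye(n))
    W = jnp.asarray(W, dtype=jnp.float32)
    Q = jnp.zeros_like(W)
    for i in range(n):
        col = W[:, i]
        # Symmetric per-column scale; epsilon guards all-zero columns.
        scale = jnp.maximum(jnp.max(jnp.abs(col)) / qmax, 1e-8)
        q = jnp.clip(jnp.round(col / scale), -qmax - 1, qmax) * scale
        Q = Q.at[:, i].set(q)
        # Spread this column's quantization error over the remaining columns.
        err = (col - q) / Hinv[i, i]
        W = W.at[:, i + 1:].add(-jnp.outer(err, Hinv[i, i + 1:]))
    return Q
```

The GPTQ paper adds lazy batched updates and a Cholesky factorization of the inverse Hessian for numerical stability; the loop above shows only the error-propagation idea.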
Alternatives and similar repositories for jax-gptq
Users interested in jax-gptq are comparing it to the repositories listed below
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… ☆15 · Updated 2 years ago
- Some common Huggingface transformers in maximal update parametrization (µP) ☆87 · Updated 3 years ago
- Simple repository for training small reasoning models ☆48 · Updated 11 months ago
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆79 · Updated last year
- Code and configs for "Asynchronous RLHF: Faster and More Efficient RL for Language Models" ☆68 · Updated 9 months ago
- Parallel Associative Scan for Language Models ☆18 · Updated 2 years ago
- Custom Triton kernels for training Karpathy's nanoGPT ☆19 · Updated last year
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆60 · Updated last year
- A flexible and efficient implementation of Flash Attention 2.0 for JAX, supporting multiple backends (GPU/TPU/CPU) and platforms (Triton/… ☆34 · Updated 10 months ago
- CUDA and Triton implementations of Flash Attention with SoftmaxN ☆73 · Updated last year
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆116 · Updated 2 years ago
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers ☆18 · Updated 6 months ago
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… ☆78 · Updated last year
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆86 · Updated 2 years ago
- ☆53 · Updated 2 years ago
- Machine Learning eXperiment Utilities ☆48 · Updated 6 months ago
- Make Triton easier ☆50 · Updated last year
- JORA: JAX Tensor-Parallel LoRA Library (ACL 2024) ☆36 · Updated last year
- Code for the note "NF4 Isn't Information Theoretically Optimal (and that's Good)" ☆21 · Updated 2 years ago
- Unofficial but efficient implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆92 · Updated 2 years ago
- ☆20 · Updated 2 years ago
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers" (NeurIPS 2023) ☆137 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆99 · Updated last year
- Experiments for efforts to train a new and improved T5 ☆76 · Updated last year
- ☆41 · Updated last year
- ☆13 · Updated last month
- Large-scale distributed model training strategy with Colossal-AI and Lightning AI ☆56 · Updated 2 years ago
- ☆92 · Updated last year
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on extends its context limit ☆63 · Updated 2 years ago
- ☆51 · Updated 2 years ago