salesforce/jaxformer
Minimal library to train LLMs on TPU in JAX with pjit().
☆280 · Updated last year
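To illustrate what the description refers to, here is a minimal sketch of compiling a sharded computation with JAX's `pjit()`. This is not jaxformer code: the mesh axis name `"data"`, the `forward` function, and the array shapes are illustrative assumptions, and on a single-device machine the sharding is a no-op.

```python
# A minimal, hypothetical sketch of partitioned compilation with pjit().
# Assumptions: a 1-D mesh named "data"; `forward` stands in for a model.
import jax
import jax.numpy as jnp
import numpy as np
from jax.experimental.pjit import pjit
from jax.sharding import Mesh, PartitionSpec as P

# Build a 1-D device mesh over whatever devices are available (CPU works).
mesh = Mesh(np.array(jax.devices()), axis_names=("data",))

def forward(x, w):
    return x @ w  # stand-in for a model's forward pass

# pjit compiles `forward` with x sharded by rows along the "data" axis
# and w replicated across all devices.
sharded_forward = pjit(
    forward,
    in_shardings=(P("data", None), P(None, None)),
    out_shardings=P("data", None),
)

with mesh:  # shardings resolve against the enclosing mesh
    x = jnp.ones((8, 4))
    w = jnp.ones((4, 2))
    y = sharded_forward(x, w)
```

In recent JAX releases `pjit` has been folded into `jax.jit` (which accepts the same `in_shardings`/`out_shardings` arguments), but the `jax.experimental.pjit` import still works.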
Alternatives and similar repositories for jaxformer:
Users interested in jaxformer are comparing it to the libraries listed below.
- CodeGen2 models for program synthesis ☆274 · Updated last year
- Repository for analysis and experiments in the BigCode project ☆117 · Updated 11 months ago
- Fast Inference Solutions for BLOOM ☆563 · Updated 4 months ago
- Fine-tune SantaCoder for Code/Text Generation ☆188 · Updated last year
- ☆268 · Updated last year
- GPTQ inference Triton kernel ☆295 · Updated last year
- Batched LoRAs ☆338 · Updated last year
- DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective ☆164 · Updated 3 weeks ago
- Used for adaptive human-in-the-loop evaluation of language and embedding models ☆306 · Updated last year
- The data processing pipeline for the Koala chatbot language model ☆117 · Updated last year
- ☆456 · Updated last year
- Scaling Data-Constrained Language Models ☆333 · Updated 4 months ago
- 🐙 OctoPack: Instruction Tuning Code Large Language Models ☆451 · Updated 2 weeks ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆184 · Updated 6 months ago
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks ☆541 · Updated 11 months ago
- Generative model for code infilling and synthesis ☆299 · Updated last year
- DSIR large-scale data selection framework for language model training ☆241 · Updated 10 months ago
- ☆539 · Updated 2 months ago
- Distributed trainer for LLMs ☆557 · Updated 9 months ago
- Official repository for LongChat and LongEval ☆519 · Updated 8 months ago
- A framework for the evaluation of autoregressive code generation language models ☆886 · Updated 3 months ago
- Salesforce open-source LLMs with 8k sequence length ☆717 · Updated 2 weeks ago
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data ☆794 · Updated 7 months ago
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆676 · Updated 6 months ago
- This is the official code for the paper CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning (Neur… ☆515 · Updated last month
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks ☆208 · Updated last year
- Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization) ☆461 · Updated 2 years ago
- Ongoing research training transformer models at scale ☆380 · Updated 6 months ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆296 · Updated last year
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… ☆215 · Updated 10 months ago