davisyoshida / easy-lora-and-gptq
JAX notebook showing how to LoRA + GPTQ arbitrary models
☆10 · Updated 2 years ago
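For context, the LoRA technique the notebook demonstrates amounts to freezing each base weight matrix W and training a small low-rank update B·A alongside it. Below is a minimal, self-contained sketch of that idea in plain JAX; the names (`init_lora`, `lora_apply`, `rank`, `scale`) are illustrative assumptions and are not the API of this repository or its notebook, which handles arbitrary models and GPTQ-quantized weights.

```python
# Minimal sketch of the LoRA idea in plain JAX (illustrative only, not this repo's API):
# the frozen base weight W stays fixed while a low-rank update (x @ A) @ B is trained.
import jax
import jax.numpy as jnp

def init_lora(key, d_in, d_out, rank=8):
    """Initialize a frozen base weight plus low-rank adapter factors A and B."""
    k_w, k_a = jax.random.split(key)
    base_w = jax.random.normal(k_w, (d_in, d_out)) * 0.02   # frozen base weight
    lora_a = jax.random.normal(k_a, (d_in, rank)) * 0.01    # trainable
    lora_b = jnp.zeros((rank, d_out))                        # trainable, zero-init so the adapter starts as a no-op
    return base_w, (lora_a, lora_b)

def lora_apply(x, base_w, lora_params, scale=1.0):
    """Forward pass: y = x @ W + scale * (x @ A) @ B."""
    lora_a, lora_b = lora_params
    return x @ base_w + scale * (x @ lora_a) @ lora_b

def loss_fn(lora_params, base_w, x, y):
    """Only lora_params are differentiated; base_w is treated as a constant."""
    preds = lora_apply(x, base_w, lora_params)
    return jnp.mean((preds - y) ** 2)

key = jax.random.PRNGKey(0)
base_w, lora_params = init_lora(key, d_in=16, d_out=16)
x = jax.random.normal(key, (4, 16))
y = jax.random.normal(key, (4, 16))
grads = jax.grad(loss_fn)(lora_params, base_w, x, y)  # gradients only for A and B
```

In practice a library would inject such adapters into every target matmul of an existing model and keep the quantized base weights untouched; the sketch above only shows the single-layer math.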
Alternatives and similar repositories for easy-lora-and-gptq
Users that are interested in easy-lora-and-gptq are comparing it to the libraries listed below
- Experiments with generating open-source language model assistants ☆97 · Updated 2 years ago
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆87 · Updated last year
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆115 · Updated 2 years ago
- Code repository for the c-BTM paper ☆107 · Updated 2 years ago
- QLoRA with Enhanced Multi-GPU Support ☆37 · Updated 2 years ago
- ☆94 · Updated 2 years ago
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆86 · Updated 3 years ago
- HomebrewNLP in JAX flavour for maintainable TPU training ☆51 · Updated last year
- JAX implementation of the Llama 2 model ☆216 · Updated last year
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on adapts the model's context limit ☆62 · Updated 2 years ago
- ☆50 · Updated last year
- ☆66 · Updated 3 years ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆201 · Updated last year
- Collection of autoregressive model implementations ☆86 · Updated 6 months ago
- Functional local implementations of main model parallelism approaches ☆96 · Updated 2 years ago
- A library for squeakily cleaning and filtering language datasets. ☆47 · Updated 2 years ago
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆136 · Updated last year
- Multi-Domain Expert Learning ☆66 · Updated last year
- Minimal (400 LOC) implementation of Maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- Automatically take good care of your preemptible TPUs ☆37 · Updated 2 years ago
- ☆53 · Updated last year
- ☆20 · Updated 2 years ago
- ☆91 · Updated last year
- Inference code for LLaMA models in JAX ☆119 · Updated last year
- Experiments for efforts to train a new and improved t5 ☆75 · Updated last year
- ☆69 · Updated last year
- A set of Python scripts that makes your experience on TPU better ☆54 · Updated last month
- Train very large language models in Jax. ☆209 · Updated 2 years ago
- Official repository of Pretraining Without Attention (BiGS); BiGS is the first model to achieve BERT-level transfer learning on the GLUE … ☆114 · Updated last year
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆35 · Updated 2 years ago