davisyoshida / easy-lora-and-gptq
JAX notebook showing how to LoRA + GPTQ arbitrary models
☆10 · Updated 2 years ago
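The notebook's core idea, pairing a frozen GPTQ-quantized base weight with trainable low-rank LoRA factors, can be sketched in a few lines of JAX. The sketch below is illustrative only: `dequantize`, `lora_linear`, the single scalar scale, and the toy shapes are assumptions, not the repository's actual API, and GPTQ's quantization scheme is far more involved than the stand-in shown here.

```python
# Minimal sketch (assumed names, not the repository's code): LoRA on top of a
# frozen quantized weight. Only the low-rank factors receive gradients.
import jax
import jax.numpy as jnp

def dequantize(q, scale):
    # Stand-in for a "frozen quantized weight"; real GPTQ uses a more
    # sophisticated per-group quantization scheme than a single scale.
    return q.astype(jnp.float32) * scale

def lora_linear(x, q_weight, scale, lora_a, lora_b, alpha=16.0):
    # y = x W^T + (alpha / r) * x A^T B^T, with W frozen and A, B trainable.
    w = dequantize(q_weight, scale)
    r = lora_a.shape[0]
    return x @ w.T + (alpha / r) * (x @ lora_a.T) @ lora_b.T

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
q_weight = jax.random.randint(k1, (64, 64), -128, 127, dtype=jnp.int8)  # frozen
scale = jnp.float32(0.01)
lora_a = 0.01 * jax.random.normal(k2, (4, 64))  # trainable, rank 4
lora_b = jnp.zeros((64, 4))                     # trainable, zero-initialized
x = jax.random.normal(k3, (2, 64))

y = lora_linear(x, q_weight, scale, lora_a, lora_b)
# Gradients flow only into the LoRA factors; the quantized weight never updates.
grads = jax.grad(
    lambda a, b: lora_linear(x, q_weight, scale, a, b).sum(), argnums=(0, 1)
)(lora_a, lora_b)
```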
Alternatives and similar repositories for easy-lora-and-gptq
Users interested in easy-lora-and-gptq are comparing it to the libraries listed below.
- Experiments with generating open-source language model assistants ☆97 · Updated 2 years ago
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆87 · Updated last year
- Exploring finetuning public checkpoints on filtered 8K sequences on the Pile ☆116 · Updated 2 years ago
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆82 · Updated 3 years ago
- Code repository for the c-BTM paper ☆107 · Updated last year
- Inference code for LLaMA models in JAX ☆118 · Updated last year
- HomebrewNLP in JAX flavour for maintainable TPU training ☆50 · Updated last year
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆136 · Updated last year
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated 2 years ago
- ☆49 · Updated last year
- JAX implementation of the Llama 2 model ☆219 · Updated last year
- Multi-Domain Expert Learning ☆67 · Updated last year
- ☆93 · Updated last year
- ☆82 · Updated last year
- ☆45 · Updated last year
- Demonstration that finetuning a RoPE model on longer sequences than the pre-trained model adapts the model's context limit ☆63 · Updated 2 years ago
- A library for squeakily cleaning and filtering language datasets. ☆47 · Updated 2 years ago
- Train very large language models in Jax. ☆206 · Updated last year
- ☆53 · Updated last year
- Collection of autoregressive model implementations ☆86 · Updated 3 months ago
- My explorations into editing the knowledge and memories of an attention network ☆35 · Updated 2 years ago
- ☆67 · Updated 3 years ago
- ☆83 · Updated last year
- Experiments for efforts to train a new and improved T5 ☆76 · Updated last year
- Automatically take good care of your preemptible TPUs ☆36 · Updated 2 years ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆199 · Updated last year
- Official repository of Pretraining Without Attention (BiGS). BiGS is the first model to achieve BERT-level transfer learning on the GLUE … ☆114 · Updated last year
- ☆20 · Updated 2 years ago
- ☆75 · Updated last year
- Amos optimizer with JEstimator lib. ☆82 · Updated last year