ayaka14732 / bart-base-jax
JAX implementation of the bart-base model
☆29 · Updated last year
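For orientation, here is a minimal, self-contained sketch of the kind of multi-head self-attention block a BART encoder layer is built around, written in pure JAX. All names, shapes, and the initialization here are illustrative assumptions, not bart-base-jax's actual API.

```python
# Hedged sketch: a BART-style multi-head self-attention step in pure JAX.
# Parameter names (wq/wk/wv/wo) and shapes are assumptions for illustration.
import jax.numpy as jnp
from jax import nn, random

def self_attention(params, x, n_heads=12):
    """Multi-head self-attention over x of shape (seq_len, d_model)."""
    seq_len, d_model = x.shape
    d_head = d_model // n_heads

    def heads(w):
        # Project, then split into (n_heads, seq_len, d_head).
        return (x @ w).reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)

    q, k, v = heads(params["wq"]), heads(params["wk"]), heads(params["wv"])
    scores = q @ k.transpose(0, 2, 1) / jnp.sqrt(d_head)  # (n_heads, seq, seq)
    weights = nn.softmax(scores, axis=-1)
    out = (weights @ v).transpose(1, 0, 2).reshape(seq_len, d_model)
    return out @ params["wo"]

key = random.PRNGKey(0)
d_model = 768  # bart-base hidden size: 12 heads of 64 dims
wkeys = random.split(key, 4)
params = {name: 0.02 * random.normal(k, (d_model, d_model))
          for name, k in zip(("wq", "wk", "wv", "wo"), wkeys)}
x = random.normal(key, (16, d_model))  # a 16-token dummy sequence
y = self_attention(params, x)          # -> (16, 768)
```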
Related projects
Alternatives and complementary repositories for bart-base-jax
- A new metric for evaluating the faithfulness of text generated by LLMs. The work behind this repository can be found he…☆31 · Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, though it should work with any Hugging Face text dataset.☆92 · Updated last year
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO…☆52 · Updated last week
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P…☆34 · Updated last year
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la…☆44 · Updated last year
- Utilities for Training Very Large Models☆56 · Updated last month
- Exploring finetuning public checkpoints on filtered 8K-token sequences from the Pile☆115 · Updated last year
- Experiments with generating open-source language model assistants☆97 · Updated last year
- Evaluating LLMs with Dynamic Data☆72 · Updated 2 weeks ago
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on extends the model's context limit☆63 · Updated last year
- Repository for fine-tuning Transformers 🤗 based seq2seq speech models in JAX/Flax.☆34 · Updated last year
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given…☆14 · Updated last year
- In-Context Alignment: Chat with Vanilla Language Models Before Fine-Tuning☆33 · Updated last year
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/…☆23 · Updated 7 months ago
- BLOOM+1: Adapting the BLOOM model to support a new, unseen language☆70 · Updated 8 months ago
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts☆23 · Updated 8 months ago
- A fast implementation of T5/UL2 in PyTorch using Flash Attention☆71 · Updated last month
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling☆35 · Updated 11 months ago
- My explorations into editing the knowledge and memories of an attention network☆34 · Updated last year
- TrAVis: Visualise BERT attention in your browser☆55 · Updated last year
- Large-scale distributed model training strategies with Colossal AI and Lightning AI☆58 · Updated last year
- A truly flash T5 implementation!☆54 · Updated 6 months ago
- Here we will test various linear attention designs.☆56 · Updated 6 months ago
- 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.☆82 · Updated 2 years ago