ayaka14732 / bart-base-jax
JAX implementation of the bart-base model
☆29 · Updated last year
Alternatives and similar repositories for bart-base-jax:
Users interested in bart-base-jax are comparing it to the repositories listed below.
- In-Context Alignment: Chat with Vanilla Language Models Before Fine-Tuning ☆33 · Updated last year
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated last year
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated last year
- A truly flash T5 implementation! ☆60 · Updated 7 months ago
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la… ☆45 · Updated last year
- Large Scale Distributed Model Training strategy with Colossal AI and Lightning AI ☆58 · Updated last year
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ☆24 · Updated 9 months ago
- ☆31 · Updated 7 months ago
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated 10 months ago
- RWKV model implementation ☆37 · Updated last year
- GoldFinch and other hybrid transformer components ☆42 · Updated 5 months ago
- Implementation of Token Shift GPT - An autoregressive model that solely relies on shifting the sequence space for mixing ☆47 · Updated 2 years ago
- Utilities for Training Very Large Models ☆57 · Updated 3 months ago
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆75 · Updated this week
- My explorations into editing the knowledge and memories of an attention network ☆34 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ☆93 · Updated last year
- Experiments with generating open-source language model assistants ☆97 · Updated last year
- ☆64 · Updated last month
- ☆24 · Updated 2 years ago
- An unofficial implementation of the Infini-gram model proposed by Liu et al. (2024) ☆29 · Updated 7 months ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆115 · Updated last year
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on adapts the model's context limit ☆63 · Updated last year
- TrAVis: Visualise BERT attention in your browser ☆56 · Updated last year
- Engineering the state of RNN language models (Mamba, RWKV, etc.) ☆32 · Updated 7 months ago
- ☆32 · Updated last year
- PyTorch building blocks for OLMo ☆47 · Updated this week
- Implementation of N-Grammer, augmenting Transformers with latent n-grams, in Pytorch ☆72 · Updated 2 years ago