ayaka14732 / bart-base-jax
JAX implementation of the bart-base model
☆34 · Updated 2 years ago
Alternatives and similar repositories for bart-base-jax
Users interested in bart-base-jax are comparing it to the repositories listed below.
- Exploring finetuning public checkpoints on filtered 8K sequences on Pile ☆116 · Updated 2 years ago
- Official implementation of "GPT or BERT: why not both?" ☆63 · Updated 4 months ago
- Truly flash T5 realization! ☆71 · Updated last year
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆112 · Updated last month
- ☆101 · Updated 6 months ago
- ☆53 · Updated last year
- ☆67 · Updated 3 years ago
- Evaluating LLMs with Dynamic Data ☆101 · Updated this week
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆96 · Updated 2 years ago
- Implementation of the Mamba SSM with hf_integration. ☆56 · Updated last year
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆87 · Updated 3 years ago
- 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX. ☆81 · Updated 3 years ago
- Experiments with generating open-source language model assistants ☆97 · Updated 2 years ago
- TrAVis: Visualise BERT attention in your browser ☆58 · Updated 2 years ago
- Anh - LAION's multilingual assistant datasets and models ☆27 · Updated 2 years ago
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆35 · Updated 2 years ago
- My explorations into editing the knowledge and memories of an attention network ☆35 · Updated 3 years ago
- Inference script for Meta's LLaMA models using Hugging Face wrapper ☆110 · Updated 2 years ago
- Evaluation pipeline for the BabyLM Challenge 2023. ☆77 · Updated 2 years ago
- ☆19 · Updated 3 years ago
- Large Scale Distributed Model Training strategy with Colossal AI and Lightning AI ☆56 · Updated 2 years ago
- Implementation of the paper: "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆58 · Updated this week
- This is a new metric that can be used to evaluate faithfulness of text generated by LLMs. The work behind this repository can be found he… ☆31 · Updated 2 years ago
- ☆67 · Updated last year
- Implementation of the Llama architecture with RLHF + Q-learning ☆168 · Updated 10 months ago
- Helper scripts and notes that were used while porting various NLP models ☆48 · Updated 3 years ago
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆138 · Updated last year
- A fast RWKV Tokenizer written in Rust ☆54 · Updated 4 months ago
- Vocabulary Trimming (VT) is a model compression technique, which reduces a multilingual LM vocabulary to a target language by deleting ir… ☆58 · Updated last year
- ☆39 · Updated last year