EleutherAI / magiCARP
One-stop shop for all things CARP
☆59 · Updated 2 years ago
Alternatives and similar repositories for magiCARP:
Users interested in magiCARP are comparing it to the libraries listed below.
- A library for squeakily cleaning and filtering language datasets. ☆46 · Updated last year
- Experiments with generating open-source language model assistants ☆97 · Updated last year
- ☆44 · Updated 3 months ago
- HomebrewNLP in JAX flavour for maintainable TPU training ☆48 · Updated last year
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ☆58 · Updated 2 years ago
- Embedding Recycling for Language Models ☆38 · Updated last year
- Hidden Engrams: Long-Term Memory for Transformer Model Inference ☆35 · Updated 3 years ago
- Probabilistic LLM evaluations. [CogSci 2023; ACL 2023] ☆73 · Updated 7 months ago
- Python Research Framework ☆106 · Updated 2 years ago
- Implementation of Token Shift GPT, an autoregressive model that relies solely on shifting the sequence space for mixing ☆48 · Updated 3 years ago
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated last year
- LLM sampling method for enforcing syntax adherence in generated output ☆23 · Updated last year
- Engineering the state of RNN language models (Mamba, RWKV, etc.) ☆32 · Updated 9 months ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆115 · Updated last year
- Latent Diffusion Language Models ☆68 · Updated last year
- ☆39 · Updated 2 years ago
- Code repo for "Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers" (ACL 2023) ☆22 · Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- ☆67 · Updated 2 years ago
- Demonstration that finetuning a RoPE model on sequences longer than its pre-training length extends the model's context limit ☆63 · Updated last year
- ☆19 · Updated last year
- Mechanistic Interpretability for Transformer Models ☆49 · Updated 2 years ago
- ☆77 · Updated last year
- My explorations into editing the knowledge and memories of an attention network ☆34 · Updated 2 years ago
- See the issue board for the current status of active and prospective projects! ☆65 · Updated 3 years ago
- 🤗 Disaggregators: Curated data labelers for in-depth analysis. ☆65 · Updated 2 years ago
- A dataset of alignment research and code to reproduce it ☆74 · Updated last year
- ☆43 · Updated 2 years ago
- Code and files for the paper "Are Emergent Abilities in Large Language Models Just In-Context Learning?" ☆33 · Updated 2 months ago
- GitHub repo for "Goal Driven Discovery of Distributional Differences via Language Descriptions" ☆69 · Updated last year