EleutherAI / pilev2
☆13 · Updated 2 years ago
Alternatives and similar repositories for pilev2
Users that are interested in pilev2 are comparing it to the libraries listed below
- My explorations into editing the knowledge and memories of an attention network ☆35 · Updated 2 years ago
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling ☆37 · Updated last year
- ☆44 · Updated 7 months ago
- Implementation of some personal helper functions for Einops, my most favorite tensor manipulation library ❤️ ☆54 · Updated 2 years ago
- Index of URLs to pdf files all over the internet and scripts ☆24 · Updated 2 years ago
- Implementation of a holodeck, written in Pytorch ☆18 · Updated last year
- M4 experiment logbook ☆58 · Updated last year
- Exploring an idea where one forgets about efficiency and carries out attention across each edge of the nodes (tokens) ☆51 · Updated 3 months ago
- Engineering the state of RNN language models (Mamba, RWKV, etc.) ☆32 · Updated last year
- Utilities for Training Very Large Models ☆58 · Updated 9 months ago
- Implementation of Token Shift GPT - an autoregressive model that relies solely on shifting the sequence space for mixing (a minimal sketch of the shift operation appears after this list) ☆50 · Updated 3 years ago
- ☆25 · Updated last year
- Code for paper "Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs" ☆28 · Updated 3 years ago
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated last year
- Official Repository of Pretraining Without Attention (BiGS); BiGS is the first model to achieve BERT-level transfer learning on the GLUE … ☆113 · Updated last year
- ☆20 · Updated last year
- ☆38 · Updated last year
- ☆54 · Updated 2 years ago
- Automatically take good care of your preemptible TPUs ☆36 · Updated 2 years ago
- Source-to-Source Debuggable Derivatives in Pure Python ☆15 · Updated last year
- DiCE: The Infinitely Differentiable Monte-Carlo Estimator ☆31 · Updated last year
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆115 · Updated 2 years ago
- ☆19 · Updated last month
- Transformers at any scale ☆41 · Updated last year
- ☆34 · Updated 9 months ago
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on adapts the model's context limit ☆63 · Updated 2 years ago
- CUDA implementation of autoregressive linear attention, with all the latest research findings (a plain-PyTorch sketch of the idea appears after this list) ☆44 · Updated 2 years ago
- Implementation of Bitune: Bidirectional Instruction-Tuning ☆19 · Updated last week
- A python library for highly configurable transformers - easing model architecture search and experimentation. ☆49 · Updated 3 years ago
- RWKV model implementation ☆38 · Updated last year
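For reference, the Token Shift GPT entry above mixes information along the sequence purely by shifting features rather than attending. Below is a minimal PyTorch sketch of that shift operation, assuming a half-and-half channel split; the function name and dimensions are illustrative, not details taken from the linked repository.

```python
# Minimal sketch of token-shift mixing (assumed half-channel split).
import torch
import torch.nn.functional as F


def token_shift(x: torch.Tensor) -> torch.Tensor:
    """x: (batch, seq_len, dim) -> same shape, with half the channels
    replaced by the previous position's features (causal shift)."""
    x_current, x_shifted = x.chunk(2, dim=-1)
    # Pad one step at the front of the sequence and drop the last step,
    # so position t sees the features of position t-1.
    x_shifted = F.pad(x_shifted, (0, 0, 1, -1))
    return torch.cat((x_current, x_shifted), dim=-1)


# Usage: mix a random batch of embeddings.
x = torch.randn(2, 8, 16)
print(token_shift(x).shape)  # torch.Size([2, 8, 16])
```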
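Similarly, the autoregressive linear attention entry refers to replacing softmax attention with running sums so the cost grows linearly in sequence length. The sketch below shows the core recurrence in plain PyTorch for clarity; the elu+1 feature map and tensor names are assumptions rather than details of that CUDA implementation.

```python
# Rough sketch of causal linear attention via cumulative sums.
import torch
import torch.nn.functional as F


def causal_linear_attention(q, k, v, eps=1e-6):
    """q, k, v: (batch, heads, seq_len, dim). Each position attends only
    to earlier positions through running sums of k^T v outer products."""
    q = F.elu(q) + 1  # positive feature map (assumed)
    k = F.elu(k) + 1
    # Cumulative sums over the sequence enforce causality.
    kv = torch.cumsum(torch.einsum('bhnd,bhne->bhnde', k, v), dim=2)
    k_sum = torch.cumsum(k, dim=2)
    num = torch.einsum('bhnd,bhnde->bhne', q, kv)
    den = torch.einsum('bhnd,bhnd->bhn', q, k_sum).unsqueeze(-1)
    return num / (den + eps)


q = k = v = torch.randn(1, 2, 16, 8)
print(causal_linear_attention(q, k, v).shape)  # torch.Size([1, 2, 16, 8])
```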