dkopi / Bitune
Implementation of Bitune: Bidirectional Instruction-Tuning
☆23 · Updated 7 months ago
Alternatives and similar repositories for Bitune
Users interested in Bitune are comparing it to the libraries listed below:
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆88 · Updated last year
- PyTorch library for Active Fine-Tuning ☆96 · Updated 4 months ago
- Language models scale reliably with over-training and on downstream tasks ☆99 · Updated last year
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in PyTorch ☆182 · Updated 7 months ago
- ☆91 · Updated last year
- Official implementation of "BERTs are Generative In-Context Learners" ☆32 · Updated 10 months ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆83 · Updated last year
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆94 · Updated last year
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆134 · Updated 3 months ago
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated 2 years ago
- Understand and test language model architectures on synthetic tasks. ☆252 · Updated 3 weeks ago
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers ☆75 · Updated 7 months ago
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind ☆179 · Updated last year
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆247 · Updated 8 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆186 · Updated 2 weeks ago
- Implementation of Infini-Transformer in PyTorch ☆112 · Updated last year
- Some preliminary explorations of Mamba's context scaling. ☆218 · Updated last year
- ☆83 · Updated 2 years ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆81 · Updated 2 years ago
- ☆112 · Updated last year
- Code for Zero-Shot Tokenizer Transfer ☆142 · Updated last year
- One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation ☆46 · Updated 3 months ago
- Organize the Web: Constructing Domains Enhances Pre-Training Data Curation ☆76 · Updated 9 months ago
- ☆46 · Updated 2 years ago
- ☆108 · Updated last year
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ☆70 · Updated last year
- ☆51 · Updated 2 years ago
- Code accompanying the paper "Massive Activations in Large Language Models" ☆195 · Updated last year
- Official repository of Pretraining Without Attention (BiGS); BiGS is the first model to achieve BERT-level transfer learning on the GLUE… ☆116 · Updated last year
- AnchorAttention: Improved attention for LLM long-context training ☆213 · Updated last year