chijames / KERPLE
☆19 · Updated 2 years ago
Alternatives and similar repositories for KERPLE
Users interested in KERPLE are comparing it to the libraries listed below:
- The official implementation for Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free ☆17 · Updated this week
- ☆31 · Updated last year
- [ACL'24 Oral] Analysing The Impact of Sequence Composition on Language Model Pre-Training ☆21 · Updated 9 months ago
- ☆54 · Updated 10 months ago
- [NeurIPS 2023] Repetition In Repetition Out: Towards Understanding Neural Text Degeneration from the Data Perspective ☆30 · Updated last year
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Se… ☆64 · Updated last year
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆51 · Updated 2 years ago
- Long Context Extension and Generalization in LLMs ☆55 · Updated 7 months ago
- Code for the ACL-2022 paper "StableMoE: Stable Routing Strategy for Mixture of Experts" ☆45 · Updated 2 years ago
- Code for preprint "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆38 · Updated last week
- ☆50 · Updated last year
- Codes for our paper "Speculative Decoding: Exploiting Speculative Execution for Accelerating Seq2seq Generation" (EMNLP 2023 Findings) ☆41 · Updated last year
- Use the tokenizer in parallel to achieve superior acceleration ☆16 · Updated last year
- Sparse Backpropagation for Mixture-of-Expert Training ☆29 · Updated 10 months ago
- ☆24 · Updated last year
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆77 · Updated last year
- Official repository of paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆47 · Updated 4 months ago
- ☆31 · Updated last year
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆44 · Updated last year
- ☆47 · Updated last year
- The code and data for the paper JiuZhang3.0 ☆45 · Updated 11 months ago
- ☆17 · Updated 11 months ago
- Transformers at any scale ☆41 · Updated last year
- This repository is the official implementation of our EMNLP 2022 paper ELMER: A Non-Autoregressive Pre-trained Language Model for Efficie… ☆26 · Updated 2 years ago
- ☆11 · Updated 11 months ago
- Revisiting Mid-training in the Era of RL Scaling ☆41 · Updated 3 weeks ago
- [Findings of EMNLP22] From Mimicking to Integrating: Knowledge Integration for Pre-Trained Language Models ☆19 · Updated 2 years ago
- Adding new tasks to T0 without catastrophic forgetting ☆33 · Updated 2 years ago
- ☆28 · Updated last year