VITA-Group / Data-Efficient-Scaling
[ICML 2023] "Data Efficient Neural Scaling Law via Model Reusing" by Peihao Wang, Rameswar Panda, Zhangyang Wang
☆14 · Updated Jan 4, 2024 (2 years ago)
Alternatives and similar repositories for Data-Efficient-Scaling
Users interested in Data-Efficient-Scaling are comparing it to the repositories listed below.
- Fast and Modularized CFG-focused Models ☆23 · Updated Nov 8, 2023 (2 years ago)
- Unofficial implementation of the paper "Exploring the Space of Key-Value-Query Models with Intention" ☆12 · Updated May 24, 2023 (2 years ago)
- ☆23 · Updated Jul 23, 2021 (4 years ago)
- Recursive Bayesian Networks ☆11 · Updated May 11, 2025 (9 months ago)
- Combining SOAP and MUON ☆19 · Updated Feb 11, 2025 (last year)
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" ☆18 · Updated Mar 15, 2024 (last year)
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights ☆19 · Updated Oct 9, 2022 (3 years ago)
- Leveraging Recursive Gumbel-Max Trick for Approximate Inference in Combinatorial Spaces, NeurIPS 2021 ☆13 · Updated Dec 11, 2021 (4 years ago)
- ☆13 · Updated Feb 7, 2023 (3 years ago)
- Expanding linear RNN state-transition matrix eigenvalues to include negatives improves state-tracking tasks and language modeling without… ☆19 · Updated Mar 15, 2025 (10 months ago)
- ☆15 · Updated Aug 18, 2022 (3 years ago)
- ☆13 · Updated Apr 15, 2024 (last year)
- Code for the ACL 2021 paper "Structural Guidance for Transformer Language Models" ☆13 · Updated Sep 17, 2025 (4 months ago)
- A substitute for qsub. ☆12 · Updated Jan 25, 2019 (7 years ago)
- Implementation and experiments for "Partially Supervised NER via Expected Entity Ratio" (TACL 2022) ☆14 · Updated Nov 7, 2022 (3 years ago)
- Mamba support for transformer lens ☆19 · Updated Sep 17, 2024 (last year)
- ☆18 · Updated Mar 10, 2023 (2 years ago)
- ☆15 · Updated Jul 14, 2022 (3 years ago)
- ☆15 · Updated Mar 22, 2023 (2 years ago)
- Chrome extension for OA sites like arXiv and OpenReview: 1. jump from a PDF back to its abstract page, 2. rename the PDF page with the paper title. ☆18 · Updated Oct 12, 2023 (2 years ago)
- ☆14 · Updated Nov 20, 2022 (3 years ago)
- Paper-reading notes tracking improvements to the Transformer ☆19 · Updated Jun 25, 2023 (2 years ago)
- ☆24 · Updated Jan 1, 2025 (last year)
- Learning to Model Editing Processes ☆26 · Updated Aug 3, 2025 (6 months ago)
- ☆20 · Updated May 30, 2024 (last year)
- Code for GFlowNet-EM, a novel algorithm for fitting latent variable models with compositional latents and an intractable true posterior. ☆42 · Updated Feb 9, 2024 (2 years ago)
- ☆19 · Updated Dec 4, 2025 (2 months ago)
- Efficient PScan implementation in PyTorch ☆17 · Updated Jan 2, 2024 (2 years ago)
- ☆18 · Updated May 28, 2021 (4 years ago)
- Official Repository for Efficient Linear-Time Attention Transformers. ☆18 · Updated Jun 2, 2024 (last year)
- ☆52 · Updated Jan 19, 2023 (3 years ago)
- u-MPS implementation and experimentation code used in the paper Tensor Networks for Probabilistic Sequence Modeling (https://arxiv.org/ab…) ☆19 · Updated Jul 2, 2020 (5 years ago)
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆87 · Updated Mar 7, 2023 (2 years ago)
- A probabilistic model for contextual word representation. Accepted to ACL 2023 Findings. ☆25 · Updated Oct 22, 2023 (2 years ago)
- Code for "Does syntax need to grow on trees? Sources of inductive bias in sequence to sequence networks"☆24Jan 14, 2020Updated 6 years ago
- ☆22Jan 22, 2026Updated 3 weeks ago
- source code of NAACL2021 "PCFGs Can Do Better: Inducing Probabilistic Context-Free Grammars with Many Symbols“ and ACL2021 main conferenc…☆51Mar 28, 2025Updated 10 months ago
- Official PyTorch Implementation of the Longhorn Deep State Space Model☆56Dec 4, 2024Updated last year
- ☆24Sep 25, 2024Updated last year