bminixhofer / zett
Code for Zero-Shot Tokenizer Transfer
☆142 (updated Jan 14, 2025)
Alternatives and similar repositories for zett
Users interested in zett are also comparing it to the libraries listed below.
- A framework for wisely initializing unseen subword embeddings in PLMs for efficient large-scale continued pretraining (☆18, updated Nov 26, 2023)
- Code for WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models (☆87, updated Sep 12, 2024)
- [EMNLP'23] Official code for "FOCUS: Effective Embedding Initialization for Monolingual Specialization of Multilingual Models" (☆36, updated Jun 7, 2025)
- A toolkit implementing advanced methods to transfer models and model knowledge across tokenizers (☆62, updated Jul 6, 2025)
- ☆57 (updated Dec 27, 2025)
- https://footprints.baulab.info (☆17, updated Oct 4, 2024)
- Efficient Language Model Training through Cross-Lingual and Progressive Transfer Learning (☆30, updated Jan 25, 2023)
- Code and data for the IWSLT 2022 shared task on Formality Control for SLT (☆22, updated May 24, 2023)
- mPLM-Sim: Better Cross-Lingual Similarity and Transfer in Multilingual Pretrained Language Models (☆11, updated Jan 19, 2024)
- Code for the ACL 2022 paper "Expanding Pretrained Models to Thousands More Languages via Lexicon-based Adaptation" (☆30, updated Apr 2, 2022)
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence (☆61, updated Feb 21, 2022)
- ☆22 (updated Nov 6, 2022)
- ☆25 (updated May 7, 2025)
- Learning to route instances for Human vs. AI Feedback (ACL Main '25) (☆26, updated Jul 23, 2025)
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" (☆38, updated Jun 11, 2025)
- Python source code for the EMNLP 2021 Findings paper "Subword Mapping and Anchoring Across Languages" (☆13, updated Sep 17, 2021)
- ☆13 (updated Dec 6, 2024)
- ANE-accelerated embedding models! (☆20, updated Dec 11, 2024)
- ☆59 (updated Nov 18, 2025)
- ☆137 (updated Aug 19, 2024)
- PathPiece tokenizer (☆13, updated Nov 10, 2024)
- ☆130 (updated Oct 1, 2024)
- DImensionality REduction in JAX (☆24, updated Nov 21, 2025)
- Can LLMs generate code-mixed sentences through zero-shot prompting? (☆11, updated Apr 18, 2023)
- Research code for the paper "How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models" (☆28, updated Oct 3, 2021)
- 🚢 Data Toolkit for Sailor Language Models (☆96, updated Feb 24, 2025)
- A scalable implementation of diffusion and flow matching with XGBoost models, applied to calorimeter data (☆19, updated Nov 3, 2024)
- Parameter-Efficient Sparsity Crafting from Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) (☆147, updated Sep 20, 2024)
- Triton implementation of the HyperAttention algorithm (☆48, updated Dec 11, 2023)
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning (☆33, updated Jun 2, 2023)
- Code for "Merging Text Transformers from Different Initializations" (☆20, updated Feb 2, 2025)
- Glot500: Scaling Multilingual Corpora and Language Models to 500 Languages (ACL 2023) (☆106, updated Apr 20, 2024)
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" (☆102, updated Sep 30, 2024)
- Code for the NeurIPS 2024 Spotlight "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" (☆89, updated Oct 30, 2024)
- Learning adapter weights from task descriptions (☆19, updated Nov 12, 2023)
- ☆23 (updated Oct 30, 2023)
- State-of-the-art LLM-based translation models (☆577, updated Apr 9, 2025)
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] (☆149, updated Oct 27, 2024)
- [NeurIPS 2024] VeLoRA: Memory-Efficient Training using Rank-1 Sub-Token Projections (☆21, updated Oct 15, 2024)
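Several of the repositories above (zett, WECHSEL, FOCUS) revolve around the same problem: building an embedding matrix for a new tokenizer's vocabulary from an existing model. A minimal illustrative sketch of the simplest baseline, assuming a hypothetical `init_new_embeddings` helper: copy embeddings for tokens shared with the old vocabulary and fall back to the mean embedding for unseen tokens. The listed methods replace this mean fallback with smarter similarity-weighted combinations of old embeddings; none of this is taken from the repos' actual code.

```python
import numpy as np

def init_new_embeddings(old_vocab, old_emb, new_vocab):
    """Hypothetical helper: build an embedding matrix for new_vocab.

    Tokens present in the old vocabulary reuse their trained embedding;
    unseen tokens get the mean of all old embeddings as a baseline
    (methods like WECHSEL/FOCUS use similarity-weighted averages here).
    """
    old_index = {tok: i for i, tok in enumerate(old_vocab)}
    fallback = old_emb.mean(axis=0)  # baseline init for unseen tokens
    rows = [old_emb[old_index[t]] if t in old_index else fallback
            for t in new_vocab]
    return np.stack(rows)

# Toy example: "cat" is copied, "dog" gets the mean of the old embeddings.
old_emb = np.array([[1.0, 0.0], [0.0, 1.0]])
new_emb = init_new_embeddings(["the", "cat"], old_emb, ["cat", "dog"])
```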