Some common Hugging Face transformers in maximal update parametrization (µP)
☆87 · Updated Mar 14, 2022
Alternatives and similar repositories for mutransformers
Users interested in mutransformers are comparing it to the libraries listed below.
- maximal update parametrization (µP) ☆1,686 · Updated Jul 17, 2024
- Code for the paper "Function-Space Learning Rates" ☆25 · Updated Jun 3, 2025
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… ☆226 · Updated Sep 18, 2025
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers. ☆18 · Updated Jul 24, 2025
- Adding new tasks to T0 without catastrophic forgetting ☆33 · Updated Oct 20, 2022
- Optimizing bit-level Jaccard Index and Population Counts for large-scale quantized Vector Search via Harley-Seal CSA and Lookup Tables ☆21 · Updated May 18, 2025
- QLoRA for Masked Language Modeling ☆23 · Updated Sep 11, 2023
- Code for Augment & Reduce, a scalable stochastic algorithm for large categorical distributions ☆10 · Updated May 16, 2018
- The contrastive token loss function for reducing generative repetition of autoregressive neural language models. ☆13 · Updated May 11, 2022
- ☆10 · Updated Mar 24, 2023
- ☆16 · Updated Jul 29, 2025
- A curated list of resources on energy-based models. ☆9 · Updated Mar 14, 2022
- Code for the paper "Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models" ☆48 · Updated Jun 7, 2022
- ☆27 · Updated Mar 14, 2024
- Transformer with Mu-Parameterization, implemented in JAX/Flax. Supports FSDP on TPU pods. ☆32 · Updated Jun 5, 2025
- lanmt ebm ☆12 · Updated Jun 19, 2020
- HomebrewNLP in JAX flavour for maintainable TPU training ☆51 · Updated Jan 20, 2024
- Code accompanying "VarGrad: A Low-Variance Gradient Estimator for Variational Inference" ☆12 · Updated Oct 12, 2020
- Maximal Update Parametrization (μP) with Flax & Optax. ☆16 · Updated Dec 27, 2023
- Flow Annealed Importance Sampling Bootstrap (FAB) with JAX. ☆13 · Updated Jun 12, 2024
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ☆96 · Updated Feb 9, 2023
- ☆33 · Updated Nov 4, 2024
- Code for the blog post "Can Better Cold-Start Strategies Improve RL Training for LLMs?" ☆19 · Updated Mar 9, 2025
- Minimal diffusion transformer in PyTorch. ☆16 · Updated Oct 6, 2024
- ML Reproducibility Challenge 2020: ELECTRA reimplementation using PyTorch and Transformers ☆12 · Updated Apr 16, 2021
- ☆15 · Updated Dec 3, 2024
- Minimal but scalable implementation of large language models in JAX ☆35 · Updated Nov 28, 2025
- In-Context Alignment: Chat with Vanilla Language Models Before Fine-Tuning ☆35 · Updated Aug 9, 2023
- Convert BART models to ONNX with quantization: 3x reduction in size and up to 3x boost in inference speed ☆33 · Updated Dec 11, 2024
- [SIGIR24] Pre-training with Bag-of-Word Prediction for Dense Passage Retrieval ☆18 · Updated Feb 29, 2024
- Automatically annotates YOLO datasets using the Moondream visual model ☆20 · Updated Aug 24, 2025
- [ACL 2025] Outlier-Safe Pre-Training for Robust 4-Bit Quantization of Large Language Models ☆34 · Updated Nov 4, 2025
- Arabic edition of ALBERT pretrained language models ☆16 · Updated Apr 25, 2021
- Official code for the paper "Systematically Exploring Redundancy Reduction in Summarizing Long Documents" ☆16 · Updated Apr 30, 2021
- An unofficial implementation of "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆36 · Updated Jun 7, 2024
- Code for "Training Deep Energy-Based Models with f-Divergence Minimization" (ICML 2020) ☆37 · Updated Mar 24, 2023
- Code repository for the c-BTM paper ☆108 · Updated Sep 26, 2023
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆187 · Updated Jan 19, 2026
- Use the OpenAI Batch tool to make async batch requests to the OpenAI API. ☆101 · Updated Mar 2, 2024
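For context on the µP idea that ties several of these repositories together: under maximal update parametrization, when a model's width grows by a multiplier m relative to a small proxy model, the initialization standard deviation of hidden (matrix-like) weights shrinks by 1/√m and their per-layer Adam learning rate by 1/m, while vector-like parameters (biases, LayerNorm gains) keep their base values, so hyperparameters tuned on the proxy transfer to the large model. A minimal illustrative sketch of the simplified scaling rules (a hypothetical helper, not the `microsoft/mup` API, and it omits details such as the extra output-layer scaling):

```python
def mup_scaled_hparams(base_lr, base_init_std, base_width, width):
    """Simplified µP scaling of Adam hyperparameters (illustrative only).

    Assumes hyperparameters were tuned on a proxy model of `base_width`
    and returns the values to use at `width`.
    """
    m = width / base_width  # width multiplier
    return {
        # Adam LR for matrix-like (hidden) weights scales as 1/m
        "hidden_lr": base_lr / m,
        # init std for matrix-like weights scales as 1/sqrt(m)
        # (i.e. variance scales as 1/fan_in)
        "hidden_init_std": base_init_std / m ** 0.5,
        # vector-like params (biases, LayerNorm gains) are unscaled
        "vector_lr": base_lr,
    }

# Hyperparameters tuned at base width 256, applied at width 1024 (m = 4):
hp = mup_scaled_hparams(base_lr=1e-3, base_init_std=0.02, base_width=256, width=1024)
# hidden_lr = 2.5e-4, hidden_init_std = 0.01, vector_lr = 1e-3
```

In practice, libraries such as `microsoft/mup` and the µP repos listed above derive these multipliers per parameter automatically from base and target model shapes rather than from a single global width.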