gumran / language-diffusion
A quick implementation of diffusion language models.
☆47 · Updated 3 months ago
Alternatives and similar repositories for language-diffusion
Users interested in language-diffusion are comparing it to the libraries listed below.
- Flexible library for merging large language models (LLMs) via evolutionary optimization (ACL 2025 Demo) ☆97 · Updated 5 months ago
- Implementations of growing and pruning in neural networks ☆22 · Updated 2 years ago
- Unofficial but efficient implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆92 · Updated 2 years ago
- DoG is SGD's Best Friend: A Parameter-Free Dynamic Step Size Schedule ☆64 · Updated 2 years ago
- A Python package for generating concise, high-quality summaries of a probability distribution ☆57 · Updated last week
- Deep Networks Grok All the Time and Here Is Why ☆38 · Updated last year
- Minimum Description Length probing for neural network representations ☆20 · Updated last year
- Clustered Compositional Embeddings ☆11 · Updated 2 years ago
- ☆35 · Updated last year
- Code to reproduce the tables in "Loss Landscapes are All You Need: Neural Network Generalization Can Be Explaine…" ☆40 · Updated 2 years ago
- Code for minimum-entropy coupling ☆32 · Updated 3 weeks ago
- Latent Diffusion Language Models ☆70 · Updated 2 years ago
- Sparse and discrete interpretability tool for neural networks ☆64 · Updated last year
- ☆238 · Updated 2 months ago
- Engineering the state of RNN language models (Mamba, RWKV, etc.) ☆32 · Updated last year
- Code for ☆28 · Updated last year
- ☆35 · Updated last year
- Portfolio REgret for Confidence SEquences ☆20 · Updated 3 weeks ago
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆103 · Updated last year
- ☆62 · Updated last year
- ☆82 · Updated last year
- ☆39 · Updated 9 months ago
- Code for the NeurIPS 2024 Spotlight "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆88 · Updated last year
- A system for automating selection and optimization of pre-trained models from the TAO Model Zoo ☆29 · Updated last year
- Official repository of Pretraining Without Attention (BiGS), the first model to achieve BERT-level transfer learning on the GLUE … ☆116 · Updated last year
- ☆109 · Updated 6 months ago
- Implementation of GateLoop Transformer in PyTorch and JAX ☆92 · Updated last year
- Serialize JAX, Flax, Haiku, or Objax model params with 🤗 `safetensors` ☆47 · Updated last year
- Understanding how features learned by neural networks evolve throughout training ☆41 · Updated last year
- Python package for generating datasets to evaluate reasoning and retrieval of large language models ☆19 · Updated 4 months ago