apple / ml-hypercloning
☆52 · Updated last year
Alternatives and similar repositories for ml-hypercloning
Users interested in ml-hypercloning are comparing it to the libraries listed below.
- ☆48 · Updated last year
- Some common Huggingface transformers in maximal update parametrization (µP) ☆87 · Updated 3 years ago
- Collection of autoregressive model implementations ☆85 · Updated 2 weeks ago
- MatFormer repo ☆70 · Updated last year
- Train, tune, and infer Bamba model ☆138 · Updated 7 months ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS …] ☆60 · Updated last year
- A repository for research on medium sized language models. ☆77 · Updated last year
- ☆82 · Updated last year
- ☆47 · Updated 2 years ago
- Google TPU optimizations for transformers models ☆133 · Updated last week
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆109 · Updated 8 months ago
- Code for NeurIPS LLM Efficiency Challenge ☆60 · Updated last year
- ☆109 · Updated 6 months ago
- ☆59 · Updated 2 months ago
- ☆55 · Updated last year
- PyTorch implementation of models from the Zamba2 series. ☆186 · Updated last year
- Triton Implementation of HyperAttention Algorithm ☆48 · Updated 2 years ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆61 · Updated last year
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆163 · Updated 9 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆201 · Updated last year
- Prune transformer layers ☆74 · Updated last year
- Implementation of the paper: "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆58 · Updated last week
- ☆31 · Updated last year
- ☆140 · Updated 5 months ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆115 · Updated 9 months ago
- Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆86 · Updated 2 years ago
- Training-free Post-training Efficient Sub-quadratic Complexity Attention. Implemented with OpenAI Triton. ☆148 · Updated 2 months ago
- Explorations into the proposal from the paper "Grokfast, Accelerated Grokking by Amplifying Slow Gradients" ☆103 · Updated last year
- ☆54 · Updated last year
- Implementation of the Llama architecture with RLHF + Q-learning ☆170 · Updated 11 months ago