apple / ml-cross-entropy
☆516 · Updated last month
Alternatives and similar repositories for ml-cross-entropy
Users interested in ml-cross-entropy are comparing it to the repositories listed below.
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in Pytorch (☆536 · Updated 3 months ago)
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… (☆262 · Updated last month)
- Large Context Attention (☆729 · Updated 7 months ago)
- Helpful tools and examples for working with flex-attention (☆943 · Updated last week)
- [ICML 2024] CLLMs: Consistency Large Language Models (☆400 · Updated 9 months ago)
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… (☆346 · Updated 8 months ago)
- 🔥 A minimal training framework for scaling FLA models (☆233 · Updated last week)
- ☆294 · Updated 4 months ago
- Load compute kernels from the Hub (☆258 · Updated this week)
- LLM KV cache compression made easy (☆596 · Updated this week)
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" (☆245 · Updated 6 months ago)
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. (☆571 · Updated 2 weeks ago)
- Efficient LLM Inference over Long Sequences (☆390 · Updated 2 months ago)
- ☆194 · Updated 8 months ago
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 (☆328 · Updated 3 months ago)
- Normalized Transformer (nGPT) (☆187 · Updated 9 months ago)
- A family of compressed models obtained via pruning and knowledge distillation (☆348 · Updated 9 months ago)
- Code for Adam-mini: Use Fewer Learning Rates To Gain More (https://arxiv.org/abs/2406.16793) (☆434 · Updated 3 months ago)
- Scalable toolkit for efficient model reinforcement (☆796 · Updated this week)
- FlexAttention based, minimal vllm-style inference engine for fast Gemma 2 inference. (☆261 · Updated 3 weeks ago)
- Quick implementation of nGPT, learning entirely on the hypersphere, from NvidiaAI (☆289 · Updated 2 months ago)
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" (☆833 · Updated 5 months ago)
- Ring attention implementation with flash attention (☆849 · Updated 3 weeks ago)
- Efficient triton implementation of Native Sparse Attention. (☆209 · Updated 3 months ago)
- Scalable and Performant Data Loading (☆291 · Updated last week)
- [ICLR2025 Spotlight🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters (☆571 · Updated 6 months ago)
- Megatron's multi-modal data loader (☆239 · Updated last week)
- Implementation of the sparse attention pattern proposed by the Deepseek team in their "Native Sparse Attention" paper (☆730 · Updated 2 weeks ago)
- An extension of the nanoGPT repository for training small MOE models. (☆181 · Updated 5 months ago)
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads (☆487 · Updated 6 months ago)