antimatter15 / reverse-engineering-gemma-3n
Reverse Engineering Gemma 3n: Google's New Edge-Optimized Language Model
☆254 · Updated 6 months ago
Alternatives and similar repositories for reverse-engineering-gemma-3n
Users interested in reverse-engineering-gemma-3n are comparing it to the libraries listed below.
- PyTorch implementation of models from the Zamba2 series. ☆186 · Updated 10 months ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆249 · Updated 10 months ago
- Simple & Scalable Pretraining for Neural Architecture Research ☆305 · Updated 2 weeks ago
- ☆219 · Updated 10 months ago
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆103 · Updated 7 months ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆111 · Updated 7 months ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… (a minimal sketch of this idea follows the list) ☆360 · Updated last year
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research ☆272 · Updated this week
- Storing long contexts in tiny caches with self-study ☆220 · Updated 2 weeks ago
- Open-source release accompanying Gao et al. 2025 ☆450 · Updated last week
- ☆204 · Updated last year
- Experiments on speculative sampling with Llama models ☆127 · Updated 2 years ago
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆349 · Updated 7 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆202 · Updated last year
- Official PyTorch implementation for "Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache" ☆136 · Updated 4 months ago
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆155 · Updated last year
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆173 · Updated 11 months ago
- [ICML 2024] CLLMs: Consistency Large Language Models ☆408 · Updated last year
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆130 · Updated last year
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models ☆226 · Updated last month
- Parallel Scaling Law for Language Models: Beyond Parameter and Inference Time Scaling ☆463 · Updated 7 months ago
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget ☆163 · Updated 4 months ago
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆261 · Updated this week
- PyTorch building blocks for the OLMo ecosystem ☆563 · Updated this week
- An extension of the nanoGPT repository for training small MoE models ☆218 · Updated 9 months ago
- Scalable and robust tree-based speculative decoding algorithm ☆365 · Updated 10 months ago
- ☆610 · Updated this week
- Long-context evaluation for large language models ☆224 · Updated 9 months ago
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆245 · Updated last year
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆257 · Updated last year
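
One entry above, the memory-layers repository, actually describes a mechanism: a trainable key-value lookup that grows parameter count without growing per-token compute. Below is a minimal PyTorch sketch of that general idea, not the linked repo's implementation; it assumes a single flat key table with top-k sparse reads, and every class name, dimension, and default here is illustrative.

```python
# Minimal, illustrative sketch of a trainable key-value memory layer.
# Not taken from any repository listed above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryLayer(nn.Module):
    def __init__(self, dim: int, num_keys: int = 4096, topk: int = 4):
        super().__init__()
        # Learned keys and values: parameter count scales with num_keys,
        # but each token reads only `topk` value rows.
        self.keys = nn.Parameter(torch.randn(num_keys, dim) / dim ** 0.5)
        self.values = nn.Embedding(num_keys, dim)
        self.topk = topk

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim). Score every key, keep the top-k per token.
        scores = x @ self.keys.T                   # (batch, seq, num_keys)
        w, idx = scores.topk(self.topk, dim=-1)    # (batch, seq, topk)
        w = F.softmax(w, dim=-1)                   # normalize the k scores
        v = self.values(idx)                       # sparse read: (batch, seq, topk, dim)
        return (w.unsqueeze(-1) * v).sum(dim=-2)   # weighted sum over the k slots

layer = MemoryLayer(dim=64)
print(layer(torch.randn(2, 8, 64)).shape)  # torch.Size([2, 8, 64])
```

The flat scoring pass still touches all `num_keys` keys; production memory layers typically factor the key table into product keys so that selection costs roughly the square root of the table size, which is what keeps FLOPs near-constant as the memory grows.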