cornstarch-org / Cornstarch
☆89 · Updated last week
Alternatives and similar repositories for Cornstarch:
Users interested in Cornstarch are comparing it to the repositories listed below.
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024☆286Updated last week
- minimal GRPO implementation from scratch☆85Updated last month
- A minimal implementation of LLaVA-style VLM with interleaved image & text & video processing ability.☆91Updated 4 months ago
- Code for ExploreToM ☆79 · Updated 4 months ago
- Curie: Automated and Rigorous Scientific Experimentation with AI Agents ☆77 · Updated this week
- Cray-LM unified training and inference stack. ☆22 · Updated 2 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆171 · Updated 3 months ago
- Training-free Post-training Efficient Sub-quadratic Complexity Attention. Implemented with OpenAI Triton. ☆127 · Updated this week
- Set of scripts to finetune LLMs ☆37 · Updated last year
- ☆46 · Updated 5 months ago
- Train your own SOTA deductive reasoning model ☆88 · Updated last month
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆198 · Updated 9 months ago
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆66 · Updated this week
- ☆64 · Updated 2 months ago
- ☆129 · Updated 8 months ago
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆116 · Updated 10 months ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆88 · Updated this week
- Maya: An Instruction Finetuned Multilingual Multimodal Model using Aya ☆108 · Updated 2 months ago
- Official repository for the paper "Flora: Low-Rank Adapters Are Secretly Gradient Compressors" (ICML 2024). ☆103 · Updated 9 months ago
- Official repository for "Scaling Retrieval-Based Langauge Models with a Trillion-Token Datastore".☆196Updated 2 weeks ago
- Repo hosting code and materials on speeding up LLM inference via token merging. ☆36 · Updated 11 months ago
- $100K or 100 Days: Trade-offs when Pre-Training with Academic Resources ☆135 · Updated last month
- Train, tune, and infer Bamba model ☆88 · Updated this week
- An extension of the nanoGPT repository for training small MoE models. ☆131 · Updated last month
- Reward-guided Speculative Decoding (RSD) for efficiency and effectiveness. ☆25 · Updated last month
- Source code for the collaborative reasoner research project at Meta FAIR. ☆33 · Updated last week
- Official PyTorch implementation for Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache ☆89 · Updated last week
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research ☆178 · Updated this week
- Systematic evaluation framework that automatically rates overthinking behavior in large language models. ☆86 · Updated 2 weeks ago
- Code for studying the super weight in LLM ☆98 · Updated 4 months ago