Pints-AI / 1.5-Pints
A compact LLM pretrained in 9 days using high-quality data
☆314 · Updated 2 months ago
Alternatives and similar repositories for 1.5-Pints
Users interested in 1.5-Pints are comparing it to the libraries listed below.
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language Models ☆225 · Updated 7 months ago
- An extension of the nanoGPT repository for training small MoE models. ☆152 · Updated 3 months ago
- A project to improve the skills of large language models ☆429 · Updated this week
- The official evaluation suite and dynamic data release for MixEval. ☆242 · Updated 7 months ago
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆238 · Updated last year
- A pipeline for LLM knowledge distillation ☆104 · Updated 2 months ago
- Minimal GRPO implementation from scratch (a sketch of the core advantage step appears after this list) ☆90 · Updated 3 months ago
- ☆520 · Updated 7 months ago
- ☆118 · Updated 9 months ago
- Official PyTorch implementation for Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache ☆108 · Updated 2 months ago
- ☆132 · Updated 10 months ago
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget ☆152 · Updated last year
- Code for training and evaluating Contextual Document Embedding models ☆195 · Updated last month
- PyTorch building blocks for the OLMo ecosystem ☆238 · Updated this week
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆173 · Updated 5 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆198 · Updated 11 months ago
- Easy-to-use, high-performance knowledge distillation for LLMs ☆85 · Updated last month
- ☆124 · Updated 2 months ago
- A library for easily merging multiple LLM experts and efficiently training the merged LLM. ☆482 · Updated 9 months ago
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆311 · Updated last month
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆140 · Updated 4 months ago
- Lightweight toolkit for training and fine-tuning 1.58-bit language models ☆78 · Updated last month
- Experiments on speculative sampling with Llama models ☆128 · Updated 2 years ago
- A bagel, with everything. ☆321 · Updated last year
- prime-rl is a codebase for decentralized async RL training at scale ☆341 · Updated this week
- ☆53 · Updated last year
- awesome synthetic (text) datasets ☆282 · Updated 7 months ago
- Automatic evals for LLMs ☆437 · Updated 2 weeks ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B for free ☆231 · Updated 7 months ago
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore". ☆205 · Updated 2 weeks ago
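
As context for the GRPO entry above: GRPO (Group Relative Policy Optimization) scores each sampled completion against the other completions drawn for the same prompt, so no learned value model is needed. The sketch below is illustrative only, not code from the linked repository; the `grpo_advantages` helper, the tensor shapes, and the example rewards are all assumptions.

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Group-relative advantages: normalize each completion's reward
    against the mean/std of its group (all completions for one prompt).

    rewards: shape (num_prompts, group_size), one scalar reward per completion.
    Returns advantages of the same shape.
    """
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Hypothetical example: 2 prompts, 4 sampled completions each.
rewards = torch.tensor([[1.0, 0.0, 0.5, 1.0],
                        [0.2, 0.8, 0.2, 0.2]])
adv = grpo_advantages(rewards)
print(adv)  # completions above their group mean get positive advantage
```

In full GRPO these advantages typically weight a clipped, PPO-style policy-gradient loss, usually with a KL penalty toward a reference model.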