LLM360 / k2-train
☆36 · Updated 5 months ago
Related projects
Alternatives and complementary repositories for k2-train
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆74 · Updated 2 weeks ago
- ☆61 · Updated 2 months ago
- ☆26 · Updated 4 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆44 · Updated 9 months ago
- ☆50 · Updated last week
- ☆34 · Updated 8 months ago
- ☆49 · Updated 6 months ago
- ☆20 · Updated this week
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆33 · Updated 8 months ago
- ☆101 · Updated last month
- A repository for research on medium-sized language models. ☆74 · Updated 5 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆129 · Updated last month
- ☆44 · Updated 2 months ago
- ☆62 · Updated last month
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆104 · Updated last month
- Code for training and evaluating Contextual Document Embedding models ☆92 · Updated this week
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google, in PyTorch ☆51 · Updated this week
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al.; arXiv preprint arXiv:2403.…) ☆36 · Updated 3 months ago
- The official repository for Inheritune. ☆105 · Updated last month
- The first dense retrieval model that can be prompted like an LM ☆62 · Updated last month
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆46 · Updated 2 months ago
- Experiments from efforts to train a new and improved T5 ☆76 · Updated 6 months ago
- ☆100 · Updated 3 months ago
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore". ☆128 · Updated this week
- A pipeline for LLM knowledge distillation ☆77 · Updated 3 months ago
- ☆68 · Updated 2 months ago
- Code for the EMNLP 2024 paper "Learn Beyond The Answer: Training Language Models with Reflection for Mathematical Reasoning" ☆45 · Updated last month
- ☆124 · Updated 6 months ago
- ☆62 · Updated 4 months ago
- Layer-Condensed KV cache with 10× larger batch size, fewer parameters, and less computation. Dramatic speed-up with better task performance… ☆137 · Updated this week