kyegomez / Kosmos-X
The Next Generation Multi-Modality Superintelligence
Related projects:
- An all-new language model that processes ultra-long sequences of 100,000+ tokens, ultra-fast
- Small and Efficient Mathematical Reasoning LLMs
- Finetune any model on Hugging Face in less than 30 seconds
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA (a minimal LoRA setup is sketched after this list)
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite
- Official code for the ACL 2023 (short, findings) paper "Recursion of Thought: A Divide and Conquer Approach to Multi-Context Reasoning with Language Models"
- "Improving Mathematical Reasoning with Process Supervision" by OPENAI☆55Updated last week
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks
- Data preparation code for the CrystalCoder 7B LLM
- Evaluating LLMs with CommonGen-Lite
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first approach
- Mixing Language Models with Self-Verification and Meta-Verification
- A repository for research on medium-sized language models
- Set of scripts to finetune LLMs
- Learning to Program with Natural Language
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format (the shape of such a transform is sketched after this list)
- QLoRA with Enhanced Multi-GPU Support
- The GeoV model is a large language model designed by Georges Harik; it uses Rotary Positional Embeddings with Relative distances (RoPER)
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on extends the model's context limit (see the RoPE sketch after this list)
- 🚀 Automatically convert unstructured data into a high-quality 'textbook' format, optimized for fine-tuning Large Language Models (LLMs)
- Multi-Domain Expert Learning
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" (the Λ-shaped attention mask is sketched after this list)
- Tools for content data mining and NLP at scale
- An implementation of "Orca: Progressive Learning from Complex Explanation Traces of GPT-4"
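
For the PEFT LoRA finetuning entry above, here is a minimal sketch of wrapping a causal LM with LoRA adapters via Hugging Face's `peft` library. The model name and hyperparameters are illustrative assumptions, not the linked project's actual configuration:

```python
# Minimal LoRA finetuning setup with Hugging Face PEFT.
# Model choice and hyperparameters are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "tiiuae/falcon-7b"  # assumption: any causal LM on the Hub works similarly
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Freeze the base model and inject low-rank adapters into the
# attention projections; only the adapter weights are trained.
lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor applied to the update
    target_modules=["query_key_value"],   # Falcon's fused attention projection
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```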
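The PRM800K conversion script above amounts to flattening each problem's rated solution steps into Alpaca's three-field instruction format. A hedged sketch of the shape of such a transform follows; the PRM800K field names used here (`problem`, `steps`, `rating`) are assumptions for illustration, not the dataset's exact schema:

```python
import json

def to_alpaca(record):
    """Map one process-supervision record to Alpaca's
    instruction/input/output triple. Field names on `record`
    are assumed for illustration; the real PRM800K schema
    differs in detail."""
    # Keep only positively rated steps as the reference solution.
    good_steps = [s["text"] for s in record["steps"] if s.get("rating", 0) > 0]
    return {
        "instruction": "Solve the following math problem step by step.",
        "input": record["problem"],
        "output": "\n".join(good_steps),
    }

with open("prm800k.jsonl") as src, open("alpaca.jsonl", "w") as dst:
    for line in src:
        dst.write(json.dumps(to_alpaca(json.loads(line))) + "\n")
```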
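The RoPE context-extension demonstration rests on the fact that rotary embeddings are computed from position indices, so the same formula runs at any sequence length. Below is a sketch of plain RoPE angle computation with an optional linear position-interpolation scale; the scaling trick is an assumption here, one common way such long-context finetuning is done, not necessarily what the linked repo does:

```python
import torch

def rope_frequencies(seq_len, head_dim, base=10000.0, scale=1.0):
    """Rotary embedding angles for positions 0..seq_len-1.

    `scale` > 1 linearly compresses positions (position interpolation),
    a common trick when finetuning past the pre-trained context length;
    scale=1.0 reproduces standard RoPE."""
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    positions = torch.arange(seq_len).float() / scale
    angles = torch.outer(positions, inv_freq)  # (seq_len, head_dim/2)
    return angles.cos(), angles.sin()

# The same code serves a 2k pre-training context and an 8k finetuning context:
cos_2k, sin_2k = rope_frequencies(2048, head_dim=64)
cos_8k, sin_8k = rope_frequencies(8192, head_dim=64, scale=4.0)
```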
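LM-Infinite's length generalization comes from a Λ-shaped attention mask: every query attends to a few global tokens at the start of the sequence plus a sliding local window, and nothing in between. A sketch of building such a mask is below; the window sizes are illustrative assumptions, not the paper's exact values:

```python
import torch

def lambda_mask(seq_len, n_global=10, window=2048):
    """Boolean attention mask in the spirit of LM-Infinite's
    Lambda-shaped pattern: query i may attend to key j iff
    j <= i (causal) and (j < n_global or i - j < window)."""
    i = torch.arange(seq_len).unsqueeze(1)  # query positions, column vector
    j = torch.arange(seq_len).unsqueeze(0)  # key positions, row vector
    causal = j <= i
    branch = (j < n_global) | ((i - j) < window)
    return causal & branch  # (seq_len, seq_len), True = attend

mask = lambda_mask(4096)
print(mask.float().mean())  # fraction of attention pairs kept
```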