kyegomez / Kosmos-X
The Next Generation Multi-Modality Superintelligence
☆69 · Updated last year
Alternatives and similar repositories for Kosmos-X
Users interested in Kosmos-X are comparing it to the libraries listed below.
- Finetune any model on HF in less than 30 seconds ☆55 · Updated 3 weeks ago
- An all-new Language Model That Processes Ultra-Long Sequences of 100,000+ Ultra-Fast ☆150 · Updated last year
- ☆35 · Updated 2 years ago
- An EXA-Scale repository of Multi-Modality AI resources, from papers and models to foundational libraries! ☆39 · Updated last year
- Mixing Language Models with Self-Verification and Meta-Verification ☆109 · Updated 11 months ago
- ☆63 · Updated last year
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ☆169 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆69 · Updated 2 years ago
- Unofficial implementation and experiments related to Set-of-Mark (SoM) 👁️ ☆87 · Updated 2 years ago
- ☆53 · Updated last year
- Using multiple LLMs for ensemble forecasting ☆16 · Updated last year
- Small and Efficient Mathematical Reasoning LLMs ☆72 · Updated last year
- An implementation of "Orca: Progressive Learning from Complex Explanation Traces of GPT-4" ☆43 · Updated last year
- Cerule - A Tiny Mighty Vision Model ☆67 · Updated last week
- ☆73 · Updated 2 years ago
- Memoria is a human-inspired memory architecture for neural networks. ☆77 · Updated last year
- A Data Source for Reasoning Embodied Agents ☆19 · Updated 2 years ago
- An open-source replication of the strawberry method that leverages Monte Carlo Search with PPO and/or DPO ☆29 · Updated this week
- Track the progress of LLM context utilisation ☆54 · Updated 7 months ago
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite ☆35 · Updated last year
- GPT-2 small trained on phi-like data ☆67 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated last year
- A simple package for leveraging Falcon 180B and the HF ecosystem's tools, including training/inference scripts, safetensors, integrations… ☆11 · Updated last year
- Data preparation code for CrystalCoder 7B LLM ☆45 · Updated last year
- Evaluating LLMs with CommonGen-Lite ☆91 · Updated last year
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆103 · Updated 5 months ago
- Image Diffusion block merging technique applied to transformer-based Language Models ☆55 · Updated 2 years ago
- ☆32 · Updated 2 years ago
- ☆26 · Updated 2 years ago
- ☆55 · Updated last year