kyegomez / Kosmos-X
The Next Generation Multi-Modality Superintelligence
☆71 · Updated 9 months ago
Alternatives and similar repositories for Kosmos-X
Users interested in Kosmos-X are comparing it to the libraries listed below.
- Finetune any model on HF in less than 30 seconds ☆57 · Updated 2 months ago
- An all-new Language Model That Processes Ultra-Long Sequences of 100,000+ Ultra-Fast ☆149 · Updated 9 months ago
- An Implementation of "Orca: Progressive Learning from Complex Explanation Traces of GPT-4" ☆43 · Updated 8 months ago
- ☆36 · Updated 2 years ago
- ☆33 · Updated 2 years ago
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆69 · Updated last year
- ☆73 · Updated last year
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year
- ☆22 · Updated last year
- Multi-Domain Expert Learning ☆67 · Updated last year
- ☆54 · Updated last year
- Demonstration that finetuning a RoPE model on sequences longer than those seen in pre-training adapts the model's context limit ☆63 · Updated 2 years ago
- A Google Colab notebook for fine-tuning Alpaca LoRA (within 3 hours on a 40GB A100 GPU) ☆38 · Updated 2 years ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated last year
- Experimental sampler to make LLMs more creative ☆31 · Updated last year
- Using multiple LLMs for ensemble forecasting ☆16 · Updated last year
- ☆37 · Updated 2 years ago
- ☆23 · Updated last year
- Data preparation code for the CrystalCoder 7B LLM ☆45 · Updated last year
- Learning to Program with Natural Language ☆6 · Updated last year
- ☆34 · Updated last year
- Image-diffusion block-merging technique applied to transformer-based language models ☆54 · Updated 2 years ago
- Tools for content datamining and NLP at scale ☆43 · Updated last year
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆103 · Updated last month
- LLMs as Collaboratively Edited Knowledge Bases ☆45 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite ☆33 · Updated last year
- Track the progress of LLM context utilisation ☆54 · Updated 2 months ago
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ☆168 · Updated last year
- A simple package for leveraging Falcon 180B and the HF ecosystem's tools, including training/inference scripts, safetensors, integrations… ☆13 · Updated last year
- Script for processing OpenAI's PRM800K process-supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated last year