kyegomez / Kosmos-X
The Next Generation Multi-Modality Superintelligence
☆71 · Updated 8 months ago
Alternatives and similar repositories for Kosmos-X:
Users interested in Kosmos-X are comparing it to the libraries listed below:
- An all-new Language Model That Processes Ultra-Long Sequences of 100,000+ Ultra-Fast ☆148 · Updated 8 months ago
- Finetune any model on HF in less than 30 seconds ☆58 · Updated last month
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated 11 months ago
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆33 · Updated last year
- ☆54 · Updated last year
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year
- Using multiple LLMs for ensemble forecasting ☆16 · Updated last year
- Evaluating LLMs with CommonGen-Lite ☆90 · Updated last year
- ☆37 · Updated last year
- Here is a Google Colab Notebook for fine-tuning Alpaca Lora (within 3 hours with a 40GB A100 GPU) ☆38 · Updated 2 years ago
- ☆17 · Updated last week
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ☆168 · Updated last year
- Learning to Program with Natural Language ☆6 · Updated last year
- ☆73 · Updated last year
- Cerule - A Tiny Mighty Vision Model ☆67 · Updated 8 months ago
- ☆48 · Updated 6 months ago
- Experimental sampler to make LLMs more creative ☆31 · Updated last year
- Merge LLMs that are split into parts ☆26 · Updated last year
- Pre-training code for CrystalCoder 7B LLM ☆54 · Updated 11 months ago
- ☆37 · Updated 2 years ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆78 · Updated last year
- Data preparation code for CrystalCoder 7B LLM ☆44 · Updated 11 months ago
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆69 · Updated last year
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on adapts the model's context limit ☆63 · Updated last year
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆103 · Updated 9 months ago
- ☆63 · Updated 7 months ago
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated 9 months ago
- Public Inflection Benchmarks ☆68 · Updated last year
- A data-centric AI package for ML/AI. Get the best high-quality data for the best results. Discord: https://discord.gg/t6ADqBKrdZ ☆64 · Updated last year
- ☆20 · Updated last year