omegalabsinc / omegalabs-bittensor-subnet
The World's Largest Decentralized AGI Multimodal Dataset
☆58 · Updated last month
Alternatives and similar repositories for omegalabs-bittensor-subnet
Users interested in omegalabs-bittensor-subnet are comparing it to the libraries listed below.
- Making the world's first and smartest open-source any-to-any AGI system ☆44 · Updated 2 months ago
- Cerule - A Tiny Mighty Vision Model ☆68 · Updated 2 months ago
- Unofficial implementation and experiments related to Set-of-Mark (SoM) 👁️ ☆88 · Updated 2 years ago
- ☆87 · Updated last year
- ☆63 · Updated last year
- Mixing Language Models with Self-Verification and Meta-Verification ☆111 · Updated last year
- Maya: An Instruction Finetuned Multilingual Multimodal Model using Aya ☆125 · Updated 5 months ago
- Simple Implementation of TinyGPTV in super simple Zeta lego blocks ☆16 · Updated last year
- An implementation of Self-Extend, to expand the context window via grouped attention ☆119 · Updated 2 years ago
- The open-source code of MetaStone-S1. ☆106 · Updated 5 months ago
- Data preparation code for CrystalCoder 7B LLM ☆45 · Updated last year
- Code for the paper: Harnessing Webpage UIs for Text-Rich Visual Understanding ☆53 · Updated last year
- Implementation of Mind Evolution, Evolving Deeper LLM Thinking, from DeepMind ☆60 · Updated 7 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆59 · Updated 3 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆102 · Updated last year
- This repository contains the code for the paper: SirLLM: Streaming Infinite Retentive LLM ☆60 · Updated last year
- A family of highly capable yet efficient large multimodal models ☆191 · Updated last year
- Data preparation code for Amber 7B LLM ☆94 · Updated last year
- NeurIPS 2023 - Cappy: Outperforming and Boosting Large Multi-Task LMs with a Small Scorer ☆45 · Updated last year
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year
- ☆101 · Updated last year
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts ☆226 · Updated 4 months ago
- An EXA-Scale repository of Multi-Modality AI resources, from papers and models to foundational libraries! ☆40 · Updated last year
- ☆81 · Updated last year
- PyTorch implementation of models from the Zamba2 series. ☆186 · Updated 11 months ago
- Maybe the new state-of-the-art vision model? We'll see 🤷‍♂️ ☆170 · Updated 2 years ago
- A repository for research on medium-sized language models. ☆77 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆35 · Updated last year
- Repo hosting code and materials related to speeding up LLMs' inference using token merging. ☆37 · Updated 3 months ago
- Finetune any model on HF in less than 30 seconds ☆56 · Updated last week