kyegomez / EXA-1
An EXA-Scale repository of Multi-Modality AI resources from papers and models, to foundational libraries!
☆39Updated last year
Alternatives and similar repositories for EXA-1
Users interested in EXA-1 are comparing it to the libraries listed below.
- Finetune any model on HF in less than 30 seconds☆55Updated last month
- The Next Generation Multi-Modality Superintelligence☆69Updated last year
- Implementation of SelfExtend from the paper "LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning" in PyTorch and Zeta☆12Updated 11 months ago
- ☆61Updated last year
- An open source replication of the strawberry method that leverages Monte Carlo Search with PPO and/or DPO☆29Updated this week
- ☆63Updated last year
- A simple package for leveraging Falcon 180B and the HF ecosystem's tools, including training/inference scripts, safetensors, integrations…☆11Updated last year
- Training hybrid models for dummies.☆26Updated 2 weeks ago
- GoldFinch and other hybrid transformer components☆45Updated last year
- Simplex Random Feature attention, in PyTorch☆73Updated 2 years ago
- Implementation of the LDP module block in PyTorch and Zeta from the paper: "MobileVLM: A Fast, Strong and Open Vision Language Assistant …☆14Updated last year
- NeurIPS 2023 - Cappy: Outperforming and Boosting Large Multi-Task LMs with a Small Scorer☆44Updated last year
- Demonstration that finetuning RoPE model on larger sequences than the pre-trained model adapts the model context limit☆62Updated 2 years ago
- Implementation of a holodeck, written in Pytorch☆18Updated last year
- https://x.com/BlinkDL_AI/status/1884768989743882276☆28Updated 5 months ago
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's Pytorch Lightning suite.☆34Updated last year
- An all-new Language Model That Processes Ultra-Long Sequences of 100,000+ Ultra-Fast☆151Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks☆31Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models☆69Updated 2 years ago
- A plug-and-play pipeline that utilizes Segment Anything to segment datasets with rich detail for downstream fine-tuning on vision mod…☆19Updated last year
- A Data Source for Reasoning Embodied Agents☆19Updated 2 years ago
- Implementation of the Mamba SSM with hf_integration.☆56Updated last year
- PyTorch Implementation of the paper "MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training"☆24Updated 2 weeks ago
- Implementation of Mind Evolution, Evolving Deeper LLM Thinking, from Deepmind☆57Updated 4 months ago
- Latent Diffusion Language Models☆68Updated 2 years ago
- A repository for research on medium sized language models.☆78Updated last year
- ☆53Updated last year
- Unofficial implementation and experiments related to Set-of-Mark (SoM) 👁️☆87Updated last year
- RWKV model implementation☆38Updated 2 years ago
- Cerule - A Tiny Mighty Vision Model☆67Updated last year