kyegomez / EXA-1
An EXA-scale repository of multi-modality AI resources, from papers and models to foundational libraries!
☆41 · Updated last year
Alternatives and similar repositories for EXA-1
Users interested in EXA-1 are comparing it to the libraries listed below:
- The Next Generation Multi-Modality Superintelligence ☆71 · Updated 9 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆19 · Updated 2 years ago
- Implementation of SelfExtend from the paper "LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning" in PyTorch and Zeta ☆13 · Updated 6 months ago
- Finetune any model on HF in less than 30 seconds ☆58 · Updated 2 months ago
- ☆54 · Updated last year
- ☆22 · Updated last year
- ☆36 · Updated 2 years ago
- ☆26 · Updated 2 years ago
- ☆33 · Updated 2 years ago
- A plug-and-play pipeline that uses Segment Anything to segment datasets with rich detail for downstream fine-tuning on vision mod… ☆21 · Updated last year
- A repository of projects and datasets under active development by Alignment Lab AI ☆22 · Updated last year
- ☆13 · Updated 2 years ago
- Implementation of a holodeck, written in PyTorch ☆18 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite ☆33 · Updated last year
- ☆60 · Updated last year
- PyTorch Implementation of the paper "MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training" ☆23 · Updated last week
- Training hybrid models for dummies. ☆21 · Updated 4 months ago
- Fast approximate inference on a single GPU with sparsity-aware offloading ☆38 · Updated last year
- ☆29 · Updated last year
- A Data Source for Reasoning Embodied Agents ☆19 · Updated last year
- ☆20 · Updated last year
- ☆32 · Updated 2 years ago
- BH hackathon ☆14 · Updated last year
- Learning to Program with Natural Language ☆6 · Updated last year
- A library for simplifying fine-tuning with multi-GPU setups in the Hugging Face ecosystem ☆16 · Updated 7 months ago
- Command-line script for running inference with models such as LLaMA in a chat scenario, with LoRA adaptations ☆33 · Updated 2 years ago
- A pipeline that uses API calls to agnostically convert unstructured data into structured training data ☆30 · Updated 8 months ago
- Demonstration that fine-tuning a RoPE model on sequences longer than its pre-training length adapts the model's context limit ☆63 · Updated last year
- Latent Large Language Models ☆18 · Updated 9 months ago
- GoldFinch and other hybrid transformer components ☆45 · Updated 10 months ago