kyegomez / EXA-1
An EXA-Scale repository of Multi-Modality AI resources, from papers and models to foundational libraries!
☆42 · Updated last year
Alternatives and similar repositories for EXA-1
Users interested in EXA-1 are comparing it to the libraries listed below.
- A repository of projects and datasets under active development by Alignment Lab AI ☆22 · Updated last year
- Finetune any model on HF in less than 30 seconds ☆58 · Updated last month
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite ☆33 · Updated last year
- A plug-and-play pipeline that uses Segment Anything to segment datasets with rich detail for downstream fine-tuning on vision mod… ☆21 · Updated last year
- Implementation of SelfExtend from the paper "LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning" in PyTorch and Zeta ☆13 · Updated 6 months ago
- An open-source replication of the strawberry method that leverages Monte Carlo Search with PPO and/or DPO ☆29 · Updated this week
- Implementation of Spectral State Space Models ☆16 · Updated last year
- PyTorch implementation of the paper "MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training" ☆23 · Updated 3 weeks ago
- The Next Generation Multi-Modality Superintelligence ☆71 · Updated 8 months ago
- MPI Code Generation through Domain-Specific Language Models ☆13 · Updated 5 months ago
- GoldFinch and other hybrid transformer components ☆45 · Updated 9 months ago
- Using multiple LLMs for ensemble forecasting ☆16 · Updated last year
- Public reports detailing responses to sets of prompts by Large Language Models ☆30 · Updated 4 months ago
- ☆37 · Updated 2 years ago
- Training hybrid models for dummies ☆21 · Updated 4 months ago
- Nexusflow function call, tool use, and agent benchmarks ☆19 · Updated 5 months ago
- A pipeline for using API calls to agnostically convert unstructured data into structured training data ☆30 · Updated 7 months ago
- Advanced Reasoning Benchmark Dataset for LLMs ☆46 · Updated last year
- ☆63 · Updated 7 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated 11 months ago
- Demonstration that fine-tuning a RoPE model on sequences longer than the pre-training length adapts the model's context limit ☆63 · Updated last year
- A data-centric AI package for ML/AI. Get the best high-quality data for the best results. Discord: https://discord.gg/t6ADqBKrdZ ☆64 · Updated last year
- Data preparation code for the CrystalCoder 7B LLM ☆44 · Updated last year
- A forest of autonomous agents ☆19 · Updated 3 months ago
- A library for simplifying fine-tuning with multi-GPU setups in the Hugging Face ecosystem ☆16 · Updated 6 months ago
- LLMs as Collaboratively Edited Knowledge Bases ☆45 · Updated last year
- ☆22 · Updated last year
- ☆26 · Updated 2 years ago
- Latent Large Language Models ☆18 · Updated 8 months ago
- Implementation of Mind Evolution ("Evolving Deeper LLM Thinking") from DeepMind ☆50 · Updated 3 months ago