facebookresearch / dual-system-for-visual-language-reasoning
GitHub repo for Peifeng's internship project
☆13 · Updated last year
Alternatives and similar repositories for dual-system-for-visual-language-reasoning
Users interested in dual-system-for-visual-language-reasoning are comparing it to the repositories listed below.
- ☆63 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆34 · Updated last year
- Data preparation code for CrystalCoder 7B LLM ☆45 · Updated last year
- ☆23 · Updated last year
- ☆13 · Updated 9 months ago
- Finetune any model on HF in less than 30 seconds ☆57 · Updated 2 weeks ago
- Lottery Ticket Adaptation ☆39 · Updated 10 months ago
- ☆35 · Updated 2 years ago
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆70 · Updated 2 years ago
- Demonstration that finetuning a RoPE model on sequences longer than its pretraining length extends the model's context limit ☆63 · Updated 2 years ago
- This repository contains the code for the paper "SirLLM: Streaming Infinite Retentive LLM" ☆60 · Updated last year
- The Benefits of a Concise Chain of Thought on Problem Solving in Large Language Models ☆22 · Updated 10 months ago
- NeurIPS 2023 - Cappy: Outperforming and Boosting Large Multi-Task LMs with a Small Scorer ☆43 · Updated last year
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆116 · Updated last year
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated last year
- A repository for research on medium-sized language models. ☆77 · Updated last year
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆40 · Updated 10 months ago
- Cerule - A Tiny Mighty Vision Model ☆68 · Updated last year
- ☆21 · Updated last year
- ☆39 · Updated last year
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆59 · Updated last year
- Distill ChatGPT's coding ability into a small model (1B) ☆30 · Updated 2 years ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆99 · Updated 11 months ago
- ☆15 · Updated last year
- Repo hosting code and materials related to speeding up LLM inference using token merging. ☆36 · Updated 2 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated last year
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models ☆119 · Updated last year
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated 2 years ago
- Challenge LLMs to Reason About Reasoning: A Benchmark to Unveil Cognitive Depth in LLMs ☆50 · Updated last year
- Latent Large Language Models ☆18 · Updated last year