facebookresearch / dual-system-for-visual-language-reasoning
GitHub repo for Peifeng's internship project
☆14 · Updated last year
Alternatives and similar repositories for dual-system-for-visual-language-reasoning:
Users interested in dual-system-for-visual-language-reasoning are comparing it to the libraries listed below.
- ☆13 · Updated 3 months ago
- Data preparation code for the CrystalCoder 7B LLM ☆44 · Updated 10 months ago
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆33 · Updated last year
- The Benefits of a Concise Chain of Thought on Problem Solving in Large Language Models ☆21 · Updated 4 months ago
- ☆63 · Updated 6 months ago
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated last year
- Lottery Ticket Adaptation ☆38 · Updated 4 months ago
- ☆16 · Updated 3 weeks ago
- ☆19 · Updated 3 weeks ago
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated last year
- ☆32 · Updated 3 weeks ago
- ☆13 · Updated last year
- ☆20 · Updated 9 months ago
- Repo hosting code and materials related to speeding up LLM inference using token merging. ☆35 · Updated 11 months ago
- OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement ☆50 · Updated this week
- ☆13 · Updated last year
- ☆32 · Updated 9 months ago
- ☆36 · Updated 2 years ago
- ☆24 · Updated 6 months ago
- An open-source replication of the strawberry method that leverages Monte Carlo search with PPO and/or DPO ☆28 · Updated 3 weeks ago
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) ☆26 · Updated 3 weeks ago
- Finetune any model on HF in less than 30 seconds ☆58 · Updated 2 months ago
- Official implementation of "Gemini in Reasoning: Unveiling Commonsense in Multimodal Large Language Models" ☆36 · Updated last year
- PyTorch implementation of HyperLLaVA: Dynamic Visual and Language Expert Tuning for Multimodal Large Language Models ☆28 · Updated last year
- Official implementation of the paper "MMInA: Benchmarking Multihop Multimodal Internet Agents" ☆41 · Updated last month
- Official implementation of the ECCV24 paper POA ☆24 · Updated 7 months ago
- ☆42 · Updated this week
- Demonstration that finetuning a RoPE model on sequences longer than those seen in pre-training extends the model's context limit ☆63 · Updated last year
- Fast LLM training codebase with dynamic strategy selection (DeepSpeed + Megatron + FlashAttention + CUDA fusion kernels + compiler) ☆36 · Updated last year
- FuseAI Project ☆84 · Updated 2 months ago