facebookresearch / RAM
A framework to study AI models in Reasoning, Alignment, and use of Memory (RAM).
Related projects:
- Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies (https://arxiv.org/abs/2407.13623)
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval"
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data"
- Official implementation of the paper "DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving"
- Official implementation of the paper "What Matters in Transformers? Not All Attention is Needed"
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs