fairydreaming / farel-bench
Testing LLM reasoning abilities with family relationship quizzes.
☆57 · Updated this week
Alternatives and similar repositories for farel-bench:
Users interested in farel-bench are comparing it to the repositories listed below.
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆120 · Updated this week
- Low-Rank adapter extraction for fine-tuned transformer models ☆167 · Updated 8 months ago
- ☆122 · Updated 5 months ago
- ☆65 · Updated 8 months ago
- ☆109 · Updated last month
- Easy-to-use, high-performance knowledge distillation for LLMs ☆40 · Updated 2 weeks ago
- Full finetuning of large language models without large memory requirements ☆93 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs ☆38 · Updated 8 months ago
- Video + code lecture on building nanoGPT from scratch ☆65 · Updated 7 months ago
- ☆110 · Updated 4 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆190 · Updated 6 months ago
- look how they massacred my boy ☆63 · Updated 3 months ago
- A pipeline-parallel training script for LLMs ☆121 · Updated this week
- An introduction to LLM sampling ☆75 · Updated last month
- An implementation of Self-Extend, expanding the context window via grouped attention ☆118 · Updated last year
- A set of scripts to finetune LLMs ☆36 · Updated 10 months ago
- ☆49 · Updated 10 months ago
- Distributed inference for MLX LLMs ☆79 · Updated 5 months ago
- Large Model Proxy is designed to make it easy to run multiple resource-heavy Large Models (LM) on the same machine with a limited amount of… ☆49 · Updated 3 months ago
- Scripts to create your own MoE models using MLX ☆86 · Updated 11 months ago
- Fast approximate inference on a single GPU with sparsity-aware offloading ☆38 · Updated last year
- Inference code for mixtral-8x7b-32kseqlen ☆99 · Updated last year
- AnyModal is a flexible multimodal language model framework for PyTorch ☆81 · Updated last month
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget ☆140 · Updated 10 months ago
- OpenCoconut implements a latent reasoning paradigm that generates thoughts before decoding ☆158 · Updated 2 weeks ago
- Our own implementation of "Layer Selective Rank Reduction" ☆232 · Updated 8 months ago
- Simple examples using Argilla tools to build AI ☆52 · Updated 2 months ago
- A 1.58-bit LLaMA model ☆80 · Updated 9 months ago
- Embed arbitrary modalities (images, audio, documents, etc.) into large language models ☆177 · Updated 10 months ago
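One entry above implements Self-Extend, which expands an LLM's context window via grouped attention. The core of that technique is a remapping of relative positions: exact positions are kept inside a small neighbor window, while more distant positions are merged into groups by floor division. A minimal sketch of that remapping (the function name and default parameters are illustrative, not taken from the listed repository):

```python
def self_extend_rel_pos(rel_pos: int, group_size: int = 4, neighbor_window: int = 8) -> int:
    """Remap a relative position as in the Self-Extend scheme (sketch).

    Positions within the neighbor window are kept exact; positions beyond it
    are floor-divided by the group size and shifted so the grouped region
    continues seamlessly from the edge of the exact region. This lets a model
    trained on short contexts reuse its learned position range for inputs
    roughly group_size times longer.
    """
    if rel_pos <= neighbor_window:
        # Nearby tokens keep their exact relative positions.
        return rel_pos
    # Distant tokens share grouped positions; the shift makes the grouped
    # region start right where the exact region ends.
    return rel_pos // group_size + neighbor_window - neighbor_window // group_size
```

For example, with a group size of 4 and a neighbor window of 8, positions 9 through 11 all map to 8, so a pretrained position range of N can cover roughly 4×N tokens at the cost of coarser long-range position resolution.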