fairydreaming / farel-bench
Testing LLM reasoning abilities with family relationship quizzes.
☆62 · Updated 4 months ago
Alternatives and similar repositories for farel-bench
Users interested in farel-bench are comparing it to the repositories listed below.
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆140 · Updated 4 months ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆101 · Updated 3 months ago
- Low-rank adapter extraction for fine-tuned transformer models ☆173 · Updated last year
- Lightweight toolkit for training and fine-tuning 1.58-bit language models ☆78 · Updated last month
- Video+code lecture on building nanoGPT from scratch ☆68 · Updated last year
- Easy-to-use, high-performance knowledge distillation for LLMs ☆85 · Updated last month
- Simple GRPO scripts and configurations. ☆58 · Updated 4 months ago
- An implementation of Self-Extend, which expands the context window via grouped attention ☆119 · Updated last year
- An introduction to LLM sampling ☆78 · Updated 6 months ago
- Official PyTorch implementation of Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache ☆108 · Updated 2 months ago
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs. ☆41 · Updated last year
- Self-hosted LLM chatbot arena, with yourself as the only judge ☆41 · Updated last year
- Full finetuning of large language models without large memory requirements ☆94 · Updated last year
- RWKV-7: Surpassing GPT ☆91 · Updated 7 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆198 · Updated 11 months ago
- Lego for GRPO ☆28 · Updated 3 weeks ago
- A Python package for serving LLMs on OpenAI-compatible API endpoints with prompt caching, using MLX. ☆85 · Updated last week
- Transplants vocabulary between language models, enabling the creation of draft models for speculative decoding without retraining. ☆31 · Updated 2 months ago
- Train your own SOTA deductive reasoning model ☆94 · Updated 3 months ago
- A pipeline-parallel training script for LLMs. ☆149 · Updated last month
- Entropix-style sampling + GUI ☆26 · Updated 7 months ago
- 1.58-bit LLaMa model ☆81 · Updated last year
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2. ☆154 · Updated last year