bjoernpl / lm-evaluation-harness-de
A framework for few-shot evaluation of autoregressive language models.
☆13 · Updated 7 months ago
Related projects:
- A repository containing the code for translating popular LLM benchmarks to German. ☆22 · Updated last year
- ☆75 · Updated 3 weeks ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆217 · Updated 2 months ago
- Let's build better datasets, together! ☆195 · Updated last month
- awesome synthetic (text) datasets ☆213 · Updated last week
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆177 · Updated 4 months ago
- Comprehensive analysis of the differences in performance of QLoRA, LoRA, and full fine-tunes. ☆81 · Updated last year
- An Open Source Toolkit For LLM Distillation ☆284 · Updated last month
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆217 · Updated 6 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆158 · Updated 2 months ago
- Experiments with inference on Llama ☆106 · Updated 3 months ago
- Spherical merging of PyTorch/HF-format language models with minimal feature loss. ☆107 · Updated last year
- ☆89 · Updated 11 months ago
- Just a bunch of benchmark logs for different LLMs ☆112 · Updated last month
- ☆92 · Updated last year
- Vision Document Retrieval (ViDoRe): Benchmark. Evaluation code for the ColPali paper. ☆101 · Updated last week
- Toolkit for attaching, training, saving and loading of new heads for transformer models ☆237 · Updated last week
- ☆82 · Updated 3 weeks ago
- Prune transformer layers ☆60 · Updated 3 months ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆170 · Updated last month
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆118 · Updated last week
- A simple unified framework for evaluating LLMs ☆121 · Updated this week
- BABILong is a benchmark for LLM evaluation using the needle-in-a-haystack approach. ☆139 · Updated 3 weeks ago
- A set of scripts and notebooks on LLM finetuning and dataset creation ☆89 · Updated last week
- ☆201 · Updated 7 months ago
- Low-rank adapter extraction for fine-tuned transformer models ☆154 · Updated 4 months ago
- Steer LLM outputs towards a certain topic/subject and enhance response capabilities using activation engineering by adding steering vecto… ☆192 · Updated 4 months ago
- Set of scripts to finetune LLMs ☆36 · Updated 5 months ago
- RAGElo is a set of tools that helps you select the best RAG-based LLM agents using an Elo ranker ☆101 · Updated last week
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆128 · Updated this week