bjoernpl / GermanBenchmark
A repository containing the code for translating popular LLM benchmarks to German.
☆29 · Updated 2 years ago
Alternatives and similar repositories for GermanBenchmark
Users interested in GermanBenchmark are comparing it to the libraries listed below.
- Code for Zero-Shot Tokenizer Transfer ☆137 · Updated 8 months ago
- Minimum Bayes Risk Decoding for Hugging Face Transformers ☆59 · Updated last year
- Dataset collection and preprocessing framework for NLP extreme multitask learning ☆186 · Updated 2 months ago
- ☆75 · Updated last year
- Erasing concepts from neural representations with provable guarantees ☆234 · Updated 7 months ago
- Prune transformer layers ☆69 · Updated last year
- ☆54 · Updated 2 years ago
- A collection of datasets for language model pretraining, including scripts for downloading, preprocessing, and sampling. ☆61 · Updated last year
- Manage scalable open LLM inference endpoints in Slurm clusters ☆273 · Updated last year
- ☆65 · Updated 2 years ago
- ☆39 · Updated last year
- ☆72 · Updated 2 years ago
- Utilities for the HuggingFace transformers library ☆71 · Updated 2 years ago
- Wrapper to easily generate the chat template for Llama2 ☆66 · Updated last year
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆107 · Updated 6 months ago
- Evaluation pipeline for the BabyLM Challenge 2023. ☆77 · Updated last year
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers ☆133 · Updated last year
- BABILong is a benchmark for LLM evaluation using the needle-in-a-haystack approach. ☆212 · Updated 3 weeks ago
- LLM-Merging: Building LLMs Efficiently through Merging ☆203 · Updated 11 months ago
- The Official Repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models" ☆107 · Updated last year
- A framework for few-shot evaluation of autoregressive language models. ☆13 · Updated last year
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆256 · Updated last year
- Scaling Data-Constrained Language Models ☆342 · Updated 2 months ago
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind ☆178 · Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆95 · Updated 2 years ago
- What's In My Big Data (WIMBD) - a toolkit for analyzing large text datasets ☆224 · Updated 10 months ago
- Official Code for M-RᴇᴡᴀʀᴅBᴇɴᴄʜ: Evaluating Reward Models in Multilingual Settings (ACL 2025 Main) ☆35 · Updated 4 months ago
- ☆81 · Updated 6 months ago
- A framework for few-shot evaluation of autoregressive language models. ☆105 · Updated 2 years ago
- Official implementation of "GPT or BERT: why not both?" ☆59 · Updated last month