bjoernpl / GermanBenchmark
A repository containing the code for translating popular LLM benchmarks to German.
☆25 · Updated last year
Alternatives and similar repositories for GermanBenchmark:
Users interested in GermanBenchmark are comparing it to the libraries listed below:
- A framework for few-shot evaluation of autoregressive language models. ☆13 · Updated last year
- Code for Zero-Shot Tokenizer Transfer ☆125 · Updated 2 months ago
- ☆73 · Updated 11 months ago
- Large language models (LLMs) made easy, EasyLM is a one stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… ☆72 · Updated 7 months ago
- Evaluation pipeline for the BabyLM Challenge 2023. ☆75 · Updated last year
- Minimum Bayes Risk Decoding for Hugging Face Transformers ☆57 · Updated 9 months ago
- ☆38 · Updated 11 months ago
- PyTorch library for Active Fine-Tuning ☆62 · Updated last month
- Manage scalable open LLM inference endpoints in Slurm clusters ☆253 · Updated 8 months ago
- A collection of datasets for language model pretraining, including scripts for downloading, preprocessing, and sampling. ☆56 · Updated 8 months ago
- Official Code for M-RᴇᴡᴀʀᴅBᴇɴᴄʜ: Evaluating Reward Models in Multilingual Settings ☆26 · Updated last month
- Prune transformer layers ☆68 · Updated 10 months ago
- ☆65 · Updated last year
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers ☆126 · Updated last year
- Glot500: Scaling Multilingual Corpora and Language Models to 500 Languages -- ACL 2023 ☆100 · Updated 11 months ago
- BABILong is a benchmark for LLM evaluation using the needle-in-a-haystack approach. ☆194 · Updated last week
- Erasing concepts from neural representations with provable guarantees ☆226 · Updated 2 months ago
- ☆72 · Updated last year
- The Official Repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models" ☆108 · Updated last year
- Code and data repo for the CoNLL paper -- Future Lens: Anticipating Subsequent Tokens from a Single Hidden State ☆18 · Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- Multilingual Large Language Models Evaluation Benchmark ☆119 · Updated 7 months ago
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated last year
- Experiments for efforts to train a new and improved T5 ☆77 · Updated 11 months ago
- Understand and test language model architectures on synthetic tasks. ☆185 · Updated 3 weeks ago
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers". ☆71 · Updated last year
- Functional Benchmarks and the Reasoning Gap ☆84 · Updated 5 months ago
- ☆54 · Updated last year
- Datasets collection and preprocessing framework for NLP extreme multitask learning ☆176 · Updated 2 months ago
- ☆42 · Updated 2 months ago