bjoernpl / GermanBenchmark
A repository containing the code for translating popular LLM benchmarks to German.
☆28 · Updated 2 years ago
Alternatives and similar repositories for GermanBenchmark
Users interested in GermanBenchmark are comparing it to the libraries listed below.
- Code for Zero-Shot Tokenizer Transfer ☆135 · Updated 7 months ago
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆107 · Updated 5 months ago
- Minimum Bayes Risk Decoding for Hugging Face Transformers ☆58 · Updated last year
- Erasing concepts from neural representations with provable guarantees ☆232 · Updated 7 months ago
- ☆75 · Updated last year
- Dataset collection and preprocessing framework for NLP extreme multitask learning ☆186 · Updated last month
- PyTorch library for Active Fine-Tuning ☆89 · Updated 6 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆270 · Updated last year
- A framework for few-shot evaluation of autoregressive language models. ☆13 · Updated last year
- Utilities for the HuggingFace transformers library ☆70 · Updated 2 years ago
- Prune transformer layers ☆69 · Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind ☆177 · Updated 11 months ago
- ☆39 · Updated last year
- Official implementation of "GPT or BERT: why not both?" ☆57 · Updated last month
- The Official Repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models" ☆107 · Updated last year
- ☆66 · Updated 2 years ago
- ☆53 · Updated 2 years ago
- ☆72 · Updated 2 years ago
- What's In My Big Data (WIMBD) - a toolkit for analyzing large text datasets ☆224 · Updated 9 months ago
- Simple and scalable tools for data-driven pretraining data selection. ☆25 · Updated 2 months ago
- Evaluation pipeline for the BabyLM Challenge 2023. ☆77 · Updated last year
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆256 · Updated last year
- Code repository for the c-BTM paper ☆107 · Updated last year
- Experiments for efforts to train a new and improved T5 ☆76 · Updated last year
- Official implementation of "BERTs are Generative In-Context Learners" ☆32 · Updated 5 months ago
- Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback ☆97 · Updated 2 years ago
- Pretraining Efficiently on S2ORC! ☆166 · Updated 10 months ago
- ☆81 · Updated 6 months ago
- A framework for few-shot evaluation of autoregressive language models. ☆105 · Updated 2 years ago