ai-forever / LIBRA
☆18 · Updated 3 months ago
Alternatives and similar repositories for LIBRA:
Users who are interested in LIBRA are comparing it to the libraries listed below.
- ☆22 · Updated last year
- RuBLiMP: Russian Benchmark of Linguistic Minimal Pairs · ☆17 · Updated last month
- ☆31 · Updated 6 months ago
- Creating multimodal multitask models · ☆50 · Updated 2 years ago
- MMLU eval for RU/EN · ☆15 · Updated last year
- Framework for processing and filtering datasets · ☆27 · Updated 7 months ago
- MERA (Multimodal Evaluation for Russian-language Architectures) is a new open benchmark for the Russian language for evaluating fundament… · ☆62 · Updated 5 months ago
- Effective LLM Alignment Toolkit · ☆123 · Updated 2 weeks ago
- RuTransform: Python framework for adversarial attacks and text data augmentation for Russian · ☆19 · Updated last year
- RuCLIP tiny (Russian Contrastive Language–Image Pretraining) is a neural network trained to work with image–text pairs · ☆32 · Updated 2 years ago
- ☆11 · Updated last year
- FusionBrain Challenge 2.0: creating a multimodal multitask model · ☆16 · Updated 2 years ago
- RUSSE 2022: Russian Text Detoxification Based on Parallel Corpora · ☆20 · Updated last month
- Train punctuation and capitalization models for different languages · ☆24 · Updated 2 years ago
- Augmentex: a library for augmenting texts with errors · ☆63 · Updated 8 months ago
- Parsers and crawlers for Russian dialog datasets · ☆16 · Updated 3 years ago
- Russian Drug Reaction Corpus (RuDReC) · ☆9 · Updated 4 years ago
- Modified Arena-Hard-Auto LLM evaluation toolkit with an emphasis on the Russian language · ☆39 · Updated last week
- ☆18 · Updated 2 years ago
- Russian Artificial Text Detection · ☆17 · Updated 2 years ago
- Repository for the paper "Revisiting BPR: A Replicability Study of a Common Recommender System Baseline" · ☆50 · Updated 4 months ago
- Evalica, your favourite evaluation toolkit · ☆32 · Updated 3 weeks ago
- A collection of notebooks for pre-training a custom Saiga-like LLM · ☆13 · Updated last year
- Top ML papers of the week · ☆25 · Updated this week
- Training BERT for the punctuation task · ☆10 · Updated 4 years ago
- MOdel ResOurCe COnsumption: evaluate Russian SuperGLUE model performance (inference speed, RAM usage), with reproducible scores using Docker · ☆22 · Updated 2 years ago
- Code for the paper "PALBERT: Teaching ALBERT to Ponder" (NeurIPS 2022 Spotlight) · ☆37 · Updated last year
- Efficient DL/ML Models Seminars · ☆28 · Updated 2 months ago
- Official implementation of the paper "You Do Not Fully Utilize Transformer's Representation Capacity" · ☆26 · Updated last month
- Reinforcement Learning Library · ☆28 · Updated 2 years ago