DSBA-Lab / Contrastive-Accumulation
☆10 · Updated last year
Alternatives and similar repositories for Contrastive-Accumulation
Users interested in Contrastive-Accumulation are comparing it to the repositories listed below.
- BERT score for text generation ☆12 · Updated 7 months ago
- Korean datasets available on huggingface ☆29 · Updated 10 months ago
- KoCommonGEN v2: A Benchmark for Navigating Korean Commonsense Reasoning Challenges in Large Language Models ☆25 · Updated last year
- CLIcK: A Benchmark Dataset of Cultural and Linguistic Intelligence in Korean ☆45 · Updated 8 months ago
- ☆20 · Updated last year
- Naver Boostcamp AI Tech Stage 3: MRC (Machine Reading Comprehension) ☆10 · Updated 4 years ago
- Translation of the StrategyQA dataset ☆22 · Updated last year
- DSBA code study ☆29 · Updated last year
- huggingface transformers tutorials, code, and resources ☆26 · Updated last year
- A hackable, simple, and research-friendly GRPO training framework with high-speed weight synchronization in a multi-node environment ☆19 · Updated this week
- Official repository for KoMT-Bench, built by LG AI Research ☆66 · Updated last year
- Evaluate gpt-4o on CLIcK (Korean NLP dataset) ☆20 · Updated last year
- An implementation of Google's Chain-of-Thought Reasoning without Prompting ☆67 · Updated 11 months ago
- Evaluating language model responses with a reward model ☆29 · Updated last year
- ☆35 · Updated last year
- "CS224n 2021 winter" study - KoreaUniv. DSBA Lab ☆15 · Updated 3 years ago
- Evaluating Korean models on a self-built Korean evaluation dataset ☆31 · Updated last year
- ☆20 · Updated last year
- Performs benchmarking on two Korean datasets with minimal time and effort ☆43 · Updated 2 weeks ago
- The most modern LLM evaluation toolkit ☆70 · Updated this week
- Difference-based Contrastive Learning for Korean Sentence Embeddings ☆25 · Updated 2 years ago
- [Google Meet] MLLM Arxiv Casual Talk ☆52 · Updated 2 years ago
- [Findings of NAACL 2022] A Dog Is Passing Over The Jet? A Text-Generation Dataset for Korean Commonsense Reasoning and Evaluation ☆27 · Updated 2 years ago
- An RLHF learning environment for Korean speakers ☆23 · Updated last year
- Self-supervised learning in NLP, read in reverse (paper study series) ☆27 · Updated 2 years ago
- BPE-based Korean T5 model for a text-to-text unified framework ☆63 · Updated last year
- KLUE Benchmark 1st place (2021.12) solutions (RE, MRC, NLI, STS, TC) ☆25 · Updated 3 years ago
- ☆32 · Updated last year
- Beyond LM: How can language models move forward in the future? ☆15 · Updated 2 years ago
- Korean T5 model ☆54 · Updated 3 years ago