jason9693 / ETA4LLMs
Calculating the expected training time for LLMs.
☆38 · Updated last year
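The repository's own method is not shown on this page. As a rough illustration of what an "expected training time" calculation typically involves, the sketch below uses the widely cited 6·N·D FLOPs approximation; the function name, parameter names, and default utilization value are hypothetical and are not taken from ETA4LLMs.

```python
# Minimal, illustrative sketch of a back-of-the-envelope ETA estimate for LLM
# training, based on the common "total FLOPs ≈ 6 * N * D" approximation.
# NOTE: this is an assumption for illustration only, not the ETA4LLMs code;
# all names and default values below are hypothetical.

def estimate_training_hours(
    n_params: float,          # model size in parameters, e.g. 7e9
    n_tokens: float,          # number of training tokens, e.g. 1e12
    gpu_peak_flops: float,    # per-GPU peak throughput, e.g. 312e12 (A100 BF16)
    num_gpus: int,
    mfu: float = 0.4,         # assumed model FLOPs utilization
) -> float:
    total_flops = 6.0 * n_params * n_tokens             # forward + backward estimate
    sustained_flops = gpu_peak_flops * num_gpus * mfu   # achievable cluster throughput
    return total_flops / sustained_flops / 3600.0       # seconds -> hours


if __name__ == "__main__":
    # Example: a 7B-parameter model on 1T tokens with 64 A100s at 40% MFU
    print(f"~{estimate_training_hours(7e9, 1e12, 312e12, 64):.0f} hours")
```

In practice the achieved utilization varies widely with model size, parallelism strategy, and hardware, so the MFU factor is the main thing to calibrate against measured throughput.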
Related projects
Alternatives and complementary repositories for ETA4LLMs
- Difference-based Contrastive Learning for Korean Sentence Embeddings ☆24 · Updated last year
- A collection of public Korean instruction datasets for training language models ☆19 · Updated last year
- Image-text multimodal modeling using Polyglot ☆11 · Updated last year
- Hate speech detection corpus in Korean, shared with an EMNLP 2023 paper ☆13 · Updated 7 months ago
- KLUE Benchmark 1st place (2021.12) solutions (RE, MRC, NLI, STS, TC) ☆25 · Updated 2 years ago
- CareCall for Seniors: Role Specified Open-Domain Dialogue dataset generated by leveraging LLMs (NAACL 2022) ☆59 · Updated 2 years ago
- Beyond LM: How can language models go forward in the future? ☆15 · Updated last year
- Adaptation of Google's official ROUGE implementation for use with Korean ☆13 · Updated 10 months ago
- Implementation of a stop sequencer for Hugging Face Transformers ☆15 · Updated last year
- Korean Nested Named Entity Corpus ☆16 · Updated last year
- [Findings of NAACL 2022] A Dog Is Passing Over The Jet? A Text-Generation Dataset for Korean Commonsense Reasoning and Evaluation ☆28 · Updated last year
- Keep Me Updated! Memory Management in Long-term Conversations (Findings of EMNLP 2022) ☆28 · Updated last year
- "Why do I feel offended?" - Korean Dataset for Offensive Language Identification (Findings of EACL 2023) ☆14 · Updated last year
- An RLHF learning environment for Korean ☆23 · Updated last year
- Don't Judge a Language Model by Its Last Layer: Contrastive Learning with Layer-Wise Attention Pooling ☆9 · Updated 2 years ago
- Official code and dataset repository of KoBBQ (TACL 2024) ☆14 · Updated 6 months ago
- A framework that aims to wisely initialize unseen subword embeddings in PLMs for efficient large-scale continued pretraining ☆12 · Updated 11 months ago