sjchoi86 / yet-another-gpt-tutorial
☆41 · Updated 11 months ago
Related projects
Alternatives and complementary repositories for yet-another-gpt-tutorial
- ☆43 · Updated 9 months ago
- Yet Another PyTorch Tutorial ☆11 · Updated 3 years ago
- ☆51 · Updated this week
- Information and Materials for the Deep Learning Course ☆31 · Updated 2 years ago
- Evaluate gpt-4o on CLIcK (Korean NLP Dataset) ☆20 · Updated 5 months ago
- Self-supervised learning in NLP, read in reverse ☆27 · Updated 2 years ago
- My useful torch lightning training template ☆33 · Updated last year
- Materials for the Hugging Face diffusion models course ☆10 · Updated last year
- [Google Meet] MLLM Arxiv Casual Talk ☆55 · Updated last year
- ☆21 · Updated last year
- Serving Example of CodeGen-350M-Mono-GPTJ on Triton Inference Server with Docker and Kubernetes ☆20 · Updated last year
- Yet Another Reinforcement Learning Tutorial ☆71 · Updated last year
- ☆63 · Updated last year
- Korean-OpenOrca: llama2 fine-tuned on the OpenOrca-KO dataset ☆19 · Updated last year
- ☆37 · Updated last year
- ☆21 · Updated 3 years ago
- A clean and structured implementation of Transformer with wandb and pytorch-lightning ☆71 · Updated 2 years ago
- CLIcK: A Benchmark Dataset of Cultural and Linguistic Intelligence in Korean ☆41 · Updated 2 months ago
- RL Implementation ☆19 · Updated 2 years ago
- A high school student's simple stochastic-parrot build ☆19 · Updated last year
- Korean psychological counseling dataset ☆74 · Updated last year
- "Learning-based One-line intelligence Owner Network Connectivity Tool" ☆15 · Updated last year
- Deep-RL algorithm implementations using PyTorch ☆14 · Updated last year
- These are papers I have read and reviewed on NLP, CV, and deep learning 😉 You can check the paper links and my reviews 😊 ☆12 · Updated 10 months ago
- LINER PDF Chat Tutorial with ChatGPT & Pinecone ☆46 · Updated last year
- A clean and structured implementation of the RNN family with wandb and pytorch-lightning ☆48 · Updated 2 years ago
- A repository implementing Google's "Chain-of-Thought Reasoning without Prompting" in code ☆53 · Updated last month
- ☆101 · Updated last year
- Fine-tuning a pretrained model on the KLUE benchmark dataset using the Transformers library ☆8 · Updated last year
- 42dot LLM consists of a pre-trained language model, 42dot LLM-PLM, and a fine-tuned model, 42dot LLM-SFT, which is trained to respond to … ☆122 · Updated 8 months ago