ChanCheeKean / DataScienceLinks
☆84 · Updated last year
Alternatives and similar repositories for DataScienceLinks
Users interested in DataScienceLinks are comparing it to the libraries listed below.
- Tutorial on how to build BERT from scratch ☆98 · Updated last year
- Building a 2.3M-parameter LLM from scratch with the LLaMA 1 architecture. ☆182 · Updated last year
- LLM (Large Language Model) FineTuning ☆559 · Updated 4 months ago
- LoRA: Low-Rank Adaptation of Large Language Models implemented using PyTorch ☆113 · Updated 2 years ago
- 1st Place Solution for the LLM - Detect AI Generated Text Kaggle Competition ☆202 · Updated last year
- Fine-tuning LLM with LoRA (Low-Rank Adaptation) from scratch (Oct 2023) ☆27 · Updated last month
- This repository contains a custom implementation of the BERT model, fine-tuned for specific tasks, along with an implementation of Low Ra… ☆77 · Updated last year
- Starter pack for the NeurIPS LLM Efficiency Challenge 2023. ☆125 · Updated last year
- LLaMA 3 is one of the most promising open-source models after Mistral; we will recreate its architecture in a simpler manner. ☆179 · Updated last year
- Collection of links, tutorials, and best practices on how to collect data and build an end-to-end RLHF system to fine-tune Generative AI m… ☆223 · Updated 2 years ago
- LLM Workshop by Sourab Mangrulkar ☆392 · Updated last year
- Exploring the potential of fine-tuning Large Language Models (LLMs) like Llama2 and StableLM for medical entity extraction. This project … ☆82 · Updated last year
- Notes about "Attention is all you need" video (https://www.youtube.com/watch?v=bCz4OMemCcA) ☆301 · Updated 2 years ago
- Master the essential steps of pretraining large language models (LLMs). Learn to create high-quality datasets, configure model architectu… ☆22 · Updated last year
- Scripts for fine-tuning Llama2 via SFT and DPO. ☆203 · Updated 2 years ago
- Attention Is All You Need | a PyTorch Tutorial to Transformers ☆335 · Updated last year
- LLaMA 2 implemented from scratch in PyTorch ☆347 · Updated last year
- Notes and commented code for RLHF (PPO) ☆104 · Updated last year
- Code Transformer neural network components piece by piece ☆361 · Updated 2 years ago
- Distributed training (multi-node) of a Transformer model ☆79 · Updated last year
- Define Transformer, T5, and RoBERTa encoder-decoder models for product name generation ☆48 · Updated 3 years ago
- ☆64 · Updated 2 years ago
- Advanced Retrieval-Augmented Generation (RAG) through practical notebooks, using the power of LangChain, OpenAI GPTs, Meta LLaMA 3, A… ☆79 · Updated last year
- Well documented, unit tested, type checked and formatted implementation of a vanilla transformer - for educational purposes. ☆257 · Updated last year
- ☆1,268 · Updated 6 months ago
- LLM_library is a comprehensive repository that serves as a one-stop resource for hands-on code and insightful summaries. ☆69 · Updated last year
- Unlock the potential of fine-tuning Large Language Models (LLMs). Learn from industry experts and discover when to apply fine-tuning, data … ☆65 · Updated last year
- Advanced Retrieval-Augmented Generation (RAG) through practical notebooks, using the power of LangChain, OpenAI GPTs, Meta LLaMA 3, A… ☆378 · Updated last year
- Instruct LLMs for flat and nested NER. Fine-tuning Llama and Mistral models for instruction named entity recognition. (Instruction NER) ☆85 · Updated last year
- What can I do with an LLM? ☆157 · Updated 4 months ago
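Several of the repositories above (the LoRA and fine-tuning entries in particular) revolve around the same core idea: freeze the pretrained weight matrix and learn a small low-rank update beside it. As a rough orientation, here is a minimal NumPy sketch of that update rule; the sizes and names are toy values chosen for illustration, not code from any listed repository:

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r, alpha = 8, 8, 2, 4  # toy sizes; in practice r << d_in

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight (never updated)
A = rng.normal(size=(r, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, r))               # zero init, so the adapter starts as a no-op

def lora_forward(x):
    # y = W x + (alpha / r) * B A x  -- only A and B receive gradients
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B initialized to zero, the LoRA layer matches the frozen layer exactly.
assert np.allclose(lora_forward(x), W @ x)

# After training perturbs B, the adapter contributes a rank-r correction.
B += 0.1
delta = lora_forward(x) - W @ x
```

The point of the zero initialization of `B` is that fine-tuning starts from the pretrained model's behavior, and the update `B @ A` can never exceed rank `r`, which is what keeps the number of trainable parameters small.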