ChanCheeKean / DataScienceLinks
☆82Updated last year
Alternatives and similar repositories for DataScienceLinks
Users who are interested in DataScienceLinks are comparing it to the libraries listed below
- 1st Place Solution for LLM - Detect AI Generated Text Kaggle Competition☆210Updated last year
- Building a 2.3M-parameter LLM from scratch with LLaMA 1 architecture.☆196Updated last year
- LLM (Large Language Model) FineTuning☆566Updated 9 months ago
- Tutorial for how to build BERT from scratch☆101Updated last year
- This repository contains a custom implementation of the BERT model, fine-tuned for specific tasks, along with an implementation of Low Ra…☆78Updated 2 years ago
- LORA: Low-Rank Adaptation of Large Language Models implemented using PyTorch☆119Updated 2 years ago
- Master the essential steps of pretraining large language models (LLMs). Learn to create high-quality datasets, configure model architectu…☆24Updated last year
- Collection of links, tutorials and best practices on how to collect data and build an end-to-end RLHF system to finetune Generative AI m…☆224Updated 2 years ago
- Scripts for fine-tuning Llama2 via SFT and DPO.☆206Updated 2 years ago
- Unlock the potential of finetuning Large Language Models (LLMs). Learn from industry expert, and discover when to apply finetuning, data …☆69Updated 2 years ago
- LLM Workshop by Sourab Mangrulkar☆400Updated last year
- This is an implementation of the paper: Searching for Best Practices in Retrieval-Augmented Generation (EMNLP2024)☆344Updated last year
- Curated list of weekly published LLM papers☆196Updated 2 months ago
- LLaMA 3 is one of the most promising open-source models after Mistral; we will recreate its architecture in a simpler manner.☆196Updated last year
- Starter pack for NeurIPS LLM Efficiency Challenge 2023.☆129Updated 2 years ago
- Comparing the Performance of LLMs: A Deep Dive into Roberta, Llama, and Mistral for Disaster Tweets Analysis with Lora☆51Updated 2 years ago
- Instruct LLMs for flat and nested NER. Fine-tuning Llama and Mistral models for instruction named entity recognition. (Instruction NER)☆87Updated last year
- Distributed training (multi-node) of a Transformer model☆91Updated last year
- Advanced Retrieval-Augmented Generation (RAG) through practical notebooks, using the power of Langchain, OpenAI GPTs, META LLAMA3, A…☆437Updated last year
- Notes and commented code for RLHF (PPO)☆121Updated last year
- A set of scripts and notebooks on LLM finetuning and dataset creation☆113Updated last year
- RAG-VectorDB-Embedings-LlamaIndex-Langchain☆277Updated 2 months ago
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets.☆77Updated last year
- Llama from scratch, or How to implement a paper without crying☆582Updated last year
- ☆85Updated 2 years ago
- LLM_library is a comprehensive repository that serves as a one-stop resource for hands-on code and insightful summaries.☆69Updated 2 years ago
- Fine-Tuning Llama3-8B LLM in a multi-GPU environment using DeepSpeed☆18Updated last year
- ☆1,334Updated 10 months ago
- LoRA and DoRA from Scratch Implementations (see the minimal LoRA sketch after this list)☆215Updated last year
- This is a repository of RALM surveys containing a summary of state-of-the-art RAG and other technologies☆201Updated last year
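Several entries above revolve around low-rank adaptation (LoRA), e.g. "LORA: Low-Rank Adaptation of Large Language Models implemented using PyTorch" and "LoRA and DoRA from Scratch Implementations". As a minimal orientation sketch only, assuming PyTorch and a hypothetical `LoRALinear` wrapper (the class name, `rank`, and `alpha` defaults are illustrative and not taken from any listed repository), the core idea of freezing a pretrained linear layer and training a small low-rank correction looks like this:

```python
# Minimal, self-contained LoRA-style linear layer sketch (illustrative only;
# not the implementation used by any repository listed above).
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear and adds a trainable low-rank update x @ A @ B."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        # Freeze the pretrained weights; only the low-rank factors are trained.
        for p in self.base.parameters():
            p.requires_grad = False
        # A is small-random, B is zero, so training starts from the base model.
        self.lora_A = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(rank, base.out_features))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base projection plus the scaled low-rank correction.
        return self.base(x) + (x @ self.lora_A @ self.lora_B) * self.scaling


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(768, 768), rank=8, alpha=16.0)
    out = layer(torch.randn(2, 768))
    print(out.shape)  # torch.Size([2, 768])
```

The from-scratch repositories listed above typically go further (applying the update to attention projections, merging weights after training, and DoRA-style magnitude/direction decomposition), but the frozen-base-plus-low-rank-delta pattern shown here is the common starting point.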