muellerzr / RAG-Experiments
My learnings (publicly) on RAG systems
☆14 · Updated last year
Alternatives and similar repositories for RAG-Experiments
Users interested in RAG-Experiments are comparing it to the libraries listed below.
- Source notebook code for the course, stripped of all information. Please consider purchasing the course at https://store.walkwithfastai.co… ☆37 · Updated last year
- Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines ☆197 · Updated last year
- Highly commented implementations of Transformers in PyTorch ☆137 · Updated 2 years ago
- ☆95 · Updated 2 years ago
- ☆78 · Updated last year
- ☆170 · Updated last year
- A miniature AI training framework for PyTorch ☆42 · Updated 7 months ago
- Train fastai models faster (and other useful tools) ☆70 · Updated 2 months ago
- ☆93 · Updated last year
- An introduction to LLM sampling ☆79 · Updated 8 months ago
- 📝 Reference-free automatic summarization evaluation with potential hallucination detection ☆103 · Updated last year
- NLP examples using the 🤗 libraries ☆40 · Updated 4 years ago
- Gzip and nearest neighbors for text classification ☆57 · Updated 2 years ago
- Genalog is an open-source, cross-platform Python package allowing generation of synthetic document images with custom degradations and te… ☆42 · Updated last year
- ☆54 · Updated last year
- A set of scripts and notebooks on LLM finetuning and dataset creation ☆110 · Updated 11 months ago
- ML/DL math and method notes ☆63 · Updated last year
- The fastai book, 2nd edition (in progress) ☆56 · Updated last year
- The Batched API provides a flexible and efficient way to process multiple requests in a batch, with a primary focus on dynamic batching o… ☆143 · Updated last month
- NLP with Rust for Python 🦀🐍 ☆64 · Updated 3 months ago
- ☆155 · Updated 8 months ago
- Use Actions to acquire those precious Lambda GPUs ☆19 · Updated last year
- ☆209 · Updated last year
- Build fast Gradio demos of fastai learners ☆35 · Updated 3 years ago
- Check for data drift between two OpenAI multi-turn chat JSONL files ☆37 · Updated last year
- Supervised instruction finetuning for LLMs with the HF Trainer and DeepSpeed ☆35 · Updated 2 years ago
- Experiments with inference on Llama ☆104 · Updated last year
- Comprehensive analysis of the performance differences among QLoRA, LoRA, and full finetunes ☆83 · Updated last year
- ☆80 · Updated last year
- Chunk your text more accurately using gpt4o-mini ☆44 · Updated last year