cloneofsimo / auto_llm_codebase_analysis
☆26 · Updated 11 months ago
Alternatives and similar repositories for auto_llm_codebase_analysis:
Users interested in auto_llm_codebase_analysis are comparing it to the libraries listed below.
- Modified Beam Search with periodic restart ☆12 · Updated 5 months ago
- The Benefits of a Concise Chain of Thought on Problem Solving in Large Language Models ☆21 · Updated 2 months ago
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated 10 months ago
- BH hackathon ☆14 · Updated 10 months ago
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆33 · Updated 11 months ago
- ☆48 · Updated 3 months ago
- This library supports evaluating disparities in generated image quality, diversity, and consistency between geographic regions. ☆20 · Updated 8 months ago
- Implementation of https://arxiv.org/pdf/2312.09299 ☆20 · Updated 7 months ago
- Latent Large Language Models ☆17 · Updated 5 months ago
- Apps that run on modal.com ☆12 · Updated 8 months ago
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers. ☆17 · Updated last week
- Fast approximate inference on a single GPU with sparsity-aware offloading ☆38 · Updated last year
- ☆27 · Updated 6 months ago
- 🍳 AyaMCooking is a voice-to-voice multilingual RAG agent that makes a perfect sous chef for your kitchen, in up to 10 languages 🤌🧑🍳 ☆21 · Updated 3 months ago
- ☆31 · Updated last year
- ☆20 · Updated 8 months ago
- ☆16 · Updated last year
- Demonstration that finetuning a RoPE model on sequences longer than those seen in pre-training extends the model's context limit ☆63 · Updated last year
- ☆37 · Updated 6 months ago
- NanoGPT (124M) quality in 2.67B tokens ☆27 · Updated this week
- ☆16 · Updated 11 months ago
- Visual RAG using fewer than 300 lines of code. ☆25 · Updated 11 months ago
- Pixel Parsing: a reproduction of OCR-free end-to-end document understanding models with open data ☆21 · Updated 6 months ago
- QLoRA for Masked Language Modeling ☆21 · Updated last year
- ☆11 · Updated 3 months ago
- Recaption large (Web)Datasets with vllm and save the artifacts. ☆44 · Updated 2 months ago
- ☆24 · Updated last year