drengskapur / docker-in-colab
Run Docker inside Google Colab
☆80 · Updated 8 months ago
Related projects:
- Fine-tune and quantize Llama-2-like models to generate Python code using QLoRA, Axolotl, … ☆64 · Updated 7 months ago
- InsightSolver: Colab notebooks for exploring and solving operational issues using deep learning, machine learning, and related models. ☆89 · Updated 3 months ago
- Examples of RAG using LlamaIndex with local LLMs - Gemma, Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B ☆118 · Updated 6 months ago
- ☆47 · Updated this week
- Jupyter Notebooks for Ollama integration ☆114 · Updated last month
- Connect to Google Colab VM from your local VSCode ☆240 · Updated 6 months ago
- ☆149 · Updated last year
- Repository featuring fine-tuning code for various LLMs, complemented by occasional explanations and deep dives. ☆40 · Updated last week
- This repo contains code covered in the YouTube tutorials. ☆65 · Updated 2 weeks ago
- ToolMate AI is a cutting-edge AI companion that seamlessly integrates agents, tools, and plugins to excel in conversations, generative wo… ☆103 · Updated this week
- Function Calling Mistral 7B. Learn how to make function calls with open-source LLMs. ☆46 · Updated 7 months ago
- Example of calling OpenRouter from a Streamlit app ☆88 · Updated last year
- 🎧 | RunPod worker of the faster-whisper model for Serverless Endpoint. ☆64 · Updated last month
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆217 · Updated 6 months ago
- Retrieval-Augmented Generation (RAG) over a Large Language Model (LLM) for PDF data extraction ☆12 · Updated 7 months ago
- ☆181 · Updated 3 months ago
- Code generation with LLMs 🔗 ☆51 · Updated last year
- Large Language Model (LLM) Inference API and Chatbot ☆123 · Updated 5 months ago
- Web interface for administering Ollama and model quantization, with public endpoints and an automated OpenAI proxy ☆48 · Updated 4 months ago
- Example code for extracting Q&A datasets from LLMs ☆74 · Updated last year
- The RunPod worker template for serving our large language model endpoints. Powered by vLLM. ☆222 · Updated this week
- High-level library for batched embeddings generation, blazingly fast web-based RAG, and quantized indexes processing ⚡ ☆58 · Updated 2 weeks ago
- Testing and evaluating the capabilities of Vision-Language models (PaliGemma) in performing computer vision tasks such as object detectio… ☆77 · Updated 3 months ago
- Auto Data is a library designed for quick and effortless creation of datasets tailored for fine-tuning Large Language Models (LLMs). ☆84 · Updated 3 weeks ago
- Code and resources showcasing the Retrieval-Augmented Generation (RAG) technique, a solution for enhancing data freshness in Large Langua… ☆38 · Updated last year
- Awesome LLM application repo ☆56 · Updated last month
- HuggingChat-like UI in Gradio ☆63 · Updated last year
- Some simple scripts that I use day-to-day when working with LLMs and Huggingface Hub ☆154 · Updated 11 months ago
- One-click templates for language model inference ☆97 · Updated last week
- Some helpers and examples for creating an LLM fine-tuning dataset ☆60 · Updated 6 months ago