skypilot-org / skypilot
SkyPilot: Run AI and batch jobs on any infra (Kubernetes or 12+ clouds). Get unified execution, cost savings, and high GPU availability via a simple interface.
☆7,048 · Updated this week
Alternatives and similar repositories for skypilot:
Users interested in skypilot are comparing it to the libraries listed below.
- Large Language Model Text Generation Inference ☆9,592 · Updated this week
- 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading ☆9,349 · Updated 4 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,168 · Updated 7 months ago
- Go ahead and axolotl questions ☆8,293 · Updated this week
- Tensor library for machine learning ☆11,541 · Updated this week
- OpenLLaMA, a permissively licensed open source reproduction of Meta AI's LLaMA 7B trained on the RedPajama dataset ☆7,417 · Updated last year
- Gorilla: Training and Evaluating LLMs for Function Calls (Tool Calls) ☆11,688 · Updated this week
- LLMs built upon Evol Instruct: WizardLM, WizardCoder, WizardMath ☆9,312 · Updated 5 months ago
- Python bindings for llama.cpp ☆8,420 · Updated last week
- SGLang is a fast serving framework for large language models and vision language models. ☆7,353 · Updated this week
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆3,845 · Updated last week
- Structured Text Generation ☆10,350 · Updated this week
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆6,749 · Updated 6 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆33,809 · Updated this week
- [EMNLP'23, ACL'24] To speed up LLMs' inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, which ach… ☆4,812 · Updated last month
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Ad… ☆6,021 · Updated 4 months ago
- A language for constraint-guided and efficient LLM programming. ☆3,768 · Updated 7 months ago
- Semantic cache for LLMs. Fully integrated with LangChain and llama_index. ☆7,341 · Updated 4 months ago
- the AI-native open-source embedding database ☆17,023 · Updated this week
- Run any open-source LLMs, such as Llama, Mistral, as OpenAI compatible API endpoint in the cloud. ☆10,380 · Updated this week
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆2,797 · Updated last year
- Tools for merging pretrained large language models. ☆5,113 · Updated last week
- An easy-to-use LLMs quantization package with user-friendly apis, based on GPTQ algorithm. ☆4,620 · Updated this week
- tiktoken is a fast BPE tokeniser for use with OpenAI's models. ☆13,023 · Updated 3 months ago
- Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sag… ☆16,235 · Updated this week
- PyTorch native post-training library ☆4,703 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. ☆6,522 · Updated this week
- A Bulletproof Way to Generate Structured JSON from Language Models ☆4,527 · Updated 10 months ago
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,113 · Updated 8 months ago