pointnetwork / point-alpaca
☆406 · Updated last year
Related projects
Alternatives and complementary repositories for point-alpaca
- ☆534 · Updated 11 months ago
- ☆454 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers ☆415 · Updated 11 months ago
- Alpaca dataset from Stanford, cleaned and curated ☆1,519 · Updated last year
- SoTA Transformers with C-backend for fast inference on your CPU. ☆312 · Updated 11 months ago
- Tune any FALCON in 4-bit ☆468 · Updated last year
- LLaMa retrieval plugin script using OpenAI's retrieval plugin ☆324 · Updated last year
- fastLLaMa: An experimental high-performance framework for running Decoder-only LLMs with 4-bit quantization in Python using a C/C++ backe… ☆410 · Updated last year
- LLM that combines the principles of wizardLM and vicunaLM ☆711 · Updated last year
- C++ implementation for BLOOM ☆811 · Updated last year
- Customizable implementation of the self-instruct paper. ☆1,024 · Updated 8 months ago
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions ☆812 · Updated last year
- A collection of modular datasets generated by GPT-4: General-Instruct, Roleplay-Instruct, Code-Instruct, and Toolformer ☆1,618 · Updated last year
- [NeurIPS 22] [AAAI 24] Recurrent Transformer-based long-context architecture. ☆756 · Updated 3 weeks ago
- ☆411 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆124 · Updated last year
- C++ implementation for 💫StarCoder ☆446 · Updated last year
- A llama.cpp drop-in replacement for OpenAI's GPT endpoints, allowing GPT-powered apps to run off local llama.cpp models instead of OpenAI… ☆595 · Updated last year
- ☆527 · Updated 10 months ago
- Reflexion: an autonomous agent with dynamic memory and self-reflection ☆380 · Updated 11 months ago
- Quantized inference code for LLaMA models ☆1,051 · Updated last year
- ☆1,430 · Updated last year
- Chat with Meta's LLaMA models at home made easy ☆837 · Updated last year
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… ☆2,409 · Updated 3 months ago
- Finetuning Large Language Models on One Consumer GPU in 2 Bits ☆707 · Updated 5 months ago
- Simple, hackable and fast implementation for training/finetuning medium-sized LLaMA-based models ☆153 · Updated this week
- Officially supported Python bindings for llama.cpp + gpt4all ☆1,023 · Updated last year
- MiniLLM is a minimal system for running modern LLMs on consumer-grade GPUs ☆868 · Updated last year
- This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as… ☆348 · Updated last year