closedai-project / closedai
Drop-in replacement for OpenAI, but with open models.
☆153 · Updated 2 years ago
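The project bills itself as a drop-in replacement for the OpenAI API backed by open models. As a rough illustration of what that usually implies (not taken from closedai's own docs), the sketch below points the standard OpenAI Python client at a locally hosted, OpenAI-compatible endpoint; the base URL, port, API key, and model name are assumptions for illustration only.

```python
# Minimal sketch: reuse the standard OpenAI Python client against an
# OpenAI-compatible server. The endpoint and model name below are
# illustrative assumptions, not values documented by the closedai repo.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local OpenAI-compatible endpoint
    api_key="not-needed",                 # placeholder; local servers often ignore the key
)

response = client.chat.completions.create(
    model="some-open-model",  # hypothetical open model served by the endpoint
    messages=[{"role": "user", "content": "Say hello from an open model."}],
)
print(response.choices[0].message.content)
```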
Alternatives and similar repositories for closedai
Users interested in closedai are comparing it to the libraries listed below.
- Full finetuning of large language models without large memory requirements ☆93 · Updated last month
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆123 · Updated 2 years ago
- Automated prompting and scoring framework to evaluate LLMs using updated human knowledge prompts ☆108 · Updated 2 years ago
- Command-line script for inferencing from models such as MPT-7B-Chat ☆99 · Updated 2 years ago
- [WIP] A 🔥 interface for running code in the cloud ☆85 · Updated 2 years ago
- Command-line script for inferencing from models such as falcon-7b-instruct ☆74 · Updated 2 years ago
- ☆197 · Updated last year
- Smol but mighty language model ☆62 · Updated 2 years ago
- The GeoV model is a large language model designed by Georges Harik and uses Rotary Positional Embeddings with Relative distances (RoPER).… ☆121 · Updated 2 years ago
- Comprehensive analysis of difference in performance of QLora, Lora, and Full Finetunes. ☆82 · Updated 2 years ago
- Some simple scripts that I use day-to-day when working with LLMs and Huggingface Hub ☆160 · Updated 2 years ago
- Instruct-tuning LLaMA on consumer hardware ☆65 · Updated 2 years ago
- Extend the original llama.cpp repo to support redpajama model. ☆118 · Updated last year
- A Simple Discord Bot for the Alpaca LLM ☆98 · Updated 2 years ago
- Cerule - A Tiny Mighty Vision Model ☆67 · Updated last week
- The code we currently use to fine-tune models. ☆117 · Updated last year
- Simple embedding -> text model trained on a small subset of Wikipedia sentences. ☆157 · Updated 2 years ago
- Reimplementation of the task generation part from the Alpaca paper ☆118 · Updated 2 years ago
- inference code for mixtral-8x7b-32kseqlen ☆102 · Updated last year
- ☆50 · Updated 2 years ago
- an implementation of Self-Extend, to expand the context window via grouped attention ☆119 · Updated last year
- Small finetuned LLMs for a diverse set of useful tasks ☆126 · Updated 2 years ago
- ☆40 · Updated 2 years ago
- ☆73 · Updated 2 years ago
- Modified Stanford-Alpaca Trainer for Training Replit's Code Model ☆41 · Updated 2 years ago
- Tune MPTs ☆84 · Updated 2 years ago
- ☆94 · Updated 2 years ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆231 · Updated last year
- Maybe the new state-of-the-art vision model? We'll see 🤷‍♂️ ☆165 · Updated last year
- A collection of LLM services you can self-host via Docker or Modal Labs to support your applications' development ☆197 · Updated last year