replicate / cog-llama
A template to run LLaMA in Cog
☆64 · Updated last year
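Cog packages a model as a reproducible container described by a `cog.yaml` file plus a Python predictor class. A minimal sketch of what such a template typically contains (the package versions and input name below are illustrative assumptions, not taken from the cog-llama repo):

```yaml
# cog.yaml — declares the build environment and the predictor entry point
build:
  gpu: true                      # request a CUDA-enabled base image
  python_version: "3.10"
  python_packages:
    - "torch==2.0.1"             # illustrative versions, not from the repo
    - "transformers==4.31.0"
predict: "predict.py:Predictor"  # class implementing setup() and predict()
```

With Docker installed, `cog predict -i prompt="..."` builds the image and runs a single prediction locally, and `cog push` publishes the container to Replicate.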
Alternatives and similar repositories for cog-llama
Users interested in cog-llama are comparing it to the libraries listed below.
- 🔓 The open-source autonomous agent LLM initiative 🔓 ☆91 · Updated last year
- Falcon40B and 7B (Instruct) with streaming, top-k, and beam search ☆39 · Updated last year
- A feed of trending repos/models from GitHub, Replicate, HuggingFace, and Reddit. ☆132 · Updated 8 months ago
- Conduct consumer interviews with synthetic focus groups using LLMs and LangChain ☆43 · Updated last year
- Not financial advice. ☆27 · Updated 2 years ago
- ☆39 · Updated 3 weeks ago
- 🌸 The open framework for fine-tuning LLMs for question answering on private data ☆69 · Updated last year
- ✦ The intuitive Python LLM framework ☆172 · Updated 6 months ago
- Example of running LangChain on Cloud Run ☆61 · Updated 2 years ago
- A LangChain app to visualise a debate using Tree-of-Thought reasoning ☆60 · Updated last year
- Natural Language Interfaces Powered by LLMs ☆89 · Updated 10 months ago
- ☆56 · Updated 2 years ago
- Automatically generate @openai plugins by specifying your API in markdown, smol-developer style ☆120 · Updated 2 years ago
- A Personalised AI Assistant Inspired by 'Diamond Age', Powered by SMS ☆92 · Updated 2 years ago
- Tutorial and template for a semantic search app powered by the Atlas Embedding Database, LangChain, OpenAI, and FastAPI ☆116 · Updated last year
- Command-line script for running inference with models such as MPT-7B-Chat ☆101 · Updated last year
- ✅ Pytest-style test runner for LangChain projects ☆25 · Updated 2 years ago
- Record and replay LLM interactions for LangChain ☆82 · Updated 11 months ago
- Embedding models from Jina AI ☆60 · Updated last year
- A guidance compatibility layer for llama-cpp-python ☆34 · Updated last year
- ☆74 · Updated last year
- CLARA: Code Language Assistant & Repository Analyzer ☆94 · Updated last year
- ☆44 · Updated 2 years ago
- ☆131 · Updated 2 years ago
- Automatic fine-tuning of models with synthetic data ☆75 · Updated last year
- A web UI for LangChainHub, built on Next.js ☆39 · Updated 2 years ago
- Run GPU inference and training jobs on serverless infrastructure that scales with you. ☆102 · Updated 11 months ago
- A collection of LLM services you can self-host via Docker or Modal Labs to support your application development ☆187 · Updated last year
- LLM fine-tuning ☆42 · Updated last year
- codellama on CPU without Docker ☆25 · Updated last year