dptrsa-300 / start_with_bloom
BLOOM is a multilingual LLM (large language model) from BigScience, a Hugging Face-hosted open collaboration of hundreds of researchers and institutions around the world. This repo contains a notebook and configuration scripts covering the basics of text generation with BLOOM's 1.3B-parameter pre-trained model.
☆47 · Updated 2 years ago
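For a sense of what the notebook covers, the snippet below is a minimal sketch of text generation with a small BLOOM checkpoint via the Hugging Face transformers library. The checkpoint name and generation settings are assumptions and may differ from what the repo's notebook actually uses.

```python
# Minimal BLOOM text-generation sketch (assumed setup, not the repo's exact code).
# The checkpoint name below is an assumption; the notebook targets a ~1.3B-parameter
# pre-trained BLOOM model hosted under the bigscience organization on the Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-1b7"  # assumed checkpoint; swap in the one the notebook uses

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "BLOOM is a multilingual language model that"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a short continuation; the decoding knobs here are illustrative defaults.
output_ids = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The sampling parameters are illustrative; omitting `do_sample`, `temperature`, and `top_p` falls back to greedy decoding.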
Alternatives and similar repositories for start_with_bloom:
Users interested in start_with_bloom are comparing it to the libraries listed below.
- ☆65 · Updated last year
- Instruct-tuning LLaMA on consumer hardware · ☆66 · Updated last year
- Tutorial and template for a semantic search app powered by the Atlas Embedding Database, Langchain, OpenAI and FastAPI · ☆115 · Updated last year
- ☆22 · Updated last year
- Example of Alpaca-LoRA with LlamaIndex · ☆31 · Updated last year
- ☆121 · Updated last year
- A collection of simple transformer-based chatbots · ☆18 · Updated 2 years ago
- Based on the Tree of Thoughts paper · ☆46 · Updated last year
- Small finetuned LLMs for a diverse set of useful tasks · ☆126 · Updated last year
- Fact-checking LLM outputs with self-ask · ☆292 · Updated last year
- Search and index your own Google Drive files using GPT-3, LangChain, and Python · ☆42 · Updated 2 years ago
- ChatGPT API usage with LangChain, LlamaIndex, Guardrails, AutoGPT and more · ☆126 · Updated 6 months ago
- Source code for the paper "Bounding the Capabilities of Large Language Models in Open Text Generation with Prompt Constraints" · ☆27 · Updated 2 years ago
- Integration of Dolly 2.0 (commercially usable) with HF embeddings and LlamaIndex · ☆9 · Updated last year
- A demonstration of a chatbot interface that uses the OpenAI ChatGPT API · ☆44 · Updated last year
- Prompt Engineering for Large Language Models: notebooks, demos, exercises, and projects · ☆22 · Updated last year
- ☆33 · Updated last year
- Python bindings for llama.cpp · ☆65 · Updated 11 months ago
- Supervised instruction finetuning for LLMs with the HF Trainer and DeepSpeed · ☆34 · Updated last year
- ☆68 · Updated last year
- Falcon 40B and 7B (Instruct) with streaming, top-k, and beam search · ☆40 · Updated last year
- ☆16 · Updated 2 years ago
- Code base for the KT generation process built at the Google Cloud and Searce GenAI Hackathon · ☆74 · Updated last year
- ☆24 · Updated last year
- Seed, Code, Harvest: Grow Your Own App with Tree of Thoughts! · ☆144 · Updated last year
- Use vector search or embedding techniques to feed an additional knowledge base to LLMs like GPT-3 and BLOOMZ · ☆104 · Updated last year
- Text to Python objects via an LLM function call · ☆56 · Updated 10 months ago
- Information on working with the Together inference API for open-source AI models · ☆56 · Updated last year
- Command-line script for running inference with models such as MPT-7B-Chat · ☆101 · Updated last year
- A joint community effort to create one central leaderboard for LLMs · ☆289 · Updated 5 months ago