hyintell / BLOOM-fine-tuning
Finetune BLOOM
☆40 · Updated 2 years ago
Alternatives and similar repositories for BLOOM-fine-tuning
Users interested in BLOOM-fine-tuning are comparing it to the libraries listed below
- Inference script for Meta's LLaMA models using Hugging Face wrapper ☆110 · Updated 2 years ago
- ☆122 · Updated last year
- Due to the restrictions of LLaMA, we reimplement BLOOM-LoRA (the much less restrictive BLOOM license is here: https://huggingface.co/spaces/bigs…) ☆185 · Updated 2 years ago
- This repository contains the code to train FLAN-T5 with Alpaca instructions and low-rank adaptation. ☆51 · Updated 2 years ago
- Scripts for fine-tuning Llama2 via SFT and DPO. ☆201 · Updated last year
- Instruct-tune Open LLaMA / RedPajama / StableLM models on consumer hardware using QLoRA ☆81 · Updated last year
- Efficient Attention for Long Sequence Processing ☆94 · Updated last year
- Text classification with Foundation Language Model LLaMA ☆114 · Updated 2 years ago
- LLaMA tuning with the Stanford Alpaca dataset using DeepSpeed and Transformers ☆51 · Updated 2 years ago
- A simple example for finetuning the HuggingFace T5 model. Includes code for intermediate generation. ☆27 · Updated 4 years ago
- [TMLR'23] Contrastive Search Is What You Need For Neural Text Generation ☆119 · Updated 2 years ago
- A dataset for training/evaluating Question Answering Retrieval models on ChatGPT responses, with the possibility of training/evaluating on… ☆142 · Updated last year
- ☆40 · Updated 2 years ago
- MultilingualSIFT: Multilingual Supervised Instruction Fine-tuning ☆91 · Updated last year
- Instruct LLMs for flat and nested NER. Fine-tuning Llama and Mistral models for instruction named entity recognition. (Instruction NER) ☆84 · Updated last year
- Multi-language Enhanced LLaMA ☆301 · Updated 2 years ago
- Simply, faster, sentence-transformers ☆143 · Updated 10 months ago
- minichatgpt - To Train ChatGPT In 5 Minutes ☆168 · Updated last year
- Implementation of Reinforcement Learning from Human Feedback (RLHF) ☆171 · Updated 2 years ago
- Evaluating ChatGPT’s Information Extraction Capabilities: An Assessment of Performance, Explainability, Calibration, and Faithfulness ☆144 · Updated 10 months ago
- A multilingual version of the MS MARCO passage ranking dataset ☆145 · Updated last year
- Code and model release for the paper "Task-aware Retrieval with Instructions" by Asai et al. ☆162 · Updated last year
- An experimental implementation of the retrieval-enhanced language model ☆74 · Updated 2 years ago
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks ☆208 · Updated last year
- A Multilingual Replicable Instruction-Following Model ☆93 · Updated 2 years ago
- Convert BART models to ONNX with quantization: 3X reduction in size and up to 3X boost in inference speed ☆34 · Updated 6 months ago
- This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as… ☆352 · Updated last year
- Repo for fine-tuning causal LLMs ☆456 · Updated last year
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆77 · Updated 8 months ago
- Large language model fine-tuning for BLOOM, OPT, GPT, GPT-2, LLaMA, LLaMA-2, CPM-Ant, and so on ☆97 · Updated last year
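
Several of the repositories above (BLOOM-LoRA, the QLoRA instruct-tuning demos, the Llama SFT scripts) rely on LoRA-style parameter-efficient fine-tuning. As a rough sketch of the core idea only, assuming PyTorch is available; the class name `LoRALinear` and the hyperparameters are illustrative, not taken from any of the listed repos:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a trainable low-rank update (LoRA)."""

    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        # Low-rank factors: only A and B receive gradients during fine-tuning.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init => no change at start
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = Wx + b + scale * B(Ax); the base path is frozen, the LoRA path is trained.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(16, 16), r=4)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
```

Because `B` starts at zero, the wrapped layer initially behaves exactly like the frozen base layer, and only the `r * (in + out)` low-rank parameters are updated, which is what makes fine-tuning billion-parameter models like BLOOM feasible on consumer hardware.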