hyintell / BLOOM-fine-tuning
Fine-tune BLOOM
☆40 · Updated 2 years ago
Alternatives and similar repositories for BLOOM-fine-tuning
Users interested in BLOOM-fine-tuning are comparing it to the libraries listed below.
- Due to the license restrictions of LLaMA, we try to reimplement BLOOM-LoRA (the much less restrictive BLOOM license is at https://huggingface.co/spaces/bigs… ☆184 · Updated 2 years ago
- LLaMA tuning with the Stanford Alpaca dataset using DeepSpeed and Transformers ☆50 · Updated 2 years ago
- Multi-language Enhanced LLaMA ☆303 · Updated 2 years ago
- This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as… ☆358 · Updated 2 years ago
- User-friendly LLaMA: Train or Run the model using PyTorch. Nothing else. ☆339 · Updated 2 years ago
- Fine-tune a T5 transformer model using PyTorch & Transformers 🤗 ☆220 · Updated 4 years ago
- ☆457 · Updated 2 years ago
- A Simple but Powerful SOTA NER Model | Official Code for Label-Supervised LLaMA Fine-tuning ☆152 · Updated last year
- Scripts for fine-tuning Llama2 via SFT and DPO. ☆207 · Updated 2 years ago
- ☆122 · Updated 2 years ago
- Crosslingual Generalization through Multitask Finetuning ☆537 · Updated last year
- Implementation of Reinforcement Learning from Human Feedback (RLHF) ☆173 · Updated 2 years ago
- Efficient Attention for Long Sequence Processing ☆98 · Updated 2 years ago
- A PyTorch-based model pruning toolkit for pre-trained language models ☆388 · Updated 2 years ago
- Implementation of paper: HLATR: Enhance Multi-stage Text Retrieval with Hybrid List Aware Transformer Reranking ☆74 · Updated 3 years ago
- PromptBERT: Improving BERT Sentence Embeddings with Prompts ☆342 · Updated 2 years ago
- Inference script for Meta's LLaMA models using Hugging Face wrapper ☆110 · Updated 2 years ago
- Reverse Instructions to generate instruction tuning data with corpus examples ☆216 · Updated last year
- Multilingual/multidomain question generation datasets, models, and python library for question generation. ☆372 · Updated last year
- DialogStudio: Towards Richest and Most Diverse Unified Dataset Collection and Instruction-Aware Models for Conversational AI ☆519 · Updated last year
- BigTranslate: Augmenting Large Language Models with Multilingual Translation Capability over 100 Languages ☆229 · Updated 2 years ago
- Repo for fine-tuning causal LLMs ☆458 · Updated last year
- MultilingualSIFT: Multilingual Supervised Instruction Fine-tuning ☆96 · Updated 2 years ago
- Evaluating ChatGPT's Information Extraction Capabilities: An Assessment of Performance, Explainability, Calibration, and Faithfulness ☆144 · Updated last year
- ☆40 · Updated 2 years ago
- Repository for EMNLP 2022 Paper: Towards a Unified Multi-Dimensional Evaluator for Text Generation ☆214 · Updated last year
- Large language model fine-tuning for BLOOM, OPT, GPT, GPT-2, LLaMA, LLaMA-2, CPM-Ant, and so on ☆99 · Updated last year
- Instruct LLMs for flat and nested NER. Fine-tuning Llama and Mistral models for instruction-based named entity recognition (Instruction NER). ☆87 · Updated last year
- Tk-Instruct is a Transformer model that is tuned to solve many NLP tasks by following instructions. ☆183 · Updated 3 years ago
- Comparing the Performance of LLMs: A Deep Dive into RoBERTa, Llama, and Mistral for Disaster Tweets Analysis with LoRA ☆51 · Updated 2 years ago