hyintell / BLOOM-fine-tuning
Finetune BLOOM
☆40 · Updated 2 years ago
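For orientation, here is a minimal sketch of what LoRA-based BLOOM fine-tuning typically looks like with Hugging Face Transformers and PEFT. It is not taken from this repository; the checkpoint name (`bigscience/bloom-560m`), adapter hyperparameters, and the one-example toy dataset are illustrative assumptions.

```python
# Minimal LoRA fine-tuning sketch for BLOOM (illustrative only, not this repo's code).
# Assumptions: bigscience/bloom-560m checkpoint, a one-example toy dataset,
# and hand-picked adapter hyperparameters.
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments
from peft import LoraConfig, get_peft_model

model_name = "bigscience/bloom-560m"          # small checkpoint chosen for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Inject low-rank adapters into BLOOM's fused QKV projection; only these weights train.
lora_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                         target_modules=["query_key_value"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Toy instruction-style example tokenized for causal LM training (labels = inputs).
enc = tokenizer("Instruction: Say hello.\nResponse: Hello!", return_tensors="pt")
train_dataset = [{"input_ids": enc["input_ids"][0],
                  "attention_mask": enc["attention_mask"][0],
                  "labels": enc["input_ids"][0].clone()}]

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bloom-lora-out",
                           per_device_train_batch_size=1,
                           num_train_epochs=1,
                           logging_steps=1),
    train_dataset=train_dataset,
)
trainer.train()
model.save_pretrained("bloom-lora-out")       # writes only the LoRA adapter weights
```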
Alternatives and similar repositories for BLOOM-fine-tuning:
Users that are interested in BLOOM-fine-tuning are comparing it to the libraries listed below
- Due to the license restrictions of LLaMA, we try to reimplement BLOOM-LoRA (the much less restrictive BLOOM license is here: https://huggingface.co/spaces/bigs…) ☆184 · Updated last year
- Inference script for Meta's LLaMA models using the Hugging Face wrapper ☆110 · Updated 2 years ago
- LLaMA tuning with the Stanford Alpaca dataset using DeepSpeed and Transformers ☆51 · Updated 2 years ago
- Implementation of the paper "HLATR: Enhance Multi-stage Text Retrieval with Hybrid List Aware Transformer Reranking" ☆68 · Updated 2 years ago
- Tk-Instruct is a Transformer model that is tuned to solve many NLP tasks by following instructions. ☆179 · Updated 2 years ago
- Scripts for fine-tuning Llama 2 via SFT and DPO. ☆195 · Updated last year
- A multilingual version of the MS MARCO passage ranking dataset ☆143 · Updated last year
- Multi-language Enhanced LLaMA ☆301 · Updated last year
- Code implementation of Dynamic NTK-ALiBi for Baichuan: inference over longer contexts without fine-tuning ☆47 · Updated last year
- An experimental implementation of the retrieval-enhanced language model ☆74 · Updated 2 years ago
- A Multi-Turn Dialogue Corpus based on Alpaca Instructions ☆168 · Updated last year
- Code, datasets, and checkpoints for the paper "Improving Passage Retrieval with Zero-Shot Question Generation" (EMNLP 2022) ☆100 · Updated 2 years ago
- MultilingualSIFT: Multilingual Supervised Instruction Fine-tuning ☆89 · Updated last year
- [EMNLP 2022] Improved Universal Sentence Embeddings with Prompt-based Contrastive Learning and Energy-based Learning ☆134 · Updated last year
- Instruction-tune LLMs for flat and nested NER: fine-tuning Llama and Mistral models for instruction-based named entity recognition (Instruction NER) ☆80 · Updated 10 months ago
- Tool for converting LLMs from uni-directional to bi-directional by removing the causal mask, for tasks like classification and sentence embedd… ☆57 · Updated 3 months ago
- Fine-tune a T5 transformer model using PyTorch & Transformers 🤗 ☆209 · Updated 4 years ago
- Instruct-tune Open LLaMA / RedPajama / StableLM models on consumer hardware using QLoRA ☆81 · Updated last year
- Efficient Attention for Long Sequence Processing ☆92 · Updated last year
- SpanNER: Named Entity Re-/Recognition as Span Prediction ☆127 · Updated 2 years ago
- Implementation of Reinforcement Learning from Human Feedback (RLHF) ☆172 · Updated last year
- This repository contains the code to train Flan-T5 with Alpaca instructions and low-rank adaptation. ☆51 · Updated last year
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks ☆208 · Updated last year
- All available datasets for Instruction Tuning of Large Language Models ☆247 · Updated last year
- Simple implementation of using LoRA from the PEFT library to fine-tune ChatGLM-6B ☆85 · Updated last year
- The multilingual variant of GLM, a general language model trained with an autoregressive blank-infilling objective ☆62 · Updated 2 years ago