teticio / llama-squad
Train Llama 2 and 3 on the SQuAD v2 task as an example of how to specialize a general-purpose (foundation) model.
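The repository's core idea can be sketched as follows. This is a hypothetical helper, not the project's actual code: SQuAD v2 marks unanswerable questions with an empty `answers["text"]` list, so a fine-tuning target for those records can be an explicit refusal, which is what teaches the model to say "I don't know".

```python
# Minimal sketch (assumed field names follow the SQuAD v2 schema:
# `context`, `question`, and `answers` with a `text` list).
def to_chat_example(record: dict) -> dict:
    """Turn one SQuAD v2 record into a prompt/completion pair for SFT."""
    prompt = (
        "Answer the question using only the context. "
        "If the context does not contain the answer, say so.\n\n"
        f"Context: {record['context']}\n"
        f"Question: {record['question']}"
    )
    answers = record["answers"]["text"]
    # Unanswerable questions (empty answer list) get an explicit refusal.
    target = answers[0] if answers else "The context does not contain the answer."
    return {"prompt": prompt, "completion": target}

example = {
    "context": "The Eiffel Tower is in Paris.",
    "question": "Where is the Eiffel Tower?",
    "answers": {"text": ["Paris"], "answer_start": [23]},
}
print(to_chat_example(example)["completion"])  # → Paris
```

Pairs like these can then be fed to any standard supervised fine-tuning loop; the prompt wording above is illustrative, not the repository's exact template.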
Related projects:
- Multilingual Large Language Models Evaluation Benchmark
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers
- GitHub repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models"
- Code and model release for the paper "Task-aware Retrieval with Instructions" by Asai et al.
- Wrapper to easily generate the chat template for Llama2
- A Multilingual Replicable Instruction-Following Model
- Code for the Multilingual Eval of Generative AI paper published at EMNLP 2023
- Calculate perplexity on a text with pre-trained language models. Supports MLM (e.g. DeBERTa), recurrent LM (e.g. GPT3), and encoder-decoder …
- Finetune mistral-7b-instruct for sentence embeddings
- Token-level Reference-free Hallucination Detection
- Repository for EMNLP 2022 paper: Towards a Unified Multi-Dimensional Evaluator for Text Generation
- Tk-Instruct is a Transformer model that is tuned to solve many NLP tasks by following instructions.
- Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback
- Code for the paper "G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment"
- Scripts for fine-tuning Llama2 via SFT and DPO.
- MultilingualSIFT: Multilingual Supervised Instruction Fine-tuning
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets.
- Code and data for "Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering"
- Source code of the paper "GPTScore: Evaluate as You Desire"
- RARR: Researching and Revising What Language Models Say, Using Language Models
- [NAACL 2024 Outstanding Paper] Source code for "R-Tuning: Instructing Large Language Models to Say 'I Don't…"
- Efficient Attention for Long Sequence Processing
- Inquisitive Parrots for Search
- Scalable training for dense retrieval models.
- Dense X Retrieval: What Retrieval Granularity Should We Use?
- What's In My Big Data (WIMBD) - a toolkit for analyzing large text datasets
- Reverse Instructions to generate instruction tuning data with corpus examples
- [Preprint] Learning to Filter Context for Retrieval-Augmented Generation