alinourian / Fine-tuning-Mistral-7b-QA
Fine-tuning Mistral-7B with PEFT (Parameter-Efficient Fine-Tuning) and LoRA (Low-Rank Adaptation) on the Puffin dataset (multi-turn conversations between GPT-4 and real humans)
☆11 · Updated 9 months ago
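The core idea behind the LoRA technique named above is that a frozen pretrained weight matrix W is adapted by adding a trainable low-rank update, W' = W + (alpha / r) · B A, so only the small factors A and B are trained. The sketch below illustrates that math from scratch in plain Python on toy matrices; it is an assumption-laden illustration of the technique, not code from this repository (which uses the PEFT library on Mistral-7B).

```python
# Minimal from-scratch sketch of Low-Rank Adaptation (LoRA).
# A frozen weight matrix W (d_out x d_in) is adapted as
#   W' = W + (alpha / r) * (B @ A)
# where A is (r x d_in), B is (d_out x r), and r << min(d_out, d_in).
# Only A and B are trained, so the trainable parameter count drops from
# d_out * d_in to r * (d_out + d_in).

def matmul(X, Y):
    """Plain-Python matrix multiply for small demo matrices."""
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def lora_adapt(W, A, B, alpha, r):
    """Return W + (alpha / r) * (B @ A), leaving the frozen W untouched."""
    scale = alpha / r
    delta = matmul(B, A)
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Toy dimensions: a 4x4 frozen weight, rank r = 1 adapters.
W = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]  # identity
A = [[1.0, 0.0, 0.0, 0.0]]        # r x d_in  = 1 x 4
B = [[0.0], [2.0], [0.0], [0.0]]  # d_out x r = 4 x 1

W_adapted = lora_adapt(W, A, B, alpha=1.0, r=1)
print(W_adapted[1][0])            # the one entry the update changed: 2.0

trainable = 1 * (4 + 4)           # r * (d_out + d_in) = 8 LoRA parameters
full = 4 * 4                      # vs. 16 for full fine-tuning of W
print(trainable, full)
```

At Mistral-7B scale the same ratio is what makes the repo's approach tractable: a rank-r adapter on a d_out × d_in projection trains r · (d_out + d_in) parameters instead of d_out · d_in.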
Related projects:
- Small and Efficient Mathematical Reasoning LLMs ☆69 · Updated 7 months ago
- ☆30 · Updated 4 months ago
- Code and data for "StructLM: Towards Building Generalist Models for Structured Knowledge Grounding" (COLM 2024) ☆67 · Updated 2 months ago
- A public implementation of the ReLoRA pretraining method, built on Lightning AI's PyTorch Lightning suite ☆33 · Updated 6 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆39 · Updated 3 weeks ago
- Codebase accompanying the "Summary of a Haystack" paper ☆65 · Updated 2 months ago
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆22 · Updated 6 months ago
- ReBase: Training Task Experts through Retrieval Based Distillation ☆27 · Updated 2 months ago
- A new metric for evaluating the faithfulness of text generated by LLMs. The work behind this repository can be found he… ☆31 · Updated last year
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆48 · Updated last week
- Retrieval Augmented Generation Generalized Evaluation Dataset ☆51 · Updated this week
- ☆29 · Updated 2 weeks ago
- Official implementation for "Extending LLMs’ Context Window with 100 Samples" ☆72 · Updated 8 months ago
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆23 · Updated last year
- Set of scripts to fine-tune LLMs ☆36 · Updated 5 months ago
- Demonstration that fine-tuning a RoPE model on longer sequences than it was pre-trained on extends the model's context limit ☆62 · Updated last year
- Improving Text Embedding of Language Models Using Contrastive Fine-tuning ☆54 · Updated last month
- Implementation of https://arxiv.org/pdf/2312.09299 ☆19 · Updated 2 months ago
- Distilling ChatGPT's coding ability into a small (1B) model ☆24 · Updated last year
- "Improving Mathematical Reasoning with Process Supervision" by OpenAI ☆55 · Updated last week
- Code for the NeurIPS LLM Efficiency Challenge ☆52 · Updated 5 months ago
- SWIM-IR: a synthetic Wikipedia-based multilingual information retrieval training set with 28 million query-passage pairs spanning 33 la… ☆42 · Updated 10 months ago
- ☆47 · Updated 3 weeks ago
- Official repo for the paper "PHUDGE: Phi-3 as Scalable Judge". Evaluate your LLMs with or without a custom rubric, reference answer, absolute… ☆48 · Updated 2 months ago
- Evaluating LLMs with CommonGen-Lite ☆83 · Updated 6 months ago
- In-Context Alignment: Chat with Vanilla Language Models Before Fine-Tuning ☆33 · Updated last year
- Official repo for the NAACL 2024 Findings paper "LeTI: Learning to Generate from Textual Interactions" ☆60 · Updated last year
- Simple replication of [ColBERT-v1](https://arxiv.org/abs/2004.12832) ☆73 · Updated 6 months ago
- NeurIPS 2023 - Cappy: Outperforming and Boosting Large Multi-Task LMs with a Small Scorer ☆34 · Updated 5 months ago
- ☆75 · Updated 3 weeks ago