philschmid / knowledge-distillation-transformers-pytorch-sagemaker
☆47 · Updated 3 years ago
Alternatives and similar repositories for knowledge-distillation-transformers-pytorch-sagemaker
Users who are interested in knowledge-distillation-transformers-pytorch-sagemaker are comparing it to the libraries listed below.
- Finetune mistral-7b-instruct for sentence embeddings ☆87 · Updated last year
- What's In My Big Data (WIMBD) - a toolkit for analyzing large text datasets ☆223 · Updated last year
- Scalable training for dense retrieval models. ☆297 · Updated 5 months ago
- DSIR large-scale data selection framework for language model training ☆265 · Updated last year
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers ☆135 · Updated last year
- Manage scalable open LLM inference endpoints in Slurm clusters ☆276 · Updated last year
- This is the repository for our paper "INTERS: Unlocking the Power of Large Language Models in Search with Instruction Tuning" ☆205 · Updated 11 months ago
- GitHub repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models" ☆207 · Updated 11 months ago
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆78 · Updated last year
- ☆294 · Updated last year
- [ACL 2025] AIR-Bench: Automated Heterogeneous Information Retrieval Benchmark ☆160 · Updated last month
- A framework for few-shot evaluation of autoregressive language models. ☆104 · Updated 2 years ago
- Tk-Instruct is a Transformer model that is tuned to solve many NLP tasks by following instructions. ☆182 · Updated 3 years ago
- Multilingual Large Language Models Evaluation Benchmark ☆133 · Updated last year
- Scripts for fine-tuning Llama2 via SFT and DPO. ☆205 · Updated 2 years ago
- Official repository for ORPO ☆465 · Updated last year
- Code and model release for the paper "Task-aware Retrieval with Instructions" by Asai et al. ☆165 · Updated 2 years ago
- Official repository of NEFTune: Noisy Embeddings Improves Instruction Finetuning ☆403 · Updated last year
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning ☆248 · Updated 2 years ago
- Train Llama 2 & 3 on the SQuAD v2 task as an example of how to specialize a generalized (foundation) model. ☆53 · Updated last year
- MultilingualSIFT: Multilingual Supervised Instruction Fine-tuning ☆94 · Updated 2 years ago
- The official evaluation suite and dynamic data release for MixEval. ☆252 · Updated last year
- Benchmarking library for RAG ☆243 · Updated last month
- Code for Multilingual Eval of Generative AI paper published at EMNLP 2023 ☆70 · Updated last year
- ☆552 · Updated 11 months ago
- Scaling Data-Constrained Language Models ☆342 · Updated 4 months ago
- ☆189 · Updated 4 months ago
- BABILong is a benchmark for LLM evaluation using the needle-in-a-haystack approach. ☆215 · Updated 2 months ago
- Implementation of paper "Data Engineering for Scaling Language Models to 128K Context" ☆477 · Updated last year
- Code for Zero-Shot Tokenizer Transfer ☆140 · Updated 10 months ago