daniel-furman / sft-demos
Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets.
☆64 · Updated last month

Related projects

Alternatives and complementary repositories for sft-demos
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers ☆122 · Updated 8 months ago
- Dense X Retrieval: What Retrieval Granularity Should We Use? ☆134 · Updated 10 months ago
- Experiments with inference on LLaMA ☆105 · Updated 5 months ago
- Experiments with generating open-source language model assistants ☆97 · Updated last year
- Official implementation for "Extending LLMs' Context Window with 100 Samples" ☆74 · Updated 10 months ago
- Code for the NeurIPS LLM Efficiency Challenge ☆54 · Updated 7 months ago
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated 9 months ago
- Repository for the paper "INTERS: Unlocking the Power of Large Language Models in Search with Instruction Tuning" ☆197 · Updated 6 months ago
- Dataset collection and preprocessing framework for NLP extreme multitask learning ☆149 · Updated 4 months ago
- AIR-Bench: Automated Heterogeneous Information Retrieval Benchmark ☆106 · Updated last month
- Minimal PyTorch implementation of BM25 (with sparse tensors) ☆90 · Updated 8 months ago
- GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embeddings ☆37 · Updated 8 months ago
- Simple replication of [ColBERT-v1](https://arxiv.org/abs/2004.12832) ☆77 · Updated 8 months ago
- MultilingualSIFT: Multilingual Supervised Instruction Fine-tuning ☆86 · Updated last year
- A pipeline for LLM knowledge distillation ☆78 · Updated 3 months ago
- Codebase accompanying the Summary of a Haystack paper ☆72 · Updated 2 months ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆115 · Updated last year
- Official repository for Inheritune ☆105 · Updated last month
- Spherical merging of PyTorch/HF-format language models with minimal feature loss ☆112 · Updated last year
- MiniCheck: Efficient Fact-Checking of LLMs on Grounding Documents [EMNLP 2024] ☆104 · Updated last month
- Multipack distributed sampler for fast padding-free training of LLMs ☆178 · Updated 3 months ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes ☆81 · Updated last year
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆124 · Updated 3 weeks ago
- 🚢 Data Toolkit for Sailor Language Models ☆82 · Updated 4 months ago
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore" ☆130 · Updated this week
- An experimental implementation of the retrieval-enhanced language model ☆75 · Updated last year