geronimi73 / 3090_shorts
Minimal scripts for 24 GB VRAM GPUs: training, inference, whatever.
☆48 · Updated this week
Alternatives and similar repositories for 3090_shorts
Users interested in 3090_shorts are comparing it to the repositories listed below.
- ☆86 · Updated last year
- ☆51 · Updated 9 months ago
- Small and Efficient Mathematical Reasoning LLMs ☆72 · Updated last year
- ☆51 · Updated last year
- Codebase accompanying the Summary of a Haystack paper. ☆79 · Updated last year
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆78 · Updated last year
- Official implementation for "Extending LLMs' Context Window with 100 Samples" ☆80 · Updated last year
- ☆48 · Updated last year
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated last year
- Simple replication of [ColBERT-v1](https://arxiv.org/abs/2004.12832). ☆79 · Updated last year
- Improving Text Embedding of Language Models Using Contrastive Fine-tuning ☆65 · Updated last year
- Verifiers for LLM Reinforcement Learning ☆79 · Updated 7 months ago
- Spherically merge PyTorch/HF-format language models with minimal feature loss. ☆140 · Updated 2 years ago
- Using open source LLMs to build synthetic datasets for direct preference optimization ☆69 · Updated last year
- Data preparation code for Amber 7B LLM ☆93 · Updated last year
- Code for NeurIPS LLM Efficiency Challenge ☆59 · Updated last year
- ☆55 · Updated last year
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆39 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆35 · Updated last year
- Simple GRPO scripts and configurations. ☆59 · Updated 9 months ago
- Data preparation code for CrystalCoder 7B LLM ☆45 · Updated last year
- Set of scripts to finetune LLMs ☆38 · Updated last year
- Code and data for "StructLM: Towards Building Generalist Models for Structured Knowledge Grounding" (COLM 2024) ☆75 · Updated last year
- ☆39 · Updated last year
- Implementation of the paper: "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google, in PyTorch ☆56 · Updated 3 weeks ago
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or other RLHF techniques, while always keeping a data-first approach ☆169 · Updated last year
- FuseAI Project ☆87 · Updated 9 months ago
- A pipeline for LLM knowledge distillation ☆109 · Updated 7 months ago
- ☆62 · Updated last year
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… ☆50 · Updated last year