scaleapi / open-tgi
☆22 · Updated this week
Related projects:
- Tune MPTs · ☆84 · Updated last year
- Fine-tune Mistral 7B to generate fashion style suggestions · ☆30 · Updated 8 months ago
- Chat Markup Language conversation library · ☆53 · Updated 8 months ago
- ☆201 · Updated 7 months ago
- QLoRA with Enhanced Multi GPU Support · ☆36 · Updated last year
- ☆48 · Updated 6 months ago
- ☆58 · Updated 3 weeks ago
- ☆41 · Updated 3 months ago
- High-level library for batched embedding generation, blazingly fast web-based RAG, and quantized index processing ⚡ · ☆58 · Updated 2 weeks ago
- Steer LLM outputs towards a certain topic/subject and enhance response capabilities using activation engineering by adding steering vecto… · ☆40 · Updated 6 months ago
- Full finetuning of large language models without large memory requirements · ☆94 · Updated 8 months ago
- ☆18 · Updated last year
- Completion After Prompt Probability: make your LLM make a choice · ☆68 · Updated last week
- Experiments with generating open-source language model assistants · ☆97 · Updated last year
- Modified Stanford-Alpaca trainer for training Replit's code model · ☆40 · Updated last year
- ☆75 · Updated 3 weeks ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes · ☆81 · Updated last year
- BIG: Back In the Game of Creative AI · ☆25 · Updated last year
- A library for squeakily cleaning and filtering language datasets · ☆45 · Updated last year
- Low-rank adapter extraction for fine-tuned transformer models · ☆154 · Updated 4 months ago
- ☆89 · Updated 11 months ago
- Fast approximate inference on a single GPU with sparsity-aware offloading · ☆39 · Updated 8 months ago
- A public implementation of the ReLoRA pretraining method, built on Lightning AI's PyTorch Lightning suite · ☆33 · Updated 6 months ago
- Simple replication of [ColBERT-v1](https://arxiv.org/abs/2004.12832) · ☆73 · Updated 6 months ago
- An implementation of Self-Extend, which expands the context window via grouped attention · ☆117 · Updated 8 months ago
- ☆65 · Updated 2 months ago
- ☆26 · Updated last year
- Demonstration that finetuning a RoPE model on sequences longer than those seen in pre-training extends the model's context limit · ☆62 · Updated last year
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts · ☆22 · Updated 6 months ago
- Let's create synthetic textbooks together :) · ☆70 · Updated 7 months ago
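One entry above describes steering LLM outputs by adding steering vectors to model activations. A minimal toy sketch of that idea, using plain NumPy with illustrative names (none of this is any listed repository's API): derive a steering vector as the difference between mean hidden activations collected from two contrasting prompt sets, then add it, scaled, to a new hidden state.

```python
import numpy as np

# Toy sketch of activation steering. All names and shapes here are
# illustrative assumptions, not taken from any of the listed projects.
rng = np.random.default_rng(0)
hidden_dim = 8

# Pretend these are hidden states captured from two contrasting prompt
# sets (e.g. "happy" vs "sad" prompts) at some chosen layer.
positive_acts = rng.normal(loc=1.0, size=(5, hidden_dim))
negative_acts = rng.normal(loc=-1.0, size=(5, hidden_dim))

# The steering vector is the difference of the two activation means.
steering_vector = positive_acts.mean(axis=0) - negative_acts.mean(axis=0)

def steer(activation: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Shift a hidden activation toward the target concept by adding
    the scaled steering vector."""
    return activation + alpha * steering_vector

# Apply it to a fresh activation at generation time.
new_act = rng.normal(size=hidden_dim)
steered = steer(new_act, alpha=0.5)
```

In a real setup the activations would be captured with forward hooks on a transformer layer, and `alpha` tuned so the steered outputs stay fluent.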
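The "Completion After Prompt Probability" entry names a selection pattern: score each candidate completion by its probability under the model given the prompt, then pick the highest-scoring one. A toy sketch of that pattern follows; `token_logprob` is a hypothetical stand-in for a real language-model scoring call, not that project's API.

```python
import math

def token_logprob(prompt: str, token: str) -> float:
    # Hypothetical stand-in for an LM call: scores a token by its
    # character overlap with the prompt. A real implementation would
    # query the model for log P(token | prompt).
    overlap = len(set(prompt.lower()) & set(token.lower()))
    return math.log(1 + overlap)

def completion_logprob(prompt: str, completion: str) -> float:
    # Sum per-token log-probs via the chain rule, extending the
    # context with each scored token, as a real LM scorer would.
    total, ctx = 0.0, prompt
    for tok in completion.split():
        total += token_logprob(ctx, tok)
        ctx += " " + tok
    return total

def choose(prompt: str, options: list[str]) -> str:
    # Pick the completion the (stand-in) model finds most probable.
    return max(options, key=lambda o: completion_logprob(prompt, o))

print(choose("the fruit is", ["apple", "xyz"]))  # prints "apple"
```

The point of scoring completions rather than sampling free-form text is that the model is forced to choose among a fixed option set, which makes the output trivially parseable.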