VatsaDev / NanoPhi-alpha
GPT-2 small trained on phi-like data
☆65 · Updated 9 months ago
Related projects
Alternatives and complementary repositories for NanoPhi-alpha
- Let's create synthetic textbooks together :) ☆70 · Updated 9 months ago
- ☆72 · Updated last year
- Model REVOLVER, a human-in-the-loop model mixing system. ☆33 · Updated last year
- An unsupervised model merging algorithm for Transformer-based language models. ☆100 · Updated 6 months ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers, with QLoRA ☆124 · Updated last year
- 5x faster QLoRA finetuning with 60% less memory ☆21 · Updated 5 months ago
- A guidance compatibility layer for llama-cpp-python ☆34 · Updated last year
- Low-rank adapter extraction for fine-tuned transformer models ☆162 · Updated 6 months ago
- ☆104 · Updated 8 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆77 · Updated 7 months ago
- An implementation of Self-Extend, which expands the context window via grouped attention (a sketch of the position remapping follows this list) ☆118 · Updated 10 months ago
- ☆20 · Updated last year
- 1.58-bit LLaMa model (a ternary-quantization sketch follows this list) ☆79 · Updated 7 months ago
- entropix-style sampling + GUI ☆25 · Updated 3 weeks ago
- Experimental sampler to make LLMs more creative ☆30 · Updated last year
- Full finetuning of large language models without large memory requirements ☆93 · Updated 10 months ago
- ☆27 · Updated last year
- ☆64 · Updated 5 months ago
- Easily view and modify JSON datasets for large language models ☆62 · Updated last month
- The one who calls upon functions - Function-Calling Language Model ☆36 · Updated last year
- ☆37 · Updated 11 months ago
- Some simple scripts that I use day-to-day when working with LLMs and Huggingface Hub ☆155 · Updated last year
- Steer LLM outputs towards a certain topic/subject and enhance response capabilities using activation engineering by adding steering vecto… (a steering-hook sketch follows this list) ☆42 · Updated 8 months ago
- A fast batching API for serving LLMs ☆172 · Updated 6 months ago
- This is our own implementation of 'Layer Selective Rank Reduction' (a rank-reduction sketch follows this list) ☆232 · Updated 5 months ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes ☆81 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated 6 months ago
- Merge Transformer language models using gradient parameters ☆201 · Updated 3 months ago
- ☆31 · Updated 10 months ago
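
A few of the techniques above are easy to illustrate. First, a minimal sketch of the relative-position remapping that Self-Extend describes (not the linked repo's code; `neighbor_window` and `group` are illustrative values, and the boundary shift follows my reading of the paper):

```python
import torch

def self_extend_positions(seq_len: int, neighbor_window: int = 512, group: int = 4) -> torch.Tensor:
    """Remap relative positions in the spirit of Self-Extend: nearby tokens
    keep exact distances, distant tokens fall back to coarse floor-divided
    group positions, shifted so the two regimes meet at the window edge."""
    pos = torch.arange(seq_len)
    rel = pos[:, None] - pos[None, :]            # exact relative distances
    grouped = pos // group                       # coarse group indices
    shift = neighbor_window - neighbor_window // group
    rel_grouped = grouped[:, None] - grouped[None, :] + shift
    # keep exact distances inside the neighbor window, grouped ones beyond it
    return torch.where(rel < neighbor_window, rel, rel_grouped)
```

Because the remapped distances never exceed what the model saw in pretraining, the context window can grow without finetuning.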
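The 1.58-bit entry refers to ternary weights. A sketch of absmean quantization as published in the BitNet b1.58 paper (the function name is mine, and real 1.58-bit training quantizes on the fly rather than post hoc):

```python
import torch

def ternarize_absmean(w: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """Quantize a weight tensor to {-1, 0, +1} plus one per-tensor scale,
    following the absmean recipe from BitNet b1.58."""
    scale = w.abs().mean().clamp(min=1e-8)   # per-tensor scale
    q = (w / scale).round().clamp_(-1, 1)    # ternary weights
    return q, scale                          # approximate reconstruction: q * scale
```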
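Activation steering, from the steering-vector entry, amounts to adding a fixed vector to one layer's hidden states at inference time. A minimal sketch with a PyTorch forward hook; `gpt2`, the layer index, and the random vector are stand-ins, since a real steering vector is usually the activation difference between two contrasting prompts:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")           # stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2")

steer = 0.1 * torch.randn(model.config.hidden_size)  # placeholder steering vector

def add_steering(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states
    return (output[0] + steer.to(output[0].dtype),) + output[1:]

handle = model.transformer.h[6].register_forward_hook(add_steering)
ids = tok("The weather today is", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=20)[0]))
handle.remove()
```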
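Finally, the core operation behind 'Layer Selective Rank Reduction' is an SVD truncation of a chosen weight matrix; picking which layer to target and how much rank to keep is the interesting part, so the `keep` fraction below is arbitrary:

```python
import torch

@torch.no_grad()
def rank_reduce(weight: torch.Tensor, keep: float = 0.1) -> torch.Tensor:
    """Replace a weight matrix with its best rank-k approximation via
    truncated SVD, the operation at the heart of rank reduction."""
    U, S, Vh = torch.linalg.svd(weight.float(), full_matrices=False)
    k = max(1, int(keep * S.numel()))
    return (U[:, :k] * S[:k]) @ Vh[:k, :]
```

In practice the reduced matrix is copied back over the original parameter and downstream accuracy is re-measured layer by layer.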