mchl-labs / stambecco
The home of Stambecco: Italian Instruction-following LLaMA Model
☆20 · Updated 2 years ago
Alternatives and similar repositories for stambecco
Users interested in stambecco are comparing it to the repositories listed below.
- Camoscio: An Italian instruction-tuned language model based on LLaMA (☆127, updated last year)
- Get ready to meet Fauno, the Italian language model crafted by the RSTLess Research Group at Sapienza University of Rome (☆83, updated last year)
- Tune MPTs (☆84, updated 2 years ago)
- Toolkit for attaching, training, saving and loading of new heads for transformer models (☆280, updated 3 months ago)
- Simple Question Answering system, based on data crawled from the Twin Peaks Wiki. It is built using Haystack, an awesome open-source frame… (☆11, updated 2 years ago)
- Cerbero-7b is the first 100% Free and Open Source Italian Large Language Model (LLM) ready to be used for research or commercial applicat… (☆34, updated last year)
- Let's build better datasets, together! (☆260, updated 6 months ago)
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free (☆231, updated 7 months ago)
- A set of scripts and notebooks on LLM fine-tuning and dataset creation (☆110, updated 9 months ago)
- Custom FastAPI server packaged as a Docker image for Hugging Face inference endpoints deployment (☆11, updated last year)
- FastFit ⚡ When LLMs are Unfit Use FastFit ⚡ Fast and Effective Text Classification with Many Classes (☆207, updated last month)
- Domain Adapted Language Modeling Toolkit - E2E RAG (☆322, updated 7 months ago)
- Materials for "IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation" 🇮🇹 (☆30, updated last year)
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" (☆456, updated last year)
- Tune any FALCON in 4-bit (☆467, updated last year)
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining (☆699, updated last year)
- Maybe the new state-of-the-art vision model? We'll see 🤷‍♂️ (☆165, updated last year)
- Knowledge pills on Neural Search (☆26, updated 2 years ago)
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind (☆177, updated 9 months ago)
- 🦖 X—LLM: Cutting Edge & Easy LLM Finetuning (☆402, updated last year)
- This project is a collection of fine-tuning scripts to help researchers fine-tune Qwen 2 VL on Hugging Face datasets (☆73, updated 9 months ago)
- Mistral + Haystack: build RAG pipelines that rock 🤘 (☆105, updated last year)
- Fine-tuning Open-Source LLMs for Adaptive Machine Translation (☆80, updated last month)
- FRP Fork (☆169, updated 2 months ago)