sshh12 / multi_token
Embed arbitrary modalities (images, audio, documents, etc.) into large language models.
☆184 · Updated last year
Alternatives and similar repositories for multi_token
Users interested in multi_token are comparing it to the libraries listed below.
- This is our own implementation of 'Layer Selective Rank Reduction' ☆239 · Updated last year
- Low-Rank adapter extraction for fine-tuned transformers models ☆173 · Updated last year
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆198 · Updated 11 months ago
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answe… ☆155 · Updated last year
- Maybe the new state of the art vision model? we'll see 🤷‍♂️ ☆165 · Updated last year
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated 8 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆173 · Updated 5 months ago
- Generate Synthetic Data Using OpenAI, MistralAI or AnthropicAI ☆222 · Updated last year
- Beyond Language Models: Byte Models are Digital World Simulators ☆322 · Updated last year
- PyTorch implementation of models from the Zamba2 series. ☆182 · Updated 5 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆78 · Updated last year
- The Truth Is In There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction ☆388 · Updated 11 months ago
- A bagel, with everything. ☆321 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆191 · Updated 10 months ago
- An implementation of Self-Extend, to expand the context window via grouped attention ☆119 · Updated last year
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆140 · Updated 4 months ago
- Merge Transformers language models by use of gradient parameters. ☆206 · Updated 10 months ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆239 · Updated 4 months ago
- ☆133 · Updated 10 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆144 · Updated 9 months ago
- Implementation of DoRA ☆294 · Updated last year
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆277 · Updated last year
- Full finetuning of large language models without large memory requirements ☆94 · Updated last year
- Run paligemma in real time ☆131 · Updated last year
- GRadient-INformed MoE ☆263 · Updated 9 months ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆231 · Updated 7 months ago
- Video+code lecture on building nanoGPT from scratch ☆68 · Updated last year
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆238 · Updated last year
- PyTorch Implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" ☆629 · Updated last year
- Steer LLM outputs towards a certain topic/subject and enhance response capabilities using activation engineering by adding steering vecto… ☆239 · Updated 4 months ago