evintunador / templateGPT
customizable template GPT code designed for easy novel architecture experimentation
☆26 · Updated last week
Alternatives and similar repositories for templateGPT:
Users interested in templateGPT are comparing it to the repositories listed below.
- A compact LLM pretrained in 9 days on high-quality data ☆303 · Updated 4 months ago
- OpenCoconut implements a latent reasoning paradigm that generates thoughts before decoding ☆168 · Updated 2 months ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆138 · Updated last month
- ☆126 · Updated 7 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆196 · Updated 8 months ago
- ☆91 · Updated 2 months ago
- Code for ExploreTom ☆78 · Updated 3 months ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆91 · Updated 3 weeks ago
- ☆106 · Updated 3 months ago
- ☆112 · Updated 6 months ago
- Code for training and evaluating Contextual Document Embedding models ☆176 · Updated 2 months ago
- Banishing LLM Hallucinations Requires Rethinking Generalization ☆272 · Updated 8 months ago
- EvolKit is a framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆208 · Updated 4 months ago
- Collection of autoregressive model implementations ☆83 · Updated last month
- A pipeline for LLM knowledge distillation ☆99 · Updated this week
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆155 · Updated 5 months ago
- Model activation visualiser ☆90 · Updated this week
- DeMo: Decoupled Momentum Optimization ☆185 · Updated 3 months ago
- Train your own SOTA deductive reasoning model ☆81 · Updated 3 weeks ago
- Reference implementation of the Mistral AI 7B v0.1 model ☆28 · Updated last year
- Code for the NeurIPS'24 paper "Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization" ☆186 · Updated 3 months ago
- SmolLM with the Entropix sampler in PyTorch ☆151 · Updated 4 months ago
- Micro Llama is a small Llama-based model with 300M parameters trained from scratch on a $500 budget ☆145 · Updated last year
- ☆107 · Updated last week
- Low-rank adapter extraction for fine-tuned transformer models ☆171 · Updated 10 months ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆230 · Updated 4 months ago
- My fork of Allen AI's OLMo for educational purposes ☆30 · Updated 3 months ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆85 · Updated last week
- An Open Source Toolkit For LLM Distillation ☆554 · Updated 2 months ago
- Long-context evaluation for large language models ☆201 · Updated 3 weeks ago