jquesnelle / crt-terminal
Retro-styled terminal shell
☆26 · Updated last year
Alternatives and similar repositories for crt-terminal
Users interested in crt-terminal are comparing it to the libraries listed below.
- smolLM with Entropix sampler in PyTorch ☆150 · Updated 7 months ago
- A Collection of Pydantic Models to Abstract IRL ☆18 · Updated last week
- look how they massacred my boy ☆63 · Updated 7 months ago
- entropix-style sampling + GUI ☆26 · Updated 7 months ago
- MiniHF is an inference, human preference data collection, and fine-tuning tool for local language models. It is intended to help the user… ☆172 · Updated last week
- ☆48 · Updated last year
- Plotting (entropy, varentropy) for small LMs ☆96 · Updated 2 weeks ago
- Karpathy's llama2.c transpiled to MLX for Apple Silicon ☆14 · Updated last year
- ☆113 · Updated 5 months ago
- ☆66 · Updated last year
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes ☆81 · Updated last year
- ☆38 · Updated 10 months ago
- A strongly typed Python DSL for developing message-passing multi-agent systems ☆53 · Updated last year
- This repository explains and provides examples of "concept anchoring" in GPT-4. ☆72 · Updated last year
- smol models are fun too ☆92 · Updated 6 months ago
- Approximating the joint distribution of language models via MCTS ☆21 · Updated 7 months ago
- Synthetic data derived by templating, few-shot prompting, transformations on public-domain corpora, and Monte Carlo tree search ☆32 · Updated 3 months ago
- ☆22 · Updated last year
- A tree-based prefix-cache library that allows rapid creation of looms: hierarchical branching pathways of LLM generations ☆68 · Updated 3 months ago
- Chat Markup Language conversation library ☆55 · Updated last year
- MLX port of xjdr's entropix sampler (mimics the JAX implementation) ☆64 · Updated 7 months ago
- Generate Synthetic Data Using OpenAI, MistralAI, or AnthropicAI ☆221 · Updated last year
- Full finetuning of large language models without large memory requirements ☆93 · Updated last year
- Preprint: Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆28 · Updated last year
- A guidance compatibility layer for llama-cpp-python ☆34 · Updated last year
- Modified Stanford-Alpaca Trainer for Training Replit's Code Model ☆40 · Updated 2 years ago
- ☆111 · Updated 5 months ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆139 · Updated 3 months ago
- Modify Entropy Based Sampling to work with Apple Silicon via MLX ☆50 · Updated 6 months ago
- Entropy Based Sampling and Parallel CoT Decoding ☆17 · Updated 7 months ago