philschmid / aws-neuron-samples
☆14 · Updated 2 years ago
Alternatives and similar repositories for aws-neuron-samples
Users interested in aws-neuron-samples are comparing it to the libraries listed below.
- Demonstration that fine-tuning a RoPE model on longer sequences than it was pre-trained on adapts the model's context limit ☆63 · Updated 2 years ago
- GitHub repo for Peifeng's internship project ☆13 · Updated last year
- The Benefits of a Concise Chain of Thought on Problem Solving in Large Language Models ☆22 · Updated 10 months ago
- ☆63 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning AI's PyTorch Lightning suite ☆34 · Updated last year
- A pipeline for using API calls to agnostically convert unstructured data into structured training data ☆31 · Updated last year
- ☆56 · Updated 3 months ago
- Library to facilitate pruning of LLMs based on context ☆32 · Updated last year
- ☆13 · Updated 9 months ago
- ☆20 · Updated 11 months ago
- Alice in Wonderland code base for experiments and raw experiment data ☆131 · Updated last week
- Aana SDK is a powerful framework for building AI-enabled multimodal applications ☆52 · Updated last month
- Using modal.com to process FineWeb-edu data ☆20 · Updated 5 months ago
- This library supports evaluating disparities in generated image quality, diversity, and consistency between geographic regions ☆20 · Updated last year
- ☆50 · Updated last year
- Visualize multi-model embedding spaces. The first goal is to quickly get a lay of the land of any embedding space. Then be able to scroll… ☆28 · Updated last year
- ☆18 · Updated last year
- AI Pull-Request Reviewer Companion (in the command line) ☆13 · Updated last year
- ☆49 · Updated last year
- Collection of autoregressive model implementations ☆86 · Updated 5 months ago
- Latent Large Language Models ☆18 · Updated last year
- NeurIPS 2023 - Cappy: Outperforming and Boosting Large Multi-Task LMs with a Small Scorer ☆43 · Updated last year
- A repository for research on medium-sized language models ☆77 · Updated last year
- Unleash the full potential of exascale LLMs on consumer-class GPUs, proven by extensive benchmarks, with no long-term adjustments and min… ☆26 · Updated 10 months ago
- ☆16 · Updated 5 months ago
- ☆27 · Updated 2 years ago
- BH hackathon ☆14 · Updated last year
- ☆26 · Updated last year
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆41 · Updated last year
- Implementation of https://arxiv.org/pdf/2312.09299 ☆21 · Updated last year