intel / document-automation
Document Automation Reference Kit
☆15 · Updated last year
Alternatives and similar repositories for document-automation
Users interested in document-automation are comparing it to the libraries listed below.
- ☆19 · Updated last month
- FMS Model Optimizer is a framework for developing reduced precision neural network models. ☆20 · Updated 3 weeks ago
- This is a new metric that can be used to evaluate faithfulness of text generated by LLMs. The work behind this repository can be found he… ☆31 · Updated 2 years ago
- ☆13 · Updated last year
- Unit Scaling demo and experimentation code ☆16 · Updated last year
- ☆68 · Updated last month
- Train, tune, and infer Bamba model ☆132 · Updated 3 months ago
- Implementation of the paper: "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in PyTorch ☆56 · Updated this week
- Minimum Description Length probing for neural network representations ☆20 · Updated 8 months ago
- Advanced Reasoning Benchmark Dataset for LLMs ☆47 · Updated last year
- Intel Gaudi's Megatron DeepSpeed Large Language Models for training ☆13 · Updated 9 months ago
- Download, parse, and filter data from Phil Papers. Data-ready for The-Pile. ☆18 · Updated 2 years ago
- ☆54 · Updated 10 months ago
- LM engine is a library for pretraining/finetuning LLMs ☆67 · Updated last week
- train with kittens! ☆62 · Updated 11 months ago
- ☆39 · Updated 2 years ago
- ☆26 · Updated last year
- OLMost every training recipe you need to perform data interventions with the OLMo family of models. ☆48 · Updated this week
- Utilities for Training Very Large Models ☆58 · Updated last year
- Benchmarking different models with PyTorch 2.0 ☆20 · Updated 2 years ago
- Hugging Face and Pyserini interoperability ☆19 · Updated 2 years ago
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆61 · Updated last week
- Open-sourced predictions, execution logs, trajectories, and results from model inference and evaluation runs on the SWE-bench task. ☆15 · Updated last year
- Repository for CPU Kernel Generation for LLM Inference ☆26 · Updated 2 years ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆60 · Updated 11 months ago
- 🚀 Collection of libraries used with fms-hf-tuning to accelerate fine-tuning and training of large models. ☆11 · Updated last week
- ☆20 · Updated 2 years ago
- Embroid: Unsupervised Prediction Smoothing Can Improve Few-Shot Classification ☆11 · Updated 2 years ago
- Codebase release for an EMNLP 2023 paper publication ☆19 · Updated last week
- Repo hosting code and materials related to speeding up LLM inference using token merging. ☆36 · Updated 2 months ago