OSS Project

vLLM

A high-throughput and memory-efficient inference and serving engine for LLMs

Rank: 82 (up 22)
Git Repositories: vllm
Started: 2023-02-09 (655 days ago)
Open Core Products

| Open Core Company | Product |
|---|---|
| Anyscale | Anyscale |
| Neural Magic | nm-vllm |
| Tembo | Tembo Cloud |
| Tembo | Tembo Self Hosted |
GitHub Stars: 30,526 (#345)
Weekly commits since inception (chart)

Weekly contributors since inception (chart)
Recent Project Activity

| Day Span | Commits | Rank | Contributors | Rank |
|---|---|---|---|---|
| 30 | 420 | #103 | 140 | #29 |
| 90 | 1,088 | #146 | 273 | #26 |
| 365 | 3,059 | #249 | 590 | #32 |
| 1095 | 3,553 | #732 | 672 | #113 |
| All time | 3,553 | — | 672 | — |
Contributing Individuals

Commits over the past 30 and 90 days, and all time (contributor rank shown first):

| Rank | Contributor | 30 | 90 | All |
|---|---|---|---|---|
| #247 | explainerauthors | 0 | 0 | 2 |
| #247 | Joe Runde | 0 | 0 | 2 |
| #247 | FlorianJoncour | 0 | 0 | 2 |
| #247 | nunjunj | 0 | 0 | 2 |
| #247 | Junyang Lin | 0 | 0 | 2 |
| #247 | Abhinav Goyal | 0 | 0 | 2 |
| #247 | dancingpipi | 0 | 0 | 2 |
| #247 | Kante Yin | 0 | 0 | 2 |
| #247 | Harry Mellor | 0 | 0 | 2 |
| #247 | Jae-Won Chung | 0 | 0 | 2 |
| #247 | dllehr-amd | 0 | 0 | 2 |
| #247 | Zach Zheng | 0 | 0 | 2 |
| #293 | fyuan1316 | 0 | 1 | 1 |
| #293 | Yuhong Guo | 0 | 1 | 1 |
| #293 | Wei-Sheng Chin | 0 | 1 | 1 |
| #293 | Sam Stoelinga | 0 | 1 | 1 |
| #293 | Guillaume Calmettes | 0 | 1 | 1 |
| #293 | Ewout ter Hoeven | 0 | 1 | 1 |
| #293 | Allen Wang | 0 | 1 | 1 |
| #293 | Aarni Koskela | 0 | 1 | 1 |
Contributing Companies

Add this OSSRank shield to this project's README.md

[![OSSRank](https://shields.io/endpoint?url=https://ossrank.com/shield/4026)](https://ossrank.com/p/4026)
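This badge is a shields.io endpoint badge: shields.io fetches the rank data as JSON from the `ossrank.com/shield` URL and renders it as an SVG, while the outer link points at the project page. A minimal sketch (the `ossrank_badge` helper name is hypothetical) that assembles the same markdown for any OSSRank project id:

```python
def ossrank_badge(project_id: int) -> str:
    """Build the OSSRank shield markdown for a given project id."""
    # shields.io endpoint badge: renders JSON served from the given URL
    shield_url = f"https://shields.io/endpoint?url=https://ossrank.com/shield/{project_id}"
    # Clicking the badge navigates to the project's OSSRank page
    project_url = f"https://ossrank.com/p/{project_id}"
    return f"[![OSSRank]({shield_url})]({project_url})"

# vLLM's OSSRank project id is 4026, matching the snippet above
print(ossrank_badge(4026))
```

Paste the returned string into the repository's README.md to display the live rank.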