OSS Project

vLLM

A high-throughput and memory-efficient inference and serving engine for LLMs
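To illustrate what the engine does, below is a minimal sketch of vLLM's offline batch-inference Python API. The model name, prompt, and sampling values are illustrative placeholders, not recommendations from this page.

```python
# Minimal vLLM offline-inference sketch (model, prompt, and sampling values are placeholders).
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # any Hugging Face-compatible model ID
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# generate() batches the prompts and returns one RequestOutput per prompt.
outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    print(out.prompt, out.outputs[0].text)
```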

Rank: 81 (increased by 23)
Git Repositories: vllm
Started: 2023-02-09 (653 days ago)
Open Core Products
Company        Product
Anyscale       Anyscale
Neural Magic   nm-vllm
Tembo          Tembo Cloud
Tembo          Tembo Self Hosted
GitHub Stars: 30,526 (rank #345)
Weekly commits since inception [chart spanning 2023 to 2024]
Weekly contributors since inception [chart spanning 2023 to 2024]
Recent Project Activity
Day Span   Commits   Rank   Contributors   Rank
30         424       #104   142            #28
90         1,096     #147   274            #25
365        3,056     #250   590            #32
1095       3,550     #734   672            #113
All time   3,550            672
Contributing Individuals
Commits in the past 30 days, 90 days, and all time
Rank   Contributor   30   90   All
21 Travis Johnson 5 9 22
22 Alexander Matveev 0 9 25
23 Alex Brooks 3 16 17
24 sroy745 3 14 18
25 bnellnm 4 11 19
26 Lily Liu 1 9 26
27 Yao Lu (Jason) 0 0 31
28 Philipp Moritz 0 0 27
29 Cade Daniel 0 0 27
30 bigPYJ1151 4 8 17
31 Ji Kunshang 1 7 20
32 Thomas Parnell 0 2 24
33 ProExpertProg 4 10 14
34 Kuntai Du 0 6 17
35 sasha0552 1 6 16
36 William Lin 0 6 16
36 YongzaoDan 6 9 9
38 Hongxia Yang 0 2 18
39 Patrick von Platen 3 10 10
40 Zifei Tong 3 5 13
Contributing Companies

Add this OSSRank shield to this project's README.md

[![OSSRank](https://shields.io/endpoint?url=https://ossrank.com/shield/4026)](https://ossrank.com/p/4026)