OSS Project

vLLM

A high-throughput and memory-efficient inference and serving engine for LLMs
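
For readers new to the project, a minimal sketch of vLLM's offline-inference Python API is shown below; the model name and sampling settings are illustrative examples only, and the exact interface may vary by release:

    # Minimal offline-inference sketch using vLLM's Python API.
    # The model name and sampling settings are illustrative only.
    from vllm import LLM, SamplingParams

    prompts = ["The capital of France is"]
    sampling_params = SamplingParams(temperature=0.8, max_tokens=32)

    # Load the model once; vLLM manages KV-cache memory internally.
    llm = LLM(model="facebook/opt-125m")

    # Generate completions for the whole batch of prompts in one call.
    for output in llm.generate(prompts, sampling_params):
        print(output.prompt, "->", output.outputs[0].text)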

Rank
165 (increased by 55)
Git Repositories
vllm
Started
2023-02-09 (510 days ago)
Open Core Products
Open Core Company | Product
Anyscale          | Anyscale
Neural Magic      | nm-vllm
Tembo             | Tembo Cloud
Tembo             | Tembo Self Hosted
GitHub Stars
21,774 (#525)
Weekly commits since inception (chart, 2023 to 2024)
Weekly contributors since inception (chart, 2023 to 2024)
Recent Project Activity
Day Span | Commits (rank) | Contributors (rank)
30       | 330 (#146)     | 91 (#53)
90       | 759 (#229)     | 187 (#49)
365      | 1,569 (#530)   | 391 (#65)
1095     | 1,788 (#1,237) | 398 (#254)
All time | 1,805          | 401
Contributing Individuals
Rank | Contributor               | Commits (30d) | Commits (90d) | Commits (all time)
21   | Jee Li                    | 1             | 7             | 12
22   | Ronen Schaffer (IBM)      | 1             | 5             | 12
23   | Fu Jie                    | 6             | 6             | 6
23   | Alexander Matveev         | 2             | 8             | 8
25   | bigPYJ1151                | 3             | 6             | 8
26   | Allen.Dou                 | 2             | 4             | 10
27   | Isotr0py                  | 5             | 6             | 6
28   | Ji Kunshang               | 2             | 6             | 8
29   | Kuntai Du                 | 4             | 6             | 6
30   | Lily Liu                  | 1             | 3             | 11
30   | leiwen83                  | 1             | 6             | 8
32   | sasha0552                 | 2             | 6             | 6
33   | Harry Mellor              | 0             | 5             | 8
34   | xwjiang2010               | 4             | 4             | 5
34   | Isotr0py                  | 0             | 6             | 7
36   | Zifei Tong                | 3             | 5             | 5
37   | Chang Su                  | 3             | 4             | 5
38   | Varun Sundar Rabindranath | 3             | 4             | 4
39   | ljss                      | 0             | 0             | 9
40   | Travis Johnson            | 1             | 4             | 5
Contributing Companies

Add this OSSRank shield to this project's README.md

[![OSSRank](https://shields.io/endpoint?url=https://ossrank.com/shield/4026)](https://ossrank.com/p/4026)