vLLM

A high-throughput and memory-efficient inference and serving engine for LLMs
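As a quick illustration of that description (a minimal sketch, not part of this page's data; the model name and sampling settings below are arbitrary examples), vLLM exposes an offline Python API for batched generation:

```python
# Minimal vLLM offline-inference sketch (assumes `pip install vllm` and a supported GPU).
from vllm import LLM, SamplingParams

# Any Hugging Face causal LM identifier works here; this small model is illustrative.
llm = LLM(model="facebook/opt-125m")

sampling = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)
outputs = llm.generate(["The capital of France is"], sampling)

for out in outputs:
    # Each RequestOutput holds the prompt and its generated candidate(s).
    print(out.prompt, "->", out.outputs[0].text)
```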

Rank: 81 (increased by 23)
Git Repositories: vllm
Started: 2023-02-09 (654 days ago)
Open Core Products
Open Core Company | Product
Anyscale          | Anyscale
Neural Magic      | nm-vllm
Tembo             | Tembo Cloud
Tembo             | Tembo Self Hosted
GitHub Stars: 30,526 (ranked #345)
Weekly commits since inception (chart, 2023-2024)
Weekly contributors since inception (chart, 2023-2024)
Recent Project Activity
Day Span | Commits (rank) | Contributors (rank)
30       | 424 (#104)     | 142 (#28)
90       | 1,096 (#147)   | 274 (#25)
365      | 3,056 (#250)   | 590 (#32)
1095     | 3,550 (#734)   | 672 (#113)
All time | 3,553          | 672
Contributing Individuals
Commits in the past 30 days, 90 days, and all time:

Rank | Contributor          | 30d | 90d | All
61   | afeldman-nm          | 0   | 4   | 8
62   | Rui Qiao             | 0   | 2   | 9
63   | Noam Gat             | 1   | 1   | 9
63   | Andy Dai             | 0   | 6   | 6
63   | B-201                | 4   | 4   | 4
66   | whyiug               | 1   | 4   | 6
66   | Terry Tang           | 1   | 4   | 6
68   | Maximilien de Bayser | 2   | 4   | 5
69   | Pavani Majety        | 3   | 4   | 4
70   | Yuan Zhou            | 2   | 3   | 5
70   | chenqianfzh          | 0   | 4   | 6
70   | Chang Su             | 0   | 1   | 8
73   | Wallas Henrique      | 1   | 4   | 5
74   | litianjian           | 2   | 4   | 4
75   | omrishiv             | 0   | 3   | 6
75   | Divakar Verma        | 0   | 3   | 6
75   | Jiaxin Shan          | 0   | 3   | 6
78   | leiwen83             | 0   | 0   | 8
78   | chaunceyjiang        | 3   | 3   | 3
78   | Ricky Xu             | 3   | 3   | 3
Contributing Companies

Add this OSSRank shield to this project's README.md

[![OSSRank](https://shields.io/endpoint?url=https://ossrank.com/shield/4026)](https://ossrank.com/p/4026)
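The shield above is a standard shields.io endpoint badge, so the data behind it can be inspected directly. A minimal sketch, assuming https://ossrank.com/shield/4026 returns the usual endpoint-badge JSON fields (label, message, and optionally color):

```python
# Fetch and print the badge data behind the OSSRank shield (a sketch; field
# names assume the standard shields.io endpoint-badge JSON schema).
import json
import urllib.request

with urllib.request.urlopen("https://ossrank.com/shield/4026") as resp:
    badge = json.load(resp)

# For this project the message would be the current OSSRank position.
print(f'{badge.get("label", "OSSRank")}: {badge.get("message")}')
```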