OSS Project

vLLM

A high-throughput and memory-efficient inference and serving engine for LLMs
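
As a rough illustration of the "inference and serving engine" described above, a minimal offline-generation sketch with vLLM's Python API might look like the following (the model name and sampling settings are placeholders, not taken from this page):

```python
# Minimal sketch of offline inference with vLLM (assumes `pip install vllm`).
from vllm import LLM, SamplingParams

prompts = ["Explain memory-efficient LLM serving in one sentence."]
sampling_params = SamplingParams(temperature=0.8, max_tokens=64)

# LLM() loads the model and manages GPU memory for the KV cache.
llm = LLM(model="facebook/opt-125m")  # placeholder model

# generate() batches the prompts for high-throughput decoding.
for output in llm.generate(prompts, sampling_params):
    print(output.outputs[0].text)
```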

Rank
165 (increased by 45)
Git Repositories
vllm
Started
2023-02-09 (512 days ago)
Open Core Products
Open Core Company   Product
Anyscale            Anyscale
Neural Magic        nm-vllm
Tembo               Tembo Cloud
Tembo               Tembo Self Hosted
GitHub Stars
21,774 (rank #526)
[Chart: Weekly commits since inception, 2023-2024]
[Chart: Weekly contributors since inception, 2023-2024]
Recent Project Activity
Day Span   Commits (rank)    Contributors (rank)
30         324 (#145)        89 (#52)
90         771 (#221)        188 (#49)
365        1,597 (#526)      392 (#65)
1095       1,820 (#1,220)    402 (#251)
All time   1,820             402
Contributing Individuals
Commits in the past 30 and 90 days, and all time
Rank   Contributor   30   90   All
41 ljss 0 0 9
42 alexm-nm 0 5 5
42 Alexei-V-Ivanov-AMD 0 5 5
44 James Whedbee 1 3 5
45 Matt Wong 2 3 4
46 Yineng Zhang 3 3 3
46 Sanger Steel 1 4 4
48 Wen Sun 0 0 7
48 Noam Gat 0 3 5
48 陈序 0 0 7
51 Breno Faria 1 3 4
52 HE, Tao 0 1 6
52 zhaoyang-star 0 1 6
52 Itay Etelis 2 3 3
52 DefTruth 0 4 4
56 zspo 0 2 5
57 zifeitong 1 3 3
57 Divakar Verma 1 3 3
57 Jinzhen Lin 1 3 3
60 bnellnm 1 1 4
Contributing Companies

Add this OSSRank shield to this project's README.md

[![OSSRank](https://shields.io/endpoint?url=https://ossrank.com/shield/4026)](https://ossrank.com/p/4026)
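
The numeric ID in both URLs (4026) appears to be this project's OSSRank identifier, matching the project page at https://ossrank.com/p/4026; presumably substituting another project's ID produces that project's shield.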