OSS Project

vLLM

A high-throughput and memory-efficient inference and serving engine for LLMs
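To give a concrete sense of what the engine does, here is a minimal offline-inference sketch using vLLM's Python API (the model name, prompts, and sampling settings below are illustrative placeholders, not part of the OSSRank data):

# Minimal batch-inference sketch with vLLM; model and sampling values are placeholders.
from vllm import LLM, SamplingParams

prompts = [
    "The capital of France is",
    "High-throughput LLM serving works by",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Load a small model and generate completions for the whole batch in one call.
llm = LLM(model="facebook/opt-125m")
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(f"Prompt: {output.prompt!r}")
    print(f"Completion: {output.outputs[0].text!r}")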

Rank: 81 (increased by 23)

Git Repositories: vllm

Started: 2023-02-09 (653 days ago)
Open Core Products

Open Core Company | Product
Anyscale          | Anyscale
Neural Magic      | nm-vllm
Tembo             | Tembo Cloud
Tembo             | Tembo Self Hosted
GitHub Stars: 30,526 (rank #345)
Weekly commits since inception (chart, 2023-2024)
Weekly contributors since inception (chart, 2023-2024)
Recent Project Activity

Day Span | Commits (rank) | Contributors (rank)
30       | 424 (#104)     | 142 (#28)
90       | 1,096 (#147)   | 274 (#25)
365      | 3,056 (#250)   | 590 (#32)
1095     | 3,550 (#734)   | 672 (#113)
All time | 3,550          | 672
Contributing Individuals (commits in the past 30/90 days and all time)

Rank | Contributor           | 30 | 90 | All
119  | Aurick Qiao           | 0  | 1  | 4
122  | Junichi Sato          | 1  | 1  | 3
122  | Austin Veselka        | 1  | 1  | 3
122  | Massimiliano Pronesti | 0  | 0  | 5
125  | Stas Bekman           | 0  | 2  | 3
126  | Yunmeng               | 1  | 2  | 2
126  | Flávia Béo            | 1  | 2  | 2
126  | Jiangtao Hu           | 1  | 2  | 2
126  | Alan Ji               | 1  | 2  | 2
126  | Aaron Pham            | 1  | 2  | 2
126  | tastelikefeet         | 1  | 2  | 2
126  | Richard Liu           | 1  | 2  | 2
126  | Hollow Man            | 1  | 2  | 2
126  | Megha Agarwal         | 0  | 1  | 4
126  | Elfie Guo             | 1  | 2  | 2
136  | Liangfu Chen          | 0  | 0  | 4
136  | TianYu GUO            | 0  | 0  | 4
136  | Wen Sun               | 0  | 0  | 7
136  | Dylan Hawk            | 0  | 0  | 4
136  | Breno Faria           | 0  | 0  | 4
Contributing Companies

Add this OSSRank shield to this project's README.md

[![OSSRank](https://shields.io/endpoint?url=https://ossrank.com/shield/4026)](https://ossrank.com/p/4026)