OSS Project

vLLM

A high-throughput and memory-efficient inference and serving engine for LLMs
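
Below is a minimal offline-inference sketch using vLLM's Python API, assuming vLLM is installed (e.g. `pip install vllm`); the prompts and model name are arbitrary illustrative choices, not part of this project page.

```python
from vllm import LLM, SamplingParams

# Example prompts and an example model; swap in any model vLLM supports.
prompts = ["Hello, my name is", "The future of AI is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

# Load the model and run batched generation.
llm = LLM(model="facebook/opt-125m")
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```

The same engine can also be run as an OpenAI-compatible API server for online serving.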

Rank
81 (increased by 23)
Git Repositories
vllm
Started
2023-02-09 (656 days ago)
Open Core Products
Open Core Company | Product
Anyscale          | Anyscale
Neural Magic      | nm-vllm
Tembo             | Tembo Cloud
Tembo             | Tembo Self Hosted
GitHub Stars
30,526 (#345)
Weekly commits since inception (chart, 2023–2024)
Weekly contributors since inception (chart, 2023–2024)
Recent Project Activity
Day Span | Commits      | Contributors
30       | 436 (#100)   | 140 (#29)
90       | 1,083 (#144) | 274 (#24)
365      | 3,084 (#245) | 592 (#32)
1095     | 3,579 (#734) | 675 (#112)
All time | 3,579        | 675
Contributing Individuals
Commits past 30 / 90 days and all time
#   | Contributor         | 30 | 90 | All
374 | maor-ps             |  0 |  0 |   1
374 | wenyujin333         |  0 |  0 |   1
374 | Amit Garg           |  0 |  0 |   1
374 | Charles Riggins     |  0 |  0 |   1
374 | sergey-tinkoff      |  0 |  0 |   1
374 | Joshua Rosenkranz   |  0 |  0 |   1
374 | aws-patlange        |  0 |  0 |   1
374 | Woo-Yeon Lee        |  0 |  0 |   1
374 | mcalman             |  0 |  0 |   1
374 | Qubitium-ModelCloud |  0 |  0 |   1
374 | Sirej Dua           |  0 |  0 |   1
374 | jvlunteren          |  0 |  0 |   1
374 | Baoyuan Qi          |  0 |  0 |   1
374 | aniaan              |  0 |  0 |   1
374 | Lim Xiang Yang      |  0 |  0 |   1
374 | pushan              |  0 |  0 |   1
374 | adityagoel14        |  0 |  0 |   1
374 | Fish                |  0 |  0 |   1
374 | Ethan Xu            |  0 |  0 |   1
374 | Wushi Dong          |  0 |  0 |   1
Contributing Companies

Add this OSSRank shield to this project's README.md

[![OSSRank](https://shields.io/endpoint?url=https://ossrank.com/shield/4026)](https://ossrank.com/p/4026)