OSS Project

vLLM

A high-throughput and memory-efficient inference and serving engine for LLMs

Rank: 112 (up 10)
Git Repositories: vllm
Started: 2023-02-09 (604 days ago)
Open Core Products

| Open Core Company | Product |
| --- | --- |
| Anyscale | Anyscale |
| Neural Magic | nm-vllm |
| Tembo | Tembo Cloud |
| Tembo | Tembo Self Hosted |
GitHub Stars: 27,856 (#395)
[Chart: Weekly commits since inception]
[Chart: Weekly contributors since inception]
Recent Project Activity

| Day Span | Commits | Contributors |
| --- | --- | --- |
| 30 | 317 (#159) | 117 (#36) |
| 90 | 1,045 (#151) | 239 (#34) |
| 365 | 2,486 (#318) | 511 (#39) |
| 1095 | 2,877 (#876) | 565 (#144) |
| All time | 2,877 | 565 |
Contributing Individuals
| Rank | Contributor | Commits (30d) | Commits (90d) | Commits (all time) |
| --- | --- | --- | --- | --- |
| 241 | jmkuebler | 0 | 1 | 1 |
| 241 | Peter Salas | 0 | 1 | 1 |
| 241 | Saliya Ekanayake | 0 | 1 | 1 |
| 241 | Wei-Sheng Chin | 0 | 1 | 1 |
| 241 | Philipp Schmid | 0 | 1 | 1 |
| 241 | Pavani Majety | 0 | 1 | 1 |
| 241 | Benjamin Muskalla | 0 | 1 | 1 |
| 241 | Jonathan Berkhahn | 0 | 1 | 1 |
| 241 | rscohn2 | 0 | 1 | 1 |
| 241 | Hollow Man | 0 | 1 | 1 |
| 241 | Richard Liu | 0 | 1 | 1 |
| 241 | Pooya Davoodi | 0 | 1 | 1 |
| 241 | daquexian | 0 | 1 | 1 |
| 241 | Maximilien de Bayser | 0 | 1 | 1 |
| 241 | Andrew Wang | 0 | 1 | 1 |
| 241 | Dongmao Zhang | 0 | 1 | 1 |
| 241 | Anthony Platanios | 0 | 1 | 1 |
| 241 | Brian Li | 0 | 1 | 1 |
| 241 | PHILO-HE | 0 | 1 | 1 |
| 241 | Helena Kloosterman | 0 | 1 | 1 |
Contributing Companies

Add this OSSRank shield to this project's README.md

[![OSSRank](https://shields.io/endpoint?url=https://ossrank.com/shield/4026)](https://ossrank.com/p/4026)