OSS Project

vLLM

A high-throughput and memory-efficient inference and serving engine for LLMs
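
For context on what the project provides, here is a minimal offline-inference sketch using vLLM's Python API (`LLM` and `SamplingParams`, per the project's quickstart); the model name and sampling settings below are illustrative choices, not recommendations.

```python
# Minimal vLLM offline-inference sketch (model and parameters are illustrative).
from vllm import LLM, SamplingParams

prompts = [
    "The capital of France is",
    "vLLM is",
]

# Sampling settings are example values only.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Loads the model weights and allocates the KV cache once up front.
llm = LLM(model="facebook/opt-125m")

# generate() batches the prompts and returns one RequestOutput per prompt.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```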

Rank
81 (increased by 24)
Git Repositories
vllm
Started
2023-02-09 (655 days ago)
Open Core Products
Company        Product
Anyscale       Anyscale
Neural Magic   nm-vllm
Tembo          Tembo Cloud
Tembo          Tembo Self Hosted
GitHub Stars
30,526 (ranked #345)
Weekly commits since inception (chart spanning 2023-2024)
Weekly contributors since inception (chart spanning 2023-2024)
Recent Project Activity
Day span    Commits (rank)    Contributors (rank)
30          431 (#101)        139 (#29)
90          1,091 (#146)      272 (#26)
365         3,073 (#248)      590 (#32)
1095        3,567 (#733)      672 (#112)
All time    3,567             672
Contributing Individuals
Commits over the past 30 days, 90 days, and all time
Rank   Contributor          30d   90d   All
373    Peter Salas          0     0     1
373    Elinx Hsi            0     0     1
373    Shukant Pal          0     0     1
373    Peter Götz           0     0     1
373    Dongwoo Kim          0     0     1
373    Or Sharir            0     0     1
373    Saliya Ekanayake     0     0     1
373    Juan Villamizar      0     0     1
373    Adam Boeglin         0     0     1
373    GeauxEric            0     0     1
373    Norman Mu            0     0     1
373    Mahmoud Ashraf       0     0     1
373    jvmncs               0     0     1
373    pandyamarut          0     0     1
373    Ye Cao               0     0     1
373    Benjamin Muskalla    0     0     1
373    kota-iizuka          0     0     1
373    Andrew Wang          0     0     1
373    Adrian Abeyta        0     0     1
373    Jason Cox            0     0     1
Contributing Companies

Add this OSSRank shield to this project's README.md

[![OSSRank](https://shields.io/endpoint?url=https://ossrank.com/shield/4026)](https://ossrank.com/p/4026)