OSS Project

vLLM

A high-throughput and memory-efficient inference and serving engine for LLMs
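
As a quick orientation, vLLM is typically used either as a Python library for offline batch inference or as an OpenAI-compatible API server. The snippet below is a minimal offline-inference sketch, assuming the `vllm` package is installed; the model name and prompts are placeholder examples, not anything prescribed by this page.

```python
# Minimal offline batch inference sketch with vLLM.
# Assumes `pip install vllm`; "facebook/opt-125m" is only an example model.
from vllm import LLM, SamplingParams

prompts = [
    "Hello, my name is",
    "The capital of France is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

# The LLM class loads the model and manages KV-cache memory.
llm = LLM(model="facebook/opt-125m")

# Prompts are batched and scheduled together for throughput.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```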

Rank: 110 (up 15)
Git Repositories: vllm
Started: 2023-02-09 (605 days ago)
Open Core Products
| Company | Product |
| --- | --- |
| Anyscale | Anyscale |
| Neural Magic | nm-vllm |
| Tembo | Tembo Cloud |
| Tembo | Tembo Self Hosted |
GitHub Stars: 27,856 (#395)
Weekly commits since inception (chart, 2023–2024)
Weekly contributors since inception (chart, 2023–2024)
Recent Project Activity
| Day Span | Commits | Contributors |
| --- | --- | --- |
| 30 | 321 (#153) | 121 (#36) |
| 90 | 1,054 (#149) | 243 (#33) |
| 365 | 2,504 (#318) | 516 (#39) |
| 1095 | 2,897 (#871) | 570 (#142) |
| All time | 2,897 | 570 |
Contributing Individuals
| Rank | Contributor | Commits (30 days) | Commits (90 days) | Commits (all time) |
| --- | --- | --- | --- | --- |
| 323 | Ye Cao | 0 | 0 | 1 |
| 323 | kota-iizuka | 0 | 0 | 1 |
| 323 | Adrian Abeyta | 0 | 0 | 1 |
| 323 | Jason Cox | 0 | 0 | 1 |
| 323 | Federico Galatolo | 0 | 0 | 1 |
| 323 | JGSweets | 0 | 0 | 1 |
| 323 | Fluder-Paradyne | 0 | 0 | 1 |
| 323 | Pierre Stock | 0 | 0 | 1 |
| 323 | robcaulk | 0 | 0 | 1 |
| 323 | Light Lin | 0 | 0 | 1 |
| 323 | Alexandre Payot | 0 | 0 | 1 |
| 323 | kczimm | 0 | 0 | 1 |
| 323 | WRH | 0 | 0 | 2 |
| 323 | Daniele | 0 | 0 | 1 |
| 323 | Jun Gao | 0 | 0 | 1 |
| 323 | Jong-hun Shin | 0 | 0 | 1 |
| 323 | JohnSaxon | 0 | 0 | 1 |
| 323 | Lu Wang | 0 | 0 | 1 |
| 323 | akhoroshev | 0 | 0 | 1 |
| 323 | Bruce Fontaine | 0 | 0 | 1 |
Contributing Companies

Add this OSSRank shield to this project's README.md:

[![OSSRank](https://shields.io/endpoint?url=https://ossrank.com/shield/4026)](https://ossrank.com/p/4026)