OSS Project

vLLM

A high-throughput and memory-efficient inference and serving engine for LLMs
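For orientation, a minimal offline-inference sketch using vLLM's Python API is shown below; the prompt and model name (facebook/opt-125m) are illustrative choices, not part of this project page.

```python
# Minimal vLLM offline-inference sketch (model and prompt are illustrative).
from vllm import LLM, SamplingParams

# Load a small model; vLLM manages KV-cache memory efficiently via PagedAttention.
llm = LLM(model="facebook/opt-125m")

params = SamplingParams(temperature=0.8, max_tokens=32)
outputs = llm.generate(["The capital of France is"], params)

for out in outputs:
    print(out.outputs[0].text)
```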

Rank
112 (increased by 9)
Git Repositories
vllm
Started
2023-02-09 (604 days ago)
Open Core Products
Open Core Company | Product
Anyscale | Anyscale
Neural Magic | nm-vllm
Tembo | Tembo Cloud
Tembo | Tembo Self Hosted
GitHub Stars
27,440 (ranked #401)
Weekly commits since inception (chart)
Weekly contributors since inception (chart)
Recent Project Activity
Day Span | Commits (rank) | Contributors (rank)
30 | 317 (#158) | 117 (#37)
90 | 1,045 (#150) | 239 (#34)
365 | 2,486 (#319) | 511 (#39)
1095 | 2,877 (#876) | 565 (#144)
All time | 2,877 | 565
Contributing Individuals
Commits past X days
Rank | Contributor | 30 | 90 | All
120 | Jungho Christopher Cho | 0 | 2 | 2
120 | Mahesh Keralapura | 0 | 2 | 2
120 | Bongwon Jang | 0 | 2 | 2
120 | Yihuan Bu | 0 | 2 | 2
120 | Zach Zheng | 0 | 2 | 2
120 | Harsha vardhan manoj Bikki | 0 | 2 | 2
120 | fzyzcjy | 0 | 2 | 2
120 | Joe | 0 | 2 | 2
120 | jon-chuang | 0 | 2 | 2
120 | Haichuan | 0 | 2 | 2
120 | Abhinav Goyal | 0 | 2 | 2
132 | Michal Moskal | 0 | 0 | 3
132 | Yineng Zhang | 0 | 0 | 3
132 | Jinzhen Lin | 0 | 0 | 3
132 | Qing | 0 | 0 | 5
132 | Hanzhi Zhou | 0 | 0 | 3
132 | Casper Bøgeskov Hansen | 0 | 0 | 3
132 | Terry | 0 | 0 | 3
132 | kliuae | 0 | 0 | 3
132 | Ricardo Lu | 0 | 0 | 5
Contributing Companies

Add this OSSRank shield to this project's README.md

[![OSSRank](https://shields.io/endpoint?url=https://ossrank.com/shield/4026)](https://ossrank.com/p/4026)