OSS Project

vLLM

A high-throughput and memory-efficient inference and serving engine for LLMs
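
As a quick illustration of what the project provides, here is a minimal offline-inference sketch using vLLM's Python API (the model name and sampling parameters are illustrative assumptions, not taken from this page):

from vllm import LLM, SamplingParams

# Load a model into the engine (example model ID, chosen arbitrarily).
llm = LLM(model="facebook/opt-125m")

# Sampling settings for generation.
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Generate completions for a batch of prompts in one call;
# vLLM schedules and batches requests for high throughput.
outputs = llm.generate(["Hello, my name is", "The capital of France is"], params)

for out in outputs:
    print(out.outputs[0].text)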

Rank
85 (up 20)
Git Repositories
vllm
Started
2023-02-09 (653 days ago)
Open Core Products
Company        Product
Anyscale       Anyscale
Neural Magic   nm-vllm
Tembo          Tembo Cloud
Tembo          Tembo Self Hosted
GitHub Stars
30,526 (ranked #345)
[Chart: weekly commits since inception, 2023–2024]
[Chart: weekly contributors since inception, 2023–2024]
Recent Project Activity
Day Span    Commits (rank)    Contributors (rank)
30          422 (#107)        138 (#32)
90          1,080 (#148)      268 (#26)
365         3,036 (#250)      585 (#32)
1095        3,530 (#738)      667 (#114)
All time    3,550             672
Contributing Individuals
Commits in the past 30 / 90 days and all time
Rank  Contributor  30  90  All
41 Rafael Vasquez 4 7 9
42 Varun Sundar Rabindranath 0 4 14
42 dtrifiro 0 7 12
42 MengqingCao 6 7 7
45 Mor Zusman 2 5 11
46 yan ma 5 7 7
47 ElizaWszola 2 6 9
47 Ronen Schaffer (IBM) 0 1 14
49 alexeykondrat 0 8 9
50 andoorve 1 2 12
51 Alexei-V-Ivanov-AMD 2 2 11
52 Peter Salas 2 6 8
53 Harry Mellor 1 2 11
54 Guillaume Calmettes 5 5 5
54 tomeras91 1 4 9
56 Jee Li 0 0 12
57 rasmith 2 6 6
57 Chen Zhang 0 7 7
59 Fu Jie 1 1 10
60 Allen.Dou 0 0 11
Contributing Companies

Add this OSSRank shield to this project's README.md

[![OSSRank](https://shields.io/endpoint?url=https://ossrank.com/shield/4026)](https://ossrank.com/p/4026)