Metrics
Affected Vendors & Products
| Source | ID | Title |
|---|---|---|
| Github GHSA | GHSA-2pc9-4j83-qjmr | vLLM affected by RCE via auto_map dynamic module loading during model initialization |
Solution
No solution given by the vendor.
Workaround
No workaround given by the vendor.
Fri, 23 Jan 2026 16:45:00 +0000
| Type | Values Removed | Values Added |
|---|---|---|
| First Time appeared | Vllm-project <br> Vllm-project vllm | |
| Vendors & Products | Vllm-project <br> Vllm-project vllm | |
Thu, 22 Jan 2026 23:00:00 +0000
| Type | Values Removed | Values Added |
|---|---|---|
| Metrics | ssvc | |
Thu, 22 Jan 2026 12:15:00 +0000
| Type | Values Removed | Values Added |
|---|---|---|
| References | | |
| Metrics | threat_severity | threat_severity |
Wed, 21 Jan 2026 21:30:00 +0000
| Type | Values Removed | Values Added |
|---|---|---|
| Description | vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.14.0, vLLM loads Hugging Face `auto_map` dynamic modules during model resolution without gating on `trust_remote_code`, allowing attacker-controlled Python code in a model repo/path to execute at server startup. An attacker who can influence the model repo/path (local directory or remote Hugging Face repo) can achieve arbitrary code execution on the vLLM host during model load. This happens before any request handling and does not require API access. Version 0.14.0 fixes the issue. | |
| Title | vLLM affected by RCE via auto_map dynamic module loading during model initialization | |
| Weaknesses | CWE-94 | |
| References | | |
| Metrics | cvssV3_1 | |
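The description above says the flaw is that vLLM honored a model's `auto_map` entry (which names custom Python classes shipped inside the repo) without first checking `trust_remote_code`. A minimal sketch of the gating check that closes this class of bug is below; `resolve_model_config` and `malicious_module.Model` are illustrative names, not vLLM's actual internals.

```python
import json
import os
import tempfile

def resolve_model_config(model_dir: str, trust_remote_code: bool = False) -> dict:
    """Load a Hugging Face-style config.json, refusing to honor an `auto_map`
    entry (which points at arbitrary Python inside the repo) unless the
    caller explicitly opted in with trust_remote_code=True."""
    with open(os.path.join(model_dir, "config.json")) as f:
        config = json.load(f)
    if "auto_map" in config and not trust_remote_code:
        raise ValueError(
            "config.json declares auto_map (custom model code); "
            "refusing to import it without trust_remote_code=True"
        )
    return config

# Demo: a repo whose config.json requests custom code is rejected by default.
with tempfile.TemporaryDirectory() as repo:
    with open(os.path.join(repo, "config.json"), "w") as f:
        json.dump({"model_type": "llama",
                   "auto_map": {"AutoModel": "malicious_module.Model"}}, f)
    try:
        resolve_model_config(repo)  # default: untrusted, so this must fail
        blocked = False
    except ValueError:
        blocked = True
    print("blocked:", blocked)
```

The vulnerable versions effectively skipped the `trust_remote_code` check during model resolution, so the custom module was imported (and its top-level code executed) at server startup; the fix in 0.14.0 restores that gate.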
Status: PUBLISHED
Assigner: GitHub_M
Published:
Updated: 2026-01-22T16:50:33.696Z
Reserved: 2026-01-09T22:50:10.288Z
Link: CVE-2026-22807
Updated: 2026-01-22T15:11:02.864Z
Status : Awaiting Analysis
Published: 2026-01-21T22:15:49.077
Modified: 2026-01-26T15:04:59.737
Link: CVE-2026-22807
OpenCVE Enrichment
Updated: 2026-01-22T10:08:43Z
Github GHSA