Two model implementation files hardcode trust_remote_code=True when loading sub-components, bypassing the user's explicit --trust-remote-code=False security opt-out. This enables remote code execution via malicious model
repositories even when the user has explicitly disabled remote code trust.
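To see why the flag matters, here is a minimal sketch of the gate that trust_remote_code controls. This is not transformers' real loader, only an illustration (the file name modeling_custom.py and the class name CustomModel are made up): when the flag is True, Python code shipped inside the model repository is executed in-process with full interpreter privileges.

```python
def load_model_class(repo_files: dict, trust_remote_code: bool):
    """Return a model class, executing repo-supplied code only on opt-in."""
    custom_code = repo_files.get("modeling_custom.py")
    if custom_code is None:
        return object  # built-in architecture, nothing dynamic to run
    if not trust_remote_code:
        # The guarantee --trust-remote-code=False is supposed to provide:
        # repository code never runs without an explicit opt-in.
        raise ValueError("model requires remote code; user has not opted in")
    namespace = {}
    exec(custom_code, namespace)  # arbitrary attacker code runs here
    return namespace["CustomModel"]

# A repository whose "model code" could just as well call os.system(...).
repo = {"modeling_custom.py": "class CustomModel:\n    loaded = True"}
```

Hardcoding trust_remote_code=True at a call site skips this gate entirely, regardless of what the user configured.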
Affected files (latest main branch):
vllm/model_executor/models/nemotron_vl.py:430
    vision_model = AutoModel.from_config(config.vision_config, trust_remote_code=True)

    cached_get_image_processor(self.ctx.model_config.model, trust_remote_code=True)
Both pass a hardcoded trust_remote_code=True to HuggingFace API calls, overriding the user's global --trust-remote-code=False setting.
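To make the override concrete, here is a self-contained sketch of the buggy call pattern. The loader below is a stand-in, not the real transformers API; it only records what it receives, showing that the value the user chose never reaches the call:

```python
# Stand-in for an AutoModel-style loader; records what it actually receives.
received = []

def fake_from_config(config, *, trust_remote_code: bool):
    received.append(trust_remote_code)

def load_vision_tower(user_trust_remote_code: bool):
    # Buggy call site, mirroring the affected files: the literal True
    # silently overrides whatever the user configured globally.
    fake_from_config({"model_type": "vision"}, trust_remote_code=True)

load_vision_tower(user_trust_remote_code=False)  # user opted out...
assert received == [True]                        # ...but True was sent anyway
```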
Impact: Remote code execution. An attacker can craft a malicious model repository that executes arbitrary Python code when it is loaded by vLLM, even when the user has explicitly set --trust-remote-code=False. This undermines the security guarantee that trust_remote_code=False is intended to provide.
Remediation: Replace the hardcoded trust_remote_code=True with the user-configured value (self.config.model_config.trust_remote_code) in both files, and raise a clear error if a model component requires remote code but the user has not opted in.
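A minimal sketch of that remediation as a helper (the function name and error text are illustrative, not vLLM's actual API): it propagates the user's global choice instead of a literal, and fails loudly when the component cannot be loaded without remote code.

```python
def resolve_trust_remote_code(user_opt_in: bool,
                              component_needs_remote_code: bool) -> bool:
    """Value to pass to HuggingFace loaders for a model sub-component.

    Forwards the user's global --trust-remote-code setting rather than a
    hardcoded True, raising instead of silently escalating trust.
    """
    if component_needs_remote_code and not user_opt_in:
        raise RuntimeError(
            "This model component requires executing code from its "
            "repository, but --trust-remote-code was not set; re-run "
            "with --trust-remote-code to opt in explicitly."
        )
    return user_opt_in
```

The call sites would then pass resolve_trust_remote_code(...) (or the configured value directly) instead of the literal True.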
Version: 0.18.0
CVSS 3.1 score: 8.8 (CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H)