The Agentic Assistant feature in Langflow executes LLM-generated Python code during its validation phase. Although this phase appears intended to validate generated component code, the implementation reaches dynamic-execution sinks and instantiates the generated class server-side.
In deployments where an attacker can access the Agentic Assistant feature and influence the model output, this can result in arbitrary server-side Python execution.
The Agentic Assistant endpoints are designed to help users generate and validate components for a flow. Users can submit requests to the assistant, which returns candidate component code for further processing.
A reasonable security expectation is that validation should treat model output as untrusted text and perform only static or side-effect-free checks.
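To illustrate that expectation, here is a minimal sketch of what a side-effect-free check could look like: it parses the candidate source with the standard-library `ast` module and flags dynamic-execution primitives without ever running the code. The function name and rule set are illustrative assumptions, not Langflow code.

```python
import ast

def static_validate(source: str) -> list[str]:
    """Statically inspect candidate component code without executing it.

    Illustrative sketch only; the rule set here is an assumption,
    not Langflow's actual validation logic.
    """
    problems = []
    try:
        tree = ast.parse(source)
    except SyntaxError as exc:
        return [f"syntax error: {exc}"]
    for node in ast.walk(tree):
        # Flag dynamic-execution primitives that should never appear
        # in generated component code.
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in {"exec", "eval", "compile", "__import__"}:
                problems.append(f"dynamic execution call: {node.func.id}")
    return problems

print(static_validate("import os\nexec('os.system(\"id\")')"))
```

Because `ast.parse` only builds a syntax tree, no attacker code runs during this kind of check, in contrast to the `exec`-reaching path described below.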
The externally reachable endpoints are:
The request model accepts attacker-influenceable fields such as input_value, flow_id, provider, model_name, session_id, and max_retries:
In the affected code path, Langflow processes model output through the following chain:
/assist
→ execute_flow_with_validation()
→ execute_flow_file()
→ LLM returns component code
→ extract_component_code()
→ validate_component_code()
→ create_class()
→ generated class is instantiated server-side
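The final two steps of the chain are the dangerous part. The sketch below is a stand-in for an `exec`-based class factory, not Langflow's actual `create_class()` implementation: it shows why compiling model output and instantiating the resulting class runs attacker-chosen Python both at module level and in `__init__`.

```python
# Minimal stand-in for an exec-based class factory; NOT Langflow's code.
# exec() runs all module-level statements in the generated source, and
# instantiating the extracted class runs its __init__ body.
def create_class(source: str, class_name: str):
    namespace: dict = {}
    exec(source, namespace)          # module-level attacker code runs here
    return namespace[class_name]

# Example of model output that carries a payload in __init__.
malicious = '''
class GeneratedComponent:
    def __init__(self):
        # Arbitrary Python runs at instantiation time; a real payload
        # could spawn processes or read files instead.
        self.proof = "arbitrary code ran at instantiation"
'''

cls = create_class(malicious, "GeneratedComponent")
instance = cls()                     # instantiation triggers the payload
print(instance.proof)
```

Under this pattern, "validation" and "execution" are the same operation: there is no way to reach the instantiation step without running the generated code.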
Affected version: 1.9.0

CVSS 4.0 score: 9.3
Vector: CVSS:4.0/AV:N/AC:L/AT:N/PR:L/UI:N/VC:H/VI:H/VA:N/SC:H/SI:H/SA:N