"summary": "brace-expansion: Zero-step sequence causes process hang and memory exhaustion",
"details": "### Impact\n\nA brace pattern with a zero step value (e.g., `{1..2..0}`) causes the sequence generation loop to run indefinitely, making the process hang for seconds and allocate heaps of memory.\n\nThe loop in question:\n\nhttps://github.com/juliangruber/brace-expansion/blob/daa71bcb4a30a2df9bcb7f7b8daaf2ab30e5794a/src/index.ts#L184\n\n`test()` is one of\n\nhttps://github.com/juliangruber/brace-expansion/blob/daa71bcb4a30a2df9bcb7f7b8daaf2ab30e5794a/src/index.ts#L107-L113\n\nThe increment is computed as `Math.abs(0) = 0`, so the loop variable never advances. On a test machine, the process hangs for about 3.5 seconds and allocates roughly 1.9 GB of memory before throwing a `RangeError`. Setting max to any value has no effect because the limit is only checked at the output combination step, not during sequence generation.\n\nThis affects any application that passes untrusted strings to expand(), or by error sets a step value of `0`. That includes tools built on minimatch/glob that resolve patterns from CLI arguments or config files. The input needed is just 10 bytes.\n\n### Patches\n\n\nUpgrade to versions\n- 5.0.5+\n\nA step increment of 0 is now sanitized to 1, which matches bash behavior.\n\n### Workarounds\n\nSanitize strings passed to `expand()` to ensure a step value of `0` is not used.",
"summary": "Langflow has Authenticated Code Execution in Agentic Assistant Validation",
"details": "## Description\n\n### 1. Summary\n\nThe Agentic Assistant feature in Langflow executes LLM-generated Python code during its **validation** phase. Although this phase appears intended to validate generated component code, the implementation reaches dynamic execution sinks and instantiates the generated class server-side.\n\nIn deployments where an attacker can access the Agentic Assistant feature and influence the model output, this can result in arbitrary server-side Python execution.\n\n### 2. Description\n\n#### 2.1 Intended Functionality\n\nThe Agentic Assistant endpoints are designed to help users generate and validate components for a flow. Users can submit requests to the assistant, which returns candidate component code for further processing.\n\nA reasonable security expectation is that validation should treat model output as **untrusted text** and perform only static or side-effect-free checks.\n\nThe externally reachable endpoints are:\n\n[https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/api/router.py#L252-L297](https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/api/router.py#L252-L297)\n\nThe request model accepts attacker-influenceable fields such as `input_value`, `flow_id`, `provider`, `model_name`, `session_id`, and `max_retries`:\n\n[https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/api/schemas.py#L20-L31](https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/api/schemas.py#L20-L31)\n\n#### 2.2 Root Cause\n\nIn the affected code path, Langflow processes model output through the following chain:\n\n`/assist`\n→ `execute_flow_with_validation()`\n→ `execute_flow_file()`\n→ LLM returns component code\n→ `extract_component_code()`\n→ `validate_component_code()`\n→ 
`create_class()`\n→ generated class is instantiated\n\nThe assistant service reaches the validation path here:\n\n[https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/services/assistant_service.py#L58-L79](https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/services/assistant_service.py#L58-L79)\n\nThe code extraction step occurs here:\n\n[https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/helpers/code_extraction.py#L11-L53](https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/helpers/code_extraction.py#L11-L53)\n\nThe validation entry point is here:\n\n[https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/helpers/validation.py#L27-L47](https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/helpers/validation.py#L27-L47)\n\nThe issue is that this validation path is not purely static. 
It ultimately invokes `create_class()` in `lfx.custom.validate`, where Python code is dynamically executed via `exec(...)`, including both global-scope preparation and class construction.\n\n[https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/lfx/src/lfx/custom/validate.py#L241-L272](https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/lfx/src/lfx/custom/validate.py#L241-L272)\n\n[https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/lfx/src/lfx/custom/validate.py#L394-L399](https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/lfx/src/lfx/custom/validate.py#L394-L399)\n\n[https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/lfx/src/lfx/custom/validate.py#L441-L443](https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/lfx/src/lfx/custom/validate.py#L441-L443)\n\nAs a result, LLM-generated code is treated as executable Python rather than inert data. This means the “validation” step crosses a trust boundary and becomes an execution sink.\n\nThe streaming path can also reach this sink when the request is classified into the component-generation branch:\n\n[https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/services/assistant_service.py#L142-L156](https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/services/assistant_service.py#L142-L156)\n\n[https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/services/assistant_service.py#L259-L300](https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/services/assistant_service.py#L259-L300)\n\n### 3. Proof of Concept (PoC)\n\n1. 
Send a request to the Agentic Assistant endpoint.\n2. Provide input that causes the model to return malicious component code.\n3. The returned code reaches the validation path.\n4. During validation, the server dynamically executes the generated Python.\n5. Arbitrary server-side code execution occurs.\n\n### 4. Impact\n\n* Attackers who can access the Agentic Assistant feature and influence model output may execute arbitrary Python code on the server.\n* This can lead to:\n\n * OS command execution\n * file read/write\n * credential or secret disclosure\n * full compromise of the Langflow process\n\n### 5. Exploitability Notes\n\nThis issue is most accurately described as an **authenticated or feature-reachable code execution vulnerability**, rather than an unconditional unauthenticated remote attack.\n\nSeverity depends on deployment model:\n\n* In **local-only, single-user development setups**, the issue may be limited to self-exposure by the operator.\n* In **shared, team, or internet-exposed deployments**, it may be exploitable by other users or attackers who can reach the assistant feature.\n\nThe assistant feature depends on an active user context:\n\n[https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/api/utils/core.py#L38](https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/api/utils/core.py#L38)\n\nAuthentication sources include bearer token, cookie, or API 
key:\n\n[https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/services/auth/utils.py#L39-L53](https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/services/auth/utils.py#L39-L53)\n\n[https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/services/auth/utils.py#L156-L163](https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/services/auth/utils.py#L156-L163)\n\nDefault deployment settings may widen exposure, including `AUTO_LOGIN=true` and the `/api/v1/auto_login` endpoint:\n\n[https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/lfx/src/lfx/services/settings/auth.py#L71-L87](https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/lfx/src/lfx/services/settings/auth.py#L71-L87)\n\n[https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/api/v1/login.py#L96-L135](https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/api/v1/login.py#L96-L135)\n\n### 6. Patch Recommendation\n\n* Remove all dynamic execution from the validation path.\n* Ensure validation is strictly static and side-effect-free.\n* Treat all LLM output as untrusted input.\n* If code generation must be supported, require explicit approval and run it in a hardened sandbox isolated from the main server process.\n\nDiscovered by: @kexinoh ([https://github.com/kexinoh](https://github.com/kexinoh), works at Tencent Zhuque Lab)",
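The trust-boundary violation described above can be sketched in a few lines of Python (illustrative only; this is not Langflow's actual code). An `exec()`-based "validation" runs the untrusted code's side effects, while a parse-only static check inspects it without executing anything:

```python
import ast

# Stand-in for LLM-generated component code; os.getcwd() substitutes
# for an arbitrary malicious side effect such as os.system(...).
malicious = "import os\nRESULT = os.getcwd()"

# exec()-based "validation" (the vulnerable pattern): the code RUNS.
ns: dict = {}
exec(malicious, ns)
assert "RESULT" in ns  # the side effect already happened server-side

# Static validation (the recommended pattern): the code is only PARSED.
# ast.parse builds a syntax tree and never evaluates a single statement.
tree = ast.parse(malicious)
assert isinstance(tree, ast.Module)
```

This is why the patch recommendation insists that validation be strictly static and side-effect-free: once `exec()` (or class instantiation) is reachable from model output, validation itself becomes the execution sink.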
"summary": "n8n Vulnerable to LDAP Filter Injection in LDAP Node",
"details": "## Impact\nA flaw in the LDAP node's filter escape logic allowed LDAP metacharacters to pass through unescaped when user-controlled input was interpolated into LDAP search filters. In workflows where external user input is passed via expressions into the LDAP node's search parameters, an attacker could manipulate the constructed filter to retrieve unintended LDAP records or bypass authentication checks implemented in the workflow.\n\nExploitation requires a specific workflow configuration:\n- The LDAP node must be used with user-controlled input passed via expressions (e.g., from a form or webhook).\n\n## Patches\nThe issue has been fixed in n8n versions 1.123.27, 2.13.3, and 2.14.1. Users should upgrade to one of these versions or later to remediate the vulnerability.\n\n## Workarounds\nIf upgrading is not immediately possible, administrators should consider the following temporary mitigations:\n- Limit workflow creation and editing permissions to fully trusted users only.\n- Disable the LDAP node by adding `n8n-nodes-base.ldap` to the `NODES_EXCLUDE` environment variable.\n- Avoid passing unvalidated external user input into LDAP node search parameters via expressions.\n\nThese workarounds do not fully remediate the risk and should only be used as short-term mitigation measures.",