Hello everyone,
The webinar “Closing the Loop Between MBSE and Cybersecurity: A Vulnerability Analysis Viewpoint for Capella” generated many questions.
We would like to thank Jucimar Antunes Cabral and Forough Mokabberi from ReeR Safety for taking the time to provide detailed answers during this presentation: Capella Days 2025 | Learn from Capella ecosystem members
We would also like to thank all the participants for the quality of the discussions and for the relevant questions they asked the speakers.
Below are the answers to the questions that could not be addressed live:
| Full name | Question | Answer |
|---|---|---|
| Hicham Ben youssef | Usually, the SBOM or HBOM is not available during the architecture and system design phases. So how do you account for vulnerabilities without knowing the software or hardware components in the early steps of the traditional V-cycle? (Legacy systems are a different case.) | While it is true that we cannot scan for specific vulnerabilities without an SBOM/HBOM, the early phases of the V-model focus on architectural resilience rather than implementation bugs. In Capella, we use the functional and logical architectures to perform threat modeling (e.g., STRIDE) on data flows and trust boundaries, allowing us to identify design flaws such as unencrypted interfaces or missing authentication gates before technology selection occurs. This enables us to derive strict security requirements that act as constraints for downstream development, effectively defining the security criteria that the future SBOM and HBOM must meet, rather than reacting to them later. |
| Fabien Cavenne | Thank you for the presentation and example. The concepts make good sense. Do you have experience with how it copes under load: many vulnerabilities, many assets, different attack paths? In my experience this is where models have their limits. (Humble question, I don’t know of magic bullets in this field.) | You’re absolutely right: as the number of vulnerabilities, assets, and attack paths increases, scalability becomes a real challenge. In our experience, the key to managing this complexity is prioritization, visual filtering, and separation of concerns across views. While our current prototype does not yet implement these filtered overlays, we are actively working on supporting them. The goal is to let users isolate specific concerns, for example viewing only critical vulnerabilities in a given subsystem or tracing a single exploit path across components, rather than visualizing everything at once. This layered, query-driven approach isn’t a magic bullet, but it helps maintain clarity and keeps the model actionable as systems grow in size and complexity. |
| Hicham Ben youssef | What vulnerabilities can be detected during the system architecture and system design phases that might impact the future system, even before development/implementation and without knowing the specific software components? CVE → SW Component NOT Applicable directly to Functional level components … | You are absolutely correct that we cannot look for CVEs (implementation bugs) at this stage. Instead, we look for CWEs (Common Weakness Enumerations), which are architectural and design flaws. In a Capella model, we can detect vulnerabilities by analyzing the relationships between functions, data, and components. Specifically, we look for: (1) Broken trust boundaries (Spoofing/Elevation of Privilege). Example: a ‘Public’ actor (like a remote diagnostic tool) connected directly to a ‘Critical’ function (like engine control) without an intermediary ‘Authentication’ function or gateway; the design is logically flawed because it trusts an untrusted source. (2) Missing protection of data in transit (Information Disclosure). Example: a functional exchange carries data labeled ‘Sensitive’ (like a PIN) but flows over a physical link defined as ‘Wireless’ or ‘Public Bus’ without a requirement for encryption. (3) Single points of failure and logic loops (Denial of Service). Example: a critical functional chain relies on a single component with no redundant path, or a state machine has an entry condition but no valid exit condition (infinite-loop potential). (4) Insecure state transitions. Example: examining the System State Machine to see whether the system ‘fails open’ (insecure) instead of ‘fails safe’ (secure) when a power-loss event occurs. In short, we are detecting logic errors, not coding errors. |
| Fabien Cavenne | In the example it seems that vulnerabilities are assessed on the Physical viewpoint with level-1 physical products (console, network link, etc.). Can it be applied at a lower level (e.g. software components)? | Great question. Yes, the approach is not limited to level-1 physical components. The viewpoint was designed to be extensible and can be applied at lower abstraction levels, including software components: you can bind vulnerabilities to internal software modules, services, or even functions. For example, a vulnerability in an embedded TLS library can be linked to a specific interface in the logical architecture and traced through to physical deployment. Looking forward, we plan to integrate this capability with the SafeSource Vulnerability Platform, enabling automatic SBOM ingestion and CVE mapping directly onto Capella elements. This would support fine-grained vulnerability tracking even in early design, making continuous risk assessment possible throughout the system lifecycle. |
| Filipe de Paulo Oliveira | The focus on SBOMs and CVEs suggests a strong bottom-up workflow specifically for component vulnerabilities. How does VAV handle top-down architectural flaws? For instance, logical errors or unsafe interactions between secure components, where there is no CVE entry in a database but the system is still vulnerable due to its design. | Excellent and important question. While VAV currently emphasizes bottom-up integration with SBOM/HBOM data for known component-level vulnerabilities, it is also designed to support top-down modeling of architectural weaknesses that are not represented in public vulnerability databases. For example, unsafe interactions (such as a trusted component accepting inputs from an untrusted interface without validation) or missing security boundaries between subsystems can be modeled explicitly using the existing Vulnerability, Exploit Path, and RiskAssessment elements. These are manually instantiated and assessed by the system engineer or security expert. We are also exploring integration with threat modeling outputs (e.g. from the DARC viewpoint or STRIDE), so that design-level weaknesses can be introduced early in the architecture, even in the absence of a CVE. The ultimate goal is to support both top-down and bottom-up reasoning in a unified, model-centric risk management cycle. |
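Several answers above describe rule-based checks over the model, for example flagging ‘Sensitive’ data carried over a ‘Wireless’ or ‘Public Bus’ link without an encryption requirement. As a rough illustration of the idea (not the actual Capella or VAV API; the data classes, labels, and function names below are hypothetical), such a CWE-style design check can be sketched as a simple query over functional exchanges:

```python
# Hypothetical, simplified stand-in for model elements; the real Capella/VAV
# metamodel is richer. Labels like "Sensitive" and "Wireless" mirror the
# examples given in the webinar answers.
from dataclasses import dataclass

@dataclass
class Exchange:
    source: str       # emitting component
    target: str       # receiving component
    data_label: str   # e.g. "Sensitive", "Public"
    link_type: str    # e.g. "Wireless", "Public Bus", "Internal Bus"
    encrypted: bool   # whether an encryption requirement is attached

def find_unprotected_sensitive_flows(exchanges):
    """Flag a design weakness: sensitive data in transit over an
    exposed physical link with no encryption requirement."""
    exposed = {"Wireless", "Public Bus"}
    return [e for e in exchanges
            if e.data_label == "Sensitive"
            and e.link_type in exposed
            and not e.encrypted]

flows = [
    Exchange("HMI", "ECU", "Sensitive", "Wireless", False),      # flagged
    Exchange("HMI", "ECU", "Public", "Wireless", False),         # public data: ok
    Exchange("Sensor", "ECU", "Sensitive", "Internal Bus", False),  # internal: ok
]
findings = find_unprotected_sensitive_flows(flows)
print([f"{e.source}->{e.target}" for e in findings])  # ['HMI->ECU']
```

The same pattern (filter model elements by their security attributes) extends to the other checks mentioned, such as detecting a ‘Public’ actor reaching a ‘Critical’ function with no ‘Authentication’ function on the path.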