I am evaluating and learning the capabilities of the Capella tool and the related Arcadia method. My question is about the possibility of using Capella to model an existing system. For example, a company that has never used MBSE has a system that you have to model; in that case it may not be very helpful to start from the top with operational analysis, as the Arcadia method recommends.
So my question is: are Capella and the Arcadia method suitable for starting from the Physical Architecture and then working up to the Operational Analysis? I know a bottom-up approach is not recommended for many reasons, but I think a system integrator can rely more on this approach than on designing everything from scratch every time. I am just curious about the tool's capabilities.
Thanks in advance for any response.
There is no obligation to proceed in a top-down manner, and in practice other approaches are often used. Actually, this is one of Arcadia/Capella's strengths. For details, see for example this extract from the Arcadia web page:
Adaptation to different Lifecycles
The recommended method described in this document takes best benefit from a top-down approach:
- Starting from operational and system need to define and validate requirements
- Building a “technology neutral” logical architecture dealing with non-functional constraints
- Then specifying technical functions and services of a physical architecture to implement it in the best way
Yet many constraints which need to be taken into account arise from the industrial context:
- Technical or technological limits
- Available technology, COTS
- Existing legacy to be reused
- Product policy imposing the use of given hardware boards, software components…
- Industrial constraints such as available skills, the necessity to sub-contract, and export control…
This is the reason why Arcadia can be applied according to several lifecycles and work-sharing schemes. Great care has been taken in the method, the language, and the Capella workbench to not impose one single engineering path (e.g. top-down) but to be adaptable to many lifecycles: incremental, iterative, top-down, bottom-up, middle-out, etc. The method is inherently iterative.
Examples of iterations or non-linear courses are:
- Need analysis starting from requirements, due to a lack of operational knowledge (a kind of reverse engineering of operational need)
- Requirements analysis anticipating logical or even physical architecture, to check for feasibility by defining/confronting to an early architecture
- Logical architecture anticipating (part of) physical architecture, e.g. to check for performance issues
- Physical architecture adapting to subcontracting constraints, or built from assembling reusable, existing components
- Components contract definition iterating on physical architecture to secure integration and refine contract parameters
For a complete detailed description, see Jean-Luc Voirin’s book Model-based System and Architecture Engineering with the Arcadia Method.
Thanks for your reply. Yes, I know that in theory it can be achieved; my question is whether the Capella tool makes it practically possible. For example, would a reverse engineering that starts from the physical architecture and goes up, with all the inverse transitions and links that you normally build in the top-down approach, be possible? Has anyone tried it?
I am just a beginner with the Capella tool. I did many of the examples provided and now I am trying to experiment with something different to evaluate the tool. I am really a fan of the method and the tool anyway. I hope that clarifies things.
Thanks for your support.
In my experience, starting at the PA can be really valuable, as we often have a product in mind and then need to elicit the requirements afterwards. Or we are reverse-engineering a system that is already deployed and would like to understand how it works first.
There are a number of ways to show traceability in the model when working backwards:
- Use the “Realized Function / Realized Exchange / Realized Component” selection
- Use the breakdown diagrams (functions or components): do the modelling at the layer above, transition the functions/elements down, then trace
- Use the Traceability Wizard to show traceability in the Semantic Browser
- Use requirements allocation via the Requirements Add-In
This is by no means as automated as going top-down, but it is very achievable if you keep the SA and LA reasonably light to begin with.
Thanks for your reply! This was exactly the feedback that I was looking for. I will try the method backwards as you suggested. If anyone else wants to post feedback about working like this, I would be curious to read it. Thanks to all for the support.
@xVanish69 having now had quite a bit of practical experience with Arcadia/Capella, I have come to the conclusion that you rarely work top-to-bottom in a linear fashion. In reality this should not be too surprising; it is the nature of systems engineering in general. When I was first learning the Capella tool I would use the automatic transition function frequently; now I rarely use it. This is partly because, even when starting at the OA perspective (green field), you never really complete it before attempting some aspect of the SA. Once you start thinking about the SA you realize something is missing from the OA, so you iterate. Once these iterations settle down a bit, I tend to create the OA/SF realization links etc. manually, so it does not really matter that I created the SF before the OA. The LA then tends to proceed in a similar fashion: when performing that analysis you realize you missed something at the SA.
Both the Arcadia method and the Capella tool are very flexible in this respect. And you always have the model validation function to make sure you have not missed an allocation somewhere.