Progress Monitoring / Change Management

Hello,

I am wondering how progress monitoring is usually managed with Capella.

Actually, I would like to monitor the progress of some elements between two model releases.

For example, for physical links or physical paths, I would like to be able to determine whether one of these objects has been added, deleted, or modified, and to get precise information (for instance, on a physical path, whether a physical link involvement has been added or deleted). And maybe to compare descriptions, etc…

What I am thinking of doing is:

  • either to use a Progress Monitoring variable, but this has the disadvantage of having to define the progress status variable each time, so you can potentially forget to set the value when a modification happens; also, if we delete an item between two releases, it will not appear in the export of the second release (I am planning to export to Excel with P4C).

  • or to export all the data we need to an Excel export and then compare the two sheets (the one from Release 1 and the one from Release 2). But this has the disadvantage of requiring a scripted tool that can easily become hard to maintain over the life of the project. Or maybe to use a Python script to do it (see the sketch after this list).
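
For the second option, here is a minimal sketch of such a comparison script, assuming both releases have been exported to CSV files with a unique “ID” column plus the attributes you want to track (the file and column names here are hypothetical):

```python
import csv

def load_export(path, key="ID"):
    """Load a CSV export into a dict of rows keyed by element ID."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row[key]: row for row in csv.DictReader(f)}

def compare_releases(old_path, new_path):
    """Report elements added, deleted, or modified between two exports."""
    old, new = load_export(old_path), load_export(new_path)
    added = sorted(new.keys() - old.keys())
    deleted = sorted(old.keys() - new.keys())
    modified = {}
    for eid in old.keys() & new.keys():
        # Collect every attribute whose value changed between the releases.
        changes = {col: (old[eid][col], new[eid][col])
                   for col in old[eid] if old[eid][col] != new[eid].get(col)}
        if changes:
            modified[eid] = changes
    return added, deleted, modified

added, deleted, modified = compare_releases("release1.csv", "release2.csv")
print("Added:  ", added)
print("Deleted:", deleted)
for eid, changes in modified.items():
    for col, (before, after) in changes.items():
        print(f"Modified {eid}.{col}: {before!r} -> {after!r}")
```

Because a deleted element is still present in the Release 1 export, the set difference catches it, which works around the deletion blind spot of the progress-status approach.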

So I wonder whether there are usual and frequent ways, inherent to the tool or not, to monitor the progress of elements (and all the data related to these elements) in the model between two versions, ways that do not have the above-mentioned disadvantages or that at least minimize them?

Thanks,

Edgar


Hi Edgar,
In my view, a Capella model is a representation of a product: it concerns the expected product, viewed at the main level, System Analysis.
SA is the main level because this is where you set the scope of your product and its relation to its context.
The other layers are part of the architecture work to be done during product design, down to EPBS, where you continue to develop along two processes:

  • Continue to break down: you transition your configuration items into new project SAs; you are defining new expected products (the components). This ends when the proportion of requirements that cannot be flowed down to a component is seen as unreasonable: the system view of that component cannot be split any further.

  • Continue the technical design that was already started, to feed back into the physical analysis and populate the designed product.

The purpose of the complete model is to build, for each PBS item, the SA layer, which is part of the input for the designed product (alongside other inputs, such as the DMU allocated volume or the software urbanization). The model should then have a main version level reflecting the SA situation and a secondary one reflecting the downstream situation down to the EPBS layer.

In other words: as part of the expected product data, the model frozen at SA should follow, or have a version level dependent on, the expected product in your PLM data; and, as the beginning of product design, the model continued beyond SA should be linked to the designed product in your PLM data. With respect to PLM, the items in the model then have the same lifecycle as a dimension on a mechanical drawing. The content of a change for the new version should be explained in a description text.
Regards
Thierry P.

This is an interesting challenge with MBSE tools in general. There are many certification processes which require an understanding of what has changed between baselines. Configuration management and control are harder with models because there is so much more data and the relationships are explicit. In a Word document it is easy to show what text has changed, but there is so much hidden semantic information which does change… but is not called out explicitly. In a model, these relationships are explicitly modelled, so the amount of data which changes between baselines increases significantly.

The problem to solve is… how to demonstrate to an authority, design reviewer, etc. what ALL the changes are, in a manner that isn't an overwhelming list of adjustments?

It is a problem that I am also thinking about how to solve, but not one that I have a definitive answer to yet.

In my opinion there is an even more fundamental problem here. If your design is encapsulated in a model, how can that model be approved (i.e. accepted as being correct and complete) when the views are just perspectives of the model and there is so much hidden detail between the views, the modelling elements and the associated metadata, as @JoshWedgwood suggests? A skilled modeller who is proficient in the relevant modelling language can comprehend these semantics – you would assume. But in an environment where technical/design authorities – who are unlikely to have these skills – have dominion over the design, how can you obtain their acceptance?

I suspect the only way to reconcile this, for now, is to consider the model as the intellectual output of skilled engineering practice, which can be used to create artefacts that can be inspected by reviewers without the relevant modelling skills. If the model changes, the transformed artefacts would also change.


I think you are absolutely correct @woodske.

For example, it may be that you issue textual requirements that contain just a subset of the model, representing the operational, functional and non-functional requirements, “one-click” derived from the model, e.g. through Python4Capella or M2Doc.
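
As an illustration, here is a sketch of such a derivation in Python4Capella, based on the patterns in the P4C sample scripts; the project path, output file, and exported attributes are placeholders, so please check the exact calls against the simplified API before relying on them:

```python
# Python4Capella script: export physical links to CSV (run from Capella with P4C).
include('workspace://Python4Capella/simplified_api/capella.py')
if False:  # trick from the P4C samples so IDEs can resolve the names below
    from simplified_api.capella import *

import csv

model = CapellaModel()
model.open('/MyProject/MyProject.aird')  # placeholder project path

# Collect one row per physical link; which attributes to export is up to you.
rows = []
for link in model.get_system_engineering().get_all_contents_by_type(PhysicalLink):
    rows.append({
        "ID": link.get_id(),
        "Name": link.get_name(),
        "Description": link.get_description(),
    })

with open('release2.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.DictWriter(f, fieldnames=["ID", "Name", "Description"])
    writer.writeheader()
    writer.writerows(rows)
```

The same kind of script can feed an M2Doc document instead of a CSV; the point is that the reviewable artefact is regenerated from the model in one click.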

The model is the single source of truth, but the authorities and design reviews see only a subset of this as controlled through selective presentation.


Hello all,

Thank you for your replies. I totally agree with you; I think it is quite key, when we talk about “model based”, to explain that what is truly reviewed is only a subset of the data stored in a single model. But I think that, for successful adoption by all the stakeholders in your project internally, you need to show that you are able to do with the data stored in the model the same basic things you would do if you had not stored them there.

The design is not encapsulated in the model, and that is clearly not the objective. The objective is just to demonstrate that we are able to ensure progress monitoring of a small set of model elements to support the engineers' work in the company. It is not intended to be reviewed by design authorities. Additionally, to avoid having two references for this data subset in different tools, you need to show that you are able to track some basic changes in Capella.
And in a general manner, it is clear that the model remains a tool for the engineering office, and the reviews should be done through documentation that contains diagrams or subsets exported through P4C, for example.

Here we come to the change process in general.
I see two main levels of changes:

  • Inside the engineering office: the whole team has the necessary skills to review documents and the model (e.g. designers and the design leader). The change is reviewed in the model and is assessed on whether this new version (which has a candidate version number) solves the issue raised in the EC Request.

  • Then the engineering team agrees to submit a full version as the new one to be applied. This second submission goes through reviews in which not all attendees are familiar with all the model details. That is why some commentary data is required to present the design set. This process is needed for each “facet” of the expected/designed/manufactured product and concerns DMUs, drawings, code, architecture… as well as the system model. The aim is to ensure that everyone around the table understands the agreement they give to proceed with the EC Order, and all the implications they will have to face.


Thanks for your details. I get your point, but I don't know what EC means?

I get your point on how you would perform the change process review, and I think it is really clear and absolutely correct.
The only thing is that sometimes you can't review the data directly in the model (for different reasons), and you have to export them and highlight in particular what has changed since the last review. And I think the progress status should, in the end, be fine to do this quite accurately.

EC means Engineering Change.
Within a change/creation process, you may want to track the modelling progress:
You would then have the original model (released, or at least a formal version), the new working version, and the expected change to carry out. The progress is then the diff/merge result between the two models versus the expected change list.
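
To make that last step concrete, here is a small sketch in plain Python, with hypothetical inputs, that reconciles the observed differences (however you obtained them, e.g. exported from a Diff/Merge session or computed from two exports) against the expected change list of the EC:

```python
# Hypothetical inputs: element IDs touched in the model vs. in the EC request.
observed_changes = {"PL-001", "PL-007", "PP-003"}  # e.g. from the model diff
expected_changes = {"PL-001", "PP-003", "PP-004"}  # from the EC change list

covered    = observed_changes & expected_changes   # requested and done
unexpected = observed_changes - expected_changes   # done but not requested
missing    = expected_changes - observed_changes   # requested but not done

print("Covered:   ", sorted(covered))
print("Unexpected:", sorted(unexpected))
print("Missing:   ", sorted(missing))
```

An empty “missing” set and an explainable “unexpected” set is then a reasonable completion criterion for the change.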


Ok thanks,
Really clear!