Question on validation rule TJ_G_04

Hi to all,

When validating my model, I get a lot of TJ_G_04 warnings: “FE1 (Functional Exchange) is defined between PF1 and PF2, but there is no exchange defined between the corresponding source and target elements in the previous phase”

For me this is a very common use case: a logical (resp. system) function is refined into several physical (resp. logical) functions, which then communicate to collectively realize this upper-level function. There is then no upper-level FE to be realized, since there is only the one upper-level function in the first place.

Unless I am mistaken, complying with this rule would more or less force one to define “mirror” functional architectures at each level, which is obviously not the point.
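To make the false positive concrete, here is a toy sketch (plain Python, invented names, obviously not Capella’s real implementation) of the logic I understand TJ_G_04 to apply: for each lower-level exchange, look for a counterpart exchange between the realized upper-level functions.

```python
# Toy sketch of a TJ_G_04-style check (plain Python, not Capella's code).
# When two sibling sub-functions realize the SAME upper-level function,
# no upper-level exchange can exist, so the check always fires.

from dataclasses import dataclass

@dataclass(frozen=True)
class Function:
    name: str
    realizes: "Function | None" = None  # upper-level function this one realizes

@dataclass(frozen=True)
class Exchange:
    name: str
    source: Function
    target: Function

def tj_g_04_warnings(lower_exchanges, upper_exchanges):
    """Warn for lower-level exchanges with no upper-level counterpart."""
    warnings = []
    for fe in lower_exchanges:
        src_up, tgt_up = fe.source.realizes, fe.target.realizes
        has_counterpart = any(
            ex.source is src_up and ex.target is tgt_up for ex in upper_exchanges
        )
        if not has_counterpart:
            warnings.append(
                f"{fe.name} is defined between {fe.source.name} and "
                f"{fe.target.name}, but there is no exchange defined between "
                f"the corresponding source and target in the previous phase"
            )
    return warnings

# The use case from the question: one logical function LF1 is refined into
# two physical functions that communicate; there is no upper-level FE
# because there is only the one upper-level function.
lf1 = Function("LF1")
pf1 = Function("PF1", realizes=lf1)
pf2 = Function("PF2", realizes=lf1)
fe1 = Exchange("FE1", source=pf1, target=pf2)

print(tj_g_04_warnings([fe1], upper_exchanges=[]))
# one warning, even though the model is arguably fine
```

As written, the check has no way to whitelist the case where source and target realize the same parent, which is exactly the situation described above.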

Is there something wrong with my approach? Do you see any strong reason to keep this rule enabled and try to comply with it?

Thanks in advance for your insights and experience feedback.

Hello @cedpei,

I’d like to second your request since I’m struggling with the very same validation issue. If anyone can help out with this, I’d also be very grateful.

I agree with you; this rule seems a little too strict, or Capella does not allow enough. The equivalent rule for functions could be correct, because it may be necessary to justify the existence of every physical function by a logical function (I have just discovered that this rule does not exist?). But in the case of functional exchanges, either it should be possible to create a realization link from an FE to a function (see pictures above), or the rule is too strict. That said, it is just a “warning”, so it is not such a big deal (warnings can be ignored).
What is possible to do with Capella:

What could be useful to do in order to have fine-grained traceability (not recommended in most cases):

Hi @SMonier,

Thank you for sharing your workaround. That makes perfect sense with the Transform data function.

I’m dealing with functional decompositions that look like the one shown below:

In the transition from the SA to the LA level, Do something else gets broken down into two separate functions, which definitely belong to Do something else as nested leaf functions and do not belong to Publish results (unlike Transform data in your example, which you traced to two different functions at the LA level):

This gives me TJ_G_04 which is exactly what @cedpei described:

FunctionalExchange 1 (Functional Exchange) is defined between Do something else - step 1 and Do something else - step 2, but there is no exchange defined between the corresponding source and target elements in the previous phase.

In fact, when combining this with exchange items, the Capella validation drives me in a direction where I don’t want to go. As shown in the screenshot above, ExchangeItem 1 gets passed through the chain of functions. Because of DCOM_13 (same exchange items on function ports and exchanges), I have to assign ExchangeItem 1 to the ports outFunctionalExchange 1 and inFunctionalExchange 1. So far, that’s fine.

However, now I get TC_DF_11:

inFunctionalExchange 1 (Function Port) on Do something else - step 2 (Function) shall realize inInteraction 1 (Function Port) on Do something else (Function)

outFunctionalExchange 1 (Function Port) on Do something else - step 1 (Function) shall realize outInteraction 2 (Function Port) on Do something else (Function)

Here, I’m not sure whether this makes sense, since the internal ports of the leaf functions do not necessarily realize the external ports of the parent function.

When making the example a bit more complex by adding another purely internal function, Do something else - internal, in order to allocate the sub-functions to different logical entities, I really disagree with what the validation tells me to do:

This gives me TC_DF_11:

outFE 2 (Function Port) on Do something else - step 1 (Function) shall realize outInteraction 2 (Function Port) on Do something else (Function)

The port outFE 2 definitely does not realize the port outInteraction 2. Is this something you also see, and have you found a solution? I’d be OK with suppressing the validation check for this specific port if that were supported, but I’m not happy with globally disabling the complete check TC_DF_11 because of this one specific port.
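As far as I know, Capella only lets you enable or disable whole rules in the preferences, not waive a single finding. Conceptually, what I would want is a post-filter over the validation results against an explicit, reviewed waiver list. A toy sketch (plain Python, hypothetical names, not a Capella API):

```python
# Toy sketch of per-element waivers (plain Python, hypothetical names,
# NOT a Capella API): run a check, then drop findings whose
# (rule id, element) pair is on an explicit, documented waiver list.

WAIVERS = {
    # (rule id, element the finding is about): reason for waiving it
    ("TC_DF_11", "outFE 2"): "internal port; does not realize outInteraction 2",
}

def filter_findings(findings, waivers=WAIVERS):
    """Keep only findings that are not explicitly waived."""
    kept = []
    for rule_id, element, message in findings:
        if (rule_id, element) in waivers:
            continue  # waived, with a documented justification
        kept.append((rule_id, element, message))
    return kept

findings = [
    ("TC_DF_11", "outFE 2", "shall realize outInteraction 2"),
    ("TC_DF_11", "inFunctionalExchange 1", "shall realize inInteraction 1"),
]
print(filter_findings(findings))
# only the 'inFunctionalExchange 1' finding remains
```

The point of the waiver list being data (rather than a disabled rule) is that each suppression stays visible and justified, instead of silently losing the whole check.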

Does anyone have an idea how to deal with that?

Thank you,

Hello @SMonier,

Thanks a lot for your answer!

@jdes, thanks for your upvote!

I think I will disable this warning now, because I feel comfortable that there is no use case, at least for me, that could benefit from having this warning enabled.

I do disagree a little, though, when you say that it can be ignored because it is just a warning. Like @jdes mentioned in a previous post, I really want to be able to reach zero warnings. I think the model validation mechanism in Capella has a lot of value, and I would like to be able to rely on it heavily.

As a general approach, I tend to treat warnings as errors, because I think they are a way to enforce good practices, as well as a way to signal to the user that there might be something suspicious that needs to be looked at. For example, a warning could be the sign of a modeling error that is not obvious enough to be directly visible on an existing diagram, and that is not an actual semantic error either, just an inconsistency with the reality I am trying to model.

@jdes, regarding your follow-up question, TC_DF_11 disappears if you define a different exchange item for the inner FEs. Is this where you said you did not want to go?


Just a side comment: I think there are also ways to extend or create your own validation rules in Capella. This may become useful if you use model validation extensively and want to enforce specific modeling rules you have defined in your project.
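For context: Capella’s built-in rules are implemented on the Eclipse/EMF validation framework, so a real custom rule would be Java code in an Eclipse plugin. The snippet below is only a language-neutral sketch (plain Python, hypothetical names) of the shape such a rule takes: a predicate over one model element plus a message, with a runner collecting the findings.

```python
# Conceptual sketch of a project-specific validation rule (plain Python,
# hypothetical names; a real Capella rule is a Java/EMF plugin).
# Each rule is a function: element -> message (or None if the element is OK).

def rule_fe_needs_exchange_items(element):
    """Project rule: every functional exchange must carry an exchange item."""
    if element.get("type") == "FunctionalExchange" and not element.get("items"):
        return f"{element['name']}: functional exchange has no exchange item"
    return None

def run_rules(model, rules):
    """Apply every rule to every element and collect the findings."""
    findings = []
    for element in model:
        for rule in rules:
            msg = rule(element)
            if msg:
                findings.append(msg)
    return findings

model = [
    {"type": "FunctionalExchange", "name": "FE1", "items": []},
    {"type": "FunctionalExchange", "name": "FE2", "items": ["ExchangeItem 1"]},
]
print(run_rules(model, [rule_fe_needs_exchange_items]))
# ['FE1: functional exchange has no exchange item']
```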
Obeo Canada

@cedpei, I fully agree with you when you say “the model validation mechanism in Capella has a lot of value” (I think model validation is not used enough), but the validation rules are neither perfect nor complete. Their relevance depends on the way you model (even though Capella implements Arcadia, there are still a lot of decisions to make), which is why some validation rules are not useful, or do not fully match the needs, in some cases. That is why ignoring some warnings (as long as I know why I am ignoring them) is not a big deal for me, and I do not try to reach zero warnings.

But I can understand that having more than a thousand problems after a model validation disturbs and discourages Capella users. In that case, after disabling the validation rules that are not useful throughout the life of the model, I think it is better to use validation profiles: define specific profiles according to the development phase of the model (e.g. remove the “consistency” rules checking that Capella elements are realized in the layer below when starting a new Arcadia perspective), and keep the full model validation for specific reviews. Of course, as suggested by @StephaneLacrampe, it is good to complement this with your own validation rules and/or queries, according to your way of modeling.

Hello @SMonier,
I fully agree,
thanks again for your time.

Yes, this warning disappears with a different exchange item. As you assumed, that’s exactly where I do not want to go. In some of my modelled functions, the internal exchange item should be identical. If I used this workaround, I’d have to introduce a different exchange item as a pseudo-item just to get rid of the warning. I’ll probably have to ignore this warning here, since it would semantically not make sense to use a different exchange item.

Many thanks also from my side to everyone who contributed with comments. We’ll have to think about a more customized validation procedure with some customized validation rules.

Best regards,
