Why are functions not common across the ARCADIA levels?

Can anyone please explain the rationale in the design of the ARCADIA method for having unique sets of functions at the different levels, rather than functions (and their decomposition) being common across all levels?
I understand that the allocation of functions will be different at each of the levels, with high-level functions allocated at the System Analysis level and low-level leaf functions allocated to individual components at the Physical Architecture level.
However, it makes sense to me that it would be better if functions and their decomposition into leaf functions could be done in one centralised place common to all ARCADIA levels. Then a function at the appropriate level could be allocated in the appropriate place.
At the moment, with each transition between ARCADIA levels, we end up with lots of copies of functions (and realisation links to the level above), which feels messy. It also means there is no single place where you can get the top-to-bottom view of the decomposition.
Was this design choice deliberate and for a particular reason?
Thanks!

Hello,
Each one of the Arcadia perspectives has its own objective and is a particular way to analyze the system of interest and its life-cycle context. On each of these you can perform a functional analysis, which may be materialized by the breakdown of functions (or operational activities).
But each of these functional analyses has a different justification and meaning: e.g. the breakdown of System Functions in the System Analysis perspective is driven by the refinement of what the system is intended to do, while the breakdown of Physical Functions in the Physical Architecture perspective may be driven by the use of a given technology.
If there were a unique functional breakdown across perspectives, it would be quite confusing to identify the rationale behind the breakdown of functions at each node of the tree structure. By having a dedicated functional breakdown, each perspective provides a complete view of a given engineering objective (e.g. eliciting the needs of the system in System Analysis).
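To make this concrete, here is a minimal sketch in plain Python (not the Capella API; the class and attribute names are hypothetical) of how each perspective keeps its own functional breakdown while the realization links still give a top-to-bottom trace:

```python
from dataclasses import dataclass, field

@dataclass
class Function:
    """A function belonging to exactly one ARCADIA perspective (SA, LA or PA)."""
    name: str
    perspective: str                                            # "SA", "LA" or "PA"
    children: list["Function"] = field(default_factory=list)
    realizes: list["Function"] = field(default_factory=list)    # realization links to the perspective above

def realization_chain(fn: Function) -> list[str]:
    """Walk the realization links upwards to get the cross-perspective trace of one function."""
    chain = [f"{fn.perspective}: {fn.name}"]
    while fn.realizes:
        fn = fn.realizes[0]                                     # simplification: follow the first link only
        chain.append(f"{fn.perspective}: {fn.name}")
    return chain

# Each perspective keeps its own breakdown, driven by its own rationale...
sa_fn = Function("Provide thrust", "SA")
la_fn = Function("Control engine thrust", "LA", realizes=[sa_fn])
pa_fn = Function("Drive the FADEC actuator", "PA", realizes=[la_fn])

# ...but the realization links still give the top-to-bottom view the question asks for.
print("  <-  ".join(realization_chain(pa_fn)))
# PA: Drive the FADEC actuator  <-  LA: Control engine thrust  <-  SA: Provide thrust
```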
Hope it helps.

Yes, to add a little explanation:

  • Functions at SA level are “need” functions, understood by customers and users
  • Functions at LA and PA levels are “solution” functions, created by technical people to answer the need
    They may be very different, and we need the Realization links to describe traceability between the two!

I think that the question is about the “proximity” we want to have between the three functional analyses in SA, LA and PA. And remember that we must keep consistency between the layers (realization links)!
I don’t want to perform independent functional analyses in SA, LA and PA and then manage the realization links “manually”; that would not be maintainable… Furthermore, completely independent functional analyses inside each layer lead to difficulties in understanding the links between layers (different names, for instance, without justification)…
I want to keep a complete view of my refinement work from SA to PA (I agree with James, it seems very useful to me).
My proposition today is to:

  • always transition the full functional tree between two layers, without modifying the names;
  • always refine “inside the box”, with a dedicated data flow diagram that explains the refinement;
  • keep consistency with the Capella transition mechanism and its native validation rules.

The advantages (see the sketch after this list):

  • the realization links are maintained automatically;
  • it is very easy to understand the refinement from one layer to the next;
  • you get a view of the complete functional analysis in the PA layer.
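As an illustration of the first two bullets, here is a rough sketch in plain Python (hypothetical names, not the Capella transition engine): the full tree is copied into the next layer with the same names and realization links created on the way, and the refinement is then added inside the copied boxes.

```python
from dataclasses import dataclass, field

@dataclass
class Fn:
    name: str
    children: list["Fn"] = field(default_factory=list)
    realizes: "Fn | None" = None           # realization link towards the layer above

def transition_tree(upper: Fn) -> Fn:
    """Copy a functional tree into the next layer: same names, realization links created on the way."""
    lower = Fn(upper.name, realizes=upper)
    lower.children = [transition_tree(child) for child in upper.children]
    return lower

def dump(fn: Fn, indent: int = 0) -> None:
    trace = f"(realizes '{fn.realizes.name}')" if fn.realizes else "(refinement, added in this layer)"
    print("  " * indent + f"{fn.name} {trace}")
    for child in fn.children:
        dump(child, indent + 1)

# SA tree, transitioned unchanged into LA...
sa = Fn("Manage flight", children=[Fn("Plan route"), Fn("Follow route")])
la = transition_tree(sa)

# ...then refined "inside the box" of the transitioned functions.
la.children[1].children.append(Fn("Compute guidance commands"))

dump(la)
```

With this discipline the realization links come for free from the transition, and reading the lower-layer tree on its own already shows the complete functional analysis, which is the third advantage listed above.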

emmanuel hiron wrote on Mon, 29 June 2020 07:05

- always refine “inside the box”, with a dedicated data flow diagram that explains the refinement …
This is also good for making explicit the functions that are derived:
The so-called “derived functions” (which are proxies for ‘derived requirements’) are not traceable to higher-level functions, because by definition they appear as a result of some design decision or constraint at that level.
For example: enabling functions such as “generate electricity” on an aircraft are not expected at the SA level, since they are not value-related when considering the system as a whole. They would appear instead at PA, or in a sub-model.
If you follow the rule of always tracing refining functions “inside the box” of the functions transitioned from upper perspectives, all functions that are not inside these boxes will be easily recognizable as derived.
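A small sketch of that recognition rule (plain Python again, hypothetical names, not Capella validation code): anything placed outside every “transitioned box”, i.e. with no transitioned function among itself and its ancestors, is a candidate derived function.

```python
from dataclasses import dataclass, field

@dataclass
class Fn:
    name: str
    realized: bool = False                 # True if this function was transitioned from the upper perspective
    children: list["Fn"] = field(default_factory=list)

def derived_functions(fn: Fn, inside_box: bool = False) -> list[Fn]:
    """Report the top of every subtree that sits outside the transitioned boxes."""
    inside = inside_box or fn.realized
    if not inside:
        return [fn]                        # the whole subtree is a derived addition
    found: list[Fn] = []
    for child in fn.children:
        found += derived_functions(child, inside)
    return found

# PA layer: "Transport passengers" was transitioned from above, "Generate electricity" was not.
pa_top_level = [
    Fn("Transport passengers", realized=True,
       children=[Fn("Sustain lift")]),                          # inside the box -> refinement
    Fn("Generate electricity",
       children=[Fn("Convert shaft power to electricity")]),    # added by a design decision at PA
]

for top in pa_top_level:
    for fn in derived_functions(top):
        print("derived:", fn.name)
# derived: Generate electricity
```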

My experience and issues are the following:
The universe described in OA is much bigger than in SA. My system is in fact a subsystem which is unknown to the user, so my OA describes the system the user interacts with, which is 3 PBS levels above mine. What happens when transitioning OA to SA is that the actors are the result of merging several OEs, and their SFs are merges of several activities. This is because, even if I first had to understand how the big system works in order to get the correct list of OCs, I do not need to know the details of the indirect actors’ contributions, nor the actors’ interactions. My point is that, after checking my system structure, I transitioned my first OAS to FS:

  • OEs that are parents of the ones used as system actors are set as system actors

  • OAs realized by a merged SF are transitioned again as a new function (especially when supported by an OE that was also realized by a merged actor)

Here is my question or remark: I have the choice to redo all my merges after each scenario transition, or to build the FS from scratch and link them to the OAS afterwards… Scenarios? Messages? Instances? Executions?…
Capella is pushing me to have the same context at both levels, OA and SA, with the same functions in it. Then what is the use of the OA?
On the other hand, I have seen that, since a few versions ago, transitioning to SA also transitions the EIs. I can understand that in OA the EI description may be less detailed, while it needs to be precise in SA (I mean the EI elements and types), since the purpose is to build the precise system interfaces, so there can be some differences between the two. Still, is there anything to uncheck if I want to keep using the operational EIs?
After this level, I also think that the LF and PF trees should maintain the function tree of the previous level, but I see an exception: I have 2 SFs using the same inputs and producing different outputs. In LA they have been split into several LFs, but one of the children is in fact common to the 2 parent LFs. I have the choice to forget the parents, or to have 2 instances of the same thing allocated to the same LC.

Thanks for any suggestion.
Thierry Poupon

There is something important to note here: transitions are not mandatory.
(and I wonder if in your particular case they are even useful…)

The Operational Analysis should allow you to understand the operational context in which your system of interest will be embedded. Once this is understood and the relevant questions emerge, you may want to work on the System Analysis to focus on your system of interest and its interactions with external entities (Actors). At this stage you change the point of view of your analysis, and the model elements are not necessarily the same ones that you find in the Operational Analysis. That’s why these points of view are called Arcadia perspectives.

To give you an example: in an Operational Analysis you may be interested in analyzing the interactions between the members of a SW development team: what they do, how they work together, what their interactions and interdependencies are… This analysis may lead you to identify the major bottlenecks and some possibilities to improve the way the team works. In System Analysis you may decide that the system of interest you will propose is a project management tool, and you will find some of the entities of the Operational Analysis as Actors (e.g. the SW engineers), but others may emerge from the choice of the type of system of interest (e.g. the person who will host and maintain this project management tool). This is only an example of how changing your point of view (perspective) leads to model elements that are not necessarily “transitions” of the ones from the previous perspective.

Obviously, transitions are powerful tools to optimize the modelling tasks, especially because they create the realization links, but in some cases they may be counterproductive. You may instead try partial transitions: only transition a subset of the elements of the previous perspective, when you see that they are indeed the same.

Hi
I’m with you when you say transitions are not mandatory, and I do partial transitions. But to me, transitioning scenarios from OAS to (S)FS is a means of making sure they stay consistent. This copies the OA universe into the SA one, and I get reports about non-transitioned items because they are already there, and about items that have no link with the concerned scenario.
Because we have reduced the system perimeter, I have divided the OE previously realized by the system. I could not simply revise my SA, because I could not delete the unneeded function involvements in the capabilities. These functions were absent from the scenario, but their children were present (not always in the concerned scenario) and allocated to the actors and the system separately. This led to an error because the now-parent function was not allocated, and it was even causing Capella to terminate. So I had to delete the complete SF tree and the capabilities, and start again.
The OA-to-SA transition is, in my understanding, very different from the others, because not everything should be mandatory in the SA picture. SA and LA should live in the same picture, and PA should have a bigger one, since we introduce implementation constraints, but the whole LA picture should be included. Capella uses the same engine for all three transitions, with the same validation rules; maybe that is the point.

Edit1: In fact, a scenario (FS) is causing an issue which terminates Capella… My plan is again to delete a part of my SA, the function tree and maybe everything, then transition only the scenarios and rework them afterwards. The other plan is to do that work in OA and to transform the exploration universe into a system-focused layer after baselining a design level. Or, the other way round, to transition everything and simplify afterwards. In this last case I think I will remove the parent functions, and maybe just remove the OA after that work, prior to starting the LA.