How to allocate/link Functions to PBS elements

What should be done in order to allocate functions (defined in the System Analysis phase) to a PBS?
To be more specific:

  • During the System Analysis phase we are able to define the main functions of a system (let’s call it Plant), which can be represented in an SFBD diagram.
  • This Plant system has a previously defined and fixed PBS, which could possibly be represented through a CIBD diagram, for instance.
    What we would like to do is to allocate these main functions to the PBS elements.
    We suspect that we may be forced to go through the following engineering phases (Logical, Physical…) in order to create a traceability matrix between Configuration Items (CI) and Physical Artifacts. But even in this case, the link between functions and PBS would be Function --> Logical component --> Physical component --> CI, which is not what we are looking for.

Hi Juan,
What you are trying to do seems like a big shortcut in the Arcadia method. You are right, the theoretical path would be: System Function -> Logical Functions -> Logical Components -> Physical Functions / Physical Components -> Configuration Item.
It is surprising that you want to trace System Functions to PBS elements. What is the objective of your System Need Analysis? Why would you need this direct link?
I could give you a tool workaround to specify this direct allocation, but I would really like to understand your need before helping you twist the method.

Thank you Stephane.
The use case is the following:

  • The PBS already exists, for instance because it was derived from a previous version of the system.
  • The engineering organization depends on the PBS: each component of the PBS has a responsible party, or “architect”.
  • We are doing some reverse engineering in order to allocate the system functions to each of the components of the PBS, so as to establish responsibilities.
    You are right that, ideally, we would like to derive a PBS from system needs, going through the whole process from functions to physical architecture. But in this use case we have, in a way, started from the end: both because it is a complex system whose PBS we will not be able to change, and because we want to establish the architects’ perimeters from the reverse engineering work.

I understand, and having a legacy is a typical use case. Very often, we have an existing physical architecture, and we use the logical architecture to reconcile new needs formalized in SA with the existing design.
I reckon your system functions will be refined into logical and then physical functions, won’t they? When you say you want to “establish responsibilities”, do you mean each contributor will be responsible for refining their own functions in LA and PA, down to the allocation of physical components to configuration items?

Based on the allocation of system functions to PBS nodes, the architect of a given PBS node would indeed be able to refine the allocated functions into sub-system, logical and then physical functions. The responsibility of this architect would be to address the set of main functions allocated to their node.
But in order to work this way in Arcadia/Capella, we would need to make an early map of functions <–> PBS nodes. The object that best represents the PBS nodes is the CI, but maybe it is not the best suited for that.
In fact, isn’t a PBS somehow a “transverse” model that could be linked to all objects in Capella? I mean, in the end, all objects could be related to the first-level PBS that defines the main product/system.

We don’t really see the PBS as a “transverse” model, but we could elaborate on that next time we see each other.
So the link you want to maintain is close to the “responsibility” matter I was mentioning in my previous post.
I think the easiest way would be:
  • to create your PBS in the EPBS level;
  • to use what we call “Generic Traces” between these Configuration Items and anything else you need (here, System Functions). You will find the instructions for creating such links in section 7.3 of the documentation.
These traceability links have no semantics, but they can be made visible through the Semantic Browser.

I will try to add several comments to Stephane’s answer.
First, in Arcadia, functions are allocated to structural elements at the System, Logical and Physical levels, not at the EPBS level.
So perhaps the level you need is Physical Architecture rather than EPBS in your case.
A simple solution would be to do a black-box functional analysis at System level, then transition the functions and actors to LA without changing anything, then transition again from LA to PA.
At PA level, you can then break down your transitioned System Functions and allocate their parts to different Physical Components (representing your CIs).
From what I understand, you would then like to “export” individual PCs to separate new models, where each PC becomes in turn a “System” that can be worked on by its architect.
This export feature is not yet in Capella, but it was mentioned during the Clarity training.

Thank you Pascal and Stephane, I think we will need to try both ideas to go further.
Beyond how we could do it with Capella, I think that we should be able to map functions directly to PBS elements. In classic (i.e. not Model-Based) Systems Engineering, this seems to be what is done: allocating functions (or, more generally, an FBS) to a PBS at early stages of the system life-cycle.
To what extent would we be twisting the Arcadia method?
Another topic that may derive from this discussion (more tool-oriented): wouldn’t it be useful to be able to generate traceability matrices between any objects in Capella? Currently, matrices are generated between objects of adjacent layers. In this particular case, wouldn’t it be useful to generate matrices between, for instance, system functions and EPBS elements, and let the tool work out whether and how elements are linked?
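For what it’s worth, the “let the tool calculate” part is essentially a transitive composition of the existing layer-to-layer allocation links. A minimal sketch in Python (all element names here are invented for illustration, not Capella API objects):

```python
from itertools import product

# Hypothetical layer-to-layer allocation links, as sets of (source, target) pairs.
sf_to_lc = {("Produce steam", "Heat source"), ("Control flow", "Flow controller")}
lc_to_pc = {("Heat source", "Steam generator"), ("Flow controller", "Valve unit")}
pc_to_ci = {("Steam generator", "CI-SG"), ("Valve unit", "CI-VU")}

def compose(links_ab, links_bc):
    """Transitive composition: if a -> b and b -> c, then a -> c."""
    return {(a, c) for (a, b1), (b2, c) in product(links_ab, links_bc) if b1 == b2}

# Derived matrix: System Function -> Configuration Item.
sf_to_ci = compose(compose(sf_to_lc, lc_to_pc), pc_to_ci)
for sf, ci in sorted(sf_to_ci):
    print(f"{sf} -> {ci}")
```

Any two layers connected by a chain of allocation links could be related this way; recording which chain produced each cell of the matrix would answer the “how” part of the question.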

Hi all,
In regular SE, e.g. as defined in ISO 15288 and the INCOSE Handbook, a system design is described with a number of views. ARCADIA uses the RFLP pattern of views; other implementations use Operational, Functional and Physical views, in line with the IEEE 1220 standard. Underneath, the same characteristics are necessary for sufficient representations of the system at a given stage of its lifecycle.
The PBS is the Project Management image of the Physical Architecture and of the industrial BoM.
So, in regular SE, the functions of a system functional architecture are allocated to elements of the physical architecture in order to be further specified, designed and then realized and integrated, with elements contracted to a supplier or placed under the responsibility of a technical lead/architect. Hence the need to allocate elementary functions to system elements when elaborating contractual specifications.
Now, when modeling a system along the 4 architectural views, if it is not useful at some high system level to design the physical decomposition, because the functional and logical views are sufficient for going further, that may be acceptable. And if it is necessary to design down to the physical view, it should be made straightforward to pass through the logical view and benefit from the additional descriptions attached to it.
In any case, the only obligation is to make complex things more understandable and usable by all who need them, as many and as diverse as they are. So, please, no tricky tricks, no scary bypasses, no extra complication for solving complexity! KISS. Keep It Simple.

I mentioned a twist of the Arcadia method because you expressed the need to establish a direct and straightforward link from system functions (need analysis) to PBS elements (organizational elements of the solution), without explaining your initial intent and without saying what you would do in the logical / physical architectures.
As long as you go through the refinement of these system functions and their allocation to elements of the physical architecture, in addition to this early linking to PBS elements, I don’t see any major issue.
I still don’t fully understand how you would exploit this information, and I am still skeptical that it is the best way to manage design responsibility concerns (you could tag the functions, for example). Let’s talk about it in March.
I agree with your idea of generic traceability matrices. Actually, we already have the beginnings of a solution for that, but it is not mature yet.
No tricky trick here, no scary bypass. What I proposed to Juan was a simple link, just what he was asking for in the first place.
I understand the “need” to skip the Arcadia logical architecture in certain cases for the sake of efficiency / simplicity. This is just not supported by Capella today, because implementing this shortcut has never been at the top of our priorities.

Dear all,
The logical architecture is a concept which I have met a couple of times in the engineering community and which has remained unclear to me through the years. If I interpret it correctly, it is mostly useful for the design of software-intensive systems. The complexity of a nuclear power plant comes from (1) the various situations which it is designed to withstand and (2) the tight coupling between its parts (from the reactor core to the electric grid). This is due mainly to physics, not to software. Considered as a whole, a nuclear power plant is not a software-intensive system (this can be debated in detail, since some components are software-intensive, but as a whole, keep in mind that an NPP is a powerful boiler, not a software-intensive system). As a consequence, we do not need the logical architecture stage (Ockham’s razor principle: unnecessary notions should be avoided).
However, we do need to recursively split the plant into elements which each can be specified clearly. This can be done by collecting requirements at NPP level, performing an external functional analysis (black box), then an internal functional analysis (white box) pursuing this functional breakdown to a level of detail such that the NPP system can be broken down into its system elements and the elementary functions can be allocated to system elements. Then we can repeat this process, starting from each of the system elements (subsystems). Many system elements are known in advance because our designs are conservative: nuclear island, turbine island, electrical power supply…
All this is quite classical SE.
In terms of data model, our requirement is definitely basic. All we need is the following (leaving aside everything linked to requirements, as I understand requirements are outside the scope of Arcadia/Capella):
- Objects: Functions, System elements
- Tree structures: Breakdown of functions into more detailed functions (FBS), Breakdown of systems into smaller system elements (SBS or PBS, whichever you choose to call it)
- Links: Allocation of functions to system elements, Flows between functions (functional interfaces), Flows between system elements (physical interfaces). It must be possible to draw such flow diagrams at any level of the FBS and PBS, and these diagrams should remain consistent with each other.
Any other representation would be nice to have but not essential (e.g. activity diagrams, sequence diagrams, …).
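This minimal data model can be captured in a few lines. A rough sketch (all names invented for illustration), which also shows the consistency rule between the two kinds of flow diagrams: a flow between functions allocated to different system elements implies a physical interface between those elements.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A node of a breakdown tree (FBS or PBS alike)."""
    name: str
    children: list["Node"] = field(default_factory=list)

# Breakdown trees (illustrative content only)
fbs = Node("Produce electricity", [Node("Produce steam"), Node("Drive turbine")])
pbs = Node("NPP", [Node("Nuclear Island"), Node("Turbine Island")])

# Allocation of leaf functions to system elements
allocation = {"Produce steam": "Nuclear Island", "Drive turbine": "Turbine Island"}

# Flows between functions (functional interfaces)
functional_flows = {("Produce steam", "Drive turbine")}

# Derived: a functional flow crossing an allocation boundary
# implies a physical interface between the two system elements.
physical_flows = {
    (allocation[src], allocation[dst])
    for src, dst in functional_flows
    if allocation[src] != allocation[dst]
}
print(physical_flows)
```

Deriving the physical flows from the functional ones (rather than drawing them independently) is one way to keep the two diagram families consistent by construction.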
If we are obliged to strictly follow the Arcadia 4-stage process, this could be a problem. It is difficult enough to make people comfortable with functional architecture and physical architecture for a start. I am not eager to urge people to use notions which I do not feel comfortable with, or understand the necessity of, myself. What is more, I think this is bound to be rejected by our engineering population.
As Jean-Pierre wrote, keep it simple. I hope this is possible.

Dear all,
Just some precisions about the reason for, and the role of, the Logical architecture in Arcadia.
In some (mainly software-oriented) modelling approaches, what is called a “logical” architecture is a definition of (software) components and assembly rules, while the “physical” architecture is a description of one or more deployments of instances of these components on execution nodes and communication means. This is not the case in Arcadia: the major reason for the logical architecture (LA) has nothing to do with software-dominant systems. Its purpose is to manage complexity by first defining a conceptual, high-level view of the system architecture and components, without taking care of design details, implementation constraints and technological concerns, provided that these issues do not influence the architectural breakdown at this level of detail.
As an example, for a hybrid car, the LA would show the design decisions regarding the roles of the combustion and electric engines, and whether they are expected to work together or not. For these first-level decisions, there is no need to describe how the transmission is built; this will be detailed in the Physical Architecture. In this way, the major orientations of the architecture can be defined and shared, while hiding part of the final complexity of the design, and without dependency on technologies. As an example, some system models have one single, common logical architecture for several projects or product variations (and several physical architectures). In fact, the logical architecture should have been named “notional architecture” or “conceptual architecture”, but we had to meet existing internal denominations.
The Physical Architecture (PA) describes the final solution, including what has not been taken into account at LA level, ready to sub-contract, purchase or develop, and to integrate. So all configuration items, parts and assemblies, software components if any, hardware devices, etc. should be defined here (or later), but not before.
In the car example, the PA could go as far as listing the major parts of the transmission (if subject to System Engineering / architecture design decisions), in order to fully characterize and subcontract them. As you can imagine, then, for one logical component we can often find several physical components; the relation is one-to-many. Similarly, the functional description of a component is notional in the LA, and detailed enough in the PA to sub-contract it.
So the Logical Architecture is recommended in the Arcadia method because it eases the bridge between Need and Solution, without needing to dive into full details. However, the deployment of the method can be adapted if necessary: we already have examples of operational units skipping the Logical Architecture… and some (but not all) of these units decided later to create an LA a posteriori, because their context made it useful. Capella already allows working without an LA, although it is a bit tricky, and this could be improved. Some of you might be interested in adding this capability.

A good model example, which we are waiting for, will help to understand the difference between Logical and Physical architectures.
I am trying to build such an example using Capella, describing a house architecture. In this model I use LA and PA.
In the Logical architecture, logical components such as the home water supply system components communicate with the washing machine component. In the LA there is no information about how exactly they communicate; there is no information about where the pipes that connect them are installed.
In the Physical architecture I add additional physical components, such as pipes inside walls and floors that connect the water supply system components (installed in one room) to the washing machine (installed in another room). For modeling rooms and constructions (walls, floors) I use Node PCs and deploy physical behaviour components (washing machine, pipes, …) on them.
Please correct me if I use the concepts of LA and PA incorrectly.

Based on your description, I don’t see any misuse. The domain is original!
If ever you would like us to comment on your model, feel free to ask.

Stephane, thanks for the comment about LA and PA usage.
We plan to build our own house, and it’s a good example system for me to learn Capella by practice.

As an example, you could consider, in PA, defining water supply pipes and water disposal pipes as physical links, onto which you would allocate “clean cold water” and “dirty water” behavioural exchanges (describing the pipes’ contents).

I guess I understand Jean-Luc’s explanation.
Sticking to the hybrid car example, this is what I would do:
I would group the combustion engine (CE) and the electric engine (EE) into a super-system named “Mechanical Power Supply System” (MPSS). The MPSS, CE and EE would all be part of the physical architecture, standing at two different levels of the PBS.
Functions (and requirements) would be allocated to the MPSS. The functional analysis of the MPSS would lead to more detailed functions that would be grouped in the FBS of the MPSS. Terminal functions of the MPSS would in turn be allocated to the CE and EE and, very likely, to other sub-systems of the MPSS managing which engine is used and how the EE can be reloaded from the CE in various contexts.
So I interpret the logical architecture as a way to represent different PBS levels. If we had only two or three PBS levels, we could probably use the logical architecture quite efficiently.
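The recursive scheme sketched above (each PBS level gets its own functional analysis, and terminal functions are allocated one level down) could be represented with plain trees and dictionaries. A rough sketch, with all element and function names hypothetical:

```python
# PBS as a parent -> children mapping (illustrative, two levels only).
pbs_children = {
    "Hybrid car": ["MPSS"],
    "MPSS": ["Combustion engine", "Electric engine", "Power manager"],
}

# Terminal functions of each level's functional analysis, allocated to
# the element responsible for refining them at the next level down.
allocated = {
    "MPSS": ["Provide mechanical power"],
    "Combustion engine": ["Burn fuel"],
    "Electric engine": ["Convert electricity to torque"],
    "Power manager": ["Select active engine", "Reload EE battery from CE"],
}

def perimeters(element):
    """Walk the PBS depth-first, collecting each architect's perimeter."""
    out = [(element, allocated.get(element, []))]
    for child in pbs_children.get(element, []):
        out.extend(perimeters(child))
    return out

for elem, funcs in perimeters("Hybrid car"):
    print(elem, "->", funcs)
```

Each (element, functions) pair is one architect’s perimeter; repeating the functional analysis inside an element produces the next level of the mapping.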
Coming back to our nuclear power plant case, we have many more PBS (or physical architecture) levels to manage; just as an example:
Nuclear Power Plant (provides electricity to the grid) / Nuclear Island (provides steam to the turbine island) / Nuclear Heat Production System (provides heat to the secondary circuit) / Reactor Coolant System (carries heat to an exchange point) / Steam Generator System (provides 8000 sq.m of heat exchange surface to the secondary circuit)
And even at this level, the Steam Generator system is still a huge beast performing many functions, composed of hundreds of components, weighing as much as an A380 airliner when empty.
My impression is that we shall have to create several Clarity projects to model an NPP, or even a smaller subset thereof. If I am right, and since I am concerned about losing consistency and traceability in the design, I would suggest thinking about a way to connect two Clarity projects so as to maintain consistency (same functions, same system elements at the junction), if possible.