Feeding a model to an AI

Hello to all,

I am trying to find the best way to feed a Capella model to an AI, so that it can describe the model in text.
For example, I would like the LLM to be able to give a summary of the role of a certain component in the model, or to describe how a certain Capability is realized.

I have tried to use the GPT “vision” models and to feed them pictures of the diagrams, but this only produces very general descriptions, in the style of “this is probably a system structure diagram, it contains blocks which are likely parts and probably interact with each other”.
I am looking for a way to give the AI access to the model, without exporting diagrams as pictures… for example something like an XML export?
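I noticed that the `.capella` file next to the `.aird` is XMI, i.e. plain XML, so in principle even a small script could pull element names out of it for a prompt (just a sketch I tried, nothing official):

```python
import xml.etree.ElementTree as ET

def list_named_elements(capella_file):
    """Collect the 'name' attribute of every element in a Capella model file.

    The .capella file is XMI (plain XML), so the standard library is enough
    to extract names that can be pasted into an LLM prompt.
    """
    tree = ET.parse(capella_file)
    return [el.get("name") for el in tree.iter() if el.get("name") is not None]
```

But is there a better or more official way than raw XML parsing?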

Does anyone know of any way of doing this?
Any other tips on the whole topic?

Many thanks!

I used ChatGPT to decompose a component into sub-components. It worked, but you need to craft a prompt that makes sense. For instance I started with:

Can you list sub components of a <Component name> with their description ?

It would return something like:

- SubComponent name 1: description
- SubComponent name 2: description

I then parsed the HTML list for the component names and descriptions.
But sometimes the prompt was not specific enough, so I started adding the parent component when it existed:

Can you list sub components of a <Component name> from a <Parent component name> with their description ?
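The parsing step is simple enough to sketch on its own; something like this handles replies in the `- Name: description` form (the reply text here is made up):

```python
def parse_component_list(reply):
    """Parse an LLM reply of the form '- Name: description' into a dict.

    Splits on the first ':' only, so descriptions containing colons
    are kept intact.
    """
    components = {}
    for line in reply.splitlines():
        line = line.strip().lstrip("-").strip()
        name, sep, description = line.partition(":")
        if sep:
            components[name.strip()] = description.strip()
    return components

reply = """- Fuel pump: moves fuel from the tank
- Injector: sprays fuel into the cylinder"""
```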

But for the whole model… I don’t really know how you would build the prompt. Maybe you could prompt the component structure, then the functions and so on, and ask ChatGPT to summarize.
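To give the whole structure in one prompt, something like this could flatten the containment tree into an outline (pure sketch with a nested dict standing in for the Capella model, made-up names):

```python
def structure_to_prompt(name, children, indent=0):
    """Render a component containment tree as an indented text outline
    that can be pasted into a summarization prompt."""
    lines = ["  " * indent + "- " + name]
    for child_name, grandchildren in children.items():
        lines.extend(structure_to_prompt(child_name, grandchildren, indent + 1))
    return lines

tree = {"Battery": {}, "Motor": {"Rotor": {}, "Stator": {}}}
outline = "\n".join(structure_to_prompt("Car", tree))
prompt = "Summarize the role of each component:\n" + outline
```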

I found the script; it uses HTML scraping and is only a quick test, but it can give you an idea:

# include needed for the Capella simplified API
include('workspace://Python4Capella/simplified_api/capella.py')
if False:
    from simplified_api.capella import *

from playwright.sync_api import sync_playwright

PLAY = sync_playwright().start()
# persistent context so the ChatGPT login survives between runs
# (replace the user data dir with any writable folder)
BROWSER = PLAY.chromium.launch_persistent_context('/tmp/chatgpt-profile', headless=False)
PAGE = BROWSER.new_page()

class OpenAI():
    def get_input_box(self):
        """Get the prompt textarea of the ChatGPT page"""
        return PAGE.wait_for_selector("textarea")
    def send_message(self, message):
        # type the prompt and submit it with Enter
        box = self.get_input_box()
        box.click()
        box.fill(message)
        box.press("Enter")
        # wait until the answer has finished generating ("Try again" appears)
        PAGE.wait_for_selector("text=Try again", timeout=120000)
        res = self.get_dict()
        return res
    def reset(self):
        PAGE.locator("text=Reset Thread").click()
    def get_dict(self):
        """Parse the latest answer into a {name: description} dict"""
        res = {}
        lis = PAGE.query_selector_all('li')
        for li in lis:
            text = li.inner_text()
            # split on the first ':' only, so descriptions keep their colons
            name, sep, description = text.partition(':')
            if sep:
                res[name.strip()] = description.strip()
        return res
    def break_down(self, component):
        res = []
        container = component.get_container()
        if container is not None and container.get_name() is not None and container.get_name() != 'Structure':
            message = 'Can you list all components that compose the {} of a {} with the description of each component ?'.format(component.get_name(), container.get_name())
        else:
            message = 'Can you list all components that compose a {} with the description of each component ?'.format(component.get_name())
        dictionary = self.send_message(message)
        for key, value in dictionary.items():
            # create a Capella component for each answer entry
            pc = PhysicalComponent()
            pc.set_name(key)
            pc.set_description(value)
            res.append(pc)
        return res

aird_path = '/car/car.aird'

model = CapellaModel()
model.open(aird_path)
se = model.get_system_engineering()
component = se.get_physical_architecture().get_physical_component_pkg().get_owned_physical_components().get(0)

# start a transaction to modify the Capella model
model.start_transaction()
try:
    for child in OpenAI().break_down(component):
        component.get_owned_physical_components().add(child)
    for child_component in component.get_owned_physical_components():
        for child in OpenAI().break_down(child_component):
            child_component.get_owned_physical_components().add(child)
except:
    # if something went wrong we rollback the transaction
    model.rollback_transaction()
    raise
else:
    # if everything is ok we commit the transaction
    model.commit_transaction()

# save the Capella model
model.save()