Is Eclipse Capella going to use Generative AI?

Hi Epictetus,

Here is ChatGPT's answer.

Great topic – there’s a lot of buzz (and genuine progress) around generative-AI-assisted architecture work right now.

Is Capella “compatible” with generative AI today?

Yes, in the sense that nothing in Capella prevents you from using an LLM to read or write Capella models.
Capella models are standard EMF/XMI files, and the underlying APIs are fully open-source. A generative-AI service can:

  • Query a model (via the Java or Sirius/Capella Studio APIs, or by reading the XMI directly) to understand the current architecture.
  • Create or modify elements by calling the same APIs a normal Eclipse plug-in would, or by emitting valid XMI that you then open in Capella.
  • Generate documentation (with M2Doc, passing the input text through an LLM for polishing) or propose design variants/patterns.
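
To make the "reading the XMI directly" option concrete, here is a minimal sketch in Python. Note the caveats: the XMI fragment below uses simplified, illustrative tag and attribute names, not the exact Capella metamodel (real `.capella` / `.melodymodeller` files are far richer), and the prompt-building step is just a placeholder for whatever LLM API you would actually call.

```python
import xml.etree.ElementTree as ET

# Illustrative XMI fragment -- the tags/attributes are simplified
# stand-ins, NOT the real Capella metamodel.
SAMPLE_XMI = """<?xml version="1.0" encoding="UTF-8"?>
<model xmlns:xmi="http://www.omg.org/XMI" xmi:version="2.0">
  <ownedFunctions name="Acquire Sensor Data"/>
  <ownedFunctions name="Filter Signal"/>
  <ownedFunctions name="Publish Telemetry"/>
</model>
"""

def extract_function_names(xmi_text: str) -> list[str]:
    """Collect the 'name' attribute of every ownedFunctions element."""
    root = ET.fromstring(xmi_text)
    return [el.get("name") for el in root.iter("ownedFunctions")]

def build_llm_prompt(names: list[str]) -> str:
    """Turn the extracted model slice into a natural-language prompt
    that could be handed to any LLM service."""
    listing = "\n".join(f"- {n}" for n in names)
    return (
        "The system currently defines these functions:\n"
        f"{listing}\n"
        "Suggest a further functional breakdown."
    )

if __name__ == "__main__":
    names = extract_function_names(SAMPLE_XMI)
    print(build_llm_prompt(names))
```

The reverse direction works the same way in principle: have the LLM emit XMI that validates against the Capella metamodel, then open it in Capella. In practice the official Java APIs are safer for writes, since hand-built XMI is easy to get subtly wrong.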

Because of that openness, several community and research proofs-of-concept already exist. Examples:

  • A demo plug-in that lets ChatGPT propose function breakdowns given a short natural-language prompt.
  • Internal experiments at Obeo where an LLM suggests trace links and auto-completes requirement allocations.
  • EU research projects (e.g., AI4MBSE) that use Capella APIs as the “ground-truth” model repository while AI agents explore design alternatives.

Most of these are still prototypes, but they show that the plumbing is straightforward.

Is anything official “in progress”?

  • Capella Studio 1.5+ already provides a low-code DSL to build custom assistants; adding an LLM behind those assistants is on the short-term roadmap and several partners are piloting it.
  • The SysON project (open-source, Capella-based) is experimenting with an integrated LLM assistant that can accept “modify the model” commands in plain English. Results are promising, especially for repetitive refactoring tasks.
  • The Polarsys working group is collecting use-cases (auto-layout, pattern suggestion, smart validation) for a larger community initiative. If you have a concrete need, please chime in!

Will Capella become obsolete if it doesn’t “bake in” AI soon?

Unlikely, for two reasons:

  1. Method & governance still matter. Generative AI can suggest structures, but safety-critical or mission-critical architectures still need methodical justification, traceability, and consistency checks. Capella/Arcadia provide that backbone; AI will augment it, not replace it.
  2. Open tooling adapts faster. Because Capella is open-source and model-centric, you’re free to integrate whichever LLM or AI workflow you prefer, without waiting for a vendor to expose an API. That tends to out-pace closed ecosystems over time.

In short:

  • Capella is already “AI-ready” from a technical standpoint.
  • Early prototypes show real value, and larger coordinated efforts are under discussion.
  • The community would love more feedback and contributors – if you have specific use-cases or are building an integration, please share!

Hope that helps clarify where things stand.
Best,
Stéphane