The following are the papers to be presented in the Foundations track, the Model-Driven Engineering in Practice (MEIP) track, and the SoSyM journal-first track.
DSLs I
Wednesday Sept 30, 10:45 a.m. Ballroom C
Session chair: Gregor Engels
Systematic Generation of Standard Compliant Tool Support of Diagrammatic Modeling Languages
MEIP
Alexis Fouché, Florian Noyrit, Sébastien Gérard, and Maged Elaasar
(CEA, France; Carleton University, Canada; Crossplatform Software, Canada)
In the Model-Driven Engineering community, the abstract syntax of modeling languages is usually defined and implemented using metamodeling techniques. However, this is not the case for the concrete syntax of graphical modeling languages. Indeed, this concern is mostly specified by informal means. This practice leaves considerable leeway in the implementation and raises several standards compliance issues. Hence, toolsmiths can only rely on their interpretation of the standard and lack a systematic way to build conforming tool support. In this context, a first normative specification of the concrete syntax of UML 2.5 has recently been released using Diagram Definition. In this paper, we propose an approach that uses those formal specifications to systematically generate modeling language tool support that guarantees compliance with the standard notation. We assess the approach on a subset of the UML class diagram implemented within the open-source Papyrus tool.
Article Search
Textual Diagram Layout Language and Visualization Algorithm
Foundations
Balázs Gregorics, Tibor Gregorics, Gábor Ferenc Kovács, András Dobreff, and Gergely Dévai
(Eötvös Loránd University, Hungary)
Graphical diagrams are an excellent source of information for understanding models. On the other hand, editing, storing and versioning models are more efficient in textual representations. In order to combine the advantages of these two representations, diagrams have to be generated from models defined in text. The generated diagrams are usually created by autolayout algorithms based on heuristics. In this paper we argue that automatically laid out diagrams are not ideal. Instead, we propose a textual layout description language that allows users to define the arrangement of those diagram elements they consider important. The paper also presents algorithms that create diagrams according to the layout description and arrange the underspecified elements automatically. The paper reports on the implementation of the proposed layout description language as an embedded language in Java. It is used to generate class and state machine diagrams compatible with the Papyrus UML editor.
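For illustration only, here is a minimal sketch of what an embedded textual layout description in Java could look like; the fluent API and element names below are invented for this example and are not the language presented in the paper. The user constrains only the diagram elements they consider important, and the remaining elements are left to automatic layout.

```java
// Hypothetical sketch of an embedded textual layout description in Java.
// The Layout/Constraint names are illustrative only, not the paper's actual API.
import java.util.ArrayList;
import java.util.List;

public class LayoutSketch {

    enum Direction { LEFT_OF, RIGHT_OF, ABOVE, BELOW }

    record Constraint(String source, Direction dir, String target) { }

    static class Layout {
        private final List<Constraint> constraints = new ArrayList<>();

        Layout place(String source, Direction dir, String target) {
            constraints.add(new Constraint(source, dir, target));
            return this; // fluent style: chain constraints for the elements the user cares about
        }

        List<Constraint> constraints() { return constraints; }
    }

    public static void main(String[] args) {
        // Only the important elements are constrained; everything else would be
        // placed by the automatic layout algorithm described in the abstract.
        Layout layout = new Layout()
                .place("Customer", Direction.LEFT_OF, "Order")
                .place("Order", Direction.ABOVE, "OrderLine");
        layout.constraints().forEach(System.out::println);
    }
}
```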
Preprint Available
Pattern-Based Development of Domain-Specific Modelling Languages
Foundations
Ana Pescador, Antonio Garmendia, Esther Guerra, Jesús Sánchez Cuadrado, and Juan de Lara
(Autonomous University of Madrid, Spain)
Model-Driven Engineering (MDE) promotes the use of models to conduct all phases of software development in an automated way. Models are frequently defined using Domain-Specific Modelling Languages (DSMLs), which often need to be developed for the domain at hand. However, while constructing DSMLs is a recurring activity in MDE, there is scarce support for gathering, reusing and enacting knowledge for their design and implementation. This forces the development of every new DSML to start from scratch. To alleviate this problem, we propose the construction of DSMLs and their modelling environments aided by patterns which gather knowledge of specific domains, design alternatives, concrete syntax, dynamic semantics and functionality for the modelling environment. They may have associated services, realized via components. Our approach is supported by a tool that enables the construction of DSMLs through the application of patterns, and synthesizes a graphical modelling environment according to them.
Preprint Available
Requirements I: The User
Wednesday Sept 30, 10:45 a.m. Richelieu
Session chair: Marsha Chechik
A Controlled Experiment with Usability Inspection Techniques Applied to Use Case Specifications: Comparing the MIT 1 and the UCE Techniques
Foundations
Natasha M. Costa Valentim, Jacilane Rabelo, Ana Carolina Oran, Tayana Conte, and Sabrina Marczak
(Federal University of Amazonas, Brazil; PUCRS, Brazil)
A Use Case Model is composed of use cases that describe software functionalities through Use Case Specifications. The evaluation of the specifications that compose such a model can allow for an early identification of usability defects. We previously proposed MIT 1 (Model Inspection Technique for Usability Evaluation), which aims to support the identification of usability defects through the evaluation of use case specifications. In this paper, we present the evaluation of this technique through a controlled experiment that measured its efficiency, effectiveness, perceived ease of use, and perceived usefulness when compared to the Use Case Evaluation (UCE) method. Our quantitative findings indicate that MIT 1 allows users to find more usability defects in less time than UCE. However, UCE was considered easier to use and more useful than MIT 1, highlighting improvement needs for MIT 1.
Preprint Available
Modeling User Intentions for In-Car Infotainment Systems using Bayesian Networks
MEIP
Daniel Lüddecke, Christoph Seidl, Jens Schneider, and Ina Schaefer
(Volkswagen, Germany; TU Braunschweig, Germany)
To support users in operating a computer system with a varying set of functions, it is fundamental to understand their intentions, e.g., within an in-car infotainment system. Although the development of current in-car infotainment systems is already model-based, explicitly gathering and modeling user intentions is currently not supported. However, manually creating software that predicts user intentions is complex, error-prone and expensive. Model-based development can help in overcoming these issues. In this paper, we present an approach for modeling a user's intention based on Bayesian networks. We support developers of in-car infotainment systems by providing means to model possible intentions of users according to the current situation. We further allow modeling of user preferences and show how the modeled intentions may change during run-time as a result of the user's behavior. We demonstrate feasibility of our approach using an industrial example of an intention-aware in-car infotainment system.
Preprint Available
Integrating Goal-Oriented and Use Case-Based Requirements Engineering: The Missing Link
Foundations
Tuong Huan Nguyen, John Grundy, and Mohamed Almorsy
(Swinburne University of Technology, Australia)
Combining goal-oriented and use case modeling has been shown as an effective method of requirements engineering. To ensure the quality of such modeled artifacts, a conceptual foundation is needed to govern the process of determining which types of artifacts are to be modeled, and how they should be specified and analyzed for 3Cs problems (completeness, consistency and correctness). However, such a foundation is missing in current goal-use case integration approaches. In this paper, we present GUIMeta, a meta-model, to address this problem. GUIMeta consists of three layers. The artifact layer defines the semantics and classification of artifacts and their relationships. The specification layer offers specification rules for each artifact class. The ontology layer allows semantics to be integrated into the entire model. Our evaluation shows promising results regarding the suitability of GUIMeta for modeling goals and use cases.
Article Search
Info
Models@Runtime
Wednesday Sept 30, 10:45 a.m. Joliet
Session chair: Nelly Bencomo
Reusable Event Types for Models at Runtime to Support the Examination of Runtime Phenomena
Foundations
Michael Szvetits and Uwe Zdun
(University of Applied Sciences Wiener Neustadt, Austria; University of Vienna, Austria)
Today's software is getting more and more complex and harder to understand. Models help to organize knowledge and emphasize the structure of a software system at a higher abstraction level. While the usage of model-driven techniques is widely adopted during software construction, it is still an open research question whether models can also be used to make runtime phenomena more comprehensible. It is not obvious which models are suitable for manual analysis and which model elements can be related to what type of runtime events. This paper proposes a collection of runtime event types that can be reused for various systems and meta-models. Based on these event types, information can be derived that helps human observers to assess the current system state. Our approach is applied in a case study and evaluated regarding generalisability and completeness by relating it to two different meta-models.
Preprint Available
Stream my Models: Reactive Peer-to-Peer Distributed Models@run.time
Foundations
Thomas Hartmann, Assaad Moawad, Francois Fouquet, Gregory Nain, Jacques Klein, and Yves Le Traon
(University of Luxembourg, Luxembourg)
The models@run.time paradigm promotes the use of models during the execution of cyber-physical systems to represent their context and to reason about their runtime behaviour. However, current modeling techniques cannot cope at the same time with the large-scale, distributed, and constantly changing nature of these systems. This paper introduces a distributed models@run.time approach, combining ideas from reactive programming, peer-to-peer distribution, and large-scale models@run.time. We define distributed models as observable streams of chunks that are exchanged between nodes in a peer-to-peer manner. A lazy loading strategy provides transparent access to the complete virtual model from every node, although chunks are actually distributed across nodes. Observers and automatic reloading of chunks enable a reactive programming style. We integrated our approach into the Kevoree Modeling Framework and demonstrate that it enables frequently changing, reactive distributed models that can scale to millions of elements and several thousand nodes. Keywords: distributed models@run.time, reactive programming.
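To make the chunk-stream idea more concrete, here is a minimal, hedged sketch of lazily loaded, observable model chunks in plain Java. The class and method names are invented for illustration and are not the Kevoree Modeling Framework API.

```java
// Minimal sketch of lazily loaded, observable model chunks (illustrative only).
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Supplier;

public class ChunkSketch {

    static class Chunk<T> {
        private final Supplier<T> remoteLoader;          // fetches the chunk from a peer on demand
        private final List<Consumer<T>> observers = new ArrayList<>();
        private T value;                                 // cached once loaded

        Chunk(Supplier<T> remoteLoader) { this.remoteLoader = remoteLoader; }

        T get() {                                        // lazy loading: load only on first access
            if (value == null) value = remoteLoader.get();
            return value;
        }

        void observe(Consumer<T> observer) { observers.add(observer); }

        void push(T newValue) {                          // a peer streams an updated chunk
            value = newValue;
            observers.forEach(o -> o.accept(newValue));  // reactive style: notify observers
        }
    }

    public static void main(String[] args) {
        Chunk<String> sensorNode = new Chunk<>(() -> "temperature=20");
        sensorNode.observe(v -> System.out.println("chunk changed: " + v));
        System.out.println(sensorNode.get());            // triggers the lazy load
        sensorNode.push("temperature=21");               // remote update propagates reactively
    }
}
```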
Article Search
Infrastructure as Runtime Models: Towards Model-Driven Resource Management
Foundations
Filip Křikava, Romain Rouvoy, and Lionel Seinturier
(Czech Technical University, Czech Republic; INRIA, France; University of Lille, France)
The importance of continuous delivery and the emergence of tools that allow infrastructure configurations to be treated programmatically have revolutionized the way computing resources and software systems are managed. However, these tools still lack an explicit model representation of the underlying resources, making it difficult to introspect, verify or reconfigure the system in response to external events. In this paper, we outline a novel approach that treats system infrastructure as explicit runtime models. A key benefit of using such a models@run.time representation is that it provides a uniform semantic foundation for resource monitoring and reconfiguration. Adopting models at runtime allows one to integrate different aspects of system management, such as resource monitoring and subsequent verification, into a unified view, which would otherwise have to be done manually and would require the use of different tools. It also simplifies the development of various self-adaptation strategies without requiring engineers and researchers to cope with low-level system complexities.
Preprint Available
Behavioural modeling I
Wednesday Sept 30, 1:30 p.m. Ballroom C
Session chair: Hans Vangheluwe
Incremental Symbolic Execution of Evolving State Machines
Foundations
Amal Khalil and Juergen Dingel
(Queen's University, Canada)
This paper introduces two complementary techniques, memoization-based and dependency-based incremental symbolic execution, that aim to optimize the analysis of state machine models that undergo change. We implement the two proposed techniques on IBM Rhapsody Statecharts and present some evaluation results.
Article Search
State Machine Antipatterns for UML-RT
Foundations
Tuhin Kanti Das and Juergen Dingel
(Queen's University, Canada)
Software development guidelines are a set of rules which can help improve the quality of software. These rules are defined on the basis of experience gained by the software development community over time. Software antipatterns are a powerful and effective form of guidelines used for the identification of bad design choices and development practices that often lead to poor-quality software. This paper introduces a set of seven state machine antipatterns for the model-based development of real-time embedded software systems. Each of these antipatterns is described with a pair of examples: one for the antipattern itself and a second one for the improved, refactored solution. Keywords: UML-RT, Model quality, Antipatterns, State machines, Model-driven development, Model refactoring, Quality attributes, Embedded systems, Real-time systems.
Article Search
Identification of Simulink Model Antipattern Instances using Model Clone Detection
Foundations
Matthew Stephan and James R. Cordy
(Miami University, USA; Queen's University, Canada)
One challenge facing the Model-Driven Engineering community is the need for model quality assurance. Specifically, there should be better facilities for analyzing models automatically. One measure of quality is the presence or absence of good and bad properties, such as patterns and antipatterns, respectively. We elaborate on and validate our earlier idea of detecting patterns in model-based systems using model clone detection by devising a Simulink antipattern instance detector. We chose Simulink because it is prevalent in industry, has mature model clone detection techniques, and interests our industrial partners. We demonstrate our technique using near-miss cross-clone detection to find instances of Simulink antipatterns derived from the literature in four sets of public Simulink projects. We present our detection results, highlight interesting examples, and discuss potential improvements to our approach. We hope this work provides a first step in helping practitioners improve Simulink model quality and further research in the area.
Preprint Available
Quality I
Wednesday Sept 30, 1:30 p.m. Richelieu
Session chair: Benoit Baudry
Formalizing the ISO/IEC/IEEE 29119 Software Testing Standard
MEIP
Shaukat Ali and Tao Yue
(Simula Research Laboratory, Norway; University of Oslo, Norway)
Model-based testing (MBT) provides a systematic and automated way to facilitate rigorous testing of software systems. MBT has been an intense area of research and a large number of MBT techniques have been developed in the literature and in practice. However, all of the techniques have been developed using their own concepts and terminology of MBT, which very often differ from those of other techniques and at times have conflicting semantics. Moreover, while working on MBT projects with our industrial partners in the last several years, we were unable to find a unified way of defining MBT techniques based on standard terminology. To precisely define MBT concepts with the aim of providing a common understanding of MBT terminology across techniques, we formalize a small subset of the recently released ISO/IEC/IEEE 29119 Software Testing Standard as a conceptual model (UML class diagrams) together with OCL constraints. The conceptual model captures all the necessary concepts based on the standard terminology that are mandatory or optional in the context of MBT techniques and can be used to define new MBT tools and techniques. To validate the conceptual model, we instantiated its concepts for various MBT techniques previously developed in the context of our industrial partners. Such instantiation automatically enforces the specified OCL constraints. This type of validation provided us with feedback to further refine the conceptual model. Finally, we also provide our experiences and lessons learnt from such formalization and validation.
Preprint Available
An Automated Model Based Testing Approach for Platform Games
MEIP
Sidra Iftikhar, Muhammad Zohaib Iqbal, Muhammad Uzair Khan, and Wardah Mahmood
(National University of Computer and Emerging Sciences, Pakistan; University of Luxembourg, Luxembourg)
Game development has recently gained a lot of momentum and is now a major software development industry. Platform games are being revived with both their 2D and 3D versions being developed. A major challenge faced by the industry is a lack of automated system-level approaches for game testing. Currently, in most game development organizations, games are tested manually or using semi-automated techniques. Such testing techniques do not scale to the industry requirements, where more systematic and repeatable approaches are required. In this paper we propose a model-based testing approach for automated black-box functional testing of platform games. The paper provides a detailed modeling methodology to support automated system-level game testing. As part of the methodology, we provide guidelines for modeling the platform games for testing using our proposed game test modeling profile. We use domain modeling for representing the game structure and UML state machines for behavioral modeling. We present the details related to automated test case generation, execution, and oracle generation. We demonstrate our model-based testing approach by applying it to two case studies, a widely referenced and open-source implementation of the Mario Brothers game and an industrial case study of an endless runner game. The proposed approach was able to identify major faults in the open-source game implementation. Our results showed that the proposed approach is practical and can be applied successfully to industrial games.
Article Search
Synthesizing Tests for Combinatorial Coverage of Modal Scenario Specifications
Foundations
Valerio Panzica La Manna, Itai Segall, and Joel Greenyer
(Massachusetts Institute of Technology, USA; Bell Labs, Israel; Alcatel-Lucent, Israel; Leibniz Universität Hannover, Germany)
Software-intensive systems often consist of many components that interact to fulfill complex functionality. Testing these systems is vital, preferably by a minimal set of tests that covers all relevant cases. The behavior is typically specified by scenarios that describe what the system may, must, or must not do. When designing tests, as in the design of the system itself, the challenge is to consider interactions of scenarios. When doing this manually, critical interactions are easily overlooked. Inspired by Combinatorial Test Design, which exploits the observation that bugs are typically revealed by the interaction of a small set of parameters, we propose a new test coverage criterion based on scenario interactions. Furthermore, we present a novel technique for automatically synthesizing from Modal Sequence Diagram specifications a minimal set of tests that ensures a maximal coverage of possible t-wise scenario interactions. The technique is evaluated on an example specification from an industrial project.
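To illustrate the coverage criterion, the sketch below simply enumerates the t-wise combinations of scenarios (here t = 2) that a test set would have to cover; the scenario names are invented, and the actual synthesis from Modal Sequence Diagram specifications described in the paper is considerably more involved.

```java
// Illustrative sketch only: enumerate the t-wise scenario combinations that a
// test set would have to cover (here t = 2); scenario names are made up.
import java.util.List;

public class TwiseSketch {

    static void combinations(List<String> scenarios, int t, int start, String prefix) {
        if (t == 0) {                       // one complete t-wise interaction to be covered
            System.out.println(prefix.trim());
            return;
        }
        for (int i = start; i <= scenarios.size() - t; i++) {
            combinations(scenarios, t - 1, i + 1, prefix + " " + scenarios.get(i));
        }
    }

    public static void main(String[] args) {
        List<String> scenarios = List.of("OpenDoor", "StartEngine", "LockDoor", "Alarm");
        combinations(scenarios, 2, 0, "");  // every pair of scenarios is a coverage goal
    }
}
```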
Preprint Available
DSLs II
Wednesday Sept 30, 1:30 p.m. Joliet
Session chair: Øystein Haugen
Enhancing the Communication Value of UML Models with Graphical Layers
Foundations
Yosser El Ahmar, Sébastien Gérard, Cédric Dumoulin, and Xavier Le Pallec
(CEA, France; University of Lille, France)
UML is defined as a visual modeling language for specifying, constructing, and documenting software-intensive systems. In that context, UML diagrams play a central role in the whole software engineering process, starting from early analysis, through implementation, to maintenance. Recent surveys of UML use in industry showed that software practitioners use it on a regular basis, and particularly for communication and as a mental-assist tool. However, they also pointed out the following weaknesses: the lack of context, graphical layout problems, and the language’s inadequacy as a facility for communication between technical teams and their clients. In this paper, we present a general approach that addresses these problems by enhancing the effectiveness of UML models as a communication vehicle. Our approach is based on expressing stakeholder-specific viewpoints through the use of secondary notations. This involves the use of auxiliary visual variables (e.g., color, position, size) that are not formally specified in UML. To that end, we extend the traditional concept of layer found in many graphical editors to UML diagram editors. FlipLayers, an implementation of our approach, takes the form of a plugin for the Papyrus modeling environment. One scenario with several case studies is presented in the paper to demonstrate the benefits of our approach and also to illustrate how to express viewpoints with FlipLayers.
Article Search
SoSPa: A System of Security Design Patterns for Systematically Engineering Secure Systems
Foundations
Phu H. Nguyen, Koen Yskout, Thomas Heyman, Jacques Klein, Riccardo Scandariato, and Yves Le Traon
(University of Luxembourg, Luxembourg; Simula Research Laboratory, Norway; KU Leuven, Belgium; Chalmers University of Technology, Sweden; University of Gothenburg, Sweden)
Model-Driven Security (MDS) for secure systems development still has limitations that hinder its wider application in practice. A recent systematic review of MDS shows that current MDS approaches have not dealt with multiple security concerns systematically. Besides, catalogs of security patterns which can address multiple security concerns have not been applied efficiently. This paper presents an MDS approach based on a unified System of Security design Patterns (SoSPa). In SoSPa, security design patterns are collected and specified as reusable aspect models to form a coherent system that guides developers in systematically addressing multiple security concerns. SoSPa consists of not only interrelated security design patterns but also a refinement process towards their application. We applied SoSPa to design the security of crisis management systems. The result shows that multiple security concerns in the case study have been addressed by systematically integrating different security solutions.
Article Search
Engineering Tagging Languages for DSLs
Foundations
Timo Greifenberg, Markus Look, Sebastian Roidl, and Bernhard Rumpe
(RWTH Aachen University, Germany)
To keep a DSL clean, readable and reusable in different contexts, it is useful to define a separate tagging language. A tag model logically adds information to the tagged DSL model while technically keeping the artifacts separated. Using a generic tagging language leads to promiscuous tag models, whereas defining a target DSL-specific tag language has a high initial overhead. This paper presents a systematic approach to define a DSL-specific tag language and a corresponding schema language, combining the advantages of both worlds: (a) the tag language specifically fits the DSL, (b) the artifacts are kept separate, enabling reuse with different tag decorations, (c) the tag language follows a defined type schema, and (d) systematic derivation considerably reduces the effort necessary to implement the tag language. An example shows that the approach can at least partially be realized by a generator and applied to any kind of DSL.
Preprint Available
SoSyM Journal-First Papers
Wednesday Sept 30, 4:30 p.m. Ballroom C
Session chair: Bernhard Rumpe
Identifying Duplicate Functionality in Textual Use Cases by Aligning Semantic Actions (SoSyM Abstract)
Alejandro Rago, Claudia Marcos, and J. Andrés Diaz-Pace
(UNICEN University, Argentina)
Developing high-quality requirements specifications often demands a thoughtful analysis and an adequate level of expertise from analysts. Although requirements modeling techniques provide mechanisms for abstraction and clarity, fostering the reuse of shared functionality (e.g., via UML relationships for use cases), they are seldom employed in practice. A particular quality problem of textual requirements, such as use cases, is that of having duplicate pieces of functionality scattered across the specifications. Duplicate functionality can sometimes improve readability for end users, but hinders development-related tasks such as effort estimation, feature prioritization and maintenance, among others. Unfortunately, inspecting textual requirements by hand in order to deal with redundant functionality can be an arduous, time-consuming and error-prone activity for analysts. In this context, we introduce a novel approach called ReqAligner that aids analysts in spotting signs of duplication in use cases in an automated fashion. To do so, ReqAligner combines several text processing techniques, such as a use-case-aware classifier and a customized algorithm for sequence alignment. Essentially, the classifier converts the use cases into an abstract representation that consists of sequences of semantic actions, and then these sequences are compared pairwise in order to identify action matches, which become possible duplications. We have applied our technique to five real-world specifications, achieving promising results and identifying many sources of duplication in the use cases.
Preprint Available
Info
Enhanced Graph Rewriting Systems for Complex Software Domains (SoSyM Abstract)
Cédric Eichler, Thierry Monteil, Patricia Stolf, Alfredo Grieco, and Khalil Drira
(CNRS, France; IRIT, France; University of Toulouse, France; Politecnico di Bari, Italy)
Methodologies for correct-by-construction reconfigurations can efficiently solve consistency issues in dynamic software architecture. Graph-based models are appropriate for designing such architectures and methods. At the same time, they may be unfit to characterize a system from a non-functional perspective. This stems from efficiency and applicability limitations in handling time-varying characteristics and their related dependencies. In order to lift these restrictions, an extension to graph rewriting systems is proposed herein. The suitability of this approach, as well as the limitations of currently available ones, are illustrated, analysed and experimentally evaluated with reference to a concrete example. This investigation demonstrates that the conceived solution can: (i) express any kind of algebraic dependencies between evolving requirements and properties; (ii) significantly ameliorate the efficiency and scalability of system modifications with respect to classic methodologies; (iii) provide an efficient access to attribute values; (iv) be fruitfully exploited in software management systems; (v) guarantee theoretical properties of a grammar, like its termination.
Article Search
A Situational Method for Semi-automated Enterprise Architecture Documentation (SoSyM Abstract)
Matthias Farwick, Christian M. Schweda, Ruth Breu, and Inge Hanschke
(University of Innsbruck, Austria; Reutlingen University, Germany; Lean 42, Germany)
Large organizations critically rely on their IT infrastructure. So-called Enterprise Architecture (EA) models are often created to understand how the IT supports the business, and they are used to optimize the IT and align it with the business. However, the models grow very large and are hard to keep up-to-date. Current approaches focus on automated data collection to tackle this problem, which is not feasible in many situations. In this paper we present a semi-automated EA documentation method and tool support that tackles this problem and takes the organizational context into account.
Article Search
Formalizing and Verifying Stochastic System Architectures Using Monterey Phoenix (SoSyM Abstract)
Yang Liu, Mikhail Auguston, Jun Sun, Jin Song Dong, and Tieming Chen
(Nanyang Technological University, Singapore; Naval Postgraduate School, USA; Singapore University of Technology and Design, Singapore; National University of Singapore, Singapore; Zhejiang University of Technology, China)
The analysis of software architecture plays an important role in understanding system structures and facilitates the proper implementation of user requirements. Despite its importance in software engineering practice, the lack of formal description and verification support in this domain hinders the development of quality architectural models. To tackle this problem, in this work, we develop an approach for modeling and verifying software architectures specified using the Monterey Phoenix (MP) architecture description language. MP is capable of modeling system and environment behaviors based on event traces, as well as supporting different architecture composition operations and views. First, we formalize the syntax and operational semantics of MP; therefore, formal verification of MP models is feasible. Second, we extend MP to support shared variables and stochastic characteristics, which not only increases the expressiveness of MP, but also widens the range of properties MP can check, such as quantitative requirements. Third, a dedicated model checker for MP has been implemented, so that automatic verification of MP models is supported. Finally, several experiments are conducted to evaluate the applicability and efficiency of our approach.
Article Search
Requirements II: Compliance and decision-making
Thursday Oct 1, 10:45 a.m. Ballroom C
Session chair: Ana Moreira
Model-Driven Regulatory Compliance: A Case Study of "Know Your Customer" Regulations
MEIP
Sagar Sunkle, Deepali Kholkar, and Vinay Kulkarni
(Tata Consultancy Services, India)
Modern enterprises face an unprecedented regulatory regime. Industry governance, risk, and compliance (GRC) solutions are document-oriented and expert-driven. Formal compliance checking techniques, in contrast, attempt to provide ways for rigorous modeling and analysis of regulatory compliance but miss out on the holistic GRC perspective due to missing integration between a diverse set of (semi-)formal models. We show that, by streamlining regulatory compliance using multiple purposive models of various aspects of regulations, it is possible to leverage both the rigor of formal techniques and the holistic enterprise GRC perspective. Our contributions are twofold. First, we present a model-driven architecture based on a conceptual model of integrated GRC that is capable of addressing key challenges of regulatory compliance. Second, using Know Your Customer regulations in the Indian context as a case study, we demonstrate the utility of this architecture. Initial results with KYC regulations are promising and point to further work in model-driven regulatory compliance.
Article Search
Toward Overcoming Accidental Complexity in Organisational Decision-Making
MEIP
Vinay Kulkarni, Souvik Barat, Tony Clark, and Balbir Barn
(Tata Consultancy Services, India; Middlesex University, UK)
This paper takes a practitioner’s perspective on the problem of organisational decision-making. Industry practice follows a refinement-based iterative method for organisational decision-making. However, existing enterprise modelling tools are not complete with respect to the needs of organisational decision-making. As a result, today's decision makers are forced to use a chain of paradigmatically diverse and non-interoperable tools, with nothing more sophisticated than a documented method holding the tools together. This paper argues the case for a model-based approach and proposes bridge meta-models as a solution for better integrated use of these tools. Validation of the proposed solution using a case study is presented.
Article Search
A Model-Based Framework for Probabilistic Simulation of Legal Policies
Foundations
Ghanem Soltana, Nicolas Sannier, Mehrdad Sabetzadeh, and Lionel C. Briand
(University of Luxembourg, Luxembourg)
Legal policy simulation is an important decision-support tool in domains such as taxation. The primary goal of legal policy simulation is predicting how changes in the law affect measures of interest, e.g., revenue. Currently, legal policies are simulated via a combination of spreadsheets and software code. This poses a validation challenge, both because of complexity and because legal experts lack the expertise to understand software code. A further challenge is that representative data for simulation may be unavailable, thus necessitating a data generator. We develop a framework for legal policy simulation that is aimed at addressing these challenges. The framework uses models for specifying both legal policies and the probabilistic characteristics of the underlying population. We devise an automated algorithm for simulation data generation. We evaluate our framework through a case study on Luxembourg's Tax Law.
Preprint Available
Maintenance and migration of models
Thursday Oct 1, 10:45 a.m. Richelieu
Session chair: Bran Selic
On the Use of UML Documentation in Software Maintenance: Results from a Survey in Industry
Foundations
Ana M. Fernández-Sáez, Danilo Caivano, Marcela Genero, and Michel R. V. Chaudron
(University of Castile–La Mancha, Spain; University of Bari, Italy; Chalmers University of Technology, Sweden; University of Gothenburg, Sweden)
This paper presents the findings of a survey on the use of UML in software maintenance, carried out with 178 professionals working on software maintenance projects in 12 different countries. As part of long-term research we are carrying out to investigate the benefits of using UML in software maintenance, the main objectives of this survey are: 1) to explore whether UML diagrams are being used in software industry maintenance projects; 2) to see what UML diagrams are the most effective for software maintenance; 3) to find out what the perceived benefits of using UML diagrams are; and 4) to contextualize the kind of companies that use UML documentation in software maintenance. Some complementary results based on the way the documentation is used (whether it is UML-based or not) during software maintenance are also presented.
Article Search
Process Mining in Software Systems: Discovering Real-Life Business Transactions and Process Models from Distributed Systems
Foundations
Maikel Leemans and Wil M. P. van der Aalst
(Eindhoven University of Technology, Netherlands)
This paper presents a novel reverse engineering technique for obtaining real-life event logs from distributed systems. This allows us to analyze the operational processes of software systems under real-life conditions, and use process mining techniques to obtain precise and formal models. Hence, the work can be positioned in-between reverse engineering and process mining. We present a formal definition, implementation and an instrumentation strategy based on the joinpoint-pointcut model. Two case studies are used to evaluate our approach. These concrete examples demonstrate the feasibility and usefulness of our approach.
Preprint Available
Performance Prediction upon Toolchain Migration in Model-Based Software
Foundations
Aymen Ketata, Carlos Moreno, Sebastian Fischmeister, Jia Liang, and Krzysztof Czarnecki
(University of Waterloo, Canada)
Changing the development environment can have severe impacts on the system behavior, such as the execution-time performance. Since it can be costly to migrate a software application, engineers would like to predict the performance parameters of the application under the new environment with as little effort as possible. In this paper, we concentrate on model-driven development and provide a methodology to estimate the execution-time performance of application models under different toolchains. Our approach has low cost compared to the migration effort of an entire application. As part of the approach, we provide methods for characterizing model-driven applications, an algorithm for generating application-specific microbenchmarks, and results on using different methods for estimating the performance. In this work, we focus on SCADE as the development toolchain and use a Cruise Control and a Water Level application as case studies to confirm the technical feasibility and viability of our technique.
Preprint Available
Fundamentals I
Thursday Oct 1, 10:45 a.m. Chaudière
Session chair: Jörg Kienzle
A Unifying Approach to Connections for Multi-Level Modeling
Foundations
Colin Atkinson, Ralph Gerbig, and Thomas Kühne
(University of Mannheim, Germany; Victoria University of Wellington, New Zealand)
Capturing relationships between concepts in a domain is as important as capturing the concepts themselves. Modeling languages reflect this by providing connections with rich semantics, such as associations and links, thus providing a key advantage over approaches that support relationships with simple references only. While connections for two-level modeling (e.g. in the UML) have enjoyed a stable design for a considerable time, the same cannot be said for connections in multi-level modeling languages. As interest in multi-level modeling grows, it is important to provide a comprehensive design for connections that not only adheres to multi-level principles such as level-agnosticism and explicit level organization, but also supports deep characterization, i.e., the ability to specify level content beyond one level boundary. In this paper we propose a unifying conceptual model for connections whose expressiveness and scalability does not come at the cost of concept proliferation.
Preprint Available
A-posteriori Typing for Model-Driven Engineering
Foundations
Juan de Lara, Esther Guerra, and Jesús Sánchez Cuadrado
(Autonomous University of Madrid, Spain)
Model-Driven Engineering is founded on the ability to create and process models conformant to a meta-model. Hence, meta-model classes are used in two ways: as templates to create objects, and as classifiers for them. While these two aspects are inherently tied in most meta-modelling approaches, in this paper, we discuss the benefits of their decoupling. Thus, we rely on standard mechanisms for object creation and propose a-posteriori typing as a means to reclassify objects and enable multiple, partial, dynamic typings. This approach enhances flexibility, permitting unanticipated reutilization (as existing model management operations defined for a meta-model can be reused with other models once they get reclassified), as well as model transformation by reclassification. We show the underlying theory behind the introduced concepts, and illustrate its applicability using our MetaDepth meta-modelling tool.
Preprint Available
Enriching Megamodel Management with Collection-Based Operators
Foundations
Rick Salay, Sahar Kokaly, Alessio Di Sandro, and Marsha Chechik
(University of Toronto, Canada; McMaster University, Canada)
Megamodels are often used in MDE to describe collections of models and relationships between them. Typical collection-based operations -- map, reduce, filter -- cannot be applied directly to megamodels since these operators need to take relationships between models into consideration. In this paper, we propose adapted versions of these operators, demonstrating them on four megamodeling scenarios. We then analyze their applicability for handling industrial-sized megamodels. Finally, we report on a reference implementation of the operators and experimental results using it.
Preprint Available
Product lines
Thursday Oct 1, 3:30 p.m. Ballroom C
Session chair: Shaukat Ali
Feature Modeling of Two Large-Scale Industrial Software Systems: Experiences and Lessons Learned
MEIP Best MEIP Paper
Daniela Lettner, Klaus Eder, Paul Grünbacher, and Herbert Prähofer
(JKU Linz, Austria; KEBA, Austria)
Feature models are frequently used to capture the knowledge about configurable software systems and product lines. However, feature modeling of large-scale systems is challenging as many models are needed for diverse purposes. For instance, feature models can be used to reflect the perspectives of product management, technical solution architecture, or product configuration. Furthermore, models are required at different levels of granularity. Although numerous approaches and tools are available, it remains hard to define the purpose, scope, and granularity of feature models. In this paper we thus present experiences of developing feature models for two large-scale industrial automation software systems. Specifically, we extended an existing feature modeling tool to support models for different purposes and at multiple levels. We report results on the characteristics and modularity of the feature models, including metrics about model dependencies. We further discuss lessons learned during the modeling process.
Preprint Available
A Megamodel for Software Process Line Modeling and Evolution
MEIP
Jocelyn Simmonds, Daniel Perovich, María Cecilia Bastarrica, and Luis Silvestre
(University of Chile, Chile)
Companies formalize software processes as a way of organizing development projects. Since there are differences in project contexts, a one-size-fits-all approach does not work well in practice. Some companies use a family of predefined processes, but this approach has a high process maintenance cost. Instead, we define Software Process Lines (SPrL), where a general process with variability is tailored to project contexts. Model-Driven Engineering (MDE) provides a formal framework for defining the models and transformations required for automated SPrL tailoring. However, this approach requires the definition and co-evolution of various types of models and tool support beyond the skills of process engineers, making industrial adoption challenging. This paper shares our experience using a megamodeling approach to the development of the back-end of our toolset. The megamodel provides a uniform mechanism for process definition, variability, tailoring and evolution, and we hide the MDE complexity through a user-friendly front-end. We report the application of our approach at Mobius, a small Chilean software enterprise.
Preprint Available
Applying Product Line Use Case Modeling in an Industrial Automotive Embedded System: Lessons Learned and a Refined Approach
MEIP
Ines Hajri, Arda Goknil, Lionel C. Briand, and Thierry Stephany
(University of Luxembourg, Luxembourg; IEE, Luxembourg)
In this paper, we propose, apply, and assess Product line Use case modeling Method (PUM), an approach that supports modeling variability at different levels of granularity in use cases and domain models. Our motivation is that, in many software development environments, use case modeling drives interactions among stakeholders and, therefore, use cases and domain models are common practice for requirements elicitation and analysis. In PUM, we integrate and adapt existing product line extensions for use cases and introduce some template extensions for use case specifications. Variability is captured in use case diagrams while it is reflected at a greater level of detail in use case specifications. Variability in domain concepts is captured in domain models. PUM is supported by a tool relying on Natural Language Processing (NLP). We applied PUM to an industrial automotive embedded system and report lessons learned and results from structured interviews with experienced engineers.
Preprint Available
Transformations
Thursday Oct 1, 3:30 p.m. Richelieu
Session chair: Zhenjiang Hu
Quick Fixing ATL Model Transformations
Foundations Distinguished Paper
Jesús Sánchez Cuadrado, Esther Guerra, and Juan de Lara
(Autonomous University of Madrid, Spain)
The correctness of model transformations is key to obtain reliable MDE solutions. However, current transformation tools provide limited support to statically detect and correct errors. This way, the identification of errors and their correction are mostly manual activities. Our aim is to improve this situation. Based on a static analyser for ATL model transformations which we have previously built, we present a method and a system to propose quick fixes for transformation errors. The analyser is based on a combination of program analysis and constraint solving, and our quick fix generation technique makes use of the analyser features to provide a range of fixes, notably some non-trivial, transformation-specific ones. Our approach integrates seamlessly with the ATL editor. We provide an evaluation based on an existing faulty transformation, and automatically generated transformation mutants, showing overall good results.
Preprint Available
Video
Info
A Statistical Analysis Approach to Assist Model Transformation Evolution
Foundations
Roberto Rodriguez-Echeverria and Fernando Macias
(University of Extremadura, Spain; Bergen University College, Norway)
Model-Driven Engineering (MDE) is essentially based on metamodel definition, model editing and the specification of model transformations (MT) among these. In many cases the development, evolution and adaptation of these transformations are still carried out without the support of proper methods and tools to reduce the effort and costs related to these activities. In this work, a novel model testing approach specifically designed to assist the engineer in model transformation evolution is presented. A statistical analysis of the actual behavior of the transformations is performed by means of the computation of well-known information extraction metrics. In order to assist MT adaptation, a detailed interpretation of the possible results of those metrics is also presented. Finally, the results of applying this approach to a Model-Driven Reverse Engineering (MDRE) scenario defined in the context of the MIGRARIA project are discussed.
Preprint Available
Fully Verifying Transformation Contracts for Declarative ATL
Foundations
Bentley James Oakes, Javier Troya, Levi Lúcio, and Manuel Wimmer
(McGill University, Canada; Vienna University of Technology, Austria)
The Atlas Transformation Language (ATL) is today a de-facto standard in model-driven development. It is understood by the community that methods for exhaustively verifying such transformations provide an important pillar for achieving a stronger adoption of model-driven development in industry. In this paper we propose a method for verifying ATL model transformations by translating them into DSLTrans, a transformation language with limited expressiveness. Pre-/post-condition contracts are then verified on the resulting DSLTrans specification using a symbolic-execution property prover. The technique we present in this paper is exhaustive for the declarative ATL subset, meaning that if a contract holds, it will hold when any input model is passed to the ATL transformation being checked. We examine the scalability of our technique using a set of examples, including a model transformation developed in collaboration with our industrial partner.
Preprint Available
Systematically Deriving Domain-Specific Transformation Languages
Foundations
Katrin Hölldobler, Bernhard Rumpe, and Ingo Weisemöller
(RWTH Aachen University, Germany)
Model transformations are helpful to evolve, refactor, refine and maintain models. While domain-specific languages are normally intuitive for modelers, common model transformation approaches (regardless of whether they transform graphical or textual models) are based on the modeling language’s abstract syntax, requiring the modeler to learn the internal representation of the model to describe transformations. This paper presents a process that allows a textual domain-specific transformation language to be systematically derived from the grammar of a given textual modeling language. As an example, we apply this systematic derivation to UML class diagrams to obtain a domain-specific transformation language called CDTrans. CDTrans incorporates the concrete syntax of class diagrams, which is already familiar to the modeler, and extends it with a few transformation operators. To demonstrate the usefulness of the derived transformation language, we describe several refactoring transformations.
Preprint Available
Behavioural modeling II
Thursday Oct 1, 3:30 p.m. Chaudière
Session chair: Juergen Dingel
Beyond Discrete Modeling: A Continuous and Efficient Model for IoT
Foundations
Assaad Moawad, Thomas Hartmann, Francois Fouquet, Gregory Nain, Jacques Klein, and Yves Le Traon
(University of Luxembourg, Luxembourg)
Internet of Things applications analyze our past habits through sensor measures to anticipate future trends. To yield accurate predictions, intelligent systems not only rely on single numerical values, but also on structured models aggregated from different sensors. Computation theory, based on the discretisation of observable data into timed events, can easily lead to millions of values. Time series and similar database structures can index these masses of data, but quickly reach computation and storage limits when it comes to creating IoT models. We propose a concept of continuous models that can handle highly volatile IoT data by defining a new type of meta attribute, which represents the continuous nature of IoT-related values. On top of traditional discrete object-oriented modeling APIs, we enable models to represent very large sequences of sensor values by using mathematical polynomials. We show on various IoT datasets that this significantly improves storage and reasoning efficiency.
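As a rough, self-contained illustration of the underlying idea (not the actual framework or its API), the Java sketch below replaces a run of discrete sensor samples by a degree-1 least-squares polynomial that can then be evaluated at any point in time.

```java
// Illustrative sketch of the core idea: replace a dense run of sensor samples by a
// polynomial (here a degree-1 least-squares fit) that can be evaluated at any time.
// This is not the encoding used by the framework described in the abstract.
public class ContinuousAttributeSketch {

    // Least-squares fit of value = a + b * time over the given samples.
    static double[] fitLine(double[] times, double[] values) {
        int n = times.length;
        double sumT = 0, sumV = 0, sumTT = 0, sumTV = 0;
        for (int i = 0; i < n; i++) {
            sumT += times[i];
            sumV += values[i];
            sumTT += times[i] * times[i];
            sumTV += times[i] * values[i];
        }
        double b = (n * sumTV - sumT * sumV) / (n * sumTT - sumT * sumT);
        double a = (sumV - b * sumT) / n;
        return new double[] { a, b };
    }

    public static void main(String[] args) {
        double[] times = { 0, 1, 2, 3, 4 };
        double[] temps = { 20.0, 20.5, 21.1, 21.4, 22.0 };   // raw sensor readings
        double[] poly = fitLine(times, temps);               // two coefficients instead of five samples
        double at = 2.5;                                     // the attribute can now be read at any time
        System.out.printf("temp(%.1f) ~ %.2f%n", at, poly[0] + poly[1] * at);
    }
}
```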
Article Search
A Behavioral Coordination Operator Language (BCOoL)
Foundations
Matias Ezequiel Vara Larsen, Julien DeAntoni,
Benoit Combemale, and Frédéric Mallet
(University of Nice Sophia Antipolis, France; INRIA, France; University of Rennes 1, France)
The design of complex systems involves various, possibly heterogeneous, structural and behavioral models. In model-driven engineering, the coordination of behavioral models to produce a single integrated model is necessary to provide support for validation and verification. Indeed, it allows system designers to understand and validate the global and emerging behavior of the system. However, the manual coordination of models is tedious and error-prone, and current approaches to automate the coordination are bound to a fixed set of coordination patterns. In this paper, we propose a Behavioral Coordination Operator Language (BCOoL) to reify coordination patterns between specific domains by using coordination operators between the Domain-Specific Modeling Languages used in these domains. Those operators are then used to automate the coordination of models conforming to these languages. We illustrate the use of BCOoL with the definition of coordination operators between timed finite state machines and activity diagrams.
Preprint Available
Video
Extracting Frame Conditions from Operation Contracts
Foundations
Philipp Niemann, Frank Hilken, Martin Gogolla, and Robert Wille
(University of Bremen, Germany)
In behavioral modeling, operation contracts defined by pre- and postconditions describe the effects on model properties (i.e., model elements such as attributes, links, etc.) that are enforced by an operation. However, it is usually omitted which model properties should not be modified. Defining so-called frame conditions can fill this gap. But, thus far, these have to be defined manually – a time-consuming task. In this work, we propose a methodology which aims to support the modeler in the definition of the frame conditions by extracting suggestions based on an automatic analysis of operation contracts provided in OCL. More precisely, the proposed approach performs a structural analysis of pre- and postconditions together with invariants in order to categorize which class and object properties are clearly “variable” or “unaffected” – and which are “ambiguous”, i.e. indeed require a more thorough inspection. The developed concepts are implemented as a prototype and evaluated by means of several example models known from the literature.
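For intuition only, here is a deliberately naive Java sketch of the categorization idea: properties mentioned in a postcondition are candidates for "variable", the remaining ones for "unaffected" frame conditions. The OCL contract and property names are invented, and the paper's structural analysis (including the "ambiguous" category) is considerably more precise.

```java
// Naive illustration (not the paper's analysis): properties mentioned in the
// postcondition are flagged "variable", the rest suggested as "unaffected".
import java.util.List;

public class FrameConditionSketch {

    public static void main(String[] args) {
        List<String> properties = List.of("balance", "owner", "limit");
        String postcondition = "self.balance = self.balance@pre + amount"; // hypothetical contract of deposit()

        for (String p : properties) {
            String category = postcondition.contains(p)
                    ? "variable"
                    : "unaffected (suggested frame condition)";
            System.out.println(p + " -> " + category);
        }
    }
}
```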
Preprint Available
Info
Quality II
Friday Oct 2, 10:45 a.m. Ballroom C
Session chair: Benoit Baudry
Employing Classifying Terms for Testing Model Transformations
Foundations Distinguished Paper
Martin Gogolla, Antonio Vallecillo, Loli Burgueño, and Frank Hilken
(University of Bremen, Germany; University of Málaga, Spain)
This contribution proposes a new technique for developing test cases for UML and OCL models. The technique is based on an approach that automatically constructs object models for class models enriched by OCL constraints. By guiding the construction process through so-called classifying terms, the constructed test cases, in the form of object models, are classified into equivalence classes. A classifying term can be an arbitrary OCL term on the class model that calculates a characteristic value for an object model. From each equivalence class of object models with identical characteristic values, one representative is chosen. The constructed test cases behave significantly differently with regard to the selected classifying term. By building a few diverse object models, properties of the UML and OCL model can be explored effectively. The technique is applied to automatically construct relevant source model test cases for model transformations between a source and target metamodel.
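A small, hedged sketch of the partitioning idea in plain Java follows; the object models and the classifying term are invented for illustration, and in the paper the object models are constructed per equivalence class by a solver rather than filtered from a fixed pool.

```java
// Sketch of the partitioning idea: a "classifying term" maps each candidate object model
// to a characteristic value, and one representative per value class is kept as a test case.
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

public class ClassifyingTermSketch {

    record ObjectModel(String name, int persons, int cars) { }

    public static void main(String[] args) {
        List<ObjectModel> candidates = List.of(
                new ObjectModel("m1", 0, 2),
                new ObjectModel("m2", 3, 0),
                new ObjectModel("m3", 1, 1),
                new ObjectModel("m4", 4, 4));

        // Classifying term; in OCL it might read:
        // Person.allInstances()->size() > Car.allInstances()->size()
        Function<ObjectModel, Boolean> term = m -> m.persons() > m.cars();

        Map<Boolean, ObjectModel> representatives = new LinkedHashMap<>();
        for (ObjectModel m : candidates) {
            representatives.putIfAbsent(term.apply(m), m);   // keep one model per equivalence class
        }
        representatives.forEach((value, model) ->
                System.out.println("class " + value + " -> test case " + model.name()));
    }
}
```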
Preprint Available
Pattern-Based Debugging of Declarative Models
Foundations
Vajih Montaghami and Derek Rayside
(University of Waterloo, Canada)
Pattern-based debugging compares the engineer's model to a pre-computed library of patterns, and generates discriminating examples that help the engineer decide if the model's constraints need to be strengthened or weakened. A number of tactics are used to help connect the generated examples to the text of the model. This technique augments existing example/counter-example generators and unsatisfiable core analysis tools, to help the engineer better localize and understand defects caused by complete overconstraint, partial overconstraint, and underconstraint. The technique is applied to localizing, understanding, and fixing a defect in an Alloy model of Dijkstra's Dining Philosophers problem. Automating the search procedure remains essential future work.
Preprint Available
Info
Checking Concurrent Behavior in UML/OCL Models
Foundations
Nils Przigoda, Christoph Hilken, Robert Wille, Jan Peleska, and Rolf Drechsler
(University of Bremen, Germany; DFKI, Germany)
The Unified Modeling Language (UML) is a de-facto standard for software development and, together with the Object Constraint Language (OCL), allows for a precise description of a system prior to its implementation. At the same time, these descriptions can be employed to check the consistency and, hence, the correctness of a given UML/OCL model. In the recent past, numerous (automated) approaches have been proposed for this purpose. The behavior of the systems has usually been considered by means of sequence diagrams, state machines, and activity diagrams. But with the increasing popularity of design by contract, composite structures, classes, and operations are also frequently used to describe behavior in UML/OCL. However, for these description means, no solution for the validation and verification of concurrent behavior is available yet. In this work, we propose such a solution. To this end, we discuss the possible interpretations of “concurrency” which are admissible according to the common UML/OCL interpretation and, afterwards, propose a methodology which exploits solvers for SAT Modulo Theories (i.e., SMT solvers) in order to check the concurrent behavior of UML/OCL models. How to address the resulting problems is described and illustrated by means of a running example. Finally, the application of the proposed method is demonstrated.
Preprint Available
Evolution and reuse
Friday Oct 2, 10:45 a.m. Chaudière
Session chair: Gregor Engels
Improving Reuse by means of Asymmetrical Model Migrations: An Application to the Orcc Case Study
MEIP
Paola Vallejo, Mickaël Kerboeuf, Kevin J. M. Martin, and Jean-Philippe Babau
(University Bretagne-Occidentale, France; CNRS, France; University Bretagne-Sud, France)
The legacy code of a tool handling domain-specific data gathers valuable expertise. However, in many cases it must be rewritten to make it apply to structurally incompatible data. We investigate a co-evolution approach that avoids this rewriting by making the call context meet the legacy tool's definition domain. The data conforming to the call context co-evolve into data conforming to the definition domain. Once processed by the tool, they can be put back into their original context thanks to a specific reverse transformation which enables the recovery of elements that had been initially removed. This approach is applied to Orcc, a compiler for dataflow applications. Orcc requires many common functions that are expected to be adapted to its own context. Our approach is an effective way to reuse them instead of rewriting them.
Article Search
Concern-Oriented Interfaces for Model-Based Reuse of APIs
Foundations
Matthias Schöttle and Jörg Kienzle
(McGill University, Canada)
Reuse is essential in modern software engineering, but limited in the context of MDE by the poor availability of reusable models. On the other hand, reusable code artifacts such as frameworks and libraries are abundant. This paper presents an approach to raise reusable code artifacts to the modelling level by modelling their API using concern-oriented techniques, thus enabling their use in the context of MDE. Our API interface models contain additional information, such as the encapsulated features and their impacts, to assist the developer in the reuse process. Once the developer has specified their needs, the model interface exposes only the API elements relevant for this specific reuse at the model level, together with the required usage protocol. We show how this approach is applied by hand to model the interface of a small GUI framework and outline how we envision this process to be performed semi-automatically.
Preprint Available
Consistent Co-Evolution of Models and Transformations
Foundations
Angelika Kusel, Jürgen Etzlstorfer, Elisabeth Kapsammer, Werner Retschitzegger, Wieland Schwinger, and Johannes Schönböck
(JKU Linz, Austria; University of Applied Sciences Upper Austria at Hagenberg, Austria)
Evolving metamodels are at the center of Model-Driven Engineering, necessitating the co-evolution of dependent artifacts like models and transformations. While model co-evolution has been extensively studied, transformation co-evolution has received less attention up to now. Current approaches for transformation co-evolution provide only a fixed, restricted set of metamodel (MM) changes. Furthermore, composite changes are treated as monolithic units, which may lead to inconsistent co-evolution for overlapping atomic changes and prohibits extensibility. Finally, transformation co-evolution is considered in isolation, possibly inducing inconsistencies between model and transformation co-evolution. To overcome these limitations, we propose a complete set of atomic MM changes that is able to describe arbitrary MM evolutions. Reusability and extensibility are supported by means of change composition, ensuring intra-artifact consistent co-evolution. Furthermore, each change provides resolution actions for both models and transformations, ensuring inter-artifact consistent co-evolution. Based on our conceptual approach, a prototypical implementation is presented.
Article Search
Fundamentals II
Friday Oct 2, 10:45 a.m. Panorama
Session chair: Richard Paige
Fragmenta: A Theory of Fragmentation for MDE
Foundations
Nuno Amálio, Juan de Lara, and Esther Guerra
(University of York, UK; Autonomous University of Madrid, Spain)
Model-Driven Engineering (MDE) promotes models throughout development. However, models may become large and unwieldy even for small to medium-sized systems. This paper tackles the MDE challenges of model complexity and scalability. It proposes Fragmenta, a theory of modular design that breaks down overall models into fragments that can be put together to build meaningful wholes, in contrast to classical MDE approaches that are essentially monolithic. The theory is based on an algebraic description of models, fragments and clusters based on graphs and morphisms. The paper's novelties include: (i) a mathematical treatment of fragments and a seaming mechanism of proxies to enable inter-fragment referencing, (ii) fragmentation strategies, which prescribe a fragmentation structure to model instances, (iii) Fragmenta's support for both top-down and bottom-up design, and (iv) our formally proved result that shows that inheritance hierarchies remain well-formed (acyclic) globally when fragments are composed provided some local fragment constraints are met.
Preprint Available
Info
A Framework for Relating Syntactic and Semantic Model Differences
Foundations Distinguished Paper
Shahar Maoz and Jan Oliver Ringert
(Tel Aviv University, Israel)
Model differencing is an important activity in model-based development processes. Differences need to be detected, analyzed, and understood to evolve systems and explore alternatives. Two distinct approaches have been studied in the literature: syntactic differencing, which compares the concrete or abstract syntax of models, and semantic differencing, which compares models in terms of their meaning. Syntactic differencing identifies change operations that transform the syntactical representation of one model to the syntactical representation of the other. However, it does not explain their impact on the meaning of the model. Semantic model differencing is independent of syntactic changes and presents differences as elements in the semantics of one model but not the other. However, it does not reveal the syntactic changes causing these semantic differences. We define a language independent, abstract framework, which relates syntactic change operations and semantic difference witnesses. We formalize fundamental relations of necessary and sufficient sets of change operations and analyze their properties. We further demonstrate concrete instances of the framework for three different popular modeling languages, namely, class diagrams, activity diagrams, and feature models. The framework provides a novel foundation for combining syntactic and semantic differencing.
Preprint Available
Modular Model-Based Supervisory Controller Design for Wafer Logistics in Lithography Machines
MEIP
Bram van der Sanden, Michel Reniers, Marc Geilen, Twan Basten, Johan Jacobs, Jeroen Voeten, and Ramon Schiffelers
(Eindhoven University of Technology, Netherlands; TNO Embedded Systems Innovation, Netherlands; ASML, Netherlands)
Development of high-level supervisory controllers is an important challenge in the design of high-tech systems. It has become a significant issue due to increased complexity, combined with demands for verified quality, time to market, ease of development, and integration of new functionality. To deal with these challenges, model-based engineering approaches are suggested as a cost-effective way to support easy adaptation, validation, synthesis, and verification of controllers. This paper presents an industrial case study on modular design of a supervisory controller for wafer logistics in lithography machines. The uncontrolled system and control requirements are modeled independently in a modular way, using small, loosely coupled and minimally restrictive extended finite automata. The multiparty synchronization mechanism that is part of the specification formalism provides clear advantages in terms of modularity, traceability, and adaptability of the model. We show that being able to refer to variables and states of automata in guard expressions and state-based requirements, enabled by the use of extended finite automata, provides concise models. Additionally, we show how modular synthesis allows construction of local supervisors that ensure safety of parts of the system, since monolithic synthesis is not feasible for our industrial case.
Preprint Available
Info