1) Rigorous Engineering of Collective Adaptive Systems


Rocco de Nicola

Stefan Jähnichen

Martin Wirsing


Today's software systems are becoming increasingly distributed and decentralized and have to adapt autonomously to dynamically changing, open-ended environments. Often the nodes have their own individual properties and objectives; interactions with other nodes or with humans may lead to the emergence of unexpected phenomena. We call such systems collective adaptive systems. Examples of collective adaptive systems include robot swarms, smart cities, autonomous cars, voluntary peer-to-peer clouds, as well as socio-technical systems and the internet of things.

It is crucial that unexpected and possibly dangerous situations be avoided within such systems. Hence, there is a strong need for methods, tools and techniques to guarantee that systems behave as expected and to guarantee safety, security, integrity, availability, correctness, reliability and resilience, ensuring that these issues are not addressed in isolation but as a whole at the system level.

This track is a follow-up of the successful tracks on “rigorous engineering of autonomic ensembles” at ISoLA 2014 and ISoLA 2016 and the tracks on “rigorous engineering of collective adaptive systems” at ISoLA 2018 and ISoLA 2020/2021. It offers a venue for presenting research work aiming at rigorous methods for engineering collective adaptive systems in order to face the challenges outlined above. Topics of interest include (but are not limited to):

  • Methods and theories for designing, specifying, and analysing collective adaptive systems
  • Techniques for programming and operating collective adaptive systems
  • Methods and mechanisms for adaptation and dynamic self-expression
  • Trustworthiness, validation and verification of collective adaptive systems
  • Security and performance of collective adaptive systems
  • Techniques to identify collective adaptive systems and their components

Submission link:

2) Programming What is Next: The Role of Documentation


Bernhard Steffen

Klaus Havelund


Documentation, originally merely a comment feature helping programmers orient themselves in program code, and considered less relevant in the agile manifesto, has many facets today: not only has it gained importance with the increasing collaboration of highly distributed teams, but it has also become a structured means that goes beyond the single purpose of human-readable text.

– Classic approaches such as JavaDoc were instrumental in enhancing Java comments through the adoption of structured tags that carry processable metadata about the commented source code, and through the automatic generation of documentation in a browsable output format (e.g. HTML).
– JML goes beyond JavaDoc. In particular, it supports design by contract by validating pre-/postconditions written as annotations.
– GitHub Copilot generates complete source code from comments, blurring the lines between documenting and programming.
– Projects like the OpenAPI initiative support a documentation-first approach to describing (RESTful) APIs in a human- and machine-readable format that allows automatic service discovery.
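The first two points can be illustrated with a small sketch (class and method names are ours, not from any particular project): the JavaDoc tags carry processable metadata, while the JML-style `//@` annotations state a pre/postcondition contract that a checker such as OpenJML could validate.

```java
/**
 * Illustrative sketch: JavaDoc tags carry processable metadata for
 * documentation generation, while the JML-style annotations (//@ ...)
 * express a design-by-contract specification of the same method.
 */
public class DocDemo {

    /**
     * Returns the integer square root of {@code n}.
     *
     * @param n a non-negative integer
     * @return the largest {@code r} such that {@code r * r <= n}
     * @throws IllegalArgumentException if {@code n} is negative
     */
    //@ requires n >= 0;
    //@ ensures \result >= 0 && \result * \result <= n;
    public static int isqrt(int n) {
        if (n < 0) throw new IllegalArgumentException("n must be non-negative");
        int r = 0;
        // Linear search keeps the sketch simple; it ignores overflow for very large n.
        while ((r + 1) * (r + 1) <= n) r++;
        return r;
    }
}
```

To the Java compiler the `//@` lines are ordinary comments; a JML tool reads them as a checkable contract, and the `javadoc` tool turns the tagged comment into browsable HTML.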


The aim of the track is to review current pragmatics and to discuss the future role of documentation in the context of modern software engineering.

Submission link:

3) X-by-Construction Meets Runtime Verification


Maurice ter Beek

Loek Cleophas

Martin Leucker

Ina Schaefer

Correctness-by-Construction (CbC) sees the development of software (systems) as a step-wise refinement process from specification to code, ideally by CbC
design tools that automatically generate error-free software (system) implementations from rigorous and unambiguous requirement specifications. Afterwards, testing only serves to validate the CbC process rather than to find bugs.

A lot of progress has been made on CbC and, after a successful track on the combination of CbC with post-hoc verification at ISoLA 2016, at ISoLA 2018 it was time to look further than correctness by investigating a move from CbC to X-by-Construction (XbC), i.e., by also considering non-functional properties. XbC is thus concerned with a step-wise refinement process from specification to code that automatically generates software (system) implementations that by construction satisfy specific non-functional properties concerning security, dependability, reliability, resource or energy consumption, and the like. In line with the growing attention to fault tolerance and the increasing use of machine learning techniques in modern software systems, which make it hard to establish guaranteed properties, the track at ISoLA 2020/2021 focused on XbC in the setting of probabilistic systems and properties.

Runtime verification (RV) is concerned with monitoring and analysing actual software (and hardware) system behaviour. RV is of paramount importance to
system correctness, reliability, and robustness by providing an additional level of rigour and effectiveness when compared to testing and improved practical
applicability when compared to exhaustive formal verification. RV can be used prior to deployment, for testing, verification, and debugging purposes, as well as
after deployment, for ensuring reliability, safety, and security and for providing fault containment and recovery or online system repair, possibly paired with a
digital twin acting as the virtual counterpart of the actual real-time (software and/or hardware) system behaviour.
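As a concrete illustration of the monitoring idea, a minimal sketch of a runtime monitor (the property and all event names are illustrative, not taken from the track description) that checks a simple trace property over observed system events:

```java
/**
 * Minimal runtime-verification monitor sketch for the trace property:
 * every "close" must be preceded by a matching "open", and every "open"
 * must eventually be matched by a "close" before the trace ends.
 */
public class RvMonitor {
    private int openCount = 0;      // currently unmatched "open" events
    private boolean failed = false; // becomes true on the first violation

    /** Process one observed event; events other than open/close are ignored. */
    public void event(String e) {
        if (failed) return;
        switch (e) {
            case "open":
                openCount++;
                break;
            case "close":
                if (openCount == 0) failed = true; // close without open: violation
                else openCount--;
                break;
            default:
                break; // irrelevant to this property
        }
    }

    /** Verdict at end of trace: true iff no violation and all opens were closed. */
    public boolean endOfTrace() {
        return !failed && openCount == 0;
    }
}
```

The same monitor can run prior to deployment (against test traces) or after deployment (against live events), matching the two uses of RV described above.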

Building on the highly successful ISoLA tracks mentioned above, we aim to bring together researchers and practitioners who are interested in CbC/XbC and who acknowledge the need to join forces with concepts from RV, because (1) achieving correctness starting at design time is difficult, if not impossible, given the current proliferation of systems with data-driven AI components, and (2) a system failure detected by RV may possibly be repaired, but how can it be ensured that the corrected system is indeed better than before?

With this topic, this ISoLA 2022 track fits into the line of the above-mentioned tracks at previous ISoLA editions:
ISoLA 2016 Correctness-by-Construction and Post-hoc Verification: Friends or Foes?
ISoLA 2018 X-by-Construction
ISoLA 2020/2021 X-by-Construction: Correctness Meets Probability
ISoLA 2022 X-by-Construction Meets Runtime Verification

We thus invite researchers and practitioners working in the following communities to share their views on the many possible synergies between CbC/XbC
at design time and RV at runtime:

– People working on system-of-systems, who address modelling and analysis (considering correctness, but also non-functional properties concerning security,
reliability, resilience, energy consumption, performance, sustainability, and the like) of networks of interacting legacy and new software systems, and
who are interested in applying CbC/XbC or RV techniques in this domain in order to provide guarantees for system-of-systems.
– People working on quantitative modelling and analysis, for instance through probabilistic/real-time systems and probabilistic/statistical model checking,
in particular in the specific setting of dynamic, adaptive or (runtime) reconfigurable systems with variability. These people typically work on lifting
successful formal methods and (runtime) verification tools from single systems to families of systems, i.e., design and (runtime) verification techniques
that need to cope with the complexity of systems stemming from behaviour, variability, and randomness—and which focus not only on correctness but
also on non-functional properties concerning safety, security, performance, dependability, and the like. As such, they may be interested in applying
CbC/XbC or RV techniques in this domain to provide guarantees for families of systems.
– People working on systems involving components that employ machine learning (ML) or other artificial intelligence (AI) approaches. In these settings, models and behaviour are typically dependent on what is learned from large data sets, and may change dynamically based on yet more data being processed. As a result, guaranteeing properties (whether correctness or non-functional ones) becomes hard, and probabilistic reasoning needs to be applied instead with respect to such properties for the components employing AI approaches. As a consequence, people working in this domain may be interested in applying CbC/XbC or RV techniques to provide guarantees for such AI-based systems.

1. M. H. ter Beek, L. Cleophas, A. Legay, I. Schaefer, and B. W. Watson. X-by-Construction: Correctness Meets Probability. In T. Margaria and B. Steffen, editors, Proceedings of the 9th International Symposium on Leveraging Applications of Formal Methods, Verification and Validation: Verification Principles (ISoLA’20), volume 12476 of LNCS, pages 211–215. Springer, 2020.
2. M. H. ter Beek, L. Cleophas, I. Schaefer, and B. W. Watson. X-by-Construction. In T. Margaria and B. Steffen, editors, Proceedings of the 8th International Symposium on Leveraging Applications of Formal Methods, Verification and Validation: Modeling (ISoLA’18), volume 11244 of LNCS, pages 359–364. Springer, 2018.
3. M. H. ter Beek, R. Hähnle, and I. Schaefer. Correctness-by-Construction and Post-hoc Verification: Friends or Foes? In T. Margaria and B. Steffen, editors, Proceedings of the 7th International Symposium on Leveraging Applications of Formal Methods, Verification and Validation: Foundational Techniques (ISoLA’16), volume 9952 of LNCS, pages 723–729. Springer, 2016.

Submission link:

4) Automated Software Re-Engineering


Serge Demeyer

Reiner Hähnle

Heiko Mantel


Formal approaches to software analysis and development tend to focus on greenfield scenarios or to look at some piece of given software as
a static object. Arguably, dynamic evolution of software is a much more common and relevant issue, and its importance keeps growing
further. Key drivers are:

• The advent of innovative execution platforms, including massively parallel and re-configurable hardware, and the desire to optimally
exploit their benefits by software

• Emerging norms and regulations demanding software being re-engineered to comply with stricter legislation, new quality
requirements, or higher ethical standards

• The role of software in science (“in silico” now surpasses the more traditional “in vitro” / “in vivo” research methods) and the
challenge of maintaining such research software

• Novel application scenarios for existing software (e.g. blockchain, micro-services, IoT), fueled by the digitalization of
everything, and a desire to re-use time-tested technology.

Software refactoring, parallelization, and adaptation have become central activities in the value chain: automating them can realize
huge gains. Formal approaches to software modeling and analysis are poised to make a substantial contribution, because they are
fundamentally concerned with automation and correctness.

In this track we invite researchers with an active interest in the automation of software re-engineering: People working on formal
foundations, on tools, as well as practitioners of re-engineering.

Submission link:


5) Digital Twin Engineering


John Fitzgerald

Tiziana Margaria

Peter Gorm Larsen

Jim Woodcock

Cláudio Ângelo Gonçalves Gomes


A digital twin is a virtual replica of a cyber-physical system (often simply called the physical twin) that can be used to monitor the latter's state, to identify and respond to changes, and to provide a basis for making decisions about future modifications. The construction and use of digital twins appear to be growing in domains as diverse as manufacturing, transport, energy, agriculture, and the built environment. The potential benefits include the ability to perform what-if analysis, reconfigure in response to internal faults or a changing environment, perform preventive maintenance, and facilitate visualisation. Notably, the Gartner group put digital twins among its 10 strategically most important technologies in 2019.

Digital twins bring assets created during design, such as models of cyber-physical processes, together with data acquired from the physical twin. Such run-time data can be used to derive inductive models through Machine Learning (ML). Such twins can be thought of as 'perpetually learning' in the sense that operational data is used to tune models, making them a more faithful reflection of the system as built and forming a sound basis for decisions about possible responses to dependability threats.
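As a toy illustration of this tuning loop, a minimal sketch (all names are ours; the "inductive model" is just a running mean over a sliding window of observations) of a twin that tunes its model from run-time data and flags deviations that might warrant a response:

```java
import java.util.ArrayDeque;
import java.util.Deque;

/**
 * Toy digital-twin loop: ingest state from the physical twin, tune a
 * trivial inductive model (a running mean over recent observations),
 * and flag observations that deviate from the tuned model.
 */
public class TwinSketch {
    private final Deque<Double> window = new ArrayDeque<>();
    private double modelEstimate = 0.0;       // the "perpetually learning" model
    private static final int WINDOW = 5;      // observations retained for tuning
    private static final double TOLERANCE = 2.0; // deviation threshold

    /**
     * Feed one observation from the physical twin. Returns true if the
     * observation deviates from the tuned model beyond the tolerance,
     * i.e. the twin suggests a response may be needed.
     */
    public boolean observe(double sensorValue) {
        window.addLast(sensorValue);
        if (window.size() > WINDOW) window.removeFirst();
        double sum = 0.0;
        for (double v : window) sum += v;
        modelEstimate = sum / window.size();  // tune model from run-time data
        return Math.abs(sensorValue - modelEstimate) > TOLERANCE;
    }
}
```

A real twin would of course replace the running mean with the descriptive and ML-derived models discussed above; the point is only the shape of the monitor-tune-respond loop.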

Ensuring that digital twins are dependable presents an intriguing series of questions for the communities in model-based systems engineering and formal methods. These include:

  • What foundations are needed for a well-founded digital twin, for example in defining accuracy and dependability?
  • What are the appropriate architectures and key abstractions that can facilitate the sound implementation of such twins?
  • What are the limits for such digital twins?
  • When is it worthwhile to construct a digital twin, and what value can be gained from doing so?

Dependable operation of cyber-physical systems (CPSs) requires both the ability to address the consequences of evolving system components, and the ability to explore and identify changes that do not unduly compromise overall dependability. This combination of prediction and response alongside support for informed decision-making and redesign by humans requires both the data derived from operational settings and the models developed in design stages. Tackling the challenges of CPS design thus requires a marriage of both descriptive multi-models of the type that might be developed in a design process, and inductive models derived from data acquired during operation. This combination of models, cutting across formalisms as well as across design and operation, has the potential to form a learning digital twin for a CPS, enabling off-line and on-line decision-making.

The goal of our proposed track is to discuss how one can enable the well-founded engineering of such dependable digital twins for CPSs.

The plan is that the track will consist of 4 sessions over a complete day:

  • Workshop type presentations, with an emphasis on actual experience and tool demonstrations
  • Reports on foundational challenges and ongoing research
  • An invited speaker from industry
  • Collaborative drafting of a track position paper on current challenges and research directions

In addition, the plan is to co-locate the track with the general assembly of the INTO-CPS Association and hopefully be able to attract additional participants in this manner.

Submission link:


6) Specify This – Bridging gaps between program specification paradigms

Wolfgang Ahrendt, Chalmers University of Technology, SE
Marieke Huisman, University of Twente, NL
Mattias Ulbrich, Karlsruhe Institute of Technology, DE
Paula Herber, University of Münster, DE


The field of program verification has seen considerable successes in recent years. At the same time, both the variety of properties that can be specified and the collection of approaches that solve such program verification challenges have specialised and diversified a lot. Examples include contract-based deductive specification and verification of functional properties, the specification and verification of temporal properties using model checking, and static analyses for secure information flow properties. While this diversification enables the formal analysis of properties that were not accessible a few years ago, it may leave the impression of isolated solutions that solve different isolated problems.

Here lies a great potential that waits to be uncovered: if an approach can be extended to interpret specifications used in other approaches and to use them beneficially in its analyses, a considerable extension of the power and reach of formal analyses is achievable. A discipline of “separation of concerns” can be obtained, e.g., by combining temporal specifications of a protocol implementation with a contract-based specification of its functional behaviour.

Within this track, we will investigate and discuss what can be achieved in joint efforts of the communities behind different specification and verification techniques, including dynamic and static analysis, model checking, deductive verification, security analyses, and testing. This track is a natural next step following a series of well-structured online discussions within the VerifyThis community during the last year, in which we identified first candidates for the combination and interplay of formal program verification methods. During ISoLA 2022, we will address questions such as what specifications that are shared between different approaches should look like, how different abstraction levels can be bridged, how semantic differences can be resolved, which application areas can benefit from which method combinations, what artifacts can be carried forward through different verification technologies, what role user interaction (in the form of specifications) plays, and how the various techniques can be integrated into software and hybrid/cyber-physical system development processes.

Submission link:

7) Verification and Validation of Concurrent and Distributed Heterogeneous Systems


Marieke Huisman – University of Twente, Netherlands

Cristina Seceleanu – Mälardalen University, Sweden


Modern software systems are becoming more and more heterogeneous: they mix different programming languages and programming concepts, combine computations happening in different places or on platforms characterized by heterogeneous architectures (e.g. CPU/GPU), contain both discrete and continuous behavior, or contain environmental sources of heterogeneity. Examples of such heterogeneous systems are the Internet of Things, cloud infrastructures, but also hybrid systems that combine discrete and continuous behavior, or biological systems.

To improve performance, many of these heterogeneous systems feature both concurrency and distribution. However, this also adds a lot of extra complexity to systems and allows many new kinds of errors to occur. Consequently, verifying such systems poses significant challenges. In the ISoLA track on Verification and Validation of Concurrent and Distributed Systems at ISoLA 2021, we saw that over the last years substantial progress has been made on this topic, and impressive results on the verification and validation of concurrent and distributed systems were reported. In the current track, we wish to explore this area further and to investigate what is needed to make such verification and validation techniques scale to industrial-sized heterogeneous systems.

This track focuses on providing solutions to the issues mentioned above by inviting submissions that propose models, techniques, and tools for the rigorous analysis and verification of concurrent and distributed heterogeneous systems, focusing in particular on how to apply such techniques to industrial and real-world systems. During the session, we plan to present and discuss not only the individual verification techniques, but also the ways in which they can be combined.

Submission link:


8) Formal Methods Meet Machine Learning


Kim Larsen

Axel Legay

Bernhard Steffen

Marielle Stoelinga


On a superficial level, the research fields and landscapes of formal methods and machine learning appear to be opposites: the former focuses on conceptual approaches that yield highly precise and well-understood algorithms to ensure safety in critical systems, while the latter relies on rough numerical approximations and statistical estimates to scale efficiently to the vast datasets of present times. Nevertheless, communication between the two fields is not only beneficial, but necessary to solve some of the most pressing issues in either field.

As machine learning transforms the world, both the scope and the complexity of machine learning systems increase. While in the past ML applications were only used in relatively inconsequential ways, ML is now increasingly used in safety-critical domains such as the medical sector, self-driving cars or financial planning.
As more and more people are affected by the decisions of ML systems, the computer science community increasingly owes explanations of those ML decisions. Formal methods-based approaches deliver strong explanations for complex systems. Examples include error traces that explain when and how errors come to be, game-based strategies that explain which actions or events lead to a failure in a system, or verification approaches that can mathematically prove the correctness of a system.
The first challenge that we are interested in lies in extending the ideas of formal methods to the models of machine learning while still providing equally strong explanations.

Akin to ML systems, traditional computer systems are also becoming increasingly ubiquitous and complex. This poses a challenge to formal methods-based approaches, which are traditionally very precise but are often limited by scalability issues.
Machine learning has achieved remarkable results by leveraging prior data to solve complex problems approximately and very quickly.
The second challenge is therefore dual to the first: how to overcome the typical scalability issues of formal methods-based approaches by applying ML techniques, while controlling the many sources of uncertainty of ML systems that are unacceptable in the highly safety-critical domains in which formal methods are usually used.
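One established meeting point of precise verification and statistical estimation is statistical model checking, which appears in the topic list below. A minimal sketch (the model, a biased random walk, and all names are purely illustrative): estimate a reachability probability of a stochastic model by repeated Monte-Carlo simulation instead of exhaustive state-space exploration.

```java
import java.util.Random;

/**
 * Statistical model checking sketch: estimate, by Monte-Carlo simulation,
 * the probability that a simple stochastic system (here a biased random
 * walk standing in for a real model) reaches a goal state within a bound.
 */
public class SmcSketch {

    /** One simulation run: does the walk reach position `goal` within `maxSteps`? */
    public static boolean run(Random rng, double pUp, int goal, int maxSteps) {
        int pos = 0;
        for (int i = 0; i < maxSteps; i++) {
            pos += rng.nextDouble() < pUp ? 1 : -1; // step up with probability pUp
            if (pos >= goal) return true;
        }
        return false;
    }

    /** Estimate P(reach goal within maxSteps) from n independent runs. */
    public static double estimate(long seed, double pUp, int goal, int maxSteps, int n) {
        Random rng = new Random(seed);
        int hits = 0;
        for (int i = 0; i < n; i++) {
            if (run(rng, pUp, goal, maxSteps)) hits++;
        }
        return (double) hits / n;
    }
}
```

The price of this scalability is exactly the controlled uncertainty mentioned above: the result is an estimate with statistical confidence (e.g. via Chernoff-Hoeffding bounds on the number of runs), not a proof.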

Formal Methods Meets Machine Learning aims at bridging the gap between the communities of formal methods and machine learning and at providing a forum for discussing problems that lie at their intersection. This includes, but is not necessarily limited to, problems of verification and explanation of machine learning systems from a conceptual, formal methods-based perspective, and the use of ML-based heuristics in formal methods approaches to increase the scalability of system verification, perhaps in a probabilistic or statistical guise.


A typical, but not exhaustive list of topics is:

  1. Verification of Deep Neural Networks
  2. Formal Methods-based Explanation
  3. Guarantees on Reinforcement Learning in Verification
  4. Formal Program Synthesis and Analysis using Machine Learning
  5. Representations of Strategies and Controllers
  6. Advances in Statistical Model Checking
  7. Observation-Based Model Inference
  8. Shielding, e.g., against attacks

It is planned to publish selected contributions in a Special Issue of STTT (ExPLAIn).

Submission link:


9) Formal methods for DIStributed COmputing in future RAILway systems (DisCoRail 2022)


Alessandro Fantechi (Università di Firenze, Italy)

Stefania Gnesi (ISTI-CNR, Italy)

Anne Haxthausen (Technical University of Denmark)


The two previous editions of DisCoRail (2019 and 2021) have discussed the intertwining of formal methods and distributed computing in the design and development of innovative train control systems. These two dimensions stem naturally from two fundamental characteristics of this class of systems, namely that their functions are intrinsically distributed between trains and wayside equipment, and that such functions are safety-critical, calling for a rigorous proof of their safety.

Distribution of functionality enables distributing decisions as well. Currently, most of the crucial decisions needed to guarantee safety are taken at centralised locations (such as the Radio Block Centre – RBC – in ETCS). The pros and cons of distributing vital decisions are a matter of active research, especially considering that the increasing importance of communication raises the need to take uncertainty into account: can distributed decisions achieve the same safety level as centralised ones? How can formal methods guarantee safety in such a context? What about availability, interoperability and cybersecurity?

The current research on autonomous driving for cars is inspiring a vision of autonomous trains in the near future. Autonomy requires even more distributed decisions, also based on local knowledge of the surrounding environment acquired through AI-enabled sensors, e.g. employing artificial vision. Can formal methods be exploited to provide the necessary safety assurance for these systems?

Following the success of the previous DisCoRail editions, the track aims for a fruitful discussion of these topics among researchers and experts from industry and academia who have addressed these aspects in research and development projects.

Submission link:


10) Automated Verification of Embedded Control Software


Dilian Gurov, KTH Royal Institute of Technology, Stockholm

Paula Herber, University of Münster

Ina Schaefer, TU Braunschweig

The development of embedded control software is driven by high demands with respect to both cost efficiency and quality requirements. The time to market is comparatively short, and the functionality volume is ever-increasing. For example, in 2005 the software in a modern car comprised about 70 MB. Today, we usually have more than 1 GB (i.e. more than 100 million lines of code) distributed over more than a hundred microprocessors. With the current trend towards autonomous driving and the internet of things, these numbers can be expected to increase further. At the same time, embedded control software components often have to meet the highest safety requirements. Besides functional requirements, non-functional requirements like real-time behaviour, memory and power consumption are crucial. For example, the control part of automotive software is typically classified at the highest Automotive Safety Integrity Levels, that is, ASIL C or ASIL D, according to the ISO 26262 standard for functional safety in road vehicles. ISO 26262 strongly recommends that semi-formal and formal verification be used for such systems.

The major barrier for the adoption of semi-formal and formal verification in the industrial practice is the limited degree of automation of formal methods. Formal specifications and models often need to be manually written, and scalable verification techniques typically require tedious user annotations and interactions. The challenges of automated verification for embedded control software are multiplied by the recently incorporated use of statistical methods such as machine learning, by the uncertainties of self-adaptation in dynamic environments, and by increasing security issues in the wake of omnipresent wireless interfaces and the internet of things.

In this track, we investigate how the degree of automation can be increased for the verification of embedded control software systems. We plan to not only discuss the traditional safety goals of embedded control software, but also more recent challenges like the use of machine learning in safety-critical components and security issues arising from, e.g., car-to-car communication and wireless interfaces.

11) Digital Thread in Smart Manufacturing


Tiziana Margaria, University of Limerick, Ireland, and Confirm

Dirk Pesch, University College Cork, Ireland, and Confirm

Alan McGibney, Munster Technological University, Ireland, and Confirm


The concept of digital twins has emerged from the smart manufacturing space and is now gaining adoption in many other industries beyond manufacturing. A digital twin is a virtual replica of a cyber-physical system that is used to capture the state of the system and to allow reasoning and decision-making on that state. While there has been much research on this topic, there is less work on the overall lifecycle eco-system that supports the smooth interoperation of a physical facility (like a machine, a factory or even a supply chain) and its digital components (like data, processes and digital twins); this eco-system is called the digital thread. The aim of the digital thread is the creation of a digital lifecycle eco-system that links together the data generated throughout a product's lifecycle and represents the data, processes and communication platform that support a product and its production at any instance of time.

In order to realise the concept of a digital thread, a range of functions need to work together in the form of a “digital mesh” and open data spaces to allow for the integration and interoperability of physical and virtual representations of a product or process through the digital thread. The research and innovation aspects around and within the digital thread span inter-disciplinary topics including Internet of Things and Cyber-Physical System technologies, decentralised and edge computing architectures, distributed ledger technologies (DLT), model-driven development, low-code/no-code approaches, trust management and data management strategies.

Important in realising a digital thread in any organisation are testbeds and practical experience in collecting, processing and managing data, the integration of models and systems across heterogeneous technologies and paradigms, as well as the associated application development throughout a product lifecycle.

List of Topics (non-exhaustive)

  • IoT middleware platforms for the digital thread
  • IoT application layer protocols for a digital thread environment
  • Programming paradigms for the digital thread
  • Open data spaces for digital twins and digital thread
  • Data management strategies for the digital thread
  • Trust management for digital thread
  • Edge/fog computing environments supporting the digital thread
  • Digital thread testbeds and practical experiences

Doctoral Symposium


Anna-Lena Lamprecht

Falk Howar

 Submission link:

Industrial Day

Falk Howar

Axel Hessenkämper

Andreas Rausch

Hardi Hungar

Submission link:


Dime Days
International School on Tool-based Rigorous Engineering of Software Systems (STRESS) - postponed for 2024