1) Reliable Smart Contracts: State-of-the-art, Applications, Challenges and Future Directions
Gordon Pace, University of Malta, Malta
César Sánchez, Instituto IMDEA Software, Madrid, Spain
Gerardo Schneider, University of Gothenburg, Sweden
Blockchain is a global distributed ledger or database running on millions of devices, in which not just information but anything of value (money, music, art, intellectual property, votes, etc.) can be moved and stored securely and privately. On the blockchain, trust is established through mass (distributed) collaboration. Blockchain has the potential to fundamentally change how we deal not only with financial services but also with more general applications, improving transparency and regulation. Many applications have been proposed, starting with Bitcoin and continuing with smart contracts as introduced in Ethereum.
Smart contracts are software programs that self-execute complex instructions on blockchains. The promise of smart contract technology is to reduce the costs of contracting, enforcing contractual agreements, and making payments, while at the same time ensuring trust and compliance in the absence of a central authority. It is not clear, however, whether this promise can be delivered given the current state of the art and state of practice of smart contracts.
In particular, the two most recent multi-million-dollar Ethereum bugs confirm what the community feared: it is not clear what contracts mean, nor how to ensure that they are reliable and bug-free. This calls for better programming languages with stronger security and privacy guarantees, or for mechanisms that add verification, security and privacy to existing ones.
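The flavour of such bugs can be illustrated without any blockchain machinery. The following toy Python sketch (all names are hypothetical; real Solidity contracts differ in many details) models the reentrancy pattern behind some well-known Ethereum losses: an external call is made before the contract's state is updated, so a malicious callee can re-enter and withdraw the same balance repeatedly.

```python
# Hypothetical toy model (plain Python, not Solidity) of a reentrancy-style flaw.

class Wallet:
    def __init__(self, balances):
        self.balances = dict(balances)
        self.paid_out = 0  # total funds the contract has sent

    def withdraw(self, user, on_receive):
        amount = self.balances[user]
        if amount > 0:
            self.paid_out += amount  # "send" the funds first ...
            on_receive(amount)       # ... handing control to the callee ...
            self.balances[user] = 0  # ... and only then update state (bug!)

wallet = Wallet({"attacker": 10})
calls = []

def reenter(amount):
    calls.append(amount)
    if len(calls) < 3:  # re-enter before the balance has been zeroed
        wallet.withdraw("attacker", reenter)

wallet.withdraw("attacker", reenter)
print(wallet.paid_out)  # 30: a 10-unit balance was withdrawn three times
```

Zeroing the balance before the external call removes the flaw; verification techniques for smart contracts aim to catch exactly this kind of ordering error mechanically.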
In this track we encourage contributions on topics related, but not limited, to:
- Research on different languages for expressing smart contracts (e.g., Solidity).
- Research on the use of formal methods for specifying and verifying smart contracts (both statically and at runtime).
- Surveys and SoK about security and privacy issues related to smart contract technologies.
- New applications based on smart contracts.
- Descriptions of challenges and research directions for the future development of better smart contracts.
2) Engineering of Digital Twins for Cyber-Physical Systems
John Fitzgerald email@example.com
Pieter Gorm Larsen firstname.lastname@example.org
Tiziana Margaria University of Limerick email@example.com
Jim Woodcock firstname.lastname@example.org
Ensuring the dependability of Cyber-Physical Systems (CPSs) poses challenges for model-based engineering, stemming from the semantic heterogeneity of the models of computational, physical and human processes, and from the range of stakeholders involved. We argue that delivering such dependability requires a marriage of multi-disciplinary models developed during design with models derived from real operational data. Assets developed during design thus become the basis of a learning digital twin, able to support decision making both in redesign and in responsive operation.
Starting from an open integrated toolchain that leverages formal models for CPS design, we consider the extension of this concept towards digital twins. Gartner ranks digital twins among its ten strategically most important technologies for 2019.
A digital twin is a virtual replica of a real system (usually called the physical twin) that can be used to inform decision making in operation and design. Digital twins combine multi-models developed during design processes with models derived by Machine Learning (ML) from data acquired from the physical twin. Such twins are thus described as perpetually learning, in the sense that operational data is used to tune the multi-models, making them a more faithful reflection of the CPS and forming a sound basis for decisions about possible responses to dependability threats.
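As a hedged illustration of this perpetual-learning loop (the decay model and all names are hypothetical), the sketch below tunes a parameter of a simple design-time model from data observed on the physical twin:

```python
import math

# A design-time model x(t) = x0 * exp(-k * t) with nominal parameter k = 1.0
# is tuned from operational data, so the digital twin tracks its physical twin.

def model(k, x0, t):
    return x0 * math.exp(-k * t)

def tune(ts, xs, x0):
    """Least-squares fit of k on the log scale: log(x / x0) = -k * t."""
    ys = [math.log(x / x0) for x in xs]
    return -sum(t * y for t, y in zip(ts, ys)) / sum(t * t for t in ts)

# Operational data from the physical twin (synthesised here with a true
# parameter k = 1.3, while the design-time nominal value was k = 1.0).
ts = [0.1 * i for i in range(1, 11)]
observed = [model(1.3, 5.0, t) for t in ts]

k_tuned = tune(ts, observed, 5.0)
print(round(k_tuned, 3))  # 1.3: the tuned model now matches the observations
```

Real digital twins involve heterogeneous multi-models and noisy data, but the pattern is the same: operational data closes the loop back into the design-time model.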
From a scientific perspective, the key research questions are:
- what foundations are needed for a well-founded digital twin,
- where the limits are for such digital twins,
- when it is worthwhile constructing them and
- what value can be expected from them.
Dependable operation of CPSs requires both the ability to address the consequences of evolving system components, and the ability to explore and identify optimal changes that do not unduly compromise overall dependability. This combination of prediction and response alongside support for informed decision-making and redesign by humans requires both the data derived from operations and the models developed in design. Tackling the challenges of CPS design thus requires a marriage of both descriptive multi-models of the type that might be developed in a design process, and inductive models derived from data acquired during operation. This combination of models, cutting across formalisms as well as across design and operation, has the potential to form a learning digital twin for a CPS, enabling off-line and on-line decision-making.
The goal of this track is to discuss how one can enable the well-founded engineering of such learning digital twins for dependable CPSs.
3) Verification and Validation of Concurrent and Distributed Systems
Cristina Seceleanu email@example.com
Marieke Huisman firstname.lastname@example.org
Concurrent and distributed systems are becoming omnipresent: concurrency and distribution are necessary to fulfil the performance requirements of modern software, and they often fit naturally with the underlying application domain. However, concurrency and distribution add considerable extra complexity to systems and allow many kinds of errors that cannot occur in sequential software. As Leslie Lamport, known for his seminal work in distributed systems, famously said: “A distributed system is one in which the failure of a computer you didn’t even know existed can render your own computer unusable” (quoted in Caitie McCaffrey, “The Verification of a Distributed System: A Practitioner’s Guide to Increasing Confidence in System Correctness”). Similarly, in concurrent systems an error might occur in one execution and then disappear in the next execution of the system.
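Such heisenbugs can be made tangible in a few lines. In this hedged Python sketch (a toy illustration, not a verification technique), a barrier forces the unfortunate interleaving so the lost update shows up deterministically; without it, the bug would appear in some runs and vanish in others:

```python
import threading
import time

counter = 0
barrier = threading.Barrier(2)

def racy_increment():
    global counter
    barrier.wait()    # both threads read the counter ...
    tmp = counter
    time.sleep(0.05)  # ... before either of them writes it back
    counter = tmp + 1

threads = [threading.Thread(target=racy_increment) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
racy_result = counter
print(racy_result)  # 1, not 2: one increment was lost

# With a lock, the read-modify-write is atomic and the result is always 2.
counter = 0
lock = threading.Lock()

def safe_increment():
    global counter
    with lock:
        tmp = counter
        time.sleep(0.05)
        counter = tmp + 1

threads = [threading.Thread(target=safe_increment) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 2
```

Verification techniques for concurrency aim to explore all such interleavings systematically, rather than hoping the bad one happens to occur during testing.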
Given such a pessimistic outlook, how can one even start thinking about verification of concurrent and distributed systems? How can verification techniques for concurrent systems benefit from verification techniques for distributed systems, and vice versa?
This track focuses on providing answers to these questions, by inviting submissions that propose models, techniques, and tools for the rigorous analysis of any sort of concurrent and distributed systems. During the session, we plan to have presentations and discussions not only about the individual verification techniques, but also about how they can be combined.
4) Modularity and (De-)composition in Verification
Reiner Hähnle (TU Darmstadt),
Eduard Kamburjan (TU Darmstadt),
Thomas Thüm (TU Braunschweig)
Achieving modularity – the structured composition and decomposition of systems – is a key technique in engineering disciplines and a cornerstone of computer science. A multitude of approaches to ensure modularity have been proposed, from object orientation, via structured concurrency models, to delta-oriented programming for variability.
Modularity gives the overall system a structure that can be exploited by automatic tools. Many modularity approaches are based on formal techniques that follow existing modularity patterns in verification, for example, contracts and invariants.
When modularity is not strictly demanded by a system, formal verification techniques can be used to check or enforce additional modularity patterns, for example, choreographies or behavioral types.
Modularity in formal verification goes beyond exploiting modularity patterns: a suitable specification language must also be found.
Finally, formal verification tools themselves can be modular and composed of subsystems and/or sub-logics for different aspects.
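As a minimal sketch of the contract-based pattern mentioned above (the decorator and all names are hypothetical, standing in for a real specification language), each function is checked against its own pre/postcondition, and callers are reasoned about using the contract alone, never the callee's implementation:

```python
# Minimal design-by-contract sketch: contracts are checked at the interface,
# so each function can be analysed in isolation.

def requires_ensures(pre, post):
    def wrap(f):
        def checked(*args):
            assert pre(*args), "precondition violated"
            result = f(*args)
            assert post(result, *args), "postcondition violated"
            return result
        return checked
    return wrap

@requires_ensures(pre=lambda xs: len(xs) > 0,
                  post=lambda r, xs: r in xs and all(r <= x for x in xs))
def minimum(xs):
    return min(xs)  # implementation may change; the contract is the interface

# The caller is analysed against minimum's contract only: it must establish
# the precondition (non-empty input) and may assume the postcondition.
def clamp_from_below(xs, v):
    return max(v, minimum(xs))

print(clamp_from_below([3, 1, 4], 2))  # 2
```

Here the contracts are checked at runtime; deductive verifiers discharge the same obligations statically, which is precisely what makes the decomposition modular.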
The track will bring together researchers from different areas of verification, in particular programming language theory, logic and formal methods, to discuss
(1) where current challenges in modularity for verification lie,
(2) whether modular verification techniques in different areas can benefit from each other, and
(3) how verification tools and techniques can be modularized.
5) Software Verification Tools
Markus Schordan (Lawrence Livermore National Laboratory, CA, USA)
Dirk Beyer (LMU Munich, Germany)
Irena Boyanova (NIST, USA)
In the last few years, tools for program analysis and verification have become considerably more mature, and several competitions comparatively evaluate the implemented analyses on a given set of benchmarks. The comparison of the analyses focuses either on the analysis results themselves (verification of specified properties) or on their impact on a client analysis.
This track is concerned with methods for evaluating and comparing analysis and verification tools, and with determining their correctness. Therefore, the validation of the results of analysis tools is also considered. Of particular interest are verification tools that can be applied to themselves.
The topics of interest are:
- Reuse of verification results and combination of multiple verifiers using conditional model checking
- Analysis benchmarking and experimental evaluation of analysis accuracy
- Verification of parallel programs (e.g., data races in shared-memory parallel programs)
- Identification of undefined behavior in programs (e.g. C and C++)
- Validation of verification witnesses
- Elimination of spurious counterexamples and false positives
- Soundness of program analysis for incomplete code (external function calls, etc.)
- Interoperability of analysis tools
- Combination of techniques (e.g., model checking with theorem proving)
- Approaches for evaluating interactive verification tools
6) X-by-Construction: Correctness meets Probability
Maurice H. ter Beek
Bruce W. Watson
Correctness-by-Construction (C-by-C) approaches the development of software (systems) as a true form of Engineering, with a capital ‘E’. C-by-C advertises a stepwise refinement process from specification to code, ideally by C-by-C design tools that automatically generate error-free software (system) implementations from rigorous and unambiguous specifications of requirements. Afterwards, testing only serves to validate the C-by-C process rather than to find bugs. (Of course, bugs might still be present outside the boundaries of the verified system: in libraries, compilers, hardware, the C-by-C design tools themselves, etc.).
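A toy sketch of the idea (all names hypothetical; real C-by-C tools discharge such obligations by proof rather than by runtime checks): an abstract specification is refined into executable code, with the refinement obligation relating the two stated explicitly.

```python
# Abstract specification: the result is sorted and a permutation of the input.
def satisfies_spec(xs, candidate):
    return (list(candidate) == sorted(candidate)
            and sorted(candidate) == sorted(xs))

# Concrete refinement of the specification, derived stepwise.
def insertion_sort(xs):
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

data = [3, 1, 2]
result = insertion_sort(data)
assert satisfies_spec(data, result)  # the refinement obligation, checked here
print(result)  # [1, 2, 3]
```

In a genuine C-by-C development, each refinement step carries such an obligation, proved once and for all, so the final code is correct by construction rather than by test.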
A lot of progress has been made in this domain, suggesting it is time to look beyond correctness and investigate a move from C-by-C to X-by-C, i.e., to also consider non-functional properties. X-by-C is thus concerned with a stepwise refinement process from specification to code that automatically generates software (system) implementations that, by construction, satisfy specific non-functional properties concerning security, dependability, reliability, resource/energy consumption, etc.
We aim to bring together researchers and practitioners who are interested in C-by-C and X-by-C. In line with the growing attention to fault tolerance (prompted by increasingly common failures in hardware and software) and the increasing use of machine learning techniques in modern software systems, two contexts in which guaranteed properties are hard to establish, we particularly want to emphasise X-by-C in the setting of probabilistic properties.
We therefore invite researchers and practitioners from (at least) the following communities to share their views on (moving from C-by-C to) X-by-C:
- People working on system-of-systems, who address modelling and verification (correctness, but also non-functional properties concerning security, reliability, resilience, energy consumption, performance and sustainability) of networks of interacting legacy and new software systems, and who are interested in applying X-by-C techniques in this domain in order to prove potentially probabilistic non-functional properties of systems-of-systems by construction (from their constituent systems satisfying these properties).
- People working on quantitative modelling and analysis, e.g., through probabilistic or real-time systems and probabilistic or statistical model checking, in particular in the specific setting of dynamic, adaptive or (runtime) reconfigurable systems with variability. These people work on lifting successful formal methods and verification tools from single systems to families of systems, i.e., modelling and analysis techniques that need to cope with the complexity of systems stemming from behaviour, variability, and randomness, and which focus not only on correctness but also on non-functional properties concerning safety, security, performance or dependability. As such, they may be interested in applying X-by-C techniques in this domain to prove non-functional properties of families of systems by construction (from their individual family members satisfying these properties).
- People working on systems involving components that employ machine learning (ML) or other artificial intelligence (AI) approaches. In these settings, models and behaviour are typically dependent on what is learned from large data sets, and may change dynamically based on yet more data being processed. As a result, guaranteeing properties (whether correctness or nonfunctional ones) becomes hard, and probabilistic reasoning needs to be applied instead with respect to such properties for the components employing ML or AI approaches, and as a consequence, for systems involving such components as well.
- People working on generative software development, who are concerned with the automatic generation of software from specifications given in general formal languages or domain-specific languages, leading to families of related software (systems). Also in this setting, the emphasis so far has typically been on functional correctness, but the restricted scope of the specifications, especially for domain-specific languages, may offer a suitable ground for reasoning about non-functional properties, and for using X-by-C techniques to guarantee such properties.
- People working on systems security, privacy and algorithmic transparency and accountability, who care about more than correctness. The application of X-by-C techniques could provide certain guarantees from the outset when designing critical systems. It could also enforce transparency when developing algorithms for automated decision-making, in particular those based on data analytics thus reducing algorithmic bias by avoiding opaque algorithms.
7) Rigorous Engineering of Collective Adaptive Systems
Rocco De Nicola (IMT Lucca, Italy),
Stefan Jähnichen (TU Berlin, Germany),
Martin Wirsing (LMU München, Germany)
Today's software systems are becoming increasingly distributed and decentralized, and have to adapt autonomously to dynamically changing, open-ended environments. Often the nodes have their own individual properties and objectives, and interactions with other nodes or with humans may lead to the emergence of unexpected phenomena. We call such systems collective adaptive systems. Examples of collective adaptive systems include robot swarms, smart cities, autonomous cars, voluntary peer-to-peer clouds, as well as socio-technical systems and the internet of things.
It is crucial that unexpected and possibly dangerous situations be avoided in such systems. Hence, there is a strong need for methods, tools and techniques that guarantee that systems behave as expected, ensuring safety, security, integrity, availability, correctness, reliability and resilience at once, and that such issues are addressed not in isolation but as a whole, at the system level.
The track on Rigorous Engineering of Collective Adaptive Systems offers a venue for presenting research work aimed at rigorously defined methods for engineering collective adaptive systems in order to face the challenges outlined above. Topics of interest include (but are not limited to):
- Methods and theories for designing, specifying, and analysing collective adaptive systems
- Techniques for programming and operating collective adaptive systems
- Methods and mechanisms for adaptation and dynamic self-expression
- Trustworthiness, validation and verification of collective adaptive systems
- Security and performance of collective adaptive systems
- Techniques to identify collective adaptive systems and their components
8) Automated Verification of Embedded Control Software
Dilian Gurov, KTH Royal Institute of Technology, Stockholm
Paula Herber, University of Münster
Ina Schaefer, TU Braunschweig
The development of embedded control software is driven by high demands with respect to both cost efficiency and quality. The time to market is comparatively short, and the functionality volume is ever-increasing. For example, in 2005, the software in a modern car comprised about 70 MB. Today, we usually have more than 1 GB (i.e., more than 100 million lines of code) distributed over more than a hundred microprocessors. With the current trend towards autonomous driving and the internet of things, these numbers can be expected to increase further. At the same time, embedded control software components often have to meet the highest safety requirements. Besides functional requirements, non-functional requirements such as real-time behaviour, memory and power consumption are crucial. For example, the control part of automotive software is typically classified at the highest Automotive Safety Integrity Levels, that is, ASIL C or ASIL D, according to the ISO 26262 standard for functional safety in road vehicles. ISO 26262 strongly recommends that semi-formal and formal verification be used for such systems.
The major barrier to the adoption of semi-formal and formal verification in industrial practice is the limited degree of automation of formal methods. Formal specifications and models often need to be written manually, and scalable verification techniques typically require tedious user annotations and interactions. The challenges of automated verification for embedded control software are multiplied by the recently incorporated use of statistical methods such as machine learning, by the uncertainties of self-adaptation in dynamic environments, and by increasing security issues in the wake of omnipresent wireless interfaces and the internet of things.
In this track, we investigate how the degree of automation can be increased for the verification of embedded control software systems. We plan to not only discuss the traditional safety goals of embedded control software, but also more recent challenges like the use of machine learning in safety-critical components and security issues arising from, e.g., car-to-car communication and wireless interfaces.
9) Automating Software Re-Engineering
Formal approaches to software analysis and development tend to focus on greenfield scenarios or to look at a given piece of software as a static object. Arguably, dynamic evolution of software is already a much more common and relevant issue, and its importance keeps growing. Key drivers are
- the advent of massively parallel hardware and the desire to optimally exploit it by software,
- the need to diversify and customize, particularly, in IoT and Industry 4.0, and
- novel application scenarios for existing software, fueled by digitalization of everything.
Software refactoring, parallelization, and adaptation become central activities in the value chain: automating them can realize huge gains. Formal approaches to software modeling and analysis are poised to make a substantial contribution, because they are fundamentally concerned with automation and correctness.
In this track we invite researchers with an active interest in the automation of software re-engineering: people working on formal foundations and on tools, as well as practitioners of re-engineering, for example from the HPC or Industry 4.0 domains.
10) 30 years of Statistical Model Checking!
Kim G. Larsen, Aalborg University, Denmark
Axel Legay, UCLouvain, Belgium
Statistical Model Checking (SMC) has been proposed as an alternative that avoids an exhaustive exploration of the state space of the system under verification. The core idea of the approach is to run a number of simulations of the system and then use statistical results to decide whether the system satisfies the property with respect to a given probability; the answer is correct up to some confidence. SMC is generally much faster than exhaustive formal verification techniques, and it is closer to current industrial practice. Moreover, the approach can be used to verify properties that cannot be expressed in the classical temporal logics used in formal verification, as well as to approximate undecidable model checking problems. Impressive results have been obtained in various areas such as energy-centric systems, automotive systems, and systems biology.
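The core loop is small enough to sketch. In the hedged Python example below (all names hypothetical), the number of simulations n is chosen via the Chernoff-Hoeffding bound so that the estimate lies within eps of the true probability with confidence at least 1 - delta:

```python
import math
import random

def smc_estimate(run_once, eps, delta, rng):
    """Estimate P(property holds) from n simulations, with n chosen so that
    P(|p_hat - p| > eps) <= delta by the Chernoff-Hoeffding bound."""
    n = math.ceil(math.log(2 / delta) / (2 * eps ** 2))
    successes = sum(run_once(rng) for _ in range(n))
    return successes / n, n

# Toy stochastic system: each bounded run reaches the "good" state with
# probability 0.8, unknown to the checker.
def run_once(rng):
    return rng.random() < 0.8

p_hat, n = smc_estimate(run_once, eps=0.01, delta=0.05, rng=random.Random(42))
print(n)  # 18445 simulations for eps = 0.01, delta = 0.05
```

Production SMC tools replace the coin flip with bounded simulations of a stochastic model and often use sequential tests to stop early, but the statistical core is this sampling loop.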
This track is the fourth SMC track at ISOLA (previous editions: 2014, 2016, 2018). As for the previous tracks, our track is open to all aspects of SMC. This includes (but is not limited to): applying SMC beyond classical stochastic systems, speeding up simulation, rare events, etc. This year, our objective is to put a strong focus on real-life applications of SMC (robotics, connected cars, …) and on extending SMC’s application beyond classical bounded LTL.
11) From Verification to Explanation
Holger Hermanns and Christel Baier
Universität des Saarlandes and TU Dresden (Germany)
It is becoming the norm that software artefacts participate in actions and decisions that affect humans. This trend has been gaining momentum for decades, and is now amplified considerably by the remarkable abilities of machine-learnt methods.
However, our understanding of what causes a specific automated decision is lagging far behind. Worse, we lack the scientific basis to explain how several such applications interact in cascades of automated decisions. As cyber-physical technology increasingly impacts our lives, the consequences of this gradual loss of understanding are becoming severe.
This track of ISOLA intends to explore how computer-aided verification techniques can be leveraged to master this challenge. We look for algorithmic and tool-supported approaches that aim at making the behaviour of software and cyber-physical systems understandable.