Abstracts
'''1. Eva Darulova (EPFL, Lausanne): Compiling Real Numbers with Precision Guarantees'''
Writing accurate numerical software is hard because of many sources of unavoidable uncertainty, including the finite numerical precision of implementations. We present a programming model where the user writes a program in a real-valued implementation and specification language that explicitly includes different types of uncertainties. We then present a compilation algorithm that generates a finite-precision implementation that is guaranteed to meet the desired precision with respect to real numbers. Our compilation performs a number of verification steps for different candidate precisions. It generates verification conditions that treat all sources of uncertainty in a unified way and encode reasoning about finite-precision roundoff errors as reasoning about real numbers. Such verification conditions can be used as a standardized format for verifying the precision and the correctness of numerical programs. Due to their non-linear nature, precise reasoning about these verification conditions remains difficult and cannot be handled using state-of-the-art SMT solvers alone. We therefore propose a new procedure that combines exact SMT solving over the reals with approximate but sound affine and interval arithmetic. We show that this approach overcomes the scalability limitations of SMT solvers while providing improved precision over affine and interval arithmetic. Our implementation gives promising results on several numerical models, including dynamical systems, transcendental functions, and controller implementations.
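To give a flavour of the interval reasoning that is combined with exact SMT solving over the reals, here is a minimal Python sketch (ours, for illustration only) of interval arithmetic that also carries a roundoff-error term; the interval representation and the error model are simplifying assumptions, not the actual tool.

<pre>
# Minimal sketch of interval arithmetic with a roundoff term (illustrative only).
EPS = 2.0 ** -53  # unit roundoff of IEEE double precision

class Interval:
    def __init__(self, lo, hi, err=0.0):
        self.lo, self.hi, self.err = lo, hi, err  # real-valued range plus accumulated roundoff

    def __add__(self, other):
        lo, hi = self.lo + other.lo, self.hi + other.hi
        # propagated errors plus one rounding of the result
        err = self.err + other.err + max(abs(lo), abs(hi)) * EPS
        return Interval(lo, hi, err)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        lo, hi = min(products), max(products)
        err = (self.err * max(abs(other.lo), abs(other.hi))
               + other.err * max(abs(self.lo), abs(self.hi))
               + max(abs(lo), abs(hi)) * EPS)
        return Interval(lo, hi, err)

x = Interval(1.0, 2.0)
y = Interval(-0.5, 0.5)
r = x * y + x
print(r.lo, r.hi, r.err)  # enclosure of the real result and a bound on the roundoff error
</pre>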
'''2. Ravichandhran Kandhadai Madhavan (EPFL, Lausanne): Symbolic Resource Bound Inference for Functional Programs'''
We present an approach for inferring symbolic resource bounds for purely functional programs consisting of recursive functions, algebraic data types, and nonlinear arithmetic operations. In our approach, the developer specifies the desired shape of the bound as a program expression containing numerical holes, which we refer to as templates. We present a scalable algorithm for computing tight bounds on sequential and parallel execution times by solving for the unknowns in the template. We empirically evaluate our approach on several benchmarks that manipulate complex data structures such as binomial heaps, leftist heaps, red-black trees and AVL trees. Our implementation is able to infer hard, nonlinear symbolic time bounds for our benchmarks that are beyond the capability of existing approaches.
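As a rough illustration of the template idea (the function, the step counter and the holes a and b below are our own toy example in Python, not the actual specification language), a developer would state only the shape of the bound and leave its coefficients as holes for the solver:

<pre>
# Toy example of a resource-bound template with numerical holes (illustrative only).
def insert(x, xs, steps=0):
    """Insert x into a sorted list xs, counting recursive steps."""
    if not xs:
        return [x], steps + 1
    if x <= xs[0]:
        return [x] + xs, steps + 1
    rest, s = insert(x, xs[1:], steps)
    return [xs[0]] + rest, s + 1

# Template provided by the developer: steps(insert(x, xs)) <= a * len(xs) + b,
# where a and b are holes.  The real approach solves for them symbolically;
# here we merely check one candidate solution on a few inputs.
a, b = 1, 1
for xs in ([], [1], [1, 2, 3], [1, 2, 3, 4, 5]):
    _, s = insert(3, xs)
    assert s <= a * len(xs) + b
</pre>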
'''3. Souha Ben Rayana (VERIMAG, Grenoble): Compositional Verification of Component-Based Real-time Systems and Applications'''
The aim of this work is to overcome the state-space explosion problem in the verification of timed systems with a large number of components. Several methods for compositional verification have been proposed for untimed systems. For timed systems, however, an additional difficulty arises: computing the local invariant of one component requires desynchronizing it from the other components, while time synchronization between components must still be taken into account when deriving the global property of the system. To this end, we make use of auxiliary clocks, which we call "history clocks", to automatically generate new invariants. History clocks are intended to capture the constraints induced by the synchronizations between components. This idea enables a purely compositional method for computing global invariants of real-time systems. The method has been implemented in a prototype and successfully evaluated on several case studies; it is currently being implemented as an extension of the D-Finder tool.
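A minimal illustration of the history-clock idea (the Python simulation below is ours and only demonstrates the defining property of a history clock, not the invariant generation scheme itself): the auxiliary clock h_a is reset exactly when interaction a fires, so any component clock x that is reset whenever a fires can never exceed h_a.

<pre>
# Illustrative simulation: x is reset whenever interaction 'a' fires (and possibly
# more often), h_a is reset exactly when 'a' fires, hence x <= h_a always holds.
import random

x, h_a = 0.0, 0.0
for _ in range(10000):
    delay = random.uniform(0.0, 5.0)
    x += delay                      # all clocks advance at the same rate
    h_a += delay
    if random.random() < 0.3:       # interaction 'a' fires
        h_a = 0.0                   # history clock of 'a' is reset
        x = 0.0                     # x is reset on every 'a' ...
    elif random.random() < 0.1:
        x = 0.0                     # ... and possibly on other occasions too
    assert x <= h_a                 # candidate invariant relating x and h_a
</pre>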
'''4. Najah Bensaid (VERIMAG, Grenoble): Information flow security in component-based systems'''
The size and complexity of today's systems and software are continuously increasing, and ensuring information security in these systems is paramount. Information flow security policies are well suited, and widely used, to track the circulation of sensitive information throughout a system. Applying global constraints to the system's information flow makes it possible to ensure the confidentiality and integrity of both events and data. In this talk, I will present an extended framework for automated information flow security in component-based systems following the model-driven security (MDS) approach. The security extension is based on formally proved sufficient syntactic conditions, enabling automatic verification and secure distributed code generation.
'''5. Alexios Leikidis (VERIMAG, Grenoble): Rigorous modeling and validation of sensor network systems'''
Sensor networks have emerged as a dominant technology over the last years. Since they are used in a vast variety of applications, the main challenge is to provide efficient design solutions that ensure limited communication cost and energy consumption, manageable complexity, and a reduced failure rate. A well-known approach that takes all these critical constraints into account is model-based design, since it allows the simulation and validation of complex heterogeneous systems at every stage of development and also optimizes application performance in an effective manner. This talk will present the basic principles of a new approach that applies formal modeling and verification techniques to sensor network systems while guaranteeing the separation of software and hardware concerns. The soundness of the approach is demonstrated by applying it to benchmark embedded networked systems, with consistent results across a variety of well-known communication protocols and standards.
'''6. Amit Kumar Dhar (LIAFA, Paris): Verification of Properties on Flat Counter Systems'''
Checking even simple properties like reachability is known to be undecidable for general counter systems with as few as two counters. However, we can restrict the admitted executions to a suitable subclass and obtain an under-approximation of the executions over which expressive properties can be checked. The subclass of counter systems we consider here are flat counter systems (i.e., the loops in the system do not intersect). Reachability and model-checking problems for flat counter systems are already known to be decidable. But whereas the reachability problem can be shown to be in NP, the best known upper bounds for model-checking problems are towers of exponentials, when such bounds exist at all. Here we investigate and provide new results on the sharp complexity characterisation of model-checking problems for flat counter systems, for both linear-time and branching-time properties expressed in several different specification languages. All our specifications admit arithmetical constraints on counters and can therefore express properties of the counter values encountered during an execution. We show that the complexity of the model-checking problems for a wide variety of logics ranges from NP to PSPACE. This is joint work with Stéphane Demri and Arnaud Sangnier.
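As a small illustration of flatness (the Python encoding of the control graph below is ad hoc and ours, not a format used by the authors): in a flat counter system every control state lies on at most one simple cycle.

<pre>
# Ad-hoc illustration of a flat counter system.
# Edges: source -> list of (guard, update, target).
flat_system = {
    "q0": [("x >= 0",  "x := x + 1", "q0"),   # self-loop on q0
           ("x >= 10", "skip",       "q1")],
    "q1": [("y >= 0",  "y := y + 1", "q1")],  # self-loop on q1; the two cycles share no state
}

# Adding an edge ("true", "skip", "q0") from q1 would create the cycle
# q0 -> q1 -> q0, which overlaps both self-loops: the system would no longer be flat.
</pre>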
'''7. Jad Hamza (LIAFA, Paris): Verifying Eventual Consistency of Optimistic Replication Systems'''
We address the verification problem of eventual consistency for optimistic replication systems. Such systems are typically used to implement distributed data structures over large-scale networks. We introduce a formal definition of eventual consistency that applies to a wide class of existing implementations, including those using speculative executions. Then, we reduce the problem of checking eventual consistency to reachability and model checking problems. This reduction enables the use of existing verification tools for message-passing programs in the context of verifying optimistic replication systems. Furthermore, we derive from these reductions decision procedures for checking eventual consistency of systems implemented as finite-state programs communicating through unbounded unordered channels. This is joint work with Ahmed Bouajjani and Constantin Enea.
'''8. Klaus Gleissenthall (TUM, Munich): Solving Horn Clauses with Cardinality Constraints'''
Verification of quantitative program properties, including bounds on resource usage or quantitative information flow security properties, can often be formalized using cardinality-based arguments. For example, memory consumption can be bounded by the cardinality of the set of positions at which memory is accessed, while the probability of guessing a secret input can be bounded by relying on the cardinality of the set of observable program states that the input leads to. Automatic verification of quantitative properties is a difficult task, as it requires combining first-order theory reasoning about the data manipulated by the program with reasoning about cardinalities. However, we currently lack adequate invariant/interpolant/ranking function generation methods that can deal with cardinality constraints. In this talk I will present an extension of Horn constraints with cardinality operators, together with a corresponding solving method that can be used for the automation of quantitative verification tasks. Our method relies on Barvinok's theory to find solutions to recursion-free Horn constraints, which are then combined into solutions to recursive Horn constraints in a counterexample-guided abstraction refinement loop. Finally, I will present an experimental evaluation of our method. This is joint work with Boris Köpf and Andrey Rybalchenko.
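For illustration (the concrete clauses below are ours, not the syntax used in the talk), a Horn constraint system for bounding memory consumption could track the set A of accessed positions and constrain its cardinality:

<math>\mathit{inv}(s_0, \emptyset)</math>

<math>\mathit{inv}(s, A) \wedge \mathit{step}(s, s') \wedge \mathit{accesses}(s, s', p) \rightarrow \mathit{inv}(s', A \cup \{p\})</math>

<math>\mathit{inv}(s, A) \rightarrow |A| \leq \mathit{bound}</math>

The unknown is the invariant inv; the last clause states the quantitative property, and solving it requires reasoning about the cardinality |A|.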
'''9. Ioan Dragan (Vienna University of Technology): Lingva: Generating and Proving Program Properties using Symbol Elimination'''
We describe the Lingva tool for generating and proving complex program properties. Lingva is based on the recently introduced symbol elimination method and uses saturation-based theorem proving in conjunction with static analysis and symbolic computation methods. We present implementation details and report on a large number of experiments on academic examples and open-source software packages. Our experiments show that Lingva can automatically generate quantified invariants, possibly with alternation of quantifiers, over integers and arrays. Moreover, Lingva can be used to prove program properties expressing the intended behavior of programs. This is joint work with Laura Kovacs (Chalmers University of Technology, Sweden).
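As an example of the kind of property Lingva targets (the invariant below is a generic illustration written by us, not taken from the talk), a loop that copies the positive elements of an array A into an array B could be annotated with a quantified invariant featuring quantifier alternation:

<math>\forall i \, \big( 0 \leq i < j \rightarrow \exists k \, ( 0 \leq k < n \wedge B[i] = A[k] \wedge A[k] > 0 ) \big)</math>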
'''10. Leonardo Alt (University of Lugano): PeRIPLO: A Framework for Producing Effective Interpolants in SAT-Based Software Verification'''
Propositional interpolation is widely used as a means of overapproximation to achieve efficient SAT-based symbolic model checking. Different verification applications exploit interpolants for different purposes; it is unlikely that a single interpolation procedure could provide interpolants that fit all cases. This talk describes the PeRIPLO framework, an interpolating SAT solver that implements a set of techniques to generate and manipulate interpolants for different model checking tasks. The flexibility of the framework is shown in two software bounded model checking applications: verification of a given source code incrementally with respect to various properties, and verification of software upgrades with respect to a fixed set of properties. Both applications use interpolation for generating function summaries. Our systematic experimental investigation shows that the size and logical strength of interpolants significantly affect verification, that these characteristics depend on the role played by the interpolants, and that techniques for tuning size and strength can therefore be used to customize interpolants for different applications.
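To recall what a propositional interpolant is, here is a standard textbook example (ours, for illustration): for the unsatisfiable conjunction of A and B below, the formula I = p is an interpolant, since A implies I, I is inconsistent with B, and I only mentions the variable p shared by A and B. Different interpolation procedures return different such I (stronger or weaker, larger or smaller), which is exactly the degree of freedom PeRIPLO lets applications exploit.

<math>A = p \wedge q, \qquad B = \neg p \wedge r, \qquad I = p: \qquad A \models I, \quad I \wedge B \models \bot, \quad \mathit{vars}(I) \subseteq \mathit{vars}(A) \cap \mathit{vars}(B)</math>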
'''11. Moritz Sinn (Vienna University of Technology): A Simple and Scalable Static Analysis for Bound Analysis and Amortized Complexity Analysis'''
We present the first scalable bound analysis that achieves amortized complexity analysis. In contrast to earlier work, our bound analysis is not based on general-purpose reasoners such as abstract interpreters, software model checkers or computer algebra tools. Rather, we derive bounds directly from abstract program models, which we obtain from programs by comparatively simple invariant generation and symbolic execution techniques. As a result, we obtain an analysis that is more predictable and more scalable than earlier approaches. Our experiments demonstrate that our analysis is fast and at the same time able to compute bounds for challenging loops in a large real-world benchmark. Technically, our approach is based on lossy vector addition systems with states (VASS). Our bound analysis first computes a lexicographic ranking function that proves the termination of a VASS, and then derives a bound from this ranking function. Our methodology achieves amortized analysis based on a new insight into how lexicographic ranking functions can be used for bound analysis.
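A classic shape of loop where amortization matters (our own illustrative Python example, unrelated to the benchmark programs): the inner loop drains a budget that the outer loop fills one unit per iteration, so the total inner work is bounded by n even though a per-iteration analysis would report a quadratic bound.

<pre>
# Illustrative only: inner-loop work is amortized against outer-loop increments.
def work(n):
    budget, inner_total, i = 0, 0, 0
    while i < n:
        budget += 1                # the outer loop "pays" one token per iteration
        if i % 3 == 2:             # occasionally drain the accumulated budget
            while budget > 0:
                budget -= 1
                inner_total += 1
        i += 1
    return inner_total

for n in range(50):
    assert work(n) <= n            # amortized bound: total inner work is at most n
</pre>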
'''12. Ivan Radicek (Vienna University of Technology): Feedback Generation for Performance Problems in Introductory Programming Assignments'''
Providing feedback on programming assignments manually is a tedious, error-prone, and time-consuming task. In this paper, we motivate and address the problem of generating feedback on performance aspects of introductory programming assignments. We studied a large number of functionally correct student solutions to introductory programming assignments and observed: (1) There are different algorithmic strategies, with varying levels of efficiency, for solving a given problem. These different strategies merit different feedback. (2) The same algorithmic strategy can be implemented in countless different ways, which are not relevant for reporting feedback on the student program. We propose a lightweight programming language extension that allows a teacher to define an algorithmic strategy by specifying certain key values that should occur during the execution of an implementation. We describe a dynamic-analysis-based approach to test whether a student's program matches a teacher's specification. Our experimental results illustrate the effectiveness of both our specification language and our dynamic analysis. On one of our benchmarks, consisting of 2316 functionally correct implementations of 3 programming problems, we identified 16 strategies that we were able to describe using our specification language (in 95 minutes, after inspecting 66, i.e., around 3%, of the implementations). Our dynamic analysis correctly matched each implementation with its corresponding specification, thereby automatically producing the intended feedback.
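A rough sketch of the key-value idea (the key_value helper and the matching function below are our own Python illustration; the actual language extension and dynamic analysis are richer): the teacher specifies which intermediate values characterise a strategy, and the analysis checks whether a student's run produces exactly that sequence of values.

<pre>
# Illustrative sketch: key values observed during execution identify a strategy.
observed = []

def key_value(v):
    observed.append(v)             # record a key value during execution
    return v

def student_sum(n):                # student program: sums 1..n by accumulation
    total = 0
    for i in range(1, n + 1):
        total = key_value(total + i)   # key values: the partial sums
    return total

def matches(teacher_spec, n):
    observed.clear()
    student_sum(n)
    return observed == teacher_spec(n)

# Teacher's specification of the "iterative accumulation" strategy.
teacher_partial_sums = lambda n: [sum(range(1, i + 1)) for i in range(1, n + 1)]
print(matches(teacher_partial_sums, 5))   # True: this student follows the strategy
</pre>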
'''13. Marco Gario (FBK, Trento): Using Temporal Epistemic Logic for Reasoning about Fault Management'''
Fault Detection and Identification (FDI) components are crucial for the success of autonomous systems (e.g., satellites, space rovers). ASLk [1] is a formal framework for the specification, validation, verification and synthesis of FDI components. ASLk is equipped with a formal semantics expressed in terms of Temporal Epistemic Logic. To enable the verification and validation (V&V) of ASLk specifications, efficient satisfiability and model checking algorithms and implementations are needed. In this talk, we provide an overview of ASLk and of the state of the art in reasoning tools for Temporal Epistemic Logic. Additionally, we discuss the limitations of current tools with respect to the reasoning capabilities required to perform V&V on ASLk specifications.
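To give a flavour of such specifications (the formula below is a generic diagnosis property written by us in temporal epistemic logic, not an ASLk formula from the talk): whenever a fault occurs, the FDI component should eventually know that a fault has occurred, where fault_occurred is a proposition that stays true once a fault has happened.

<math>\mathbf{G} \, \big( \mathit{fault} \rightarrow \mathbf{F} \, K_{\mathit{FDI}} \, \mathit{fault\_occurred} \big)</math>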
'''14. Benjamin Bittner (FBK, Trento): Failure Propagation Analysis for Safety-Critical Systems'''
A fundamental aspect of safety-critical systems is the possibility of failure occurrences and the resulting off-nominal situations. These can pose threats, for instance, to mission objectives or even human safety. It is thus of prime importance to understand how such systems behave under failures in order to manage the associated risks accordingly. Recently the TFPG formalism (Timed Failure Propagation Graph) has been introduced to model how failures affect various system components over time, but there is still a lack of automated tools for working with TFPGs, especially with respect to formal properties. One open problem is how to verify whether a TFPG is complete with respect to its system of reference, that is, whether all real failure traces of the system are covered by the TFPG abstraction. If a TFPG is incomplete, some specific propagation paths might be missed. A second reasoning problem, which to some extent assumes that a given TFPG is complete with respect to the system, is to understand whether the information provided by the TFPG is suitable for diagnosis. For instance, it might be important to always distinguish certain classes of failures, but the TFPG may contain too little diagnostic information to do so in all cases. In this talk we provide an overview of the TFPG formalism and describe algorithmic approaches based on symbolic model checking techniques for the aforementioned reasoning problems. We also describe the relationship of these techniques with fault-tree analysis (FTA) and failure mode and effects analysis (FMEA).
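As a structural illustration (the fragment below is our own simplified Python encoding, not the tool's input format), a TFPG is essentially a graph whose edges carry propagation time bounds; the reasoning problems above ask whether such a graph covers all failure behaviours of the system and carries enough information for diagnosis.

<pre>
# Simplified illustration of a TFPG fragment: failure modes propagate to
# discrepancies within [min, max] time bounds.
tfpg = {
    "pump_failure":  [("low_pressure",    0.0, 2.0)],   # reaches low_pressure within 2 time units
    "low_pressure":  [("engine_shutdown", 1.0, 5.0)],
}
</pre>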
'''15. Tomer Kotek (Vienna University of Technology): Shape and Content: Incorporating Domain Knowledge into Shape Analysis'''
The verification community has studied dynamic data structures primarily in a bottom-up way, by analyzing pointers and the shapes induced by them. Recent work in fields such as separation logic has made significant progress in extracting shapes from program source code. Many real-world programs, however, manipulate complex data whose structure and content are most naturally described by formalisms from object-oriented programming and databases. In this paper, we attempt to bridge the conceptual gap between these two communities. Our approach is based on description logic, a widely used knowledge representation paradigm which gives a logical underpinning to diverse modeling frameworks such as UML and ER. We show how description logic can be used on top of an existing shape analysis to add content descriptions to the shapes. Technically, we assume that we have separation logic shape invariants obtained from a shape analysis tool, and requirements on the program data in terms of description logic. We show that the two-variable fragment of first-order logic with counting and trees (whose decidability was proved at LICS 2013) can be used as a joint framework to embed suitable fragments of description logic and separation logic.
'''16. Tewodros Beyene (TUM, Munich): Compositional Repair of Reactive Programs'''
I present a compositional approach to the automatic repair of infinite-state, procedural, reactive programs. The input to our approach is a partial program obtained by deleting a set of suspicious expressions from an erroneous program. The objective is to synthesize a complete program that satisfies a temporal requirement in spite of an adversarial environment. Following prior work, the problem is modeled as a game between two players - the Program and the Environment - who take turns changing the system state. Repair involves the computation of a strategy under which the Program can always meet the requirements. Our solution to the problem consists of a set of deductive proof rules for computing winning strategies for the Program in a procedure-modular way. The rules extend seamlessly to recursive programs, and are sound and relatively complete. The rules are automated using an off-the-shelf Horn constraint solver that allows existential quantification in clause heads. The practical promise of the method is demonstrated through case studies involving a set of challenging device driver examples.
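For illustration (the clause below is a schematic example we add; it is not taken from the talk), existential quantification in clause heads is what lets such constraints express the existence of a Program move, i.e. a strategy, rather than merely an invariant: from any safe state, after any Environment move, the Program can move back to a safe state.

<math>\mathit{safe}(s) \wedge \mathit{env}(s, s') \rightarrow \exists s''. \; \mathit{prog}(s', s'') \wedge \mathit{safe}(s'')</math>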
'''17. Florian Lonsing (Vienna University of Technology): Incremental QBF Solving'''
Propositional logic (SAT) has been widely applied to encode problems from model checking, formal verification, and synthesis, for example. In these applications, often a sequence of closely related formulae must be solved. Incremental solving aims at using information learned from one formula in the process of solving the next formulae. Motivated by the success of incremental SAT solving, we consider the problem of incrementally solving a sequence of quantified Boolean formulae (QBF). We adopt ideas from incremental SAT solving and present an approach to incremental QBF solving which is application-independent and hence applicable to QBF encodings of arbitrary problems. We implemented this approach in our incremental search-based solver DepQBF. The API of the solver provides the user with functions to manipulate the input formula by incrementally adding and deleting clauses and variables. As an additional API feature, the user can add and delete sets of clauses by means of push and pop operations. This way, the set of clauses of the input formula is organized as a stack. We report on challenges in the context of incremental QBF solving, like maintaining learned information across different calls of the solver, and sketch usage scenarios by means of examples. Based on first experimental results, we conclude that incremental QBF solving has the potential to improve QBF-based workflows in many application domains.
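To illustrate the push/pop workflow described above (the Python-style wrapper below is purely hypothetical and only mirrors the described stack-based organization of clauses; it is not the actual DepQBF interface), a user might solve a base formula, push a set of query-specific clauses, then pop them and continue with different clauses:

<pre>
# Hypothetical wrapper illustrating an incremental QBF workflow with a clause
# stack; the names below are illustrative and are NOT the real DepQBF API.
class IncrementalQBF:
    def __init__(self):
        self.prefix = []            # quantifier prefix, e.g. ("exists", [1, 3])
        self.stack = [[]]           # stack of clause frames; bottom frame = base formula

    def add_quantifier_block(self, kind, variables):
        self.prefix.append((kind, variables))

    def add_clause(self, clause):
        self.stack[-1].append(clause)   # clause goes into the current frame

    def push(self):
        self.stack.append([])           # open a new frame for query-specific clauses

    def pop(self):
        self.stack.pop()                # discard the most recently pushed frame

    def solve(self, backend):
        clauses = [c for frame in self.stack for c in frame]
        return backend(self.prefix, clauses)   # plug in any QBF solver here

def dummy_backend(prefix, clauses):     # stand-in for a real solver call
    return "%d blocks, %d clauses" % (len(prefix), len(clauses))

s = IncrementalQBF()
s.add_quantifier_block("exists", [1])
s.add_quantifier_block("forall", [2])
s.add_clause([1, 2])
print(s.solve(dummy_backend))           # base formula
s.push(); s.add_clause([-1, -2])
print(s.solve(dummy_backend))           # base formula plus query-specific clauses
s.pop()
print(s.solve(dummy_backend))           # back to the base formula
</pre>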
'''18. Corneliu Popeea (TUM, Munich): Reduction for compositional verification of multi-threaded programs'''
Automated verification of multi-threaded programs requires keeping track of a very large number of possible interactions between the program threads. Different reasoning methods have been proposed that avoid the explicit enumeration of all thread interleavings, e.g., Lipton's theory of reduction or the Owicki-Gries method for compositional reasoning; however, their synergistic interplay has not yet been fully explored. In this paper we explore the applicability of the theory of reduction for pruning equivalent interleavings in the automated verification of multi-threaded programs with infinite state spaces. We propose proof rules for safety and termination of multi-threaded programs that integrate into an Owicki-Gries-based compositional verifier. The verification conditions of our method are Horn clauses, thus facilitating automation by using off-the-shelf Horn clause solvers. We present preliminary experimental results that show the advantages of our approach when compared to state-of-the-art verifiers of C programs. This is joint work with Andrey Rybalchenko and Andreas Wilhelm.
'''19. Frantisek Blahoudek (Masaryk University, Brno): Chasing the Best Büchi Automata for NestedDFS-Based Model Checking'''
The automata-based LTL model checking process translates an LTL formula into a Büchi automaton (BA) and then searches the product of the automaton and the model for accepting cycles. The automaton used can heavily influence the running time of the search. The authors of LTL-to-BA translators are usually concerned mainly with the size of the state space of the resulting BA. As has already been shown, determinism also plays a role in the size of the resulting product with the model. We performed experiments on real model checking tasks with the Spin model checker, using different LTL translators, to discover which properties of the BA actually matter the most. Our experiments reveal that there are further properties of the BA that have an impact on Spin's running time. We found that BAs that are good for tasks where there is an accepting cycle (and therefore Spin does not build the whole product) are not necessarily good for tasks without any accepting cycle. The talk summarises our experiments, suggests several improvements for LTL-to-BA translators, and describes our ranking method for BAs with and without any knowledge about the model being verified.
'''20. Jan Otop and Roopsha Samanta (IST Austria): Robustness Analysis of Transducers'''
Many important functions over strings can be represented as finite-state string transducers. In this talk, we study the problem of checking if such a function/transducer is "robust" to uncertainty. Our definition of robustness is inspired by Lipschitz continuity - a transducer is robust if the change in its output is proportional to the change in its input. In the first part of this talk, we focus on bounded input perturbation, and present automata-theoretic decision procedures for checking robustness of transducers w.r.t. two popular string distance metrics. These decision procedures are based on reducing the problem of robustness analysis of a transducer to the problem of checking the emptiness of a reversal-bounded counter machine. In the second part of this talk, we focus on unbounded input perturbation. We identify classes of transducers and distance metrics for which robustness analysis is decidable and present examples of such systems. We also show that robustness analysis of arbitrary transducers is undecidable.
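The Lipschitz-style robustness condition can be written as follows (our paraphrase of the informal definition above, with a proportionality constant K and input/output distance metrics d_in and d_out):

<math>\forall x, y: \; d_{\mathit{out}} \big( T(x), T(y) \big) \; \leq \; K \cdot d_{\mathit{in}}(x, y)</math>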
'''21. Thorsten Tarrach (IST Austria): Regression-free Synthesis for Concurrency'''
While fixing concurrency bugs, program repair algorithms may introduce new concurrency bugs. We present a new repair algorithm that avoids such regressions. The solution space is given by the program transformations we consider in the repair process. These include reordering of instructions within a thread, inserting atomic sections, and inserting wait-notify mechanisms. The new algorithm learns constraints on the space of candidate solutions, from both positive examples (error-free traces) and counterexamples (error traces). From counterexamples, the algorithm learns a constraint necessary to remove them. From positive examples, it learns constraints that are necessary in order to prevent the repair from turning the trace into a counterexample. Our implementation is able to fix the bugs while avoiding regressions.
Revision as of 13:50, 28 March 2014
1. Eva Darulova (EPFL,Lausanne): Compiling Real Numbers with Precision Guarantees
Writing accurate numerical software is hard because of many sources of unavoidable uncertainties, including finite numerical precision of implementations. We present a programming model where the user writes a program in a real-valued implementation and specification language that explicitly includes different types of uncertainties. We then present a compilation algorithm that generates a finite-precision implementation that is guaranteed to meet the desired precision with respect to real numbers. Our compilation performs a number of verification steps for different candidate precisions. It generates verification conditions that treat all sources of uncertainties in a unified way and encode reasoning about finite-precision roundoff errors into reasoning about real numbers. Such verification conditions can be used as a standardized format for verifying the precision and the correctness of numerical programs. Due to their non-linear nature, precise reasoning about these verification conditions remains difficult and cannot be handled using state-of-the art SMT solvers alone. We therefore propose a new procedure that combines exact SMT solving over reals with approximate and sound affine and interval arithmetic. We show that this approach overcomes scalability limitations of SMT solvers while providing improved precision over affine and interval arithmetic. Our implementation gives promising results on several numerical models, including dynamical systems, transcendental functions, and controller implementations.
2. Ravichandhran Kandhadai Madhavan (EPFL,Lausanne): Symbolic Resource Bound Inference for Functional Programs
We present an approach for inferring symbolic resource bounds for purely functional programs consisting of recursive functions, algebraic data types and nonlinear arithmetic operations. In our approach, the developer specifies the desired shape of the bound as a program expression containing numerical holes which we refer to as templates. We present a scalable algorithm for computing tight bounds for sequential and parallel execution times by solving for the unknowns in the template. We empirically evaluate our approach on several benchmarks that manipulate complex data structures such as binomial heap, lefitist heap, red-black tree and AVL tree. Our implementation is able to infer hard, nonlinear symbolic time bounds for our benchmarks that are beyond the capability of the existing approaches.
3. Souha Ben Rayana (VERIMAG,Grenoble): Compositional Verification of Component-Based Real-time Systems and Applications.
The aim of this work is to overcome the state-space explosion problem related to the verification of timed systems with great number of components. Some methods for compositional verification where proposed for untimed systems. However, for timed systems, another important issue is encountered. In fact, the main difficulty with compositional verification of timed system is that calculating the local invariant of one component requires the desynchronization from other components. In the other side, time synchronization between components should be further considered in the calculation of the global property of the system. o this end, we make use of auxiliary clocks, which we call « history clocks », to automatically generate new invariants. History clocks are intended to capture the constraints induced by the synchronizations between components. This idea allows a pure compositional method for the calculation of global invariants of real-time systems. The method has been implemented in a prototype and was successfully experimented on several case studies. Now, it is being implemented as an extension of the D-Finder tool.
4. Najah Bensaid (VERIMAG,Grenoble): Information flow security in component-based systems
The amount and complexity of nowadays conceived systems and software knows a continuous increase and ensuring information security in these systems is also paramount. Information flow security policies are very adequate and quite used to track the circulation of sensitive information throughout the system. The application of global constraints on the systems information flow allows to ensure event and data confidentiality and integrity. In this talk, I will present an extended framework for automated information flow security in component-based systems following the model-driven security (MDS) approach. The security extension is based on sufficient syntactic constraints formally proved, ensuring automatic verification and secure distributed code generation.
5. Alexios Leikidis (VERIMAG,Grenoble): Rigorous modeling and validation of sensor network systems.
Sensor networks have emerged as a dominant technology over the last years. Since they are used in a vast variety of applications, the main arising challenge is to provide efficient design solutions ensuring limited communication cost and energy consumption, manageable complexity and reduced failure rate. A well-known formalism, considering all these critical constraints, is model-based design since it allows the simulation and validation of complex heterogeneous systems in every stage of the development, but also optimizes the application performance in an effective manner. This talk will present the basic principles of a new approach applying formal modeling and verification techniques in sensor network systems, guaranteeing the separation of software and hardware concerns. The soundness of this approach is proved by the application in benchmark embedded networked systems, indicating similar results for a variety of well-known communication protocols and standards.
'''6. Amit Kumar Dhar (LIAFA,Paris): Verification of Properties on Flat Counter Systems'''
Checking even simple properties such as reachability is known to be undecidable for general counter systems with as few as two counters. However, we can restrict the admitted executions to a suitable subclass to obtain an under-approximation of the executions over which we can check expressive properties. The subclass of counter systems we consider here are flat counter systems (i.e., systems in which the loops do not intersect). The reachability and model-checking problems for flat counter systems are already known to be decidable. However, whereas reachability can be shown to be in NP, the best known upper bounds for the model-checking problems, when such bounds exist, consist of a tower of exponentials. Here we investigate and provide new results on the sharp complexity characterisation of model-checking problems for flat counter systems with both linear-time and branching-time properties expressed in several different specification languages. All our specifications admit arithmetical constraints on counters and can hence express properties on the counter values encountered during an execution. We show that the complexity of the model-checking problems for a wide variety of logics ranges from NP to PSPACE. This is joint work with Stéphane Demri and Arnaud Sangnier.
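For instance, a specification of the kind considered here (an illustrative formula chosen for this summary, not one taken from the paper) can mix temporal operators with arithmetical constraints over counters <math>x_1</math> and <math>x_2</math>:

<math>\mathbf{F}\,\bigl( x_1 \geq 2 x_2 \wedge \mathbf{G}\,(x_1 + x_2 \leq 10) \bigr)</math>

stating that eventually the first counter is at least twice the second and that, from that point on, the sum of the two counters never exceeds 10.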
'''7. Jad Hamza (LIAFA,Paris): Verifying Eventual Consistency of Optimistic Replication Systems'''
We address the problem of verifying eventual consistency of optimistic replication systems. Such systems are typically used to implement distributed data structures over large scale networks. We introduce a formal definition of eventual consistency that applies to a wide class of existing implementations, including the ones using speculative executions. Then, we reduce the problem of checking eventual consistency to reachability and model checking problems. These reductions enable the use of existing verification tools for message-passing programs in the context of verifying optimistic replication systems. Furthermore, we derive from these reductions decision procedures for checking eventual consistency of systems implemented as finite-state programs communicating through unbounded unordered channels. This is joint work with Ahmed Bouajjani and Constantin Enea.
'''8. Klaus Gleissenthall (TUM,Munich): Solving Horn Clauses with Cardinality Constraints'''
Verification of quantitative program properties, including bounds on resource usage or quantitative information flow security properties, can often be formalized using cardinality-based arguments. For example, memory consumption can be bounded by the cardinality of the set of positions at which memory is accessed, while the probability of guessing a secret input can be bounded by relying on the cardinality of the set of observable program states that the input leads to. Automatic verification of quantitative properties is a difficult task, as it requires a combination of first-order theory reasoning about data manipulated by the program with reasoning about cardinalities. However, we currently lack adequate invariant/interpolant/ranking function generation methods that can deal with cardinality constraints. In this talk I will present an extension of Horn constraints with cardinality operators together with a corresponding solving method that can be used for the automation of quantitative verification tasks. Our method relies on Barvinok's theory to find solutions to recursion-free Horn constraints, which are then combined into solutions to recursive Horn constraints in a counterexample-guided abstraction refinement loop. Finally, I will present an experimental evaluation of our method. This is joint work with Boris Köpf and Andrey Rybalchenko.
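To give a flavour of such constraints (an editorial example, not one of the paper's benchmarks), consider a loop that, for 0 ≤ i < n, may add position i to the set S of accessed memory locations; its behaviour and the desired memory bound can be written as Horn clauses whose last clause involves a cardinality:

<math>inv(i, S) \leftarrow i = 0 \wedge S = \emptyset \wedge 0 \leq n</math>

<math>inv(i', S') \leftarrow inv(i, S) \wedge i < n \wedge i' = i + 1 \wedge S' = S \cup \{i\}</math>

<math>|S| \leq n \leftarrow inv(i, S)</math>

A solution such as <math>inv(i, S) \equiv S \subseteq \{0, \ldots, i-1\} \wedge i \leq n</math> requires exactly the combination of first-order reasoning about the data and reasoning about cardinalities that the proposed solving method automates.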
'''9. Ioan Dragan (Vienna University of Technology): Lingva: Generating and Proving Program Properties using Symbol Elimination'''
We describe the Lingva tool for generating and proving complex program properties. Lingva is based on the recently introduced symbol elimination method and uses saturation-based theorem proving in conjunction with static analysis and symbolic computation methods. We present implementation details and report on a large number of experiments on academic examples and open-source software packages. Our experiments show that Lingva can automatically generate quantified invariants, possibly with alternation of quantifiers, over integers and arrays. Moreover, Lingva can be used to prove program properties expressing the intended behavior of programs. This is joint work with Laura Kovacs (Chalmers University of Technology, Sweden).
'''10. Leonardo Alt (University of Lugano): PeRIPLO: A Framework for Producing Effective Interpolants in SAT-Based Software Verification'''
Propositional interpolation is widely used as a means of overapproximation to achieve efficient SAT-based symbolic model checking. Different verification applications exploit interpolants for different purposes; it is unlikely that a single interpolation procedure could provide interpolants fit for all cases. This talk describes the PeRIPLO framework, an interpolating SAT solver that implements a set of techniques to generate and manipulate interpolants for different model checking tasks. The flexibility of the framework is shown in two software bounded model checking applications: incremental verification of given source code with respect to various properties, and verification of software upgrades with respect to a fixed set of properties. Both applications use interpolation for generating function summaries. Our systematic experimental investigation shows that the size and logical strength of interpolants significantly affect verification, that these characteristics depend on the role played by interpolants, and that techniques for tuning size and strength can therefore be used to customize interpolants for different applications.
'''11. Moritz Sinn (Vienna University of Technology): A Simple and Scalable Static Analysis for Bound Analysis and Amortized Complexity Analysis'''
We present the first scalable bound analysis that achieves amortized complexity analysis. In contrast to earlier work, our bound analysis is not based on general purpose reasoners such as abstract interpreters, software model checkers or computer algebra tools. Rather, we derive bounds directly from abstract program models, which we obtain from programs by comparatively simple invariant generation and symbolic execution techniques. As a result, we obtain an analysis that is more predictable and more scalable than earlier approaches. Our experiments demonstrate that our analysis is fast and at the same time able to compute bounds for challenging loops in a large real-world benchmark. Technically, our approach is based on lossy vector addition systems with states (VASS). Our bound analysis first computes a lexicographic ranking function that proves the termination of a VASS, and then derives a bound from this ranking function. Our methodology achieves amortized analysis based on a new insight into how lexicographic ranking functions can be used for bound analysis.
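As a small illustration of what "amortized" means in this setting (an editorial example in the spirit of the talk, not one of its benchmarks), consider the loop below: a per-iteration worst case suggests a quadratic bound, yet the total number of inner iterations is at most n, because every inner step consumes an increment of j made by the outer loop:

<pre>
def example(n):
    # The inner loop can run up to n times in a single outer iteration,
    # but its *total* number of iterations over the whole run is at most n:
    # each inner step consumes one increment of j made by the outer loop.
    i, j, inner_total = 0, 0, 0
    while i < n:
        i += 1
        j += 1
        if i % 3 == 0:                 # stands in for a nondeterministic choice
            while j > 0:
                j -= 1
                inner_total += 1
    return inner_total                 # always <= n
</pre>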
'''12. Ivan Radicek (Vienna University of Technology): Feedback Generation for Performance Problems in Introductory Programming Assignments'''
Providing feedback on programming assignments manually is a tedious, error-prone, and time-consuming task. In this paper, we motivate and address the problem of generating feedback on performance aspects in introductory programming assignments. We studied a large number of functionally correct student solutions to introductory programming assignments and observed: (1) There are different algorithmic strategies, with varying levels of efficiency, for solving a given problem. These different strategies merit different feedback. (2) The same algorithmic strategy can be implemented in countless different ways, which are not relevant for reporting feedback on the student program. We propose a lightweight programming language extension that allows a teacher to define an algorithmic strategy by specifying certain key values that should occur during the execution of an implementation. We describe a dynamic analysis based approach to test whether a student's program matches a teacher's specification. Our experimental results illustrate the effectiveness of both our specification language and our dynamic analysis. On one of our benchmarks, consisting of 2316 functionally correct implementations of 3 programming problems, we identified 16 strategies that we were able to describe using our specification language (in 95 minutes, after inspecting 66, i.e., around 3%, of the implementations). Our dynamic analysis correctly matched each implementation with its corresponding specification, thereby automatically producing the intended feedback.
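The following sketch (with invented names and a deliberately trivial "running sum" strategy; it is not the paper's specification language) illustrates the underlying idea: the teacher specifies key values that any implementation of a strategy must produce, and a dynamic analysis checks that the values observed in the student's run contain them in order:

<pre>
def teacher_key_values(xs):
    """Strategy 'running sum': the partial sums must appear, in order."""
    total, keys = 0, []
    for x in xs:
        total += x
        keys.append(total)
    return keys

def student_sum(xs, trace):
    """A student's implementation, instrumented to record observed values."""
    acc = 0
    for x in xs:
        acc = acc + x
        trace.append(acc)            # observation point
    return acc

def matches(keys, trace):
    """Do the key values occur in the trace as a subsequence?"""
    it = iter(trace)
    return all(any(k == v for v in it) for k in keys)

xs, trace = [3, 1, 4, 1, 5], []
student_sum(xs, trace)
print(matches(teacher_key_values(xs), trace))   # True: strategy matched
</pre>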
'''13. Marco Gario (FBK,Trento): Using Temporal Epistemic Logic for Reasoning about Fault Management'''
Fault Detection and Identification (FDI) components are crucial for the success of autonomous systems (e.g., satellites, space rovers). ASLk [1] is a formal framework for the specification, validation, verification and synthesis of FDI components. ASLk is equipped with a formal semantics expressed in terms of Temporal Epistemic Logic. To enable the verification and validation (V&V) of ASLk specifications, efficient satisfiability and model checking algorithms and implementations are needed. In this talk, we provide an overview of ASLk and of the state of the art in reasoning tools for Temporal Epistemic Logic. Additionally, we discuss the limitations of current tools with respect to the reasoning capabilities required to perform V&V on ASLk specifications.
'''14. Benjamin Bittner (FBK,Trento): Failure Propagation Analysis for Safety-Critical Systems'''
A fundamental aspect of safety-critical systems is the possibility of failure occurrences and the resulting off-nominal situations. These can pose threats, for instance, to mission objectives or even to human safety. It is thus of prime importance to understand how such systems behave under failures in order to manage the associated risks accordingly. Recently the TFPG formalism (Timed Failure Propagation Graph) has been introduced to model how failures affect various system components over time, but there is still a lack of automated tools for working with such graphs, especially with respect to formal properties. One open problem is how to verify whether a TFPG is complete with respect to its system of reference, that is, whether all real failure traces of the system are covered by the TFPG abstraction. If a TFPG is incomplete, some specific propagation paths might be missed. A second reasoning problem, which to some extent assumes that a given TFPG is complete with respect to the system, is to understand whether the information provided by the TFPG is suitable for diagnosis. For instance, it might be important to always distinguish certain classes of failures, but the TFPG may contain too little diagnostic information to do so in all cases. In this talk we provide an overview of the TFPG formalism and describe algorithmic approaches based on symbolic model checking techniques for the aforementioned reasoning problems. We also describe the relationship of these techniques with fault tree analysis (FTA) and failure mode and effects analysis (FMEA).
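As a minimal illustration of the formalism (an editorial sketch with invented component names; the talk's contribution is the completeness and diagnosability analyses, not this encoding), a TFPG can be pictured as a graph whose edges carry propagation-delay intervals, against which observed propagations can be checked:

<pre>
# Nodes are failure modes / discrepancies; each edge carries a [t_min, t_max]
# propagation delay (in seconds).  Names are purely illustrative.
tfpg = {
    ("pump_failure", "low_pressure"): (1.0, 3.0),
    ("low_pressure", "engine_shutdown"): (0.5, 2.0),
}

def step_consistent(src, dst, delay):
    """Is a propagation from src to dst after 'delay' allowed by the TFPG?"""
    bounds = tfpg.get((src, dst))
    return bounds is not None and bounds[0] <= delay <= bounds[1]

print(step_consistent("pump_failure", "low_pressure", 2.2))     # True
print(step_consistent("low_pressure", "engine_shutdown", 5.0))  # False
</pre>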
'''15. Tomer Kotek (Vienna University of Technology): Shape and Content: Incorporating Domain Knowledge into Shape Analysis'''
The verification community has studied dynamic data structures primarily in a bottom-up way by analyzing pointers and the shapes induced by them. Recent work in fields such as separation logic has made significant progress in extracting shapes from program source code. Many real-world programs, however, manipulate complex data whose structure and content is most naturally described by formalisms from object-oriented programming and databases. In this paper, we attempt to bridge the conceptual gap between these two communities. Our approach is based on description logic, a widely used knowledge representation paradigm which gives a logical underpinning to diverse modeling frameworks such as UML and ER. We show how description logic can be used on top of an existing shape analysis to add content descriptions to the shapes. Technically, we assume that we have separation logic shape invariants obtained from a shape analysis tool, and requirements on the program data in terms of description logic. We show that the two-variable fragment of first-order logic with counting and trees (whose decidability was proved at LICS 2013) can be used as a joint framework to embed suitable fragments of description logic and separation logic.
'''16. Tewodros Beyene (TUM,Munich): Compositional Repair of Reactive Programs'''
I present a compositional approach to the automatic repair of infinite-state, procedural, reactive programs. The input to our approach is a partial program obtained by deleting a set of suspicious expressions from an erroneous program. The objective is to synthesize a complete program that satisfies a temporal requirement in spite of an adversarial environment. Following prior work, the problem is modeled as a game between two players - the Program and the Environment - who take turns changing the system state. Repair involves the computation of a strategy under which Program can always meet the requirements. Our solution to the problem consists of a set of deductive proof rules for computing winning strategies for Program in a procedure-modular way. The rules extend seamlessly to recursive programs, and are sound and relatively complete. The rules are automated using an off-the-shelf Horn constraint solver that allows existential quantification in clause heads. The practical promise of the method is demonstrated through case studies involving a set of challenging device driver examples.
'''17. Florian Lonsing (Vienna University of Technology): Incremental QBF Solving'''
Propositional logic (SAT) has been widely applied to encode problems from model checking, formal verification, and synthesis, for example. In these applications, often a sequence of closely related formulae must be solved. Incremental solving aims at using information learned from one formula in the process of solving subsequent formulae. Motivated by the success of incremental SAT solving, we consider the problem of incrementally solving a sequence of quantified Boolean formulae (QBF). We adopt ideas from incremental SAT solving and present an approach to incremental QBF solving which is application-independent and hence applicable to QBF encodings of arbitrary problems. We implemented this approach in our incremental search-based solver DepQBF. The API of the solver provides the user with functions to manipulate the input formula by incrementally adding and deleting clauses and variables. As an additional API feature, the user can add and delete sets of clauses by means of push and pop operations. This way, the set of clauses of the input formula is organized as a stack. We report on challenges in the context of incremental QBF solving, such as maintaining learned information across different calls of the solver, and sketch usage scenarios by means of examples. Based on first experimental results, we conclude that incremental QBF solving has the potential to improve QBF-based workflows in many application domains.
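The stack organization of the clause set can be pictured as follows (a deliberately language-agnostic sketch of the push/pop idea only; this is not the DepQBF API and it performs no QBF solving):

<pre>
class ClauseStack:
    """Clauses are kept in frames so that a pop discards exactly the clauses
    added since the matching push."""
    def __init__(self):
        self.frames = [[]]                 # one list of clauses per frame

    def add_clause(self, clause):
        self.frames[-1].append(clause)

    def push(self):
        self.frames.append([])             # open a new frame

    def pop(self):
        if len(self.frames) > 1:
            self.frames.pop()              # drop the top frame's clauses

    def clauses(self):
        return [c for frame in self.frames for c in frame]

s = ClauseStack()
s.add_clause([1, -2])                      # clauses shared by all queries
s.push()
s.add_clause([2, 3])                       # clauses specific to this query
print(s.clauses())                         # [[1, -2], [2, 3]]
s.pop()
print(s.clauses())                         # [[1, -2]]
</pre>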
'''18. Corneliu Popeea (TUM,Munich): Reduction for compositional verification of multi-threaded programs'''
Automated verification of multi-threaded programs requires keeping track of a very large number of possible interactions between the program threads. Different reasoning methods have been proposed that alleviate the explicit enumeration of all thread interleavings, e.g., Lipton's theory of reduction or the Owicki-Gries method for compositional reasoning; however, their synergistic interplay has not yet been fully explored. In this paper we explore the applicability of the theory of reduction for pruning equivalent interleavings in the automated verification of multi-threaded programs with infinite state spaces. We propose proof rules for safety and termination of multi-threaded programs that integrate into an Owicki-Gries based compositional verifier. The verification conditions of our method are Horn clauses, thus facilitating automation by using off-the-shelf Horn clause solvers. We present preliminary experimental results that show the advantages of our approach when compared to state-of-the-art verifiers of C programs. This is joint work with Andrey Rybalchenko and Andreas Wilhelm.
'''19. Frantisek Blahoudek (Masaryk University,Brno): Chasing the Best Büchi Automata for NestedDFS-Based Model Checking'''
The automata-based LTL model checking process translates an LTL formula into a Büchi automaton (BA) and then searches the product of the automaton and the model for accepting cycles. The automaton used can heavily influence the running time of the search. The authors of LTL to BA translators are usually concerned mainly with the size of the state space of the resulting BA. As has already been shown, determinism also plays a role in the size of the resulting product with the model. We performed experiments on real model checking tasks with the Spin model checker, using different LTL translators, to discover which properties of the BA actually matter the most. Our experiments reveal that several further properties of BAs have an impact on Spin's running time. We found that BAs that are good for tasks where there is an accepting cycle (and where Spin therefore does not build the whole product) are not necessarily good for tasks without any accepting cycle. The talk summarises our experiments, suggests several improvements for LTL to BA translators, and describes our ranking method for BAs with and without any knowledge about the model being verified.
'''20. Jan Otop and Roopsha Samanta (IST Austria): Robustness Analysis of Transducers'''
Many important functions over strings can be represented as finite-state string transducers. In this talk, we study the problem of checking if such a function/transducer is "robust" to uncertainty. Our definition of robustness is inspired by Lipschitz continuity - a transducer is robust if the change in its output is proportional to the change in its input. In the first part of this talk, we focus on bounded input perturbation, and present automata-theoretic decision procedures for checking robustness of transducers w.r.t. two popular string distance metrics. These decision procedures are based on reducing the problem of robustness analysis of a transducer to the problem of checking the emptiness of a reversal-bounded counter machine. In the second part of this talk, we focus on unbounded input perturbation. We identify classes of transducers and distance metrics for which robustness analysis is decidable and present examples of such systems. We also show that robustness analysis of arbitrary transducers is undecidable.
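To make the Lipschitz-style definition concrete (an editorial, brute-force illustration over bit strings and the Hamming metric; the talk's decision procedures are automata-theoretic and handle unbounded inputs), one can sample inputs and compare input and output distances:

<pre>
from itertools import product

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def parity_prefix(w):
    """Output bit i = parity of the first i+1 input bits (a simple transducer)."""
    out, p = [], 0
    for b in w:
        p ^= b
        out.append(p)
    return out

def robust(f, length, k):
    """Check d(f(u), f(v)) <= k * d(u, v) on all pairs of words of a given length."""
    words = list(product([0, 1], repeat=length))
    return all(hamming(f(list(u)), f(list(v))) <= k * hamming(u, v)
               for u in words for v in words)

print(robust(parity_prefix, 5, 1))   # False: one flipped input bit may flip many output bits
print(robust(parity_prefix, 5, 5))   # True, but only with a constant as large as the word length
</pre>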
'''21. Thorsten Tarrach (IST Austria): Regression-free Synthesis for Concurrency'''
While fixing concurrency bugs, program repair algorithms may introduce new concurrency bugs. We present a new repair algorithm that avoids such regressions. The solution space is given by the program transformations we consider in the repair process. These include reordering of instructions within a thread, inserting atomic sections, and inserting wait-notify mechanisms. The new algorithm learns constraints on the space of candidate solutions, from both positive examples (error-free traces) and counterexamples (error traces). From counterexamples, the algorithm learns a constraint necessary to remove them. From positive examples, it learns constraints that are necessary in order to prevent the repair from turning the trace into a counterexample. Our implementation is able to fix the bugs while avoiding regressions.