Our research topics fit into four research areas and address issues in three different societal challenges.
Cloud computing is a paradigm that enables ubiquitous access to pools of computing resources that can be rapidly provisioned with minimal management effort. Since the launch of Amazon EC2 in 2006, cloud computing has revolutionized software development, allowing developers to partially abstract from the computing medium and to build applications that can be deployed in a matter of minutes and easily scale with the request load. More than 50% of organizations use the cloud to message, store data, and compute.
The promise of Digital Transformation is to accelerate and improve business activities, processes, and models through digital automation. This process started decades ago, but it has recently become a crucial asset for companies in highly competitive markets. At the heart of Digital Transformation are two factors: integration and flexibility. Companies already own many digital systems, each used to automate specific tasks. However, these systems are frequently silos: they are not designed to interact with other systems, and the data they manage is difficult to extract and share. Integrating these legacy systems (whether within the same company or across different ones), as well as connecting them with recent applications, devices, and the Cloud, must go hand in hand with the flexibility to change how the integrated systems interact with each other. Together, these two factors lead to increased productivity, better reactivity, and improved system consistency.
In a world where we rely more and more on computers to perform tasks of ever-increasing responsibility, it is essential to be able to guarantee that these computers are doing their work correctly. It is unacceptable that a plane crashes because of a software fault, or that a patient in a hospital dies because a computer assigned them the wrong medication.
One of the goals of our group's research is to increase the reliability of computations, either by designing tools and techniques to generate code that is mathematically guaranteed to satisfy specific safety properties (correctness-by-construction) or by exploring methods to verify interesting properties of existing computer systems (hardware/software verification).
Traditionally the science of reasoning, logic has become an important tool in several areas of Computer Science. In our group, we apply logic and logical techniques to specify and verify properties of programming models, be it in the form of particular logics in which we can state and prove interesting properties; type systems that automatically ensure interesting properties of our systems; or correspondences between logical proofs and programs that allow us to view the latter in insightful new ways. We also use logic and state-of-the-art solvers to optimize real-world problems such as scheduling, packing, timetabling, configuration, and logistics.
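To give a flavour of this constraint-solving style of work, the following sketch (a hypothetical toy instance in plain Python, not one of the industrial solvers we actually use) brute-forces a tiny timetabling problem: assign courses to time slots so that conflicting courses never share a slot.

```python
from itertools import product

# Toy timetabling instance (all names illustrative): three courses,
# two time slots, and pairs of courses that must not clash.
courses = ["logic", "concurrency", "databases"]
slots = [0, 1]
conflicts = [("logic", "concurrency"), ("logic", "databases")]

def satisfies(assignment):
    # The constraint: conflicting courses get distinct slots.
    return all(assignment[a] != assignment[b] for a, b in conflicts)

# Enumerate all assignments; a real solver would prune this search
# rather than check every combination.
solutions = [
    dict(zip(courses, choice))
    for choice in product(slots, repeat=len(courses))
    if satisfies(dict(zip(courses, choice)))
]
print(solutions)
```

A dedicated solver expresses the same constraints declaratively and scales far beyond what exhaustive enumeration can handle.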
Concurrency refers to the ability of multiple (possibly interacting) activities to overlap over time, so that one starts before the other finishes. Digital systems are a prime example of such ability for they exhibit concurrent aspects at every scale: from computing clouds to modern CPUs. Concurrency Theory is the science of concurrency. This research area started as part of Theoretical Computer Science but extends far beyond that. In fact, concurrent features appear in many different fields like Biology, Economics, Medicine, and Social Sciences. Building upon our group’s expertise in applied logic, programming languages, and software engineering, we continuously apply and advance the state-of-the-art in concurrency. Most of all, we focus on programming languages and tools to design, develop, and analyse complex concurrent systems.
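The defining feature above, activities overlapping in time, can be made concrete with a minimal Python sketch (the activity names and delays are illustrative): two threads start before either finishes, and the shorter one completes while the longer one is still running.

```python
import threading
import time

log = []
lock = threading.Lock()

def activity(name, delay):
    # Record the start, simulate some work, record the end.
    with lock:
        log.append(f"{name} start")
    time.sleep(delay)
    with lock:
        log.append(f"{name} end")

a = threading.Thread(target=activity, args=("A", 0.2))
b = threading.Thread(target=activity, args=("B", 0.1))
a.start(); b.start()
a.join(); b.join()
print(log)
```

Because B's work is shorter, "B end" is logged before "A end": the two activities genuinely overlap rather than running one after the other.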
Programming languages have a central role in solving computing problems, for they empower developers with building blocks for solving problems or classes of problems. Research on programming languages aims not just at finding solutions to important problems, but at finding the best expression of those solutions, typically in the form of a language, language extension, library, program analysis, or transformation. The aim is for simple, understandable solutions that are also general: by acting at the level of a language (or its toolchain), they apply to many sorts of programs and problems. A rigorous approach to software behaviour is therefore critical to prove that (classes of) programs enjoy desired properties and/or eschew undesired ones. Building upon our group's expertise, we develop programming language solutions to the challenges posed by the growing complexity of concurrent systems, especially in the wake of the digital transformation.
Software Engineering is the establishment and use of sound engineering principles in order to economically obtain software that is reliable and works efficiently on real machines. It is needed because of the high rate of change in user requirements and in the environment in which the software operates. Depending on the application to be developed and on the available resources and constraints, Software Engineering provides best practices and methodologies to ease and guide the task of developing and maintaining software, from the initial requirement specification through its entire evolution.
Choreographic Programming is an emerging paradigm for correct-by-construction concurrent programming based on message passing. In this paradigm, communications are specified using an Alice-to-Bob notation, rather than as two separate send and receive actions. As a consequence, protocols are guaranteed not to deadlock because of communication mismatches. Models based on choreographic programming have been successfully developed for different settings where concurrent programming is challenging, including service-oriented computing and cyber-physical systems.
In our group, we are extending choreographic programming by developing more expressive and powerful languages that can be used in real-world applications. The main challenge is to increase expressiveness while maintaining the good theoretical properties that made choreographic languages successful in the first place.
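To give a flavour of the Alice-to-Bob notation, the following sketch (illustrative Python, not an actual choreography language) writes a choreography as a global list of communications and projects it onto the local send/receive actions of each process, in the spirit of endpoint projection:

```python
# A choreography: each step is (sender, receiver, message label),
# written once, globally, in Alice-to-Bob style.
choreography = [
    ("Alice", "Bob", "request"),
    ("Bob", "Carol", "forward"),
    ("Carol", "Alice", "reply"),
]

def project(choreo, process):
    """Extract the local actions of one process from the global choreography."""
    actions = []
    for sender, receiver, label in choreo:
        if process == sender:
            actions.append(("send", receiver, label))
        elif process == receiver:
            actions.append(("recv", sender, label))
    return actions

# Bob first receives Alice's request, then forwards it to Carol.
print(project(choreography, "Bob"))
```

Because every projected send originates from the same global communication as its matching receive, communication mismatches, and the deadlocks they cause, cannot arise by construction.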
The increasing dependency on computers to perform everyday tasks is associated with the need to store larger and larger quantities of information, which must be efficiently processed and retrieved. One of the challenges involved in managing this data is ensuring its internal consistency, especially as the information being stored changes due to the evolution of the real-world data it models.
Modern-day database management systems address this problem by allowing the user to define integrity constraints – properties of the data that should hold at all times – and algorithms to repair the database when these properties no longer hold. Extending these techniques to more powerful reasoning tools, such as those used in the Semantic Web, is, however, a complex and ongoing process, which we address in our research.
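As a toy illustration (plain Python over in-memory data, not a production DBMS), an integrity constraint can be modelled as a predicate over the database, paired with a repair operation that restores it when it is violated:

```python
# Toy database with a foreign-key-like integrity constraint:
# every employee's department must exist in `departments`.
departments = {"cs", "math"}
employees = [
    {"name": "Ada", "dept": "cs"},
    {"name": "Emmy", "dept": "math"},
    {"name": "Kurt", "dept": "logic"},  # violates the constraint
]

def violations(rows):
    # Rows referencing a department that does not exist.
    return [r for r in rows if r["dept"] not in departments]

def repair(rows):
    # One possible repair policy: drop the violating rows.
    bad = violations(rows)
    return [r for r in rows if r not in bad]

employees = repair(employees)
print([r["name"] for r in employees])
```

Deletion is only one repair policy; real systems may instead insert missing referenced data or update the violating rows, and choosing among such repairs is precisely where the problem becomes subtle.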
DevOps is a collection of software engineering practices that links software development (Dev) with software operations (Ops). DevOps strongly advocates automation and monitoring at all steps of software construction, from integration, testing, and releasing to deployment and infrastructure management. By adopting the DevOps methodology, it is possible to reduce the time between committing a change to a system and placing that change into normal production, while ensuring high quality.
The microservices architectural style has recently been proposed to cope with the problems of scaling and evolving software in a distributed setting. A microservice is a cohesive, independent process interacting via messages. Following the microservices architectural style, a network of interacting microservices corresponds to the interconnected modules of distributed software. Microservices are usually contrasted with traditional single executable artefacts, also called "monoliths". In monoliths, modules communicate via locally available resources (memory, databases, files), making them tightly coupled and ultimately hindering both software scalability and evolvability. The microservices principle assists project managers and developers: it provides a guideline for the design and implementation of distributed applications. Following this principle, developers focus on the implementation and testing of a few cohesive functionalities. This also holds for higher-level microservices, which are concerned with coordinating the functionalities of other microservices.
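A minimal simulation of the style described above (illustrative Python, with queues standing in for the network and all service names invented): a service owns its state and interacts with its clients only through messages.

```python
import queue
import threading

# The service's inbox: the only way to reach its private state.
inbox = queue.Queue()

def inventory_service():
    # State is owned exclusively by this service, never shared directly.
    stock = {"widget": 3}
    while True:
        msg = inbox.get()
        if msg is None:  # shutdown sentinel
            break
        item, reply = msg
        # Reserve one unit if available and report the outcome.
        if stock.get(item, 0) > 0:
            stock[item] -= 1
            reply.put(("ok", item))
        else:
            reply.put(("out-of-stock", item))

service = threading.Thread(target=inventory_service)
service.start()

reply = queue.Queue()
inbox.put(("widget", reply))   # a client sends a request message
status, item = reply.get()     # ...and awaits the reply message
inbox.put(None)
service.join()
print(status, item)
```

Because the client never touches `stock` directly, the service can be re-implemented, replicated, or moved to another machine without changing its clients, which is the loose coupling the paragraph contrasts with monoliths.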
Quantification of properties in complex systems is essential to improve the understanding of situations, to identify problems, and to support decision making. Quantitative models offer solid methodologies to conceptualise systems and to access this critical information. Nowadays, they are valuable assets to overcome the challenges posed by the growing complexity of systems, especially in the wake of the digital transformation. Building upon our group's expertise on quantitative methods, complex concurrent systems, and programming languages, we develop new tools and techniques to effectively quantify properties of choreographic programs. Most of all, we focus on availability, efficiency, performance, reliability, robustness, safety, security, survivability, and timeliness.
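As a small numerical illustration of one such property (textbook availability formulas for independent components, not our actual tooling), the availability of a composed system can be estimated from the availabilities of its parts:

```python
# Availability of independent components composed in series
# (all must work) and in parallel (at least one replica must work).
def series(*avail):
    p = 1.0
    for a in avail:
        p *= a
    return p

def parallel(*avail):
    p = 1.0
    for a in avail:
        p *= (1.0 - a)  # probability that every replica is down
    return 1.0 - p

# Two services in series, the second one replicated twice.
a = series(0.99, parallel(0.95, 0.95))
print(round(a, 4))  # 0.9875
```

Even this toy calculation shows why replication matters: duplicating the weaker service raises its effective availability from 0.95 to 0.9975, so the composition is limited mainly by the non-replicated service.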
Since researchers proved a mathematical result (the four-color theorem) using a computer program in 1976, there has been increasing interest in using computers to do abstract mathematics – rather than simply viewing them as extremely powerful calculators. Although the classical results of Gödel and Turing tell us that there are limits to how much can be automated, computers have achieved notable success in proving safety and correctness properties of sensitive software – either on their own or interactively.
Building upon our group's expertise on theorem proving, we plan to start applying it to choreographic programming, evolving our mathematical constructions into full-fledged certified compilers that are guaranteed not to introduce errors in the code they generate.