The Secure Enclaves for REactive Cloud Applications (SERECA) project aims to remove technical impediments to secure cloud computing, and thereby encourage greater uptake of cost-effective and innovative cloud solutions in Europe. It proposes to develop a secure environment for reactive cloud applications using Intel's new CPU extension: Software Guard eXtensions (SGX). SERECA will allow the execution of sensitive code on cloud platforms without the need to trust the public cloud operators. Furthermore, SERECA will support regulatory-compliant data localisation by allowing applications to securely span multiple cloud data centres. For a brief introduction to the SERECA architecture, have a look at our short paper.
SERECA extends secure enclaves, a new hardware mechanism provided by commodity CPUs, to protect cloud deployments, thus empowering applications to ensure their own security without relying on potentially untrusted public cloud operators. The innovations that SERECA provides will help place Europe at the forefront of secure cloud operations. SERECA has validated its results through the development of two innovative and challenging industry-led use cases: (i) monitoring a civil water supply network and (ii) a software-as-a-service application to analyse the performance of cloud applications. The project has therefore achieved the following four objectives:
A number of components – enabling the secure execution of microservice applications in untrusted clouds – were developed during the project lifetime. The organization of these components is shown in the SERECA architecture image.
A typical application is composed of a number of microservices exchanging messages through Secure Vert.x, an enhanced version of the Vert.x microservice framework developed within SERECA that hardens communication and ensures the security of data in transit. Each microservice runs in a dedicated Docker container to facilitate the deployment of the distributed application, which is clustered using SecureKeeper, an SGX-enabled version of ZooKeeper.
SERECA also ensures the security of data at rest. An application can store data in memory (e.g., in MongoDB) or on disk (e.g., in MySQL) and obtain protection through SCONE, an SGX-enabled container mechanism able to transparently extend SGX security to unmodified applications.
Finally, SERECA also guarantees the security of data in use. Microservices can receive data from the Vert.x event bus and forward it to an SGX enclave through an SGX-JNI bridge, which acts as glue between the Java and C/C++ worlds. Alternatively, when performance requirements are less stringent, the SGX-LKL library can be used, which allows the execution of Java applications on top of Intel SGX.
The technology developed by SERECA has six unique selling points:
The general objective of the SERECA project is to build a platform able to protect the confidentiality and integrity of applications and services executed in the cloud. SERECA protects against the most worrisome type of attack, e.g., a malicious cloud employee who leverages physical access to the servers and controls the system software of the machines, in particular the hypervisor and the host operating system. Such an insider could in principle access all data processed by services running in the cloud and could, for example, steal the keys used to encrypt or decrypt data at rest or in transit over the network. In SERECA, all data is encrypted in memory and only the CPU has access to the encryption keys. Therefore, even physical access to a machine does not help in gaining access to data protected with SGX.
SERECA aims to build an infrastructure for executing reactive applications securely on public cloud providers (CPs). SERECA improves the state of the art in cloud security for interactive, latency-sensitive applications by developing innovative and effective mechanisms to enforce data integrity, availability, confidentiality, and localisation based on secure CPU hardware.
SERECA integrates Intel's innovative CPU security extension, Software Guard Extensions (SGX) (https://software.intel.com/en-us/sgx), with a popular reactive framework, Eclipse Vert.x (http://vertx.io).
Intel SGX - Modern x86 Intel CPUs, starting with the sixth-generation Core microarchitecture, support instruction set extensions called SGX which significantly improve application (ring 3) security. The focus of SGX lies on protecting the confidentiality and integrity of application code and data. Using the SGX instruction set, a so-called secure enclave can be created: an isolated range of memory within the application's (virtual) address space to which the SGX security enhancements apply. When using SGX, even the main system memory is encrypted and integrity protected. SGX makes it possible to protect application state from the hypervisor and the operating system. The data and the computation inside an enclave are protected from any access from outside the enclave. An application can create enclaves and transfer sensitive parts of its code and data into them. Besides protecting sensitive data such as encryption keys, enclaves also protect the confidentiality of data stored outside the enclave by encrypting and decrypting it on demand.
Eclipse Vert.x - A toolset that helps developers design event-driven, asynchronous, microservice-based applications. Microservices represent the state of the art of cloud-based applications: their intrinsic features, in particular a well-partitioned architecture, make it possible to build highly available and scalable applications that are a perfect fit for cloud environments. Services programmed with Vert.x are split into components (known as verticles) that can run in different address spaces and communicate with each other via an event bus.
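To make the verticle/event-bus pattern concrete, the following stdlib-only Java sketch illustrates the idea: components register handlers at named addresses and exchange messages without sharing state. All class and method names here are illustrative, not the actual Vert.x API (in real Vert.x code one would use `vertx.eventBus().consumer(...)` and `publish(...)` from an `AbstractVerticle`).

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Minimal in-process sketch of the event-bus pattern used by verticles.
class EventBus {
    private final Map<String, List<Consumer<String>>> handlers = new ConcurrentHashMap<>();

    // A "verticle" registers a handler for messages sent to an address.
    void consumer(String address, Consumer<String> handler) {
        handlers.computeIfAbsent(address, a -> new CopyOnWriteArrayList<>()).add(handler);
    }

    // Another "verticle" publishes a message to all handlers at that address.
    void publish(String address, String message) {
        handlers.getOrDefault(address, List.of()).forEach(h -> h.accept(message));
    }
}

public class EventBusDemo {
    public static void main(String[] args) {
        EventBus bus = new EventBus();
        StringBuilder dashboard = new StringBuilder();
        // A "dashboard verticle" consumes readings produced by a "sensor verticle".
        bus.consumer("water.level", dashboard::append);
        bus.publish("water.level", "4.2m");
        System.out.println(dashboard); // prints 4.2m
    }
}
```

Because verticles only interact through addresses on the bus, each one can be moved into or out of an enclave (or onto another host) without changing its neighbours, which is exactly the property SERECA's partitioning exploits.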
SERECA aims to support the execution of critical functionality inside SGX enclaves. A central issue is therefore the partitioning of applications: it is important to keep the code base inside the enclaves small, so that both performance degradation and the Trusted Computing Base (TCB) remain limited. The idea is to exploit the already well-partitioned microservice design to run verticles partially inside and partially outside enclaves. Vert.x microservices typically need standard services such as databases, key/value stores, and coordination services; within SERECA, several existing standard services are partitioned so that the platform can ensure the confidentiality and integrity of the data they process. Another important aspect addressed in SERECA is secure communication between verticles, which is performed via an extension of the Vert.x event bus. Enclaves depend on the untrusted operating system to perform I/O operations, meaning that all messages must be encrypted. SGX makes it possible to configure enclaves securely by providing hardware-protected sealed storage: only a trustworthy enclave is able to obtain the keys to decrypt the sealed data. In this way, the asymmetric cryptography typically employed for key exchange can be avoided, and the extended Vert.x event bus uses symmetric keys for communication.
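A minimal sketch of the kind of symmetric authenticated encryption an extended event bus can rely on is shown below, using AES-GCM from the JDK's standard `javax.crypto` API (GCM provides both confidentiality and an integrity tag). This is an illustration of the general technique, not SERECA's actual implementation; in particular, the key is generated locally here for demonstration, whereas in the scheme described above it would be obtained from SGX sealed storage.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

public class BusCrypto {
    private static final int IV_LEN = 12;    // 96-bit nonce, as recommended for GCM
    private static final int TAG_BITS = 128; // authentication tag length

    // Encrypts a message; the random IV is prepended so the receiver can decrypt.
    static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[IV_LEN];
        new SecureRandom().nextBytes(iv);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, iv));
        byte[] ct = c.doFinal(plaintext);
        byte[] out = new byte[IV_LEN + ct.length];
        System.arraycopy(iv, 0, out, 0, IV_LEN);
        System.arraycopy(ct, 0, out, IV_LEN, ct.length);
        return out;
    }

    // Decrypts and verifies integrity; throws if the message was tampered with.
    static byte[] decrypt(SecretKey key, byte[] message) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, message, 0, IV_LEN));
        return c.doFinal(message, IV_LEN, message.length - IV_LEN);
    }

    public static void main(String[] args) throws Exception {
        // Illustrative only: a shared symmetric key (in SERECA, from sealed storage).
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        byte[] wire = encrypt(key, "turbidity=0.3".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(decrypt(key, wire), StandardCharsets.UTF_8));
    }
}
```

Because the GCM tag covers the whole ciphertext, a message modified by the untrusted OS on its way between enclaves fails decryption instead of being silently accepted.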
To deploy secure reactive applications, SERECA uses Docker containers. A container is a lightweight alternative to a virtual machine that isolates processes running on the same OS kernel. The Docker engine offers a rich REST API for manipulating containers on both local and remote hosts. SERECA aims to minimise the number of changes to the engine so that the API can be reused out of the box. Our container support takes the form of a lean runtime that securely configures and executes enclave-enabled applications in containers. The SERECA consortium anticipates that cloud providers will gradually introduce hardware that supports secure enclaves in their cloud platform offerings. One such offering may involve a Metal-as-a-Service (MaaS) hosted, Rancher-managed Docker Swarm deployment infrastructure with a modified Docker Swarm scheduling backend that assigns SERECA application containers to hosts with secure enclave support.
SERECA convincingly validates and demonstrates the benefits of its approach by applying it to realistic and demanding industrial use cases: it runs two industrial use cases with widely differing requirements on the SERECA platform.
Other initiatives (e.g., papers or European projects) are exploring the possibility of securing applications running in untrusted clouds with Intel SGX. What distinguishes SERECA from the others is the vast array of SGX-enabled facilities it offers to developers, which can be leveraged in a semi-transparent way.
A first notable difference from several other works is that SERECA does not use the SDK provided by Intel. Intel's SDK facilitates the implementation of simple enclaves: it features an interface definition language together with a code generator and a basic enclave library. Unlike SERECA, the SDK lacks support for system calls and offers only restricted functionality inside the enclave. We instead developed an SGX-enabled libc library that supports shielded execution of system calls.
Haven [1] aims to execute unmodified legacy Windows applications inside SGX enclaves by porting a Windows library OS to SGX. Relative to the limited EPC size of current SGX hardware, the memory requirements of a library OS are large. In addition, porting a complete library OS with a TCB containing millions of lines of code also results in a large attack surface. By using only a modified C standard library, SERECA targets the demands of Linux containers, keeping the TCB small and addressing current SGX hardware constraints. Graphene [3] is another example of a library OS ported to SGX. Unlike Haven, in this case the TCB is kept small, as we do with our secure containers; however, its performance is still low and the usable functionality is limited. The secure containers developed in SERECA keep the overhead limited. Furthermore, it must be pointed out that secure containers are only one part of the facilities offered by the entire SERECA platform.
An additional initiative is VC3 [2], which uses SGX to achieve confidentiality and integrity within the MapReduce programming model. VC3 jobs follow the executor interface of Hadoop but are not permitted to perform system calls. SERECA focuses on generic system support for container-based, interactive workloads, but it could serve as a basis for VC3 jobs that require extended system functionality.
Finally, it is worth illustrating the differences between SERECA and the SecureCloud project, as both move in the same direction by using secure containers with SGX support. Figure 5.2 shows the main architectural distinctions. First, SERECA provides an SGX-enabled gcc-based cross compiler, while SecureCloud's cross compiler is llvm-based. Second, SERECA hardens applications running on top of a JVM, while in SecureCloud applications are written in Go/Rust/Python, which implies a completely different approach to infrastructure development. Third, SERECA supports an SGX-enabled secure coordination service, while SecureCloud uses a different mechanism based on the ZeroMQ message bus. Fourth, SERECA applications are based on microservices written with Vert.x, while SecureCloud applications exchange messages through the ZeroMQ bus.
[1] A. Baumann, M. Peinado, and G. Hunt. Shielding Applications from an Untrusted Cloud with Haven. ACM Transactions on Computer Systems 33(3), Article 8, August 2015, 26 pages.
[2] F. Schuster, M. Costa, C. Fournet, C. Gkantsidis, M. Peinado, G. Mainar-Ruiz, and M. Russinovich. VC3: Trustworthy Data Analytics in the Cloud Using SGX. In Proceedings of the 2015 IEEE Symposium on Security and Privacy, 2015, pp. 38-54.
[3] C.-C. Tsai, K. S. Arora, N. Bandi, B. Jain, W. Jannen, J. John, H. A. Kalodner, V. Kulkarni, D. Oliveira, and D. E. Porter. Cooperation and Security Isolation of Library OSes for Multi-Process Applications. In Proceedings of the Ninth European Conference on Computer Systems (EuroSys '14), ACM, Article 9, 14 pages.
The SERECA project aims to remove barriers to migration to cloud environments. Through the use of secure commodity CPU hardware, SERECA provides unprecedented security mechanisms that guarantee the protection of sensitive data even against malicious cloud providers. The advantages of adopting the SERECA platform may enable migration to the cloud even in contexts where the integrity and/or confidentiality of data is crucial.
Here, some example scenarios are reported in a storytelling style, using fictitious characters to convey the real challenges that potential cloud users have to address. The idea is to show where and how the SERECA platform can help in the process of migrating to the cloud.
Anna is the head of the office in charge of filing applications – from citizens and enterprises – for construction authorisations at the municipality of a small Italian town that is a top tourist attraction. Anna's staff gathers documents from applicants (both paper-based and paperless), extracts relevant information, feeds it into the municipality's information system, and interfaces with offices and employees (of the municipality and possibly of other public administration organisations) as well as with the public. Anna knows that her office handles sensitive data of potentially high economic value, and she is aware that in case of a security breach, regardless of its root cause, she would ultimately be liable for the consequences. This situation makes Anna feel very uncomfortable. She knows that her direct collaborators are honest people and conscientious workers who would never expose sensitive information as a result of bribery or sloppiness. However, Anna understands that her office controls only the first stages of the information flow, since the data is handed over to an external IT company for further processing and storage. Anna does not even know the personnel of the company, and thus she has no particular reason to trust them. Of course she knows that the Procurement Office of the municipality has very high standards for selecting its providers, but she still has no direct trust relationship with them. Anna is not a cyber-security expert, but she feels this arrangement is not 100% secure. She talks to her husband, Carlo, who is an engineer with 20+ years of experience in the IT sector. Carlo confirms that her intuition is correct: the possibility that an employee of the external IT company with super-user privileges accesses sensitive information (a violation of confidentiality) and possibly modifies it (a violation of integrity) is real. Carlo also explains to Anna that this is a major limitation of the current state of the art in IT technology.
This makes Anna very unhappy, since she now understands that this is an inherent risk of current offerings, and thus careful selection of the IT provider does not help much. She starts browsing the Internet and learns about a project called SERECA that is building an innovative and truly secure cloud offering on top of the new CPU technology provided by Intel (namely SGX). Anna gets very excited about the advantages brought by SERECA. First, the secure execution environment (called Secure Container) provided by SERECA would protect data from unauthorized access, including attacks by the super user. This would solve the issue of malicious personnel at the IT company that frightens her. Second, SERECA's secure communication mechanism (called Secure Bus) would protect data during transfers. Third, SERECA's development tools (called Partitioning Tools) would enable easy porting of applications – including legacy ones – and ultimately enable seamless migration to the new platform. Fourth, an infrastructural service (called Secure Coordination Service) would enable the secured applications to run on a distributed platform, for better reliability and performance. Fifth, one of the partners of the SERECA consortium will go to market with a commercial offering that makes the underlying hardware needed by SERECA readily available. The offering is a Metal-as-a-Service (MaaS) formula with advanced data locality features. In particular, it allows the cloud user to enforce specific limitations on where data is to be stored. This is a fundamental prerequisite for complying with Italian regulations for the Public Administration. In conclusion, it really seems that SERECA provides all the key features that Anna needs for ensuring the efficient operation of the office she is in charge of.
Marco – the manager of an Italian water distribution infrastructure operator – is responsible for the management of the monitoring infrastructure needed to supervise a wide pipe network and multiple dams. He is aware that the monitoring platform currently adopted is not enough to ensure control of the overall infrastructure. He knows that criminal activities or natural phenomena may compromise the integrity of the distribution infrastructure, with a devastating impact on the nearby population. He was a child (in 1963) when the Vajont disaster happened: a massive landslide caused a megatsunami in the reservoir, in which 50 million cubic metres of water overtopped the dam in a 250-metre wave. For this reason, he is determined to deploy a continuous and advanced monitoring system able to detect anomalous situations at any time.
Marco's idea is a monitoring system able to provide operators with graphs of sensor measurements over specific time ranges, to signal alarm conditions, or simply to allow real-time monitoring. After some bureaucratic procedures, Marco decides to put his idea into effect: he starts looking at the technological solutions currently available. In the end, after a few meetings with IT companies, he realizes that the market offers a number of technologically advanced solutions, but many of them require an IT infrastructure that is too expensive to set up and maintain. To overcome this issue, Marco decides to proceed with an offer from one of those IT companies that leverages cloud technology. However, he discovers one significant drawback of cloud usage: the security risk for outsourced data. Marco knows that the integrity of their data is fundamental. He then speaks with a cyber-security consultant, who tells him that nothing can rule out the risk that a malicious cloud employee might modify particularly sensitive measurements. Marco immediately thinks of the turbidity measurement: "What if a terrorist poisons the water and, in collusion with a cloud company employee, hides the variation in turbidity from the operators in charge of the monitoring system? A tragedy!"
Fortunately, after some months, an innovative solution emerges. Browsing the Internet, Marco learns about SERECA, which leverages a new CPU extension provided by Intel (namely SGX) that makes it possible to protect sensitive data even against super-privileged users. Marco understands that SERECA fits his company's needs. First, the secure execution environment (called Secure Container) provided by SERECA will guarantee data integrity against attacks by malicious cloud employees. Second, the secure communication mechanism (called Secure Bus) will protect the transfer of sensor measurements from the dams and pipes to the cloud platform. Third, an infrastructural service (called Secure Coordination Service) will enable the secured applications to run on a distributed platform, for better reliability and performance. Fourth, the data locality features offered by the SERECA Metal-as-a-Service (MaaS) formula make it possible to comply with the Italian regulations for the Public Administration (PA) (in Italy, water distribution operators are considered public administrations by law). Since the features provided by SERECA meet his company's needs, Marco decides to adopt the SERECA platform.
Aceline is a 45-year-old university professor who lives in Paris, where she teaches sociology. She was diagnosed with heart failure more than 15 years ago, while her 6-year-old daughter Carine has had type 1 diabetes since birth. Being a chronic patient, Aceline has learnt how to live with her disease and to manage her daughter's health too, undertaking routine tasks such as periodically measuring vital signs (e.g., blood pressure), taking medicines, and performing glucose measurements and insulin injections for her daughter. Travelling by car to a conference in Lyon with her husband and daughter, they have a rather serious car accident. In accidents, any information regarding the medical history of the injured can be critical. This is the case for Aceline and Carine, who may be at risk due to their illnesses, so their data are retrieved from the EHR databases of their respective medical institutions.
On discharge, given Aceline's risky heart condition, the hospital in Lyon equips her with a tele-monitoring kit to remotely monitor her condition, allowing her not to cancel the vacation in Barcelona that she and her family had already booked. The kit includes medical devices and a gateway that sends the measured vital signs to the Service Center in Lyon, which supports patients remotely. In case of emergency, she will be called back to the hospital for further investigations and exams.
Aceline truly appreciates all of this. She understands that these advanced ehealth services dramatically improve the quality of her life, as well as Carine's. However, she knows that the total current healthcare expenditure (in both relative and absolute terms) varies significantly among the EU Member States[1]. Being a sociologist, she is well aware of the potential inequality of treatment that may result from this situation. Aceline decides to talk to Louise, the hospital CIO, to gain more insight into the main obstacles – from a technical perspective – to the widespread take-up of advanced ehealth services. Aceline learns that the ehealth system is based on a complex federated structure, with challenging security requirements in terms of data distribution, storage, and access. Louise thinks that the only possibility to really cut down costs would be moving from traditional IT solutions to a cloud-based setup. Unfortunately, current cloud offerings do not satisfy the requirements of EU regulation on medical data in terms of data protection. In a nutshell, the loss of control over sensitive data - which is sent to an untrusted third party - makes current cloud technology unusable for ehealth applications. Louise also mentions that she has recently heard of a research project called SERECA that is developing an innovative cloud platform that exploits the new SGX technology by Intel. She is worried about porting legacy medical software to SGX, but she is relieved to learn that, thanks to the SERECA partitioning tool and the SERECA containers, such porting would be seamless. Moving health applications to the cloud while ensuring both the confidentiality and integrity of data, even in the case of malicious cloud providers, would make it possible to share interfaces, ultimately preserving operators' and patients' user experience.
Secure access to remote data would be guaranteed by the SERECA coordination service, thus allowing data to be stored close to its source and controlled by its owner. The remote attestation service and the improved Vert.x features would guarantee security even when data is accessed through telecare kits, since these would be verified remotely before each interaction, thus preventing possible manipulation of the appliance. Louise is very excited about SERECA, since it really seems to have all the key features that she needs and that are not available in current state-of-the-art cloud offerings.
[1] In 2012, the share of current healthcare expenditure exceeded 10.0 % of gross domestic product (GDP) in six EU Member States (the Netherlands, France, Belgium, Germany, Denmark and Austria), which was almost double the share of current healthcare expenditure relative to GDP recorded in Latvia, Estonia and Romania (6.0 % or less). Source: http://ec.europa.eu/eurostat/statistics-explained/index.php/Healthcare_statistics
The Technische Universität Dresden (TUD) is one of eleven German universities that were identified by the German government as a "University of Excellence". TUD has about 37,000 students and almost 4,400 employees, among them 520 professors, and is the largest university in Saxony today. TUD is strong in research, offering first-rate programmes of great diversity, with close ties to culture, industry, and society. As a modern full-status university with 14 departments, TUD offers a wide academic range.
The university emphasises interdisciplinary cooperation and encourages its students to participate in both teaching and research. More specifically: interdisciplinary cooperation among various fields is a strength of the TUD, whose researchers also benefit from collaborations with the region’s numerous science institutions - including Fraunhofer institutes and Max Planck institutes.
In recognition of TUD's emphasis on applications in both teaching and research, leading companies have honoured the university with what are currently fourteen endowed chairs. TUD prides itself on its international flavour and has partnerships with more than 70 universities worldwide. Furthermore, TUD is the only university in East Germany that has been granted a graduate school and a cluster of excellence in Germany's Excellence Initiative. The Systems Engineering group led by Prof. Fetzer is part of the computer science faculty of TU Dresden. It was established in April 2004 and is funded by an endowment from the Heinz Nixdorf foundation.
The group addresses several research issues in dependable and distributed systems: (i) security and dependability of cloud infrastructures, (ii) fault-tolerant computing in WANs, and (iii) cost-effective resilience. In 2013, the group received two best paper awards: USENIX LISA 2013 and ACM DEBS 2013. In 2014, Dr. Torvald Riegel, who graduated in 2013, received the 2014 EuroSys Roger Needham Award. European Project Center (TUD-EPC): TUD has extensive experience in project coordination and project management at national and international level and is therefore well placed to coordinate this project. During the period from 2008 to 2012, scientific staff of TUD participated in over 24,000 contracted projects with a total amount of grants of more than 950 million Euro. A European Project Center (EPC) has been established at TUD to support international project management. The EPC is currently coordinating and managing more than 320 projects with a total project volume amounting to over 133 million Euro granted by the European Commission. At the moment, the EPC is supporting the coordinating professors in FP7 projects such as FLEXIBILITY (IP), SYBOSS (IP), and ADDAPT. Furthermore, TUD is currently host to 9 European Research Council (ERC) grantees and a partner in the FET Flagships HBP and GRAPHENE.
TUD will take the role of project coordinator for SERECA, and it will lead the associated WP6, which deals with management. Hence, TUD fills the Project Coordinator (PC), Project Assistant (PA), and a project management position. TUD will contribute to the architecture of secure enclaves, in particular (a) how to incorporate existing and announced hardware support (ARM TrustZone and Intel SGX) to improve practical effectiveness and (b) how to implement secure data storage using enclaves (WP1).
Within WP2, which investigates mechanisms to establish a distributed infrastructure based on secure enclaves, TUD will lead the tasks to create a secure distributed coordination service with dynamic migration support, and to enable a distributed and secure deployment together with the establishment of secure communication channels. In WP3, TUD will assist with the development of reusable components for secure reactive cloud applications, based on the Vert.x framework, and geo-local enclave deployment. It will lead the task to provide the recovery of secure application states in the case of failures.
TUD will participate in the evaluation (WP4) and help in the dissemination and exploitation activities (WP5). TUD-EPC supported the development of this proposal and will lead on financial, legal, and administrative issues during the implementation of this project (WP6).
TUD has successfully participated in several EU-funded projects, including VELOX, STREAM, and SRT-15. Furthermore, TUD is currently participating in the FP7-funded projects LEADS and ParaDIME. While LEADS focuses on scalable distributed systems, ParaDIME is investigating novel techniques for energy-efficient cloud infrastructures.
Moreover, the current research focus of Prof. Fetzer and his research group is on security in cloud environments. Prof. Fetzer leads the Resilience Path of the excellence cluster cfaed, where cost-effective resilience mechanisms are studied. The ongoing project SREX (Secure Remote EXecution) targets approaches that allow the secure execution of an application in non-trusted and potentially malicious environments. Prof. Fetzer's group has 10 years of expertise in dependable and distributed systems, event-based communication systems, and cloud computing. This experience and knowledge will be applied in SERECA to achieve reliability and security of user applications and data processed in cloud-based infrastructures.
TUD can provide a cluster of 41 homogeneous server machines for running distributed applications. Each server has 8 cores and 8 GB of main memory. The cluster can also be used to simulate the enclaves, the infrastructure, and the services, e.g., the migration and deployment of components.
Prof. Dr. Christof Fetzer (M)
He received his diploma in Computer Science from the University of Kaiserslautern, Germany (Dec. 1992) and his Ph.D. from UC San Diego (March 1997). As a student he received a two-year scholarship from the DAAD and won two best student paper awards (SRDS and DSN). He was a finalist of the 1998 Council of Graduate Schools/UMI distinguished dissertation award and received an IEE Mather Premium in 1999. Dr. Fetzer joined AT&T Labs-Research in August 1999 and was a principal member of technical staff until March 2004. Since April 2004 he has headed the endowed chair (Heinz Nixdorf endowment) in Systems Engineering in the Computer Science Department at TU Dresden. He is the chair of the Distributed Systems Engineering International Masters Program at the Computer Science Department. Prof. Dr. Fetzer has published over 150 research papers in the field of dependable distributed systems, has won two best paper awards (DEBS 2013, LISA 2013), and has been a member of more than 40 program committees.
Jons-Tobias Wamhoff (M)
He is a research assistant and has participated in FP7 ICT projects in the past. His research focus is on parallel architectures and their implications for software developers, e.g., simplifying parallel programming using transactional memory. He graduated with an M.Sc. from Technische Universität Dresden in 2008 and had previously worked for IBM on information integration into databases.
Diogo Behrens (M)
He is a research assistant at TU Dresden with a research focus on the design of fault-tolerance mechanisms for distributed systems. During his studies, he worked as an intern for IBM and Yahoo! Research. He graduated with an M.Sc. from the Technische Universität Dresden in 2008.
Thordis Kombrink (F)
Is a project coordinator at the chair of Systems Engineering. She moderates project meetings and monitors the project progress following the SCRUM methodology. In addition, she oversees the chair’s finances. Thordis Kombrink is also the faculty’s Erasmus coordinator. She received a diploma in Economics and a masters degree in Business and Economics Education.
Katja Böttcher (F)
Has been a project manager at the European Project Center since 2011. Currently, she is responsible for more than 40 projects within the different specific programmes of FP7. Katja Böttcher has extensive experience in managing and coordinating EU-funded research projects, especially in the 7th Framework Programme. Before joining the EPC, she coordinated the FP7 ICT project PICOS and participated in several FP6 and FP7 ICT projects, such as PRIME and PrimeLife.
Technische Universität Carolo-Wilhelmina zu Braunschweig (TUB) is one of the leading universities of technology in Germany. The academic community comprises about 16,000 students and 3,600 university employees. The core disciplines are engineering, natural sciences, life sciences, and information technologies. They are closely linked to other disciplines of humanities and economic, social, and educational sciences. As one of its parts, the Institute of Operating Systems and Computer Networking (IBR) conducts research particularly on distributed systems, corresponding networking and communication systems, architectures, and protocols.
Dependability of distributed systems, in particular in the area of cloud computing, is one of the main concerns of the proposing work group. The IBR comprises 4 professorships and over 20 PhD students. The institute was and is active in several EU-funded and state-funded projects, including the FP7 projects GINSENG, V-Charge, WISEBED, SPITFIRE, and especially TClouds. All in all, the university has been participating in many projects under different Framework Programmes of the EU. In order to foster and support such projects, TUB established a unit specialised in the organization and management of EU projects, the European Office.
TUB will lead WP1, which investigates system support for secure enclaves based on ARM TrustZone and Intel SGX. Within WP1, with its expertise, TUB will especially lead the implementation of secure enclaves based on ARM TrustZone by extending the existing software architecture. In addition, TUB will lead the coordination between these secure enclaves across multiple data centre sites in WP2. TUB will also contribute to the deployment mechanisms of distributed secure enclaves and, through technical work, to WP4-6. In particular, TUB will design efficient replication and recovery support for SERECA applications and the essential parts of the platform.
The members of the Distributed Systems Group of the IBR have investigated various aspects of system-centric middleware with a focus on adaptive as well as fault- and intrusion-tolerant middleware. In the context of the DFG-funded AspectIX project, they targeted the development of middleware supporting fault-tolerant and adaptive services by means of a truly distributed object concept, while the DFG-funded VM-FIT project aimed at the virtualisation-aided provision of intrusion-tolerant services.
In the context of the FP7 project TClouds, the group carried out research in the field of dependable and resource-efficient concepts and systems that are able to facilitate trustworthy cloud computing. In particular, configurable hardware-aided solutions were built for improving the performance of Byzantine-tolerant agreement protocols. Furthermore, it was explored how to assemble service-tailored virtual machines with a minimal Trusted Computing Base using a safe runtime environment.
At the moment, TUB maintains 3 clusters of 11 powerful server machines in total to provide computing resources for running distributed applications. Cluster 1 contains 4 homogeneous servers, each equipped with two 6-core 2.4 GHz Intel Xeon processors and 24 GB of system memory. Cluster 2 is made up of 2 servers with two 8-core 2.6 GHz Intel Xeon processors and 64 GB of memory each. Cluster 3 consists of 5 server machines, each with two 6-core 2.5 GHz Intel Xeon processors and 16 GB of memory. Together, the three clusters constitute the cloud infrastructure that will provide high-performance computing capability in SERECA.
Prof. Dr. Rüdiger Kapitza (M)
Is a professor at the Technische Universität Carolo-Wilhelmina zu Braunschweig, where he has led the Distributed Systems Group of the Institute of Operating Systems and Computer Networking since January 2012. He received his M.Sc. and Ph.D. degrees from the Department of Computer Sciences, University of Erlangen-Nuremberg, in 2001 and 2007, respectively. From 2007 until 2011 he led the Distributed Systems Group at the Department of Computer Sciences 4, University of Erlangen-Nuremberg, as assistant professor. Since his time as a PhD student he has led project work at the national level: first in the DFG-funded AspectIX project, then shortly after finishing his PhD as a principal investigator in REFIT, DanceOS, and recently BATS. DanceOS is a project of the DFG priority program 1500 with multiple partners, whereas BATS is an interdisciplinary research group composed of researchers from computer sciences, electrical engineering, and biology. At the EU level he leads TUB in the FP7 IP TClouds. Rüdiger Kapitza is the author of more than 60 peer-reviewed publications, a steering committee member of the IFIP DAIS conference, and a program committee member of numerous venues including IEEE EDCC, ACM/Usenix Middleware, and ACM EuroSys. More details about him can be found at http://www.ibr.cs.tu-bs.de/users/kapitza.
Bijun Li (F)
Has been working as a research assistant in the Distributed Systems Group of the Institute of Operating Systems and Computer Networking at the Technische Universität Braunschweig since 2013. She received her M.Eng. from Gyeongsang National University, South Korea, Department of Informatics. Her research interests focus on cloud reliability, especially on the reliable coordination of distributed applications. As part of the research work in SERECA she will focus on distributed coordination of secure enclaves as well as enclave distribution itself. More information about her can be found at https://www.ibr.cs.tu-bs.de/users/bli.
Stefan Brenner (M)
Has been working as a research assistant in the Distributed Systems Group of the Institute of Operating Systems and Computer Networking at the Technische Universität Braunschweig since the end of 2012. Earlier in 2012 he obtained a M.Sc. in computer science, studying at Ulm University with a focus on distributed systems and operating systems. In the scope of the project, Mr. Brenner will focus on system support for secure enclaves. Further information about him is available at http://www.ibr.cs.tu-bs.de/users/brenner.
Consistently rated among the world’s best universities, Imperial College London has a reputation for excellence in research that attracts 14,000 students and 6,000 staff of the highest international quality. The Department of Computing is a leading research unit with an outstanding reputation in computer science and related interdisciplinary activities, and a long track record of funding through EC Framework Programmes. Imperial College London achieved excellent results in the most recent UK Research Assessment Exercise (RAE) and ranks second in the UK Computer Science league table by “research power”, a measure that combines both quality and quantity of research.
IMP will lead WP2 (Mechanisms for distributed secure enclaves) based on their track record in distributed systems, cloud computing, and middleware research. They will play a major role in WP1 and WP3: in WP1, IMP will contribute to the design of the overall architecture of the SERECA cloud platform and offer expertise when designing OS abstractions for secure enclaves; in WP3, IMP will act as a task leader for the work that produces an ecosystem of reusable secure services for building secure reactive applications and the algorithmic work on the policy-compliant and geo-aware placement of secure computation and data across multiple sites. Along with the other partners, IMP will integrate their work as part of the validation activities in WP4.
Imperial College London has a distinguished track record of world-class research in critical areas of relevance to SERECA: security, distributed systems, software systems, middleware, cloud computing, networking, and software engineering. These areas represent the work of more than 15 academic staff and 75 researchers, contributing to seminal work in architectural- and component-based software design, distributed middleware, Internet-scale communication services, high-performance messaging, data-centric security, data-center networking, resource allocation in distributed systems, and architectures for cloud computing applications. The IMP key personnel maintain substantive collaborations with, among others, BAE, Cisco, Detica, Google, HP, IBM, Microsoft, Morgan Stanley, Nexor, Orange Labs/France Telecom, and the UK National Health Service.
IMP will contribute the shared usage of a substantial private cloud test-bed to the project that is hosted at the Department of Computing. It consists of the following hardware (valued at over EUR 150,000):
Dr. Peter Pietzuch (M) (Senior Lecturer in Computing).
Dr. Pietzuch heads the Large-Scale Distributed Systems (LSDS) research group, investigating new abstractions and infrastructures for building scalable, reliable, and secure distributed applications. His work bridges the areas of distributed systems, security, networking, and database research. He currently is the PI on two nationally-funded projects, CloudSafetyNet (on data-centric security mechanisms for cloud computing), and Network-as-a-Service (on new architectures for data centre networking). He has published over sixty articles in international, highly competitive, peer-reviewed venues, including USENIX ATC, NSDI, SIGMOD, VLDB, ICDE, ICDCS, DEBS, and Middleware. He serves as a Steering Committee member of the ACM Conference on Distributed Event-based Systems (DEBS) and was a Programme Chair for DEBS 2013. Before joining Imperial College London, he was a Post-Doctoral Fellow at Harvard University.
Prof. Alexander Wolf (M) (Chair in Computing)
Prof. Wolf heads the Experimental Software Systems research group and served as head of the Distributed Software Engineering research section. Currently he acts as coordinator on the FP7 HARNESS project on heterogeneity in cloud computing. He previously held an Endowed Chair at the University of Colorado at Boulder, where he coordinated several multi-million-dollar DARPA and AFRL projects. He has published in the areas of software engineering, distributed systems, and networking (Hirsch index: 44), in venues that include ICSE, SIGCOMM, PODC, Middleware, and SIGMOD, and the journals TOCS, TOSEM, TOPLAS, TSE, and TKDE. He is known for his seminal work in software architecture, distributed publish/subscribe communication, and content-based networking. Prof. Wolf is a Fellow of the ACM, the IEEE, and the BCS.
Cloud&Heat Technologies is a startup company focusing on environmentally friendly and cost-effective cloud computing. Cloud&Heat Technologies is developing a dark cloud server system that can be directly coupled to the heating system of a residential home or office building. With this technology Cloud&Heat Technologies aims to establish a new era of environmentally friendly computing with efficient heat recovery. Cloud&Heat Technologies has been one of the five finalists of the German Industry Innovation Award ("Innovationspreis der deutschen Wirtschaft") in the startup category. The German Innovation Award is one of the oldest and most important German innovation prizes.
One of the main contributions of CHT will be to support the consortium in aligning the infrastructure-related aspects of interconnected secure enclaves with practical and concrete technological requirements, especially performance-related requirements. CHT will contribute comprehensive technical expertise in the domain of cloud computing, with a specific focus on high availability, load balancing, and security, including implementation-level knowledge of many associated protocols in large-scale infrastructures. This expertise is complemented by comprehensive experience in the field of software automation and configuration management. CHT will help to select and to leverage operating-system-level mechanisms for realizing and exposing secure enclaves to applications (WP1). Furthermore, it will continuously monitor the developed conceptual and architectural specification of secure enclaves and, where necessary, align it with technological requirements. CHT will contribute to the protocol and communication mechanisms for distributed secure enclaves (WP2) by providing its in-depth knowledge of state-of-the-art distributed coordination services and high-availability approaches. Furthermore, a specific focus in this work package will be directed to deployment mechanisms for secure enclaves. In this field, CHT can support the consortium by leveraging its extensive software automation competence. For managing secure and reactive cloud applications (WP4), CHT will especially contribute to investigating and establishing geo-local deployment policies. It will support the design and implementation of a request routing infrastructure that incorporates data placement information. Besides these conceptual and technological tasks, CHT will support the validation and evaluation activities by integrating SERECA into its highly decentralized infrastructure, rendering realistic test and evaluation setups possible (WP4).
To guarantee the project's sustainability, CHT will continuously review the commercial exploitation potential of the project's state (WP5). Furthermore, CHT intends to contribute to scientific publications, enriching this work especially with real-world evaluation setups.
Cloud&Heat Technologies participates in the following EU funded FP7 projects: LEADS and ParaDIME. Cloud&Heat Technologies will bring to the project its knowledge in designing, building, and running micro-data centres and in particular, its expertise as a cloud infrastructure provider. It has designed, implemented, and operated an in-house data center coordination system.
Cloud&Heat Technologies operates a Cloud platform (OpenStack) on a highly distributed data center. The data center is spread over dozens of sites in Germany. The data centres vary from small sites covering less than twenty physical machines to sites with several hundred machines equipped with up-to-date hardware. As well as the hardware, the network infrastructure at these sites is fully operated by and under control of Cloud&Heat Technologies. The sites are redundantly connected to the Internet with up to several Gigabit up-link speed. Software installation at and monitoring of the sites is fully automated using state-of-the-art tool chains. Parts of the overall data center infrastructure will be made available within the SERECA for development and evaluation purposes. Furthermore, Cloud&Heat Technologies offers to apply individual modifications to the data center infrastructure within the scope of SERECA.
Dr. Jens Struckmeier (M)
Is a founder and managing director of Cloud&Heat Technologies GmbH, a startup company focusing on environmentally friendly and cost-effective cloud computing. Since 2009, he has fully committed himself to the company and has energetically planned several green and low-energy building projects based on the "passive house" concept. Between 2004 and 2009, he successfully managed the company nAmbition GmbH, a nanobiology startup focused on automated force spectroscopy instrument and software development. During the years 2001-2004, he was in charge of the force spectroscopy group at Veeco Instruments, CA, USA, and developed the successful PicoForce MultiMode atomic force microscope. After studying semiconductor and laser physics at RWTH Aachen and the University of Marburg, he obtained his Ph.D. in Physics in 2001, investigating the mechanoreceptor of bone cells in zero gravity and under micro-stimulation.
Dr. Marius Feldmann (M)
Has received his Diploma (2007) and his doctorate (2011) from the Technische Universität Dresden. He has worked in several industry-related research projects including the FP7 project ServFace (2008-2010). His professional interests are focused on computer networks and distributed systems. His research interests are mainly directed at network protocols, including approaches for delay-tolerant networking. He gives a lecture at the Technische Universität Dresden on practical aspects of computer networks. Dr. Feldmann has a specific interest in the exploitation activities of research projects. Besides being involved in startup activities himself, during the past years he has provided guidance to various startups, mainly originating from an academic context. Since August 2013 he has worked for Cloud&Heat Technologies GmbH, focusing on topics in the scope of network infrastructure and network management, including security-related topics.
Dr. Anja Strunk (F)
Studied computer science at Technische Universität Dresden and graduated in 2006. Her research interests are energy efficiency and quality of service of IT services, and cloud services in particular. Anja Strunk received her PhD in 2010. In her PhD thesis she developed a model, based on stochastic processes, that evaluates the quality-of-service reliability of composite IT services.
Epsilon s.r.l is a small ICT firm based in Italy, whose mission is to design and operate complex big data and cloud computing solutions for medium to large enterprises, and to develop web applications. It has a strong commitment to research activities and to implement cutting edge technologies. Also, it has extensive experience in participating and managing European Union and National R&D projects. EPS has tight business partnerships with prestigious players like Google and Amazon Web Services, having been the first Google Enterprise reseller for Southern Italy and one of the first AWS technology partners in Italy. It also has strong expertise in security and networking technologies, where it provides clients with consulting services and systems development.
EPS will develop a cloud-based application for real-time monitoring of a Water Supply Network by (i) extending its DaMon solution with the functions needed for monitoring the additional assets (i.e., other than dams) of a Water Supply Network and (ii) integrating the existing applications currently used by EIPLI for monitoring (small) parts of its Water Supply System. The application will be deployed, tested, validated, and evaluated on top of the SERECA framework. The focus of the experimental activities will be on security- and reactivity-enhancing features.
EPS will work with one of the use cases of the proposal, namely the Water Supply Network Monitoring (WSNM) pilot, with the following goals: (i) requirements specification for the design of the SERECA framework, (ii) definition of relevant metrics and Key Performance Indicators (KPIs) for the WSNM pilot, and (iii) definition of a validation plan for the SERECA framework with respect to the needs of the WSNM pilot. EPS will contribute to the requirements identification (as part of WP1, WP2, and WP3) and will play a major role in WP4, where the use case application will be developed and validated. EPS will also lead WP5 (Dissemination, Exploitation, and Collaboration).
Recent project participations in EU FP7 include STREAM, to realize a streaming middleware based on complex event processing technologies able to process massive amounts of data in real time; MASSIF, to provide innovative techniques to enable the detection of upcoming security threats and trigger remediation actions even before the occurrence of possible security incidents; and SRT-15, to bridge the gap between cloud infrastructures and enterprise services by building a distributed service platform. During its activities in these projects, EPS has gained competences in the fields of CEP (STREAM, SRT-15), SIEM technologies (MASSIF), cloud applications, and web services (STREAM, SRT-15). One of EPSILON's traditional lines of business is specifically related to enterprise security monitoring. EPS has an increasing interest in cloud technology, as highlighted by the fact that it was the first company in Southern Italy to become a Google Apps for Work reseller.
Epsilon will bring to the project its dam monitoring application DaMon and the instrumentation needed to build an in-lab test-bed of a SCADA system. The devices contributed to the project are:
Luigi Romano (M)
Is one of the co-founders of EPSILON. He is an expert in system security and dependability. In FP7, he has been the Technical Coordinator of the INSPIRE project, one of the Principal Investigators of both the INSPIRE INCO and the INTERSECTION projects, the Technical Lead (for EPS) of the STREAM and SRT-15 projects, the Technical Lead (for CINI) of the MASSIF project, and the Technical Lead of the SAWSOC project. He was a member of the European Network and Information Security Agency (ENISA) expert group on Priorities of Research On Current and Emerging Network Technologies (PROCENT). He is the Chair of the Cyber Security mission of the Security Research in Italy (SERIT) technology platform.
Danilo De Mari (M)
Is the head of EPSILON business development department. He holds a BSc in Economics from University of Naples, a Master degree in Internet Business from SDA Bocconi and is also qualified as Chartered Accountant. His expertise is in business modeling and early-stage financing of technology businesses. He has managed and coordinated several EU and national R&D funded projects with first-class partners. He sits in many boards of med-tech and ICT companies, is consultant to many startups and is also a director of a well-known Venture Capital firm based in Italy.
Rosario Cristaldi (M)
Is the founder, director and technical chief officer of Epsilon s.r.l. He holds a Master degree in electronics engineering and has served as external professor in Computer Engineering at the University of Naples Federico II. He is consultant for the Italian Association for Computer Machinery (AICA) and advisor for ECDL courses (European Computer Driving License).
Founded in 1993, Red Hat is the premier Linux and open source provider. Rated as CIO Insight Magazine's Most Valued Vendor for the second consecutive year, Red Hat maintains the highest value and reliability rankings among its customers and is the most recognized Linux brand in the world. We serve global enterprises through technology and services made possible by the open source model. Solutions include Red Hat Enterprise Linux operating platforms, sold through a subscription model, and a broad range of services: consulting, 24x7 support, and Red Hat Network. Red Hat's global training program operates in more than 60 locations worldwide and features RHCE, the global standard Linux certification.

Red Hat is the recognized leader in enterprise solutions that take full advantage of the quality and performance provided by the open source model. With Red Hat, enterprise hardware and software vendors have a standard platform on which to certify their technology. We assure the necessary scalability and security of open source software and make mission-critical Linux deployments possible. From deployment, to development, to management, organisations can rely on Red Hat expertise at every step. We offer a full range of Enterprise Linux operating systems, backed by Red Hat Network and comprehensive services.

Red Hat has key industry relationships with top hardware and software vendors such as Dell, IBM, Intel, HP, and Oracle. In June 2002, Red Hat, Oracle, and Dell formally launched a combined Linux effort that includes joint development, support, and hardware and software certification. It was an emphatic declaration that Red Hat Enterprise Linux was truly ready for the enterprise. Red Hat offers a wide range of consulting and engineering services to make enterprise open source deployments successful, from complete Linux migration to client-directed engineering to custom software development. Red Hat has broad expertise in open source technology.
Red Hat is a significant contributor and maintainer of major open source software including Linux, GNU, and Apache Web server. Several Red Hat engineers are prominent open source developers and members of the open source community.
Red Hat has long experience with developing open source enterprise solutions. To add respective support to its portfolio, RH strives to improve Vert.x's readiness for large-scale business deployment. Therefore, RH will take a leading role in defining requirements from an application perspective, resulting in reusable SERECA services for secure cloud applications. Thanks to its wide range of business partners and customers, RH will also contribute to the requirements analysis by gathering practical demands on the security and performance of cloud applications, and by comparing them against SERECA results throughout the project. RH will monitor the progress of SERECA to ensure that the goal of simplified development of complex reactive applications is met. On the technical side, RH will extend Vert.x to include SERECA results. This includes RH's participation in implementing and evaluating services for data storage, data caching, and a keystore. All implementation designs will consider support for geo-locality, i.e., storage and migration can be based on geo-information. RH will moreover work on resilience mechanisms for the recovery of sensitive application data after failures, without endangering their integrity or confidentiality. Because all the features just mentioned should flow into Vert.x, implementing and testing Vert.x support for secure enclaves and their management is part of RH's role in the project. Of course, RH will also engage in implementing the prototypes for the evaluation against the defined use cases. That is, RH will enhance the jPDM SaaS application and support EPSILON in extending its DaMon application for the Water Supply Monitoring use case where needed. Apart from analysis, implementation, and evaluation tasks, RH will participate in all dissemination and management activities to ensure constant progress, timely deliverables, budget compliance, and knowledge transfer.
RH will help to make SERECA publicly known through presentations, workshops, and publications, and to make its results available and accessible for market and research as quickly as possible.
Red Hat has strong experience in contributing to international projects, including projects funded under FP7. Notably, Dr. Little currently participates in the Cloud-TM STREP project on programming paradigms for cloud applications, leveraging the principles of self-optimizing distributed transactional memory. Red Hat also recently participated in the VELOX project with two other partners of the LEADS proposal, UniNE and TUD.
RH operates a substantial infrastructure comprising several racks, with each rack holding either 30 servers with a standard specification or 19 servers with a big data specification. An example of the two specifications is shown below:
Dr. Mark Little (M)
Is CTO of JBoss. Before joining Red Hat, Mark was Chief Architect and co-founder of Arjuna Technologies, where he also led the Arjuna Transactions team. Mark is active in various standards committees, such as W3C WS-Addressing and OASIS WS-TX, and has co-authored a number of Web Services, OMG, and JCP standards.
Tim Fox (M)
Is Senior Principal Engineer at JBoss/Red Hat, where he leads the Vert.x project and reactive programming efforts. Tim has nearly two decades of experience in enterprise middleware, including leading the JBoss HornetQ messaging team.
Norman Maurer (M)
Is a Principal Engineer at JBoss/Red Hat, where he leads the JBoss Netty efforts. Norman has extensive experience with high-performance distributed systems and is also a key member of the Vert.x development team.
Nick Scavelli (M)
Is a Senior Software Engineer at JBoss/Red Hat. He has been a key member of the JBoss Portal efforts (such as GateIn), working with the OpenShift team for public PaaS and recently a member of the Vert.x team.
Founded in 2012, jClarity is a young start-up in the Java/JVM performance monitoring and analytics space. Its primary SaaS offering, Illuminate, is aimed at the emerging "Java/JVM in the cloud" market. Illuminate uses machine learning techniques to locally identify the root cause of a performance bottleneck, which is then reported back to a distributed dashboard service. The Illuminate service is available 24/7 worldwide, and there is a strong requirement to hold customer data close to their users' geographical location, in accordance with the data protection laws of member nations. jClarity counts 3 of the top 120 globally recognised Java Champions among its staff and is a significant contributor to the Java ecosystem, leading the Adopt OpenJDK and Adopt a JSR community contribution programmes for Java SE and Java EE.
jClarity will lead WP4 (Validation and Evaluation), where it expects to validate the three core research aims of SERECA. First, JC customers expect their data to be kept cryptographically secure and private in accordance with the data protection laws of their geographic location, so privacy and security are of major concern. Second, SERECA must allow for geo-local computing for the JC use case, because many JC customers need their data to be stored in their member state. Geo-locality must not impair performance, as the SaaS product offering needs to be responsive (low latency) no matter where a user is located around the globe. Third, JC will evaluate SERECA's ability to provide high availability. JC customers need the service to be running 24/7, 365 days a year, which exceeds the availability guarantees given by many cloud service providers. jClarity also expects to lend its Vert.x, SaaS, and SOA design experience to the project to help develop APIs and features that are secure and easy to use by the development community.
jClarity hosts its SaaS offering on a wide range of IaaS cloud providers, including but not limited to Hetzner, Linode, and AWS. It is expected that this infrastructure will be used in WP4 to validate the outcomes of SERECA.
Martijn Verburg (M)
Is the CEO of jClarity. He leads the Adopt OpenJDK and Adopt a JSR programme and is an author and regular speaker/keynoter at international conferences on Java and related topics.
Dr. John Oliver (M)
Is the Chief Scientist at jClarity. John has worked on various platforms for micro-controllers, robots, simulations, desktop applications, and web services. He has previously worked on static analysis tools and is particularly interested in making time consuming tools such as JUnit more efficient. As Chief Scientist, John turns raw data into useful machine learned algorithms and he makes continuous deployment chains just work. John holds a PhD in Engineering from Warwick University for working on algorithms for coordinating mobile robotic teams.
The National Agency for the Development of Irrigation and Latifundium Transformation in Puglia, Lucania, and Irpinia (Ente per lo Sviluppo dell'Irrigazione e la Trasformazione Fondiaria) was established on April 18th, 1947 by decree of the Provisional Head of State (Italy). A legal person under public law, the organization is in charge of water handling in an area of more than 3 million hectares, equivalent to about 10% of the total surface of the nation. Its mission is the solution of the long-standing and serious problem of water supply in its areas of competence. To achieve this goal, EIPLI plans and carries out studies, research, strategic design, implementation, and management of projects aimed at the search for, retrieval, catchment, storage, quality monitoring, and distribution of ever-increasing volumes of water for multiple uses. EIPLI accumulates and distributes approximately 1 billion cubic metres of water yearly, ensuring the supply for civilian use of a population of about 4 million people and the irrigation of about 150,000 hectares of farmland in Southern Italy.
EIPLI will participate in the project as an end-user, providing requirements for the Water Supply Network Monitoring (WSNM) case study, as well as helping to define relevant metrics and Key Performance Indicators (KPIs) for the WSNM pilot. Thus, their major contribution will be to WP4: Evaluation. In addition, EIPLI will contribute to the requirements identification activity (as part of WP1, WP2, and WP3). Finally, EIPLI will also be engaged in WP5 (Dissemination, Exploitation and Collaboration) activities, particularly initiatives for disseminating SERECA results at the local level, aiming to provide evidence – to the local community in general and to the local government in particular – of the important advantages brought about by the SERECA technology.
EIPLI manages a complex Water Supply Network serving a large part of southern Italy (see Figure 13). The managed Water Supply Network is composed of:
Giuliano Cerverizzo (M)
Has been an officer of EIPLI since 1986 and is currently responsible for the dams of Monte Cotugno on the river Sinni (the largest clay dam in Europe), Acerenza on the river Bradano, and Camastra on the river Basento.
We want to provide the SERECA view on security risks related to the use of Intel SGX. In the following, we discuss short-term, medium-term, and long-term risks.
We expect that in the short term more vulnerabilities of Intel CPUs will be discovered and published. These vulnerabilities will most likely include more side-channels that can be used both in native mode and against SGX, and will most likely trigger CPU microcode updates by Intel. A major risk of these microcode updates is that the performance of Intel SGX enclaves will degrade, introducing an even larger overhead over native processing. On the other hand, the microcode updates will also introduce overheads in native mode, and hence it is not clear how the overhead of SGX relative to native mode will change.
Another short-term risk is that microcode updates alone will not be sufficient: we might have to adapt our systems software to cope with the new vulnerabilities. This might not only introduce additional overheads but also reduce the generality of what software can run inside enclaves.
As a medium-term risk, we consider the possibility that other vendors will not provide solutions matching the protection level of SGX. For example, it is becoming increasingly clear that AMD's SEV provides lightweight and fast encryption of virtual machines but offers no resilience against attacks by privileged system software. ARM also appears to be working on novel security extensions, but there is no announced roadmap for when this technology will reach the market or whether it will be competitive with SGX. As a consequence, we envision two scenarios. In the first, major industry players will be reluctant to adopt SGX more widely because it would lead to a classical vendor lock-in situation. In the second, adaptation layers will be designed that enable a uniform interface to different technologies. While this would make it possible to utilize the protection offered by the underlying hardware platform, the common denominator might in the end be the least protective platform. The second scenario has partly become reality with Google's recently published open source project Asylo (https://asylo.dev), which aims precisely at offering a generic framework for enclavised execution.
In the long term, the recently discovered security vulnerabilities related to side channels and speculative execution have had a profound impact on the computer architecture community. Future generations of Intel and ARM CPUs are therefore likely to focus on novel security features, as opposed to mere performance improvements. Especially with the end of Moore's Law and Dennard scaling, it is likely that we will witness more fundamental micro-architectural changes in future CPUs. CPU manufacturers such as Intel, ARM, and AMD will face increased market pressure to demonstrate that similar types of vulnerabilities are impossible in the future.
For trusted execution, this is both a challenge and an opportunity. While we believe that trusted execution will remain a core concept for protecting the confidentiality and integrity of data and computation on CPUs, a more fundamental rethinking of the CPU security architecture may result in a trusted execution model that is more pervasive. If the fundamental abstraction of trusted execution changes, this would have major implications for the software stack, and thus affect the applicability of previous results, such as those developed in the SERECA project. At the same time, the outputs of the SERECA project are timely to inform the discussion about the future of the trusted execution model. Here the SERECA project engages with major stakeholders, including Intel, Microsoft, Google and Huawei, to help shape their thinking on how future versions of trusted execution should be designed.
Publications (title, authors, venue, year):

- PESOS: Policy Enhanced Secure Object Store. Robert Krahn, Bohdan Trach, Anjo Vahldiek-Oberwagner, Thomas Knauth, Pramod Bhatotia, and Christof Fetzer. EuroSys 2018.
- LibSEAL: Revealing Service Integrity Violations Using Trusted Execution. Pierre-Louis Aublin, Florian Kelbert, Dan O’Keeffe, Divya Muthukumaran, Christian Priebe, Joshua Lind, Robert Krahn, Christof Fetzer, David Eyers, and Peter Pietzuch. EuroSys 2018.
- TrApps: Secure Compartments in the Evil Cloud. Stefan Brenner, David Goltzsche, and Rüdiger Kapitza. Workshop on Security and Dependability of Multi-Domain Infrastructures (XDOM0’17), 2017.
- Secure Cloud Micro Services using Intel SGX. Stefan Brenner, Tobias Hundt, Giovanni Mazzeo, and Rüdiger Kapitza. Proceedings of the 17th International IFIP Conference on Distributed Applications and Interoperable Systems (DAIS), 2017.
- Integrating Reactive Cloud Applications in SERECA. Christof Fetzer, Giovanni Mazzeo, John Oliver, Luigi Romano, and Martijn Verburg. 12th International Conference on Availability, Reliability and Security (ARES '17), 2017.
- SGXBOUNDS: Memory Safety for Shielded Execution. Dmitrii Kuvaiskii, Oleksii Oleksenko, Sergei Arnautov, Bohdan Trach, Pramod Bhatotia, Pascal Felber, and Christof Fetzer. Proceedings of the Twelfth European Conference on Computer Systems (EuroSys '17), 2017.
- Glamdring: Automatic Application Partitioning for Intel SGX. Joshua Lind, Christian Priebe, Divya Muthukumaran, Dan O’Keeffe, Pierre-Louis Aublin, Florian Kelbert, Tobias Reiher, David Goltzsche, David Eyers, Rüdiger Kapitza, Christof Fetzer, and Peter Pietzuch. 2017 USENIX Annual Technical Conference (ATC), 2017.
- AsyncShock: Exploiting Synchronisation Bugs in Intel SGX Enclaves. Nico Weichbrodt, Anil Kurmus, Peter Pietzuch, and Rüdiger Kapitza. European Symposium on Research in Computer Security (ESORICS), 2016.
- SCONE: Secure Linux Containers with Intel SGX. Sergei Arnautov, Bohdan Trach, Franz Gregor, Thomas Knauth, Andre Martin, Christian Priebe, Joshua Lind, Divya Muthukumaran, Daniel O’Keeffe, Mark L. Stillwell, David Goltzsche, David Eyers, Rüdiger Kapitza, Peter Pietzuch, and Christof Fetzer. USENIX Symposium on Operating Systems Design and Implementation (OSDI), 2016.
- SecureKeeper: Confidential ZooKeeper using Intel SGX. Stefan Brenner, Colin Wulf, Matthias Lorenz, Nico Weichbrodt, David Goltzsche, Christof Fetzer, Peter Pietzuch, and Rüdiger Kapitza. USENIX International Conference on Middleware, 2016.
- A Secure Cloud-Based SCADA Application: the Use Case of a Water Supply Network. Gianfranco Cerullo, Rosario Cristaldi, Giovanni Mazzeo, Gaetano Papale, and Luigi Sgaglione. Conference on Intelligent Software Methodologies, Tools and Techniques (SOMET), 2016.
- VeCycle: Recycling VM Checkpoints for Faster Migrations. Thomas Knauth and Christof Fetzer. Middleware '15: Proceedings of the 16th Annual Middleware Conference, 2015.
- ControlFreak: Signature Chaining to Counter Control Flow Attacks. Sergei Arnautov and Christof Fetzer. 34th IEEE Symposium on Reliable Distributed Systems (SRDS), 2015.
The article was published both in the printed newspaper and on the official website.
The Journal
The newspaper “Il Denaro” is the official journal of the “Unione Industriali di Napoli”, the local branch of “Confindustria”, the general confederation of Italian industry. In total, Confindustria represents 115,000 companies and 4,300,000 employees. The journal is sold together with “Il Sole 24 Ore”, the best-selling economic newspaper in Italy.
The target audience of this journal is principally industrialists and investors who could see advantages in using the technologies offered by SERECA. They could appreciate the innovation in terms of security and thus rely on cloud computing more than they did before. This serves the main objective of the Secure Enclaves for REactive Cloud Applications (SERECA) project: to be a thrust for the adoption of cloud computing where there is a lack of confidence in this technology.
Content of the Press Release
The content of the published press release is divided into three main sections:
The first section introduces the problem of security in cloud computing and the impact this situation currently has on companies around the world.
The second section briefly describes the project's goal and its funding programme (i.e., Horizon 2020).
Finally, the third section explains the role of EPSILON in SERECA and how the pilot application will contribute to validating the project.
TheServerSide.com is an online community for enterprise Java architects and developers, which provides daily news. Some of the SERECA industrial partners (namely Red Hat and jClarity) released to the website a description of the SERECA objectives and of their role in the project.
They also presented one of the two pilot applications, named Illuminate, which is used in the cloud context to monitor an organisation’s applications (more information on Illuminate is available at http://www.jclarity.com/illuminate/).
Many companies want to migrate their IT infrastructure to cloud platforms. However, in some cases security issues hamper this migration. This is particularly true for the Critical Infrastructure (CI) domain, a fundamental branch of society. CIs encompass assets essential to the functioning of every country's fundamental facilities, such as energy, telecommunications, water supply, transport, finance, and health. Unlike other sectors, cloud technologies are still far from being widely adopted in CIs. CIs are increasingly the target of terrorist cyber-attacks, as demonstrated in recent years (e.g., “Black Energy 3” in 2015, or “Havex” in 2014). The disclosure or manipulation of CIs' sensitive data may have a devastating impact on society at large. Hence, in order to migrate CIs to the cloud, new advanced hardening mechanisms are certainly needed.
In the context of the SERECA project, we demonstrate how Intel SGX, jointly with Vert.x, can provide unprecedented security for a CI use case: a Water Supply Network Monitoring (WSNM) application, named RiskBuster. The WSNM administration (EIPLI) is in charge of the management of seven dams in southern Italy. It wants to migrate the monitoring infrastructure to the cloud, but needs guarantees regarding data confidentiality and, most importantly, data integrity.
In such a scenario, Vert.x and Intel SGX are a perfect combination, and SERECA provides the platform able to leverage them. The WSNM is: 1) easily deployable across the different dams and the cloud environment; 2) highly scalable under peaks of sensor measurements; 3) highly available in the face of failures; and 4) high-performing in collecting, processing, and providing sensor data.
From a security point of view, RiskBuster leverages the security features of Intel SGX through a number of SERECA components, namely:
Starting from the left of the figure reported above, the Data Collector (DC) verticle sends sensor measurements from the dam to the MaaS cloud through a TLS-secured VPN. Data arrives at the cloud, where a gateway verticle receives it and publishes it on the Vert.x Event Bus. Based on a protected configuration file, this verticle decides whether data should be published on the secure event bus, and thus directed to secure verticles, or on the non-secure event bus. The other running secure micro-services (or verticles) are subscribed to the secure bus and periodically receive measurement messages. According to the secure event bus design, message payloads are encrypted with pre-configured, route-based keys before being sent.
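The route-based payload encryption described above can be sketched as follows. This is an illustrative sketch, not the actual SERECA API: class and method names are hypothetical, and AES-GCM is an assumed cipher choice that both encrypts the payload and authenticates it against tampering.

```java
import java.security.SecureRandom;
import java.util.Map;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

// Hypothetical sketch: each event-bus address ("route") maps to a
// pre-configured symmetric key; payloads are encrypted with that key
// before publication on the secure bus.
public class RouteCrypto {
    private final Map<String, SecretKey> routeKeys; // pre-configured per route
    private final SecureRandom rng = new SecureRandom();

    public RouteCrypto(Map<String, SecretKey> routeKeys) {
        this.routeKeys = routeKeys;
    }

    // Encrypts a payload for a route: a fresh 12-byte IV is prepended
    // to the AES-GCM ciphertext so the receiver can decrypt.
    public byte[] encrypt(String route, byte[] payload) throws Exception {
        SecretKey key = routeKeys.get(route);
        byte[] iv = new byte[12];
        rng.nextBytes(iv);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ct = c.doFinal(payload);
        byte[] out = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return out;
    }

    // Decrypts a message for a route, splitting off the prepended IV.
    public byte[] decrypt(String route, byte[] message) throws Exception {
        SecretKey key = routeKeys.get(route);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, message, 0, 12));
        return c.doFinal(message, 12, message.length - 12);
    }

    public static SecretKey newKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        return kg.generateKey();
    }
}
```

Only verticles configured with the key for a given route can read its traffic, which is the property the secure event bus relies on.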
The Alarm Management System (AMS) is a verticle which receives on-field data, enforces a Complex Event Processing (CEP) to detect possible alarm conditions, and sends alarm notifications to the web proxy verticle. Such a micro-service, therefore, is highly critical for the WSNM security. Attackers could hacked the measurements or the alarm notifications in order to hide a particular situation occurring on the dam. For this reason, we classified the AM as a Secure Verticle, which from now on securely processes data into a SGX enclave and transmits sealed-encrypted alarms through the secure event bus. Data is received from the gateway, through the secure bus, directly into the enclave using the JNI bridge. Vice versa, if an alarm is detected, the secure AM verticle sends to the Web Proxy a point-to-point message generated within the enclave and sent to the Vert.x secure bus through the JNI-bridge.
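To give a flavour of the kind of rule the AMS evaluates, here is a minimal sketch of a moving-average threshold check over recent sensor readings. The class and parameter names are hypothetical; the actual CEP engine inside the enclave is considerably more sophisticated.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical alarm rule: raise an alarm when the moving average of
// the last `windowSize` readings exceeds `threshold`. Inside the AMS
// this evaluation would run within the SGX enclave.
public class AlarmDetector {
    private final Deque<Double> window = new ArrayDeque<>();
    private final int windowSize;
    private final double threshold;

    public AlarmDetector(int windowSize, double threshold) {
        this.windowSize = windowSize;
        this.threshold = threshold;
    }

    // Feeds one reading into the sliding window; returns true when the
    // window is full and its average crosses the threshold.
    public boolean onReading(double value) {
        window.addLast(value);
        if (window.size() > windowSize) {
            window.removeFirst();
        }
        double avg = window.stream()
                .mapToDouble(Double::doubleValue)
                .average()
                .orElse(0.0);
        return window.size() == windowSize && avg > threshold;
    }
}
```

Because an attacker who can fake readings or suppress this boolean could hide a dam failure, the evaluation and the resulting notification must both stay inside the enclave.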
Two additional verticles manage the storage services. The archiver verticles are in charge of all storage activities. On the basis of EIPLI requirements, RiskBuster provides two secure storage systems, for historical and recent data: MySQL and MongoDB, respectively. Since the security of data-at-rest is of paramount importance, we developed these micro-services as secure verticles. As before, data is received from the gateway, through the secure bus and JNI bridge, into an enclave, and then sent to the storage systems. Currently, we still use the non-secured versions; we are about to integrate the SCONE secure containers that will host the SGX-enabled storage services. Sensor measurements and historical alarms will always be kept sealed and encrypted in both stores.
Moreover, an additional verticle is needed to interface the application back-end with the front-end, i.e., the web-based dashboard. This micro-service is responsible for providing real-time data, alarm notifications, and stored data to the graphical interface. At the same time, it forwards user requests to the appropriate verticle that will carry out the needed service. As attackers may sniff or alter the data travelling to the web browser, the web proxy micro-service has been equipped with a Secure SockJS bridge that encrypts data using a key shared with the browser.
Illuminate (formerly known as jPDM) is an Application Performance Management (APM) solution which is delivered to clients via Software as a Service (SaaS). jClarity (JC) would like to strengthen the security of Illuminate (hosted on public cloud providers) by adding secure components based on the SERECA Cloud Platform. This would enable JC to store, retrieve, and process the sensitive data used in the Illuminate service.
This will give jClarity’s data-sensitive customers the added confidence to move from their on-premise installations of Illuminate onto jClarity’s publicly hosted service. The business benefit is drastically reduced infrastructure and support costs for both JC and its customers.
Specifically, JC wishes to protect its application data and processing from malicious users who have root access at its cloud provider. These users could come in the form of rogue staff members at the cloud provider who have administrative access, as well as external attackers who gain access to the underlying operating system through a 0-day or other exploit such as Shellshock (https://en.wikipedia.org/wiki/Shellshock_(software_bug)).
In the current incarnation of Illuminate, JC relies on public/private-key encryption to secure sensitive data flows and processing. The keys, and the exchange of those keys, are currently vulnerable to attackers who have access to memory on the cloud provider. The SERECA Cloud Platform will provide a secure enclave where key storage and exchange can occur in an encrypted memory space that an attacker cannot read.
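The key-exchange step that the enclave protects can be illustrated with a plain Diffie-Hellman agreement. This is only a sketch of the general pattern, not Illuminate's actual scheme: without an enclave, the private keys and the derived secret below sit in plain host memory, readable by anyone with root access; inside an enclave they exist only in encrypted memory.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PublicKey;
import javax.crypto.KeyAgreement;
import javax.crypto.interfaces.DHPublicKey;
import javax.crypto.spec.DHParameterSpec;

// Sketch of a Diffie-Hellman key agreement: two parties derive the
// same shared secret from their own private key and the peer's
// public key. The sensitive values are exactly what enclave memory
// encryption is meant to hide from the host.
public class KeyExchange {
    // Generates a fresh DH key pair with default 2048-bit parameters.
    public static KeyPair newPair() throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("DH");
        kpg.initialize(2048);
        return kpg.generateKeyPair();
    }

    // Generates a second pair over the same DH group parameters, so
    // the two parties can agree on a secret.
    public static KeyPair matchingPair(KeyPair other) throws Exception {
        DHParameterSpec params = ((DHPublicKey) other.getPublic()).getParams();
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("DH");
        kpg.initialize(params);
        return kpg.generateKeyPair();
    }

    // Derives the shared secret from our private key and their public key.
    public static byte[] sharedSecret(KeyPair mine, PublicKey theirs) throws Exception {
        KeyAgreement ka = KeyAgreement.getInstance("DH");
        ka.init(mine.getPrivate());
        ka.doPhase(theirs, true);
        return ka.generateSecret();
    }
}
```

Both sides compute the same secret, which can then seed the session keys protecting Illuminate's data flows.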
This would add another strong layer to jClarity’s many other layers of security, which include anonymising data, SSL-secured WebSocket connections, optional VPNs, and other common security practices for SaaS applications. With this added layer, JC can provide the same level of security (or better!) that a customer could achieve in house, from the CPU right through to the end user.
A secondary goal is to allow the Illuminate service to be hosted on multiple cloud providers in a secure manner; that is, JC would like to see the SERECA Cloud Platform become broadly available across popular cloud providers so that it can remain independent of any single provider.
One of the main SERECA outputs is SCONE (available here), a platform to build and run secure applications with the help of Intel SGX (Software Guard eXtensions). In a nutshell, the objective of SCONE is to run applications such that data is always encrypted, i.e., all data at rest, all data on the wire, and all data in main memory. The most important feature of SCONE is its ease of use: applications do not need to be modified to be secured with SGX.
The research work on SCONE led to the foundation of a company, namely Scontain.
SCONE provides applications with secrets in a secure fashion. Why is that a problem? Say you want to run MySQL and you configure MySQL to encrypt its data at rest. To do so, MySQL requires a key to decrypt and encrypt its files. One can store this key in the MySQL configuration file, but this configuration file cannot itself be encrypted, since MySQL would need a key to decrypt it. SCONE helps developers solve such configuration issues in the following ways:
Secure Configuration Files. SCONE can transparently decrypt encrypted configuration files, giving access to the plain text only to a given program, such as MySQL. No source code changes are needed for this to work.
Secure Environment Variables. SCONE gives applications access to environment variables that are not visible to anybody else – even users with root access or the operating system. Why would I need this? Consider the MySQL example from above: you can pass user passwords to MySQL via environment variables like MYSQL_ROOT_PASSWORD and MYSQL_PASSWORD. We need to protect these environment variables to prevent unauthorized access to the MySQL database.
Secure Command Line Arguments. Some applications might not use environment variables but command line arguments to pass secrets to the application. SCONE provides a secure way to pass arguments to your application without other privileged parties, like the operating system, being able to see the arguments.
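The application-side pattern behind the last two mechanisms can be sketched in a few lines. `resolve` is a hypothetical helper, not part of SCONE: it shows that the application simply reads its secret from a command line argument or an environment variable as usual, which is exactly why no code changes are needed when SCONE delivers those values into the enclave encrypted.

```java
// Hypothetical helper illustrating how an application consumes a
// secret: prefer a command-line override ("--name=value"), otherwise
// fall back to the environment variable of the same name. Under SCONE
// this code is unchanged; only the delivery channel is protected.
public class SecretSource {
    public static String resolve(String name, String[] args) {
        String prefix = "--" + name.toLowerCase() + "=";
        for (String a : args) {
            if (a.startsWith(prefix)) {
                return a.substring(prefix.length());
            }
        }
        return System.getenv(name);
    }
}
```

For example, `resolve("MYSQL_PASSWORD", args)` returns the value of a `--mysql_password=...` argument if present, and otherwise whatever the (SCONE-protected) environment provides.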
SCONE helps developers and service providers (i.e., companies operating applications accessible via the Internet) protect the confidentiality and integrity of their applications – even when running in environments that cannot be completely trusted. SCONE’s focus is on supporting programs running inside containers, such as microservice-based applications and cloud-native applications. However, SCONE can protect most programs running on top of Linux.
SCONE supports developers and service providers to ensure end-to-end encryption in the sense that data is always encrypted, i.e., while being transmitted, while being at rest and even while being processed. The latter has only recently become possible with the help of a novel CPU extension by Intel (SGX). To reduce the required computing resources, a service provider can decide what to protect and what not to protect. For example, a service that operates only on encrypted data might not need to be protected with SGX.
SCONE supports strong application-oriented security with a Docker-like workflow, i.e., SCONE supports Dockerfiles as well as extended Docker compose files. This simplifies the construction and operation of applications consisting of a set of containers. It fits, in particular, modern cloud-native applications consisting of microservices, where each microservice runs in either a standard or a secure container.
The Docker Engine itself is not protected, but, like the operating system, it never sees any plaintext data. This allows the Docker Engine or Docker Swarm to be managed by a cloud provider: SCONE helps service providers ensure the confidentiality and integrity of the application data, while the cloud provider ensures the availability of the service. For example, with the help of Docker Swarm, failed containers will automatically be restarted on an appropriate host.
Today, software strategy is central to business strategy. To stay competitive, organizations, such as Red Hat customers, need customized software applications to meet their unique needs. These needs arise from customer engagements, new product development, and even internal services. Therefore, the ability to speed up application development, testing, delivery, and deployment is becoming a necessary business competency. Red Hat OpenShift Application Runtimes (RHOAR) helps organizations use the cloud delivery model and simplify continuous delivery of applications and services on the Red Hat OpenShift platform.
RHOAR provides a native experience for several runtime technologies on top of OpenShift, an enterprise-grade Kubernetes distribution. RHOAR includes Eclipse Vert.x, the toolkit used to develop SERECA applications, as well as Spring Boot, WildFly Swarm, and Node.js.
While RHOAR does not use the SGX technologies, it ships a number of technologies developed in the SERECA project. First, several services included in the SERECA application framework have been externalized and productized, including the configuration service, service discovery, and the secure event bus (TLS-based transport). In addition, the health check and circuit breaker resilience patterns are also included in RHOAR. The work done on cluster management has also directly impacted the RHOAR product: while it does not use ZooKeeper or SecureKeeper, the concepts and hardening have been integrated into RHOAR. RHOAR has also benefited from features of the Vert.x ecosystem funded by the SERECA project, such as fail-over, HTTP/2, and gRPC.
Our project, in the context of ARES 2017 (https://www.ares-conference.eu), contributed to the organization of the SECPID 2017 workshop and presented a paper on the integration work conducted in SERECA.
SERECA participated in the Net Futures 2017 conference, presenting what the project has achieved so far and the outputs it will produce.
The Net Futures 2017 conference also hosted the Concertation meeting of H2020 projects in the “Cloud and software” unit. For any information on the event please follow this link.
Prof. Christof Fetzer was at CrossCloud 2017 for a keynote on “Secure Distributed Application Configuration”
Prof. Christof Fetzer was invited for a talk at LADIS2017 (Large-Scale Distributed Systems and Middleware) on “SCONE: Secure Container Environments with Intel SGX”.
On April 4th in Dresden, Prof. Christof Fetzer gave a keynote at the DevDays event on the secure containers developed in the context of SERECA. More than 500 developers participated in the event.
SERECA received the best paper award at the EuroSys 2017 conference for the following paper:
Dmitrii Kuvaiskii, Oleksii Oleksenko, Sergei Arnautov, Bohdan Trach, Pramod Bhatotia, Pascal Felber, and Christof Fetzer. 2017. SGXBOUNDS: Memory Safety for Shielded Execution. In Proceedings of the Twelfth European Conference on Computer Systems (EuroSys '17). ACM, New York, NY, USA, 205-221.
On February 24th, 2017, Prof. Dr. Christof Fetzer and Dr. Thomas Knauth visited Amazon’s offices in Dresden, Germany to reprise the Secure Linux CONtainer Environment (SCONE) talk from last year’s OSDI. Amazon’s team in Dresden works hard to provide a secure hypervisor and OS platform, which forms the foundation on which Amazon Web Services (e.g., EC2) run. Naturally, the engineers were keen to learn more about how SCONE provides secure containers on top of an otherwise untrusted cloud infrastructure.
The hour-long talk and the discussion that followed were well received. Questions revolved around how easy it is to maintain SCONE in the face of upstream changes to the open source building blocks it uses, general questions about SGX, and what kinds of attacks are still possible.
For more info about SCONE, have a look at the paper “SCONE: Secure Linux Containers with Intel SGX”.
The SERECA consortium has now open-sourced the Secure ZooKeeper (SecureKeeper) project; have a look at the Open Source section.
The European Commission is asking the research community which cloud topics they would like to see addressed in the next H2020 work programme.
The consultation is available at https://ec.europa.eu/digital-single-market/en/news/consultation-cloud-computing-research-innovation-challenges-wp-2018-2020 and will close on 10 October 2016.
Action acronym: SERECA
Action full title: “Secure Enclaves for REactive Cloud Applications”
Objective: ICT-07-2014: Advanced Cloud Infrastructures and Services
Grant agreement no: 645011