University College London
University College London (UCL), https://www.ucl.ac.uk/, London’s Global University, was established in 1826 and is among the top universities in the UK and worldwide, ranked in joint 5th place in the QS World University Rankings 2014/15. It was also the first university to welcome female students on equal terms with men. Academic excellence and conducting research that addresses real-world problems inform its ethos to this day. UCL academics work at the forefront of their disciplines, partnering with world-renowned organisations such as Intel, BHP Billiton and NASA, and contributing to influential reports for the UN, EU and UK government. UCL’s academic structure consists of 10 faculties, each home to world-class research, teaching and learning in a variety of fields. UCL has 920 professors, more than 5,000 academic and research staff, and a nearly 29,000-strong student community.
The Centre for Computational Science (CCS) at UCL is an internationally leading centre for computational science research using high performance computing. The CCS currently comprises about 20 members and pursues a diverse range of research unified by common computational approaches, from theory and design of algorithms to implementations and middleware on internationally distributed HPC systems. The CCS enjoys numerous successful industrial collaborations with companies such as Unilever, Schlumberger, Microsoft, MI-SWACO and Fujitsu. In the realm of materials simulation, the CCS has been performing internationally leading research on mineral-polymer systems based on molecular scale simulations for more than 15 years. These projects have had a major impact on experimental research. In particular, the design of clay-swelling inhibitors and nanocomposites for use in oil and gas drilling has resulted in patents awarded and others in preparation. In terms of software, the CCS has developed several applications, including the Computational Fluid Dynamics (CFD) code HemeLB, for clinical applications in vascular disorders such as intracranial aneurysms. The UCL team also maintains a second LB code, LB3D, which supports a number of biomedical problems. UCL has extensive experience in the development of software tools to enable multiscale simulations: the Application Hosting Environment [5] enables straightforward and secure access to heterogeneous computational resources, workflow automation, advance reservation and urgent computing; MML [6], MPWide [7] and MUSCLE 2 [8] (used e.g. in the MAPPER project, http://www.mapper-project.eu) provide a means to deploy and run coupled models on production computing infrastructures.
Role in the project:
UCL leads the overall project and has taken a substantial role in WP1: Management, particularly in coordination of the consortium, which it has been leading; and in WP4: Application Development, through its two exemplar applications, BAC (Binding Affinity Calculator) and Multiscale Materials Modelling. UCL will play a major role in WP2 (VVUQ primitives and formalisms) and WP3 (Multiscale VVUQ Tools) regarding the development and implementation of primitives relevant to the exemplar applications. UCL has also been promoting relevant research outcomes through WP6: Dissemination, for communication to a wider audience and the public.
Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V.
The Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. (MPG), https://www.ipp.mpg.de/en, is represented in this proposal by the Max Planck Institute for Plasma Physics (IPP), one of the largest fusion research centres in Europe, whose main goal is to investigate the physical basis of fusion reactions as a new source of energy production. In addition to two major fusion experiments (ASDEX Upgrade, a medium-size tokamak based in Garching, and W7-X, a large stellarator currently under construction in Greifswald) and two theory divisions, it also houses a joint computing centre (RZG) of the IPP and the Max Planck Society, which offers services for Max Planck Institutes all over Germany. The institute coordinates leading expertise on both experimental and theoretical plasma physics, and drives the development of some of the most advanced simulation codes in this field. IPP is also involved in the Integrated Modelling activities of EUROfusion, which develops a simulation platform composed of a generic set of tools for modelling an entire tokamak experiment.
Role in the project:
IPP has been leading Work Package 4, Applications, providing one multi-scale application and bringing in expertise on aspects of VVUQ.
Brunel University
Brunel University London, https://www.brunel.ac.uk/, is a dynamic institution with over 15,000 students and over 1,000 academic staff operating in a vibrant culture of research excellence. Brunel plays a significant role in the higher education scene nationally and has numerous national and international links and partnerships with both academia and industry. The volume of ‘world-leading’ and ‘internationally excellent’ research carried out at Brunel University London has increased by more than half in the past six years, according to the Research Excellence Framework 2014. Brunel has a long history of successful bidding for European funding and of successfully managing and delivering EU projects. It was partner or coordinator on over 120 FP7 projects with a cumulative value to Brunel of over €40 M, and has already been successful with 36 Horizon 2020 proposals, of which it coordinates 7 projects.
The Department of Computer Science is an interdisciplinary centre that includes researchers with a range of backgrounds including computer science, engineering, mathematics, and psychology. They carry out rigorous world-leading applied research in a range of related topics including software engineering, intelligent data analysis, human computer interaction, information systems, and systems biology. Much of their research relates to two main domains: healthcare/biomedical informatics and digital economy/business. Brunel has long-standing fruitful collaborations with many user organisations, and its researchers publish in top journals, including over 80 papers in IEEE/ACM Transactions between 2008 and 2013. The Department of Computer Science has been lauded by the British Computer Society for its achievements in student project supervision, and its student population has grown over the last two years.
Role in the project:
UBRU has been leading Work Package 3, Multiscale VVUQ Tools, and has been providing expertise on HPC software development and automation. UBRU has also been providing one deep-track multi-scale application as part of WP4, bringing substantial contributions to the specification of the primitives in WP2, and supporting a number of activities in WP6.
Bayerische Akademie der Wissenschaften – Leibniz-Rechenzentrum
The Leibniz Supercomputing Centre (Leibniz-Rechenzentrum, BADW-LRZ), https://www.lrz.de/, is part of the Bavarian Academy of Sciences and Humanities (Bayerische Akademie der Wissenschaften, BADW). BADW-LRZ has been an active player in the area of high performance computing (HPC) for over 20 years and provides computing power on several different levels to Bavarian, German, and European scientists. BADW-LRZ operates SuperMUC, a top-level supercomputer with 155,000 x86 cores and a peak performance of over 3 PFlop/s, as well as a number of general purpose and specialized clusters and cloud resources. In addition, it is a member of the “Munich Data Science Centre”, providing the scientific community with large-scale data archiving resources and Big Data technologies. Furthermore, it operates a powerful communication infrastructure called the Munich Scientific Network (MWN) and is a competence centre for high-speed data communication networks.
BADW-LRZ participates in education and research and supports the porting and optimization of suitable algorithms to its supercomputer architectures. This is carried out in close collaboration with international centres and research institutions. BADW-LRZ is a member of the Gauss Centre for Supercomputing (GCS), the alliance of the three national supercomputing centres in Germany (JSC Jülich, HLRS Stuttgart, BADW-LRZ Garching). BADW-LRZ has decided to extend its application support in a few strategic fields, e.g., life science, astrophysics, geophysics, and energy research. BADW-LRZ operates a centre for big data research, the “Munich Data Science Center – MDSC”, and has established an Intel Parallel Computing Centre. On a European level, BADW-LRZ participates in the European projects PRACE, DEEP, DEEP-ER, AutoTune, Mont-Blanc, Mont-Blanc 2, VERCE, and EESI 2. BADW-LRZ was the leader of the highly successful EU project “Initiative for Globus in Europe – IGE” and is a leading member of EGCF. BADW-LRZ is internationally known for expertise and research in security, network technologies, IT management, IT operations, data archiving, high performance computing and Grid computing.
Role in the project:
The Leibniz Supercomputing Centre has been contributing to the project by providing computing resources and support to project partners. LRZ has also been seeking to transform the results of the project into services of general interest to the scientific community, for example web services or programming libraries. This will contribute to the sustainability of the work completed in the project.
Bull SAS
Bull SAS, https://atos.net/en/, is the newest member of the Atos family. Atos SE is a leader in digital services with pro forma annual revenue of €5.6 billion and 96,000 employees in 72 countries. Serving a global client base, the Group provides Consulting & Systems Integration services, Managed Services & BPO, Cloud operations, Big Data & Cyber-security solutions, as well as transactional services. With its deep technology expertise and industry knowledge, the Group works with clients across different business sectors: Defense, Financial Services, Health, Manufacturing, Media, Utilities, Public sector, Retail, Telecommunications, and Transportation. With 80+ years of technology innovation expertise, the new “Big Data & Cyber-security” Service Line (ATOS BDS) gathers the expertise in Big Data, Security and Critical Systems brought by the Bull acquisition and that already developed by Atos in this domain.
The Service Line is structured into 3 complementary activities: Big Data, Cyber-security and Critical Systems.
- Big Data: extreme performance that unleashes the value of data (detailed below)
- Cyber-security: the expertise of extreme security for business trust
- Critical Systems: the expertise of extreme safety for mission-critical activities.
In recent years, the Bull SAS R&D labs have developed many major products that are recognized for their originality and quality. These include the Sequana supercomputer, which is delivering the first results of the “Bull Exascale Program” announced at SuperComputing 2014; Bullion servers for private Clouds and Big Data; the Shadow intelligent jamming system designed to counter RCIEDs; the libertp tool for modernization of legacy applications; and hoox, the first European smartphone featuring native security. To explore new areas and develop tomorrow’s solutions, Bull SAS R&D is today investing heavily in customers, with whom it has forged many successful technological partnerships, as well as in institutional collaborative programs (such as competitiveness clusters and European projects) and in partnerships with industry (Open Source, consortiums). Bull SAS is involved with the strategy toward HPC in Europe through its active leadership of ETP4HPC and contribution to the Strategic Research Agenda. Already engaged in the race towards Exascale computing, Bull SAS sees in the VECMA project the development of ambitious means to leverage this computing power, and overall an innovative way to deliver high fidelity numerical simulations with guaranteed reliability for numerous scientific domains. In the medium term, the objective of Bull SAS is to create market differentiators. In that respect, promoting the efficiency of its technologies via the use of relevant, adapted and accessible frameworks will help develop its market in the HPC domain worldwide and especially in Europe.
Role in the project:
Bull SAS has been mainly participating and contributing in work packages 2, 3 and 5, and additionally in work packages 1, 4, and 6.
Stichting Centrum Wiskunde & Informatica
Centrum Wiskunde & Informatica (CWI), https://www.cwi.nl/ is the Netherlands’ national research institute for mathematics and computer science. Founded in 1946, it forms a part of the Netherlands Organisation for Scientific Research (NWO). Being located in Amsterdam, the institute has strong international links, and enjoys a global reputation for innovative research. CWI’s strength is discovering and developing new ideas that benefit society and the economy, as well as other scientific areas. The research is rooted in practical, real-life questions and explores essential aspects of modern life, including transport and communication networks, internet security, medical imaging and smart energy systems. Innovations form an integral part of numerous software products, programming languages and international standards.
CWI is at the heart of European research in mathematics and computer science. CWI takes part in many international programmes, including EU Horizon 2020 and EIT Digital. With close ties to industry and the wider academic world, both at home and abroad, it has a sharp focus on the bigger picture and real-world issues. CWI unites 50 tenured and tenure-track researchers, 30 postdocs and 65 PhD students from more than 25 different countries. Many members of the permanent research staff teach at a Dutch university. CWI was the birthplace of the European internet in 1988, having registered one of the first country domains in the world, .nl, in 1986. The first Dutch computer was built there in 1952, and the development of the popular programming language Python was kick-started there in the 1990s. So far, CWI has founded 24 spin-off companies. The Scientific Computing (SC) group develops efficient computational methods for systems with inherent uncertainties. The impact of these uncertainties (for example, due to uncertain model parameters or chaotic behavior) on model outputs and predictions is an important question in many applications. The SC group develops methods to model and compute with uncertainties in an efficient manner. Current research includes numerical algorithms for stochastic differential equations, Monte Carlo methods, uncertainty quantification, data assimilation, and rare event simulation. The computational methods developed in the SC group are aimed at applications in climate science, finance, and energy systems.
Role in the project:
CWI has been contributing to WP2 (development of multiscale UQ algorithms and UQPs) and provides one application (Climate) to WP4.
CBK Sci Con Limited
CBK Sci Con Limited (SME), https://www.cbkscicon.com/, is a consultancy that offers technical and management advice to businesses in e-science domains. CBK sits at the interface between academia and industry, and its main areas of focus include High Performance Computing and Modelling and Simulation across a number of sectors. CBK also facilitates industry access to High Performance Computing facilities and provides the required support to use the infrastructure. CBK is well-connected to the HPC community and has participated in organising conferences and events in the e-infrastructure space.
Role in the project:
CBK has been leading the Dissemination and Exploitation work package, WP6. Dissemination and outreach are an essential part of the success of projects such as VECMA. CBK is well-positioned to lead this task, as it is well-connected to the HPC community and has a dedicated business development front that is experienced in dissemination strategies including event organisation, website construction, social media management, and so forth. CBK has both the domain-specific knowledge and the outreach expertise to conduct dissemination for VECMA.
University of Amsterdam
University of Amsterdam (UvA), https://www.uva.nl/en, is an intellectual hub. It collaborates with hundreds of national and international academic and research institutions, as well as businesses and public institutions. The UvA forges a meeting of minds for the advancement of education and science. The UvA has a long-standing tradition of excellent research. Its fundamental research in particular has gained national and international recognition and won numerous grants. The UvA has 7 faculties, 3,000 academic staff members and 30,000 students, and is one of Europe’s leading research universities.
The Computational Science Lab of the Faculty of Science of the University of Amsterdam aims to describe and understand how complex systems in nature and society process information. The abundant availability of data from science and society drives its research. The lab studies complex systems using methods such as multi-scale cellular automata, dynamic networks and individual agent-based models. Challenges include data-driven modelling of multi-level systems and their dynamics, as well as the conceptual, theoretical and methodological foundations necessary to understand these processes and the associated predictability limits of such computer simulations. The Computational Science Lab has extensive experience in (the management of) EU Framework projects. The lab provides a wealth of experience in computational science, specifically in information processing in complex systems, multiscale modelling and simulations, and applications in the socio-economic domain. Its work on modelling complex systems in general and complex networks in particular, together with long experience in the application of Cellular Automata, Agent-based models and Complex Networks methods, as well as advanced modelling methods, will be crucial to this project.
Role in the project:
UvA is a partner in the project and is WP2 leader. UvA has been bringing in two applications (in the biomedical domain) and is contributing to multiscale UQ algorithms and UQPs.
Institute of Bioorganic Chemistry – Poznan Supercomputing and Networking Centre
Poznan Supercomputing and Networking Centre (PSNC), http://www.man.poznan.pl/online/en/, was established in 1993 as a research laboratory of the Polish Academy of Sciences and is responsible for the development and management of the national optical research network, high-performance computing and various eScience services and applications in Poland. The optical network infrastructure, called PIONIER, is based on dedicated fibres and DWDM equipment owned by PSNC. PSNC has several active computer science research and development groups working on a variety of aspects including: innovative HPC applications, portals, digital media services, mobile user support technologies and services, digital libraries, storage management, tools for network management, optical networks and QoS management. As demonstrated in many international projects funded by the European Commission, PSNC experts are capable of bringing unique IT capabilities to research and e-Science, based on extensive experience in the 5th, 6th and 7th Framework Programmes. Active participation in the design and development of high-speed interconnects and fibre-based research and education networks allows PSNC today to be a key member of the pan-European GEANT optical network connecting 34 countries through 30 national networks (NRENs). PSNC also participates in the biggest scientific experiments, offering access to large-scale computing, data management and archiving services. In addition, PSNC has been engaged in the European initiative of building a high performance computing e-Infrastructure, PRACE, which will result in the provisioning of permanent Petaflops supercomputing installations involving reconfigurable hardware accelerators.
PSNC is also taking an active role in EUDAT, contributing to the development of sustainable data storage, archiving and backup services. Another branch of PSNC activity is the hosting of high performance computers, including SGI, SUN and clusters of 64-bit architecture PC application servers. PSNC has participated in multiple national and international projects (Clusterix, ATRIUM, SEQUIN, 6NET, MUPBED, GN2 JRA1, GN2 JRA3, GN2 SA3). It was also a coordinator of pan-European projects such as GridLab, PORTA OPTICA STUDY and PHOSPHORUS, and took an active part in many other EU projects such as HPC-Europa I/II, OMII-Europe, EGEE I/II, ACGT, InteliGrid, QosCosGrid and MAPPER.
Role in the project:
PSNC leads WP5: Infrastructure, and has been responsible for the integration of applications, tools and components from WP2, WP3 and WP4; providing HPC infrastructure for the deployment of the consistent VECMA platform; validating it with end-users in real-world scenarios; evaluating the platform against established benchmarks; and providing feedback to all Work Packages in order to introduce changes in the software components researched and developed.