BigHPC will design and implement a novel solution for monitoring and optimally managing the infrastructure, data, and applications of current and next-generation HPC data centers.
To this end, the BigHPC project will produce a solution to efficiently manage parallel and Big Data workloads that:
- combines novel monitoring, virtualization, and software-defined storage components (a conceptual sketch follows this list);
- can cope with the scale and heterogeneity of HPC infrastructures;
- efficiently supports the requirements of different workloads while ensuring overall performance and efficient resource usage;
- can be seamlessly integrated with existing HPC infrastructures and software stacks;
- will be validated through pilots running on both the MACC and TACC supercomputers.
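As a purely illustrative aid, the minimal Python sketch below shows one way a software-defined storage controller could use monitoring data to route a workload's I/O to an appropriate storage tier. All names here (`Workload`, `assign_tier`, the policy table, the thresholds) are hypothetical assumptions made for illustration and do not describe BigHPC's actual components or interfaces.

```python
from dataclasses import dataclass

# Hypothetical workload descriptor; field names are illustrative, not from BigHPC.
@dataclass
class Workload:
    name: str
    io_pattern: str        # e.g. "sequential" (parallel HPC) or "random" (Big Data)
    iops_observed: float   # as reported by a monitoring component (assumed)

# Hypothetical policy table mapping workload behaviour to a storage tier.
POLICIES = {
    "sequential": "parallel-fs",  # e.g. a Lustre-style parallel file system
    "random": "nvme-cache",       # e.g. local NVMe caching for small random I/O
}

def assign_tier(workload: Workload, iops_threshold: float = 10_000) -> str:
    """Pick a storage tier from monitored behaviour (illustrative control loop)."""
    if workload.io_pattern == "random" and workload.iops_observed > iops_threshold:
        return POLICIES["random"]
    return POLICIES["sequential"]

if __name__ == "__main__":
    jobs = [
        Workload("mpi-simulation", "sequential", 800.0),
        Workload("spark-analytics", "random", 25_000.0),
    ]
    for job in jobs:
        print(f"{job.name}: route I/O to '{assign_tier(job)}'")
```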
The project will have access to state-of-the-art HPC infrastructures at the Minho Advanced Computing Centre (MACC) and the Texas Advanced Computing Center (TACC), which will be crucial for developing and validating the project's goals through real use cases and a pilot deployed on both supercomputers.
The project's outcomes will be exploited commercially by Wavecom, which will provide the resulting software framework as a service.
The BigHPC platform will be useful for companies and research centers aiming to support both Big Data and traditional HPC applications on their infrastructures. Better and simpler management of HPC applications and infrastructural resources will have a direct impact on society: it will accelerate scientific breakthroughs in fields such as healthcare, IoT, biology, chemistry, and physics, and increase the competitiveness of companies through better data analysis and enhanced decision-support processes.