What we do

The BigHPC project aims to improve the management of HPC data centers and of the Big Data applications they support, through the following novel features:

Efficient Monitoring of Large-Scale HPC Infrastructures

Improved Resource Isolation and Usage Through Virtualization Techniques

Software-Defined Storage

Improved End-to-End Storage Performance for Data-Centric Workloads

A Management Framework for Consolidated Big Data and HPC

BigHPC will simplify the management of HPC infrastructures supporting Big Data and parallel computing applications. The project will have a direct impact on science, industry and society, by accelerating scientific breakthroughs in different fields and increasing the competitiveness of companies through better data analysis and improved decision-support processes.

Who we are

The BigHPC consortium is composed of six partners from academia and industry.

News and Events

Container Orchestration on HPC Platforms

The last decade witnessed a new era of software development that allows developers to write applications independently of the target environment by packaging them, along with their dependencies and environment variables, inside containers. Read more…

Monarch system presented at CCGrid 2022

‘Accelerating Deep Learning Training Through Transparent Storage Tiering’ is the title of a new paper presented at the 22nd IEEE International Symposium on Cluster, Cloud and Internet Computing (CCGrid 2022). The paper was written by Read more…

Contact us

Please contact us using the contact form.

General email


Follow us


Get in touch

Your full name.
Your email.
Your message.