Three years ago, partners from both sides of the Atlantic came together around a shared problem: HPC infrastructures are increasingly called upon to support Big Data applications, whose workloads differ significantly from those of traditional parallel computing tasks.
Anticipating the challenges ahead, such as the growing difficulty of efficiently managing available computational and storage resources, providing applications with transparent access to those resources, and ensuring performance isolation and fairness across different workloads, BigHPC researchers proposed a novel management framework for Big Data and parallel computing workloads. The framework is designed to integrate seamlessly with existing HPC infrastructures and software stacks.
With the project now coming to a close, these are its main outcomes:
- 3 major prototypes
- 15 scientific publications
- 12 open-source software contributions
These contributions have a direct impact on science, industry, and society by accelerating scientific breakthroughs in different fields and by increasing the competitiveness of companies through better data analysis and improved decision-support processes.