Monitoring in the Big Data era

Monitoring can be described as a three-step process composed of collecting, storing, and alerting. Each of these steps is intrinsically simple and easy to understand: collecting is the process of gathering the necessary data, which can come from a temperature sensor, a RAM usage counter, a power consumption meter, or the number of …
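The collect–store–alert loop above can be sketched in a few lines. This is a minimal illustration, not any particular monitoring tool: the metric source, the 90% threshold, and the bounded history window are all hypothetical stand-ins for real sensors and policies.

```python
from collections import deque

def collect(samples):
    """Gather one data point (here, a stand-in for a RAM-usage reading)."""
    return samples.pop(0)

def store(history, value):
    """Keep a bounded window of recent readings."""
    history.append(value)

def alert(history, threshold):
    """Flag when the latest reading crosses the threshold."""
    return history[-1] > threshold

def monitor(samples, threshold=90.0, window=5):
    """Run the three-step loop over a batch of readings."""
    samples = list(samples)       # avoid mutating the caller's data
    history = deque(maxlen=window)
    alerts = []
    while samples:
        value = collect(samples)  # step 1: collect
        store(history, value)     # step 2: store
        if alert(history, threshold):
            alerts.append(value)  # step 3: alert
    return alerts

print(monitor([42.0, 55.5, 91.2, 60.0, 95.0]))  # → [91.2, 95.0]
```

In a real deployment each step is typically a separate component (an agent, a time-series database, an alert manager), but the control flow is the same.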

Improving Storage QoS for HPC centers

Data-centric applications (e.g., data analytics, machine learning, deep learning) running at HPC centers require efficient access to digital information in order to provide accurate results and new insights. Users typically store this information on a shared parallel file system (e.g., Lustre, GPFS), which is available at HPC infrastructures. This is …

BigHPC Framework’s Vision

Following the challenges addressed in our first blog post, BigHPC will design and implement a new framework for monitoring and managing the infrastructure, data, and applications of current and next-generation HPC data centers. The proposed solution aims to enable both traditional HPC and Big Data applications to be deployed on …