Wednesday, March 4, 2015

Performance analysis of MPI+OpenMP Programs with HPCToolkit
The number of hardware threads per processor on multicore and manycore processors is growing rapidly. Fully exploiting emerging scalable parallel systems will require programs to use threaded programming models at the node level. OpenMP is the leading model for multithreaded programming. This tutorial will give a hands-on introduction to using Rice University’s open-source HPCToolkit performance tools to analyze the performance of programs that employ MPI + OpenMP to harness the power of scalable parallel systems. See http://hpctoolkit.org for more information about HPCToolkit.
John Mellor-Crummey, Professor, Computer Science & ECE, Rice University

OCCA: portability layer for many-core thread programming
The OCCA API enables an experienced programmer who is comfortable with programming in OpenCL, CUDA, Pthreads, or OpenMP to write a single implementation of their compute kernels that can be treated at run time as any of these four threading approaches. In this way, the best-performing threading model can be chosen at run time for almost all modern mainstream many-core processors. See http://libocca.org for further background on OCCA.
Tim Warburton, Professor, Computational & Applied Mathematics, Rice University
David Medina, Graduate Student, Computational & Applied Mathematics, Rice University

OpenMP Tutorial
OpenMP has emerged as a popular directive-based approach for shared memory parallel programming. For some applications, a parallel version of an existing sequential code can be created via the insertion of just a few OpenMP directives. Recently, OpenMP has been extended to enable its use on accelerators attached to a host system. In this tutorial we give an introduction to the features and scope of OpenMP.
Barbara Chapman, Professor, Computer Science, University of Houston

Introduction to HDF5 for High-Performance Computing Environments
This tutorial provides an introduction to using HDF5 with HPC applications. Beginning with the HDF5 data model and progressing through serial application development with HDF5 to using HDF5 for parallel I/O, it offers a fast-paced overview of using HDF5 for writing application data in high-performance environments.
Quincey Koziol, Director of Core Software and HPC, The HDF Group


You might also be interested in our 2015 HPC Summer Institute, to be held June 1-4, 2015. The Summer Institute is organized by the Ken Kennedy Institute for Information Technology at Rice University in an effort to address a growing demand for training and education in high-performance computing and scientific programming. While the main driver for the Summer Institute has been participation from the oil and gas industry, the curriculum is broadly applicable to any field engaged in scientific computing where there is a need to harness more of the computing power offered by modern servers and clusters. The Summer Institute offers participants with a wide array of backgrounds the opportunity to be trained in modern programming techniques and tools.
