HAC 2012: International Workshop on Heterogeneous Architectures and Computing

High performance computing (HPC) has evolved significantly over the last decades. The remarkable evolution of networks, the rise of multi-core technology, and the use of hardware accelerators have made it possible to integrate hundreds of thousands of cores into current petaflop machines. Issues beyond raw computational capability (such as memory and power consumption, among others) are becoming increasingly important, and new technological trends that can enhance the performance and scalability of future supercomputers are critical for further progress.

This scenario has led to the emergence of massively parallel, heterogeneous systems composed of a variety of different types of computational units. It seems clear that the road to the next generation of exascale computers runs through the integration of large distributed systems whose computational nodes are in turn composed of multi-core processors with large numbers of cores and hardware accelerators (such as GPUs or FPGAs) that enable substantial speed-ups for certain types of applications.

However, the design and implementation of efficient parallel applications for heterogeneous systems remains a major challenge. The heterogeneity of these systems introduces a number of important factors that render the models and algorithms used in homogeneous parallel systems, in which all nodes are alike, outdated and unable to produce adequate and reliable results. The diverse architectures, interconnection networks, and parallel programming paradigms, as well as differences in the performance of individual components, which also vary across applications and algorithms, have a pervasive impact on algorithm performance and scalability.

Therefore, traditional parallel algorithms, programming environments, development tools, and theoretical models are not directly applicable to the high-performance heterogeneous parallel systems available today. As a consequence, a thorough analysis of these new systems is needed in order to propose new ideas, innovative algorithms and tools, programming paradigms, and theoretical models for working correctly and efficiently with heterogeneous environments.

The workshop is intended to be a forum for researchers working on algorithms, programming languages, tools, and theoretical models aimed at efficiently solving problems on heterogeneous networks.