Exascale supercomputers, and beyond, are expected to have highly hierarchical architectures, with nodes composed of many-core processors and accelerators. New programming paradigms and languages that allow scientists to express their expertise will have to be proposed. The goal of this talk is to illustrate the interest of multi-level programming paradigms for both dense and sparse linear algebra, combined with auto/smart tuning of several parameters at runtime. We survey several research efforts and results, and conclude that extreme computing will soon demand intelligent methods and new programming paradigms for linear algebra.
We first present auto/smart tuning algorithms and results for iterative sparse Krylov linear algebra methods. We discuss potential adaptations of such algorithms for "unite-and-conquer" asynchronous Krylov methods, showing the interest of multi-level programming paradigms that mix distributed and parallel computing. Then, we introduce YML, which allows one to describe graphs of encapsulated, large-granularity components developed using parallel languages. We illustrate the use of YML with components implemented in the PGAS language XMP on several supercomputers, for dense and sparse linear algebra.
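To make the auto/smart tuning idea for Krylov methods concrete, here is a minimal Python sketch, not the actual algorithms presented in the talk: a restarted GMRES solver whose restart length m is tuned at runtime, growing the Krylov subspace when a cycle fails to reduce the residual sufficiently. The adaptation rule (double m on stagnation, capped at m_max) and all function names are illustrative assumptions.

```python
import numpy as np

def gmres_cycle(A, b, x0, m):
    """One restarted-GMRES cycle: Arnoldi of dimension m + small least squares."""
    n = b.size
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    if beta == 0.0:
        return x0, 0.0
    V = np.zeros((n, m + 1))          # orthonormal Krylov basis
    H = np.zeros((m + 1, m))          # Hessenberg matrix from Arnoldi
    V[:, 0] = r0 / beta
    k = m
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):        # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:       # happy breakdown
            k = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(k + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
    x = x0 + V[:, :k] @ y
    return x, np.linalg.norm(b - A @ x)

def auto_tuned_gmres(A, b, m0=5, m_max=40, tol=1e-8, max_cycles=200):
    """Restarted GMRES with a runtime-tuned restart length:
    if a cycle shrinks the residual by less than 2x, enlarge m (illustrative rule)."""
    x = np.zeros_like(b)
    m = m0
    res = np.linalg.norm(b)
    for _ in range(max_cycles):
        x, new_res = gmres_cycle(A, b, x, m)
        if new_res < tol:
            break
        if new_res > 0.5 * res:       # stagnation detected: grow the subspace
            m = min(2 * m, m_max)
        res = new_res
    return x, m
```

A production smart tuner would monitor more indicators (orthogonality loss, cycle cost, communication volume) and could also shrink m, but the control loop above captures the principle of adapting a solver parameter from observed runtime behavior.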