Keynote Lecture

Program Analysis beyond Closed-form Expressions for Maximum Parallelization

Dean Kleanthis Psarris
School of Natural and Behavioral Sciences
City University of New York-Brooklyn College
Brooklyn, NY 11201
E-mail: kpsarris@brooklyn.cuny.edu

Abstract: Program analysis techniques and accurate data dependence testing enable a compiler to perform safe automatic code optimization and parallelization. It has been shown that factors such as loop-variant and nonlinear expressions limit program analysis, dependence testing, and parallelization. The NLVI-Test and the PLATO library have been introduced as tools that enable exact data dependence testing on nonlinear expressions. Independently of this work, analyses that utilize the Chains of Recurrences formalism have been shown to improve a dependence test's ability to analyze expressions. In this work we present techniques for applying the NLVI-Test ideas in conjunction with Chains of Recurrences analysis, coupling the benefits of both. In addition, we develop a "Parallelization Index," which describes the upper bound on the total parallelization obtainable in a compiler infrastructure. We perform an experimental evaluation of our techniques on several scientific benchmarks. Our experiments show that our techniques discover a larger number of parallel loops in total and, moreover, consistently expose a majority of the obtainable parallelism.
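To make the abstract's central problem concrete, the following sketch (not taken from the lecture; the loop and function names are illustrative) shows the kind of loop-variant, nonlinear subscript that defeats classical closed-form dependence tests, and how a Chains of Recurrences view recovers an exact description of it. The subscript j below follows the CR {0, +, 1, +, 1}, whose closed form is the triangular number i*(i+1)/2; since those values are strictly increasing, the writes never overlap and the loop is safe to parallelize.

```python
def subscript_values(n):
    """Simulate the subscript sequence of a loop of the form:
           j = 0
           for i in range(n):
               j += i        # loop-variant, nonlinear update
               a[j] = ...    # subscript has no affine form in i alone
       and return the j value used at each iteration."""
    values = []
    j = 0
    for i in range(n):
        j += i
        values.append(j)
    return values

def cr_closed_form(i):
    # Chains of Recurrences represent j as the CR {0, +, 1, +, 1};
    # evaluating that CR yields the triangular number i*(i+1)/2.
    return i * (i + 1) // 2

vals = subscript_values(10)
# The CR closed form reproduces the simulated subscripts exactly...
assert vals == [cr_closed_form(i) for i in range(10)]
# ...and the subscripts are strictly increasing, so no two iterations
# write the same element: the loop carries no output dependence.
assert all(vals[k] < vals[k + 1] for k in range(len(vals) - 1))
```

A test based on closed-form expressions alone would give up on this subscript; recognizing the recurrence is what lets an exact test prove independence.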

Brief Biography of the Speaker: Kleanthis Psarris is a Professor of Computer and Information Science and the Dean of the School of Natural and Behavioral Sciences at City University of New York - Brooklyn College. He received his B.S. degree in Mathematics from the National University of Athens, Greece in 1984. He received his M.S. degree in Computer Science in 1987, his M.Eng. degree in Electrical Engineering in 1989, and his Ph.D. degree in Computer Science in 1991, all from Stevens Institute of Technology in Hoboken, New Jersey. His research interests are in the areas of Parallel and Distributed Systems, Programming Languages and Compilers, and High Performance Computing. He has designed and implemented state-of-the-art program analysis and compiler optimization techniques, and he has developed compiler tools to increase program parallelization and improve execution performance on advanced computer architectures. He has published extensively in top journals and conferences in the field, and his research has been funded by the National Science Foundation and the Department of Defense. He is an Editor of the Parallel Computing journal. He has served on the Program Committees of several international conferences, including the ACM International Conference on Supercomputing (ICS) in 1995, 2000, 2006 and 2008, the IEEE International Conference on High Performance Computing and Communications (HPCC) in 2008, 2009 and 2010, and the ACM Symposium on Applied Computing (SAC) in 2003, 2004, 2005 and 2006.