First ITC-NU High Performance Computing Division International Seminar


  Date   January 24th, 2024
  Venue   2F Lecture Room, Information Technology Center, Nagoya University
  Program
  • 12:00 - 12:50
  • Lunch at Nagoya University

  • 13:00 - 13:05  Opening Talk
  • Prof. Takahiro Katagiri (Nagoya University)

  • 13:05 - 13:15  Dr. Fang-Pang Lin (National Center for High Performance Computing, Taiwan)
  • Title: NCHC Overview

  • 13:15 - 13:35  Dr. Fang-An Kuo (National Center for High Performance Computing, National Applied Research Labs, Taiwan)
  • Title: Large-Scale CFD Simulations with HPC Containers on NCHC HPC Systems

    Abstract:
    This talk presents a novel computational framework developed at NCHC for large-scale simulation of general physical conservation laws. The CFD framework code, named UNICONES, together with its pre- and post-processing tools, provides high-fidelity, time-accurate flow simulations and allows users to load user-defined plugins that modify the physical conservation laws used for space-time integration. The solver in UNICONES is based on the space-time conservation element and solution element (CESE) method. It implements RANS models such as the SA, SST, and k-epsilon models, as well as a dynamic Smagorinsky subgrid-scale model for large-eddy simulation (LES), and it operates on unstructured meshes. To improve the user experience on HPC systems, the containerization of UNICONES adds further features, including cross-platform execution and automated workflows through the job queuing systems built on HPC systems. A pre-compiled UNICONES code has been integrated into a Singularity HPC container and can execute on multiple platforms without recompilation.
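
    The abstract mentions cross-platform execution of a pre-compiled solver inside a Singularity container. As a minimal, hypothetical sketch of that pattern (the image name "unicones.sif", the solver binary, and its options below are placeholders, not actual NCHC artifacts), a host-side MPI launcher can start each rank inside the container image:

      # Hypothetical sketch: launching a pre-compiled solver from a Singularity
      # image so the same binary runs on different HPC platforms without
      # recompilation. The image, binary, and input file names are placeholders.
      import subprocess

      def run_containerized_solver(image="unicones.sif",
                                   case_file="cavity_flow.yaml",
                                   ranks=4):
          """Build the command that runs the containerized solver under MPI."""
          cmd = [
              "mpirun", "-np", str(ranks),        # host MPI starts the ranks
              "singularity", "exec", image,       # each rank runs inside the image
              "unicones", "--input", case_file,   # placeholder solver invocation
          ]
          print("Would execute:", " ".join(cmd))
          # subprocess.run(cmd, check=True)       # enable on a system with Singularity

      if __name__ == "__main__":
          run_containerized_solver()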

  • 13:35 - 13:45  Break

  • 13:45 - 14:05  Prof. Serge G. Petiton (University of Lille and CNRS, France)
  • Title: Challenges for Distributed and Parallel Very Sparse Matrix Computing

    Abstract:
    Exascale machines are now available, based on several different arithmetics (from 64-bit down to 16- and 32-bit, including mixed-precision versions and some that are no longer IEEE compliant) and on different architectures (with network-on-chip processors and/or accelerators). Brain-scale applications, from machine learning and AI for example, manipulate huge graphs that lead to very sparse non-symmetric linear algebra problems. Moreover, those supercomputers were designed primarily for computational science, mainly numerical simulation, not for machine learning and AI. New applications maturing after the convergence of big data and HPC toward machine learning and AI will probably give rise to post-exascale computing that redefines some programming and application-development paradigms. End users and scientists face many challenges associated with these evolutions and with the increasing size of the data.
    In this talk, after a short description of some recent evolutions with important impacts on our results, in particular concerning programming paradigms, I present results obtained on Fugaku, still the #1 supercomputer on the HPCG list, for sequences of sparse matrix products, with respect to the sparsity and size of the matrices on the one hand and to the number of processes and nodes on the other. Then, I introduce two open-source generators of very large data sets, which allow several methods to be evaluated using very large sparse matrices arising from graphs.
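
    As a point of reference for the kind of kernel discussed above, the following is a small, self-contained sketch (not the speaker's code) of a sequence of sparse matrix-vector products on a very sparse matrix, whose cost is governed by the matrix size and sparsity; the sizes here are tiny compared with the brain-scale graphs mentioned in the abstract:

      # Toy illustration of a sequence of products with a very sparse matrix.
      import time
      import numpy as np
      import scipy.sparse as sp

      n = 200_000        # matrix dimension (illustrative only)
      density = 1e-5     # very sparse: roughly n*n*density nonzeros
      A = sp.random(n, n, density=density, format="csr", random_state=0)
      x = np.random.default_rng(0).random(n)

      t0 = time.perf_counter()
      for _ in range(50):            # sequence of sparse matrix-vector products
          x = A @ x
          x /= np.linalg.norm(x)     # normalize to avoid overflow/underflow
      print(f"50 SpMV on a {n}x{n} matrix with {A.nnz} nonzeros: "
            f"{time.perf_counter() - t0:.2f} s")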

  • 14:05 - 14:25  Prof. Nahid Emad (University of Paris Saclay/Versailles, France)
  • Title: A Parallel and Scalable Approach for High Performance Learning

    Abstract:
    This presentation highlights common characteristics and methods shared by high-performance numerical computing and machine/deep learning. In this context, a new machine learning approach based on the Unite and Conquer methods used in linear algebra will be presented. The important characteristics of this intrinsically parallel and scalable technique make it very well suited to multi-level and heterogeneous parallel and/or distributed architectures. Experimental results demonstrating the benefit of these approaches for efficient data analysis in the cases of clustering, anomaly detection, and road traffic simulation will be presented.
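
    To make the collaboration pattern behind Unite and Conquer concrete, here is a deliberately simplified toy sketch (an illustration of the general idea only, not the speaker's method or implementation): several clustering co-methods iterate independently and periodically "unite" by restarting from the best intermediate centroids found so far:

      # Toy unite-and-conquer-style clustering: independent k-means co-methods
      # periodically share their best intermediate centroids.
      import numpy as np

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(c, 0.5, size=(200, 2)) for c in ((0, 0), (5, 5), (0, 5))])
      k, n_comethods, rounds, local_iters = 3, 4, 5, 3

      def lloyd(data, centers, iters):
          """A few Lloyd (k-means) iterations; returns centers and inertia."""
          for _ in range(iters):
              labels = np.argmin(((data[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
              centers = np.array([data[labels == j].mean(0) if np.any(labels == j)
                                  else centers[j] for j in range(len(centers))])
          inertia = ((data - centers[labels]) ** 2).sum()
          return centers, inertia

      # independent random initializations, one per co-method
      states = [X[rng.choice(len(X), k, replace=False)] for _ in range(n_comethods)]
      for r in range(rounds):
          results = [lloyd(X, c, local_iters) for c in states]
          best_centers, best_inertia = min(results, key=lambda t: t[1])
          # "unite": every co-method restarts from a perturbed copy of the best result
          states = [best_centers + rng.normal(0, 0.05, best_centers.shape)
                    for _ in range(n_comethods)]
          print(f"round {r}: best inertia = {best_inertia:.1f}")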

  • 14:25 - 14:45  Prof. Masatoshi Kawai (Nagoya University, Japan)
  • Title: Dynamic Core Binding for Load Balancing of Applications Parallelized with MPI/OpenMP

    Abstract:
    Load imbalance is a critical problem that degrades the performance of parallelized applications in massively parallel processing. Although hybrid MPI/OpenMP implementations are widely used for parallelization, users must maintain load balancing at both the process level and the thread (core) level for effective parallelization. In this talk, we propose dynamic core binding (DCB) to processes for reducing the computation time and energy consumption of applications. With the DCB approach, an unequal number of cores is bound to each process, and load imbalance among processes is mitigated at the core level. This approach not only improves parallel performance but also reduces power consumption by decreasing the number of cores in use without increasing the computation time.
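
    A minimal, self-contained sketch of the core-allocation idea (an illustrative assumption, not the authors' DCB implementation): given a per-process load estimate, assign an unequal number of cores to each process so that the per-core load is roughly equalized across a node:

      # Distribute cores among MPI processes proportionally to their loads,
      # guaranteeing at least one core per process (illustrative only).
      def allocate_cores(loads, total_cores):
          n = len(loads)
          assert total_cores >= n, "need at least one core per process"
          cores = [1] * n
          # hand out the remaining cores greedily to the process whose
          # load-per-core is currently the highest
          for _ in range(total_cores - n):
              worst = max(range(n), key=lambda i: loads[i] / cores[i])
              cores[worst] += 1
          return cores

      # Example: 4 MPI processes with unequal loads on a 16-core node.
      loads = [10.0, 30.0, 40.0, 20.0]   # e.g., nonzeros or elements per process
      print(allocate_cores(loads, 16))   # -> [2, 5, 6, 3]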

  • 14:45 - 14:50  Closing Remarks
  • Prof. Takahiro Katagiri (Nagoya University, Japan)
Last updated: January 9, 2024