HPC

The work of our group and our collaborations depends critically on high-performance computing (HPC) resources. We are fortunate to have access to a range of Canadian and international HPC systems.

Canadian HPC resources

Niagara

The workhorse for our 3D stellar hydrodynamics simulations is the Niagara supercomputer operated by SciNet at the University of Toronto as part of the national systems of the Digital Research Alliance of Canada. It features a fast interconnect and has 80,640 cores. This capability cluster allows us to run large parallel jobs that typically use hundreds of nodes, each with 40 CPU cores, at the same time in a single run. This allows us to perform simulations on 3D grids of typically 768³ to 1536³ cells for on the order of 10⁶ time steps.
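As a hedged illustration of how work at this scale is laid out across a job, the sketch below uses mpi4py to split a 1536³ grid into bricks over the MPI ranks of a multi-node run. The grid size and rank count are illustrative only and are not taken from an actual PPMstar input deck.

# Minimal sketch (not PPMstar itself): Cartesian domain decomposition of a
# 1536^3 grid across the MPI ranks of a large job, e.g. launched as
#   mpirun -np 10240 python decompose.py   (256 nodes x 40 cores)
from mpi4py import MPI

N = 1536                       # global grid cells per dimension (illustrative)
comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

# Let MPI pick a balanced 3D process grid for the available ranks
dims = MPI.Compute_dims(size, 3)
cart = comm.Create_cart(dims, periods=[True, True, True], reorder=True)
coords = cart.Get_coords(cart.Get_rank())

# Each rank owns a brick of roughly N/dims[i] cells per dimension
local = [N // d for d in dims]
lo = [c * n for c, n in zip(coords, local)]
hi = [l + n for l, n in zip(lo, local)]

if rank == 0:
    print(f"process grid {dims}, local brick of {local} cells per rank")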

Cedar

Cedar is a general-purpose HPC system at Simon Fraser University. We use Cedar for nuclear astrophysics simulations, especially large nuclear reaction rate impact studies for the CaNPAN collaboration. These studies connect our latest results from the 3D simulations on the dynamic conditions under which the elements form to the new nuclear physics data that these novel scenarios require. Experimentalists at TRIUMF and at other labs internationally, e.g. in the IReNA network, rely on such impact studies to guide their experiments.
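As a rough sketch of what such an impact study looks like in practice: selected rates are varied within assumed uncertainty factors, the nucleosynthesis post-processing is rerun many times, and the impact of each rate on a final abundance is ranked. In the example below, run_network is a hypothetical toy stand-in for the actual post-processing network, and the rate list and uncertainty factor are purely illustrative.

# Sketch of a Monte Carlo reaction-rate impact study (toy model, not the
# actual CaNPAN workflow).
import numpy as np

rates = ["c12(a,g)o16", "ne22(a,n)mg25", "fe56(n,g)fe57"]   # illustrative
uncertainty_factor = 2.0    # assume each rate is known to within a factor of 2
n_samples = 1000
rng = np.random.default_rng(seed=1)

def run_network(rate_factors):
    # Hypothetical stand-in for the real post-processing network: a toy model
    # in which the final abundance depends on two of the scaled rates.
    return (1e-5
            * rate_factors["c12(a,g)o16"] ** 0.8
            / rate_factors["ne22(a,n)mg25"] ** 0.3)

# Sample rate scaling factors log-uniformly within the uncertainty range
samples = 10.0 ** rng.uniform(-np.log10(uncertainty_factor),
                              np.log10(uncertainty_factor),
                              size=(n_samples, len(rates)))
abundances = np.array([run_network(dict(zip(rates, row))) for row in samples])

# Rank rates by how strongly their variation correlates with the result
for i, rate in enumerate(rates):
    r = np.corrcoef(np.log10(samples[:, i]), np.log10(abundances))[0, 1]
    print(f"{rate}: correlation with final abundance = {r:+.2f}")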

Arbutus cloud computing

Arbutus is a cloud computing system operated by the University of Victoria Research Computing Services. The cloud hosts our two virtual research platforms, www.PPMstar.org and astrohub.uvic.ca, as well as our Mattermost and GitLab servers and our Globus endpoints.

These platforms are our primary research tools and provide a key link between our simulation results and the broader research and HQP community. Both platforms provide multiple JupyterHub instances for interactive data analysis and visualization, along with a range of other tools for data sharing and collaboration. They make some of our codes and data available to the community. Members of the community can use the platforms to run their own simulations, analyze their own and our data, and share their results with others. We use the platforms to train our students and postdocs in the use of our codes and to collaborate with our international partners. And of course, we use these platforms every day in our own work: to plan and prepare new simulations, connect to the HPC clusters, move data around, and analyze and visualize our results.

The www.PPMstar.org platform is dedicated to our 3D simulations of stellar hydrodynamics.

The Astronomy Research Centre platform astrohub.uvic.ca features our nuclear astrophysics simulations and the data that we use to study the origin of the elements in the Universe. The platform enables our collaborations, especially the NuGrid collaboration, and it also enables research-network-based training in a multi-disciplinary environment. For example, the TINA Hub (Training in Nuclear Astrophysics) has been developed jointly with JINA colleague Hendrik Schatz at Michigan State University. This environment and others on the platform have been used over the years for a wide variety of research training, such as the Physics of Atomic Nuclei (PAN) school at MSU (for high-school students and for educators) in 2021 and 2022, the NuGrid/JINA-CEE/ChETEC School (2018), the ChETEC School in Zagreb (2020), and the Thailand-UK Python+Astronomy Summer Schools in 2018, 2019 and 2020, as well as undergraduate training at UVic, UBC and at the college level in Cork, Ireland. The Astrohub platform has had 579 distinct identified users, many of whom accessed the platform for research training.

We have developed the software stack for these platforms ourselves, based on a previous collaboration with CANFAR and the CADC.
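For context, these hubs are built around JupyterHub. The configuration fragment below is a minimal sketch in the spirit of such a deployment; the container image name and the choice of authenticator are assumptions for illustration, not the actual Astrohub configuration.

# jupyterhub_config.py -- minimal sketch of a hub like those on Astrohub.
# The spawner image and authenticator class are illustrative assumptions only.
c = get_config()  # provided by JupyterHub when the config file is loaded

# Spawn each user's session in a Docker container holding the analysis stack
c.JupyterHub.spawner_class = "dockerspawner.DockerSpawner"
c.DockerSpawner.image = "example/astro-analysis:latest"   # hypothetical image
c.DockerSpawner.remove = True

# Start users in JupyterLab and keep their work on a persistent volume
c.Spawner.default_url = "/lab"
c.DockerSpawner.notebook_dir = "/home/jovyan"
c.DockerSpawner.volumes = {"astrohub-user-{username}": "/home/jovyan"}

# Authenticate via an OAuth provider (one common choice; real hubs may differ)
c.JupyterHub.authenticator_class = "oauthenticator.github.GitHubOAuthenticator"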

International HPC resources

We participate in the computing time allocations that our collaborator Paul Woodward’s Laboratory of Computational Science and Engineering at the University of Minnesota has been obtaining over the past year on the TACC Frontera supercomputer, which has a peak performance of 23.5 petaflops. An example of our work on Frontera is the set of runs performed during TexaScale 2024.