Initializing AMReX (26.05-38-g728cddf4f1c2)...
MPI initialized with 4 MPI processes
MPI initialized with thread support level 0
Initializing CUDA...
CUDA initialized with 1 device.
AMReX (26.05-38-g728cddf4f1c2) initialized
REMORA git hash: v1.5-98-g402a0a3
AMReX git hash: 26.05-38-g728cddf4f1
Making level 0 from scratch
GRIDS AT LEVEL 0 ARE
(BoxArray maxbox(6)
m_ref->m_hash_sig(0)
((0,0,0) (6,14,9) (0,0,0))
((7,0,0) (13,14,9) (0,0,0))
((14,0,0) (20,14,9) (0,0,0))
((21,0,0) (27,14,9) (0,0,0))
((28,0,0) (34,14,9) (0,0,0))
((35,0,0) (41,14,9) (0,0,0))
)
Calling init_masks_from_netcdf level 0
Loading masks from NetCDF file dogbone_grd_whole_classic64.nc
Masks loaded from netcdf file
Calling init_bathymetry_from_netcdf
Loading initial bathymetry from NetCDF file dogbone_grd_whole_classic64.nc
Bathymetry loaded from netcdf file
Calling init_zeta_from_netcdf on level 0
Loading initial sea surface height from NetCDF file dogbone_ini_whole_classic64.nc
Sea surface height loaded from netcdf file
Calling init_data_from_netcdf
Loading initial solution data from NetCDF file dogbone_ini_whole_classic64.nc
Initial data loaded from netcdf file
Writing plotfile Dogbone_plt00000
Coarse STEP 1 starts ...
Coarse STEP 1 ends. TIME = 6 DT = 6
Coarse STEP 2 starts ...
Coarse STEP 2 ends. TIME = 12 DT = 6
Coarse STEP 3 starts ...
Coarse STEP 3 ends. TIME = 18 DT = 6
Coarse STEP 4 starts ...
Coarse STEP 4 ends. TIME = 24 DT = 6
Coarse STEP 5 starts ...
Coarse STEP 5 ends. TIME = 30 DT = 6
Coarse STEP 6 starts ...
Coarse STEP 6 ends. TIME = 36 DT = 6
Coarse STEP 7 starts ...
Coarse STEP 7 ends. TIME = 42 DT = 6
Coarse STEP 8 starts ...
Coarse STEP 8 ends. TIME = 48 DT = 6
Coarse STEP 9 starts ...
Coarse STEP 9 ends. TIME = 54 DT = 6
Coarse STEP 10 starts ...
Coarse STEP 10 ends.
TIME = 60 DT = 6
Writing plotfile Dogbone_plt00010
Unused ParmParse Variables:
  [TOP]::amr.checkpoint_files_output(nvals = 1)  :: [0]
  [TOP]::remora.coriolis_type(nvals = 1)  :: [netcdf]

Total GPU global memory (MB) spread across MPI: [7965 ... 7965]
Free  GPU global memory (MB) spread across MPI: [3315 ... 5363]
[The         Arena] max space (MB) allocated spread across MPI: [1024 ... 1024]
[The         Arena] max space (MB) used      spread across MPI: [10 ... 11]
[The Managed Arena] max space (MB) allocated spread across MPI: [8 ... 8]
[The Managed Arena] max space (MB) used      spread across MPI: [0 ... 0]
[The  Pinned Arena] max space (MB) allocated spread across MPI: [8 ... 8]
[The  Pinned Arena] max space (MB) used      spread across MPI: [0 ... 0]
AMReX (26.05-38-g728cddf4f1c2) finalized