Initializing AMReX (26.05-38-g728cddf4f1c2)...
MPI initialized with 4 MPI processes
MPI initialized with thread support level 0
Initializing CUDA...
CUDA initialized with 1 device.
AMReX (26.05-38-g728cddf4f1c2) initialized
REMORA git hash: v1.5-98-g402a0a3
AMReX git hash: 26.05-38-g728cddf4f1
Making level 0 from scratch
GRIDS AT LEVEL 0 ARE (BoxArray maxbox(6) m_ref->m_hash_sig(0)
       ((0,0,0) (13,39,15) (0,0,0))
       ((14,0,0) (27,39,15) (0,0,0))
       ((28,0,0) (40,39,15) (0,0,0))
       ((0,40,0) (13,79,15) (0,0,0))
       ((14,40,0) (27,79,15) (0,0,0))
       ((28,40,0) (40,79,15) (0,0,0))
)
Writing plotfile DoublyPeriodicC4-xy_plt00000
Coarse STEP 1 starts ...
Coarse STEP 1 ends. TIME = 300 DT = 300
Coarse STEP 2 starts ...
Coarse STEP 2 ends. TIME = 600 DT = 300
Coarse STEP 3 starts ...
Coarse STEP 3 ends. TIME = 900 DT = 300
Coarse STEP 4 starts ...
Coarse STEP 4 ends. TIME = 1200 DT = 300
Coarse STEP 5 starts ...
Coarse STEP 5 ends. TIME = 1500 DT = 300
Coarse STEP 6 starts ...
Coarse STEP 6 ends. TIME = 1800 DT = 300
Coarse STEP 7 starts ...
Coarse STEP 7 ends. TIME = 2100 DT = 300
Coarse STEP 8 starts ...
Coarse STEP 8 ends. TIME = 2400 DT = 300
Coarse STEP 9 starts ...
Coarse STEP 9 ends. TIME = 2700 DT = 300
Coarse STEP 10 starts ...
Coarse STEP 10 ends. TIME = 3000 DT = 300
Writing plotfile DoublyPeriodicC4-xy_plt00010
Unused ParmParse Variables:
  [TOP]::amr.checkpoint_files_output(nvals = 1)  :: [0]

Total GPU global memory (MB) spread across MPI: [7965 ... 7965]
Free  GPU global memory (MB) spread across MPI: [3315 ... 6387]
[The         Arena] max space (MB) allocated spread across MPI: [1024 ... 1024]
[The         Arena] max space (MB) used      spread across MPI: [16 ... 23]
[The Managed Arena] max space (MB) allocated spread across MPI: [8 ... 8]
[The Managed Arena] max space (MB) used      spread across MPI: [0 ... 0]
[The  Pinned Arena] max space (MB) allocated spread across MPI: [8 ... 8]
[The  Pinned Arena] max space (MB) used      spread across MPI: [0 ... 0]
AMReX (26.05-38-g728cddf4f1c2) finalized