Initializing AMReX (26.05-38-g728cddf4f1c2)...
MPI initialized with 2 MPI processes
MPI initialized with thread support level 0
Initializing CUDA...
CUDA initialized with 1 device.
AMReX (26.05-38-g728cddf4f1c2) initialized
Successfully read inputs file ...
Using conservative advection update for tracer.
Successfully read inputs file ...
NavierStokesBase::init_additional_state_types()::have_divu = 0
NavierStokesBase::init_additional_state_types()::have_dsdt = 0
NavierStokesBase::init_additional_state_types: num_state_type = 3
Diffusion settings...
  From diffuse: max_order = 2 tensor_max_order = 2 scale_abec = 0
  From ns: do_reflux = 1 visc_tol = 1e-10
  is_diffusive = 0 0 0 0
Doing initial redistribution...
estTimeStep :: LEV = 0 UMAX = 1 0.04993977281
estimated timestep: dt = 0.015625
Multiplying dt by init_shrink: dt = 0.015625
calling initialVelocityProject
done calling initialVelocityProject
estTimeStep :: LEV = 0 UMAX = 1.52045238 1.409258839
estimated timestep: dt = 0.01027654677
Multiplying dt by init_shrink: dt = 0.01027654677
post_init_press(): doing initial pressure iterations with dt = 0.01027654677
post_init_press(): iter = 0
Advancing grids at level 0 : starting time = 0 with dt = 0.01027654677
NavierStokes::advance(): before velocity update:
  max(abs(u/v)) = 1.52045238 1.409258839
  max(abs(gpx/gpy/p)) = 0 0 0
... predict edge velocities
... mac_projection
NavierStokesBase:mac_project(): lev: 0, time: 0.194723498
... advect scalars
... update scalars
... advect momenta
... update scalars
... update momenta
NavierStokes::advance(): exiting.
  max(abs(u/v)) = 1.436595748 1.395568372
  max(abs(gpx/gpy/p)) = 0 0 0
After sync projection and avgDown:
  max(abs(u/v)) = 1.494364763 1.402058387
  max(abs(gpx/gpy/p)) = 22.34128331 17.21400425 1.070366323
post_init_press(): iter = 1
Advancing grids at level 0 : starting time = 0 with dt = 0.01027654677
NavierStokes::advance(): before velocity update:
  max(abs(u/v)) = 1.52045238 1.409258839
  max(abs(gpx/gpy/p)) = 22.34128331 17.21400425 1.070366323
... predict edge velocities
... mac_projection
NavierStokesBase:mac_project(): lev: 0, time: 0.187177959
... advect scalars
... update scalars
... advect momenta
... update scalars
... update momenta
NavierStokes:velocity_diffusion_update(): lev: 0, time: 9.400000001e-08
NavierStokes::advance(): exiting.
  max(abs(u/v)) = 1.503401719 1.403364358
  max(abs(gpx/gpy/p)) = 22.34128331 17.21400425 1.070366323
After sync projection and avgDown:
  max(abs(u/v)) = 1.505235442 1.398203863
  max(abs(gpx/gpy/p)) = 26.42997971 18.95951655 1.18953278
post_init_press(): exiting after 2 iterations
After initial iterations:
  max(abs(u/v)) = 1.52045238 1.409258839
  max(abs(gpx/gpy/p)) = 26.42997971 18.95951655 1.18953278
INITIAL GRIDS
  Level 0   2 grids   4096 cells   100 % of domain
            smallest grid: 64 x 32  biggest grid: 64 x 32
PLOTFILE: file = DoubleShearLayer_2d_plt00000
Write plotfile time = 0.007580311 seconds
estTimeStep :: LEV = 0 UMAX = 1.52045238 1.409258839
estimated timestep: dt = 0.01027654677
Multiplying dt by init_shrink: dt = 0.01027654677
[Level 0 step 1] ADVANCE at time 0 with dt = 0.01027654677
Advancing grids at level 0 : starting time = 0 with dt = 0.01027654677
NavierStokes::advance(): before velocity update:
  max(abs(u/v)) = 1.52045238 1.409258839
  max(abs(gpx/gpy/p)) = 26.42997971 18.95951655 1.18953278
... predict edge velocities
... mac_projection
NavierStokesBase:mac_project(): lev: 0, time: 0.186122211
... advect scalars
... update scalars
... advect momenta
... update scalars
... update momenta
NavierStokes:velocity_diffusion_update(): lev: 0, time: 1.539999999e-07
NavierStokes::advance(): before nodal projection
  max(abs(u/v)) = 1.505020391 1.398198808
  max(abs(gpx/gpy/p)) = 26.42997971 18.95951655 1.18953278
NavierStokes::advance(): exiting.
  max(abs(u/v)) = 1.514038333 1.434740016
  max(abs(gpx/gpy/p)) = 34.22465764 18.91764028 1.571919909
[Level 0 step 1] Advanced 4096 cells
[STEP 1] Coarse TimeStep time: 0.316838258
[STEP 1] FAB kilobyte spread across MPI nodes: [1397 ... 2631]

STEP = 1 TIME = 0.01027654677 DT = 0.01027654677

estTimeStep :: LEV = 0 UMAX = 1.514038333 1.434740016
estimated timestep: dt = 0.01032008216
[Level 0 step 2] ADVANCE at time 0.01027654677 with dt = 0.01027654677
Advancing grids at level 0 : starting time = 0.01027654677 with dt = 0.01027654677
NavierStokes::advance(): before velocity update:
  max(abs(u/v)) = 1.514038333 1.434740016
  max(abs(gpx/gpy/p)) = 34.22465764 18.91764028 1.571919909
... predict edge velocities
... mac_projection
NavierStokesBase:mac_project(): lev: 0, time: 0.185538446
... advect scalars
... update scalars
... advect momenta
... update scalars
... update momenta
NavierStokes:velocity_diffusion_update(): lev: 0, time: 8.100000004e-08
NavierStokes::advance(): before nodal projection
  max(abs(u/v)) = 1.50103409 1.408036616
  max(abs(gpx/gpy/p)) = 34.22465764 18.91764028 1.571919909
NavierStokes::advance(): exiting.
  max(abs(u/v)) = 1.498202638 1.388604695
  max(abs(gpx/gpy/p)) = 29.25253204 21.35475822 1.452709028
[Level 0 step 2] Advanced 4096 cells
[STEP 2] Coarse TimeStep time: 0.316869877
[STEP 2] FAB kilobyte spread across MPI nodes: [1397 ... 2631]

STEP = 2 TIME = 0.02055309354 DT = 0.01027654677

estTimeStep :: LEV = 0 UMAX = 1.498202638 1.388604695
estimated timestep: dt = 0.01042916332
[Level 0 step 3] ADVANCE at time 0.02055309354 with dt = 0.01032008216
Advancing grids at level 0 : starting time = 0.02055309354 with dt = 0.01032008216
NavierStokes::advance(): before velocity update:
  max(abs(u/v)) = 1.498202638 1.388604695
  max(abs(gpx/gpy/p)) = 29.25253204 21.35475822 1.452709028
... predict edge velocities
... mac_projection
NavierStokesBase:mac_project(): lev: 0, time: 0.185873206
... advect scalars
... update scalars
... advect momenta
... update scalars
... update momenta
NavierStokes:velocity_diffusion_update(): lev: 0, time: 8.700000009e-08
NavierStokes::advance(): before nodal projection
  max(abs(u/v)) = 1.487862052 1.341124209
  max(abs(gpx/gpy/p)) = 29.25253204 21.35475822 1.452709028
NavierStokes::advance(): exiting.
  max(abs(u/v)) = 1.485250972 1.349121243
  max(abs(gpx/gpy/p)) = 28.51503151 21.62584736 1.408447804
[Level 0 step 3] Advanced 4096 cells
[STEP 3] Coarse TimeStep time: 0.317237401
[STEP 3] FAB kilobyte spread across MPI nodes: [1397 ... 2631]

STEP = 3 TIME = 0.03087317571 DT = 0.01032008216

estTimeStep :: LEV = 0 UMAX = 1.485250972 1.349121243
estimated timestep: dt = 0.01052010757
[Level 0 step 4] ADVANCE at time 0.03087317571 with dt = 0.01042916332
Advancing grids at level 0 : starting time = 0.03087317571 with dt = 0.01042916332
NavierStokes::advance(): before velocity update:
  max(abs(u/v)) = 1.485250972 1.349121243
  max(abs(gpx/gpy/p)) = 28.51503151 21.62584736 1.408447804
... predict edge velocities
... mac_projection
NavierStokesBase:mac_project(): lev: 0, time: 0.186062853
... advect scalars
... update scalars
... advect momenta
... update scalars
... update momenta
NavierStokes:velocity_diffusion_update(): lev: 0, time: 7.200000018e-08
NavierStokes::advance(): before nodal projection
  max(abs(u/v)) = 1.475573883 1.301389218
  max(abs(gpx/gpy/p)) = 28.51503151 21.62584736 1.408447804
NavierStokes::advance(): exiting.
  max(abs(u/v)) = 1.471698724 1.304463225
  max(abs(gpx/gpy/p)) = 27.6368312 21.48072552 1.361695961
[Level 0 step 4] Advanced 4096 cells
[STEP 4] Coarse TimeStep time: 0.31745274
[STEP 4] FAB kilobyte spread across MPI nodes: [1397 ... 2631]

STEP = 4 TIME = 0.04130233903 DT = 0.01042916332

estTimeStep :: LEV = 0 UMAX = 1.471698724 1.304463225
estimated timestep: dt = 0.01061698277
[Level 0 step 5] ADVANCE at time 0.04130233903 with dt = 0.01052010757
Advancing grids at level 0 : starting time = 0.04130233903 with dt = 0.01052010757
NavierStokes::advance(): before velocity update:
  max(abs(u/v)) = 1.471698724 1.304463225
  max(abs(gpx/gpy/p)) = 27.6368312 21.48072552 1.361695961
... predict edge velocities
... mac_projection
NavierStokesBase:mac_project(): lev: 0, time: 0.186388246
... advect scalars
... update scalars
... advect momenta
... update scalars
... update momenta
NavierStokes:velocity_diffusion_update(): lev: 0, time: 5.799999991e-08
NavierStokes::advance(): before nodal projection
  max(abs(u/v)) = 1.461501729 1.2549639
  max(abs(gpx/gpy/p)) = 27.6368312 21.48072552 1.361695961
NavierStokes::advance(): exiting.
  max(abs(u/v)) = 1.457024579 1.274235332
  max(abs(gpx/gpy/p)) = 26.66052427 21.31472799 1.301561084
[Level 0 step 5] Advanced 4096 cells
[STEP 5] Coarse TimeStep time: 0.317668445
[STEP 5] FAB kilobyte spread across MPI nodes: [1397 ... 2631]

STEP = 5 TIME = 0.05182244661 DT = 0.01052010757

PLOTFILE: file = DoubleShearLayer_2d_plt00005
Write plotfile time = 0.003751732 seconds
Run time = 2.39019278
Unused ParmParse Variables:
  [TOP]::amr.ref_ratio(nvals = 4)  :: [2, 2, 2, 2]
  [TOP]::amr.regrid_int(nvals = 1)  :: [1]

Total GPU global memory (MB) spread across MPI: [7965 ... 7965]
Free GPU global memory (MB) spread across MPI: [6697 ... 7187]
[The Arena] max space (MB) allocated spread across MPI: [488 ... 488]
[The Arena] max space (MB) used spread across MPI: [10 ... 11]
[The Managed Arena] max space (MB) allocated spread across MPI: [8 ... 8]
[The Managed Arena] max space (MB) used spread across MPI: [0 ... 0]
[The Pinned Arena] max space (MB) allocated spread across MPI: [8 ... 8]
[The Pinned Arena] max space (MB) used spread across MPI: [0 ... 0]
AMReX (26.05-38-g728cddf4f1c2) finalized