Initializing AMReX (26.04-114-g95e461b1a293)...
MPI initialized with 4 MPI processes
MPI initialized with thread support level 0
Initializing CUDA...
CUDA initialized with 1 device.
AMReX (26.04-114-g95e461b1a293) initialized
MLMG: # of AMR levels: 1
      # of MG levels on the coarsest AMR level: 6
MLMG: Initial rhs               = 15.20237123
MLMG: Initial residual (resid0) = 15.20237123
MLMG: Iteration   1 Fine resid/bnorm = 0.04891404548
MLMG: Iteration   2 Fine resid/bnorm = 0.002067126033
MLMG: Iteration   3 Fine resid/bnorm = 0.0001229681592
MLMG: Iteration   4 Fine resid/bnorm = 7.625782257e-06
MLMG: Iteration   5 Fine resid/bnorm = 6.511770765e-07
MLMG: Iteration   6 Fine resid/bnorm = 5.521876329e-08
MLMG: Iteration   7 Fine resid/bnorm = 4.520098587e-09
MLMG: Iteration   8 Fine resid/bnorm = 3.639178096e-10
MLMG: Iteration   9 Fine resid/bnorm = 2.899327531e-11
MLMG: Iteration  10 Fine resid/bnorm = 2.295349405e-12
MLMG: Final Iter. 10 resid, resid/bnorm = 3.489475375e-11, 2.295349405e-12
MLMG: Timers: Solve = 0.358394002 Iter = 0.318720633 Bottom = 0.010205935
Unused ParmParse Variables:
  [TOP]::amr.check_file(nvals = 1)  :: [EB_Node_2D_chk]
  [TOP]::amr.checkpoint_files_output(nvals = 1)  :: [0]
  [TOP]::amr.plot_file(nvals = 1)  :: [EB_Node_2D_plt]

Total GPU global memory (MB) spread across MPI: [7965 ... 7965]
Free  GPU global memory (MB) spread across MPI: [7339 ... 7393]
[The         Arena] max space (MB) allocated spread across MPI: [17 ... 17]
[The         Arena] max space (MB) used      spread across MPI: [10 ... 11]
[The Managed Arena] max space (MB) allocated spread across MPI: [8 ... 8]
[The Managed Arena] max space (MB) used      spread across MPI: [0 ... 0]
[The  Pinned Arena] max space (MB) allocated spread across MPI: [8 ... 8]
[The  Pinned Arena] max space (MB) used      spread across MPI: [0 ... 0]
AMReX (26.04-114-g95e461b1a293) finalized
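The residual history above can be post-processed to estimate the solver's per-iteration convergence factor. The following is a minimal sketch, not part of AMReX itself: it embeds the `Fine resid/bnorm` lines from the log for self-containment (in practice you would read the solver's stdout from a file) and computes the ratio of successive residuals. The late-iteration factors settle near 0.08, i.e. roughly one extra digit of accuracy per multigrid cycle.

```python
# Sketch: parse the MLMG residual history and estimate the per-iteration
# convergence factor. The log excerpt is inlined here; ordinarily you would
# read it from the solver's captured stdout.
import re

log = """\
MLMG: Initial residual (resid0) = 15.20237123
MLMG: Iteration   1 Fine resid/bnorm = 0.04891404548
MLMG: Iteration   2 Fine resid/bnorm = 0.002067126033
MLMG: Iteration   3 Fine resid/bnorm = 0.0001229681592
MLMG: Iteration   4 Fine resid/bnorm = 7.625782257e-06
MLMG: Iteration   5 Fine resid/bnorm = 6.511770765e-07
MLMG: Iteration   6 Fine resid/bnorm = 5.521876329e-08
MLMG: Iteration   7 Fine resid/bnorm = 4.520098587e-09
MLMG: Iteration   8 Fine resid/bnorm = 3.639178096e-10
MLMG: Iteration   9 Fine resid/bnorm = 2.899327531e-11
MLMG: Iteration  10 Fine resid/bnorm = 2.295349405e-12
"""

# Collect the normalized residual after each iteration.
resids = [float(m.group(1))
          for m in re.finditer(r"Fine resid/bnorm = (\S+)", log)]

# Ratio of successive residuals = observed convergence factor per cycle.
factors = [b / a for a, b in zip(resids, resids[1:])]

print(f"iterations: {len(resids)}, "
      f"asymptotic convergence factor: {factors[-1]:.3f}")
```

The first one or two ratios are much smaller than the rest (the initial smoothing removes high-frequency error quickly); the last few ratios are the meaningful asymptotic rate.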