
Microelectronics and Quantum Chip Modeling


CCSE Microelectronics Modeling Team

Zhi Jackie Yao

Saurabh Sawant

Prabhat Kumar

Revathi Jambunathan

Andy Nonaka


Overview


Researchers in CCSE have developed several AMReX-based code packages that enable physical modeling and simulation of next-generation microelectronic devices.

  • ARTEMIS - Time-domain electrodynamics solver

  • FerroX - 3D phase-field framework for ferroelectric devices

  • Quantum-eXstatic - Coupled electrostatic/quantum transport code for nanoscale devices

  • MagneX - Magnetostatic solver

  • phononeX - Ballistic phonon transport solver for quantum chip design

  • Machine learning approaches for predictive design and computational modeling

    What is ARTEMIS?


    ARTEMIS (Adaptive mesh Refinement Time-domain ElectrodynaMics Solver) is a time-domain electrodynamics solver developed in CCSE that is fully open-source and portable from laptops to many-core/GPU exascale systems. The core solver is a finite-difference time-domain (FDTD) implementation for Maxwell's equations that has been adapted to conditions found in microelectronic circuitry, including spatially-varying material properties, boundary conditions, and external sources for our target problems. To achieve portability and performance on a range of platforms, ARTEMIS leverages the developments of two DOE Exascale Computing Project (ECP) code frameworks. First, the AMReX software library, the product of the ECP co-design center for adaptive, structured-grid calculations, provides complete data structure and parallel communication support for massively parallel many-core/GPU implementations of structured-grid simulations such as FDTD. Second, the WarpX accelerator code, an ECP application code for modeling plasma wakefield accelerators, contains many features that have been leveraged by ARTEMIS, including core computational kernels for FDTD, an overall time-stepping framework, and I/O. Using the ARTEMIS Python-style function interpreter, we can define advanced structures containing many different material types in different geometrical configurations. Additionally, the GPU capability of the code provides substantial speed: a GPU build offers a 59x speedup over the host on a node-by-node basis. Thus, HPC resources allow for high-resolution, rapid prototyping of various configurations with different geometries and material properties. The code also offers algorithmic flexibility for additional physics such as magnetic and superconducting materials.
    [Figure: ARTEMIS overview] Overview of the device-level modeling capability of the ARTEMIS package, which is part of the two DOE-funded Microelectronics Co-Design programs at Berkeley Lab. ARTEMIS bridges the gap between material physics and the circuit models of PARADISE by solving the governing PDEs of the physics in devices such as NCFET and MESO. Because ARTEMIS is built on two ECP products, AMReX and WarpX, it runs fully on GPU supercomputers such as the NERSC Perlmutter system, providing rapid device-level modeling to the co-design workflow.
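
    To make the FDTD core concrete, here is a minimal 1D Yee-scheme sketch for Maxwell's equations in vacuum, written in plain Python/NumPy. It is illustrative only; the actual ARTEMIS implementation is a 3D C++/AMReX code with material models, boundary conditions, and sources, and all values below are assumed.

    import numpy as np

    # Minimal 1D Yee-scheme FDTD sketch for Maxwell's equations in vacuum.
    # Illustrative only; all parameters are assumed, not taken from ARTEMIS.
    c = 299792458.0          # speed of light [m/s]
    nx, nsteps = 200, 500    # grid cells and time steps
    dx = 1e-3                # cell size [m]
    dt = 0.5 * dx / c        # time step satisfying the CFL stability condition

    Ez = np.zeros(nx)        # electric field at cell edges
    By = np.zeros(nx - 1)    # magnetic field staggered between them

    for n in range(nsteps):
        # Faraday's law: update B from the spatial difference of E
        By += dt / dx * (Ez[1:] - Ez[:-1])
        # Ampere's law: update E from the spatial difference of B
        Ez[1:-1] += c**2 * dt / dx * (By[1:] - By[:-1])
        # Soft Gaussian source injected at the center of the domain
        Ez[nx // 2] += np.exp(-((n - 50) / 15.0) ** 2)

    The staggered placement of E and B (the Yee lattice) is what makes both finite-difference curls second-order accurate in space.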

    For more information about ARTEMIS or any of the applications below, contact the ARTEMIS Team or visit the ARTEMIS GitHub page.




    ARTEMIS for Magnon-Photon Dynamics


    Comprehensive simulation of coupled magnon-photon circuits has historically posed challenges due to the significant scale disparity between magnonics and electrodynamics. Using ARTEMIS, we address this challenge with a fully coupled computational approach that simultaneously solves the equations governing ferromagnetic dynamics and electromagnetic dynamics. We have built the coupled model to include spatial inhomogeneity of materials, containing both magnetic and non-magnetic regions. In magnetic regions, our coupled algorithm solves both Maxwell's equations and the Landau-Lifshitz-Gilbert equation. To effectively handle the spatial disparity, we have devised a massively GPU-parallelized code package as our computational strategy. The numerical outcomes from our approach reveal the emergence of an anti-crossing spectrum between magnons and photons.
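
    As a hedged illustration of the magnetization update in the magnetic regions, the sketch below takes forward-Euler steps of the Landau-Lifshitz-Gilbert equation for a single spin in a fixed effective field. The actual ARTEMIS algorithm advances the full 3D magnetization field coupled to Maxwell's equations with a second-order scheme; the damping and field values here are assumed.

    import numpy as np

    # Forward-Euler steps of the Landau-Lifshitz-Gilbert (LLG) equation for
    # one magnetization vector m in an effective field H. Illustrative only.
    gamma = 1.76e11   # gyromagnetic ratio [rad/(s*T)]
    alpha = 0.01      # Gilbert damping (assumed value)
    dt = 1e-15        # time step [s]

    m = np.array([1.0, 0.0, 0.0])   # unit magnetization
    H = np.array([0.0, 0.0, 0.1])   # effective field [T], held fixed here

    for _ in range(1000):
        mxH = np.cross(m, H)
        # Landau-Lifshitz form: precession term plus damping term
        dmdt = -gamma / (1 + alpha**2) * (mxH + alpha * np.cross(m, mxH))
        m += dt * dmdt
        m /= np.linalg.norm(m)       # renormalize to preserve |m| = 1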

    ARTEMIS for Transmission Line Analysis


    Modeling and characterization of electromagnetic wave interactions with microelectronic devices to derive network parameters is a widely used practice in the electronics industry. However, as these devices become increasingly miniaturized with finer-scale geometric features, computational tools must make use of manycore/GPU architectures to efficiently resolve the length and time scales of interest.
    [Figure: S-parameter characterization] Left: a microscale transmission line with a z-directional electric field excitation for computing the S-matrix between ports (1) and (2). Bottom right: components of the S-matrix as a function of frequency. Top right: weak-scaling efficiency of ARTEMIS on NVIDIA A100 GPUs on NERSC's Perlmutter supercomputer.
    This has been the focus of our open-source solver, ARTEMIS, which is performant on modern GPU-based supercomputing architectures while remaining amenable to additional physics coupling. This work demonstrates its use for characterizing network parameters of transmission lines using established techniques. A rigorous verification and validation of the workflow is carried out, followed by its application to a transmission line on a CMOS chip designed for a photon-detector application. Simulations are performed for millions of timesteps on state-of-the-art GPU resources to resolve nanoscale features at gigahertz frequencies. The network parameters are used to obtain the phase delay and characteristic impedance that serve as inputs to SPICE models. The code exhibits ideal weak-scaling efficiency up to 1024 GPUs and 84% efficiency at 2048 GPUs, which underscores its use for network analysis of larger, more complex circuit devices in the future. The details can be found in the transmission-line publication listed below.
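
    The network-parameter extraction itself follows the standard recipe: record incident, reflected, and transmitted port signals in the time domain, Fourier transform them, and take ratios. A schematic NumPy version, with placeholder waveforms standing in for solver output, might look like:

    import numpy as np

    # Sketch of S-parameter extraction from time-domain port signals, as is
    # common with FDTD solvers. Waveforms below are illustrative placeholders;
    # in practice they are recorded at the ports by the field solver.
    dt = 1e-12
    t = np.arange(4096) * dt

    v_incident = np.exp(-((t - 50e-12) / 10e-12) ** 2)            # port 1 drive
    v_reflected = 0.1 * np.exp(-((t - 150e-12) / 10e-12) ** 2)    # port 1 return
    v_transmitted = 0.9 * np.exp(-((t - 150e-12) / 10e-12) ** 2)  # port 2 signal

    # Frequency axis and spectra; S-parameters are ratios of spectra.
    freqs = np.fft.rfftfreq(len(t), dt)     # for plotting S11/S21 vs frequency
    V_inc = np.fft.rfft(v_incident)
    S11 = np.fft.rfft(v_reflected) / V_inc
    S21 = np.fft.rfft(v_transmitted) / V_inc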




    ARTEMIS for Superconducting Resonators


    In collaboration with Richard Lombardini of Saint Mary's University, we have implemented a new London equation module for superconductivity in the GPU-enabled ARTEMIS framework and coupled it to a finite-difference time-domain solver for Maxwell's equations. We applied this two-fluid approach to model a superconducting coplanar waveguide (CPW) resonator and validated our implementation by verifying that the theoretical skin depth and reflection coefficients can be obtained for several superconducting materials, with different London penetration depths, over a range of frequencies. Our convergence studies show that the algorithm is second-order accurate in both space and time. In our CPW simulations, we leverage the GPU scalability of our code to compare the two-fluid model to more traditional approaches that approximate superconducting behavior, and demonstrate that superconducting physics can show performance comparable to the assumption of quasi-infinite conductivity as measured by the Q-factor. The details can be found in the superconducting-resonator publication listed below.

    [Figure: superconducting CPW resonator] On the left is an illustration of a CPW resonator structure found in quantum readout applications, with superconducting films sitting atop a silicon substrate. The right depicts the spatial variation of the electric field along an x-z slice passing through the transmission lines. The dark shaded regions indicate metal (either conducting or superconducting). The red and blue shading indicates the magnitude of the Ex field (blue/red = +/- 0.001 V/m) near the end of the simulation, illustrating the fundamental mode. The inset is an x-z slice, with normal in the y-direction, extracted at the front of the resonator line (y = -300 microns), with vectors illustrating the electric field.
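
    A sketch of the two-fluid idea, under assumed parameter values: inside the superconductor the superconducting current density Js is advanced with the first London equation, dJs/dt = E / (mu0 * lambda_L^2), and the total current Js + Jn then feeds back into the FDTD update of E.

    import numpy as np

    # Explicit update of the first London equation inside an FDTD loop.
    # All values are illustrative, not taken from the ARTEMIS module.
    mu0 = 4e-7 * np.pi        # vacuum permeability [H/m]
    lambda_L = 100e-9         # London penetration depth [m], material-dependent
    dt = 1e-18                # time step [s]

    E = np.full(100, 1e-3)    # electric field samples in the superconductor [V/m]
    Js = np.zeros(100)        # superconducting current density [A/m^2]

    def london_update(Js, E):
        """One explicit step of dJs/dt = E / (mu0 * lambda_L^2)."""
        return Js + dt * E / (mu0 * lambda_L**2)

    Js = london_update(Js, E)
    # In the full two-fluid model a normal (ohmic) current Jn = sigma_n * E is
    # added, and J = Js + Jn enters the E-field update of the FDTD scheme.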


    FerroX


    FerroX is a massively parallel, 3D phase-field simulation framework for the modeling and design of ferroelectric-based microelectronic devices. Due to their switchable polarization in response to applied electric fields, ferroelectric materials have enabled a wide portfolio of innovative microelectronic devices, such as ferroelectric capacitors, nonvolatile memories, and ferroelectric field effect transistors (FeFETs). FeFETs, in particular, are designed to overcome the fundamental energy consumption limit (the "Boltzmann tyranny") associated with individual semiconductor components, allowing for the design of ultra-low-power logic technologies. The goal of FerroX is to provide in-depth insight into the underlying physics and to give researchers a reliable design tool for novel microelectronic devices. One of the key challenges in modeling devices such as FeFETs is the intrinsic multiphysics nature of the multimaterial stacks. Typical ferroelectric devices involve at least three coupled physical mechanisms: ferroelectric polarization switching, semiconductor electron transport, and classical electrostatics, each of which includes rich underlying physics. FerroX self-consistently couples the time-dependent Ginzburg-Landau (TDGL) equation for ferroelectric polarization, Poisson's equation for the electric potential, and the charge equation for carrier densities in semiconductor regions. We discretize the coupled system of partial differential equations using a finite difference approach, with an overall scheme that is second-order accurate in both space and time. The algorithm is implemented using the AMReX software framework, which provides effective scalability on manycore and GPU-based supercomputing architectures. We have demonstrated the performance of the algorithm with excellent scaling results on NERSC multicore and GPU systems, with a significant (15x) speedup on the GPU in a node-by-node comparison. Additional details can be found in the FerroX publication listed below. Our ongoing efforts include implementing the capability to quantify the effect of tetragonal/orthorhombic phase mixtures on negative capacitance stabilization and effective oxide thickness lowering. In addition, we are adding features to model carrier transport in the semiconductor region to enable the first full 3D simulations of FeFETs.

    [Figure: MFISM stack] MFISM stack with 5 nm thick HZO on top, followed by 1 nm thick SiO2 and 10 nm thick Si as the ferroelectric, dielectric, and semiconductor layers, respectively. The vertical direction represents the thickness of the device (z). For an applied voltage V_app = 0 V: (a) polarization distribution showing multi-domain formation in the HZO; (b) potential distribution induced in the semiconductor; (c) electric field vector plot in the semiconductor.
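
    As a minimal sketch of the TDGL relaxation at the heart of the phase-field model, the 1D example below evolves the polarization P down the gradient of a Landau-Ginzburg-Devonshire free energy. The coefficients are illustrative placeholders, not fitted HZO values, and the electric field would come from the coupled Poisson solve.

    import numpy as np

    # 1D time-dependent Ginzburg-Landau (TDGL) relaxation sketch:
    # dP/dt = -Gamma * dF/dP, with a Landau polynomial plus gradient energy.
    # All coefficients are assumed for illustration.
    alpha, beta, gamma = -2.5e9, 6.0e10, 1.5e11   # Landau coefficients (assumed)
    g = 1.0e-9        # gradient-energy coefficient (assumed)
    Gamma = 50.0      # kinetic coefficient (assumed)
    dx, dt = 0.5e-9, 1e-13

    P = 0.01 * (2 * np.random.rand(128) - 1)   # small random initial polarization
    E = np.zeros(128)                          # field from the Poisson solve

    for _ in range(1000):
        lap = (np.roll(P, 1) - 2 * P + np.roll(P, -1)) / dx**2   # periodic Laplacian
        # Variational derivative of the free energy with respect to P
        dFdP = 2 * alpha * P + 4 * beta * P**3 + 6 * gamma * P**5 - g * lap - E
        P += -Gamma * dt * dFdP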


    Quantum-eXstatic


    Quantum-eXstatic is an exascale electrostatic-quantum transport framework currently supporting the modeling of carbon nanotube field-effect transistors (CNTFETs). It is developed as part of a DOE-funded project called 'Codesign and Integration of Nanosensors on CMOS.' One application of CNTFETs is their use as sensors in advanced photodetectors, where carbon nanotubes are particularly attractive due to their high surface-to-volume ratio, which makes them highly sensitive to their environment. Such sensing applications require modeling arrays of nanotubes on the order of hundreds of nanometers in length, which may be functionalized with photosensing materials such as quantum dots. The goal of Quantum-eXstatic is to model such systems while making efficient use of CPU/GPU heterogeneous architectures. The framework comprises three major components: the electrostatic module, the quantum transport module, and the module that self-consistently couples the two. The electrostatic module computes the electrostatic potential induced by charges on the surface of carbon nanotubes, as well as by source, drain, and gate terminals, which can be modeled as embedded boundaries with intricate shapes. The quantum transport module uses the nonequilibrium Green's function (NEGF) method to model the induced charge. Currently, it supports coherent (ballistic) transport, contacts modeled as semi-infinite leads, and Hamiltonian representation in the tight-binding approximation. Self-consistency between the two modules is achieved using a modified Broyden's second method, which is parallelized on both CPUs and GPUs. Preliminary studies have demonstrated that the electrostatic module can compute the potential on billions of grid cells, and the quantum transport module can compute the Green's function for a material with millions of site locations, each within a couple of seconds.

    [Figure: Quantum-eXstatic] A scalable approach to modeling CNTFETs using the 3D exascale electrostatic-quantum transport framework. The lower left figure shows the variation in the self-consistently computed electrostatic field in a gate-all-around CNTFET due to variation in the user-defined gate-source voltage for a fixed drain-source bias of -0.1 V. The lower right figure shows the magnitude of the drain-source current as a function of gate-source voltage for the same CNTFET, along with a comparison to the results of Léonard and Stewart (2006). The subthreshold swing of 69 mV/decade is calculated from this plot.
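
    The structure of the self-consistency iteration can be sketched as a damped fixed-point loop between the two modules. For brevity the sketch below uses simple linear mixing with placeholder physics functions; the actual framework uses a GPU-parallelized modified Broyden's second method and real Poisson/NEGF solves.

    import numpy as np

    # Schematic self-consistency loop between electrostatics and transport.
    # Both "solvers" are stand-ins chosen only to make the loop runnable.
    def poisson_solve(charge):
        # Placeholder: would solve Poisson's equation for the potential
        return -0.5 * charge

    def negf_charge(potential):
        # Placeholder: would compute charge via nonequilibrium Green's functions
        return np.tanh(potential) + 0.1

    mix = 0.3                      # mixing (damping) parameter
    rho = np.zeros(64)             # initial charge guess
    for _ in range(200):
        phi = poisson_solve(rho)           # electrostatics given charge
        rho_new = negf_charge(phi)         # quantum transport given potential
        if np.max(np.abs(rho_new - rho)) < 1e-10:
            break                          # converged
        rho = (1 - mix) * rho + mix * rho_new   # damped fixed-point update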


    MagneX


    For classes of micromagnetic problems where the electromagnetic fields are slowly varying, the magnetostatic approximation offers huge computational savings. We are developing a new micromagnetics code, MagneX, for modeling ferromagnetic materials in the magnetostatic approximation. MagneX incorporates many different physical phenomena, such as exchange coupling, anisotropy, the Dzyaloshinskii-Moriya interaction (DMI), and demagnetization. The GPU capabilities provided by the AMReX library make MagneX a powerful tool for simulating a wide range of micromagnetic applications, including magnetic storage and memory devices.
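
    A sketch of the effective field H_eff that drives the magnetization dynamics in such a code, showing only the exchange and uniaxial anisotropy contributions with assumed material parameters; DMI and demagnetization terms would be added in the same way.

    import numpy as np

    # Exchange + uniaxial-anisotropy effective field on a periodic grid.
    # Parameters are illustrative, not MagneX defaults.
    mu0 = 4e-7 * np.pi
    A = 1.3e-11          # exchange stiffness [J/m] (assumed)
    Ku = 5e4             # uniaxial anisotropy constant [J/m^3] (assumed)
    Ms = 8e5             # saturation magnetization [A/m]
    dx = 2e-9            # cell size [m]
    easy = np.array([0.0, 0.0, 1.0])   # easy axis

    m = np.random.randn(32, 32, 32, 3)
    m /= np.linalg.norm(m, axis=-1, keepdims=True)   # unit magnetization field

    def laplacian(f):
        """Periodic 3D Laplacian applied componentwise."""
        out = -6.0 * f
        for ax in range(3):                # spatial axes only
            out += np.roll(f, 1, axis=ax) + np.roll(f, -1, axis=ax)
        return out / dx**2

    H_exch = 2 * A / (mu0 * Ms) * laplacian(m)
    H_anis = 2 * Ku / (mu0 * Ms) * (m @ easy)[..., None] * easy
    H_eff = H_exch + H_anis      # DMI and demag terms would be added here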


    phononeX: Phonon Transport on Quantum Chips


    We have recently developed a modeling package, phononeX, to simulate the dynamics of ballistic phonons for the design of quantum chips, in support of the DOE-ACCELERATE project "Phonon control for next-generation superconducting systems and sensors." The code employs a Monte Carlo approach to solving the Boltzmann transport equation, using the relaxation time method to account for phonon scattering. Phonons are generated stochastically from an equilibrium source via the Planck distribution. At low temperatures, low phonon density and energy permit very simple phonon transport models, such as the Debye model. Arbitrary geometries can be simulated, including effects such as surface roughness and inhomogeneous temperatures. phononeX capitalizes on AMReX, which ensures its portability across various supercomputers and PCs. It is highly scalable on GPUs/CPUs, allowing for massive parallelization, and its algorithmic flexibility facilitates future advancements in modeling new physical mechanisms.
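
    A schematic Monte Carlo step, with all parameters assumed: phonon frequencies are rejection-sampled from the Planck distribution, particles free-stream ballistically at the group velocity, and scattering is applied with probability 1 - exp(-dt/tau) from the relaxation time approximation.

    import numpy as np

    # Monte Carlo sketch of ballistic phonon transport with relaxation-time
    # scattering. Illustrative only; values are not from phononeX.
    kB, hbar = 1.380649e-23, 1.054571817e-34
    T = 0.1                    # temperature [K], quantum-chip regime
    v_g = 6000.0               # group velocity [m/s] (Debye-like, assumed)
    tau = 1e-6                 # relaxation time [s] (assumed)
    dt = 1e-8
    rng = np.random.default_rng(0)

    def sample_planck(n):
        """Rejection-sample x = hbar*omega/(kB*T) from x^2/(exp(x)-1)."""
        out, i = np.empty(n), 0
        while i < n:
            x = rng.uniform(1e-6, 20.0, n)
            y = rng.uniform(0.0, 0.65, n)   # 0.65 bounds the target density
            acc = x[y < x**2 / np.expm1(x)]
            take = min(len(acc), n - i)
            out[i:i + take] = acc[:take]
            i += take
        return out * kB * T / hbar          # angular frequencies [rad/s]

    omega = sample_planck(10000)            # would weight energy deposition
    pos = np.zeros((10000, 3))
    dirs = rng.normal(size=(10000, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # isotropic directions

    for _ in range(100):
        pos += v_g * dt * dirs              # ballistic free flight
        scattered = rng.random(10000) < 1 - np.exp(-dt / tau)
        new = rng.normal(size=(scattered.sum(), 3))
        dirs[scattered] = new / np.linalg.norm(new, axis=1, keepdims=True)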


    Machine Learning


    We are working on two machine learning projects that leverage the simulation efforts described above. First, we are using the FerroX framework to enable machine-learning-driven design of NCFET gate stacks. In collaboration with Jorge Munoz of UTEP, we are employing multilayer perceptrons (MLPs) to directly map input variables (e.g., the ferroelectric and dielectric thicknesses and the Landau free energy coefficients) to output variables (e.g., the semiconductor voltage). We are also testing convolutional neural networks (CNNs) as a means of establishing a mapping from the field quantities to the semiconductor voltage. In addition, we are using Gaussian Process Regression (GPR) to discern the significance of the input parameters: using the Radial Basis Function (RBF) kernel, the model was trained to capture the influence of parameters such as the ferroelectric (FE) and dielectric (DE) thicknesses on capacitance at a given applied voltage.

    Second, we are developing highly performant ML-augmented models that integrate neural networks directly into multiphysics mechanistic models to capture the coupling mechanisms between different physics. This project focuses on the capabilities of the ARTEMIS framework to model spintronic devices, a thriving technology that utilizes magnetic spins to control and manipulate electric signals with remarkable scalability and low power dissipation. We use neural networks as universal approximators to model the coupling phenomena, which can alleviate stiffness in the coupled PDEs and significantly enhance the efficiency of scientific modeling. The core approach is to solve the governing PDE for the magnetic spins and incorporate a neural network to represent the coupling torque term arising from the electric field. This enables us to fully utilize the immense computational power of modern manycore/GPU exascale supercomputers (surpassing the capabilities of popular PINN models) while taking advantage of the acceleration offered by machine learning techniques.
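
    As a small illustration of the GPR piece, the sketch below fits a scikit-learn Gaussian process with an RBF kernel to a synthetic mapping from (FE thickness, DE thickness) to a mock device response; the training data are placeholders, not FerroX output.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    # GPR with an RBF kernel mapping stack parameters to a device response.
    # Synthetic data only, for illustration of the workflow.
    rng = np.random.default_rng(0)
    X = rng.uniform([2.0, 0.5], [8.0, 2.0], size=(40, 2))   # [FE, DE] thickness [nm]
    y = np.sin(X[:, 0]) + 0.2 * X[:, 1] + 0.01 * rng.normal(size=40)  # mock response

    gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
    gpr.fit(X, y)

    # Predictive mean and uncertainty for a new gate-stack design
    X_new = np.array([[5.0, 1.0]])
    mean, std = gpr.predict(X_new, return_std=True)
    # The fitted kernel length scales indicate each input's influence (an ARD
    # kernel would assign a separate length scale per input dimension).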


    Publications


    P. Kumar, M. Hoffmann, A. Nonaka, S. Salahuddin, and Z. Yao, 3D ferroelectric phase field simulations of polycrystalline multi-phase hafnia and zirconia based ultra-thin films, submitted for publication. [arXiv]

    R. Jambunathan, Z. Yao, R. Lombardini, A. Rodriguez, and A. Nonaka, Two-Fluid Physical Modeling of Superconducting Resonators in the ARTEMIS Framework, Computer Physics Communications, 291, 2023. [link]

    P. Kumar, A. Nonaka, R. Jambunathan, G. Pahwa, S. Salahuddin, and Z. Yao, FerroX: A GPU-accelerated, 3D Phase-Field Simulation Framework for Modeling Ferroelectric Devices, Computer Physics Communications, 108757, 2023. [link]

    S. Sawant, Z. Yao, R. Jambunathan, and A. Nonaka, Characterization of Transmission Lines in Microelectronics Circuits using the ARTEMIS Solver, IEEE Journal on Multiscale and Multiphysics Computational Techniques, 8, 2022. [link]

    Z. Yao, R. Jambunathan, Y. Zeng, and A. Nonaka, A Massively Parallel Time-Domain Coupled Electrodynamics-Micromagnetics Solver, International Journal of High Performance Computing Applications, 10943420211057906, 2021. [link]