Timur Takhtaganov

Postdoctoral Researcher, Computational Research Division

Contact Information

Timur Takhtaganov
MS 50A-3111
Lawrence Berkeley National Lab
1 Cyclotron Rd.
Berkeley, CA 94720

510-486-5835 (office)
510-486-6900 (fax)

Affiliation and Research Interests

I am a postdoctoral researcher in the Center for Computational Sciences and Engineering (CCSE) in the Computational Research Division of the Computing Sciences Directorate at the Lawrence Berkeley National Laboratory. My research focuses on the development of efficient numerical algorithms for solving optimization problems with computationally expensive simulation constraints that are subject to uncertainty in the parameters.

Current research

The current focus of my work is on solving parameter inference problems in which forward-model evaluations are expensive. I use Gaussian process models as computationally inexpensive surrogates of the forward model.

The novelty of my work lies in refining the Gaussian process surrogate adaptively and in a principled way, borrowing ideas from Bayesian optimization and exploiting the uncertainty estimates that the Gaussian process framework provides. The approach requires solving an auxiliary optimization problem that involves only evaluations of the surrogate model and therefore has negligible computational cost. This optimization step proposes a new input parameter at which to evaluate the forward model; the resulting input-output pair is then added to the training set. Proceeding iteratively, the algorithm obtains a "localized" Gaussian process surrogate that is accurate enough for solving the original inference problem without requiring high accuracy of the surrogate globally. The resulting surrogate is used to generate samples from the posterior distribution of the parameters of interest at low cost, enabling further analysis and calibration tasks. For details of the method, see the preprint in the Publications section or the presentation in the Talks section.
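The adaptive loop described above can be sketched in a few lines. This is a minimal one-dimensional illustration, not the method from the preprint: the toy forward model (a sine), the fixed squared-exponential kernel, and the simple variance-weighted likelihood acquisition are all stand-ins chosen for brevity.

```python
import numpy as np

def rbf_kernel(A, B, length=0.3, var=1.0):
    # Squared-exponential covariance between row vectors in A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length**2)

def gp_posterior(Xtr, ytr, Xq, noise=1e-6):
    # Standard GP regression: posterior mean and variance at query points Xq.
    K = rbf_kernel(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = rbf_kernel(Xq, Xtr)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, ytr))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(rbf_kernel(Xq, Xq)) - (v**2).sum(0)
    return mean, np.maximum(var, 0.0)

def forward(theta):
    # Stand-in "expensive" forward model (hypothetical 1-D example).
    return np.sin(3.0 * theta[:, 0])

y_obs, sigma = 0.5, 0.1                      # observed datum and noise level
Xtr = np.array([[0.0], [0.5], [1.0]])        # initial training inputs
ytr = forward(Xtr)

for it in range(5):
    cand = np.linspace(0.0, 1.0, 201)[:, None]   # cheap candidate grid
    mu, var = gp_posterior(Xtr, ytr, cand)
    # Acquisition: favor candidates where the surrogate predicts a good
    # data fit AND is still uncertain (variance-weighted log-likelihood).
    loglik = -0.5 * (mu - y_obs) ** 2 / sigma**2
    acq = loglik + 0.5 * np.log(var + 1e-12)
    theta_new = cand[np.argmax(acq)]
    # Evaluate the forward model at the proposed input and grow the set.
    Xtr = np.vstack([Xtr, theta_new])
    ytr = np.append(ytr, forward(theta_new[None, :]))
```

After a few iterations the training set clusters around the inputs consistent with the data, yielding a surrogate that is accurate where the posterior has mass rather than everywhere.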

Previous research

My doctoral research focused on numerical approaches to the solution of optimization problems constrained by partial differential equations (PDEs) with random input data. PDE-constrained optimization problems arise in numerous engineering applications, such as topological design of elastic structures or optimization of oil reservoir production. Often the parameters of the PDEs are not known exactly and must be estimated from data. This uncertainty has to be quantified when formulating and solving optimal design and control problems in order to obtain solutions that are robust in the face of randomness. Solving PDEs with random coefficients (random PDEs) is computationally challenging: to approximate the statistics of a quantity of interest associated with the solution of a random PDE, one must evaluate the solution for many samples of the random parameters, which can be very expensive if the model PDE has a large number of solution variables, and especially if it has nonlinear dynamics. If, on top of that, one wishes to optimize a quantity of interest associated with the solution of a random PDE, the number of PDE solves, and with it the computational cost, grows dramatically.

In my work I addressed the numerical solution of optimization problems constrained by random PDEs that incorporate the concept of risk aversion. Risk-averse formulations allow the decision maker to explicitly quantify the risk associated with a particular value of a design or control variable and to obtain a solution that is more robust to fluctuations in the random parameters. This robustness comes at a higher computational cost, since accurately evaluating the risk functions of random quantities of interest requires a large number of samples. My work was primarily concerned with reducing this cost by exploiting the mathematical properties of risk functions, advanced statistical sampling techniques such as importance sampling, and surrogate models of the underlying PDEs such as reduced-order models.
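A sample-based estimator of one such risk function, the conditional value-at-risk (CVaR), illustrates why the cost grows with the confidence level: only the worst (1 - alpha) fraction of samples contributes. The following is a minimal plain Monte Carlo sketch (no importance sampling or reduced-order models), with a standard normal as a stand-in quantity of interest.

```python
import numpy as np

def cvar_estimate(samples, alpha=0.95):
    # Rockafellar-Uryasev form: CVaR_a = VaR_a + E[(X - VaR_a)_+] / (1 - a),
    # using the empirical alpha-quantile of the samples as the VaR estimate.
    s = np.sort(samples)
    var_a = s[int(np.ceil(alpha * len(s))) - 1]
    return var_a + np.mean(np.maximum(s - var_a, 0.0)) / (1.0 - alpha)

rng = np.random.default_rng(0)
q = rng.standard_normal(200_000)          # stand-in quantity of interest
print(cvar_estimate(q, alpha=0.95))       # approx. 2.06 for a standard normal
```

With plain Monte Carlo, only about (1 - alpha) * N samples fall in the tail that defines CVaR, which is what makes importance sampling (shifting the sampling density toward the tail) attractive at high confidence levels.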


Publications

T. Takhtaganov and J. Mueller, "Adaptive Gaussian process surrogates for Bayesian inference", submitted to the SIAM/ASA Journal on Uncertainty Quantification, 2018. [arXiv]

M. Heinkenschloss, B. Kramer, T. Takhtaganov, and K. Willcox, "Conditional-Value-at-Risk estimation via reduced-order models", SIAM/ASA Journal on Uncertainty Quantification, 6(4), 1395-1423, 2018. [link]

T. Takhtaganov, "Efficient estimation of coherent risk measures for risk-averse optimization problems governed by partial differential equations with random inputs", PhD thesis, Department of Computational and Applied Mathematics, Rice University, 2017. [link]

T. Takhtaganov, D. P. Kouri, and D. Ridzal, "An importance sampling approach to risk estimation", Summer Proceedings of the Center for Computing Research at Sandia National Laboratories, 2016. [link]

T. Takhtaganov, "High-dimensional integration for optimization under uncertainty", Master's thesis, Department of Computational and Applied Mathematics, Rice University, 2015. [link]

T. Takhtaganov, D. P. Kouri, D. Ridzal, and E. Keiter, "Optimization under uncertainty for the Shockley and the drift-diffusion models of a diode", Summer Proceedings of the Center for Computing Research at Sandia National Laboratories, 2014. [link]


Talks

T. Takhtaganov, "Adaptive construction of Gaussian process surrogates for Bayesian solution of inverse problems", Research Challenges and Opportunities at the Interface of Machine Learning and Uncertainty Quantification workshop, USC, Los Angeles, California, June 4-6, 2018. [slides]


LinkedIn profile           ResearchGate profile           Google Scholar profile