Second Order Descent Methods for Optimization Problems that Involve L1-Penalizations

Abstract: In this work we develop descent methods for the minimization of nonsmooth objective functions that involve L1-penalizations. The descent direction is enriched with second-order information obtained by regularizing the nonsmooth terms. This strategy is applied to group-sparse optimization problems and to incompressible bi-viscous fluids.
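The core idea can be illustrated with a minimal sketch (not the authors' algorithm): the L1 term is replaced by a Huber smoothing, which makes the objective twice differentiable so that a Newton step supplies the second-order information enriching the descent direction. All names and parameter values below are illustrative assumptions.

```python
import numpy as np

def huber_grad_hess(x, gamma):
    # Gradient and (diagonal) Hessian of the Huber smoothing of |x|:
    # |x| is replaced by x^2/(2*gamma) near 0 and |x| - gamma/2 elsewhere.
    g = np.where(np.abs(x) <= gamma, x / gamma, np.sign(x))
    h = np.where(np.abs(x) <= gamma, 1.0 / gamma, 0.0)
    return g, h

def smoothed_l1_newton(A, b, beta, gamma=1e-3, iters=50):
    # Minimize 0.5*||A x - b||^2 + beta*||x||_1 with the L1 term smoothed.
    n = A.shape[1]
    x = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b

    def obj(y):
        hub = np.where(np.abs(y) <= gamma,
                       y**2 / (2 * gamma), np.abs(y) - gamma / 2)
        return 0.5 * np.sum((A @ y - b)**2) + beta * np.sum(hub)

    for _ in range(iters):
        g1, h1 = huber_grad_hess(x, gamma)
        grad = AtA @ x - Atb + beta * g1
        hess = AtA + beta * np.diag(h1)
        d = np.linalg.solve(hess, -grad)   # second-order descent direction
        t = 1.0                            # Armijo backtracking line search
        while obj(x + t * d) > obj(x) + 1e-4 * t * (grad @ d):
            t *= 0.5
        x = x + t * d
    return x
```

As `gamma` shrinks, the smoothed problem approaches the original nonsmooth one while the Hessian contribution of the penalty grows, so in practice the smoothing parameter trades conditioning against fidelity.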
Bilevel parameter learning for total variation image denoising: optimality conditions and numerical solution
Abstract: Computational image restoration models depend heavily on the choice of parameters. Bilevel parameter learning is a supervised learning approach that estimates optimal parameters from a training set of clean and damaged image pairs. In this work, we establish suitable constraint qualification conditions and characterize optimality conditions for the bilevel learning problem applied to image denoising models involving the total variation seminorm, using tools from nonsmooth analysis and variational geometry. Furthermore, we solve the bilevel problem numerically with a tailor-made trust-region algorithm based on a characterization of the linear elements of the Bouligand subdifferential of the solution operator.
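To fix ideas, the lower-level problem that the bilevel approach tunes can be sketched for a 1D signal (a toy illustration only; the work itself treats the full bilevel machinery): TV denoising min_u 0.5*||u - f||^2 + lam * TV(u), with the TV term smoothed so plain gradient descent applies. Here `lam` plays the role of the parameter the bilevel problem would learn from clean/noisy pairs; all values below are illustrative assumptions.

```python
import numpy as np

def tv_denoise_1d(f, lam, eps=1e-2, step=0.05, iters=3000):
    # Gradient descent on 0.5*||u - f||^2 + lam * sum sqrt((u_{i+1}-u_i)^2 + eps),
    # a smooth surrogate of the 1D total variation seminorm.
    u = f.copy()
    for _ in range(iters):
        du = np.diff(u)
        w = du / np.sqrt(du**2 + eps)          # smoothed sign of the jumps
        # Gradient of the smoothed TV term: (D^T w) for forward differences D
        grad_tv = np.concatenate(([0.0], w)) - np.concatenate((w, [0.0]))
        grad = (u - f) + lam * grad_tv
        u = u - step * grad
    return u
```

In a bilevel setting, an outer loop would adjust `lam` so that the denoised output of this inner solve best matches the clean training images.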
Data assimilation: regularity and applications
Abstract: In this work, we study variational data assimilation (DA) problems in finite and infinite dimensions. We consider bilevel optimization problems dealing with data assimilation and optimal placement. Additionally, we explore the regularity of the four-dimensional variational (4D-Var) problem in its infinite-dimensional setting. Finally, we study the application of Bayesian variational data assimilation to a parameter estimation problem with a high degree of uncertainty in the data, using ensemble methods to compute the error covariance matrices needed in the problem formulation.
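The ensemble step mentioned above can be sketched as follows (an illustrative simplification, not the work's implementation): the background-error covariance matrix is estimated from an ensemble of model states and then weights the background misfit in the variational cost. Function names and the ridge regularizer are assumptions for the sketch.

```python
import numpy as np

def ensemble_covariance(X):
    # X: (n_state, n_ens) array whose columns are ensemble members.
    mean = X.mean(axis=1, keepdims=True)
    A = X - mean                          # anomaly matrix
    return A @ A.T / (X.shape[1] - 1)     # unbiased sample covariance

def background_term(x, x_b, B, reg=1e-8):
    # Quadratic background misfit 0.5*(x - x_b)^T B^{-1} (x - x_b).
    # A small ridge keeps B invertible when the ensemble is rank-deficient.
    d = x - x_b
    return 0.5 * d @ np.linalg.solve(B + reg * np.eye(len(d)), d)
```

With few ensemble members the sample covariance is rank-deficient, which is one reason practical schemes combine it with regularization or localization before inverting.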