Convex optimization
DTU Department of Informatics and Mathematical Modeling
The aim of the course is to provide students with a general overview of convex optimization theory, its applications, and computational methods for large-scale optimization. The students will learn how to recognize convex optimization problems and how to solve them numerically, either by using an existing software library or by deriving and implementing a suitable method that exploits problem structure. As part of the course, the students will work on a project that gives them the opportunity to put theory to work in a practical, application-oriented context.
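As a rough illustration of the software-library route, a small l1-regularized least-squares problem with a norm constraint might be formulated and solved with a modeling package such as CVXPY. The library choice, data, and parameters below are assumptions for illustration only, not prescribed course software.

# Illustrative only: CVXPY, the data, and the parameters are assumptions,
# not prescribed course software.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)

x = cp.Variable(10)
# l1-regularized least squares with a Euclidean-ball (second-order cone) constraint.
objective = cp.Minimize(cp.sum_squares(A @ x - b) + 0.1 * cp.norm1(x))
constraints = [cp.norm(x, 2) <= 1]
prob = cp.Problem(objective, constraints)
prob.solve()
print(prob.status, prob.value)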
Learning objectives:
A student who has met the objectives of the course will be able to:
- recognize and characterize convex functions and sets
- explain/characterize the subdifferential of a convex function
- describe basic concepts of convex analysis
- derive the Lagrange dual of a convex optimization problem
- recognize and formulate conic constraints
- derive a convex relaxation of nonconvex quadratic problems
- implement a first-order method for a large-scale optimization problem with structure (a minimal sketch of such a method follows this list)
- construct and implement a splitting method for a convex–concave saddle-point problem
- evaluate the computational performance of an optimization algorithm
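To make the implementation-oriented objectives concrete, the following is a minimal sketch of a proximal gradient (ISTA) iteration for an l1-regularized least-squares problem. The problem instance, step-size rule, and iteration count are illustrative assumptions rather than course material.

# Illustrative proximal gradient (ISTA) sketch for
#   minimize (1/2)*||A x - b||^2 + lam*||x||_1;
# data, step size, and iteration count are assumptions for illustration.
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, num_iters=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(num_iters):
        grad = A.T @ (A @ x - b)           # gradient of the least-squares term
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 500))
b = rng.standard_normal(200)
x_hat = ista(A, b, lam=0.1)
print(np.count_nonzero(x_hat), "nonzero coefficients")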
Contents:
Convex analysis (convex sets and functions, convex conjugate, duality, dual norms, composition rules, subgradient calculus), conic optimization (linear optimization, second-order cone optimization, semidefinite optimization), first-order methods for smooth and nonsmooth optimization (proximal gradient methods, acceleration), splitting methods (Douglas–Rachford splitting, ADMM, Chambolle–Pock algorithm), stochastic methods, incremental methods and coordinate descent methods.
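As a concrete instance of the splitting methods listed above, the following sketches the Chambolle–Pock primal-dual iteration applied to a saddle-point formulation of the same l1-regularized least-squares problem. The step sizes, data, and iteration count are illustrative assumptions, not a definitive implementation.

# Illustrative Chambolle-Pock (primal-dual) sketch for the saddle-point form
#   min_x max_y  <A x, y> - f*(y) + lam*||x||_1,
# with f(z) = (1/2)*||z - b||^2, so prox_{sigma f*}(v) = (v - sigma*b)/(1 + sigma).
# Step sizes satisfy tau*sigma*||A||^2 <= 1; data and parameters are assumptions.
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def chambolle_pock(A, b, lam, num_iters=500):
    op_norm = np.linalg.norm(A, 2)
    tau = sigma = 1.0 / op_norm            # so that tau*sigma*||A||^2 = 1
    x = np.zeros(A.shape[1])
    y = np.zeros(A.shape[0])
    x_bar = x.copy()
    for _ in range(num_iters):
        y = (y + sigma * (A @ x_bar) - sigma * b) / (1.0 + sigma)   # dual prox step
        x_old = x
        x = soft_threshold(x - tau * (A.T @ y), tau * lam)          # primal prox step
        x_bar = 2.0 * x - x_old                                     # extrapolation, theta = 1
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 500))
b = rng.standard_normal(200)
x_hat = chambolle_pock(A, b, lam=0.1)
print(np.count_nonzero(x_hat), "nonzero coefficients")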