Automatic Regularization Parameter Selection


Regularization arises in machine learning and inverse problems. It is needed when data are insufficient or models are incomplete, and it can prevent overfitting. The amount of regularization applied to a problem is controlled by a regularization parameter. This parameter is often adjusted manually until the solution looks acceptable, but automatic selection methods exist, such as the discrepancy principle, the L-curve, and generalized cross validation. For regularized least squares, I will discuss a variant of the discrepancy principle that uses a chi-squared test to identify the regularization parameter and reduces the overfitting that typically occurs with the standard discrepancy principle. I will also present a chi-squared test that automatically selects the regularization parameter for Total Variation (TV) regularization; this test is justified by interpreting the TV regularization term as a sparsity prior with a Laplace distribution. Finally, I will show results from a few imaging problems that use these chi-squared tests for regularization parameter selection.
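To make the parameter-selection idea concrete, the sketch below illustrates the classical discrepancy principle for Tikhonov-regularized least squares on a synthetic problem: the regularization parameter is chosen so that the weighted residual matches its expected chi-squared value of m (the number of data points). The matrix A, the noise level sigma, and the bracketing interval are assumptions made purely for illustration; this is not the chi-squared variant described in the abstract, which instead applies a chi-squared test to reduce the overfitting of the standard discrepancy principle.

```python
# Minimal illustrative sketch: discrepancy-principle selection of the Tikhonov
# regularization parameter on a synthetic problem (all quantities assumed).
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)
m, n = 100, 50
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[10:20] = 1.0            # simple synthetic signal
sigma = 0.1                    # noise standard deviation, assumed known
b = A @ x_true + sigma * rng.standard_normal(m)

def tikhonov_solve(lam):
    """Solve min_x ||A x - b||^2 + lam^2 ||x||^2 via the normal equations."""
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

def discrepancy_gap(lam):
    """Weighted residual norm squared minus its target value m."""
    r = A @ tikhonov_solve(lam) - b
    return (r @ r) / sigma**2 - m

# The residual norm grows monotonically with lam, so bracket on log(lambda)
# and root-find for the parameter that matches the discrepancy target.
log_lam = brentq(lambda t: discrepancy_gap(np.exp(t)), np.log(1e-6), np.log(1e3))
lam_star = np.exp(log_lam)
x_hat = tikhonov_solve(lam_star)
res = np.linalg.norm(A @ x_hat - b)**2 / sigma**2
print(f"selected lambda = {lam_star:.4g}, residual chi^2 / m = {res / m:.3f}")
```

Root-finding on the log of the parameter is a common practical choice here, since useful values of the regularization parameter often span several orders of magnitude.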