Overcomplete Transform Learning With the $\log$ Regularizer
Blog Article
Transform learning has been proposed as a new and effective formulation for analysis dictionary learning, where the ℓ0 norm or the ℓ1 norm is generally used as the sparsity constraint. The sparse solutions can then be obtained by hard thresholding or soft thresholding. Hard thresholding is essentially a greedy operation that yields only approximate solutions, while soft thresholding introduces a bias on the large elements. In this paper, we propose to employ the log regularizer instead of the ℓ0 norm and the ℓ1 norm in the overcomplete transform learning problem. The resulting minimization problem is nonconvex due to the log regularizer.
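For concreteness, here is a minimal NumPy sketch of the two classical thresholding operators mentioned above (the function names and the threshold argument `lam` are our own labels):

```python
import numpy as np

def hard_threshold(x, lam):
    # Keep entries whose magnitude exceeds the threshold; zero out the rest.
    # Large surviving entries are left untouched, but the selection is greedy.
    return np.where(np.abs(x) > lam, x, 0.0)

def soft_threshold(x, lam):
    # Shrink every entry toward zero by lam; note that even large entries
    # are reduced by the full lam, which is the bias referred to above.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
```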
We propose to employ a simple proximal alternating minimization method, in which a closed-form solution for the log function can be obtained via its proximal operator. Hence, an efficient and fast overcomplete transform learning algorithm is developed, which alternates between an analysis coding stage and a transform update stage. Theoretical analysis shows that the proposed algorithm obtains sparser solutions and more accurate results. Numerical experiments verify that the proposed algorithm outperforms existing transform learning approaches based on the ℓ0 norm or the ℓ1 norm. Furthermore, the proposed algorithm is on par with state-of-the-art image denoising algorithms.
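The following sketch illustrates the kind of closed-form proximal step and alternating scheme described above. It assumes the common elementwise penalty λ·log(ε + |x|); the paper's exact penalty, parameter choices, and transform update stage may differ, and the names `prox_log` and `transform_learning_log`, as well as the gradient-based update of W with a Frobenius-norm regularizer, are simplifying assumptions for illustration only:

```python
import numpy as np

def prox_log(y, lam, eps=1.0):
    """Elementwise prox of x -> lam * log(eps + |x|) (assumed penalty form).

    For each entry, minimize 0.5*(x - |y|)**2 + lam*log(eps + x) over x >= 0.
    Setting the derivative to zero gives a quadratic; its larger root
    (when real) competes against x = 0 in objective value.
    """
    a = np.abs(y)
    disc = (a + eps) ** 2 - 4.0 * lam
    root = np.where(disc >= 0.0,
                    ((a - eps) + np.sqrt(np.maximum(disc, 0.0))) / 2.0,
                    0.0)
    root = np.maximum(root, 0.0)
    # Keep the root only where it achieves a lower objective than x = 0.
    f_root = 0.5 * (root - a) ** 2 + lam * np.log(eps + root)
    f_zero = 0.5 * a ** 2 + lam * np.log(eps)
    return np.sign(y) * np.where(f_root < f_zero, root, 0.0)

def transform_learning_log(Y, K, lam=0.1, mu=1e-2, step=1e-4, iters=100, seed=0):
    """Minimal alternating sketch: analysis coding via the log prox,
    then a gradient step on the overcomplete transform W (K > n rows).
    The paper's transform update stage may instead use a closed form.
    """
    rng = np.random.default_rng(seed)
    n = Y.shape[0]
    W = rng.standard_normal((K, n)) / np.sqrt(n)
    for _ in range(iters):
        # Analysis coding stage: sparsify the analysis coefficients W @ Y.
        X = prox_log(W @ Y, lam)
        # Transform update stage (illustrative): descend on
        # ||W Y - X||_F^2 + mu * ||W||_F^2.
        grad = 2.0 * (W @ Y - X) @ Y.T + 2.0 * mu * W
        W -= step * grad
    return W, X
```

Note that at a nonzero stationary point of the scalar subproblem, x = |y| − λ/(ε + x), so the shrinkage λ/(ε + x) vanishes as |y| grows. Unlike the constant shrinkage of soft thresholding, the log prox leaves large entries nearly untouched, which is the source of the reduced bias claimed above.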