In this talk, we consider a class of L1-regularized optimization problems and the associated smooth “over-parameterized” optimization problems built upon the Hadamard difference parametrization (HDP). We show that second-order stationary points of the HDP-based model correspond to some stationary points of the corresponding L1-regularized model. More importantly, we show that the Kurdyka-Łojasiewicz (KL) exponent of the HDP-based model at a second-order stationary point can be inferred from that of the corresponding L1-regularized model under suitable assumptions. Our assumptions are general enough to cover a wide variety of loss functions commonly used in L1-regularized models, such as the least squares loss and the logistic loss. We also discuss how these KL exponents can help deduce the local convergence rate of a standard gradient method for minimizing the HDP-based models.
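
To fix ideas, here is a minimal sketch of the kind of reformulation the talk refers to; the notation (a smooth loss $f$, regularization weight $\mu > 0$, and $\circ$ for the entrywise product) is assumed here and may differ from the precise formulation in the talk:
\[
\min_{x\in\mathbb{R}^n}\; f(x) + \mu\|x\|_1
\qquad\longrightarrow\qquad
\min_{u,v\in\mathbb{R}^n}\; F(u,v) := f(u\circ u - v\circ v) + \mu\bigl(\|u\|_2^2 + \|v\|_2^2\bigr),
\]
using the identity $|x_i| = \min\{u_i^2 + v_i^2 : u_i^2 - v_i^2 = x_i\}$, so that $F$ is smooth whenever $f$ is, while its minimizers recover solutions of the L1-regularized problem via $x = u\circ u - v\circ v$.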
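
As an illustration of the "standard gradient method" mentioned above, the following is a minimal sketch, assuming a least squares loss; the initialization, constant step size, and iteration count are placeholder choices for illustration only, not taken from the talk:

import numpy as np

def hdp_gradient_descent(A, b, mu, step=None, iters=5000, seed=0):
    """Plain gradient descent on the smooth HDP-based model
        F(u, v) = 0.5*||A(u*u - v*v) - b||^2 + mu*(||u||^2 + ||v||^2),
    whose minimizers recover x = u*u - v*v for the L1-regularized
    least squares problem 0.5*||Ax - b||^2 + mu*||x||_1."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    u = 0.1 * rng.standard_normal(n)
    v = 0.1 * rng.standard_normal(n)
    if step is None:
        # crude heuristic step size; F is quartic, so in practice a line
        # search or locally adaptive step would be used instead
        step = 1.0 / (4.0 * np.linalg.norm(A, 2) ** 2 + 2.0 * mu)
    for _ in range(iters):
        x = u * u - v * v
        g = A.T @ (A @ x - b)              # gradient of the loss f at x
        grad_u = 2.0 * u * g + 2.0 * mu * u    # chain rule through x = u∘u - v∘v
        grad_v = -2.0 * v * g + 2.0 * mu * v
        u -= step * grad_u
        v -= step * grad_v
    return u * u - v * v

# tiny usage example with synthetic sparse data
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true
x_hat = hdp_gradient_descent(A, b, mu=0.1)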