Smoothing (say, by a Gaussian kernel) is a popular technique for optimizing a nonconvex objective function. The rationale behind smoothing is that the smoothed function has fewer spurious local minima than the original one. This technique has seen tremendous success in many real-world tasks, such as those arising in machine learning and computer vision. Despite its empirical success, there has been little theoretical understanding of the effect of smoothing in optimization. This work rigorously studies some of the fundamental properties of the smoothing technique. In particular, we present a formal definition of the class of functions that eventually become convex under smoothing. We establish the related necessary and sufficient conditions, and we derive a closed-form expression for the minimizer of the resulting smoothed function when certain decay conditions hold. Finally, we show that the class of asymptotically convex functions is very rich.
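
For concreteness, Gaussian smoothing is typically formalized as convolution with a Gaussian density; the display below is the standard formulation rather than notation taken from the text, and the symbols $g_\sigma$ and $\sigma$ are our labels for the kernel and its bandwidth:
\[
f_\sigma(x) \;=\; (f * g_\sigma)(x) \;=\; \int_{\mathbb{R}^n} f(x - y)\, g_\sigma(y)\, \mathrm{d}y \;=\; \mathbb{E}_{z \sim \mathcal{N}(0,\, \sigma^2 I)}\!\big[f(x - z)\big].
\]
Larger $\sigma$ averages $f$ over a wider neighborhood, washing out small-scale local minima; in this reading, asymptotic convexity refers to $f_\sigma$ becoming convex once $\sigma$ is sufficiently large.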