NONSMOOTH OPTIMIZATION PDF DOWNLOAD

The most difficult type of optimization problem to solve is a nonsmooth problem (NSP). Such a problem normally is, or must be assumed to be, non-convex. This book is a self-contained elementary study of nonsmooth analysis and optimization and of their use in the solution of nonsmooth optimal control problems. Many problems in computer vision, robotics, signal processing and geometric mechanics are naturally expressed as nonsmooth optimization problems.
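As a simple illustration (not taken from the book itself), a classic source of nonsmoothness is the pointwise maximum of finitely many smooth functions,

    f(x) = \max_{i=1,\dots,m} f_i(x),

which in general fails to be differentiable at points where the maximum is attained by more than one f_i; for example, |x| = \max\{x, -x\} has no derivative at x = 0.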



Author: Davin Dibbert
Country: Germany
Language: English
Genre: Education
Published: 9 January 2014
Pages: 236
PDF File Size: 49.89 Mb
ePub File Size: 29.20 Mb
ISBN: 755-8-38102-880-2
Downloads: 504
Price: Free
Uploader: Davin Dibbert



Moreover, using certain important methodologies for solving difficult smooth (continuously differentiable) problems leads directly to the need to solve nonsmooth problems, which are either smaller in dimension or simpler in structure.

This is the case, for instance, with decompositions, dual formulations and exact penalty functions. Finally, there exist so-called stiff problems that are analytically smooth but numerically nonsmooth. This means that the gradient varies too rapidly and, thus, these problems behave like nonsmooth problems.
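To make the exact penalty case above concrete (a standard construction, not a formula quoted from this text): for a constrained problem \min f(x) subject to g_i(x) \le 0, the L1 exact penalty function is

    P_\mu(x) = f(x) + \mu \sum_{i} \max\{0,\, g_i(x)\},

which is nonsmooth wherever some constraint is active (g_i(x) = 0), even when f and all g_i are smooth.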

There are several approaches to solving NSO problems.

Napsu Karmitsa - Nonsmooth Optimization (NSO)

The direct application of smooth gradient-based methods to nonsmooth problems is a simple approach, but it may lead to a failure in convergence, in optimality conditions, or in gradient approximation [1].

All these difficulties arise from the fact that the objective function fails to have a derivative at some values of the variables.

The following comparison demonstrates the difficulties that are caused by nonsmoothness.

Difficulties caused by nonsmoothness:

Smooth problem: A descent direction is obtained as the direction opposite to the gradient.
Nonsmooth problem: The gradient does not exist at every point, leading to difficulties in defining a descent direction.

Smooth problem: The necessary optimality condition is that the gradient vanishes at the optimal point.
Nonsmooth problem: The gradient usually does not exist at the optimal point.

Smooth problem: Difference approximation can be used to approximate the gradient.
Nonsmooth problem: Difference approximation is not useful and may lead to serious failures.

Smooth problem: The smooth algorithm converges.
Nonsmooth problem: The smooth algorithm does not converge, or it converges to a non-optimal point.
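The failure of convergence noted in the last comparison can be seen already in one dimension. The following sketch (illustrative only; the function, starting point and step length are chosen here for the example) applies plain gradient descent with a fixed step to f(x) = |x|:

    # Gradient descent with a fixed step on the nonsmooth function f(x) = |x|.
    # The iterates end up oscillating around the minimizer x* = 0 instead of
    # converging to it.

    def f(x):
        return abs(x)

    def grad(x):
        # Derivative of |x| away from 0; at x = 0 we return 1.0, which is a
        # valid subgradient but not zero, so the iteration never stops there.
        return 1.0 if x >= 0 else -1.0

    x = 0.3       # starting point
    step = 0.25   # fixed step length
    for k in range(8):
        x = x - step * grad(x)
        print(k, round(x, 4), round(f(x), 4))

    # The output alternates between x = 0.05 and x = -0.2: the method neither
    # converges nor detects that it is near the optimum.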

On the other hand, using some derivative-free method may be another approach, but standard derivative-free methods like genetic algorithms and Powell's method may be unreliable and become inefficient as the dimension of the problem increases.

Nonsmooth Optimization (NSO)

Moreover, the convergence of such methods has been proved only for smooth functions. In addition, different kinds of smoothing and regularization techniques may give satisfactory results in some cases, but they are not, in general, as efficient as the direct nonsmooth approach [4].
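As a simple illustration of such a smoothing technique (a standard device, not necessarily the one analysed in [4]), the absolute value can be replaced by the smooth approximation

    |t| \approx \sqrt{t^2 + \varepsilon^2}, \qquad \varepsilon > 0,

which is continuously differentiable for every \varepsilon > 0; however, a small \varepsilon is needed for a good approximation, and then the gradient varies very rapidly near t = 0, so the smoothed problem becomes stiff.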

Thus, special tools for solving NSO problems are needed. Methods for solving NSO problems include, for example, subgradient methods and bundle methods.

Convex and Nonsmooth Optimization

All of them are based on the assumption that only the objective function value and one arbitrary subgradient (generalized gradient [2]) are available at each point.
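For reference, the standard definition (stated here for a convex function f; Clarke's generalized gradient extends it to locally Lipschitz functions): the subdifferential of f at x is the set

    \partial f(x) = \{\, \xi \in \mathbb{R}^n : f(y) \ge f(x) + \xi^{T}(y - x) \ \text{for all } y \in \mathbb{R}^n \,\},

and each \xi \in \partial f(x) is called a subgradient. At points where f is differentiable, \partial f(x) = \{\nabla f(x)\}.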

The basic idea behind subgradient methods is to generalize smooth methods by replacing the gradient with an arbitrary subgradient. Due to this simple structure, they are widely used NSO methods, although they may suffer from some serious drawbacks; this is true especially for the simplest versions of subgradient methods [3].
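A minimal sketch of this idea (the generic textbook scheme with a diminishing step size, not a particular method from the references) might look as follows:

    # Basic subgradient method: x_{k+1} = x_k - t_k * xi_k, where xi_k is any
    # subgradient of f at x_k and t_k is a diminishing step size.
    import numpy as np

    def f(x):
        # A simple convex nonsmooth test function with minimum 0 at (1, 0).
        return abs(x[0] - 1.0) + 2.0 * abs(x[1])

    def subgrad(x):
        # One (arbitrary) subgradient of f at x; np.sign(0.0) = 0 is a valid choice.
        return np.array([np.sign(x[0] - 1.0), 2.0 * np.sign(x[1])])

    x = np.array([5.0, -3.0])
    best = f(x)
    for k in range(1, 201):
        t = 1.0 / k                    # diminishing step size
        x = x - t * subgrad(x)
        best = min(best, f(x))         # subgradient steps need not decrease f
    print(best)                        # the best value found approaches 0

Note that, unlike the negative gradient in the smooth case, the negative of an arbitrary subgradient need not be a descent direction, which is why the sketch tracks the best value found so far.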

An extensive overview of various subgradient methods can be found in [6]. At the moment, bundle methods are regarded as the most effective and reliable methods for NSO. They are based on the subdifferential theory developed by Rockafellar [5] and Clarke [2], where the classical differential theory is generalized for convex and locally Lipschitz continuous functions, respectively.

The basic idea of bundle methods is to approximate the subdifferential (that is, the set of subgradients) of the objective function by gathering subgradients from previous iterations into a bundle.
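In a standard (proximal) bundle method, for example, the bundle {(x_j, f(x_j), \xi_j)}_{j \in J_k} with \xi_j \in \partial f(x_j) defines the piecewise-linear cutting-plane model

    \hat f_k(x) = \max_{j \in J_k} \{\, f(x_j) + \xi_j^{T}(x - x_j) \,\},

and the next candidate point is obtained by minimizing this model augmented with a quadratic stabilizing term. This is meant as a generic sketch of the idea rather than a description of any specific method cited here.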


