The problem considered in this paper is that of extending a function f, defined on a training data set C lying on an unknown manifold 𝕏, to the entire manifold and to a tubular neighborhood of this manifold. For 𝕏 embedded in a high-dimensional ambient Euclidean space ℝ^D, a deep learning algorithm is developed for finding a local coordinate system for the manifold without eigen-decomposition, which reduces the problem to the classical problem of function approximation on a low-dimensional cube. Deep nets (or multilayered neural networks) are proposed to accomplish this approximation scheme by using the training data. Our methods do not involve optimization techniques such as back-propagation, while assuring optimal (a priori) error bounds on the output in terms of the number of derivatives of the target function. In addition, these methods are universal, in that they do not require a priori knowledge of the smoothness of the target function, but adjust the accuracy of approximation locally and automatically, depending only upon the local smoothness of the target function. Our ideas extend easily to solve both the pre-image problem and the out-of-sample extension problem, with a priori bounds on the growth of the function thus extended.

Machine learning is an active sub-field of computer science concerned with developing algorithms that learn from, and make predictions on, given data, with a long list of applications that range from computational finance and advertisement, to information retrieval, to computer vision, to speech and handwriting recognition, and to structural health monitoring and medical diagnosis. In terms of function approximation, the data for learning and prediction can be formulated as pairs (x, f_x), x ∈ C, where C is a finite training data set. In practice, because of the random nature of the data, there may be several pairs of the form (x, f_x) in the data for the same value of x. In this case, a statistical scheme, the simplest being some form of averaging of the values f_x, can be used to obtain a single desired value f(x) as the sample of f at x, x ∈ C (a short sketch of this step is given at the end of this section). From this perspective, the problem of extending f from the training data set C to points x not in C is called the generalization problem in machine learning.

We will illustrate this general line of ideas by using neural networks as an example. To motivate the idea, let us first recall a theorem originating with Kolmogorov and Arnold. According to this theorem, there exist universal Lipschitz continuous functions ϕ_1, ⋯, ϕ_{2D+1} and universal scalars λ_1, ⋯, λ_D ∈ (0, 1), for which every continuous function f : [0, 1]^D → ℝ can be written as

f(x_1, ⋯, x_D) = ∑_{k=1}^{2D+1} g( ∑_{j=1}^{D} λ_j ϕ_k(x_j) ),    (1.1)

where g is a continuous function that depends on f. In other words, for a given f, only one function g has to be determined to give the representation formula (1.1) of f. A neural network, used as an input/output machine, consists of an input layer, one or more hidden layers, and an output layer.
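The right-hand side of (1.1) has exactly this input/hidden/output structure: the inner sums ∑_j λ_j ϕ_k(x_j) form a hidden layer of 2D + 1 units, and g acts as a second hidden layer feeding a single output sum. Below is a minimal Python sketch of that structure; the concrete choices of ϕ_k, λ_j, and g are illustrative placeholders only, since the theorem asserts that suitable universal functions exist but does not give them in closed form.

```python
import numpy as np

# Placeholder choices: the theorem guarantees that suitable phi_k,
# lambda_j, and g EXIST, but does not provide them in closed form.
D = 3                                  # input dimension
lam = np.linspace(0.1, 0.9, D)         # stand-ins for lambda_1..lambda_D in (0, 1)

def phi(k, t):
    """Stand-in for the universal Lipschitz functions phi_1..phi_{2D+1}."""
    return np.tanh(t + 0.1 * k)

def g(t):
    """Stand-in for the single continuous outer function that depends on f."""
    return np.sin(t)

def ka_superposition(x):
    """Evaluate the right-hand side of (1.1): an output sum over 2D+1
    hidden units, each applying g to a weighted inner sum of phi_k(x_j)."""
    return sum(g(sum(lam[j] * phi(k, x[j]) for j in range(D)))
               for k in range(2 * D + 1))

print(ka_superposition(np.array([0.2, 0.5, 0.8])))
```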
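As for the duplicate-sample step mentioned above, the simplest statistical scheme replaces all observed values f_x at the same x by their average. A minimal sketch, assuming the raw data is given as a plain list of (x, f_x) tuples; the container and names here are illustrative, not from the paper.

```python
from collections import defaultdict

def average_duplicates(pairs):
    """Collapse repeated samples (x, f_x) sharing the same x into a single
    pair (x, mean of the observed f_x values)."""
    buckets = defaultdict(list)
    for x, fx in pairs:
        buckets[x].append(fx)
    return {x: sum(values) / len(values) for x, values in buckets.items()}

# Three noisy observations at x = 0.5 collapse to their mean.
data = [(0.5, 1.9), (0.5, 2.1), (0.5, 2.0), (0.7, 3.0)]
print(average_duplicates(data))  # {0.5: 2.0, 0.7: 3.0}
```

For vector-valued inputs x ∈ ℝ^D one would key on a tuple of coordinates rather than a scalar; the averaging itself is unchanged.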