This is a file from the Wikimedia Commons

File:Moving From unknown to known feature spaces based on TS-ELM with random kernels and connections.tif


Original file (1,670 × 1,464 pixels, file size: 7.02 MB, MIME type: image/tiff)

Summary

Description
English: In ELM theory, input samples are mapped into another space when all the hidden-node parameters are generated randomly from any continuous probability distribution and are never updated [1]. This transformation is accompanied by solving an optimization problem, formulated with Lagrange multipliers as a quadratic programming problem [2]. In this way the input samples are transformed into new, known ones that serve as learning information for classifying or testing samples.
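The random-mapping step described above can be made concrete with a short sketch. The Python snippet below is my own illustration, not the uploader's code: it draws fixed random hidden-node weights, applies a tanh activation as the feature mapping h(x), and solves a regularized least-squares problem for the output weights, which is the closed-form solution of the quadratic/Lagrangian formulation mentioned above. The network size, activation, and regularization constant C are illustrative assumptions.

```python
# Illustrative ELM sketch (assumed details: tanh activation, ridge constant C).
import numpy as np

def elm_train(X, T, n_hidden=100, C=1.0, seed=0):
    """X: (n_samples, n_features); T: (n_samples, n_outputs), e.g. one-hot labels."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights, never updated
    b = rng.standard_normal(n_hidden)                # random biases, never updated
    H = np.tanh(X @ W + b)                           # random feature mapping h(x)
    # Output weights: beta = (H^T H + I/C)^(-1) H^T T, the closed-form solution
    # of the regularized quadratic problem.
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta                 # class = argmax over outputs
```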

ELM theory also includes random feature mapping based on random kernel functions, but into an unknown feature space: as Huang G. et al. note [2], ELM can work with kernels when the hidden-layer output is unknown, much like the SVM (support vector machine) projection, where learning relies on samples mapped into an unknown space whose exact form we never need to know [3]. Kernel functions are widely used for non-linear separation in SVMs because they provide an adequate feature representation, which in turn makes it possible to find the best separating hyperplane and so increases classification capability. RCN (random connection neurons) express the idea that the input samples of a neural network can be related to one another, though not necessarily all of them [4], and that randomness in the connections helps the network improve both the classification rate and the time consumed during learning and testing [5].
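A minimal sketch of the kernel variant referenced above [2]: when the hidden-layer mapping is unknown, the Gram matrix K(x_i, x_j) takes the place of the explicit feature space, just as in kernel SVMs. The RBF kernel, the regularization constant C, and the function names below are illustrative assumptions, not details taken from the uploaded figure.

```python
# Illustrative kernel-ELM sketch (assumed details: RBF kernel, gamma, ridge constant C).
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # K(a, b) = exp(-gamma * ||a - b||^2), computed for all pairs of rows
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def kernel_elm_train(X, T, C=1.0, gamma=1.0):
    Omega = rbf_kernel(X, X, gamma)                      # Gram matrix; feature space stays implicit
    alpha = np.linalg.solve(Omega + np.eye(len(X)) / C, T)
    return alpha

def kernel_elm_predict(X_new, X_train, alpha, gamma=1.0):
    # f(x) = [K(x, x_1), ..., K(x, x_N)] @ alpha
    return rbf_kernel(X_new, X_train, gamma) @ alpha
```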

From these theories and this reasoning we created a new algorithm that combines all of the previous points in order to address several weaknesses in classification performance: obtaining the best feature representation through random kernel projections, improving classification and learning capability by working with samples of known dimensionality in the ELM model, and finally reducing the time consumed during learning and testing by relying on RCN networks.
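A rough sketch, under my own assumptions, of how these three ingredients could fit together: a random kernel projection onto randomly chosen anchor samples produces a known, fixed-dimension feature space; a random connection mask (the RCN idea) drops part of those features to reduce training and testing time; and an ELM-style regularized least-squares solve yields the output weights. All names and parameter values are illustrative; this is not the exact algorithm shown in the figure.

```python
# Illustrative combined sketch: random kernel projection + random connections + ELM solve.
import numpy as np

def train_combined(X, T, n_anchors=50, keep_prob=0.7, C=1.0, gamma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    # (1) Random kernel projection onto randomly chosen anchor samples:
    #     a known feature space of fixed dimension n_anchors.
    anchors = X[rng.choice(len(X), size=n_anchors, replace=False)]
    K = np.exp(-gamma * ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(axis=-1))
    # (2) Random connections (RCN): keep only a random subset of the features
    #     to cut the cost of training and testing.
    mask = rng.random(n_anchors) < keep_prob
    H = K[:, mask]
    # (3) ELM-style regularized least-squares solution for the output weights.
    beta = np.linalg.solve(H.T @ H + np.eye(H.shape[1]) / C, H.T @ T)
    return anchors, mask, beta

def predict_combined(X_new, anchors, mask, beta, gamma=1.0):
    K = np.exp(-gamma * ((X_new[:, None, :] - anchors[None, :, :]) ** 2).sum(axis=-1))
    return K[:, mask] @ beta
```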
Date 12 February 2017
Source Own work
Author Tarekmer

Licensing

I, the copyright holder of this work, hereby publish it under the following license:
This file is licensed under the Creative Commons Attribution-Share Alike 4.0 International license.
You are free:
  • to share – to copy, distribute and transmit the work
  • to remix – to adapt the work
Under the following conditions:
  • attribution – You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
  • share alike – If you remix, transform, or build upon the material, you must distribute your contributions under the same or compatible license as the original.


File history


Date/Time: 12:17, 26 November 2017 (current version)
Dimensions: 1,670 × 1,464 (7.02 MB)
User: Tarekmer
Comment: User created page with UploadWizard

