
Neural Networks in Statistica


Academic year: 2021


(1)

Neural Networks in Statistica

Agnieszka Nowak-Brzezińska

http://usnet.us.edu.pl/uslugi-sieciowe/oprogramowanie-w-usk-usnet/oprogramowanie-statystyczne/

(2)

• The basic element of every neural network is the neuron.

[Figure: a biological neuron (dendrites, axon, terminal branches of the axon) next to its artificial counterpart: inputs x1, x2, x3, …, xn with weights w1, w2, w3, …, wn, aggregated by S.]

(3)
(4)

Types of neurons

[Figure: neuron diagram with output y; S is the aggregated input value, which is passed through the activation function.]

(5)

Activation function

• For linear neurons: linear, sigmoidal, hyperbolic, exponential, or sinusoidal.

• For radial neurons: Gaussian.

Aggregation is linear; the output value can be taken from a nonlinear activation function.
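The activation functions named above can be sketched in Python (a sketch only; Statistica's exact parameterizations of these functions are not given on the slide):

```python
import math

# Activation functions named on the slide (exact forms are assumptions).
def linear(s):      return s
def sigmoid(s):     return 1.0 / (1.0 + math.exp(-s))
def hyperbolic(s):  return math.tanh(s)
def exponential(s): return math.exp(-s)
def sinusoidal(s):  return math.sin(s)
def gaussian(s):    return math.exp(-s * s)   # for radial neurons

# Linear aggregation of inputs x with weights w, then a nonlinear activation:
def neuron_output(x, w, activation=sigmoid):
    s = sum(xi * wi for xi, wi in zip(x, w))  # aggregated input value S
    return activation(s)
```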

(6)

Neuron’s learning

[Figure: learning scheme of a single neuron with output y.]

(7)

Prediction

Input: X1, X2, X3. Output: Y. Model: Y = f(X1, X2, X3).

[Figure: a 3-2-1 network; the weight values 0.5, 0.6, -0.1, 0.1, -0.2, 0.7 label its connections, with hidden-to-output weights 0.1 and -0.2.]

For the input X1 = 1, X2 = -1, X3 = 2:

First hidden neuron: 0.2 = 0.5 · 1 - 0.1 · (-1) - 0.2 · 2, f(0.2) = 0.55

Second hidden neuron: f(0.9) = 0.71

Output neuron: -0.087 = 0.1 · 0.55 - 0.2 · 0.71, f(-0.087) = 0.478

Prediction: Y = 0.478

If the true value is Y = 2, then the prediction error is 2 - 0.478 = 1.522.

The activation function is f(x) = e^x / (1 + e^x), so f(0.2) = e^0.2 / (1 + e^0.2) = 0.55.
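The forward pass on this slide can be reproduced in Python. The second hidden neuron's input weights are an assumption chosen so that its aggregated value is 0.9, since the extraction does not assign them unambiguously:

```python
import math

def f(s):
    # Logistic activation from the slide: f(x) = e^x / (1 + e^x)
    return math.exp(s) / (1.0 + math.exp(s))

x = [1.0, -1.0, 2.0]            # X1, X2, X3
w_hidden1 = [0.5, -0.1, -0.2]   # given: 0.2 = 0.5*1 - 0.1*(-1) - 0.2*2
w_hidden2 = [0.6, -0.1, 0.1]    # assumed, so that the weighted sum is 0.9
w_output  = [0.1, -0.2]         # hidden-to-output weights

s1 = sum(xi * wi for xi, wi in zip(x, w_hidden1))  # 0.2
s2 = sum(xi * wi for xi, wi in zip(x, w_hidden2))  # 0.9
h = [f(s1), f(s2)]                                 # approx. 0.55, 0.71
s_out = h[0] * w_output[0] + h[1] * w_output[1]    # approx. -0.087
y = f(s_out)                                       # approx. 0.478

error = 2.0 - y                                    # approx. 1.522
```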

(8)

Learning process

1. Randomly choose one observation
2. Calculate the value of Y
3. Compare Y with the actual value
4. Modify the weights according to the calculated error
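The four steps above can be sketched as an online training loop for a single sigmoid neuron. The delta-rule update and the OR-function data are illustrative assumptions, not the slide's own example:

```python
import math
import random

def f(s):
    return 1.0 / (1.0 + math.exp(-s))

def predict(x, weights):
    return f(sum(xi * wi for xi, wi in zip(x, weights)))

def train(data, weights, l=0.5, epochs=2000):
    """Online learning loop following the four steps on the slide.
    `data` is a list of (inputs, actual) pairs; single-neuron sketch."""
    for _ in range(epochs):
        x, actual = random.choice(data)   # 1. randomly choose an observation
        y = predict(x, weights)           # 2. calculate the value of Y
        error = actual - y                # 3. compare Y with the actual value
        weights = [wi + l * error * xi    # 4. modify the weights by the error
                   for wi, xi in zip(weights, x)]
    return weights

# Illustrative data: learn the OR function (first input is a constant bias).
data = [([1, 0, 0], 0), ([1, 0, 1], 1), ([1, 1, 0], 1), ([1, 1, 1], 1)]
random.seed(0)
w = train(data, [0.0, 0.0, 0.0])
```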

(9)

Backpropagation

• It is one of the most popular learning techniques for neural networks.

(10)

How to calculate the prediction error?

Error_i = Actual_i - Output_i

where:

• Error_i is the error of the i-th node,
• Output_i is the value predicted by the network,
• Actual_i is the real value which should be predicted.

(11)

Weights modification

L- is the learning factor from the range [0,1]

The less the l values is the slowest the learning process is.

Very often l is the highest in the begining and then reducted

with the changing of the weights.
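The decreasing learning factor can be sketched as a simple decay schedule. The hyperbolic form below is an assumption; the slides do not specify which schedule Statistica uses:

```python
# l starts high and is reduced as learning proceeds (form is an assumption).
def learning_rate(step, l_start=1.0, decay=0.001):
    return l_start / (1.0 + decay * step)
```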

(12)
(13)

Example

(14)

Weights modification

l is the learning factor, from the range [0,1].

The smaller the value of l is, the slower the learning process is.

Very often l is highest at the beginning and is then reduced as the weights change.

(15)

How many neurons?

• The number of neurons in the input layer depends on the number of input variables

• The number of neurons in the output layer depends on the type of problem the network is to solve

• The number of neurons in the hidden layer depends on the user's judgment and experience

(16)

Neural network tasks:

• classification – the NN is to decide the class of a given object (classes on a nominal scale)

• regression – the NN is to predict a (numerical) value of the attribute which is the output value.

(17)

Classification

• 1. Dataset: leukemia.sta

• 2. Choose the type of NN

(18)

• 3. Choose the variables:

4. Automatic generation of NN

(19)

• You may change the proportions of the division of the dataset into learning and testing samples

(20)

Automatic generation of NN

• Linear neurons (MLP)

• Minimal (3) and maximal (10) number of neurons in the hidden layer

• 20 NN trained, the 5 best are displayed

• Error function: SSE

The window presents the creation of the model „3-6-2”, where there are 3 neurons in the input layer, 6 in the hidden layer, and 2 in the output layer.
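The number of weights in a „3-6-2” model can be counted layer by layer. Whether Statistica's weight count includes bias terms is an assumption; biases are included below:

```python
# Count the weights of an MLP given its layer sizes, e.g. [3, 6, 2]
# for a "3-6-2" network (input-hidden-output).
def mlp_weight_count(layers, biases=True):
    total = 0
    for n_in, n_out in zip(layers, layers[1:]):
        total += n_in * n_out       # connection weights between layers
        if biases:
            total += n_out          # one bias per receiving neuron
    return total

count = mlp_weight_count([3, 6, 2])   # (3*6 + 6) + (6*2 + 2) = 38
```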

(21)
(22)
(23)

3 best nets are saved…

(24)

• Predictions

• Graphs

• Details

• Liftcharts

• Custom predictions

• SUMMARY

(25)

Predictions

(26)

Details

• Summary

• Weights

• Confusion matrix
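A confusion matrix like the one listed under Details can be computed from actual and predicted class labels. The sketch below uses hypothetical leukemia class labels „ALL”/„AML”, which are an assumption, not values from the slide:

```python
from collections import Counter

# Minimal confusion-matrix sketch: rows are actual classes,
# columns are predicted classes (both sorted alphabetically).
def confusion_matrix(actual, predicted):
    counts = Counter(zip(actual, predicted))
    classes = sorted(set(actual) | set(predicted))
    return [[counts[(a, p)] for p in classes] for a in classes]

m = confusion_matrix(["ALL", "ALL", "AML", "AML"],
                     ["ALL", "AML", "AML", "AML"])
# m[0][1] counts ALL cases misclassified as AML
```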

(27)

Details

• The interesting options are:

• Summary

• Weights

• Confusion matrix

(28)

Details

• The interesting options are:

• Summary

• Weights

• Confusion matrix

(29)

Results

(30)

The „Liftcharts” tab

Further reading:

http://www.statsoft.pl/czytelnia/artykuly/Krzywe_ROC_czyli_ocena_jakosci.pdf
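The statistic behind a lift chart can be sketched as follows: rank the cases by the network's predicted score, take the top fraction, and divide its rate of positives by the base rate. This is a minimal sketch, not Statistica's implementation:

```python
# Lift of the top fraction of cases ranked by predicted score.
# labels are 0/1; lift > 1 means the model beats random selection.
def lift(scores, labels, top_fraction=0.1):
    ranked = sorted(zip(scores, labels), reverse=True)
    k = max(1, int(len(ranked) * top_fraction))
    top_rate = sum(label for _, label in ranked[:k]) / k
    base_rate = sum(labels) / len(labels)
    return top_rate / base_rate

value = lift([0.9, 0.8, 0.7, 0.2, 0.1], [1, 1, 0, 0, 0], top_fraction=0.4)
```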

(31)

Regression

• 1. Dataset: tomatoes.sta

• 2. Choose the type of the network

(32)

• 3. Choose from the variables:

4. Automatic generation of NN

(33)
(34)
(35)
(36)

2 best NN saved…

(37)

• Predictions

• Graphs

• Details

• Liftcharts

• Custom predictions

• SUMMARY

(38)

Predictions

(39)

Graphs

(40)

Details

• The interesting options are:

• Summary

• Weights

• Correlation coefficients

• Confusion matrix

(41)

Results

(42)

• Further reading:

http://zsi.tech.us.edu.pl/~nowak/si/SI_w4.pdf
