26/01/2021
What are Multivariate Techniques?
Background
Signal
Machine Learning - Multivariate Techniques
Regression
Regression -> model functional behaviour
Multi-Variate Classification
Classification: Different Approaches
Signal Probability Instead of Hard Decisions
Machine learning: Basic terminology
Where are the neural networks?
Neural Networks
Perceptron
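The perceptron is the simplest trainable classifier: a weighted sum of inputs passed through a step function, with the weights nudged toward every misclassified example. A minimal NumPy sketch of the classic update rule; the AND-gate data and the learning rate are illustrative choices, not from the slides:

    import numpy as np

    # Toy data: the logical AND function (illustrative only).
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 0, 0, 1])

    w = np.zeros(2)
    b = 0.0
    lr = 0.1  # learning rate, an assumed value

    for epoch in range(20):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0   # step activation
            w = w + lr * (target - pred) * xi   # move weights toward misclassified points
            b = b + lr * (target - pred)

    print(w, b)  # defines a separating line for AND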
The Biological Inspiration: the Neuron
Feedforward Neural Network with One Hidden Layer
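To make "one hidden layer" concrete in code, here is a bare forward pass in NumPy; the layer sizes and the sigmoid activation are assumptions for illustration:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)
    x = rng.normal(size=4)          # 4 input features (assumed)
    W1 = rng.normal(size=(8, 4))    # hidden layer with 8 units
    b1 = np.zeros(8)
    W2 = rng.normal(size=(1, 8))    # single output unit
    b2 = np.zeros(1)

    h = sigmoid(W1 @ x + b1)        # hidden activations
    y = sigmoid(W2 @ h + b2)        # output in (0, 1), usable as a signal probability
    print(y)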
Network Training
Backpropagation
Neural Network Output and Decision Boundaries
Example of Overtraining
Monitoring Overtraining
Deep Neural Networks
How do NNs work?
How do NNs learn?
Typical Applications
Input Preprocessing
Training
Training: (Stochastic) Gradient Descent
Training: more optimisers
Underfitting and overtraining
Overtraining solutions
Deep-learning Neural Network: TensorFlow™
MNIST example
Scientific application: Higgs CP measurement at the LHC
Since 2010, a new era in Machine Learning: rapidly increasing areas of application
Neural network
Deep-Learning tutorial @ Udacity: https://www.udacity.com/course/deep-learning--ud730
Supervised Classifications
Classifications for Detection
Classifications for Ranking
Logistic classifier: Linear model
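The "linear model" part is nothing more than scores = W x + b, one raw score (a logit) per class; everything that follows, softmax and cross-entropy, turns those scores into trainable probabilities. A sketch with assumed shapes, e.g. a flattened 28x28 image and 10 classes:

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.normal(size=784)               # a flattened 28x28 input (assumed)
    W = 0.01 * rng.normal(size=(10, 784))  # small random weights
    b = np.zeros(10)

    logits = W @ x + b                     # one raw score per class
    print(logits)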
Softmax
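Softmax maps the logits z to a probability distribution, S(z)_i = exp(z_i) / sum_j exp(z_j). A numerically stable NumPy sketch:

    import numpy as np

    def softmax(z):
        e = np.exp(z - np.max(z))  # subtracting the max avoids overflow, result unchanged
        return e / e.sum()

    p = softmax(np.array([2.0, 1.0, 0.1]))
    print(p, p.sum())              # probabilities that sum to 1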
"One hot" encoding
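One-hot encoding writes class k as a vector with a single 1 at position k, so a label lives in the same space as the softmax output and the two can be compared directly. Sketch:

    import numpy as np

    def one_hot(labels, num_classes):
        out = np.zeros((len(labels), num_classes))
        out[np.arange(len(labels)), labels] = 1.0
        return out

    print(one_hot([0, 2, 1], 3))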
Optimisation: Cross-Entropy
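Cross-entropy measures how far the predicted distribution S is from the one-hot label L: D(S, L) = -sum_i L_i log(S_i). It is small exactly when the true class receives high probability. A sketch; the epsilon guard against log(0) is an assumed implementation detail:

    import numpy as np

    def cross_entropy(probs, label_one_hot, eps=1e-12):
        return -np.sum(label_one_hot * np.log(probs + eps))

    probs = np.array([0.7, 0.2, 0.1])
    label = np.array([1.0, 0.0, 0.0])
    print(cross_entropy(probs, label))  # ~0.36; would be ~0 for a perfect prediction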
Multinomial logistic classification
Optimisation of average loss
Gradient descent
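Gradient descent repeats w <- w - alpha * dL/dw until the loss stops improving. A one-dimensional sketch on L(w) = (w - 3)^2, whose minimum is known to be at w = 3:

    def grad(w):
        return 2.0 * (w - 3.0)  # derivative of (w - 3)**2

    w = 0.0
    alpha = 0.1                 # learning rate, an assumed value
    for step in range(100):
        w -= alpha * grad(w)
    print(w)                    # converges towards 3.0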
Normalised input and output
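The practical point of normalisation: features on wildly different scales make the loss surface badly conditioned, so inputs are shifted and scaled to zero mean and unit variance per feature. Sketch (the example values are illustrative):

    import numpy as np

    X = np.array([[255.0, 0.1], [128.0, 0.3], [0.0, 0.2]])  # features on very different scales
    Xn = (X - X.mean(axis=0)) / X.std(axis=0)               # zero mean, unit variance per column
    print(Xn.mean(axis=0), Xn.std(axis=0))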
Initialisation
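A common recipe for this step: initialise weights randomly with zero mean and a small spread, so the softmax starts out uncertain, and start biases at zero. The sigma and layer sizes below are assumed example values:

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_out = 784, 10                         # assumed layer sizes
    W = rng.normal(0.0, 0.1, size=(n_out, n_in))  # small random weights
    b = np.zeros(n_out)                           # biases start at zero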
Training, validation, testing
Gradient Descent
Stochastic Gradient Descent
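Stochastic gradient descent replaces the full-dataset gradient with an estimate from a small random batch: much noisier per step, but far cheaper, so many more steps fit in the same time. A linear-regression sketch (data, batch size, and learning rate are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 3))
    true_w = np.array([1.0, -2.0, 0.5])
    y = X @ true_w + 0.01 * rng.normal(size=1000)

    w = np.zeros(3)
    lr, batch = 0.1, 32
    for step in range(500):
        idx = rng.integers(0, len(X), size=batch)  # pick a random mini-batch
        Xb, yb = X[idx], y[idx]
        grad = 2.0 * Xb.T @ (Xb @ w - yb) / batch  # gradient of the batch MSE
        w -= lr * grad
    print(w)                                       # close to true_w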
SGD: optimising with momentum
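Momentum keeps a running average of past gradients, v <- m*v + g, and steps along v instead of g, which damps the zig-zag of raw SGD. Reusing the 1-D quadratic from above; m = 0.9 is the usual default, assumed here:

    def grad(w):
        return 2.0 * (w - 3.0)  # derivative of (w - 3)**2

    w, v = 0.0, 0.0
    lr, m = 0.01, 0.9           # learning rate and momentum (assumed)
    for step in range(300):
        v = m * v + grad(w)     # running average of gradients
        w -= lr * v
    print(w)                    # approaches 3.0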
SGD: learning rate
SGD: "black magic"
Input - linear - output
Linear models are linear
Linear models are stable
Linear models are here to stay
• This is still linear
• Let's introduce non-linearity
ReLU: Rectified Linear Unit
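ReLU(x) = max(0, x): cheap to compute, and its derivative is simply 0 or 1, which avoids the vanishing gradients of saturating activations. Sketch:

    import numpy as np

    def relu(x):
        return np.maximum(0.0, x)

    def relu_grad(x):
        return (x > 0).astype(float)  # 0 for x < 0, 1 for x > 0

    x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
    print(relu(x))       # [0.  0.  0.  0.5 2. ]
    print(relu_grad(x))  # [0. 0. 0. 1. 1.]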
Networks of ReLU
The Chain Rule
Back-propagation
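Back-propagation is the chain rule applied once per operation, walking from the loss back to the weights. A minimal sketch for one linear layer, a ReLU, and a squared-error loss (shapes and data are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=3)
    W = rng.normal(size=(2, 3))
    target = np.array([1.0, 0.0])

    # Forward pass.
    z = W @ x                     # linear layer
    h = np.maximum(0.0, z)        # ReLU
    loss = 0.5 * np.sum((h - target) ** 2)

    # Backward pass: one chain-rule factor per operation.
    dh = h - target               # dLoss/dh
    dz = dh * (z > 0)             # through ReLU: zero where z <= 0
    dW = np.outer(dz, x)          # dLoss/dW, same shape as W
    print(loss, dW)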
Optimisation tricks
Optimisation trick: dropout
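Dropout zeroes a random fraction of activations during training so no unit can be relied on alone; at evaluation time it is switched off. A sketch of the "inverted" variant, which rescales the surviving activations so their expectation is unchanged (the keep probability 0.5 is an assumed choice):

    import numpy as np

    rng = np.random.default_rng(0)

    def dropout(h, keep_prob=0.5, training=True):
        if not training:
            return h                     # identity at test time
        mask = rng.random(h.shape) < keep_prob
        return h * mask / keep_prob      # rescale so the expected value matches

    h = np.ones(10)
    print(dropout(h))                    # roughly half zeros, survivors scaled to 2.0
    print(dropout(h, training=False))    # unchanged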
Deep networks
tensorflow.org/paper/whitepaper2015.pdf
Hand-written digits: MNIST
Simple linear model
Slides from M. Gorner tutorial: http://www.youtube.com/watch?v=vq2nnJ4g6NO
TensorFlow full Python code
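For reference, a "simple linear model" on MNIST in today's tf.keras idiom looks roughly like this. The tutorial's original code used the TF1 graph API, so treat this as a hedged modern equivalent, not the tutorial code itself:

    import tensorflow as tf

    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

    # One dense softmax layer: multinomial logistic regression on raw pixels.
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="sgd",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=5)
    print(model.evaluate(x_test, y_test))  # around 92% test accuracy is typical here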
Multi-layer connected network
All tricks count
• Use ReLU
• Add drop-out
• Exponentially reduce learning rates
• But accuracy is noisy
• Can do better with a conv network
References
http://www.deeplearningbook.org
http://download.tensorflow.org/paper/whitepaper2015.pdf
https://www.tensorflow.org/
http://www.youtube.com/watch?v=vq2nnJ4g6NO
https://www.udacity.com/course/deep-learning--ud730