
Machine Learning and Multivariate Techniques in HEP data Analyses


Academic year: 2021


Full text

(1)

Machine Learning and Multivariate Techniques in HEP data Analyses

Prof. dr hab. Elżbieta Richter-Wąs

Extracted from slides by:

G. Cowan's lectures at Royal Holloway, Univ. of London; H. Voss at SOS 2016; K. Reygers' lectures at Heidelberg Univ.

Boosted Decision Trees

Artificial Neural Networks

(2) Decision Trees
(3) Boosted Decision Trees
(4) Decision Trees
(5) Finding Optimal Cuts
(6) Separation Measures
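The slide above names separation measures; as a minimal sketch (the formulas are the standard Gini index and entropy, expressed as functions of the signal purity p in a tree node — the numbers in the test are illustrative, not from the slides):

```python
import math

# Two common node-separation measures for decision-tree splitting,
# written as functions of the signal purity p of a node.
def gini(p):
    return p * (1.0 - p)                      # largest (0.25) at p = 0.5

def entropy(p):
    if p in (0.0, 1.0):
        return 0.0                            # a pure node carries no entropy
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)
```

Both measures peak for a 50/50 signal-background mixture and vanish for a pure node, so a cut is chosen to maximise the decrease in impurity.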

(7) Decision Tree Pruning
(8) Boosted Decision Trees: Idea
(9) AdaBoost (short for Adaptive Boosting)
(10) Assigning the Classifier Score
(11) Updating Event Weights
(12) Boosting
(13) Adaptive Boosting (AdaBoost)
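The slides above cover AdaBoost: train a weak classifier, up-weight the events it misclassifies, retrain, and combine the rounds into a weighted score. A minimal sketch of that loop, using one-cut decision stumps on an illustrative 1D toy dataset (all data and thresholds below are assumptions, not from the slides):

```python
import math

# Toy 1D dataset: x values with labels +1 (signal) / -1 (background).
X = [0.5, 1.5, 2.5, 3.5, 4.5, 5.5]
y = [1, 1, -1, 1, -1, -1]

def stump_predict(threshold, x):
    """Weak learner: a one-node decision tree (a single cut on x)."""
    return 1 if x < threshold else -1

def adaboost(X, y, thresholds, n_rounds=3):
    n = len(X)
    w = [1.0 / n] * n                         # start with uniform event weights
    ensemble = []                             # list of (alpha, threshold)
    for _ in range(n_rounds):
        # pick the stump with the smallest weighted misclassification rate
        best = min(thresholds, key=lambda t: sum(
            wi for wi, xi, yi in zip(w, X, y) if stump_predict(t, xi) != yi))
        err = max(sum(wi for wi, xi, yi in zip(w, X, y)
                      if stump_predict(best, xi) != yi), 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)   # classifier weight
        # boost the weights of misclassified events, shrink the rest
        w = [wi * math.exp(-alpha * yi * stump_predict(best, xi))
             for wi, xi, yi in zip(w, X, y)]
        s = sum(w)
        w = [wi / s for wi in w]              # renormalise to unit sum
        ensemble.append((alpha, best))
    return ensemble

def score(ensemble, x):
    """Classifier score: weighted vote of all boosted stumps."""
    return sum(a * stump_predict(t, x) for a, t in ensemble)

ensemble = adaboost(X, y, thresholds=[1.0, 2.0, 3.0, 4.0, 5.0])
```

The sign of `score` gives the classification; its magnitude can serve as a discriminant for a later cut.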

(14) Boosted Decision Trees
(15) General Remarks on Multi-Variate Analyses
(16) Machine Learning - Basic terminology
(17)
(18) Where are the Neural Networks?
(19) Neural Networks
(20) Perceptron
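The perceptron named above is the simplest neural unit: a weighted sum of inputs passed through a step function, trained by nudging the weights whenever the prediction is wrong. A minimal sketch on a toy logical-OR dataset (the data, learning rate, and epoch count are illustrative assumptions):

```python
def predict(w, b, x):
    """Step-function output of a single perceptron."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def train(data, epochs=10, lr=0.1):
    """Classic perceptron learning rule: w += lr * error * x."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            err = target - predict(w, b, x)   # 0 if correct, +/-1 otherwise
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # logical OR
w, b = train(data)
```

For linearly separable data the rule converges to a separating hyperplane; non-separable problems (e.g. XOR) need hidden layers, which the following slides introduce.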

(21) The Biological Inspiration: the Neuron
(22) Feedforward Neural Network with One Hidden Layer
(23) Network Training
(24) Backpropagation
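Backpropagation, named above, is just the chain rule applied layer by layer. A minimal sketch for a network with one sigmoid hidden unit and a linear output, with the analytic gradient checked against a finite-difference estimate (all weights and inputs are arbitrary illustrative numbers):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(w1, w2, x):
    h = sigmoid(w1 * x)                       # hidden activation
    return w2 * h                             # linear output

def loss(w1, w2, x, t):
    return 0.5 * (forward(w1, w2, x) - t) ** 2

def backprop(w1, w2, x, t):
    """Analytic gradients of the loss w.r.t. both weights (chain rule)."""
    h = sigmoid(w1 * x)
    y = w2 * h
    delta = y - t                             # dL/dy
    g2 = delta * h                            # dL/dw2
    g1 = delta * w2 * h * (1 - h) * x         # through sigmoid': h*(1-h)
    return g1, g2

w1, w2, x, t = 0.7, -1.3, 2.0, 0.5
g1, g2 = backprop(w1, w2, x, t)

eps = 1e-6                                    # finite-difference cross-check
n1 = (loss(w1 + eps, w2, x, t) - loss(w1 - eps, w2, x, t)) / (2 * eps)
n2 = (loss(w1, w2 + eps, x, t) - loss(w1, w2 - eps, x, t)) / (2 * eps)
```

The numerical check is a standard way to validate a hand-derived backpropagation step before trusting it in training.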

(25) Neural Network Output and Decision Boundaries
(26) Example of Overtraining
(27) Monitoring Overtraining
(28) Deep Neural Networks
(29) How do NNs work?
(30) How do NNs learn?
(31) How do NNs learn?
(32) How do NNs learn?
(33) Typical Applications
(34) Input Preprocessing
(35) Training
(36) Training: (Stochastic) Gradient Descent
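Stochastic gradient descent, named above, updates the parameters after each training example rather than after a full pass. A minimal sketch fitting a line y = w*x + b by SGD on a noiseless toy dataset (the data, learning rate, and epoch count are illustrative assumptions, not values from the slides):

```python
import random

random.seed(0)
# Toy data drawn from the exact line y = 2x + 1.
data = [(x, 2.0 * x + 1.0) for x in [i / 10 for i in range(-10, 11)]]

w, b, lr = 0.0, 0.0, 0.1
for epoch in range(200):
    random.shuffle(data)                      # "stochastic": visit events in random order
    for x, target in data:
        pred = w * x + b
        grad = pred - target                  # d(loss)/d(pred) for 0.5*(pred-target)^2
        w -= lr * grad * x                    # chain rule: d(pred)/dw = x
        b -= lr * grad                        # chain rule: d(pred)/db = 1
```

The per-event updates are noisy but cheap; in practice mini-batches and adaptive optimisers (next slide) smooth and speed up this basic loop.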

(37) Training: more optimisers
(38) Underfitting and overtraining
(39) Overtraining solutions
(40) Convolutional NN
(41) Convolution layer
(42) Average and Max pooling layers
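The pooling layers named above downsample a feature map by summarising each small window with one number: its maximum (max pooling) or its mean (average pooling). A minimal sketch on a 4x4 "image" with 2x2 windows and stride 2 (the input values are illustrative):

```python
def pool(img, size=2, op=max):
    """Pool a square 2D list with non-overlapping size x size windows."""
    n = len(img)
    out = []
    for i in range(0, n, size):
        row = []
        for j in range(0, n, size):
            window = [img[i + di][j + dj]
                      for di in range(size) for dj in range(size)]
            row.append(op(window))            # one summary value per window
        out.append(row)
    return out

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]

avg = lambda w: sum(w) / len(w)
pooled_max = pool(img, op=max)
pooled_avg = pool(img, op=avg)
```

Either choice shrinks the map by a factor of `size` per dimension, giving the network some translation invariance and cutting the parameter count of later layers.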

(43) Convolutional NN architecture
(44) Recursive NN
(45) Recursive NN: possible HEP applications
(46) Adversarial NN
(47) Generative Adversarial NN
(48) Lorentz boost network: motivation
(49) Lorentz boost network: network architecture
(50) Lorentz boost network: feature extraction
(51) BN for ttH(bb) vs tt+bb: performance
(52) Conclusions
