(1)


A Soft Decision Tree

Nguyen Hung Son

son@mimuw.edu.pl

June 4, 2002

Abstract

We present novel "soft discretization" methods that use "soft cuts" instead of traditional "crisp" (sharp) cuts. This concept makes it possible to generate more compact and stable decision trees with high classification accuracy. We also present an efficient method for generating soft cuts from large databases.

(2)


Talk layout

1. Motivations
2. Basic notions
3. Soft cuts and soft DT
4. Searching for soft cuts
5. Conclusions

(3)


1. Motivations

• The most important advantages of decision tree methods are:

  – compactness and clarity of the presented knowledge;
  – high classification accuracy.

• The disadvantages of standard decision tree methods are:

  – inefficiency for very large data tables;
  – instability, i.e., a small deviation of the data can considerably change the model.

• Our proposition: use "soft cuts" instead of "crisp cuts" in internal nodes. This concept makes it possible to

  – generate more compact and stable decision trees;
  – assure high classification quality;
  – speed up induction algorithms in the case of large data stored in databases.

(4)


2. Basic notions

A decision table consists of:

• a set of objects U;
• a set of attributes (columns) A = {a : U → V_a};
• a decision attribute dec ∉ A. Assuming that V_dec = {1, ..., d}, the set

  DEC_k = {x ∈ U : dec(x) = k}

  will be called the k-th decision class.

• Any pair (a, c), where a is an attribute and c is a real value, is called a cut.

• We say that the cut (a, c) discerns a pair of objects x, y if either a(x) < c ≤ a(y) or a(y) < c ≤ a(x).

(5)


2.1. Standard decision tree

A decision tree (DT) consists of:

• "test functions" in internal nodes;
• "decision classes" in leaves.

Decision tree tasks:

• using a DT to classify new objects;
• construction of a DT from data;
• choosing parameters for the DT: "test function" types, "test function" evaluation, pruning, ...

Optimal decision tree?

• A DT is consistent with the decision table A if it properly classifies all objects from A.
• A DT is optimal for A if it has the smallest height among decision trees consistent with A.

(6)


2.2. Decision tree construction

• The cut (a, c) is optimal if it labels one of the internal nodes of an optimal decision tree.

• The typical algorithm for DT induction (see the sketch after this list):

  1. For a given set of objects U, select a cut (a, c_Best) of high quality among all possible cuts and all attributes;
  2. Induce a partition U_1, U_2 of U by (a, c_Best);
  3. Recursively apply Step 1 to both sets U_1, U_2 of objects until some stopping condition is satisfied.

• The decision tree induction problem: "For a given set of candidate cuts {c_1, ..., c_N} on an attribute a, find a cut c_i belonging to the set of optimal cuts with the highest probability."

• Usually, we use some measure F : {c_1, ..., c_N} → R to estimate the quality of cuts.

• The straightforward algorithm:

  c_Best = arg max_{c_i} F(c_i)
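
A minimal sketch of this induction scheme in Python, assuming objects are dictionaries mapping attribute names (plus the key "dec") to values; the quality measure F is passed in as a function, e.g. one of the measures defined on the following slides.

```python
from collections import Counter

def best_cut(objects, attributes, F):
    """Select the cut (a, c_Best) maximizing the quality measure F."""
    best, best_score = None, float("-inf")
    for a in attributes:
        values = sorted({obj[a] for obj in objects})
        # Candidate cuts: midpoints between consecutive attribute values.
        for v1, v2 in zip(values, values[1:]):
            c = (v1 + v2) / 2
            score = F(objects, a, c)
            if score > best_score:
                best, best_score = (a, c), score
    return best

def induce_tree(objects, attributes, F):
    """Recursive induction; stop when a node is pure or unsplittable."""
    classes = Counter(obj["dec"] for obj in objects)
    cut = best_cut(objects, attributes, F) if len(classes) > 1 else None
    if cut is None:
        return {"leaf": classes.most_common(1)[0][0]}
    a, c = cut
    left = [obj for obj in objects if obj[a] < c]
    right = [obj for obj in objects if obj[a] >= c]
    return {"cut": cut,
            "left": induce_tree(left, attributes, F),
            "right": induce_tree(right, attributes, F)}
```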

(7)

Entropy measure

• The class information entropy of an object set X with class distribution ⟨N_1, ..., N_d⟩, where N_1 + ... + N_d = N:

  Ent(X) = − Σ_{j=1}^{d} (N_j / N) · log(N_j / N)

• The entropy of the partition induced by a cut (a, c):

  E(a, c; U) = (|U_L| / n) · Ent(U_L) + (|U_R| / n) · Ent(U_R)

  where {U_L, U_R} is the partition of U defined by c and n = |U|.
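
A short sketch of this measure under the same object representation as above; the base-2 logarithm is an assumption, since the slides leave the base unspecified.

```python
import math
from collections import Counter

def ent(objects):
    """Ent(X): class information entropy of a set of objects."""
    if not objects:
        return 0.0
    n = len(objects)
    counts = Counter(obj["dec"] for obj in objects)
    return -sum((m / n) * math.log2(m / n) for m in counts.values())

def partition_entropy(objects, a, c):
    """E(a, c; U): weighted entropy of the partition {U_L, U_R}.
    Lower is better, so one can take F(U, a, c) = -partition_entropy(U, a, c)."""
    left = [obj for obj in objects if obj[a] < c]
    right = [obj for obj in objects if obj[a] >= c]
    n = len(objects)
    return len(left) / n * ent(left) + len(right) / n * ent(right)
```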

(8)

Discernibility measure

• The "energy" of a set of objects X ⊆ U can be defined as the number of pairs of objects from X that need to be discerned:

  conflict(X) = Σ_{i<j} N_i · N_j

  where ⟨N_1, ..., N_d⟩ is the class distribution of X.

• The cut c which divides the set of objects U into U_1 and U_2 is evaluated by the number of pairs it discerns:

  W(c) = conflict(U) − conflict(U_1) − conflict(U_2)
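
A direct transcription in Python, using the identity Σ_{i<j} N_i·N_j = ((Σ_i N_i)² − Σ_i N_i²) / 2:

```python
from collections import Counter

def conflict(objects):
    """conflict(X) = sum over class pairs i<j of N_i * N_j: the number
    of object pairs in X belonging to different decision classes."""
    counts = Counter(obj["dec"] for obj in objects).values()
    total = sum(counts)
    return (total * total - sum(m * m for m in counts)) // 2

def W(objects, a, c):
    """W(c): number of object pairs discerned by the cut (a, c)."""
    left = [obj for obj in objects if obj[a] < c]
    right = [obj for obj in objects if obj[a] >= c]
    return conflict(objects) - conflict(left) - conflict(right)
```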

(9)


3. Soft cuts and soft DT

A soft cut is any triple p = ⟨a, l, r⟩, where

• a ∈ A is an attribute;
• l, r ∈ ℝ are called the left and right bounds of p;
• the value ε = (r − l)/2 is called the uncertainty radius of p.

We say that a soft cut p discerns a pair of objects x_1, x_2 if a(x_1) < l and a(x_2) > r.

[Figure: a number line for attribute a with the bounds l and r marked.]

• The intuitive meaning of p = ⟨a, l, r⟩:

  – there is a real cut somewhere between l and r;
  – for any value v ∈ [l, r] we are not able to check whether v lies on the left or on the right side of the real cut;
  – [l, r] is the uncertainty interval of the soft cut p;
  – a normal (crisp) cut can be treated as a soft cut of radius 0.
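
A minimal Python representation of this definition; attribute names as strings and objects as dictionaries are assumptions carried over from the earlier sketches.

```python
from dataclasses import dataclass

@dataclass
class SoftCut:
    """A soft cut p = <a, l, r> with left and right bounds l <= r."""
    a: str      # attribute name
    l: float    # left bound
    r: float    # right bound

    @property
    def radius(self) -> float:
        """Uncertainty radius eps = (r - l) / 2; a crisp cut has l == r."""
        return (self.r - self.l) / 2

    def discerns(self, x1: dict, x2: dict) -> bool:
        """True iff the attribute values of x1, x2 lie on opposite
        sides of the whole uncertainty interval [l, r]."""
        v1, v2 = x1[self.a], x2[self.a]
        return (v1 < self.l and v2 > self.r) or (v2 < self.l and v1 > self.r)
```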

(10)


3.1. Soft Decision Tree

• The test functions can be defined by soft cuts.

• Here we propose two strategies using the soft cuts described above (a classification sketch follows after this list):

  – Fuzzy decision tree: any new object u can be classified as follows:

    ∗ for every internal node, compute the probability that u turns left and that u turns right;
    ∗ for every leaf L, compute the probability that u reaches L;
    ∗ the decision for u is the decision labeling the leaf with the largest probability.

  – Rough decision tree: in case of uncertainty,

    ∗ use both the left and the right subtree to classify the new object;
    ∗ combine their answers and return the answer vector.
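
A hedged sketch of the fuzzy strategy; the linear ramp for the left-turn probability inside [l, r] is an assumption, since the slides do not fix how this probability is computed. Internal nodes are assumed to carry a soft cut as a tuple (a, l, r) and leaves a decision label.

```python
from collections import defaultdict

def p_left(value, l, r):
    """Probability that an object with attribute value `value` turns
    left at a soft cut with bounds l <= r (linear ramp inside [l, r])."""
    if value <= l:
        return 1.0
    if value >= r:
        return 0.0
    return (r - value) / (r - l)

def classify_fuzzy(node, u, prob=1.0, acc=None):
    """Accumulate the probability of u reaching each leaf and return
    the decision with the largest total probability."""
    if acc is None:
        acc = defaultdict(float)
    if "leaf" in node:
        acc[node["leaf"]] += prob
    else:
        a, l, r = node["cut"]          # soft cut <a, l, r> at this node
        p = p_left(u[a], l, r)
        classify_fuzzy(node["left"], u, prob * p, acc)
        classify_fuzzy(node["right"], u, prob * (1 - p), acc)
    return max(acc, key=acc.get)
```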

(11)


4. Searching for soft cuts

STANDARD ALGORITHM FOR THE BEST CUT

• For a given attribute a and a set of candidate cuts {c_1, ..., c_N}, the best cut (a, c_i) with respect to a given heuristic measure

  F : {c_1, ..., c_N} → R+

  can be found in time Ω(N).

• The minimal number of simple SQL queries of the form

  SELECT COUNT(*)
  FROM data_table
  WHERE a BETWEEN c_L AND c_R
  GROUP BY dec

  necessary to find the best cut is Ω(dN).

OUR PROPOSITIONS FOR SOFT CUTS

• Tail cuts can be eliminated.

(12)


4.1. Divide and Conquer Technique

• The algorithm outline (see the sketch after this list):

  1. Divide the set of possible cuts into k intervals.
  2. Choose the interval to which the best cut belongs with the highest probability.
  3. If the considered interval is not STABLE enough, go to Step 1.
  4. Return the current interval as the result.

• The number of SQL queries is O(d · k · log_k n) and is minimal for k = 3.

• How should we define the measure evaluating the quality of an interval [c_L; c_R]?

• This measure should estimate the quality of the best cut from [c_L; c_R].
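
A minimal sketch of the narrowing loop, assuming the cuts range over a real interval; interval_quality stands in for the interval measures constructed on the next slides, and the fixed width threshold is a stand-in for the STABLE test.

```python
def search_interval(c_min, c_max, interval_quality, k=3, eps=1e-3):
    """Narrow [c_min, c_max] down to a small interval likely to contain
    the best cut, evaluating only k sub-intervals per round (each
    evaluation costs O(d) COUNT queries)."""
    while c_max - c_min > eps:                 # stand-in for "STABLE enough"
        width = (c_max - c_min) / k
        subintervals = [(c_min + i * width, c_min + (i + 1) * width)
                        for i in range(k)]
        # Step 2: keep the most promising sub-interval.
        c_min, c_max = max(subintervals, key=lambda b: interval_quality(*b))
    return c_min, c_max
```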

(13)


We construct estimation measures for intervals in four cases:

                           | Discernibility measure | Entropy measure
  Independency assumption  |           ?            |        ?
  Dependency assumption    |           ?            |        ?

4.2. Discernibility measure

Under the dependency assumption, i.e.

  x_1/M_1 ≈ x_2/M_2 ≈ ... ≈ x_d/M_d ≈ (x_1 + ... + x_d)/(M_1 + ... + M_d) = x/M =: t ∈ [0, 1],

the discernibility measure for [c_L; c_R] can be estimated by:

  [W(c_L) + W(c_R) + conflict(c_L; c_R)] / 2 + [W(c_R) − W(c_L)]² / conflict(c_L; c_R)
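
A direct transcription of this estimate as a Python sketch; W(c_L), W(c_R) and conflict(c_L; c_R) are assumed to come from the COUNT queries described earlier, and the zero-conflict guard is an added assumption for degenerate intervals.

```python
def eval_interval_dep(w_l, w_r, conflict_lr):
    """Dependency-assumption estimate of the best-cut discernibility in
    [c_L; c_R]; w_l = W(c_L), w_r = W(c_R), conflict_lr = conflict(c_L; c_R),
    the number of differently-classified pairs falling between the bounds."""
    base = (w_l + w_r + conflict_lr) / 2
    if conflict_lr == 0:
        return base   # no objects between the bounds
    return base + (w_r - w_l) ** 2 / conflict_lr
```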

(14)


Under the independency assumption, i.e. when x_1, ..., x_d are independent random variables with uniform distribution over the sets {0, ..., M_1}, ..., {0, ..., M_d}, respectively:

• The mean E(W(c)) for any cut c ∈ [c_L; c_R] satisfies

  E(W(c)) = [W(c_L) + W(c_R) + conflict(c_L; c_R)] / 2

• and for the variance of W(c) we have

  D²(W(c)) = Σ_{i=1}^{d} [ (M_i(M_i + 2) / 12) · (Σ_{j≠i} (R_j − L_j))² ]

• One can construct a measure estimating the quality of the best cut in [c_L; c_R] by combining the two, e.g.

  Eval([c_L; c_R]) = E(W(c)) + α · √(D²(W(c)))

  for some parameter α ≥ 0.
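
A sketch of this estimate in Python; M[i] and delta[j] are assumed to hold the class counts M_i and the differences R_j − L_j from the formulas above, and alpha is the hypothetical trade-off parameter.

```python
import math

def eval_interval_indep(w_l, w_r, conflict_lr, M, delta, alpha=1.0):
    """Independency-assumption estimate: the mean of W(c) plus alpha
    standard deviations, for a random cut c in [c_L; c_R]."""
    mean = (w_l + w_r + conflict_lr) / 2
    var = sum(M[i] * (M[i] + 2) / 12 *
              sum(delta[j] for j in range(len(M)) if j != i) ** 2
              for i in range(len(M)))
    return mean + alpha * math.sqrt(var)
```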

(15)

4.3. Example

(16)


5. Conclusions

• Soft cuts as a novel discretization concept;

• Soft decision trees;

• An efficient method for constructing soft cuts from large data: one can reduce the number of simple queries from O(N) to O(log N) while constructing a partition very close to the optimal one.
