Heart attacks

Academic year: 2022

(1)


Looking at real data

Relationships

(2)

Introduction

A medical study finds that short women are more likely to have heart attacks than women of average height, while tall women have the fewest heart attacks.

An insurance group reports that heavier cars have fewer accident deaths per 10,000 vehicles registered than do lighter cars. These and many other statistical studies look at the relationship between two variables. To understand such a relationship, we must often examine other variables as well. To conclude that shorter women have a higher risk of heart attack, for example, the researchers had to eliminate the effects of other variables such as weight and exercise habits.

We are also interested in relationships between variables. One of our main themes is that the relationship between two variables can be strongly influenced by other variables that are lurking in the background.

(3)


Introduction

To study the relationship between two variables, we measure both variables on the same individuals. If we measure both the height and the weight of each member of a large group of people, we know which height goes with each weight. These data allow us to study the connection between height and weight. A list of the heights and a separate list of the weights, two sets of single-variable data, do not show the connection between the two variables. In fact, taller people also tend to be heavier. And people who smoke more cigarettes per day tend not to live as long as those who smoke fewer. We say that pairs of variables such as height and weight, or smoking and life expectancy, are associated.

(4)

Association between Variables

Two variables measured on the same individuals are associated if some values of one variable tend to occur more often with some values of the second variable than with other values of that variable.

Statistical associations are overall tendencies, not ironclad rules. They allow individual exceptions. Although smokers on average die earlier than nonsmokers, some people live to 90 while smoking three packs a day.

(5)


Examining Relationships

When you examine the relationship between two or more variables, first ask these preliminary questions:

 What individuals do the data describe?

 What variables are present? How are they measured?

 Which variables are quantitative and which are categorical?

(6)

Association between Variables

A medical study, for example, may record each subject’s sex (male, female) and smoking status along with quantitative variables such as weight and blood pressure. We may be interested in possible associations between two quantitative variables (such as a person’s weight and blood pressure), between a categorical and a quantitative variable (such as sex and blood pressure), or between two categorical variables (such as sex and smoking status).

When you examine the relationship between two variables, a new question becomes important:

 Is your purpose simply to explore the nature of the relationship, or do you hope to show that one of the variables can explain variation in the other?

That is, are some of the variables response variables and others explanatory variables?

(7)


Response Variable, Explanatory Variable

A response variable measures an outcome of a study. An explanatory variable explains or causes changes in the response variable.

Example: Alcohol has many effects on the body. One effect is a drop in body temperature. To study this effect, researchers give several different amounts of alcohol to mice, then measure the change in each mouse’s body temperature in the 15 minutes after taking the alcohol. Amount of alcohol is the explanatory variable, and change in body temperature is the response variable.

In many studies, the goal is to show that changes in one or more explanatory variables actually cause changes in a response variable. But not all explanatory-response relationships involve direct causation.

(8)

Dependent and Independent Variables

Some statistical techniques require us to distinguish explanatory from response variables; others make no use of this distinction. You will often see explanatory variables called independent variables and response variables called dependent variables. The idea behind this language is that response variables depend on explanatory variables.

Most statistical studies examine data on more than one variable. Fortunately, statistical analysis of several-variable data builds on the tools used for examining individual variables.

(9)


Scatterplots

Relationships between two quantitative variables are best displayed graphically.

The most useful graph for this purpose is a scatterplot.

A scatterplot shows the relationship between two quantitative variables measured on the same individuals. The values of one variable appear on the horizontal axis, and the values of the other on the vertical axis. Each individual in the data appears as a point in the plot fixed by the values of both variables for that individual.

Always plot the explanatory variable (if there is one) on the horizontal axis (the x axis) of a scatterplot.

Explanatory variable: x. Response variable: y.

(10)

Scatterplot

Country          Heart disease deaths   Alcohol from wine
Italy            107                    7.9
Germany          172                    2.7
Ireland          300                    0.7
United States    199                    1.2
Iceland          211                    0.8
United Kingdom   285                    1.3
France           71                     9.1
Switzerland      115                    5.8
Finland          297                    0.8
Sweden           207                    1.6
Denmark          220                    2.9
Spain            86                     6.5
Canada           191                    2.4
Norway           227                    0.8
Belgium          131                    2.9
New Zealand      266                    1.9
Austria          167                    3.9
Netherlands      167                    1.8
Australia        211                    2.5

(11)


Scatterplot

[Scatterplot: alcohol from wine (x axis, 0 to 10) versus heart disease deaths (y axis, 50 to 300).]

(12)

Positive Association, Negative Association

Two variables are positively associated when above-average values of one tend to accompany above-average values of the other, and below-average values also tend to occur together.

Two variables are negatively associated when above-average values of one accompany below-average values of the other, and vice versa.

(13)


More examples of scatterplots

Plants per acre \ yield (in bushels per acre):

Plants per acre   1956    1958    1959    1960    Mean
12,000            150.1   113.0   118.4   142.6   131.0
16,000            166.9   120.7   135.2   149.8   143.2
20,000            165.3   130.1   139.6   149.9   146.2
24,000                    134.7   138.4   156.1   143.1
28,000                            119.0   150.5   134.8
Mean              160.8   124.6   130.1   149.8

(14)

More examples of scatterplots

[Scatterplot: plants per acre (x axis) versus yield in bushels per acre (y axis, 100 to 180).]

(15)


More examples of scatterplots

[Plot: yield values (0 to 160) by column number, one series for each of the years 1956, 1958, 1959, and 1960.]

(16)

Correlation

We have data on variables x and y for n individuals. Think, for example, of measuring height and weight for n people. Then x1 and y1 are your height and weight, x2 and y2 are my height and weight, and so on. For the i-th individual, height xi goes with weight yi.

(17)


Correlation

The correlation measures the direction and strength of the linear relationship between two quantitative variables. Correlation is usually written as r.

Suppose that we have data on variables x and y for n individuals. The means and standard deviations of the two variables are x̄ and s_x for the x-values, and ȳ and s_y for the y-values.

The correlation between x and y is

r = (1/(n−1)) Σ_{i=1}^{n} [(x_i − x̄)/s_x] [(y_i − ȳ)/s_y]
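The formula translates directly into code. Below is a minimal pure-Python sketch; the height and weight numbers are invented for illustration, not taken from any study:

```python
def mean(v):
    return sum(v) / len(v)

def stdev(v):
    # sample standard deviation (divides by n - 1), as in the formula for r
    m = mean(v)
    return (sum((u - m) ** 2 for u in v) / (len(v) - 1)) ** 0.5

def correlation(x, y):
    # r = (1/(n-1)) * sum of (standardized x) * (standardized y)
    n = len(x)
    mx, my, sx, sy = mean(x), mean(y), stdev(x), stdev(y)
    return sum((a - mx) / sx * (b - my) / sy for a, b in zip(x, y)) / (n - 1)

# hypothetical heights (cm) and weights (kg) for five people
heights = [160, 165, 170, 175, 180]
weights = [55, 60, 63, 70, 74]
print(correlation(heights, weights))  # strongly positive, close to 1
```

Points lying exactly on a rising line give r = 1, so a call such as `correlation([1, 2, 3, 4], [3, 5, 7, 9])` returns 1 up to floating-point rounding.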

(18)

Correlation

The formula for correlation helps us see that r is positive when there is a positive association between the variables. Height and weight, for example, have a positive association. People who are above (below) average in height also tend to be above (below) average in weight. Using the formula for r, we can see that the correlation is negative when the association between x and y is negative.

(19)


What you need to know in order to interpret correlation

 Correlation makes no use of the distinction between explanatory and response variables. It makes no difference which variable you call x and which you call y in calculating the correlation.

 Correlation requires that both variables be quantitative, so that it makes sense to do the arithmetic indicated by the formula for r.

 Because r uses the standardized values of the observations, r does not change when we change the units of measurement of x and y.

 Positive r indicates positive association between the variables, and negative r indicates negative association.

 The correlation is always a number between -1 and 1. Values of r near 0 indicate a very weak linear relationship. Values close to -1 or 1 indicate that the points lie close to a straight line. The extreme values r = -1 and r = 1 occur only when the points in a scatterplot lie exactly along a straight line.

 Correlation measures the strength of only the linear relationship between two variables. It does not describe curved relationships between variables, no matter how strong they are.
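The first and third properties are easy to verify numerically: swapping x and y, or changing units, leaves r unchanged. A sketch with invented data (the conversion factors 2.54 cm/inch and 2.20462 lb/kg are standard):

```python
def mean(v):
    return sum(v) / len(v)

def correlation(x, y):
    # sample correlation: average product of standardized values
    n, mx, my = len(x), mean(x), mean(y)
    sx = (sum((v - mx) ** 2 for v in x) / (n - 1)) ** 0.5
    sy = (sum((v - my) ** 2 for v in y) / (n - 1)) ** 0.5
    return sum((a - mx) / sx * (b - my) / sy for a, b in zip(x, y)) / (n - 1)

heights_cm = [160, 165, 170, 175, 180]   # invented data
weights_kg = [55, 60, 63, 70, 74]

r = correlation(heights_cm, weights_kg)

# symmetry: it makes no difference which variable is x and which is y
assert abs(r - correlation(weights_kg, heights_cm)) < 1e-9

# unit invariance: convert cm -> inches and kg -> pounds; r is unchanged
heights_in = [h / 2.54 for h in heights_cm]
weights_lb = [w * 2.20462 for w in weights_kg]
assert abs(r - correlation(heights_in, weights_lb)) < 1e-9
```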

(20)

Least-Squares Regression

Correlation measures the direction and strength of the linear (straight-line) relationship between two quantitative variables. If a scatterplot shows a linear relationship, we would like to summarize this overall pattern by drawing a line on the scatterplot. A regression line summarizes the relationship between two variables, but only in a specific setting: when one of the variables helps explain or predict the other. That is, regression describes a relationship between explanatory and response variables.

(21)


Least-Squares Regression

A regression line is a straight line that describes how a response variable y changes as an explanatory variable x changes. We often use the regression line to predict the value of y for a given value of x. Regression, unlike correlation, requires that we have an explanatory variable and a response variable.

(22)

Fitting a line to data

When a scatterplot displays a linear pattern, we can summarize the overall pattern by drawing a straight line through the points. Of course, no straight line passes exactly through all of the points. Fitting a line to data means drawing a line that comes as close as possible to the points. The equation of a line fitted to the data gives a compact description of the dependence of the response variable on the explanatory variable. It is a mathematical model for the straight-line relationship.

(23)


Straight Line

Suppose that y is a response variable and x is an explanatory variable. A straight line relating y to x has an equation of the form

y = a + bx

In this equation, b is the slope, the amount by which y changes when x increases by one unit. The number a is the intercept, the value of y when x = 0.

(24)

Prediction

We can use a regression line to predict the response y for a specific value of the explanatory variable x.

Extrapolation is the use of a regression line for prediction far outside the range of values of the explanatory variable used to obtain the line. Such predictions are often not accurate.

(25)


Prediction

Age x (in months)   Height y (in centimeters)
18                  76.1
19                  77.0
20                  78.1
21                  78.2
22                  78.8
23                  79.7
24                  79.9
25                  81.1
26                  81.2
27                  81.8
28                  82.8
29                  83.5

[Scatterplot: age in months (x axis, 18 to 30) versus height in centimeters (y axis, 76 to 84), with the fitted line y = 64.93 + 0.635x.]
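The fitted line y = 64.93 + 0.635x can be used directly for prediction. A short sketch of both a sensible prediction (inside the 18-29 month range of the data) and an extrapolation (a 30-year-old is 360 months old, far outside it):

```python
def predict_height(age_months):
    # fitted regression line from the slide: y-hat = 64.93 + 0.635 * x
    return 64.93 + 0.635 * age_months

# prediction inside the range of the data (18 to 29 months)
print(predict_height(21))   # close to the observed height of 78.2 cm

# extrapolation: predicting for a 30-year-old (360 months)
print(predict_height(360))  # about 293 cm -- absurd; growth does not continue at this rate
```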

(26)

Prediction

Prediction: ŷ = a + bx

Observed: y

Error: y − ŷ

(27)


Equation of the Least-Squares Regression Line

We have data on an explanatory variable x and a response variable y for n individuals. The means and standard deviations are x̄ and s_x for x, and ȳ and s_y for y. The correlation between x and y is r.

The equation of the least-squares regression line of y on x is

ŷ = a + bx, with slope b = r (s_y / s_x) and intercept a = ȳ − b x̄
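These two formulas are all we need to fit the line from summary statistics. A minimal pure-Python sketch, using invented data that lie exactly on y = 1 + 2x so the fit should recover a = 1 and b = 2:

```python
def mean(v):
    return sum(v) / len(v)

def stdev(v):
    # sample standard deviation (divides by n - 1)
    m = mean(v)
    return (sum((u - m) ** 2 for u in v) / (len(v) - 1)) ** 0.5

def correlation(x, y):
    n = len(x)
    mx, my, sx, sy = mean(x), mean(y), stdev(x), stdev(y)
    return sum((a - mx) / sx * (b - my) / sy for a, b in zip(x, y)) / (n - 1)

def least_squares(x, y):
    # slope b = r * (s_y / s_x), intercept a = y-bar - b * x-bar
    b = correlation(x, y) * stdev(y) / stdev(x)
    a = mean(y) - b * mean(x)
    return a, b

x = [1, 2, 3, 4, 5]
y = [3, 5, 7, 9, 11]      # exactly on y = 1 + 2x
a, b = least_squares(x, y)
print(a, b)
```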

(28)

Correlation and Regression

Least-squares regression looks at the distances of the data points from the line only in the y direction. So the two variables x and y play different roles in regression.

Although the correlation ignores the distinction between explanatory and response variables, there is a close connection between correlation and regression. We saw that the slope of the least-squares line involves r. Another connection between correlation and regression is even more important. In fact, the numerical value of r as a measure of the strength of a linear relationship is best interpreted by thinking about regression.

(29)


Correlation and Regression

The square of the correlation, r², is the fraction of the variation in the values of y that is explained by the least-squares regression of y on x.

Example: Age and Height

Here r = 0.9944, so r² = 0.9888. The straight-line relationship between height and age explains 98.88% (almost all) of the variation in heights.

The square of the correlation measures how successfully the regression explains the response.

When r = 1 or r = -1, the points lie exactly on the line and r² = 1: all of the variation in one variable is accounted for by the linear relationship with the other variable.
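The claim that r² is the fraction of variation explained can be checked numerically: the variance of the fitted values ŷ, divided by the variance of y, equals r². A sketch with invented, roughly linear data:

```python
def mean(v):
    return sum(v) / len(v)

def variance(v):
    # sample variance (divides by n - 1)
    m = mean(v)
    return sum((u - m) ** 2 for u in v) / (len(v) - 1)

def correlation(x, y):
    n, mx, my = len(x), mean(x), mean(y)
    sx, sy = variance(x) ** 0.5, variance(y) ** 0.5
    return sum((a - mx) / sx * (b - my) / sy for a, b in zip(x, y)) / (n - 1)

x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.1]   # invented data, roughly linear

r = correlation(x, y)
b = r * (variance(y) / variance(x)) ** 0.5   # least-squares slope
a = mean(y) - b * mean(x)
fitted = [a + b * xi for xi in x]

# fraction of the variation in y explained by the regression
explained = variance(fitted) / variance(y)
assert abs(explained - r ** 2) < 1e-9
```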

(30)

Residuals

A residual is the difference between an observed value of the response variable and the value predicted by the regression line. That is:

residual = observed y − predicted y = y − ŷ

A residual plot is a scatterplot of the regression residuals against the explanatory variable. Residual plots help us assess the fit of a regression line.
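One useful fact about least-squares residuals, easy to verify in code: they always sum to (essentially) zero, because the least-squares line passes through the point (x̄, ȳ). A sketch with invented data:

```python
def mean(v):
    return sum(v) / len(v)

# invented data with some scatter around a line
x = [18, 20, 22, 24, 26, 28]
y = [76.5, 78.0, 79.2, 80.1, 81.5, 83.0]

# least-squares slope and intercept computed directly from deviations
mx, my = mean(x), mean(y)
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
    sum((xi - mx) ** 2 for xi in x)
a = my - b * mx

residuals = [yi - (a + b * xi) for xi, yi in zip(x, y)]
assert abs(sum(residuals)) < 1e-6   # residuals sum to zero (up to rounding)
```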

(31)


Residual

[Residual plot: age in months (x axis, 18 to 30) versus residuals (y axis, -0.3 to 0.5).]

(32)

Outliers and Influential Observation in Regression

An outlier is an observation that lies outside the overall pattern of the other observations. Points that are outliers in the y direction of a scatterplot have large regression residuals, but other outliers need not have large residuals.

An observation is influential for a statistical calculation if removing it markedly changes the result of the calculation. Points that are outliers in the x direction of a scatterplot are often influential for the least-squares regression line.
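A sketch of how a single x-outlier can be influential. The data are invented: ten points lying exactly on y = 2x, plus one point far out in the x direction; adding that one point drags the slope sharply down:

```python
def slope(x, y):
    # least-squares slope from deviations about the means
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
           sum((xi - mx) ** 2 for xi in x)

x = list(range(1, 11))          # x = 1, 2, ..., 10
y = [2 * xi for xi in x]        # points exactly on the line y = 2x

print(slope(x, y))              # slope is exactly 2.0

# add one influential observation: an outlier in the x direction
print(slope(x + [30], y + [10]))  # the slope drops sharply
```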

(33)


Outliers and Influential Observation in Regression

[Scatterplot: age in months (x axis, 18 to 30) versus height in centimeters (y axis, 76 to 84), showing an influential observation.]

(34)

Transforming Relationships

[Scatterplot of data2: x axis -3 to 3, y axis 0 to 14.]
