
IV.7

Visualization Tools for Insurance Risk Processes

Krzysztof Burnecki, Rafał Weron

7.1 Introduction
7.2 Software
7.3 Fitting Loss and Waiting Time Distributions
    Mean Excess Function
    Limited Expected Value Function
    Probability Plot
7.4 Risk Process and its Visualization
    Ruin Probability Plots
    Density Evolution
    Quantile Lines
    Probability Gates







Published as Chapter IV.6 (pages 899-920) in:

Ch. Chen, W. Haerdle, A. Unwin (2008) "Handbook of Data Visualization", Springer, Berlin.


7.1 Introduction

This chapter concerns risk processes, which may be the most suitable of all insurance objects for computer visualization. At the same time, risk processes are basic instruments for any non-life actuary – they are needed to calculate the amount of loss that an insurance company may incur. They also appear naturally in rating-triggered step-up bonds, where the interest rate is bound to random changes in company ratings, and catastrophe bonds, where the size of the coupon payment depends on the severity of catastrophic events.

A typical model of insurance risk, the so-called collective risk model, has two main components: one characterizing the frequency (or incidence) of events and another describing the severity (or size or amount) of the gain or loss resulting from the occurrence of an event (Klugman et al.; Panjer and Willmot; Teugels and Sundt). Incidence and severity are both generally assumed to be stochastic and independent of each other. Together they form the backbone of a realistic risk process. Consequently, both must be calibrated to the available historical data. All three visualization techniques discussed in Sect. 7.3:

mean excess function

limited expected value function

probability plot

are relatively simple, but at the same time they provide valuable assistance during the estimation process.

Once the stochastic models governing the incidence and severity of claims have been identified, they can be combined into the so-called aggregate claim process,

$$ S_t = \sum_{k=1}^{N_t} X_k \,, \qquad (7.1) $$

where the claim severities are described by the random sequence X_k with a finite mean, and the number of claims in the interval (0, t] is modeled by a counting process N_t, often called the claim arrival process.
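The aggregate claim process (7.1) is straightforward to simulate once the waiting time and severity distributions are chosen. The sketch below is not taken from the chapter's software; it assumes a homogeneous Poisson claim arrival process (exponential waiting times with intensity `lam`) and log-normal severities, and all parameter values are purely illustrative.

```python
import math
import random

def simulate_aggregate_claims(t, lam, severity, rng):
    """Simulate S_t = X_1 + ... + X_{N_t} for a homogeneous Poisson
    claim arrival process with intensity lam on the interval (0, t]."""
    total = 0.0
    arrival = rng.expovariate(lam)          # first claim arrival time
    while arrival <= t:
        total += severity(rng)              # add one claim amount X_k
        arrival += rng.expovariate(lam)     # exponential waiting time to next claim
    return total

rng = random.Random(42)
# illustrative: 30 claims per year on average, log-normal claim sizes
s = simulate_aggregate_claims(t=1.0, lam=30.0,
                              severity=lambda r: r.lognormvariate(0.0, 1.0),
                              rng=rng)
```

For a renewal counting process, the exponential waiting times are simply replaced by draws from another positive distribution.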

The risk process $\{R_t,\ t \ge 0\}$ describing the capital of an insurance company is then defined as

$$ R_t = u + c(t) - S_t \,, \qquad (7.2) $$

where the non-negative constant u stands for the initial capital of the company and c(t) is the premium the company receives for sold insurance policies.

The simplicity of the risk process (7.2) is only illusory. In most cases no analytical conclusions regarding the time evolution of the process can be drawn. However, it is this evolution that is important to practitioners, who have to calculate functionals of the risk process, like the expected time to ruin and the ruin probability. This calls for efficient numerical simulation schemes (Burnecki et al.) and powerful inference tools. In Sect. 7.4 we will present four such tools:

ruin probability plot

density evolution plot

quantile lines

probability gates.

All four of these techniques permit an immediate evaluation of model adequacy and the risks faced by the company based on visual inspection of the generated plots. As such they are especially useful for high-level managers who are interested in a general overview of the situation and do not need to study all of the computational details underlying the final results.

To illustrate the usefulness of the presented visualization tools, throughout this chapter we will apply them to two datasets. The first one, studied in detail, is the Property Claim Services (PCS) dataset, which covers losses resulting from catastrophic events in the USA. The data include market loss amounts in US dollars (USD), adjusted for inflation using the Consumer Price Index. Only natural events that caused damage exceeding five million dollars were taken into account; see Fig. 7.1.

The second dataset, used here solely to illustrate the risk process inference tools, concerns major inflation-adjusted Danish fire losses of profit (in Danish Krone, DKK) recorded by Copenhagen Re.

Figure .. Graph of the PCS catastrophe loss data, –. The two largest losses in this period were caused by Hurricane Andrew ( August ) and the Northridge Earthquake ( January ). From XploRe


7.2 Software

Visualization tools would not be very useful without adequate software. All of the methods discussed in this chapter are demonstrated using two software libraries and one standalone application. The Insurance Library of XploRe (www.xplore-stat.de) is a collection of quantlets that illustrate various topics related to insurance (Čižek et al.). It is accompanied by online, hyperlinked web tutorials that are freely downloadable from the Web (www.xplore-stat.de/tutorials/_Xpl_Tutorials.html).

The Ruin Probabilities toolbox for MATLAB is a set of m-functions with a graphical user interface (GUI), created primarily to visualize risk processes and evaluate ruin probabilities (Miśta). It can be downloaded from www.im.pwr.wroc.pl/~hugo/stronaHSC/Podstrony/Programy.html.

The SDE-Solver is a standalone application for the Windows environment. It enables the construction and visualization of solutions of stochastic differential equations (SDEs) with Gaussian, Poisson, and stable random measures (Janicki et al.); for SDE modeling concepts, the reader should consult Kloeden and Platen. The graphics make use of quantile lines and density evolution techniques and introduce the interesting concept of interactive probability gates, which give the probability that the simulated process passes through a specified interval at a specified point in time (for details see Sect. 7.4.4). More information about the software can be found at www.math.uni.wroc.pl/~janicki/solver.html.

7.3 Fitting Loss and Waiting Time Distributions

The derivation of loss and waiting time (interarrival) distributions from insurance data is not an easy task. There are three basic approaches: empirical, analytical, and moment-based. The analytical approach is probably the one most often used in practice and certainly the one most frequently adopted in the actuarial literature. It reduces to finding an analytical expression that fits the observed data well and is easy to handle (Daykin et al.).

Having a large collection of distributions to choose from, we need to narrow our selection to a single model and a unique parameter estimate. The type of objective loss distribution (the waiting time distribution can be analyzed analogously) can easily be selected by comparing the shapes of the empirical and theoretical mean excess functions. Goodness-of-fit can be verified by plotting the corresponding limited expected value functions or drawing probability plots. Finally, the hypothesis that the modeled random event is governed by a certain loss distribution can be statistically tested (but this is not discussed in this chapter; for a recent review of goodness-of-fit hypothesis testing see Burnecki et al.).

In the following subsections we will apply the abovementioned visual inference tools to the PCS data. They will narrow our search for the optimal analytical model of the loss and waiting time distributions to one or two probability laws. Moreover, they will allow for a visual assessment of the goodness-of-fit.

7.3.1 Mean Excess Function

For a random claim amount variable X, the mean excess function or mean resid- ual life function is the expected payment per claim on a policy with a fixed amount deductible of x, where claims of less than or equal to x are completely ignored:

$$ e(x) = \mathrm{E}(X - x \mid X > x) = \frac{\int_x^{\infty} \left(1 - F(u)\right) du}{1 - F(x)} \,. \qquad (7.3) $$

In practice, the mean excess function e is estimated by ê_n, based on a representative sample x_1, ..., x_n:

$$ \hat{e}_n(x) = \frac{\sum_{x_i > x} x_i}{\#\{i : x_i > x\}} - x \,. \qquad (7.4) $$

Note that in a financial risk management context, switching from the right tail to the left tail, e(x) is referred to as the expected shortfall (Weron).

When considering the shapes of mean excess functions, the exponential distribution with the cumulative distribution function (cdf) F(x) = 1 − exp(−βx) plays a central role. It has the memoryless property: whether or not the information X > x is given, the expected value of X − x is the same as if one started at x = 0 and calculated E(X). The mean excess function for the exponential distribution is therefore constant. One can in fact easily calculate that e(x) = 1/β for all x > 0 in this case.
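The memoryless property is easy to verify numerically: for a large exponential sample, the empirical mean excess stays flat at 1/β regardless of the threshold. The sample size and β below are arbitrary illustrative choices:

```python
import random

rng = random.Random(7)
beta = 2.0
sample = [rng.expovariate(beta) for _ in range(200000)]

# empirical mean excess at several thresholds; each should be close to 1/beta = 0.5
for x in (0.1, 0.5, 1.0):
    tail = [s for s in sample if s > x]
    print(x, sum(tail) / len(tail) - x)
```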

If the distribution of X has a heavier tail than the exponential distribution, we find that the mean excess function ultimately increases, and when it has a lighter tail e(x) ultimately decreases. Hence, the shape of e(x) provides important information on the sub-exponential or super-exponential nature of the tail of the distribution at hand. That is why, in practice, this tool is used not only to discover the relevant class of the claim size distribution, but also to investigate all kinds of phenomena. We will apply it to data on both PCS loss and waiting times.

Mean excess functions of the well known and widely used distributional classes are given by the following formulae (selected shapes are sketched in Fig. 7.2, while the empirical mean excess functions ê_n(x) for the PCS catastrophe data are plotted in Fig. 7.3):

log-normal distribution with cdf $F(x) = \Phi\left(\frac{\ln x - \mu}{\sigma}\right)$:

$$ e(x) = \exp\left(\mu + \frac{\sigma^2}{2}\right) \frac{1 - \Phi\left(\frac{\ln x - \mu - \sigma^2}{\sigma}\right)}{1 - \Phi\left(\frac{\ln x - \mu}{\sigma}\right)} - x \,, $$

where $\Phi(\cdot)$ is the standard normal (with mean 0 and variance 1) distribution function;


Pareto distribution with cdf $F(x) = 1 - \left(\frac{\lambda}{\lambda + x}\right)^{\alpha}$:

$$ e(x) = \frac{\lambda + x}{\alpha - 1} \,, \qquad \alpha > 1 \,; $$

Burr distribution with cdf $F(x) = 1 - \left(\frac{\lambda}{\lambda + x^{\tau}}\right)^{\alpha}$:

$$ e(x) = \frac{\lambda^{1/\tau}\, \Gamma\left(\alpha - \frac{1}{\tau}\right) \Gamma\left(1 + \frac{1}{\tau}\right)}{\Gamma(\alpha)} \left(\frac{\lambda}{\lambda + x^{\tau}}\right)^{-\alpha} \left[1 - B\left(1 + \frac{1}{\tau},\ \alpha - \frac{1}{\tau},\ \frac{x^{\tau}}{\lambda + x^{\tau}}\right)\right] - x \,, $$

where $\Gamma(\cdot)$ is the standard gamma function and $B(\cdot, \cdot, \cdot)$ is the incomplete beta function;

Weibull distribution with cdf $F(x) = 1 - \exp(-\beta x^{\tau})$:

$$ e(x) = \frac{\Gamma\left(1 + \frac{1}{\tau}\right)}{\beta^{1/\tau}} \left[1 - \Gamma\left(1 + \frac{1}{\tau},\ \beta x^{\tau}\right)\right] \exp(\beta x^{\tau}) - x \,, $$

where $\Gamma(\cdot, \cdot)$ is the incomplete gamma function;

gamma distribution with cdf $F(x) = \int_0^x \beta (\beta s)^{\alpha - 1} \exp(-\beta s) / \Gamma(\alpha)\, ds$:

$$ e(x) = \frac{\alpha}{\beta}\, \frac{1 - F(x, \alpha + 1, \beta)}{1 - F(x, \alpha, \beta)} - x \,, $$

where $F(x, \alpha, \beta)$ is the gamma cdf;

mixture of two exponential distributions with cdf $F(x) = a\left(1 - \exp(-\beta_1 x)\right) + (1 - a)\left(1 - \exp(-\beta_2 x)\right)$:

$$ e(x) = \frac{\frac{a}{\beta_1} \exp(-\beta_1 x) + \frac{1 - a}{\beta_2} \exp(-\beta_2 x)}{a \exp(-\beta_1 x) + (1 - a) \exp(-\beta_2 x)} \,. $$

A comparison of Figs. 7.2 and 7.3 suggests that the log-normal, Pareto, and Burr distributions should provide a good fit for the loss amounts. The parameters of all three distributions were estimated by maximum likelihood. Unfortunately, the fitted parameters of the Burr distribution imply that its first moment is infinite, which contradicts the assumption that the random sequence of claim amounts X_k has a finite mean. This assumption seems natural in the insurance world, since any premium formula usually includes the expected value of X_k. Therefore, we exclude the Burr distribution from the analysis of claim severities.

The classification of the waiting time data is not as straightforward. If we discard large observations, then the log-normal and Burr laws should yield a good fit. However, if all of the waiting times are taken into account, then the empirical mean excess function ê_n(x) approximates a straight line (although one that oscillates somewhat), which suggests that the exponential law could be a reasonable alternative.


Figure .. Top panel: shapes of the mean excess function e(x) for the log-normal (dashed line), gamma with α<  (dotted line), gamma with α   (solid line) and a mixture of two exponential distributions (long-dashed line). Bottom panel: shapes of the mean excess function e(x) for the Pareto (dashed line), Burr (long-dashed line), Weibull with τ<  (solid line) and Weibull with τ   (dotted line) distributions. From XploRe


Figure .. The empirical mean excess function ˆen(x) for the PCS catastrophe loss amounts in billions of USD (top panel) and waiting times in years (bottom panel). Comparison with Fig. . suggests that log-normal, Pareto, and Burr distributions should provide a good fit for loss amounts, while log-normal, Burr, and exponential laws work well for the waiting times. From XploRe.


7.3.2 Limited Expected Value Function

The limited expected value function L of a claim size variable X, or of the correspond- ing cdf F(x), is defined as

$$ L(x) = \mathrm{E}\min(X, x) = \int_0^x y\, dF(y) + x \left(1 - F(x)\right) \,, \qquad x > 0 \,. \qquad (7.5) $$

The value of the function L at point x is equal to the expectation of the cdf F(x) truncated at this point. In other words, it represents the expected amount per claim retained by the insured on a policy with a fixed amount deductible of x. The empirical estimate is defined as follows:

$$ \hat{L}_n(x) = \frac{1}{n} \left( \sum_{x_j < x} x_j + \sum_{x_j \ge x} x \right) \,. \qquad (7.6) $$

In order to fit the limited expected value function L of an analytical distribution to the observed data, the estimate L̂_n is first constructed. Thereafter, one attempts to find a suitable analytical cdf F such that the corresponding limited expected value function L is as close to the observed L̂_n as possible.

The limited expected value function (LEVF) has the following important properties:

(i) the graph of L is concave, continuous, and increasing;
(ii) L(x) → E(X) as x → ∞;
(iii) F(x) = 1 − L′(x), where L′(x) is the derivative of the function L at point x; if F is discontinuous at x, then the equality holds true for the right-hand derivative L′(x+).

The limited expected value function is a particularly suitable tool for our purposes because it represents the claim size distribution in the monetary dimension. For example, we have L(∞) = E(X) if the latter exists. The cdf F, on the other hand, operates on the probability scale, i.e., it takes values between 0 and 1. Therefore, it is usually difficult to work out how sensitive the price of the insurance – the premium – is to changes in the values of F by looking only at F(x), while the LEVF immediately shows how different parts of the claim size cdf contribute to the premium. Aside from its applicability to curve-fitting, the function L also turns out to be a very useful concept when dealing with deductibles (Burnecki et al.). It is also worth mentioning that there is a connection between the limited expected value function and the mean excess function:

$$ \mathrm{E}(X) = L(x) + P(X > x)\, e(x) \,. \qquad (7.7) $$
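Relation (7.7) also holds exactly for the empirical counterparts of L, e, and P(X > x), which makes a handy consistency check. A small sketch with made-up numbers:

```python
claims = [0.5, 1.0, 1.5, 2.0, 4.0, 8.0]
x = 1.5
n = len(claims)

levf = sum(min(c, x) for c in claims) / n        # empirical L(x)
tail = [c for c in claims if c > x]
excess = sum(tail) / len(tail) - x               # empirical e(x)
p_tail = len(tail) / n                           # empirical P(X > x)
mean = sum(claims) / n                           # empirical E(X)

# E(X) = L(x) + P(X > x) e(x) holds exactly under the empirical distribution
print(abs(mean - (levf + p_tail * excess)) < 1e-12)
```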

The limited expected value functions (LEVFs) for all of the distributions considered in this chapter are given by the following formulae:

exponential distribution:

$$ L(x) = \frac{1}{\beta} \left(1 - \exp(-\beta x)\right) \,; $$


log-normal distribution:

$$ L(x) = \exp\left(\mu + \frac{\sigma^2}{2}\right) \Phi\left(\frac{\ln x - \mu - \sigma^2}{\sigma}\right) + x \left[1 - \Phi\left(\frac{\ln x - \mu}{\sigma}\right)\right] \,; $$

Pareto distribution:

$$ L(x) = \frac{\lambda - \lambda^{\alpha} (\lambda + x)^{1-\alpha}}{\alpha - 1} \,; $$

Burr distribution:

$$ L(x) = \frac{\lambda^{1/\tau}\, \Gamma\left(\alpha - \frac{1}{\tau}\right) \Gamma\left(1 + \frac{1}{\tau}\right)}{\Gamma(\alpha)}\, B\left(1 + \frac{1}{\tau},\ \alpha - \frac{1}{\tau},\ \frac{x^{\tau}}{\lambda + x^{\tau}}\right) + x \left(\frac{\lambda}{\lambda + x^{\tau}}\right)^{\alpha} \,; $$

Weibull distribution:

$$ L(x) = \frac{\Gamma\left(1 + \frac{1}{\tau}\right)}{\beta^{1/\tau}}\, \Gamma\left(1 + \frac{1}{\tau},\ \beta x^{\tau}\right) + x \exp(-\beta x^{\tau}) \,; $$

gamma distribution:

$$ L(x) = \frac{\alpha}{\beta}\, F(x, \alpha + 1, \beta) + x \left[1 - F(x, \alpha, \beta)\right] \,; $$

mixture of two exponential distributions:

$$ L(x) = \frac{a}{\beta_1} \left(1 - \exp(-\beta_1 x)\right) + \frac{1 - a}{\beta_2} \left(1 - \exp(-\beta_2 x)\right) \,. $$

From a curve-fitting point of view, the advantage of using the LEVFs rather than the cdfs is that both the analytical function and the corresponding observed function L̂_n, based on the observed discrete cdf, are continuous and concave, whereas the observed claim size cdf F_n is a discontinuous step function. Property (iii) implies that the limited expected value function determines the corresponding cdf uniquely.

When the limited expected value functions of two distributions are close to each other, not only are the mean values of the distributions close to each other, but the whole distributions are too.

Since the LEVF represents the claim size distribution in the monetary dimension, it is usually used exclusively to analyze price data. In Fig. 7.4, we depict the empirical and analytical LEVFs for the two distributions that best fit the PCS catastrophe loss amounts (as suggested by the mean excess function). We can see that the Pareto law is clearly superior to the log-normal one.

7.3.3 Probability Plot

The purpose of the probability plot is to graphically assess whether the data comes from a specific distribution. It can provide some assurance that this assumption is not being violated, or it can provide an early warning of a problem with our assumptions.


Figure .. The empirical (solid line) and analytical limited expected value functions (LEVFs) for the log-normal (dashed line) and Pareto (dotted line) distributions for the PCS loss catastrophe data. From XploRe

The probability plot is constructed in the following way. First, the observations x_1, ..., x_n are ordered from the smallest to the largest: x_(1) ≤ ... ≤ x_(n). Next, they are plotted against their observed cumulative frequency, i.e., the points (the crosses in Figs. 7.5–7.8) correspond to the pairs (x_(i), F^{-1}((i − 0.5)/n)), for i = 1, ..., n. If the hypothesized distribution F adequately describes the data, the plotted points fall approximately along a straight line. If the plotted points deviate significantly from a straight line, especially at the ends, then the hypothesized distribution is not appropriate.
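The construction above amounts to pairing order statistics with theoretical quantiles. A sketch for an exponential hypothesis, where the inverse cdf is available in closed form (for other laws a numerical inverse would be substituted; the function name and data are mine):

```python
import math

def exp_prob_plot_points(sample, beta):
    """Probability-plot pairs (x_(i), F^{-1}((i - 0.5)/n)) for the
    exponential cdf F(x) = 1 - exp(-beta x), whose inverse is
    F^{-1}(q) = -ln(1 - q)/beta."""
    xs = sorted(sample)                  # order statistics x_(1) <= ... <= x_(n)
    n = len(xs)
    return [(x, -math.log(1.0 - (i - 0.5) / n) / beta)
            for i, x in enumerate(xs, start=1)]

pts = exp_prob_plot_points([0.1, 0.4, 0.2, 0.9], beta=1.0)
# if the data were exponential, these pairs would fall near a straight line
```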

Figure. . shows a Pareto probability plot of the PCS loss data. Apart from the two very extreme observations (corresponding to Hurricane Andrew and Northridge Earthquake), the points more or less constitute a straight line, validating the choice of the Pareto distribution. Figures . and . present log-normal probability plots of the PCS data. To this end we applied the standard normal probability plots to the logarithms of the losses and waiting times, respectively. Figure . suggests that the log-normal distribution for the loss severity is not the best choice, whereas Fig. .

justifies the use of that particular distribution for the waiting time data. Figure .

depicts an exponential probability plot of the latter dataset. We can see that the expo- nential distribution is not a very good candidate for the underlying distribution of the waiting time data – the points deviate from a straight line at the far end. Nevertheless, the deviation is not that large, and the exponential law may be an acceptable model.


Figure .. Pareto probability plot of the PCS loss data. Apart from the two very extreme observations (Hurricane Andrew and Northridge Earthquake), the points (crosses) more or less constitute a straight line, validating the choice of the Pareto distribution. The inset is a magnification of the bottom left part of the original plot. From the Ruin Probabilities Toolbox

Figure .. Log-normal probability plot of the PCS loss data. The x-axis corresponds to logarithms of the losses. The deviations from the straight line at both ends question the adequacy of the log-normal law. From the Ruin Probabilities Toolbox


Figure .. Log-normal probability plot of the PCS waiting time data. The x-axis corresponds to logarithms of the losses. From the Ruin Probabilities Toolbox

Figure .. Exponential probability plot of the PCS waiting time data. The plot deviates from a straight line at the far end. From the Ruin Probabilities Toolbox

These probability plots suggest that, as far as the loss amounts are concerned, the Pareto law provides a much better fit than the log-normal distribution. In fact, apart from the two most extreme observations (Hurricane Andrew and the Northridge Earthquake), it provides a very good fit. Note that the procedure of trimming the uppermost fraction of the data before calibration is known as "robust estimation", and it leads to very similar conclusions (see a recent paper by Chernobai et al.).

From the probability plots, we can also infer that the waiting time data can be described by the log-normal and – to some degree – the exponential distribution.

The parameters of these two distributions were estimated by maximum likelihood.

7.4 Risk Process and its Visualization

The risk process (.) is the sum of the initial capital and the premium function mi- nus the aggregate claim process governed by two random phenomena – the severity and incidence of claims. In many practical situations, it is reasonable to consider the counting process Nt(responsible for the incidence of events) to be (i) a renewal pro- cess, i.e., a counting process with interarrival times that are i.i.d. positive random vari- ables with a mean of  λ, and (ii) independent of the claim severities Xk. In such a case, the premium function can be defined in a natural way as c(t) = ( + θ)μλt, where μ is the expectation of Xk and θ   is the relative safety loading on the premium which “guarantees” the survival of the insurance company. The (homoge- neous) Poisson process (HPP) is a special case of the renewal process with exponen- tially distributed waiting times.

Two standard ways of presenting sample trajectories of the risk process are displayed in Figs. 7.9 and 7.10. Here, we use the Danish fire losses dataset, which can be modeled by a log-normal claim amount distribution (with parameters obtained via maximum likelihood) and a HPP counting process; the company's initial capital u is measured in million DKK. In Fig. 7.9, five real (discontinuous) sample realizations of the resulting risk process are presented, whereas in Fig. 7.10 the trajectories are shown in a continuous fashion. The latter way of depicting sample realizations seems to be more illustrative. Also note that one of the trajectories falls below zero, indicating a scenario leading to bankruptcy of the insurance company.

7.4.1 Ruin Probability Plots

When examining the nature of the risk associated with a business portfolio, it is often interesting to assess how the portfolio may be expected to perform over an extended period of time. This is where ruin theory (Grandell) comes in handy. Ruin theory is concerned with the excess of the income c(t) (with respect to a business portfolio) over the outgoings, or claims paid, S(t). This quantity, referred to as the insurer's surplus, varies over time. Specifically, ruin is said to occur if the insurer's surplus reaches a specified lower bound, e.g., minus the initial capital. This can be observed in Fig. 7.10, where the time of ruin is denoted by a star. One measure of risk is the probability of such an event, which clearly reflects the volatility inherent in the


Figure .. Discontinuous visualization of the trajectories of a risk process. The initial capital u= 

million DKK, the relative safety loading θ= ., the claim size distribution is log-normal with parameters μ= . and σ = ., and the driving counting process is a HPP with monthly intensity λ= .. From the Ruin Probabilities Toolbox

Figure .. Alternative (continuous) visualization of the trajectories of a risk process. The bankruptcy time is denoted by a star. The parameters of the risk process are the same as in Fig. .. From the Ruin Probabilities Toolbox


business. In addition, it can serve as a useful tool when planning how the insurer’s funds are to be used over the long term.

The ruin probability in a finite time T is given by

$$ \psi(u, T) = P\left( \inf_{0 < t < T} R_t < 0 \right) \,. \qquad (7.8) $$

Most insurance managers will closely follow the development of the risk business and increase the premium if the business behaves badly. The planning horizon may be thought of as the sum of the following: the time until the risk business is found to behave "badly," the time until the management reacts, and the time until the decision to increase the premium takes effect (Grandell).

In Fig. ., a three-dimensional (-D) visualization of the ruin probability with respect to the initial capital u (varying from zero to eight million DKK) and the time horizon T (ranging from zero to five months) is depicted. The remaining parameters of the risk process are the same as those used in Figs. . and ., except that the relative safety loading was raised from θ= . to θ = .. We can see that the ruin probability increases with the time horizon and decreases as the initial capital grows.

The ruin probability in finite time can always be computed directly using Monte Carlo simulations. Naturally, the choice of the intensity function and the distribution of claim severities heavily affects the simulated values and the ruin probability. The graphical tools presented below can help in assessing whether the choices made result

Figure .. Ruin probability plot with respect to the time horizon T (left axis, in months) and the initial capital u (right axis, in million DKK). The relative safety loading θ= .; other parameters of the risk process are the same as in Fig. .. From the Ruin Probabilities Toolbox


in a reasonable risk process and, hence, greatly reduce the time needed to construct an adequate model.
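A direct Monte Carlo estimate of ψ(u, T) in (7.8) only needs to check the surplus at claim instants, since between claims R_t can only grow. A minimal sketch (HPP with exponential claims; all parameter values are illustrative, not the Danish fire data calibration):

```python
import random

def ruin_probability(u, theta, mu, lam, t_max, severity, n_sim, rng):
    """Monte Carlo estimate of psi(u, T): the fraction of simulated
    trajectories of R_t that fall below zero before t_max.  Between
    claims R_t only increases, so ruin can occur only at claim instants."""
    ruined = 0
    for _ in range(n_sim):
        t = rng.expovariate(lam)   # first claim arrival
        s = 0.0                    # aggregate claims so far
        while t <= t_max:
            s += severity(rng)
            if u + (1.0 + theta) * mu * lam * t - s < 0.0:
                ruined += 1
                break
            t += rng.expovariate(lam)
    return ruined / n_sim

rng = random.Random(123)
psi = ruin_probability(u=5.0, theta=0.3, mu=1.0, lam=20.0, t_max=1.0,
                       severity=lambda r: r.expovariate(1.0),
                       n_sim=5000, rng=rng)
# the estimate grows with the horizon t_max and shrinks with the initial capital u
```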

7.4.2 Density Evolution

Density evolution plots (and their two-dimensional projections) are a visually attractive method of representing the time evolution of a process. At each time point t = t_1, t_2, ..., t_n, a density estimate of the distribution of the process values at this time point is evaluated; see Fig. 7.12 (the same parameters of the risk process as those in Fig. 7.9 were used). Then the densities are plotted on a grid of t values. The three-dimensional surface obtained can be further rotated to better present the behavior of the process over a particular interval.
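A density evolution surface can be assembled from simulated trajectories with nothing more than a kernel density estimate per time point. A self-contained sketch (the bandwidth and grid are ad hoc choices, and a normal sample stands in for the simulated process values at one time point t_k):

```python
import math
import random

def gaussian_kde(values, grid, bandwidth):
    """Gaussian kernel density estimate of `values`, evaluated on `grid`."""
    n = len(values)
    norm = 1.0 / (n * bandwidth * math.sqrt(2.0 * math.pi))
    return [norm * sum(math.exp(-0.5 * ((g - v) / bandwidth) ** 2)
                       for v in values)
            for g in grid]

rng = random.Random(5)
# stand-in for the risk process values of 500 trajectories at one time point
values_at_tk = [rng.gauss(10.0, 2.0) for _ in range(500)]
grid = [4.0 + 0.5 * j for j in range(25)]          # R_t values from 4 to 16
density_row = gaussian_kde(values_at_tk, grid, bandwidth=0.8)
# repeating this for t_1, ..., t_n and stacking the rows yields the 3-D surface
```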

A two-dimensional projection of the density evolution plot (see Fig. 7.13) reveals information equivalent to that represented by the quantile lines (see Sect. 7.4.3). In this case, however, the presented visual information is more attractive to the eye, but not as rigid: depending on the discretization of the time and process value intervals and on the kernel density estimate used, slightly different density evolution plots can be obtained (Janicki and Izydorczyk; Miśta).

7.4.3 Quantile Lines

The function x̂_p(t) is called a sample p-quantile line if, for each t ∈ [t_0, T], x̂_p(t) is the sample p-quantile, i.e., if it satisfies F_n(x̂_p −) ≤ p ≤ F_n(x̂_p), where F_n is the empirical

Figure .. Three-dimensional visualization of the density evolution of a risk process with respect to the risk process value Rt(left axis) and time t (right axis). The parameters of the risk process are the same as in Fig. .. From the Ruin Probabilities Toolbox


Figure .. Two-dimensional projection of the density evolution depicted in Fig. .. From the Ruin Probabilities Toolbox

distribution function (edf). Recall that for a sample of observations x_1, ..., x_n, the edf is defined as

$$ F_n(x) = \frac{1}{n}\, \#\{ i : x_i \le x \} \,, \qquad (7.9) $$

in other words, it is a piecewise constant function with jumps of size 1/n at the points x_i (Burnecki et al.).

Quantile lines are a very helpful tool in the analysis of stochastic processes. For example, they can provide a simple justification of the stationarity of a process, or the lack of it (see Janicki and Weron). In Figs. 7.14, 7.15, and 7.16, they visualize the evolution of the risk process.

Quantile lines can also be a useful tool for comparing two processes; see Fig. 7.14, which depicts quantile lines and two sample trajectories of the risk process and of its diffusion approximation; consult Burnecki et al. for a discussion of different approximations in the context of ruin probability. The parameters of the risk process are the same as in Fig. 7.9. We can see that the quantile lines of the risk process and of its approximation coincide. This justifies the use of the Brownian approximation for these parameters of the risk process.

We now return to the PCS dataset. To study the evolution of the risk process we simulate sample trajectories and compute quantile lines. We consider a hypothetical scenario where the insurance company insures losses resulting from catastrophic events in the United States. The company's initial capital u is measured in


Figure .. A Poisson-driven risk process (discontinuous thin lines) and its Brownian motion approximation (continuous thin lines). The quantile lines enable an easy and fast comparison of the processes. The thick solid lines represent the sample ., . . . , .-quantile lines based on 

trajectories of the risk process, whereas the thick dashed lines correspond to their approximation counterparts. The parameters of the risk process are the same as in Fig. .. From the Ruin Probabilities Toolbox

billion USD; the relative safety loading is θ. We choose two different models of the risk process based on the results of the statistical analysis (see Sect. 7.3): a homogeneous Poisson process (HPP) with log-normal claim sizes, and a renewal process with Pareto claim sizes and log-normal waiting times. The results are presented in Fig. 7.15. The thick solid line is the "real" risk process, i.e., a trajectory constructed from the historical arrival times and values of the losses. The different shapes of the "real" risk process in the subplots are due to the different forms of the premium function c(t). The thin solid line is a sample trajectory. The dotted lines are sample quantile lines based on simulated trajectories of the risk process. The quantile lines visualize the evolution of the density of the risk process. Clearly, if the claim severities are Pareto-distributed then extreme events are more likely to happen than in the log-normal case, where the historical trajectory falls outside even the most extreme quantile line. Figure 7.15 suggests that the second model (Pareto-distributed claim sizes and log-normal waiting times) yields a reasonable model for the "real" risk process.

7.4.4 Probability Gates

“Probability gates” are an interactive graphical tool implemented in SDE-Solver. They can provide invaluable assistance in the real-time analysis of the risk process and


Figure .. The PCS data simulation results for a homogeneous Poisson process with log-normal claim sizes (top panel) and a renewal process with Pareto claim sizes and log-normal waiting times (bottom panel). The dotted lines are the sample ., ., ., ., ., ., ., ., .-quantile lines based on  trajectories of the risk process. From XploRe


Figure .. “Probability gates” are an interactive graphical tool used for determining the probability that the process passes through a specified interval. The ., ..., .-quantile lines (thick lines) are based on  simulated trajectories (thin lines) of the risk process originating at u=  billion USD. The parameters of the α-stable Lévy motion approximation of the risk process were chosen to comply with PCS data. From the SDE-Solver

its models. A “probability gate” gives the so-called cylindrical probability PXt  (a, b] that the simulated process Xt passes through a specified interval(a, b] at a specified point in time t(Janicki and Izydorczyk, ; Janicki et al., ). Two probability gates are defined in Fig. .; Monte Carlo simulations can be used to ob- tain the probability estimates. One yields the probability of the risk process falling below  billion after four years, i.e., PR  (, ] = .. The other yields the probability of the risk process being in the range(, ] billion after eight years, i.e., PR  (, ] = .. Additionally, product probabilities of the process passing through all of the defined gates are provided. In the above example, the prod- uct probability of the risk process first falling below  billion (after four years) and then recovering to over  billion but less than  billion (after eight years) is equal to PR  (, ], R  (, ] = .. The parameters of the α-stable Lévy motion approximation (Furrer et al., ) of the risk process were chosen to comply with the catastrophic data example.
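The Monte Carlo estimation of such cylindrical probabilities is straightforward to sketch. The function below is an illustrative assumption about how one might do it, not the SDE-Solver interface: each gate is a triple (t, a, b), and the joint estimate corresponds to the product probability through all gates.

```python
import numpy as np

def gate_probabilities(trajectories, t_grid, gates):
    """Estimate P(X_t in (a, b]) for each gate (t, a, b) from simulated
    trajectories (one row per path, one column per time in t_grid), and
    the probability of passing through all gates jointly."""
    passed_all = np.ones(trajectories.shape[0], dtype=bool)
    per_gate = []
    for t, a, b in gates:
        j = np.searchsorted(t_grid, t)        # column of the gate's time
        inside = (trajectories[:, j] > a) & (trajectories[:, j] <= b)
        per_gate.append(inside.mean())        # fraction of paths in (a, b]
        passed_all &= inside                  # paths that cleared every gate
    return per_gate, passed_all.mean()
```

Applied to simulated trajectories of the risk process, a gate at t = 4 years and one at t = 8 years with the intervals chosen above would reproduce the two single-gate probabilities, while the second return value gives their joint (product) probability.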

References

Burnecki, K., Härdle, W. and Weron, R. (). Simulation of risk processes, in Teu- gels, J., Sundt, B. (eds.) Encyclopedia of Actuarial Science, Wiley, Chichester, UK.


Burnecki, K., Misiorek, A. and Weron, R. (). Loss Distributions, in Cizek, P., Härdle, W., Weron, R. (eds.) Statistical Tools for Finance and Insurance, Springer, Berlin, –.

Burnecki, K., Miśta, P. and Weron, A. (). Ruin Probabilities in Finite and Infinite Time, in Cizek, P., Härdle, W., Weron, R. (eds.) Statistical Tools for Finance and Insurance, Springer, Berlin, –.

Burnecki, K., Nowicka-Zagrajek, J. and Wyłomańska, A. (). Pure Risk Premiums under Deductibles, in Cizek, P., Härdle, W., Weron, R. (eds.) Statistical Tools for Finance and Insurance, Springer, Berlin, –.

Čižek, P., Härdle, W. and Weron, R. (eds.) (). Statistical Tools for Finance and Insurance, Springer, Berlin.

Chernobai, A., Burnecki, K., Rachev, S.T., Trück, S. and Weron, R. (). Modelling catastrophe claims with left-truncated severity distributions, Computational Statistics, :-.

Daykin, C.D., Pentikainen, T. and Pesonen, M. (). Practical Risk Theory for Actuaries, Chapman, London.

Grandell, J. (). Aspects of Risk Theory, Springer, New York.

Furrer, H., Michna, Z. and Weron, A. (). Stable Lévy motion approximation in collective risk theory, Insurance: Mathematics and Economics, :-.

Janicki, A. and Izydorczyk, A. (). Computer Methods in Stochastic Modeling, WNT, Warszawa (in Polish).

Janicki, A., Izydorczyk, A. and Gradalski, P. (). Computer Simulation of Stochastic Models with SDE-Solver Software Package, Lecture Notes in Computer Science, :-.

Janicki, A. and Weron, A. (). Simulation and Chaotic Behavior of α-Stable Stochastic Processes, Marcel Dekker, New York.

Kloeden, P.E. and Platen, E. (). Numerical Solution of Stochastic Differential Equations, Springer, Berlin.

Klugman, S.A., Panjer, H.H. and Willmot, G.E. (). Loss Models: From Data to Decisions, Wiley, New York.

Miśta, P. (). Computer methods of ruin probability approximation and its control via optimal reinsurance policies, Fundacja “Warta”, Warsaw (in Polish).

Panjer, H.H. and Willmot, G.E. (). Insurance Risk Models, Society of Actuaries, Chicago, IL.

Teugels, J. and Sundt, B. (). Encyclopedia of Actuarial Science, Wiley, Chichester, UK.

Weron, R. (). Computationally Intensive Value at Risk Calculations, in Gen- tle, J.E., Härdle, W., Mori, Y. (eds.) Handbook of Computational Statistics, Springer, Berlin, –.
