Math 5637 (395) Risk Theory

Fall 2009

 

Instructor – James G. Bridgeman

instructor's web site

syllabus for the course

Change in Requirements:  You are only required to do 6 projects, rather than 8!

Errata for textbook: http://www.soa.org/files/pdf/edu-loss-models-errata-corrections.pdf

 

Final Exam  Exam Solutions  Exam Solutions Spreadsheets

Grading Worksheet  Final grades have been posted on the Registrar’s site

Maximum Entropy Paper (K. Conrad)          

 

EXCEL Example for Convolution (see page 208) 

(note use of the EXCEL functions OFFSET and SUMPRODUCT)
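
For anyone working without the spreadsheet handy, here is a minimal Python sketch of the same idea; the severity probabilities below are placeholders, not the numbers in the example.  Each pass computes one more convolution power of the severity pmf, just as the OFFSET/SUMPRODUCT formula slides one block of cells across another.

def convolve(f, g):
    """Convolution of two pmfs given as lists indexed by 0, 1, 2, ..."""
    h = [0.0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[i + j] += fi * gj
    return h

def convolution_powers(f, n_max):
    """Return [f^(*0), f^(*1), ..., f^(*n_max)]; f^(*0) is a point mass at 0."""
    powers = [[1.0]]
    for _ in range(n_max):
        powers.append(convolve(powers[-1], f))
    return powers

if __name__ == "__main__":
    severity = [0.5, 0.3, 0.2]          # placeholder pmf on {0, 1, 2}
    for n, f_n in enumerate(convolution_powers(severity, 3)):
        print(n, [round(p, 4) for p in f_n])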

 

Distribution fitting example (pp 207-208)
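
The details of the fitting are in the example itself; purely for orientation, here is a minimal Python sketch of maximum likelihood estimation for two standard severity models, assuming complete individual loss data (the sample losses below are placeholders, and the example may fit a different model or use a different method).  The exponential MLE is the sample mean; the lognormal MLEs are the mean and the divide-by-n standard deviation of the logged losses.

from math import log, sqrt

def fit_exponential(data):
    """MLE of the exponential mean theta: just the sample mean."""
    return sum(data) / len(data)

def fit_lognormal(data):
    """MLEs (mu, sigma) for a lognormal: sample mean and maximum-likelihood
    (divide-by-n) standard deviation of the logged observations."""
    logs = [log(x) for x in data]
    mu = sum(logs) / len(logs)
    sigma = sqrt(sum((v - mu) ** 2 for v in logs) / len(logs))
    return mu, sigma

if __name__ == "__main__":
    losses = [125.0, 430.0, 80.0, 2250.0, 610.0]   # placeholder loss data
    print(fit_exponential(losses))
    print(fit_lognormal(losses))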

 

EXCEL Example for Panjer Recursion (try convolution on this one first!)
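
If you want to check the spreadsheet against code, here is a minimal Python sketch of Panjer's recursion for a compound Poisson, for which the (a, b, 0) parameters are a = 0 and b = λ; the λ and severity probabilities below are placeholders, not the values in the example.

from math import exp

def panjer_compound_poisson(lam, f, s_max):
    """Aggregate pmf g[0..s_max] for a compound Poisson with rate lam and
    discrete severity pmf f on 0, 1, 2, ... (Panjer: a = 0, b = lam)."""
    g = [0.0] * (s_max + 1)
    g[0] = exp(lam * (f[0] - 1.0))      # P_N(f_0) for the Poisson pgf
    for s in range(1, s_max + 1):
        total = 0.0
        for y in range(1, min(s, len(f) - 1) + 1):
            total += y * f[y] * g[s - y]
        g[s] = lam * total / s
    return g

if __name__ == "__main__":
    severity = [0.0, 0.5, 0.3, 0.2]     # placeholder pmf on {0, 1, 2, 3}
    print([round(p, 5) for p in panjer_compound_poisson(2.0, severity, 10)])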

 

Stop-Loss Example    Stop Loss Example Spreadsheet 
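
A minimal Python sketch of the net stop-loss premium calculation on an integer lattice, assuming the aggregate pmf has already been computed (by convolution or Panjer recursion); this is not necessarily the layout used in the spreadsheet.  It uses the recursion E[(S-d-1)_+] = E[(S-d)_+] - (1 - F_S(d)), starting from E[(S-0)_+] = E[S].

def stop_loss_premiums(g, d_max):
    """Net stop-loss premiums E[(S-d)+] for d = 0, 1, ..., d_max, given the
    aggregate pmf g on 0, 1, 2, ...; steps down one deductible at a time."""
    premiums = [sum(s * p for s, p in enumerate(g))]   # E[(S-0)+] = E[S]
    cdf = 0.0
    for d in range(d_max):
        cdf += g[d] if d < len(g) else 0.0             # F_S(d)
        premiums.append(premiums[-1] - (1.0 - cdf))    # E[(S-d-1)+]
    return premiums

if __name__ == "__main__":
    aggregate = [0.4, 0.3, 0.2, 0.1]    # placeholder aggregate pmf on {0,...,3}
    print([round(x, 4) for x in stop_loss_premiums(aggregate, 4)])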

 

Ruin Theory I   Ruin Theory II

 

Example of Compound Geometric and Panjer Recursion For Ruin Probabilities
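
For orientation on the spreadsheet, a minimal Python sketch of the same compound geometric (Beekman) idea, worked for an exponential severity so the answer can be checked against the closed form ψ(u) = e^(-θu/((1+θ)µ))/(1+θ) for the classical compound Poisson surplus process.  The mean µ, loading θ, span h, and grid size below are placeholders, and the discretization (method of rounding) may differ from the one used in the example.

from math import exp

def discretize_equilibrium_exponential(mu, h, n):
    """Method-of-rounding discretization of the equilibrium (integrated-tail)
    distribution of an exponential severity with mean mu; for the exponential,
    the equilibrium distribution is again exponential with mean mu."""
    def cdf(x):
        return 1.0 - exp(-x / mu) if x > 0 else 0.0
    f = [cdf(h / 2.0)]                                  # mass placed at 0
    for k in range(1, n + 1):
        f.append(cdf((k + 0.5) * h) - cdf((k - 0.5) * h))
    return f

def ruin_probabilities(theta, f_e, n):
    """Beekman's compound geometric representation: 1 - psi(u) is the df of a
    geometric (beta = 1/theta) compound sum of equilibrium severities, computed
    here with Panjer's recursion (a = beta/(1+beta), b = 0)."""
    beta = 1.0 / theta
    a = beta / (1.0 + beta)
    g = [1.0 / (1.0 + beta * (1.0 - f_e[0]))]           # geometric pgf at f_e[0]
    for s in range(1, n + 1):
        total = sum(f_e[y] * g[s - y] for y in range(1, min(s, len(f_e) - 1) + 1))
        g.append(a * total / (1.0 - a * f_e[0]))
    psi, cdf = [], 0.0
    for gk in g:
        cdf += gk
        psi.append(1.0 - cdf)                           # approximates psi(k*h)
    return psi

if __name__ == "__main__":
    mu, theta, h, n = 1.0, 0.25, 0.01, 2000             # placeholder parameters
    f_e = discretize_equilibrium_exponential(mu, h, n)
    psi = ruin_probabilities(theta, f_e, n)
    u = 5.0
    exact = exp(-theta * u / ((1.0 + theta) * mu)) / (1.0 + theta)
    print(round(psi[int(u / h)], 4), round(exact, 4))   # should agree closely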

 

Cumulative Assignments (Most recent on top) (Final)

Study the Two Ruin Theory Notes above and the spreadsheet example for ruin probabilities

Sec. 11.1-11.4 and exerc. 11.1-11.3, 11.6-11.7, 11.9-11.18

Sec. 10.1-10.2

Study the Stop-Loss Example and Spreadsheet above and be able to do such problems independently

Study the EXCEL examples and distribution fitting examples above and be able to do such calculations independently.

Sec. 9.8-9.12 and exerc. 9.47-9.65, 9.67-9.69

Sec. 9.1-9.7 and exerc. 9.1-9.36

Sec. 6.10-6.13 and 8.6.  Exer. 6.20-6.28, 6.32, 8.29-8.34

Sec. 6.7-6.9 and exerc. 6.10-6.19

Sec. 6.1-6.6 and exer. 6.1-6.9, and use Faà's formula to calculate the first 4 raw and central moments of the Poisson, Neg. Binomial, and Binomial distributions (a sketch of the Poisson case appears at the end of this assignment list)

Validate (comparing formulas is good enough, but the surface interpretation is interesting, so you might want to try it) that if X is a log-logistic then the k-th conditional tail moment distribution of X is a transformed beta (or, when γ=1, a generalized Pareto)

Sec. 5.5 and exerc. 5.27

Sec. 3.4 and 3.5; exer. 3.25-3.37 (beware of some misprints in both the text and the solution manual; see the errata link above!)

Write down a formula for the 3rd moment analogous to Theorem 8.8

Be sure that you can see Theorems 8.3, 8.5, 8.6, 8.7 and 8.8 in terms of the surface interpretation

Sec. 8.1-8.5 and exer. 8.1-8.28 (In chapter 8 try to think in terms of the surface interpretation.  It will simplify everything)

Study the Maximum Entropy paper (download above)

Sec. 5.4 and exer. 5.24-5.26

Sec. 5.1-5.3 and exer. 5.1-5.23 (keep a bookmark in appendix A!)

Sec. 4.1-4.2 and exer. 4.1-4.12

Sec. 3.3 and exer. 3.21-3.24

Sec. 3.1-3.2 and Exer. 3.1-3.20

Ch. 1 & 2 and exer. 2.1-2.5
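
For orientation on the Faà's formula assignment above (Sec. 6.1-6.6), here is a sketch of the Poisson case only; the Neg. Binomial and Binomial cases are left for the assignment.  Write the moment generating function as M(t) = exp[g(t)] with g(t) = λ(e^t - 1), so that every derivative g^(j)(0) = λ.  Faà's formula then gives the n-th raw moment as a sum over all sets {j_k} with ∑ k·j_k = n of n!/(j_1!·(1!)^j_1 ··· j_n!·(n!)^j_n) times λ^(j_1 + ··· + j_n), which works out to E[X] = λ, E[X^2] = λ + λ^2, E[X^3] = λ + 3λ^2 + λ^3, and E[X^4] = λ + 7λ^2 + 6λ^3 + λ^4.  The central moments then follow by expanding about µ = λ: variance λ, third central moment λ, and fourth central moment λ + 3λ^2.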

 

Project Topics: (pick any eight to submit by the end of the semester … topics will be added as we go; per the change in requirements above, only six are now required)

#1 Critique the “proof” given in class that vanishing of y^k·S(y) as y goes to infinity implies existence of the k-th moment (assume non-negative support).

#2 The surface interpretation shows that the size of a stationary population is proportional to the average lifetime (expectation of life at birth).  What does the surface interpretation say about the size of a stably growing population (the rate of births at time t is B_t = B_0·e^(kt) for some constant k)?

#3 Use the surface interpretation (i.e. don’t use an integral anywhere in your work) and a little bit of algebra to write formulas for E[X^j·(X∧d)^k] (or combinations thereof) for as many combinations of j = 1,2,3,4 and k = 1,2,3,4 as you can without using any powers higher than 4 in your answer.  Assume non-negative support.  Hint: compare with project #22.

#4 Develop a purely algebraic method to express E[(X-d)_+^k] in terms of E[X^j] and E[(X∧d)^j] for j ≤ k.  Show how it works for k = 1, 2, 3, 4.  Do not use the surface interpretation (as I do in class) and do not use integration by parts (as the textbook does) in any way, but use only algebra.  Forget that you know anything about the surface interpretation or integration by parts.  In fact, do not use an integral anywhere in your work.  Do NOT use the results from #3; they depended upon the surface interpretation.  Assume non-negative support.

#5 Make three dimensional visual illustrations for the surface interpretation, including 2nd and 3rd moments and the relation of e(d), e_2(d), and e_3(d) to E[X], E[X^2], E[X^3], E[X∧d], E[(X∧d)^2], and E[(X∧d)^3].  Assume non-negative support.

#6 For a continuous random variable X with non-negative support define a function L_X(u) by L_X(u) = E[X∧u].  Find an expression for ∫_0^y L_X(u) du in terms of E[X∧y] and E[(X∧y)^2].  Prove that ∫_0^∞ S_X(u)·e_X(u) du = (1/2)·E[X^2].  Explain in words what that integral is telling us.  Explain what (1/2)·E[X^2]/E[X] expresses.

#7 Prove Faà's formula.

#8 Derive the Euler-Lagrange Differential Equation (rigorously) and explain in words and/or pictures why it is believable (intuitively). 

#9 Working intuitively (rigorously would be hard to impossible), try to develop something like the “no special treatment for any one value or set of values” concept using arithmetic rather than geometric averages of the density values.  Show how it breaks down (fails to work or results in infinite answers) at some point in both the discrete and the continuous case.  Conclude that the geometric average of the density values, leading to the maximum entropy principle, is the correct way to implement the concept of “no special treatment for any one value or set of values.”

#10 Derive the Laplace distribution using a system of two Euler-Lagrange differential equations in two unknown functions f_1 and f_2, each one supported on only one side of the mean.

#11 Attach your name to our “no name” random variable by finding the probability density function for the distribution with maximum entropy on -∞ to ∞ subject to the constraints: (I) it is a probability density, (II) the integral of (x-µ)f(x)dx over -∞ to ∞ is 0, and (III) it has tail constraint function d(x) = ln{ln[e^((x-µ)/b) + e^(-(x-µ)/b)]}, i.e. the integral of d(x)f(x)dx over -∞ to ∞ is one.

#12 Work out the definitions and properties (i.e. Appendix A) of a family of severity distributions analogous to the transformed beta family, but based upon transformations of the log-Laplace distribution rather than the log-logistic.

#13 Work out the definitions and properties (i.e. Appendix A) of a family of severity distributions analogous to the transformed beta family, but based upon transformations of the lognormal distribution rather than the log-logistic.

#14 Work out the definitions and properties (i.e. Appendix A) of a family of severity distributions analogous to the transformed beta family, but based upon transformations of the no-name distribution (from #11) rather than the log-logistic.

#15 (speculative) Work out (by working backwards) what constraints in a maximum entropy derivation correspond to each member of the transformed gamma and transformed beta families.

#16 (speculative) Work out (by working backwards) what constraints in a maximum entropy derivation correspond to the exponential-exponential distribution (the one with density e^x·e^(-e^x)).

#17 Work out what happens in the transformed beta and transformed gamma families if you replace the α-th conditional tail moment distributions with the α-th equilibrium distributions.  How do the resulting distributions differ from the gamma, transformed gamma, generalized Pareto, and transformed beta (that arose from the α-th conditional tail moment distributions)?

#18 Make a three dimensional visual illustration for the relationship below, and write down an interpretation in words.

                                                  [relationship displayed as a formula image on the original page]

#19 Work out the definitions and properties of a family of severity distributions analogous to the transformed beta family, but based upon transformations of the true inverse Gaussian distribution presented in class rather than the log-logistic.

#20 Work out the definitions and properties of a true inverse logistic, inverse logistic and reciprocal inverse logistic family of distributions, analogous to the true inverse Gaussian, inverse Gaussian and reciprocal inverse Gaussian presented in class.

#21 Work out the definitions and properties of a family of severity distributions analogous to the transformed beta family, but based upon transformations of any one (you pick one) of the inverse logistic family of distributions developed in project #20, rather than the log-logistic.

#22 Using the surface interpretation (i.e. don’t use an integral anywhere in your work) and a little algebra, work out formulas for E[X(X∧d)], E[X(X∧d)(X-d)_+], and 4E[X(X∧d)(X-d)_+^2] - 6E[X^2(X∧d)^2].  Assume non-negative support.  Hint: compare with project #3.

#23 Prove (or, if a formal proof eludes you, just illustrate and discuss the connections) that the negative binomial is like a Poisson with contagion; i.e. the negative binomial with parameters (r, βt) gives the number of events in time t if the probability of one event in infinitesimal time t to t+dt, conditional on exactly m events having occurred from time 0 to time t, is equal to (rβ)·((1 + m/r)/(1 + βt))·dt.  Try to make a similar interpretation of the binomial distribution.

#24 Work out the parameter space for the (a,b,2) family of frequency distributions; include an analysis of the distributions on the line r = -1, analogous to the geometric (r = 1 in (a,b,0)) and logarithmic (r = 0 in (a,b,1)) distributions.  Does any kind of interesting series summation arise (analogous to the geometric and logarithmic series)?  Also work out the probability generating function on the line r = -1.

#25 Show (using probability generating functions) that a mixed Poisson distribution with an infinitely divisible mixing distribution is also a compound Poisson, and give two specific examples of the phenomenon.  Explain clearly why the infinite divisibility assumption is needed.

#26 (speculative) We have seen that the Negative Binomial can be the result of a Poisson mixture or of a compound Poisson.  Can the Binomial distribution be the result of a Poisson mixture or of a compound Poisson?  If so, give an example and work out the parameters.  If not, explain what goes wrong.

#27 Prove the Panjer recursion formula for an (a,b,1) primary distribution using Faà's formula.

#28 Come up with a spreadsheet (or other programming) algorithm to generate the sets {j_k} with ∑_(k=1 to ∞) k·j_k = n, for n = 0, 1, 2, …  Is this efficient enough to warrant replacing Panjer recursion with direct use of Faà's formula to calculate compound distribution probabilities for (a,b,0) primary distributions?  Note that this would give you a calculation technique any time the probability generating function of the primary distribution is known, whether or not there is a recursive feature to it.  Is this an improvement versus brute-force convolution?

#29 State and prove a usable generalization of Faà's formula for the case of three nested functions.

#30 Develop recursive approximation formulae for E[(X-d)_+^3] in terms of E[(X-(d-h))_+^3], S, and lower moments of (X-d)_+; one formula for the discrete case (stair-step F) and one formula for the continuous case.

#31 Try to copy the development of ruin theory for the compound Poisson process using instead a compound Negative Binomial.   Point out exactly what goes wrong.  (speculative) Can you suggest or follow a way to keep going?