# Lecture 14—Wednesday, October 13, 2010

## Refitting the analysis of covariance model from last time

I refit the frequentist and Bayesian analysis of covariance models we worked with in lecture 13, using the BUGS model definition file (not shown) from lecture 13, saved as ipomopsisbugs.txt.

out2 <- lm(Fruit~Root+Grazing, data=ipomopsis)
#Bayesian analysis
x <- ipomopsis$Root
y <- ipomopsis$Fruit
z <- as.numeric(ipomopsis$Grazing=='Ungrazed')
n <- length(y)
#set directory to location of BUGS file
setwd("C:/Users/jmweiss/Documents/statistics")
#variables needed for bugs and jags functions
ipo.data <- c('y','x','z','n')
ipo.inits <- function() {list(b0=rnorm(1), b1=rnorm(1), b2=rnorm(1), sigma.y=runif(1))}
ipo.parameters <- c("b0","b1","b2","sigma.y")
#for WinBUGS
library(arm)
#run WinBUGS first with 100 iterations as a check
ipo.1 <- bugs(ipo.data, ipo.inits, ipo.parameters, "ipomopsisbugs.txt", n.chains=3, n.iter=100, debug=T)
ipo.1 <- bugs(ipo.data, ipo.inits, ipo.parameters, "ipomopsisbugs.txt", n.chains=3, n.iter=10000, debug=T)
#for JAGS
library(R2jags)
#run jags
ipo.1j <- jags(ipo.data, ipo.inits, ipo.parameters, "ipomopsisbugs.txt", n.chains=3, n.iter=10000)

## Generating trace plots of the Markov chains

Using the argument debug=T of the bugs function causes WinBUGS to pause after the model is fit, allowing us to inspect the graphs and diagnostics WinBUGS has produced before the program shuts down and the numerical results are returned to R. This is not possible with the jags function because JAGS does not provide an interactive interface of its own; trace plots of the Markov chains have to be generated in R.

The thinned Markov chains that are supposed to be samples from the posterior distributions of the parameters are returned to R in three versions. With WinBUGS they are components of the bugs object; with JAGS they are components of the $BUGSoutput component of the jags object. The three versions are the following (fuller explanations can be found in lecture 13).

• In the $sims.matrix component the combined Markov chains (with order randomized) occur as the columns of a matrix. Each column corresponds to a different parameter and the rows are the samples from the posterior distribution.
• In the $sims.list component the combined Markov chains (with order randomized) occur as separate elements of a list. Each list element corresponds to a different parameter and the elements of the list are the samples from the posterior distribution.
• In the $sims.array component the simulation results are returned in the form of a three-dimensional array, i.e., a set of matrices. Each matrix corresponds to a single parameter whose columns are the thinned individual Markov chains with the observations in the order they were obtained.

Of these three only the $sims.array component is useful for generating trace plots of the individual Markov chains. The function below uses the output from jags to plot these chains. By default the jags function returns 1000 observations per chain. The function is written so that if the individual chains contain more than 500 observations, it thins them to roughly one third of their original length. The argument object is the jags model object and parm is the name of the parameter to plot.

jags.plot <- function(object, parm) {
mycol <- 2:4
mymat <- object$BUGSoutput$sims.array[,,parm]
#the next line thins the JAGS output further
if(nrow(mymat)>500) mymat <- mymat[seq(1,nrow(mymat), length=nrow(mymat)/3),]
plot(c(1,nrow(mymat)), range(mymat), type='n', xlab='iteration', ylab=parm)
junk <- sapply(1:3, function(x) lines(1:nrow(mymat), mymat[,x], col=mycol[x]) )
}

We can use the function to produce trace plots of the parameters one at a time or for a set of parameters of interest.

#trace plot for b0
jags.plot(ipo.1j,"b0")
names(ipo.1j$BUGSoutput$sims.list)
[1] "b0"       "b1"       "b2"       "deviance" "sigma.y"
#trace plots for all parameters
par(mfrow=c(2,2))
sapply(c(1:3,5), function(x) jags.plot(ipo.1j, names(ipo.1j$BUGSoutput$sims.list)[x])) -> temp
par(mfrow=c(1,1))

Fig. 1 shows the trace plots of all parameters. In all four cases the chains exhibit good mixing.

Fig. 1  Trace plots of the Markov chains from JAGS output

## Comparing Bayesian and frequentist results

With an entire posterior distribution at our disposal we have to choose what we want to call the Bayesian point estimate. Obvious choices are the mean or the median of the posterior distribution. For symmetrical posterior distributions these should be fairly close. The frequentist maximum likelihood estimate is the value of the parameter that maximizes the likelihood, the parameter value that makes the data we obtained most likely. With an uninformative prior, the posterior distribution should be proportional to the likelihood in which case the maximum likelihood estimate should correspond to the mode of the posterior distribution.

Fig. 2 displays a kernel density estimate (smoothed histogram) of the posterior distribution of the intercept, b0, using the samples of the posterior distribution contained in the $sims.matrix component. Indicated on the graph are the locations of the maximum likelihood estimate (frequentist estimate from lm) as well as the mean, median, and mode of the estimated posterior distribution (Bayesian). Observe that all the estimates are very close to each other.

plot(density(ipo.1$sims.matrix[,"b0"]), main='Posterior distribution', xlab='b0', col='skyblue', lwd=2)
out.density <- density(ipo.1$sims.matrix[,"b0"])
mode.posterior <- out.density$x[out.density$y==max(out.density$y)]
c(coef(out2)[1], median(ipo.1$sims.matrix[,"b0"]), mean(ipo.1$sims.matrix[,"b0"]), mode.posterior)
(Intercept)
  -127.8294   -127.9000   -127.8129   -128.4677
abline(v=mode.posterior, lwd=3, col='grey70')
abline(v=coef(out2)[1], lty=1)
abline(v=median(ipo.1$sims.matrix[,"b0"]), lty=2, col=2)
abline(v=mean(ipo.1$sims.matrix[,"b0"]), lty=2, col='seagreen')
legend('topright', c('MLE', 'posterior median', 'posterior mean', 'posterior mode'), col=c(1, 2, 'seagreen', 'grey70'), lty=c(1,2,2,1), lwd=c(1,1,1,2), bty='n', cex=.9)

Fig. 2  Posterior distribution of the intercept

## Bayesian credibility intervals

The $summary component of the bugs object contains some representative quantiles of the posterior distribution for each parameter. Using these values we can, for instance, construct 95% credibility intervals.

ipo.1$summary[,c('2.5%','97.5%')]
2.5%      97.5%
b0       -149.27500 -108.80750
b1         21.23025   25.99000
b2         29.38150   42.87975
sigma.y     5.55330    8.90250
deviance  263.70000  275.79500

We can also obtain the quantiles directly from the $sims.matrix component of the bugs object. To get them in this fashion we just use the quantile function.

quantile(ipo.1$sims.matrix[,"b0"], c(.025,.975))
2.5%     97.5%
-149.2750 -108.8075

To obtain credibility intervals in this fashion for more than one parameter at a time we can use the sapply function.

sapply(c('b0','b1','b2'), function(x) quantile(ipo.1$sims.matrix[,x], c(.025,.975)))
             b0       b1       b2
2.5%  -149.2750 21.23025 29.38150
97.5% -108.8075 25.99000 42.87975

To compare these results to the frequentist's 95% confidence intervals, we need to extract information from the corresponding lm model object, out2, and the summary object constructed from that model.

out2.sum <- summary(out2)
names(out2.sum)
 [1] "call"          "terms"         "residuals"
 [4] "coefficients"  "aliased"       "sigma"
 [7] "df"            "r.squared"     "adj.r.squared"
[10] "fstatistic"    "cov.unscaled"

The $coefficients component of the summary object has the estimates we need. The first column contains the parameter estimates while the second column contains the standard errors.

out2.sum$coefficients
Estimate Std. Error   t value     Pr(>|t|)
(Intercept)     -127.82936   9.664095 -13.22725 1.349804e-15
Root              23.56005   1.148771  20.50892 8.408231e-22
GrazingUngrazed   36.10325   3.357396  10.75335 6.107286e-13

The degrees of freedom for the t-statistic are the residual degrees of freedom for the model.

names(out2)
[1] "coefficients"  "residuals"      "effects"
[4] "rank"          "fitted.values"  "assign"
[7] "qr"            "df.residual"    "contrasts"
[10] "xlevels"       "call"           "terms"
[13] "model"

out2$df.residual
[1] 37

The basic formula for a 95% confidence interval for a parameter θ is the following: estimate ± t(0.975, df) × standard error, where df is the residual degrees of freedom of the model.

Using this I calculate the confidence intervals and then transpose the result so that things are arranged in the same way as in the credibility intervals shown above.

#frequentist confidence intervals
t(cbind(out2.sum$coefficients[,1] - qt(.975,out2$df.residual)*out2.sum$coefficients[,2], out2.sum$coefficients[,1] + qt(.975,out2$df.residual)*out2.sum$coefficients[,2]))
(Intercept)      Root GrazingUngrazed
[1,]  -147.4107 21.23242         29.30052
[2,]  -108.2480 25.88768         42.90598

#Bayesian credibility intervals
sapply(c('b0','b1','b2'), function(x) quantile(ipo.1$sims.matrix[,x], c(.025,.975)))
             b0       b1       b2
2.5%  -149.2750 21.23025 29.38150
97.5% -108.8075 25.99000 42.87975

The intervals are similar.

## Choosing uninformative priors in Bayesian models

As we saw last time, when supposedly uninformative priors are too informative they can affect the estimates we obtain. On the other hand, BUGS can become computationally unstable when uninformative priors allow parameters to have too wide a range. In WinBUGS this can yield an error message such as "cannot bracket slice for node ...", indicating that the current prior is too diffuse. The best solution is to always rescale the variables in the model so that all the regression parameters have the same limited range. A good rule is to rescale things so that parameter values lie in the interval –10 to 10. When this is done the same uninformative priors will work for every model and no further adjustments need to be made to the priors.
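As a quick illustration of this rule (a sketch with made-up data, not from the original lecture): standardizing a predictor multiplies its regression slope by the predictor's standard deviation, which is usually enough to bring coefficients into a modest range.

```r
#Sketch: standardizing a predictor rescales its slope (hypothetical data)
set.seed(42)
x <- rnorm(100, mean=500, sd=100)   #predictor on a large scale
y <- 0.02*x + rnorm(100)            #true slope is tiny on the raw scale
raw.slope <- coef(lm(y~x))["x"]
#center and scale the predictor
x.std <- (x - mean(x))/sd(x)
std.slope <- coef(lm(y~x.std))["x.std"]
#the standardized slope equals the raw slope times sd(x)
round(c(raw.slope, std.slope, raw.slope*sd(x)), 4)
```

Here the standardized slope is roughly 2 rather than 0.02, comfortably inside the (–10, 10) range, so the same diffuse priors can be reused without adjustment.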

### Uninformative priors for regression parameters

Without prior information about a regression parameter, the usual approach is to assume that a regression parameter has a normal distribution with a mean of zero (representing no relationship with the response) and a very large variance. This choice yields a distribution that is as close to being a flat prior as is possible when dealing with the entire real number line and leaves the regression parameter essentially unrestricted. Because the normal distribution in the BUGS language is parameterized in terms of precision rather than variance, a large variance is obtained by specifying a very small precision. Fig. 3 shows four normal priors with decreasing levels of precision (increasing variance). Notice that as the precision is decreased each normal distribution is flatter than the one before. On the scale of the most precise normal prior, dnorm(0, .01), the least precise normal prior, dnorm(0, .00001), appears to be essentially flat.

curve(dnorm(x, 0, sqrt(1/.01)), from=-200, to=200, col=1, ylab='Density', xlab='x')
#add the flatter priors with precisions .001, .0001, .00001
junk <- sapply(2:4, function(i) curve(dnorm(x, 0, sqrt(1/10^-(i+1))), add=TRUE, col=i))
legend('topleft', c('dnorm(0,.01)', 'dnorm(0,.001)', 'dnorm(0,.0001)', 'dnorm(0,.00001)'), col=1:4, lty=1, bty='n', cex=.9, title=expression(bold('Normal priors')))

Fig. 3  The effect of varying the precision parameter of normal priors

### Uninformative priors for precision parameters

The following line is commented out in the BUGS program file we used in lecture 13.

tau.y~dgamma(.001,.001)

This gives an "uninformative" prior for the precision parameter of a normal distribution and is the prior that typically appears in textbooks on Bayesian modeling. In general if Y ~ dgamma(a,b) then Y has mean a/b and variance a/b^2. The two parameters of the gamma distribution used in the BUGS parameterization are referred to as the shape and rate parameters in the R parameterization. Fig. 4 illustrates how changing these two parameters affects the shape of the gamma distribution.
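The mean and variance formulas are easy to verify numerically (a small check, not part of the original notes); note that BUGS's dgamma(a,b) corresponds to R's shape and rate parameters.

```r
#Gamma(shape=a, rate=b) has mean a/b and variance a/b^2
a <- .001; b <- .001
c(mean=a/b, variance=a/b^2)   #mean 1, variance 1000
#Monte Carlo check of the mean using R's parameterization
set.seed(1)
mean(rgamma(1e6, shape=a, rate=b))   #close to the theoretical mean of 1
```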

#gamma priors
curve(dgamma(x,.001,.001), from=0, to=10, ylab='Density', xlab='x')
curve(dgamma(x,.1,.1), add=TRUE, col=2)
curve(dgamma(x,.01,.01), add=TRUE, col=3)
curve(dgamma(x,.01,.1), add=TRUE, col=4)
legend('center', c('dgamma(.001,.001)', 'dgamma(.1,.1)', 'dgamma(.01,.01)', 'dgamma(.01,.1)'), col=1:4, bty='n', cex=.9, lty=1, title=expression(bold('Gamma priors')))

Fig. 4  The effect of varying parameter values on the gamma prior

The prior dgamma(.001,.001) is a gamma distribution with a mean of 1 and a variance of 1000. From the graph this resembles a classic dust bunny distribution (McCune & Grace 2002). Nearly all of the weight is placed at zero but because of the large variance, large values are also possible. Without the large spike at zero, the prior is essentially flat. Because this is a prior for a precision parameter, a large weight at zero is not considered a problem. In using such a prior for the precision we're saying that we have no information about the parameter that has this precision.

The primary rationale for using a gamma prior is that it is a conjugate prior for the precision in a normal model with a known mean. (Technically when the normal priors for the regression coefficients are independent of the gamma prior for the precision, as they are here, we call them conditionally conjugate or semi-conjugate priors for the normal mean and precision.) Using a conjugate prior yields a posterior distribution of known form. While historically this was essential for getting estimates analytically, with MCMC it is no longer considered necessary. In the BUGS program of lecture 13 instead of modeling the precision tau.y directly we chose instead to place a uniform prior on sigma.y , the standard deviation. This is also the approach taken by Gelman & Hill (2006) and Jackman (2009).

tau.y <- pow(sigma.y,-2)
sigma.y~dunif(0,10000)

In BUGS models gamma priors for precisions tend to cause numerical problems more often than do uniform priors for standard deviations. Furthermore the spike at the origin can make it difficult to get variance estimates in some of the multilevel problems we'll be considering later. A gamma prior that is flat over most of its distribution but is quite informative around zero causes problems if the true value of the precision parameter is actually close to zero. Consequently I will rarely use a gamma prior for the precision in this course.
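The informative spike at the origin is easy to see directly (a quick sketch): the dgamma(.001,.001) density is enormous near zero relative to its value everywhere else, which is exactly what causes trouble when the true precision is small.

```r
#the dgamma(.001,.001) prior is far from flat near the origin
x.near <- c(1e-4, 1e-3, 1e-2)
x.far <- c(1, 5, 10)
dgamma(x.near, .001, .001)   #density grows without bound as x approaches 0
dgamma(x.far, .001, .001)    #orders of magnitude smaller away from 0
```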

## ANOVA as a Bayesian model

As a second example of fitting a Bayesian model, I reconsider the analysis of variance model that was developed in lecture 2 and examined further in lecture 5. The following code creates the necessary variables and refits the various models.

#extract variables
prelim <- as.character(test.dat$var)
treatment <- substring(prelim,6,9)
response <- as.numeric(substring(prelim, 10, nchar(prelim)))
#create factors
fac1 <- factor(substring(treatment,1,2))
fac2 <- factor(substring(treatment,3,3))
fac3 <- factor(substring(treatment,4,4))
#fit model
out1 <- lm(response~fac1*fac2*fac3)
out2 <- update(out1,.~.-fac1:fac2:fac3)
out3 <- update(out2,.~.-fac1:fac3)
out4 <- update(out3,.~.-fac1:fac2)
summary(out4)

Call:
lm(formula = response ~ fac1 + fac2 + fac3 + fac2:fac3)

Residuals:
Min       1Q   Median      3Q     Max
-0.69537 -0.12802 -0.01759 0.12839 1.09991

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  3.35409    0.04442  75.514  < 2e-16 ***
fac1No       0.55946    0.03844  14.553  < 2e-16 ***
fac1Ru       0.62461    0.04079  15.314  < 2e-16 ***
fac2S        0.06247    0.04954   1.261  0.20857
fac32        0.09988    0.04833   2.067  0.03988 *
fac2S:fac32  0.18046    0.06500   2.776  0.00595 **
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.247 on 233 degrees of freedom
Multiple R-Squared: 0.6148, Adjusted R-squared: 0.6066
F-statistic: 74.38 on 5 and 233 DF, p-value: < 2.2e-16

The final model includes factor 1 as a main effect and the interaction of factor 2 and factor 3 along with their main effects. I generate the variables needed for the WinBUGS run. Interaction terms are created by multiplying the corresponding dummy variables together.

y <- response
z2 <- as.numeric(fac2)-1
z3 <- as.numeric(fac3)-1
z12 <- as.numeric(fac1=='No')
z13 <- as.numeric(fac1=='Ru')
z2z3 <- z2*z3
n <- length(y)

The WinBUGS model definition file is shown below and is saved as tadpolebugs.txt. Because the parameter estimates are all less than 10 in absolute value, and most are less than 1, the precision of the uninformative normal priors does not need to be made as small as it was in the previous problem.

model{
for(i in 1:n) {
y[i]~dnorm(y.hat[i],tau.y)
y.hat[i] <- b0 + b1*z12[i] + b2*z13[i] + b3*z2[i] + b4*z3[i] + b5*z2z3[i]
}
b0~dnorm(0,.00001)
b1~dnorm(0,.00001)
b2~dnorm(0,.00001)
b3~dnorm(0,.00001)
b4~dnorm(0,.00001)
b5~dnorm(0,.00001)
tau.y <- pow(sigma.y,-2)
sigma.y~dunif(0,10000)
}

The remaining lines set up the WinBUGS run.

tad.inits <- function() {list(b0=rnorm(1),b1=rnorm(1), b2=rnorm(1), b3=rnorm(1), b4=rnorm(1), b5=rnorm(1), sigma.y=runif(1))}
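The rest of the setup was omitted from the notes; a reconstruction following the ipomopsis pattern above would look like the following. The names tad.data and tad.parameters, and the object name tad.1 (which matches the name used later in this lecture), are my assumptions.

```r
#variables needed for the bugs function (a reconstruction, not the original lines)
tad.data <- c('y','z12','z13','z2','z3','z2z3','n')
tad.parameters <- c("b0","b1","b2","b3","b4","b5","sigma.y")
#requires WinBUGS; run as in the ipomopsis example
tad.1 <- bugs(tad.data, tad.inits, tad.parameters, "tadpolebugs.txt", n.chains=3, n.iter=10000, debug=T)
```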

Here's the alternative to the last line for running the model in JAGS.
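The JAGS line itself is missing from the notes; following the ipomopsis example it would presumably be the call below (tad.data, tad.inits, and tad.parameters as defined for the WinBUGS run).

```r
#run jags (a reconstruction; requires JAGS and the R2jags package)
tad.1j <- jags(tad.data, tad.inits, tad.parameters, "tadpolebugs.txt", n.chains=3, n.iter=10000)
```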

Fig. 5 shows the trace plots of all parameters. In all seven cases the chains exhibit good mixing.

oldmar <- par("mar")
par(mar=c(5.1,4.1,1.1,2.1))
par(mfrow=c(3,3))
#using JAGS output for convenience
names(tad.1j$BUGSoutput$sims.list)
[1] "b0"       "b1"       "b2"       "b3"       "b4"       "b5"       "deviance" "sigma.y"
#trace plots for all parameters except the deviance
sapply(c(1:6,8), function(x) jags.plot(tad.1j, names(tad.1j$BUGSoutput$sims.list)[x])) -> temp
par(mfrow=c(1,1))
par(mar=oldmar)

Fig. 5  Trace plots of the Markov chains for all model parameters

Examining the WinBUGS output we find the following.

mean         sd        2.5%      25%      50%        75%      97.5%     Rhat n.eff
b0        3.3549511 0.04614818  3.26802500 3.323000  3.35450  3.3860000  3.4469750 1.002987   700
b1        0.5596015 0.03845099  0.48201000 0.534100  0.55980  0.5856250  0.6348950 1.000704  1000
b2        0.6244328 0.04084198  0.54461000 0.597000  0.62560  0.6526750  0.7031000 1.001223  1000
b3        0.0612567 0.05088419 -0.04097350 0.026570  0.06275  0.0949875  0.1575200 1.004073   440
b4        0.0993939 0.04905849  0.00147055 0.065265  0.09904  0.1318000  0.1976725 1.001939   820
b5        0.1816551 0.06803408  0.04661850 0.135775  0.18025  0.2279750  0.3139000 1.002621  1000
sigma.y   0.2481149 0.01204706  0.22630500 0.239825  0.24730  0.2560000  0.2733950 1.003287   650
deviance 10.9467705 3.67814814  5.58592203 8.258250 10.32500 13.0000000 20.0479961 1.003879  1000

The Rhat statistics are all less than 1.1 and the effective sample sizes (n.eff) are large. This, coupled with the good mixing of the chains in Fig. 5, suggests that we are indeed sampling from the posterior distributions. Using these posterior distributions I obtain 95% credibility intervals for the regression parameters and compare them to the frequentist 95% confidence intervals. For the confidence intervals I need to extract information from the model object, out4 above, and the model's summary table. The basic formula for a 95% confidence interval for a parameter θ is the same as before: estimate ± t(0.975, df) × standard error.

The regression coefficients and their standard errors are contained in the model summary table. The degrees of freedom for the t-statistic are the residual degrees of freedom and can be extracted from the model object itself. We went through the details above so I just show the calculations.

out4.sum <- summary(out4)
t(cbind(out4.sum$coefficients[,1] - qt(.975,out4$df.residual)*out4.sum$coefficients[,2], out4.sum$coefficients[,1] + qt(.975,out4$df.residual)*out4.sum$coefficients[,2]))

(Intercept)   fac1No   fac1Ru      fac2S      fac32 fac2S:fac32
[1,]     3.26658 0.483717 0.544252 -0.0351347 0.00466036   0.0523897
[2,]     3.44160 0.635199 0.704971  0.1600784 0.19510839   0.3085327

The corresponding Bayesian 95% credibility intervals, obtained from the $sims.matrix component as before, are the following.

           b0       b1       b2         b3         b4        b5
2.5%  3.26803 0.487605 0.546523 -0.0405645 0.00282245 0.0587728
97.5% 3.44298 0.631397 0.698880  0.1598200 0.19125000 0.3189925

## Obtaining credibility intervals for ANOVA treatment means

In lecture 5 we used the effects package to obtain standard errors and confidence intervals for treatment means and then in lecture 6 we saw how these could also be obtained by hand. The methods we covered there only work for linear combinations of parameters. For nonlinear combinations of parameters the methodology required is far more complicated and the best we can do is obtain approximate solutions.

One of the truly attractive features of Bayesian analysis is how easy it is to obtain interval estimates of any quantity of interest. To find the posterior distribution for any quantity that is a function of parameters in the model, we just apply that same function to the Markov chains used to produce the posterior distributions of those parameters. With BUGS or JAGS we can do this either by defining these derived quantities in the BUGS program and then including them in our parameter list in the final run, or by applying the function to the posterior distributions of the parameters contained in the $sims.list or $sims.matrix components that were returned.
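The principle is worth seeing in miniature (a self-contained sketch in which synthetic draws stand in for real MCMC output): to get the posterior of any function of the parameters, apply that function draw by draw and then summarize the result.

```r
#synthetic posterior samples standing in for MCMC output (rnorm draws,
#not real chains; with a real fit these would come from $sims.list)
set.seed(123)
b0 <- rnorm(1000, 3.35, 0.046)
b1 <- rnorm(1000, 0.56, 0.038)
#posterior of the derived quantity b0 + b1 (a treatment mean):
#apply the function to the draws, then take quantiles
derived <- b0 + b1
quantile(derived, c(.025, .5, .975))
#the same recipe works for nonlinear functions, e.g. exp(b0 + b1)
quantile(exp(derived), c(.025, .975))
```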

To illustrate the first method I add to the BUGS program twelve derived quantities that correspond to the treatment means for the twelve treatment groups. The formulas used for this come from the regression equation with appropriate values plugged in for the dummy variables. I save this new WinBUGS model definition file as tadpolemod2.txt.

model {
#likelihood
for(i in 1:n) {
y[i]~dnorm(y.hat[i],tau.y)
y.hat[i] <- b0 + b1*z12[i] + b2*z13[i] + b3*z2[i] + b4*z3[i] + b5*z2z3[i]
}
#priors
b0~dnorm(0,.000001)
b1~dnorm(0,.000001)
b2~dnorm(0,.000001)
b3~dnorm(0,.000001)
b4~dnorm(0,.000001)
b5~dnorm(0,.000001)
tau.y <- pow(sigma.y,-2)
sigma.y~dunif(0,10000)
#treatment means
mu.Co.D.1 <- b0
mu.No.D.1 <- b0+b1
mu.Ru.D.1 <- b0+b2
mu.Co.S.1 <- b0+b3
mu.No.S.1 <- b0+b1+b3
mu.Ru.S.1 <- b0+b2+b3
mu.Co.D.2 <- b0+b4
mu.No.D.2 <- b0+b1+b4
mu.Ru.D.2 <- b0+b2+b4
mu.Co.S.2 <- b0+b3+b4+b5
mu.No.S.2 <- b0+b1+b3+b4+b5
mu.Ru.S.2 <- b0+b2+b3+b4+b5
}

To avoid refitting the model from scratch we can use the stopping point of the last run, tad.1 for WinBUGS or tad.1j for JAGS, as the starting point for a new run. This is not particularly helpful here because the model fits so rapidly, but it can be very useful for complicated models and large data sets that take hours (or even days) to run before the convergence diagnostics look good. The last values of each Markov chain for each parameter are contained in the $last.values component of the bugs object or the $BUGSoutput$last.values component of the jags object.

I modify the earlier parameter list so that it includes the treatment means and rerun the model using the $last.values component for the initial values. Given that the model diagnostics are already good I don't bother with a real burn-in period, so I set the n.burnin argument to a small value (it cannot be set to zero). With n.iter=20100 and n.burnin=100 we get 20,000 values from each Markov chain, which are then thinned so that we have a total of 334 observations from each chain (WinBUGS) or 1000 observations from each chain (JAGS).

tad.parameters2 <- c("b0","b1","b2","b3","b4","b5","sigma.y", "mu.Co.D.1", "mu.No.D.1", "mu.Ru.D.1", "mu.Co.S.1", "mu.No.S.1", "mu.Ru.S.1", "mu.Co.D.2", "mu.No.D.2", "mu.Ru.D.2", "mu.Co.S.2", "mu.No.S.2", "mu.Ru.S.2")
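The rerun itself is not shown in the notes; a reconstruction using the $last.values component for the initial values would look like the following (the object name tad.2 and the exact call are my assumptions).

```r
#rerun WinBUGS starting where the last run stopped (a reconstruction; requires WinBUGS)
tad.2 <- bugs(tad.data, tad.1$last.values, tad.parameters2, "tadpolemod2.txt", n.chains=3, n.iter=20100, n.burnin=100, debug=T)
```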

The last line using JAGS would be the following.
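The JAGS line is also missing; presumably it mirrors the WinBUGS call, with the initial values taken from the $BUGSoutput$last.values component (the object name tad.2j is my assumption).

```r
#JAGS version (a reconstruction; requires JAGS)
tad.2j <- jags(tad.data, tad.1j$BUGSoutput$last.values, tad.parameters2, "tadpolemod2.txt", n.chains=3, n.iter=20100, n.burnin=100)
```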

I extract the 95% credibility intervals for each of the treatment means.
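The extraction command is missing from the notes; assuming the rerun WinBUGS object is called tad.2 (the notes never name it), something like the following would produce the table below.

```r
#95% credibility intervals for the twelve treatment means (a reconstruction)
tad.2$summary[grep("mu", rownames(tad.2$summary)), c('2.5%','97.5%')]
```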

2.5%    97.5%
mu.Co.D.1 3.265025 3.443000
mu.No.D.1 3.829000 3.997000
mu.Ru.D.1 3.887000 4.067975
mu.Co.S.1 3.341000 3.491975
mu.No.S.1 3.896025 4.048975
mu.Ru.S.1 3.962025 4.124000
mu.Co.D.2 3.376050 3.535950
mu.No.D.2 3.939000 4.084975
mu.Ru.D.2 4.002000 4.154975
mu.Co.S.2 3.624000 3.772000
mu.No.S.2 4.180050 4.331000
mu.Ru.S.2 4.250025 4.403000

Alternatively, we really didn't need to refit the model and include the treatment means as derived quantities in the model definition file. Instead we could have worked with the posterior distributions we already had. For instance, to obtain a 95% credibility interval for the last treatment mean (fac1 = Ru, fac2 = S, fac3 = 2) we can just apply the regression formula directly to the separate posterior samples of the model parameters contained in the $sims.list, $sims.matrix, or even the $sims.array components. Here I use the $sims.list component.
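The command itself was omitted; a reconstruction applying the formula mu.Ru.S.2 = b0 + b2 + b3 + b4 + b5 from the model file (tad.2 is my assumed name for the rerun model object):

```r
#95% credibility interval for mu.Ru.S.2 from the separate posterior samples
quantile(with(tad.2$sims.list, b0 + b2 + b3 + b4 + b5), c(.025, .975))
```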

2.5%    97.5%
4.250344 4.402943

Alternatively we can use the $sims.matrix component.
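A reconstruction of the omitted command, summing the relevant columns of the matrix (again assuming the rerun object is called tad.2):

```r
#same interval using the columns of the $sims.matrix component
quantile(rowSums(tad.2$sims.matrix[, c('b0','b2','b3','b4','b5')]), c(.025, .975))
```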

2.5%    97.5%
4.250344 4.402943

These estimates are not exactly the same as what we get if we calculate the quantiles of the mu.Ru.S.2 column of the $sims.matrix component directly.
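The direct calculation, reconstructed (this uses the run that monitored the treatment means, again assumed to be called tad.2):

```r
#quantiles of the monitored derived quantity itself
quantile(tad.2$sims.matrix[,'mu.Ru.S.2'], c(.025, .975))
```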

2.5%    97.5%
4.250025 4.403000

This last interval is what is also reported in the $summary component. The differences between the two methods apparently arise because bugs returns output from WinBUGS using only four significant digits; this rounding affects the two calculations differently. It's worth noting that this doesn't happen with output from jags, where the two methods agree exactly.

#jags: regression formula applied to the posterior samples
2.5%    97.5%
4.249478 4.398274
#jags: quantiles of the mu.Ru.S.2 column directly
2.5%    97.5%
4.249478 4.398274

## Cited references

• Gelman, Andrew and Jennifer Hill. 2006. Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge University Press: New York.
• Jackman, Simon. 2009. Bayesian Analysis for the Social Sciences. Wiley.
• McCune, Bruce and James Grace. 2002. The Analysis of Ecological Communities. MjM Software Design.

## R Code

A compact collection of all the R code displayed in this document appears here.

 Jack Weiss Phone: (919) 962-5930 E-Mail: jack_weiss@unc.edu Address: Curriculum in Ecology, Box 3275, University of North Carolina, Chapel Hill, 27516 Copyright © 2010 Last Revised--October 23, 2010 URL: http://www.unc.edu/courses/2010fall/ecol/563/001/docs/lectures/lecture14.htm