R: Trying out RStan (2) [Statistics]
Next, let's work through the example from Chapter 9 of Kubo's book.
## Kubo Book Chapter 9
library(rstan)

## load data
load(url("http://hosho.ees.hokudai.ac.jp/~kubo/stat/iwanamibook/fig/gibbs/d.RData"))

model <- '
data {
  int<lower=0> N;     // number of data
  real X[N];          // explanatory variable
  real MeanX;         // mean of X
  int<lower=0> Y[N];  // response variable
}
parameters {
  real beta1;
  real beta2;
}
transformed parameters {
  real<lower=0> lambda[N];
  for (i in 1:N) {
    lambda[i] <- exp(beta1 + beta2 * (X[i] - MeanX));
  }
}
model {
  for (i in 1:N) {
    Y[i] ~ poisson(lambda[i]);
  }
  beta1 ~ normal(0, 100);
  beta2 ~ normal(0, 100);
}'

data <- list(N = nrow(d), X = d$x, Y = d$y, MeanX = mean(d$x))

fit <- stan(model_code = model, data = data,
            warmup = 100, iter = 1600, chains = 3)
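As a quick sanity check (a minimal sketch, using only the data frame d with columns x and y loaded above), the posterior means of beta1 and beta2 can be compared against a maximum-likelihood fit of the same Poisson GLM with glm():

## ML fit of the same model for comparison
fit.glm <- glm(y ~ I(x - mean(x)), family = poisson(link = "log"), data = d)
coef(fit.glm)  # intercept and slope should be close to beta1 and beta2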
Results
> fit <- stan(model_code = model, data = data,
+             warmup = 100, iter = 1600, chains = 3)

TRANSLATING MODEL 'anon_model' FROM Stan CODE TO C++ CODE NOW.
COMPILING THE C++ CODE FOR MODEL 'anon_model' NOW.
SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
Iteration: 1600 / 1600 [100%]  (Sampling)
SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2).
Iteration: 1600 / 1600 [100%]  (Sampling)
SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3).
Iteration: 1600 / 1600 [100%]  (Sampling)

> print(fit, digits = 3)
Inference for Stan model: anon_model.
3 chains: each with iter=1600; warmup=100; thin=1; 1600 iterations saved.

              mean se_mean    sd    2.5%     25%     50%     75%   97.5% n_eff  Rhat
beta1        1.976   0.002 0.083   1.806   1.923   1.979   2.033   2.134  1420 1.002
beta2        0.084   0.002 0.066  -0.046   0.040   0.084   0.126   0.211  1686 1.000
lambda[1]    6.187   0.026 1.013   4.403   5.499   6.123   6.810   8.383  1491 1.000
lambda[2]    6.285   0.025 0.954   4.562   5.634   6.232   6.878   8.350  1471 1.001
lambda[3]    6.386   0.024 0.895   4.765   5.781   6.348   6.944   8.288  1449 1.001
lambda[4]    6.490   0.022 0.838   4.934   5.917   6.455   7.013   8.246  1427 1.001
lambda[5]    6.597   0.021 0.783   5.132   6.062   6.575   7.090   8.238  1405 1.001
lambda[6]    6.707   0.020 0.732   5.326   6.216   6.671   7.169   8.235  1384 1.001
lambda[7]    6.820   0.019 0.686   5.499   6.350   6.788   7.266   8.240  1298 1.002
lambda[8]    6.936   0.018 0.647   5.669   6.492   6.909   7.357   8.254  1297 1.002
lambda[9]    7.056   0.017 0.618   5.853   6.632   7.046   7.466   8.317  1307 1.002
lambda[10]   7.179   0.016 0.602   6.007   6.788   7.167   7.580   8.391  1403 1.002
lambda[11]   7.305   0.016 0.602   6.116   6.902   7.306   7.698   8.514  1430 1.002
lambda[12]   7.436   0.016 0.618   6.243   7.020   7.439   7.849   8.664  1472 1.002
lambda[13]   7.570   0.017 0.652   6.317   7.119   7.563   8.005   8.881  1533 1.002
lambda[14]   7.707   0.018 0.702   6.364   7.235   7.697   8.175   9.164  1585 1.002
lambda[15]   7.849   0.019 0.768   6.396   7.327   7.846   8.349   9.467  1633 1.001
lambda[16]   7.995   0.021 0.847   6.395   7.423   7.986   8.535   9.764  1671 1.001
lambda[17]   8.146   0.023 0.938   6.381   7.507   8.137   8.724  10.085  1691 1.001
lambda[18]   8.301   0.025 1.040   6.381   7.604   8.268   8.934  10.443  1703 1.001
lambda[19]   8.460   0.028 1.151   6.369   7.670   8.406   9.152  10.832  1710 1.001
lambda[20]   8.624   0.031 1.272   6.356   7.762   8.565   9.392  11.281  1713 1.000
lp__       144.013   0.032 1.004 141.223 143.624 144.336 144.720 144.954   983 1.000

Samples were drawn using NUTS2 at Sat Sep  1 15:32:43 2012.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at
convergence, Rhat=1).
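Beyond print(), the posterior draws can be pulled out with rstan's extract() and summarized directly. A minimal sketch, using nothing beyond standard rstan and base R:

## Posterior draws as plain vectors/matrices
post <- extract(fit)
quantile(post$beta2, probs = c(0.025, 0.5, 0.975))  # 95% credible interval for beta2
mean(post$beta2 > 0)                                # posterior probability that the slope is positive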
> traceplot(fit, inc_warmup = FALSE)
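Besides the trace plot, the fitted relationship can be drawn over the data. A minimal sketch, assuming only d$x and d$y as loaded above; post$lambda is the iterations-by-N matrix returned by extract():

## Overlay the posterior mean of lambda on the raw data
lambda.mean <- colMeans(extract(fit)$lambda)  # posterior mean for each of the N observations
plot(d$x, d$y, xlab = "x", ylab = "y")
lines(sort(d$x), lambda.mean[order(d$x)])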