
はじめての 統計データ分析 ―ベイズ的〈ポストp値時代〉の統計学― Part 6

This is Part 6. This time I will work through the end-of-chapter exercises for Chapter 5.

Chapter 5: Inferring differences among multiple groups with designed experiments

Contents

  • An alternative to the F-test of analysis of variance (ANOVA)
  • Investigating the effects that arise from a factor
    • Factor: an influence represented by discrete categories
    • Level: one of the states a factor can take
    • Number of levels: the number of distinct levels; below, the number of levels of factor A is written a
  • Independent one-factor model

    • Model
      • y_{ij} = \mu_j + e
      •  e \sim Normal(0, \sigma_e)
      • The i-th measurement at the j-th level of the factor arises as the mean effect of the j-th level plus a random draw from a normal distribution
    • Putting this together, the probability of an observation is written f(y_{ij} | \mu_j, \sigma_e)
    • Measurements within a level are assumed to be observed independently
      • The joint probability is  f(y_{j} | \mu_j, \sigma_e) = \Pi_{i = 1}^n f(y_{ij}|\mu_j, \sigma_e)
      • That is, the likelihood is  f(y|\theta) = \Pi_{j = 1}^a f(y_j|\mu_j, \sigma_e)
  • Generated quantities

    • Grand mean:  \mu = \Sigma_{j=1}^a \frac{\mu_j}{a}
      • The simple average of the level means
    • Level effect (eol):  a_j = \mu_j - \mu
      • Each level mean minus the grand mean
      • Note that the level effects sum to zero
    • Proportion explained:  \eta^{2} = \frac{\sigma_{a}^{2}}{\sigma_{y}^{2}}
      • The share of the variance of the measurements accounted for by the variance of the factor; it shows how much of the observations the factor explains
      • If the effects and the errors are mutually independent,  \sigma_{y}^{2} = \sigma_{a}^{2} + \sigma_{e}^{2}
        • That is, the variance of the measurements is the factor variance plus the error variance
      • It satisfies  0 \leq \eta^{2} \leq 1
    • Effect size:  \delta = \frac{\sigma_a}{\sigma_e}
      • Shows how many times larger the between-level standard deviation is than the within-level standard deviation
      •  \sigma_e can be regarded as the average within-level variability, which is unaffected by the factor (a small numerical sketch of these quantities follows after this list)
  • Examples of ways to examine factor effects using the model

    • How large is the effect of a level, and is there an effect at all?
      • Evaluate, as an EAP, the probability that the level effect exceeds a reference value c
    • How large an effect does the factor have?
      • Use the effect size and the proportion explained
    • What happens when levels are compared with each other?
      • Examine the EAP of the difference between the level means
    • How probable is a conjunctive proposition?
      • Conjunctive proposition: several propositions joined by logical AND
      • Examine the EAP of the probability that all of the propositions hold simultaneously
    • Inference about two levels of interest
      • Basically the same as the two-group comparisons done so far
      • However, the data from the other levels are also used, so the posterior is estimated from the whole data set; because the error term is shared across all levels, this gives more accurate inference
  • Independent two-factor model

    • Introduce more than one factor
      • Below, one combination of levels is called a cell
    • Interaction: the phenomenon in which the pattern of high and low means for one factor differs across the levels of the other factor
    • Model
      •  y_{ijk} = \mu  + a_{j} + b_{k} + ab_{jk} + e
      •  e \sim Normal(0, \sigma_e)
      • The i-th measurement in cell (j, k) is composed of the grand mean, plus the effect of the j-th level of factor a, plus the effect of the k-th level of factor b, plus the a × b interaction effect, plus random noise
      • That is, cell mean + noise
    • From their definition relative to the grand mean, the level effects carry the following constraints (formalizing the fact that summing the effects over all levels gives zero)
      •  \Sigma_{j=1}^{a} a_{j} = 0
      •  \Sigma_{k=1}^{b} b_{k} = 0
      •  \Sigma_{j=1}^{a} ab_{jk} = 0 (for each k)
      •  \Sigma_{k=1}^{b} ab_{jk} = 0 (for each j)
    • Likelihood:  f(y|\theta) = \Pi_{j=1}^{a}\Pi_{k=1}^{b} f(y_{jk} | \mu_{jk}, \sigma_e), where \mu_{jk} = \mu + a_{j} + b_{k} + ab_{jk}
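
To make the generated quantities above concrete, here is a minimal numerical sketch (my own illustration, not from the book; the level means and error SD are made-up values) of how the grand mean, the level effects, the proportion explained, and the effect size are computed from a single posterior draw:

import numpy as np

# Hypothetical posterior draw: level means mu_j and error SD sigma_e (made-up values)
mu_a = np.array([5.9, 11.0, 7.9])
sigma_e = 2.7

mu = mu_a.mean()                   # grand mean: simple average of the level means
eol = mu_a - mu                    # level effects a_j = mu_j - mu (they sum to zero)
var_a = np.mean(eol ** 2)          # between-level variance sigma_a^2
var_e = sigma_e ** 2               # error variance sigma_e^2
eta2 = var_a / (var_a + var_e)     # proportion explained eta^2
delta = np.sqrt(var_a) / sigma_e   # effect size delta

print(eol.sum())    # ~0 by construction
print(eta2, delta)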

End-of-chapter exercises

One-factor inference

Getting the data
import numpy as np
import os
from logging import getLogger, Formatter, StreamHandler, DEBUG

# Use a logger instead of print
def get_logger():
    logger = getLogger(__name__)
    logger.setLevel(DEBUG)
    log_fmt = '%(asctime)s : %(name)s : %(levelname)s : %(message)s'
    formatter = Formatter(log_fmt)
    stream_handler = StreamHandler()
    stream_handler.setLevel(DEBUG)
    stream_handler.setFormatter(formatter)
    logger.addHandler(stream_handler)
    return logger
logger = get_logger()

# Prepare the data
ld = np.array([5.02, 6.67, 8.17, 2.79, 8.13, 6.34, 6.32, 3.97])
ll = np.array([9.89, 9.58, 11.20, 9.05, 12.33, 9.39, 10.88, 9.37, 17.40])
dm = np.array([10.20, 7.29, 7.57, 3.42, 5.82, 10.92, 5.21, 13.47, 8.64, 6.05])
y = np.concatenate([ld, ll, dm])
A_ld = np.full(ld.size, 1, dtype=int)
A_ll = np.full(ll.size, 2, dtype=int)
A_dm = np.full(dm.size, 3, dtype=int)
A = np.concatenate([A_ld, A_ll, A_dm])

# Prepare the output directory
result_dir = os.path.join("result", "chapter5")
if not os.path.exists(result_dir):
    os.makedirs(result_dir)
    logger.info("{0} is made".format(result_dir))
Building the statistical model with Stan

I don't think there is anything particularly difficult here. If anything, there is the extra work of computing the variances in order to calculate the proportion explained.

import os
import pystan
import pickle

# Load the Stan model and compile it
stan_file = os.path.join("stan", "e1_ind.stan")
stan_file_c = os.path.join("stan", "e1_ind.pkl")
model = pystan.StanModel(file=stan_file)
with open(stan_file_c, "wb") as f:
    pickle.dump(model, f)
logger.info("Stan model is compiled to {0}".format(stan_file_c))
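
Compilation takes a while, so a pattern I usually prefer (my own convenience, not from the book) is to reuse the pickled model when it already exists and compile only when it does not:

# Reuse the cached compiled model if present; otherwise compile and cache it
if os.path.exists(stan_file_c):
    with open(stan_file_c, "rb") as f:
        model = pickle.load(f)
else:
    model = pystan.StanModel(file=stan_file)
    with open(stan_file_c, "wb") as f:
        pickle.dump(model, f)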

e1_ind.stan

data {
    int<lower=0> n ;
    int<lower=0> a ;
    real<lower=0> y[n] ;
    int<lower=1, upper=3> A[n] ;
}

parameters {
    vector[a] mu_a ;
    real<lower=0> sigma_e ;
}

model {
    for(i in 1:n){
        y[i] ~ normal(mu_a[A[i]], sigma_e) ;
    }
}

generated quantities {
    real mu ;
    vector[a] eol ;
    vector[a] eol2 ;
    vector[a] prob_eol_upper_0 ;
    real var_e ;
    real var_a ;
    real eta2 ;
    real delta ;
    int<lower=0, upper=1> prob_ll_upper_dm ;
    int<lower=0, upper=1> prob_ll_upper_ld ;
    int<lower=0, upper=1> prob_dm_upper_ld ;
    vector[n] log_lik ;

    mu = mean(mu_a) ;
    for(i in 1:a){
        eol[i] = mu_a[i] - mu ;
        eol2[i] = pow(eol[i], 2) ;
        prob_eol_upper_0[i] = eol[i] > 0 ? 1 : 0 ;
    }
    var_a = mean(eol2) ;
    var_e = pow(sigma_e, 2) ;
    eta2 = var_a / (var_a + var_e) ;
    delta = sqrt(var_a) / sigma_e ;
    prob_ll_upper_dm = mu_a[2] - mu_a[3] > 0 ? 1 : 0 ;
    prob_ll_upper_ld = mu_a[2] - mu_a[1] > 0 ? 1 : 0 ;
    prob_dm_upper_ld = mu_a[3] - mu_a[1] > 0 ? 1 : 0 ;
    for(i in 1:n){
        log_lik[i] = normal_lpdf(y[i] | mu_a[A[i]], sigma_e) ;
    }
}
Sampling the posterior distribution with the statistical model
import pandas as pd
import pickle
import pystan
import matplotlib
import matplotlib.pyplot as plt
import os
from IPython.core.display import display
from tabulate import tabulate
matplotlib.rcParams['figure.figsize'] = (10, 50)

# Prepare the data passed to Stan
stan_data = {"n": y.size,
             "a": 3,
             "y": y,
             "A": A}
# Parameters of interest
par = ["mu_a",
       "sigma_e",
       "eol",
       "eta2",
       "delta",
       "prob_eol_upper_0",
       "prob_ll_upper_dm",
       "prob_ll_upper_ld",
       "prob_dm_upper_ld",
       "log_lik"]
prob = [0.025, 0.05, 0.25, 0.5, 0.75, 0.95, 0.975]

# Load the compiled model
stan_file_c = os.path.join("stan", "e1_ind.pkl")
with open(stan_file_c, "rb") as f:
    model = pickle.load(f)

# Sample with MCMC
logger.info("Start MCMC sampling")
fit = model.sampling(data=stan_data,
                     pars=par,
                     iter=21000,
                     chains=5,
                     warmup=1000,
                     seed=1234,
                     algorithm="NUTS")


# Get the summary table of the posterior distributions
summary = fit.summary(pars=par, probs=prob)
summary_df = pd.DataFrame(summary["summary"],
                          index=summary["summary_rownames"],
                          columns=summary["summary_colnames"])
display(summary_df)
summary_df_path = os.path.join(result_dir, "df_summary_1.md")
with open(summary_df_path, "w") as f:
    f.write(tabulate(summary_df, summary_df.columns, tablefmt="pipe"))
logger.info("MCMC result summary is saved at {0}".format(summary_df_path))

# Visualize the posterior distributions
fit.traceplot(par)
traceplot_path = os.path.join(result_dir, "traceplot_1.png")
plt.savefig(traceplot_path)
plt.show()
logger.info("traceplot result is saved at {0}".format(traceplot_path))

# Compute WAIC
log_lik = fit.extract("log_lik")["log_lik"]
waic = -2 * np.sum(np.log(np.mean(np.exp(log_lik), axis=0))) + 2 * np.sum(np.var(log_lik, axis=0))
logger.info("WAIC is {0}".format(waic))
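
For reference, the quantity computed above is, as I understand it, WAIC on the deviance scale: WAIC = -2(\mathrm{lppd} - p_{WAIC}), where \mathrm{lppd} = \Sigma_{i} \log \frac{1}{S}\Sigma_{s} \exp(\log f(y_i|\theta^{(s)})) and p_{WAIC} = \Sigma_{i} \mathrm{Var}_{s}(\log f(y_i|\theta^{(s)})); the np.mean and np.var terms over log_lik correspond to these two pieces.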
  • Results (table and distributions)
parameter mean se_mean sd 2.5% 5% 25% 50% 75% 95% 97.5% n_eff Rhat
mu_a[0] 5.92465 0.0030795 0.973823 3.99253 4.32772 5.28835 5.92998 6.56208 7.51919 7.84787 100000 1.00007
mu_a[1] 11.0072 0.00292346 0.924478 9.17396 9.49254 10.4026 11.0076 11.6092 12.5191 12.8357 100000 0.999985
mu_a[2] 7.85955 0.00276635 0.874797 6.12594 6.42644 7.28474 7.86113 8.43353 9.29189 9.58305 100000 1
sigma_e 2.74451 0.00151073 0.42629 2.06146 2.14526 2.44184 2.69385 2.9907 3.51953 3.72462 79623 0.999968
eol[0] -2.33916 0.00245045 0.7749 -3.88038 -3.61524 -2.8447 -2.33923 -1.83011 -1.07629 -0.817259 100000 1.00008
eol[1] 2.74342 0.00239336 0.756846 1.24128 1.50518 2.24837 2.74077 3.23758 3.98729 4.24112 100000 0.999999
eol[2] -0.404261 0.00233393 0.738053 -1.86101 -1.60891 -0.887243 -0.405952 0.0805197 0.804455 1.05156 100000 1.00003
eta2 0.384362 0.000399327 0.126278 0.122444 0.162176 0.299146 0.391642 0.476528 0.579842 0.608479 100000 0.999981
delta 0.804616 0.000704594 0.222812 0.373535 0.439964 0.653323 0.802353 0.954108 1.17476 1.24665 100000 0.999984
prob_eol_upper_0[0] 0.00215 0.000165199 0.0463185 0 0 0 0 0 0 0 78613 1.00006
prob_eol_upper_0[1] 0.99963 6.2415e-05 0.0192319 1 1 1 1 1 1 1 94944 1.00003
prob_eol_upper_0[2] 0.28713 0.00152318 0.452425 0 0 0 0 1 1 1 88225 1.00002
prob_ll_upper_dm 0.99168 0.000333967 0.0908342 1 1 1 1 1 1 1 73976 0.999978
prob_ll_upper_ld 0.99975 5.19414e-05 0.0158095 1 1 1 1 1 1 1 92642 0.999991
prob_dm_upper_ld 0.93277 0.000930336 0.250421 0 0 1 1 1 1 1 72454 1.00007
log_lik[0] -2.03678 0.000758605 0.199372 -2.49373 -2.39575 -2.14793 -2.01548 -1.89931 -1.75258 -1.7086 69071 0.999994
log_lik[1] -2.01783 0.000754794 0.191828 -2.4532 -2.35815 -2.12696 -1.99886 -1.88578 -1.74113 -1.69764 64590 1.00002
log_lik[2] -2.33612 0.000992587 0.313883 -3.08375 -2.92534 -2.50565 -2.2841 -2.11391 -1.92039 -1.86752 100000 1.00004
log_lik[3] -2.67679 0.00135735 0.429231 -3.69657 -3.47937 -2.91461 -2.60959 -2.36613 -2.10742 -2.04192 100000 1.00004
log_lik[4] -2.32348 0.000978475 0.309421 -3.06092 -2.90399 -2.49026 -2.27211 -2.10467 -1.91389 -1.86127 100000 1.00004
log_lik[5] -1.99069 0.000735103 0.179753 -2.39033 -2.3073 -2.0972 -1.97534 -1.86648 -1.72545 -1.6824 59794 1.00001
log_lik[6] -1.98954 0.00073424 0.17923 -2.38764 -2.30472 -2.09579 -1.9742 -1.86567 -1.72457 -1.6816 59586 1.00001
log_lik[7] -2.25015 0.000893585 0.282577 -2.92974 -2.78158 -2.40036 -2.20673 -2.05154 -1.87364 -1.82393 100000 1.00003
log_lik[8] -2.06094 0.000756641 0.203617 -2.52871 -2.42581 -2.17552 -2.0385 -1.91954 -1.77099 -1.72765 72418 1
log_lik[9] -2.11696 0.000790546 0.223723 -2.63793 -2.52526 -2.2399 -2.08855 -1.96137 -1.80487 -1.76051 80088 0.999999
log_lik[10] -1.9749 0.000705525 0.171411 -2.3497 -2.27598 -2.07886 -1.96221 -1.85611 -1.71601 -1.67376 59027 1.00001
log_lik[11] -2.24435 0.000846443 0.267669 -2.87634 -2.74126 -2.38938 -2.20473 -2.05568 -1.88209 -1.83201 100000 0.999994
log_lik[12] -2.09647 0.000768147 0.216744 -2.60167 -2.49056 -2.21654 -2.06987 -1.94642 -1.79251 -1.74675 79617 1
log_lik[13] -2.15804 0.000816763 0.238127 -2.71684 -2.59706 -2.2877 -2.12586 -1.9917 -1.83017 -1.78421 85001 0.999997
log_lik[14] -1.97342 0.000706486 0.170781 -2.34448 -2.27222 -2.07722 -1.96171 -1.85505 -1.71519 -1.67338 58435 1.00001
log_lik[15] -2.16266 0.000819785 0.239732 -2.72544 -2.60451 -2.29303 -2.13016 -1.99503 -1.83272 -1.78664 85517 0.999997
log_lik[16] -4.87403 0.00339268 1.07286 -7.32982 -6.8243 -5.5138 -4.74535 -4.09518 -3.35255 -3.15931 100000 0.999977
log_lik[17] -2.35558 0.000919091 0.290642 -3.03811 -2.89536 -2.51793 -2.31435 -2.14877 -1.95975 -1.90793 100000 1
log_lik[18] -1.98969 0.000680232 0.173859 -2.37117 -2.2949 -2.09513 -1.97689 -1.86882 -1.72784 -1.68611 65325 0.999965
log_lik[19] -1.97261 0.000674084 0.168196 -2.33564 -2.26748 -2.07642 -1.96134 -1.85564 -1.7164 -1.67546 62259 0.999965
log_lik[20] -3.36627 0.00188208 0.595165 -4.73815 -4.46265 -3.7144 -3.2838 -2.93321 -2.54231 -2.44062 100000 0.999985
log_lik[21] -2.26206 0.000825123 0.260927 -2.87464 -2.74387 -2.40867 -2.22729 -2.07788 -1.90249 -1.85212 100000 0.999979
log_lik[22] -2.6317 0.00118696 0.37535 -3.50938 -3.32991 -2.84462 -2.57612 -2.36118 -2.12327 -2.05933 100000 1
log_lik[23] -2.46518 0.00102346 0.323646 -3.2244 -3.0644 -2.64582 -2.41881 -2.23303 -2.02502 -1.96827 100000 0.999983
log_lik[24] -4.20173 0.00267471 0.845818 -6.14049 -5.74926 -4.70266 -4.09605 -3.58666 -3.0153 -2.86741 100000 0.99999
log_lik[25] -2.00989 0.000689887 0.180727 -2.40828 -2.32687 -2.11763 -1.99477 -1.88481 -1.74166 -1.69933 68626 0.999981
log_lik[26] -2.19919 0.000762851 0.241235 -2.76133 -2.64207 -2.33492 -2.16884 -2.03002 -1.86336 -1.81479 100000 0.999977

[Figure: traceplot of the posterior distributions (f:id:ajhjhaf:20170531233354p:plain)]

Two-factor inference

Getting the data

The data-loading code here feels really messy, please forgive me… (a more compact way to build the index vectors is sketched right after this block).

import numpy as np
import os
from logging import getLogger, Formatter, StreamHandler, DEBUG

# Use a logger instead of print
def get_logger():
    logger = getLogger(__name__)
    logger.setLevel(DEBUG)
    log_fmt = '%(asctime)s : %(name)s : %(levelname)s : %(message)s'
    formatter = Formatter(log_fmt)
    stream_handler = StreamHandler()
    stream_handler.setLevel(DEBUG)
    stream_handler.setFormatter(formatter)
    logger.addHandler(stream_handler)
    return logger
logger = get_logger()

# Prepare the data
y = np.array([
140,146,149,136,147,147,143,143,143,141,
139,136,136,140,135,132,140,134,
123,127,131,130,138,128,129,
115,120,118,118,121,124,129,119,128,
128,124,123,121,122,126,131,122,
121,121,120,116,117,113,118,
143,141,142,145,149,145,143,141,142,155,
138,134,142,136,135,136,131,133,
131,128,128,128,127,130,130,
117,125,132,122,119,122,129,117,127,
117,120,124,122,122,122,118,122,
119,125,122,116,119,113,122])
A_loaded = np.full(49, 1, dtype=int)
A_empty = np.full(y.size - A_loaded.size, 2, dtype=int)
A = np.concatenate([A_loaded, A_empty])
B_loaded_st = np.full(10, 1, dtype=int)
B_loaded_cut = np.full(8, 2, dtype=int)
B_loaded_f = np.full(7, 3, dtype=int)
B_loaded_ch = np.full(9, 4, dtype=int)
B_loaded_sl = np.full(8, 5, dtype=int)
B_loaded_cur = np.full(7, 6, dtype=int)
B_empty_st = np.full(10, 1, dtype=int)
B_empty_cut = np.full(8, 2, dtype=int)
B_empty_f = np.full(7, 3, dtype=int)
B_empty_ch = np.full(9, 4, dtype=int)
B_empty_sl = np.full(8, 5, dtype=int)
B_empty_cur = np.full(7, 6, dtype=int)
B = np.concatenate([B_loaded_st,
                    B_loaded_cut,
                    B_loaded_f,
                    B_loaded_ch,
                    B_loaded_sl,
                    B_loaded_cur,
                    B_empty_st,
                    B_empty_cut,
                    B_empty_f,
                    B_empty_ch,
                    B_empty_sl,
                    B_empty_cur])


# Prepare the output directory
result_dir = os.path.join("result", "chapter5")
if not os.path.exists(result_dir):
    os.makedirs(result_dir)
    logger.info("{0} is made".format(result_dir))
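
As mentioned above, the index construction is verbose. As a sketch of a more compact alternative (my own, not the book's code; the group sizes are the per-cell counts from the arrays above), the same A and B vectors could be built with np.repeat and np.tile:

# Per-cell sample sizes, in the order of y (straight, cutball, fork, changeup, slider, curve)
cell_sizes = [10, 8, 7, 9, 8, 7]
A = np.repeat([1, 2], sum(cell_sizes))                  # 1 = runners on (loaded), 2 = bases empty
B = np.tile(np.repeat(np.arange(1, 7), cell_sizes), 2)  # pitch-type index 1..6, once per A level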
Building the statistical model with Stan
import os
import pystan
import pickle

# Load the Stan model and compile it
stan_file = os.path.join("stan", "e2_ind.stan")
stan_file_c = os.path.join("stan", "e2_ind.pkl")
model = pystan.StanModel(file=stan_file)
with open(stan_file_c, "wb") as f:
    pickle.dump(model, f)
logger.info("Stan model is compiled to {0}".format(stan_file_c))

e2_ind.stan

  • PyStan does not accept matrix-valued parameters nicely when plotting, so I convert them to a vector before passing them back to Python
  • The transformed parameters block in the Stan code from the book's appendix was redundant, so I handle it with a for loop and if statements instead (a small NumPy check of this reconstruction is sketched after the Stan code below).
data {
    int<lower=1> n ;
    int<lower=1> n_a ;
    int<lower=1> n_b ;
    int<lower=0> y[n] ;
    int<lower=1, upper=n_a> A[n] ;
    int<lower=1, upper=n_b> B[n] ;
}

parameters {
    real<lower=0> mu ;
    vector[n_a-1] a ;
    vector[n_b-1] b ;
    matrix[n_a-1, n_b-1] ab ;
    real<lower=0> sigma_e ;
}

transformed parameters {
    // Apply the constraint that the level effects sum to zero
    vector[n_a] a_r ;
    vector[n_b] b_r ;
    matrix[n_a, n_b] ab_r ;
    vector[n_a*n_b] ab_v ;

    for(i in 1:n_a-1){
        a_r[i] = a[i] ;
    }
    a_r[n_a] = - sum(a_r[1:n_a - 1]) ;

    for(i in 1:(n_b-1)){
        b_r[i] = b[i] ;
    }
    b_r[n_b] = - sum(b_r[1:n_b - 1]) ;

    for (j in 1:n_a) {
        for (k in 1:n_b) {
            if (j == n_a && k == n_b) {
                ab_r[j, k] = sum(ab[1:j-1, 1:k-1]) ;
            } else if (j == n_a){
                ab_r[j, k] = -sum(ab[1:j-1, k]) ;
            } else if (k == n_b){
                ab_r[j, k] = -sum(ab[j, 1:k-1]) ;
            } else {
                ab_r[j, k] = ab[j, k] ;
            }
            ab_v[(j-1) * n_b + k] = ab_r[j, k] ;
        }
    }
}

model {
    for (i in 1:n) {
        y[i] ~ normal(mu + a_r[A[i]] + b_r[B[i]] + ab_r[A[i], B[i]], sigma_e) ;
    }
}

generated quantities {
    vector[n] log_lik ;
    vector[n_b] prob_b_upper_0 ;
    vector[n_a] a_r2 ;
    vector[n_b] b_r2 ;
    vector[n_a*n_b] ab_r2 ;
    real var_a ;
    real var_b ;
    real var_ab ;
    real var_e ;
    real var_y ;
    real eta_a2 ;
    real eta_b2 ;
    real eta_ab2 ;
    real eta_t2 ;
    real delta_a ;
    real delta_b ;
    real delta_ab ;
    matrix[n_b, n_b] prob_b_comparison ;

    for (i in 1:n) {
        log_lik[i] = normal_lpdf(y[i] | mu + a_r[A[i]] + b_r[B[i]] + ab_r[A[i], B[i]], sigma_e) ;
    }
    for (i in 1:n_b) {
        prob_b_upper_0[i] = b_r[i] > 0 ? 1 : 0 ;
    }
    for (i in 1:n_a) {
        a_r2[i] = pow(a_r[i], 2) ;
    }
    for (i in 1:n_b) {
        b_r2[i] = pow(b_r[i], 2) ;
    }
    for (i in 1:n_a*n_b) {
        ab_r2[i] = pow(ab_v[i], 2) ;
    }
    var_a = mean(a_r2) ;
    var_b = mean(b_r2) ;
    var_ab = mean(ab_r2) ;
    var_e = pow(sigma_e, 2) ;
    var_y = var_a + var_b + var_ab + var_e ;
    eta_a2 = var_a / var_y ;
    eta_b2 = var_b / var_y ;
    eta_ab2 = var_ab / var_y ;
    eta_t2 = (var_a + var_b + var_ab) / var_y ;
    delta_a = sqrt(var_a) / sigma_e ;
    delta_b = sqrt(var_b) / sigma_e ;
    delta_ab = sqrt(var_ab) / sigma_e ;

    for (j in 1:n_b){
        for (k in 1:n_b){
            prob_b_comparison[j, k] = b_r[j] > b_r[k] ? 1 : 0 ;
        }
    }
}
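
As a quick sanity check of the for/if reconstruction in transformed parameters, here is a small NumPy sketch (my own check, not part of the model) that rebuilds the constrained interaction matrix from a free (n_a-1) x (n_b-1) block in the same way and confirms that every row and column sums to zero:

import numpy as np

# Free interaction parameters, as in the Stan parameters block (made-up values)
n_a, n_b = 2, 6
ab = np.random.default_rng(0).normal(size=(n_a - 1, n_b - 1))

# Rebuild the constrained matrix the same way as the transformed parameters block
ab_r = np.zeros((n_a, n_b))
ab_r[:n_a - 1, :n_b - 1] = ab
ab_r[:n_a - 1, n_b - 1] = -ab.sum(axis=1)   # last column: minus each row sum
ab_r[n_a - 1, :n_b - 1] = -ab.sum(axis=0)   # last row: minus each column sum
ab_r[n_a - 1, n_b - 1] = ab.sum()           # bottom-right corner

print(ab_r.sum(axis=0))  # column sums are all ~0
print(ab_r.sum(axis=1))  # row sums are all ~0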
Sampling the posterior distribution with the statistical model
  • Parameters whose type is a matrix are removed before plotting
  • For the differences between the level effects, I built a separate pivot table for display.
import pandas as pd
import pickle
import pystan
import matplotlib
import matplotlib.pyplot as plt
import os
from IPython.core.display import display
from tabulate import tabulate
matplotlib.rcParams['figure.figsize'] = (10, 50)

# Prepare the data passed to Stan
stan_data = {"n": y.size,
             "n_a": np.unique(A).size,
             "n_b": np.unique(B).size,
             "y": y,
             "A": A,
             "B": B}
# Parameters of interest
pars = ["mu",
       "sigma_e",
       "a_r",
       "b_r",
       "ab_v",
       "prob_b_upper_0",
       "eta_a2",
       "eta_b2",
       "eta_ab2",
       "eta_t2",
       "delta_a",
       "delta_b",
       "delta_ab",
       "prob_b_comparison",
       "log_lik"]
prob = [0.025, 0.05, 0.25, 0.5, 0.75, 0.95, 0.975]

# Load the compiled model
stan_file_c = os.path.join("stan", "e2_ind.pkl")
with open(stan_file_c, "rb") as f:
    model = pickle.load(f)

# Sample with MCMC
logger.info("Start MCMC sampling")
fit = model.sampling(data=stan_data,
                     pars=pars,
                     iter=21000,
                     chains=5,
                     warmup=1000,
                     seed=1234,
                     algorithm="NUTS")


# Get the summary table of the posterior distributions
summary = fit.summary(pars=pars, probs=prob)
summary_df = pd.DataFrame(summary["summary"],
                          index=summary["summary_rownames"],
                          columns=summary["summary_colnames"])
display(summary_df)
summary_df_path = os.path.join(result_dir, "df_summary_2.md")
with open(summary_df_path, "w") as f:
    f.write(tabulate(summary_df, summary_df.columns, tablefmt="pipe"))
logger.info("MCMC result summary is saved at {0}".format(summary_df_path))

# For each pair of levels, examine the probability that one exceeds the other
comparison = fit.summary(pars=["prob_b_comparison"], probs=prob)
comparison_df = pd.DataFrame(comparison["summary"],
                             index=comparison["summary_rownames"],
                             columns=comparison["summary_colnames"])
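# The summary index entries look like "prob_b_comparison[0,0]"; the lines below
# pull out the two indices, map them to pitch-type names, and pivot them into an
# upper-vs-lower table of posterior probabilities.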
comparison_index = pd.Series(comparison_df.index.str.strip("prob_b_comparison[").str.strip("]").str.split(","))
b_dict = {"0":"straight", "1":"cutball", "2":"fork", "3":"changeup", "4":"slider", "5":"curve"}
upper_index = comparison_index.apply(lambda x:b_dict[x[0]])
lower_index = comparison_index.apply(lambda x:b_dict[x[1]])
comparison_triple = pd.DataFrame([upper_index, lower_index, comparison_df["mean"].values]).T
comparison_triple.columns = ["upper", "lower", "possibility"]
comparison_pivot = comparison_triple.pivot(index="upper", columns="lower", values="possibility")
display(comparison_pivot)
b_df_path = os.path.join(result_dir, "b_comparison.md")
with open(b_df_path, "w") as f:
    f.write(tabulate(comparison_pivot, comparison_pivot.columns, tablefmt="pipe"))
logger.info("comparison table is saved at {0}".format(b_df_path))

# Visualize the posterior distributions
pars.remove("prob_b_upper_0")
pars.remove("prob_b_comparison")
fit.traceplot(pars)
traceplot_path = os.path.join(result_dir, "traceplot_2.png")
plt.savefig(traceplot_path)
plt.show()
logger.info("traceplot result is saved at {0}".format(traceplot_path))

# Compute WAIC
log_lik = fit.extract("log_lik")["log_lik"]
waic = -2 * np.sum(np.log(np.mean(np.exp(log_lik), axis=0))) + 2 * np.sum(np.var(log_lik, axis=0))
logger.info("WAIC is {0}".format(waic))
  • Results (table and distributions)
parameter mean se_mean sd 2.5% 5% 25% 50% 75% 95% 97.5% n_eff Rhat
mu 128.842 0.00125739 0.397622 128.059 128.188 128.577 128.842 129.107 129.495 129.622 100000 0.999982
sigma_e 3.89006 0.000955725 0.302227 3.35328 3.42584 3.67831 3.87124 4.08093 4.41686 4.53412 100000 1.00008
a_r[0] 0.0566725 0.00125659 0.39737 -0.722588 -0.595841 -0.208101 0.0562403 0.321384 0.710725 0.840036 100000 0.999989
a_r[1] -0.0566725 0.00125659 0.39737 -0.840036 -0.710725 -0.321384 -0.0562403 0.208101 0.595841 0.722588 100000 0.999989
b_r[0] 15.2089 0.00256537 0.811242 13.6165 13.8763 14.6635 15.2074 15.7524 16.5397 16.8083 100000 0.999994
b_r[1] 7.22147 0.00280979 0.888533 5.47369 5.76311 6.62748 7.22389 7.81345 8.68103 8.97568 100000 1
b_r[2] 0.303162 0.00295644 0.93491 -1.53609 -1.2345 -0.321006 0.298546 0.928776 1.83405 2.15155 100000 0.999977
b_r[3] -6.50771 0.00270552 0.85556 -8.17839 -7.90278 -7.08083 -6.51271 -5.93491 -5.10512 -4.82134 100000 0.999979
b_r[4] -6.09233 0.00281005 0.888616 -7.83205 -7.54945 -6.68611 -6.09106 -5.49716 -4.62802 -4.3491 100000 0.999976
b_r[5] -10.1335 0.00296533 0.93772 -11.9753 -11.6719 -10.7628 -10.1342 -9.49813 -8.59766 -8.30235 100000 0.999964
ab_v[0] -0.607631 0.00258641 0.817893 -2.21712 -1.9559 -1.15031 -0.607124 -0.0654354 0.744995 1.00949 100000 0.999985
ab_v[1] 0.381475 0.00281664 0.8907 -1.37495 -1.08112 -0.212969 0.382057 0.976252 1.84594 2.13356 100000 0.999998
ab_v[2] 0.23221 0.00297549 0.940934 -1.60767 -1.31047 -0.399151 0.229995 0.866498 1.77931 2.08061 100000 0.999969
ab_v[3] -1.05319 0.00268952 0.8505 -2.72909 -2.45038 -1.62572 -1.05216 -0.482988 0.345127 0.618895 100000 0.99998
ab_v[4] 1.81474 0.00281049 0.888756 0.0653521 0.350221 1.22523 1.81277 2.40855 3.27038 3.56064 100000 0.999986
ab_v[5] -0.767603 0.00297959 0.94223 -2.60335 -2.31877 -1.40147 -0.769324 -0.130914 0.778092 1.07769 100000 0.999986
ab_v[6] 0.607631 0.00258641 0.817893 -1.00949 -0.744995 0.0654354 0.607124 1.15031 1.9559 2.21712 100000 0.999985
ab_v[7] -0.381475 0.00281664 0.8907 -2.13356 -1.84594 -0.976252 -0.382057 0.212969 1.08112 1.37495 100000 0.999998
ab_v[8] -0.23221 0.00297549 0.940934 -2.08061 -1.77931 -0.866498 -0.229995 0.399151 1.31047 1.60767 100000 0.999969
ab_v[9] 1.05319 0.00268952 0.8505 -0.618895 -0.345127 0.482988 1.05216 1.62572 2.45038 2.72909 100000 0.99998
ab_v[10] -1.81474 0.00281049 0.888756 -3.56064 -3.27038 -2.40855 -1.81277 -1.22523 -0.350221 -0.0653521 100000 0.999986
ab_v[11] 0.767603 0.00297959 0.94223 -1.07769 -0.778092 0.130914 0.769324 1.40147 2.31877 2.60335 100000 0.999986
prob_b_upper_0[0] 1 0 0 1 1 1 1 1 1 1 100000 nan
prob_b_upper_0[1] 1 0 0 1 1 1 1 1 1 1 100000 nan
prob_b_upper_0[2] 0.62685 0.00152942 0.483644 0 0 0 1 1 1 1 100000 1
prob_b_upper_0[3] 0 0 0 0 0 0 0 0 0 0 100000 nan
prob_b_upper_0[4] 0 0 0 0 0 0 0 0 0 0 100000 nan
prob_b_upper_0[5] 0 0 0 0 0 0 0 0 0 0 100000 nan
eta_a2 0.00168409 1.10406e-05 0.00242859 1.69709e-06 6.70442e-06 0.000168269 0.000753203 0.00220135 0.00650296 0.00855088 48386 1.00007
eta_b2 0.820698 8.31697e-05 0.0263006 0.762582 0.77391 0.804681 0.822937 0.83927 0.859708 0.865594 100000 1.00006
eta_ab2 0.0179692 3.28066e-05 0.00948945 0.00401563 0.00533333 0.0109958 0.0164943 0.0233222 0.0357468 0.0404685 83668 1.00012
eta_t2 0.840352 7.44251e-05 0.0235353 0.788162 0.798497 0.826116 0.842443 0.857014 0.875117 0.880387 100000 1.00003
delta_a 0.0820056 0.000277709 0.0621218 0.0032616 0.00655807 0.032731 0.0691728 0.117879 0.201804 0.231203 50039 1.00011
delta_b 2.28756 0.000631375 0.199658 1.90128 1.96222 2.15243 2.28528 2.42122 2.62019 2.68599 100000 1.00004
delta_ab 0.325882 0.000285382 0.0902456 0.157327 0.181951 0.262588 0.3238 0.386283 0.478362 0.50898 100000 1.00008
prob_b_comparison[0,0] 0 0 0 0 0 0 0 0 0 0 100000 nan
prob_b_comparison[1,0] 0 0 0 0 0 0 0 0 0 0 100000 nan
prob_b_comparison[2,0] 0 0 0 0 0 0 0 0 0 0 100000 nan
prob_b_comparison[3,0] 0 0 0 0 0 0 0 0 0 0 100000 nan
prob_b_comparison[4,0] 0 0 0 0 0 0 0 0 0 0 100000 nan
prob_b_comparison[5,0] 0 0 0 0 0 0 0 0 0 0 100000 nan
prob_b_comparison[0,1] 1 0 0 1 1 1 1 1 1 1 100000 nan
prob_b_comparison[1,1] 0 0 0 0 0 0 0 0 0 0 100000 nan
prob_b_comparison[2,1] 0 0 0 0 0 0 0 0 0 0 100000 nan
prob_b_comparison[3,1] 0 0 0 0 0 0 0 0 0 0 100000 nan
prob_b_comparison[4,1] 0 0 0 0 0 0 0 0 0 0 100000 nan
prob_b_comparison[5,1] 0 0 0 0 0 0 0 0 0 0 100000 nan
prob_b_comparison[0,2] 1 0 0 1 1 1 1 1 1 1 100000 nan
prob_b_comparison[1,2] 1 0 0 1 1 1 1 1 1 1 100000 nan
prob_b_comparison[2,2] 0 0 0 0 0 0 0 0 0 0 100000 nan
prob_b_comparison[3,2] 1e-05 1e-05 0.00316228 0 0 0 0 0 0 0 100000 1
prob_b_comparison[4,2] 0 0 0 0 0 0 0 0 0 0 100000 nan
prob_b_comparison[5,2] 0 0 0 0 0 0 0 0 0 0 100000 nan
prob_b_comparison[0,3] 1 0 0 1 1 1 1 1 1 1 100000 nan
prob_b_comparison[1,3] 1 0 0 1 1 1 1 1 1 1 100000 nan
prob_b_comparison[2,3] 0.99999 1e-05 0.00316228 1 1 1 1 1 1 1 100000 1
prob_b_comparison[3,3] 0 0 0 0 0 0 0 0 0 0 100000 nan
prob_b_comparison[4,3] 0.62121 0.00153398 0.485088 0 0 0 1 1 1 1 100000 0.999983
prob_b_comparison[5,3] 0.00526 0.000240106 0.0723352 0 0 0 0 0 0 0 90760 0.999992
prob_b_comparison[0,4] 1 0 0 1 1 1 1 1 1 1 100000 nan
prob_b_comparison[1,4] 1 0 0 1 1 1 1 1 1 1 100000 nan
prob_b_comparison[2,4] 1 0 0 1 1 1 1 1 1 1 100000 nan
prob_b_comparison[3,4] 0.37879 0.00153398 0.485088 0 0 0 0 1 1 1 100000 0.999983
prob_b_comparison[4,4] 0 0 0 0 0 0 0 0 0 0 100000 nan
prob_b_comparison[5,4] 0.00256 0.000167342 0.0505319 0 0 0 0 0 0 0 91184 1
prob_b_comparison[0,5] 1 0 0 1 1 1 1 1 1 1 100000 nan
prob_b_comparison[1,5] 1 0 0 1 1 1 1 1 1 1 100000 nan
prob_b_comparison[2,5] 1 0 0 1 1 1 1 1 1 1 100000 nan
prob_b_comparison[3,5] 0.99474 0.000240106 0.0723352 1 1 1 1 1 1 1 90760 0.999992
prob_b_comparison[4,5] 0.99744 0.000167342 0.0505319 1 1 1 1 1 1 1 91184 1
prob_b_comparison[5,5] 0 0 0 0 0 0 0 0 0 0 100000 nan
log_lik[0] -2.73633 0.000935194 0.295734 -3.43849 -3.28893 -2.90328 -2.68784 -2.5182 -2.34766 -2.30554 100000 1.00002
log_lik[1] -2.53453 0.00070078 0.221606 -3.08005 -2.95548 -2.65052 -2.49078 -2.37503 -2.25745 -2.22527 100000 0.999975
log_lik[2] -3.34177 0.00146612 0.463629 -4.39499 -4.1848 -3.6203 -3.28603 -3.00338 -2.6859 -2.6025 100000 0.999994
log_lik[3] -4.21624 0.00207097 0.654899 -5.66336 -5.38642 -4.62001 -4.15503 -3.74499 -3.25356 -3.11148 100000 0.999974
log_lik[4] -2.73634 0.000936502 0.296148 -3.44194 -3.28963 -2.90404 -2.68719 -2.51799 -2.34749 -2.30663 100000 0.999975
log_lik[5] -2.73634 0.000936502 0.296148 -3.44194 -3.28963 -2.90404 -2.68719 -2.51799 -2.34749 -2.30663 100000 0.999975
log_lik[6] -2.33272 0.000426818 0.111143 -2.59392 -2.52945 -2.39027 -2.31979 -2.25852 -2.17872 -2.15482 67807 1.0002
log_lik[7] -2.33272 0.000426818 0.111143 -2.59392 -2.52945 -2.39027 -2.31979 -2.25852 -2.17872 -2.15482 67807 1.0002
log_lik[8] -2.33272 0.000426818 0.111143 -2.59392 -2.52945 -2.39027 -2.31979 -2.25852 -2.17872 -2.15482 67807 1.0002
log_lik[9] -2.53452 0.00069946 0.221189 -3.07629 -2.95565 -2.65031 -2.49165 -2.37471 -2.25725 -2.22585 100000 1.00006
log_lik[10] -2.54659 0.000785481 0.248391 -3.16542 -3.02514 -2.67084 -2.4917 -2.36823 -2.24892 -2.2172 100000 1.00005
log_lik[11] -2.34498 0.000496465 0.125425 -2.64993 -2.57005 -2.40342 -2.3269 -2.26265 -2.18144 -2.15687 63825 1.00009
log_lik[12] -2.34498 0.000496465 0.125425 -2.64993 -2.57005 -2.40342 -2.3269 -2.26265 -2.18144 -2.15687 63825 1.00009
log_lik[13] -2.74833 0.00104876 0.331647 -3.546 -3.37344 -2.93064 -2.68727 -2.50315 -2.32712 -2.28515 100000 1.00003
log_lik[14] -2.41232 0.000613607 0.176642 -2.86322 -2.75229 -2.4868 -2.3749 -2.29388 -2.20328 -2.1765 82872 1.00003
log_lik[15] -3.01794 0.00133705 0.422813 -4.00776 -3.80561 -3.25894 -2.95459 -2.70599 -2.4528 -2.39164 100000 1.00002
log_lik[16] -2.74833 0.00104876 0.331647 -3.546 -3.37344 -2.93064 -2.68727 -2.50315 -2.32712 -2.28515 100000 1.00003
log_lik[17] -2.54692 0.000791085 0.250163 -3.16951 -3.02844 -2.67035 -2.49157 -2.36771 -2.24974 -2.21829 100000 1.00002
log_lik[18] -3.73765 0.00205657 0.650345 -5.21599 -4.92322 -4.1275 -3.66109 -3.2632 -2.81517 -2.69676 100000 0.999991
log_lik[19] -2.5447 0.000828546 0.262009 -3.21331 -3.05791 -2.67094 -2.48239 -2.35768 -2.24115 -2.21039 100000 1.00001
log_lik[20] -2.42805 0.000693641 0.195391 -2.93941 -2.809 -2.50779 -2.38289 -2.29826 -2.2048 -2.17837 79349 1.00005
log_lik[21] -2.3563 0.000560908 0.138402 -2.70641 -2.61165 -2.4159 -2.33316 -2.26661 -2.1839 -2.15899 60884 1.00014
log_lik[22] -4.81377 0.00283885 0.897723 -6.79217 -6.42019 -5.36376 -4.72578 -4.16988 -3.49737 -3.30988 100000 0.999977
log_lik[23] -2.41463 0.000656152 0.184738 -2.90055 -2.776 -2.48901 -2.37315 -2.29235 -2.20083 -2.17451 79269 1.00006
log_lik[24] -2.35183 0.000548798 0.133508 -2.68773 -2.59816 -2.41028 -2.33023 -2.26506 -2.18237 -2.15774 59182 1.00015
log_lik[25] -3.68089 0.00179969 0.569113 -4.95951 -4.71565 -4.02176 -3.61904 -3.26868 -2.86285 -2.7564 100000 0.99997
log_lik[26] -2.39015 0.000531975 0.156079 -2.78711 -2.69127 -2.45852 -2.36012 -2.28619 -2.19753 -2.17168 86081 1.00001
log_lik[27] -2.70464 0.000949753 0.300338 -3.42914 -3.27577 -2.86761 -2.65075 -2.48359 -2.32096 -2.28113 100000 0.999965
log_lik[28] -2.70464 0.000949753 0.300338 -3.42914 -3.27577 -2.86761 -2.65075 -2.48359 -2.32096 -2.28113 100000 0.999965
log_lik[29] -2.3338 0.000448643 0.114007 -2.60405 -2.53915 -2.39156 -2.31964 -2.25791 -2.17838 -2.15378 64574 1.00008
log_lik[30] -2.56839 0.000779171 0.246395 -3.17826 -3.04225 -2.69622 -2.51769 -2.39041 -2.2649 -2.23265 100000 0.999982
log_lik[31] -4.30475 0.00222789 0.70452 -5.86668 -5.56526 -4.73925 -4.23866 -3.80034 -3.26551 -3.12075 100000 0.999971
log_lik[32] -2.51376 0.000705027 0.222949 -3.07066 -2.94619 -2.62376 -2.46574 -2.35573 -2.24338 -2.21269 100000 0.999976
log_lik[33] -3.82294 0.00190039 0.600955 -5.17464 -4.90752 -4.18988 -3.75953 -3.39001 -2.94919 -2.83484 100000 0.999968
log_lik[34] -2.72106 0.00102137 0.322986 -3.50706 -3.33946 -2.89458 -2.66028 -2.48187 -2.31426 -2.27516 100000 0.99998
log_lik[35] -2.34996 0.000499418 0.129235 -2.67219 -2.58849 -2.40913 -2.32994 -2.2649 -2.18353 -2.15836 66963 1.00008
log_lik[36] -2.42536 0.000583436 0.184499 -2.9031 -2.7866 -2.50546 -2.38443 -2.3008 -2.20712 -2.18051 100000 1.00001
log_lik[37] -2.77796 0.00108888 0.344334 -3.61273 -3.42961 -2.96609 -2.71626 -2.52376 -2.33709 -2.29548 100000 0.99997
log_lik[38] -2.56802 0.000821847 0.259891 -3.21964 -3.07226 -2.69821 -2.51079 -2.37974 -2.25574 -2.22447 100000 0.999978
log_lik[39] -2.40097 0.000587076 0.168242 -2.83704 -2.72648 -2.47175 -2.36656 -2.28945 -2.19963 -2.17254 82126 1.00004
log_lik[40] -3.7057 0.00191992 0.607131 -5.07782 -4.81287 -4.07006 -3.63623 -3.26529 -2.83716 -2.72761 100000 0.999971
log_lik[41] -2.56802 0.000821847 0.259891 -3.21964 -3.07226 -2.69821 -2.51079 -2.37974 -2.25574 -2.22447 100000 0.999978
log_lik[42] -2.64915 0.000989698 0.31297 -3.42632 -3.25123 -2.80993 -2.57989 -2.41906 -2.27508 -2.24037 100000 0.999983
log_lik[43] -2.64915 0.000989698 0.31297 -3.42632 -3.25123 -2.80993 -2.57989 -2.41906 -2.27508 -2.24037 100000 0.999983
log_lik[44] -2.48078 0.000720873 0.22796 -3.07134 -2.92787 -2.58129 -2.42439 -2.32413 -2.22039 -2.19265 100000 0.999998
log_lik[45] -2.47998 0.0007191 0.227399 -3.06829 -2.92811 -2.58068 -2.42439 -2.32295 -2.2199 -2.19207 100000 0.999991
log_lik[46] -2.37927 0.000572994 0.1578 -2.78904 -2.68228 -2.44339 -2.34819 -2.27626 -2.19009 -2.16486 75843 1.00002
log_lik[47] -3.1857 0.00158996 0.502788 -4.36319 -4.12359 -3.47549 -3.11292 -2.81382 -2.50661 -2.43336 100000 0.999983
log_lik[48] -2.34584 0.00051513 0.126888 -2.65972 -2.57614 -2.40448 -2.32682 -2.26275 -2.18111 -2.15621 60675 1.00006
log_lik[49] -2.41088 0.000514388 0.162664 -2.82099 -2.72195 -2.48676 -2.37974 -2.29996 -2.20902 -2.18182 100000 0.999999
log_lik[50] -2.76093 0.000962221 0.304281 -3.48081 -3.32983 -2.932 -2.71175 -2.53689 -2.35856 -2.31605 100000 0.999983
log_lik[51] -2.55227 0.000724273 0.229035 -3.11222 -2.98866 -2.67227 -2.5078 -2.38647 -2.26473 -2.23413 100000 0.999982
log_lik[52] -2.32991 0.000421662 0.109997 -2.58737 -2.52469 -2.38765 -2.31737 -2.25642 -2.17761 -2.15289 68051 1.00014
log_lik[53] -2.97519 0.001168 0.369353 -3.83217 -3.65693 -3.18955 -2.92343 -2.70504 -2.47099 -2.41237 100000 1.00001
log_lik[54] -2.32991 0.000421662 0.109997 -2.58737 -2.52469 -2.38765 -2.31737 -2.25642 -2.17761 -2.15289 68051 1.00014
log_lik[55] -2.41088 0.000514388 0.162664 -2.82099 -2.72195 -2.48676 -2.37974 -2.29996 -2.20902 -2.18182 100000 0.999999
log_lik[56] -2.76093 0.000962221 0.304281 -3.48081 -3.32983 -2.932 -2.71175 -2.53689 -2.35856 -2.31605 100000 0.999983
log_lik[57] -2.55227 0.000724273 0.229035 -3.11222 -2.98866 -2.67227 -2.5078 -2.38647 -2.26473 -2.23413 100000 0.999982
log_lik[58] -5.96118 0.00310773 0.982751 -8.08284 -7.69382 -6.58038 -5.88692 -5.26594 -4.47592 -4.2481 100000 1
log_lik[59] -2.52657 0.000757293 0.239477 -3.12965 -2.99042 -2.64358 -2.47232 -2.35648 -2.24149 -2.21098 100000 0.999994
log_lik[60] -2.42542 0.000581973 0.184036 -2.89803 -2.78347 -2.50676 -2.38427 -2.30052 -2.2072 -2.17959 100000 1.00003
log_lik[61] -3.70403 0.00191809 0.606552 -5.07289 -4.8054 -4.06898 -3.63492 -3.26146 -2.84054 -2.73181 100000 1
log_lik[62] -2.34145 0.000491539 0.121126 -2.63625 -2.55886 -2.40019 -2.32462 -2.26172 -2.18046 -2.15654 60724 1.00006
log_lik[63] -2.3498 0.000495617 0.128819 -2.66905 -2.58578 -2.40933 -2.33002 -2.26458 -2.18315 -2.158 67557 1.00007
log_lik[64] -2.34145 0.000491539 0.121126 -2.63625 -2.55886 -2.40019 -2.32462 -2.26172 -2.18046 -2.15654 60724 1.00006
log_lik[65] -3.05587 0.00137053 0.4334 -4.06844 -3.86188 -3.3063 -2.9924 -2.73696 -2.46901 -2.40783 100000 0.999993
log_lik[66] -2.5683 0.000819629 0.259189 -3.21226 -3.06855 -2.70014 -2.51147 -2.38054 -2.25607 -2.22414 100000 1.00001
log_lik[67] -2.49983 0.00075334 0.238227 -3.10952 -2.96599 -2.60912 -2.4412 -2.33334 -2.22697 -2.19767 100000 1
log_lik[68] -2.36999 0.00057377 0.149574 -2.75292 -2.65382 -2.43233 -2.34244 -2.27251 -2.18818 -2.16267 67957 1.00001
log_lik[69] -2.36999 0.00057377 0.149574 -2.75292 -2.65382 -2.43233 -2.34244 -2.27251 -2.18818 -2.16267 67957 1.00001
log_lik[70] -2.36999 0.00057377 0.149574 -2.75292 -2.65382 -2.43233 -2.34244 -2.27251 -2.18818 -2.16267 67957 1.00001
log_lik[71] -2.46125 0.000725389 0.215494 -3.01841 -2.88701 -2.5549 -2.4086 -2.31443 -2.21489 -2.1868 88253 0.999968
log_lik[72] -2.38928 0.000607463 0.165664 -2.81825 -2.70878 -2.45625 -2.35539 -2.28072 -2.19397 -2.16786 74373 1.00004
log_lik[73] -2.38928 0.000607463 0.165664 -2.81825 -2.70878 -2.45625 -2.35539 -2.28072 -2.19397 -2.16786 74373 1.00004
log_lik[74] -3.67851 0.0018051 0.570824 -4.96139 -4.70864 -4.02303 -3.61492 -3.26414 -2.86209 -2.75324 100000 0.999999
log_lik[75] -2.42412 0.000559944 0.17707 -2.87225 -2.76381 -2.50505 -2.38741 -2.30404 -2.20929 -2.18255 100000 1.00006
log_lik[76] -4.85816 0.00258955 0.818888 -6.64133 -6.31093 -5.36921 -4.78662 -4.26925 -3.64582 -3.46943 100000 0.999995
log_lik[77] -2.39 0.000493201 0.155964 -2.78444 -2.6866 -2.4595 -2.36016 -2.28576 -2.1976 -2.17153 100000 1
log_lik[78] -2.9613 0.00121791 0.385138 -3.86024 -3.67563 -3.1825 -2.90355 -2.6775 -2.44311 -2.38634 100000 0.999986
log_lik[79] -2.39 0.000493201 0.155964 -2.78444 -2.6866 -2.4595 -2.36016 -2.28576 -2.1976 -2.17153 100000 1
log_lik[80] -3.41139 0.00160314 0.506957 -4.55559 -4.33322 -3.71439 -3.35026 -3.03988 -2.69708 -2.60639 100000 0.999997
log_lik[81] -3.67851 0.0018051 0.570824 -4.96139 -4.70864 -4.02303 -3.61492 -3.26414 -2.86209 -2.75324 100000 0.999999
log_lik[82] -2.78322 0.00103984 0.328827 -3.56402 -3.39689 -2.96697 -2.72775 -2.54094 -2.35577 -2.31086 100000 1.00001
log_lik[83] -2.84325 0.0011636 0.367961 -3.72012 -3.53811 -3.04991 -2.78014 -2.57055 -2.36481 -2.31905 100000 1
log_lik[84] -2.36323 0.000525796 0.140284 -2.71569 -2.62599 -2.42459 -2.33932 -2.27104 -2.18727 -2.16231 71184 0.999996
log_lik[85] -2.66497 0.000953673 0.301578 -3.40396 -3.24526 -2.82653 -2.60503 -2.44195 -2.29206 -2.25465 100000 1.00001
log_lik[86] -2.37956 0.000542134 0.152967 -2.77117 -2.67185 -2.44493 -2.35106 -2.27869 -2.19225 -2.16674 79613 1
log_lik[87] -2.37956 0.000542134 0.152967 -2.77117 -2.67185 -2.44493 -2.35106 -2.27869 -2.19225 -2.16674 79613 1
log_lik[88] -2.37956 0.000542134 0.152967 -2.77117 -2.67185 -2.44493 -2.35106 -2.27869 -2.19225 -2.16674 79613 1
log_lik[89] -2.61597 0.000890215 0.281511 -3.30934 -3.15762 -2.7638 -2.55755 -2.40974 -2.27316 -2.23913 100000 1
log_lik[90] -2.37956 0.000542134 0.152967 -2.77117 -2.67185 -2.44493 -2.35106 -2.27869 -2.19225 -2.16674 79613 1
log_lik[91] -2.35169 0.000524891 0.13297 -2.6827 -2.595 -2.41106 -2.33035 -2.26492 -2.18286 -2.1578 64175 0.999993
log_lik[92] -3.39297 0.00177845 0.562394 -4.69445 -4.4338 -3.71977 -3.31813 -2.98064 -2.61791 -2.52666 100000 0.999975
log_lik[93] -2.56962 0.000873156 0.276116 -3.26589 -3.1083 -2.7045 -2.50481 -2.37047 -2.24856 -2.21752 100000 0.999975
log_lik[94] -2.73918 0.00110666 0.349957 -3.59245 -3.40636 -2.92461 -2.66867 -2.48011 -2.3068 -2.26847 100000 0.999968
log_lik[95] -2.35169 0.000524891 0.13297 -2.6827 -2.595 -2.41106 -2.33035 -2.26492 -2.18286 -2.1578 64175 0.999993
log_lik[96] -3.73209 0.00205986 0.651384 -5.20444 -4.91743 -4.1244 -3.65239 -3.25571 -2.80926 -2.69637 100000 0.999975
log_lik[97] -2.56962 0.000873156 0.276116 -3.26589 -3.1083 -2.7045 -2.50481 -2.37047 -2.24856 -2.21752 100000 0.999975

Differences between levels (each cell is the posterior probability that the row level's effect exceeds the column level's effect)

changeup curve cutball fork slider straight
changeup 0 0.99474 0 1e-05 0.37879 0
curve 0.00526 0 0 0 0.00256 0
cutball 1 1 0 1 1 0
fork 0.99999 1 0 0 1 0
slider 0.62121 0.99744 0 0 0 0
straight 1 1 1 1 1 0

[Figure: traceplot of the posterior distributions (f:id:ajhjhaf:20170531233333p:plain)]

Discussion
  • From the distributions of the level effects, we can infer that neither the runner situation (factor a) nor the interaction has much influence
    • The proportion explained and the effect size are also overwhelmingly largest for pitch type (factor b), so most of what the model explains comes from it
    • Being able to gauge the size of the interaction as well seems like a merit of this approach.