Partial code for finding probabilities from cmb_likelihood.py output

Hello everyone! Below is partially finished code meant to illustrate how to find the marginal posteriors P(n|d) and P(Q|d), as well as the means and uncertainties of n and Q.

You will first need to load n_values, Q_values, and lnL from the output of cmb_likelihood.py. For pointers on how to do this, check plot_contours.py, where it is done at the beginning of the file. Also not shown is how to compute dn and dQ, the step sizes in n_values and Q_values: simply subtract two adjacent elements of the arrays you loaded. Below we show how to find P(n|d) (P_n); finding P_Q is a matter of integrating over the other dimension of P(d|Q,n) (P) in the for loop.
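To make the grid and step sizes concrete, here is a minimal self-contained sketch. The file format of the cmb_likelihood.py output is not shown here (check plot_contours.py for the actual loading code), so this sketch simply builds a toy grid with made-up ranges in place of the loaded arrays:

```python
import numpy as np

# Toy stand-ins for the arrays you would load from the
# cmb_likelihood.py output (ranges here are arbitrary examples)
n_values = np.linspace(0.5, 1.5, 100)
Q_values = np.linspace(10.0, 30.0, 80)

# The step sizes are the distances between adjacent grid points
dn = n_values[1] - n_values[0]
dQ = Q_values[1] - Q_values[0]
```

This assumes the grids are uniformly spaced, so any pair of adjacent elements gives the same step.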

The code:

# First, subtract off the largest value of lnL to avoid overflow,
# then exponentiate to find P(d|Q,n)
P = np.exp(lnL - np.max(lnL))

# Normalize to unit integral
P = P / (np.sum(P) * dn * dQ)

# Compute the marginal distribution P(n|d)
n_num = len(n_values)
P_n = np.zeros(n_num)
for i in range(n_num):
    # Here we integrate over Q to get P_n; for P_Q, be sure to
    # integrate over the other dimension of P instead!
    P_n[i] = np.sum(P[:, i])

# We now normalize (note the dn, replace it with dQ for P_Q!)
P_n = P_n / (np.sum(P_n) * dn)

# We can now compute the mean
mu_n = np.sum(P_n * n_values) * dn

# And lastly, we find the uncertainty (standard deviation)
sigma_n = np.sqrt(np.sum(P_n * (n_values - mu_n)**2) * dn)
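Following the same recipe, a sketch of the corresponding P_Q computation. The toy Gaussian lnL grid below is only there to make the example self-contained; in practice you would use the lnL loaded from the cmb_likelihood.py output. Note that P is assumed to have Q along the first axis and n along the second, consistent with the P[:, i] indexing above:

```python
import numpy as np

# Toy setup standing in for the loaded arrays: a 2D Gaussian
# log-likelihood on a (Q, n) grid with made-up parameters
n_values = np.linspace(0.5, 1.5, 100)
Q_values = np.linspace(10.0, 30.0, 80)
dn = n_values[1] - n_values[0]
dQ = Q_values[1] - Q_values[0]
N, Q = np.meshgrid(n_values, Q_values)   # both have shape (Q_num, n_num)
lnL = -0.5*((N - 1.0)/0.1)**2 - 0.5*((Q - 18.0)/3.0)**2

# Same first steps as for P_n
P = np.exp(lnL - np.max(lnL))
P = P / (np.sum(P) * dn * dQ)

# Marginalize over n (the second axis) instead of Q to get P(Q|d)
Q_num = len(Q_values)
P_Q = np.zeros(Q_num)
for j in range(Q_num):
    P_Q[j] = np.sum(P[j, :])

# Normalize with dQ instead of dn
P_Q = P_Q / (np.sum(P_Q) * dQ)

# Mean and uncertainty of Q, exactly as for n
mu_Q    = np.sum(P_Q * Q_values) * dQ
sigma_Q = np.sqrt(np.sum(P_Q * (Q_values - mu_Q)**2) * dQ)
```

With the toy Gaussian above, mu_Q and sigma_Q should land close to the input values 18 and 3, up to small truncation effects from the finite grid.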

 

Published Nov. 1, 2016 7:44 PM - Last modified Nov. 1, 2016 7:54 PM