├── .gitignore
├── CPC_metacog_tutorial
│   ├── cpc_metacog_tutorial.m
│   ├── cpc_metacog_tutorial.mlx
│   └── cpc_metacog_utils
│       ├── cpc_AUtype2roc.m
│       ├── cpc_calcAU_type2roc.m
│       ├── cpc_metad_sim.m
│       ├── cpc_plot_confidence.m
│       ├── cpc_plot_simVfit.m
│       ├── cpc_plot_type2roc.m
│       ├── cpc_type2_SDT_sim.m
│       ├── cpc_type2roc.m
│       └── fit_meta_d_MLE.m
├── LICENSE
├── Matlab
│   ├── Bayes_metad.txt
│   ├── Bayes_metad_group.txt
│   ├── Bayes_metad_group_corr.txt
│   ├── Bayes_metad_group_nodp.txt
│   ├── Bayes_metad_group_regress_nodp.txt
│   ├── Bayes_metad_group_regress_nodp_2cov.txt
│   ├── Bayes_metad_group_regress_nodp_3cov.txt
│   ├── Bayes_metad_group_regress_nodp_4cov.txt
│   ├── Bayes_metad_group_regress_nodp_5cov.txt
│   ├── Bayes_metad_rc.txt
│   ├── Bayes_metad_rc_group.txt
│   ├── Bayes_metad_rc_group_nodp.txt
│   ├── calc_CI.m
│   ├── calc_HDI.m
│   ├── exampleFit.m
│   ├── exampleFit_corr.m
│   ├── exampleFit_group.m
│   ├── exampleFit_group_rc.m
│   ├── exampleFit_group_regression.m
│   ├── exampleFit_rc.m
│   ├── exampleFit_twoGroups.m
│   ├── exampleFit_twoTasks.m
│   ├── fit_meta_d_mcmc.m
│   ├── fit_meta_d_mcmc_group.m
│   ├── fit_meta_d_mcmc_groupCorr.m
│   ├── fit_meta_d_mcmc_regression.m
│   ├── fit_meta_d_params.m
│   ├── matjags.m
│   ├── metad_group_visualise.m
│   ├── metad_sim.m
│   ├── metad_visualise.m
│   ├── plotSamples.m
│   ├── plot_generative_model.m
│   ├── tmpjags
│   │   └── .gitignore
│   ├── trials2counts.m
│   └── type2_SDT_sim.m
├── R
│   ├── .Rhistory
│   ├── Bayes_metad_2wayANOVA.txt
│   ├── Bayes_metad_group_R.txt
│   ├── Bayes_metad_group_corr2_R.txt
│   ├── Bayes_metad_group_corr3_R.txt
│   ├── Bayes_metad_group_corr4_R.txt
│   ├── Bayes_metad_group_regress_nodp.txt
│   ├── Bayes_metad_indiv_R.txt
│   ├── example_metad_2wayANOVA.R
│   ├── example_metad_group.R
│   ├── example_metad_group_corr.R
│   ├── example_metad_indiv.R
│   ├── fit_meta_d_mcmc_regression.R
│   ├── fit_metad_2wayANOVA.R
│   ├── fit_metad_group.R
│   ├── fit_metad_groupcorr.R
│   ├── fit_metad_indiv.R
│   └── trials2counts.R
└── README.md
/.gitignore:
--------------------------------------------------------------------------------
1 | # OS generated files #
2 | ######################
3 | .DS_Store
4 | .DS_Store?
5 | ._* 6 | .Spotlight-V100 7 | .Trashes 8 | ehthumbs.db 9 | Thumbs.db 10 | 11 | # Matlab ignore 12 | *.m~ 13 | *.asv 14 | 15 | # Project ignore 16 | tmpjags/* 17 | -------------------------------------------------------------------------------- /CPC_metacog_tutorial/cpc_metacog_tutorial.mlx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/metacoglab/HMeta-d/62197e8a604fb1369d8e63ee57d0b1f7ab8ac73c/CPC_metacog_tutorial/cpc_metacog_tutorial.mlx -------------------------------------------------------------------------------- /CPC_metacog_tutorial/cpc_metacog_utils/cpc_AUtype2roc.m: -------------------------------------------------------------------------------- 1 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 2 | %%%%%%%%%%% CPC METACOGNITION TUTORIAL 2019: CALCULATE TYPE2ROC %%%%%%%%%%% 3 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 4 | 5 | % Function to calculate the area under the type2 ROC curve 6 | 7 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 8 | function auroc2 = cpc_AUtype2roc(nR_S1, nR_S2, Nratings) 9 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 10 | 11 | temp_FA1 = fliplr(nR_S1); 12 | temp_FA2 = fliplr(nR_S2); 13 | 14 | for c = 1:Nratings 15 | S1_H2(c) = nR_S1(c) + 0.5; 16 | S2_H2(c) = nR_S2(c) + 0.5; 17 | S1_FA2(c) = temp_FA1(c) + 0.5; 18 | S2_FA2(c) = temp_FA2(c) + 0.5; 19 | end 20 | 21 | H2 = S1_H2 + S2_H2; 22 | FA2 = S1_FA2 + S2_FA2; 23 | 24 | H2 = H2./sum(H2); 25 | FA2 = FA2./sum(FA2); 26 | cum_H2 = [0 cumsum(H2)]; 27 | cum_FA2 = [0 cumsum(FA2)]; 28 | 29 | i=1; 30 | for c = 1:Nratings 31 | k(i) = (cum_H2(c+1) - cum_FA2(c))^2 - (cum_H2(c) - cum_FA2(c+1))^2; 32 | i = i+1; 33 | end 34 | auroc2 = 0.5 + 0.25*sum(k); 35 | 36 | 37 | end -------------------------------------------------------------------------------- /CPC_metacog_tutorial/cpc_metacog_utils/cpc_calcAU_type2roc.m: -------------------------------------------------------------------------------- 1 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 2 | %%%%%%%%%%% CPC METACOGNITION TUTORIAL 2019: CALCULATE TYPE2ROC %%%%%%%%%%% 3 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 4 | 5 | % Function to calculate the area under the type2 ROC curve 6 | 7 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 8 | function auroc2 = cpc_calcAU_type2roc(data) 9 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 10 | 11 | nR_S1 = data.responses.nR_S1; 12 | nR_S2 = data.responses.nR_S2; 13 | 14 | Nratings = length(nR_S1) / 2; 15 | 16 | flip_nR_S1 = fliplr(nR_S1); 17 | flip_nR_S2 = fliplr(nR_S2); 18 | 19 | for c = 1:Nratings 20 | S1_H2(c) = nR_S1(c) + 0.5; 21 | S2_H2(c) = flip_nR_S2(c) + 0.5; 22 | S1_FA2(c) = flip_nR_S1(c) + 0.5; 23 | S2_FA2(c) = nR_S2(c) + 0.5; 24 | end 25 | 26 | H2 = S1_H2 + S2_H2; 27 | FA2 = S1_FA2 + S2_FA2; 28 | 29 | H2 = H2./sum(H2); 30 | FA2 = FA2./sum(FA2); 31 | cum_H2 = [0 cumsum(H2)]; 32 | cum_FA2 = [0 cumsum(FA2)]; 33 | 34 | i=1; 35 | for c = 1:Nratings 36 | k(i) = (cum_H2(c+1) - cum_FA2(c))^2 - (cum_H2(c) - cum_FA2(c+1))^2; 37 | i = i+1; 38 | end 39 | auroc2 = 0.5 + 0.25*sum(k); 40 | 41 | 42 | end -------------------------------------------------------------------------------- /CPC_metacog_tutorial/cpc_metacog_utils/cpc_metad_sim.m: 
-------------------------------------------------------------------------------- 1 | function sim = cpc_metad_sim(d, metad, c, nRatings, Ntrials) 2 | % sim = metad_sim(d, metad, c, c1, c2, Ntrials) 3 | % 4 | % INPUTS 5 | % d - type 1 dprime 6 | % metad - type 2 sensitivity in units of type 1 dprime 7 | % 8 | % c - type 1 criterion 9 | % c1 - type 2 criteria for S1 response 10 | % c2 - type 2 criteria for S2 response 11 | % Ntrials - number of trials to simulate, assumes equal S/N 12 | % 13 | % OUTPUT 14 | % 15 | % sim - structure containing nR_S1 and nR_S2 response counts 16 | % 17 | % SF 2014 18 | 19 | % Specify the confidence criterions based on the number of ratings 20 | c1 = c + linspace(-1.5, -0.5, (nRatings - 1)); 21 | c2 = c + linspace(0.5, 1.5, (nRatings - 1)); 22 | 23 | % Calc type 1 response counts 24 | H = round((1-normcdf(c,d/2)).*(Ntrials/2)); 25 | FA = round((1-normcdf(c,-d/2)).*(Ntrials/2)); 26 | CR = round(normcdf(c,-d/2).*(Ntrials/2)); 27 | M = round(normcdf(c,d/2).*(Ntrials/2)); 28 | 29 | % Calc type 2 probabilities 30 | S1mu = -metad/2; 31 | S2mu = metad/2; 32 | 33 | % Normalising constants 34 | C_area_rS1 = normcdf(c,S1mu); 35 | I_area_rS1 = normcdf(c,S2mu); 36 | C_area_rS2 = 1-normcdf(c,S2mu); 37 | I_area_rS2 = 1-normcdf(c,S1mu); 38 | 39 | t2c1x = [-Inf c1 c c2 Inf]; 40 | 41 | for i = 1:nRatings 42 | prC_rS1(i) = ( normcdf(t2c1x(i+1),S1mu) - normcdf(t2c1x(i),S1mu) ) / C_area_rS1; 43 | prI_rS1(i) = ( normcdf(t2c1x(i+1),S2mu) - normcdf(t2c1x(i),S2mu) ) / I_area_rS1; 44 | 45 | prC_rS2(i) = ( (1-normcdf(t2c1x(nRatings+i),S2mu)) - (1-normcdf(t2c1x(nRatings+i+1),S2mu)) ) / C_area_rS2; 46 | prI_rS2(i) = ( (1-normcdf(t2c1x(nRatings+i),S1mu)) - (1-normcdf(t2c1x(nRatings+i+1),S1mu)) ) / I_area_rS2; 47 | end 48 | 49 | % Ensure vectors sum to 1 to avoid problems with mnrnd 50 | prC_rS1 = prC_rS1./sum(prC_rS1); 51 | prI_rS1 = prI_rS1./sum(prI_rS1); 52 | prC_rS2 = prC_rS2./sum(prC_rS2); 53 | prI_rS2 = prI_rS2./sum(prI_rS2); 54 | 55 | % Sample 4 response classes from multinomial distirbution (normalised 56 | % within each response class) 57 | nC_rS1 = mnrnd(CR,prC_rS1); 58 | nI_rS1 = mnrnd(M,prI_rS1); 59 | nC_rS2 = mnrnd(H,prC_rS2); 60 | nI_rS2 = mnrnd(FA,prI_rS2); 61 | 62 | % Add to data vectors 63 | sim.nR_S1 = [nC_rS1 nI_rS2]; 64 | sim.nR_S2 = [nI_rS1 nC_rS2]; 65 | -------------------------------------------------------------------------------- /CPC_metacog_tutorial/cpc_metacog_utils/cpc_plot_confidence.m: -------------------------------------------------------------------------------- 1 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 2 | %%%%%%%%%%%% CPC METACOGNITION TUTORIAL 2019: PLOT CONFIDENCE %%%%%%%%%%%%% 3 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 4 | 5 | % Function to plot confidence scores from simulated and fit data 6 | 7 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 8 | function cpc_plot_confidence(data, titleText, toPlot) 9 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 10 | 11 | % Show model fit in terms of conditional probability of confidence ratings, collapse 12 | % over responses 13 | if strcmp(toPlot, 'est') == 1 14 | if isfield(data.fit, 'meta_d') 15 | meta_d = data.fit.meta_d; 16 | elseif isfield(data.fit, 'meta_da') 17 | meta_d = data.fit.meta_da; 18 | end 19 | mu1 = meta_d./2; 20 | mean_c2 = (data.fit.t2ca_rS2 + abs(data.fit.t2ca_rS1(end:-1:1)))./2; 21 | I_area = 1-normcdf(0,-mu1,1); 22 | C_area = 1-normcdf(0,mu1,1); 23 | allC = 
[0 mean_c2 Inf]; 24 | for i = 1:length(allC)-1 25 | I_prop_model(i) = (normcdf(allC(i+1), -mu1, 1) - normcdf(allC(i), -mu1, 1))./I_area; 26 | C_prop_model(i) = (normcdf(allC(i+1), mu1, 1) - normcdf(allC(i), mu1, 1))./C_area; 27 | end 28 | end 29 | 30 | % Get the number of ratings 31 | Nrating = length(data.responses.nR_S1) / 2; 32 | 33 | % Collapse across two stimuli for correct and incorrect responses 34 | obsCount = (data.responses.nR_S1 + data.responses.nR_S2(end:-1:1)); % this gives flipped corrects followed by incorrects 35 | C_prop_data = fliplr(obsCount(1:Nrating))./sum(obsCount(1:Nrating)); 36 | I_prop_data = obsCount(Nrating+1:2*Nrating)./sum(obsCount(Nrating+1:2*Nrating)); 37 | 38 | % Plot responses 39 | if strcmp(toPlot, 'obs') == 1 40 | bar([1:Nrating]-0.2, I_prop_data, 0.3, 'r', 'LineWidth', 2); 41 | hold on 42 | bar([1:Nrating]+0.2, C_prop_data, 0.3, 'g', 'LineWidth', 2); 43 | legend('Obs Incorrect','Obs Correct','Location','NorthEast'); 44 | title(titleText); 45 | elseif strcmp(toPlot, 'est') == 1 46 | bar([1:Nrating]-0.2, I_prop_data, 0.3, 'r', 'LineWidth', 2); 47 | hold on 48 | plot([1:Nrating]-0.2, I_prop_model, 'ro ', 'MarkerSize', 8, 'LineWidth', 2, 'MarkerEdgeColor','r', 'MarkerFaceColor', [1 1 1]); 49 | bar([1:Nrating]+0.2, C_prop_data, 0.3, 'g', 'LineWidth', 2); 50 | plot([1:Nrating]+0.2, C_prop_model, 'go ', 'MarkerSize', 8, 'LineWidth', 2, 'MarkerEdgeColor','g', 'MarkerFaceColor', [1 1 1]); 51 | legend('Obs Incorrect','Est Incorrect','Correct','Est Correct','Location','NorthEast'); 52 | title([titleText, ': ', sprintf('meta-d = %.2f\n', meta_d)]); 53 | end 54 | ylabel('P(conf = y) | outcome)'); 55 | xlabel('Confidence rating'); 56 | set(gca, 'FontSize', 14, 'XTick', [1:Nrating], 'YLim', [0 0.7]) 57 | box off 58 | 59 | end 60 | -------------------------------------------------------------------------------- /CPC_metacog_tutorial/cpc_metacog_utils/cpc_plot_simVfit.m: -------------------------------------------------------------------------------- 1 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 2 | %%%%%%%%%%%%% CPC METACOGNITION TUTORIAL 2019: PLOT TYPE2ROC %%%%%%%%%%%%%% 3 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 4 | 5 | % Function to plot type2 ROC curve from observed ± estimated data fit 6 | 7 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 8 | function cpc_plot_simVfit(sim, fit, parameter, fitType) 9 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 10 | 11 | clear est 12 | 13 | % Pull out parameter of interest 14 | if strcmp(fitType, 'single') == 1 15 | for a = 1:length(sim.params.meta_d) 16 | for b = 1:sim.params.Nsims 17 | sim.meta_d(b,a) = sim.values{a}.meta_d(b); 18 | sim.Mratio(b,a) = sim.values{a}.meta_d(b) / sim.values{a}.d(b); 19 | est.meta_d(b,a) = fit{b,a}.meta_d; 20 | est.Mratio(b,a) = fit{b,a}.M_ratio; 21 | end 22 | end 23 | elseif strcmp(fitType, 'group') == 1 24 | for a = 1:length(sim.params.meta_d) 25 | for b = 1:sim.params.Nsims 26 | sim.meta_d(b,a) = sim.values{a}.meta_d(b); 27 | sim.Mratio(b,a) = sim.values{a}.meta_d(b) / sim.values{a}.d(b); 28 | est.meta_d(b,a) = fit{a}.meta_d(b); 29 | est.Mratio(b,a) = fit{a}.Mratio(b); 30 | end 31 | end 32 | end 33 | 34 | figure 35 | hold on 36 | if strcmp(parameter, 'meta-d') == 1 37 | for a = 1:sim.params.Nsims 38 | scatter(sim.meta_d(a,:), est.meta_d(a,:), 'k'); 39 | end 40 | xlabel('Simulated meta-d'); 41 | ylabel('Recovered meta-d'); 42 | title('META-D PARAMETER RECOVERY'); 43 | 
rf1 = refline(1, 0); 44 | rf1.LineStyle = '--'; 45 | rf1.Color = 'k'; 46 | rf1.LineWidth = 1; 47 | rf2 = refline(0, 0); 48 | rf2.LineStyle = ':'; 49 | rf2.Color = 'k'; 50 | rf2.LineWidth = 1.5; 51 | elseif strcmp(parameter, 'Mratio') == 1 52 | for a = 1:sim.params.Nsims 53 | scatter(sim.Mratio(a,:), est.Mratio(a,:), 'k'); 54 | end 55 | xlabel('Simulated Mratio'); 56 | ylabel('Recovered Mratio'); 57 | title('MRATIO PARAMETER RECOVERY'); 58 | rf1 = refline(1, 0); 59 | rf1.LineStyle = '--'; 60 | rf1.Color = 'k'; 61 | rf1.LineWidth = 1; 62 | rf2 = refline(0, 0); 63 | rf2.LineStyle = ':'; 64 | rf2.Color = 'k'; 65 | rf2.LineWidth = 1.5; 66 | elseif strcmp(fitType, 'regression') == 1 && strcmp(parameter, 'log(Mratio)') == 1 67 | S1 = scatter(sim.values.covZscored, log(sim.fit.bayesGroupMean.Mratio)); 68 | refline 69 | S2 = scatter(sim.values.covZscored, log(sim.fit.bayesGroupRegression.Mratio)); 70 | refline 71 | xlabel('Covariate'); 72 | ylabel('Recovered log(Mratio)'); 73 | title('REGRESSION COMPARISON'); 74 | legend([S1 S2], 'HMeta-d','RHMeta-d','Location','SouthEast'); 75 | end 76 | 77 | set(gca, 'FontSize', 12) 78 | 79 | end -------------------------------------------------------------------------------- /CPC_metacog_tutorial/cpc_metacog_utils/cpc_plot_type2roc.m: -------------------------------------------------------------------------------- 1 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 2 | %%%%%%%%%%%%% CPC METACOGNITION TUTORIAL 2019: PLOT TYPE2ROC %%%%%%%%%%%%%% 3 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 4 | 5 | % Function to plot type2 ROC curve from observed ± estimated data fit 6 | 7 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 8 | function cpc_plot_type2roc(data, titleText, toPlot) 9 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 10 | 11 | if strcmp(toPlot, 'obs') == 1 12 | nRatings = length(data.responses.nR_S1) / 2; 13 | % Find incorrect observed ratings... 14 | I_nR_rS2 = data.responses.nR_S1(nRatings+1:end); 15 | I_nR_rS1 = data.responses.nR_S2(nRatings:-1:1); 16 | I_nR = I_nR_rS2 + I_nR_rS1; 17 | % Find correct observed ratings... 
18 | C_nR_rS2 = data.responses.nR_S2(nRatings+1:end); 19 | C_nR_rS1 = data.responses.nR_S1(nRatings:-1:1); 20 | C_nR = C_nR_rS2 + C_nR_rS1; 21 | % Calculate type 2 hits and false alarms 22 | for i = 1:nRatings 23 | obs_FAR2_rS2(i) = sum( I_nR_rS2(i:end) ) / sum(I_nR_rS2); 24 | obs_HR2_rS2(i) = sum( C_nR_rS2(i:end) ) / sum(C_nR_rS2); 25 | obs_FAR2_rS1(i) = sum( I_nR_rS1(i:end) ) / sum(I_nR_rS1); 26 | obs_HR2_rS1(i) = sum( C_nR_rS1(i:end) ) / sum(C_nR_rS1); 27 | obs_FAR2(i) = sum( I_nR(i:end) ) / sum(I_nR); 28 | obs_HR2(i) = sum( C_nR(i:end) ) / sum(C_nR); 29 | end 30 | obs_FAR2(nRatings+1) = 0; 31 | obs_HR2(nRatings+1) = 0; 32 | plot(obs_FAR2, obs_HR2, 'ko-', 'linewidth', 1.5, 'markersize', 12); 33 | hold on 34 | legend('Observed','Location','SouthEast'); 35 | title(['TYPE2 ROC CURVE: ', titleText]); 36 | ylabel('TYPE2 P(CORRECT)'); 37 | xlabel('TYPE2 P(INCORRECT)'); 38 | elseif strcmp(toPlot, 'est') == 1 39 | figure 40 | set(gcf, 'Units', 'normalized'); 41 | set(gcf, 'Position', [0.2 0.2 0.5 0.43]); 42 | subplot(1,2,1); 43 | plot(data.fit.obs_FAR2_rS1, data.fit.obs_HR2_rS1, 'ko-','linewidth',1.5,'markersize',12); 44 | hold on 45 | plot(data.fit.est_FAR2_rS1, data.fit.est_HR2_rS1, '+-','color',[0.5 0.5 0.5], 'linewidth',1.5,'markersize',10); 46 | legend('Observed','Estimated','Location','SouthEast'); 47 | text(0.5, 1.15, [titleText, ': ', sprintf('fit meta-d = %.2f\n', data.fit.meta_d)], 'FontSize', 25, 'FontWeight', 'bold'); 48 | title('STIMULUS 1'); 49 | ylabel('TYPE2 HR'); 50 | xlabel('TYPE2 FAR'); 51 | end 52 | set(gca, 'XLim', [0 1], 'YLim', [0 1], 'FontSize', 16); 53 | line([0 1],[0 1],'linestyle','--','color','k','HandleVisibility','off'); 54 | axis square 55 | 56 | if strcmp(toPlot, 'est') == 1 57 | subplot(1,2,2); 58 | if strcmp(toPlot, 'obs') == 1 59 | plot(obs_FAR2_rS2, obs_HR2_rS2, 'ko-', 'linewidth', 1.5, 'markersize', 12); 60 | hold on 61 | legend('Observed','Location','SouthEast'); 62 | elseif strcmp(toPlot, 'est') == 1 63 | plot(data.fit.obs_FAR2_rS2, data.fit.obs_HR2_rS2, 'ko-','linewidth',1.5,'markersize',12); 64 | hold on 65 | plot(data.fit.est_FAR2_rS2, data.fit.est_HR2_rS2, '+-','color',[0.5 0.5 0.5], 'linewidth',1.5,'markersize',10); 66 | legend('Observed','Estimated','Location','SouthEast'); 67 | end 68 | set(gca, 'XLim', [0 1], 'YLim', [0 1], 'FontSize', 16); 69 | ylabel('TYPE2 HR'); 70 | xlabel('TYPE2 FAR'); 71 | line([0 1],[0 1],'linestyle','--','color','k','HandleVisibility','off'); 72 | title('STIMULUS 2'); 73 | axis square 74 | 75 | hold off 76 | end 77 | 78 | end -------------------------------------------------------------------------------- /CPC_metacog_tutorial/cpc_metacog_utils/cpc_type2_SDT_sim.m: -------------------------------------------------------------------------------- 1 | function sim = cpc_type2_SDT_sim(d, noise, c, Nratings, Ntrials) 2 | % Type 2 SDT simulation with variable noise 3 | % sim = type2_SDT_sim(d, noise, c, c1, c2, Ntrials) 4 | % 5 | % INPUTS 6 | % d - type 1 dprime 7 | % noise - standard deviation of noise to be added to type 1 internal 8 | % response for type 2 judgment. 
If noise is a 1 x 2 vector then this will 9 | % simulate response-conditional type 2 data where noise = [sigma_rS1 10 | % sigma_rS2] 11 | % 12 | % c - type 1 criterion 13 | % c1 - type 2 criteria for S1 response 14 | % c2 - type 2 criteria for S2 response 15 | % Ntrials - number of trials to simulate 16 | % 17 | % OUTPUT 18 | % 19 | % sim - structure containing nR_S1 and nR_S2 response counts 20 | % 21 | % SF 2014 22 | 23 | % Specify the confidence criterions based on the number of ratings 24 | c1 = c + linspace(-1.5, -0.5, (Nratings - 1)); 25 | c2 = c + linspace(0.5, 1.5, (Nratings - 1)); 26 | 27 | if length(noise) > 1 28 | rc = 1; 29 | sigma1 = noise(1); 30 | sigma2 = noise(2); 31 | else 32 | rc = 0; 33 | sigma = noise; 34 | end 35 | 36 | S1mu = -d/2; 37 | S2mu = d/2; 38 | 39 | % Initialise response arrays 40 | nC_rS1 = zeros(1, length(c1)+1); 41 | nI_rS1 = zeros(1, length(c1)+1); 42 | nC_rS2 = zeros(1, length(c2)+1); 43 | nI_rS2 = zeros(1, length(c2)+1); 44 | 45 | for t = 1:Ntrials 46 | s = round(rand); 47 | 48 | % Type 1 SDT model 49 | if s == 1 50 | x = normrnd(S2mu, 1); 51 | else 52 | x = normrnd(S1mu, 1); 53 | end 54 | 55 | % Add type 2 noise to signal 56 | if rc % add response-conditional noise 57 | if x < c 58 | if sigma1 > 0 59 | x2 = normrnd(x, sigma1); 60 | else 61 | x2 = x; 62 | end 63 | else 64 | if sigma2 > 0 65 | x2 = normrnd(x, sigma2); 66 | else 67 | x2 = x; 68 | end 69 | end 70 | else 71 | if sigma > 0 72 | x2 = normrnd(x,sigma); 73 | else 74 | x2 = x; 75 | end 76 | end 77 | 78 | % Generate confidence ratings 79 | if s == 0 && x < c % stimulus S1 and response S1 80 | pos = (x2 <= [c1 c]); 81 | [y ind] = find(pos); 82 | i = min(ind); 83 | nC_rS1(i) = nC_rS1(i) + 1; 84 | 85 | elseif s == 0 && x >= c % stimulus S1 and response S2 86 | pos = (x2 >= [c c2]); 87 | [y ind] = find(pos); 88 | i = max(ind); 89 | nI_rS2(i) = nI_rS2(i) + 1; 90 | 91 | elseif s == 1 && x < c % stimulus S2 and response S1 92 | pos = (x2 <= [c1 c]); 93 | [y ind] = find(pos); 94 | i = min(ind); 95 | nI_rS1(i) = nI_rS1(i) + 1; 96 | 97 | elseif s == 1 && x >= c % stimulus S2 and response S2 98 | pos = (x2 >= [c c2]); 99 | [y ind] = find(pos); 100 | i = max(ind); 101 | nC_rS2(i) = nC_rS2(i) + 1; 102 | end 103 | 104 | end 105 | 106 | sim.nR_S1 = [nC_rS1 nI_rS2]; 107 | sim.nR_S2 = [nI_rS1 nC_rS2]; 108 | -------------------------------------------------------------------------------- /CPC_metacog_tutorial/cpc_metacog_utils/cpc_type2roc.m: -------------------------------------------------------------------------------- 1 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 2 | %%%%%%%%%%% CPC METACOGNITION TUTORIAL 2019: CALCULATE TYPE2ROC %%%%%%%%%%% 3 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 4 | 5 | % Function to calculate the area under the type2 ROC curve 6 | 7 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 8 | function auroc2 = cpc_type2roc(nR_S1, nR_S2, Nratings) 9 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 10 | 11 | flipped_nR_S1 = fliplr(nR_S1); 12 | flipped_nR_S2 = fliplr(nR_S2); 13 | 14 | for c = 1:Nratings 15 | S1_H2(c) = nR_S1(c) + 0.5; 16 | S2_H2(c) = flipped_nR_S2(c) + 0.5; 17 | S1_FA2(c) = flipped_nR_S1(c) + 0.5; 18 | S2_FA2(c) = nR_S2(c) + 0.5; 19 | end 20 | 21 | H2 = S1_H2 + S2_H2; 22 | FA2 = S1_FA2 + S2_FA2; 23 | 24 | H2 = H2./sum(H2); 25 | FA2 = FA2./sum(FA2); 26 | cum_H2 = [0 cumsum(H2)]; 27 | cum_FA2 = [0 cumsum(FA2)]; 28 | 29 | i=1; 30 | for c = 1:Nratings 31 | 
k(i) = (cum_H2(c+1) - cum_FA2(c))^2 - (cum_H2(c) - cum_FA2(c+1))^2; 32 | i = i+1; 33 | end 34 | auroc2 = 0.5 + 0.25*sum(k); 35 | 36 | 37 | end -------------------------------------------------------------------------------- /CPC_metacog_tutorial/cpc_metacog_utils/fit_meta_d_MLE.m: -------------------------------------------------------------------------------- 1 | 2 | % ------------------------------------------------------------------------- 3 | % Downloaded 10.08.2018 from: 4 | % http://www.columbia.edu/~bsm2105/type2sdt/archive/index.html 5 | % Author: Brian Maniscalco 6 | % Contact: brian@psych.columbia.edu 7 | 8 | % 10.08.2018: OKF added calculation for type 1 c to the results 9 | 10 | % NOTE FROM WEBPAGE: 11 | % If you use this analysis file, please reference the Consciousness & 12 | % Cognition paper: 13 | % Maniscalco, B., & Lau, H. (2012). A signal detection 14 | % theoretic approach for estimating metacognitive sensitivity from 15 | % confidence ratings. Consciousness and Cognition, 21(1), 422–430. 16 | % doi:10.1016/j.concog.2011.09.021) 17 | % And the website: 18 | % http://www.columbia.edu/~bsm2105/type2sdt/archive/index.html 19 | % ------------------------------------------------------------------------- 20 | 21 | function fit = fit_meta_d_MLE(nR_S1, nR_S2, s, fncdf, fninv) 22 | 23 | % fit = fit_meta_d_MLE(nR_S1, nR_S2, s, fncdf, fninv) 24 | % 25 | % Given data from an experiment where an observer discriminates between two 26 | % stimulus alternatives on every trial and provides confidence ratings, 27 | % provides a type 2 SDT analysis of the data. 28 | % 29 | % INPUTS 30 | % 31 | % * nR_S1, nR_S2 32 | % these are vectors containing the total number of responses in 33 | % each response category, conditional on presentation of S1 and S2. 34 | % 35 | % e.g. if nR_S1 = [100 50 20 10 5 1], then when stimulus S1 was 36 | % presented, the subject had the following response counts: 37 | % responded S1, rating=3 : 100 times 38 | % responded S1, rating=2 : 50 times 39 | % responded S1, rating=1 : 20 times 40 | % responded S2, rating=1 : 10 times 41 | % responded S2, rating=2 : 5 times 42 | % responded S2, rating=3 : 1 time 43 | % 44 | % The ordering of response / rating counts for S2 should be the same as it 45 | % is for S1. e.g. if nR_S2 = [3 7 8 12 27 89], then when stimulus S2 was 46 | % presented, the subject had the following response counts: 47 | % responded S1, rating=3 : 3 times 48 | % responded S1, rating=2 : 7 times 49 | % responded S1, rating=1 : 8 times 50 | % responded S2, rating=1 : 12 times 51 | % responded S2, rating=2 : 27 times 52 | % responded S2, rating=3 : 89 times 53 | % 54 | % N.B. if nR_S1 or nR_S2 contain zeros, this may interfere with estimation of 55 | % meta-d'. 56 | % 57 | % Some options for dealing with response cell counts containing zeros are: 58 | % 59 | % (1) Add a small adjustment factor, e.g. adj_f = 1/(length(nR_S1), to each 60 | % input vector: 61 | % 62 | % adj_f = 1/length(nR_S1); 63 | % nR_S1_adj = nR_S1 + adj_f; 64 | % nR_S2_adj = nR_S2 + adj_f; 65 | % 66 | % This is a generalization of the correction for similar estimation issues of 67 | % type 1 d' as recommended in 68 | % 69 | % Hautus, M. J. (1995). Corrections for extreme proportions and their biasing 70 | % effects on estimated values of d'. Behavior Research Methods, Instruments, 71 | % & Computers, 27, 46-51. 
72 | % 73 | % When using this correction method, it is recommended to add the adjustment 74 | % factor to ALL data for all subjects, even for those subjects whose data is 75 | % not in need of such correction, in order to avoid biases in the analysis 76 | % (cf Snodgrass & Corwin, 1988). 77 | % 78 | % (2) Collapse across rating categories. 79 | % 80 | % e.g. if your data set has 4 possible confidence ratings such that length(nR_S1)==8, 81 | % defining new input vectors 82 | % 83 | % nR_S1_new = [sum(nR_S1(1:2)), sum(nR_S1(3:4)), sum(nR_S1(5:6)), sum(nR_S1(7:8))]; 84 | % nR_S2_new = [sum(nR_S2(1:2)), sum(nR_S2(3:4)), sum(nR_S2(5:6)), sum(nR_S2(7:8))]; 85 | % 86 | % might be sufficient to eliminate zeros from the input without using an adjustment. 87 | % 88 | % * s 89 | % this is the ratio of standard deviations for type 1 distributions, i.e. 90 | % 91 | % s = sd(S1) / sd(S2) 92 | % 93 | % if not specified, s is set to a default value of 1. 94 | % For most purposes, we recommend setting s = 1. 95 | % See http://www.columbia.edu/~bsm2105/type2sdt for further discussion. 96 | % 97 | % * fncdf 98 | % a function handle for the CDF of the type 1 distribution. 99 | % if not specified, fncdf defaults to @normcdf (i.e. CDF for normal 100 | % distribution) 101 | % 102 | % * fninv 103 | % a function handle for the inverse CDF of the type 1 distribution. 104 | % if not specified, fninv defaults to @norminv 105 | % 106 | % OUTPUT 107 | % 108 | % Output is packaged in the struct "fit." 109 | % In the following, let S1 and S2 represent the distributions of evidence 110 | % generated by stimulus classes S1 and S2. 111 | % Then the fields of "fit" are as follows: 112 | % 113 | % fit.d1 = mean(S2) - mean(S1), in room-mean-square(sd(S1),sd(S2)) units 114 | % fit.c1 = type 1 criterion 115 | % fit.s = sd(S1) / sd(S2) 116 | % fit.meta_d = meta-d' in RMS units 117 | % fit.M_diff = meta_da - da 118 | % fit.M_ratio = meta_da / da 119 | % fit.meta_c1 = type 1 criterion for meta-d' fit, RMS units 120 | % fit.t2ca_rS1 = type 2 criteria of "S1" responses for meta-d' fit, RMS units 121 | % fit.t2ca_rS2 = type 2 criteria of "S2" responses for meta-d' fit, RMS units 122 | % 123 | % fit.S1units = contains same parameters in sd(S1) units. 124 | % these may be of use since the data-fitting is conducted 125 | % using parameters specified in sd(S1) units. 126 | % 127 | % fit.logL = log likelihood of the data fit 128 | % 129 | % fit.est_HR2_rS1 = estimated (from meta-d' fit) type 2 hit rates for S1 responses 130 | % fit.obs_HR2_rS1 = actual type 2 hit rates for S1 responses 131 | % fit.est_FAR2_rS1 = estimated type 2 false alarm rates for S1 responses 132 | % fit.obs_FAR2_rS1 = actual type 2 false alarm rates for S1 responses 133 | % 134 | % fit.est_HR2_rS2 = estimated type 2 hit rates for S2 responses 135 | % fit.obs_HR2_rS2 = actual type 2 hit rates for S2 responses 136 | % fit.est_FAR2_rS2 = estimated type 2 false alarm rates for S2 responses 137 | % fit.obs_FAR2_rS2 = actual type 2 false alarm rates for S2 responses 138 | % 139 | % If there are N ratings, then there will be N-1 type 2 hit rates and false 140 | % alarm rates. 141 | 142 | % 2015/07/23 - fixed bug for output fit.meta_ca and fit.S1units.meta_c1. 143 | % - added comments to help section as well as a warning output 144 | % for nR_S1 or nR_S2 inputs containing zeros 145 | % 2014/10/14 - updated discussion of "s" input in the help section above. 
146 | % 2010/09/07 - created 147 | 148 | 149 | %% parse inputs 150 | 151 | % check inputs 152 | if ~mod(length(nR_S1),2)==0, error('input arrays must have an even number of elements'); end 153 | if length(nR_S1)~=length(nR_S2), error('input arrays must have the same number of elements'); end 154 | 155 | if any(nR_S1 == 0) || any(nR_S2 == 0) 156 | disp(' ') 157 | disp('WARNING!!') 158 | disp('---------') 159 | disp('Your inputs') 160 | disp(' ') 161 | disp(['nR_S1 = [' num2str(nR_S1) ']']) 162 | disp(['nR_S2 = [' num2str(nR_S2) ']']) 163 | disp(' ') 164 | disp('contain zeros! This may interfere with proper estimation of meta-d''.') 165 | disp('See ''help fit_meta_d_MLE'' for more information.') 166 | disp(' ') 167 | disp(' ') 168 | end 169 | 170 | % assign input default values 171 | if ~exist('s','var') || isempty(s) 172 | s = 1; 173 | end 174 | 175 | if ~exist('fncdf','var') || isempty(fncdf) 176 | fncdf = @normcdf; 177 | end 178 | 179 | if ~exist('fninv','var') || isempty(fninv) 180 | fninv = @norminv; 181 | end 182 | 183 | nRatings = length(nR_S1) / 2; 184 | nCriteria = 2*nRatings - 1; 185 | 186 | 187 | %% set up constraints for MLE estimation 188 | 189 | % parameters 190 | % meta-d' - 1 191 | % t2c - nCriteria-1 192 | 193 | A = []; 194 | b = []; 195 | 196 | % constrain type 2 criteria values, 197 | % such that t2c(i) is always <= t2c(i+1) 198 | % want t2c(i) <= t2c(i+1) 199 | % --> t2c(i+1) >= c(i) + 1e-5 (i.e. very small deviation from equality) 200 | % --> t2c(i) - t2c(i+1) <= -1e-5 201 | for i = 2 : nCriteria-1 202 | 203 | A(end+1,[i i+1]) = [1 -1]; 204 | b(end+1) = -1e-5; 205 | 206 | end 207 | 208 | % lower bounds on parameters 209 | LB = []; 210 | LB = [LB -10]; % meta-d' 211 | LB = [LB -20*ones(1,(nCriteria-1)/2)]; % criteria lower than t1c 212 | LB = [LB zeros(1,(nCriteria-1)/2)]; % criteria higher than t1c 213 | 214 | % upper bounds on parameters 215 | UB = []; 216 | UB = [UB 10]; % meta-d' 217 | UB = [UB zeros(1,(nCriteria-1)/2)]; % criteria lower than t1c 218 | UB = [UB 20*ones(1,(nCriteria-1)/2)]; % criteria higher than t1c 219 | 220 | 221 | %% select constant criterion type 222 | 223 | constant_criterion = 'meta_d1 * (t1c1 / d1)'; % relative criterion 224 | 225 | 226 | %% set up initial guess at parameter values 227 | 228 | ratingHR = []; 229 | ratingFAR = []; 230 | for c = 2:nRatings*2 231 | ratingHR(end+1) = sum(nR_S2(c:end)) / sum(nR_S2); 232 | ratingFAR(end+1) = sum(nR_S1(c:end)) / sum(nR_S1); 233 | end 234 | 235 | t1_index = nRatings; 236 | t2_index = setdiff(1:2*nRatings-1, t1_index); 237 | 238 | d1 = (1/s) * fninv( ratingHR(t1_index) ) - fninv( ratingFAR(t1_index) ); 239 | meta_d1 = d1; 240 | c = -0.5 .* (fninv(ratingHR(t1_index)) + fninv(ratingFAR(t1_index))); 241 | c1 = (-1/(1+s)) * ( fninv( ratingHR ) + fninv( ratingFAR ) ); 242 | t1c1 = c1(t1_index); 243 | t2c1 = c1(t2_index); 244 | 245 | guess = [meta_d1 t2c1 - eval(constant_criterion)]; 246 | 247 | 248 | 249 | %% find the best fit for type 2 hits and FAs 250 | 251 | % save fit_meta_d_MLE.mat nR_S1 nR_S2 t2FAR_rS2 t2HR_rS2 t2FAR_rS1 t2HR_rS1 nRatings t1c1 s d1 fncdf constant_criterion 252 | save fit_meta_d_MLE.mat nR_S1 nR_S2 nRatings d1 t1c1 s constant_criterion fncdf fninv 253 | 254 | op = optimset(@fmincon); 255 | op = optimset(op,'MaxFunEvals',100000); 256 | 257 | [x f] = fmincon(@fit_meta_d_logL,guess,A,b,[],[],LB,UB,[],op); 258 | 259 | meta_d1 = x(1); 260 | t2c1 = x(2:end) + eval(constant_criterion); 261 | logL = -f; 262 | 263 | 264 | %% data is fit, now to package it... 
265 | 266 | %% find observed t2FAR and t2HR 267 | 268 | % I_nR and C_nR are rating trial counts for incorrect and correct trials 269 | % element i corresponds to # (in)correct w/ rating i 270 | I_nR_rS2 = nR_S1(nRatings+1:end); 271 | I_nR_rS1 = nR_S2(nRatings:-1:1); 272 | 273 | C_nR_rS2 = nR_S2(nRatings+1:end); 274 | C_nR_rS1 = nR_S1(nRatings:-1:1); 275 | 276 | for i = 2:nRatings 277 | obs_FAR2_rS2(i-1) = sum( I_nR_rS2(i:end) ) / sum(I_nR_rS2); 278 | obs_HR2_rS2(i-1) = sum( C_nR_rS2(i:end) ) / sum(C_nR_rS2); 279 | 280 | obs_FAR2_rS1(i-1) = sum( I_nR_rS1(i:end) ) / sum(I_nR_rS1); 281 | obs_HR2_rS1(i-1) = sum( C_nR_rS1(i:end) ) / sum(C_nR_rS1); 282 | end 283 | 284 | 285 | %% find estimated t2FAR and t2HR 286 | 287 | S1mu = -meta_d1/2; S1sd = 1; 288 | S2mu = meta_d1/2; S2sd = S1sd/s; 289 | 290 | mt1c1 = eval(constant_criterion); 291 | 292 | C_area_rS2 = 1-fncdf(mt1c1,S2mu,S2sd); 293 | I_area_rS2 = 1-fncdf(mt1c1,S1mu,S1sd); 294 | 295 | C_area_rS1 = fncdf(mt1c1,S1mu,S1sd); 296 | I_area_rS1 = fncdf(mt1c1,S2mu,S2sd); 297 | 298 | for i=1:nRatings-1 299 | 300 | t2c1_lower = t2c1(nRatings-i); 301 | t2c1_upper = t2c1(nRatings-1+i); 302 | 303 | I_FAR_area_rS2 = 1-fncdf(t2c1_upper,S1mu,S1sd); 304 | C_HR_area_rS2 = 1-fncdf(t2c1_upper,S2mu,S2sd); 305 | 306 | I_FAR_area_rS1 = fncdf(t2c1_lower,S2mu,S2sd); 307 | C_HR_area_rS1 = fncdf(t2c1_lower,S1mu,S1sd); 308 | 309 | 310 | est_FAR2_rS2(i) = I_FAR_area_rS2 / I_area_rS2; 311 | est_HR2_rS2(i) = C_HR_area_rS2 / C_area_rS2; 312 | 313 | est_FAR2_rS1(i) = I_FAR_area_rS1 / I_area_rS1; 314 | est_HR2_rS1(i) = C_HR_area_rS1 / C_area_rS1; 315 | 316 | end 317 | 318 | 319 | %% package output 320 | 321 | fit.d1 = sqrt(2/(1+s^2)) * s * d1; 322 | fit.c1 = c; 323 | fit.s = s; 324 | fit.meta_d = sqrt(2/(1+s^2)) * s * meta_d1; 325 | fit.M_diff = fit.meta_d - fit.d1; 326 | fit.M_ratio = fit.meta_d / fit.d1; 327 | 328 | mt1c1 = eval(constant_criterion); 329 | fit.meta_c1 = ( sqrt(2).*s ./ sqrt(1+s.^2) ) .* mt1c1; 330 | 331 | t2ca = ( sqrt(2).*s ./ sqrt(1+s.^2) ) .* t2c1; 332 | fit.t2ca_rS1 = t2ca(1:nRatings-1); 333 | fit.t2ca_rS2 = t2ca(nRatings:end); 334 | 335 | fit.S1units.d1 = d1; 336 | fit.S1units.meta_d1 = meta_d1; 337 | fit.S1units.s = s; 338 | fit.S1units.meta_c1 = mt1c1; 339 | fit.S1units.t2c1_rS1 = t2c1(1:nRatings-1); 340 | fit.S1units.t2c1_rS2 = t2c1(nRatings:end); 341 | 342 | fit.logL = logL; 343 | 344 | fit.est_HR2_rS1 = est_HR2_rS1; 345 | fit.obs_HR2_rS1 = obs_HR2_rS1; 346 | 347 | fit.est_FAR2_rS1 = est_FAR2_rS1; 348 | fit.obs_FAR2_rS1 = obs_FAR2_rS1; 349 | 350 | fit.est_HR2_rS2 = est_HR2_rS2; 351 | fit.obs_HR2_rS2 = obs_HR2_rS2; 352 | 353 | fit.est_FAR2_rS2 = est_FAR2_rS2; 354 | fit.obs_FAR2_rS2 = obs_FAR2_rS2; 355 | 356 | 357 | %% clean up 358 | delete fit_meta_d_MLE.mat 359 | 360 | end 361 | 362 | 363 | %% function to find the likelihood of parameter values, given observed data 364 | function logL = fit_meta_d_logL(parameters) 365 | 366 | % set up parameters 367 | meta_d1 = parameters(1); 368 | t2c1 = parameters(2:end); 369 | 370 | % loads: 371 | % nR_S1 nR_S2 nRatings d1 t1c1 s constant_criterion fncdf fninv 372 | load fit_meta_d_MLE.mat 373 | 374 | 375 | % define mean and SD of S1 and S2 distributions 376 | S1mu = -meta_d1/2; S1sd = 1; 377 | S2mu = meta_d1/2; S2sd = S1sd/s; 378 | 379 | 380 | % adjust so that the type 1 criterion is set at 0 381 | % (this is just to work with optimization toolbox constraints... 
382 | % to simplify defining the upper and lower bounds of type 2 criteria) 383 | S1mu = S1mu - eval(constant_criterion); 384 | S2mu = S2mu - eval(constant_criterion); 385 | 386 | t1c1 = 0; 387 | 388 | 389 | 390 | %%% set up MLE analysis 391 | 392 | % get type 2 response counts 393 | for i = 1:nRatings 394 | 395 | % S1 responses 396 | nC_rS1(i) = nR_S1(i); 397 | nI_rS1(i) = nR_S2(i); 398 | 399 | % S2 responses 400 | nC_rS2(i) = nR_S2(nRatings+i); 401 | nI_rS2(i) = nR_S1(nRatings+i); 402 | 403 | end 404 | 405 | % get type 2 probabilities 406 | C_area_rS1 = fncdf(t1c1,S1mu,S1sd); 407 | I_area_rS1 = fncdf(t1c1,S2mu,S2sd); 408 | 409 | C_area_rS2 = 1-fncdf(t1c1,S2mu,S2sd); 410 | I_area_rS2 = 1-fncdf(t1c1,S1mu,S1sd); 411 | 412 | t2c1x = [-Inf t2c1(1:nRatings-1) t1c1 t2c1(nRatings:end) Inf]; 413 | 414 | for i = 1:nRatings 415 | prC_rS1(i) = ( fncdf(t2c1x(i+1),S1mu,S1sd) - fncdf(t2c1x(i),S1mu,S1sd) ) / C_area_rS1; 416 | prI_rS1(i) = ( fncdf(t2c1x(i+1),S2mu,S2sd) - fncdf(t2c1x(i),S2mu,S2sd) ) / I_area_rS1; 417 | 418 | prC_rS2(i) = ( (1-fncdf(t2c1x(nRatings+i),S2mu,S2sd)) - (1-fncdf(t2c1x(nRatings+i+1),S2mu,S2sd)) ) / C_area_rS2; 419 | prI_rS2(i) = ( (1-fncdf(t2c1x(nRatings+i),S1mu,S1sd)) - (1-fncdf(t2c1x(nRatings+i+1),S1mu,S1sd)) ) / I_area_rS2; 420 | end 421 | 422 | 423 | % calculate logL 424 | logL = 0; 425 | for i = 1:nRatings 426 | 427 | logL = logL + nC_rS1(i)*log(prC_rS1(i)) + nI_rS1(i)*log(prI_rS1(i)) + ... 428 | nC_rS2(i)*log(prC_rS2(i)) + nI_rS2(i)*log(prI_rS2(i)); 429 | 430 | end 431 | 432 | if isnan(logL), logL=-Inf; end 433 | 434 | logL = -logL; 435 | 436 | end -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2017 Stephen Fleming 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 
22 | -------------------------------------------------------------------------------- /Matlab/Bayes_metad.txt: -------------------------------------------------------------------------------- 1 | # Bayesian estimation of meta-d for a single subject 2 | 3 | data { 4 | # Type 1 counts 5 | N <- sum(counts[1:nratings*2]) 6 | S <- sum(counts[(nratings*2)+1:nratings*4]) 7 | H <- sum(counts[(nratings*3)+1:nratings*4]) 8 | M <- sum(counts[(nratings*2)+1:nratings*3]) 9 | FA <- sum(counts[nratings+1:nratings*2]) 10 | CR <- sum(counts[1:nratings]) 11 | } 12 | 13 | model { 14 | 15 | ## TYPE 1 SDT BINOMIAL MODEL 16 | H ~ dbin(h,S) 17 | FA ~ dbin(f,N) 18 | h <- phi(d1/2-c1) 19 | f <- phi(-d1/2-c1) 20 | 21 | # Type 1 priors 22 | c1 ~ dnorm(0, 2) 23 | d1 ~ dnorm(0, 0.5) 24 | 25 | ## TYPE 2 SDT MODEL (META-D) 26 | # Multinomial likelihood for response counts ordered as c(nR_S1,nR_S2) 27 | counts[1:nratings] ~ dmulti(prT[1:nratings],CR) 28 | counts[nratings+1:nratings*2] ~ dmulti(prT[nratings+1:nratings*2],FA) 29 | counts[(nratings*2)+1:nratings*3] ~ dmulti(prT[(nratings*2)+1:nratings*3],M) 30 | counts[(nratings*3)+1:nratings*4] ~ dmulti(prT[(nratings*3)+1:nratings*4],H) 31 | 32 | # Means of SDT distributions 33 | S2mu <- meta_d/2 34 | S1mu <- -meta_d/2 35 | 36 | # Calculate normalisation constants 37 | C_area_rS1 <- phi(c1 - S1mu) 38 | I_area_rS1 <- phi(c1 - S2mu) 39 | C_area_rS2 <- 1-phi(c1 - S2mu) 40 | I_area_rS2 <- 1-phi(c1 - S1mu) 41 | 42 | # Get nC_rS1 probs 43 | pr[1] <- phi(cS1[1] - S1mu)/C_area_rS1 44 | for (k in 1:nratings-2) { 45 | pr[k+1] <- (phi(cS1[k+1] - S1mu)-phi(cS1[k] - S1mu))/C_area_rS1 46 | } 47 | pr[nratings] <- (phi(c1 - S1mu)-phi(cS1[nratings-1] - S1mu))/C_area_rS1 48 | 49 | # Get nI_rS2 probs 50 | pr[nratings+1] <- ((1-phi(c1 - S1mu))-(1-phi(cS2[1] - S1mu)))/I_area_rS2 51 | for (k in 1:nratings-2) { 52 | pr[nratings+1+k] <- ((1-phi(cS2[k] - S1mu))-(1-phi(cS2[k+1] - S1mu)))/I_area_rS2 53 | } 54 | pr[nratings*2] <- (1-phi(cS2[nratings-1] - S1mu))/I_area_rS2 55 | 56 | # Get nI_rS1 probs 57 | pr[(nratings*2)+1] <- phi(cS1[1] - S2mu)/I_area_rS1 58 | for (k in 1:nratings-2) { 59 | pr[(nratings*2)+1+k] <- (phi(cS1[k+1] - S2mu)-phi(cS1[k] - S2mu))/I_area_rS1 60 | } 61 | pr[nratings*3] <- (phi(c1 - S2mu)-phi(cS1[nratings-1] - S2mu))/I_area_rS1 62 | 63 | # Get nC_rS2 probs 64 | pr[(nratings*3)+1] <- ((1-phi(c1 - S2mu))-(1-phi(cS2[1] - S2mu)))/C_area_rS2 65 | for (k in 1:nratings-2) { 66 | pr[(nratings*3)+1+k] <- ((1-phi(cS2[k] - S2mu))-(1-phi(cS2[k+1] - S2mu)))/C_area_rS2 67 | } 68 | pr[nratings*4] <- (1-phi(cS2[nratings-1] - S2mu))/C_area_rS2 69 | 70 | # Avoid underflow of probabilities 71 | for (i in 1:nratings*4) { 72 | prT[i] <- ifelse(pr[i] < Tol, Tol, pr[i]) 73 | } 74 | 75 | # Specify ordered prior on criteria (bounded above and below by Type 1 c1) 76 | for (j in 1:nratings-1) { 77 | cS1_raw[j] ~ dnorm(0,2) I(,c1-Tol) 78 | cS2_raw[j] ~ dnorm(0,2) I(c1+Tol,) 79 | } 80 | cS1[1:nratings-1] <- sort(cS1_raw) 81 | cS2[1:nratings-1] <- sort(cS2_raw) 82 | 83 | # Type 2 priors 84 | meta_d ~ dnorm(d1,0.5) 85 | 86 | } -------------------------------------------------------------------------------- /Matlab/Bayes_metad_group.txt: -------------------------------------------------------------------------------- 1 | # Bayesian estimation of meta-d/d for group 2 | 3 | data { 4 | for (s in 1:nsubj) { 5 | # Type 1 counts 6 | N[s] <- sum(counts[s,1:nratings*2]) 7 | S[s] <- sum(counts[s,(nratings*2)+1:nratings*4]) 8 | H[s] <- sum(counts[s,(nratings*3)+1:nratings*4]) 9 | M[s] <- 
sum(counts[s,(nratings*2)+1:nratings*3]) 10 | FA[s] <- sum(counts[s,nratings+1:nratings*2]) 11 | CR[s] <- sum(counts[s,1:nratings]) 12 | } 13 | } 14 | 15 | model { 16 | for (s in 1:nsubj) { 17 | 18 | ## TYPE 1 SDT BINOMIAL MODEL 19 | H[s] ~ dbin(h[s],S[s]) 20 | FA[s] ~ dbin(f[s],N[s]) 21 | h[s] <- phi(d1[s]/2-c1[s]) 22 | f[s] <- phi(-d1[s]/2-c1[s]) 23 | 24 | # Type 1 priors 25 | c1[s] ~ dnorm(mu_c, lambda_c) 26 | d1[s] ~ dnorm(mu_d1, lambda_d1) 27 | 28 | ## TYPE 2 SDT MODEL (META-D) 29 | # Multinomial likelihood for response counts ordered as c(nR_S1,nR_S2) 30 | counts[s,1:nratings] ~ dmulti(prT[s,1:nratings],CR[s]) 31 | counts[s,nratings+1:nratings*2] ~ dmulti(prT[s,nratings+1:nratings*2],FA[s]) 32 | counts[s,(nratings*2)+1:nratings*3] ~ dmulti(prT[s,(nratings*2)+1:nratings*3],M[s]) 33 | counts[s,(nratings*3)+1:nratings*4] ~ dmulti(prT[s,(nratings*3)+1:nratings*4],H[s]) 34 | 35 | # Means of SDT distributions] 36 | mu[s] <- Mratio[s]*d1[s] 37 | S2mu[s] <- mu[s]/2 38 | S1mu[s] <- -mu[s]/2 39 | 40 | # Calculate normalisation constants 41 | C_area_rS1[s] <- phi(c1[s] - S1mu[s]) 42 | I_area_rS1[s] <- phi(c1[s] - S2mu[s]) 43 | C_area_rS2[s] <- 1-phi(c1[s] - S2mu[s]) 44 | I_area_rS2[s] <- 1-phi(c1[s] - S1mu[s]) 45 | 46 | # Get nC_rS1 probs 47 | pr[s,1] <- phi(cS1[s,1] - S1mu[s])/C_area_rS1[s] 48 | for (k in 1:nratings-2) { 49 | pr[s,k+1] <- (phi(cS1[s,k+1] - S1mu[s])-phi(cS1[s,k] - S1mu[s]))/C_area_rS1[s] 50 | } 51 | pr[s,nratings] <- (phi(c1[s] - S1mu[s])-phi(cS1[s,nratings-1] - S1mu[s]))/C_area_rS1[s] 52 | 53 | # Get nI_rS2 probs 54 | pr[s,nratings+1] <- ((1-phi(c1[s] - S1mu[s]))-(1-phi(cS2[s,1] - S1mu[s])))/I_area_rS2[s] 55 | for (k in 1:nratings-2) { 56 | pr[s,nratings+1+k] <- ((1-phi(cS2[s,k] - S1mu[s]))-(1-phi(cS2[s,k+1] - S1mu[s])))/I_area_rS2[s] 57 | } 58 | pr[s,nratings*2] <- (1-phi(cS2[s,nratings-1] - S1mu[s]))/I_area_rS2[s] 59 | 60 | # Get nI_rS1 probs 61 | pr[s,(nratings*2)+1] <- phi(cS1[s,1] - S2mu[s])/I_area_rS1[s] 62 | for (k in 1:nratings-2) { 63 | pr[s,(nratings*2)+1+k] <- (phi(cS1[s,k+1] - S2mu[s])-phi(cS1[s,k] - S2mu[s]))/I_area_rS1[s] 64 | } 65 | pr[s,nratings*3] <- (phi(c1[s] - S2mu[s])-phi(cS1[s,nratings-1] - S2mu[s]))/I_area_rS1[s] 66 | 67 | # Get nC_rS2 probs 68 | pr[s,(nratings*3)+1] <- ((1-phi(c1[s] - S2mu[s]))-(1-phi(cS2[s,1] - S2mu[s])))/C_area_rS2[s] 69 | for (k in 1:nratings-2) { 70 | pr[s,(nratings*3)+1+k] <- ((1-phi(cS2[s,k] - S2mu[s]))-(1-phi(cS2[s,k+1] - S2mu[s])))/C_area_rS2[s] 71 | } 72 | pr[s,nratings*4] <- (1-phi(cS2[s,nratings-1] - S2mu[s]))/C_area_rS2[s] 73 | 74 | # Avoid underflow of probabilities 75 | for (i in 1:nratings*4) { 76 | prT[s,i] <- ifelse(pr[s,i] < Tol, Tol, pr[s,i]) 77 | } 78 | 79 | # Specify ordered prior on criteria 80 | for (j in 1:nratings-1) { 81 | cS1_raw[s,j] ~ dnorm(-mu_c2, lambda_c2) T(,c1[s]) 82 | cS2_raw[s,j] ~ dnorm(mu_c2, lambda_c2) T(c1[s],) 83 | } 84 | cS1[s,1:nratings-1] <- sort(cS1_raw[s, ]) 85 | cS2[s,1:nratings-1] <- sort(cS2_raw[s, ]) 86 | 87 | delta[s] ~ dnorm(0, lambda_delta) 88 | logMratio[s] <- mu_logMratio + epsilon_logMratio*delta[s] 89 | Mratio[s] <- exp(logMratio[s]) 90 | } 91 | 92 | # hyperpriors 93 | mu_d1 ~ dnorm(0, 0.01) 94 | mu_c ~ dnorm(0, 0.01) 95 | sigma_d1 ~ dnorm(0, 0.01) I(0, ) 96 | sigma_c ~ dnorm(0, 0.01) I(0, ) 97 | lambda_d1 <- pow(sigma_d1, -2) 98 | lambda_c <- pow(sigma_c, -2) 99 | 100 | mu_c2 ~ dnorm(0, 0.01) 101 | sigma_c2 ~ dnorm(0, 0.01) I(0, ) 102 | lambda_c2 <- pow(sigma_c2, -2) 103 | 104 | mu_logMratio ~ dnorm(0, 1) 105 | sigma_delta ~ dnorm(0, 1) I(0,) 106 | lambda_delta <- 
pow(sigma_delta, -2) 107 | epsilon_logMratio ~ dbeta(1,1) 108 | sigma_logMratio <- abs(epsilon_logMratio)*sigma_delta 109 | 110 | } 111 | -------------------------------------------------------------------------------- /Matlab/Bayes_metad_group_corr.txt: -------------------------------------------------------------------------------- 1 | # Bayesian estimation of meta-d/d for group correlation between domains 2 | 3 | data { 4 | for (s in 1:nsubj) { 5 | # Type 1 counts for task 1 6 | N[s,1] <- sum(counts1[s,1:nratings*2]) 7 | S[s,1] <- sum(counts1[s,(nratings*2)+1:nratings*4]) 8 | H[s,1] <- sum(counts1[s,(nratings*3)+1:nratings*4]) 9 | M[s,1] <- sum(counts1[s,(nratings*2)+1:nratings*3]) 10 | FA[s,1] <- sum(counts1[s,nratings+1:nratings*2]) 11 | CR[s,1] <- sum(counts1[s,1:nratings]) 12 | 13 | # Type 1 counts for task 2 14 | N[s,2] <- sum(counts2[s,1:nratings*2]) 15 | S[s,2] <- sum(counts2[s,(nratings*2)+1:nratings*4]) 16 | H[s,2] <- sum(counts2[s,(nratings*3)+1:nratings*4]) 17 | M[s,2] <- sum(counts2[s,(nratings*2)+1:nratings*3]) 18 | FA[s,2] <- sum(counts2[s,nratings+1:nratings*2]) 19 | CR[s,2] <- sum(counts2[s,1:nratings]) 20 | } 21 | } 22 | 23 | model { 24 | for (s in 1:nsubj) { 25 | 26 | ## TYPE 2 SDT MODEL (META-D) 27 | # Multinomial likelihood for response counts ordered as c(nR_S1,nR_S2) 28 | counts1[s,1:nratings] ~ dmulti(prT[s,1:nratings,1],CR[s,1]) 29 | counts1[s,nratings+1:nratings*2] ~ dmulti(prT[s,nratings+1:nratings*2,1],FA[s,1]) 30 | counts1[s,(nratings*2)+1:nratings*3] ~ dmulti(prT[s,(nratings*2)+1:nratings*3,1],M[s,1]) 31 | counts1[s,(nratings*3)+1:nratings*4] ~ dmulti(prT[s,(nratings*3)+1:nratings*4,1],H[s,1]) 32 | 33 | counts2[s,1:nratings] ~ dmulti(prT[s,1:nratings,2],CR[s,2]) 34 | counts2[s,nratings+1:nratings*2] ~ dmulti(prT[s,nratings+1:nratings*2,2],FA[s,2]) 35 | counts2[s,(nratings*2)+1:nratings*3] ~ dmulti(prT[s,(nratings*2)+1:nratings*3,2],M[s,2]) 36 | counts2[s,(nratings*3)+1:nratings*4] ~ dmulti(prT[s,(nratings*3)+1:nratings*4,2],H[s,2]) 37 | 38 | for (task in 1:2) { 39 | 40 | # Means of SDT distributions] 41 | mu[s,task] <- Mratio[s,task]*d1[s,task] 42 | S2mu[s,task] <- mu[s,task]/2 43 | S1mu[s,task] <- -mu[s,task]/2 44 | 45 | # Calculate normalisation constants 46 | C_area_rS1[s,task] <- phi(c1[s,task] - S1mu[s,task]) 47 | I_area_rS1[s,task] <- phi(c1[s,task] - S2mu[s,task]) 48 | C_area_rS2[s,task] <- 1-phi(c1[s,task] - S2mu[s,task]) 49 | I_area_rS2[s,task] <- 1-phi(c1[s,task] - S1mu[s,task]) 50 | 51 | # Get nC_rS1 probs 52 | pr[s,1,task] <- phi(cS1[s,1,task] - S1mu[s,task])/C_area_rS1[s,task] 53 | for (k in 1:nratings-2) { 54 | pr[s,k+1,task] <- (phi(cS1[s,k+1,task] - S1mu[s,task])-phi(cS1[s,k,task] - S1mu[s,task]))/C_area_rS1[s,task] 55 | } 56 | pr[s,nratings,task] <- (phi(c1[s,task] - S1mu[s,task])-phi(cS1[s,nratings-1,task] - S1mu[s,task]))/C_area_rS1[s,task] 57 | 58 | # Get nI_rS2 probs 59 | pr[s,nratings+1,task] <- ((1-phi(c1[s,task] - S1mu[s,task]))-(1-phi(cS2[s,1,task] - S1mu[s,task])))/I_area_rS2[s,task] 60 | for (k in 1:nratings-2) { 61 | pr[s,nratings+1+k,task] <- ((1-phi(cS2[s,k,task] - S1mu[s,task]))-(1-phi(cS2[s,k+1,task] - S1mu[s,task])))/I_area_rS2[s,task] 62 | } 63 | pr[s,nratings*2,task] <- (1-phi(cS2[s,nratings-1,task] - S1mu[s,task]))/I_area_rS2[s,task] 64 | 65 | # Get nI_rS1 probs 66 | pr[s,(nratings*2)+1, task] <- phi(cS1[s,1,task] - S2mu[s,task])/I_area_rS1[s,task] 67 | for (k in 1:nratings-2) { 68 | pr[s,(nratings*2)+1+k,task] <- (phi(cS1[s,k+1,task] - S2mu[s,task])-phi(cS1[s,k,task] - S2mu[s,task]))/I_area_rS1[s,task] 69 | } 70 | 
pr[s,nratings*3,task] <- (phi(c1[s,task] - S2mu[s,task])-phi(cS1[s,nratings-1,task] - S2mu[s,task]))/I_area_rS1[s,task] 71 | 72 | # Get nC_rS2 probs 73 | pr[s,(nratings*3)+1,task] <- ((1-phi(c1[s,task] - S2mu[s,task]))-(1-phi(cS2[s,1,task] - S2mu[s,task])))/C_area_rS2[s,task] 74 | for (k in 1:nratings-2) { 75 | pr[s,(nratings*3)+1+k,task] <- ((1-phi(cS2[s,k,task] - S2mu[s,task]))-(1-phi(cS2[s,k+1,task] - S2mu[s,task])))/C_area_rS2[s,task] 76 | } 77 | pr[s,nratings*4,task] <- (1-phi(cS2[s,nratings-1,task] - S2mu[s,task]))/C_area_rS2[s,task] 78 | 79 | # Avoid underflow of probabilities 80 | for (i in 1:nratings*4) { 81 | prT[s,i,task] <- ifelse(pr[s,i,task] < Tol, Tol, pr[s,i,task]) 82 | } 83 | 84 | # Specify ordered prior on criteria (bounded above and below by Type 1 c) 85 | for (j in 1:(nratings-1)) { 86 | cS1_raw[s,j,task] ~ dnorm(-mu_c2[task], lambda_c2[task]) T(,c1[s,task]) 87 | cS2_raw[s,j,task] ~ dnorm(mu_c2[task], lambda_c2[task]) T(c1[s,task],) 88 | } 89 | cS1[s,1:nratings-1,task] <- sort(cS1_raw[s,1:nratings-1,task]) 90 | cS2[s,1:nratings-1,task] <- sort(cS2_raw[s,1:nratings-1,task]) 91 | 92 | Mratio[s,task] <- exp(logMratio[s,task]) 93 | 94 | } 95 | 96 | # Draw log(M)'s from bivariate Gaussian 97 | logMratio[s,1:2] ~ dmnorm(mu_logMratio[], TI[,]) 98 | 99 | } 100 | 101 | mu_c2[1] ~ dnorm(0, 0.01) 102 | mu_c2[2] ~ dnorm(0, 0.01) 103 | sigma_c2[1] ~ dnorm(0, 0.01) I(0, ) 104 | sigma_c2[2] ~ dnorm(0, 0.01) I(0, ) 105 | lambda_c2[1] <- pow(sigma_c2[1], -2) 106 | lambda_c2[2] <- pow(sigma_c2[2], -2) 107 | 108 | mu_logMratio[1] ~ dnorm(0, 1) 109 | mu_logMratio[2] ~ dnorm(0, 1) 110 | lambda_logMratio[1] ~ dgamma(0.001,0.001) 111 | lambda_logMratio[2] ~ dgamma(0.001,0.001) 112 | sigma_logMratio[1] <- 1/sqrt(lambda_logMratio[1]) 113 | sigma_logMratio[2] <- 1/sqrt(lambda_logMratio[2]) 114 | rho ~ dunif(-1,1) 115 | 116 | T[1,1] <- 1/lambda_logMratio[1] 117 | T[1,2] <- rho*sigma_logMratio[1]*sigma_logMratio[2] 118 | T[2,2] <- 1/lambda_logMratio[2] 119 | T[2,1] <- rho*sigma_logMratio[1]*sigma_logMratio[2] 120 | TI[1:2,1:2] <- inverse(T[1:2,1:2]) 121 | 122 | } 123 | -------------------------------------------------------------------------------- /Matlab/Bayes_metad_group_nodp.txt: -------------------------------------------------------------------------------- 1 | # Bayesian estimation of meta-d/d for group 2 | 3 | data { 4 | for (s in 1:nsubj) { 5 | # Type 1 counts 6 | N[s] <- sum(counts[s,1:nratings*2]) 7 | S[s] <- sum(counts[s,(nratings*2)+1:nratings*4]) 8 | H[s] <- sum(counts[s,(nratings*3)+1:nratings*4]) 9 | M[s] <- sum(counts[s,(nratings*2)+1:nratings*3]) 10 | FA[s] <- sum(counts[s,nratings+1:nratings*2]) 11 | CR[s] <- sum(counts[s,1:nratings]) 12 | } 13 | } 14 | 15 | model { 16 | for (s in 1:nsubj) { 17 | 18 | ## TYPE 2 SDT MODEL (META-D) 19 | # Multinomial likelihood for response counts ordered as c(nR_S1,nR_S2) 20 | counts[s,1:nratings] ~ dmulti(prT[s,1:nratings],CR[s]) 21 | counts[s,nratings+1:nratings*2] ~ dmulti(prT[s,nratings+1:nratings*2],FA[s]) 22 | counts[s,(nratings*2)+1:nratings*3] ~ dmulti(prT[s,(nratings*2)+1:nratings*3],M[s]) 23 | counts[s,(nratings*3)+1:nratings*4] ~ dmulti(prT[s,(nratings*3)+1:nratings*4],H[s]) 24 | 25 | # Means of SDT distributions] 26 | mu[s] <- Mratio[s]*d1[s] 27 | S2mu[s] <- mu[s]/2 28 | S1mu[s] <- -mu[s]/2 29 | 30 | # Calculate normalisation constants 31 | C_area_rS1[s] <- phi(c1[s] - S1mu[s]) 32 | I_area_rS1[s] <- phi(c1[s] - S2mu[s]) 33 | C_area_rS2[s] <- 1-phi(c1[s] - S2mu[s]) 34 | I_area_rS2[s] <- 1-phi(c1[s] - S1mu[s]) 35 | 36 | # Get nC_rS1 
probs 37 | pr[s,1] <- phi(cS1[s,1] - S1mu[s])/C_area_rS1[s] 38 | for (k in 1:nratings-2) { 39 | pr[s,k+1] <- (phi(cS1[s,k+1] - S1mu[s])-phi(cS1[s,k] - S1mu[s]))/C_area_rS1[s] 40 | } 41 | pr[s,nratings] <- (phi(c1[s] - S1mu[s])-phi(cS1[s,nratings-1] - S1mu[s]))/C_area_rS1[s] 42 | 43 | # Get nI_rS2 probs 44 | pr[s,nratings+1] <- ((1-phi(c1[s] - S1mu[s]))-(1-phi(cS2[s,1] - S1mu[s])))/I_area_rS2[s] 45 | for (k in 1:nratings-2) { 46 | pr[s,nratings+1+k] <- ((1-phi(cS2[s,k] - S1mu[s]))-(1-phi(cS2[s,k+1] - S1mu[s])))/I_area_rS2[s] 47 | } 48 | pr[s,nratings*2] <- (1-phi(cS2[s,nratings-1] - S1mu[s]))/I_area_rS2[s] 49 | 50 | # Get nI_rS1 probs 51 | pr[s,(nratings*2)+1] <- phi(cS1[s,1] - S2mu[s])/I_area_rS1[s] 52 | for (k in 1:nratings-2) { 53 | pr[s,(nratings*2)+1+k] <- (phi(cS1[s,k+1] - S2mu[s])-phi(cS1[s,k] - S2mu[s]))/I_area_rS1[s] 54 | } 55 | pr[s,nratings*3] <- (phi(c1[s] - S2mu[s])-phi(cS1[s,nratings-1] - S2mu[s]))/I_area_rS1[s] 56 | 57 | # Get nC_rS2 probs 58 | pr[s,(nratings*3)+1] <- ((1-phi(c1[s] - S2mu[s]))-(1-phi(cS2[s,1] - S2mu[s])))/C_area_rS2[s] 59 | for (k in 1:nratings-2) { 60 | pr[s,(nratings*3)+1+k] <- ((1-phi(cS2[s,k] - S2mu[s]))-(1-phi(cS2[s,k+1] - S2mu[s])))/C_area_rS2[s] 61 | } 62 | pr[s,nratings*4] <- (1-phi(cS2[s,nratings-1] - S2mu[s]))/C_area_rS2[s] 63 | 64 | # Avoid underflow of probabilities 65 | for (i in 1:nratings*4) { 66 | prT[s,i] <- ifelse(pr[s,i] < Tol, Tol, pr[s,i]) 67 | } 68 | 69 | # Specify ordered prior on criteria 70 | for (j in 1:(nratings-1)) { 71 | cS1_raw[s,j] ~ dnorm(-mu_c2, lambda_c2) T(,c1[s]) 72 | cS2_raw[s,j] ~ dnorm(mu_c2, lambda_c2) T(c1[s],) 73 | } 74 | cS1[s,1:nratings-1] <- sort(cS1_raw[s, ]) 75 | cS2[s,1:nratings-1] <- sort(cS2_raw[s, ]) 76 | 77 | delta[s] ~ dnorm(0, lambda_delta) 78 | logMratio[s] <- mu_logMratio + epsilon_logMratio*delta[s] 79 | Mratio[s] <- exp(logMratio[s]) 80 | } 81 | 82 | # hyperpriors 83 | mu_c2 ~ dnorm(0, 0.01) 84 | sigma_c2 ~ dnorm(0, 0.01) I(0, ) 85 | lambda_c2 <- pow(sigma_c2, -2) 86 | 87 | mu_logMratio ~ dnorm(0, 1) 88 | sigma_delta ~ dnorm(0, 1) I(0,) 89 | lambda_delta <- pow(sigma_delta, -2) 90 | epsilon_logMratio ~ dbeta(1,1) 91 | sigma_logMratio <- abs(epsilon_logMratio)*sigma_delta 92 | 93 | } 94 | -------------------------------------------------------------------------------- /Matlab/Bayes_metad_group_regress_nodp.txt: -------------------------------------------------------------------------------- 1 | # Bayesian estimation of meta-d/d for group 2 | 3 | data { 4 | for (s in 1:nsubj) { 5 | # Type 1 counts 6 | N[s] <- sum(counts[s,1:nratings*2]) 7 | S[s] <- sum(counts[s,(nratings*2)+1:nratings*4]) 8 | H[s] <- sum(counts[s,(nratings*3)+1:nratings*4]) 9 | M[s] <- sum(counts[s,(nratings*2)+1:nratings*3]) 10 | FA[s] <- sum(counts[s,nratings+1:nratings*2]) 11 | CR[s] <- sum(counts[s,1:nratings]) 12 | } 13 | } 14 | 15 | model { 16 | for (s in 1:nsubj) { 17 | 18 | ## TYPE 2 SDT MODEL (META-D) 19 | # Multinomial likelihood for response counts ordered as c(nR_S1,nR_S2) 20 | counts[s,1:nratings] ~ dmulti(prT[s,1:nratings],CR[s]) 21 | counts[s,nratings+1:nratings*2] ~ dmulti(prT[s,nratings+1:nratings*2],FA[s]) 22 | counts[s,(nratings*2)+1:nratings*3] ~ dmulti(prT[s,(nratings*2)+1:nratings*3],M[s]) 23 | counts[s,(nratings*3)+1:nratings*4] ~ dmulti(prT[s,(nratings*3)+1:nratings*4],H[s]) 24 | 25 | # Means of SDT distributions] 26 | mu[s] <- Mratio[s]*d1[s] 27 | S2mu[s] <- mu[s]/2 28 | S1mu[s] <- -mu[s]/2 29 | 30 | # Calculate normalisation constants 31 | C_area_rS1[s] <- phi(c1[s] - S1mu[s]) 32 | I_area_rS1[s] <- phi(c1[s] - 
S2mu[s]) 33 | C_area_rS2[s] <- 1-phi(c1[s] - S2mu[s]) 34 | I_area_rS2[s] <- 1-phi(c1[s] - S1mu[s]) 35 | 36 | # Get nC_rS1 probs 37 | pr[s,1] <- phi(cS1[s,1] - S1mu[s])/C_area_rS1[s] 38 | for (k in 1:nratings-2) { 39 | pr[s,k+1] <- (phi(cS1[s,k+1] - S1mu[s])-phi(cS1[s,k] - S1mu[s]))/C_area_rS1[s] 40 | } 41 | pr[s,nratings] <- (phi(c1[s] - S1mu[s])-phi(cS1[s,nratings-1] - S1mu[s]))/C_area_rS1[s] 42 | 43 | # Get nI_rS2 probs 44 | pr[s,nratings+1] <- ((1-phi(c1[s] - S1mu[s]))-(1-phi(cS2[s,1] - S1mu[s])))/I_area_rS2[s] 45 | for (k in 1:nratings-2) { 46 | pr[s,nratings+1+k] <- ((1-phi(cS2[s,k] - S1mu[s]))-(1-phi(cS2[s,k+1] - S1mu[s])))/I_area_rS2[s] 47 | } 48 | pr[s,nratings*2] <- (1-phi(cS2[s,nratings-1] - S1mu[s]))/I_area_rS2[s] 49 | 50 | # Get nI_rS1 probs 51 | pr[s,(nratings*2)+1] <- phi(cS1[s,1] - S2mu[s])/I_area_rS1[s] 52 | for (k in 1:nratings-2) { 53 | pr[s,(nratings*2)+1+k] <- (phi(cS1[s,k+1] - S2mu[s])-phi(cS1[s,k] - S2mu[s]))/I_area_rS1[s] 54 | } 55 | pr[s,nratings*3] <- (phi(c1[s] - S2mu[s])-phi(cS1[s,nratings-1] - S2mu[s]))/I_area_rS1[s] 56 | 57 | # Get nC_rS2 probs 58 | pr[s,(nratings*3)+1] <- ((1-phi(c1[s] - S2mu[s]))-(1-phi(cS2[s,1] - S2mu[s])))/C_area_rS2[s] 59 | for (k in 1:nratings-2) { 60 | pr[s,(nratings*3)+1+k] <- ((1-phi(cS2[s,k] - S2mu[s]))-(1-phi(cS2[s,k+1] - S2mu[s])))/C_area_rS2[s] 61 | } 62 | pr[s,nratings*4] <- (1-phi(cS2[s,nratings-1] - S2mu[s]))/C_area_rS2[s] 63 | 64 | # Avoid underflow of probabilities 65 | for (i in 1:nratings*4) { 66 | prT[s,i] <- ifelse(pr[s,i] < Tol, Tol, pr[s,i]) 67 | } 68 | 69 | # Specify ordered prior on criteria (bounded above and below by Type 1 c) 70 | for (j in 1:(nratings-1)) { 71 | cS1_raw[s,j] ~ dnorm(-mu_c2, lambda_c2) T(,c1[s]) 72 | cS2_raw[s,j] ~ dnorm(mu_c2, lambda_c2) T(c1[s],) 73 | } 74 | cS1[s,1:nratings-1] <- sort(cS1_raw[s, ]) 75 | cS2[s,1:nratings-1] <- sort(cS2_raw[s, ]) 76 | 77 | delta[s] ~ dt(0, lambda_delta, 5) 78 | logMratio[s] <- mu_logMratio + mu_beta1*cov[s] + epsilon_logMratio*delta[s] 79 | Mratio[s] <- exp(logMratio[s]) 80 | 81 | } 82 | 83 | # hyperpriors 84 | mu_c2 ~ dnorm(0, 0.01) 85 | sigma_c2 ~ dnorm(0, 0.01) I(0, ) 86 | lambda_c2 <- pow(sigma_c2, -2) 87 | 88 | mu_logMratio ~ dnorm(0, 1) 89 | mu_beta1 ~ dnorm(0, 1) 90 | 91 | sigma_delta ~ dnorm(0, 1) I(0,) 92 | lambda_delta <- pow(sigma_delta, -2) 93 | epsilon_logMratio ~ dbeta(1,1) 94 | sigma_logMratio <- abs(epsilon_logMratio)*sigma_delta 95 | 96 | } 97 | -------------------------------------------------------------------------------- /Matlab/Bayes_metad_group_regress_nodp_2cov.txt: -------------------------------------------------------------------------------- 1 | # Bayesian estimation of meta-d/d for group 2 | 3 | data { 4 | for (s in 1:nsubj) { 5 | # Type 1 counts 6 | N[s] <- sum(counts[s,1:nratings*2]) 7 | S[s] <- sum(counts[s,(nratings*2)+1:nratings*4]) 8 | H[s] <- sum(counts[s,(nratings*3)+1:nratings*4]) 9 | M[s] <- sum(counts[s,(nratings*2)+1:nratings*3]) 10 | FA[s] <- sum(counts[s,nratings+1:nratings*2]) 11 | CR[s] <- sum(counts[s,1:nratings]) 12 | } 13 | } 14 | 15 | model { 16 | for (s in 1:nsubj) { 17 | 18 | ## TYPE 2 SDT MODEL (META-D) 19 | # Multinomial likelihood for response counts ordered as c(nR_S1,nR_S2) 20 | counts[s,1:nratings] ~ dmulti(prT[s,1:nratings],CR[s]) 21 | counts[s,nratings+1:nratings*2] ~ dmulti(prT[s,nratings+1:nratings*2],FA[s]) 22 | counts[s,(nratings*2)+1:nratings*3] ~ dmulti(prT[s,(nratings*2)+1:nratings*3],M[s]) 23 | counts[s,(nratings*3)+1:nratings*4] ~ dmulti(prT[s,(nratings*3)+1:nratings*4],H[s]) 24 | 25 | # Means of 
SDT distributions] 26 | mu[s] <- Mratio[s]*d1[s] 27 | S2mu[s] <- mu[s]/2 28 | S1mu[s] <- -mu[s]/2 29 | 30 | # Calculate normalisation constants 31 | C_area_rS1[s] <- phi(c1[s] - S1mu[s]) 32 | I_area_rS1[s] <- phi(c1[s] - S2mu[s]) 33 | C_area_rS2[s] <- 1-phi(c1[s] - S2mu[s]) 34 | I_area_rS2[s] <- 1-phi(c1[s] - S1mu[s]) 35 | 36 | # Get nC_rS1 probs 37 | pr[s,1] <- phi(cS1[s,1] - S1mu[s])/C_area_rS1[s] 38 | for (k in 1:nratings-2) { 39 | pr[s,k+1] <- (phi(cS1[s,k+1] - S1mu[s])-phi(cS1[s,k] - S1mu[s]))/C_area_rS1[s] 40 | } 41 | pr[s,nratings] <- (phi(c1[s] - S1mu[s])-phi(cS1[s,nratings-1] - S1mu[s]))/C_area_rS1[s] 42 | 43 | # Get nI_rS2 probs 44 | pr[s,nratings+1] <- ((1-phi(c1[s] - S1mu[s]))-(1-phi(cS2[s,1] - S1mu[s])))/I_area_rS2[s] 45 | for (k in 1:nratings-2) { 46 | pr[s,nratings+1+k] <- ((1-phi(cS2[s,k] - S1mu[s]))-(1-phi(cS2[s,k+1] - S1mu[s])))/I_area_rS2[s] 47 | } 48 | pr[s,nratings*2] <- (1-phi(cS2[s,nratings-1] - S1mu[s]))/I_area_rS2[s] 49 | 50 | # Get nI_rS1 probs 51 | pr[s,(nratings*2)+1] <- phi(cS1[s,1] - S2mu[s])/I_area_rS1[s] 52 | for (k in 1:nratings-2) { 53 | pr[s,(nratings*2)+1+k] <- (phi(cS1[s,k+1] - S2mu[s])-phi(cS1[s,k] - S2mu[s]))/I_area_rS1[s] 54 | } 55 | pr[s,nratings*3] <- (phi(c1[s] - S2mu[s])-phi(cS1[s,nratings-1] - S2mu[s]))/I_area_rS1[s] 56 | 57 | # Get nC_rS2 probs 58 | pr[s,(nratings*3)+1] <- ((1-phi(c1[s] - S2mu[s]))-(1-phi(cS2[s,1] - S2mu[s])))/C_area_rS2[s] 59 | for (k in 1:nratings-2) { 60 | pr[s,(nratings*3)+1+k] <- ((1-phi(cS2[s,k] - S2mu[s]))-(1-phi(cS2[s,k+1] - S2mu[s])))/C_area_rS2[s] 61 | } 62 | pr[s,nratings*4] <- (1-phi(cS2[s,nratings-1] - S2mu[s]))/C_area_rS2[s] 63 | 64 | # Avoid underflow of probabilities 65 | for (i in 1:nratings*4) { 66 | prT[s,i] <- ifelse(pr[s,i] < Tol, Tol, pr[s,i]) 67 | } 68 | 69 | # Specify ordered prior on criteria (bounded above and below by Type 1 c) 70 | for (j in 1:(nratings-1)) { 71 | cS1_raw[s,j] ~ dnorm(-mu_c2, lambda_c2) T(,c1[s]) 72 | cS2_raw[s,j] ~ dnorm(mu_c2, lambda_c2) T(c1[s],) 73 | } 74 | cS1[s,1:nratings-1] <- sort(cS1_raw[s, ]) 75 | cS2[s,1:nratings-1] <- sort(cS2_raw[s, ]) 76 | 77 | delta[s] ~ dt(0, lambda_delta, 5) 78 | logMratio[s] <- mu_logMratio + mu_beta1*cov[1,s] + mu_beta2*cov[2,s] + epsilon_logMratio*delta[s] 79 | Mratio[s] <- exp(logMratio[s]) 80 | 81 | } 82 | 83 | # hyperpriors 84 | mu_c2 ~ dnorm(0, 0.01) 85 | sigma_c2 ~ dnorm(0, 0.01) I(0, ) 86 | lambda_c2 <- pow(sigma_c2, -2) 87 | 88 | mu_logMratio ~ dnorm(0, 1) 89 | mu_beta1 ~ dnorm(0, 1) 90 | mu_beta2 ~ dnorm(0, 1) 91 | 92 | sigma_delta ~ dnorm(0, 1) I(0,) 93 | lambda_delta <- pow(sigma_delta, -2) 94 | epsilon_logMratio ~ dbeta(1,1) 95 | sigma_logMratio <- abs(epsilon_logMratio)*sigma_delta 96 | 97 | } -------------------------------------------------------------------------------- /Matlab/Bayes_metad_group_regress_nodp_3cov.txt: -------------------------------------------------------------------------------- 1 | # Bayesian estimation of meta-d/d for group 2 | 3 | data { 4 | for (s in 1:nsubj) { 5 | # Type 1 counts 6 | N[s] <- sum(counts[s,1:nratings*2]) 7 | S[s] <- sum(counts[s,(nratings*2)+1:nratings*4]) 8 | H[s] <- sum(counts[s,(nratings*3)+1:nratings*4]) 9 | M[s] <- sum(counts[s,(nratings*2)+1:nratings*3]) 10 | FA[s] <- sum(counts[s,nratings+1:nratings*2]) 11 | CR[s] <- sum(counts[s,1:nratings]) 12 | } 13 | } 14 | 15 | model { 16 | for (s in 1:nsubj) { 17 | 18 | ## TYPE 2 SDT MODEL (META-D) 19 | # Multinomial likelihood for response counts ordered as c(nR_S1,nR_S2) 20 | counts[s,1:nratings] ~ dmulti(prT[s,1:nratings],CR[s]) 21 | 
counts[s,nratings+1:nratings*2] ~ dmulti(prT[s,nratings+1:nratings*2],FA[s]) 22 | counts[s,(nratings*2)+1:nratings*3] ~ dmulti(prT[s,(nratings*2)+1:nratings*3],M[s]) 23 | counts[s,(nratings*3)+1:nratings*4] ~ dmulti(prT[s,(nratings*3)+1:nratings*4],H[s]) 24 | 25 | # Means of SDT distributions] 26 | mu[s] <- Mratio[s]*d1[s] 27 | S2mu[s] <- mu[s]/2 28 | S1mu[s] <- -mu[s]/2 29 | 30 | # Calculate normalisation constants 31 | C_area_rS1[s] <- phi(c1[s] - S1mu[s]) 32 | I_area_rS1[s] <- phi(c1[s] - S2mu[s]) 33 | C_area_rS2[s] <- 1-phi(c1[s] - S2mu[s]) 34 | I_area_rS2[s] <- 1-phi(c1[s] - S1mu[s]) 35 | 36 | # Get nC_rS1 probs 37 | pr[s,1] <- phi(cS1[s,1] - S1mu[s])/C_area_rS1[s] 38 | for (k in 1:nratings-2) { 39 | pr[s,k+1] <- (phi(cS1[s,k+1] - S1mu[s])-phi(cS1[s,k] - S1mu[s]))/C_area_rS1[s] 40 | } 41 | pr[s,nratings] <- (phi(c1[s] - S1mu[s])-phi(cS1[s,nratings-1] - S1mu[s]))/C_area_rS1[s] 42 | 43 | # Get nI_rS2 probs 44 | pr[s,nratings+1] <- ((1-phi(c1[s] - S1mu[s]))-(1-phi(cS2[s,1] - S1mu[s])))/I_area_rS2[s] 45 | for (k in 1:nratings-2) { 46 | pr[s,nratings+1+k] <- ((1-phi(cS2[s,k] - S1mu[s]))-(1-phi(cS2[s,k+1] - S1mu[s])))/I_area_rS2[s] 47 | } 48 | pr[s,nratings*2] <- (1-phi(cS2[s,nratings-1] - S1mu[s]))/I_area_rS2[s] 49 | 50 | # Get nI_rS1 probs 51 | pr[s,(nratings*2)+1] <- phi(cS1[s,1] - S2mu[s])/I_area_rS1[s] 52 | for (k in 1:nratings-2) { 53 | pr[s,(nratings*2)+1+k] <- (phi(cS1[s,k+1] - S2mu[s])-phi(cS1[s,k] - S2mu[s]))/I_area_rS1[s] 54 | } 55 | pr[s,nratings*3] <- (phi(c1[s] - S2mu[s])-phi(cS1[s,nratings-1] - S2mu[s]))/I_area_rS1[s] 56 | 57 | # Get nC_rS2 probs 58 | pr[s,(nratings*3)+1] <- ((1-phi(c1[s] - S2mu[s]))-(1-phi(cS2[s,1] - S2mu[s])))/C_area_rS2[s] 59 | for (k in 1:nratings-2) { 60 | pr[s,(nratings*3)+1+k] <- ((1-phi(cS2[s,k] - S2mu[s]))-(1-phi(cS2[s,k+1] - S2mu[s])))/C_area_rS2[s] 61 | } 62 | pr[s,nratings*4] <- (1-phi(cS2[s,nratings-1] - S2mu[s]))/C_area_rS2[s] 63 | 64 | # Avoid underflow of probabilities 65 | for (i in 1:nratings*4) { 66 | prT[s,i] <- ifelse(pr[s,i] < Tol, Tol, pr[s,i]) 67 | } 68 | 69 | # Specify ordered prior on criteria (bounded above and below by Type 1 c) 70 | for (j in 1:(nratings-1)) { 71 | cS1_raw[s,j] ~ dnorm(-mu_c2, lambda_c2) T(,c1[s]) 72 | cS2_raw[s,j] ~ dnorm(mu_c2, lambda_c2) T(c1[s],) 73 | } 74 | cS1[s,1:nratings-1] <- sort(cS1_raw[s, ]) 75 | cS2[s,1:nratings-1] <- sort(cS2_raw[s, ]) 76 | 77 | delta[s] ~ dt(0, lambda_delta, 5) 78 | logMratio[s] <- mu_logMratio + mu_beta1*cov[1,s] + mu_beta2*cov[2,s] + mu_beta3*cov[3,s] + epsilon_logMratio*delta[s] 79 | Mratio[s] <- exp(logMratio[s]) 80 | 81 | } 82 | 83 | # hyperpriors 84 | mu_c2 ~ dnorm(0, 0.01) 85 | sigma_c2 ~ dnorm(0, 0.01) I(0, ) 86 | lambda_c2 <- pow(sigma_c2, -2) 87 | 88 | mu_logMratio ~ dnorm(0, 1) 89 | mu_beta1 ~ dnorm(0, 1) 90 | mu_beta2 ~ dnorm(0, 1) 91 | mu_beta3 ~ dnorm(0, 1) 92 | 93 | sigma_delta ~ dnorm(0, 1) I(0,) 94 | lambda_delta <- pow(sigma_delta, -2) 95 | epsilon_logMratio ~ dbeta(1,1) 96 | sigma_logMratio <- abs(epsilon_logMratio)*sigma_delta 97 | 98 | } -------------------------------------------------------------------------------- /Matlab/Bayes_metad_group_regress_nodp_4cov.txt: -------------------------------------------------------------------------------- 1 | # Bayesian estimation of meta-d/d for group 2 | 3 | data { 4 | for (s in 1:nsubj) { 5 | # Type 1 counts 6 | N[s] <- sum(counts[s,1:nratings*2]) 7 | S[s] <- sum(counts[s,(nratings*2)+1:nratings*4]) 8 | H[s] <- sum(counts[s,(nratings*3)+1:nratings*4]) 9 | M[s] <- sum(counts[s,(nratings*2)+1:nratings*3]) 10 | FA[s] <- 
sum(counts[s,nratings+1:nratings*2]) 11 | CR[s] <- sum(counts[s,1:nratings]) 12 | } 13 | } 14 | 15 | model { 16 | for (s in 1:nsubj) { 17 | 18 | ## TYPE 2 SDT MODEL (META-D) 19 | # Multinomial likelihood for response counts ordered as c(nR_S1,nR_S2) 20 | counts[s,1:nratings] ~ dmulti(prT[s,1:nratings],CR[s]) 21 | counts[s,nratings+1:nratings*2] ~ dmulti(prT[s,nratings+1:nratings*2],FA[s]) 22 | counts[s,(nratings*2)+1:nratings*3] ~ dmulti(prT[s,(nratings*2)+1:nratings*3],M[s]) 23 | counts[s,(nratings*3)+1:nratings*4] ~ dmulti(prT[s,(nratings*3)+1:nratings*4],H[s]) 24 | 25 | # Means of SDT distributions] 26 | mu[s] <- Mratio[s]*d1[s] 27 | S2mu[s] <- mu[s]/2 28 | S1mu[s] <- -mu[s]/2 29 | 30 | # Calculate normalisation constants 31 | C_area_rS1[s] <- phi(c1[s] - S1mu[s]) 32 | I_area_rS1[s] <- phi(c1[s] - S2mu[s]) 33 | C_area_rS2[s] <- 1-phi(c1[s] - S2mu[s]) 34 | I_area_rS2[s] <- 1-phi(c1[s] - S1mu[s]) 35 | 36 | # Get nC_rS1 probs 37 | pr[s,1] <- phi(cS1[s,1] - S1mu[s])/C_area_rS1[s] 38 | for (k in 1:nratings-2) { 39 | pr[s,k+1] <- (phi(cS1[s,k+1] - S1mu[s])-phi(cS1[s,k] - S1mu[s]))/C_area_rS1[s] 40 | } 41 | pr[s,nratings] <- (phi(c1[s] - S1mu[s])-phi(cS1[s,nratings-1] - S1mu[s]))/C_area_rS1[s] 42 | 43 | # Get nI_rS2 probs 44 | pr[s,nratings+1] <- ((1-phi(c1[s] - S1mu[s]))-(1-phi(cS2[s,1] - S1mu[s])))/I_area_rS2[s] 45 | for (k in 1:nratings-2) { 46 | pr[s,nratings+1+k] <- ((1-phi(cS2[s,k] - S1mu[s]))-(1-phi(cS2[s,k+1] - S1mu[s])))/I_area_rS2[s] 47 | } 48 | pr[s,nratings*2] <- (1-phi(cS2[s,nratings-1] - S1mu[s]))/I_area_rS2[s] 49 | 50 | # Get nI_rS1 probs 51 | pr[s,(nratings*2)+1] <- phi(cS1[s,1] - S2mu[s])/I_area_rS1[s] 52 | for (k in 1:nratings-2) { 53 | pr[s,(nratings*2)+1+k] <- (phi(cS1[s,k+1] - S2mu[s])-phi(cS1[s,k] - S2mu[s]))/I_area_rS1[s] 54 | } 55 | pr[s,nratings*3] <- (phi(c1[s] - S2mu[s])-phi(cS1[s,nratings-1] - S2mu[s]))/I_area_rS1[s] 56 | 57 | # Get nC_rS2 probs 58 | pr[s,(nratings*3)+1] <- ((1-phi(c1[s] - S2mu[s]))-(1-phi(cS2[s,1] - S2mu[s])))/C_area_rS2[s] 59 | for (k in 1:nratings-2) { 60 | pr[s,(nratings*3)+1+k] <- ((1-phi(cS2[s,k] - S2mu[s]))-(1-phi(cS2[s,k+1] - S2mu[s])))/C_area_rS2[s] 61 | } 62 | pr[s,nratings*4] <- (1-phi(cS2[s,nratings-1] - S2mu[s]))/C_area_rS2[s] 63 | 64 | # Avoid underflow of probabilities 65 | for (i in 1:nratings*4) { 66 | prT[s,i] <- ifelse(pr[s,i] < Tol, Tol, pr[s,i]) 67 | } 68 | 69 | # Specify ordered prior on criteria (bounded above and below by Type 1 c) 70 | for (j in 1:(nratings-1)) { 71 | cS1_raw[s,j] ~ dnorm(-mu_c2, lambda_c2) T(,c1[s]) 72 | cS2_raw[s,j] ~ dnorm(mu_c2, lambda_c2) T(c1[s],) 73 | } 74 | cS1[s,1:nratings-1] <- sort(cS1_raw[s, ]) 75 | cS2[s,1:nratings-1] <- sort(cS2_raw[s, ]) 76 | 77 | delta[s] ~ dt(0, lambda_delta, 5) 78 | logMratio[s] <- mu_logMratio + mu_beta1*cov[1,s] + mu_beta2*cov[2,s] + mu_beta3*cov[3,s] + mu_beta4*cov[4,s] + epsilon_logMratio*delta[s] 79 | Mratio[s] <- exp(logMratio[s]) 80 | 81 | } 82 | 83 | # hyperpriors 84 | mu_c2 ~ dnorm(0, 0.01) 85 | sigma_c2 ~ dnorm(0, 0.01) I(0, ) 86 | lambda_c2 <- pow(sigma_c2, -2) 87 | 88 | mu_logMratio ~ dnorm(0, 1) 89 | mu_beta1 ~ dnorm(0, 1) 90 | mu_beta2 ~ dnorm(0, 1) 91 | mu_beta3 ~ dnorm(0, 1) 92 | mu_beta4 ~ dnorm(0, 1) 93 | 94 | sigma_delta ~ dnorm(0, 1) I(0,) 95 | lambda_delta <- pow(sigma_delta, -2) 96 | epsilon_logMratio ~ dbeta(1,1) 97 | sigma_logMratio <- abs(epsilon_logMratio)*sigma_delta 98 | 99 | } -------------------------------------------------------------------------------- /Matlab/Bayes_metad_group_regress_nodp_5cov.txt: 
-------------------------------------------------------------------------------- 1 | # Bayesian estimation of meta-d/d for group 2 | 3 | data { 4 | for (s in 1:nsubj) { 5 | # Type 1 counts 6 | N[s] <- sum(counts[s,1:nratings*2]) 7 | S[s] <- sum(counts[s,(nratings*2)+1:nratings*4]) 8 | H[s] <- sum(counts[s,(nratings*3)+1:nratings*4]) 9 | M[s] <- sum(counts[s,(nratings*2)+1:nratings*3]) 10 | FA[s] <- sum(counts[s,nratings+1:nratings*2]) 11 | CR[s] <- sum(counts[s,1:nratings]) 12 | } 13 | } 14 | 15 | model { 16 | for (s in 1:nsubj) { 17 | 18 | ## TYPE 2 SDT MODEL (META-D) 19 | # Multinomial likelihood for response counts ordered as c(nR_S1,nR_S2) 20 | counts[s,1:nratings] ~ dmulti(prT[s,1:nratings],CR[s]) 21 | counts[s,nratings+1:nratings*2] ~ dmulti(prT[s,nratings+1:nratings*2],FA[s]) 22 | counts[s,(nratings*2)+1:nratings*3] ~ dmulti(prT[s,(nratings*2)+1:nratings*3],M[s]) 23 | counts[s,(nratings*3)+1:nratings*4] ~ dmulti(prT[s,(nratings*3)+1:nratings*4],H[s]) 24 | 25 | # Means of SDT distributions] 26 | mu[s] <- Mratio[s]*d1[s] 27 | S2mu[s] <- mu[s]/2 28 | S1mu[s] <- -mu[s]/2 29 | 30 | # Calculate normalisation constants 31 | C_area_rS1[s] <- phi(c1[s] - S1mu[s]) 32 | I_area_rS1[s] <- phi(c1[s] - S2mu[s]) 33 | C_area_rS2[s] <- 1-phi(c1[s] - S2mu[s]) 34 | I_area_rS2[s] <- 1-phi(c1[s] - S1mu[s]) 35 | 36 | # Get nC_rS1 probs 37 | pr[s,1] <- phi(cS1[s,1] - S1mu[s])/C_area_rS1[s] 38 | for (k in 1:nratings-2) { 39 | pr[s,k+1] <- (phi(cS1[s,k+1] - S1mu[s])-phi(cS1[s,k] - S1mu[s]))/C_area_rS1[s] 40 | } 41 | pr[s,nratings] <- (phi(c1[s] - S1mu[s])-phi(cS1[s,nratings-1] - S1mu[s]))/C_area_rS1[s] 42 | 43 | # Get nI_rS2 probs 44 | pr[s,nratings+1] <- ((1-phi(c1[s] - S1mu[s]))-(1-phi(cS2[s,1] - S1mu[s])))/I_area_rS2[s] 45 | for (k in 1:nratings-2) { 46 | pr[s,nratings+1+k] <- ((1-phi(cS2[s,k] - S1mu[s]))-(1-phi(cS2[s,k+1] - S1mu[s])))/I_area_rS2[s] 47 | } 48 | pr[s,nratings*2] <- (1-phi(cS2[s,nratings-1] - S1mu[s]))/I_area_rS2[s] 49 | 50 | # Get nI_rS1 probs 51 | pr[s,(nratings*2)+1] <- phi(cS1[s,1] - S2mu[s])/I_area_rS1[s] 52 | for (k in 1:nratings-2) { 53 | pr[s,(nratings*2)+1+k] <- (phi(cS1[s,k+1] - S2mu[s])-phi(cS1[s,k] - S2mu[s]))/I_area_rS1[s] 54 | } 55 | pr[s,nratings*3] <- (phi(c1[s] - S2mu[s])-phi(cS1[s,nratings-1] - S2mu[s]))/I_area_rS1[s] 56 | 57 | # Get nC_rS2 probs 58 | pr[s,(nratings*3)+1] <- ((1-phi(c1[s] - S2mu[s]))-(1-phi(cS2[s,1] - S2mu[s])))/C_area_rS2[s] 59 | for (k in 1:nratings-2) { 60 | pr[s,(nratings*3)+1+k] <- ((1-phi(cS2[s,k] - S2mu[s]))-(1-phi(cS2[s,k+1] - S2mu[s])))/C_area_rS2[s] 61 | } 62 | pr[s,nratings*4] <- (1-phi(cS2[s,nratings-1] - S2mu[s]))/C_area_rS2[s] 63 | 64 | # Avoid underflow of probabilities 65 | for (i in 1:nratings*4) { 66 | prT[s,i] <- ifelse(pr[s,i] < Tol, Tol, pr[s,i]) 67 | } 68 | 69 | # Specify ordered prior on criteria (bounded above and below by Type 1 c) 70 | for (j in 1:(nratings-1)) { 71 | cS1_raw[s,j] ~ dnorm(-mu_c2, lambda_c2) T(,c1[s]) 72 | cS2_raw[s,j] ~ dnorm(mu_c2, lambda_c2) T(c1[s],) 73 | } 74 | cS1[s,1:nratings-1] <- sort(cS1_raw[s, ]) 75 | cS2[s,1:nratings-1] <- sort(cS2_raw[s, ]) 76 | 77 | delta[s] ~ dt(0, lambda_delta, 5) 78 | logMratio[s] <- mu_logMratio + mu_beta1*cov[1,s] + mu_beta2*cov[2,s] + mu_beta3*cov[3,s] + mu_beta4*cov[4,s] + mu_beta5*cov[5,s] + epsilon_logMratio*delta[s] 79 | Mratio[s] <- exp(logMratio[s]) 80 | 81 | } 82 | 83 | # hyperpriors 84 | mu_c2 ~ dnorm(0, 0.01) 85 | sigma_c2 ~ dnorm(0, 0.01) I(0, ) 86 | lambda_c2 <- pow(sigma_c2, -2) 87 | 88 | mu_logMratio ~ dnorm(0, 1) 89 | mu_beta1 ~ dnorm(0, 1) 90 | mu_beta2 ~ dnorm(0, 
1) 91 | mu_beta3 ~ dnorm(0, 1) 92 | mu_beta4 ~ dnorm(0, 1) 93 | mu_beta5 ~ dnorm(0, 1) 94 | 95 | sigma_delta ~ dnorm(0, 1) I(0,) 96 | lambda_delta <- pow(sigma_delta, -2) 97 | epsilon_logMratio ~ dbeta(1,1) 98 | sigma_logMratio <- abs(epsilon_logMratio)*sigma_delta 99 | 100 | } -------------------------------------------------------------------------------- /Matlab/Bayes_metad_rc.txt: -------------------------------------------------------------------------------- 1 | # Bayesian estimation of meta-d for a single subject 2 | 3 | data { 4 | # Type 1 counts 5 | N <- sum(counts[1:nratings*2]) 6 | S <- sum(counts[(nratings*2)+1:nratings*4]) 7 | H <- sum(counts[(nratings*3)+1:nratings*4]) 8 | M <- sum(counts[(nratings*2)+1:nratings*3]) 9 | FA <- sum(counts[nratings+1:nratings*2]) 10 | CR <- sum(counts[1:nratings]) 11 | } 12 | 13 | model { 14 | 15 | ## TYPE 1 SDT BINOMIAL MODEL 16 | H ~ dbin(h,S) 17 | FA ~ dbin(f,N) 18 | h <- phi(d1/2-c1) 19 | f <- phi(-d1/2-c1) 20 | 21 | # Type 1 priors 22 | c1 ~ dnorm(0, 2) 23 | d1 ~ dnorm(0, 0.5) 24 | 25 | ## TYPE 2 SDT MODEL (META-D) 26 | # Multinomial likelihood for response counts ordered as c(nR_S1,nR_S2) 27 | counts[1:nratings] ~ dmulti(prT[1:nratings],CR) 28 | counts[nratings+1:nratings*2] ~ dmulti(prT[nratings+1:nratings*2],FA) 29 | counts[(nratings*2)+1:nratings*3] ~ dmulti(prT[(nratings*2)+1:nratings*3],M) 30 | counts[(nratings*3)+1:nratings*4] ~ dmulti(prT[(nratings*3)+1:nratings*4],H) 31 | 32 | # Means of SDT distributions separately for each response in RC model 33 | S2mu_rS1 <- meta_d_rS1/2 34 | S1mu_rS1 <- -meta_d_rS1/2 35 | S2mu_rS2 <- meta_d_rS2/2 36 | S1mu_rS2 <- -meta_d_rS2/2 37 | 38 | # Calculate normalisation constants 39 | C_area_rS1 <- phi(c1 - S1mu_rS1) 40 | I_area_rS1 <- phi(c1 - S2mu_rS1) 41 | C_area_rS2 <- 1-phi(c1 - S2mu_rS2) 42 | I_area_rS2 <- 1-phi(c1 - S1mu_rS2) 43 | 44 | # Get nC_rS1 probs 45 | pr[1] <- phi(cS1[1] - S1mu_rS1)/C_area_rS1 46 | for (k in 1:nratings-2) { 47 | pr[k+1] <- (phi(cS1[k+1] - S1mu_rS1)-phi(cS1[k] - S1mu_rS1))/C_area_rS1 48 | } 49 | pr[nratings] <- (phi(c1 - S1mu_rS1)-phi(cS1[nratings-1] - S1mu_rS1))/C_area_rS1 50 | 51 | # Get nI_rS2 probs 52 | pr[nratings+1] <- ((1-phi(c1 - S1mu_rS2))-(1-phi(cS2[1] - S1mu_rS2)))/I_area_rS2 53 | for (k in 1:nratings-2) { 54 | pr[nratings+1+k] <- ((1-phi(cS2[k] - S1mu_rS2))-(1-phi(cS2[k+1] - S1mu_rS2)))/I_area_rS2 55 | } 56 | pr[nratings*2] <- (1-phi(cS2[nratings-1] - S1mu_rS2))/I_area_rS2 57 | 58 | # Get nI_rS1 probs 59 | pr[(nratings*2)+1] <- phi(cS1[1] - S2mu_rS1)/I_area_rS1 60 | for (k in 1:nratings-2) { 61 | pr[(nratings*2)+1+k] <- (phi(cS1[k+1] - S2mu_rS1)-phi(cS1[k] - S2mu_rS1))/I_area_rS1 62 | } 63 | pr[nratings*3] <- (phi(c1 - S2mu_rS1)-phi(cS1[nratings-1] - S2mu_rS1))/I_area_rS1 64 | 65 | # Get nC_rS2 probs 66 | pr[(nratings*3)+1] <- ((1-phi(c1 - S2mu_rS2))-(1-phi(cS2[1] - S2mu_rS2)))/C_area_rS2 67 | for (k in 1:nratings-2) { 68 | pr[(nratings*3)+1+k] <- ((1-phi(cS2[k] - S2mu_rS2))-(1-phi(cS2[k+1] - S2mu_rS2)))/C_area_rS2 69 | } 70 | pr[nratings*4] <- (1-phi(cS2[nratings-1] - S2mu_rS2))/C_area_rS2 71 | 72 | # Avoid underflow of probabilities 73 | for (i in 1:nratings*4) { 74 | prT[i] <- ifelse(pr[i] < Tol, Tol, pr[i]) 75 | } 76 | 77 | # Specify ordered prior on criteria (bounded above and below by Type 1 c1) 78 | for (j in 1:nratings-1) { 79 | cS1_raw[j] ~ dnorm(0,2) I(,c1-Tol) 80 | cS2_raw[j] ~ dnorm(0,2) I(c1+Tol,) 81 | } 82 | cS1[1:nratings-1] <- sort(cS1_raw) 83 | cS2[1:nratings-1] <- sort(cS2_raw) 84 | 85 | meta_d_rS1 ~ dnorm(d1,0.5) 86 | meta_d_rS2 ~ 
dnorm(d1,0.5) 87 | } -------------------------------------------------------------------------------- /Matlab/Bayes_metad_rc_group.txt: -------------------------------------------------------------------------------- 1 | # Bayesian estimation of response-conditional meta-d for group data 2 | 3 | data { 4 | for (s in 1:nsubj) { 5 | # Type 1 counts 6 | N[s] <- sum(counts[s,1:nratings*2]) 7 | S[s] <- sum(counts[s,(nratings*2)+1:nratings*4]) 8 | H[s] <- sum(counts[s,(nratings*3)+1:nratings*4]) 9 | M[s] <- sum(counts[s,(nratings*2)+1:nratings*3]) 10 | FA[s] <- sum(counts[s,nratings+1:nratings*2]) 11 | CR[s] <- sum(counts[s,1:nratings]) 12 | } 13 | } 14 | 15 | model { 16 | for (s in 1:nsubj) { 17 | 18 | ## TYPE 1 SDT BINOMIAL MODEL 19 | H[s] ~ dbin(h[s],S[s]) 20 | FA[s] ~ dbin(f[s],N[s]) 21 | h[s] <- phi(d1[s]/2-c1[s]) 22 | f[s] <- phi(-d1[s]/2-c1[s]) 23 | 24 | # Type 1 priors 25 | c1[s] ~ dnorm(mu_c, lambda_c) 26 | d1[s] ~ dnorm(mu_d1, lambda_d1) 27 | 28 | ## TYPE 2 SDT MODEL (META-D) 29 | # Multinomial likelihood for response counts ordered as c(nR_S1,nR_S2) 30 | counts[s,1:nratings] ~ dmulti(prT[s,1:nratings],CR[s]) 31 | counts[s,nratings+1:nratings*2] ~ dmulti(prT[s,nratings+1:nratings*2],FA[s]) 32 | counts[s,(nratings*2)+1:nratings*3] ~ dmulti(prT[s,(nratings*2)+1:nratings*3],M[s]) 33 | counts[s,(nratings*3)+1:nratings*4] ~ dmulti(prT[s,(nratings*3)+1:nratings*4],H[s]) 34 | 35 | # Means of SDT distributions] 36 | mu_rS1[s] <- Mratio_rS1[s]*d1[s] 37 | mu_rS2[s] <- Mratio_rS2[s]*d1[s] 38 | S2mu_rS1[s] <- mu_rS1[s]/2 39 | S1mu_rS1[s] <- -mu_rS1[s]/2 40 | S2mu_rS2[s] <- mu_rS2[s]/2 41 | S1mu_rS2[s] <- -mu_rS2[s]/2 42 | 43 | # Calculate normalisation constants 44 | C_area_rS1[s] <- phi(c1[s] - S1mu_rS1[s]) 45 | I_area_rS1[s] <- phi(c1[s] - S2mu_rS1[s]) 46 | C_area_rS2[s] <- 1-phi(c1[s] - S2mu_rS2[s]) 47 | I_area_rS2[s] <- 1-phi(c1[s] - S1mu_rS2[s]) 48 | 49 | # Get nC_rS1 probs 50 | pr[s,1] <- phi(cS1[s,1] - S1mu_rS1[s])/C_area_rS1[s] 51 | for (k in 1:nratings-2) { 52 | pr[s,k+1] <- (phi(cS1[s,k+1] - S1mu_rS1[s])-phi(cS1[s,k] - S1mu_rS1[s]))/C_area_rS1[s] 53 | } 54 | pr[s,nratings] <- (phi(c1[s] - S1mu_rS1[s])-phi(cS1[s,nratings-1] - S1mu_rS1[s]))/C_area_rS1[s] 55 | 56 | # Get nI_rS2 probs 57 | pr[s,nratings+1] <- ((1-phi(c1[s] - S1mu_rS2[s]))-(1-phi(cS2[s,1] - S1mu_rS2[s])))/I_area_rS2[s] 58 | for (k in 1:nratings-2) { 59 | pr[s,nratings+1+k] <- ((1-phi(cS2[s,k] - S1mu_rS2[s]))-(1-phi(cS2[s,k+1] - S1mu_rS2[s])))/I_area_rS2[s] 60 | } 61 | pr[s,nratings*2] <- (1-phi(cS2[s,nratings-1] - S1mu_rS2[s]))/I_area_rS2[s] 62 | 63 | # Get nI_rS1 probs 64 | pr[s,(nratings*2)+1] <- phi(cS1[s,1] - S2mu_rS1[s])/I_area_rS1[s] 65 | for (k in 1:nratings-2) { 66 | pr[s,(nratings*2)+1+k] <- (phi(cS1[s,k+1] - S2mu_rS1[s])-phi(cS1[s,k] - S2mu_rS1[s]))/I_area_rS1[s] 67 | } 68 | pr[s,nratings*3] <- (phi(c1[s] - S2mu_rS1[s])-phi(cS1[s,nratings-1] - S2mu_rS1[s]))/I_area_rS1[s] 69 | 70 | # Get nC_rS2 probs 71 | pr[s,(nratings*3)+1] <- ((1-phi(c1[s] - S2mu_rS2[s]))-(1-phi(cS2[s,1] - S2mu_rS2[s])))/C_area_rS2[s] 72 | for (k in 1:nratings-2) { 73 | pr[s,(nratings*3)+1+k] <- ((1-phi(cS2[s,k] - S2mu_rS2[s]))-(1-phi(cS2[s,k+1] - S2mu_rS2[s])))/C_area_rS2[s] 74 | } 75 | pr[s,nratings*4] <- (1-phi(cS2[s,nratings-1] - S2mu_rS2[s]))/C_area_rS2[s] 76 | 77 | # Avoid underflow of probabilities 78 | for (i in 1:nratings*4) { 79 | prT[s,i] <- ifelse(pr[s,i] < Tol, Tol, pr[s,i]) 80 | } 81 | 82 | # Specify ordered prior on criteria (bounded above and below by Type 1 c) 83 | for (j in 1:nratings-1) { 84 | cS1_raw[s,j] ~ dnorm(-mu_c2, 
lambda_c2) T(,c1[s]) 85 | cS2_raw[s,j] ~ dnorm(mu_c2, lambda_c2) T(c1[s],) 86 | } 87 | cS1[s,1:nratings-1] <- sort(cS1_raw[s, ]) 88 | cS2[s,1:nratings-1] <- sort(cS2_raw[s, ]) 89 | 90 | delta_rS1[s] ~ dnorm(0, lambda_delta_rS1) 91 | logMratio_rS1[s] <- mu_logMratio_rS1 + epsilon_logMratio_rS1*delta_rS1[s] 92 | Mratio_rS1[s] <- exp(logMratio_rS1[s]) 93 | delta_rS2[s] ~ dnorm(0, lambda_delta_rS2) 94 | logMratio_rS2[s] <- mu_logMratio_rS2 + epsilon_logMratio_rS2*delta_rS2[s] 95 | Mratio_rS2[s] <- exp(logMratio_rS2[s]) 96 | 97 | } 98 | 99 | # hyperpriors 100 | mu_d1 ~ dnorm(0, 0.01) 101 | mu_c ~ dnorm(0, 0.01) 102 | sigma_d1 ~ dnorm(0, 0.01) I(0, ) 103 | sigma_c ~ dnorm(0, 0.01) I(0, ) 104 | lambda_d1 <- pow(sigma_d1, -2) 105 | lambda_c <- pow(sigma_c, -2) 106 | 107 | mu_c2 ~ dnorm(0, 0.01) 108 | sigma_c2 ~ dnorm(0, 0.01) I(0, ) 109 | lambda_c2 <- pow(sigma_c2, -2) 110 | 111 | mu_logMratio_rS1 ~ dnorm(0, 0.5) 112 | sigma_delta_rS1 ~ dnorm(0, 1) I(0,) 113 | lambda_delta_rS1 <- pow(sigma_delta_rS1, -2) 114 | epsilon_logMratio_rS1 ~ dbeta(1,1) 115 | sigma_logMratio_rS1 <- abs(epsilon_logMratio_rS1)*sigma_delta_rS1 116 | 117 | mu_logMratio_rS2 ~ dnorm(0, 0.5) 118 | sigma_delta_rS2 ~ dnorm(0, 1) I(0,) 119 | lambda_delta_rS2 <- pow(sigma_delta_rS2, -2) 120 | epsilon_logMratio_rS2 ~ dbeta(1,1) 121 | sigma_logMratio_rS2 <- abs(epsilon_logMratio_rS2)*sigma_delta_rS2 122 | 123 | } 124 | -------------------------------------------------------------------------------- /Matlab/Bayes_metad_rc_group_nodp.txt: -------------------------------------------------------------------------------- 1 | # Bayesian estimation of response-conditional meta-d for group data 2 | 3 | data { 4 | for (s in 1:nsubj) { 5 | # Type 1 counts 6 | N[s] <- sum(counts[s,1:nratings*2]) 7 | S[s] <- sum(counts[s,(nratings*2)+1:nratings*4]) 8 | H[s] <- sum(counts[s,(nratings*3)+1:nratings*4]) 9 | M[s] <- sum(counts[s,(nratings*2)+1:nratings*3]) 10 | FA[s] <- sum(counts[s,nratings+1:nratings*2]) 11 | CR[s] <- sum(counts[s,1:nratings]) 12 | } 13 | } 14 | 15 | model { 16 | for (s in 1:nsubj) { 17 | 18 | ## TYPE 2 SDT MODEL (META-D) 19 | # Multinomial likelihood for response counts ordered as c(nR_S1,nR_S2) 20 | counts[s,1:nratings] ~ dmulti(prT[s,1:nratings],CR[s]) 21 | counts[s,nratings+1:nratings*2] ~ dmulti(prT[s,nratings+1:nratings*2],FA[s]) 22 | counts[s,(nratings*2)+1:nratings*3] ~ dmulti(prT[s,(nratings*2)+1:nratings*3],M[s]) 23 | counts[s,(nratings*3)+1:nratings*4] ~ dmulti(prT[s,(nratings*3)+1:nratings*4],H[s]) 24 | 25 | # Means of SDT distributions] 26 | mu_rS1[s] <- Mratio_rS1[s]*d1[s] 27 | mu_rS2[s] <- Mratio_rS2[s]*d1[s] 28 | S2mu_rS1[s] <- mu_rS1[s]/2 29 | S1mu_rS1[s] <- -mu_rS1[s]/2 30 | S2mu_rS2[s] <- mu_rS2[s]/2 31 | S1mu_rS2[s] <- -mu_rS2[s]/2 32 | 33 | # Calculate normalisation constants 34 | C_area_rS1[s] <- phi(c1[s] - S1mu_rS1[s]) 35 | I_area_rS1[s] <- phi(c1[s] - S2mu_rS1[s]) 36 | C_area_rS2[s] <- 1-phi(c1[s] - S2mu_rS2[s]) 37 | I_area_rS2[s] <- 1-phi(c1[s] - S1mu_rS2[s]) 38 | 39 | # Get nC_rS1 probs 40 | pr[s,1] <- phi(cS1[s,1] - S1mu_rS1[s])/C_area_rS1[s] 41 | for (k in 1:nratings-2) { 42 | pr[s,k+1] <- (phi(cS1[s,k+1] - S1mu_rS1[s])-phi(cS1[s,k] - S1mu_rS1[s]))/C_area_rS1[s] 43 | } 44 | pr[s,nratings] <- (phi(c1[s] - S1mu_rS1[s])-phi(cS1[s,nratings-1] - S1mu_rS1[s]))/C_area_rS1[s] 45 | 46 | # Get nI_rS2 probs 47 | pr[s,nratings+1] <- ((1-phi(c1[s] - S1mu_rS2[s]))-(1-phi(cS2[s,1] - S1mu_rS2[s])))/I_area_rS2[s] 48 | for (k in 1:nratings-2) { 49 | pr[s,nratings+1+k] <- ((1-phi(cS2[s,k] - S1mu_rS2[s]))-(1-phi(cS2[s,k+1] 
- S1mu_rS2[s])))/I_area_rS2[s] 50 | } 51 | pr[s,nratings*2] <- (1-phi(cS2[s,nratings-1] - S1mu_rS2[s]))/I_area_rS2[s] 52 | 53 | # Get nI_rS1 probs 54 | pr[s,(nratings*2)+1] <- phi(cS1[s,1] - S2mu_rS1[s])/I_area_rS1[s] 55 | for (k in 1:nratings-2) { 56 | pr[s,(nratings*2)+1+k] <- (phi(cS1[s,k+1] - S2mu_rS1[s])-phi(cS1[s,k] - S2mu_rS1[s]))/I_area_rS1[s] 57 | } 58 | pr[s,nratings*3] <- (phi(c1[s] - S2mu_rS1[s])-phi(cS1[s,nratings-1] - S2mu_rS1[s]))/I_area_rS1[s] 59 | 60 | # Get nC_rS2 probs 61 | pr[s,(nratings*3)+1] <- ((1-phi(c1[s] - S2mu_rS2[s]))-(1-phi(cS2[s,1] - S2mu_rS2[s])))/C_area_rS2[s] 62 | for (k in 1:nratings-2) { 63 | pr[s,(nratings*3)+1+k] <- ((1-phi(cS2[s,k] - S2mu_rS2[s]))-(1-phi(cS2[s,k+1] - S2mu_rS2[s])))/C_area_rS2[s] 64 | } 65 | pr[s,nratings*4] <- (1-phi(cS2[s,nratings-1] - S2mu_rS2[s]))/C_area_rS2[s] 66 | 67 | # Avoid underflow of probabilities 68 | for (i in 1:nratings*4) { 69 | prT[s,i] <- ifelse(pr[s,i] < Tol, Tol, pr[s,i]) 70 | } 71 | 72 | # Specify ordered prior on criteria (bounded above and below by Type 1 c) 73 | for (j in 1:nratings-1) { 74 | cS1_raw[s,j] ~ dnorm(-mu_c2, lambda_c2) T(,c1[s]) 75 | cS2_raw[s,j] ~ dnorm(mu_c2, lambda_c2) T(c1[s],) 76 | } 77 | cS1[s,1:nratings-1] <- sort(cS1_raw[s, ]) 78 | cS2[s,1:nratings-1] <- sort(cS2_raw[s, ]) 79 | 80 | delta_rS1[s] ~ dnorm(0, lambda_delta_rS1) 81 | logMratio_rS1[s] <- mu_logMratio_rS1 + epsilon_logMratio_rS1*delta_rS1[s] 82 | Mratio_rS1[s] <- exp(logMratio_rS1[s]) 83 | delta_rS2[s] ~ dnorm(0, lambda_delta_rS2) 84 | logMratio_rS2[s] <- mu_logMratio_rS2 + epsilon_logMratio_rS2*delta_rS2[s] 85 | Mratio_rS2[s] <- exp(logMratio_rS2[s]) 86 | 87 | } 88 | 89 | # hyperpriors 90 | mu_c2 ~ dnorm(0, 0.01) 91 | sigma_c2 ~ dnorm(0, 0.01) I(0, ) 92 | lambda_c2 <- pow(sigma_c2, -2) 93 | 94 | mu_logMratio_rS1 ~ dnorm(0, 1) 95 | sigma_delta_rS1 ~ dnorm(0, 1) I(0,) 96 | lambda_delta_rS1 <- pow(sigma_delta_rS1, -2) 97 | epsilon_logMratio_rS1 ~ dbeta(1,1) 98 | sigma_logMratio_rS1 <- abs(epsilon_logMratio_rS1)*sigma_delta_rS1 99 | 100 | mu_logMratio_rS2 ~ dnorm(0, 1) 101 | sigma_delta_rS2 ~ dnorm(0, 1) I(0,) 102 | lambda_delta_rS2 <- pow(sigma_delta_rS2, -2) 103 | epsilon_logMratio_rS2 ~ dbeta(1,1) 104 | sigma_logMratio_rS2 <- abs(epsilon_logMratio_rS2)*sigma_delta_rS2 105 | 106 | } 107 | -------------------------------------------------------------------------------- /Matlab/calc_CI.m: -------------------------------------------------------------------------------- 1 | function hdi = calc_CI(samples, q) 2 | % function hdi = calc_CI(samples, q) 3 | % 4 | % Calculates symmetric 95% confidence interval for MCMC samples 5 | % 6 | % INPUTS 7 | % 8 | % samples - vector of MCMC samples 9 | % q - bounds on symmetric CI, default [0.025 0.975] 10 | % 11 | % Steve Fleming 2015 stephen.fleming@ucl.ac.uk 12 | 13 | if ~exist('q','var') || isempty(fninv) 14 | 15 | q = [0.025 0.975]; 16 | 17 | end 18 | 19 | hdi = quantile(samples, q); 20 | 21 | -------------------------------------------------------------------------------- /Matlab/calc_HDI.m: -------------------------------------------------------------------------------- 1 | function hdi = calc_HDI(samples, q) 2 | % function hdi = calc_HDI(samples, q) 3 | % 4 | % Calculates highest-density interval on chain of MCMC samples 5 | % See Kruschke (2015) Doing Bayesian Data Analysis 6 | % 7 | % INPUTS 8 | % 9 | % samples - vector of MCMC samples 10 | % q - credible mass, scalar between 0 and 1, default is 0.95 11 | % for 95% HDI 12 | % 13 | % Based on R code by Kruschke (2015) 14 | % 
http://www.indiana.edu/~kruschke/BEST/ 15 | % and Matlab translation by Nils Winter 16 | % https://github.com/NilsWinter/matlab-bayesian-estimation/blob/master/mbe_hdi.m 17 | % 18 | % Steve Fleming 2015 stephen.fleming@ucl.ac.uk 19 | 20 | if ~exist('q','var') %|| isempty(fninv) 21 | q = 0.95; 22 | end 23 | 24 | sortedVec = sort(samples); 25 | ciIdx = ceil(q * length(sortedVec)); 26 | nCIs = length(sortedVec) - ciIdx; % number of vector elements that make HDI 27 | 28 | % Determine middle of HDI to get upper and lower bound 29 | ciWidth = zeros(nCIs,1); 30 | for ind = 1:nCIs 31 | ciWidth(ind) = sortedVec(ind + ciIdx) - sortedVec(ind); 32 | end 33 | [~,idxMin] = min(ciWidth); 34 | HDImin = sortedVec(idxMin); 35 | HDImax = sortedVec(idxMin + ciIdx); 36 | hdi = [HDImin, HDImax]; 37 | 38 | % 39 | % if ~exist('q','var') || isempty(fninv) 40 | % 41 | % q = [0.025 0.975]; 42 | % 43 | % end 44 | % 45 | % % hdi = quantile(samples, q); 46 | 47 | -------------------------------------------------------------------------------- /Matlab/exampleFit.m: -------------------------------------------------------------------------------- 1 | % Example Bayesian meta-d fit (single subject) 2 | clear all 3 | close all 4 | 5 | Ntrials = 1000; 6 | c = 0; 7 | c1 = [-1.5 -1 -0.5]; 8 | c2 = [0.5 1 1.5]; 9 | d = 1.5; 10 | meta_d = 1; 11 | 12 | mcmc_params = fit_meta_d_params; 13 | mcmc_params.estimate_dprime = 0; 14 | 15 | % Generate data 16 | sim = metad_sim(d, meta_d, c, c1, c2, Ntrials); 17 | 18 | % Fit data 19 | fit = fit_meta_d_mcmc(sim.nR_S1, sim.nR_S2, mcmc_params); 20 | hdi = calc_HDI(fit.mcmc.samples.meta_d(:)); 21 | fprintf(['\n HDI: ', num2str(hdi) '\n\n']) 22 | 23 | % Visualise single-subject fits 24 | metad_visualise -------------------------------------------------------------------------------- /Matlab/exampleFit_corr.m: -------------------------------------------------------------------------------- 1 | % Demonstration of hierarchical model fits 2 | % 3 | % SF 2014 4 | 5 | clear all 6 | HMMpath = '~/Documents/HMM'; 7 | addpath(HMMpath); 8 | addpath('~/Dropbox/Utils/graphics/export_fig/') 9 | 10 | Ntrials = 400; 11 | Nsub = 100; 12 | c = 0; 13 | c1 = [-1.5 -1 -0.5]; 14 | c2 = [0.5 1 1.5]; 15 | 16 | group_d = 2; 17 | group_mratio = 0.8; 18 | type1_sigma = 0.2; 19 | rho = 0.6; 20 | type2_sigma = 0.2; 21 | 22 | for i = 1:Nsub 23 | 24 | % Generate Mratios for this subject 25 | %bigSigma = [type2_sigma^2 0; 0 type2_sigma^2]; 26 | bigSigma = [type2_sigma^2 rho.*type2_sigma^2; rho.*type2_sigma^2 type2_sigma^2]; 27 | mratios(i,:) = mvnrnd([group_mratio group_mratio], bigSigma); 28 | 29 | %% Task 1 30 | % Generate dprime 31 | d = normrnd(group_d, type1_sigma); 32 | metad = mratios(i,1).*d; 33 | 34 | % Generate data 35 | sim = metad_sim(d, metad, c, c1, c2, Ntrials); 36 | 37 | nR_S1(1).counts{i} = sim.nR_S1; 38 | nR_S2(1).counts{i} = sim.nR_S2; 39 | 40 | %% Task 2 41 | d = normrnd(group_d, type1_sigma); 42 | metad = mratios(i,2).*d; 43 | 44 | % Generate data 45 | sim = metad_sim(d, metad, c, c1, c2, Ntrials); 46 | 47 | nR_S1(2).counts{i} = sim.nR_S1; 48 | nR_S2(2).counts{i} = sim.nR_S2; 49 | 50 | end 51 | 52 | % Fit group data all at once 53 | mcmc_params = fit_meta_d_params; 54 | fit = fit_meta_d_mcmc_groupCorr(nR_S1, nR_S2, mcmc_params); 55 | plotSamples(fit.mcmc.samples.mu_logMratio(:,:,1)) 56 | plotSamples(fit.mcmc.samples.mu_logMratio(:,:,2)) 57 | 58 | h1 = figure; 59 | set(gcf, 'Position', [200 200 400 300]) 60 | h= histogram(fit.mcmc.samples.rho(:), 'Normalization', 'probability'); 61 | xlabel('\rho'); 62 | 
ylabel('Posterior density'); 63 | line([rho rho],[0 max(h.Values)+0.015], 'LineWidth', 2, 'Color', 'k', 'LineStyle', '--') 64 | ci = calc_CI(fit.mcmc.samples.rho(:)); 65 | line([ci(1) ci(2)],[0.002 0.002], 'LineWidth', 3, 'Color', [1 1 1]) 66 | box off 67 | set(gca, 'FontSize', 14, 'XLim', [-1 1]) -------------------------------------------------------------------------------- /Matlab/exampleFit_group.m: -------------------------------------------------------------------------------- 1 | % Demonstration of hierarchical model fits 2 | % 3 | % SF 2014 4 | 5 | clear all 6 | 7 | Ntrials = 200; 8 | Nsub = 30; 9 | c = -0.3; 10 | c1 = [-1.5 -1 -0.5]; 11 | c2 = [0.5 1 1.5]; 12 | 13 | group_d = 2; 14 | group_mratio = 0.8; 15 | sigma = 0.5; 16 | 17 | mcmc_params = fit_meta_d_params; 18 | mcmc_params.estimate_dprime = 0; 19 | 20 | for i = 1:Nsub 21 | 22 | % Generate dprime 23 | d(i) = normrnd(group_d, sigma); 24 | metad = group_mratio.*d(i); 25 | 26 | % Generate data 27 | sim = metad_sim(d(i), metad, c, c1, c2, Ntrials); 28 | 29 | nR_S1{i} = sim.nR_S1; 30 | nR_S2{i} = sim.nR_S2; 31 | 32 | end 33 | 34 | % Fit group data all at once 35 | fit = fit_meta_d_mcmc_group(nR_S1, nR_S2, mcmc_params); 36 | 37 | % Call plotSamples to plot posterior of group Mratio 38 | plotSamples(exp(fit.mcmc.samples.mu_logMratio)) 39 | hdi = calc_HDI(exp(fit.mcmc.samples.mu_logMratio(:))); 40 | fprintf(['\n HDI on meta-d/d: ', num2str(hdi) '\n\n']) 41 | 42 | metad_group_visualise 43 | -------------------------------------------------------------------------------- /Matlab/exampleFit_group_rc.m: -------------------------------------------------------------------------------- 1 | % Demonstration of hierarchical model fits 2 | % 3 | % SF 2014 4 | 5 | clear all 6 | close all 7 | 8 | Ntrials = 1000; 9 | Nsub = 10; 10 | c = 0; 11 | c1 = [-1.5 -1 -0.5]; 12 | c2 = [0.5 1 1.5]; 13 | 14 | group_d = 2; 15 | sigma = 0.5; 16 | noise = [1/3 2/3]; 17 | 18 | for i = 1:Nsub 19 | 20 | % Generate dprime 21 | d(i) = normrnd(group_d, sigma); 22 | 23 | % Generate data 24 | sim = type2_SDT_sim(d(i), noise, c, c1, c2, Ntrials); 25 | 26 | nR_S1{i} = sim.nR_S1; 27 | nR_S2{i} = sim.nR_S2; 28 | 29 | end 30 | 31 | % Get default parameters 32 | mcmc_params = fit_meta_d_params; 33 | % Change defaults to make response-conditional 34 | mcmc_params.response_conditional = 1; 35 | 36 | % Fit group data all at once 37 | fit = fit_meta_d_mcmc_group(nR_S1, nR_S2, mcmc_params); 38 | 39 | % Plot output 40 | plotSamples(exp(fit.mcmc.samples.mu_logMratio_rS1)) 41 | plotSamples(exp(fit.mcmc.samples.mu_logMratio_rS2)) -------------------------------------------------------------------------------- /Matlab/exampleFit_group_regression.m: -------------------------------------------------------------------------------- 1 | % Example script for estimating a regression model 2 | % 3 | % Steve Fleming 2018 4 | 5 | clear all 6 | 7 | Ntrials = 100; 8 | Nsub = 50; 9 | c = 0; 10 | c1 = [-1.5 -1 -0.5]; 11 | c2 = [0.5 1 1.5]; 12 | 13 | group_d = 2; 14 | group_baseline_mratio = 0.8; 15 | sigma = 0.5; 16 | sigma_beta = 0.2; 17 | gen_beta = 0.5; 18 | cov = rand(Nsub,1)'; 19 | 20 | mcmc_params = fit_meta_d_params; 21 | mcmc_params.estimate_dprime = 0; 22 | 23 | for i = 1:Nsub 24 | 25 | % Generate dprime 26 | d(i) = normrnd(group_d, sigma); 27 | beta(i) = normrnd(gen_beta, sigma_beta); 28 | metad(i) = (group_baseline_mratio + beta(i).*cov(i)).*d(i); % note this is assuming no unmodelled variance in beta; previous line would do this 29 | 30 | % Generate data 31 | sim = metad_sim(d(i), 
metad(i), c, c1, c2, Ntrials); 32 | 33 | nR_S1{i} = sim.nR_S1; 34 | nR_S2{i} = sim.nR_S2; 35 | 36 | end 37 | 38 | %% Regression fit 39 | fit = fit_meta_d_mcmc_regression(nR_S1, nR_S2, cov, mcmc_params); 40 | 41 | % % Call plotSamples to plot posterior of group Mratio 42 | plotSamples(exp(fit.mcmc.samples.mu_logMratio)) 43 | hdi = calc_HDI(exp(fit.mcmc.samples.mu_logMratio(:))); 44 | fprintf(['\n HDI on meta-d/d: ', num2str(hdi) '\n\n']) 45 | 46 | plotSamples(fit.mcmc.samples.mu_beta1) 47 | hdi = calc_HDI(fit.mcmc.samples.mu_beta1(:)); 48 | fprintf(['\n HDI on beta1: ', num2str(hdi) '\n\n']) 49 | -------------------------------------------------------------------------------- /Matlab/exampleFit_rc.m: -------------------------------------------------------------------------------- 1 | % Comparison of vanilla and response-conditional meta-d' model fit to 2 | % simulated data with response-conditional noise 3 | 4 | clear all 5 | 6 | Ntrials = 10000; 7 | c = 0; 8 | c1 = [-1.5 -1 -0.5]; 9 | c2 = [0.5 1 1.5]; 10 | 11 | d = 2; 12 | noise = [1/4 3/4]; 13 | 14 | % Generate data (passing in a 2-vector of noise terms generates 15 | % differential RC meta-d') 16 | sim = type2_SDT_sim(d, noise, c, c1, c2, Ntrials); 17 | 18 | % Fit the data using vanilla model 19 | mcmc_params = fit_meta_d_params; 20 | vanilla_fit = fit_meta_d_mcmc(sim.nR_S1, sim.nR_S2, mcmc_params); 21 | 22 | % Fit the data using response-conditional model 23 | mcmc_params = fit_meta_d_params; 24 | mcmc_params.response_conditional = 1; 25 | rc_fit = fit_meta_d_mcmc(sim.nR_S1, sim.nR_S2, mcmc_params); 26 | 27 | % Visualise fits of both models 28 | h1 = figure; 29 | set(gcf, 'Position', [500 500 1000 500]); 30 | 31 | subplot(1,2,1); 32 | plot(vanilla_fit.obs_FAR2_rS1, vanilla_fit.obs_HR2_rS1, 'ko-','linewidth',1.5,'markersize',12); 33 | hold on 34 | plot(vanilla_fit.est_FAR2_rS1, vanilla_fit.est_HR2_rS1, '+-','color',[0.5 0.5 0.5], 'linewidth',1.5,'markersize',10); 35 | plot(rc_fit.est_FAR2_rS1, rc_fit.est_HR2_rS1, '+--','color',[0.5 0.5 0.5], 'linewidth',1.5,'markersize',10); 36 | set(gca, 'XLim', [0 1], 'YLim', [0 1], 'FontSize', 16); 37 | ylabel('HR2, "S1"'); 38 | xlabel('FAR2, "S1"'); 39 | line([0 1],[0 1],'linestyle','--','color','k'); 40 | axis square 41 | box off 42 | 43 | subplot(1,2,2); 44 | plot(vanilla_fit.obs_FAR2_rS2, vanilla_fit.obs_HR2_rS2, 'ko-','linewidth',1.5,'markersize',12); 45 | hold on 46 | plot(vanilla_fit.est_FAR2_rS2, vanilla_fit.est_HR2_rS2, '+-','color',[0.5 0.5 0.5], 'linewidth',1.5,'markersize',10); 47 | plot(rc_fit.est_FAR2_rS2, rc_fit.est_HR2_rS2, '+--','color',[0.5 0.5 0.5], 'linewidth',1.5,'markersize',10); 48 | set(gca, 'XLim', [0 1], 'YLim', [0 1], 'FontSize', 16); 49 | ylabel('HR2, "S2"'); 50 | xlabel('FAR2, "S2"'); 51 | line([0 1],[0 1],'linestyle','--','color','k'); 52 | axis square 53 | box off 54 | legend('Data','Vanilla fit','RC fit','Location','SouthEast'); -------------------------------------------------------------------------------- /Matlab/exampleFit_twoGroups.m: -------------------------------------------------------------------------------- 1 | % Demonstration of comparison between two independent groups 2 | % 3 | % SF 2014 4 | 5 | clear all 6 | close all 7 | 8 | Ntrials = 500; 9 | Nsub = 30; 10 | 11 | c = 0; 12 | c1 = [-1.5 -1 -0.5]; 13 | c2 = [0.5 1 1.5]; 14 | 15 | group_d = 2; % same dprime across groups 16 | sigma = 0.5; 17 | 18 | group_mratio(1) = 1; 19 | group_mratio(2) = 0.6; % group 2 has worse metacognition than group 1 20 | 21 | j=1; 22 | for group = 1:2 23 | for i = 1:Nsub 24 | 
25 | % Generate dprime 26 | d(j) = normrnd(group_d, sigma); 27 | 28 | % Generate data 29 | metad = group_mratio(group).*d; 30 | sim = metad_sim(d(j), metad, c, c1, c2, Ntrials); 31 | 32 | DATA(group).nR_S1{i} = sim.nR_S1; 33 | DATA(group).nR_S2{i} = sim.nR_S2; 34 | 35 | j = j+1; 36 | end 37 | 38 | end 39 | 40 | % Fit group 1 41 | fit1 = fit_meta_d_mcmc_group(DATA(1).nR_S1, DATA(1).nR_S2); 42 | 43 | % Fit group 2 44 | fit2 = fit_meta_d_mcmc_group(DATA(2).nR_S1, DATA(2).nR_S2); 45 | 46 | % Compute HDI of difference 47 | sampleDiff = fit1.mcmc.samples.mu_logMratio - fit2.mcmc.samples.mu_logMratio; 48 | hdi = calc_HDI(sampleDiff(:)); 49 | fprintf(['\n HDI on difference in log(meta-d''/d''): ', num2str(hdi) '\n\n']) 50 | 51 | % Plot group Mratio and the difference 52 | plotSamples(exp(fit1.mcmc.samples.mu_logMratio)) 53 | plotSamples(exp(fit2.mcmc.samples.mu_logMratio)) 54 | plotSamples(sampleDiff) 55 | -------------------------------------------------------------------------------- /Matlab/exampleFit_twoTasks.m: -------------------------------------------------------------------------------- 1 | % Demonstration of estimating covariance between paired measures of meta-d' 2 | % (e.g. two tasks from the same individuals) 3 | % 4 | % SF 2019 5 | 6 | clear all 7 | 8 | Ntrials = 400; 9 | Nsub = 100; 10 | c = 0; 11 | c1 = [-1.5 -1 -0.5]; 12 | c2 = [0.5 1 1.5]; 13 | 14 | group_d = 2; 15 | group_mratio = 0.8; 16 | type1_sigma = 0.2; 17 | rho = 0.6; 18 | type2_sigma = 0.5; 19 | 20 | for i = 1:Nsub 21 | 22 | % Generate Mratios for this subject 23 | bigSigma = [type2_sigma^2 rho.*type2_sigma^2; rho.*type2_sigma^2 type2_sigma^2]; 24 | mratios(i,:) = mvnrnd([group_mratio group_mratio], bigSigma); 25 | 26 | %% Task 1 27 | % Generate dprime 28 | d = normrnd(group_d, type1_sigma); 29 | metad = mratios(i,1).*d; 30 | 31 | % Generate data 32 | sim = metad_sim(d, metad, c, c1, c2, Ntrials); 33 | 34 | nR_S1(1).counts{i} = sim.nR_S1; 35 | nR_S2(1).counts{i} = sim.nR_S2; 36 | 37 | %% Task 2 38 | d = normrnd(group_d, type1_sigma); 39 | metad = mratios(i,2).*d; 40 | 41 | % Generate data 42 | sim = metad_sim(d, metad, c, c1, c2, Ntrials); 43 | 44 | nR_S1(2).counts{i} = sim.nR_S1; 45 | nR_S2(2).counts{i} = sim.nR_S2; 46 | 47 | end 48 | 49 | % Fit group data all at once 50 | mcmc_params = fit_meta_d_params; 51 | fit = fit_meta_d_mcmc_groupCorr(nR_S1, nR_S2, mcmc_params); 52 | plotSamples(fit.mcmc.samples.mu_logMratio(:,:,1)) 53 | plotSamples(fit.mcmc.samples.mu_logMratio(:,:,2)) 54 | 55 | % Compute HDI of difference between tasks 56 | sampleDiff = fit.mcmc.samples.mu_logMratio(:,:,1) - fit.mcmc.samples.mu_logMratio(:,:,2); 57 | hdi = calc_HDI(sampleDiff(:)); 58 | fprintf(['\n HDI on difference in log(meta-d''/d''): ', num2str(hdi) '\n\n']) 59 | 60 | % Plot difference in meta-d/d ratio between two tasks 61 | plotSamples(sampleDiff) 62 | 63 | % Plot estimate of correlation 64 | h1 = figure; 65 | set(gcf, 'Position', [200 200 400 300]) 66 | h= histogram(fit.mcmc.samples.rho(:), 'Normalization', 'probability'); 67 | xlabel('\rho'); 68 | ylabel('Posterior density'); 69 | line([rho rho],[0 max(h.Values)+0.015], 'LineWidth', 2, 'Color', 'k', 'LineStyle', '--') 70 | ci = calc_CI(fit.mcmc.samples.rho(:)); 71 | line([ci(1) ci(2)],[0.002 0.002], 'LineWidth', 3, 'Color', [1 1 1]) 72 | box off 73 | set(gca, 'FontSize', 14, 'XLim', [-1 1]) -------------------------------------------------------------------------------- /Matlab/fit_meta_d_mcmc.m: -------------------------------------------------------------------------------- 1 | 
function fit = fit_meta_d_mcmc(nR_S1, nR_S2, mcmc_params, fncdf, fninv, name) 2 | % fit = fit_meta_d_mcmc(nR_S1, nR_S2, mcmc_params, s, fncdf, fninv) 3 | % 4 | % Given data from an experiment where an observer discriminates between two 5 | % stimulus alternatives on every trial and provides confidence ratings, 6 | % fits meta-d' using MCMC implemented in 7 | % JAGS. Requires JAGS to be installed 8 | % (see http://psiexp.ss.uci.edu/research/programs_data/jags/) 9 | % 10 | % For more information on the type 1 d' model please see: 11 | % 12 | % Lee (2008) BayesSDT: Software for Bayesian inference with signal 13 | % detection theory. Behavior Research Methods 40 (2), 450-456 14 | % 15 | % For more information on the meta-d' model please see: 16 | % 17 | % Maniscalco B, Lau H (2012) A signal detection theoretic approach for 18 | % estimating metacognitive sensitivity from confidence ratings. 19 | % Consciousness and Cognition 20 | % 21 | % Also allows fitting of response-conditional meta-d' via setting in mcmc_params 22 | % (see below). This model fits meta-d' SEPARATELY for S1 and S2 responses. 23 | % For more details on this model variant please see: 24 | 25 | % Maniscalco & Lau (2014) Signal detection theory analysis of Type 1 and 26 | % Type 2 data: meta-d', response-specific meta-d' and the unequal variance 27 | % SDT model. In SM Fleming & CD Frith (eds) The Cognitive Neuroscience of 28 | % Metacognition. Springer. 29 | % 30 | % INPUTS 31 | % 32 | % * nR_S1, nR_S2 33 | % these are vectors containing the total number of responses in 34 | % each response category, conditional on presentation of S1 and S2. S1 35 | % responses are always listed first, followed by S2 responses. 36 | % 37 | % e.g. if nR_S1 = [100 50 20 10 5 1], then when stimulus S1 was 38 | % presented, the subject had the following response counts: 39 | % responded S1, rating=3 : 100 times 40 | % responded S1, rating=2 : 50 times 41 | % responded S1, rating=1 : 20 times 42 | % responded S2, rating=1 : 10 times 43 | % responded S2, rating=2 : 5 times 44 | % responded S2, rating=3 : 1 time 45 | % 46 | % and if nR_S2 = [2 6 9 18 40 110], then when stimulus S2 was 47 | % presented, the subject had the following response counts: 48 | % responded S1, rating=3 : 2 times 49 | % responded S1, rating=2 : 6 times 50 | % responded S1, rating=1 : 9 times 51 | % responded S2, rating=1 : 18 times 52 | % responded S2, rating=2 : 40 times 53 | % responded S2, rating=3 : 110 times 54 | % 55 | % * fncdf 56 | % a function handle for the CDF of the type 1 distribution. 57 | % if not specified, fncdf defaults to @normcdf (i.e. CDF for normal 58 | % distribution) 59 | % 60 | % * fninv 61 | % a function handle for the inverse CDF of the type 1 distribution. 62 | % if not specified, fninv defaults to @norminv 63 | % 64 | % * mcmc_params 65 | % a structure specifying parameters for running the MCMC chains in JAGS. 66 | % Type "help matjags" for more details. If empty defaults to the following 67 | % parameters: 68 | % 69 | % mcmc_params.response_conditional = 0; % Do we want to fit response-conditional meta-d'? 70 | % mcmc_params.nchains = 3; % How Many Chains? 71 | % mcmc_params.nburnin = 1000; % How Many Burn-in Samples? 72 | % mcmc_params.nsamples = 10000; %How Many Recorded Samples? 73 | % mcmc_params.nthin = 1; % How Often is a Sample Recorded? 
74 | % mcmc_params.doparallel = 0; % Parallel Option 75 | % mcmc_params.dic = 1; % Save DIC 76 | % 77 | % * name 78 | % optionally specify a name for the temporary samples folder - recommended 79 | % when running multiple fits at once to avoid interference. 80 | % if not specified, defaults to tmpjags folder 81 | % 82 | % OUTPUT 83 | % 84 | % Output is packaged in the struct "fit". All parameter values are taken 85 | % from the means of the posterior MCMC distributions, with full 86 | % posteriors stored in fit.mcmc 87 | % 88 | % In the following, let S1 and S2 represent the distributions of evidence 89 | % generated by stimulus classes S1 and S2. 90 | % Then the fields of "fit" are as follows: 91 | % 92 | % fit.d1 = type 1 d' 93 | % fit.c1 = type 1 criterion 94 | % fit.meta_d = meta-d' 95 | % fit.M_diff = meta_d' - d' 96 | % fit.M_ratio = meta_d'/d' 97 | % fit.t2ca_rS1 = type 2 criteria for response=S1 98 | % fit.t2ca_rS2 = type 2 criteria for response=S2 99 | % 100 | % fit.mcmc.dic = deviance information criterion (DIC) for model 101 | % fit.mcmc.Rhat = Gelman & Rubin's Rhat statistic for each parameter 102 | % 103 | % fit.obs_HR2_rS1 = actual type 2 hit rates for S1 responses 104 | % fit.est_HR2_rS1 = estimated type 2 hit rates for S1 responses 105 | % fit.obs_FAR2_rS1 = actual type 2 false alarm rates for S1 responses 106 | % fit.est_FAR2_rS1 = estimated type 2 false alarm rates for S1 responses 107 | % 108 | % fit.obs_HR2_rS2 = actual type 2 hit rates for S2 responses 109 | % fit.est_HR2_rS2 = estimated type 2 hit rates for S2 responses 110 | % fit.obs_FAR2_rS2 = actual type 2 false alarm rates for S2 responses 111 | % fit.est_FAR2_rS2 = estimated type 2 false alarm rates for S2 responses 112 | % 113 | % If there are N ratings, then there will be N-1 type 2 hit rates and false 114 | % alarm rates. If meta-d' is fit using the response-conditional model, 115 | % these parameters will be replicated separately for S1 and S2 responses. 
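% EXAMPLE USAGE
% A minimal single-subject sketch, assuming JAGS and matjags are available on
% the path and the default MCMC settings listed above are acceptable (see
% exampleFit.m for a fuller, simulation-based demonstration):
%
%   nR_S1 = [100 50 20 10 5 1];    % response counts from the INPUTS example above
%   nR_S2 = [2 6 9 18 40 110];
%   fit = fit_meta_d_mcmc(nR_S1, nR_S2);   % omit mcmc_params to use the defaults
%   fit.meta_d      % posterior mean meta-d'
%   fit.M_ratio     % posterior mean meta-d'/d'
%   fit.mcmc.Rhat   % convergence diagnostics for each monitored parameter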
116 | % 117 | % 6/5/2014 Steve Fleming www.stevefleming.org 118 | % Parts of this code are adapted from Brian Maniscalco's meta-d' toolbox 119 | % which can be found at http://www.columbia.edu/~bsm2105/type2sdt/ 120 | % 121 | % Updated 12/10/15 to allow estimation of type 1 d' within same model 122 | 123 | cwd = pwd; 124 | 125 | fprintf('\n') 126 | disp('----------------------------------------') 127 | disp('Hierarchical meta-d'' model') 128 | disp('https://github.com/smfleming/HMeta-d') 129 | disp('----------------------------------------') 130 | fprintf('\n') 131 | 132 | findpath = which('Bayes_metad_group.txt'); 133 | if isempty(findpath) 134 | error('Please add HMetaD directory to the path') 135 | else 136 | hmmPath = fileparts(findpath); 137 | cd(hmmPath) 138 | end 139 | 140 | if ~mod(length(nR_S1),2)==0, error('input arrays must have an even number of elements'); end 141 | if length(nR_S1)~=length(nR_S2), error('input arrays must have the same number of elements'); end 142 | 143 | if ~exist('fncdf','var') || isempty(fncdf) 144 | fncdf = @normcdf; 145 | end 146 | 147 | if ~exist('fninv','var') || isempty(fninv) 148 | fninv = @norminv; 149 | end 150 | 151 | if ~exist('name','var') || isempty(name) 152 | tmpfolder = 'tmpjags'; 153 | else 154 | tmpfolder = name; 155 | mkdir(tmpfolder); 156 | end 157 | 158 | % Transform data and type 1 d' calculations 159 | counts = [nR_S1 nR_S2]; 160 | nTot = sum(counts); 161 | nRatings = length(nR_S1)/2; 162 | 163 | % Adjust to ensure non-zero counts for type 1 d' point estimate (not 164 | % necessary if estimating d' inside JAGS) 165 | adj_f = 1/length(nR_S1); 166 | nR_S1_adj = nR_S1 + adj_f; 167 | nR_S2_adj = nR_S2 + adj_f; 168 | ratingHR = []; 169 | ratingFAR = []; 170 | for c = 2:nRatings*2 171 | ratingHR(end+1) = sum(nR_S2_adj(c:end)) / sum(nR_S2_adj); 172 | ratingFAR(end+1) = sum(nR_S1_adj(c:end)) / sum(nR_S1_adj); 173 | end 174 | 175 | t1_index = nRatings; 176 | 177 | d1 = fninv(ratingHR(t1_index)) - fninv(ratingFAR(t1_index)); 178 | c1 = -0.5 .* (fninv(ratingHR(t1_index)) + fninv(ratingFAR(t1_index))); 179 | 180 | %% Sampling 181 | if ~exist('mcmc_params','var') || isempty(mcmc_params) 182 | % MCMC Parameters 183 | mcmc_params.response_conditional = 0; 184 | mcmc_params.estimate_dprime = 0; % also estimate dprime in same model? 185 | mcmc_params.nchains = 3; % How Many Chains? 186 | mcmc_params.nburnin = 1000; % How Many Burn-in Samples? 187 | mcmc_params.nsamples = 10000; %How Many Recorded Samples? 188 | mcmc_params.nthin = 1; % How Often is a Sample Recorded? 
189 | mcmc_params.doparallel = 0; % Parallel Option 190 | mcmc_params.dic = 1; 191 | end 192 | % Ensure init0 is correct size 193 | if ~isfield(mcmc_params, 'init0') 194 | for i=1:mcmc_params.nchains 195 | mcmc_params.init0(i) = struct; 196 | end 197 | end 198 | % Assign variables to the observed nodes 199 | switch mcmc_params.estimate_dprime 200 | case 1 201 | datastruct = struct('counts', counts, 'nratings', nRatings, 'nTot', nTot, 'Tol', 1e-05); 202 | case 0 203 | datastruct = struct('d1', d1, 'c1', c1, 'counts', counts, 'nratings', nRatings, 'nTot', nTot, 'Tol', 1e-05); 204 | end 205 | 206 | % Select model file and parameters to monitor 207 | 208 | switch mcmc_params.response_conditional 209 | case 0 210 | model_file = 'Bayes_metad.txt'; 211 | monitorparams = {'meta_d','d1','c1','cS1','cS2'}; 212 | 213 | case 1 214 | model_file = 'Bayes_metad_rc.txt'; 215 | monitorparams = {'meta_d_rS1','meta_d_rS2','d1','c1','cS1','cS2'}; 216 | end 217 | 218 | % Use JAGS to Sample 219 | try 220 | tic 221 | fprintf( 'Running JAGS ...\n' ); 222 | [samples, stats] = matjags( ... 223 | datastruct, ... 224 | fullfile(pwd, model_file), ... 225 | mcmc_params.init0, ... 226 | 'doparallel' , mcmc_params.doparallel, ... 227 | 'nchains', mcmc_params.nchains,... 228 | 'nburnin', mcmc_params.nburnin,... 229 | 'nsamples', mcmc_params.nsamples, ... 230 | 'thin', mcmc_params.nthin, ... 231 | 'dic', mcmc_params.dic,... 232 | 'monitorparams', monitorparams, ... 233 | 'savejagsoutput' , 0 , ... 234 | 'verbosity' , 1 , ... 235 | 'cleanup' , 1 , ... 236 | 'workingdir' , tmpfolder ); 237 | toc 238 | catch ME 239 | % Remove temporary directory if specified 240 | if exist('name','var') 241 | if exist(['../', tmpfolder],'dir') 242 | rmdir(['../', tmpfolder], 's'); 243 | end 244 | end 245 | % Print the error message 246 | rethrow(ME); 247 | end 248 | 249 | % Remove temporary directory if specified 250 | if exist('name','var') 251 | if exist(tmpfolder,'dir') 252 | rmdir(tmpfolder, 's'); 253 | end 254 | end 255 | 256 | %% Data is fit, now package output 257 | 258 | I_nR_rS2 = nR_S1(nRatings+1:end); 259 | I_nR_rS1 = nR_S2(nRatings:-1:1); 260 | 261 | C_nR_rS2 = nR_S2(nRatings+1:end); 262 | C_nR_rS1 = nR_S1(nRatings:-1:1); 263 | 264 | for i = 2:nRatings 265 | obs_FAR2_rS2(i-1) = sum( I_nR_rS2(i:end) ) / sum(I_nR_rS2); 266 | obs_HR2_rS2(i-1) = sum( C_nR_rS2(i:end) ) / sum(C_nR_rS2); 267 | 268 | obs_FAR2_rS1(i-1) = sum( I_nR_rS1(i:end) ) / sum(I_nR_rS1); 269 | obs_HR2_rS1(i-1) = sum( C_nR_rS1(i:end) ) / sum(C_nR_rS1); 270 | end 271 | 272 | 273 | fit.d1 = stats.mean.d1; 274 | fit.c1 = stats.mean.c1; 275 | fit.t2ca_rS1 = stats.mean.cS1; 276 | fit.t2ca_rS2 = stats.mean.cS2; 277 | 278 | if mcmc_params.dic 279 | fit.mcmc.dic = stats.dic; 280 | end 281 | fit.mcmc.Rhat = stats.Rhat; 282 | fit.mcmc.samples = samples; 283 | fit.mcmc.params = mcmc_params; 284 | 285 | % Calculate fits based on either vanilla or response-conditional model 286 | s = 1; % assume equal variance 287 | switch mcmc_params.response_conditional 288 | 289 | case 0 290 | 291 | d1_samples = samples.d1(:); 292 | d1_samples(d1_samples == 0) = 0.0001; % avoid divide-by-zero issue for samples of exactly zero 293 | fit.meta_d = stats.mean.meta_d; 294 | fit.M_ratio = mean(samples.meta_d(:)./d1_samples); 295 | fit.M_diff = mean(samples.meta_d(:) - d1_samples); 296 | 297 | %% find estimated t2FAR and t2HR 298 | meta_d = stats.mean.meta_d; 299 | S1mu = -meta_d/2; S1sd = 1; 300 | S2mu = meta_d/2; S2sd = S1sd/s; 301 | 302 | C_area_rS2 = 1-fncdf(fit.c1,S2mu,S2sd); 303 | I_area_rS2 = 
1-fncdf(fit.c1,S1mu,S1sd); 304 | 305 | C_area_rS1 = fncdf(fit.c1,S1mu,S1sd); 306 | I_area_rS1 = fncdf(fit.c1,S2mu,S2sd); 307 | 308 | t2c1 = [fit.t2ca_rS1 fit.t2ca_rS2]; 309 | 310 | for i=1:nRatings-1 311 | 312 | t2c1_lower = t2c1(nRatings-i); 313 | t2c1_upper = t2c1(nRatings-1+i); 314 | 315 | I_FAR_area_rS2 = 1-fncdf(t2c1_upper,S1mu,S1sd); 316 | C_HR_area_rS2 = 1-fncdf(t2c1_upper,S2mu,S2sd); 317 | 318 | I_FAR_area_rS1 = fncdf(t2c1_lower,S2mu,S2sd); 319 | C_HR_area_rS1 = fncdf(t2c1_lower,S1mu,S1sd); 320 | 321 | 322 | est_FAR2_rS2(i) = I_FAR_area_rS2 / I_area_rS2; 323 | est_HR2_rS2(i) = C_HR_area_rS2 / C_area_rS2; 324 | 325 | est_FAR2_rS1(i) = I_FAR_area_rS1 / I_area_rS1; 326 | est_HR2_rS1(i) = C_HR_area_rS1 / C_area_rS1; 327 | 328 | end 329 | 330 | case 1 331 | 332 | d1_samples = samples.d1(:); 333 | d1_samples(d1_samples == 0) = 0.0001; % avoid divide-by-zero issue for samples of exactly zero 334 | fit.meta_d_rS1 = stats.mean.meta_d_rS1; 335 | fit.meta_d_rS2 = stats.mean.meta_d_rS2; 336 | fit.M_ratio_rS1 = mean(samples.meta_d_rS1(:)./d1_samples); 337 | fit.M_ratio_rS2 = mean(samples.meta_d_rS2(:)./d1_samples); 338 | fit.M_diff_rS1 = mean(samples.meta_d_rS1(:) - d1_samples); 339 | fit.M_diff_rS2 = mean(samples.meta_d_rS2(:) - d1_samples); 340 | 341 | %% find estimated t2FAR and t2HR 342 | S1mu_rS1 = -stats.mean.meta_d_rS1/2; S1sd = 1; 343 | S2mu_rS1 = stats.mean.meta_d_rS1/2; S2sd = S1sd/s; 344 | S1mu_rS2 = -stats.mean.meta_d_rS2/2; 345 | S2mu_rS2 = stats.mean.meta_d_rS2/2; 346 | 347 | C_area_rS2 = 1-fncdf(fit.c1,S2mu_rS2,S2sd); 348 | I_area_rS2 = 1-fncdf(fit.c1,S1mu_rS2,S1sd); 349 | 350 | C_area_rS1 = fncdf(fit.c1,S1mu_rS1,S1sd); 351 | I_area_rS1 = fncdf(fit.c1,S2mu_rS1,S2sd); 352 | 353 | t2c1 = [fit.t2ca_rS1 fit.t2ca_rS2]; 354 | 355 | for i=1:nRatings-1 356 | 357 | t2c1_lower = t2c1(nRatings-i); 358 | t2c1_upper = t2c1(nRatings-1+i); 359 | 360 | I_FAR_area_rS2 = 1-fncdf(t2c1_upper,S1mu_rS2,S1sd); 361 | C_HR_area_rS2 = 1-fncdf(t2c1_upper,S2mu_rS2,S2sd); 362 | 363 | I_FAR_area_rS1 = fncdf(t2c1_lower,S2mu_rS1,S2sd); 364 | C_HR_area_rS1 = fncdf(t2c1_lower,S1mu_rS1,S1sd); 365 | 366 | 367 | est_FAR2_rS2(i) = I_FAR_area_rS2 / I_area_rS2; 368 | est_HR2_rS2(i) = C_HR_area_rS2 / C_area_rS2; 369 | 370 | est_FAR2_rS1(i) = I_FAR_area_rS1 / I_area_rS1; 371 | est_HR2_rS1(i) = C_HR_area_rS1 / C_area_rS1; 372 | 373 | end 374 | 375 | end 376 | fit.est_HR2_rS1 = est_HR2_rS1; 377 | fit.obs_HR2_rS1 = obs_HR2_rS1; 378 | 379 | fit.est_FAR2_rS1 = est_FAR2_rS1; 380 | fit.obs_FAR2_rS1 = obs_FAR2_rS1; 381 | 382 | fit.est_HR2_rS2 = est_HR2_rS2; 383 | fit.obs_HR2_rS2 = obs_HR2_rS2; 384 | 385 | fit.est_FAR2_rS2 = est_FAR2_rS2; 386 | fit.obs_FAR2_rS2 = obs_FAR2_rS2; 387 | 388 | %cd(cwd); 389 | 390 | -------------------------------------------------------------------------------- /Matlab/fit_meta_d_mcmc_groupCorr.m: -------------------------------------------------------------------------------- 1 | function fit = fit_meta_d_mcmc_groupCorr(nR_S1, nR_S2, mcmc_params, fncdf, fninv, name) 2 | % fit = fit_meta_d_mcmc_groupCorr(nR_S1_1, nR_S2_1, nR_S1_2, nR_S2_2, mcmc_params, fncdf, fninv) 3 | % 4 | % Estimates correlation coefficient between metacognitive effiency 5 | % estimates between two domains. 
See fit_meta_d_mcmc_group for full details 6 | % of the basic model 7 | % 8 | % To enter data from the two tasks use the following format: 9 | % 10 | % nR_S1(1).counts, nR_S2(1).counts, nR_S1(2).counts, nR_S2(2).counts 11 | % 12 | % 5/8/2016 Steve Fleming www.metacoglab.org 13 | % Parts of this code are adapted from Brian Maniscalco's meta-d' toolbox 14 | % which can be found at http://www.columbia.edu/~bsm2105/type2sdt/ 15 | 16 | fprintf('\n') 17 | disp('----------------------------------------') 18 | disp('Hierarchical meta-d'' model') 19 | disp('https://github.com/smfleming/HMeta-d') 20 | disp('----------------------------------------') 21 | fprintf('\n') 22 | 23 | cwd = pwd; 24 | findpath = which('Bayes_metad_group.txt'); 25 | if isempty(findpath) 26 | error('Please add HMetaD directory to the path') 27 | else 28 | hmmPath = fileparts(findpath); 29 | cd(hmmPath) 30 | end 31 | 32 | if ~exist('fncdf','var') || isempty(fncdf) 33 | fncdf = @normcdf; 34 | end 35 | 36 | if ~exist('fninv','var') || isempty(fninv) 37 | fninv = @norminv; 38 | end 39 | 40 | if ~exist('name','var') || isempty(name) 41 | tmpfolder = 'tmpjags'; 42 | else 43 | tmpfolder = name; 44 | mkdir(tmpfolder); 45 | end 46 | 47 | Nsubj = length(nR_S1(1).counts); 48 | nRatings = length(nR_S1(1).counts{1})/2; 49 | 50 | if length(nR_S1(1).counts) ~= length(nR_S1(2).counts) || length(nR_S2(1).counts) ~= length(nR_S2(2).counts) 51 | error('There are different numbers of subjects across the two conditions') 52 | end 53 | 54 | for n = 1:Nsubj 55 | for task = 1:2 56 | 57 | if length(nR_S1(task).counts{n}) ~= nRatings*2 || length(nR_S2(task).counts{n}) ~= nRatings*2 58 | error('Subjects do not have equal numbers of response categories'); 59 | end 60 | % Get type 1 SDT parameter values 61 | counts{task}(n,:) = [nR_S1(task).counts{n} nR_S2(task).counts{n}]; 62 | nTot(n,task) = sum(counts{task}(n,:)); 63 | % Adjust to ensure non-zero counts for type 1 d' point estimate (not 64 | % necessary if estimating d' inside JAGS) 65 | adj_f = 1/length(nR_S1(task).counts{n}); 66 | nR_S1_adj = nR_S1(task).counts{n} + adj_f; 67 | nR_S2_adj = nR_S2(task).counts{n} + adj_f; 68 | 69 | ratingHR = []; 70 | ratingFAR = []; 71 | for c = 2:nRatings*2 72 | ratingHR(end+1) = sum(nR_S2_adj(c:end)) / sum(nR_S2_adj); 73 | ratingFAR(end+1) = sum(nR_S1_adj(c:end)) / sum(nR_S1_adj); 74 | end 75 | 76 | t1_index = nRatings; 77 | 78 | d1(n,task) = fninv(ratingHR(t1_index)) - fninv(ratingFAR(t1_index)); 79 | c1(n,task) = -0.5 .* (fninv(ratingHR(t1_index)) + fninv(ratingFAR(t1_index))); 80 | end 81 | end 82 | 83 | %% Sampling 84 | if ~exist('mcmc_params','var') || isempty(mcmc_params) 85 | % MCMC Parameters 86 | mcmc_params.response_conditional = 0; % response-conditional meta-d? 87 | mcmc_params.estimate_dprime = 0; % also estimate dprime in same model? 88 | mcmc_params.nchains = 3; % How Many Chains? 89 | mcmc_params.nburnin = 1000; % How Many Burn-in Samples? 90 | mcmc_params.nsamples = 10000; %How Many Recorded Samples? 91 | mcmc_params.nthin = 1; % How Often is a Sample Recorded? 
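    % These defaults are the same as those returned by fit_meta_d_params.m.
    % To change them, pass a custom mcmc_params struct as the third input
    % rather than editing this block, e.g.:
    %   mcmc_params = fit_meta_d_params; mcmc_params.nsamples = 20000;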
92 | mcmc_params.doparallel = 0; % Parallel Option 93 | mcmc_params.dic = 1; 94 | end 95 | % Ensure init0 is correct size 96 | if ~isfield(mcmc_params, 'init0') 97 | for i=1:mcmc_params.nchains 98 | mcmc_params.init0(i) = struct; 99 | end 100 | end 101 | % Assign variables to the observed nodes 102 | 103 | datastruct = struct('d1', d1, 'c1', c1, 'nsubj',Nsubj,'counts1', counts{1}, 'counts2', counts{2}, 'nratings', nRatings, 'Tol', 1e-05); 104 | 105 | % Select model file and parameters to monitor 106 | model_file = 'Bayes_metad_group_corr.txt'; 107 | monitorparams = {'d1', 'c1', 'mu_logMratio', 'sigma_logMratio', 'rho', 'Mratio'}; 108 | 109 | % Use JAGS to Sample 110 | try 111 | tic 112 | fprintf( 'Running JAGS ...\n' ); 113 | [samples, stats] = matjags( ... 114 | datastruct, ... 115 | fullfile(pwd, model_file), ... 116 | mcmc_params.init0, ... 117 | 'doparallel' , mcmc_params.doparallel, ... 118 | 'nchains', mcmc_params.nchains,... 119 | 'nburnin', mcmc_params.nburnin,... 120 | 'nsamples', mcmc_params.nsamples, ... 121 | 'thin', mcmc_params.nthin, ... 122 | 'dic', mcmc_params.dic,... 123 | 'monitorparams', monitorparams, ... 124 | 'savejagsoutput' , 0 , ... 125 | 'verbosity' , 1 , ... 126 | 'cleanup' , 1 , ... 127 | 'workingdir' , 'tmpjags' ); 128 | toc 129 | catch ME 130 | % Remove temporary directory if specified 131 | if exist('name','var') 132 | if exist(['../', tmpfolder],'dir') 133 | rmdir(['../', tmpfolder], 's'); 134 | end 135 | end 136 | % Print the error message 137 | rethrow(ME); 138 | end 139 | 140 | % Remove temporary directory if specified 141 | if exist('name','var') 142 | if exist(tmpfolder,'dir') 143 | rmdir(tmpfolder, 's'); 144 | end 145 | end 146 | 147 | fit.mu_logMratio = stats.mean.mu_logMratio; 148 | fit.sigma_logMratio = stats.mean.sigma_logMratio; 149 | fit.rho = stats.mean.rho; 150 | fit.Mratio = stats.mean.Mratio; 151 | fit.d1 = stats.mean.d1; 152 | fit.c1 = stats.mean.c1; 153 | 154 | fit.mcmc.dic = stats.dic; 155 | fit.mcmc.Rhat = stats.Rhat; 156 | fit.mcmc.samples = samples; 157 | fit.mcmc.params = mcmc_params; 158 | 159 | cd(cwd); -------------------------------------------------------------------------------- /Matlab/fit_meta_d_mcmc_regression.m: -------------------------------------------------------------------------------- 1 | function fit = fit_meta_d_mcmc_regression(nR_S1, nR_S2, cov, mcmc_params, fncdf, fninv, name) 2 | % HMeta-d for between-subjects regression on meta-d'/d' 3 | % cov is a n x s matrix of covariates, where s=number of subjects, n=number 4 | % of covariates 5 | % See fit_meta_d_mcmc_group for full details 6 | % 7 | % Steve Fleming 2017 8 | % 9 | % 16.09.2019 10 | % Ofaull added options to specify models for multiple covariates (up to 5) 11 | 12 | fprintf('\n') 13 | disp('----------------------------------------') 14 | disp('Hierarchical meta-d'' regression model') 15 | disp('https://github.com/smfleming/HMeta-d') 16 | disp('----------------------------------------') 17 | fprintf('\n') 18 | 19 | cwd = pwd; 20 | 21 | % Select model file and parameters to monitor 22 | if size(cov, 1) == 1 23 | model_file = 'Bayes_metad_group_regress_nodp.txt'; 24 | monitorparams = {'d1', 'c1', 'mu_logMratio', 'sigma_logMratio', 'mu_c2', 'sigma_c2', 'mu_beta1', 'Mratio', 'cS1', 'cS2'}; 25 | elseif size(cov, 1) == 2 26 | model_file = 'Bayes_metad_group_regress_nodp_2cov.txt'; 27 | monitorparams = {'d1', 'c1', 'mu_logMratio', 'sigma_logMratio', 'mu_c2', 'sigma_c2', 'mu_beta1', 'mu_beta2', 'Mratio', 'cS1', 'cS2'}; 28 | elseif size(cov, 1) == 3 29 | model_file = 
'Bayes_metad_group_regress_nodp_3cov.txt'; 30 | monitorparams = {'d1', 'c1', 'mu_logMratio', 'sigma_logMratio', 'mu_c2', 'sigma_c2', 'mu_beta1', 'mu_beta2', 'mu_beta3', 'Mratio', 'cS1', 'cS2'}; 31 | elseif size(cov, 1) == 4 32 | model_file = 'Bayes_metad_group_regress_nodp_4cov.txt'; 33 | monitorparams = {'d1', 'c1', 'mu_logMratio', 'sigma_logMratio', 'mu_c2', 'sigma_c2', 'mu_beta1', 'mu_beta2', 'mu_beta3', 'mu_beta4', 'Mratio', 'cS1', 'cS2'}; 34 | elseif size(cov, 1) == 5 35 | model_file = 'Bayes_metad_group_regress_nodp_5cov.txt'; 36 | monitorparams = {'d1', 'c1', 'mu_logMratio', 'sigma_logMratio', 'mu_c2', 'sigma_c2', 'mu_beta1', 'mu_beta2', 'mu_beta3', 'mu_beta4', 'mu_beta5', 'Mratio', 'cS1', 'cS2'}; 37 | else 38 | error('Too many covariates specified: Max = 5') 39 | end 40 | 41 | findpath = which(model_file); 42 | if isempty(findpath) 43 | error('Please add HMetaD directory to the path') 44 | else 45 | hmmPath = fileparts(findpath); 46 | cd(hmmPath) 47 | end 48 | 49 | if ~exist('fncdf','var') || isempty(fncdf) 50 | fncdf = @normcdf; 51 | end 52 | 53 | if ~exist('fninv','var') || isempty(fninv) 54 | fninv = @norminv; 55 | end 56 | 57 | if ~exist('name','var') || isempty(name) 58 | tmpfolder = 'tmpjags'; 59 | else 60 | tmpfolder = name; 61 | mkdir(tmpfolder); 62 | end 63 | 64 | Nsubj = length(nR_S1); 65 | nRatings = length(nR_S1{1})/2; 66 | 67 | for n = 1:Nsubj 68 | if length(nR_S1{n}) ~= nRatings*2 || length(nR_S2{n}) ~= nRatings*2 69 | error('Subjects do not have equal numbers of response categories'); 70 | end 71 | % Get type 1 SDT parameter values 72 | counts(n,:) = [nR_S1{n} nR_S2{n}]; 73 | nTot(n) = sum(counts(n,:)); 74 | % Adjust to ensure non-zero counts for type 1 d' point estimate (not 75 | % necessary if estimating d' inside JAGS) 76 | adj_f = 1/length(nR_S1{n}); 77 | nR_S1_adj = nR_S1{n} + adj_f; 78 | nR_S2_adj = nR_S2{n} + adj_f; 79 | 80 | ratingHR = []; 81 | ratingFAR = []; 82 | for c = 2:nRatings*2 83 | ratingHR(end+1) = sum(nR_S2_adj(c:end)) / sum(nR_S2_adj); 84 | ratingFAR(end+1) = sum(nR_S1_adj(c:end)) / sum(nR_S1_adj); 85 | end 86 | 87 | t1_index = nRatings; 88 | 89 | d1(n) = fninv(ratingHR(t1_index)) - fninv(ratingFAR(t1_index)); 90 | c1(n) = -0.5 .* (fninv(ratingHR(t1_index)) + fninv(ratingFAR(t1_index))); 91 | end 92 | 93 | %% Sampling 94 | if ~exist('mcmc_params','var') || isempty(mcmc_params) 95 | % MCMC Parameters 96 | mcmc_params.response_conditional = 0; % response-conditional meta-d? 97 | mcmc_params.estimate_dprime = 0; % also estimate dprime in same model? 98 | mcmc_params.nchains = 3; % How Many Chains? 99 | mcmc_params.nburnin = 1000; % How Many Burn-in Samples? 100 | mcmc_params.nsamples = 10000; %How Many Recorded Samples? 101 | mcmc_params.nthin = 1; % How Often is a Sample Recorded? 102 | mcmc_params.doparallel = 0; % Parallel Option 103 | mcmc_params.dic = 1; 104 | end 105 | % Ensure init0 is correct size 106 | if ~isfield(mcmc_params, 'init0') 107 | for i=1:mcmc_params.nchains 108 | mcmc_params.init0(i) = struct; 109 | end 110 | end 111 | 112 | datastruct = struct('d1', d1, 'c1', c1,'nsubj',Nsubj,'counts', counts,'cov', cov, 'nratings', nRatings, 'nTot', nTot, 'Tol', 1e-05); 113 | 114 | % Use JAGS to Sample 115 | try 116 | tic 117 | fprintf( 'Running JAGS ...\n' ); 118 | [samples, stats] = matjags( ... 119 | datastruct, ... 120 | fullfile(pwd, model_file), ... 121 | mcmc_params.init0, ... 122 | 'doparallel' , mcmc_params.doparallel, ... 123 | 'nchains', mcmc_params.nchains,... 124 | 'nburnin', mcmc_params.nburnin,... 
125 | 'nsamples', mcmc_params.nsamples, ... 126 | 'thin', mcmc_params.nthin, ... 127 | 'dic', mcmc_params.dic,... 128 | 'monitorparams', monitorparams, ... 129 | 'savejagsoutput' , 0 , ... 130 | 'verbosity' , 1 , ... 131 | 'cleanup' , 1 , ... 132 | 'workingdir' , tmpfolder ); 133 | toc 134 | catch ME 135 | % Remove temporary directory if specified 136 | if exist('name','var') 137 | if exist(['../', tmpfolder],'dir') 138 | rmdir(['../', tmpfolder], 's'); 139 | end 140 | end 141 | % Print the error message 142 | rethrow(ME); 143 | end 144 | 145 | % Remove temporary directory if specified 146 | if exist('name','var') 147 | if exist(tmpfolder,'dir') 148 | rmdir(tmpfolder, 's'); 149 | end 150 | end 151 | 152 | % Package group-level output 153 | if isrow(stats.mean.cS1) 154 | stats.mean.cS1 = stats.mean.cS1'; 155 | stats.mean.cS2 = stats.mean.cS2'; 156 | end 157 | fit.t2ca_rS1 = stats.mean.cS1; 158 | fit.t2ca_rS2 = stats.mean.cS2; 159 | fit.d1 = stats.mean.d1; 160 | fit.c1 = stats.mean.c1; 161 | 162 | fit.mu_logMratio = stats.mean.mu_logMratio; 163 | fit.sigma_logMratio = stats.mean.sigma_logMratio; 164 | fit.mu_beta1 = stats.mean.mu_beta1; 165 | if size(cov, 1) > 1 166 | fit.mu_beta2 = stats.mean.mu_beta2; 167 | end 168 | if size(cov, 1) > 2 169 | fit.mu_beta3 = stats.mean.mu_beta3; 170 | end 171 | if size(cov, 1) > 3 172 | fit.mu_beta4 = stats.mean.mu_beta4; 173 | end 174 | if size(cov, 1) > 4 175 | fit.mu_beta5 = stats.mean.mu_beta5; 176 | end 177 | 178 | fit.Mratio = stats.mean.Mratio; 179 | fit.meta_d = fit.Mratio.*stats.mean.d1; 180 | 181 | fit.mcmc.dic = stats.dic; 182 | fit.mcmc.Rhat = stats.Rhat; 183 | fit.mcmc.samples = samples; 184 | fit.mcmc.params = mcmc_params; 185 | 186 | for n = 1:Nsubj 187 | 188 | 189 | %% Data is fit, now package output 190 | I_nR_rS2 = nR_S1{n}(nRatings+1:end); 191 | I_nR_rS1 = nR_S2{n}(nRatings:-1:1); 192 | 193 | C_nR_rS2 = nR_S2{n}(nRatings+1:end); 194 | C_nR_rS1 = nR_S1{n}(nRatings:-1:1); 195 | 196 | for i = 2:nRatings 197 | obs_FAR2_rS2(i-1) = sum( I_nR_rS2(i:end) ) / sum(I_nR_rS2); 198 | obs_HR2_rS2(i-1) = sum( C_nR_rS2(i:end) ) / sum(C_nR_rS2); 199 | 200 | obs_FAR2_rS1(i-1) = sum( I_nR_rS1(i:end) ) / sum(I_nR_rS1); 201 | obs_HR2_rS1(i-1) = sum( C_nR_rS1(i:end) ) / sum(C_nR_rS1); 202 | end 203 | 204 | 205 | % Calculate fits based on either vanilla or response-conditional model 206 | s = 1; 207 | %% find estimated t2FAR and t2HR 208 | meta_d = fit.meta_d(n); 209 | S1mu = -meta_d/2; S1sd = 1; 210 | S2mu = meta_d/2; S2sd = S1sd/s; 211 | 212 | C_area_rS2 = 1-fncdf(fit.c1(n),S2mu,S2sd); 213 | I_area_rS2 = 1-fncdf(fit.c1(n),S1mu,S1sd); 214 | 215 | C_area_rS1 = fncdf(fit.c1(n),S1mu,S1sd); 216 | I_area_rS1 = fncdf(fit.c1(n),S2mu,S2sd); 217 | 218 | t2c1 = [fit.t2ca_rS1(n,:) fit.t2ca_rS2(n,:)]; 219 | 220 | for i=1:nRatings-1 221 | 222 | t2c1_lower = t2c1(nRatings-i); 223 | t2c1_upper = t2c1(nRatings-1+i); 224 | 225 | I_FAR_area_rS2 = 1-fncdf(t2c1_upper,S1mu,S1sd); 226 | C_HR_area_rS2 = 1-fncdf(t2c1_upper,S2mu,S2sd); 227 | 228 | I_FAR_area_rS1 = fncdf(t2c1_lower,S2mu,S2sd); 229 | C_HR_area_rS1 = fncdf(t2c1_lower,S1mu,S1sd); 230 | 231 | 232 | est_FAR2_rS2(i) = I_FAR_area_rS2 / I_area_rS2; 233 | est_HR2_rS2(i) = C_HR_area_rS2 / C_area_rS2; 234 | 235 | est_FAR2_rS1(i) = I_FAR_area_rS1 / I_area_rS1; 236 | est_HR2_rS1(i) = C_HR_area_rS1 / C_area_rS1; 237 | 238 | end 239 | 240 | fit.est_HR2_rS1(n,:) = est_HR2_rS1; 241 | fit.obs_HR2_rS1(n,:) = obs_HR2_rS1; 242 | 243 | fit.est_FAR2_rS1(n,:) = est_FAR2_rS1; 244 | fit.obs_FAR2_rS1(n,:) = obs_FAR2_rS1; 245 | 246 | 
fit.est_HR2_rS2(n,:) = est_HR2_rS2; 247 | fit.obs_HR2_rS2(n,:) = obs_HR2_rS2; 248 | 249 | fit.est_FAR2_rS2(n,:) = est_FAR2_rS2; 250 | fit.obs_FAR2_rS2(n,:) = obs_FAR2_rS2; 251 | end 252 | cd(cwd); 253 | -------------------------------------------------------------------------------- /Matlab/fit_meta_d_params.m: -------------------------------------------------------------------------------- 1 | function mcmc_params = fit_meta_d_params() 2 | % mcmc_params = fit_meta_d_params() 3 | % 4 | % Returns default mcmc_params for fit_meta_d_mcmc_group and fit_meta_d_mcmc 5 | % 6 | % SF 2015 7 | 8 | mcmc_params.response_conditional = 0; 9 | mcmc_params.estimate_dprime = 0; 10 | mcmc_params.nchains = 3; % How Many Chains? 11 | mcmc_params.nburnin = 1000; % How Many Burn-in Samples? 12 | mcmc_params.nsamples = 10000; %How Many Recorded Samples? 13 | mcmc_params.nthin = 1; % How Often is a Sample Recorded? 14 | mcmc_params.doparallel = 0; % Parallel Option 15 | mcmc_params.dic = 1; 16 | %% These lines are now part of main functions to avoid issues when changing chain number 17 | % for i=1:mcmc_params.nchains 18 | % mcmc_params.init0(i) = struct; 19 | % end -------------------------------------------------------------------------------- /Matlab/metad_group_visualise.m: -------------------------------------------------------------------------------- 1 | %% Generate mean and 95% CI for group-level ROC 2 | 3 | Nsub = length(fit.d1); 4 | ts = tinv([0.05/2, 1-0.05/2],Nsub-1); 5 | 6 | if any(isnan(fit.obs_FAR2_rS1(:))) || any(isnan(fit.obs_HR2_rS1(:))) || any(isnan(fit.obs_FAR2_rS2(:))) || any(isnan(fit.obs_HR2_rS2(:))) 7 | warning('One or more subjects have NaN entries for observed confidence rating counts; these will be omitted from the plot') 8 | end 9 | 10 | mean_obs_FAR2_rS1 = nanmean(fit.obs_FAR2_rS1); 11 | mean_obs_HR2_rS1 = nanmean(fit.obs_HR2_rS1); 12 | mean_obs_FAR2_rS2 = nanmean(fit.obs_FAR2_rS2); 13 | mean_obs_HR2_rS2 = nanmean(fit.obs_HR2_rS2); 14 | 15 | CI_obs_FAR2_rS1(1,:) = ts(1).*(nanstd(fit.obs_FAR2_rS1)./sqrt(Nsub)); 16 | CI_obs_FAR2_rS1(2,:) = ts(2).*(nanstd(fit.obs_FAR2_rS1)./sqrt(Nsub)); 17 | CI_obs_HR2_rS1(1,:) = ts(1).*(nanstd(fit.obs_HR2_rS1)./sqrt(Nsub)); 18 | CI_obs_HR2_rS1(2,:) = ts(2).*(nanstd(fit.obs_HR2_rS1)./sqrt(Nsub)); 19 | CI_obs_FAR2_rS2(1,:) = ts(1).*(nanstd(fit.obs_FAR2_rS2)./sqrt(Nsub)); 20 | CI_obs_FAR2_rS2(2,:) = ts(2).*(nanstd(fit.obs_FAR2_rS2)./sqrt(Nsub)); 21 | CI_obs_HR2_rS2(1,:) = ts(1).*(nanstd(fit.obs_HR2_rS2)./sqrt(Nsub)); 22 | CI_obs_HR2_rS2(2,:) = ts(2).*(nanstd(fit.obs_HR2_rS2)./sqrt(Nsub)); 23 | 24 | mean_est_FAR2_rS1 = nanmean(fit.est_FAR2_rS1); 25 | mean_est_HR2_rS1 = nanmean(fit.est_HR2_rS1); 26 | mean_est_FAR2_rS2 = nanmean(fit.est_FAR2_rS2); 27 | mean_est_HR2_rS2 = nanmean(fit.est_HR2_rS2); 28 | 29 | CI_est_FAR2_rS1(1,:) = ts(1).*(nanstd(fit.est_FAR2_rS1)./sqrt(Nsub)); 30 | CI_est_FAR2_rS1(2,:) = ts(2).*(nanstd(fit.est_FAR2_rS1)./sqrt(Nsub)); 31 | CI_est_HR2_rS1(1,:) = ts(1).*(nanstd(fit.est_HR2_rS1)./sqrt(Nsub)); 32 | CI_est_HR2_rS1(2,:) = ts(2).*(nanstd(fit.est_HR2_rS1)./sqrt(Nsub)); 33 | CI_est_FAR2_rS2(1,:) = ts(1).*(nanstd(fit.est_FAR2_rS2)./sqrt(Nsub)); 34 | CI_est_FAR2_rS2(2,:) = ts(2).*(nanstd(fit.est_FAR2_rS2)./sqrt(Nsub)); 35 | CI_est_HR2_rS2(1,:) = ts(1).*(nanstd(fit.est_HR2_rS2)./sqrt(Nsub)); 36 | CI_est_HR2_rS2(2,:) = ts(2).*(nanstd(fit.est_HR2_rS2)./sqrt(Nsub)); 37 | 38 | %% Observed and expected type 2 ROCs for S1 and S2 responses 39 | h1 = figure(1); 40 | set(gcf, 'Units', 'normalized'); 41 | set(gcf, 'Position', [0.2 0.2 0.5 0.5]); 42 | 
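% The CI_* arrays above hold the lower (row 1) and upper (row 2) 95% confidence
% offsets around each group-mean type 2 ROC point, i.e. t-based standard errors;
% for each obs_/est_ matrix x the pattern used above is:
%   ts = tinv([0.05/2, 1-0.05/2], Nsub-1);
%   CI(1,:) = ts(1) .* (nanstd(x)./sqrt(Nsub));   % lower offset (negative)
%   CI(2,:) = ts(2) .* (nanstd(x)./sqrt(Nsub));   % upper offset (positive)
% These offsets are supplied to errorbar below as asymmetric y- and x-error bars.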
43 | subplot(1,2,1); 44 | errorbar([1 mean_obs_FAR2_rS1 0], [1 mean_obs_HR2_rS1 0], [0 CI_obs_HR2_rS1(1,:) 0], [0 CI_obs_HR2_rS1(2,:) 0], ... 45 | [0 CI_obs_FAR2_rS1(1,:) 0], [0 CI_obs_FAR2_rS1(2,:) 0], 'ko-','linewidth',1.5,'markersize', 12); 46 | hold on 47 | errorbar([1 mean_est_FAR2_rS1 0], [1 mean_est_HR2_rS1 0], [0 CI_est_HR2_rS1(1,:) 0], [0 CI_est_HR2_rS1(2,:) 0], ... 48 | [0 CI_est_FAR2_rS1(1,:) 0], [0 CI_est_FAR2_rS1(2,:) 0], 'd-','color',[0.5 0.5 0.5], 'linewidth',1.5,'markersize',10); 49 | set(gca, 'XLim', [0 1], 'YLim', [0 1], 'FontSize', 16); 50 | ylabel('HR2'); 51 | xlabel('FAR2'); 52 | line([0 1],[0 1],'linestyle','--','color','k'); 53 | axis square 54 | box off 55 | legend('Data', 'Model', 'Location', 'SouthEast') 56 | legend boxoff 57 | title('Response = S1') 58 | 59 | subplot(1,2,2); 60 | errorbar([1 mean_obs_FAR2_rS2 0], [1 mean_obs_HR2_rS2 0], [0 CI_obs_HR2_rS2(1,:) 0], [0 CI_obs_HR2_rS2(2,:) 0], ... 61 | [0 CI_obs_FAR2_rS2(1,:) 0], [0 CI_obs_FAR2_rS2(2,:) 0], 'ko-','linewidth',1.5,'markersize', 12); 62 | hold on 63 | errorbar([1 mean_est_FAR2_rS2 0], [1 mean_est_HR2_rS2 0], [0 CI_est_HR2_rS2(1,:) 0], [0 CI_est_HR2_rS2(2,:) 0], ... 64 | [0 CI_est_FAR2_rS2(1,:) 0], [0 CI_est_FAR2_rS2(2,:) 0], 'd-','color',[0.5 0.5 0.5], 'linewidth',1.5,'markersize',10); 65 | set(gca, 'XLim', [0 1], 'YLim', [0 1], 'FontSize', 16); 66 | ylabel('HR2'); 67 | xlabel('FAR2'); 68 | line([0 1],[0 1],'linestyle','--','color','k'); 69 | axis square 70 | box off 71 | legend('Data', 'Model', 'Location', 'SouthEast') 72 | legend boxoff 73 | title('Response = S2') 74 | -------------------------------------------------------------------------------- /Matlab/metad_sim.m: -------------------------------------------------------------------------------- 1 | function sim = metad_sim(d, metad, c, c1, c2, Ntrials) 2 | % sim = metad_sim(d, metad, c, c1, c2, Ntrials) 3 | % 4 | % INPUTS 5 | % d - type 1 dprime 6 | % metad - type 2 sensitivity in units of type 1 dprime 7 | % 8 | % c - type 1 criterion 9 | % c1 - type 2 criteria for S1 response 10 | % c2 - type 2 criteria for S2 response 11 | % Ntrials - number of trials to simulate, assumes equal S/N 12 | % 13 | % OUTPUT 14 | % 15 | % sim - structure containing nR_S1 and nR_S2 response counts 16 | % 17 | % SF 2014 18 | 19 | nRatings = length(c1)+1; 20 | 21 | % Calc type 1 response counts 22 | H = round((1-normcdf(c,d/2)).*(Ntrials/2)); 23 | FA = round((1-normcdf(c,-d/2)).*(Ntrials/2)); 24 | CR = round(normcdf(c,-d/2).*(Ntrials/2)); 25 | M = round(normcdf(c,d/2).*(Ntrials/2)); 26 | 27 | % Calc type 2 probabilities 28 | S1mu = -metad/2; 29 | S2mu = metad/2; 30 | 31 | % Normalising constants 32 | C_area_rS1 = normcdf(c,S1mu); 33 | I_area_rS1 = normcdf(c,S2mu); 34 | C_area_rS2 = 1-normcdf(c,S2mu); 35 | I_area_rS2 = 1-normcdf(c,S1mu); 36 | 37 | t2c1x = [-Inf c1 c c2 Inf]; 38 | 39 | for i = 1:nRatings 40 | prC_rS1(i) = ( normcdf(t2c1x(i+1),S1mu) - normcdf(t2c1x(i),S1mu) ) / C_area_rS1; 41 | prI_rS1(i) = ( normcdf(t2c1x(i+1),S2mu) - normcdf(t2c1x(i),S2mu) ) / I_area_rS1; 42 | 43 | prC_rS2(i) = ( (1-normcdf(t2c1x(nRatings+i),S2mu)) - (1-normcdf(t2c1x(nRatings+i+1),S2mu)) ) / C_area_rS2; 44 | prI_rS2(i) = ( (1-normcdf(t2c1x(nRatings+i),S1mu)) - (1-normcdf(t2c1x(nRatings+i+1),S1mu)) ) / I_area_rS2; 45 | end 46 | 47 | % Ensure vectors sum to 1 to avoid problems with mnrnd 48 | prC_rS1 = prC_rS1./sum(prC_rS1); 49 | prI_rS1 = prI_rS1./sum(prI_rS1); 50 | prC_rS2 = prC_rS2./sum(prC_rS2); 51 | prI_rS2 = prI_rS2./sum(prI_rS2); 52 | 53 | % Sample 4 response classes from 
multinomial distirbution (normalised 54 | % within each response class) 55 | nC_rS1 = mnrnd(CR,prC_rS1); 56 | nI_rS1 = mnrnd(M,prI_rS1); 57 | nC_rS2 = mnrnd(H,prC_rS2); 58 | nI_rS2 = mnrnd(FA,prI_rS2); 59 | 60 | % Add to data vectors 61 | sim.nR_S1 = [nC_rS1 nI_rS2]; 62 | sim.nR_S2 = [nI_rS1 nC_rS2]; 63 | -------------------------------------------------------------------------------- /Matlab/metad_visualise.m: -------------------------------------------------------------------------------- 1 | % Visualisation and diagnostics for meta-d' fit object generated by 2 | % fit_meta_d_mcmc 3 | 4 | %% Observed and expected type 2 ROCs for S1 and S2 responses 5 | h1 = figure(1); 6 | set(gcf, 'Units', 'normalized'); 7 | set(gcf, 'Position', [0.2 0.2 0.5 0.5]); 8 | 9 | subplot(2,2,1); 10 | plot(fit.obs_FAR2_rS1, fit.obs_HR2_rS1, 'ko-','linewidth',1.5,'markersize',12); 11 | hold on 12 | plot(fit.est_FAR2_rS1, fit.est_HR2_rS1, '+-','color',[0.5 0.5 0.5], 'linewidth',1.5,'markersize',10); 13 | set(gca, 'XLim', [0 1], 'YLim', [0 1], 'FontSize', 16); 14 | ylabel('HR2'); 15 | xlabel('FAR2'); 16 | line([0 1],[0 1],'linestyle','--','color','k'); 17 | axis square 18 | box off 19 | 20 | subplot(2,2,2); 21 | plot(fit.obs_FAR2_rS2, fit.obs_HR2_rS2, 'ko-','linewidth',1.5,'markersize',12); 22 | hold on 23 | plot(fit.est_FAR2_rS2, fit.est_HR2_rS2, '+-','color',[0.5 0.5 0.5], 'linewidth',1.5,'markersize',10); 24 | set(gca, 'XLim', [0 1], 'YLim', [0 1], 'FontSize', 16); 25 | ylabel('HR2'); 26 | xlabel('FAR2'); 27 | line([0 1],[0 1],'linestyle','--','color','k'); 28 | axis square 29 | box off 30 | 31 | %% Observed and expected type 2 ROC in z-space 32 | 33 | subplot(2,2,3); 34 | plot(norminv(fit.obs_FAR2_rS1), norminv(fit.obs_HR2_rS1), 'ko-','linewidth',1.5,'markersize',12); 35 | hold on 36 | plot(norminv(fit.est_FAR2_rS1), norminv(fit.est_HR2_rS1), '+-','color',[0.5 0.5 0.5], 'linewidth',1.5,'markersize',10); 37 | set(gca, 'FontSize', 16); 38 | ylabel('z(HR2)'); 39 | xlabel('z(FAR2)'); 40 | axis square 41 | box off 42 | 43 | subplot(2,2,4); 44 | plot(norminv(fit.obs_FAR2_rS2), norminv(fit.obs_HR2_rS2), 'ko-','linewidth',1.5,'markersize',12); 45 | hold on 46 | plot(norminv(fit.est_FAR2_rS2), norminv(fit.est_HR2_rS2), '+-','color',[0.5 0.5 0.5], 'linewidth',1.5,'markersize',10); 47 | set(gca, 'FontSize', 16); 48 | ylabel('z(HR2)'); 49 | xlabel('z(FAR2)'); 50 | axis square 51 | box off 52 | legend('Data','Model','Location','SouthEast'); 53 | 54 | %% Plot posteriors for parameters 55 | h2 = figure(2); 56 | set(gcf, 'Units', 'normalized'); 57 | set(gcf, 'Position', [0.2 0.2 0.7 0.3]); 58 | maxSamp = []; 59 | 60 | subplot(1,3,1); 61 | [n x] = hist(fit.mcmc.samples.meta_d(:)); 62 | bar(x, n, 'edgecolor','b','facecolor',[1 1 1]); 63 | xlabel('meta-d'''); 64 | ylabel('samples'); 65 | 66 | subplot(1,3,2); 67 | plot(fit.mcmc.samples.meta_d'); 68 | xlabel('Sample'); 69 | ylabel('meta-d'); 70 | 71 | subplot(1,3,3); 72 | for i = 1:length(fit.t2ca_rS1) 73 | [n x] = hist(fit.mcmc.samples.cS1(1,:,i)); 74 | bar(x, n, 'edgecolor','b','facecolor',[1 1 1]); 75 | hold on 76 | maxSamp = [maxSamp max(n)]; 77 | end 78 | for i = 1:length(fit.t2ca_rS2) 79 | [n x] = hist(fit.mcmc.samples.cS2(1,:,i)); 80 | bar(x, n, 'edgecolor','b','facecolor',[1 1 1]); 81 | hold on 82 | maxSamp = [maxSamp max(n)]; 83 | end 84 | xlabel('c_2'); 85 | ylabel('samples'); 86 | 87 | -------------------------------------------------------------------------------- /Matlab/plotSamples.m: -------------------------------------------------------------------------------- 1 | 
function plotSamples(samples) 2 | % function out = plotSamples(samples) 3 | % 4 | % Plots chains and histograms of samples to check mixing 5 | % 6 | % INPUTS 7 | % 8 | % samples - vector of MCMC samples 9 | % 10 | % Steve Fleming 2015 stephen.fleming@ucl.ac.uk 11 | 12 | figure; 13 | set(gcf, 'Position', [200 200 800 300]) 14 | 15 | subplot(1,2,1) 16 | plot(samples'); 17 | xlabel('Sample'); 18 | ylabel('Parameter'); 19 | box off 20 | set(gca, 'FontSize', 14) 21 | 22 | subplot(1,2,2) 23 | histogram(samples(:)) 24 | xlabel('Parameter'); 25 | ylabel('Sample count'); 26 | box off 27 | set(gca, 'FontSize', 14) 28 | 29 | -------------------------------------------------------------------------------- /Matlab/plot_generative_model.m: -------------------------------------------------------------------------------- 1 | % Shows underlying SDT model and relationship to multinomial probabilities 2 | % for fitted type 2 data 3 | % 4 | % Requires fit, nR_S1 and nR_S2 objects from either subject-level HMM or MLE meta-d fit to be in the 5 | % workspace 6 | % 7 | % SF 2014 8 | 9 | figure; 10 | set(gcf, 'Units', 'normalized'); 11 | set(gcf, 'Position', [0.2 0.2 0.5 0.3]); 12 | base = linspace(-4,4,500); 13 | 14 | subplot(1,2,1); 15 | mu1 = fit.meta_d./2; 16 | S = normpdf(base, mu1, 1); 17 | N = normpdf(base, -mu1, 1); 18 | plot(base, S, 'k', 'LineWidth', 2); 19 | hold on 20 | plot(base, N, 'k--', 'LineWidth', 2); 21 | set(gca, 'YLim', [0 0.5], 'FontSize',12); 22 | line([fit.c1 fit.c1],[0 0.5],'Color','k','LineWidth',1); 23 | for i = 1:length(fit.t2ca_rS1) 24 | line([fit.t2ca_rS1(i) fit.t2ca_rS1(i)],[0 0.5], 'Color', 'k', 'LineStyle', '--', 'LineWidth', 1); 25 | line([fit.t2ca_rS2(i) fit.t2ca_rS2(i)],[0 0.5], 'Color', 'k', 'LineStyle', '--', 'LineWidth', 1); 26 | end 27 | xlabel('X','FontSize', 14) 28 | ylabel('f(x|S)', 'FontSize', 14); 29 | 30 | % Show model fit in terms of proportion total confidence ratings, collapse 31 | % over response 32 | mean_c2 = (fit.t2ca_rS2 + abs(fit.t2ca_rS1(end:-1:1)))./2; 33 | I_area = 1-normcdf(0,-mu1,1); 34 | C_area = 1-normcdf(0,mu1,1); 35 | allC = [0 mean_c2 Inf]; 36 | for i = 1:length(allC)-1 37 | I_prop(i) = (normcdf(allC(i+1), -mu1, 1) - normcdf(allC(i), -mu1, 1))./I_area; 38 | C_prop(i) = (normcdf(allC(i+1), mu1, 1) - normcdf(allC(i), mu1, 1))./C_area; 39 | end 40 | 41 | subplot(1,2,2); 42 | modelProp = [C_prop(end:-1:1) I_prop]; 43 | obsCount = (nR_S1 + nR_S2(end:-1:1)); 44 | Nrating = length(nR_S1)./2; 45 | obsProp(1:Nrating) = obsCount(1:Nrating)./sum(obsCount(1:Nrating)); 46 | obsProp(Nrating+1:2*Nrating) = obsCount(Nrating+1:2*Nrating)./sum(obsCount(Nrating+1:2*Nrating)); 47 | bar(obsProp); 48 | hold on 49 | plot(1:length(modelProp), modelProp, 'ro ', 'MarkerSize', 10, 'LineWidth', 2); 50 | set(gca, 'YLim', [0 1], 'XTick', [1:8], 'XTickLabel', {'4','3','2','1','1','2','3','4'},'FontSize',12); 51 | line([4.5 4.5], [0 1], 'Color', 'k', 'LineStyle', '--'); 52 | ylabel('P(conf = y) | outcome)','FontSize',14); 53 | xlabel('Confidence rating','FontSize',14); 54 | text(8, 0.7, 'Correct','FontSize',14) 55 | text(1, 0.7, 'Error','FontSize',14); -------------------------------------------------------------------------------- /Matlab/tmpjags/.gitignore: -------------------------------------------------------------------------------- 1 | # Ignore everything in this directory 2 | * 3 | # Except this file 4 | !.gitignore 5 | -------------------------------------------------------------------------------- /Matlab/trials2counts.m: 
-------------------------------------------------------------------------------- 1 | function [nR_S1 nR_S2] = trials2counts(stimID,response,rating,nRatings,cellpad) 2 | 3 | % [nR_S1 nR_S2] = trials2counts(stimID,response,rating,nRatings,cellpad) 4 | % 5 | % Convert trial by trial experimental information for N trials into response counts. 6 | % 7 | % INPUTS 8 | % stimID: 1xN vector. stimID(i) = 0 --> stimulus on i'th trial was S1. 9 | % stimID(i) = 1 --> stimulus on i'th trial was S2. 10 | % 11 | % response: 1xN vector. response(i) = 0 --> response on i'th trial was "S1". 12 | % response(i) = 1 --> response on i'th trial was "S2". 13 | % 14 | % rating: 1xN vector. rating(i) = X --> rating on i'th trial was X. 15 | % X must be in the range 1 <= X <= nRatings. 16 | % 17 | % nRatings: total # of available subjective ratings available for the 18 | % subject. e.g. if subject can rate confidence on a scale of 1-4, 19 | % then nRatings = 4 20 | % 21 | % cellpad: if set to 1, each response count in the output has the value of 22 | % 1/(2*nRatings) added to it. This is desirable if trial counts of 23 | % 0 interfere with model fitting. 24 | % if set to 0, trial counts are not manipulated and 0s may be 25 | % present. 26 | % default value is 0. 27 | % 28 | % OUTPUTS 29 | % nR_S1, nR_S2 30 | % these are vectors containing the total number of responses in 31 | % each response category, conditional on presentation of S1 and S2. 32 | % 33 | % e.g. if nR_S1 = [100 50 20 10 5 1], then when stimulus S1 was 34 | % presented, the subject had the following response counts: 35 | % responded S1, rating=3 : 100 times 36 | % responded S1, rating=2 : 50 times 37 | % responded S1, rating=1 : 20 times 38 | % responded S2, rating=1 : 10 times 39 | % responded S2, rating=2 : 5 times 40 | % responded S2, rating=3 : 1 time 41 | 42 | nR_S1 = []; 43 | nR_S2 = []; 44 | 45 | % S1 responses 46 | for r = nRatings : -1 : 1 47 | nR_S1(end+1) = sum(stimID==0 & response==0 & rating==r); 48 | nR_S2(end+1) = sum(stimID==1 & response==0 & rating==r); 49 | end 50 | 51 | % S2 responses 52 | for r = 1 : nRatings 53 | nR_S1(end+1) = sum(stimID==0 & response==1 & rating==r); 54 | nR_S2(end+1) = sum(stimID==1 & response==1 & rating==r); 55 | end 56 | 57 | % cell pad 58 | if ~exist('cellpad','var') || isempty(cellpad), cellpad = 0; end 59 | 60 | if cellpad 61 | 62 | padFactor = 1/(2*nRatings); 63 | 64 | nR_S1 = nR_S1 + padFactor; 65 | nR_S2 = nR_S2 + padFactor; 66 | 67 | end -------------------------------------------------------------------------------- /Matlab/type2_SDT_sim.m: -------------------------------------------------------------------------------- 1 | function sim = type2_SDT_sim(d, noise, c, c1, c2, Ntrials) 2 | % Type 2 SDT simulation with variable noise 3 | % sim = type2_SDT_sim(d, noise, c, c1, c2, Ntrials) 4 | % 5 | % INPUTS 6 | % d - type 1 dprime 7 | % noise - standard deviation of noise to be added to type 1 internal 8 | % response for type 2 judgment. 
If noise is a 1 x 2 vector then this will 9 | % simulate response-conditional type 2 data where noise = [sigma_rS1 10 | % sigma_rS2] 11 | % 12 | % c - type 1 criterion 13 | % c1 - type 2 criteria for S1 response 14 | % c2 - type 2 criteria for S2 response 15 | % Ntrials - number of trials to simulate 16 | % 17 | % OUTPUT 18 | % 19 | % sim - structure containing nR_S1 and nR_S2 response counts 20 | % 21 | % SF 2014 22 | 23 | if length(noise) > 1 24 | rc = 1; 25 | sigma1 = noise(1); 26 | sigma2 = noise(2); 27 | else 28 | rc = 0; 29 | sigma = noise; 30 | end 31 | 32 | S1mu = -d/2; 33 | S2mu = d/2; 34 | 35 | % Initialise response arrays 36 | nC_rS1 = zeros(1, length(c1)+1); 37 | nI_rS1 = zeros(1, length(c1)+1); 38 | nC_rS2 = zeros(1, length(c2)+1); 39 | nI_rS2 = zeros(1, length(c2)+1); 40 | 41 | for t = 1:Ntrials 42 | s = round(rand); 43 | 44 | % Type 1 SDT model 45 | if s == 1 46 | x = normrnd(S2mu, 1); 47 | else 48 | x = normrnd(S1mu, 1); 49 | end 50 | 51 | % Add type 2 noise to signal 52 | if rc % add response-conditional noise 53 | if x < c 54 | if sigma1 > 0 55 | x2 = normrnd(x, sigma1); 56 | else 57 | x2 = x; 58 | end 59 | else 60 | if sigma2 > 0 61 | x2 = normrnd(x, sigma2); 62 | else 63 | x2 = x; 64 | end 65 | end 66 | else 67 | if sigma > 0 68 | x2 = normrnd(x,sigma); 69 | else 70 | x2 = x; 71 | end 72 | end 73 | 74 | % Generate confidence ratings 75 | if s == 0 && x < c % stimulus S1 and response S1 76 | pos = (x2 <= [c1 c]); 77 | [y ind] = find(pos); 78 | i = min(ind); 79 | nC_rS1(i) = nC_rS1(i) + 1; 80 | 81 | elseif s == 0 && x >= c % stimulus S1 and response S2 82 | pos = (x2 >= [c c2]); 83 | [y ind] = find(pos); 84 | i = max(ind); 85 | nI_rS2(i) = nI_rS2(i) + 1; 86 | 87 | elseif s == 1 && x < c % stimulus S2 and response S1 88 | pos = (x2 <= [c1 c]); 89 | [y ind] = find(pos); 90 | i = min(ind); 91 | nI_rS1(i) = nI_rS1(i) + 1; 92 | 93 | elseif s == 1 && x >= c % stimulus S2 and response S2 94 | pos = (x2 >= [c c2]); 95 | [y ind] = find(pos); 96 | i = max(ind); 97 | nC_rS2(i) = nC_rS2(i) + 1; 98 | end 99 | 100 | end 101 | 102 | sim.nR_S1 = [nC_rS1 nI_rS2]; 103 | sim.nR_S2 = [nI_rS1 nC_rS2]; 104 | -------------------------------------------------------------------------------- /R/Bayes_metad_2wayANOVA.txt: -------------------------------------------------------------------------------- 1 | # Bayesian estimation of meta-d/d (2 way repeated measures ANOVA) 2 | 3 | data { 4 | for (s in 1:nsubj) { 5 | for (i in 1:4){ 6 | # Type 1 counts for task 1 7 | N[s,i] <- sum(counts[s,1:(nratings*2),i]) 8 | S[s,i] <- sum(counts[s,(nratings*2+1):(nratings*4),i]) 9 | H[s,i] <- sum(counts[s,(nratings*3+1):(nratings*4),i]) 10 | M[s,i] <- sum(counts[s,(nratings*2+1):(nratings*3),i]) 11 | FA[s,i] <- sum(counts[s,(nratings+1):(nratings*2),i]) 12 | CR[s,i] <- sum(counts[s,1:(nratings),i]) 13 | } 14 | } 15 | } 16 | 17 | model { 18 | for (s in 1:nsubj) { 19 | for (i in 1:4){ 20 | 21 | ## TYPE 2 SDT MODEL (META-D) 22 | # Multinomial likelihood for response counts ordered as c(nR_S1,nR_S2) 23 | 24 | counts[s,1:(nratings),i] ~ dmulti(prT[s,1:(nratings),i],CR[s,i]) 25 | counts[s,(nratings+1):(nratings*2),i] ~ dmulti(prT[s,(nratings+1):(nratings*2),i],FA[s,i]) 26 | counts[s,(nratings*2+1):(nratings*3),i] ~ dmulti(prT[s,(nratings*2+1):(nratings*3),i],M[s,i]) 27 | counts[s,(nratings*3+1):(nratings*4),i] ~ dmulti(prT[s,(nratings*3+1):(nratings*4),i],H[s,i]) 28 | 29 | # Means of SDT distributions] 30 | mu[s,i] <- Mratio[s,i]*d1[s,i] 31 | S2mu[s,i] <- mu[s,i]/2 32 | S1mu[s,i] <- -mu[s,i]/2 33 | 34 | # Calculate 
normalisation constants 35 | C_area_rS1[s,i] <- phi(c1[s,i] - S1mu[s,i]) 36 | I_area_rS1[s,i] <- phi(c1[s,i] - S2mu[s,i]) 37 | C_area_rS2[s,i] <- 1-phi(c1[s,i] - S2mu[s,i]) 38 | I_area_rS2[s,i] <- 1-phi(c1[s,i] - S1mu[s,i]) 39 | 40 | # Get nC_rS1 probs 41 | pr[s,1,i] <- phi(cS1[s,1,i] - S1mu[s,i])/C_area_rS1[s,i] 42 | for (k in 1:(nratings-2)) { 43 | pr[s,(k+1),i] <- (phi(cS1[s,(k+1),i] - S1mu[s,i])-phi(cS1[s,k,i] - S1mu[s,i]))/C_area_rS1[s,i] 44 | } 45 | pr[s,(nratings),i] <- (phi(c1[s,i] - S1mu[s,i])-phi(cS1[s,(nratings-1),i] - S1mu[s,i]))/C_area_rS1[s,i] 46 | 47 | # Get nI_rS2 probs 48 | pr[s,(nratings+1),i] <- ((1-phi(c1[s,i] - S1mu[s,i]))-(1-phi(cS2[s,1,i] - S1mu[s,i])))/I_area_rS2[s,i] 49 | for (k in 1:(nratings-2)) { 50 | pr[s,(nratings+1+k),i] <- ((1-phi(cS2[s,k,i] - S1mu[s,i]))-(1-phi(cS2[s,(k+1),i] - S1mu[s,i])))/I_area_rS2[s,i] 51 | } 52 | pr[s,(nratings*2),i] <- (1-phi(cS2[s,(nratings-1),i] - S1mu[s,i]))/I_area_rS2[s,i] 53 | 54 | # Get nI_rS1 probs 55 | pr[s,(nratings*2+1), i] <- phi(cS1[s,1,i] - S2mu[s,i])/I_area_rS1[s,i] 56 | for (k in 1:(nratings-2)) { 57 | pr[s,(nratings*2+1+k),i] <- (phi(cS1[s,(k+1),i] - S2mu[s,i])-phi(cS1[s,k,i] - S2mu[s,i]))/I_area_rS1[s,i] 58 | } 59 | pr[s,(nratings*3),i] <- (phi(c1[s,i] - S2mu[s,i])-phi(cS1[s,(nratings-1),i] - S2mu[s,i]))/I_area_rS1[s,i] 60 | 61 | # Get nC_rS2 probs 62 | pr[s,(nratings*3+1),i] <- ((1-phi(c1[s,i] - S2mu[s,i]))-(1-phi(cS2[s,1,i] - S2mu[s,i])))/C_area_rS2[s,i] 63 | for (k in 1:(nratings-2)) { 64 | pr[s,(nratings*3+1+k),i] <- ((1-phi(cS2[s,k,i] - S2mu[s,i]))-(1-phi(cS2[s,(k+1),i] - S2mu[s,i])))/C_area_rS2[s,i] 65 | } 66 | pr[s,(nratings*4),i] <- (1-phi(cS2[s,(nratings-1),i] - S2mu[s,i]))/C_area_rS2[s,i] 67 | 68 | # Avoid underflow of probabilities 69 | for (ii in 1:(nratings*4)) { 70 | prT[s,ii,i] <- ifelse(pr[s,ii,i] < Tol, Tol, pr[s,ii,i]) 71 | } 72 | 73 | # Specify ordered prior on criteria (bounded above and below by Type 1 c) 74 | for (j in 1:(nratings-1)) { 75 | cS1_raw[s,j,i] ~ dnorm(-mu_c2, lambda_c2) T(,c1[s,i]-Tol) 76 | cS2_raw[s,j,i] ~ dnorm(mu_c2, lambda_c2) T(c1[s,i]+Tol,) 77 | } 78 | cS1[s,1:(nratings-1),i] <- sort(cS1_raw[s,1:(nratings-1),i]) 79 | cS2[s,1:(nratings-1),i] <- sort(cS2_raw[s,1:(nratings-1),i]) 80 | 81 | mu_regression[s,i] <- dbase[s] + Bd_Condition1[s]*Condition1[i] + Bd_Condition2[s]*Condition2[i] + Bd_interaction[s]*Interaction[i] 82 | logMratio[s,i] ~ dnorm(mu_regression[s,i], tau[s]) 83 | Mratio[s,i] <- exp(logMratio[s,i]) 84 | } 85 | 86 | dbase[s] ~ dnorm(muD,lamD) 87 | Bd_Condition1[s] ~ dnorm(muBd_Condition1,lamBd_Condition1) 88 | Bd_Condition2[s] ~ dnorm(muBd_Condition2,lamBd_Condition2) 89 | Bd_interaction[s] ~ dnorm(muBd_interaction,lamBd_interaction) 90 | tau[s] ~ dgamma(0.01, 0.01) 91 | } 92 | 93 | mu_c2 ~ dnorm(0, 0.01) 94 | sigma_c2 ~ dnorm(0, 0.01) I(0, ) 95 | lambda_c2 <- pow(sigma_c2, -2) 96 | 97 | # Hyperpriors 98 | muD ~ dnorm(0,.001) 99 | sigma_D ~ dnorm(0, 0.1) I(0, ) 100 | lamD <- pow(sigma_D, -2) 101 | sigD <- 1/sqrt(lamD) 102 | 103 | muBd_Condition1 ~ dnorm(0,.001) 104 | sigma_Condition1 ~ dnorm(0, 0.1) I(0, ) 105 | lamBd_Condition1 <- pow(sigma_Condition1, -2) 106 | sigD_Condition1 <- 1/sqrt(lamBd_Condition1) 107 | 108 | muBd_Condition2 ~ dnorm(0,.001) 109 | sigma_Condition2 ~ dnorm(0, 0.1) I(0, ) 110 | lamBd_Condition2 <- pow(sigma_Condition2, -2) 111 | sigD_Condition2 <- 1/sqrt(lamBd_Condition2) 112 | 113 | muBd_interaction ~ dnorm(0,.001) 114 | sigma_interaction ~ dnorm(0, 0.1) I(0, ) 115 | lamBd_interaction <- pow(sigma_interaction, -2) 116 | sigD_interaction <- 
1/sqrt(lamBd_interaction) 117 | } -------------------------------------------------------------------------------- /R/Bayes_metad_group_R.txt: -------------------------------------------------------------------------------- 1 | # Bayesian estimation of meta-d/d for group 2 | 3 | data { 4 | for (s in 1:nsubj) { 5 | # Type 1 counts 6 | N[s] <- sum(counts[s,1:(nratings*2)]) 7 | S[s] <- sum(counts[s,(nratings*2+1):(nratings*4)]) 8 | H[s] <- sum(counts[s,(nratings*3+1):(nratings*4)]) 9 | M[s] <- sum(counts[s,(nratings*2+1):(nratings*3)]) 10 | FA[s] <- sum(counts[s,(nratings+1):(nratings*2)]) 11 | CR[s] <- sum(counts[s,1:(nratings)]) 12 | } 13 | } 14 | 15 | model { 16 | 17 | for (s in 1:nsubj) { 18 | 19 | ## TYPE 1 SDT BINOMIAL MODEL 20 | H[s] ~ dbin(h[s],S[s]) 21 | FA[s] ~ dbin(f[s],N[s]) 22 | h[s] <- phi(d1[s]/2-c1[s]) 23 | f[s] <- phi(-d1[s]/2-c1[s]) 24 | 25 | # Type 1 priors 26 | c1[s] ~ dnorm(mu_c, lambda_c) 27 | d1[s] ~ dnorm(mu_d1, lambda_d1) 28 | 29 | ## TYPE 2 SDT MODEL (META-D) 30 | # Multinomial likelihood for response counts ordered as c(nR_S1,nR_S2) 31 | counts[s,1:(nratings)] ~ dmulti(prT[s,1:(nratings)],CR[s]) 32 | counts[s,(nratings+1):(nratings*2)] ~ dmulti(prT[s,(nratings+1):(nratings*2)],FA[s]) 33 | counts[s,(nratings*2+1):(nratings*3)] ~ dmulti(prT[s,(nratings*2+1):(nratings*3)],M[s]) 34 | counts[s,(nratings*3+1):(nratings*4)] ~ dmulti(prT[s,(nratings*3+1):(nratings*4)],H[s]) 35 | 36 | # Means of SDT distributions] 37 | mu[s] <- Mratio[s]*d1[s] 38 | S2mu[s] <- mu[s]/2 39 | S1mu[s] <- -mu[s]/2 40 | 41 | # Calculate normalisation constants 42 | C_area_rS1[s] <- phi(c1[s] - S1mu[s]) 43 | I_area_rS1[s] <- phi(c1[s] - S2mu[s]) 44 | C_area_rS2[s] <- 1-phi(c1[s] - S2mu[s]) 45 | I_area_rS2[s] <- 1-phi(c1[s] - S1mu[s]) 46 | 47 | # Get nC_rS1 probs 48 | pr[s,1] <- phi(cS1[s,1] - S1mu[s])/C_area_rS1[s] 49 | for (k in 1:(nratings-2)) { 50 | pr[s,k+1] <- (phi(cS1[s,(k+1)] - S1mu[s])-phi(cS1[s,k] - S1mu[s]))/C_area_rS1[s] 51 | } 52 | pr[s,nratings] <- (phi(c1[s] - S1mu[s])-phi(cS1[s,(nratings-1)] - S1mu[s]))/C_area_rS1[s] 53 | 54 | # Get nI_rS2 probs 55 | pr[s,(nratings+1)] <- ((1-phi(c1[s] - S1mu[s]))-(1-phi(cS2[s,1] - S1mu[s])))/I_area_rS2[s] 56 | for (k in 1:(nratings-2)) { 57 | pr[s,(nratings+1+k)] <- ((1-phi(cS2[s,k] - S1mu[s]))-(1-phi(cS2[s,(k+1)] - S1mu[s])))/I_area_rS2[s] 58 | } 59 | pr[s,(nratings*2)] <- (1-phi(cS2[s,(nratings-1)] - S1mu[s]))/I_area_rS2[s] 60 | 61 | # Get nI_rS1 probs 62 | pr[s,(nratings*2+1)] <- phi(cS1[s,1] - S2mu[s])/I_area_rS1[s] 63 | for (k in 1:(nratings-2)) { 64 | pr[s,(nratings*2+1+k)] <- (phi(cS1[s,(k+1)] - S2mu[s])-phi(cS1[s,k] - S2mu[s]))/I_area_rS1[s] 65 | } 66 | pr[s,(nratings*3)] <- (phi(c1[s] - S2mu[s])-phi(cS1[s,(nratings-1)] - S2mu[s]))/I_area_rS1[s] 67 | 68 | # Get nC_rS2 probs 69 | pr[s,(nratings*3+1)] <- ((1-phi(c1[s] - S2mu[s]))-(1-phi(cS2[s,1] - S2mu[s])))/C_area_rS2[s] 70 | for (k in 1:(nratings-2)) { 71 | pr[s,(nratings*3+1+k)] <- ((1-phi(cS2[s,k] - S2mu[s]))-(1-phi(cS2[s,(k+1)] - S2mu[s])))/C_area_rS2[s] 72 | } 73 | pr[s,(nratings*4)] <- (1-phi(cS2[s,(nratings-1)] - S2mu[s]))/C_area_rS2[s] 74 | 75 | # Avoid underflow of probabilities 76 | for (i in 1:(nratings*4)) { 77 | prT[s,i] <- ifelse(pr[s,i] < Tol, Tol, pr[s,i]) 78 | } 79 | 80 | # Specify ordered prior on criteria (bounded above and below by Type 1 c) 81 | for (j in 1:(nratings-1)) { 82 | cS1_raw[s,j] ~ dnorm(-mu_c2, lambda_c2) T(,c1[s]) 83 | cS2_raw[s,j] ~ dnorm(mu_c2, lambda_c2) T(c1[s],) 84 | } 85 | cS1[s,1:(nratings-1)] <- sort(cS1_raw[s, ]) 86 | cS2[s,1:(nratings-1)] <- 
sort(cS2_raw[s, ]) 87 | 88 | delta[s] ~ dnorm(0, lambda_delta) 89 | logMratio[s] <- mu_logMratio + epsilon_logMratio*delta[s] 90 | Mratio[s] <- exp(logMratio[s]) 91 | } 92 | 93 | # hyperpriors 94 | mu_d1 ~ dnorm(0, 0.01) 95 | mu_c ~ dnorm(0, 0.01) 96 | sigma_d1 ~ dnorm(0, 0.01) I(0, ) 97 | sigma_c ~ dnorm(0, 0.01) I(0, ) 98 | lambda_d1 <- pow(sigma_d1, -2) 99 | lambda_c <- pow(sigma_c, -2) 100 | 101 | mu_c2 ~ dnorm(0, 0.01) 102 | sigma_c2 ~ dnorm(0, 0.01) I(0, ) 103 | lambda_c2 <- pow(sigma_c2, -2) 104 | 105 | mu_logMratio ~ dnorm(0, 1) 106 | sigma_delta ~ dnorm(0, 1) I(0,) 107 | lambda_delta <- pow(sigma_delta, -2) 108 | epsilon_logMratio ~ dbeta(1,1) 109 | sigma_logMratio <- abs(epsilon_logMratio)*sigma_delta 110 | 111 | } 112 | -------------------------------------------------------------------------------- /R/Bayes_metad_group_corr2_R.txt: -------------------------------------------------------------------------------- 1 | # Bayesian estimation of meta-d/d for group correlation between 4 domains 2 | 3 | data { 4 | for (s in 1:nsubj) { 5 | # Type 1 counts for task 1 6 | N[s,1] <- sum(counts1[s,1:(nratings*2)]) 7 | S[s,1] <- sum(counts1[s,(nratings*2+1):(nratings*4)]) 8 | H[s,1] <- sum(counts1[s,(nratings*3+1):(nratings*4)]) 9 | M[s,1] <- sum(counts1[s,(nratings*2+1):(nratings*3)]) 10 | FA[s,1] <- sum(counts1[s,(nratings+1):(nratings*2)]) 11 | CR[s,1] <- sum(counts1[s,1:(nratings)]) 12 | 13 | # Type 1 counts for task 2 14 | N[s,2] <- sum(counts2[s,1:(nratings*2)]) 15 | S[s,2] <- sum(counts2[s,(nratings*2+1):(nratings*4)]) 16 | H[s,2] <- sum(counts2[s,(nratings*3+1):(nratings*4)]) 17 | M[s,2] <- sum(counts2[s,(nratings*2+1):(nratings*3)]) 18 | FA[s,2] <- sum(counts2[s,(nratings+1):(nratings*2)]) 19 | CR[s,2] <- sum(counts2[s,1:(nratings)]) 20 | } 21 | } 22 | 23 | model { 24 | for (s in 1:nsubj) { 25 | 26 | ## TYPE 2 SDT MODEL (META-D) 27 | # Multinomial likelihood for response counts ordered as c(nR_S1,nR_S2) 28 | 29 | counts1[s,1:(nratings)] ~ dmulti(prT[s,1:(nratings),1],CR[s,1]) 30 | counts1[s,(nratings+1):(nratings*2)] ~ dmulti(prT[s,(nratings+1):(nratings*2),1],FA[s,1]) 31 | counts1[s,(nratings*2+1):(nratings*3)] ~ dmulti(prT[s,(nratings*2+1):(nratings*3),1],M[s,1]) 32 | counts1[s,(nratings*3+1):(nratings*4)] ~ dmulti(prT[s,(nratings*3+1):(nratings*4),1],H[s,1]) 33 | 34 | counts2[s,1:(nratings)] ~ dmulti(prT[s,1:(nratings),2],CR[s,2]) 35 | counts2[s,(nratings+1):(nratings*2)] ~ dmulti(prT[s,(nratings+1):(nratings*2),2],FA[s,2]) 36 | counts2[s,(nratings*2+1):(nratings*3)] ~ dmulti(prT[s,(nratings*2+1):(nratings*3),2],M[s,2]) 37 | counts2[s,(nratings*3+1):(nratings*4)] ~ dmulti(prT[s,(nratings*3+1):(nratings*4),2],H[s,2]) 38 | 39 | for (task in 1:2) { 40 | 41 | # Means of SDT distributions] 42 | mu[s,task] <- Mratio[s,task]*d1[s,task] 43 | S2mu[s,task] <- mu[s,task]/2 44 | S1mu[s,task] <- -mu[s,task]/2 45 | 46 | # Calculate normalisation constants 47 | C_area_rS1[s,task] <- phi(c1[s,task] - S1mu[s,task]) 48 | I_area_rS1[s,task] <- phi(c1[s,task] - S2mu[s,task]) 49 | C_area_rS2[s,task] <- 1-phi(c1[s,task] - S2mu[s,task]) 50 | I_area_rS2[s,task] <- 1-phi(c1[s,task] - S1mu[s,task]) 51 | 52 | # Get nC_rS1 probs 53 | pr[s,1,task] <- phi(cS1[s,1,task] - S1mu[s,task])/C_area_rS1[s,task] 54 | for (k in 1:(nratings-2)) { 55 | pr[s,(k+1),task] <- (phi(cS1[s,(k+1),task] - S1mu[s,task])-phi(cS1[s,k,task] - S1mu[s,task]))/C_area_rS1[s,task] 56 | } 57 | pr[s,(nratings),task] <- (phi(c1[s,task] - S1mu[s,task])-phi(cS1[s,(nratings-1),task] - S1mu[s,task]))/C_area_rS1[s,task] 58 | 59 | # Get nI_rS2 
probs 60 | pr[s,(nratings+1),task] <- ((1-phi(c1[s,task] - S1mu[s,task]))-(1-phi(cS2[s,1,task] - S1mu[s,task])))/I_area_rS2[s,task] 61 | for (k in 1:(nratings-2)) { 62 | pr[s,(nratings+1+k),task] <- ((1-phi(cS2[s,k,task] - S1mu[s,task]))-(1-phi(cS2[s,(k+1),task] - S1mu[s,task])))/I_area_rS2[s,task] 63 | } 64 | pr[s,(nratings*2),task] <- (1-phi(cS2[s,(nratings-1),task] - S1mu[s,task]))/I_area_rS2[s,task] 65 | 66 | # Get nI_rS1 probs 67 | pr[s,(nratings*2+1), task] <- phi(cS1[s,1,task] - S2mu[s,task])/I_area_rS1[s,task] 68 | for (k in 1:(nratings-2)) { 69 | pr[s,(nratings*2+1+k),task] <- (phi(cS1[s,(k+1),task] - S2mu[s,task])-phi(cS1[s,k,task] - S2mu[s,task]))/I_area_rS1[s,task] 70 | } 71 | pr[s,(nratings*3),task] <- (phi(c1[s,task] - S2mu[s,task])-phi(cS1[s,(nratings-1),task] - S2mu[s,task]))/I_area_rS1[s,task] 72 | 73 | # Get nC_rS2 probs 74 | pr[s,(nratings*3+1),task] <- ((1-phi(c1[s,task] - S2mu[s,task]))-(1-phi(cS2[s,1,task] - S2mu[s,task])))/C_area_rS2[s,task] 75 | for (k in 1:(nratings-2)) { 76 | pr[s,(nratings*3+1+k),task] <- ((1-phi(cS2[s,k,task] - S2mu[s,task]))-(1-phi(cS2[s,(k+1),task] - S2mu[s,task])))/C_area_rS2[s,task] 77 | } 78 | pr[s,(nratings*4),task] <- (1-phi(cS2[s,(nratings-1),task] - S2mu[s,task]))/C_area_rS2[s,task] 79 | 80 | # Avoid underflow of probabilities 81 | for (i in 1:(nratings*4)) { 82 | prT[s,i,task] <- ifelse(pr[s,i,task] < Tol, Tol, pr[s,i,task]) 83 | } 84 | 85 | # Specify ordered prior on criteria (bounded above and below by Type 1 c) 86 | for (j in 1:(nratings-1)) { 87 | cS1_raw[s,j,task] ~ dnorm(-mu_c2[task], lambda_c2[task]) T(,c1[s,task]) 88 | cS2_raw[s,j,task] ~ dnorm(mu_c2[task], lambda_c2[task]) T(c1[s,task],) 89 | } 90 | cS1[s,1:(nratings-1),task] <- sort(cS1_raw[s,1:(nratings-1),task]) 91 | cS2[s,1:(nratings-1),task] <- sort(cS2_raw[s,1:(nratings-1),task]) 92 | 93 | Mratio[s,task] <- exp(logMratio[s,task]) 94 | 95 | } 96 | 97 | # Draw log(M)'s from bivariate Gaussian 98 | logMratio[s,1:2] ~ dmnorm.vcov(mu_logMratio[], T[,]) 99 | 100 | } 101 | 102 | mu_c2[1] ~ dnorm(0, 0.01) 103 | mu_c2[2] ~ dnorm(0, 0.01) 104 | sigma_c2[1] ~ dnorm(0, 0.01) I(0, ) 105 | sigma_c2[2] ~ dnorm(0, 0.01) I(0, ) 106 | lambda_c2[1] <- pow(sigma_c2[1], -2) 107 | lambda_c2[2] <- pow(sigma_c2[2], -2) 108 | 109 | 110 | mu_logMratio[1] ~ dnorm(0, 1) 111 | mu_logMratio[2] ~ dnorm(0, 1) 112 | lambda_logMratio[1] ~ dgamma(0.001,0.001) 113 | lambda_logMratio[2] ~ dgamma(0.001,0.001) 114 | sigma_logMratio[1] <- 1/sqrt(lambda_logMratio[1]) 115 | sigma_logMratio[2] <- 1/sqrt(lambda_logMratio[2]) 116 | 117 | rho[1] ~ dunif(-1,1) 118 | 119 | T[1,1] <- 1/lambda_logMratio[1] 120 | T[1,2] <- rho[1]*sigma_logMratio[1]*sigma_logMratio[2] 121 | T[2,1] <- rho[1]*sigma_logMratio[1]*sigma_logMratio[2] 122 | T[2,2] <- 1/lambda_logMratio[2] 123 | 124 | 125 | } 126 | -------------------------------------------------------------------------------- /R/Bayes_metad_group_corr3_R.txt: -------------------------------------------------------------------------------- 1 | # Bayesian estimation of meta-d/d for group correlation between 4 domains 2 | 3 | data { 4 | for (s in 1:nsubj) { 5 | # Type 1 counts for task 1 6 | N[s,1] <- sum(counts1[s,1:(nratings*2)]) 7 | S[s,1] <- sum(counts1[s,(nratings*2+1):(nratings*4)]) 8 | H[s,1] <- sum(counts1[s,(nratings*3+1):(nratings*4)]) 9 | M[s,1] <- sum(counts1[s,(nratings*2+1):(nratings*3)]) 10 | FA[s,1] <- sum(counts1[s,(nratings+1):(nratings*2)]) 11 | CR[s,1] <- sum(counts1[s,1:(nratings)]) 12 | 13 | # Type 1 counts for task 2 14 | N[s,2] <- 
sum(counts2[s,1:(nratings*2)]) 15 | S[s,2] <- sum(counts2[s,(nratings*2+1):(nratings*4)]) 16 | H[s,2] <- sum(counts2[s,(nratings*3+1):(nratings*4)]) 17 | M[s,2] <- sum(counts2[s,(nratings*2+1):(nratings*3)]) 18 | FA[s,2] <- sum(counts2[s,(nratings+1):(nratings*2)]) 19 | CR[s,2] <- sum(counts2[s,1:(nratings)]) 20 | 21 | # Type 1 counts for task 3 22 | N[s,3] <- sum(counts3[s,1:(nratings*2)]) 23 | S[s,3] <- sum(counts3[s,(nratings*2+1):(nratings*4)]) 24 | H[s,3] <- sum(counts3[s,(nratings*3+1):(nratings*4)]) 25 | M[s,3] <- sum(counts3[s,(nratings*2+1):(nratings*3)]) 26 | FA[s,3] <- sum(counts3[s,(nratings+1):(nratings*2)]) 27 | CR[s,3] <- sum(counts3[s,1:(nratings)]) 28 | } 29 | } 30 | 31 | model { 32 | for (s in 1:nsubj) { 33 | 34 | ## TYPE 2 SDT MODEL (META-D) 35 | # Multinomial likelihood for response counts ordered as c(nR_S1,nR_S2) 36 | 37 | counts1[s,1:(nratings)] ~ dmulti(prT[s,1:(nratings),1],CR[s,1]) 38 | counts1[s,(nratings+1):(nratings*2)] ~ dmulti(prT[s,(nratings+1):(nratings*2),1],FA[s,1]) 39 | counts1[s,(nratings*2+1):(nratings*3)] ~ dmulti(prT[s,(nratings*2+1):(nratings*3),1],M[s,1]) 40 | counts1[s,(nratings*3+1):(nratings*4)] ~ dmulti(prT[s,(nratings*3+1):(nratings*4),1],H[s,1]) 41 | 42 | counts2[s,1:(nratings)] ~ dmulti(prT[s,1:(nratings),2],CR[s,2]) 43 | counts2[s,(nratings+1):(nratings*2)] ~ dmulti(prT[s,(nratings+1):(nratings*2),2],FA[s,2]) 44 | counts2[s,(nratings*2+1):(nratings*3)] ~ dmulti(prT[s,(nratings*2+1):(nratings*3),2],M[s,2]) 45 | counts2[s,(nratings*3+1):(nratings*4)] ~ dmulti(prT[s,(nratings*3+1):(nratings*4),2],H[s,2]) 46 | 47 | counts3[s,1:(nratings)] ~ dmulti(prT[s,1:(nratings),3],CR[s,3]) 48 | counts3[s,(nratings+1):(nratings*2)] ~ dmulti(prT[s,(nratings+1):(nratings*2),3],FA[s,3]) 49 | counts3[s,(nratings*2+1):(nratings*3)] ~ dmulti(prT[s,(nratings*2+1):(nratings*3),3],M[s,3]) 50 | counts3[s,(nratings*3+1):(nratings*4)] ~ dmulti(prT[s,(nratings*3+1):(nratings*4),3],H[s,3]) 51 | 52 | 53 | for (task in 1:3) { 54 | 55 | # Means of SDT distributions] 56 | mu[s,task] <- Mratio[s,task]*d1[s,task] 57 | S2mu[s,task] <- mu[s,task]/2 58 | S1mu[s,task] <- -mu[s,task]/2 59 | 60 | # Calculate normalisation constants 61 | C_area_rS1[s,task] <- phi(c1[s,task] - S1mu[s,task]) 62 | I_area_rS1[s,task] <- phi(c1[s,task] - S2mu[s,task]) 63 | C_area_rS2[s,task] <- 1-phi(c1[s,task] - S2mu[s,task]) 64 | I_area_rS2[s,task] <- 1-phi(c1[s,task] - S1mu[s,task]) 65 | 66 | # Get nC_rS1 probs 67 | pr[s,1,task] <- phi(cS1[s,1,task] - S1mu[s,task])/C_area_rS1[s,task] 68 | for (k in 1:(nratings-2)) { 69 | pr[s,(k+1),task] <- (phi(cS1[s,(k+1),task] - S1mu[s,task])-phi(cS1[s,k,task] - S1mu[s,task]))/C_area_rS1[s,task] 70 | } 71 | pr[s,(nratings),task] <- (phi(c1[s,task] - S1mu[s,task])-phi(cS1[s,(nratings-1),task] - S1mu[s,task]))/C_area_rS1[s,task] 72 | 73 | # Get nI_rS2 probs 74 | pr[s,(nratings+1),task] <- ((1-phi(c1[s,task] - S1mu[s,task]))-(1-phi(cS2[s,1,task] - S1mu[s,task])))/I_area_rS2[s,task] 75 | for (k in 1:(nratings-2)) { 76 | pr[s,(nratings+1+k),task] <- ((1-phi(cS2[s,k,task] - S1mu[s,task]))-(1-phi(cS2[s,(k+1),task] - S1mu[s,task])))/I_area_rS2[s,task] 77 | } 78 | pr[s,(nratings*2),task] <- (1-phi(cS2[s,(nratings-1),task] - S1mu[s,task]))/I_area_rS2[s,task] 79 | 80 | # Get nI_rS1 probs 81 | pr[s,(nratings*2+1), task] <- phi(cS1[s,1,task] - S2mu[s,task])/I_area_rS1[s,task] 82 | for (k in 1:(nratings-2)) { 83 | pr[s,(nratings*2+1+k),task] <- (phi(cS1[s,(k+1),task] - S2mu[s,task])-phi(cS1[s,k,task] - S2mu[s,task]))/I_area_rS1[s,task] 84 | } 85 | pr[s,(nratings*3),task] <- 
(phi(c1[s,task] - S2mu[s,task])-phi(cS1[s,(nratings-1),task] - S2mu[s,task]))/I_area_rS1[s,task] 86 | 87 | # Get nC_rS2 probs 88 | pr[s,(nratings*3+1),task] <- ((1-phi(c1[s,task] - S2mu[s,task]))-(1-phi(cS2[s,1,task] - S2mu[s,task])))/C_area_rS2[s,task] 89 | for (k in 1:(nratings-2)) { 90 | pr[s,(nratings*3+1+k),task] <- ((1-phi(cS2[s,k,task] - S2mu[s,task]))-(1-phi(cS2[s,(k+1),task] - S2mu[s,task])))/C_area_rS2[s,task] 91 | } 92 | pr[s,(nratings*4),task] <- (1-phi(cS2[s,(nratings-1),task] - S2mu[s,task]))/C_area_rS2[s,task] 93 | 94 | # Avoid underflow of probabilities 95 | for (i in 1:(nratings*4)) { 96 | prT[s,i,task] <- ifelse(pr[s,i,task] < Tol, Tol, pr[s,i,task]) 97 | } 98 | 99 | # Specify ordered prior on criteria (bounded above and below by Type 1 c) 100 | for (j in 1:(nratings-1)) { 101 | cS1_raw[s,j,task] ~ dnorm(-mu_c2[task], lambda_c2[task]) T(,c1[s,task]) 102 | cS2_raw[s,j,task] ~ dnorm(mu_c2[task], lambda_c2[task]) T(c1[s,task],) 103 | } 104 | cS1[s,1:(nratings-1),task] <- sort(cS1_raw[s,1:(nratings-1),task]) 105 | cS2[s,1:(nratings-1),task] <- sort(cS2_raw[s,1:(nratings-1),task]) 106 | 107 | Mratio[s,task] <- exp(logMratio[s,task]) 108 | 109 | } 110 | 111 | # Draw log(M)'s from bivariate Gaussian 112 | logMratio[s,1:3] ~ dmnorm.vcov(mu_logMratio[], T[,]) 113 | 114 | } 115 | 116 | mu_c2[1] ~ dnorm(0, 0.01) 117 | mu_c2[2] ~ dnorm(0, 0.01) 118 | mu_c2[3] ~ dnorm(0, 0.01) 119 | sigma_c2[1] ~ dnorm(0, 0.01) I(0, ) 120 | sigma_c2[2] ~ dnorm(0, 0.01) I(0, ) 121 | sigma_c2[3] ~ dnorm(0, 0.01) I(0, ) 122 | lambda_c2[1] <- pow(sigma_c2[1], -2) 123 | lambda_c2[2] <- pow(sigma_c2[2], -2) 124 | lambda_c2[3] <- pow(sigma_c2[3], -2) 125 | 126 | 127 | mu_logMratio[1] ~ dnorm(0, 1) 128 | mu_logMratio[2] ~ dnorm(0, 1) 129 | mu_logMratio[3] ~ dnorm(0, 1) 130 | lambda_logMratio[1] ~ dgamma(0.001,0.001) 131 | lambda_logMratio[2] ~ dgamma(0.001,0.001) 132 | lambda_logMratio[3] ~ dgamma(0.001,0.001) 133 | sigma_logMratio[1] <- 1/sqrt(lambda_logMratio[1]) 134 | sigma_logMratio[2] <- 1/sqrt(lambda_logMratio[2]) 135 | sigma_logMratio[3] <- 1/sqrt(lambda_logMratio[3]) 136 | 137 | rho[1] ~ dunif(-1,1) 138 | rho[2] ~ dunif(-1,1) 139 | rho[3] ~ dunif(-1,1) 140 | 141 | T[1,1] <- 1/lambda_logMratio[1] 142 | T[1,2] <- rho[1]*sigma_logMratio[1]*sigma_logMratio[2] 143 | T[1,3] <- rho[2]*sigma_logMratio[1]*sigma_logMratio[3] 144 | T[2,1] <- rho[1]*sigma_logMratio[1]*sigma_logMratio[2] 145 | T[2,2] <- 1/lambda_logMratio[2] 146 | T[2,3] <- rho[3]*sigma_logMratio[2]*sigma_logMratio[3] 147 | T[3,1] <- rho[2]*sigma_logMratio[1]*sigma_logMratio[3] 148 | T[3,2] <- rho[3]*sigma_logMratio[2]*sigma_logMratio[3] 149 | T[3,3] <- 1/lambda_logMratio[3] 150 | 151 | 152 | } 153 | -------------------------------------------------------------------------------- /R/Bayes_metad_group_corr4_R.txt: -------------------------------------------------------------------------------- 1 | # Bayesian estimation of meta-d/d for group correlation between 4 domains 2 | 3 | data { 4 | for (s in 1:nsubj) { 5 | # Type 1 counts for task 1 6 | N[s,1] <- sum(counts1[s,1:(nratings*2)]) 7 | S[s,1] <- sum(counts1[s,(nratings*2+1):(nratings*4)]) 8 | H[s,1] <- sum(counts1[s,(nratings*3+1):(nratings*4)]) 9 | M[s,1] <- sum(counts1[s,(nratings*2+1):(nratings*3)]) 10 | FA[s,1] <- sum(counts1[s,(nratings+1):(nratings*2)]) 11 | CR[s,1] <- sum(counts1[s,1:(nratings)]) 12 | 13 | # Type 1 counts for task 2 14 | N[s,2] <- sum(counts2[s,1:(nratings*2)]) 15 | S[s,2] <- sum(counts2[s,(nratings*2+1):(nratings*4)]) 16 | H[s,2] <- 
sum(counts2[s,(nratings*3+1):(nratings*4)]) 17 | M[s,2] <- sum(counts2[s,(nratings*2+1):(nratings*3)]) 18 | FA[s,2] <- sum(counts2[s,(nratings+1):(nratings*2)]) 19 | CR[s,2] <- sum(counts2[s,1:(nratings)]) 20 | 21 | # Type 1 counts for task 3 22 | N[s,3] <- sum(counts3[s,1:(nratings*2)]) 23 | S[s,3] <- sum(counts3[s,(nratings*2+1):(nratings*4)]) 24 | H[s,3] <- sum(counts3[s,(nratings*3+1):(nratings*4)]) 25 | M[s,3] <- sum(counts3[s,(nratings*2+1):(nratings*3)]) 26 | FA[s,3] <- sum(counts3[s,(nratings+1):(nratings*2)]) 27 | CR[s,3] <- sum(counts3[s,1:(nratings)]) 28 | 29 | # Type 1 counts for task 4 30 | N[s,4] <- sum(counts4[s,1:(nratings*2)]) 31 | S[s,4] <- sum(counts4[s,(nratings*2+1):(nratings*4)]) 32 | H[s,4] <- sum(counts4[s,(nratings*3+1):(nratings*4)]) 33 | M[s,4] <- sum(counts4[s,(nratings*2+1):(nratings*3)]) 34 | FA[s,4] <- sum(counts4[s,(nratings+1):(nratings*2)]) 35 | CR[s,4] <- sum(counts4[s,1:(nratings)]) 36 | } 37 | } 38 | 39 | model { 40 | for (s in 1:nsubj) { 41 | 42 | ## TYPE 2 SDT MODEL (META-D) 43 | # Multinomial likelihood for response counts ordered as c(nR_S1,nR_S2) 44 | 45 | counts1[s,1:(nratings)] ~ dmulti(prT[s,1:(nratings),1],CR[s,1]) 46 | counts1[s,(nratings+1):(nratings*2)] ~ dmulti(prT[s,(nratings+1):(nratings*2),1],FA[s,1]) 47 | counts1[s,(nratings*2+1):(nratings*3)] ~ dmulti(prT[s,(nratings*2+1):(nratings*3),1],M[s,1]) 48 | counts1[s,(nratings*3+1):(nratings*4)] ~ dmulti(prT[s,(nratings*3+1):(nratings*4),1],H[s,1]) 49 | 50 | counts2[s,1:(nratings)] ~ dmulti(prT[s,1:(nratings),2],CR[s,2]) 51 | counts2[s,(nratings+1):(nratings*2)] ~ dmulti(prT[s,(nratings+1):(nratings*2),2],FA[s,2]) 52 | counts2[s,(nratings*2+1):(nratings*3)] ~ dmulti(prT[s,(nratings*2+1):(nratings*3),2],M[s,2]) 53 | counts2[s,(nratings*3+1):(nratings*4)] ~ dmulti(prT[s,(nratings*3+1):(nratings*4),2],H[s,2]) 54 | 55 | counts3[s,1:(nratings)] ~ dmulti(prT[s,1:(nratings),3],CR[s,3]) 56 | counts3[s,(nratings+1):(nratings*2)] ~ dmulti(prT[s,(nratings+1):(nratings*2),3],FA[s,3]) 57 | counts3[s,(nratings*2+1):(nratings*3)] ~ dmulti(prT[s,(nratings*2+1):(nratings*3),3],M[s,3]) 58 | counts3[s,(nratings*3+1):(nratings*4)] ~ dmulti(prT[s,(nratings*3+1):(nratings*4),3],H[s,3]) 59 | 60 | counts4[s,1:(nratings)] ~ dmulti(prT[s,1:(nratings),4],CR[s,4]) 61 | counts4[s,(nratings+1):(nratings*2)] ~ dmulti(prT[s,(nratings+1):(nratings*2),4],FA[s,4]) 62 | counts4[s,(nratings*2+1):(nratings*3)] ~ dmulti(prT[s,(nratings*2+1):(nratings*3),4],M[s,4]) 63 | counts4[s,(nratings*3+1):(nratings*4)] ~ dmulti(prT[s,(nratings*3+1):(nratings*4),4],H[s,4]) 64 | 65 | for (task in 1:4) { 66 | 67 | # Means of SDT distributions] 68 | mu[s,task] <- Mratio[s,task]*d1[s,task] 69 | S2mu[s,task] <- mu[s,task]/2 70 | S1mu[s,task] <- -mu[s,task]/2 71 | 72 | # Calculate normalisation constants 73 | C_area_rS1[s,task] <- phi(c1[s,task] - S1mu[s,task]) 74 | I_area_rS1[s,task] <- phi(c1[s,task] - S2mu[s,task]) 75 | C_area_rS2[s,task] <- 1-phi(c1[s,task] - S2mu[s,task]) 76 | I_area_rS2[s,task] <- 1-phi(c1[s,task] - S1mu[s,task]) 77 | 78 | # Get nC_rS1 probs 79 | pr[s,1,task] <- phi(cS1[s,1,task] - S1mu[s,task])/C_area_rS1[s,task] 80 | for (k in 1:(nratings-2)) { 81 | pr[s,(k+1),task] <- (phi(cS1[s,(k+1),task] - S1mu[s,task])-phi(cS1[s,k,task] - S1mu[s,task]))/C_area_rS1[s,task] 82 | } 83 | pr[s,(nratings),task] <- (phi(c1[s,task] - S1mu[s,task])-phi(cS1[s,(nratings-1),task] - S1mu[s,task]))/C_area_rS1[s,task] 84 | 85 | # Get nI_rS2 probs 86 | pr[s,(nratings+1),task] <- ((1-phi(c1[s,task] - S1mu[s,task]))-(1-phi(cS2[s,1,task] - 
S1mu[s,task])))/I_area_rS2[s,task] 87 | for (k in 1:(nratings-2)) { 88 | pr[s,(nratings+1+k),task] <- ((1-phi(cS2[s,k,task] - S1mu[s,task]))-(1-phi(cS2[s,(k+1),task] - S1mu[s,task])))/I_area_rS2[s,task] 89 | } 90 | pr[s,(nratings*2),task] <- (1-phi(cS2[s,(nratings-1),task] - S1mu[s,task]))/I_area_rS2[s,task] 91 | 92 | # Get nI_rS1 probs 93 | pr[s,(nratings*2+1), task] <- phi(cS1[s,1,task] - S2mu[s,task])/I_area_rS1[s,task] 94 | for (k in 1:(nratings-2)) { 95 | pr[s,(nratings*2+1+k),task] <- (phi(cS1[s,(k+1),task] - S2mu[s,task])-phi(cS1[s,k,task] - S2mu[s,task]))/I_area_rS1[s,task] 96 | } 97 | pr[s,(nratings*3),task] <- (phi(c1[s,task] - S2mu[s,task])-phi(cS1[s,(nratings-1),task] - S2mu[s,task]))/I_area_rS1[s,task] 98 | 99 | # Get nC_rS2 probs 100 | pr[s,(nratings*3+1),task] <- ((1-phi(c1[s,task] - S2mu[s,task]))-(1-phi(cS2[s,1,task] - S2mu[s,task])))/C_area_rS2[s,task] 101 | for (k in 1:(nratings-2)) { 102 | pr[s,(nratings*3+1+k),task] <- ((1-phi(cS2[s,k,task] - S2mu[s,task]))-(1-phi(cS2[s,(k+1),task] - S2mu[s,task])))/C_area_rS2[s,task] 103 | } 104 | pr[s,(nratings*4),task] <- (1-phi(cS2[s,(nratings-1),task] - S2mu[s,task]))/C_area_rS2[s,task] 105 | 106 | # Avoid underflow of probabilities 107 | for (i in 1:(nratings*4)) { 108 | prT[s,i,task] <- ifelse(pr[s,i,task] < Tol, Tol, pr[s,i,task]) 109 | } 110 | 111 | # Specify ordered prior on criteria (bounded above and below by Type 1 c) 112 | for (j in 1:(nratings-1)) { 113 | cS1_raw[s,j,task] ~ dnorm(-mu_c2[task], lambda_c2[task]) T(,c1[s,task]) 114 | cS2_raw[s,j,task] ~ dnorm(mu_c2[task], lambda_c2[task]) T(c1[s,task],) 115 | } 116 | cS1[s,1:(nratings-1),task] <- sort(cS1_raw[s,1:(nratings-1),task]) 117 | cS2[s,1:(nratings-1),task] <- sort(cS2_raw[s,1:(nratings-1),task]) 118 | 119 | Mratio[s,task] <- exp(logMratio[s,task]) 120 | 121 | } 122 | 123 | # Draw log(M)'s from bivariate Gaussian 124 | logMratio[s,1:4] ~ dmnorm.vcov(mu_logMratio[], T[,]) 125 | 126 | } 127 | 128 | mu_c2[1] ~ dnorm(0, 0.01) 129 | mu_c2[2] ~ dnorm(0, 0.01) 130 | mu_c2[3] ~ dnorm(0, 0.01) 131 | mu_c2[4] ~ dnorm(0, 0.01) 132 | sigma_c2[1] ~ dnorm(0, 0.01) I(0, ) 133 | sigma_c2[2] ~ dnorm(0, 0.01) I(0, ) 134 | sigma_c2[3] ~ dnorm(0, 0.01) I(0, ) 135 | sigma_c2[4] ~ dnorm(0, 0.01) I(0, ) 136 | lambda_c2[1] <- pow(sigma_c2[1], -2) 137 | lambda_c2[2] <- pow(sigma_c2[2], -2) 138 | lambda_c2[3] <- pow(sigma_c2[3], -2) 139 | lambda_c2[4] <- pow(sigma_c2[4], -2) 140 | 141 | 142 | mu_logMratio[1] ~ dnorm(0, 1) 143 | mu_logMratio[2] ~ dnorm(0, 1) 144 | mu_logMratio[3] ~ dnorm(0, 1) 145 | mu_logMratio[4] ~ dnorm(0, 1) 146 | lambda_logMratio[1] ~ dgamma(0.001,0.001) 147 | lambda_logMratio[2] ~ dgamma(0.001,0.001) 148 | lambda_logMratio[3] ~ dgamma(0.001,0.001) 149 | lambda_logMratio[4] ~ dgamma(0.001,0.001) 150 | sigma_logMratio[1] <- 1/sqrt(lambda_logMratio[1]) 151 | sigma_logMratio[2] <- 1/sqrt(lambda_logMratio[2]) 152 | sigma_logMratio[3] <- 1/sqrt(lambda_logMratio[3]) 153 | sigma_logMratio[4] <- 1/sqrt(lambda_logMratio[4]) 154 | 155 | rho[1] ~ dunif(-1,1) 156 | rho[2] ~ dunif(-1,1) 157 | rho[3] ~ dunif(-1,1) 158 | rho[4] ~ dunif(-1,1) 159 | rho[5] ~ dunif(-1,1) 160 | rho[6] ~ dunif(-1,1) 161 | 162 | T[1,1] <- 1/lambda_logMratio[1] 163 | T[1,2] <- rho[1]*sigma_logMratio[1]*sigma_logMratio[2] 164 | T[1,3] <- rho[2]*sigma_logMratio[1]*sigma_logMratio[3] 165 | T[1,4] <- rho[3]*sigma_logMratio[1]*sigma_logMratio[4] 166 | T[2,1] <- rho[1]*sigma_logMratio[1]*sigma_logMratio[2] 167 | T[2,2] <- 1/lambda_logMratio[2] 168 | T[2,3] <- rho[4]*sigma_logMratio[2]*sigma_logMratio[3] 169 | 
T[2,4] <- rho[5]*sigma_logMratio[2]*sigma_logMratio[4] 170 | T[3,1] <- rho[2]*sigma_logMratio[1]*sigma_logMratio[3] 171 | T[3,2] <- rho[4]*sigma_logMratio[2]*sigma_logMratio[3] 172 | T[3,3] <- 1/lambda_logMratio[3] 173 | T[3,4] <- rho[6]*sigma_logMratio[3]*sigma_logMratio[4] 174 | T[4,1] <- rho[3]*sigma_logMratio[1]*sigma_logMratio[4] 175 | T[4,2] <- rho[5]*sigma_logMratio[2]*sigma_logMratio[4] 176 | T[4,3] <- rho[6]*sigma_logMratio[3]*sigma_logMratio[4] 177 | T[4,4] <- 1/lambda_logMratio[4] 178 | 179 | 180 | } 181 | -------------------------------------------------------------------------------- /R/Bayes_metad_group_regress_nodp.txt: -------------------------------------------------------------------------------- 1 | # Bayesian estimation of meta-d/d for group 2 | 3 | data { 4 | for (s in 1:nsubj) { 5 | # Type 1 counts 6 | N[s] <- sum(counts[s,1:nratings*2]) 7 | S[s] <- sum(counts[s,(nratings*2+1):(nratings*4)]) 8 | H[s] <- sum(counts[s,(nratings*3+1):(nratings*4)]) 9 | M[s] <- sum(counts[s,(nratings*2+1):(nratings*3)]) 10 | FA[s] <- sum(counts[s,(nratings+1):(nratings*2)]) 11 | CR[s] <- sum(counts[s,1:(nratings)]) 12 | } 13 | } 14 | 15 | model { 16 | for (s in 1:nsubj) { 17 | 18 | ## TYPE 2 SDT MODEL (META-D) 19 | # Multinomial likelihood for response counts ordered as c(nR_S1,nR_S2) 20 | counts[s,1:(nratings)] ~ dmulti(prT[s,1:(nratings)],CR[s]) 21 | counts[s,(nratings+1):(nratings*2)] ~ dmulti(prT[s,(nratings+1):(nratings*2)],FA[s]) 22 | counts[s,(nratings*2+1):(nratings*3)] ~ dmulti(prT[s,(nratings*2+1):(nratings*3)],M[s]) 23 | counts[s,(nratings*3+1):(nratings*4)] ~ dmulti(prT[s,(nratings*3+1):(nratings*4)],H[s]) 24 | 25 | # Means of SDT distributions] 26 | mu[s] <- Mratio[s]*d1[s] 27 | S2mu[s] <- mu[s]/2 28 | S1mu[s] <- -mu[s]/2 29 | 30 | # Calculate normalisation constants 31 | C_area_rS1[s] <- phi(c1[s] - S1mu[s]) 32 | I_area_rS1[s] <- phi(c1[s] - S2mu[s]) 33 | C_area_rS2[s] <- 1-phi(c1[s] - S2mu[s]) 34 | I_area_rS2[s] <- 1-phi(c1[s] - S1mu[s]) 35 | 36 | # Get nC_rS1 probs 37 | pr[s,1] <- phi(cS1[s,1] - S1mu[s])/C_area_rS1[s] 38 | for (k in 1:(nratings-2)) { 39 | pr[s,k+1] <- (phi(cS1[s,(k+1)] - S1mu[s])-phi(cS1[s,k] - S1mu[s]))/C_area_rS1[s] 40 | } 41 | pr[s,nratings] <- (phi(c1[s] - S1mu[s])-phi(cS1[s,(nratings-1)] - S1mu[s]))/C_area_rS1[s] 42 | 43 | # Get nI_rS2 probs 44 | pr[s,(nratings+1)] <- ((1-phi(c1[s] - S1mu[s]))-(1-phi(cS2[s,1] - S1mu[s])))/I_area_rS2[s] 45 | for (k in 1:(nratings-2)) { 46 | pr[s,(nratings+1+k)] <- ((1-phi(cS2[s,k] - S1mu[s]))-(1-phi(cS2[s,(k+1)] - S1mu[s])))/I_area_rS2[s] 47 | } 48 | pr[s,(nratings*2)] <- (1-phi(cS2[s,(nratings-1)] - S1mu[s]))/I_area_rS2[s] 49 | 50 | # Get nI_rS1 probs 51 | pr[s,(nratings*2+1)] <- phi(cS1[s,1] - S2mu[s])/I_area_rS1[s] 52 | for (k in 1:(nratings-2)) { 53 | pr[s,(nratings*2+1+k)] <- (phi(cS1[s,(k+1)] - S2mu[s])-phi(cS1[s,k] - S2mu[s]))/I_area_rS1[s] 54 | } 55 | pr[s,(nratings*3)] <- (phi(c1[s] - S2mu[s])-phi(cS1[s,(nratings-1)] - S2mu[s]))/I_area_rS1[s] 56 | 57 | # Get nC_rS2 probs 58 | pr[s,(nratings*3+1)] <- ((1-phi(c1[s] - S2mu[s]))-(1-phi(cS2[s,1] - S2mu[s])))/C_area_rS2[s] 59 | for (k in 1:(nratings-2)) { 60 | pr[s,(nratings*3+1+k)] <- ((1-phi(cS2[s,k] - S2mu[s]))-(1-phi(cS2[s,k+1] - S2mu[s])))/C_area_rS2[s] 61 | } 62 | pr[s,(nratings*4)] <- (1-phi(cS2[s,(nratings-1)] - S2mu[s]))/C_area_rS2[s] 63 | 64 | # Avoid underflow of probabilities 65 | for (i in 1:(nratings*4)) { 66 | prT[s,i] <- ifelse(pr[s,i] < Tol, Tol, pr[s,i]) 67 | } 68 | 69 | # Specify ordered prior on criteria (bounded above and below by Type 1 c) 70 | 
for (j in 1:(nratings-1)) { 71 | cS1_raw[s,j] ~ dnorm(-mu_c2, lambda_c2) T(,c1[s]) 72 | cS2_raw[s,j] ~ dnorm(mu_c2, lambda_c2) T(c1[s],) 73 | } 74 | cS1[s,1:(nratings-1)] <- sort(cS1_raw[s, ]) 75 | cS2[s,1:(nratings-1)] <- sort(cS2_raw[s, ]) 76 | 77 | delta[s] ~ dt(0, lambda_delta, 5) 78 | logMratio[s] <- mu_logMratio + mu_beta1*cov[s] + epsilon_logMratio*delta[s] 79 | Mratio[s] <- exp(logMratio[s]) 80 | 81 | } 82 | 83 | # hyperpriors 84 | mu_c2 ~ dnorm(0, 0.01) 85 | sigma_c2 ~ dnorm(0, 0.01) I(0, ) 86 | lambda_c2 <- pow(sigma_c2, -2) 87 | 88 | mu_logMratio ~ dnorm(0, 1) 89 | mu_beta1 ~ dnorm(0, 1) 90 | 91 | sigma_delta ~ dnorm(0, 1) I(0,) 92 | lambda_delta <- pow(sigma_delta, -2) 93 | epsilon_logMratio ~ dbeta(1,1) 94 | sigma_logMratio <- abs(epsilon_logMratio)*sigma_delta 95 | 96 | } 97 | -------------------------------------------------------------------------------- /R/Bayes_metad_indiv_R.txt: -------------------------------------------------------------------------------- 1 | # Bayesian estimation of meta-d for a single subject 2 | 3 | data { 4 | # Type 1 counts 5 | N <- sum(counts[1:(nratings*2)]) 6 | S <- sum(counts[(nratings*2+1):(nratings*4)]) 7 | H <- sum(counts[(nratings*3+1):(nratings*4)]) 8 | M <- sum(counts[(nratings*2+1):(nratings*3)]) 9 | FA <- sum(counts[(nratings+1):(nratings*2)]) 10 | CR <- sum(counts[1:(nratings)]) 11 | } 12 | 13 | model { 14 | 15 | ## TYPE 1 SDT BINOMIAL MODEL 16 | H ~ dbin(h,S) 17 | FA ~ dbin(f,N) 18 | h <- phi(d1/2-c1) 19 | f <- phi(-d1/2-c1) 20 | 21 | # Type 1 priors 22 | c1 ~ dnorm(0, 2) 23 | d1 ~ dnorm(0, 0.5) 24 | 25 | ## TYPE 2 SDT MODEL (META-D) 26 | # Multinomial likelihood for response counts ordered as c(nR_S1,nR_S2) 27 | counts[1:(nratings)] ~ dmulti(prT[1:(nratings)],CR) 28 | counts[(nratings+1):(nratings*2)] ~ dmulti(prT[(nratings+1):(nratings*2)],FA) 29 | counts[(nratings*2+1):(nratings*3)] ~ dmulti(prT[(nratings*2+1):(nratings*3)],M) 30 | counts[(nratings*3+1):(nratings*4)] ~ dmulti(prT[(nratings*3+1):(nratings*4)],H) 31 | 32 | # Means of SDT distributions 33 | S2mu <- meta_d/2 34 | S1mu <- -meta_d/2 35 | 36 | # Calculate normalisation constants 37 | C_area_rS1 <- phi(c1 - S1mu) 38 | I_area_rS1 <- phi(c1 - S2mu) 39 | C_area_rS2 <- 1-phi(c1 - S2mu) 40 | I_area_rS2 <- 1-phi(c1 - S1mu) 41 | 42 | # Get nC_rS1 probs 43 | pr[1] <- phi(cS1[1] - S1mu)/C_area_rS1 44 | for (k in 1:(nratings-2)) { 45 | pr[k+1] <- (phi(cS1[k+1] - S1mu)-phi(cS1[k] - S1mu))/C_area_rS1 46 | } 47 | pr[(nratings)] <- (phi(c1 - S1mu)-phi(cS1[(nratings-1)] - S1mu))/C_area_rS1 48 | 49 | # Get nI_rS2 probs 50 | pr[(nratings+1)] <- ((1-phi(c1 - S1mu))-(1-phi(cS2[1] - S1mu)))/I_area_rS2 51 | for (k in 1:(nratings-2)) { 52 | pr[nratings+1+k] <- ((1-phi(cS2[k] - S1mu))-(1-phi(cS2[k+1] - S1mu)))/I_area_rS2 53 | } 54 | pr[(nratings*2)] <- (1-phi(cS2[(nratings-1)] - S1mu))/I_area_rS2 55 | 56 | # Get nI_rS1 probs 57 | pr[(nratings*2+1)] <- phi(cS1[1] - S2mu)/I_area_rS1 58 | for (k in 1:(nratings-2)) { 59 | pr[(nratings*2+1+k)] <- (phi(cS1[k+1] - S2mu)-phi(cS1[k] - S2mu))/I_area_rS1 60 | } 61 | pr[(nratings*3)] <- (phi(c1 - S2mu)-phi(cS1[(nratings-1)] - S2mu))/I_area_rS1 62 | 63 | # Get nC_rS2 probs 64 | pr[(nratings*3+1)] <- ((1-phi(c1 - S2mu))-(1-phi(cS2[1] - S2mu)))/C_area_rS2 65 | for (k in 1:(nratings-2)) { 66 | pr[(nratings*3+1+k)] <- ((1-phi(cS2[k] - S2mu))-(1-phi(cS2[k+1] - S2mu)))/C_area_rS2 67 | } 68 | pr[(nratings*4)] <- (1-phi(cS2[(nratings-1)] - S2mu))/C_area_rS2 69 | 70 | # Avoid underflow of probabilities 71 | for (i in 1:(nratings*4)) { 72 | prT[i] <- ifelse(pr[i] < 
Tol, Tol, pr[i]) 73 | } 74 | 75 | # Specify ordered prior on criteria (bounded above and below by Type 1 c1) 76 | for (j in 1:(nratings-1)) { 77 | cS1_raw[j] ~ dnorm(0,2) I(,c1-Tol) 78 | cS2_raw[j] ~ dnorm(0,2) I(c1+Tol,) 79 | } 80 | cS1[1:(nratings-1)] <- sort(cS1_raw) 81 | cS2[1:(nratings-1)] <- sort(cS2_raw) 82 | 83 | # Type 2 priors 84 | meta_d ~ dnorm(d1,0.5) 85 | 86 | } -------------------------------------------------------------------------------- /R/example_metad_2wayANOVA.R: -------------------------------------------------------------------------------- 1 | ##################################### 2 | 3 | # Estimate metacognitive efficiency (Mratio) at the group level 4 | # 5 | # Adaptation in R of matlab function 'fit_meta_d_mcmc_groupCorr.m' 6 | # by Steve Fleming 7 | # for more details see Fleming (2017). HMeta-d: hierarchical Bayesian 8 | # estimation of metacognitive efficiency from confidence ratings. 9 | # 10 | # you need to install the following packing before using the function: 11 | # tidyverse 12 | # magrittr 13 | # reshape2 14 | # rjags 15 | # coda 16 | # lattice 17 | # broom 18 | # ggpubr 19 | # ggmcmc 20 | # 21 | # nR_S1 and nR_S2 should be two lists of each nR_S1 or nR_S2 per task 22 | # model output is a large mcmc list and two vectors for d1 and c1 23 | # 24 | # Author: Nicolas Legrand nicolas.legrand@cfin.au.dk 25 | 26 | ##################################### 27 | 28 | ## Packages 29 | library(tidyverse) 30 | library(magrittr) 31 | library(reshape2) 32 | library(rjags) 33 | library(coda) 34 | library(lattice) 35 | library(broom) 36 | library(ggpubr) 37 | library(ggmcmc) 38 | 39 | 40 | ######################################################################### 41 | # Create data for 5 participants ---------------------------------------- 42 | # Simulate responses from 5 participants in a 2x2 repeated measures design 43 | # Condition 1: (1, 1, 0, 0) - Condition 2: (1, 0, 1, 0) 44 | 45 | nR_S1 = list( 46 | list(c(23., 9., 7., 5., 3., 3., 0., 0.), c(10., 10., 8., 12., 6., 2., 2., 0.), 47 | c(11., 8., 11., 8., 7., 3., 1., 1.), c(20., 7., 7., 6., 6., 2., 1., 1.)), 48 | list(c(16., 13., 10., 4., 7., 0., 0., 0.), c(11., 9., 12., 13., 3., 2., 0., 0.), 49 | c(16., 14., 6., 3., 7., 3., 1., 0.), c(22., 8., 5., 6., 5., 3., 1., 0.)), 50 | list(c(16., 7., 8., 9., 7., 2., 1., 0.), c(14., 8., 10., 9., 6., 2., 1., 0.), 51 | c(17., 12., 6., 7., 3., 4., 0., 1.), c(21., 6., 8., 9., 4., 1., 1., 0.)), 52 | list(c(21., 8., 9., 4., 4., 3., 1., 0.), c(11., 16., 6., 9., 4., 3., 0., 1.), 53 | c(10., 8., 9., 11., 9., 2., 1., 0.), c(15., 7., 13., 7., 4., 4., 0., 0.)), 54 | list(c(15., 12., 7., 8., 4., 2., 2., 0.), c(5., 16., 8., 11., 4., 4., 2., 0.), 55 | c(9., 10., 12., 9., 8., 1., 0., 1.), c(25., 7., 5., 6., 6., 1., 0., 0.)) 56 | ) 57 | 58 | nR_S2 = list( 59 | list(c(0., 1., 4., 6., 9., 6., 10., 14.), c(0., 1., 2., 3., 8., 10., 9., 17.), 60 | c(0., 2., 3., 2., 7., 10., 6., 20.), c(0., 1., 2., 3., 6., 8., 9., 21.)), 61 | list(c(0., 1., 1., 6., 9., 8., 10., 15.), c(0., 2., 3., 5., 11., 15., 9., 5.), 62 | c(0., 1., 2., 4., 3., 8., 8., 24.), c(0., 0., 1., 5., 2., 10., 8., 24.)), 63 | list(c(0., 1., 3., 2., 12., 4., 9., 19.), c(1., 1., 0., 5., 12., 14., 6., 11.), 64 | c(0., 1., 4., 4., 8., 6., 11., 16.), c(1., 2., 1., 6., 5., 8., 10., 17.)), 65 | list(c(0., 0., 3., 5., 3., 11., 6., 22.), c(1., 0., 3., 4., 13., 7., 13., 9.), 66 | c(0., 0., 4., 1., 4., 12., 12., 17.), c(1., 2., 0., 5., 5., 14., 8., 15.)), 67 | list(c(0., 1., 3., 4., 6., 7., 16., 13.), c(0., 0., 1., 5., 8., 10., 12., 14.), 68 | c(0., 
0., 3., 2., 2., 7., 15., 21.), c(0., 2., 1., 6., 10., 10., 10., 11.)) 69 | ) 70 | 71 | # Model ------------------------------------------------------------------- 72 | 73 | # Fit all data at once 74 | source("fit_metad_2wayANOVA.R") 75 | output <- metad_2wayANOVA(nR_S1 = nR_S1, nR_S2 = nR_S2) 76 | 77 | ## Summary stats -------------------------------------------------- 78 | 79 | # Mean values 80 | Value <- summary(output) 81 | stat <- data.frame(mean = Value[["statistics"]][, "Mean"]) 82 | stat %<>% 83 | rownames_to_column(var = "name") 84 | 85 | # Rhat 86 | Value <- gelman.diag(output, confidence = 0.95) 87 | Rhat <- data.frame(conv = Value$psrf) 88 | 89 | # HDI 90 | HDI <- data.frame(HPDinterval(output, prob = 0.95)) 91 | HDI %<>% 92 | rownames_to_column(var = "name") 93 | 94 | # All values in the same dataframe 95 | Fit <- stat %>% 96 | cbind(lower = HDI$lower, 97 | upper = HDI$upper) 98 | 99 | ## Plots --------------------------------------------------------- 100 | 101 | # Plot trace mcmc 102 | traceplot(output) 103 | 104 | # mcmc values in df for plot posterior distributions 105 | mcmc.sample <- ggs(output) 106 | 107 | # Plot posterior distribution for rho value 108 | PlotCondition1 <- mcmc.sample %>% 109 | filter(Parameter == "muBd_Condition1") %>% 110 | ggplot(aes(value)) + 111 | geom_histogram(fill = "blue", colour = "grey", alpha = 0.5, bins = 100) + 112 | geom_vline(xintercept = stat$mean[stat$name == "muBd_Condition1"],linetype="dashed", linewidth = 1.) + 113 | geom_vline(xintercept = 0, linewidth = 1.5, color='red') + 114 | annotate("segment", x= HDI$lower[HDI$name == "muBd_Condition1"], 115 | y = 50, xend = HDI$upper[HDI$name == "muBd_Condition1"], 116 | yend = 50, colour = "white", linewidth = 2.5) + 117 | ylab("Sample count") + 118 | xlab(expression(paste("muBd_Condition1"))) 119 | PlotCondition1 120 | 121 | # Plot posterior distribution for rho value 122 | PlotCondition2 <- mcmc.sample %>% 123 | filter(Parameter == "muBd_Condition2") %>% 124 | ggplot(aes(value)) + 125 | geom_histogram(fill = "blue", colour = "grey", alpha = 0.5, bins = 100) + 126 | geom_vline(xintercept = stat$mean[stat$name == "muBd_Condition2"],linetype="dashed", linewidth = 1.) + 127 | geom_vline(xintercept = 0, linewidth = 1.5, color='red') + 128 | annotate("segment", x = HDI$lower[HDI$name == "muBd_Condition2"], 129 | y = 50, xend = HDI$upper[HDI$name == "muBd_Condition2"], 130 | yend = 50, colour = "white", linewidth = 2.5) + 131 | ylab("Sample count") + 132 | xlab(expression(paste("muBd_Condition2"))) 133 | PlotCondition2 134 | 135 | # Plot posterior distribution for rho value 136 | PlotInteraction <- mcmc.sample %>% 137 | filter(Parameter == "muBd_interaction") %>% 138 | ggplot(aes(value)) + 139 | geom_histogram(fill = "blue", colour = "grey", alpha = 0.5, bins = 100) + 140 | geom_vline(xintercept = stat$mean[stat$name == "muBd_interaction"],linetype="dashed", linewidth = 1.) 
+ 141 | geom_vline(xintercept = 0, linewidth = 1.5, color='red') + 142 | annotate("segment", x = HDI$lower[HDI$name == "muBd_interaction"], 143 | y = 50, xend = HDI$upper[HDI$name == "muBd_interaction"], 144 | yend = 50, colour = "white", linewidth = 2.5) + 145 | ylab("Sample count") + 146 | xlab(expression(paste("muBd_interaction"))) 147 | PlotInteraction 148 | 149 | # Save samples ------------------------------------------------------------ 150 | 151 | df = rbind(data.frame(output[[1]]), data.frame(output[[2]]), data.frame(output[[3]])) 152 | 153 | write.table(df, file = 'posterior.txt', append = FALSE, sep = "\t", dec = ".", 154 | row.names = TRUE, col.names = TRUE) 155 | -------------------------------------------------------------------------------- /R/example_metad_group.R: -------------------------------------------------------------------------------- 1 | ##################################### 2 | 3 | # Example of hierarchical metacognitive efficiency (Mratio) 4 | # at the group level 5 | # exemple of trace plots and posterior distribution plots 6 | # using the Function_metad_group.R 7 | # 8 | # AM 2018 9 | 10 | ##################################### 11 | 12 | ## Packages ---------------------------------------------------------------- 13 | library(tidyverse) 14 | library(magrittr) 15 | library(reshape2) 16 | library(rjags) 17 | library(coda) 18 | library(lattice) 19 | library(broom) 20 | library(ggpubr) 21 | library(ggmcmc) 22 | 23 | ## Create data for 3 participants ------------------------------------------------------------- 24 | 25 | nR_S1 <- data.frame( 26 | p1 = c(52,32,35,37,26,12,4,2), 27 | p2 = c(27,39,46,52,14,10,9,3), 28 | p3 = c(112,30,15,17,17,9,0,0)) 29 | nR_S2 <- data.frame( 30 | p1 = c(2,5,15,22,33,38,40,45), 31 | p2 = c(2,4,9,21,57,48,34,25), 32 | p3 = c(0,1,7,18,12,17,27,118)) 33 | 34 | ## Hierarchical meta_d group function ------------------------------------------------------ 35 | 36 | # List creation for model inputs 37 | nR_S1 <- list(nR_S1) 38 | nR_S2 <- list(nR_S2) 39 | 40 | # Fit all data at once 41 | source("fit_metad_group.R") 42 | output <- fit_metad_group(nR_S1 = nR_S1, nR_S2 = nR_S2) 43 | 44 | ## Model output ------------------------------------------------------------ 45 | 46 | # Values 47 | Value <- summary(output) 48 | stat <- data.frame(mean = Value$statistics[,"Mean"]) 49 | stat %<>% 50 | rownames_to_column(var = "name") 51 | 52 | # Rhat 53 | Value <- gelman.diag(output, confidence = 0.95) 54 | Rhat <- data.frame(conv = Value$psrf) 55 | 56 | # HDI 57 | HDI <- data.frame(HPDinterval(output, prob = 0.95)) 58 | HDI %<>% 59 | rownames_to_column(var = "name") 60 | 61 | # All values in the same dataframe 62 | Fit <- stat %>% 63 | cbind(lower = HDI$lower, 64 | upper = HDI$upper, 65 | Rhat = Rhat[,1]) 66 | 67 | ## Plots --------------------------------------------------------- 68 | 69 | # Plot trace mcmc 70 | traceplot(output) 71 | 72 | # mcmc values in df for plot posterior distributions 73 | mcmc.sample <- ggs(output) 74 | 75 | # Plot posterior distribution for mu Mratio value 76 | Mratio_plot <- mcmc.sample %>% 77 | filter(Parameter == "mu_logMratio") %>% 78 | ggplot(aes(exp(value))) + 79 | geom_histogram(binwidth = 0.03, fill = "blue", colour = "grey", alpha = 0.5) + 80 | geom_vline(xintercept = exp(stat$mean[stat$name == "mu_logMratio"]), linetype = "dashed", linewidth = 1.5) + 81 | annotate("segment", x = exp(HDI$lower[HDI$name == "mu_logMratio"]), y = 50, 82 | xend = exp(HDI$upper[HDI$name == "mu_logMratio"]), yend = 50, 83 | colour = "white", 
linewidth = 2.5) + 84 | ylab("Sample count") + 85 | xlab(expression(paste(mu, " Mratio"))) 86 | 87 | 88 | Mratio_plot 89 | -------------------------------------------------------------------------------- /R/example_metad_group_corr.R: -------------------------------------------------------------------------------- 1 | ##################################### 2 | 3 | # Example of hierarchical metacognitive efficiency (Mratio) calculation 4 | # for two domains and correlation coefficient 5 | # exemple of trace plots and posterior distribution plots 6 | # using the Function_metad_groupcorr.R 7 | # The same function allows also the calculation for 3 and 4 domains 8 | # 9 | # AM 2018 10 | 11 | ##################################### 12 | 13 | ## Packages ---------------------------------------------------------------- 14 | library(tidyverse) 15 | library(magrittr) 16 | library(reshape2) 17 | library(rjags) 18 | library(coda) 19 | library(lattice) 20 | library(broom) 21 | library(ggpubr) 22 | library(ggmcmc) 23 | 24 | ## Create data for 3 participants ------------------------------------------------------------- 25 | 26 | # Task 1 27 | nR_S1_1 <- data.frame( 28 | p1 = c(52,32,35,37,26,12,4,2), 29 | p2 = c(27,39,46,52,14,10,9,3), 30 | p3 = c(112,30,15,17,17,9,0,0)) 31 | nR_S2_1 <- data.frame( 32 | p1 = c(2,5,15,22,33,38,40,45), 33 | p2 = c(2,4,9,21,57,48,34,25), 34 | p3 = c(0,1,7,18,12,17,27,118)) 35 | 36 | # Task 2 37 | nR_S1_2 <- data.frame( 38 | p1 = c(97,49,13,9,20,11,1,0), 39 | p2 = c(37,41,49,44,17,11,0,1), 40 | p3 = c(61,45,34,28,21,9,1,1)) 41 | nR_S2_2 <- data.frame( 42 | p1 = c(0,1,8,23,17,33,22,96), 43 | p2 = c(0,2,9,18,44,46,43,38), 44 | p3 = c(2,5,3,22,32,38,27,71)) 45 | 46 | ## Hierarchical meta_d correlation function ------------------------------------------------------ 47 | 48 | # List creation for model inputs 49 | nR_S1 <- list(nR_S1_1, 50 | nR_S1_2) 51 | 52 | nR_S2 <- list(nR_S2_1, 53 | nR_S2_2) 54 | 55 | # Fit all data at once 56 | source("fit_metad_groupcorr.R") 57 | output <- fit_metad_groupcorr(nR_S1 = nR_S1, nR_S2 = nR_S2) 58 | 59 | ## Model output ------------------------------------------------------------ 60 | 61 | # Values 62 | Value <- summary(output) 63 | stat <- data.frame(mean = Value$statistics[,"Mean"]) 64 | stat %<>% 65 | rownames_to_column(var = "name") 66 | 67 | # Rhat 68 | Value <- gelman.diag(output, confidence = 0.95) 69 | Rhat <- data.frame(conv = Value$psrf) 70 | 71 | # HDI 72 | HDI <- data.frame(HPDinterval(output, prob = 0.95)) 73 | HDI %<>% 74 | rownames_to_column(var = "name") 75 | 76 | # All values in the same dataframe 77 | Fit <- stat %>% 78 | cbind(lower = HDI$lower, 79 | upper = HDI$upper, 80 | Rhat = Rhat[,1]) 81 | 82 | ## Plots --------------------------------------------------------- 83 | 84 | # Plot trace mcmc 85 | traceplot(output) 86 | 87 | # mcmc values in df for plot posterior distributions 88 | mcmc.sample <- ggs(output) 89 | 90 | # Plot posterior distribution for rho value 91 | Rho_plot <- mcmc.sample %>% 92 | filter(Parameter == "rho") %>% 93 | ggplot(aes(value)) + 94 | geom_histogram(binwidth = 0.03, fill = "blue", colour = "grey", alpha = 0.5) + 95 | geom_vline(xintercept = stat$mean[stat$name == "rho"],linetype="dashed", linewidth = 1.5) + 96 | annotate("segment", x = HDI$lower[HDI$name == "rho"], y = 50, 97 | xend = HDI$upper[HDI$name == "rho"], yend = 50, 98 | colour = "white", linewidth = 2.5) + 99 | xlim(c(-1, 1)) + 100 | ylab("Sample count") + 101 | xlab(expression(paste(rho, " value"))) 102 | 103 | Rho_plot 104 | 
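# A minimal sketch of two further summaries that could be computed from the
# same fit, assuming `output`, `mcmc.sample` and `stat` exist as created above
# and that the coda output labels the parameters "rho" and "Mratio[subject,task]".

# Posterior probability that metacognitive efficiency is positively correlated
# across the two domains
rho_samples <- mcmc.sample %>%
  filter(Parameter == "rho") %>%
  pull(value)
mean(rho_samples > 0)

# Posterior mean Mratio for each subject and task, read off the summary table
Mratio_per_subject <- stat %>%
  filter(str_detect(name, "^Mratio\\["))
Mratio_per_subject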
-------------------------------------------------------------------------------- /R/example_metad_indiv.R: -------------------------------------------------------------------------------- 1 | ##################################### 2 | 3 | # Example of meta d calculation for individual subject and 4 | # exemple of trace plots and posterior distribution plots 5 | # using the Function_metad_indiv.R 6 | # AM 2018 7 | 8 | ##################################### 9 | 10 | ## Packages ---------------------------------------------------------------- 11 | library(tidyverse) 12 | library(magrittr) 13 | library(rjags) 14 | library(coda) 15 | library(ggmcmc) 16 | 17 | ## Create data for 1 participant ------------------------------------------------------------- 18 | nR_S1 <- c(52,32,35,37,26,12,4,2) 19 | nR_S2 <- c(2,5,15,22,33,38,40,45) 20 | 21 | ## Individual meta_d function ------------------------------------------------------ 22 | source("fit_metad_indiv.R") 23 | fit <- fit_metad_indiv(nR_S1 = nR_S1, nR_S2 = nR_S2) 24 | 25 | ## Model output ------------------------------------------------------------ 26 | output = fit[[1]] 27 | d1 = fit[[2]]$d1 28 | 29 | # Mean values 30 | Value <- summary(output) 31 | stat <- data.frame(mean = Value[["statistics"]][, "Mean"]) 32 | stat %<>% 33 | rownames_to_column(var = "name") 34 | 35 | # Rhat 36 | Value <- gelman.diag(output, confidence = 0.95) 37 | Rhat <- data.frame(conv = Value$psrf) 38 | 39 | # HDI 40 | HDI <- data.frame(HPDinterval(output, prob = 0.95)) 41 | HDI %<>% 42 | rownames_to_column(var = "name") 43 | 44 | # All values in the same dataframe 45 | Fit <- stat %>% 46 | cbind(lower = HDI$lower, 47 | upper = HDI$upper) 48 | 49 | ## Plots --------------------------------------------------------- 50 | 51 | # Plot trace mcmc 52 | traceplot(output) 53 | 54 | # mcmc values in df for plot posterior distributions 55 | mcmc.sample <- ggs(output) 56 | 57 | # Plot posterior distribution for meta-d value 58 | Plot <- mcmc.sample %>% 59 | filter(Parameter == "meta_d") %>% 60 | ggplot(aes(value)) + 61 | geom_histogram(binwidth = 0.03, fill = "blue", colour = "grey", alpha = 0.5) + 62 | geom_vline(xintercept = stat$mean[stat$name == "meta_d"],linetype="dashed", linewidth= 1.5) + 63 | annotate("segment", x = HDI$lower[HDI$name == "meta_d"], y = 50, 64 | xend = HDI$upper[HDI$name == "meta_d"], 65 | yend = 50, colour = "white", linewidth = 2.5) + 66 | ylab("Sample count") + 67 | xlab(expression(paste("Meta d'"))) 68 | 69 | Plot 70 | -------------------------------------------------------------------------------- /R/fit_meta_d_mcmc_regression.R: -------------------------------------------------------------------------------- 1 | # HMeta-d for between-subjects regression on meta-d'/d' 2 | # 3 | #Adaptation in R of matlab function 'fit_metad_mcmc_regression.m' 4 | #by Steve Fleming (2017) 5 | # 6 | # 7 | # you need to install the following packing before using the function: 8 | # coda 9 | # rjags 10 | # magrittr 11 | # dplyr 12 | # tidyr 13 | # tibble 14 | # ggmcmc 15 | # 16 | # nR_S1 and nR_S2 should be two vectors 17 | # cov is a n x s matrix of covariates, where s=number of subjects, n=number of covarient 18 | # model output is a large mcmc list and two vectors for d1 and c1 19 | 20 | ######################### 21 | 22 | #Packages 23 | library(coda) 24 | library(rjags) 25 | library(magrittr) 26 | library(dplyr) 27 | library(tidyr) 28 | library(tibble) 29 | library(ggmcmc) 30 | 31 | fit_meta_d_regression <- function (nR_S1, nR_S2) { 32 | 33 | #get type 1 SDT parameter values 
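  # Note on inputs: nR_S1 and nR_S2 are expected as data frames (or matrices)
  # with one column per subject and 2*nRatings rows of response counts; the two
  # dimensions read below give the number of subjects and the number of ratings.
  # The covariate matrix `cov` is not an argument of this function as written,
  # so it must already exist in the calling environment before fitting.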
34 | Nsubj <- ncol(nR_S1) 35 | nRatings = nrow(nR_S1)/2 36 | nTot <- sum(nR_S1[,1], nR_S2[,1]) #nb of trials 37 | 38 | nR_S1 <- list(nR_S1) 39 | nR_S2 <- list(nR_S2) 40 | 41 | d1 <- data.frame() 42 | c1 <- data.frame() 43 | 44 | for(n in 1:(Nsubj)) { 45 | # Adjust to ensure non-zero counts for type 1 d' point estimate 46 | # (not necessary if estimating d' inside JAGS) 47 | adj_f <- 1/((nRatings)*2) 48 | nR_S1_adj = nR_S1[[1]][,n] + adj_f 49 | nR_S2_adj = nR_S2[[1]][,n] + adj_f 50 | 51 | ratingHR <- matrix() 52 | ratingFAR <- matrix() 53 | 54 | for (c in 2:(nRatings*2)) { 55 | ratingHR[c-1] <- sum(nR_S2_adj[c:length(nR_S2_adj)]) / sum(nR_S2_adj) 56 | ratingFAR[c-1] <- sum(nR_S1_adj[c:length(nR_S1_adj)]) / sum(nR_S1_adj) 57 | 58 | } 59 | 60 | t1_index <- nRatings 61 | 62 | a <- qnorm(ratingHR[(t1_index)]) - qnorm(ratingFAR[(t1_index)]) 63 | d1 %<>% 64 | rbind(a) 65 | a <- -0.5 * (qnorm(ratingHR[(t1_index)]) + qnorm(ratingFAR[(t1_index)])) 66 | c1 %<>% 67 | rbind(a) 68 | } 69 | 70 | d1 <- c(d1[1:Nsubj,1]) 71 | c1 <- c(c1[1:Nsubj,1]) 72 | cov <- c(cov[1:Nsubj,1]) 73 | 74 | # Data preparation for model 75 | counts <- t(nR_S1[[1]]) %>% 76 | cbind(t(nR_S2[[1]])) 77 | 78 | Tol <- 1e-05 79 | 80 | data <- list( 81 | d1 = d1, 82 | c1 = c1, 83 | nsubj = Nsubj, 84 | counts = counts, 85 | cov = as.vector(cov), 86 | nratings = nRatings, 87 | Tol = Tol) 88 | } 89 | 90 | ## Model using JAGS 91 | # Create and update model 92 | regression_model <- jags.model(file = 'Bayes_metad_group_regress_nodp.txt', data = data, 93 | n.chains = 3, quiet=FALSE) 94 | update(regression_model, n.iter=1000) 95 | 96 | # Sampling 97 | output <- coda.samples( 98 | model = regression_model, 99 | variable.names = c("mu_logMratio", "sigma_logMratio", "mu_c2", "mu_beta1", "Mratio"), 100 | n.iter = 10000, 101 | thin = 1 ) 102 | 103 | ## Model output ------------------------------------------------------------ 104 | 105 | # Convergence diagnostic 106 | Value <- gelman.diag(output, confidence = 0.95) 107 | Rhat <- data.frame(conv = Value$psrf) 108 | 109 | # Values (mean and CI) 110 | Value <- summary(output) 111 | stat <- data.frame(mean = Value$statistics[,"Mean"]) 112 | stat %<>% 113 | rownames_to_column(var = "name") %>% 114 | cbind(CILow = Value$quantiles[,"2.5%"]) %>% 115 | cbind(CIUp = Value$quantiles[,"97.5%"]) 116 | 117 | # HDI function 118 | HDI <- data.frame(HPDinterval(output, prob = 0.95)) 119 | HDI %<>% 120 | rownames_to_column(var = "name") 121 | 122 | ## Plot trace mcmc --------------------------------------------------------- 123 | traceplot(output) 124 | 125 | ## Plot posterior distributions -------------------------------------------- 126 | mcmc.sample <- ggs(output) 127 | 128 | ##Save model 129 | Fit <- stat %>% 130 | cbind(lower = HDI$lower, 131 | upper = HDI$upper, 132 | Rhat = Rhat[,1]) 133 | -------------------------------------------------------------------------------- /R/fit_metad_2wayANOVA.R: -------------------------------------------------------------------------------- 1 | ##################################### 2 | 3 | # Estimate metacognitive efficiency (Mratio) at the group level 4 | # 5 | # Adaptation in R of matlab function 'fit_meta_d_mcmc_groupCorr.m' 6 | # by Steve Fleming 7 | # for more details see Fleming (2017). HMeta-d: hierarchical Bayesian 8 | # estimation of metacognitive efficiency from confidence ratings. 
9 | # 10 | # you need to install the following packing before using the function: 11 | # tidyverse 12 | # magrittr 13 | # reshape2 14 | # rjags 15 | # coda 16 | # lattice 17 | # broom 18 | # ggpubr 19 | # ggmcmc 20 | # 21 | # nR_S1 and nR_S2 should be two lists of each nR_S1 or nR_S2 per task 22 | # model output is a large mcmc list and two vectors for d1 and c1 23 | # 24 | # Author: Nicolas Legrand nicolas.legrand@cfin.au.dk 25 | 26 | ##################################### 27 | 28 | ## Packages 29 | library(tidyverse) 30 | library(magrittr) 31 | library(reshape2) 32 | library(rjags) 33 | library(coda) 34 | library(lattice) 35 | library(broom) 36 | library(ggpubr) 37 | library(ggmcmc) 38 | 39 | # Model ------------------------------------------------------------------- 40 | 41 | metad_2wayANOVA <- function (nR_S1_tot, nR_S2_tot) { 42 | # nR_S1_tot and nR_S2_tot should be lists of size N subjects. 43 | # Each list contain 4 responses vectors. 44 | # Design matrix: Condition 1: (1, 1, 0, 0) - Condition 2: (1, 0, 1, 0) 45 | 46 | # Type 1 parameters 47 | nratings <- length(nR_S1_tot[[1]][[1]])/2 48 | nsubj <- length((nR_S1_tot)) 49 | 50 | d1 <- matrix(ncol = 4, nrow = nsubj) 51 | c1 <- matrix(ncol = 4, nrow = nsubj) 52 | counts_total = array(dim = c(nsubj, nratings*4, 4)) 53 | 54 | for (n in 1:(nsubj)) { 55 | for (i in 1:4) { 56 | 57 | nR_S1 = nR_S1_tot[[n]][i] 58 | nR_S2 = nR_S2_tot[[n]][i] 59 | 60 | # Adjust to ensure non-zero counts for type 1 d' point estimate 61 | adj_f <- 1/((nratings)*2) 62 | nR_S1_adj = unlist(nR_S1) + adj_f 63 | nR_S2_adj = unlist(nR_S2) + adj_f 64 | 65 | ratingHR <- matrix() 66 | ratingFAR <- matrix() 67 | 68 | for (c in 2:(nratings*2)) { 69 | ratingHR[c-1] <- sum(nR_S2_adj[c:length(nR_S2_adj)]) / sum(nR_S2_adj) 70 | ratingFAR[c-1] <- sum(nR_S1_adj[c:length(nR_S1_adj)]) / sum(nR_S1_adj) 71 | 72 | } 73 | 74 | d1[n, i] = qnorm(ratingHR[(nratings)]) - qnorm(ratingFAR[(nratings)]) 75 | c1[n, i] = -0.5 * (qnorm(ratingHR[(nratings)]) + qnorm(ratingFAR[(nratings)])) 76 | 77 | counts_total[n, , i] <- t(nR_S1[[1]]) %>% 78 | cbind(t(nR_S2[[1]])) 79 | } 80 | } 81 | 82 | Tol <- 1e-05 83 | 84 | data <- list( 85 | d1 = d1, # [Subjects * Condition] 86 | c1 = c1, # [Subjects * Condition] 87 | nsubj = nsubj, 88 | counts = counts_total, # [Subjects * Counts * Condition] 89 | nratings = nratings, 90 | Tol = Tol, 91 | Condition1 = c(1, 1, 0, 0), 92 | Condition2 = c(1, 0, 1, 0), 93 | Interaction = c(1, 0, 0, 0) 94 | ) 95 | 96 | ## Model using JAGS 97 | # Create and update model 98 | aov_model <- jags.model("Bayes_metad_2wayANOVA.txt", data = data, 99 | n.chains = 3, n.adapt= 2000, quiet=FALSE) 100 | update(aov_model, n.iter=5000) 101 | 102 | # Sampling 103 | output <- coda.samples( 104 | model = aov_model, 105 | variable.names = c("muBd_Condition1", "lamBd_Condition1", "sigD_Condition1", "muBd_Condition2", "lamBd_Condition2", "sigD_Condition2", 106 | "muBd_interaction", "lamBd_interaction", "sigD_interaction", "Mratio"), 107 | n.iter = 10000, 108 | thin = 1 ) 109 | 110 | return(output) 111 | } -------------------------------------------------------------------------------- /R/fit_metad_group.R: -------------------------------------------------------------------------------- 1 | ##################################### 2 | 3 | # Estimate metacognitive efficiency (Mratio) at the group level 4 | # 5 | # Adaptation in R of matlab function 'fit_meta_d_mcmc_group.m' 6 | # by Steve Fleming 7 | # for more details see Fleming (2017). 
HMeta-d: hierarchical Bayesian 8 | # estimation of metacognitive efficiency from confidence ratings. 9 | # 10 | # you need to install the following packing before using the function: 11 | # tidyverse 12 | # magrittr 13 | # reshape2 14 | # rjags 15 | # coda 16 | # lattice 17 | # broom 18 | # ggpubr 19 | # ggmcmc 20 | # 21 | # nR_S1 and nR_S2 should be two vectors 22 | # model output is a large mcmc list and two vectors for d1 and c1 23 | # 24 | # Audrey Mazancieux 2018 25 | 26 | ##################################### 27 | 28 | ## Packages 29 | library(tidyverse) 30 | library(magrittr) 31 | library(reshape2) 32 | library(rjags) 33 | library(coda) 34 | library(lattice) 35 | library(broom) 36 | library(ggpubr) 37 | library(ggmcmc) 38 | 39 | fit_metad_group <- function (nR_S1, nR_S2) { 40 | 41 | # Type 1 parameters 42 | nTot <- sum(nR_S1[[1]]$V1, nR_S2[[1]]$V1) 43 | nratings <- nrow(nR_S1[[1]])/2 44 | nsubj <- ncol(nR_S1[[1]]) 45 | nTask <- length(nR_S1) 46 | 47 | 48 | # Adjust to ensure non-zero counts for type 1 d' point estimate 49 | d1 <- data.frame() 50 | c1 <- data.frame() 51 | 52 | for (n in 1:(nsubj)) { 53 | 54 | adj_f <- 1/((nratings)*2) 55 | nR_S1_adj = nR_S1[[1]][,n] + adj_f 56 | nR_S2_adj = nR_S2[[1]][,n] + adj_f 57 | 58 | ratingHR <- matrix() 59 | ratingFAR <- matrix() 60 | 61 | for (c in 2:(nratings*2)) { 62 | ratingHR[c-1] <- sum(nR_S2_adj[c:length(nR_S2_adj)]) / sum(nR_S2_adj) 63 | ratingFAR[c-1] <- sum(nR_S1_adj[c:length(nR_S1_adj)]) / sum(nR_S1_adj) 64 | 65 | } 66 | 67 | t1_index <- nratings 68 | a <- qnorm(ratingHR[(t1_index)]) - qnorm(ratingFAR[(t1_index)]) 69 | d1 %<>% 70 | rbind(a) 71 | a <- -0.5 * (qnorm(ratingHR[(t1_index)]) + qnorm(ratingFAR[(t1_index)])) 72 | c1 %<>% 73 | rbind(a) 74 | } 75 | 76 | d1 <- c(d1[1:nsubj,1]) 77 | c1 <- c(c1[1:nsubj,1]) 78 | 79 | # Data preparation for model 80 | counts <- t(nR_S1[[1]]) %>% 81 | cbind(t(nR_S2[[1]])) 82 | 83 | d1 <<- as.matrix(d1) 84 | c1 <<- as.matrix(c1) 85 | 86 | Tol <- 1e-05 87 | 88 | data <- list( 89 | d1 = d1, 90 | c1 = c1, 91 | nsubj = nsubj, 92 | counts = counts, 93 | nratings = nratings, 94 | Tol = Tol 95 | ) 96 | 97 | ## Model using JAGS 98 | # Create and update model 99 | model <- jags.model(file = 'Bayes_metad_group_R.txt', data = data, 100 | n.chains = 3, quiet=FALSE) 101 | update(model, n.iter=1000) 102 | 103 | # Sampling 104 | output <- coda.samples( 105 | model = model, 106 | variable.names = c("mu_logMratio", "sigma_logMratio", "Mratio", "mu_c2"), 107 | n.iter = 10000, 108 | thin = 1 ) 109 | 110 | return(output) 111 | } 112 | -------------------------------------------------------------------------------- /R/fit_metad_groupcorr.R: -------------------------------------------------------------------------------- 1 | ##################################### 2 | 3 | # Estimate correlation coefficient between metacognitive effiency 4 | # estimate between two, three, or four domains. 5 | # 6 | # Adaptation in R of matlab function 'fit_meta_d_mcmc_groupCorr.m' 7 | # by Steve Fleming 8 | # for more details see Fleming (2017). HMeta-d: hierarchical Bayesian 9 | # estimation of metacognitive efficiency from confidence ratings. 
10 | # 11 | # you need to install the following packing before using the function: 12 | # tidyverse 13 | # magrittr 14 | # reshape2 15 | # rjags 16 | # coda 17 | # lattice 18 | # broom 19 | # ggpubr 20 | # ggmcmc 21 | # 22 | # nR_S1 and nR_S2 should be two lists of each nR_S1 or nR_S2 per task 23 | # model output is a large mcmc list and two vectors for d1 and c1 24 | # 25 | # AM 2018 26 | 27 | ##################################### 28 | 29 | ## Packages 30 | library(tidyverse) 31 | library(magrittr) 32 | library(reshape2) 33 | library(rjags) 34 | library(coda) 35 | library(lattice) 36 | library(broom) 37 | library(ggpubr) 38 | library(ggmcmc) 39 | 40 | fit_metad_groupcorr <- function (nR_S1, nR_S2) { 41 | 42 | # Type 1 parameters 43 | nTot <- sum(nR_S1[[1]]$V1, nR_S2[[1]]$V1) 44 | nratings <- nrow(nR_S1[[1]])/2 45 | nsubj <- ncol(nR_S1[[1]]) 46 | nTask <- length(nR_S1) 47 | 48 | # FOR 2 TASKS 49 | if(nTask == 2) { 50 | 51 | # Adjust to ensure non-zero counts for type 1 d' point estimate 52 | d1 <- data.frame() 53 | c1 <- data.frame() 54 | 55 | for (task in 1:2) { 56 | 57 | for (n in 1:(nsubj)) { 58 | 59 | adj_f <- 1/((nratings)*2) 60 | nR_S1_adj = nR_S1[[task]][,n] + adj_f 61 | nR_S2_adj = nR_S2[[task]][,n] + adj_f 62 | 63 | ratingHR <- matrix() 64 | ratingFAR <- matrix() 65 | 66 | for (c in 2:(nratings*2)) { 67 | ratingHR[c-1] <- sum(nR_S2_adj[c:length(nR_S2_adj)]) / sum(nR_S2_adj) 68 | ratingFAR[c-1] <- sum(nR_S1_adj[c:length(nR_S1_adj)]) / sum(nR_S1_adj) 69 | 70 | } 71 | 72 | t1_index <- nratings 73 | a <- qnorm(ratingHR[(t1_index)]) - qnorm(ratingFAR[(t1_index)]) 74 | d1 %<>% 75 | rbind(a) 76 | a <- -0.5 * (qnorm(ratingHR[(t1_index)]) + qnorm(ratingFAR[(t1_index)])) 77 | c1 %<>% 78 | rbind(a) 79 | } 80 | } 81 | 82 | dt1 <- c(d1[1:nsubj,1]) 83 | dt2 <- c(d1[(nsubj+1):(nsubj*2),1]) 84 | 85 | ct1 <- c(c1[1:nsubj,1]) 86 | ct2 <- c(c1[(nsubj+1):(nsubj*2),1]) 87 | 88 | d1 <- data.frame(T1 = dt1, 89 | T2 = dt2) 90 | 91 | c1 <- data.frame(T1 = ct1, 92 | T2 = ct2) 93 | 94 | # Data preparation for model 95 | counts1 <- t(nR_S1[[1]]) %>% 96 | cbind(t(nR_S2[[1]])) 97 | counts2 <- t(nR_S1[[2]]) %>% 98 | cbind(t(nR_S2[[2]])) 99 | 100 | Tol <- 1e-05 101 | 102 | d1 <<- as.matrix(d1) 103 | c1 <<- as.matrix(c1) 104 | 105 | data <- list( 106 | d1 = d1, 107 | c1 = c1, 108 | nsubj = nsubj, 109 | counts1 = counts1, 110 | counts2 = counts2, 111 | nratings = nratings, 112 | Tol = Tol 113 | ) 114 | 115 | ## Model using JAGS 116 | # Create and update model 117 | cor_model <- jags.model(file = 'Bayes_metad_group_corr2_R.txt', data = data, 118 | n.chains = 3, quiet=FALSE) 119 | update(cor_model, n.iter=1000) 120 | 121 | # Sampling 122 | output <- coda.samples( 123 | model = cor_model, 124 | variable.names = c("mu_logMratio", "sigma_logMratio", "rho", "Mratio", "mu_c2"), 125 | n.iter = 10000, 126 | thin = 1 ) 127 | 128 | } 129 | 130 | # FOR 3 TASKS 131 | if(nTask == 3) { 132 | 133 | # Adjust to ensure non-zero counts for type 1 d' point estimate 134 | d1 <- data.frame() 135 | c1 <- data.frame() 136 | 137 | for (task in 1:3) { 138 | 139 | for (n in 1:(nsubj)) { 140 | 141 | adj_f <- 1/((nratings)*2) 142 | nR_S1_adj = nR_S1[[task]][,n] + adj_f 143 | nR_S2_adj = nR_S2[[task]][,n] + adj_f 144 | 145 | ratingHR <- matrix() 146 | ratingFAR <- matrix() 147 | 148 | for (c in 2:(nratings*2)) { 149 | ratingHR[c-1] <- sum(nR_S2_adj[c:length(nR_S2_adj)]) / sum(nR_S2_adj) 150 | ratingFAR[c-1] <- sum(nR_S1_adj[c:length(nR_S1_adj)]) / sum(nR_S1_adj) 151 | 152 | } 153 | 154 | t1_index <- nratings 155 | a <- 
qnorm(ratingHR[(t1_index)]) - qnorm(ratingFAR[(t1_index)]) 156 | d1 %<>% 157 | rbind(a) 158 | a <- -0.5 * (qnorm(ratingHR[(t1_index)]) + qnorm(ratingFAR[(t1_index)])) 159 | c1 %<>% 160 | rbind(a) 161 | } 162 | } 163 | 164 | dt1 <- c(d1[1:nsubj,1]) 165 | dt2 <- c(d1[(nsubj+1):(nsubj*2),1]) 166 | dt3 <- c(d1[(nsubj*2+1):(nsubj*3),1]) 167 | 168 | ct1 <- c(c1[1:nsubj,1]) 169 | ct2 <- c(c1[(nsubj+1):(nsubj*2),1]) 170 | ct3 <- c(c1[(nsubj*2+1):(nsubj*3),1]) 171 | 172 | d1 <- data.frame(T1 = dt1, 173 | T2 = dt2, 174 | T3 = dt3) 175 | 176 | c1 <- data.frame(T1 = ct1, 177 | T2 = ct2, 178 | T3 = ct3) 179 | 180 | # Data preparation for model 181 | counts1 <- t(nR_S1[[1]]) %>% 182 | cbind(t(nR_S2[[1]])) 183 | counts2 <- t(nR_S1[[2]]) %>% 184 | cbind(t(nR_S2[[2]])) 185 | counts3 <- t(nR_S1[[3]]) %>% 186 | cbind(t(nR_S2[[3]])) 187 | 188 | Tol <- 1e-05 189 | 190 | d1 <<- as.matrix(d1) 191 | c1 <<- as.matrix(c1) 192 | 193 | data <- list( 194 | d1 = d1, 195 | c1 = c1, 196 | nsubj = nsubj, 197 | counts1 = counts1, 198 | counts2 = counts2, 199 | counts3 = counts3, 200 | nratings = nratings, 201 | Tol = Tol 202 | ) 203 | 204 | ## Model using JAGS 205 | # Create and update model 206 | cor_model <- jags.model(file = 'Bayes_metad_group_corr3_R.txt', data = data, 207 | n.chains = 3, quiet=FALSE) 208 | update(cor_model, n.iter=1000) 209 | 210 | # Sampling 211 | output <- coda.samples( 212 | model = cor_model, 213 | variable.names = c("mu_logMratio", "sigma_logMratio", "rho", "Mratio", "mu_c2"), 214 | n.iter = 10000, 215 | thin = 1 ) 216 | 217 | } 218 | 219 | # FOR 4 TASKS 220 | if(nTask == 4) { 221 | 222 | # Adjust to ensure non-zero counts for type 1 d' point estimate 223 | d1 <- data.frame() 224 | c1 <- data.frame() 225 | 226 | for (task in 1:4) { 227 | 228 | for (n in 1:(nsubj)) { 229 | 230 | adj_f <- 1/((nratings)*2) 231 | nR_S1_adj = nR_S1[[task]][,n] + adj_f 232 | nR_S2_adj = nR_S2[[task]][,n] + adj_f 233 | 234 | ratingHR <- matrix() 235 | ratingFAR <- matrix() 236 | 237 | for (c in 2:(nratings*2)) { 238 | ratingHR[c-1] <- sum(nR_S2_adj[c:length(nR_S2_adj)]) / sum(nR_S2_adj) 239 | ratingFAR[c-1] <- sum(nR_S1_adj[c:length(nR_S1_adj)]) / sum(nR_S1_adj) 240 | 241 | } 242 | 243 | t1_index <- nratings 244 | a <- qnorm(ratingHR[(t1_index)]) - qnorm(ratingFAR[(t1_index)]) 245 | d1 %<>% 246 | rbind(a) 247 | a <- -0.5 * (qnorm(ratingHR[(t1_index)]) + qnorm(ratingFAR[(t1_index)])) 248 | c1 %<>% 249 | rbind(a) 250 | } 251 | } 252 | 253 | dt1 <- c(d1[1:nsubj,1]) 254 | dt2 <- c(d1[(nsubj+1):(nsubj*2),1]) 255 | dt3 <- c(d1[(nsubj*2+1):(nsubj*3),1]) 256 | dt4 <- c(d1[(nsubj*3+1):(nsubj*4),1]) 257 | 258 | ct1 <- c(c1[1:nsubj,1]) 259 | ct2 <- c(c1[(nsubj+1):(nsubj*2),1]) 260 | ct3 <- c(c1[(nsubj*2+1):(nsubj*3),1]) 261 | ct4 <- c(c1[(nsubj*3+1):(nsubj*4),1]) 262 | 263 | d1 <- data.frame(T1 = dt1, 264 | T2 = dt2, 265 | T3 = dt3, 266 | T4 = dt4) 267 | 268 | c1 <- data.frame(T1 = ct1, 269 | T2 = ct2, 270 | T3 = ct3, 271 | T4 = ct4) 272 | 273 | # Data preparation for model 274 | counts1 <- t(nR_S1[[1]]) %>% 275 | cbind(t(nR_S2[[1]])) 276 | counts2 <- t(nR_S1[[2]]) %>% 277 | cbind(t(nR_S2[[2]])) 278 | counts3 <- t(nR_S1[[3]]) %>% 279 | cbind(t(nR_S2[[3]])) 280 | counts4 <- t(nR_S1[[4]]) %>% 281 | cbind(t(nR_S2[[4]])) 282 | 283 | Tol <- 1e-05 284 | 285 | d1 <<- as.matrix(d1) 286 | c1 <<- as.matrix(c1) 287 | 288 | data <- list( 289 | d1 = d1, 290 | c1 = c1, 291 | nsubj = nsubj, 292 | counts1 = counts1, 293 | counts2 = counts2, 294 | counts3 = counts3, 295 | counts4 = counts4, 296 | nratings = nratings, 297 | Tol = Tol 298 | ) 299 | 
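    # counts1..counts4 each have one row per subject, with columns ordered as
    # c(nR_S1, nR_S2) for that task, matching the multinomial blocks in
    # 'Bayes_metad_group_corr4_R.txt' loaded just below.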
300 | ## Model using JAGS 301 | # Create and update model 302 | cor_model <- jags.model(file = 'Bayes_metad_group_corr4_R.txt', data = data, 303 | n.chains = 3, quiet=FALSE) 304 | update(cor_model, n.iter=1000) 305 | 306 | # Sampling 307 | output <- coda.samples( 308 | model = cor_model, 309 | variable.names = c("mu_logMratio", "sigma_logMratio", "rho", "Mratio", "mu_c2"), 310 | n.iter = 10000, 311 | thin = 1 ) 312 | 313 | } 314 | 315 | return(output) 316 | } 317 | 318 | -------------------------------------------------------------------------------- /R/fit_metad_indiv.R: -------------------------------------------------------------------------------- 1 | ##################################### 2 | 3 | # Estimate metacognitive sensibility (meta d') for individual subject 4 | # 5 | # Adaptation in R of matlab function 'fit_meta_d_mcmc.m' 6 | # by Steve Fleming 7 | # for more details see Fleming (2017). HMeta-d: hierarchical Bayesian 8 | # estimation of metacognitive efficiency from confidence ratings. 9 | # 10 | # you need to install the following packing before using the function: 11 | # tidyverse 12 | # magrittr 13 | # rjags 14 | # coda 15 | # ggmcmc 16 | # 17 | # nR_S1 and nR_S2 should be two vectors 18 | # model output is a large mcmc list and two vectors for d1 and c1 19 | # 20 | # AM 2018 21 | 22 | ##################################### 23 | 24 | ## Packages ---------------------------------------------------------------- 25 | library(tidyverse) 26 | library(magrittr) 27 | library(rjags) 28 | library(coda) 29 | library(ggmcmc) 30 | 31 | fit_metad_indiv <- function (nR_S1, nR_S2) { 32 | 33 | Tol <- 1e-05 34 | nratings <- length(nR_S1)/2 35 | 36 | # Adjust to ensure non-zero counts for type 1 d' point estimate 37 | adj_f <- 1/((nratings)*2) 38 | nR_S1_adj = nR_S1 + adj_f 39 | nR_S2_adj = nR_S2 + adj_f 40 | 41 | ratingHR <- matrix() 42 | ratingFAR <- matrix() 43 | 44 | for (c in 2:(nratings*2)) { 45 | ratingHR[c-1] <- sum(nR_S2_adj[c:length(nR_S2_adj)]) / sum(nR_S2_adj) 46 | ratingFAR[c-1] <- sum(nR_S1_adj[c:length(nR_S1_adj)]) / sum(nR_S1_adj) 47 | 48 | } 49 | 50 | t1_index <- nratings 51 | d1 <<- qnorm(ratingHR[(t1_index)]) - qnorm(ratingFAR[(t1_index)]) 52 | c1 <<- -0.5 * (qnorm(ratingHR[(t1_index)]) + qnorm(ratingFAR[(t1_index)])) 53 | 54 | counts <- t(nR_S1) %>% 55 | cbind(t(nR_S2)) 56 | counts <- as.vector(counts) 57 | 58 | # Data preparation for model 59 | data <- list( 60 | d1 = d1, 61 | c1 = c1, 62 | counts = counts, 63 | nratings = nratings, 64 | Tol = Tol 65 | ) 66 | 67 | ## Model using JAGS 68 | # Create and update model 69 | model <- jags.model(file = 'Bayes_metad_indiv_R.txt', data = data, 70 | n.chains = 3, quiet=FALSE) 71 | update(model, n.iter=1000) 72 | 73 | # Sampling 74 | samples <- coda.samples( 75 | model = model, 76 | variable.names = c("meta_d", "cS1", "cS2"), 77 | n.iter = 10000, 78 | thin = 1 ) 79 | 80 | output <- list(samples, data) 81 | 82 | return(output) 83 | } 84 | -------------------------------------------------------------------------------- /R/trials2counts.R: -------------------------------------------------------------------------------- 1 | ##################################### 2 | 3 | #Trial 2 Count function 4 | # 5 | # Convert trial by trial experimental information for N trials into response counts. 6 | # 7 | # INPUTS 8 | # stimID: 1xN vector. stimID(i) = 0 --> stimulus on i'th trial was S1. 9 | # stimID(i) = 1 --> stimulus on i'th trial was S2. 10 | # 11 | # response: 1xN vector. response(i) = 0 --> response on i'th trial was "S1". 
12 | # response(i) = 1 --> response on i'th trial was "S2". 13 | # 14 | # rating: 1xN vector. rating(i) = X --> rating on i'th trial was X. 15 | # X must be in the range 1 <= X <= nRatings. 16 | # 17 | # nRatings: total # of available subjective ratings available for the 18 | # subject. e.g. if subject can rate confidence on a scale of 1-4, 19 | # then nRatings = 4 20 | # 21 | # padAmount: if set to 1, each response count in the output has the value of 22 | # 1/(2*nRatings) added to it. This is desirable if trial counts of 23 | # 0 interfere with model fitting. 24 | # if set to 0, trial counts are not manipulated and 0s may be 25 | # present. default value is 0. 26 | # padCells: Putts padAmount into function. default value is 0. 27 | # 28 | # OUTPUTS 29 | # newlist which contains nR_S1 & nR_S2 30 | # these are vectors containing the total number of responses in 31 | # each response category, conditional on presentation of S1 and S2. 32 | # 33 | # e.g. if nR_S1 = [100 50 20 10 5 1], then when stimulus S1 was 34 | # presented, the subject had the following response counts: 35 | # responded S1, rating=3 : 100 times 36 | # responded S1, rating=2 : 50 times 37 | # responded S1, rating=1 : 20 times 38 | # responded S2, rating=1 : 10 times 39 | # responded S2, rating=2 : 5 times 40 | # responded S2, rating=3 : 1 time 41 | # 42 | # EXAMPLE 43 | # stimID = list(0, 1, 0, 0, 1, 1, 1, 1) 44 | # response = list(0, 1, 1, 1, 0, 0, 1, 1) 45 | # rating = list(1, 2, 3, 4, 4, 3, 2, 1) 46 | # nRatings = 4 47 | # newlist = trials2counts(stimID,response,rating,nRatings) 48 | # print(newlist) 49 | 50 | ##################################### 51 | 52 | trials2counts <- function(stimID, response, rating,nRatings, padAmount = 0,padCells=0){ 53 | 54 | nR_S1 <- list() 55 | nR_S2 <- list() 56 | 57 | if (padAmount == 0){ 58 | padAmount = 1/(2*nRatings)} 59 | # S1 responses 60 | for (r in nRatings:1){ 61 | cs1 <- 0 62 | cs2 <- 0 63 | for (i in 1:length(stimID)){ 64 | s = stimID[i] 65 | x = response[i] 66 | y = rating[i] 67 | 68 | if ((s==0) & (x==0) & (y==r)){ 69 | (cs1 <- cs1+1)} 70 | if ((s==1) & (x==0) & (y==r)){ 71 | (cs2 <- cs2+1)} 72 | } 73 | nR_S1 <- append(nR_S1,cs1) 74 | nR_S2 <- append(nR_S2,cs2) 75 | } 76 | 77 | # S2 responses 78 | for (r in 1:nRatings){ 79 | cs1 <- 0 80 | cs2 <- 0 81 | for (i in 1:length(stimID)){ 82 | s = stimID[i] 83 | x = response[i] 84 | y = rating[i] 85 | 86 | if ((s==0) & (x==1) & (y==r)){ 87 | (cs1 <- cs1+1)} 88 | if ((s==1) & (x==1) & (y==r)){ 89 | (cs2 <- cs2+1)} 90 | } 91 | nR_S1 <- append(nR_S1,cs1) 92 | nR_S2 <- append(nR_S2,cs2) 93 | } 94 | 95 | 96 | # pad response counts to avoid zeros 97 | nR_S1 <- as.numeric(nR_S1) 98 | nR_S2 <- as.numeric(nR_S2) 99 | if (padCells == 1){ 100 | nR_S1 <- lapply(nR_S1,FUN= function(x) x+padAmount) 101 | nR_S2 <- lapply(nR_S2,FUN= function(x) x+padAmount)} 102 | 103 | # Combine into lists 104 | newlist <- list(nR_S1,nR_S2) 105 | } 106 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | HMeta-d 2 | === 3 | 4 | **Hierarchical meta-d' model (HMeta-d)** 5 | 6 | Steve Fleming 7 | stephen.fleming@ucl.ac.uk 8 | 9 | This MATLAB toolbox implements the meta-d’ model (Maniscalco & Lau, 2012) in a hierarchical Bayesian framework using Matlab and JAGS, a program for conducting MCMC inference on arbitrary Bayesian models. 
A paper with more details on the method and the advantages of estimating meta-d’ in a hierarchical Bayesian framework is available here: https://academic.oup.com/nc/article/doi/10.1093/nc/nix007/3748261/HMeta-d-hierarchical-Bayesian-estimation-of. 10 | 11 | For a more general introduction to Bayesian models of cognition see Lee & Wagenmakers, Bayesian Cognitive Modeling: A Practical Course http://bayesmodels.com/ 12 | 13 | The model builds on work by Michael Lee on Bayesian estimation of Type 1 SDT parameters: https://link.springer.com/article/10.3758/BRM.40.2.450 14 | 15 | The code is designed to work “out of the box” without much coding on the part of the user, and it receives data in the same format as Maniscalco & Lau’s toolbox, allowing easy switching and comparison between the two. 16 | 17 | 1) To get started, you first need to ensure JAGS (an MCMC language similar to BUGS) is installed on your machine. See here for further details: 18 | 19 | http://mcmc-jags.sourceforge.net/ 20 | 21 | **Note that there are compatibility issues between matjags and JAGS 4.X. To run the MATLAB code you will need to install JAGS 3.4.0 rather than the latest version.** The model files work fine with JAGS 4.X when called from R with rjags. 22 | 23 | 2) The main functions are fit_meta_d_mcmc (for fitting individual subject data) and fit_meta_d_mcmc_group (for hierarchical fits of group data). More information is contained in the help of these two functions and in the wiki https://github.com/smfleming/HMM/wiki/HMeta-d-tutorial. To get started, try running exampleFit or exampleFit_group. 24 | 25 | A walkthrough of the model and intuitions behind different usages can be found in Olivia Faull's step-by-step tutorial developed for the Zurich Computational Psychiatry course: https://github.com/metacoglab/HMeta-d/blob/master/CPC_metacog_tutorial/cpc_metacog_tutorial.m 26 | 27 | Please get in touch with your experiences of using the toolbox, and send any bug reports or issues to me at stephen.fleming@ucl.ac.uk 28 | 29 | **License** 30 | 31 | This code is released with a permissive open-source license. You should feel free to use or adapt the utility code as long as you follow the terms of the license, which are enumerated below. If you use the toolbox in a publication we ask that you cite the following paper: 32 | 33 | Fleming, S.M. (2017) HMeta-d: hierarchical Bayesian estimation of metacognitive efficiency from confidence ratings, Neuroscience of Consciousness, 3(1) nix007, https://doi.org/10.1093/nc/nix007 34 | 35 | Copyright (c) 2017, Stephen Fleming 36 | 37 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: 38 | 39 | The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 40 | 41 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 42 | --------------------------------------------------------------------------------