Colloquia

Upcoming RMME/STAT Colloquium (9/10): Susan Murphy, “Assessing Personalization in Digital Health”

RMME/STAT Joint Colloquium

Assessing Personalization in Digital Health

Dr. Susan Murphy
Harvard University

Friday, September 10th, at 12:00PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m883b79a16b8b2c21038a80da6301cba3

Reinforcement Learning provides an attractive suite of online learning methods for personalizing interventions in Digital Health. However, after a reinforcement learning algorithm has been run in a clinical study, how do we assess whether personalization occurred? We might find users for whom it appears that the algorithm has indeed learned in which contexts the user is more responsive to a particular intervention. But could this have happened completely by chance? I discuss some first approaches to addressing these questions.
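As a concrete illustration of the “by chance” question, consider a permutation test: hold one user’s observed responses fixed, shuffle the context labels, and ask how often a context-responsiveness statistic as large as the observed one arises under shuffled labels. This is a minimal hypothetical sketch, not the method from the talk; in particular it ignores the adaptive way a reinforcement learning algorithm assigns interventions, which is precisely the complication such assessments must handle. The simulated data below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def personalization_stat(contexts, responses):
    """Gap in mean response between the two contexts; a large gap
    suggests the user responds differently depending on context."""
    return abs(responses[contexts == 1].mean() - responses[contexts == 0].mean())

def permutation_pvalue(contexts, responses, n_perm=10_000):
    """Share of label shuffles with a gap at least as large as observed."""
    observed = personalization_stat(contexts, responses)
    null = np.array([
        personalization_stat(rng.permutation(contexts), responses)
        for _ in range(n_perm)
    ])
    return (1 + (null >= observed).sum()) / (1 + n_perm)

# Simulated data for one user: binary context, continuous proximal response.
contexts = rng.integers(0, 2, size=200)
responses = 0.3 * contexts + rng.normal(size=200)  # more responsive in context 1
print(permutation_pvalue(contexts, responses))
```

A small p-value says the apparent context-specific responsiveness is unlikely under random labeling; running such a test per user, however, raises the multiplicity and adaptive-sampling issues the talk takes up.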

 


RMME Faculty & Students Publish New Article: “Evaluator Education Curriculum: Which Competencies Ought to Be Prioritized in Master’s and Doctoral Programs?”

Congratulations to Bianca Montrosse-Moorhead, Anthony J. Gambino, Laura M. Yahn, Mindy Fan, and Anne T. Vo on their recent publication: “Evaluator Education Curriculum: Which Competencies Ought to Be Prioritized in Master’s and Doctoral Programs?” This article appears in the American Journal of Evaluation (https://doi.org/10.1177/10982140211020326).

For more information, visit: https://journals.sagepub.com/doi/10.1177/10982140211020326

 

Abstract:

A budding area of research is devoted to studying evaluator curriculum, yet to date, it has focused exclusively on describing the content and emphasis of topics or competencies in university-based programs. This study aims to expand the foci of research efforts and investigates the extent to which evaluators agree on which competencies should guide the development and implementation of evaluator education. This study used the Delphi method with evaluators (n = 11) and included three rounds of online surveys and follow-up interviews between rounds. This article discusses the competencies on which evaluators were able to reach consensus. Where consensus was not found, possible reasons are offered. Where consensus was found, the necessity of each competency at both the master’s and doctoral levels is described. Findings are situated in ongoing debates about what is unique in what novice evaluators need to know and be able to do, and about the purpose of evaluator education.
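For readers unfamiliar with how Delphi panels quantify agreement, the sketch below applies one commonly used consensus criterion (at least 75% of panelists rating a competency 4 or 5 on a 5-point scale). The ratings, competency names, and threshold are assumptions for illustration; the article’s actual rule and data are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical Delphi round: rows = panelists (n = 11),
# columns = competencies, values = importance on a 1-5 scale.
ratings = rng.integers(1, 6, size=(11, 4))
competencies = ["interpersonal", "methodology", "context analysis", "ethics"]

# One common (assumed) consensus rule: >= 75% of panelists rate 4 or 5.
agreement = (ratings >= 4).mean(axis=0)
for name, share in zip(competencies, agreement):
    verdict = "consensus" if share >= 0.75 else "no consensus"
    print(f"{name}: {share:.0%} high ratings -> {verdict}")
```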


Upcoming RMME/STAT Colloquium (6/18): Jon Krosnick, “The Collapse of Scientific Standards in the World of High Visibility Survey Research”

RMME/STAT Joint Colloquium

The Collapse of Scientific Standards in the World of High Visibility Survey Research

Dr. Jon Krosnick
Stanford University

Friday, June 18th, at 12:00PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m6b0af866c35360de3b7819e6204bc121

In parallel to the explosion of the replication crisis across the sciences, survey research has experienced its own crisis of credibility – and very publicly. Election after election, pre-election polls in recent years in the U.S., Britain, Israel, and elsewhere have been widely viewed as inaccurate. After each failure to accurately predict election outcomes, the survey research profession has implemented a self-study to try to explain its inaccuracies, presumably in order to learn useful lessons for improving practices. And yet inaccuracies have continued unabated. This talk will review the evidence of inaccuracy and propose and test an explanation that has received little attention: that leading survey researchers have all but abandoned well-validated scientific procedures for data collection and data analysis and have misrepresented their procedures as having more scientific integrity than they in fact have. Interestingly, the lessons learned have implications for academic research in the social sciences, in medicine, and in other fields.

 



Upcoming RMME/STAT Colloquium (5/21): David Kaplan, “Developments and Extensions in the Quantification of Model Uncertainty: A Bayesian Perspective”

RMME/STAT Joint Colloquium

Developments and Extensions in the Quantification of Model Uncertainty: A Bayesian Perspective

Dr. David Kaplan
University of Wisconsin-Madison

Friday, May 21st, at 12:00PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m9c9a2619f1a5b404889a0fda12b7a6bc

Issues of model selection have dominated the theoretical and applied statistical literature for decades. Model selection methods such as ridge regression, the lasso and the elastic net have replaced ad hoc methods such as stepwise regression as a means of model selection. In the end, however, these methods lead to a single final model that is often taken to be the model considered ahead of time, thus ignoring the uncertainty inherent in the search for a final model. One method that has enjoyed a long history of theoretical developments and substantive applications, and that accounts directly for uncertainty in model selection, is Bayesian model averaging (BMA). BMA addresses the problem of model selection by not selecting a final model, but rather by averaging over a space of possible models that could have generated the data. The purpose of this paper is to provide a detailed and up-to-date review of BMA with a focus on its foundations in Bayesian decision theory and Bayesian predictive modeling. We consider the selection of parameter and model priors as well as methods for evaluating predictions based on BMA. We also consider important assumptions regarding BMA and extensions of model averaging methods to address these assumptions, particularly the method of Bayesian stacking. Extensions to problems of missing data and probabilistic forecasting in large-scale educational assessments are discussed.
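To make the averaging step concrete, here is a minimal sketch of BMA for linear regression using the standard BIC approximation to posterior model probabilities, p(M_k | y) ∝ exp(-BIC_k / 2). The data, the all-subsets model space, and the BIC approximation are illustrative assumptions; the talk’s treatment of parameter and model priors, decision theory, and Bayesian stacking goes well beyond this.

```python
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Simulated data: y depends on x1 and x2 only; x3 is noise.
n = 200
X = rng.normal(size=(n, 3))
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(size=n)

# Enumerate all subsets of predictors as the model space.
models, bics, preds = [], [], []
for k in range(4):
    for subset in itertools.combinations(range(3), k):
        design = sm.add_constant(X[:, list(subset)]) if subset else np.ones((n, 1))
        fit = sm.OLS(y, design).fit()
        models.append(subset)
        bics.append(fit.bic)
        preds.append(fit.fittedvalues)

# BIC approximation to posterior model probabilities.
bics = np.array(bics)
weights = np.exp(-(bics - bics.min()) / 2)
weights /= weights.sum()

# Model-averaged fitted values: pool predictions across all models.
y_bma = sum(w * p for w, p in zip(weights, preds))
for m, w in zip(models, weights):
    print(f"predictors {m}: posterior prob ~ {w:.3f}")
print("BMA in-sample RMSE:", np.sqrt(np.mean((y - y_bma) ** 2)))
```

Rather than reporting the single best subset, predictions are pooled across all candidate models in proportion to their posterior weight, which is exactly the uncertainty-in-model-selection point the abstract makes.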

 


Upcoming RMME/STAT Colloquium (4/30): Jennifer Hill, “thinkCausal: One Stop Shopping for Answering your Causal Inference Questions”

RMME/STAT Joint Colloquium

thinkCausal: One Stop Shopping for Answering your Causal Inference Questions

Dr. Jennifer Hill
New York University

Friday, April 30th, at 12:00PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m8c032f2f335a1c377fcd8a293df02bbc

Causal inference is a necessary tool in education research for answering pressing and ever-evolving questions around policy and practice. Increasingly, researchers are using more complicated machine learning algorithms to estimate causal effects. These methods take some of the guesswork out of analyses, decrease the opportunity for “p-hacking,” and are often better suited for more fine-tuned causal inference tasks such as identifying varying treatment effects and generalizing results from one population to another. However, these more sophisticated methods are more difficult to understand and are often only accessible in more technical, less user-friendly software packages. The thinkCausal project is working to address these challenges (and more) by developing a highly scaffolded multi-purpose causal inference software package with the BART predictive algorithm as a foundation. The software will scaffold the researcher through the data analytic process and provide options to access technology-based teaching tools for understanding foundational concepts in causal inference and machine learning. This talk will briefly review BART for causal inference and then discuss the challenges and opportunities in building this type of tool. This is work in progress, and the goal is to start a conversation about the tool and about the role of education in data analysis software more broadly.
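The core estimation idea is easy to sketch: fit a single flexible response surface f(x, z) and contrast its predictions with the treatment indicator toggled on and off. The sketch below uses scikit-learn’s GradientBoostingRegressor as a stand-in for BART (which would additionally provide posterior uncertainty intervals); the simulated data and effect sizes are assumptions for illustration, not thinkCausal itself.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)

# Simulated observational data with a heterogeneous treatment effect.
n = 2000
x = rng.normal(size=(n, 2))
z = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))       # confounded treatment
tau = 1.0 + x[:, 1]                                   # effect varies with x2
y = x.sum(axis=1) + tau * z + rng.normal(size=n)

# Fit one response surface f(x, z), then contrast z=1 vs z=0 predictions.
model = GradientBoostingRegressor().fit(np.column_stack([x, z]), y)
ite = (model.predict(np.column_stack([x, np.ones(n)]))
       - model.predict(np.column_stack([x, np.zeros(n)])))
print("estimated ATE:", ite.mean(), "(truth: 1.0)")
```

The per-observation contrasts (ite) are what make “identifying varying treatment effects” possible; averaging them gives the overall effect.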

 


Upcoming RMME/STAT Colloquium (4/23): Jean-Paul Fox, “Bayesian Covariance Structure Modeling: An Overview and New Developments”

RMME/STAT Joint Colloquium

Bayesian Covariance Structure Modeling: An Overview and New Developments

Dr. Jean-Paul Fox
University of Twente

Friday, April 23rd, at 2:00PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m51820f42c5c0cf72fc3979c5bccd49a2

There is a large family of statistical models for understanding clustered or hierarchical structures in data (e.g., multilevel models, mixed effect models, random effect models). The general modeling technique is to use a latent variable (i.e., random effect, frailty parameter) to describe the covariance among clustered observations, where the strength of the covariance is represented by the latent variable variance. This approach has several disadvantages. It can only describe positive within-cluster correlation (similarity), not dissimilarity (Nielsen et al., 2021). Sample size restrictions and model complexity are often implied by the number and type of latent variables. Furthermore, the latent variable variance is restricted to be positive, which leads to boundary issues at/around zero and statistical issues in evaluating data in support of a latent variable. A new approach for modeling clustered data is Bayesian covariance structure modeling (BCSM), in which the dependence structure is modeled directly through a structured covariance matrix. BCSM has been developed for various applications and complex dependence structures (Fox et al., 2017; Klotzke and Fox, 2019a, 2019b; Mulder and Fox, 2019). This presentation gives an overview of BCSM and discusses several applications/new developments: (1) BCSM for measurement invariance testing (Fox et al., 2020); (2) BCSM for identifying negative within-cluster correlation and personalized (treatment) effects in counseling; and (3) BCSM for interval-censored, clustered, event-time data from a three-armed randomized clinical trial investigating coronary intervention. This talk discusses prior specification, the multiple-hypothesis-testing problem, and computational demands.
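A minimal numpy sketch of the central idea, under assumed parameter values: instead of inducing within-cluster covariance through a random intercept with nonnegative variance, model the cluster covariance matrix directly as Σ = σ²I + τJ, where τ may be negative (down to -σ²/n for a cluster of size n), so within-cluster dissimilarity becomes expressible. This is an illustration of the covariance structure only, not the Bayesian machinery of BCSM.

```python
import numpy as np

def cluster_cov(n, sigma2, tau):
    """Compound-symmetry covariance Sigma = sigma2*I + tau*J for one
    cluster of size n. In a random-intercept model tau is a variance
    and must be >= 0; modeling Sigma directly also allows tau < 0."""
    return sigma2 * np.eye(n) + tau * np.ones((n, n))

n, sigma2 = 5, 1.0
for tau in (0.4, -0.15):
    S = cluster_cov(n, sigma2, tau)
    eigvals = np.linalg.eigvalsh(S)
    corr = tau / (sigma2 + tau)  # implied within-cluster correlation
    print(f"tau={tau:+.2f}: within-cluster corr={corr:+.3f}, "
          f"positive definite: {eigvals.min() > 0}")
# tau is bounded below by -sigma2/n: the smallest eigenvalue is sigma2 + n*tau.
```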

 


Upcoming RMME/STAT Colloquium (4/16): Susan Paddock, “Causal Inference Under Interference in Dynamic Therapy Group Studies”

RMME/STAT Joint Colloquium

Causal Inference Under Interference in Dynamic Therapy Group Studies

Dr. Susan Paddock
NORC at the University of Chicago

Friday, April 16th, at 12:00PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=mdbdf7c1935ee0cdcc88e0a90573ea2fc

Group therapy is a common treatment modality for behavioral health conditions. Patients often enter and exit groups on an ongoing basis, leading to dynamic therapy groups. Examining the effect of high versus low session attendance on patient outcomes is of interest. However, there are several challenges to identifying causal effects in this setting, including the lack of randomization, interference among patients, and the interrelatedness of patient participation. Dynamic therapy groups motivate a unique causal inference scenario, as the treatment statuses are completely defined by the patient attendance record for the therapy session, which is also the structure inducing interference. We adopt the Rubin Causal Model framework to define the causal effect of high versus low session attendance of group therapy at both the individual patient and peer levels. We propose a strategy to identify individual, peer, and total effects of high versus low attendance on patient outcomes via prognostic score stratification. We examine the performance of our approach via simulation, apply it to data from a group cognitive behavioral therapy trial for reducing depressive symptoms among patients in a substance use disorders treatment setting, and discuss the strengths and limitations of this approach.
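The prognostic score stratification step can be sketched as follows, with simulated data and a linear prognostic model as assumptions: fit an outcome model among low-attendance patients only, stratify everyone on its predictions, and contrast attendance groups within strata. This deliberately omits the interference and peer-effect structure that is the heart of the talk.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)

# Simulated patients: covariates, attendance status, and outcome.
n = 1000
x = rng.normal(size=(n, 3))
high = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))   # confounded attendance
y = x.sum(axis=1) + 0.5 * high + rng.normal(size=n)  # true effect 0.5

# 1. Fit the prognostic model on low-attendance patients only.
prog = LinearRegression().fit(x[high == 0], y[high == 0])

# 2. Stratify everyone by predicted control outcome (prognostic score).
df = pd.DataFrame({"y": y, "high": high, "score": prog.predict(x)})
df["stratum"] = pd.qcut(df["score"], q=5, labels=False)

# 3. Contrast high vs. low attendance within strata, then pool.
means = df.groupby(["stratum", "high"])["y"].mean().unstack("high")
effects = means[1] - means[0]
print(f"stratified estimate: {effects.mean():.2f} (truth: 0.5)")
```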

 


Upcoming RMME/STAT Colloquium (3/26): David Dunson, “Bayesian Pyramids: Identifying Interpretable Deep Structure Underlying High-dimensional Data”

RMME/STAT Joint Colloquium

Bayesian Pyramids: Identifying Interpretable Deep Structure Underlying High-dimensional Data

Dr. David Dunson
Duke University

Friday, March 26th, at 12:00PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m09a58d2d0b8f3973e89583e46454fbfa

High-dimensional categorical data are routinely collected in biomedical and social sciences. It is of great importance to build interpretable models that perform dimension reduction and uncover meaningful latent structures from such discrete data. Identifiability is a fundamental requirement for valid modeling and inference in such scenarios yet is challenging to address when there are complex latent structures. We propose a class of interpretable discrete latent structure models for discrete data and develop a general identifiability theory. Our theory is applicable to various types of latent structures, ranging from a single latent variable to deep layers of latent variables organized in a sparse graph (termed a Bayesian pyramid). The proposed identifiability conditions can ensure Bayesian posterior consistency under suitable priors. As an illustration, we consider the two-latent-layer model and propose a Bayesian shrinkage estimation approach. Simulation results for this model corroborate identifiability and estimability of the model parameters. Applications of the methodology to DNA nucleotide sequence data uncover discrete latent features that are both interpretable and highly predictive of sequence types. The proposed framework provides a recipe for interpretable unsupervised learning of discrete data and can be a useful alternative to popular machine learning methods.
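For intuition about the simplest member of this model family, the sketch below fits a single-latent-layer model (a two-class latent class model for binary data) by maximum likelihood EM. A Bayesian pyramid stacks such discrete latent layers in a sparse graph and performs fully Bayesian inference with shrinkage priors, so this frequentist toy is a simplification for orientation, not the paper’s method; the simulated data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated binary data from a 2-class latent class model:
# the simplest discrete latent structure.
n, d = 1000, 6
true_pi = np.array([0.6, 0.4])                  # class proportions
true_theta = np.array([[0.9] * d, [0.2] * d])   # P(x_j = 1 | class)
z = rng.choice(2, size=n, p=true_pi)
X = rng.binomial(1, true_theta[z])

# EM for the latent class model.
pi = np.array([0.5, 0.5])
theta = rng.uniform(0.3, 0.7, size=(2, d))
for _ in range(200):
    # E-step: posterior responsibility of each class for each row.
    log_post = (X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
                + np.log(pi))
    log_post -= log_post.max(axis=1, keepdims=True)  # numerical stability
    resp = np.exp(log_post)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: update proportions and per-class item probabilities.
    pi = resp.mean(axis=0)
    theta = (resp.T @ X) / resp.sum(axis=0)[:, None]
    theta = theta.clip(1e-6, 1 - 1e-6)

print("estimated class proportions:", np.round(np.sort(pi), 2))
```

Identifiability is exactly the issue this toy sweeps under the rug: up to label switching (handled here by sorting), the parameters are recoverable only under conditions of the kind the talk’s general theory establishes.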

 
