News & Updates

RMME Master’s Student, Daniel Doerr, Secures New Position

Congratulations to Daniel Doerr, a part-time Master’s student in the Research Methods, Measurement, & Evaluation program at UConn! Daniel currently serves as the Director of Student Affairs Planning, Assessment, and Evaluation in the Office of the Vice President for Student Affairs at the University of Connecticut. He recently accepted a new position as an Associate Performance Auditor with the State of Connecticut Auditors of Public Accounts. Daniel will begin his new role on July 16th.

Please join the RMME community as we congratulate Daniel on this career milestone!

RMME Faculty & Students Publish New Article: “Evaluator Education Curriculum: Which Competencies Ought to Be Prioritized in Master’s and Doctoral Programs?”

Congratulations to Bianca Montrosse-Moorhead, Anthony J. Gambino, Laura M. Yahn, Mindy Fan, and Anne T. Vo on their recent publication: “Evaluator Education Curriculum: Which Competencies Ought to Be Prioritized in Master’s and Doctoral Programs?” This article appears in the American Journal of Evaluation (https://doi.org/10.1177/10982140211020326).

For more information, visit: https://journals.sagepub.com/doi/10.1177/10982140211020326

 

Abstract:

A budding area of research is devoted to studying evaluator curriculum, yet to date, it has focused exclusively on describing the content and emphasis of topics or competencies in university-based programs. This study aims to expand the foci of research efforts and investigates the extent to which evaluators agree on what competencies should guide the development and implementation of evaluator education. This study used the Delphi method with evaluators (n = 11) and included three rounds of online surveys and follow-up interviews between rounds. This article discusses on which competencies evaluators were able to reach consensus. Where consensus was not found, possible reasons are offered. Where consensus was found, the necessity of each competency at both the master’s and doctoral levels is described. Findings are situated in ongoing debates about what is unique about what novice evaluators need to know and be able to do and the purpose of evaluator education.

Upcoming RMME/STAT Colloquium (6/18): Jon Krosnick, “The Collapse of Scientific Standards in the World of High Visibility Survey Research”

RMME/STAT Joint Colloquium

The Collapse of Scientific Standards in the World of High Visibility Survey Research

Dr. Jon Krosnick
Stanford University

Friday, June 18th, at 12:00 PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m6b0af866c35360de3b7819e6204bc121

In parallel to the explosion of the replication crisis across the sciences, survey research has experienced its own crisis of credibility – and very publicly. Election after election, pre-election polls in recent years in the U.S., Britain, Israel, and elsewhere have been widely viewed as inaccurate. After each failure to accurately predict election outcomes, the survey research profession has implemented a self-study to try to explain its inaccuracies, presumably in order to learn useful lessons for improving practices. And yet inaccuracies have continued unabated. This talk will review the evidence of inaccuracy and propose and test an explanation that has received little attention: that leading survey researchers have all but abandoned well-validated scientific procedures for data collection and data analysis and have misrepresented their procedures as having more scientific integrity than they in fact have. Interestingly, the lessons learned have implications for academic research in the social sciences, in medicine, and in other fields.

 


Upcoming RMME/STAT Colloquium (5/21): David Kaplan, “Developments and Extensions in the Quantification of Model Uncertainty: A Bayesian Perspective”

RMME/STAT Joint Colloquium

Developments and Extensions in the Quantification of Model Uncertainty: A Bayesian Perspective

Dr. David Kaplan
University of Wisconsin-Madison

Friday, May 21st, at 12:00 PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m9c9a2619f1a5b404889a0fda12b7a6bc

Issues of model selection have dominated the theoretical and applied statistical literature for decades. Model selection methods such as ridge regression, the lasso and the elastic net have replaced ad hoc methods such as stepwise regression as a means of model selection. In the end, however, these methods lead to a single final model that is often taken to be the model considered ahead of time, thus ignoring the uncertainty inherent in the search for a final model. One method that has enjoyed a long history of theoretical developments and substantive applications, and that accounts directly for uncertainty in model selection, is Bayesian model averaging (BMA). BMA addresses the problem of model selection by not selecting a final model, but rather by averaging over a space of possible models that could have generated the data. The purpose of this paper is to provide a detailed and up-to-date review of BMA with a focus on its foundations in Bayesian decision theory and Bayesian predictive modeling. We consider the selection of parameter and model priors as well as methods for evaluating predictions based on BMA. We also consider important assumptions regarding BMA and extensions of model averaging methods to address these assumptions, particularly the method of Bayesian stacking. Extensions to problems of missing data and probabilistic forecasting in large-scale educational assessments are discussed.

 


Upcoming RMME/STAT Colloquium (4/30): Jennifer Hill, “thinkCausal: One Stop Shopping for Answering your Causal Inference Questions”

RMME/STAT Joint Colloquium

thinkCausal: One Stop Shopping for Answering your Causal Inference Questions

Dr. Jennifer Hill
New York University

Friday, April 30th, at 12:00 PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m8c032f2f335a1c377fcd8a293df02bbc

Causal inference is a necessary tool in education research for answering pressing and ever-evolving questions around policy and practice. Increasingly, researchers are using more complicated machine learning algorithms to estimate causal effects. These methods take some of the guesswork out of analyses, decrease the opportunity for “p-hacking,” and are often better suited for more fine-tuned causal inference tasks such as identifying varying treatment effects and generalizing results from one population to another. However, these more sophisticated methods are more difficult to understand and are often only accessible in more technical, less user-friendly software packages. The thinkCausal project is working to address these challenges (and more) by developing a highly scaffolded multi-purpose causal inference software package with the BART predictive algorithm as a foundation. The software will scaffold the researcher through the data analytic process and provide options to access technology-based teaching tools to understand foundational concepts in causal inference and machine learning. This talk will briefly review BART for causal inference and then discuss the challenges and opportunities in building this type of tool. This is work in progress and the goal is to create a conversation about the tool and role of education in data analysis software more broadly.

 


Upcoming RMME/STAT Colloquium (4/16): Susan Paddock, “Causal Inference Under Interference in Dynamic Therapy Group Studies”

RMME/STAT Joint Colloquium

Causal Inference Under Interference in Dynamic Therapy Group Studies

Dr. Susan Paddock
NORC at the University of Chicago

Friday, April 16th, at 12:00 PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=mdbdf7c1935ee0cdcc88e0a90573ea2fc

Group therapy is a common treatment modality for behavioral health conditions. Patients often enter and exit groups on an ongoing basis, leading to dynamic therapy groups. Examining the effect of high versus low session attendance on patient outcomes is of interest. However, there are several challenges to identifying causal effects in this setting, including the lack of randomization, interference among patients, and the interrelatedness of patient participation. Dynamic therapy groups motivate a unique causal inference scenario, as the treatment statuses are completely defined by the patient attendance record for the therapy session, which is also the structure inducing interference. We adopt the Rubin Causal Model framework to define the causal effect of high versus low session attendance of group therapy at both the individual patient and peer levels. We propose a strategy to identify individual, peer, and total effects of high attendance versus low attendance on patient outcomes by the prognostic score stratification. We examine performance of our approach via simulation, apply it to data from a group cognitive behavioral therapy trial for reducing depressive symptoms among patients in a substance use disorders treatment setting, and discuss the strengths and limitations of this approach.

 


Upcoming RMME/STAT Colloquium (3/26): David Dunson, “Bayesian Pyramids: Identifying Interpretable Deep Structure Underlying High-dimensional Data”

RMME/STAT Joint Colloquium

Bayesian Pyramids: Identifying Interpretable Deep Structure Underlying High-dimensional Data

Dr. David Dunson
Duke University

Friday, March 26th, at 12:00 PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m09a58d2d0b8f3973e89583e46454fbfa

High-dimensional categorical data are routinely collected in biomedical and social sciences. It is of great importance to build interpretable models that perform dimension reduction and uncover meaningful latent structures from such discrete data. Identifiability is a fundamental requirement for valid modeling and inference in such scenarios yet is challenging to address when there are complex latent structures. We propose a class of interpretable discrete latent structure models for discrete data and develop a general identifiability theory. Our theory is applicable to various types of latent structures, ranging from a single latent variable to deep layers of latent variables organized in a sparse graph (termed a Bayesian pyramid). The proposed identifiability conditions can ensure Bayesian posterior consistency under suitable priors. As an illustration, we consider the two-latent-layer model and propose a Bayesian shrinkage estimation approach. Simulation results for this model corroborate identifiability and estimability of the model parameters. Applications of the methodology to DNA nucleotide sequence data uncover discrete latent features that are both interpretable and highly predictive of sequence types. The proposed framework provides a recipe for interpretable unsupervised learning of discrete data and can be a useful alternative to popular machine learning methods.

 


CNN Features Program Alumna, Dr. Karen Rambo-Hernandez

A recent news story on CNN.com features Dr. Karen Rambo-Hernandez, an MEA (RMME) graduate and current Associate Professor in the Department of Teaching, Learning, and Culture at Texas A&M University. The story details the kindness of neighbors who worked together and helped one another through a brutal winter storm that left millions of Texans without power and in the cold. It warms our hearts and reminds us all what makes the RMME community special…its members!

See here for the full story:

https://www.cnn.com/2021/02/18/us/neighbors-helping-texas-winter-storm-trnd/index.html