Upcoming Events

Upcoming RMME Evaluation Colloquium (11/19): Holli Bayonas, “Behind the Evaluation: Holli Bayonas”

RMME Evaluation Colloquium

Behind the Evaluation: Holli Bayonas

Dr. Holli Bayonas
iEvaluate, LLC

Friday, November 19th, at 12:00PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m83cfb05ec06ab0e6aecf026ad3e414f6

This colloquium gives participants an inside look at one evaluator’s pathway into the profession. Dr. Bayonas will describe her personal career trajectory, along with the day-to-day responsibilities of her current position at iEvaluate. She will compare her experiences working in industry with working for herself as an independent evaluation consultant. In addition, Dr. Bayonas will discuss her approach to balancing professional goals with the demands of home life, including how she and her partner navigated prioritizing and supporting each other’s career aspirations. She will close the talk with career and personal advice for her younger self.

 

Upcoming RMME/STAT Colloquium (11/5): Jerry Reiter, “How Auxiliary Information Can Help Your Missing Data Problem”

RMME/STAT Joint Colloquium

How Auxiliary Information Can Help Your Missing Data Problem

Dr. Jerry Reiter
Duke University

Friday, November 5th, at 12:00PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m86ce051dbd968c3317ff09c343d31f40

Many surveys (and other types of databases) suffer from unit and item nonresponse. Typical practice accounts for unit nonresponse by inflating respondents’ survey weights, and accounts for item nonresponse using some form of imputation. Most methods implicitly treat both sources of nonresponse as missing at random. Sometimes, however, one knows information about the marginal distributions of some of the variables subject to missingness. In this talk, I discuss how such information can be leveraged to handle nonignorable missing data, including allowing different mechanisms for unit and item nonresponse (e.g., nonignorable unit nonresponse and ignorable item nonresponse). I illustrate the methods using data on voter turnout from the Current Population Survey.
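The machinery is left for the talk itself, but the basic role of an auxiliary margin is easy to illustrate: if the population distribution of a variable is known (say, true turnout from administrative records), respondents’ weights can be post-stratified so the weighted margin matches it, correcting nonresponse bias tied to that variable. A minimal Python sketch with entirely hypothetical numbers, not the models from the talk:

```python
import numpy as np

# Hypothetical survey: voting is over-represented among respondents,
# but the true population turnout margin is known from an external source.
rng = np.random.default_rng(0)
reported = rng.binomial(1, 0.75, size=1000)  # respondent turnout ~ 75%
known_margin = 0.60                          # known population turnout

base_weight = np.ones(reported.size)         # design weights (uniform here)

# Post-stratify: rescale weights within each category so the weighted
# distribution of `reported` matches the known margin.
p = np.average(reported, weights=base_weight)
w = np.where(reported == 1,
             base_weight * known_margin / p,
             base_weight * (1 - known_margin) / (1 - p))

# The weighted turnout estimate now equals the known margin (0.60).
print(np.average(reported, weights=w))
```

The approaches in the talk are more general than this recalibration: per the abstract, the auxiliary information is used to handle nonignorable missingness, with different mechanisms allowed for unit and item nonresponse.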

 

Upcoming RMME/STAT Colloquium (10/1): Fan Li, “Overlap Weighting for Causal Inference”

RMME/STAT Joint Colloquium

Overlap Weighting for Causal Inference

Dr. Fan Li
Duke University

Friday, October 1st, at 12:00PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=ma4999c9bf3ac28d40a9686eec33d70ed

Covariate balance is crucial for causal comparisons. Weighting is a common strategy to balance covariates in observational studies. We propose a general class of weights—the balancing weights—that balance the weighted distributions of the covariates between treatment groups. These weights incorporate the propensity score to weight each group to an analyst-selected target population. This class unifies existing weighting methods, including commonly used weights such as inverse-probability weights, as special cases. Within the class, we highlight the overlap weighting method, which has been widely adopted in applied research. The overlap weight of each unit is proportional to the probability of that unit being assigned to the opposite group. The overlap weights are bounded and minimize the asymptotic variance of the weighted average treatment effect among the class of balancing weights. The overlap weights also possess a desirable exact balance property. Extensions of overlap weighting to multiple treatments, survival outcomes, and subgroup analysis will also be discussed.
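The defining rule in the abstract translates directly into code: a treated unit receives weight 1 − e(x) and a control unit receives weight e(x), where e(x) is the propensity score. A minimal numpy sketch on simulated data; the propensity scores are taken as known here for brevity, whereas in practice they are estimated (typically by logistic regression, under which the exact balance property holds):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
e = rng.uniform(0.05, 0.95, size=n)   # propensity scores e(x), assumed known
z = rng.binomial(1, e)                # treatment assignment
y = 2.0 * z + rng.normal(size=n)      # outcome; true effect is 2

# Overlap weights: each unit's probability of assignment
# to the *opposite* group.
w = np.where(z == 1, 1.0 - e, e)

# Weighted difference in means estimates the average treatment
# effect in the overlap population (ATO).
ato = (np.average(y[z == 1], weights=w[z == 1])
       - np.average(y[z == 0], weights=w[z == 0]))
print(ato)  # close to 2
```

Because the weights are bounded by 1, units with extreme propensity scores cannot inflate the variance the way inverse-probability weights can.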

Upcoming RMME/STAT Colloquium (9/10): Susan Murphy, “Assessing Personalization in Digital Health”

RMME/STAT Joint Colloquium

Assessing Personalization in Digital Health

Dr. Susan Murphy
Harvard University

Friday, September 10th, at 12:00PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m883b79a16b8b2c21038a80da6301cba3

Reinforcement learning provides an attractive suite of online learning methods for personalizing interventions in digital health. However, after a reinforcement learning algorithm has been run in a clinical study, how do we assess whether personalization occurred? We might find users for whom the algorithm appears to have learned the contexts in which they are more responsive to a particular intervention. But could this have happened completely by chance? I discuss some first approaches to addressing these questions.
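The abstract does not name the approaches, so as a purely illustrative sketch (not the methods from the talk): one generic way to frame “could this have happened by chance?” is a permutation test, shuffling context labels to see how often an apparent context effect as large as the observed one arises under the null of no personalization. All names and data below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical log for one user: a binary context and the algorithm's
# probability of sending the intervention at each decision time.
context = rng.binomial(1, 0.5, size=200)
p_send = 0.5 + 0.2 * context + rng.normal(0, 0.05, size=200)

def context_effect(ctx, p):
    # Apparent personalization: difference in mean send-probability
    # between the two contexts.
    return p[ctx == 1].mean() - p[ctx == 0].mean()

observed = context_effect(context, p_send)

# Null: the algorithm ignored context, so relabeling contexts
# should not change the statistic systematically.
perm = np.array([context_effect(rng.permutation(context), p_send)
                 for _ in range(5000)])
p_value = np.mean(np.abs(perm) >= abs(observed))
print(observed, p_value)
```

A real analysis would have to respect the sequential dependence an online RL algorithm induces across decision times, which is part of what makes the question difficult.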

 

Upcoming RMME/STAT Colloquium (6/18): Jon Krosnick, “The Collapse of Scientific Standards in the World of High Visibility Survey Research”

RMME/STAT Joint Colloquium

The Collapse of Scientific Standards in the World of High Visibility Survey Research

Dr. Jon Krosnick
Stanford University

Friday, June 18th, at 12:00PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m6b0af866c35360de3b7819e6204bc121

In parallel with the explosion of the replication crisis across the sciences, survey research has experienced its own, very public, crisis of credibility. Election after election in recent years, pre-election polls in the U.S., Britain, Israel, and elsewhere have been widely viewed as inaccurate. After each failure to predict election outcomes accurately, the survey research profession has conducted a self-study to try to explain its inaccuracies, presumably in order to learn useful lessons for improving practices. And yet the inaccuracies have continued unabated. This talk will review the evidence of inaccuracy, then propose and test an explanation that has received little attention: that leading survey researchers have all but abandoned well-validated scientific procedures for data collection and analysis, and have misrepresented their procedures as having more scientific integrity than they in fact have. Interestingly, the lessons learned have implications for academic research in the social sciences, in medicine, and in other fields.

 

Upcoming RMME/STAT Colloquium (5/21): David Kaplan, “Developments and Extensions in the Quantification of Model Uncertainty: A Bayesian Perspective”

RMME/STAT Joint Colloquium

Developments and Extensions in the Quantification of Model Uncertainty: A Bayesian Perspective

Dr. David Kaplan
University of Wisconsin-Madison

Friday, May 21st, at 12:00PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m9c9a2619f1a5b404889a0fda12b7a6bc

Issues of model selection have dominated the theoretical and applied statistical literature for decades. Model selection methods such as ridge regression, the lasso, and the elastic net have replaced ad hoc approaches such as stepwise regression as a means of model selection. In the end, however, these methods yield a single final model that is often treated as though it had been the model specified ahead of time, ignoring the uncertainty inherent in the search for a final model. One method that has enjoyed a long history of theoretical development and substantive application, and that accounts directly for uncertainty in model selection, is Bayesian model averaging (BMA). BMA addresses the problem of model selection not by selecting a final model, but by averaging over a space of possible models that could have generated the data. The purpose of this talk is to provide a detailed and up-to-date review of BMA, with a focus on its foundations in Bayesian decision theory and Bayesian predictive modeling. We consider the selection of parameter and model priors, as well as methods for evaluating predictions based on BMA. We also consider important assumptions underlying BMA and extensions of model averaging methods that address those assumptions, particularly the method of Bayesian stacking. Extensions to problems of missing data and probabilistic forecasting in large-scale educational assessments are discussed.
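As a concrete, minimal illustration of “averaging over a space of possible models”: the sketch below weights three candidate regressions by BIC-based approximations to their posterior model probabilities (a standard large-sample shortcut, not necessarily the priors or predictive criteria discussed in the talk) and averages their predictions. The model space and data are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
x1, x2 = rng.normal(size=(2, n))
y = 1.0 + 0.5 * x1 + rng.normal(size=n)      # x2 is pure noise

# Hypothetical model space: three nested linear regressions.
designs = {
    "intercept": np.ones((n, 1)),
    "x1":        np.column_stack([np.ones(n), x1]),
    "x1+x2":     np.column_stack([np.ones(n), x1, x2]),
}

bics, preds = {}, {}
for name, X in designs.items():
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = ((y - X @ beta) ** 2).sum()
    k = X.shape[1] + 1                       # coefficients + error variance
    bics[name] = n * np.log(rss / n) + k * np.log(n)
    preds[name] = X @ beta

# BIC weights approximate posterior model probabilities
# (assuming equal prior probability for each model).
b = np.array(list(bics.values()))
w = np.exp(-0.5 * (b - b.min()))
w /= w.sum()

# BMA prediction: model predictions averaged by their weights.
y_bma = sum(wi * preds[name] for wi, name in zip(w, preds))
print(dict(zip(bics, np.round(w, 3))))
```

Most of the weight typically lands on the correctly specified “x1” model, and the averaged prediction carries the residual uncertainty about whether x2 belongs in the model, rather than discarding it as a single selected model would.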

 
