Upcoming Events

Upcoming RMME/STAT Colloquium (9/9): Kosuke Imai, “Experimental Evaluation of Algorithm-Assisted Human Decision-Making: Application to Pretrial Public Safety Assessment”

RMME/STAT Joint Colloquium

Experimental Evaluation of Algorithm-Assisted Human Decision-Making: Application to Pretrial Public Safety Assessment

Dr. Kosuke Imai
Harvard University

Friday, September 9, at 11:00AM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m486f7b13e6881ba895b350f338b0c90d

Despite an increasing reliance on fully automated algorithmic decision-making in our day-to-day lives, human beings still make highly consequential decisions. As frequently seen in business, healthcare, and public policy, recommendations produced by algorithms are provided to human decision-makers to guide their decisions. While there exists a fast-growing literature evaluating the bias and fairness of such algorithmic recommendations, an overlooked question is whether they help humans make better decisions. We develop a general statistical methodology for experimentally evaluating the causal impacts of algorithmic recommendations on human decisions. We also show how to examine whether algorithmic recommendations improve the fairness of human decisions and derive the optimal decision rules under various settings. We apply the proposed methodology to preliminary data from the first-ever randomized controlled trial that evaluates the pretrial Public Safety Assessment (PSA) in the criminal justice system. A goal of the PSA is to help judges decide which arrested individuals should be released. On the basis of the preliminary data available, we find that providing the PSA to the judge has little overall impact on the judge’s decisions and subsequent arrestee behavior. Our analysis, however, yields some potentially suggestive evidence that the PSA may help avoid unnecessarily harsh decisions for female arrestees regardless of their risk levels, while it encourages the judge to make stricter decisions for male arrestees who are deemed to be risky. In terms of fairness, the PSA appears to increase an existing gender difference while having little effect on any racial differences in judges’ decisions. Finally, we find that the PSA’s recommendations might be unnecessarily severe unless the cost of a new crime is sufficiently high.
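Because PSA provision is randomized in the trial the abstract describes, the overall causal impact on a judge's decision can be estimated with a simple difference in means. The sketch below illustrates that idea on simulated data; the variable names, sample size, and effect size are all illustrative, not figures from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: cases are randomly assigned to have the PSA
# provided to the judge (z = 1) or withheld (z = 0). The outcome d is
# the judge's decision (1 = lenient, 0 = strict). All numbers simulated.
n = 1000
z = rng.integers(0, 2, size=n)            # randomized PSA provision
d = rng.binomial(1, 0.60 + 0.02 * z)      # decisions, small simulated effect

# By randomization, the difference in means is an unbiased estimate of
# the average causal effect of providing the PSA on the decision.
ate = d[z == 1].mean() - d[z == 0].mean()
se = np.sqrt(d[z == 1].var(ddof=1) / (z == 1).sum()
             + d[z == 0].var(ddof=1) / (z == 0).sum())
print(f"estimated effect: {ate:.3f} (SE {se:.3f})")
```

The paper's full methodology goes well beyond this, e.g. to subgroup effects (by gender and risk level) and fairness comparisons, but each builds on contrasts of this form.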

 


RMME Instructor, Dr. Leslie Fierro, Serves as Evaluation Panelist

On June 2, 2022, Dr. Leslie Fierro (RMME Instructor and Co-Editor of New Directions for Evaluation) contributed to a panel session entitled, “Issues in Evaluation: Surveying the Evaluation Policy Landscape in 2022”. The Government Accountability Office (GAO) and Data Foundation co-sponsored this webinar in which panelists discussed the state of evaluation policy today. Visit this website and register to watch the recording of this excellent webinar for free! Congratulations on this work, Dr. Fierro!

Upcoming RMME/STAT Colloquium (4/29): Luke Keele, “Approximate Balancing Weights for Clustered Observational Study Designs”

RMME/STAT Joint Colloquium

Approximate Balancing Weights for Clustered Observational Study Designs

Dr. Luke Keele
University of Pennsylvania

Friday, April 29, at 3:00PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m35b82d4dc6d3e77536aa48390a02485b

In a clustered observational study, a treatment is assigned to groups and all units within the group are exposed to the treatment. Clustered observational studies are common in education, where treatments are given to all students within some schools but withheld from all students in other schools. Clustered observational studies require specialized methods to adjust for observed confounders. Extant work has developed specialized matching methods that take key elements of clustered treatment assignment into account. Here, we develop a new method for statistical adjustment in clustered observational studies using approximate balancing weights. An approach based on approximate balancing weights improves on extant matching methods in several ways. First, our methods highlight the possible need to account for differential selection into clusters. Second, we can automatically balance interactions between unit-level and cluster-level covariates. Third, we can also balance higher-order moments of key cluster-level covariates. We also outline an overlap weights approach for cases where common support across treated and control clusters is poor. We introduce an augmented estimator that accounts for outcome information. We show that our approach has a dual representation as an inverse propensity score weighting estimator based on a hierarchical propensity score model. We apply this algorithm to assess a school-based intervention through which students in treated schools were exposed to a new reading program during summer school. Overall, we find that balancing weights tend to produce superior balance relative to extant matching methods. Moreover, an approximate balancing weight approach tends to require less input from the user to achieve high levels of balance.
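The core of the balancing-weights idea in the abstract is to choose control-cluster weights so that weighted control covariate means match the treated means. The minimal sketch below imposes exact mean balance with minimum-dispersion weights on simulated data; the paper's method instead allows approximate balance with tolerances, and all dimensions and values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated cluster-level covariates (e.g., school size, poverty rate,
# baseline score). Treated clusters are shifted relative to controls.
X_t = rng.normal(0.5, 1.0, size=(20, 3))   # treated-cluster covariates
X_c = rng.normal(0.0, 1.0, size=(80, 3))   # control-cluster covariates
target = X_t.mean(axis=0)

# Constraints: X_c^T w = target (mean balance) and sum(w) = 1.
C = np.vstack([X_c.T, np.ones(len(X_c))])
d = np.append(target, 1.0)

# Minimum-norm weights satisfying the constraints: w = C^T (C C^T)^{-1} d.
# (These can be negative; practical methods add sign and size penalties.)
w = C.T @ np.linalg.solve(C @ C.T, d)

print(np.abs(X_c.T @ w - target).max())    # balance gap near zero
```

Balancing interactions or higher-order moments, as the abstract describes, amounts to adding rows to the constraint matrix `C` (e.g., products or squares of covariates).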

 


Upcoming RMME Evaluation Colloquium (4/1): Cassandra Myers & Joan Levine, “UConn’s Institutional Review Board: A Closer Look at Ethics in Research & Evaluation”

RMME Evaluation Colloquium

UConn’s Institutional Review Board: A Closer Look at Ethics in Research & Evaluation

Cassandra Myers, The HRP Consulting Group
Joan Levine, University of Connecticut

Friday, April 1, at 3:00PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=me8fe20df2d511c754f1bd3f3539991b4

The UConn-Storrs Human Research Protection Program (HRPP) is dedicated to the protection of human subjects in research activities conducted under its auspices. The HRPP reviews human subjects research to ensure appropriate safeguards for the ethical, compliant, and safe conduct of research, as well as the protection of the rights and welfare of the human subjects who volunteer to participate. Because the regulatory framework for the protection of human subjects is complex and multi-faceted, this session’s goals are to review that framework and how it applies to research and evaluation, the requirements for consent, when consent can be waived, and how to navigate the IRB process at UConn. This session will also review historical case studies to understand current requirements and how these events still affect populations, policies, and regulations.

 


Upcoming RMME/STAT Colloquium (3/25): Elizabeth Stuart, “Combining Experimental and Population Data to Estimate Population Treatment Effects”

RMME/STAT Joint Colloquium

Combining Experimental and Population Data to Estimate Population Treatment Effects

Dr. Elizabeth Stuart
Johns Hopkins Bloomberg School of Public Health

Friday, March 25, at 3:00PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=mb26cc940795502d8ae9ff7e274d435bb

With increasing attention being paid to the relevance of studies for real-world practice (especially in comparative effectiveness research), there is also growing interest in external validity and assessing whether the results seen in randomized trials would hold in target populations. While randomized trials yield unbiased estimates of the effects of interventions in the sample of individuals in the trial, they do not necessarily inform what the effects would be in some other, potentially somewhat different, population. While there has been increasing discussion of this limitation of traditional trials, relatively little statistical work has been done developing methods to assess or enhance the external validity of randomized trial results. In addition, new “big data” resources offer the opportunity to utilize data on broad target populations. This talk will discuss design and analysis methods for assessing and increasing external validity, as well as general issues that need to be considered when thinking about external validity. The primary analysis approach discussed will be a reweighting approach that equates the sample and target population on a set of observed characteristics. Underlying assumptions and methods to assess robustness to violation of those assumptions will be discussed. Implications for how future studies should be designed in order to enhance the ability to assess generalizability will also be discussed.
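The reweighting approach the abstract mentions can be illustrated with a single observed characteristic: weight trial units so their distribution of that trait matches the target population, then take a weighted treatment-control contrast. The sketch below uses simulated data and post-stratification on one binary covariate; real applications use many covariates, often via a model of trial participation, and every number here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# A binary trait x is over-represented in the trial relative to the
# target population, and the treatment effect varies with x.
n_trial, n_pop = 500, 10_000
x_trial = rng.binomial(1, 0.7, n_trial)    # trait common in trial
x_pop = rng.binomial(1, 0.3, n_pop)        # trait rarer in population
z = rng.integers(0, 2, n_trial)            # randomized treatment in trial
y = 1.0 + z * (0.5 + 0.5 * x_trial) + rng.normal(0, 1, n_trial)

# Post-stratification weights: population share / trial share per stratum.
w = np.where(x_trial == 1,
             x_pop.mean() / x_trial.mean(),
             (1 - x_pop.mean()) / (1 - x_trial.mean()))

# Weighted difference in means targets the *population* average effect.
# True population effect in this simulation: 0.5 + 0.5 * 0.3 = 0.65,
# versus roughly 0.85 for the unweighted trial sample.
pate = (np.average(y[z == 1], weights=w[z == 1])
        - np.average(y[z == 0], weights=w[z == 0]))
print(f"reweighted effect estimate: {pate:.2f}")
```

The key (untestable) assumption, which the talk addresses, is that the observed characteristics capture all effect modifiers that differ between sample and population.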

 


RMME Instructor, Ummugul Bezirhan, Earns 2022 Dissertation Prize!

Congratulations to RMME instructor, Ummugul Bezirhan! She recently earned the Psychometric Society’s 2022 Dissertation Prize for her research entitled, “Conditional dependence between response time and accuracy in cognitive diagnostic models”. She will present this work as a keynote speaker at the upcoming International Meeting of the Psychometric Society (IMPS), which will be held from July 11-15, 2022, at the University of Bologna, in Bologna, Italy. See this Psychometric Society announcement for more information.

We are so thrilled to celebrate Dr. Bezirhan’s fantastic accomplishment. Congratulations, Gul!

 

RESCHEDULED RMME/STAT Colloquium (3/4): Donald Hedeker, “Shared Parameter Mixed-Effects Location Scale Models for Intensive Longitudinal Data”

RMME/STAT Joint Colloquium

Shared Parameter Mixed-Effects Location Scale Models for Intensive Longitudinal Data

Dr. Donald Hedeker
University of Chicago

Friday, March 4, at 3:00PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m6944095dfb2736dba214a9c6f6397805

Intensive longitudinal data are increasingly encountered in many research areas. For example, ecological momentary assessment (EMA) and/or mobile health (mHealth) methods are often used to study subjective experiences within changing environmental contexts. In these studies, up to 30 or 40 observations are usually obtained for each subject over a period of a week or so, allowing one to characterize a subject’s mean and variance and specify models for both. In this presentation, we focus on an adolescent smoking study using EMA where interest is on characterizing changes in mood variation. We describe how covariates can influence the mood variances and also extend the statistical model by adding a subject-level random effect to the within-subject variance specification. This permits subjects to have influence on the mean, or location, and variability, or (square of the) scale, of their mood responses. The random effects are then shared in a modeling of future smoking levels. These mixed-effects location scale models have useful applications in many research areas where interest centers on the joint modeling of the mean and variance structure.
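The model structure the abstract describes can be made concrete with a small simulation: each subject gets a random location effect shifting their mean mood and a random scale effect shifting their within-subject log-variance. The parameter values below are illustrative, not estimates from the adolescent smoking study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Mixed-effects location scale structure:
#   y_ij = mu + u_i + e_ij,   e_ij ~ N(0, exp(tau + v_i))
# u_i is the subject's random *location* effect; v_i is the random
# *scale* effect on the log within-subject variance. In the shared
# parameter model, (u_i, v_i) also enter a submodel for later smoking.
n_subj, n_obs = 200, 35            # ~30-40 EMA prompts per subject
mu, tau = 5.0, 0.0
u = rng.normal(0, 1.0, n_subj)     # location random effects
v = rng.normal(0, 0.8, n_subj)     # scale random effects
e = rng.normal(0, 1, (n_subj, n_obs)) * np.exp((tau + v[:, None]) / 2)
y = mu + u[:, None] + e

# Subjects now differ in both mean mood and mood volatility.
subj_means = y.mean(axis=1)
subj_vars = y.var(axis=1, ddof=1)
print(subj_means.std(), subj_vars.min(), subj_vars.max())
```

Covariates enter by replacing `mu` and `tau` with linear predictors, which is how the model lets covariates influence the variances as well as the means.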

 


Upcoming RMME/STAT Colloquium (2/25): Donald Hedeker, “Shared Parameter Mixed-Effects Location Scale Models for Intensive Longitudinal Data”

RMME/STAT Joint Colloquium

Shared Parameter Mixed-Effects Location Scale Models for Intensive Longitudinal Data

Dr. Donald Hedeker
University of Chicago

Friday, February 25, at 3:00PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m6944095dfb2736dba214a9c6f6397805

Intensive longitudinal data are increasingly encountered in many research areas. For example, ecological momentary assessment (EMA) and/or mobile health (mHealth) methods are often used to study subjective experiences within changing environmental contexts. In these studies, up to 30 or 40 observations are usually obtained for each subject over a period of a week or so, allowing one to characterize a subject’s mean and variance and specify models for both. In this presentation, we focus on an adolescent smoking study using EMA where interest is on characterizing changes in mood variation. We describe how covariates can influence the mood variances and also extend the statistical model by adding a subject-level random effect to the within-subject variance specification. This permits subjects to have influence on the mean, or location, and variability, or (square of the) scale, of their mood responses. The random effects are then shared in a modeling of future smoking levels. These mixed-effects location scale models have useful applications in many research areas where interest centers on the joint modeling of the mean and variance structure.

 


Upcoming RMME/STAT Colloquium (1/28): Andrew Ho, “Test Validation for a Crisis: Five Practical Heuristics for the Best and Worst of Times”

RMME/STAT Joint Colloquium

Test Validation for a Crisis: Five Practical Heuristics for the Best and Worst of Times

Dr. Andrew Ho
Harvard University

Friday, January 28, at 3:00PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=me0f80ec702d5508cf83ae6a23183fc3d

The COVID-19 pandemic has raised debate about the place of education and testing in a hierarchy of needs. What do tests tell us that other measures do not? Is testing worth the time? Do tests expose or exacerbate inequality? The academic consensus in the open-access AERA/APA/NCME Standards has not seemed to help proponents and critics of tests reach common ground. I propose five heuristics for test validation and demonstrate their usefulness for navigating test policy and test use in a time of crisis: 1) A “four quadrants” framework for purposes of educational tests. 2) The “Five Cs,” a mnemonic for the five types of validity evidence in the Standards. 3) “RTQ,” a mantra reminding test users to read items. 4) The “3 Ws,” a user-first perspective on testing. And 5) the “Two A’s Tradeoff” between Assets and Accountability. I illustrate application of these heuristics to the challenge of reporting aggregate-level test scores when populations and testing conditions change as they have over the pandemic (e.g., An, Ho, & Davis, in press; Ho, 2021). I define and discuss these heuristics in the hope that they increase consensus and improve test use in the best and worst of times.

 


Upcoming RMME/STAT Colloquium (12/10): Jaime Lynn Speiser, “Machine Learning Prediction Modeling for Longitudinal Outcomes in Older Adults”

RMME/STAT Joint Colloquium

Machine Learning Prediction Modeling for Longitudinal Outcomes in Older Adults

Dr. Jaime Lynn Speiser
Wake Forest School of Medicine

Friday, December 10, at 12:00PM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=macf5fd1f3af4a057a735eeefe6e40af0

Prediction models aim to help medical providers, individuals and caretakers make informed, data-driven decisions about risk of developing poor health outcomes, such as fall injury or mobility limitation in older adults. Most models for outcomes in older adults use cross-sectional data, although leveraging repeated measurements of predictors and outcomes over time may result in higher prediction accuracy. This seminar talk will focus on longitudinal risk prediction models for mobility limitation in older adults using the Health, Aging, and Body Composition dataset with a novel machine learning method called Binary Mixed Model (BiMM) forest. I will give an overview of two common machine learning methods, decision tree and random forest, before introducing the BiMM forest method. I will then apply the BiMM forest method for developing prediction models for mobility limitation in older adults.
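As background for the talk's comparison, the sketch below fits a plain random forest to simulated risk-factor data; this ignores the within-subject correlation of repeated measurements, which BiMM forest handles by alternating a forest fit with a mixed-model (random-effects) update. The predictor names, dataset, and coefficients are all hypothetical, not drawn from the Health, Aging, and Body Composition study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)

# Simulated visit-level predictors of mobility limitation.
n = 600
X = np.column_stack([
    rng.normal(75, 5, n),      # e.g., age at visit (years)
    rng.normal(1.0, 0.2, n),   # e.g., gait speed (m/s)
    rng.integers(0, 2, n),     # e.g., prior fall (0/1)
])
logit = -2.0 + 0.05 * (X[:, 0] - 75) - 2.0 * (X[:, 1] - 1.0) + 1.0 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # mobility limitation (0/1)

# A plain random forest: an ensemble of decision trees fit to
# bootstrap samples, averaged to produce a predicted risk.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
risk = model.predict_proba(X)[:, 1]            # predicted risk of limitation
print(risk.min(), risk.max())
```

Leveraging the repeated measurements, as the abstract argues, requires adding a subject-level random effect on top of the forest's fixed-effects fit, which is the step BiMM forest supplies.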

 
