Upcoming Events

Bengt Muthén To Present M3 Pre-Conference Workshop on June 26

Drs. Bengt Muthén, Tihomir Asparouhov, & Ellen Hamaker will present a full-day pre-conference workshop at the 2023 Modern Modeling Methods Conference (see details below). Register soon because early-bird conference registration ends May 31, and all conference registration ends June 19!

2023 Modern Modeling Methods (M3) Pre-Conference Workshop
Date: Monday, June 26, 2023
Time: 8:30am – 5:00pm ET
Location: University of Connecticut’s Main Campus in Storrs, CT
Speakers: Bengt Muthén (UCLA, Mplus), Tihomir Asparouhov (Mplus), & Ellen Hamaker (Utrecht University)
Workshop Title: New Features in Mplus Version 8.9 and Forthcoming 8.10

Register Here

Upcoming RMME/STAT Colloquium (4/21): Matthias von Davier, “Applications of Artificial Intelligence and Natural Language Processing in Educational Measurement”

RMME/STAT Joint Colloquium

Applications of Artificial Intelligence and Natural Language Processing in Educational Measurement

Dr. Matthias von Davier
Boston College

Friday, April 21, at 11AM ET

https://tinyurl.com/rmme-vonDavier

This talk will provide an overview of the applications of Artificial Intelligence (AI) and Natural Language Processing (NLP) in educational measurement, focusing on automated item generation, automated scoring, and test assembly in multilingual assessments. We will discuss the potential benefits of AI and NLP for educational measurement, including increased efficiency, improved accuracy and reliability of assessment, and increased access to assessment technology for low-resource languages. We will examine the current state of the technology, including challenges associated with developing and deploying AI- and NLP-based educational assessment systems. We will also discuss future directions for research and development in this area, including the development of methods for assessing and validating AI- and NLP-based systems and the potential for AI and NLP to improve assessment fairness and reduce assessment bias.

 


Modern Modeling Methods: Early-Bird Registration Ends 5/31

Register today for the 2023 Modern Modeling Methods Conference. Early-bird conference registration ends May 31!

 

2023 Modern Modeling Methods (M3) Conference
Dates: June 26 – June 28, 2023
Location: University of Connecticut’s Main Campus in Storrs, CT
Description: The Modern Modeling Methods (M3) Conference is an interdisciplinary conference designed to showcase the latest modeling methods and to present research related to these methodologies. Planned events include:

  • Monday, June 26: Full-day pre-conference workshop by Bengt Muthén, Tihomir Asparouhov, and Ellen Hamaker, “New Features in Mplus Version 8.9 and Forthcoming 8.10”
  • Tuesday (June 27) & Wednesday (June 28): Keynote presentations by Bengt Muthén and Ellen Hamaker; talks by Tihomir Asparouhov, Jay Magidson, Daniel McNeish, David A. Kenny, and many others.

See the M3 Preliminary Program for a full list of talks.

Visit our website: modeling.uconn.edu.

Register Here!

 


Upcoming RMME/STAT Colloquium (4/7): Luke Miratrix, “A Bayesian Nonparametric Approach to Geographic and Two-Dimensional Regression Discontinuity Designs”

RMME/STAT Joint Colloquium

A Bayesian Nonparametric Approach to Geographic and Two-Dimensional Regression Discontinuity Designs

Dr. Luke Miratrix
Harvard University

Friday, April 7, at 11AM ET

https://tinyurl.com/rmme-Miratrix

Geographical and two-dimensional regression discontinuity designs (RDDs) extend the classic, univariate RDD to multivariate, spatial contexts. We propose a framework for analyzing such designs with Gaussian process regression. This yields a Bayesian posterior distribution of the treatment effect at every point along the border, allowing for impact heterogeneity. We can then aggregate along the border to obtain an overall local average treatment effect (LATE) estimate. We address nuances of having a functional estimand defined on a border with potentially intricate topology, particularly with respect to defining the target estimand of interest. The Bayesian estimate of the LATE can also be used as a test statistic in a hypothesis test with good frequentist properties, which we validate using simulations and placebo tests. We demonstrate our methodology with a dataset of property sales in New York City to assess whether there is a discontinuity in housing prices at the border between school districts. We also discuss applying this method when treatment is determined by two forcing variables, such as falling below a threshold on either a reading or a math test.
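For readers who want a concrete picture of the basic mechanics, the short sketch below fits a separate Gaussian process outcome surface on each side of a two-dimensional border and contrasts the two surfaces at points along it. It uses scikit-learn on synthetic data with a straight-line border; the kernel choices, data, and variable names are illustrative assumptions, not the speaker's implementation, which is fully Bayesian and handles far more intricate border geometry.

```python
# Illustrative sketch only (synthetic data, assumed names): fit a separate Gaussian
# process outcome surface on each side of a two-dimensional border, then contrast
# the two posterior mean surfaces at points along the border.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic data: units located in the unit square, with the border at x = 0.5.
n = 400
locations = rng.uniform(0, 1, size=(n, 2))         # spatial coordinates (x, y)
treated = locations[:, 0] > 0.5                    # treatment assigned by side of the border
outcome = (
    1.0 * locations[:, 0] + 0.5 * locations[:, 1]  # smooth spatial trend
    + 0.8 * treated                                # constant jump at the border
    + rng.normal(0, 0.3, size=n)
)

kernel = 1.0 * RBF(length_scale=0.2) + WhiteKernel(noise_level=0.1)
gp_treated = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(
    locations[treated], outcome[treated]
)
gp_control = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(
    locations[~treated], outcome[~treated]
)

# Evaluate both surfaces at points along the border and contrast them;
# sd_t and sd_c give pointwise uncertainty for the two surfaces.
border_points = np.column_stack([np.full(50, 0.5), np.linspace(0, 1, 50)])
mu_t, sd_t = gp_treated.predict(border_points, return_std=True)
mu_c, sd_c = gp_control.predict(border_points, return_std=True)

effect_along_border = mu_t - mu_c                  # pointwise effect curve (heterogeneity)
late_estimate = effect_along_border.mean()         # simple aggregate along the border
print(f"Estimated effect along the border (LATE-style average): {late_estimate:.2f}")
```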

 


Upcoming RMME/STAT Colloquium (3/24): Joseph L. Schafer, “Modeling Coarsened Categorical Variables: Techniques and Software”

RMME/STAT Joint Colloquium

Modeling Coarsened Categorical Variables: Techniques and Software

Dr. Joseph L. Schafer
U.S. Census Bureau

Friday, March 24, at 11AM ET

https://tinyurl.com/rmme-Schafer

Coarsened data can express intermediate states of knowledge between fully observed and fully missing. For example, when classifying survey respondents by cigarette smoking behavior as 1=never smoked, 2=former smoker, or 3=current smoker, we may encounter some who reported having smoked in the past but whose current activity is unknown (either 2 or 3, but not 1). Software for categorical data modeling typically provides codes for missing values but lacks convenient ways to convey states of partial knowledge. A new R package, cvam (Coarsened Variable Modeling), extends R’s implementation of categorical variables (factors) and fits log-linear and latent-class models to incomplete datasets containing coarsened and missing values. Methods include maximum likelihood estimation using an expectation-maximization algorithm, approximate Bayesian inference, and Bayesian inference via Markov chain Monte Carlo. Functions are also provided for comparing models, predicting missing values, creating multiple imputations, and generating partially or fully synthetic data. In the first major application of this software, data from the U.S. Decennial Census and administrative records were combined to predict citizenship status for 309 million residents of the United States.
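As a concrete illustration of the underlying idea (though not of the cvam package or its API), the sketch below runs a small expectation-maximization loop on the smoking example from the abstract: coarsened "ever smoker" cases are allocated across the compatible categories in proportion to the current probability estimates, and the category probabilities are then re-estimated from the expected counts. The counts are invented for illustration.

```python
# Illustrative sketch only (made-up counts, not the cvam package): EM for a
# three-category variable with one coarsened pattern, following the smoking
# example: 1 = never smoked, 2 = former smoker, 3 = current smoker, and some
# respondents known only to be in {2, 3}.
import numpy as np

full_counts = np.array([120.0, 40.0, 60.0])   # fully classified counts for categories 1, 2, 3
coarse_count = 30.0                           # respondents known to be a 2 or a 3
coarse_mask = np.array([False, True, True])   # categories compatible with that coarse pattern

probs = np.ones(3) / 3                        # starting values for the category probabilities
for _ in range(200):
    # E-step: split the coarsened count across compatible categories
    # in proportion to the current probability estimates.
    weights = probs * coarse_mask
    expected_counts = full_counts + coarse_count * weights / weights.sum()
    # M-step: multinomial maximum likelihood estimate from the expected counts.
    new_probs = expected_counts / expected_counts.sum()
    converged = np.max(np.abs(new_probs - probs)) < 1e-10
    probs = new_probs
    if converged:
        break

print("Estimated P(never, former, current):", np.round(probs, 4))
```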

 


Upcoming RMME Evaluation Colloquium (3/10): Laura Peck, “The Health Profession Opportunity Grant (HPOG) Impact Study: A Behind-the-Scenes Look at Experimental Evaluation in Practice”

RMME Evaluation Colloquium

The Health Profession Opportunity Grant (HPOG) Impact Study: A Behind-the-Scenes Look at Experimental Evaluation in Practice

Dr. Laura Peck
Abt Associates

Friday, March 10, at 11AM ET

https://tinyurl.com/eval-Peck

In 2010, the U.S. Department of Health and Human Services’ Administration for Children and Families awarded Health Profession Opportunity Grants (HPOG 1.0) to 32 organizations in 23 states. The purpose of the HPOG Program is to provide education and training to Temporary Assistance for Needy Families (TANF) recipients and other low-income individuals for occupations in the healthcare field that pay well and that help meet local healthcare labor shortages. To assess its effectiveness, an experimental evaluation design assigned eligible program applicants at random to a “treatment” group that could access the program or a “control” group that could not. Beyond the impact analysis, the evaluation also probed questions about what drove program impacts, using various strategies. This colloquium will discuss how the HPOG 1.0 impact study was designed and implemented and introduce attendees to the design and analysis choices used by investigators, in partnership with the government funder, to address research questions. Specific topics will include: experimental design, multi-armed experimental design, experimental impact analysis, planned variation, natural variation, endogenous subgroup analysis, and evaluation in practice.

 


Upcoming RMME/STAT Colloquium (2/24): Ben Domingue, “Bookmaking for Binary Outcomes: Prediction, Profits, and the IMV”

RMME/STAT Joint Colloquium

Bookmaking for Binary Outcomes: Prediction, Profits, and the IMV

Dr. Ben Domingue
Stanford University

Friday, February 24, at 11AM ET

https://tinyurl.com/rmme-Domingue

Understanding the “fit” of models designed to predict binary outcomes is a long-standing problem. We propose a flexible, portable, and intuitive metric for such scenarios: the InterModel Vigorish (IMV). The IMV is based on a series of bets involving weighted coins, well-characterized physical systems with tractable probabilities. The IMV has a number of desirable properties including an interpretable and portable scale and an appropriate sensitivity to outcome prevalence. We showcase its flexibility across examples spanning the social, biomedical, and physical sciences. We demonstrate how it can be used to provide straightforward interpretation of logistic regression coefficients and to provide insights about the value of different types of item response theory (IRT) models. The IMV allows for precise answers to questions about changes in model fit in a variety of settings in a manner that will be useful for furthering research with binary outcomes.
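To make the weighted-coin idea concrete, the sketch below follows one reading of the published IMV description: summarize each model's predictions by the geometric mean of the probabilities assigned to the observed outcomes, map that value to an equally predictable weighted coin, and report the relative gain in coin weight when moving from a baseline model to an enhanced one. The exact formulas and the toy data are assumptions for illustration, not the speaker's code.

```python
# Illustrative sketch only, based on one reading of the published IMV description;
# the formulas and toy data below are assumptions, not the speaker's code.
import numpy as np
from scipy.optimize import brentq

def geometric_mean_likelihood(y, p):
    """Geometric mean of the probabilities the predictions assign to the observed outcomes."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return float(np.exp(np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))))

def coin_weight(a):
    """Weight w in (0.5, 1) of a coin whose own geometric mean likelihood, w^w (1-w)^(1-w), equals a."""
    f = lambda w: w * np.log(w) + (1 - w) * np.log(1 - w) - np.log(a)
    return brentq(f, 0.5 + 1e-9, 1 - 1e-9)

def imv(y, p_baseline, p_enhanced):
    """Relative gain in coin weight when moving from the baseline to the enhanced predictions."""
    w0 = coin_weight(geometric_mean_likelihood(y, p_baseline))
    w1 = coin_weight(geometric_mean_likelihood(y, p_enhanced))
    return (w1 - w0) / w0

# Toy usage: an intercept-only baseline versus predictions that use a covariate.
rng = np.random.default_rng(1)
x = rng.normal(size=1000)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + x))))   # outcomes from a logistic model
p_base = np.full_like(x, y.mean())                  # baseline: predict the base rate for everyone
p_rich = 1 / (1 + np.exp(-(0.5 + x)))               # enhanced: covariate-based probabilities
print(f"IMV of the enhanced predictions over the baseline: {imv(y, p_base, p_rich):.3f}")
```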

 


SAVE THE DATE! Modern Modeling Methods Returns to UConn!

 

 

Mark your calendar! The Modern Modeling Methods (M3) conference returns to UConn after a lengthy pandemic-induced hiatus. From June 26-28, 2023, M3 will resume as an in-person conference on the Storrs campus. Keynote speakers and workshop presenters include Bengt Muthén, Tihomir Asparouhov, and Ellen Hamaker. Remember to check the M3 website regularly for more information and updates.

 

Upcoming RMME/STAT Colloquium (11/11): Dylan Small, “Testing an Elaborate Theory of a Causal Hypothesis”

RMME/STAT Joint Colloquium

Testing an Elaborate Theory of a Causal Hypothesis

Dr. Dylan Small
University of Pennsylvania

Friday, November 11, at 11AM ET

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m8da0e35b64c861fc97a21dd36fb29ded

When R.A. Fisher was asked what can be done in observational studies to clarify the step from association to causation, he replied, “Make your theories elaborate” — when constructing a causal hypothesis, envisage as many different consequences of its truth as possible and plan observational studies to discover whether each of these consequences is found to hold. William Cochran called “this multi-phasic attack…one of the most potent weapons in observational studies.” Statistical tests for the various pieces of the elaborate theory help to clarify how much the causal hypothesis is corroborated. In practice, the degree of corroboration of the causal hypothesis has been assessed by verbally describing which of the several tests provides evidence for which of the several predictions. This verbal approach can miss quantitative patterns. So, we developed a quantitative approach to making statistical inference about the amount of the elaborate theory that is supported by evidence. This is joint work with Bikram Karmakar.

 


Upcoming RMME/STAT Colloquium (10/7): Edsel A. Pena, “Searching for Truth through Data”

RMME/STAT Joint Colloquium

Searching for Truth through Data

Dr. Edsel A. Pena
University of South Carolina

Friday, October 7, at 11:15AM ET, AUST 108

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m9667e91caf1197b47fc45f50529388b9

This talk concerns the role of statistical thinking in the Search for Truth using data. This will bring us to a discussion of p-values, a much-used tool in scientific research, but at the same time a controversial concept which has elicited much, sometimes heated, debate and discussion. In March 2016, the American Statistical Association (ASA) was compelled to release an official statement regarding p-values; a psychology journal has even gone to the extreme of banning the use of p-values in its articles; and in 2018, a special issue of The American Statistician was fully devoted to this issue. A main concern in the use of p-values is the introduction of a somewhat artificial threshold, usually the value of 0.05, when used in decision-making, with implications on reproducibility and replicability of reported scientific results. Some new perspectives on the use of p-values and in the search for truth through data will be discussed. In particular, this will touch on the representation of knowledge and its updating based on observations. Related to the issue of p-values, the following question arises: “When given the p-value, what does it provide in the context of the updated knowledge of the phenomenon under consideration, and what additional information should accompany it?” To be addressed also is the question of whether it is time to move away from hard thresholds such as 0.05 and whether we are on the verge of — to quote Wasserstein, Schirm and Lazar (2019) — a “World Beyond P < 0.05.”

 
