Month: November 2023

Upcoming RMME/STAT Colloquium (12/1): Irini Moustaki, “Some New Developments on Pairwise Likelihood Estimation & Testing in Latent Variable Models”

RMME/STAT Joint Colloquium

Some New Developments on Pairwise Likelihood Estimation & Testing in Latent Variable Models

Dr. Irini Moustaki
London School of Economics

Friday, December 1, at 11AM ET

https://tinyurl.com/rmme-Moustaki

Pairwise likelihood is a limited-information method for estimating latent variable models, including factor analysis of categorical data. It avoids evaluating high-dimensional integrals and is therefore computationally more efficient than full-information maximum likelihood. This talk will discuss two new developments in the estimation and testing of latent variable models for binary data under the pairwise likelihood framework. The first development concerns estimation and limited-information goodness-of-fit test statistics under complex sampling; the performance of the estimator and the proposed test statistics under simple random sampling and unequal-probability sampling is evaluated using simulated data. The second development focuses on computational aspects of pairwise likelihood. Despite its computational advantages, the method can still be demanding for large-scale problems that involve many observed variables. We propose an approximation of the pairwise likelihood estimator, derived from an optimization procedure relying on stochastic gradients. The stochastic gradients are constructed by subsampling the pairwise log-likelihood contributions, where the subsampling scheme controls the per-iteration computational complexity. The stochastic estimator is shown to be asymptotically equivalent to the pairwise likelihood estimator. However, finite-sample performance can be improved by compounding the sampling variability of the data with the uncertainty introduced by the subsampling scheme. We demonstrate the performance of the proposed method using simulation studies and two real data applications.
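The subsampling idea in the second development can be sketched numerically. The toy below is not the speaker's estimator for binary data (which involves bivariate probabilities); it applies the same scheme to a simpler, illustrative problem: estimating a common correlation in equicorrelated Gaussian data by stochastic gradient ascent, where each iteration uses only a random subsample of the pairwise log-likelihood contributions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate n observations of p equicorrelated standard normal variables
# (true pairwise correlation rho = 0.5), via a common factor.
n, p, rho_true = 2000, 8, 0.5
f = rng.standard_normal((n, 1))                  # common factor
e = rng.standard_normal((n, p))                  # unique parts
X = np.sqrt(rho_true) * f + np.sqrt(1 - rho_true) * e

pairs = [(i, j) for i in range(p) for j in range(i + 1, p)]

def pair_grad(rho, x, y):
    """d/d(rho) of the bivariate-normal log-density, averaged over observations."""
    d = 1.0 - rho**2
    num = x**2 - 2 * rho * x * y + y**2
    return np.mean(rho / d + (x * y * d - rho * num) / d**2)

# Stochastic gradient ascent: each iteration subsamples `batch` of the
# p*(p-1)/2 pairwise contributions, so the subsampling scheme controls
# the per-iteration computational cost.
rho, step, batch = 0.0, 0.05, 5
for t in range(400):
    idx = rng.choice(len(pairs), size=batch, replace=False)
    g = np.mean([pair_grad(rho, X[:, pairs[k][0]], X[:, pairs[k][1]])
                 for k in idx])
    rho = np.clip(rho + step / (1 + 0.02 * t) * g, -0.99, 0.99)

print(round(rho, 2))  # close to the true value of 0.5
```

With only 5 of the 28 pairs per iteration, the estimate still converges near the true correlation, which is the essence of the asymptotic-equivalence claim in the abstract.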

 


RMME Faculty and Students Present at NERA 2023

Dr. D. Betsy McCoach (RMME Faculty member, Discussant), Amanda Sutter (RMME PhD Student, Presenter), Marcus Harris (RMME PhD Student, Presenter), Claudia Ventura (RMME PhD Student, Presenter), Faeze Safari (RMME PhD Student, Presenter), & Kirsten Reyna (RMME PhD Student, Presenter) present a symposium on “The Future of Educational Measurement” at NERA 2023. Congratulations on this excellent symposium, from the Research Methods, Measurement, & Evaluation Community!

 


 

Symposium Chair/Discussant: D. Betsy McCoach

Symposium Presenters: Amanda Sutter, Marcus Harris, Claudia Ventura, Faeze Safari, & Kirsten Reyna

Symposium Abstract: Emergent measurement scholars provide their perspectives on current issues and future directions in educational measurement. The four presentations focus on four critical areas: (1) equity and social justice, (2) the context in which we operate, (3) the rise of artificial intelligence, and (4) graduate training in educational measurement.

Symposium Presentations:

1) Context Matters
Amanda Sutter, Marcus Harris

2) Equity and Social Justice Issues and Values in Measurement
Claudia Ventura, Amanda Sutter

3) Exploring the Challenges and Potential of Artificial Intelligence in Educational Measurement
Faeze Safari, Kirsten Reyna

4) Transforming Graduate School Training: Advancing Measurement and Open Science
Marcus Harris, Kirsten Reyna

 

 

RMME Instructor, Dr. Brenna Butler, Presents at AEA 2023

Dr. Brenna Butler, RMME instructor, presents at the AEA 2023 Conference. Dr. Butler shared two presentations at AEA 2023: “Getting down to the essentials: How to effectively measure need on a needs assessment survey using a gap analysis approach” and “How can one-on-one client interactions be measured as organization-wide impacts?” Congratulations on these fantastic presentations, from the Research Methods, Measurement, & Evaluation Community!

 


 

Author and Presenter: Brenna Butler

Presentation Title: Getting down to the essentials: How to effectively measure need on a needs assessment survey using a gap analysis approach

Abstract: Measuring need through a needs assessment survey is rarely easy, given the many different methods used to define what a “need” is. In this Ignite presentation, participants will learn about one concrete way of measuring participants’ needs through a gap analysis, which measures both participants’ present state and their desired state (Watkins et al., 2012). Participants will be provided with examples of survey questions that measure these states validly within a systems-thinking framework, meaning that the present and desired states under investigation are examined with personal, social, and societal influences in consideration (Arthur & McMahon, 2005). The strengths and weaknesses of measuring need through this approach, compared to other approaches, will be briefly described. Participants will leave this Ignite presentation with a contextual framework to apply to their own needs assessment surveys, along with a template of survey questions that can be structured to measure needs effectively.
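The gap-analysis scoring described above reduces to simple arithmetic: for each surveyed item, need is the difference between the desired-state and present-state ratings, and items are prioritized by gap size. A minimal sketch, with hypothetical item names and made-up mean ratings on an assumed 5-point scale:

```python
# Toy gap-analysis scoring: need = desired-state rating - present-state rating.
# Item names and ratings are illustrative, not from the presentation.
items = ["mentoring", "funding info", "stats software"]
present = [2.1, 3.8, 2.9]   # mean present-state ratings (1-5 scale)
desired = [4.6, 4.0, 4.4]   # mean desired-state ratings (1-5 scale)

gaps = {item: round(d - p, 2) for item, p, d in zip(items, present, desired)}

# Rank items by gap size: the largest gap is the highest-priority need.
ranked = sorted(gaps.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)
```

A large gap flags a high-priority need even when the present-state rating alone would not, which is the advantage of measuring both states rather than asking about need directly.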

 

Presenters: Brenna Butler, Michael Hamel, Matthew Spindler, Malinda Suprise

Presentation Title: Hidden stories: How can one-on-one client interactions be measured as organization-wide impacts?

Abstract: In large organizations, such as Cooperative Extension, impacts that derive from one-on-one interactions with clientele are often missed as “hidden stories” due to a lack of effective data-tracking measures within the organization. This session will describe the methodology used to create a survey tool that captures the outputs and outcomes of educator-clientele interactions at Penn State Extension, focusing on supporting data utility at multiple levels (i.e., educator, supervisor, and leadership council). The session will also describe how the online survey built a foundation for organizational data analytics through consistent, structured data input and storage processes, which will facilitate future data analysis for complex decision-making throughout the organization’s evaluative cycles of activities and programs. Data collection is inherently a form of boundary-making that determines which elements of a situation should be included in the informational picture constructed of a context and which should be excluded (Schwandt, 2018). Boundary-making was used in this instrument development process to leverage the important role of stakeholder involvement in defining what information should be collected and curated (Archibald, 2020), and that stakeholder involvement will be discussed in relation to maximizing the utility of data collection throughout the organization. Participants should leave this demonstration with the knowledge and tools to employ a similar methodology at their own organizations to track individual interactions using a structure that allows for data aggregation at an organizational level.

 

 

RMME PhD student, Amanda Sutter, Presents at AEA 2023

RMME PhD student, Amanda Sutter, discusses “The Story Behind an Evaluation Practice Survey: Insights from Cognitive Interviews” at the 2023 AEA Conference. Congratulations on this wonderful presentation, from the Research Methods, Measurement, & Evaluation Community!

 


 

Author and Presenter: Amanda Sutter

Title: The Story Behind an Evaluation Practice Survey: Insights from Cognitive Interviews

Abstract: What do evaluators think about when they are asked questions about evaluation practice? What narratives are triggered by what words? This research study used cognitive interviews to understand the thought processes and beliefs of evaluators as they reflect on their practice. Cognitive interviews are a powerful instrument-design strategy that helps reveal the narratives and meanings behind survey responses. Building on last year’s field pilot of a new evaluation practice instrument, this research-on-evaluation study involved cognitive interviews with 20 evaluators recruited through purposive nonprobability sampling to gather diverse perspectives on the instrument. The semi-structured interview process followed the Willis method, offering participants an accessible way to explore their responses in depth. This paper shares findings from the interviews and how the data will be used to improve the instrument, particularly to ensure that the phrasing of questions, and the subsequent interpretations, align with the intended construct definitions and narrative of the instrument. The paper will be of interest to a variety of audiences: practitioners and commissioners will have opportunities to think about their own surveys and how cognitive interviews may be a useful tool for evaluation studies, and other researchers will learn about a potential instrument that may be useful in their own scholarly work.

RMME Faculty, Dr. Bianca Montrosse-Moorhead, & RMME PhD Student, Amanda Sutter, Present at AEA 2023

RMME Faculty member, Dr. Bianca Montrosse-Moorhead, and RMME PhD student, Amanda Sutter, present their workshop, “Building Better Surveys” at the AEA 2023 Conference. Congratulations on this successful presentation, from the Research Methods, Measurement, & Evaluation Community!

 

 

 

Authors: Bianca Montrosse-Moorhead and Amanda Sutter

Presentation Title: Building Better Surveys

Abstract: This skill-building workshop is back by popular demand! Have you ever wondered what strategies you can use to improve your surveys? Or wanted to create or adapt an instrument that better captures difficult-to-measure concepts? Or wanted to make sure that your survey instrument can accurately shed light on the story you are trying to understand? This session will teach you an easy-to-implement mixed-method feedback process to build better surveys. The session will begin with an overview of the process, which includes both a review and an interview component. Throughout the session, presenters will focus on the value and use of the process, walk through examples from real-life evaluation studies, and share resources for further learning. Attendees will also have the opportunity to practice using this process through hands-on activities. Attendees will leave the session with templates they can adapt for their own use and guidance to help them feel prepared to take action. Throughout the session, presenters will call attention to ways in which equity is and can be centered in the mixed-method feedback process.

RMME Master’s alumna, Xueying Gao, Presents at AEA 2023

RMME Master’s alumna, Xueying Gao, presents her poster, “Validation of the VI-SPDAT (version 3) instrument: a confirmatory factor analysis” at the 2023 AEA conference. Congratulations on this fantastic poster presentation, from the Research Methods, Measurement, & Evaluation Community!

 


 

Authors: Xueying Gao and Brad Richardson

Presenter: Xueying Gao

Abstract: The Vulnerability Index-Service Prioritization Decision Assistance Tool (VI-SPDAT version 3; Community Solutions & OrgCode Consulting, Inc., 2020) is the primary assessment tool used to scale individuals’ vulnerability (or self-sufficiency). The Iowa Treatment for Individuals Experiencing Homelessness (IA-TIEH) project was conducted to advance an informed, integrated program for homeless and at-risk adults with co-occurring disorders; the VI-SPDAT was administered to participants at intake assessment and at 6-month reassessment to measure their level of vulnerability. The current study used a sample collected from 2020 to 2023 (N = 539) to examine the construct validity of the VI-SPDAT (version 3) with two factor models: a single-factor model and a hierarchical three-factor model. The single-factor model did not demonstrate adequate fit (CFI = 0.467, TLI = 0.445, RMSEA = 0.111, SRMR = 0.114), and while the hierarchical model demonstrated better fit (CFI = 0.856, TLI = 0.848, RMSEA = 0.061, SRMR = 0.071), some items were not associated with the global factor or a sub-factor, suggesting the instrument’s limitations in measuring individuals’ vulnerability and other outcome parameters in research and clinical practice. The VI-SPDAT has substantial weaknesses in its theoretical alignment, item performance, and psychometric properties. We recommend the development of a new multidimensional scale of vulnerability following a rigorous measurement development protocol.