A promising way to disentangle time from space kicks off
Review Prepared by: Moinak Bhaduri, Mathematical Sciences, Bentley University, Massachusetts

Fine! I admit it! The title’s a bit click-baity. “Time” here need not be some immense galactic time. “Space” refers here not to the endless physical or literal space around you, but rather to the types of events. But once you realize why the untangling was vital, how it is achieved in games such as soccer, and what forecasting benefits it can lead to, you’ll forgive me. You see, for far too long, whenever scientists had to model (meaning describe and, potentially, forecast) phenomena that have both a time and a value component, such as the timings of earthquakes and the magnitudes of those shocks, or the times of gang violence and the casualties from those attacks, the default go-to was a typical spatio-temporal process such as the marked Hawkes (described below). While with that reliance no fault may be found in…
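To make the marked-Hawkes idea concrete, here is a minimal Python sketch (not the paper’s model) that simulates a marked Hawkes process via Ogata’s thinning algorithm: each event briefly raises the intensity of future events, and each event carries a “mark” standing in for a magnitude or casualty count. The baseline rate `mu`, the excitation parameters `alpha` and `beta`, and the exponential mark distribution are all illustrative assumptions; in a full marked Hawkes model the marks can also feed back into the intensity, whereas here they are drawn independently for simplicity.

```python
import numpy as np

rng = np.random.default_rng(42)

def intensity(t, times, mu, alpha, beta):
    """Conditional intensity: lambda(t) = mu + alpha * sum_i exp(-beta * (t - t_i))."""
    if not times:
        return mu
    return mu + alpha * np.exp(-beta * (t - np.asarray(times))).sum()

def simulate_marked_hawkes(mu=0.5, alpha=0.8, beta=1.2, horizon=50.0):
    """Ogata thinning: the intensity decays between events, so its current
    value bounds it until the next event and can drive rejection sampling."""
    times, marks = [], []
    t = 0.0
    while True:
        lam_bar = intensity(t, times, mu, alpha, beta)   # local upper bound
        t += rng.exponential(1.0 / lam_bar)              # candidate event time
        if t >= horizon:
            break
        if rng.uniform() <= intensity(t, times, mu, alpha, beta) / lam_bar:
            times.append(t)                              # accept the event...
            marks.append(rng.exponential(1.0))           # ...with an illustrative mark
    return np.array(times), np.array(marks)

ts, ms = simulate_marked_hawkes()
print(f"simulated {len(ts)} events; first times: {np.round(ts[:5], 2)}")
```

Because `alpha / beta < 1` here, each event spawns, on average, fewer than one “offspring” event, so the simulated process stays stable over the horizon.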
Unveiling the Dynamics of Human-AI Complementarity through Bayesian Modeling
Article Title: Bayesian modeling of human–AI complementarity
Authors & Year: M. Steyvers, H. Tejeda, G. Kerrigan, and P. Smyth (2022)
Journal: Proceedings of the National Academy of Sciences of the United States of America [DOI: 10.1073/pnas.2111547119]
Review Prepared by: David Han

Exploration of Human-Machine Complementarity with CNN

In recent years, artificial intelligence (AI) and machine learning (ML), especially deep learning, have advanced significantly in tasks like computer vision and speech recognition. Despite their high accuracy, these systems can still have weaknesses, particularly in tasks like image and text classification. This has led to interest in hybrid systems in which AI and humans collaborate, focusing on a more human-centered approach to AI design. Studies show humans and machines have complementary strengths, prompting the development of frameworks and platforms for their collaboration. To explore this further, the authors of the paper developed a Bayesian model for image classification tasks, analyzing predictions from both humans…
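The paper itself builds a full hierarchical Bayesian model of human and machine predictions, but the core intuition, that two complementary predictors can be combined probabilistically, can be sketched with a simple log-linear opinion pool. Everything below (the class probabilities, the weight `w`) is a made-up illustration under that assumption, not the authors’ model.

```python
import numpy as np

def log_pool(p_human, p_ai, w=0.5):
    """Combine two probability vectors over the same label set with a
    log-linear opinion pool: p proportional to p_human**w * p_ai**(1 - w)."""
    logp = w * np.log(p_human) + (1.0 - w) * np.log(p_ai)
    p = np.exp(logp - logp.max())        # stabilise before normalising
    return p / p.sum()

# Hypothetical 4-class example: the human and the classifier disagree on the
# runner-up, but the pool keeps the most mass on the label both find plausible.
p_human = np.array([0.60, 0.25, 0.10, 0.05])
p_ai    = np.array([0.30, 0.05, 0.60, 0.05])
print(np.round(log_pool(p_human, p_ai), 3))
```

In practice, a weight like `w` would be tuned on held-out data; the Bayesian treatment in the paper goes further by modeling calibration and correlation between the two predictors.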
How can we model political polarization?
Title: An Agent-Based Statistical Physics Model for Political Polarization: A Monte Carlo Study
Authors & Year: Hung T. Diep, Miron Kaufman, and Sanda Kaufman (2023)
Journal: Entropy [DOI: https://doi.org/10.3390/e25070981]
Review Prepared by: Amal Machtalay

Political polarization refers to a phenomenon in which people’s political beliefs become increasingly extreme, often deepening the division between political parties, which can have significant social consequences. Polarization is a complex system characterized by multiple factors: numerous interacting components (individual agents/voters, politicians, groups, media, etc.), non-linear dynamics (meaning that small changes can lead to large and uncertain effects), and emergent behavior (where collective phenomena result from local interactions, as when individuals engage with social-media posts that align with their political beliefs). The authors study the case of three USA political groups, each group indexed by $i\in \left\{ 1,2,3\right\}$. Two types of interactions are classified and illustrated in Figure 1:…
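In the spirit of the paper’s Monte Carlo approach, here is a minimal agent-based sketch: agents in three groups hold discrete stances, intra-group couplings pull group members together while cross-group couplings push groups apart, and Metropolis updates at a “social temperature” `T` let the system evolve. The coupling matrix `J`, the stance scale, and the temperature are assumptions chosen for illustration, not the paper’s calibrated values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: N agents split into 3 groups; J[i][j] couples group i to j.
# Positive intra-group couplings reward agreement within a group; negative
# cross-group couplings reward disagreement across groups (the polarizing force).
N = 300
groups = rng.integers(0, 3, size=N)
stances = rng.integers(-2, 3, size=N).astype(float)   # attitudes in {-2,...,2}
J = np.array([[ 1.0, -0.5, -0.5],
              [-0.5,  1.0, -0.5],
              [-0.5, -0.5,  1.0]])
T = 1.0                                               # "social temperature"

def metropolis_sweep(stances, groups, J, T, rng):
    n = len(stances)
    for _ in range(n):
        k = rng.integers(n)
        # Mean-field "social pressure" on agent k from all other agents
        h = (J[groups[k], groups] @ stances
             - J[groups[k], groups[k]] * stances[k]) / n
        s_new = float(rng.integers(-2, 3))             # proposed new stance
        dE = -(s_new - stances[k]) * h                 # energy change
        if dE <= 0 or rng.uniform() < np.exp(-dE / T): # Metropolis rule
            stances[k] = s_new
    return stances

for sweep in range(200):
    stances = metropolis_sweep(stances, groups, J, T, rng)
means = [stances[groups == i].mean() for i in range(3)]
print("mean stance per group:", np.round(means, 2))
```

With these couplings the groups typically drift toward distinct average stances, a toy analogue of the emergent polarization the authors study; raising `T` injects more randomness and washes the separation out.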
Teaching Models by Adding Feature Hints
Machine learning models are excellent at discovering patterns in data to make predictions. However, their insights are limited to the input data itself. What if we could supply additional knowledge about the features themselves to improve learning? For example, suppose we know in advance that certain features are more important than others for predicting the target variable. Researchers have developed a new method called the feature-weighted elastic net (“fwelnet”) that integrates this extra feature knowledge to train smarter models, yielding more accurate predictions than standard techniques such as the ordinary elastic net.
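The fwelnet algorithm itself learns how much to trust each hint via an alternating optimization (see the paper), but the underlying idea of per-feature penalties can be sketched with the classic lasso rescaling trick: inflating a trusted feature’s column before fitting is equivalent to penalizing its coefficient less. The hint scores `z`, the weighting rule, and the toy data below are all assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)

# Toy data: 20 features, only the first 3 matter; suppose hypothetical prior
# "hints" z_j correctly flag those three features as important.
n, p = 200, 20
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = 2.0
y = X @ beta_true + rng.normal(size=n)
z = np.zeros(p)
z[:3] = 1.0                            # hypothetical feature hints

# Turn hints into per-feature penalty factors w_j > 0 (smaller = trusted more).
w = np.exp(-z)                         # w_j = 1 if unhinted, ~0.37 if hinted

# Rescaling trick: dividing column j by w_j is equivalent to penalizing
# beta_j by lambda * w_j, so trusted features face a lighter penalty.
X_scaled = X / w
fit = Lasso(alpha=0.2).fit(X_scaled, y)
beta_hat = fit.coef_ / w               # map back to the original scale
print("nonzero coefficients:", np.flatnonzero(beta_hat != 0))
```

This rescaling is exact only for the lasso’s L1 part; fwelnet handles the full elastic-net penalty and, crucially, estimates the hint weights from the data rather than fixing them by hand as done here.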
Bridging the Gap between Models and Data
One of the key goals of science is to create theoretical models that are useful for describing the world we see around us. However, no model is perfect. The inability of models to replicate observations is often called the “synthetic gap.” For example, it may be too computationally expensive to include a known effect or to vary a large number of known parameters. Or there may be unknown instrumental effects associated with variability in conditions during data acquisition.
Pinpointing Causality across Time and Geography: Uncovering the Relationship between Airstrikes and Insurgent Violence in Iraq
“Correlation is not causation,” as the saying goes; yet sometimes it can be, if certain assumptions are met. Describing those assumptions and developing methods to estimate causal effects, not just correlations, is the central concern of the causal inference field. Broadly speaking, causal inference seeks to measure the effect of a treatment on an outcome. The treatment can be an actual medicine or something more abstract, like a policy. Much of the literature in this space focuses on relatively simple treatments and outcomes and uses data that doesn’t exhibit much dependence. As an example, clinicians often want to measure the effect of a binary treatment (received the drug or not) on a binary outcome (developed the disease or not). The data used to answer such questions is typically patient-level data in which the patients are assumed to be independent of each other. To be clear, these simple setups are enormously useful and describe commonplace causal questions.
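To see why this simple independent-patients setting is tractable, and why confounding still demands care, here is a small simulated sketch (all quantities invented): a naive comparison of treated and untreated patients is biased by a confounder, while inverse probability weighting (IPW), one standard causal-inference estimator, recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical patient-level data: a confounder C (e.g., disease severity)
# drives both treatment uptake and disease risk, so naively comparing
# treated vs untreated patients is biased; IPW reweights patients to
# mimic a randomized trial.
n = 50_000
C = rng.binomial(1, 0.4, n)                  # confounder
pT = 0.2 + 0.5 * C                           # sicker patients get treated more
T = rng.binomial(1, pT)                      # binary treatment
pY = 0.3 + 0.3 * C - 0.15 * T                # the drug lowers risk by 0.15
Y = rng.binomial(1, pY)                      # binary outcome

naive = Y[T == 1].mean() - Y[T == 0].mean()  # confounded comparison

# True propensity scores, known here by construction; in practice they
# would be estimated, e.g., with logistic regression on C.
e = pT
ipw = np.mean(T * Y / e) - np.mean((1 - T) * Y / (1 - e))

print(f"naive difference: {naive:+.3f}   IPW estimate: {ipw:+.3f}   truth: -0.150")
```

The naive estimate is pulled upward because treated patients are disproportionately the sicker ones; the weighted estimate lands near the true −0.15 risk difference, illustrating the “certain assumptions” (here, no unmeasured confounding) under which correlation-style computations do yield causation.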
How Statistics Can Save Lives in a Pandemic
In responding to a pandemic, time is of the essence. As the COVID-19 pandemic has raged on, it has become evident that complex decisions must be made as quickly as possible, and that quality data and statistics are necessary to drive the solutions that can prevent mass illness and death. It is therefore essential to outline a robust and generalizable statistical process that can not only help to curb the current COVID-19 pandemic but also assist in preventing potential future pandemics.