Protecting representation and preventing gerrymandering
By: Erin McGee
Paper title: Sequential Monte Carlo for Sampling Balanced and Compact Redistricting Plans
Authors and year: Cory McCartan and Kosuke Imai, 2023
Journal: Annals of Applied Statistics (Forthcoming 2023), https://doi.org/10.48550/arXiv.2008.06131

In 2011, the Pennsylvania General Assembly was accused of drawing a redistricting plan for the state that diluted the power of Democratic voters while strengthening the Republican vote. The case made its way to the Pennsylvania Supreme Court, where the map was ruled to be unfairly drawn. As map-drawing techniques grow more sophisticated, gerrymandering, the practice of altering districts to purposefully amplify the voting power of some groups while diluting that of others, becomes harder to recognize. Gerrymandered districts are usually identifiable by the ‘jigsaw’ shapes that split counties and municipalities in an attempt to pack certain voting groups into the same district while splitting up others. However, proving that a district map has been purposefully manipulated, as it was in Pennsylvania, is no easy task.…
Combining Nested Sampling and Normalizing Flows
In order to validate our understanding of the world around us, we want to compare theoretical models to the data we have actually observed. Often, these models are functions of parameters, and we want to know the values of those parameters for which the models most closely represent the world. For example, we may believe the concentration of one molecule in a chemical reaction should decrease exponentially with time. However, we also want to know the rate constant, the parameter multiplying time in the exponential, such that the model curve actually resembles a specific reaction that we observe. This is the problem of parameter inference, for which we often turn to Bayesian methods, especially when working with complex models and/or many parameters.
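As a concrete illustration of what such an inference looks like, here is a minimal sketch that infers the rate constant of a toy exponential-decay model from simulated noisy data by evaluating the posterior on a grid. The data, noise level, and flat prior are assumptions made for the demo, and the plain grid evaluation stands in for the nested-sampling and normalizing-flow machinery the paper actually develops.

```python
import numpy as np

# Toy exponential-decay model: concentration C(t) = C0 * exp(-k * t),
# where the rate constant k is the parameter we want to infer.
rng = np.random.default_rng(0)
k_true, C0, sigma = 0.8, 1.0, 0.05             # assumed "true" values for the demo
t = np.linspace(0, 5, 30)
data = C0 * np.exp(-k_true * t) + rng.normal(0, sigma, t.size)

# Flat prior on k over a plausible range, Gaussian measurement noise.
k_grid = np.linspace(0.01, 3.0, 1000)
log_like = np.array([-0.5 * np.sum((data - C0 * np.exp(-k * t)) ** 2) / sigma**2
                     for k in k_grid])
posterior = np.exp(log_like - log_like.max())  # unnormalized posterior
posterior /= posterior.sum()                   # normalize over the grid

k_mean = np.sum(k_grid * posterior)
print(f"posterior mean of the rate constant: {k_mean:.2f} (true value {k_true})")
```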
Assurance, a Bayesian Approach in Reliability Demonstration Testing for Quality Technology
Title: Assurance for Sample Size Determination in Reliability Demonstration Testing
Authors & Year: Kevin Wilson & Malcolm Farrow (2021)
Journal: Technometrics [DOI: 10.1080/00401706.2020.1867646]

Why Reliability Demonstration Testing?
Ensuring high reliability is critical for hardware products, especially those involved in safety-critical functions such as railway systems and nuclear power reactors. To build trust, manufacturers use reliability demonstration tests (RDT), in which a sample of products is tested and failures are observed. If the test meets specific criteria, it demonstrates the product’s reliability. The design of an RDT varies with the type of hardware product being tested, whether it fails on demand or over time (time to failure). Traditionally, sample sizes for RDT have been determined using methods based on the power of a hypothesis test or on risk criteria. Various approaches, such as Bayesian methods and risk-criteria evaluation, have been developed over the decades to enhance the effectiveness of RDT. These measures…
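For a sense of how traditional RDT sample sizes are computed, here is a minimal sketch of the classical zero-failure ("success-run") calculation: the sample size is the smallest n such that passing the test with no failures demonstrates the target reliability at the stated confidence level. The function name and numbers are illustrative, and this is the classical frequentist calculation rather than the Bayesian assurance approach the paper develops.

```python
import math

def zero_failure_sample_size(reliability: float, confidence: float) -> int:
    """Smallest n such that surviving n trials with zero failures demonstrates
    the target reliability at the given confidence level, i.e. the smallest n
    with reliability**n <= 1 - confidence."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

# e.g. demonstrating 95% reliability with 90% confidence requires testing 45 units
print(zero_failure_sample_size(0.95, 0.90))
```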
Conform – or else! Conformal scores as tools to lay out a set of likely classification labels
The setting repeats depressingly often. A hurricane inching towards the Florida coast. Weather scientists glued to tracking monitors, hunched over simulation printouts, trying to move people out of harm’s way. Urgent to them is the need to mark a patch of the shore where the hurricane is likely to hit; those living in this patch need to be relocated. These scientists, and many before them (it’s hard to say since when), realized that what’s at issue here is not so much the precise location the storm is going to hit, precise to the exact grain of sand, but a stretch of land (whose length may shrink gradually depending on how late we leave the forecasting) where it is likely to affect people with a high chance. A forecast interval of sorts.
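To make the idea of "a set of likely labels" concrete, here is a minimal split-conformal sketch for classification, assuming a generic scikit-learn classifier and a toy dataset; the score (one minus the probability assigned to the true class) and the 90% target coverage are illustrative choices and not necessarily the conformal scores discussed in the paper.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Split-conformal prediction sets for a toy classification problem:
# keep every label whose conformal score clears a threshold calibrated
# on held-out data, so the set contains the true label ~90% of the time.
X, y = load_iris(return_X_y=True)
X_train, X_cal, y_train, y_cal = train_test_split(X, y, test_size=0.4, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Conformal (nonconformity) score: one minus the probability of the true class.
cal_probs = model.predict_proba(X_cal)
scores = 1 - cal_probs[np.arange(len(y_cal)), y_cal]

alpha = 0.1                                   # target miscoverage (~90% coverage)
n = len(scores)
q_hat = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction set for a new observation: all labels scoring at or below the threshold.
new_probs = model.predict_proba(X_cal[:1])[0]
prediction_set = [label for label, p in enumerate(new_probs) if 1 - p <= q_hat]
print("prediction set:", prediction_set)
```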
E-valuating Forecasts on the Fly
The meteorologists of today no longer ask themselves, “Will it rain tomorrow?”, but rather, “What is the probability it will rain tomorrow?”. In other words, weather forecasting has evolved beyond giving simple point projections and has largely shifted to probabilistic predictions, where forecast uncertainty is quantified through quantiles or entire probability distributions. Probabilistic forecasting was also the subject of my previous blog post, where the article under discussion explored the intricacies of proper scoring rules, metrics that allow us to compare and rank these more complex distributional forecasts. In this blog post, we explore an even more basic consideration: how can forecasters be sure their probabilistic forecasts make sense and actually align with the data that end up being observed? This ‘alignment’ between forecasted probabilities and observations is referred to as probabilistic calibration. Put more concretely, when a precipitation forecasting model gives an 80% chance of rain, one would expect to see rain in approximately 80% of those cases (if the model is calibrated).
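As a quick illustration of what a calibration check looks like, here is a minimal sketch that bins hypothetical probability-of-rain forecasts and compares each bin's average forecast with the observed frequency of rain. The synthetic data are generated to be perfectly calibrated, so the two columns should roughly agree; this shows only the basic calibration idea, not the e-value machinery discussed in the post.

```python
import numpy as np

# Minimal calibration check: bin probability-of-rain forecasts and compare
# each bin's average forecast with the observed frequency of rain in that bin.
rng = np.random.default_rng(1)
forecasts = rng.uniform(0, 1, 5000)              # hypothetical forecast probabilities
rain = rng.uniform(0, 1, 5000) < forecasts       # outcomes from a calibrated model

bin_edges = np.linspace(0, 1, 11)
bin_index = np.digitize(forecasts, bin_edges[1:-1])
for b in range(10):
    in_bin = bin_index == b
    if in_bin.any():
        print(f"mean forecast {forecasts[in_bin].mean():.2f}  "
              f"observed frequency {rain[in_bin].mean():.2f}")
```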
Teaching Models by Adding Feature Hints
Machine learning models are excellent at discovering patterns in data to make predictions. However, their insights are limited to what is contained in the input data. What if we could provide additional knowledge about the model’s features to improve learning? For example, suppose we have prior knowledge that certain features are more important than others in predicting the target variable. Researchers have developed a new method called the feature-weighted elastic net (“fwelnet”) that integrates this extra feature knowledge to train smarter models, resulting in more accurate predictions than standard techniques.
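To illustrate the idea of penalizing "important" features less, here is a crude sketch using an ordinary elastic net in which the features believed to matter are up-scaled before the fit, which effectively shrinks their coefficients less than the others'. The data, the importance weights, and the rescaling trick are assumptions for illustration; this is not the fwelnet algorithm itself, which builds the extra feature knowledge into the penalty directly.

```python
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.preprocessing import StandardScaler

# Simulated data in which only the first two of ten features matter.
rng = np.random.default_rng(2)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta = np.array([3.0, 2.0] + [0.0] * (p - 2))
y = X @ beta + rng.normal(size=n)

# Hypothetical prior knowledge: features 0 and 1 are believed to be important.
# Up-scaling them before an ordinary elastic-net fit means their coefficients
# are penalized less than those of the remaining features.
importance = np.array([2.0, 2.0] + [1.0] * (p - 2))

X_std = StandardScaler().fit_transform(X)
model = ElasticNet(alpha=0.5, l1_ratio=0.5).fit(X_std * importance, y)

# Map coefficients back to the standardized-feature scale.
print(np.round(model.coef_ * importance, 2))
```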
MathStatBites at SCMA8: Astro Image Processing is BLISS?
In June 2023, astronomers and statisticians flocked to “Happy Valley,” Pennsylvania for the eighth installment of the Statistical Challenges in Modern Astronomy (SCMA) conference, held roughly every five years. The meeting, hosted at Penn State University, marked a transition in leadership from founding members Eric Feigelson and Jogesh Babu to Hyungsuk Tak, who led the proceedings. While the astronomical applications varied widely, including modeling stars, galaxies, supernovae, X-ray observations, and gravitational waves, the methods displayed a strong Bayesian bent. Simulation-based inference (SBI), which uses synthetic models to learn an approximate function for the likelihood of physical parameters given data, featured prominently among the talk topics. This article features work presented in two back-to-back talks on a probabilistic method for modeling (point) sources of light in astronomical images, for example stars or galaxies, delivered by Prof. Jeffrey Regier and Ismael Mendoza from the University of Michigan-Ann Arbor.
Bridging the Gap between Models and Data
One of the key goals of science is to create theoretical models that are useful for describing the world we see around us. However, no model is perfect. The inability of models to replicate observations is often called the “synthetic gap.” For example, it may be too computationally expensive to include a known effect or to vary a large number of known parameters. Or there may be unknown instrumental effects associated with variability in conditions during data acquisition.
Calling all writers!
We’re looking for new writers to join us!
Finding Clusters of Crime in Philadelphia
“Violent crime fell 3 percent in Philadelphia in 2010” – this headline from the Philadelphia Inquirer reflects Philadelphia’s reported decline in crime in the late 2000s and 2010s. However, is this claim exactly what it appears to be? In their paper, “Crime in Philadelphia: Bayesian Clustering and Particle Optimization,” Balocchi, Deshpande, George, and Jensen use Bayesian hierarchical modeling and clustering to identify more nuanced patterns in the temporal trends and baseline levels of crime in Philadelphia.