Title: Assurance for Sample Size Determination in Reliability Demonstration Testing
Authors & Year: Kevin Wilson & Malcolm Farrow (2021)
Journal: Technometrics [DOI: 10.1080/00401706.2020.1867646]

Why Reliability Demonstration Testing?

Ensuring high reliability is critical for hardware products, especially those that perform safety-critical functions, such as railway systems and nuclear power reactors. To build trust, manufacturers run reliability demonstration tests (RDTs), in which a sample of products is tested and failures are observed. If the test meets specific criteria, it demonstrates the product’s reliability. The design of an RDT depends on the type of hardware being tested, specifically whether it fails on demand or over time (time to failure). Traditionally, sample sizes for RDTs have been determined using methods based on the power of a hypothesis test or on risk criteria. Various approaches, including Bayesian methods and risk-criteria evaluation, have been developed over the decades to enhance the effectiveness of RDTs. These measures…
The setting repeats depressingly often. A hurricane inches toward the Florida coast. Weather scientists are glued to tracking monitors, hunched over simulation printouts, trying to move people out of harm’s way. Urgent to them is the need to mark a patch of the shore where the hurricane is likely to hit: those living in this patch need to be relocated. These scientists, and many before them (it’s hard to say since when), realized that what’s at issue here is not so much the precise location the storm is going to hit, precise to the exact grain of sand, but a stretch of land (whose length may shrink depending on how late we leave the forecasting) where it is likely to affect people. A forecast interval of sorts.
The meteorologists of today no longer ask themselves, “Will it rain tomorrow?”, but rather, “What is the probability it will rain tomorrow?”. In other words, weather forecasting has evolved beyond giving simple point projections, and instead has largely shifted to probabilistic predictions, where forecast uncertainty is quantified through quantiles or entire probability distributions. Probabilistic forecasting was also the subject of my previous blog post, where the article of discussion explored the intricacies of proper scoring rules, metrics that allow us to compare and rank these more complex distributional forecasts. In this blog post, we explore facets of an even more basic consideration: how can one be sure their probabilistic forecasts make sense and actually align with the data that ended up being observed? This ‘alignment’ between forecasted probabilities and observations is referred to as probabilistic calibration. Put more concretely, when a precipitation forecasting model gives an 80% chance of rain, one would expect to see rain in approximately 80% of those cases (if the model is calibrated).
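Calibration can be checked empirically by binning forecasts and comparing each bin’s stated probability with the observed event frequency, in the style of a reliability diagram. Below is a minimal, self-contained sketch on simulated data (the forecaster, sample size, and bin width are all assumptions for illustration, not anything from the post):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Simulate a perfectly calibrated rain forecaster: the forecast probability p
# is also the true probability with which rain actually occurs.
p = rng.uniform(0.0, 1.0, n)
rain = rng.random(n) < p

# Bin the forecasts and compare each bin's forecast level with the
# observed frequency of rain among the cases receiving that forecast.
bins = np.linspace(0.0, 1.0, 11)
idx = np.digitize(p, bins) - 1
for b in range(10):
    in_bin = idx == b
    print(f"forecast in [{bins[b]:.1f}, {bins[b+1]:.1f}): "
          f"observed rain frequency {rain[in_bin].mean():.2f}")
```

For a calibrated model, each printed frequency sits near the middle of its bin; for instance, cases forecast near 80% see rain roughly 80% of the time, exactly the intuition in the paragraph above.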
Machine learning models are excellent at discovering patterns in data to make predictions. However, their insights are limited to the input data itself. What if we could provide additional knowledge about the model features to improve learning? For example, suppose we have prior knowledge that certain features are more important than others in predicting the target variable. Researchers have developed a new method called the feature-weighted elastic net (“fwelnet”) that integrates this extra feature knowledge to train smarter models, resulting in more accurate predictions than regular techniques.
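fwelnet itself modifies the elastic net penalty using feature scores; as a loose, numpy-only illustration of the general idea (not the authors’ algorithm — the penalty form, weights, and data here are invented for the sketch), here is a ridge-style regression whose per-feature penalty is reduced for features we believe matter more:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.normal(size=(n, d))
beta_true = np.zeros(d)
beta_true[:3] = [2.0, -1.5, 1.0]   # only the first three features carry signal
y = X @ beta_true + rng.normal(scale=0.5, size=n)

def weighted_ridge(X, y, lam, w):
    """Closed-form ridge with per-feature penalty lam / w_j.

    A larger weight w_j means feature j is penalized less, encoding the
    prior belief that it is more important.
    """
    penalty = lam * np.diag(1.0 / w)
    return np.linalg.solve(X.T @ X + penalty, X.T @ y)

w_flat = np.ones(d)                              # no prior knowledge
w_info = np.where(np.arange(d) < 3, 10.0, 1.0)   # favor the first three features

beta_flat = weighted_ridge(X, y, lam=50.0, w=w_flat)
beta_info = weighted_ridge(X, y, lam=50.0, w=w_info)
# beta_info shrinks the true signal coefficients less than beta_flat does
```

When the prior knowledge is right, the informed fit recovers the signal coefficients with less shrinkage-induced bias; when it is wrong, it merely under-penalizes some null features, the usual trade-off with informative weighting.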
In June 2023, astronomers and statisticians flocked to “Happy Valley,” Pennsylvania for the eighth installment of Statistical Challenges in Modern Astronomy, a roughly quinquennial conference. The meeting, hosted at Penn State University, marked a transition in leadership from founding members Eric Feigelson and Jogesh Babu to Hyungsuk Tak, who led the proceedings. While the astronomical applications varied widely, including modeling stars, galaxies, supernovae, X-ray observations, and gravitational waves, the methods displayed a strong Bayesian bent. Simulation-based inference (SBI), which uses synthetic models to learn an approximate function for the likelihood of physical parameters given data, featured prominently among the talk topics. This article features work presented in two back-to-back talks, delivered by Prof. Jeffrey Regier and Ismael Mendoza of the University of Michigan-Ann Arbor, on a probabilistic method for modeling (point) sources of light in astronomical images, for example stars or galaxies.
One of the key goals of science is to create theoretical models that are useful for describing the world we see around us. However, no model is perfect. The inability of models to replicate observations is often called the “synthetic gap.” For example, it may be too computationally expensive to include a known effect or to vary a large number of known parameters. Or there may be unknown instrumental effects associated with variability in conditions during data acquisition.
“Violent crime fell 3 percent in Philadelphia in 2010” – this headline from the Philadelphia Inquirer reflects Philadelphia’s reported decline in crime in the late 2000s and 2010s. But is this claim exactly what it appears to be? In their paper, “Crime in Philadelphia: Bayesian Clustering and Particle Optimization,” Balocchi, Deshpande, George, and Jensen use Bayesian hierarchical modeling and clustering to identify more nuanced patterns in the temporal trends and baseline levels of crime across Philadelphia.
Consider a graph, which is a set of vertices connected with edges. Your task is to assign two colors to the vertices of the graph, but under the constraint that if vertices share an edge, then they must be different colors. Can you solve this problem and satisfy the constraint? Now suppose that the edges of the graph are chosen randomly; for example, by flipping a coin for every two vertices to determine if there is an edge connecting them. What’s the chance that you can still find a coloring which satisfies the constraint?
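The two-coloring question is exactly graph bipartiteness, which can be decided in linear time by breadth-first search: color a start vertex, give each neighbor the opposite color, and fail if an edge ever joins two same-colored vertices. A minimal sketch (the graph size and edge probability are arbitrary choices for illustration):

```python
import random
from collections import deque

def is_two_colorable(n, edges):
    """Return True iff the graph on vertices 0..n-1 admits a proper 2-coloring."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = [None] * n
    for start in range(n):            # handle every connected component
        if color[start] is not None:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if color[v] is None:
                    color[v] = 1 - color[u]   # opposite color across an edge
                    queue.append(v)
                elif color[v] == color[u]:
                    return False              # odd cycle: no valid 2-coloring
    return True

# Random graph: flip a biased coin for every pair of vertices.
rng = random.Random(0)
n = 30
edges = [(u, v) for u in range(n) for v in range(u + 1, n) if rng.random() < 0.05]
print(is_two_colorable(n, edges))
```

A triangle fails (two adjacent vertices must differ, forcing the third into a contradiction with one of them), while any tree or even cycle succeeds; as the edge probability grows, odd cycles appear and the chance of 2-colorability plummets.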
In an increasingly data-driven world, the ability to draw accurate conclusions from research and apply them in a broader context is essential. Enter generalizability and transportability: two critical concepts researchers consider when assessing the applicability of their findings to different populations and settings. In their article “A Review of Generalizability and Transportability,” published in the Annual Review of Statistics and Its Application in 2023, Irina Degtiar and Sherri Rose delve into these concepts, providing valuable insight into their nuances and potential applications.