Extreme value statistics focuses on events in the tail of a distribution. For univariate analyses, we typically choose a threshold above or below which we deem events to be in the tail, and fit a suitable extreme-value model to those data. For multivariate analyses one can take a similar approach, with the added complication of defining the shape of the threshold. In either case, how to select the level of such a threshold is an enduring question, since in reality the tail does not begin at a fixed point. Typically, the extreme value models that we fit should exhibit some stability in their estimates once the data are "sufficiently extreme" for the limit models to provide a good approximation. Consequently, so-called parameter stability plots are a popular tool for threshold selection: one seeks to heuristically optimize the bias-variance trade-off by selecting the lowest plausible threshold above which stability holds, while accounting for estimation uncertainty. However, a repeated criticism of this approach is that the plots are difficult to interpret, because estimates and pointwise confidence intervals are strongly dependent across thresholds. Working in a likelihood-based framework, we suggest a transformation of traditional parameter stability plots that is more interpretable, and comment on possibilities for turning this into an automated threshold selection method. The ideas are applicable in univariate and multivariate analyses, and examples from both situations will be presented.
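As a point of reference, the traditional parameter stability plot described above can be sketched as follows. This is a minimal illustration of the standard univariate approach (not the transformation proposed here), assuming simulated heavy-tailed data and a generalized Pareto fit to threshold exceedances; the grid of candidate thresholds and the use of the asymptotic standard error of the maximum likelihood shape estimate are illustrative choices.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
# Hypothetical data: a heavy-tailed sample standing in for observed extremes.
data = rng.pareto(3.0, size=5000) + 1.0

def stability_estimates(x, thresholds):
    """For each candidate threshold u, fit a GPD to the exceedances x - u
    (x > u) and record the ML shape estimate with an approximate
    pointwise standard error."""
    results = []
    for u in thresholds:
        exc = x[x > u] - u
        # Fit shape (xi) and scale by maximum likelihood, location fixed at 0.
        xi, _, sigma = genpareto.fit(exc, floc=0)
        n = exc.size
        # Asymptotic s.e. of the ML shape estimate, valid for xi > -0.5.
        se = np.sqrt((1 + xi) ** 2 / n)
        results.append((u, n, xi, se))
    return results

# Candidate thresholds at a range of empirical quantiles.
thresholds = np.quantile(data, np.linspace(0.80, 0.98, 10))
for u, n, xi, se in stability_estimates(data, thresholds):
    print(f"u={u:6.3f}  n={n:4d}  xi={xi:+.3f}  +/-{1.96 * se:.3f}")
```

Plotting the shape estimates with pointwise intervals against the threshold gives the stability plot; the difficulty the abstract highlights is that each fit reuses the exceedances of every higher threshold, so neighbouring points and intervals are strongly dependent.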