Tailored scoring rules for probabilities
Scoring rules were originally developed as a way to measure the accuracy of probability forecasts ex post. Ex ante, proper scoring rules encourage honestly reported and sharper probabilities, both of which increase the forecaster's expected score. Most applications utilize standard off-the-shelf scoring rules. In the spirit of decision analysis, we develop proper scoring rules that are tailored to specific decision-making problems and to the utility functions of particular decision makers. We show how these rules, which are intended for situations where a decision maker consults an expert to assess a probability, not only encourage honest reporting, but also reward sharpness in a way that aligns the interests of the expert and the decision maker. We also illustrate the generation of tailored scoring rules in numerical form, which is useful when analytical expressions for the tailored rules cannot be obtained or are too complex to be helpful in practice. Finally, we show how these numerical scoring rules can be presented to the expert in graphical or tabular form and suggest that this could be desirable even for standard scoring rules.
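The propriety property described above can be checked numerically. The sketch below uses the standard quadratic (Brier) rule as an example of an off-the-shelf proper scoring rule; it is an illustration of the general concept, not one of the tailored rules developed in this work:

```python
# Illustrative sketch: the quadratic (Brier) scoring rule
# S(r, x) = 1 - (x - r)^2, where r is the reported probability of an
# event and x in {0, 1} is its outcome, is strictly proper: an expert
# whose true probability is p maximizes expected score by reporting r = p.

def expected_quadratic_score(r, p):
    """Expected score of report r when the expert's true probability is p."""
    return p * (1 - (1 - r) ** 2) + (1 - p) * (1 - r ** 2)

p_true = 0.7  # the expert's honest probability (arbitrary example value)
reports = [i / 100 for i in range(101)]  # candidate reports 0.00, 0.01, ..., 1.00
best = max(reports, key=lambda r: expected_quadratic_score(r, p_true))
print(best)  # 0.7 -- honest reporting maximizes the expected score
```

A tailored rule would replace the quadratic form with one derived from the decision maker's specific problem and utility function, while preserving this same honesty-inducing property.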