We study decision-making settings that involve the simultaneous testing of more than one hypothesis. Examples include deciding which variables are important for a prediction model with high-dimensional data, and evaluating the popularity of a user interface with multiple metrics.
Testing each hypothesis independently with separate control of type-1 error leads to a large probability of rejecting some of the true null hypotheses. The false discovery rate (FDR) is therefore introduced as an error metric that generalizes the notion of type-1 error to the setting of multiple hypotheses.
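As a concrete illustration of FDR control (the classical baseline, not the graph-based procedures developed here), the following is a minimal sketch of the Benjamini-Hochberg step-up procedure, which controls FDR at a target level under independence of the p-values:

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.1):
    """Benjamini-Hochberg step-up procedure.

    Returns a boolean rejection mask that controls the FDR at level
    `alpha` when the p-values are independent.
    """
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    order = np.argsort(p)                     # indices of ascending p-values
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds            # p_(k) <= alpha * k / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])      # largest k passing its threshold
        reject[order[: k + 1]] = True         # reject the k+1 smallest p-values
    return reject

# Example: with alpha = 0.05, the three small p-values are rejected
# even though 0.02 > 0.05 / 5, the Bonferroni-corrected cutoff.
print(benjamini_hochberg([0.001, 0.01, 0.02, 0.5, 0.9], alpha=0.05))
```

Note the contrast with Bonferroni-style control of the family-wise error rate: BH compares the k-th smallest p-value to the growing threshold alpha * k / m, which typically yields more rejections when many non-null hypotheses are present.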
We consider multiple testing on graphs. The graphs can represent prior knowledge and structural constraints, which come in various forms, including groupings of hypotheses, covariates, weights, knowledge of dependence structures, and so on. The graphs can also represent communication constraints among the hypotheses, where one tests the hypotheses on the graph in a decentralized manner. We develop algorithms for multiple testing on graphs and networks that control the false discovery rate in both settings.