In recent years, leader-follower (or Stackelberg) games have attracted growing interest in Artificial Intelligence. In the two-player case, these games describe situations where one player (the leader) commits to a strategy and the other player (the follower) observes the leader’s commitment and then decides how to play. This is the case in security games, where a defender (leader) must allocate scarce resources to protect valuable targets from an attacker (follower). In this talk, we first analyse the single-follower scenario, distinguishing two cases: the optimistic one, where the follower breaks ties so as to maximize the leader’s utility, and the pessimistic one, where it breaks them so as to minimize it. Then, we switch to the multi-follower setting, assuming the followers play noncooperatively and simultaneously, thus reaching a Nash equilibrium. We focus on the pessimistic case with followers restricted to pure strategies, show that the problem is hard, and present an algorithm to solve it.
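To make the optimistic/pessimistic distinction concrete, here is a minimal sketch (not taken from the talk) of how a follower's tie-breaking affects the leader's value for a fixed commitment in a small bimatrix game; the payoff matrices and the committed strategy are made-up illustrative examples.

```python
import numpy as np

# Hypothetical payoffs: rows index leader actions, columns follower actions.
leader_payoff = np.array([[2.0, 4.0],
                          [1.0, 3.0]])
follower_payoff = np.array([[1.0, 1.0],   # the follower is indifferent on row 0,
                            [0.0, 2.0]])  # so ties between best responses can occur

def leader_value(commitment, optimistic=True):
    """Leader's expected utility when the follower best-responds to the
    committed mixed strategy, breaking ties for (optimistic) or against
    (pessimistic) the leader."""
    f_util = commitment @ follower_payoff       # follower's expected payoff per action
    best = np.isclose(f_util, f_util.max())     # set of follower best responses
    l_util = commitment @ leader_payoff         # leader's expected payoff per follower action
    return l_util[best].max() if optimistic else l_util[best].min()

x = np.array([1.0, 0.0])  # leader commits to the first row (a pure commitment)
print(leader_value(x, optimistic=True))   # tie broken in the leader's favor -> 4.0
print(leader_value(x, optimistic=False))  # tie broken against the leader -> 2.0
```

Under this commitment both follower actions are best responses, so the leader's value depends entirely on the tie-breaking rule; in the pessimistic case the leader must plan for the worst such resolution.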