Real-world security problems are generally characterized by uncertainty about attackers’ preferences, behavior, or other characteristics. To handle such uncertainty, security agencies (the defender) typically rely on historical attack data to build a behavior model of the attacker and incorporate this model into the generation of an effective defense strategy. For example, in wildlife protection, rangers can collect poaching signs (e.g., snares) to learn the behavior of poachers. In the real world, however, a clever attacker can manipulate its attacks to mislead the defender’s learning algorithm to its own benefit. Unfortunately, existing state-of-the-art algorithms for generating defense strategies are not equipped to handle such deceptive behavior by the attacker, which can lead to arbitrary losses for the defender. To address these challenges, this paper investigates a basic deception strategy of the attacker, termed imitative behavior deception, in which the attacker intentionally pretends to follow a specific behavior model and consistently plays according to that model in order to optimize its utility. We make three main contributions. First, building on previous work on attacker-behavior modeling, we introduce new algorithms to compute an optimal imitative behavior deception strategy for the attacker. Second, we propose a novel game-theoretic counter-deception algorithm that determines effective defense strategies while taking the attacker’s deceptive behavior into account. Third, we conduct extensive experiments, which show that under the attacker’s deception, the defender suffers a significant loss in utility whereas the attacker achieves a significant gain. Our experimental results also demonstrate that our counter-deception algorithm substantially diminishes the effects of the attacker’s deception.