Software companies must often decide whether to adopt new software development methodologies, technologies, or tools. Various evaluation methods have been proposed to support this decision making, ranging from those that focus on values (especially monetary values) to more exploratory ones, as well as various types of empirical studies. A common challenge in any evaluation is choosing the evaluation criteria. While a growing number of published empirical studies evaluate different methodologies, few of them include a rationale for selecting their evaluation criteria or metrics, and consequently they also struggle to explain their results. This paper proposes an approach for identifying relevant evaluation criteria based on the concepts of (core) practices and promises of a methodology. A practice of a methodology is a new concept or technique, or an improvement to established ones, that is an essential part of the methodology and differentiates it from other methodologies. A promise is the expected positive impact of a practice. Evaluation criteria or metrics are then selected to evaluate the promises of the practices. The approach facilitates identifying relevant evaluation criteria and describing the results, and thus improves the validity of empirical studies. It will also help develop a common research agenda for evaluating new methodologies and answer questions such as whether, and how, a methodology improves a quality attribute; what the differences between two methodologies are; and which studies are relevant when collecting evidence about a methodology. The proposed approach is applied to software reuse and model-driven engineering as examples, based on the results of two literature surveys performed in these areas.