This chapter provides an overview of ERIC (Extracting Relations Inferred from Convolutions), an approach to extracting explanations and human-comprehensible knowledge from Convolutional Neural Networks (CNNs). ERIC reduces the behaviour of one or more convolutional layers to a discrete logic program over a set of atoms, each corresponding to an individual convolutional kernel. Extracted programs yield performance that correlates with that of the original model. When the extracted rules are analysed alongside the data, ERIC acts as a visual concept learner and has discovered relevant concepts in classification tasks, including in fields that require specialised knowledge, such as radiology. Concepts with sharper edges appear to improve the fidelity of extracted programs, to the extent that ERIC achieved high fidelity on MNIST and on a traffic sign classification task with up to 43 classes. Extracted concepts may also be transferred to a different CNN trained on a related but distinct problem in the same domain; for example, concepts identified for pleural effusion were transferable to a COVID-19 classification task. Also in the medical domain, ERIC has demonstrated the ability to identify concepts used by CNNs that are neither justified anatomically nor used by medical doctors in their decision making. This chapter also briefly reviews Elite Backpropagation (EBP), which trains CNNs so that each class is associated with a small set of elite kernels, thereby improving the performance of ERIC by inducing more compact rules while maintaining high fidelity.
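
The short sketch below illustrates, in highly simplified form, the kernel-to-atom idea described above: per-kernel activations from one convolutional layer are thresholded into boolean atoms, and a rule-like model is fit over those atoms and checked for fidelity against the CNN's outputs. It is not the ERIC algorithm itself; the mean-activation quantisation, the per-kernel thresholds, and the use of a scikit-learn decision tree as a stand-in rule inducer are assumptions made purely for illustration.

```python
# Simplified illustration only, NOT the ERIC implementation: the quantisation,
# the thresholds, and the decision-tree rule inducer are assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def kernels_to_atoms(feature_maps, thresholds):
    """Binarise per-kernel activations into logical atoms.

    feature_maps: (n_samples, n_kernels, H, W) taken from one convolutional
                  layer of a trained CNN (assumed to be given).
    thresholds:   per-kernel activation thresholds, shape (n_kernels,).
    Returns a boolean matrix (n_samples, n_kernels): atom k is True for a
    sample when kernel k's mean activation exceeds its threshold.
    """
    activation = feature_maps.mean(axis=(2, 3))  # one scalar per kernel
    return activation > thresholds               # truth value per atom

# Toy usage with random data standing in for real CNN feature maps.
rng = np.random.default_rng(0)
maps = rng.random((200, 8, 6, 6))                # 200 samples, 8 kernels
cnn_labels = rng.integers(0, 2, size=200)        # class labels output by the CNN
thresholds = maps.mean(axis=(0, 2, 3))           # assumed: per-kernel mean

atoms = kernels_to_atoms(maps, thresholds)

# Stand-in rule inducer: a shallow decision tree over the boolean atoms plays
# the role of the extracted logic program.
program = DecisionTreeClassifier(max_depth=3, random_state=0)
program.fit(atoms, cnn_labels)

# Fidelity: how often the extracted "program" agrees with the CNN's outputs.
fidelity = (program.predict(atoms) == cnn_labels).mean()
print(export_text(program, feature_names=[f"kernel_{k}" for k in range(8)]))
print(f"fidelity to the CNN on this toy data: {fidelity:.2f}")
```

On real feature maps, each boolean feature would correspond to a named kernel atom, and the tree's branches would play the role of the discrete rules whose fidelity to the CNN is reported in the chapter.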