

Individual specificity and autonomy of a morally reasoning system are principally attained by means of a constructivist inductive process. The input to this process consists of moral dilemmas, or their story-like representations; its output is a set of general patterns that make it possible to classify as moral or immoral even dilemmas not represented in the initial “training” corpus. The moral inference process can be simulated by machine learning algorithms and can be based on the detection and extraction of morally relevant features. Supervised or semi-supervised approaches should be used by those aiming to simulate parent-to-child or teacher-to-student morality transfer in artificial agents. Pre-existing models of inference – e.g. the grammar inference models well studied in computational linguistics – can offer inspiration to anyone aiming to deploy a moral induction model. Historical data, mythology or folklore could furnish a basis for the training corpus, which could subsequently be extended significantly by a crowdsourcing method exploiting a web-based “Completely Automated Moral Turing test to tell Computers and Humans Apart” (CAMTCHA). Such a CAMTCHA approach could also be useful for evaluating an agent’s moral faculties.
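The supervised moral-induction step described above can be sketched in a minimal way. The toy corpus, the bag-of-words feature function and the naive Bayes classifier below are all illustrative assumptions, not the paper’s method: the labelled story fragments stand in for a folklore-derived training corpus, the labels play the role of the parent or teacher supervision signal, and the classifier generalises to dilemmas absent from the corpus.

```python
import math
from collections import Counter

# Hypothetical toy corpus of story-like dilemma descriptions (invented
# for illustration); labels are the teacher's supervision signal.
TRAIN = [
    ("the stranger returned the lost wallet to its owner", "moral"),
    ("she shared her food with the hungry traveller", "moral"),
    ("he kept his promise despite the cost", "moral"),
    ("the merchant cheated the blind customer", "immoral"),
    ("they stole the harvest from the village", "immoral"),
    ("he betrayed his friend for gold", "immoral"),
]

def features(text):
    # "Morally relevant features" are crudely approximated here by a
    # bag-of-words representation of the story.
    return Counter(text.lower().split())

class NaiveBayes:
    def __init__(self, alpha=1.0):
        self.alpha = alpha  # Laplace smoothing constant

    def fit(self, corpus):
        self.word_counts = {}
        self.class_counts = Counter()
        self.vocab = set()
        for text, label in corpus:
            self.class_counts[label] += 1
            wc = self.word_counts.setdefault(label, Counter())
            for word, n in features(text).items():
                wc[word] += n
                self.vocab.add(word)
        return self

    def predict(self, text):
        best, best_lp = None, float("-inf")
        total = sum(self.class_counts.values())
        for label in self.class_counts:
            # log prior + smoothed log likelihood of the observed words
            lp = math.log(self.class_counts[label] / total)
            wc = self.word_counts[label]
            denom = sum(wc.values()) + self.alpha * len(self.vocab)
            for word, n in features(text).items():
                lp += n * math.log((wc[word] + self.alpha) / denom)
            if lp > best_lp:
                best, best_lp = label, lp
        return best

clf = NaiveBayes().fit(TRAIN)
# Dilemmas not present in the training corpus:
print(clf.predict("the boy returned the lost dog to its owner"))  # → moral
print(clf.predict("the thief stole gold from his friend"))        # → immoral
```

A CAMTCHA-style crowdsourcing step would simply append newly labelled `(story, label)` pairs to `TRAIN` and refit; in a realistic system the bag-of-words features would be replaced by richer, explicitly morally relevant features.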