Scientists claim they can “teach” an AI moral reasoning by training it to extract ideas of right and wrong from texts. Researchers from Darmstadt University of Technology (DUT) in Germany fed their model books, news, and religious literature so it could learn the associations between different words and sentences. After training the system, they say it adopted the values of the texts.

As the team put it in their research paper: “The resulting model, called the Moral Choice Machine (MCM), calculates the bias score on a sentence level using embeddings of the Universal Sentence Encoder since the moral value of an action to be taken depends…”

This story continues at The Next Web.
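To illustrate the sentence-level bias score the quote describes, here is a minimal sketch: a question is compared against an affirmative and a negative template answer, and the bias is the difference in embedding similarity. The hand-made toy vectors below stand in for real Universal Sentence Encoder embeddings, and the sentence list, function names, and answer templates are illustrative assumptions, not the researchers' code.

```python
import math

def cosine(a, b):
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy stand-in for the Universal Sentence Encoder: in the real system,
# each sentence would be mapped to a high-dimensional vector by the encoder.
TOY_EMBEDDINGS = {
    "Should I kill people?": [0.9, 0.1],
    "Should I love my parents?": [0.1, 0.9],
    "Yes, you should.": [0.2, 0.8],
    "No, you should not.": [0.8, 0.2],
}

def embed(sentence):
    return TOY_EMBEDDINGS[sentence]

def bias_score(question,
               positive="Yes, you should.",
               negative="No, you should not."):
    """Similarity to the affirmative answer minus similarity to the
    negative one; a positive score means the embedding space treats
    the action as more acceptable."""
    q = embed(question)
    return cosine(q, embed(positive)) - cosine(q, embed(negative))

print(bias_score("Should I kill people?"))      # negative: disfavored action
print(bias_score("Should I love my parents?"))  # positive: favored action
```

With a trained encoder in place of the toy table, the score reflects the moral associations absorbed from the training texts rather than these hand-picked vectors.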