Machine learning techniques are increasingly being used in the sciences, as they can streamline work and improve efficiency. But these techniques are sometimes met with hesitation: When users don’t understand what’s going on behind the curtain, they may not trust the machine learning models.
As these tools become more widespread, a team of researchers in Lawrence Livermore National Laboratory’s Computing and Physical and Life Sciences directorates is trying to provide a reasonable starting place for scientists who want to apply machine learning but don’t have the appropriate background. The team’s work grew out of a Laboratory Directed Research and Development (LDRD) project on feedstock materials optimization, which led to a pair of papers about the types of questions a materials scientist may encounter when using machine learning tools, and how these tools behave.
Trusting artificial intelligence is easy when its conclusion can be checked against a simple ground truth, like identifying an animal in a photo. But when it comes to abstract scientific concepts, machine learning results can seem more ambiguous.
“There’s been a lot of work applying machine learning to natural images — cats, dogs, people, bicycles,” said Brian Gallagher, one of the project members and a group leader in the Center for Applied Scientific Computing. “These are naturally decomposable into parts, like wheels or whiskers. Those kinds of explanations aren’t meaningful here; there aren’t subparts to decompose into.”