Driving Machine Learning Solutions to Success Through Model Interpretability


Despite the progress the field of data science (DS) has made in the last decade, Gartner has estimated that almost 85 percent of all data science projects fail. Further, only 4 percent of data science projects are considered ‘very successful’. Among the major drivers of data science project failure are poor data quality, a lack of technical skill or business acumen, missing deployment infrastructure, and lack of adoption.

The last of these, model adoption by users, can “make or break” the entire project, yet it is often overlooked in project planning under the assumption that adoption will follow as long as the model helps the business. In practice, it is not that simple: the key reasons for low adoption of data science models are a lack of trust in, and understanding of, the model’s output.

Many machine learning models operate as a “black box”: they take a series of inputs and produce a series of outputs, whether classifications or regression estimates, but offer no insight into which input factors drove those outputs. Nor do they provide any rationale for how an undesired output could be turned into a desired one for a similar future case by changing the inputs.
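
Per-prediction feature attribution is one common way to open this black box for tabular models. The sketch below is illustrative only and not taken from the source: it uses the open-source SHAP library with a hypothetical random-forest classifier on a public dataset to show how a single prediction can be decomposed into per-feature contributions.

# Illustrative sketch (not from the source): per-prediction attribution with SHAP
# on a hypothetical random-forest classifier trained on a public dataset.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer decomposes each prediction into additive per-feature
# contributions, answering "which inputs pushed this output up or down?"
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:1])  # attributions for one case
print(shap_values)

Attributions like these can be surfaced alongside each prediction in the product so that end users see not just the output but the factors behind it.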

Explanations of which input variables affected the output, and in what manner, are critical for efforts to influence the key underlying metrics being tracked for that product or process. The success of a data science model largely depends on how well it is adopted and used by the consumers of its outputs.
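
As a concrete illustration (again a sketch under assumed tooling, not from the source), a global ranking of input variables can be produced with scikit-learn’s permutation_importance, pointing the business toward the inputs most worth influencing; the model and dataset below are hypothetical stand-ins.

# Illustrative sketch (not from the source): ranking input variables by how much
# randomly shuffling each one degrades model performance (permutation importance).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda t: t[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.4f}")  # the input variables most worth influencing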

Frequently, adoption fails to gain traction because end users do not understand why the model generated a given prediction. In most cases, the responsibility for identifying the drivers of a prediction falls on product owners or business analysts, who use their experience and tribal knowledge to make assumptions about the reasons behind it. This necessarily relies on subjectivity and human bias, and it may or may not align with the true underlying data patterns the model uses to make its predictions. The problem is particularly acute when the model’s predictions conflict with end users’ tribal knowledge or gut instincts.

User trust is likewise affected when the model produces an incorrect output. If end users can see why the model made a particular decision, the ensuing trust erosion can be mitigated, confidence restored, and feedback elicited for the model’s improvement. Without that restoration of trust, users may gradually fall back to the old way of doing things, leading to the DS project’s failure without clear feedback to the developers about why the model was not adopted.

Adding interpretability and explanations for predictions can increase user confidence in a data science solution and drive its adoption by end users. A key learning from our work in increasing and maintaining data science adoption is that explainability and…


Source: www.dell.com
