Taking a Product to Market

Description

A software engineer has concerns about a recommender system his firm is designing to sell to health and life insurance companies. In a meeting with the sales manager, he raises these concerns and argues for extending the development timeline to address them. The sales manager, however, wants to bring the product to market as soon as possible.

Body

Kenny is a sales manager at a tech firm that designs data management and processing solutions for health and life insurance companies. He is leading the design of a software system that will predict customer product preference based on whether a customer is a smoker or nonsmoker. Specifically, they want their product to compare new customer data with previous customer data, match the previous customer information with the products those customers purchased, and establish associations that connect new customers to existing products. The predictive system is intended to make the process of selling insurance faster, more efficient, and more cost-effective. Kenny wants to move the product to market as quickly as possible, since other firms are currently developing similar software.
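
To make the design concrete, here is a minimal sketch (in Python, with hypothetical field names, products, and data) of how such a single-attribute matching system might work: a new customer is compared to previous customers on the smoker/nonsmoker attribute alone, and the product most often purchased by the matching group is recommended. This illustrates the general approach the case describes, not the firm's actual implementation.

```python
from collections import Counter

# Hypothetical historical records: each previous customer's smoker status and
# the insurance product they ultimately purchased. Field names and product
# labels are illustrative assumptions, not the firm's actual data model.
previous_customers = [
    {"smoker": True,  "product": "high-risk comprehensive"},
    {"smoker": True,  "product": "high-risk comprehensive"},
    {"smoker": True,  "product": "high-risk comprehensive"},
    {"smoker": False, "product": "standard term life"},
    {"smoker": False, "product": "basic health"},
]

def recommend(new_customer, history=previous_customers):
    """Recommend the product most often purchased by previous customers
    who share the new customer's smoker/nonsmoker status."""
    matches = [c["product"] for c in history
               if c["smoker"] == new_customer["smoker"]]
    if not matches:
        return None  # no comparable previous customers on record
    # Most frequently purchased product among the matching group
    # (ties are broken by first occurrence in the history).
    return Counter(matches).most_common(1)[0][0]

print(recommend({"smoker": True}))   # -> high-risk comprehensive
print(recommend({"smoker": False}))  # -> standard term life
```

As the sketch suggests, every new customer is funneled toward whichever product the firm has historically sold most often to smokers or to nonsmokers, which is exactly the narrowness Charles objects to below.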

Charles, one of the members of the solution design team, has extensive experience working in the insurance industry. During a late-stage team meeting, Charles raises concerns about the customer-product categorizations they have developed. He argues that the categorizations make inaccurate connections between customers and products by inferring too much from the simple smoking/non-smoking designation without considering other indicators such as number of children or marital status. As a result, the recommender system will end up matching customers with products that are not necessarily right for them.

Charles continues, “The many nuanced details of a person’s life matter in the buying and selling of insurance, and these are left out of these algorithms. Focusing primarily on smoker/non-smoker might make it easier for the insurance company to sell insurance, and sell more of it, but may not benefit the customers themselves in the long run. We need to take a step back in our process and tweak our code to consider additional features from our consumer data. We can add these features and establish a more rigorous system that produces more accurate matches and better outcomes for customers, without sacrificing the system’s run-time performance or missing our deadline to hit the market.”
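
Charles’s proposal, sketched below under the same hypothetical assumptions, amounts to matching on several customer attributes rather than smoking status alone, falling back to fewer attributes when no exact match exists. The field names, records, and fallback strategy are illustrative assumptions, not the team’s actual design.

```python
from collections import Counter

def recommend_multifeature(new_customer, history,
                           features=("smoker", "marital_status", "num_children")):
    """Recommend the product most often purchased by previous customers who
    match the new customer on as many of the listed features as possible.

    If no customer matches on all features, progressively drop the last
    feature and retry (a hypothetical fallback so a recommendation is
    still returned quickly)."""
    for depth in range(len(features), 0, -1):
        subset = features[:depth]
        matches = [c["product"] for c in history
                   if all(c[f] == new_customer[f] for f in subset)]
        if matches:
            return Counter(matches).most_common(1)[0][0]
    return None

# Example with hypothetical records: a married nonsmoker with two children.
history = [
    {"smoker": False, "marital_status": "married", "num_children": 2,
     "product": "family health plus"},
    {"smoker": False, "marital_status": "single", "num_children": 0,
     "product": "basic health"},
    {"smoker": True,  "marital_status": "married", "num_children": 2,
     "product": "high-risk comprehensive"},
]
print(recommend_multifeature(
    {"smoker": False, "marital_status": "married", "num_children": 2}, history))
# -> family health plus
```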

“Will a more rigorous system produce more accurate risk-assessments of customers?” Kenny asks.

“Yes,” Charles replies. “While the general smoker category may imply a higher at-risk status, using more classifiers beyond the smoker/non-smoker distinction will substantiate the risk assessment by considering other factors like marital status, age, gender, living situation, and so on, and will better match customers to the appropriate insurance packages.”

“But a recommender system based primarily on the smoker/nonsmoker classifier will group more customers into a higher-risk category, and match more customers to a particular type of insurance product?” Kenny clarifies.

“Well, yes…” Charles reluctantly replies.

“Then we should push forward with that.”

Charles again begins to protest the ethics of the system, since it will recommend high-risk packages to people who might not need them, but Kenny downplays his critique: “I think you are missing the bigger picture, and what is most important to our firm. At the end of the day, we are interested in our product’s ability to increase the revenue of our clients. We don’t need a rigorous recommender system to sell this product, because insurance firms will want classification to stop as soon as it reaches the ‘yes’ that determines the product they want to sell to their customers, and that can be achieved with a simple classifier. Our market research has shown that classifier to be smoker/nonsmoker. Insurance companies can generally group their customers into higher-risk categories and sell more of the products they want to sell this way.”

“So, in a way, we are only interested in the classifier columns that will scare people into buying a product that an insurance firm wants to sell?” Charles probes.

Kenny replies confidently, “If you choose to look at it that way, that is your choice. The way I see it, a more accurate recommender system does not enhance the profitability of our product, and may in fact diminish its attractiveness to our insurance company clientele. We are not responsible for how our clients choose to provide insurance to their customers; we provide tools that respond to their demand, which is to make more money. Additionally, it’s not as if smoker/nonsmoker is a poor general classifier. There is plenty of data demonstrating the dangers smoking poses to a person’s health. In the eyes of most, this is enough to justify a higher-risk health status.”

Kenny continues, “You should not get overly concerned about the current state of the system. Your team has produced a recommender system that can be effectively marketed to our customers, one that accurately addresses the needs communicated by the insurance industry. All indicators suggest this system will be very successful on the market as is, and you and your team should be proud of your work. I do not want to overly hype its success, but I would not be surprised to see your team awarded high bonuses this quarter.”

The meeting concludes, and Charles leaves a little frustrated. He thinks that companies should follow the ethical principle of beneficence, meaning that businesses have a moral obligation to act for the benefit of their customers. He has always believed that corporate beneficence, which requires always keeping the customer’s best interests in mind, is not only the most ethical way to run a business but also the most successful in the long run. After considering what Kenny has said about satisfying their customers’ needs, however, Charles wonders if his concerns are warranted. If what Kenny says is true, his team will be rewarded for producing a product from which the firm significantly profits. Making the additional changes to the product might negate some of these rewards, and possibly invite penalties for his team. Further, once the product goes to market, Charles isn’t sure whether he should feel any responsibility for how it is used by their insurance company clientele.

Questions:

  1. Do you agree with Charles’s concerns? Why or why not?
  2. What might some of Charles’s options be if he wants to appeal Kenny’s decision?
  3. What would you do in this situation? Why?
  4. Had you heard of the term beneficence before? If so, when? Discuss what “Corporate Beneficence” might mean, and the implications this concept has for software engineers and computer scientists in corporate settings.

Resources for Further Reading:

  • Beauchamp, Tom, "The Principle of Beneficence in Applied Ethics," The Stanford Encyclopedia of Philosophy (Winter 2013 Edition), Edward N. Zalta (ed.), http://plato.stanford.edu/archives/win2013/entries/principle-beneficence/.

Notes

Authors: Dalton George, MS, and Jason Ludwig, MS, are graduates of the Drexel University Center for Science, Technology and Society. June 2017.

This case/scenario was developed with support of NSF Award #1338205 Ethics of Algorithms (from NSF's EESE program). The full set of the Ethics of Algorithms cases is available at http://ethicsofalgorithms.com/cases.html. The principal investigators, Kelly Joyce and Kris Unsworth, conducted fieldwork and interviews with computer scientists and engineers to identify the ethical challenges they face when working with algorithms and big data. Dalton George and Jason Ludwig were research assistants on the project and used the data to develop the case. The research team tested the case in multiple classrooms and revised the case based on instructor and student feedback.

Citation
Dalton George, Jason Ludwig. 2017. Taking a Product to Market. Online Ethics Center. https://onlineethics.org/cases/taking-product-market.

Commentary

This case presents a hypothetical scenario in health care, with a focus on algorithmic design and the accuracy of product recommendations. Ethics is salient here even though one might argue this is also a design-choice problem; because the case involves health and insurance premiums, the ethical link is clear. Overall, the case is well presented and includes a brief ethical analysis, discussing the principle of beneficence and how it relates to decision making. Multiple ethical perspectives are not brought into play, though, and I think the case could be much stronger if they were. Here I will present two different scenarios for further discussion. One treats the algorithm as the "punisher," since it essentially penalizes people for smoking. The other treats the algorithm as the "extremist," since it ignores every aspect of a person's health status other than their smoking behavior.

Scenario One: The Punisher

Can punishment ever be for the good? In this case, the algorithm, in its current form, exposes the truth about people when it comes to their smoking behavior. In doing so, it deliberately places smokers in the high-risk group, which essentially means they will pay higher insurance premiums. At first sight this looks bad, perhaps even discriminatory. But can we think of such punishment as good? Can we use it, for example, to invoke the duty of self-improvement? In other words, even if you are healthy in every other way, the fact that you smoke is so unhealthy (in the eyes of the algorithm and those who developed it) that it automatically places you in the high-risk group. This classification can annoy some people and make them feel they don't belong, especially smokers who otherwise engage in healthy behaviors such as exercising. Yet such classification can also benefit some and prompt them to stop and think about how unhealthy their smoking behavior actually is. On the bright side, such reflection may motivate them to quit. A pessimist would discourage such utopian ideas and say that punishment actually leads to rebellion: since smokers now know they are financially penalized for smoking, they may actually increase their nicotine consumption out of anger and protest, and we all know that THAT is undesirable. But is there any other way punishment can be good? What about conscious punishment? Or even the idea of a health tax that improves the lives of others and thereby invokes the duty of beneficence?

Scenario Two: The Extremist

Here I highlight a few injustices associated with extreme algorithms. The situation remains the same: smokers pay more for health insurance simply because they engage in smoking behavior. Let me start with a question: how and why is that actually unfair? After all, we know from scientific research that the risk of lung cancer is far higher among smokers (see White, 1990, for an example). Well, there are probably many people who smoke only every now and then but are otherwise quite healthy. For example, they might exercise daily, eat well, and have good genes. Should they pay more because they smoke socially or, as I wrote before, every now and then? Is smoking behavior really the best proxy we can come up with for deciding someone's health status? Why not examine other health-related data and make a more accurate and fairer decision? Isn't that our duty, as citizens, anyway: to be fair and use all the information we have to make a decision?

The counter-argument is that only a few people smoke and otherwise engage in healthy behavior. After all, smoking is correlated with drinking, which is also quite bad for you (see Batel et al., 1995; Burton and Tiffany, 1997). On top of that, no algorithm is perfect, and the cost and time to gather all that information are, from a business perspective, absurd and irrational for the firm. Why not then follow Simon's bounded rationality approach? Would we really be unethical in doing so? In the end, we know the algorithm has good intentions: smoking is unhealthy and correlated with other unhealthy behavior, and the algorithm's goal is to capture that (i.e., health risk). But should healthy smokers really pay the same insurance premium, ceteris paribus, as unhealthy smokers? Is that actually fair? What does that say about your business, from a moral perspective? Again, we come back to issues of fairness related to algorithmic extremism.

References

  • Batel, P., Pessione, F., Maître, C. and Rueff, B. (1995) Relationship between alcohol and tobacco dependencies among alcoholics who smoke. Addiction, 90, 977-980.
  • Burton, S. M. and Tiffany, S. T. (1997) The effect of alcohol consumption on craving to smoke. Addiction, 92, 15-26.
  • White, T. (1990) Research on smoking and lung cancer: A landmark in the history of chronic disease epidemiology. Yale Journal of Biology and Medicine, 63(1), 29-46.