Online ratings systems shouldn’t just be a numbers game
When you’re browsing the internet for something to buy, watch, listen to, or rent, chances are that you will scan online recommendations before you make your purchase. It makes sense. With an overabundance of options in front of you, it can be difficult to know exactly which movie or garment or holiday gift is the best fit.
Personalized recommendation systems help users navigate the often-confusing labyrinth of online content. They take a lot of the legwork out of decision-making. And they are an increasingly commonplace function of our online behavior. All of which is in your best interest as a consumer, right?
Yes and no, says Jesse Bockstedt, associate professor of information systems and operations management at Emory’s Goizueta Business School. Bockstedt has produced a body of research in recent years that reveals a number of issues with recommendation systems that should be on the radar of organizations and users alike.
While user ratings, often shown as stars on a five- or ten-point scale, can help you decide whether or not to go ahead and make a selection, online recommendations can also create a bias towards a product or experience that might have little or nothing to do with your actual preferences, Bockstedt says. Simply put, you’re more likely to watch, listen to, or buy something because it’s been recommended. And, when it comes to recommending the thing you’ve just watched, listened to, or bought yourself, your own rating might also be heavily influenced by the way it was recommended to you in the first place.
“Our research has shown that when a consumer is presented with a product recommendation that has a predicted preference rating—for example, we think you’ll like this movie or it has four and a half out of five stars—this information creates a bias in their preferences,” Bockstedt says. “The user will report liking the item more after they consume it if the system’s initial recommendation was high, and they report liking it less post-consumption if the system’s recommendation was low. This holds even if the system recommendations are completely made up and random. So the information presented to the user in the recommendation creates a bias in how they perceive the item even after they’ve actually consumed or used it.”
This in turn creates a feedback loop: the ratings users feed back into the system may reflect authentic preference, but that preference is very likely contaminated by bias. And that’s a problem, Bockstedt says.
“Once you have error baked into your recommendation system via this biased feedback loop, it’s going to reproduce and reproduce so that as an organization you’re pushing your customers towards certain types of products or content and not others—albeit unintentionally,” Bockstedt explains. “And for users or consumers, it’s also problematic in the sense that you’re taking the recommendations at face value, trusting them to be accurate while in fact they may not be. So there’s a trust issue right there.”
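To make the feedback loop Bockstedt describes concrete, here is a minimal, purely hypothetical simulation. The anchoring model and every number in it are illustrative assumptions, not figures from the study: each user’s reported rating is pulled part-way toward the prediction they were shown, and the system re-estimates the item’s rating from that biased history.

```python
import random

random.seed(42)

def reported_rating(true_pref, shown_prediction, anchor_weight=0.3):
    # Hypothetical anchoring model: the post-consumption rating is
    # pulled part-way toward the prediction the user was shown.
    return (1 - anchor_weight) * true_pref + anchor_weight * shown_prediction

TRUE_MEAN = 3.0   # the item's true average appeal, in stars (assumed)
shown = 4.5       # a made-up, arbitrarily high initial recommendation
all_ratings = []

# The system shows its current estimate, collects anchored ratings,
# then re-estimates from the full (biased) history -- a feedback loop.
for rnd in range(5):
    all_ratings += [reported_rating(random.gauss(TRUE_MEAN, 0.5), shown)
                    for _ in range(200)]
    shown = sum(all_ratings) / len(all_ratings)
    print(f"round {rnd + 1}: system estimate = {shown:.2f}")

# The estimate stays inflated above the true mean of 3.0 long after the
# made-up 4.5-star seed: the early biased ratings never fully wash out.
```

In this toy model the error “baked in” by the first random recommendation keeps propagating through every retraining round, which is the unintentional steering effect described above.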
Online recommendation systems can also potentially open the door to less-than-scrupulous behaviors, Bockstedt adds.
Because ratings can anchor user preferences and choices to one product over another, who’s to say organizations might not actually leverage the effect to promote more expensive options to their users? In other words, systems have the potential to be manipulated such that customers pay more—and pay more for something that they may not in fact have chosen in the first place.
Addressing recommendation system-induced bias is imperative, Bockstedt says, because these systems are essentially here to stay. So how do you go about attenuating the effect?
His latest paper sheds new and critical light on this. Together with Gediminas Adomavicius and Shawn P. Curley of the University of Minnesota and Indiana University’s Jingjing Zhang, Bockstedt ran a series of lab experiments to determine whether user bias could be eliminated or mitigated by showing users different types of recommendations or rating systems. Specifically they wanted to see if different formats or interface displays could diminish the bias effect on users. And what they found is highly significant.
Emory has published a full article on this topic, and it’s available to read here:
If you are a journalist looking to cover this topic, or if you are simply interested in learning more, let us help.
Jesse Bockstedt is associate professor of information systems and operations management at Emory’s Goizueta Business School. He is available to speak with media; simply click on his icon now to book an interview today.