Let me start off with a bold statement: we now live in a golden age of content creation. Never in our history has it been easier to capture, nurture, and grow our ideas in electronic formats, and publish those ideas to audiences worldwide.
Unfortunately, the ease of creation results in an overabundance of content, which leads to some headaches. Storing and retrieving all of that information are two of the most obvious ones. But other problems surface when it comes time for the average user to understand and summarize all of that data.
What do I mean by this? Let’s take a look at product reviews as an example.
Trying Not to Get Burned
Recently I was searching the market for a set of smoke detectors to complete a basement renovation. While browsing my local big-box home renovation store, I came across a very stylish model of smoke detector backed by a notable internet company. I really liked the way the product looked, but each detector cost nearly double the price of a standard smoke detector. One question immediately crossed my mind:
Does the expensive, trendy product outperform its competition? I knew the internet was a great place to find answers from people who had bought and used the product. So, armed with the model number of the smoke detector, I took to the internet to find out what people were saying.
To make things interesting, I kept track of what was difficult about finding, interpreting, and summarizing the information I found. After all, given the number of people who posted reviews about the smoke detector, the process should be easy… right?
Getting the Right Source
I immediately ran into the first problem. There were so many different places that had product reviews for the smoke detector!
Amazon, Home Depot, CNET, Engadget, YouTube – the list went on and on. Each site serves a very different audience, so it stands to reason that the reviews on each one reflect the opinions and experiences of a different subset of the population. People reviewing the product at a home renovation site may be more concerned about basic functionality, while people reviewing it at a high-tech site may be more concerned about its internet connectivity.
Looking exclusively at one site might not give me the best overall picture, so I decided that I should probably look at reviews from multiple sites. This of course took a lot more time, since I now had to wade through a number of different sites to find interesting information.
- Observation 1: I would have loved a way to aggregate multiple sources into a single information stream, saving me a lot of time. Even better – aggregate each source into a single stream, but have a view that would break out what each population was saying. That way, I could identify with a particular group and better explore how their opinions may reflect my own needs.
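To make that concrete, here is a minimal sketch of this kind of aggregation in Python. Everything in it is invented for illustration – the site names, the review fields, and the sample reviews – and a real pipeline would pull each list from a scraper or a site API.

```python
from collections import defaultdict

# Hypothetical, already-fetched review lists; in practice each one would
# come from a scraper or an API for the corresponding site.
amazon_reviews = [{"source": "Amazon", "stars": 5, "text": "Easy install."}]
cnet_reviews = [{"source": "CNET", "stars": 3, "text": "Wi-Fi pairing is flaky."}]

# Merge every source into one stream, so there is one place to read instead
# of many sites to wade through.
stream = amazon_reviews + cnet_reviews

# The per-source view breaks the stream back out, showing what each
# audience is saying.
by_source = defaultdict(list)
for review in stream:
    by_source[review["source"]].append(review)

for source, reviews in by_source.items():
    print(f"{source}: {len(reviews)} review(s)")
```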
Repetition, Repetition, Repetition
The next problem became apparent once I dug into Amazon and started reading reviews. People were saying the same things over and over. And over.
Usually, the first few pages of reviews repeated the same points. This was good in one sense, since it gave me an overview without my having to scan thousands of pages. However, the repetition dulled my attention – it became easy to miss a unique nugget of insight. Missing one review might not sound like a Bad Thing, but if you are on the fence about purchasing a product, a single unique and powerful insight can easily sway your decision to buy.
- Observation 2: I found myself craving a tool that would summarize the repetition. I didn’t want to read 400 reviews that discussed false alarms, but it was still important that there were 400 people who had that problem. It was also important that I found those 10 reviews that talked about positive customer service experiences.
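As a rough sketch of the summarization I had in mind – assuming the themes have already been pulled out of each review somehow (here they are hand-labelled placeholders; a real tool would use keyword matching or a topic model) – counting theme mentions collapses hundreds of repetitive reviews into a handful of lines:

```python
from collections import Counter

# Hypothetical per-review theme labels; in a real pipeline these would come
# from keyword matching or a topic model rather than being hand-labelled.
themes_per_review = [
    ["false alarms"],
    ["false alarms", "battery life"],
    ["customer service"],
    ["false alarms"],
]

# Collapse the repetition into counts: one line per theme, not one page per
# review, while still preserving how many people raised each point.
counts = Counter(theme for themes in themes_per_review for theme in themes)

for theme, n in counts.most_common():
    print(f"{n:>4}  {theme}")
```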
Perspective Changes Over Time
At first I thought I could concentrate exclusively on the most recent reviews. However, I learned that people’s opinions of the product changed over time even though the product itself may not have changed. This change of opinion could be due to several things – for example, a competitor’s product entering the marketplace may make people look less favorably on the one they were reviewing. The shift in sentiment matters, since the underlying cause might be something I’m quite concerned about (or it could be nothing at all).
- Observation 3: I needed a tool that would let me slice the reviews into different time segments. This would let me identify trouble periods more quickly and examine the reviews from those periods to better understand what people didn’t like about the product.
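Even something as simple as bucketing reviews by month gets at this. The dates and ratings below are made up for illustration:

```python
from collections import defaultdict
from datetime import date
from statistics import mean

# Hypothetical (posted-on, stars) pairs; real data would carry the text too.
reviews = [
    (date(2015, 1, 12), 5),
    (date(2015, 1, 30), 4),
    (date(2015, 6, 3), 2),
    (date(2015, 6, 21), 1),
]

# Slice the reviews into monthly segments and score each one, so a sudden
# dip points straight at the period whose reviews deserve a closer read.
by_month = defaultdict(list)
for posted_on, stars in reviews:
    by_month[(posted_on.year, posted_on.month)].append(stars)

for (year, month), stars in sorted(by_month.items()):
    print(f"{year}-{month:02d}: avg {mean(stars):.1f} across {len(stars)} review(s)")
```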
Star Struck
While I appreciate that many review sites try to summarize the population’s satisfaction with an average star rating, it isn’t helpful! Let’s take an example rating of 3.7 stars. Ignoring the fact that star ratings are ordinal data that arguably should not be averaged at all, what does a 3.7 really mean?
You would think that people are generally neutral about the product. But a different story hides behind the average. If there are 43 people who rate the product 5 stars, and 20 people who rate the product 1 star, you would get an average of 3.73 stars. This split between extreme high and extreme low is a long way from a neutral rating! 68.3% of the reviewing population loves the product, but 31.7% of the reviewing population hates it. I really want to know what the haters are talking about, and whether their concerns are valid.
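The arithmetic is easy to verify with a few lines of Python:

```python
# 43 five-star ratings and 20 one-star ratings, as described above.
ratings = [5] * 43 + [1] * 20

average = sum(ratings) / len(ratings)
five_star_share = ratings.count(5) / len(ratings)
one_star_share = ratings.count(1) / len(ratings)

print(f"average: {average:.2f} stars")         # 3.73
print(f"5-star share: {five_star_share:.1%}")  # 68.3%
print(f"1-star share: {one_star_share:.1%}")   # 31.7%
```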
It doesn’t end there – the average star rating hides even more. A generally positive review may touch upon areas that the reviewer feels could be improved, while a negative review may actually talk about some of the positive aspects of the product. An average rating hides these unique nuances that are really important to someone who is undecided about the product.
- Observation 4: I needed a tool that understands more about the general themes people are discussing, and the sentiment (positive, negative, or neutral) that each reviewer is expressing toward each theme. This would give me a more balanced idea of what is wrong or right with the product.
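A sketch of the bookkeeping this implies, with invented (theme, sentiment) pairs standing in for the output of a real sentiment model:

```python
from collections import Counter, defaultdict

# Invented (theme, sentiment) pairs standing in for what a sentiment model
# would extract from individual reviews.
mentions = [
    ("smoke detection", "negative"),
    ("smoke detection", "negative"),
    ("design", "positive"),
    ("design", "neutral"),
    ("customer service", "positive"),
]

# Tally sentiment per theme, so a mostly positive review can still register
# its one complaint, and a one-star rant can still credit what worked.
per_theme = defaultdict(Counter)
for theme, sentiment in mentions:
    per_theme[theme][sentiment] += 1

for theme, tally in per_theme.items():
    print(f"{theme}: {dict(tally)}")
```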
Conclusion: The Best of All Worlds
I’m sure you are dying to know my final decision on the trendy smoke detector after more than three hours of manual searching. So, here it is:
I didn’t buy it.
In the end I went with the competitor’s product. Why? My decision came down to basic functionality, and whether the product had a solid history of performing those functions well. While some of the trendy features appealed to my inner tech spirit, many reviewers felt that the product was flawed at the one task I really needed – smoke detection.
Coming back to my exercise in understanding product reviews: ideally, I would have loved an online service that understood the points I made above and presented me with the kinds of views I mentioned. More specifically, I crave a tool that could:
- Aggregate multiple sources into a single data stream;
- Dive into the actual text of the review and break it down into underlying themes;
- Keep track of whether people are generally saying positive or negative things about those themes; and
- Present information over different time slices and audiences.
My experience above is what excites me about Sentiment Radar; it is a tool capable of digging into thousands of reviews, discovering the themes people are talking about, scoring the sentiment of each theme in every review, and summarizing that information in a format that is extremely easy to digest. In a recent test I ran, Sentiment Radar generated comparable results automatically in a minute or two, while the same manual process took me three hours.
Sentiment Radar can also be used by agile product development teams and companies looking to maximize value in their next development cycle, but that’s a topic for another post.