I have been really fascinated by the capability of A/B testing tools to drive design decisions using data science. Combine machine learning and statistics with them, and the combination becomes even more potent and effective.
A/B testing can be a tough art to master; even the most experienced practitioners go wrong all the time with their hypothesis and metric selection. Hence, accepting your own and your colleagues' varying opinions, and navigating through them to get the most effective long-term impact, is the key to success.
But one should stay honest to the science of A/B testing and how a test is designed, even when the results are not favourable; like a daily routine, it should be practised as it is.
What I am here to discuss is the new art of combining "cognitive design behaviour with data science algorithms", using Spotify's playlist recommendations as the case study.
My current research suggests that Spotify's recommendation engine is one of the best in this space!
Yes, better than Apple's iTunes, Alibaba's and Amazon's homepage recommendations and similar-SKU suggestions, Google's search engine and Google News feed, or Netflix's similar-shows feature. What makes Spotify shine here is how it combines music preference attributes with signals that capture human emotion. It assigns a unique score to each song's attributes and suggests a song when your attributes match the range of moods/preferences. The engine is also constantly improving and updating based on the preferences of bigger clusters of listeners. What I appreciate is that Spotify is not using similar data from other platforms but trusts its own engines. Secondly, it empowers customers to play with the algorithm and change their preferences.
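To make that idea concrete, here is a minimal sketch in Python of how such attribute-range matching could work. The attribute names (valence, energy) and all the values are illustrative assumptions on my part, not Spotify's actual engine:

```python
from dataclasses import dataclass

@dataclass
class Song:
    title: str
    valence: float  # 0.0 (sad) .. 1.0 (happy); hypothetical mood attribute
    energy: float   # 0.0 (calm) .. 1.0 (intense)

@dataclass
class MoodRange:
    """The (low, high) window a listener prefers for each attribute."""
    valence: tuple
    energy: tuple

def matches(song: Song, mood: MoodRange) -> bool:
    """Suggest a song only when every attribute falls inside the preferred range."""
    return (mood.valence[0] <= song.valence <= mood.valence[1]
            and mood.energy[0] <= song.energy <= mood.energy[1])

catalogue = [
    Song("Upbeat Anthem", valence=0.9, energy=0.8),
    Song("Rainy Evening", valence=0.2, energy=0.3),
]
workout_mood = MoodRange(valence=(0.6, 1.0), energy=(0.7, 1.0))

print([s.title for s in catalogue if matches(s, workout_mood)])  # ['Upbeat Anthem']
```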
From further research, A/B testing plays a very important role in making the decisions below:
- UI decisions: button shape, contrast colour, position, text, headline, icons (+ vs heart), widget decisions, hamburger menu
- Page format: list view vs card view, the information architecture, the amount of information shared in a single page view
- Metric selection and prioritisation order for making playlist recommendations
- Deciding the weightage of the different key factors in the engine's logic, which is the highlight of Spotify's success (a weighted-scoring sketch follows this list)
- Localising + personalising the site: https://www.thinkwithgoogle.com/intl/en-gb/success-stories/how-spotify-increased-premium-subscriptions-using-google-optimize-360/
- Optimising keyword search: when users in Germany searched for "audiobook" and clicked on one of Spotify's ads, they were brought either to a custom landing page or to the original page (see the variant-assignment sketch after this list)
- Creating a culture of experimentation and a real-time A/B testing results platform, which is the most crucial part (they relied on Google Optimize 360 for a long time before that)
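As promised above, here is a minimal sketch of what weighted scoring of key factors could look like. The factor names and weights are entirely made up for illustration; Spotify's actual logic is not public:

```python
# Hypothetical factor weights -- illustrative only, not Spotify's real values.
WEIGHTS = {"listening_history": 0.4, "mood_match": 0.3,
           "cluster_popularity": 0.2, "recency": 0.1}

def playlist_score(factors: dict) -> float:
    """Combine per-factor scores (each in 0..1) into one ranking score."""
    return sum(WEIGHTS[name] * value for name, value in factors.items())

candidates = {
    "Song A": {"listening_history": 0.9, "mood_match": 0.7,
               "cluster_popularity": 0.4, "recency": 0.8},
    "Song B": {"listening_history": 0.3, "mood_match": 0.9,
               "cluster_popularity": 0.8, "recency": 0.5},
}
ranked = sorted(candidates, key=lambda s: playlist_score(candidates[s]), reverse=True)
print(ranked)  # ['Song A', 'Song B'] under these made-up weights
```

An A/B test can then compare two candidate weight vectors by shipping each to a different user group and watching the engagement metrics.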
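And here is a sketch of how an audiobook-style traffic split could be implemented: a deterministic hash gives each user a stable variant for the lifetime of the test, while different experiments stay independent of each other. This is a common industry pattern that I am assuming here, not something taken from Spotify's platform:

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("original", "custom")) -> str:
    """Deterministically bucket a user into one variant of an experiment."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# e.g. routing a German "audiobook" searcher to one of the two landing pages
print(assign_variant("user-42", "audiobook-landing-page"))
```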
Did I ever share the perfect method of conducting an A/B test experiment? Here it is:
- Define the problem, e.g., to increase conversion or to increase revenue
- Come up with a series of ideas around it, based on historical data, recent findings, and industry insights. This should be a collective, collaborative process in which everyone is heard and allowed to explain their logic. While prospecting, also estimate the expected % impact on the success metrics, and at what confidence, duration, and sample size. Do not hesitate to use an A/B testing calculator for this (a sample-size sketch follows this list).
- Finally, prioritise these ideas by their impact versus time, effort, and cost, and decide on the order of the tests
- While designing a test, state the hypothesis: by doing so-and-so, the impact shall increase. What is important to note when choosing these tests is that they must be provable by data results alone. Also, for tests with drastically different options, a short test can suffice; for closer options, the sample size, and hence the duration, has to be larger.
- So the hypothesis, the success metric, and the selection of options are the crucial aspects of the test
- Keep observing the performance of the key metrics (primary and secondary, at least) during the test; there is no need to take sudden decisions based on this, just observe the pattern
- Let the test complete and observe the results (a significance-test sketch follows this list). If the results are drastically different, you have a clear winner. If the results are negative, do not worry; try a different hypothesis and options. If the results are close, decide whether to increase the sample size, discard the test, or go ahead with the partial winner (the last is not recommended, but the call can be taken depending on experience, circumstances, and the influence of other tests).
- Keep doing the same for different cases
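For the sample-size point in the second step, here is what an A/B testing calculator does under the hood, sketched with the standard two-proportion approximation. The baseline and target rates are made-up inputs:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(p_base: float, p_target: float,
                            alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate users needed per variant to detect p_base -> p_target.

    alpha : two-sided significance level
    power : probability of detecting the lift if it is real
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p_base - p_target) ** 2)

# Closer options force longer tests: a 4% -> 5% lift needs far more
# traffic per variant than a 4% -> 8% one.
print(sample_size_per_variant(0.04, 0.05))  # ~6,743 per variant
print(sample_size_per_variant(0.04, 0.08))  # ~550 per variant
```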
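And for reading the results at the end, a minimal two-proportion z-test sketch; again, the counts are invented for illustration:

```python
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0: no difference
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

p = two_proportion_z_test(conv_a=400, n_a=10_000, conv_b=460, n_b=10_000)
print(f"p-value = {p:.3f}")  # ~0.037: significant at alpha = 0.05
```

A small p-value gives you the "clear winner" case above; a large one is the "close results" case where you either extend the test or move on to the next hypothesis.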