So, collaborative filtering cannot be used for cold starts because, by its very nature, it recommends each item (the products advertised on your site) based on user actions. A CF model surfaces more and more items with increasing accuracy as a user continues to act on the website or app over a period of time.
But at the starting line, i.e. the cold start, the user (or the product) is new, so there is no track record to fall back on.
In today’s world of commercial artificial intelligence (AI), the cold start is also where the solution can now include a degree of automated data modeling.
In their research paper, “Treating Cold Start in Product Search by Priors”, researchers Parth Gupta, Tommaso Dreossi, Jan Bakus, Yu-Hsiang Lin, and Vamsi Salaka explain the problem explicitly:
Learning to Rank (LTR) models rely on several features to rank documents for a given query. Many LTR features are based on users’ interactions with documents such as impressions, clicks, and purchases. We call these features behavioral features.
Ranking models are trained to optimize user engagement, and therefore, such behavioral features tend to be the most important training signals.
However, new and tail products that do not have user engagement lack behavioral features and hence are ranked as irrelevant, which in turn further excludes them from catching user engagement. It takes time for them to gather enough behavioral signals to show up at their fair ranking position.
This leads to the causality dilemma: No behavioral data causes poor ranking which in turn results in new products having a reduced likelihood of accruing behavioral data.
This phenomenon is referred to as a cold start problem and poses serious concerns, from bad customer experience to lost revenue opportunities.
Over the years, recommender systems have used different solutions to “kick start” their recommendations from a cold start. Early recommenders drew on methods and theories from other artificial intelligence (AI) fields for user profiling and preference discovery.
But of late, with the commercial deployment of AI, we have seen an increase in the success of AI-based applications.
Solutions to the Cold Start Problem in Recommender Systems
Developers have started to apply different AI techniques to recommender systems, as AI delivers more relevant recommendations than conventional practice.
This has ushered in a new generation of recommender systems that use a combination of data analytics and AI to produce advanced insights into the relationships between users and items.
In ML, supervised learning requires access to, and application of, labeled data about the past in order to label previously unseen data in the future.
A similar issue arises with unsupervised machine learning, where patterns are characterized without any assumption that labeled training data exists.
So, then, the obvious question: how does one go about assigning labels to future data when there are no labels (diagnoses, classes, known outcomes) in the past data?
Here are some of the solutions that combine algorithms and ML techniques to guide model development and progression, eventually leading towards optimization:
Content-based filtering: This seems to be among the favorite options used by ML developers in cold start cases. The product recommender can utilize metadata about the new product when creating recommendations.
What’s more, additional information about the user, such as data obtained from their social media networks (provided at the time of signing up with social logins), can also be utilized to tackle the initial information scarcity.
Here, filtering algorithms are given a user’s predisposition for specific items and so recommend similar items based on a domain-specific notion of item content.
There are many advantages to a content-based recommender system. To begin with, this type of recommendation is based on item representations, making it independent of other users’ behavior. Because of this, the issue of data sparsity does not arise.
Next, content-based recommender systems are able to suggest new items to users, which resolves the new product cold start problem.
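As an illustration, here is a minimal content-based sketch in Python that scores a brand-new product against the existing catalog purely from metadata (text descriptions). The product names and descriptions are hypothetical, and TF-IDF with cosine similarity is just one common choice of content representation.

```python
# A minimal content-based filtering sketch, assuming each product has a short
# text description as its metadata. Product names and descriptions are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

catalog = {
    "running_shoe_a": "lightweight mesh running shoe for road racing",
    "trail_shoe_b": "rugged trail running shoe with deep lugs",
    "leather_boot_c": "classic leather dress boot for formal wear",
}

new_item = "cushioned road running shoe for marathon training"  # cold-start item

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(list(catalog.values()) + [new_item])

# Similarity between the new item's description and every catalog item.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
ranked = sorted(zip(catalog.keys(), scores), key=lambda x: x[1], reverse=True)
print(ranked)  # the running shoes rank above the dress boot
```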
Popularity-based model: Another go-to option used frequently in cold start cases. Using Python to build the model, a new customer, at the very start of the customer journey, can be shown a list of trending (popular) products. And if there is a product on the list that almost all new customers buy, that, too, can be suggested to every new customer.
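For illustration, here is a minimal popularity-based sketch, assuming a simple purchase log; the products and counts are hypothetical.

```python
# A minimal popularity-based sketch: count purchases per product and show a
# brand-new customer the top-N sellers. The purchase log below is hypothetical.
from collections import Counter

purchase_log = [
    ("u1", "phone_case"), ("u2", "phone_case"), ("u3", "charger"),
    ("u4", "phone_case"), ("u5", "charger"), ("u6", "screen_guard"),
]

counts = Counter(product for _, product in purchase_log)

def trending(n=2):
    """Return the n most-purchased products to show a brand-new customer."""
    return [product for product, _ in counts.most_common(n)]

print(trending())  # e.g. ['phone_case', 'charger']
```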
Following this, each choice can be registered, along with contextual information such as location from the device’s GPS coordinates, the channel the visitor came from, the device used, and so on. After the first few clicks, this behavioral data helps the e-commerce site build a customer profile and build up from there.
A first-time product about to be listed has no previous “baggage”, i.e. no purchase history and so on. Until enough purchases or likes have gathered around this one product, eCommerce sites like Amazon, and even YouTube, continue to promote it frequently, in an almost in-your-face campaign.
What is also done simultaneously is to display similar products, using string similarity algorithms like Levenshtein distance or Hamming distance, until enough users have either bought the product or perhaps rated the new service. There you have it, then: data around a product that proves its popularity, or otherwise.
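As a rough illustration, here is a minimal Python sketch that ranks existing catalog titles by Levenshtein (edit) distance to a new product’s title; the titles are hypothetical, and a production system would of course use richer item features than title strings alone.

```python
# A minimal sketch: rank existing catalog titles by Levenshtein (edit) distance
# to a new product's title. The titles below are hypothetical examples.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between strings a and b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

new_title = "wireless earbuds pro 2"
catalog_titles = ["wireless earbuds pro", "wired headphones", "bluetooth speaker"]

# Smaller distance = more similar title; show the closest matches first.
for title in sorted(catalog_titles, key=lambda t: levenshtein(new_title, t)):
    print(title, levenshtein(new_title, title))
```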
One drawback this model does have is a lack of personalization, but then again, we are talking about tackling the cold start problem in recommender systems, not about an engine that has already warmed up and is running smoothly, figuratively speaking.
The multi-armed bandit model: The inspiration for this is drawn from a multi-lever casino slot machine. The gambler has the option of pulling not one but many levers, so he believes his chances of winning have gone up. (Incidentally, the single-lever slot machine is called the one-armed bandit.)
Each lever, though, has a different probability of paying out a reward. These probabilities are not known to the gambler (he is taking his chances) but are known to the casino, which increases its profits eventually, because the house never loses (hence the term “bandit”).
So, what relevance does this have to an algorithm trying to circumvent the cold start problem in recommender systems?
Scores of new items arrive daily on a website or app. These items are like the various slot machines, single- as well as multi-lever, and each comes with a different ROI. But because these are fresh items, the e-commerce site really does not know at that point in time how many users will buy them.
In a multi-armed slot machine, each arm represents a possible reward. So recommending a subset of the new products is like deciding to pull a subset of the arms. The trick for the site is to identify which levers to pull to eke out the maximum return.
There’s an inherent problem, though, in this kind of machine learning model. It’s referred to as the “exploration vs. exploitation” problem. In layman’s terms, if a particular new product is flying off the shelves, it is only natural for the seller to get greedy and show it to many more users (called exploitation).
But at the same time, the remaining new products may be languishing, and the seller also needs to show them to customers often enough, because they may turn out to be even more popular than the (popular) item the seller has already shown (exploration).
Clearly, a balance has to be struck between exploration and exploitation. There exist various bandit algorithms for making optimal recommendations for the user. The popular MAB algorithms include Epsilon Greedy, Decayed Epsilon Greedy, Upper Confidence Bound, Thompson Sampling, etc.
Each of these can be coded in Python. Eventually, the idea is to use any of these models so that the bandit algorithm will help the seller choose scientifically between exploiting the one item that gave it the highest reward and exploring the other products.
The seller has to learn to balance reward maximization based on the information already obtained against trying new actions to gather more information. In ML, this is called the exploitation vs. exploration tradeoff.
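By way of illustration, here is a minimal epsilon-greedy sketch, one of the simplest of the MAB strategies mentioned above. The products and their purchase probabilities are hypothetical, and the reward is simulated here; in production it would be an observed click or purchase.

```python
# A minimal epsilon-greedy bandit sketch for choosing which new product to show.
import random

# Hypothetical products and their (unknown-to-the-seller) purchase probabilities,
# used only to simulate rewards in this sketch.
true_purchase_prob = {"gadget_a": 0.12, "gadget_b": 0.05, "gadget_c": 0.09}
counts = {p: 0 for p in true_purchase_prob}     # times each product was shown
rewards = {p: 0.0 for p in true_purchase_prob}  # purchases observed so far
epsilon = 0.1                                   # fraction of traffic spent exploring

def estimated_rate(product):
    return rewards[product] / counts[product] if counts[product] else 0.0

def choose_product():
    """Explore a random product with probability epsilon, else exploit the best so far."""
    if random.random() < epsilon:
        return random.choice(list(true_purchase_prob))
    return max(true_purchase_prob, key=estimated_rate)

for _ in range(10_000):
    product = choose_product()
    reward = 1 if random.random() < true_purchase_prob[product] else 0  # simulated purchase
    counts[product] += 1
    rewards[product] += reward

print({p: round(estimated_rate(p), 3) for p in true_purchase_prob})  # learned estimates
```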
Deep Learning Approach: With the recent advent of deep learning, a sub-discipline of machine learning loosely inspired by the human brain, there are new attempts at resolving the cold start problem in recommender systems using this approach.
There are many models and research papers out there that suggest the use of deep neural networks as an option to mitigate the cold start problem in recommender systems. In a neural network, the initial weights on the network edges are assigned at random, and backpropagation is then used to iteratively adjust them toward the optimum.
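As a rough sketch (not any particular paper’s method), the snippet below trains a tiny neural scorer with backpropagation from random initial weights, using synthetic metadata features so that a brand-new item can still be scored.

```python
# A minimal neural-scorer sketch trained with backpropagation, assuming each
# (user, item) pair is described by a small metadata feature vector so that
# brand-new items can still be scored. All tensors here are synthetic.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_samples, n_features = 256, 8
X = torch.randn(n_samples, n_features)          # user + item metadata features
y = torch.rand(n_samples, 1)                    # engagement signal in [0, 1]

model = nn.Sequential(                          # weights start out random
    nn.Linear(n_features, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for epoch in range(200):                        # backprop nudges weights toward the optimum
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

cold_item = torch.randn(1, n_features)          # unseen item described only by metadata
print(model(cold_item).item())                  # predicted engagement score
```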
Meta-learning approaches have gained popularity in machine learning for learning representations useful for a wide range of tasks, according to recent research. In his paper, “Meta-Learning for User Cold-Start Recommendation”, Homanga Bharadwaj of the Department of Computer Science and Engineering, Indian Institute of Technology Kanpur, India, describes a recommendation framework trained to be “reasonably good enough” for a wide range of users.
It was inspired by the generalizable modeling prowess of model-agnostic meta-learning (MAML). At test time, to adapt to a specific user, the model parameters were updated with a few gradient steps, and the model was then evaluated on three different benchmark datasets.
Using detailed simulation studies, the paper shows that the framework handled the user cold-start problem much better than state-of-the-art benchmark recommender systems.
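To make the “adapt with a few gradient steps” idea concrete, here is a much-simplified, first-order sketch (closer to Reptile than to the full second-order MAML) on synthetic per-user regression tasks; it is not the paper’s implementation.

```python
# A simplified first-order meta-learning sketch: learn an initialization that
# adapts to a new user with a few gradient steps. All data here is synthetic.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

def sample_user_task():
    """Hypothetical per-user data: ratings as a random linear function of item features."""
    w = torch.randn(4, 1)
    X = torch.randn(32, 4)
    return X, X @ w

meta_model = nn.Linear(4, 1)
meta_lr, inner_lr, inner_steps = 0.1, 0.01, 5

for _ in range(500):                       # meta-training over many simulated users
    X, y = sample_user_task()
    fast = copy.deepcopy(meta_model)       # per-user copy to adapt
    opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
    for _ in range(inner_steps):           # a few gradient steps for this user
        opt.zero_grad()
        nn.functional.mse_loss(fast(X), y).backward()
        opt.step()
    with torch.no_grad():                  # move meta-parameters toward the adapted ones
        for p_meta, p_fast in zip(meta_model.parameters(), fast.parameters()):
            p_meta += meta_lr * (p_fast - p_meta)

# A new (cold-start) user: adapt with a few gradient steps on a little data, then evaluate.
X_new, y_new = sample_user_task()
adapted = copy.deepcopy(meta_model)
opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
for _ in range(inner_steps):
    opt.zero_grad()
    nn.functional.mse_loss(adapted(X_new[:16]), y_new[:16]).backward()
    opt.step()
print(nn.functional.mse_loss(adapted(X_new[16:]), y_new[16:]).item())  # held-out error
```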
Another research paper in the same field, by a team of researchers from the University of KwaZulu-Natal, Westville, South Africa, attempted a new approach to solving the cold start problem in recommender systems by using social networks and Matrix Factorization to enhance a deep learning approach.
The research team explained that the social information was used to form groups of users, since users within a given community were likely to share the same interests. For this, a community detection algorithm was used. Once the users were grouped into communities, a deep learning model was trained on each community.
The comparative models were then evaluated, and the metrics used were Mean Squared Error (MSE) and Mean Absolute Error (MAE). The evaluation was carried out using 5-fold cross-validation.
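For reference, here is a minimal sketch of what a 5-fold cross-validated evaluation reporting MSE and MAE looks like, on synthetic data with a stand-in regressor; it illustrates the metric setup only, not the paper’s models.

```python
# A minimal 5-fold cross-validation sketch reporting MSE and MAE on synthetic
# rating data, with a simple regressor standing in for any rating predictor.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error, mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                                   # user/item feature vectors
y = X @ rng.normal(size=10) + rng.normal(scale=0.3, size=500)    # synthetic ratings

mses, maes = [], []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = Ridge().fit(X[train_idx], y[train_idx])
    preds = model.predict(X[test_idx])
    mses.append(mean_squared_error(y[test_idx], preds))
    maes.append(mean_absolute_error(y[test_idx], preds))

print(f"MSE: {np.mean(mses):.3f}  MAE: {np.mean(maes):.3f}")     # averaged over the 5 folds
```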
The results showed that using social information improved on the results achieved by the deep learning approach alone, and that grouping users into communities was advantageous.
In conclusion: The cold start problem in recommender systems has been around for years, but with the advent of artificial intelligence coupled with data analytics, much progress has been made, and today there are quite a few solutions available to overcome it.