Case Study: Recommender System for Fashion Retailer

This is another of our product recommendation projects, this time successfully implemented within a fashion retail business. Our previous project on product recommendation can be seen here. In this project, we applied one of the most popular recommender system algorithms, Collaborative Filtering, within an offline fashion retail business to address our client's problem of improving the effectiveness of its SMS marketing. Our client is a leading fashion and lifestyle brick-and-mortar store in Indonesia.

Introduction

Recommender systems have risen to prominence because of the growing demand for product recommendations tailored to specific customers' characteristics and needs. Many online stores have deployed recommender systems for personalised product recommendation in the hope of improving their user experience. However, the application of recommender systems should not be limited to the e-commerce environment; it can be extended to offline commerce as well. One such application in an offline setting is to increase the effectiveness of promotions and thereby raise revenue. We found three challenges in implementing recommender system algorithms in offline retail. The first was the lack of customer-product rating data. The second was insufficient product descriptions. And finally, some customers do not want to purchase a product they have never bought before; instead, they want the same products they already like. Our preliminary study revealed that customers were 1.89 times more likely to buy the same products again within a six-month period. This runs contrary to the key motivation behind most recommender system applications, namely that a customer is more likely to purchase products similar to those he or she has already bought.

We found the most suitable approach to build the recommender system was Item-Based Collaborative Filtering, since we lacked customer-product rating data. Such rating data has helped most e-commerce businesses assess their customers' preference for a given product and gain a complete picture of customer interest. Alternatively, we can rely on the historical transactions of each customer to infer their interests, assuming that what they purchased is what they liked. A missing transaction, however, still has two interpretations: either the customer genuinely dislikes the product, or he or she simply does not know about it, for example because of its availability. Following Michael Hahsler's suggestion in his paper, we treated the missing data as a reflection of negative feedback, i.e. that the customer dislikes the product.

Model Building

We converted the transaction data, treated as implicit feedback, into a customer-product feedback matrix in which every row represents a customer and every column represents a product. We had to deal with a sparsity problem arising from the many zero values in the customer-product matrix, which reflected the tendency of customers to buy only a small subset of products. Sparsity can also be aggravated by synonymy within the data: several apparently different products were actually the same product. This can happen due to input errors or an unclear product hierarchy within the business. As a result, the recommender system cannot detect these latent associations, which clearly leads to poor performance. We proposed a dimensionality reduction approach to mitigate the sparsity and synonymy problems by merging products that belonged to the same department and had a strong relationship, indicated by a high Jaccard similarity.
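As a concrete illustration, here is a minimal sketch in Python of this conversion, using pandas and toy data (the column names are our own for illustration, not the client's actual schema):

```python
import pandas as pd

# Toy transactions; column names are illustrative, not the client's schema.
transactions = pd.DataFrame({
    "customer_id": ["c1", "c1", "c2", "c3", "c3"],
    "product_id":  ["p1", "p2", "p2", "p1", "p3"],
})

# Binary customer-product feedback matrix:
# 1 = the customer bought the product at least once, 0 = no recorded purchase.
feedback = pd.crosstab(transactions["customer_id"],
                       transactions["product_id"]).clip(upper=1)
print(feedback)
```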

Another task was to accommodate the repeat purchase behaviour of the customers; the recommender system needed to account for this behaviour to yield optimal results. Our approach was to record repeat transactions of a given product up to three times. The cap avoids worsening the sparsity problem, because very few customers are willing to buy the same product four or more times within a given period. This encoding captures the correlation between purchasing product x for the first time and for the second time: the more customers who bought the product a second time, the higher the probability that a customer who has already purchased product x will buy it again in the next period. A minimal sketch of this encoding is shown below.
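The sketch assumes each product is expanded into separate first/second/third purchase columns; the exact production encoding may differ:

```python
import pandas as pd

# Toy repeat purchases (names illustrative).
transactions = pd.DataFrame({
    "customer_id": ["c1", "c1", "c1", "c1", "c2"],
    "product_id":  ["p1", "p1", "p1", "p1", "p1"],
})

# Count purchases per customer-product pair, capped at three.
counts = (transactions.groupby(["customer_id", "product_id"])
                      .size()
                      .clip(upper=3))

# Expand each product into up to three binary "nth purchase" columns,
# e.g. a customer who bought p1 twice gets p1_1 = 1 and p1_2 = 1.
rows = {}
for (customer, product), n in counts.items():
    for i in range(1, n + 1):
        rows.setdefault(customer, {})[f"{product}_{i}"] = 1

feedback = pd.DataFrame.from_dict(rows, orient="index").fillna(0).astype(int)
print(feedback)
```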

The model building process consisted of calculating a similarity matrix containing all product-to-product similarities under a chosen similarity measure. For implicit feedback, the most suitable measure is the Jaccard index, because the matrix contains only the value 1 for non-missing transactions. Under this condition, we could not benefit from the most popular similarity measures, such as Pearson correlation and cosine similarity.
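Concretely, the Jaccard index between products x and y is the number of customers who bought both, divided by the number of customers who bought at least one of them. A minimal NumPy sketch of the product-to-product similarity matrix:

```python
import numpy as np

def jaccard_similarity_matrix(feedback: np.ndarray) -> np.ndarray:
    """Product-to-product Jaccard similarities for a binary
    customer-product matrix (rows = customers, columns = products)."""
    f = (feedback > 0).astype(int)
    intersection = f.T @ f                 # customers who bought both i and j
    counts = f.sum(axis=0)                 # customers who bought each product
    union = counts[:, None] + counts[None, :] - intersection
    # Divide only where the union is non-empty; otherwise similarity is 0.
    return np.divide(intersection, union,
                     out=np.zeros_like(union, dtype=float),
                     where=union > 0)
```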

Once we had the similarity matrix, the algorithm identified the set of candidate recommended products by taking the union of the k most similar products. The k products most similar to product x can be seen as the neighbourhood of size k of that product. This improves the space and time complexity significantly, at the cost of sacrificing some recommendation quality. Lastly, to make product recommendations based on the model, we used the product similarities to calculate a score for each candidate product with respect to the active customer's purchase history. The candidates were sorted in decreasing order of score, and the first N products were selected as the Top-N recommended product set.
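A minimal sketch of this neighbourhood-restricted scoring step (the function and parameter names are ours; the production system may differ in detail):

```python
import numpy as np

def top_n(sim: np.ndarray, purchased: np.ndarray, k: int = 30, n: int = 10):
    """Return the Top-N product indices for one customer.

    sim       -- (P, P) product-to-product similarity matrix
    purchased -- length-P binary vector of the customer's purchases
    k         -- neighbourhood size kept per product
    n         -- number of recommendations returned
    """
    sim_k = sim.copy()
    np.fill_diagonal(sim_k, 0.0)
    # Keep only the k most similar neighbours of each product (column).
    for j in range(sim_k.shape[0]):
        weakest = np.argsort(sim_k[:, j])[:-k]
        sim_k[weakest, j] = 0.0
    scores = (purchased @ sim_k).astype(float)  # similarity to owned items
    scores[purchased > 0] = -np.inf             # never recommend owned products
    return np.argsort(scores)[::-1][:n]         # indices of the Top-N products
```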

We built the Item-Based Collaborative Filtering (IBCF) model using two years of the offline business's transactional data, covering around two million active customers and approximately three hundred different products. The data contained the customer ID, the purchased product, and the purchase date. Before moving to model building, the first step was data cleaning, especially to reduce data sparsity and synonymy. Initially, we had 316 different products. The sparsity is visible in the product-to-product Jaccard index shown in Figure 1: green indicates a strong correlation between two products, while red indicates no correlation, meaning very few customers purchased both. In Figure 1, red dominates heavily over green; this is the sparsity problem in the data. Dimensionality reduction on the product side helps overcome this obstacle; although it cannot remove all sparsity, it noticeably improves the performance of Collaborative Filtering. After applying dimensionality reduction using the Jaccard index, we reduced the product list to 260 items, a reduction of around 17.7% from the original list. Our criterion was simple: we inspected the whole Jaccard matrix to see which products clustered together as a consequence of high similarity, and grouped each such cluster into a single representative product. As a result, the green colour spreads more widely in Figure 2; in other words, we successfully reduced the sparsity of the data, although we could not remove it entirely.
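For illustration, this grouping step can be approximated as finding connected components of products whose pairwise Jaccard similarity exceeds a threshold. The threshold below is hypothetical: in practice we combined the Jaccard matrix with the product department hierarchy and manual inspection rather than a fixed cut-off.

```python
import numpy as np

def group_products(sim: np.ndarray, product_ids: list, threshold: float = 0.8):
    """Map each product to a representative of its high-similarity cluster.
    The 0.8 threshold is illustrative, not the value we actually used."""
    parent = list(range(len(product_ids)))

    def find(i: int) -> int:               # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(product_ids)):
        for j in range(i + 1, len(product_ids)):
            if sim[i, j] >= threshold:     # strong Jaccard similarity
                parent[find(i)] = find(j)  # merge the two clusters

    return {product_ids[i]: product_ids[find(i)]
            for i in range(len(product_ids))}
```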

Figure 1: Product-to-product Jaccard similarity matrix before dimensionality reduction.

Figure 2: Product-to-product Jaccard similarity matrix after dimensionality reduction.

Experimental Design

We evaluated the performance of the Top-N recommendation algorithm by its ability to increase the redeem rate of personalised promotions, compared with a control group that received promotions for the most popular products through random target selection. We made the predictions and tested them over six months, dividing the customers into two groups: one that received personalised promotions and one that randomly received promotions for the most popular products as a control. Owing to limitations of the marketing channel, the promotional content had to be more general, so we worked at the brand level instead of the SKU (Stock Keeping Unit) level. We measured the redeem rate as the number of customers who received a promotion based on the IBCF results and then went to the store within the given period to buy the offered product. We applied the same definition to the control group: the number of customers who randomly received the most popular product recommendations and then made a transaction.

Our finding was that the redeem rate of the personalised promotion was consistently about three times higher than that of the control group, from the first month through to the sixth. Every month we sent offers to, on average, 106,914 customers in each group. Out of the 641,484 customers reached per group over the six months, 9,738 customers in the IBCF group came to redeem, against only 3,277 customers in the random group, giving a redeem ratio between the IBCF and random groups of about 3:1. Table 4 gives the details of our experimental results. The variation in the number of customers in each group was due to the variation in transactions occurring each month. The results clearly show that a recommender system applied to offline commerce can greatly improve marketing returns: the redeem rate of the personalised promotion using IBCF was consistently three times higher than that of the baseline popular-product promotion.
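As a quick arithmetic check on these figures (assuming the 641,484 total refers to each group's combined six-month reach):

```python
# Six monthly sends of ~106,914 customers per group.
recipients_per_group = 106_914 * 6            # = 641,484

ibcf_rate   = 9_738 / recipients_per_group    # ~0.0152 (1.52%)
random_rate = 3_277 / recipients_per_group    # ~0.0051 (0.51%)

print(f"IBCF redeem rate:   {ibcf_rate:.2%}")
print(f"Random redeem rate: {random_rate:.2%}")
print(f"Uplift:             {ibcf_rate / random_rate:.2f}x")  # ~2.97x
```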

Figure 3

The Australasian Data Mining Conference

We had the opportunity to present this work at the Australasian Data Mining Conference (AusDM) in Melbourne in August 2017. The conference was part of the International Joint Conference on Artificial Intelligence (IJCAI), held in the same city. There were 18 research papers presented at the two-day event, ranging from Rank-Forest, a modification of Decision Forests, to Distributed Spatial Data Clustering; the research spanned both theoretical and applied areas of data mining. We received several questions about our project, in particular how we carried out the data cleansing and how we accurately measured the impact of the model. One interesting consideration for a future project is the seasonal pattern of customers' purchase behaviour. This may lead to the use of a Model-Based algorithm that includes a time dimension in the customer-product matrix, which could be future work for offline commerce recommender systems. At Stream, we keep developing methodologies to solve our clients' problems through robust statistical and machine learning algorithms.