Rakuten, founded in Japan in 1997 and a pioneer of the marketplace concept, has become one of the largest e-commerce platforms worldwide. Alongside its global marketplaces, Rakuten maintains an ever-expanding portfolio of acquisitions and strategic investments in disruptive industries and growing markets, such as communications, financial services, and digital content, gathering more than one billion users in an international ecosystem.
Rakuten Institute of Technology (RIT) is the research and innovation department of Rakuten, with teams in Tokyo, Paris, Boston, San Mateo, Singapore, and Bengaluru. RIT conducts applied research in computer vision, natural language processing, machine/deep learning, and customer behaviour analysis.
This challenge focuses on predicting the colour attribute of products from large-scale multimodal (text and image) catalog data of the Rakuten Ichiba marketplace.
The catalog of product listings for any e-commerce marketplace consists of product information provided by merchants. Typically a merchant supplies the title, description, and image(s) of the product. Extracting product attributes is useful in several contexts, such as recommendations, search, and product discovery. Manual and rule-based approaches to attribute extraction do not scale to the sheer size of modern product catalogs. Multimodal approaches are a natural fit here, since the colour information can be predicted either from the image or from the text that the merchant has uploaded. Advances in this area of research have been limited by the lack of real data from actual commercial catalogs. The challenge presents several interesting research aspects due to the intrinsically noisy nature of the product labels and images, the size of modern e-commerce catalogs, and the typically unbalanced data distribution.
By express derogation from any preexisting or future contractual documents and/or terms and conditions pertaining to the Rakuten Data Challenge occurring on the occasion of the Challenge Data of ENS and Collège de France (“Rakuten Data Challenge”), the participant (“Participant”) agrees to the following conditions in connection with the study data (“Study Data”) uploaded by Rakuten, Inc., 1-14-1 Tamagawa, Setagaya-ku, Tokyo, Japan, (the “Provider”) on the occasion of the Rakuten Data Challenge.
The Participant shall
(i) use the Study Data for the sole purpose of the good performance of the Rakuten Data Challenge (the “Purpose”),
(ii) notwithstanding the above, not show or disclose the Study Data in the result presentations of the Rakuten Data Challenge,
(iii) not use, apply, reveal, report, publish, extract or otherwise disclose to any third party all or part of the Study Data in any circumstances for a purpose other than the Purpose.
As of the termination of the Rakuten Data Challenge, the Participant shall immediately cease any use of the Study Data unless otherwise agreed by the Provider. The present specific terms shall remain in full force and effect until the termination of the Purpose and for a period of two (2) years following the termination date of the Purpose.
For any questions about this challenge, please write to the following address:
The goal of this data challenge is to predict the "colour" of a product, given its image, title, and description. A product can be of multiple colours, making it a multi-label classification problem.
For example, the Rakuten Ichiba catalog contains a product with the Japanese title タイトリスト プレーヤーズ ローラートラベルカバー (Titleist Players Roller Travel Cover), an associated image, and sometimes an additional description. The colour of this product is annotated as Red and Black. Other products come with different titles, images, possible descriptions, and associated colour attribute tags. Given this information about the products, as in the example above, the challenge asks participants to build a multi-label classifier that assigns each product its corresponding colour attributes.
The metric used in this challenge to rank the participants is the weighted-F1 score.
The scikit-learn package provides an F1 score implementation (link) that can be used for this challenge with its average parameter set to "weighted".
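As a small illustration of this metric (with made-up label sets, not challenge data), the multi-label colour tags can be binarised and scored as follows:

```python
from sklearn.metrics import f1_score
from sklearn.preprocessing import MultiLabelBinarizer

# Toy ground-truth and predicted colour-tag sets for three items.
y_true = [{"Red", "Black"}, {"White"}, {"Blue"}]
y_pred = [{"Red"}, {"White"}, {"Blue", "Green"}]

# Convert the tag sets into binary indicator matrices before scoring.
mlb = MultiLabelBinarizer()
mlb.fit(y_true + y_pred)
score = f1_score(mlb.transform(y_true), mlb.transform(y_pred), average="weighted")
print(round(score, 3))  # → 0.75
```

With average="weighted", each class's F1 is weighted by its support in the ground truth, which accounts for the unbalanced tag distribution mentioned above.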
For this challenge, Rakuten is releasing approx. 250K item listings in CSV format, split into a train set (212,659 items) and a test set (37,528 items). The dataset consists of product titles, product descriptions, product images, and their corresponding colour attribute tags. There are 19 unique colour tags in the dataset.
The data are divided under two criteria, forming four distinct sets: training or test, input or output.
X_train.csv: training input file
Y_train.csv: training output file
X_test.csv: test input file
Additionally, an images.zip file is supplied containing all the images. Uncompressing this file produces a folder named images with all the item images.
The first line of each file contains the header, and the columns are separated by commas (',').
The columns of the input files (X_train.csv and X_test.csv) are:
image_file_name - The name of the image file in the images folder corresponding to the item.
item_name - The item title, a short text summarizing the item.
item_caption - A more detailed text describing the item. Not all merchants use this field, so to preserve the original data, the description field may contain NaN values for many products.
The training output file (y_train.csv) contains the color_tags, i.e. the target colour categories for the classification task, for each product in the training input file (X_train.csv). Here too, the first line of the file is the header. There is a one-to-one mapping between the lines of the training input and training output files.
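As a hedged sketch of loading these files (the miniature CSV contents below are invented, and the exact tag-string format in the real files may differ):

```python
import io
import pandas as pd

# Hypothetical two-line miniatures of X_train.csv and y_train.csv.
x_csv = "image_file_name,item_name,item_caption\nimg_001.jpg,Travel Cover,\n"
y_csv = 'color_tags\n"Red,Black"\n'

X = pd.read_csv(io.StringIO(x_csv))
y = pd.read_csv(io.StringIO(y_csv))
assert len(X) == len(y)  # input and output rows are aligned one-to-one

# item_caption is empty for this item, so pandas reads it as NaN.
print(X["item_caption"].isna().iloc[0])  # → True

# Split the (assumed) comma-separated tag string into a list of colours.
tags = y["color_tags"].str.split(",")
print(tags.iloc[0])  # → ['Red', 'Black']
```

In practice one would pass the real file paths to read_csv instead of in-memory strings.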
Here is an example of the output file corresponding to the above example of the input file:
For the test input file X_test.csv, participants must provide a test output file in the same format as the training output file. The first line of this test output file should contain the header color_tags, followed by the predicted colour tags, one line per item. Recall that each item may have multiple colour tags. There must be a one-to-one correspondence between the lines of the predicted test output file and the lines describing the items in the test input file (X_test.csv). A sample prediction file is also provided to show the expected format.
Here is an example of an expected prediction file:
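As an illustration (the predictions and output file name below are made up), such a file could be written with Python's csv module:

```python
import csv

# Invented multi-label predictions for three test items.
predictions = [["Red", "Black"], ["White"], ["Blue"]]

with open("y_test_pred.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["color_tags"])        # required header
    for tags in predictions:
        writer.writerow([",".join(tags)])  # one prediction line per test item
```

The csv writer quotes multi-tag entries such as "Red,Black" so they remain a single column.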
The benchmark model uses only the images. However, participants are encouraged to use both images and text when designing a classifier, since the two modalities contain complementary information.
For the image-based classifier, a Densely Connected Convolutional Network (DenseNet) model (reference) is used. A DenseNet121, pre-trained on ImageNet and taken from the PyTorch model hub, serves as the image feature extractor. For each image, the output of the global average pooling layer is fed as input features to a fully-connected layer with 19 outputs (one per colour category). The loss function is a per-label binary cross entropy loss.
During inference, the model computes a score between 0 and 1 for each category. The model is considered to predict a category if the output of the sigmoid for that category is greater than 0.5.
The following is the weighted-F1 score obtained on the images by the benchmark model described above:
Files are accessible once logged in and registered for the challenge.