
3 posts tagged with "recsys"


· 6 min read
Sparsh Agarwal

The usage and importance of recommender systems are increasing at a fast pace, and deep learning is gaining traction as the preferred choice for model architecture. Giants like Google and Facebook are already using recommenders to earn billions of dollars.

Recently, Facebook shared its approach to maintaining its 12-trillion-parameter recommender. Building these large systems is challenging because it requires huge computation and memory resources, and we will soon enter the 100-trillion range. SMEs will not be left behind either, thanks to the open-source ecosystem of software architectures and the decreasing cost of hardware, especially on cloud infrastructure.

As per one estimate, a model with 100 trillion parameters would require at least 200 TB just to store, even at 16-bit floating-point precision. So we need architectures that support efficient, distributed training of recommendation models.
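A quick back-of-envelope check of that figure:

```python
# Back-of-envelope: memory needed just to store a 100-trillion-parameter model.
params = 100e12          # 100 trillion parameters
bytes_per_param = 2      # fp16 = 16 bits = 2 bytes
total_bytes = params * bytes_per_param
print(f"{total_bytes / 1e12:.0f} TB")  # -> 200 TB
```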

Memory-intensive vs computation-intensive: The growth in parameters comes mostly from the embedding layer, which maps each instance of an ID-type feature (such as a user ID or a session ID) into a fixed-length, low-dimensional embedding vector. Given the billion-scale cardinality of ID-type features in a production recommender system, and the wide use of feature crosses, the embedding layer usually dominates the parameter space, which makes this component extremely memory-intensive. On the other hand, these low-dimensional embedding vectors are concatenated with diverse non-ID-type features (e.g., image, audio, video, social network) to feed a group of increasingly sophisticated neural networks (e.g., convolution, LSTM, multi-head attention) for prediction. Furthermore, in practice, multiple objectives can be combined and optimized simultaneously for multiple tasks. These mechanisms make the rest of the neural network increasingly computation-intensive.

An example of a recommender model with 100+ trillion parameters in the embedding layer and 50+ TFLOP of computation in the neural network.

Alibaba's XDL, Baidu's PaddleRec, and Kwai's Persia are some open-source frameworks for this large-scale distributed training of recommender systems.

Parameter Server Framework​

Existing distributed systems for deep learning based recommender models are usually built on top of the parameter server (PS) framework, where one can add elastic distributed storage to hold the increasingly large amount of parameters of the embedding layer. On the other hand, the computation workload does not scale linearly with the increasing parameter scale of the embedding layer—in fact, with an efficient implementation, a lookup operation over a larger embedding table would introduce almost no additional computations.

Left: deep learning based recommender model training workflow over a heterogeneous cluster. Right: Gantt charts comparing fully synchronous, fully asynchronous, raw hybrid, and optimized hybrid modes of distributed training of the deep learning recommender model. [Source](https://arxiv.org/pdf/2111.05897v1.pdf).

PERSIA​

PERSIA (Parallel rEcommendation tRaining System with hybrId Acceleration) is a PyTorch-based system for training deep learning recommendation models on commodity hardware. It supports models containing more than 100 trillion parameters.

It uses a hybrid training algorithm to tackle the embedding layer and the dense neural network modules differently: the embedding layer is trained asynchronously to improve the throughput of training samples, while the rest of the neural network is trained synchronously to preserve statistical efficiency.

It also uses a distributed system to manage the hybrid computation resources (CPUs and GPUs) to optimize the co-existence of asynchronicity and synchronicity in the training algorithm.


Persia includes a data loader module, an embedding PS (parameter server) module, a group of embedding workers over CPU nodes, and a group of NN workers over GPU instances. Each module can be scaled dynamically for different model scales and desired training throughput:

  • A data loader that fetches training data from distributed storage such as Hadoop, Kafka, etc.;
  • An embedding parameter server (embedding PS for short) that manages the storage and update of the embedding-layer parameters $\mathrm{w^{emb}}$;
  • A group of embedding workers that run Algorithm 1: getting embedding parameters from the embedding PS, (potentially) aggregating embedding vectors, and putting embedding gradients back to the embedding PS;
  • A group of NN workers that run the forward and backward propagation of the neural network $\mathrm{NN_{w^{nn}}(\cdot)}$.

The architecture of Persia.

Logically, Persia conducts the training procedure in a data-dispatching-based paradigm, as follows:

  1. The data loader dispatches the ID-type features $\mathrm{x_\xi^{ID}}$ to an embedding worker; the embedding worker generates a unique sample ID 𝜉 for this sample, buffers the ID-type features $\mathrm{x_\xi^{ID}}$ locally, and returns the ID 𝜉 to the data loader; the data loader then associates this sample's non-ID-type features and labels with this unique ID.
  2. Next, the data loader dispatches the non-ID-type features and label(s) $(\mathrm{x_\xi^{NID}}, \mathrm{y_\xi})$ to an NN worker.
  3. Once an NN worker receives this incomplete training sample, it issues a request to pull the ID-type features' $(\mathrm{x_\xi^{ID}})$ embeddings $\mathrm{w_\xi^{emb}}$ from an embedding worker according to the sample ID 𝜉; this triggers the forward propagation in Algorithm 1, where the embedding worker uses the buffered ID-type features $\mathrm{x_\xi^{ID}}$ to get the corresponding $\mathrm{w_\xi^{emb}}$ from the embedding PS.
  4. The embedding worker then performs any aggregation of the original embedding vectors. When this computation finishes, the aggregated embedding vector $\mathrm{w_\xi^{emb}}$ is transmitted to the NN worker that issued the pull request.
  5. Once the NN worker has a group of complete inputs for the dense module, it creates a mini-batch and conducts the training computation of the NN according to Algorithm 2. Note that the parameters of the NN always reside in the device RAM of the NN workers, which synchronize their gradients via the AllReduce paradigm.
  6. When the iteration of Algorithm 2 finishes, the NN worker sends the gradients of the embeddings ($\mathrm{F_\xi^{emb'}}$) back to the embedding worker (along with the sample ID 𝜉).
  7. The embedding worker queries the buffered ID-type features $\mathrm{x_\xi^{ID}}$ according to the sample ID 𝜉, computes the gradients $\mathrm{F_\xi^{emb'}}$ of the embedding parameters, and sends these gradients to the embedding PS, which finally computes the updates with its SGD optimizer and updates the embedding parameters.
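To make the division of labor concrete, below is a highly simplified, single-process sketch of the hybrid idea: asynchronous pulls/pushes against an embedding PS, and synchronous (AllReduce-style) updates for the dense network. All names are illustrative; this is not Persia's actual API.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch only. In Persia, the embedding PS lives on CPU nodes
# and is read/updated asynchronously (no global barrier), while dense-NN
# gradients are averaged across NN workers with AllReduce.
embedding_ps = {}  # id -> embedding vector
EMB_DIM = 16

def pull_embeddings(ids):
    # Embedding worker: fetch (possibly stale) vectors from the PS.
    for i in ids:
        if i not in embedding_ps:
            embedding_ps[i] = torch.zeros(EMB_DIM, requires_grad=True)
    return torch.stack([embedding_ps[i] for i in ids])

def push_gradients(ids, lr=0.01):
    # Embedding worker: ship gradients back; the PS applies an SGD update.
    with torch.no_grad():
        for i in ids:
            if embedding_ps[i].grad is not None:
                embedding_ps[i] -= lr * embedding_ps[i].grad
                embedding_ps[i].grad = None

# NN worker: one synchronous step over a mini-batch of "complete" samples.
nn_model = torch.nn.Linear(EMB_DIM + 4, 1)  # dense part; 4 non-ID features
opt = torch.optim.SGD(nn_model.parameters(), lr=0.01)

ids = [3, 7, 42, 7]
emb = pull_embeddings(ids)                    # asynchronous pull
non_id = torch.randn(len(ids), 4)
labels = torch.randint(0, 2, (len(ids),)).float()

logits = nn_model(torch.cat([emb, non_id], dim=1)).squeeze(-1)
F.binary_cross_entropy_with_logits(logits, labels).backward()
opt.step(); opt.zero_grad()                   # AllReduce would happen here
push_gradients(ids)                           # asynchronous push
```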

· 3 min read
Sparsh Agarwal

Matching micro-videos with suitable background music can help uploaders better convey their content and emotions, and can increase the click-through rate of their uploaded videos. However, manually selecting background music is a painstaking task due to the voluminous and ever-growing pool of candidate music. Automatically recommending background music to videos therefore becomes an important task.

In this paper, Zhu et al. share their approach to this task. They first collected ~3,000 background music clips from popular TikTok videos, along with ~150,000 video clips that used some kind of background music. They named this dataset TT-150K.

An exemplar subset of videos and their matched background music in the established TT-150k dataset.

After building the dataset, they worked on modeling and proposed the following architecture:

Proposed CMVAE (Cross-modal Variational Auto-encoder) framework.

The goal is to represent videos (users, in recsys terminology) and music (items) in a shared latent space. To achieve this, CMVAE uses pre-trained models to extract features from the unstructured data: VGGish for audio2vec, ResNet for video2vec, and bert-multilingual for text2vec. The text and video vectors are then fused using a product-of-experts approach.

It uses the reconstruction power of variational autoencoders to 1) reconstruct the video from the music latent vector and 2) reconstruct the music from the video latent vector. In layman's terms, we train a neural network that tries to guess the video content just by listening to the background music, and to guess the background music just by watching the video.

The joint training objective is $\mathcal{L}(z_m, z_v) = \beta \cdot \mathcal{L}_{cross\_recon} - \mathcal{L}_{KL} + \gamma \cdot \mathcal{L}_{matching}$, where $\beta$ and $\gamma$ control the weight of the cross-reconstruction loss and the matching loss, respectively.
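For intuition, here is a hedged PyTorch sketch of such a joint objective, written as a loss to minimize (the negation of the objective above); the concrete reconstruction and matching terms are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

# Hedged sketch: the objective above, rewritten as a loss to minimize.
# The MSE-based reconstruction and matching terms are illustrative
# assumptions, not the paper's actual code.
def cmvae_loss(music_x, video_x,              # input feature vectors
               music_from_zv, video_from_zm,  # cross-modal reconstructions
               z_m, z_v, mu, logvar,          # latents and posterior params
               beta=1.0, gamma=1.0):
    # Cross reconstruction: decode music from the video latent, and vice versa.
    l_cross = (F.mse_loss(music_from_zv, music_x) +
               F.mse_loss(video_from_zm, video_x))
    # KL divergence between the approximate posterior N(mu, sigma) and N(0, I).
    l_kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # Matching: pull latents of matched (video, music) pairs together.
    l_match = F.mse_loss(z_v, z_m)
    return beta * l_cross + l_kl + gamma * l_match
```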

After training the model, they compared its performance against existing baselines.

Conclusion: I don't make short videos myself, but I can easily imagine the difficulty of finding the right background music. If I had to do this task manually, I would try out 5-6 clips and select the one I like best; in doing so, I would be assuming that my audience likes it too. Moreover, the feedback is not actionable, because background music works at an implicit, subconscious level: when I watch a video, I mostly judge it as a whole and rarely notice that the background music is the problem. So this kind of recommender system would definitely help creators select better background music. Excited to see this feature soon in TikTok, YouTube Shorts, and similar services.

· 12 min read
Sparsh Agarwal


Recombee - Recommendation as a service API​

Recombee is a Recommender as a Service with easy integration and an Admin UI. It can be used in many domains, for example media (VoD, news), e-commerce, job boards, aggregators, or classifieds. Basically, it can be used in any domain with a catalog of items that users can interact with. The users can interact with items in many ways: viewing, rating, bookmarking, purchasing, etc. Both items and users can have various properties (metadata) that are also used by the recommendation models.
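For a flavor of the integration, here is a minimal sketch assuming the official recombee-api-client Python package; the database ID, token, and user/item IDs are placeholders.

```python
from recombee_api_client.api_client import RecombeeClient
from recombee_api_client.api_requests import AddDetailView, RecommendItemsToUser

# Placeholders: use your own database ID and private token.
client = RecombeeClient("my-database-id", "my-secret-token")

# Log an interaction: user 'user-42' viewed item 'item-7'.
client.send(AddDetailView("user-42", "item-7"))

# Ask for 5 recommendations for that user.
response = client.send(RecommendItemsToUser("user-42", 5))
print(response["recomms"])
```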


Here is the official tutorial series to get started.

Amazon Personalize - Self-service Platform to build and serve recommenders​

Amazon Personalize is a fully managed machine learning service that goes beyond rigid, static, rule-based recommendation systems: it trains, tunes, and deploys custom ML models to deliver highly customized recommendations to customers across industries such as retail, media, and entertainment.


It covers 6 use-cases:

Popular Use-cases
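For a flavor of the API, here is a minimal boto3 sketch that fetches real-time recommendations from an already-deployed campaign; the campaign ARN, region, and IDs are placeholders.

```python
import boto3

# Placeholder ARN: you get a real one after training a solution and
# creating a campaign in Amazon Personalize.
runtime = boto3.client("personalize-runtime", region_name="us-east-1")
response = runtime.get_recommendations(
    campaignArn="arn:aws:personalize:us-east-1:123456789012:campaign/demo",
    userId="user-42",
    numResults=10,
)
for item in response["itemList"]:
    print(item["itemId"])
```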

Following are the hands-on tutorials:

  1. Data Science on AWS Workshop - Personalize Recommendations
  2. https://aws.amazon.com/blogs/machine-learning/creating-a-recommendation-engine-using-amazon-personalize/
  3. https://aws.amazon.com/blogs/machine-learning/omnichannel-personalization-with-amazon-personalize/
  4. https://aws.amazon.com/blogs/machine-learning/using-a-b-testing-to-measure-the-efficacy-of-recommendations-generated-by-amazon-personalize/

Also checkout these resources:

  1. https://www.youtube.com/playlist?list=PLN7ADELDRRhiQB9QkFiZolioeJZb3wqPE

Azure Personalizer - An API-based service with reinforcement learning capability​

Azure Personalizer is a cloud-based API service that helps developers create rich, personalized experiences for each user of an app. It learns from customers' real-time behavior and uses reinforcement learning to select the best item (action) based on collective behavior and reward scores across all users. Actions are the content items, such as news articles, specific movies, or products. It takes a list of items (e.g., drop-down choices) and their context (e.g., report name, user name, time zone) as input and returns the ranked list of items for the given context. It also allows feedback on the relevance and efficiency of the ranking results returned by the service; the feedback (reward score) can be calculated automatically and submitted to the service based on the given personalization use case.


You can use the Personalizer service to determine what product to suggest to shoppers or to figure out the optimal position for an advertisement. After the content is shown to the user, your application monitors the user's reaction and reports a reward score back to the Personalizer service. This ensures continuous improvement of the machine learning model, and Personalizer's ability to select the best content item based on the contextual information it receives.
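A minimal sketch of that rank-and-reward loop, assuming the v1.0 REST endpoints; the resource endpoint, key, event ID, and all features below are placeholders.

```python
import requests

# Placeholders: your Personalizer resource endpoint and subscription key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>"}

# Rank: ask Personalizer to pick the best action for this context.
rank = requests.post(f"{ENDPOINT}/personalizer/v1.0/rank", headers=HEADERS, json={
    "eventId": "event-001",
    "contextFeatures": [{"timeOfDay": "morning"}, {"device": "mobile"}],
    "actions": [
        {"id": "news-article", "features": [{"topic": "world"}]},
        {"id": "sports-article", "features": [{"topic": "sports"}]},
    ],
}).json()
print(rank["rewardActionId"])  # the action Personalizer chose

# Reward: after observing the user's reaction, send back a score in [0, 1].
requests.post(f"{ENDPOINT}/personalizer/v1.0/events/event-001/reward",
              headers=HEADERS, json={"value": 1.0})
```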

Following are some of the interesting use cases of Azure Personalizer:

  1. Blog Recommender [Video tutorial, GitHub]
  2. Food Personalizer [Video tutorial, Slideshare, Code Blog]
  3. Coffee Personalizer [GitHub, Video tutorial]
  4. News Recommendation
  5. Movie Recommendation
  6. Product Recommendation
  7. Intent clarification & disambiguation: help your users have a better experience when their intent is not clear by providing an option that is personalized.
  8. Default suggestions for menus & options: have the bot suggest the most likely item in a personalized way as a first step, instead of presenting an impersonal menu or list of alternatives.
  9. Bot traits & tone: for bots that can vary tone, verbosity, and writing style, consider varying these traits.
  10. Notification & alert content: decide what text to use for alerts in order to engage users more.
  11. Notification & alert timing: have personalized learning of when to send notifications to users to engage them more.
  12. Dropdown Options - Different users of an application with manager privileges would see a list of reports that they can run. Before Personalizer was implemented, the list of dozens of reports was displayed in alphabetical order, requiring most of the managers to scroll through the lengthy list to find the report they needed. This created a poor user experience for daily users of the reporting system, making for a good use case for Personalizer. The tooling learned from the user behavior and began to rank frequently run reports on the top of the dropdown list. Frequently run reports would be different for different users, and would change over time for each manager as they get assigned to different projects. This is exactly the situation where Personalizer’s reward score-based learning models come into play.
  13. Projects in Timesheet - Every employee in the company logs a daily timesheet listing all of the projects the user is assigned to, along with other line items such as overhead. Depending on the employee's project allocations, his or her timesheet table could list from a few to a couple dozen active projects. Even though an employee may be assigned to several projects, particularly at the lead and manager levels, they don't log time against more than 2 to 3 projects over a span of weeks to months.

Google Recommendation - Recommender Service from Google​

Google Cloud offers Recommendations AI, a fully managed service that uses Google's deep learning models to deliver personalized recommendations from your product catalog and user events.

Abacus.ai - Self-service platform at a cheaper price​

It uses multi-objective, real-time recommendation models and provides four use cases for a fast-track train-and-deploy process: personalized recommendations, personalized search, related items, and real-time feed recommendations.


Here is the hands-on video tutorial:

https://youtu.be/7hTKL73f2yA

Nvidia Merlin - Toolkit with GPU capabilities​

Merlin empowers data scientists, machine learning engineers, and researchers to build high-performing recommenders at scale. Merlin includes tools that democratize building deep learning recommenders by addressing common ETL, training, and inference challenges. Each stage of the Merlin pipeline is optimized to support hundreds of terabytes of data, all accessible through easy-to-use APIs. With Merlin, better predictions than traditional methods and increased click-through rates are within reach.

End-to-end recommender system architecture. FE: feature engineering; PP: preprocessing; ETL: extract-transform-load.


TFRS - Open-source recommender library built on top of TensorFlow​

Built with TensorFlow 2.x, TFRS makes it possible to build and evaluate flexible candidate retrieval and ranking models, freely incorporate item, user, and context information, and train multi-task models that jointly optimize multiple recommendation objectives.

Following is a series of official tutorial notebooks:

TensorFlow Recommenders: Quickstart
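In the spirit of that quickstart, here is a condensed two-tower retrieval sketch; the toy vocabularies and data are placeholders.

```python
import tensorflow as tf
import tensorflow_recommenders as tfrs

# Toy vocabularies standing in for real user/item catalogs.
user_ids = ["u1", "u2", "u3"]
item_ids = ["i1", "i2", "i3", "i4"]

user_model = tf.keras.Sequential([
    tf.keras.layers.StringLookup(vocabulary=user_ids),
    tf.keras.layers.Embedding(len(user_ids) + 1, 32),
])
item_model = tf.keras.Sequential([
    tf.keras.layers.StringLookup(vocabulary=item_ids),
    tf.keras.layers.Embedding(len(item_ids) + 1, 32),
])

# Retrieval task with factorized top-K metrics over the candidate set.
items = tf.data.Dataset.from_tensor_slices(item_ids)
task = tfrs.tasks.Retrieval(
    metrics=tfrs.metrics.FactorizedTopK(items.batch(4).map(item_model)))

class TwoTower(tfrs.Model):
    def __init__(self):
        super().__init__()
        self.user_model, self.item_model, self.task = user_model, item_model, task

    def compute_loss(self, features, training=False):
        return self.task(self.user_model(features["user_id"]),
                         self.item_model(features["item_id"]))

model = TwoTower()
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.5))
ds = tf.data.Dataset.from_tensor_slices(
    {"user_id": ["u1", "u2", "u3"], "item_id": ["i1", "i2", "i4"]}).batch(3)
model.fit(ds, epochs=1)
```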

Elliot - An end-to-end framework good for recommender system experiments​

Elliot is a comprehensive recommendation framework that aims to run and reproduce an entire experimental pipeline from a single configuration file. The framework loads, filters, and splits the data using a vast set of strategies (13 splitting methods and 8 filtering approaches, from temporal training-test splitting to nested K-fold cross-validation). Elliot optimizes hyperparameters (51 strategies) for several recommendation algorithms (50), selects the best models, compares them with baselines providing intra-model statistics, computes metrics (36) spanning from accuracy to beyond-accuracy, bias, and fairness, and conducts statistical analysis (Wilcoxon and paired t-test). The aim is to provide researchers with a tool to ease, and make reproducible, every experimental evaluation phase, from data reading to results collection.
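Running an experiment is then a one-liner around that config file; the path below is a placeholder for your own YAML.

```python
from elliot.run import run_experiment

# Elliot drives the whole pipeline (splitting, tuning, evaluation, stats)
# from the YAML config; this path is a placeholder.
run_experiment("config_files/sample_experiment.yml")
```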


RecBole - Another framework good for recommender system model experiments​

RecBole is developed with Python and PyTorch for reproducing and developing recommendation algorithms in a unified, comprehensive, and efficient framework for research purposes. It can be installed from pip, Conda, or source, and is easy to use. It includes 65 recommendation algorithms covering four major categories: general recommendation, sequential recommendation, context-aware recommendation, and knowledge-based recommendation, which can support basic research in recommender systems.
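The quick-start entry point is a single call; the model and dataset names below follow RecBole's documented quick start.

```python
from recbole.quick_start import run_recbole

# One-call quick start: trains and evaluates BPR on MovieLens-100k with
# RecBole's default configuration (data is downloaded and processed
# automatically on first use).
run_recbole(model="BPR", dataset="ml-100k")
```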


Features:

  • General and extensible data structure: we design general and extensible data structures to unify the formatting and usage of various recommendation datasets.
  • Comprehensive benchmark models and datasets: we implement 65 commonly used recommendation algorithms and provide formatted copies of 28 recommendation datasets.
  • Efficient GPU-accelerated execution: we design many tailored strategies in the GPU environment to enhance the efficiency of our library.
  • Extensive and standard evaluation protocols: we support a series of commonly used evaluation protocols and settings for testing and comparing recommendation algorithms.

Microsoft Recommenders - Best-practices repository with utilities and examples​

The Microsoft Recommenders repository is an open-source collection of Python utilities and Jupyter notebooks to help accelerate the process of designing, evaluating, and deploying recommender systems. The repository was initially formed by data scientists at Microsoft to consolidate common tools and best practices developed from working on recommender systems in various industry settings. The goal of the tools and notebooks is to show examples of how to effectively build, compare, and then deploy the best recommender solution for a given scenario. Contributions from the community have brought in new algorithm implementations and code examples covering multiple aspects of working with recommendation algorithms.


Surprise - An open-source library with an easy API and powerful models​

Surprise is a Python scikit for building and analyzing recommender systems that deal with explicit rating data.

Surprise was designed with the following purposes in mind: giving users perfect control over their experiments, alleviating the pain of dataset handling, providing a range of ready-to-use prediction algorithms and similarity measures, and making it easy to evaluate, analyze, and compare algorithm performance.
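The canonical getting-started flow reflects those goals; this snippet follows Surprise's documented quick start (the MovieLens-100k dataset is downloaded on first use).

```python
from surprise import SVD, Dataset
from surprise.model_selection import cross_validate

# SVD on the built-in MovieLens-100k ratings, evaluated with
# 5-fold cross-validation on RMSE and MAE.
data = Dataset.load_builtin("ml-100k")
algo = SVD()
cross_validate(algo, data, measures=["RMSE", "MAE"], cv=5, verbose=True)
```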

Spotlight - Another open-source library​

Spotlight uses PyTorch to build both deep and shallow recommender models. By providing a slew of building blocks for loss functions (various pointwise and pairwise ranking losses), representations (shallow factorization representations, deep sequence models), and utilities for fetching (or generating) recommendation datasets, it aims to be a tool for rapid exploration and prototyping of new recommender models.
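A minimal sketch in the spirit of Spotlight's README, training an implicit-feedback factorization model with a BPR ranking loss:

```python
from spotlight.datasets.movielens import get_movielens_dataset
from spotlight.factorization.implicit import ImplicitFactorizationModel

# MovieLens-100k interactions, treated as implicit feedback.
dataset = get_movielens_dataset(variant="100K")
model = ImplicitFactorizationModel(n_iter=3, loss="bpr")
model.fit(dataset)
scores = model.predict(user_ids=1)  # predicted scores over all items for user 1
```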


Here is a series of hands-on tutorials to get started.

Vowpal Wabbit - library with reinforcement learning features​

Vowpal Wabbit is an open-source machine learning library, extensively used in industry, and was the first public terascale learning system. It provides fast, scalable machine learning with unique capabilities such as learning to search, active learning, contextual memory, and extreme multiclass learning. It has a focus on reinforcement learning and provides production-ready implementations of contextual bandit algorithms. It was developed originally at Yahoo! Research and continues to evolve at Microsoft Research, where it serves as a research-to-production vehicle.


For most applications, collaborative filtering yields satisfactory results for item recommendations; however, several issues arise that can make it difficult to scale up a recommender system:

  • The number of features can grow quite large, and given the usual sparsity of consumption datasets, collaborative filtering needs every single feature and data point available.
  • For new data points, the whole model has to be retrained.

Vowpal Wabbit's matrix factorization capabilities can be used to build a recommender that is similar in spirit to collaborative filtering but avoids the pitfalls mentioned above.
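As a hedged sketch of that idea, assuming the vowpalwabbit 9.x Python bindings: the `--lrq` ("low-rank quadratic") option crosses the user and item namespaces with a low-rank factorization, which is matrix-factorization-like in spirit. The toy data and hyperparameters are illustrative.

```python
import vowpalwabbit

# --lrq ui8 crosses the (u)ser and (i)tem namespaces with rank 8.
vw = vowpalwabbit.Workspace("--lrq ui8 --quiet -l 0.05")

# Toy rating-style examples in VW text format: "label |namespace features".
interactions = [
    "5 |u user_1 |i item_3",
    "1 |u user_1 |i item_9",
    "4 |u user_2 |i item_3",
]
for _ in range(10):          # a few passes over the toy data
    for ex in interactions:
        vw.learn(ex)

print(vw.predict("|u user_1 |i item_3"))  # predicted rating-like score
```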

Following are three introductory hands-on tutorials on building recommender systems with Vowpal Wabbit:

  1. Vowpal Wabbit Deep Dive - A Content-based Recommender System using Microsoft Recommender Library
  2. Simulating Content Personalization with Contextual Bandits
  3. Vowpal Wabbit, The Magic Recommender System!

DLRM - An open-source scalable model from Facebook's AI team, built on top of PyTorch​

DLRM advances on other models by combining principles from both collaborative filtering and predictive analytics-based approaches, which enables it to work efficiently with production-scale data and provide state-of-art results.

In the DLRM model, categorical features are processed using embeddings, while continuous features are processed with a bottom multilayer perceptron (MLP). Then, second-order interactions of different features are computed explicitly. Finally, the results are processed with a top MLP and fed into a sigmoid function in order to give a probability of a click.
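Schematically (not Facebook's actual implementation), the forward pass looks like this; all sizes are toy values.

```python
import torch
import torch.nn as nn

# Schematic DLRM-style model: embeddings for categorical features, a bottom
# MLP for dense features, explicit pairwise dot-product interactions, then
# a top MLP + sigmoid producing a click probability.
class TinyDLRM(nn.Module):
    def __init__(self, cardinalities, num_dense, dim=16):
        super().__init__()
        self.embs = nn.ModuleList(nn.Embedding(c, dim) for c in cardinalities)
        self.bottom = nn.Sequential(nn.Linear(num_dense, dim), nn.ReLU())
        n = len(cardinalities) + 1          # feature vectors entering interaction
        self.top = nn.Linear(n * (n - 1) // 2 + dim, 1)

    def forward(self, cat_feats, dense_feats):
        vecs = [emb(cat_feats[:, j]) for j, emb in enumerate(self.embs)]
        vecs.append(self.bottom(dense_feats))
        x = torch.stack(vecs, dim=1)                  # (B, n, dim)
        inter = torch.bmm(x, x.transpose(1, 2))       # pairwise dot products
        iu, ju = torch.triu_indices(x.size(1), x.size(1), offset=1)
        inter = inter[:, iu, ju]                      # keep upper triangle
        z = torch.cat([vecs[-1], inter], dim=1)
        return torch.sigmoid(self.top(z)).squeeze(-1)  # click probability

model = TinyDLRM(cardinalities=[100, 50], num_dense=4)
p = model(torch.randint(0, 50, (8, 2)), torch.randn(8, 4))
```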


Following are the hands-on tutorials:

  1. https://nbviewer.jupyter.org/github/gotorehanahmad/Recommendation-Systems/blob/master/dlrm/dlrm_main.ipynb
  2. Training Facebook's DLRM on the digix dataset

References​

  1. https://elliot.readthedocs.io/en/latest/
  2. https://vowpalwabbit.org/index.html
  3. https://abacus.ai/user_eng
  4. https://azure.microsoft.com/en-in/services/cognitive-services/personalizer/
  5. https://aws.amazon.com/personalize/
  6. https://github.com/facebookresearch/dlrm
  7. https://www.tensorflow.org/recommenders
  8. https://magento.com/products/product-recommendations
  9. https://cloud.google.com/recommendations
  10. https://www.recombee.com/
  11. https://recbole.io/
  12. https://github.com/microsoft/recommenders
  13. http://surpriselib.com/
  14. https://github.com/maciejkula/spotlight
  15. https://vowpalwabbit.org/tutorials/contextual_bandits.html
  16. https://github.com/VowpalWabbit/vowpal_wabbit/wiki
  17. https://vowpalwabbit.org/tutorials/cb_simulation.html
  18. https://vowpalwabbit.org/rlos/2021/projects.html
  19. https://vowpalwabbit.org/rlos/2020/projects.html
  20. https://getstream.io/blog/recommendations-activity-streams-vowpal-wabbit/
  21. https://samuel-guedj.medium.com/vowpal-wabbit-the-magic-58b7f1d8e39c
  22. https://vowpalwabbit.org/neurips2019/
  23. https://github.com/VowpalWabbit/neurips2019
  24. https://getstream.io/blog/introduction-contextual-bandits/
  25. https://www.youtube.com/watch?v=CeOcNK1xSSA&t=72s
  26. https://vowpalwabbit.org/blog/rlos-fest-2021.html
  27. https://github.com/VowpalWabbit/workshop
  28. https://github.com/VowpalWabbit/workshop/tree/master/aiNextCon2019
  29. Blog post by Nasir Mirza. Azure Cognitive Services Personalizer: Part One. Oct, 2019.
  30. Blog post by Nasir Mirza. Azure Cognitive Services Personalizer: Part Two. Oct, 2019.
  31. Blog post by Nasir Mirza. Azure Cognitive Services Personalizer: Part Three. Dec, 2019.
  32. Microsoft Azure Personalizer Official Documentation. Oct, 2020.
  33. Personalizer demo.
  34. Official Page.
  35. Blog Post by Jake Wong. Get hands on with the Azure Personalizer API. Aug, 2019.
  36. Medium Post.
  37. Blog Post.
  38. Git Repo.
  39. https://youtu.be/7hTKL73f2yA
  40. Deep-Learning Based Recommendation Systems — Learning AI
  41. Evaluating Deep Learning Models with Abacus.AI – Recommendation Systems
  42. https://aws.amazon.com/blogs/machine-learning/pioneering-personalized-user-experiences-at-stockx-with-amazon-personalize/
  43. https://aws.amazon.com/blogs/machine-learning/category/artificial-intelligence/amazon-personalize/
  44. https://d1.awsstatic.com/events/reinvent/2019/REPEAT_1_Build_a_content-recommendation_engine_with_Amazon_Personalize_AIM304-R1.pdf
  45. https://aws.amazon.com/blogs/aws/amazon-personalize-real-time-personalization-and-recommendation-for-everyone/
  46. https://d1.awsstatic.com/events/reinvent/2019/REPEAT_1_Accelerate_experimentation_with_personalization_models_AIM424-R1.pdf
  47. https://d1.awsstatic.com/events/reinvent/2019/REPEAT_1_Personalized_user_engagement_with_machine_learning_AIM346-R1.pdf
  48. https://github.com/aws-samples/amazon-personalize-samples
  49. https://github.com/aws-samples/amazon-personalize-automated-retraining
  50. https://github.com/aws-samples/amazon-personalize-ingestion-pipeline
  51. https://github.com/aws-samples/amazon-personalize-monitor
  52. https://github.com/aws-samples/amazon-personalize-data-conversion-pipeline
  53. https://github.com/james-jory/segment-personalize-workshop
  54. https://github.com/aws-samples/amazon-personalize-samples/tree/master/next_steps/workshops/POC_in_a_box
  55. https://github.com/Imagination-Media/aws-personalize-magento2
  56. https://github.com/awslabs/amazon-personalize-optimizer-using-amazon-pinpoint-events
  57. https://github.com/aws-samples/amazon-personalize-with-aws-glue-sample-dataset
  58. https://github.com/awsdocs/amazon-personalize-developer-guide
  59. https://github.com/chrisking/NetflixPersonalize
  60. https://github.com/aws-samples/retail-demo-store
  61. https://github.com/aws-samples/personalize-data-science-sdk-workflow
  62. https://github.com/apac-ml-tfc/personalize-poc
  63. https://github.com/dalacan/personalize-batch-recommendations
  64. https://github.com/harunobukameda/Amazon-Personalize-Handson
  65. https://www.sagemakerworkshop.com/personalize/
  66. https://github.com/lmorri/vodpocinabox
  67. https://github.com/awslabs/unicornflix
  68. https://www.youtube.com/watch?v=r9J3UZmddC4&t=966s
  69. https://www.youtube.com/watch?v=kTufCK76Yus&t=1436s
  70. https://www.youtube.com/watch?v=hY_XzglTkak&t=66s
  71. https://business.adobe.com/lv/summit/2020/adobe-sensei-powers-magento-product-recommendations.html
  72. https://magento.com/products/product-recommendations
  73. https://docs.magento.com/user-guide/marketing/product-recommendations.html
  74. https://vod.webqem.com/detail/videos/magento-commerce/video/6195503645001/magento-commerce---product-recommendations?autoStart=true&page=1
  75. https://blog.adobe.com/en/publish/2020/11/23/new-ai-capabilities-for-magento-commerce-improve-retail.html#gs.yw6mtq
  76. https://developers.google.com/recommender/docs/reference/rest
  77. https://www.youtube.com/watch?v=nY5U0uQZRyU&t=6s