
4 posts tagged with "personalization"


· 10 min read
Sparsh Agarwal

Author: Alexsoft

In a hyper-connected world, where advanced analytics and smart devices constantly monitor and re-assess risks, the traditional once-a-year insurance policy looks increasingly static and irrelevant. Insurance is becoming a living, breathing product that scales and shrinks over time to accommodate the changing risks in clients’ daily lives. As technology continues to advance, real-time data from connected devices and predictive analytics powered by AI and machine learning will push personalized insurance further, to the benefit of both client and insurer.

To satisfy client expectations, insurers may need to go beyond personalizing marketing communication and start personalizing product bundles for individuals.

What is personalized insurance?

Personalized insurance is the process of reaching insurance customers with targeted pricing, offers, and messages at the right time. Personalization spans across various types of insurance services, from health to property insurance.

Some insurers are already defining themselves as trusted advisors aiding people in navigating, anticipating, and eliminating risks rather than just paying the compensation when things go wrong.

For example, these companies use customer data from wearables and smart devices to monitor the user’s lifestyle. If the data indicates the emergence of a serious medical condition, they can send the customer content designed to change a detrimental lifestyle or recommend immediate treatment. When the customer stays fit and healthy and avoids risky activities, their insurance cost decreases.

/img/content-blog-raw-blog-insurance-personalization-untitled.png

Insurers can provide personalization to customers at different levels:

  • Personalized product bundles. The insurer offers a wide range of products, such as health, car, life, and property insurance, so clients can choose the specific products they want and group them into a bundle.
  • Personalized communications. Insurers use data collected from smart devices to notify customers about harmful activities and lifestyles, and send recommendations on lifestyle changes. Some insurers go a step further and give clients incentives for a healthy lifestyle.
  • Personalized insurance quotes. Customers can adjust the price of their premiums by turning off the coverages they don’t need at any time. Some insurers enable automatic quote adjustments depending on the customer’s behavior (e.g., driving habits) or lifestyle choices (e.g., exercising); a simple sketch of such an adjustment follows this list.
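
To make the quote-adjustment idea concrete, here is a minimal sketch. It is not any insurer's actual pricing logic: the coverage names, rates, and discount cap are hypothetical, and a single "behavior score" stands in for whatever signal the insurer derives from connected devices.

# Hypothetical illustration of a personalized quote: coverages can be toggled
# on/off, and a behavior score scales the price downwards for low-risk customers.
BASE_RATES = {"health": 120.0, "car": 80.0, "life": 40.0, "property": 60.0}  # monthly, hypothetical

def personalized_quote(selected_coverages, behavior_score):
    """behavior_score in [0, 1]; 1.0 = lowest observed risk (hypothetical scale)."""
    base = sum(BASE_RATES[c] for c in selected_coverages)
    discount = 0.25 * behavior_score  # up to 25% off for low-risk behavior (assumed cap)
    return round(base * (1.0 - discount), 2)

print(personalized_quote(["car", "property"], behavior_score=0.8))  # 112.0

Toggling a coverage off simply removes its base rate from the sum, while better behavior data increases the discount.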

Why is it important?

Collecting and analyzing user data is vital in personalizing products based on individual behavior and preferences. In addition, insurers should use this data to enhance external relationships with their customers and guide their internal processes. This will eventually lead to delightful customer experiences and efficient operations.

Personalized insurance is important for many reasons:

Customers expect personalized treatment. Every customer wants to feel special, and personalizing your services and products does just that: it keeps customers loyal to you. Moreover, customers are open to personalization. According to an Accenture study, 95 percent of new customers are ready to share their data in exchange for personalized insurance services, and about 58 percent of conservative users would be willing to do so.

Driving more effective sales and increasing revenue. Personalization benefits your sales and income in two ways. First, many people are ready to share their data with you in exchange for incentives and reduced premiums. Second, having access to clients’ data lets you target people who are already interested in your product, increasing sales and revenue at a lower cost. You will be able to reach your customers at the right time with the product they need.

Streamlining operations and working with customers more accurately. Having an insight into customer preferences and behavior is crucial if you want to provide personalized services. Data obtained from social media activity, fitness trackers, GPS, and other tech can help you serve customers better.

Success stories

Lemonade

Use of AI and chatbots to personalize communications.

Lemonade is a US insurance company that uses Maya, an AI-powered bot, to collect and analyze customer data. Maya acts as a virtual assistant that gathers information, provides quotes, and handles payments. It can also give customized answers to users’ questions and even help them make changes to existing policies. Lemonade uses Natural Action Synthesis and Natural Language Processing to ensure that Maya gets smarter the more it chats, which is possible because their machine learning model is retrained almost daily.

/img/content-blog-raw-blog-insurance-personalization-untitled-1.png

On top of that, the company uses big data analytics to quantify losses and predict risks by placing each client into a risk group and quoting a relevant premium. Customers are grouped according to their risk behaviors, using algorithms that draw on extensive customer data, such as health conditions.
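
Lemonade has not published how these groups are built, but the general idea of segmenting clients into risk groups from behavioral and health features can be illustrated with a small clustering sketch; the features, values, and number of groups below are purely hypothetical.

import numpy as np
from sklearn.cluster import KMeans

# Each row is a customer: [age, BMI, weekly exercise hours, prior claims] -- hypothetical features
customers = np.array([
    [25, 22.0, 5, 0],
    [47, 31.5, 1, 2],
    [33, 24.0, 3, 0],
    [58, 29.0, 0, 3],
    [29, 21.5, 6, 0],
])

# Group customers into risk segments; each segment can then be quoted a different premium
kmeans = KMeans(n_clusters=2, n_init=10, random_state=42).fit(customers)
print(kmeans.labels_)  # cluster ids acting as risk-group labels, e.g. [0 1 0 1 0]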

Cover

Cover is a US-based insurance metasearch company that notifies its clients of price drops for their premiums. Their technology works by scanning the market, looking for discounted and lowered prices of insurance premiums for their clients. Cover blends automation, mobile technology, and expert advice to provide customers with high-quality insurance protection at the best prices.

Cover compares policy data and prices from over 30 different insurers. At the start, customers answer a few questions, which are then used to match them with a policy that suits their needs.

Oscar

Oscar is a health insurer that provides its clients with a concierge team of medical professionals who give health advice and help them determine whether they are seeing the best specialist for their specific condition. The team also helps with finding the best doctors who accept Oscar insurance and with managing and treating chronic conditions. In addition, a separate concierge team handles emergencies, helping with the patient’s discharge and follow-up care.

Oscar’s mobile app acts as an intermediary between the user and the health system. The platform facilitates the customer’s interaction with their healthcare professionals. Clients can receive their lab reports, medical records, physician recommendations, and virtual care from the app. Oscar has also improved its high-touch services, including telemedicine and an “Ask your concierge” feature that connects users with a health insurance advice team.

/img/content-blog-raw-blog-insurance-personalization-untitled-2.png

Allstate

Allstate is an auto insurance company that offers personalized car insurance to its customers using telematics programs called Drivewise and Milewise. Drivewise is offered through a mobile app that monitors the customer’s driving behavior and provides feedback after each drive. Customers also receive incentives for safe driving. From the app interface, clients can check their rewards and driving behavior for the last 100 trips. The customer’s premium is then calculated based on factors like speeding, abrupt braking, and the time of the trip. One nice thing about Drivewise is that even those who do not have an Allstate car insurance policy can participate in the program. The Milewise program, as the name suggests, lets customers pay for insurance based on the miles they drive: the app monitors the distance covered by the car, and low-mileage drivers can save on insurance.
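
Allstate does not disclose its formulas, but a rough, purely illustrative sketch of Drivewise-style trip scoring and Milewise-style per-mile pricing could look like the following; every weight and rate here is an assumption.

def driving_score(trips):
    """trips: list of dicts with hypothetical keys 'speeding_events', 'hard_brakes', 'night_trip'."""
    score = 100.0
    for t in trips:
        score -= 2.0 * t["speeding_events"]        # assumed penalty per speeding event
        score -= 1.5 * t["hard_brakes"]            # assumed penalty per abrupt-braking event
        score -= 3.0 if t["night_trip"] else 0.0   # assumed penalty for risky trip time
    return max(score, 0.0)

def milewise_premium(miles_driven, daily_rate=1.8, per_mile_rate=0.06, days=30):
    # Pay-per-mile style pricing: a small daily base plus a per-mile charge (rates are hypothetical)
    return round(days * daily_rate + miles_driven * per_mile_rate, 2)

trips = [{"speeding_events": 1, "hard_brakes": 2, "night_trip": False}]
print(driving_score(trips))   # 95.0
print(milewise_premium(250))  # 69.0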

/img/content-blog-raw-blog-insurance-personalization-untitled-3.png

How to approach personalization?

/img/content-blog-raw-blog-insurance-personalization-untitled-4.png

Before fully investing in personalization, you need to carefully plan your approach. This will ensure you have all the pieces for success, and it will help you follow through with your plan.

Explore existing data

Having customer data is the minimum requirement for providing personalized services. First, envision the type of personalization you want to offer. Then make sure you have data collection channels that provide the relevant data for your tasks. For instance, some of your documents may contain the required information, but you will have to digitize and structure them, or extract specific details from them. So audit your current information and data collection mechanisms to estimate whether you’ll need additional effort to gather this data; for instance, you may want to use intelligent document processing.

Engage data scientists to make the proof of concept and carry out A/B tests

Your vision of personalization may not work for every business model, or your data quality may be too low for the project to be feasible. We’ve talked about that while explaining how to approach ROI calculations for machine learning projects. So present the data you have to a data science team that can run several experiments and build prototypes. Once they are ready, you can roll out your new algorithms to a subset of customers to run A/B tests. Their results may show that conventional approaches work better for you, or help you iterate on your assumptions.
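
As a minimal example of what evaluating such a test might involve, the sketch below compares conversion rates between a control group and a group receiving personalized offers with a two-proportion z-test; the counts are made up.

import math
from scipy.stats import norm

# Hypothetical A/B results: conversions / users in each group
conv_a, n_a = 180, 5000   # control
conv_b, n_b = 230, 5000   # personalized offers

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
p_value = 2 * norm.sf(abs(z))  # two-sided test
print(f"z = {z:.2f}, p-value = {p_value:.4f}")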

Invest in data infrastructure

If the A/B tests show that personalization will work for your business model, that is where automation comes into play. You can start investing in data infrastructure and analytical pipelines to automate data collection and analysis mechanisms.

You’ll need a data engineering team for that. These specialists set up connections with data sources such as mobile, IoT, and telematics devices, enable automatic data preparation, configure storage, and integrate your infrastructure with business intelligence software that helps explore and visualize data.

Continuously learn your customers’ preferences and needs

The data you collect is only as good as the insights gained from it. That is why it is vital to have a comprehensive analytics solution. High-quality analytics software will turn data into your most valuable asset, used to improve product development, make more accurate decisions, and provide personalized services to your customers.

Iterate on your infrastructure and algorithms

Personalization isn’t a one-time project. Whether you apply machine learning or build personalization based on rule-based systems, you still have to revisit your technology, continuously gather new data, and adapt your workflows.

Ensure a personalized cross-channel experience

Since the data collected from IoT devices and other tech is vital for personalization, it is important to make the customer experience seamless across different communication channels. Therefore, the customer should always be provided with the same level of personalization regardless of the touchpoint.

Challenges

Personalization is financially intensive. Insurers’ ability to personalize differs only marginally between marketing communications and products, and most of them, especially startups, do not have the funds to implement the advanced technologies, like machine learning, needed for personalized insurance. However, insurers do not need to start with all levels of personalization at once. They can often begin by customizing their customer service, gathering data and insights, and then gradually developing towards more complex systems.

Complex process involving multiple parties. It is also difficult to balance personalization with financial targets, especially when establishing a price for risk. In-depth personalization must combine data and analytics from different sources to ensure that personalized offers reflect the client’s needs as well as the profitability and risk implications for the company.

Customer data is heavily regulated. Customer data from different sources is subject to industry regulations and privacy concerns, and it is often difficult to obtain approval from regulators to use it. Customers are also becoming more aware of how companies use their data and support strict regulations; that is why laws such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), which give customers more control over their data, have been passed. Insurers can address this barrier by explaining to people how their systems work and how personal data is used (read more on explainable machine learning in our dedicated article). Besides being open, insurers can offer clients incentives and free services in exchange for access to personal data.

· 3 min read
Sparsh Agarwal

Classical recommender systems typically serve familiar items, which not only bores customers over time but also creates a critical bias problem, commonly known as the filter bubble or echo chamber problem.

To address this issue, instead of recommending the best-matching product every time, we intentionally recommend a random, unexpected product. For example, suppose a user subscribed to Netflix a month ago and has been watching action movies ever since. If we recommend yet another action movie, there is a high probability that the user will click on it; but with long-term user satisfaction in mind, and to address the filter bubble bias, we would recommend a comedy instead. Surprisingly, this strategy works!

The most common metric is the diversity factor, but diversity only measures dispersion among recommended items. A better alternative is the unexpectedness factor: it measures the deviation of recommended items from the user’s expectations, thereby capturing the notion of user surprise and allowing the recommender system to break out of the filter bubble. The goal is to provide novel, surprising, and satisfying recommendations.
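
PURS defines unexpectedness via distances in a latent embedding space; as a simplified illustration of the idea (with toy, made-up embeddings), one can measure how far a candidate item sits from the items the user has recently consumed.

import numpy as np

def unexpectedness(candidate_emb, history_embs):
    """Simplified: mean Euclidean distance between the candidate item embedding
    and the embeddings of the user's recently clicked items."""
    history_embs = np.asarray(history_embs)
    return float(np.mean(np.linalg.norm(history_embs - candidate_emb, axis=1)))

history = [[0.9, 0.1], [0.8, 0.2]]            # e.g., two action movies (toy 2-d embeddings)
action_candidate = np.array([0.85, 0.15])
comedy_candidate = np.array([0.1, 0.9])
print(unexpectedness(action_candidate, history))  # small -> expected item
print(unexpectedness(comedy_candidate, history))  # large -> unexpected item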

Including session-based information into the design of an unexpected recommender system is beneficial. For example, it is more reasonable to recommend the next episode of a TV series to the user who has just finished the first episode, instead of recommending new types of videos to that person. On the other hand, if the user has been binge-watching the same TV series in one night, it is better to recommend something different to him or her.

Model

/img/content-blog-raw-blog-personalized-unexpectedness-in-recommender-systems-untitled.png

Overview of the proposed PURS model. The base model estimates the click-through rate of certain user-item pairs, while the unexpected model captures the unexpectedness of the new recommendation as well as user perception towards unexpectedness.

Offline Experiment Results

/img/content-blog-raw-blog-personalized-unexpectedness-in-recommender-systems-untitled-1.png

Online A/B Test Results

The authors conducted an online A/B test at Alibaba-Youku, a major video recommendation platform, from 2019-11 to 2019-12. During the testing period, they compared the proposed PURS model with the company’s latest production model. They measured performance using standard business metrics: VV (Video View, average videos viewed by each user), TS (Time Spent, average time each user spends on the platform), ID (Impression Depth, average impressions per session), and CTR (Click-Through Rate, the percentage of users clicking on the recommended video). They also measured the novelty of the recommended videos using the unexpectedness and coverage measures.

Represents statistical significance at the 0.95 level.

Code Walkthrough

Note: PURS is implemented in TensorFlow 1.x

Unexpected attention (model.py)

def unexp_attention(self, querys, keys, keys_id):
    """
    Same Attention as in the DIN model
    queries: [Batchsize, 1, embedding_size]
    keys: [Batchsize, max_seq_len, embedding_size] max_seq_len is the number of keys (e.g. number of clicked creativeid for each sample)
    keys_id: [Batchsize, max_seq_len]
    """
    querys = tf.expand_dims(querys, 1)
    keys_length = tf.shape(keys)[1]  # padded_dim
    embedding_size = querys.get_shape().as_list()[-1]
    keys = tf.reshape(keys, shape=[-1, keys_length, embedding_size])
    querys = tf.reshape(tf.tile(querys, [1, keys_length, 1]), shape=[-1, keys_length, embedding_size])

    net = tf.concat([keys, keys - querys, querys, keys * querys], axis=-1)
    for units in [32, 16]:
        net = tf.layers.dense(net, units=units, activation=tf.nn.relu)
    att_wgt = tf.layers.dense(net, units=1, activation=tf.sigmoid)            # shape (batch_size, max_seq_len, 1)
    outputs = tf.reshape(att_wgt, shape=[-1, 1, keys_length], name="weight")  # shape (batch_size, 1, max_seq_len)
    scores = outputs
    scores = scores / (embedding_size ** 0.5)  # scale
    scores = tf.nn.softmax(scores)
    outputs = tf.matmul(scores, keys)  # (batch_size, 1, embedding_size)
    outputs = tf.reduce_sum(outputs, 1, name="unexp_embedding")  # (batch_size, embedding_size)
    return outputs

Unexpected metric calculation (train.py)

def unexpectedness(sess, model, test_set):
    unexp_list = []
    for _, uij in DataInput(test_set, batch_size):
        score, label, user, item, unexp = model.test(sess, uij)
        for index in range(len(score)):
            unexp_list.append(unexp[index])
    return np.mean(unexp_list)

References

  1. https://arxiv.org/pdf/2106.02771v1.pdf
  2. https://github.com/lpworld/PURS

· 10 min read
Sparsh Agarwal

Overview

News recommendation systems have strong real-time requirements because new articles and hot topics appear constantly. Incremental updating, online learning, local updating, and even reinforcement learning can make the recommender system respond quickly to a user’s new behavior, but the premise of all these updating strategies is that the samples themselves carry enough real-time information. In a news recommendation system, the typical training sample is the user’s click behavior data.

Why is the real-time nature of the recommendation system important?

Intuitively, when users open a personalized news application, they expect to find articles matching their interests faster; when using a short-video service, they expect to scroll to content they are interested in faster; when shopping online, they hope to find the products they like faster. All of these scenarios highlight the word "fast", which is the intuitive manifestation of the recommendation system's real-time role.

From a professional point of view, the real-time performance of the recommendation system is also crucial, which is mainly reflected in the following two aspects:

  1. The faster the recommendation system is updated, the better it reflects the user's recent habits, and the more time-sensitive its recommendations can be.
  2. The faster the recommendation system is updated, the easier it is for the model to discover the latest popular data patterns and react to new trends.

The real-time nature of the "feature" of the recommendation system

Suppose a user has watched a 10-minute "badminton teaching" video in its entirety. There is no doubt that the user is interested in the subject of badminton, and the system would like to recommend badminton-related videos the next time the user turns the page. However, because the system's features are not updated in real time, the user's viewing history cannot be fed back to the recommendation system immediately: by the time the recommendation system learns that the user watched "Badminton Teaching", half an hour has already passed and the user has left the app. This is an example of a recommendation failure caused by the poor real-time performance of the recommendation system.

It is true that the next time the user opens the application, the recommendation system can use the latest behavior history to recommend badminton-related videos, but it has undoubtedly lost the opportunity that was most likely to increase user stickiness and retention.

The real-time nature of the "model" of the recommender system

No matter how strong the real-time features are, their scope of influence is limited to the current user. Compared with the real-time nature of "features", the real-time nature of the recommendation model is considered from a more global perspective. Real-time features try to describe a person with more accurate attributes so that the recommendation system can give results that better fit that person; a real-time model tries to capture new data patterns at the global level faster and to discover new trends and correlations.

Take, for example, the large number of promotional activities during Double Eleven on an e-commerce website. Real-time features will quickly discover the products a user may be interested in based on the user's recent behavior, but they will never surface the latest preferences of similar users, the latest correlations between products, or the trends of new promotions.

To discover such global data changes, the model needs to be updated faster. The most important factor affecting the real-time performance of the model is the training method of the model.

  1. Full update - The most common way to train a model is a full update: the model is retrained on all training samples from a certain period, and the newly trained model replaces the "outdated" one. However, a full update requires a large number of training samples, so training takes longer; and full updates are usually performed on offline big data platforms such as Spark + TensorFlow, so the data delay is also longer. This makes the full update the worst update method in terms of real-time performance. In fact, for a model that has already been trained, it is enough to learn only the newly added samples, which is called an incremental update.
  2. Incremental update (incremental learning) - An incremental update feeds only newly added samples to the model for further learning. Technically, deep learning models are usually trained with stochastic gradient descent (SGD) and its variants, and learning from incremental samples is equivalent to continuing gradient descent on the new samples starting from the weights learned on the original samples; therefore, switching a deep learning model from full updates to incremental updates is not difficult. But everything in engineering is a trade-off, and incremental updates are no exception: since only the incremental samples are used for learning, after several epochs the model converges towards the optimum of the new samples, and it is difficult to converge to the global optimum of all original plus incremental samples. Therefore, practical recommendation systems often combine incremental updates with periodic global updates: after several rounds of incremental updates, a global update is performed in a time window with low business volume to correct the errors accumulated during incremental updating, trading off real-time performance against global optimality. (A minimal incremental-update sketch follows this list.)
  3. Online learning - Online learning is a further refinement of incremental updating: instead of updating the model when a batch of new samples has been collected, the model is updated in real time every time a new sample arrives. Online learning can also be implemented with SGD, but plain SGD causes a serious problem: the resulting model has poor sparsity, with too many "fragmented", unimportant features receiving non-zero weights. Model sparsity matters for engineering reasons: in a model whose input vector has several million dimensions, good sparsity means only a small fraction of the dimensions have non-zero weights while model quality is maintained, so the deployed model is very small. Both the memory needed to store the model and the speed of online inference benefit from sparsity. Compared with batch methods, SGD-style online updates tend to produce many features with small non-zero weights, which makes model deployment and updating harder. To balance training quality and model sparsity in online learning, a lot of research has been done; the most famous approaches include Microsoft's RDA, Google's FOBOS, and above all FTRL.
  4. Partial model update - Another way to improve the real-time performance of the model is to update only part of it: reduce the update frequency of the parts that are expensive to train and increase the update frequency of the parts that are cheap to train. Facebook's GBDT+LR model is representative of this approach.
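
As a minimal sketch of incremental updating, the snippet below uses scikit-learn's partial_fit on a linear CTR-style classifier as a stand-in for continuing the training of an already-deployed model on newly arrived samples; the data is synthetic, and the loss name "log_loss" assumes a recent scikit-learn version.

import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_full, y_full = rng.normal(size=(10000, 20)), rng.integers(0, 2, size=10000)  # historical samples
X_new, y_new = rng.normal(size=(500, 20)), rng.integers(0, 2, size=500)        # incremental samples

# "Full update": train from scratch on all historical data
model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(X_full, y_full, classes=np.array([0, 1]))

# "Incremental update": continue gradient descent on the newly arrived batch only
model.partial_fit(X_new, y_new)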

/img/content-blog-raw-blog-real-time-news-personalization-with-flink-untitled.png

Data pipeline of a typical news recommendation system

When a user is shown a list of news articles, a page view event is sent to the backend server, and when the user clicks on a news item of interest, an action event is also sent to the backend server. After receiving these two event streams (page views and clicks), the backend server writes the user behaviour events to a message queue, and the message queue finally stores these messages in a distributed file system, such as HDFS.

For model training, we need training samples, and the most common technique for producing them is negative sampling. Users only interact with some of the news items they are exposed to: the exposures with behavior are positive samples, and the remaining exposures without behavior are negative samples, from which we draw 'n' negatives for each positive event. After generating positive and negative samples, the model can be trained.
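
A simplified sketch of this labeling step, with made-up event records: exposures that were clicked become positives, and up to n of the remaining exposures become negatives.

import random

exposures = [("u1", "news_a"), ("u1", "news_b"), ("u1", "news_c"), ("u2", "news_a")]  # PV events
clicks = {("u1", "news_b")}                                                            # action events
n = 1  # negatives to keep per positive (hypothetical ratio)

positives = [e for e in exposures if e in clicks]
candidates = [e for e in exposures if e not in clicks]
negatives = random.sample(candidates, min(len(candidates), n * len(positives)))

samples = [(u, item, 1) for u, item in positives] + [(u, item, 0) for u, item in negatives]
print(samples)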

A recommendation system with low real-time requirements can use batch processing technology (Apache Spark is a typical tool) to generate samples, as shown in the left figure. A scheduled task runs every period of time, such as one hour, reads the user behavior log and the exposure log for that time window from HDFS, performs a join operation to generate training samples, writes the training samples back to HDFS, and then starts the model training update.
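
A hypothetical PySpark version of that hourly join might look like the following; the paths, column names, and file formats are assumptions rather than the actual pipeline.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("hourly-sample-join").getOrCreate()

# Hypothetical hourly partitions of exposure (PV) and click logs on HDFS
pv = spark.read.json("hdfs:///logs/pv/2021-06-01/10")
clicks = spark.read.json("hdfs:///logs/click/2021-06-01/10")

samples = (
    pv.join(
        clicks.select("user_id", "news_id", F.lit(1).alias("label")),
        on=["user_id", "news_id"],
        how="left",
    )
    .withColumn("label", F.coalesce(F.col("label"), F.lit(0)))  # exposures without a click become negatives
)

samples.write.mode("overwrite").parquet("hdfs:///samples/2021-06-01/10")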

/img/content-blog-raw-blog-real-time-news-personalization-with-flink-untitled-1.png

Problems

One obvious problem with batch processing is latency. The typical cycle of running batch tasks regularly is one hour, which means that there is a delay of at least one hour from sample generation to model training. Sometimes, if the batch platform is overloaded and the tasks need to be queued, the delay will be greater.

Another problem is the boundary problem. If page view (PV) data is generated at the end of the log time window selected by the batch task, the corresponding action data may fall into the next time window of the batch task, resulting in join failure and false negative samples.

A related problem is time synchronization. When a news item is exposed to the user, the user may click immediately after the PV event is generated, or may act after a few minutes, more than ten minutes, or even several hours. This means that after a PV event arrives, it needs to wait for some time before being joined with the action stream. If the waiting time is too short, some samples that should be positive are wrongly labeled as negative because the user's behavior has not flowed back yet; if the waiting time is too long, the system delay increases. Offline analysis of the delay between the actual action stream and the PV stream shows a very typical exponential distribution.
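
For intuition, if the click delay really follows an exponential distribution, the fraction of actions captured by a join window of length T is 1 - exp(-T / mean_delay). The quick calculation below assumes a hypothetical mean delay of 3 minutes.

import math

mean_delay = 3.0  # minutes, hypothetical
for window in (1, 5, 10, 30):
    captured = 1 - math.exp(-window / mean_delay)
    print(f"window = {window:>2} min -> ~{captured:.1%} of clicks joined")
# e.g. a 10-minute window would capture roughly 96% of clicks under this assumption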

/img/content-blog-raw-blog-real-time-news-personalization-with-flink-untitled-2.png

In order to enhance real-time performance, we use the Apache Flink framework to rewrite the sample generation logic with stream processing technology. As shown in the right figure above, after the exposure and behavior logs generated by the online services are written to the message queue, instead of waiting for them to land on HDFS, we consume these message streams directly with Flink. At the same time, Flink reads the necessary feature information from the Redis cache and generates the sample message stream directly. The sample stream is written back to a Kafka queue, and the downstream TensorFlow job can consume it directly for model training.

According to the exponential distribution (analyzed on a private dataset of a news recommender app), most user behavior flows back within a few minutes. If a delay of a few minutes is acceptable, a simple solution is to set a time window of a compromise size. Flink provides window joins to implement this logic.
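
In production this would be done with Flink's built-in window/interval join; the toy Python loop below only illustrates the joining logic of matching a PV event with any click for the same (user, item) pair that arrives within the window, using made-up events and window size.

WINDOW = 10 * 60  # seconds a PV event waits for a matching click (hypothetical)

pv_events = [("u1", "news_a", 100), ("u1", "news_b", 120)]  # (user, item, timestamp)
click_events = [("u1", "news_b", 300)]

clicks = {(u, i): ts for u, i, ts in click_events}
samples = []
for user, item, pv_ts in pv_events:
    click_ts = clicks.get((user, item))
    label = 1 if click_ts is not None and 0 <= click_ts - pv_ts <= WINDOW else 0
    samples.append((user, item, label))
print(samples)  # [('u1', 'news_a', 0), ('u1', 'news_b', 1)]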

References

  1. https://developpaper.com/flink-streaming-processing-and-real-time-sample-generation-in-recommender-system/
  2. https://zhuanlan.zhihu.com/p/74813776
  3. https://zhuanlan.zhihu.com/p/75597761

· 4 min read
Sparsh Agarwal

/img/content-blog-raw-blog-what-is-livestream-ecommerce-untitled.png

Recent years have witnessed the rise of online live streaming. With the development of mobile phones, cameras, and high-speed internet, more and more users are able to broadcast their experiences in live streams on various social platforms, such as Facebook Live and YouTube Live. Live streaming spans a variety of applications, including knowledge sharing, video gaming, and outdoor traveling.

One of the most important scenarios is live streaming commerce, a new form of online shopping that combines live streaming with e-commerce activity and is becoming more and more popular. Streamers introduce products and interact with their audiences, greatly improving sales performance.

/img/content-blog-raw-blog-what-is-livestream-ecommerce-untitled-1.png

Livestream ecommerce is a business model in which retailers, influencers, or celebrities sell products and services via online video streaming where the presenter demonstrates and discusses the offering and answers audience questions in real-time.

/img/content-blog-raw-blog-what-is-livestream-ecommerce-untitled-2.png

Examples

https://media.nngroup.com/media/editor/2021/02/16/tiktok_livestream_compressed.mp4

During a livestream event hosted by Walmart on TikTok, users watched an influencer presenting various products such as a pair of jeans. Those interested in the jeans could tap the product listing shown at the bottom of the screen. They could also browse the list of products promoted during the livestream and purchase them without leaving the TikTok app. Viewers’ real-time comments appeared along the left-hand side of the livestream feed.

Advantages

  • Livestreams allow users to see products in detail and get their questions answered in real time.
  • During livestream sessions, hosts can show product details in close-up (left), give instructions for using products like essential oils and cosmetic face masks (middle), or even show how a particular product, like the tea they’re selling, is made (right). /img/content-blog-raw-blog-what-is-livestream-ecommerce-untitled-3.png
  • Livestreams greatly shorten consumers’ decision-making time and boost sales volume.
  • Expert streamers introduce and promote the products in a live streaming manner, which makes the shopping process more interesting and convincing.
  • Rich, real-time interactions between streamers and their audiences make live streaming a new medium and a powerful marketing tool for e-commerce.
  • Viewers can not only watch a product’s looks and functions being shown, but also ask the streamers to show different or specific aspects of the product in real time.

Market

Livestream ecommerce has been surging dramatically in China. According to Forbes, this industry is estimated to earn $60 billion annually. In 2019, about 37 percent of online shoppers in China (265 million people) made livestream purchases. On Taobao’s 2020 Singles’ Day Global Shopping Festival (November 11th), livestreams accounted for $6 billion in sales, twice the amount from the prior year.

Amazon has also launched its live platform, where influencers promote items and chat with potential customers. And Facebook and Instagram are exploring the integration between ecommerce and social media. For instance, the new Shop feature on Instagram allows users to browse products and place orders directly within Instagram — a form of social commerce.

The total GMV driven by live streaming has reached $6 billion. Some quantitative research shows that adopting live streaming in sales can achieve a 21.8% increase in online sales volume.

/img/content-blog-raw-blog-what-is-livestream-ecommerce-untitled-4.png

The Anatomy of a Livestream Session

/img/content-blog-raw-blog-what-is-livestream-ecommerce-untitled-5.png

A typical livestream session has the following basic components:

  1. The video stream, where the host shows the products, talks about them, and answers questions from the audience. In the Amazon Live case, the stream occupies most of the screen space.
  2. The list of products being promoted, with the product currently being shown highlighted. This list appears at the bottom of the Amazon video stream.
  3. A chat area, where viewers can type questions and comments to interact with the host and other viewers. The chat area is to the right of the live stream on Amazon Live.
  4. A reaction button, which users can use to send reactions, displayed as animated emojis. The reaction button shows up as a little star icon at the bottom right of the video stream on Amazon.

References

  1. Features of Livestream ecommerce: What We Can Learn from China
  2. Top Live Streaming E-Commerce Startups