The personalized digital experience is all about communication
The back and forth between companies and consumers is shifting from the traditional outbound push strategy to a more empowering experience that pulls users into an inbound interaction when they want to engage. Consumers are becoming savvier, but ultimately they want to be understood and educated, not sold to. To achieve that, the digital experience needs to evolve toward “human”-like intelligence.
Here’s what a personalized digital experience should communicate to a user:
“Talk to me! I am listening!”
“You don’t need special skills to talk to me. I am designed so that you can explore in an intuitive way.”
“I am here to support your curiosity! I can tell you about how I met others’ needs and hopefully, you will tell me more and I will try to meet your needs as well.”
“If you tell me something about yourself, I will remember and next time we talk you will have a more relevant experience than the one before.”
Is your website or intranet there yet?
Natural Language Processing (NLP) Is the Foundation for Human-To-Machine-To-Human Interaction
Most users interact with a digital property (web or mobile) in three ways: click, speak, and type. In modern experience design we are seeing more and more that these three modalities are blending, with an increased focus on speaking or typing as entry points into the experience. Whether the input arrives by speaking (via voice-to-text transcription) or typing, it reaches the machine as text. How that text gets interpreted by the machine is the subject of the wide umbrella of Natural Language Processing (NLP).
When it comes to advancements in the NLP space, AI has been used heavily in the transcription layer, specifically to increase the accuracy of voice-to-text and text-to-voice capabilities. We now have access to services developed by companies like Google, Amazon, and others that can transcribe voice to text with accuracy above 90%.
Once text is available, it needs to be translated into something a machine understands. This stage is commonly referred to as natural language understanding (NLU), and it ranges from basic functionality to advanced methods.
Tokenization is breaking text into units (tokens), typically based on spacing and punctuation. Tokenization can get very involved, especially across languages (for example, some Asian languages are based on characters rather than letters and do not separate words with spaces).
Part-of-speech tagging determines whether tokens are nouns, verbs, and so on. Next comes recognizing the specific entities referred to in the text (for example, whether the nouns are names of people, organizations, or places). Spelling correction and synonym detection are also used to improve the interpretation of text inputs.
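The basic NLU steps above can be sketched in a few lines. This is a deliberately minimal illustration: the regex tokenizer and the tiny hand-made part-of-speech lookup table are hypothetical stand-ins for the trained statistical models that production systems use.

```python
import re

def tokenize(text: str) -> list[str]:
    # Keep lowercase alphanumeric runs (plus simple apostrophes); drop punctuation.
    return re.findall(r"[a-z0-9]+(?:'[a-z]+)?", text.lower())

# Toy part-of-speech lookup for illustration only; real taggers are
# trained models, not dictionaries.
TOY_POS = {"the": "DET", "quick": "ADJ", "fox": "NOUN", "jumps": "VERB"}

tokens = tokenize("The quick fox jumps!")
tags = [(t, TOY_POS.get(t, "UNK")) for t in tokens]
print(tags)
```

Even this toy version shows why tokenization choices matter: the regex above silently discards the exclamation mark, which a sentiment-aware system might want to keep.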
It is this type of basic NLU that enables successful frequency-based keyword search, which has been the backbone of users' interaction with digital information for decades.
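Frequency-based keyword search can be sketched as an inverted index scored by term frequency. The document collection and identifiers below are made up for illustration; real engines add normalization such as TF-IDF and field weighting on top of this core idea.

```python
from collections import Counter, defaultdict

# Hypothetical mini product catalog.
docs = {
    "d1": "black slacks classic fit",
    "d2": "black leather belt",
    "d3": "slim fit slacks navy",
}

# Inverted index: term -> Counter of {doc_id: term frequency}.
index = defaultdict(Counter)
for doc_id, text in docs.items():
    for term in text.split():
        index[term][doc_id] += 1

def search(query: str) -> list[str]:
    # Score each document by the summed frequency of matching query terms.
    scores = Counter()
    for term in query.lower().split():
        for doc_id, tf in index[term].items():
            scores[doc_id] += tf
    return [doc_id for doc_id, _ in scores.most_common()]

print(search("black slacks"))  # d1 matches both query terms, so it ranks first
```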
The current advanced methods of NLU and search focus on context, and on how all the words work together to relate contextually. This understanding is necessary to interpret the intent of users' spoken or typed inquiries and produce the most relevant answers. Advanced semantic methods use deep learning to match the meaning of incoming user queries with the right results.
The Search and Browse Experience Is How We Uncover and React to User Preference
Listen to the Query To Understand How to Respond
Now that we have robust methodologies for understanding what is being asked, can we drill deeper and extract specific personal preferences from user queries? Extracting preferences from a product search can be very powerful. For example, if I am looking for “32X32 Black Slacks”, I not only expect relevant results, but also for the size, color, and type of garment to be remembered for future browsing. This experience is only possible when we can extract these explicit preferences using NLP.
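Extracting explicit preferences from a query like this can be sketched with pattern matching. The attribute vocabulary and the waist-by-inseam size pattern below are hypothetical; a production system would derive attributes from a product catalog or a trained entity extractor rather than hard-coded lists.

```python
import re

# Hypothetical color vocabulary; real systems derive this from catalog data.
COLORS = {"black", "navy", "white"}
# Apparel size pattern such as "32X32" (waist x inseam).
SIZE_RE = re.compile(r"\b(\d{2})\s*[xX]\s*(\d{2})\b")

def extract_preferences(query: str) -> dict:
    prefs = {}
    m = SIZE_RE.search(query)
    if m:
        prefs["waist"], prefs["inseam"] = m.group(1), m.group(2)
    for token in re.findall(r"[a-z]+", query.lower()):
        if token in COLORS:
            prefs["color"] = token
    return prefs

print(extract_preferences("32X32 Black Slacks"))
# {'waist': '32', 'inseam': '32', 'color': 'black'}
```

The extracted attributes can then be stored against the user's profile so that size and color survive into the next session, which is what makes the "remember my preferences" experience possible.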
Based on these preferences, the system can then inject recommendations that have a higher chance of positive impact on my browsing. If the experience allows me to control my preferences easily and intuitively, I feel in control, I feel more educated and in a better position to make a purchase decision.
Listen to the Clicks
Personalization is an ongoing learning process. The more the users tell us about their intention in the browse experience, the more we learn what we can present back to them. This extends from what they are speaking and typing, to what they are clicking on, including: Products/Services/Answers, Likes/Dislikes/Shares, and Add-To-Cart/Purchase.
Clicks are signals of users' preferences. This behavior can be used to learn patterns at the user-population level, including which products and services get consumed together and which are complementary, similar, or popular. These population-level signals can produce powerful models for when an anonymous or new user interacts with the digital property. Methods like query suggestions and auto-complete, optimized query rewrites, consumption-based segmentation, RFM models (recency, frequency, and monetary value of consumption), action propensity models, collaborative-filtering-based recommenders, and association rules for next-best-offer can all be used to shape users' journeys with great success.
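The "consumed together" idea behind association and next-best-offer models can be sketched with simple co-occurrence counting over purchase baskets. The basket data and item names below are invented for illustration; production recommenders use far larger datasets and statistical measures such as lift or confidence rather than raw counts.

```python
from collections import Counter, defaultdict
from itertools import combinations

# Hypothetical anonymized purchase baskets.
baskets = [
    {"slacks", "belt"},
    {"slacks", "belt", "shirt"},
    {"shirt", "tie"},
    {"slacks", "shirt"},
]

# Count how often each pair of items appears in the same basket.
co_counts = defaultdict(Counter)
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[a][b] += 1
        co_counts[b][a] += 1

def next_best_offers(item: str, k: int = 2) -> list[str]:
    # Recommend the items most frequently bought alongside the given item.
    return [other for other, _ in co_counts[item].most_common(k)]

print(next_best_offers("slacks"))
```

Because this model is built purely from population-level behavior, it can serve recommendations to anonymous or first-time visitors before any individual preference data exists.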
The applications for intelligent conversational frameworks abound, as the need for automation without sacrificing the user experience is growing exponentially in the digital space. Customer support, ecommerce inquiries, intranet employee self-service and more are all areas across multiple industries that could benefit from these intuitive frameworks.
Advances in semantic search methodology allow us to use automation and AI to deliver an intuitive conversational experience with relevant answers for customers or employees. The use of embeddings and semantic encoders (vector representations of text, with vector similarity calculated between queries and curated answers) augments the keyword search experience by allowing a more flexible way to ask (speak or type) questions. This methodology helps ensure that users get the correct response back as intended by the party curating the knowledge for the conversational experience.
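The query-to-answer matching step can be sketched as cosine similarity between vectors. The three-dimensional vectors and curated answers below are toy values chosen by hand for illustration; real semantic encoders are trained neural models that produce vectors with hundreds of dimensions.

```python
import math

# Hypothetical curated answers with toy 3-dimensional "embeddings".
answers = {
    "How do I return an item?":     [0.9, 0.1, 0.0],
    "What are your store hours?":   [0.0, 0.8, 0.2],
    "Do you ship internationally?": [0.1, 0.1, 0.9],
}

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity: dot product divided by the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def best_answer(query_vec: list[float]) -> str:
    # Return the curated answer whose embedding is closest to the query's.
    return max(answers, key=lambda q: cosine(answers[q], query_vec))

# A query like "can I send this back?" might encode near the first answer:
print(best_answer([0.85, 0.2, 0.05]))
```

The key property is that the user never has to phrase the question with the curator's exact keywords; any wording that encodes close to a curated answer's vector retrieves it.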
Bringing It All Together
Personalized interaction must be approached as a layered, holistic, and evolving communication problem. Unfortunately, there is no easy button. The good news is that we do understand the starting point and have robust solutions around it, with basic NLP as the essential building block.
From there on, it becomes more and more a blend of art and science around how we layer advanced methods that use structured, semi-structured, text, images, video and audio data to concentrate as much intelligence about a user as possible at the point of interaction. The subtleties of this blend can make a user feel understood, in control of their experience and guided towards decisions.
The lines between search, browse, recommendations and conversations need to blur in order for the experience to become more fluid, human-like, and familiar to build loyalty, consumption and brand ambassadorship for the long run.
About the Author
Radu Miclaus is Director of Product, AI & Cloud, Lucidworks. Lucidworks builds AI-powered search solutions for many of the world’s largest brands. Fusion, Lucidworks’ advanced development platform, provides the enterprise-grade capabilities needed to design, develop, and deploy intelligent search apps at any scale on top of Apache Solr. Find out more: www.lucidworks.com