
Recommend Smarter, Faster and More Accurately
Learn how to rapidly deploy resilient, real-time recommenders using the ecosystem.Ai Platform
Introduction #
This lesson outlines the steps you will take to build the configurations needed for your recommender. Go through the steps of identifying the right kind of data for your project, assigning recommender model types, and configuring deployment details. Learn how to then view and analyze your recommender dashboards in order to alter variables and increase recommendation effectiveness.
Recommenders are one of the most commonly used types of prediction when adding intelligence to customer interactions. They can be used for recommending products, messages, design constructs, special offers, and more. A recommendation can be made at any point in a customer journey where there is a choice of what to show to a customer.
They can be difficult to implement optimally in practice. Some of the difficulties are:
- Real-time adds a lot of value, but it is difficult to get right
- As recommendations are often deployed in key journeys, there is no margin for error in uptime and responsiveness
- Some approaches may require building and managing a large number of models
- Giving discounts and other offers can be costly, so the budget needs to be managed carefully
- Focusing on a single recommendation option limits what you can learn about your customers; exploration should always be an option to keep up with changing human context
The Recommenders Module is designed to allow you to rapidly deploy resilient, real-time recommenders with a range of model enhancement functionality. This enables you to keep within budget, maintain the ability to explore in your deployment environment, use different models in different scenarios, and more.
How it Works #
The Recommenders Module uses sophisticated tools to enable you to build more effective vidget (virtual item) recommendations.
The worker architecture allows for use of the latest modeling packages. Architected from the ground up for durability and performance, it returns results in real-time and is highly resilient. You can also deploy easily to production at the push of a button. There are many ways to structure your recommenders: multinomial models, collections of binomial models, incorporating an exploration component, and stacked ensembles of recommendations.
The Value of Real-Time Recommendations
Real-time allows you to use the latest information about your customer, letting you use real-time features such as location, time of day, balance, and so on, which aren’t available to batch models at all. Real-time ultimately removes the problems associated with batch-generated offers becoming irrelevant over time. The difference between batch and real-time extends beyond the moment of the recommendation itself: in real-time, you will be able to view, analyze and adjust according to the customer response, the very moment it happens.
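As a rough illustration (not platform code), a real-time feature set is assembled at the moment of the request rather than in an overnight batch; all field names below are hypothetical:

```python
from datetime import datetime

# Hypothetical real-time context built at the moment of the request;
# none of these field names come from the ecosystem.Ai platform itself.
def build_realtime_features(customer: dict, request: dict) -> dict:
    return {
        "customer_no": customer["customer_no"],
        "balance": customer["balance"],        # latest balance, not a monthly batch snapshot
        "hour_of_day": datetime.now().hour,    # only knowable at scoring time
        "location": request.get("geo"),        # e.g. passed in by the mobile app
    }
```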
Recommender Structures
There are a number of different model frameworks you can use in your recommender structure. To list just a few:
- Multinomial models – A single model is built which predicts the vidget that a customer is most likely to engage with
- Collections of binomial models – A model is built for each vidget, which predicts the likelihood of a customer engaging with that vidget (both this and the multinomial approach are sketched in code after this list)
- Incorporating an exploration component – Rather than only presenting a vidget which is predicted to be the most likely to be engaged with, occasionally present other vidgets in a way that allows you to explore whether the human behavior in your system has changed.
- Stacked ensembles of recommendations – The output of one model can be used to inform the next model. For example, one model could recommend a design construct, while the next model sets the messaging within that design.
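To make the first two structures concrete, here is a minimal scikit-learn sketch. It is purely illustrative, not how the platform builds its models; the feature matrix and vidget names are made up for the example:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: customer features and the vidget each customer engaged with.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                       # e.g. age, balance, tenure, activity
vidgets = np.array(["offer_a", "offer_b", "offer_c"])
y = rng.choice(vidgets, size=500)

# Multinomial: a single model predicts which vidget a customer is most likely to engage with.
multinomial = LogisticRegression(max_iter=1000).fit(X, y)
print(multinomial.predict(X[:1]))

# Collection of binomials: one model per vidget, each predicting engagement with that vidget.
binomials = {v: LogisticRegression(max_iter=1000).fit(X, (y == v).astype(int)) for v in vidgets}
scores = {v: m.predict_proba(X[:1])[0, 1] for v, m in binomials.items()}
print(max(scores, key=scores.get))                  # recommend the highest-scoring vidget
```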
Let’s Get Started! #
Configuring any recommender is effective and seamless using the Recommenders Module
Two main interfaces can be used to access the ecosystem.Ai functionality: the first is the Workbench graphical interface, and the second is Jupyter notebooks. This lesson takes you through the Workbench configuration for recommenders.
If you would prefer to build your recommender in a Notebook, head to the Worker Ecosystem in your Workbench, and click on Jupyter Notebooks. From there, navigate to your Get Started folder and find “No Data Experiments”.
Project Definition #
Projects allow you to keep track of all of the work linked to completing any specific project.
When you log into the workbench you will see example projects that have already been created for you:
You can edit these projects to set up your own pre-configured first example. Some of the examples that have already been set up for you are:
- Simple Recommender
- Offer Recommender Single-Model – which shows how offers can be recommended using a single multivariate model
- Offer Recommender Multi-Model – which uses a collection of binomial models
- Message Recommender – which shows how customer engagement messages can be personalized using a recommendation engine
- Recommender Experiment – which shows how exploration can be incorporated into a recommender environment
You can add a new project or edit an existing one:
When creating your project you must provide a name, type and description.
You can also assign dates and people to the project, but this is more for administrative purposes than a necessity.
As you progress with your recommender configurations, you will link items to the project as they are created, such as the models, frames, simulations and other completed elements.
Adding Data to Files and Feature Engineering #
The data used to set up your recommender will likely consist of historical information, such as whether vidgets have been taken up by clients, and/or the characteristics of the client at the point they were presented with the vidget. Examples of different styles of recommender data sets can be found in the example projects.
Head to the Files section of the Workbench in the menu on the left. In the Files section, you will be able to see a list of all the data files available for you to create predictions with. This is where you can view a list of files, and download files to your local environment. If you only want to view the contents of these files, or don’t want to upload your own data, head to the Feature Engineering step of this lesson (2.2).
If you wish to upload your own data to be used in building your first recommender, stay in the Files section and add a file.
Data can be added to the ecosystem platform using the + Add File functionality:
Files must be uploaded in either CSV or JSON format.
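For context, an upload might look something like the file produced below; the column names are hypothetical and only meant to show the shape of historical recommender data:

```python
import pandas as pd

# Hypothetical historical data: who was shown which vidget, and whether they took it up.
history = pd.DataFrame(
    {
        "customer_no": [1001, 1002, 1003],
        "age": [34, 51, 27],
        "balance": [1520.50, 310.00, 8200.75],
        "vidget": ["offer_a", "offer_b", "offer_a"],
        "taken_up": [1, 0, 1],
    }
)
history.to_csv("vidget_history.csv", index=False)   # CSV is one of the accepted upload formats
```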
As an alternative way of getting your data onto the platform, you can connect to your own local database.
You can also use database connection strings using the Presto Data Navigator:
If you have your own database, you can connect it here. This database access option uses the Presto Worker in the platform. Add a Connection path, similar to this example: local/master?user=admin. Then write a SQL statement to extract the data you want, similar to this example: select * from master.bank_customer limit 2. Then click Execute.
NOTE: IN ORDER TO ADD DATA USING THE PRESTO FUNCTIONALITY, YOU MUST FIRST HAVE YOUR PRESTO CONNECTION ACCURATELY SET UP.
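If you want to sanity-check the same Presto source from outside the Workbench, a sketch along these lines should work, assuming the presto-python-client package and that you know the host and port of your Presto Worker (both assumed here):

```python
import prestodb  # pip install presto-python-client

# Host and port are assumptions; catalog, schema and user mirror local/master?user=admin.
conn = prestodb.dbapi.connect(
    host="localhost",
    port=8080,
    user="admin",
    catalog="local",
    schema="master",
)
cur = conn.cursor()
cur.execute("select * from master.bank_customer limit 2")
print(cur.fetchall())
```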
In the Feature Engineering tab you can use the + Ingest Collection functionality:
Once data has been added to the platform it must be ingested into a specified database and collection. You can either select or add a database at this point, and then use the Ingest Collection functionality to ingest a collection into the selected database.
Close the ingestion window, then you can begin the next step of creating a feature store using your uploaded or chosen dataset.
NOTE: IF THE NAME OF THE COLLECTION YOU ARE INGESTING ALREADY EXISTS, THE NEW DATA WILL BE APPENDED TO THE EXISTING DATA. IT WILL NOT BE OVERWRITTEN.
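If you want to verify this append behaviour yourself, and assuming the ingested collections live in a MongoDB instance you can reach directly (the connection string, database and collection names below are hypothetical), a quick check might look like this:

```python
from pymongo import MongoClient

# Hypothetical connection string, database and collection names.
client = MongoClient("mongodb://localhost:27017")
collection = client["recommender"]["vidget_history"]

before = collection.count_documents({})
collection.insert_many([{"customer_no": 1001, "vidget": "offer_a", "taken_up": 1}])
after = collection.count_documents({})

# Re-ingesting into an existing collection adds documents; nothing is overwritten.
print(before, "->", after)
```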
Feature Store Creation #
Once data has been ingested, it must be put into a format in which it can be used by machine learning algorithms. In order to do this, you will export your data set to the ecosystem platform, and create a feature store from the exported data.
Remain in the Feature Engineering section of the Workbench to export your data. Refresh the page if your Collection has not yet appeared on the list. Most of the settings in this tab can remain default.
Data is exported in the Feature Engineering tab using the “options” dropdown on the right:
Select the collection where your data is stored and, from the dropdown menu next to the collection name, select Export.
Fill in the details to specify the export details:
The fields should be automatically filled with the details of your collection. You can be specific about the fields you wish to export; you will find more details about this in the Feature Engineering Course. For now, the default fields are recommended. If you are unsure of how much data to export, leave the Number to Export as 0 to export it all. Then click Export and the dataset will be pulled into the Workbench for Feature Store creation.
Now, head to the Feature Store section of the Workbench to use your exported dataset.
Go to Feature Store Definition:
This tab displays a list of all previously configured feature stores.
Select + Feature Store to create a feature store from your exported data set:
The fields in the feature store creation section will be the configuration settings for your final Feature Store.
You will need to specify a unique name (Feature Store ID) and Description:
The feature store can also be allocated to an existing project.
Specify the characteristics of the file you will be building the feature store on, and select Check Data.
This will create a table of all of the columns in the data set. You can change the automatically allocated type of each column on the right, as well as add descriptions to those columns.
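Conceptually, Check Data does for you what you would otherwise do by hand in a notebook. A rough pandas equivalent, using the hypothetical file from earlier, would be:

```python
import pandas as pd

# Inspect the automatically detected column types of a hypothetical export.
df = pd.read_csv("vidget_history.csv")
print(df.dtypes)

# Correct a column whose type was mis-detected, e.g. treat the vidget column as categorical.
df["vidget"] = df["vidget"].astype("category")
```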
Once you have correctly typed and described the data you can set the Destination Frame:
which is the name of your feature store. This field should be pre-populated, but if you want your feature store to have a distinctive or specific name, you can change it here.
Then you can Parse Data and your feature store will be created:
There is no need to use the small Parse button when configuring a recommender feature store.
Make sure to save your feature store definition:
Predictions #
The next step is to begin making the models for your project. Head to the Predictions tab in order to view pre-configured example models, or to create your own.
The Predictor Definition tab is where you will manage the models for your recommender:
This is where you will be able to view all of your previously configured models, as well as create new ones. Creating models using this interface is more suited to cases which use a small number of models. Cases where tens or hundreds of models need to be trained are better handled in our Jupyter Notebooks.
Use the + Create Model functionality to create a new model:
Select a unique Predict ID and Description: you can refer back to this unique ID at any point in the process. You can also link the model to a previously created project.
Specify the version and Model ID associated with this version of the model: you will almost always have multiple versions. This will ensure you can appropriately track changes as you experiment with the parameters of your models.
Choose a model type from the list of supported models: and enter a model category. That will allow you to organise your overall model training more effectively.
Select the Primary Data Frame that you created in the feature store tab and update the version number:
Make sure the number in the Version field matches the version number in your Model ID.
Specify the model parameters and add notes to be stored with the model:
Notes should include the features you have used in the model, all of which can be retrieved using the Retrieve Features button. Notes should also include the model parameters, as well as a summary of the changes made in each version of the model.
For the Model Parameters functionality, clicking Generate Default will generate a list of the model parameters and their default values. These can be modified to change the behaviour of the model. Examples of model parameter set-ups can be found in the other pre-configured example projects. Go to your Projects Dashboard to find these.
Further details of all of the Model Parameters functionality can be found in the Prediction Workers documentation. Each worker has its own documentation – H2O, PyTorch, Ludwig and Tensorflow.
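As a rough, notebook-side illustration of the kind of parameters these defaults control, here is plain H2O code rather than the Workbench's own training call; the file name and column names are hypothetical:

```python
import h2o
from h2o.estimators import H2OGradientBoostingEstimator

h2o.init()

# Hypothetical export of the feature store; column names are illustrative only.
frame = h2o.import_file("bank_customer_features.csv")
frame["vidget"] = frame["vidget"].asfactor()   # make the target categorical (multinomial)

# Parameters comparable to the ones Generate Default exposes in the Workbench.
model = H2OGradientBoostingEstimator(ntrees=50, max_depth=5, learn_rate=0.1)
model.train(
    x=["age", "balance", "tenure"],            # illustrative feature columns
    y="vidget",
    training_frame=frame,
)

print(model.model_performance(frame))
```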
Once you have saved your model configuration, click Generate Model:
This will start the model training, which may take some time to complete. To view training progress, and any interim models being produced, use the Result button.
Once models have been trained they will appear in the table:
Selecting a model will show metrics on its effectiveness. This can be used to select the preferred model, should multiple models be available.
Once a model has been selected, save and deploy it so that it is ready to use in the deployment step.
Head back to Projects and go to Deployment, in order to configure your deployment details.
Deployment #
The deployment tab allows you to set up your recommender for use in the production, Quality Assurance or Test environment, and to Push it into the desired environment.
Most settings in this tab can be left with their default values.
Set the case configuration for your recommender deployment: Create a unique Prediction Case ID name. Add a Description that is relevant to the specific deployment you are doing. Add the Type and the Purpose of your deployment. You can leave the Type and Purpose blank if you are unsure what to put there.
Then input the properties details:
Set the Version of the deployment step. This version number should be updated every time you make changes to the deployment. Specify the Environment in which you will be deploying your configuration. Then input the performance and complexity settings for your set up.
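To summarise the fields described above, here is an illustrative sketch. This is not a file or format the platform reads; the values are entered through the Workbench forms, and those below are hypothetical:

```python
# Illustrative summary of a deployment case; values are examples only.
deployment_case = {
    "prediction_case_id": "offer_recommender_case",   # unique name
    "description": "Single-model offer recommender",
    "type": "recommender",
    "purpose": "offer personalization",
    "version": "001",        # update every time you change the deployment
    "environment": "test",   # production, Quality Assurance or Test
}
```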
Selecting the Model Enabler checkboxes will reveal Settings sections relevant to that option, at the bottom of the page:
Select Prediction Model:
and specify the model that you want to use. Enter the names of the models that you saved for deployment in the previous steps.
Select Parameters from Data Source: to access a database. This is for when you want to get data for making predictions from a database that is accessible to your production environment. The alternative to this is passing the data through the API. Data uploaded to the platform and ingested will be available to the recommender in production.
A range of different Model Enablers can be selected to enhance the functionality of your models:
- Offer matrix is loaded in memory and accessed in the plugins, for default pricing, category and other forms of lookup
- Plugins support three primary areas: API definition, pre-score logic and post-score logic. There are a number of post-score templates
- Budget Tracker tracks offers and other items used through the scoring engine, and alters the behavior of the scoring system. You must include the post-score template for this option to work
- Whitelist allows you to test certain options with customers. The results will be obtained from a lookup table. You must include the post-score template for this option to work
- New knowledge allows you to add exploration to your recommender by specifying the epsilon parameter. Epsilon (e.g. 0.3 = 30%) of the interactions will be selected at random, while the remaining ones will be selected using the model (a minimal sketch of this behavior follows below this list)
- Pattern selector allows different patterns when options are presented, through the scoring engine result
For more detailed descriptions, go to the Model Enablers lesson.
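The New knowledge enabler's epsilon behavior can be understood with a small epsilon-greedy sketch; this illustrates the idea rather than the platform's internal implementation:

```python
import random

def select_vidget(model_scores: dict, epsilon: float = 0.3) -> str:
    """Epsilon-greedy selection: epsilon (e.g. 0.3 = 30%) of interactions are chosen
    at random to keep exploring; the rest use the model's best-scoring vidget."""
    if random.random() < epsilon:
        return random.choice(list(model_scores))       # explore
    return max(model_scores, key=model_scores.get)     # exploit

# Hypothetical model scores for three vidgets.
print(select_vidget({"offer_a": 0.62, "offer_b": 0.55, "offer_c": 0.18}))
```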
Once that is done, click the Push button to set the configuration up in your specified environment. No downtime is required. The Generate and Build buttons are not needed for now; they are designed for Enterprise and on-premise setups.
Testing #
Once you have pushed your configuration for deployment you should do some testing to see if the results align with your expectations. There are two ways to test your deployment now that it has been created and Pushed.
1. Head to our Jupyter Notebooks to configure the simulation of your deployment. The steps for completing this part of the journey are laid out in the Notebooks.
2. Go to the Laboratory section of the Workbench in order to test your API:
If you have used one of the pre-configured examples to work through, click on the relevant deployment to view the details. If you have created your own, use the Create New button to make a new API. Provide the deployment name and click next to add it to the list.
Select Configuration to view and edit the details of your API:
Select the one you want to test, fill in the relevant details of the campaign, then select the campaign to bring down the API test window.
Click Execute to bring back the API results and ensure your deployment is functioning:
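Outside the Laboratory, the same test can be scripted against the deployed API. The endpoint and parameter names below are assumptions, so copy the actual URL and parameters shown in your API test window:

```python
import requests

# Assumed runtime URL and parameter names -- replace them with the values shown for
# your deployment in the Laboratory's API test window.
url = "http://localhost:8091/score"
params = {
    "campaign": "offer_recommender_case",   # your Prediction Case ID
    "customer": "1001",
    "channel": "app",
}
response = requests.get(url, params=params, timeout=10)
print(response.status_code)
print(response.json())
```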
Now that you have built, deployed and tested your recommender, it’s time to watch it in action.
Monitoring #
Once your recommender is running it is important to keep track of its behavior, and begin to examine the results.
Head to the Worker Ecosystem section of the Workbench and select the “Real-Time Dashboard” to go to Grafana, to set up and view the real-time results of your deployment. Alternatively, head to the Monitoring section of the Workbench to configure and view your Superset Dashboard.
Access the Grafana Dashboards for a real-time view of your recommender:
Our Grafana Dashboards illustrate the behavior of the recommender in production, showing which options are being recommended and which are successful, as well as providing information on performance and how the recommender is trading off between exploring and exploiting.
To set up your Grafana Dashboard and link it to your chosen deployment, you will need to log in as an admin. We have already pre-built a dashboard for you to view all the most important elements of your real-time deployment. However, if you have experience with Grafana, or are looking to monitor something very specific, you can build your own dashboard: https://grafana.com/docs/grafana/next/getting-started/build-first-dashboard/.
Now that you have logged in, navigate to the left-hand menu, click on the ‘dashboards’ icon and select “Manage”:
At this point, you will notice a list of folders.
Select the “Runtime2” folder and click on Scoring Dashboard: Client Pulse Responder:
to view the pre-built dashboard configuration. The dropdown menu called “Prediction case” is where you can see all the deployments linked to this dashboard. Find your deployment there if you have used one of the pre-configured solutions.
To add a new deployment to be viewed on this dashboard, go to the “Dashboard Settings” icon in the top right corner:
This will take you to the settings page where you can manage elements of the dashboard.
Go to Variables in the menu on the left, and then click on Prediction:
You will notice in the “Custom Options” field that the deployments currently linked to this dashboard are listed, separated by commas.
Simply add your deployment case name in this field:
Then click Update. When this refreshes, click Save Dashboard on the left; this will open a popup where you can specify the details of your changes. This is not a compulsory step, but it is good practice to document all changes. Then click Save. Press the “back” button in the top left-hand corner to go back to the dashboard, give it a minute to load, and then you will be able to view your new deployment in the “Prediction Case” list.
Go to the Superset Dashboard to view further illustrations of your recommender:
The Superset dashboards allow you to view and analyse the results of the recommendation process.