Building A Conversational NLP-Enabled Chatbot Using Google’s Dialogflow

Ever since ELIZA (the first Natural Language Processing computer program, brought to life by Joseph Weizenbaum in the mid-1960s) was created to process user inputs and engage in further discussion based on the previous sentences, there has been an increased use of Natural Language Processing to extract key data from human interactions. One key application of Natural Language Processing has been in the creation of conversational chat assistants and voice assistants, which are used in mobile and web applications to act as customer care agents attending to the virtual needs of customers.

In 2019, the Capgemini Research Institute released a report after conducting a survey on the impact chat assistants had on users once organizations incorporated them into their services. The key findings from this survey showed that many customers were highly satisfied with the level of engagement they got from these chat assistants, and that the number of users embracing these assistants was growing fast.

To quickly build a chat assistant, developers and organizations leverage SaaS products running on the cloud such as Dialogflow from Google, Watson Assistant from IBM, Azure Bot Service from Microsoft, and also Lex from Amazon to design the chat flow and then integrate the natural language processing enabled chat-bots offered from these services into their own service.

This article would be beneficial to developers interested in building conversational chat assistants using Dialogflow, as it focuses on Dialogflow itself as a service and how chat assistants can be built using the Dialogflow console.

Note: Although the custom webhooks built within this article are well explained, a fair understanding of the JavaScript language is required as the webhooks were written using JavaScript.

Dialogflow

Dialogflow is a platform that simplifies the process of creating and designing a natural language processing conversational chat assistant which can accept voice or text data when being used either from the Dialogflow console or from an integrated web application.

To understand how Dialogflow simplifies the creation of a conversational chat assistant, we will use it to build a customer care agent for a food delivery service and see how the built chat assistant can be used to handle food orders and other requests of the service users.

Before we begin building, we need to understand some of the key terminologies used on Dialogflow. One of Dialogflow’s aims is to abstract away the complexities of building a Natural Language Processing application and provide a console where users can visually create, design, and train an AI-powered chatbot.

Dialogflow Terminologies

Here is a list of the Dialogflow terminologies we will consider in this article in the following order:

  • Agent
    An agent on Dialogflow represents the chatbot created by a user to interact with other end-users and perform data processing operations on the information it receives. Other components come together to form an agent and each time one of these components is updated, the agent is immediately re-trained for the changes to take effect.

    Users who want to create a full-fledged conversational chatbot within the quickest time possible can select from the prebuilt agents, which can be likened to templates containing the basic intents and responses needed for a conversational assistant.

    Note: A conversational assistant on Dialogflow will now be referred to as an “agent”, while anyone other than the author of the assistant who interacts with it will be referred to as an “end-user”.

  • Intent
    Similar to its literal meaning, an intent is the end-user’s goal in each sentence when interacting with an agent. For a single agent, multiple intents can be created to handle each sentence within a conversation, and they are connected together using Contexts.

    From the intent, an agent is able to understand the end-goal of a sentence. For example, an agent created to process food orders from customers should be able to recognize a customer’s end-goal of placing an order for a meal, or of getting recommendations on the available meals from a menu, using the created intents.

  • Entity
    Entities are a means by which Dialogflow processes and extracts specific data from an end-user’s input. An example of this is a Car entity added to an intent; names of vehicles would then be extracted from each input sentence as values of the Car entity.

    By default, an agent has some System entities which are predefined upon its creation. Dialogflow also has the option to define custom entities and add the values recognizable within each entity.

  • Training Phrase
    Training phrases are a major way in which an agent is able to recognize the intent of an end-user interacting with it. Having a large number of training phrases within an intent increases the accuracy with which the agent recognizes the intent; in fact, Dialogflow’s documentation on training phrases recommends that “at least 10-20” training phrases be added to a created intent.

    To make training phrases more reusable, Dialogflow gives the ability to annotate specific words within a training phrase. When a word within a phrase is annotated, Dialogflow would recognize it as a placeholder for values that would be provided in an end-user’s input.

  • Context
    Contexts are string names, and they are used to control the flow of a conversation with an agent. On each intent, we can add multiple input contexts and also multiple output contexts. When the end-user makes a sentence that is recognized by an intent, the output contexts become active, and one of them is used to match the next intent.

    To understand contexts better, we can picture a context as the security entry and exit doors of a building, with the intent as the building itself. The input context is the entry door: it only admits visitors that have been listed on the intent. The exit door is what connects the visitors to another building, which is another intent.

  • Knowledge base
    A knowledge base represents a large pool of information from which an agent can fetch data when responding to an intent. This could be a document in any format such as TXT, PDF, or CSV, among other supported document types. In machine learning, a knowledge base could be referred to as a training dataset.

    An example scenario where an agent might refer to a knowledge base would be where an agent is being used to find out more details about a service or business. In this scenario, an agent can refer to the service’s Frequently Asked Questions as its knowledge base.

  • Fulfillment
    Dialogflow’s Fulfillment enables an agent to give a more dynamic response to a recognized intent rather than a static created response. This could be by calling a defined service to perform an action such as creating or retrieving data from a database.

    An intent’s fulfillment is achieved through the use of a webhook. Once enabled, a matched intent would make an API request to the webhook configured for the Dialogflow agent.
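
To make the fulfillment idea above concrete, here is a minimal sketch of the JSON body a webhook returns so the agent can use it as a dynamic response. The shape follows Dialogflow ES’s webhook response format; the message text itself is an invented example:

```javascript
// Sketch of a Dialogflow ES webhook response body. The agent reads the
// fulfillmentMessages array and renders each entry (here a single text
// message) as its reply to the end-user.
const webhookResponse = {
  fulfillmentMessages: [
    {
      text: {
        text: ["Here is a dynamic response built from live data."],
      },
    },
  ],
};

// A webhook would send this as JSON, e.g. res.status(200).json(webhookResponse)
console.log(webhookResponse.fulfillmentMessages[0].text.text[0]);
```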

Now that we have an understanding of the terminologies used with Dialogflow, we can move ahead to use the Dialogflow console to create and train our first agent for a hypothetical food service.

Using The Dialogflow Console

Note: Using the Dialogflow console requires that a Google account and a project on the Google Cloud Platform are created. If unavailable, a user would be prompted to sign in and create a project on first use.

The Dialogflow console is where the agent is created, designed, and trained before integrating with other services. Dialogflow also provides REST API endpoints for users who do not want to make use of the console when building with Dialogflow.

While we go through the console, we will gradually build out the agent which would act as a customer care agent for a food delivery service having the ability to list available meals, accept a new order and give information about a requested meal.

The agent we’ll be building will have the conversation flow shown in the flow chart diagram below where a user can purchase a meal or get the list of available meals and then purchase one of the meals shown.

Creating A New Agent

Within every newly created project, Dialogflow would prompt the first-time user to create an agent, which takes the following fields:

  • A name to identify the agent.
  • A language which the agent would respond in. If not provided, the default of English is used.
  • A project on the Google Cloud to associate the agent with.

Immediately after we click the create button, having added the values of the fields above, a new agent would be saved and the intents tab would be shown with the Default Fallback Intent and Default Welcome Intent as the only two available intents, which are created by default with every agent on Dialogflow.

Exploring the Default Fallback Intent, we can see it has no training phrases but has sentences such as “Sorry, could you say that again?”, “What was that?”, “Say that one more time?” as responses to indicate that the agent was not able to recognize a sentence made by an end-user. During all conversations with the agent, these responses are only used when the agent cannot recognize a sentence typed or spoken by a user.

While the sentences above are sufficient for indicating that the agent does not understand the last typed sentence, we would like to aid the end-user by giving them some more information hinting at what the agent can recognize. To do this, we replace all the listed sentences above with the following ones and click the Save button for the agent to be retrained.

1. I didn't get that. I am Zara and I can assist you in purchasing or learning more about the meals from Dialogflow-food-delivery service. What would you like me to do?
2. I missed what you said. I'm Zara here and I can assist you in purchasing or learning more about the meals from Dialogflow-food-delivery service. What would you like me to do?
3. Sorry, I didn't get that. Can you rephrase it? I'm Zara by the way and I can assist you in purchasing or learning more about the meals from Dialogflow-food-delivery service.
4. Hey, I missed that. I'm Zara and I can assist you in purchasing or learning more about the meals from Dialogflow-food-delivery service. What would you like me to do?

From each of the four sentences above, we can observe that the agent tells the end-user it could not recognize the last sentence, and also gives a piece of information on what it can do, thus hinting the user on what to type next in order to continue the conversation.

Moving next to the Default Welcome Intent, the first section on the intent page is the Context section, and expanding it we can see both the input and output contexts are blank. From the conversation flow of the agent shown previously, we want an end-user to either place a meal order or request a list of all available meals. This would require the two following new output contexts; each would become active when this intent is matched:

  • awaiting_order_request
    This would be used to match the intent handling order requests when an end-user wants to place an order for a meal.

  • awaiting_info_request
    This would be used to match the intent that retrieves data of all the meals when an end-user wants to know the available meals.
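
As an illustration of how these context names gate the conversation, the sketch below models intents whose input contexts must all be active before they can be matched. This is hypothetical code of our own (Dialogflow performs this matching internally), and the assumption that each follow-up intent takes the corresponding context as its input context anticipates the intents we create later in this article:

```javascript
// Hypothetical model of context-based intent matching. Dialogflow only
// considers an intent if all of its input contexts are currently active
// on the session.
const intents = [
  { name: "list-available-meals", inputContexts: ["awaiting_info_request"] },
  { name: "request-meal", inputContexts: ["awaiting_order_request"] },
];

const candidates = (activeContexts) =>
  intents.filter((intent) =>
    intent.inputContexts.every((ctx) => activeContexts.includes(ctx))
  );

// Once the Default Welcome Intent activates both output contexts,
// either follow-up intent can be matched next:
console.log(candidates(["awaiting_order_request", "awaiting_info_request"]).length); // 2

// With only awaiting_order_request active, only request-meal matches:
console.log(candidates(["awaiting_order_request"]).map((i) => i.name)); // [ 'request-meal' ]
```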

After the context section comes the intent’s Events section, and we can see it has the Welcome event type added to the list of events, indicating that this intent will be used first when the agent is loaded.

Coming next are the Training Phrases for the intent. Having been created by default, it already has 16 phrases that an end-user would likely type or say when they interact with the agent for the first time.

When an end-user types or makes a sentence similar to those listed in the training phrases above, the agent would respond using a picked response from the Responses list section shown below:

Each of the responses above is automatically generated for every agent on Dialogflow. Although they are grammatically correct, we would not use them for our food agent. Being a default intent that welcomes an end-user to our agent, a response from the agent should state which organization it belongs to and also list its functionalities in a single sentence.

We would delete all the responses above and replace them with the ones below to better help inform an end-user on what to do next with the agent.

1. Hello there, I am Zara and I am here to assist you to purchase or learn about the meals from the Dialogflow-food-delivery service. What would you like me to do?
2. Hi, I am Zara and I can assist you in purchasing or learning more about the meals from the Dialogflow-food-delivery service. What would you like me to do?

From the two responses above, we can see that each tells an end-user the name of the bot and the two things the agent can do, and lastly, it pokes the end-user to take further action. Taking further action from this intent means we need to connect the Default Welcome Intent to another intent. This is possible on Dialogflow using contexts.

When we add and save those two phrases above, Dialogflow would immediately re-train the agent so it can respond using any one of them.

Next, we move on to create two more intents to handle the functionalities which we have added in the two responses above. One to purchase a food item and the second to get more information about meals from our food service.

Creating list-meals intent:

Clicking the + ( add ) icon from the left navigation menu navigates to the page for creating new intents, and we name this intent list-available-meals.

From there we add an output context with the name awaiting_order_request. This output context would be used to link this intent to the next one, where the end-user orders a meal, as we expect an end-user to place an order for a meal after getting the list of meals available.

Moving on to the Training Phrases section on the intent page, we will add the following phrases an end-user is likely to provide when trying to find out which meals are available.

Hey, I would like to know the meals available.
What items are on your menu?
Are there any available meals?
I would like to know more about the meals you offer.

Next, we would add just the single fallback response below to the Responses section:

Hi there, the list of our meals is currently unavailable. Please check back in a few minutes as the items on the list are regularly updated.

From the response above, we can observe that it indicates that the meals list is unavailable or that an error has occurred somewhere. This is because it is a fallback response and would only be used when an error occurs in fetching the meals. The main response would come as a fulfillment using the webhook option, which we will set up next.

The last section on this intent page is the Fulfillment section, and it is used to provide data to the agent from an externally deployed API or source, to be used as a response. To use it, we enable the Webhook call option in the Fulfillment section and set up the fulfillment for this agent from the Fulfillment tab.

Managing Fulfillment:

From the Fulfillment tab on the console, a developer has the option of using a webhook which gives the ability to use any deployed API through its endpoint or use the Inline Code editor to create a serverless application to be deployed as a cloud function on the Google Cloud. If you would like to know more about serverless applications, this article provides an excellent guide on getting started with serverless applications.

Each time an end-user interacts with the agent and the intent is matched, a POST request would be made to the endpoint. Among the various object fields in the request body, only one is of concern to us, i.e. the queryResult object as shown below:

{
  "queryResult": {
    "queryText": "End-user expression",
    "parameters": {
      "param-name": "param-value"
    }
  }
}

While there are other fields in the queryResult, such as contexts, the parameters object is more important to us, as it holds the parameter extracted from the user’s text. This parameter would be the meal a user is requesting, and we would use it to query the food delivery service’s database.
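
As a quick illustration, a hypothetical helper of our own (not part of the cloud function we build below) that pulls these fields out of such a request body could look like this:

```javascript
// Hypothetical helper: extract the end-user's text and the parsed
// parameters from a Dialogflow webhook request body.
function parseWebhookRequest(body) {
  const { queryText, parameters } = body.queryResult;
  return { queryText, parameters };
}

// A request body shaped like the one shown above:
const sampleBody = {
  queryResult: {
    queryText: "I would like some Fries",
    parameters: { food: "Fries" },
  },
};

const { parameters } = parseWebhookRequest(sampleBody);
console.log(parameters.food); // Fries
```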

When we are done setting up the fulfillment, our agent would have the following structure and flow of data to it:

From the diagram above, we can observe that the cloud function acts as a middleman in the entire structure. The Dialogflow agent sends the parameter extracted from an end user’s text to the cloud function in a request payload and the cloud function, in turn, queries the database for the document using the received name and sends back the queried data in a response payload to the agent.

To start implementing the design above, we begin by creating the cloud function locally on a development machine, then connect it to our Dialogflow agent using the custom webhook option. After it has been tested, we can switch to using the Inline Editor in the Fulfillment tab to create and deploy a cloud function to work with it. We begin this process by running the following commands from the command line:

# Create a new project and ( && ) move into it.
mkdir dialogflow-food-agent-server && cd dialogflow-food-agent-server

# Create a new Node project
yarn init -y

# Install needed packages
yarn add mongodb @google-cloud/functions-framework dotenv

After installing the needed packages, we modify the generated package.json file to include two new objects which enable us to run a cloud function locally using the Functions Framework.

// package.json
{
  "main": "index.js",
  "scripts": {
    "start": "functions-framework --target=foodFunction --port=8000"
  }
}

The start command in the scripts above tells the Functions Framework to run the foodFunction in the index.js file and also makes it listen and serve connections through our localhost on port 8000.

Next is the content of the index.js file which holds the function; we’ll make use of the code below since it connects to a MongoDB database and queries the data using the parameter passed in by the Dialogflow agent.

require("dotenv").config();

exports.foodFunction = async (req, res) => {
  const { MongoClient } = require("mongodb");
  const CONNECTION_URI = process.env.MONGODB_URI;

  // initiate a connection to the deployed mongodb cluster
  const client = new MongoClient(CONNECTION_URI, {
    useNewUrlParser: true,
  });

  client.connect((err) => {
    if (err) {
      res
        .status(500)
        .send({ status: "MONGODB CONNECTION REFUSED", error: err });
    }

    const collection = client.db(process.env.DATABASE_NAME).collection("Meals");
    const result = [];

    const data = collection.find({});
    const meals = [
      {
        text: {
          text: [
            `We currently have the following 20 meals on our menu list. Which would you like to request for?`,
          ],
        },
      },
    ];

    result.push(
      data.forEach((item) => {
        const { name, description, price, image_uri } = item;

        const card = {
          card: {
            title: `${name} at $${price}`,
            subtitle: description,
            imageUri: image_uri,
          },
        };

        meals.push(card);
      })
    );

    Promise.all(result)
      .then((_) => {
        const response = {
          fulfillmentMessages: meals,
        };

        res.status(200).json(response);
      })
      .catch((e) => res.status(400).send({ error: e }));

    client.close();
  });
};

From the code snippet above we can see that our cloud function is pulling data from a MongoDB database, but let’s gradually step through the operations involved in pulling and returning this data.

  • First, the cloud function initiates a connection to a MongoDB Atlas cluster, then it opens the collection storing the meal category documents within the database being used for the food-service on the cluster.

  • Next, using the parameter passed into the request from the user’s input, we run a find method on the collection, which returns a cursor that we further iterate upon to get all the MongoDB documents within the collection containing the data.

  • We model the data returned from MongoDB into Dialogflow’s Rich response message object structure which displays each of the meal items to the end-user as a card with an image, title, and a description.
  • Finally, we send back the entire data to the agent after the iteration in a JSON body and end the function’s execution with a 200 status code.

Note: The Dialogflow agent would wait for a response after a request has been sent within a frame of 5 seconds. This waiting period is when the loading indicator is shown on the console and after it elapses without getting a response from the webhook, the agent would default to using one of the responses added in the intent page and return a DEADLINE EXCEEDED error. This limitation is worth taking note of when designing the operations to be executed from a webhook. The API error retries section within the Dialogflow best practices contains steps on how to implement a retry system.
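
One defensive pattern against this limitation (a sketch of our own, not from the Dialogflow docs) is to race the database work against a timer slightly below the 5-second limit, so the webhook can return a graceful fallback instead of letting the agent hit a DEADLINE EXCEEDED error. The 4-second headroom value here is an assumption:

```javascript
// Sketch: cap any slow async work below Dialogflow's ~5s webhook deadline.
const WEBHOOK_DEADLINE_MS = 4000; // assumed headroom under the 5s limit

function withDeadline(promise, ms = WEBHOOK_DEADLINE_MS) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error("DEADLINE_EXCEEDED")), ms);
  });
  // Whichever settles first wins; the timer is always cleaned up.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Usage inside a webhook (hypothetical): respond with the fallback on timeout.
withDeadline(Promise.resolve("meals payload"), 100)
  .then((data) => console.log(data)) // meals payload
  .catch(() => console.log("using static fallback response"));
```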

Now, the last thing needed is a .env file created in the project directory with the following fields to store the environment variables used in the index.js.

#.env
MONGODB_URI = "MONGODB CONNECTION STRING"
DATABASE_NAME = ""

At this point, we can start the function locally by running yarn start from the command line in the project’s directory. For now, we still cannot make use of the running function, as Dialogflow only supports secure connections with an SSL certificate, and this is where Ngrok comes into the picture.

Using Ngrok, we can create a tunnel to expose the localhost port running the cloud function to the internet with an SSL certificate attached to the secured connection using the command below from a new terminal;

ngrok http -bind-tls=true 8000

This would start the tunnel and generate a forwarding URL which would be used as an endpoint to the function running on a local machine.

Note: The extra -bind-tls=true argument is what instructs Ngrok to create a secured tunnel rather than the unsecured connection which it creates by default.

Now, we can copy the URL string opposite the Forwarding text in the terminal, paste it in the URL input field found in the Webhook section, and then save it.

To test all that has been done so far, we would make a sentence to the Dialogflow agent requesting the list of meals available using the Input field at the top right section in the Dialogflow console and watch how it waits for and uses a response sent from the running function.

Starting from the terminal placed at the center of the image above, we can see the series of POST requests made to the function running locally and, on the right-hand side, the data response from the function formatted into cards.

If for any reason a webhook request becomes unsuccessful, Dialogflow would resolve the error by using one of the listed responses. However, we can find out why the request failed by using the Diagnostic Info tool updated in each conversation. Within it are the Raw API response, Fulfillment request, Fulfillment response, and Fulfillment status tabs containing JSON formatted data. Selecting the Fulfillment response tab we can see the response from the webhook which is the cloud function running on our local machine.

At this point, we expect a user to continue the conversation with an order of one of the listed meals. We create the last intent for this demo next to handle meal orders.

Creating Request-meal Intent:

Following the same steps used while creating the first intent, we create a new intent using the console, name it request-meal, and add an input context of awaiting_order_request to connect this intent from either the Default Welcome Intent or the list-available-meals intent.

Within the Training Phrases section, we make use of the following phrases:

Hi there, I'm famished, can I get some food?
Yo, I want to place an order for some food.
I need to get some food now.
Dude, I would like to purchase $40 worth of food.
Hey, can I get 2 plates of food?

Reading through the phrases above, we can observe they all indicate one thing: the user wants food. In all of the phrases listed above, the name or type of food is not specified; rather, they all refer simply to food. This is because we want the food to be a dynamic value; if we were to list all the food names, we would certainly need a very large list of training phrases. This also applies to the amount and price of the food being ordered: they would be annotated, and the agent would be able to recognize them as placeholders for the actual values within an input.

To make a value within a phrase dynamic, Dialogflow provides entities. Entities represent common types of data, and in this intent, we use entities to match several food types, various price amounts, and quantities from an end-user’s request sentence.

From the training phrases above, Dialogflow would recognize $40 as `@sys.unit-currency`, which is under the amounts-with-units category of the [system entities list](https://cloud.google.com/dialogflow/es/docs/entities-system), and 2 as `@number` under the number category of the same list. However, `food` is not a recognized system entity. In a case such as this, Dialogflow gives developers the option to create a custom entity to be used.

Managing Entities

Double-clicking on food would pop up the entities dropdown menu; at the bottom of the items in the dropdown we would find the Create new entity button, and clicking it would navigate to the Entities tab on the Dialogflow console, where we can manage all entities for the agent.

When at the Entities tab, we name this new entity food; then, from the options dropdown located at the top navigation bar beside the Save button, we have the option to switch the entities input to a raw edit mode. Doing this would enable us to add several entity values in either a JSON or CSV format, rather than having to add the entity values one after the other.

After the edit mode has been changed, we would copy the sample JSON data below into the editor box.

// foods.json
[
  {
    "value": "Fries",
    "synonyms": ["Fries", "Fried", "Fried food"]
  },
  {
    "value": "Shredded Beef",
    "synonyms": ["Shredded Beef", "Beef", "Shredded Meat"]
  },
  {
    "value": "Shredded Chicken",
    "synonyms": ["Shredded Chicken", "Chicken", "Pieced Chicken"]
  },
  {
    "value": "Sweet Sour Sauce",
    "synonyms": ["Sweet Sour Sauce", "Sweet Sour", "Sauce"]
  },
  {
    "value": "Spring Onion",
    "synonyms": ["Spring Onion", "Onion", "Spring"]
  },
  {
    "value": "Toast",
    "synonyms": ["Toast", "Toast Bread", "Toast Meal"]
  },
  {
    "value": "Sandwich",
    "synonyms": ["Sandwich", "Sandwich Bread", "Sandwich Meal"]
  },
  {
    "value": "Eggs Sausage Wrap",
    "synonyms": ["Eggs Sausage Wrap", "Eggs Sausage", "Sausage Wrap", "Eggs"]
  },
  {
    "value": "Pancakes",
    "synonyms": ["Pancakes", "Eggs Pancakes", "Sausage Pancakes"]
  },
  {
    "value": "Cashew Nuts",
    "synonyms": ["Cashew Nuts", "Nuts", "Sausage Cashew"]
  },
  {
    "value": "Sweet Veggies",
    "synonyms": ["Sweet Veggies", "Veggies", "Sweet Vegetables"]
  },
  {
    "value": "Chicken Salad",
    "synonyms": ["Chicken Salad", "Salad", "Sweet Chicken Salad"]
  },
  {
    "value": "Crunchy Chicken",
    "synonyms": ["Crunchy Chicken", "Chicken", "Crunchy Chickens"]
  },
  {
    "value": "Apple Red Kidney Beans",
    "synonyms": ["Apple Red Kidney Beans", "Sweet Apple Red Kidney Beans", "Apple Beans Combination"]
  }
]

From the JSON formatted data above, we have 14 meal examples. Each object in the array has a “value” key, which is the name of the meal, and a “synonyms” key containing an array of names very similar to the object’s value.
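
A small script of our own (not something Dialogflow requires) can sanity-check an entity file of this shape before pasting it, confirming that every entry’s synonyms include its own value so the canonical meal name always matches:

```javascript
// Sanity-check a Dialogflow custom-entity file: each entry's
// "synonyms" array should contain the entry's own "value".
const foods = [
  { value: "Fries", synonyms: ["Fries", "Fried", "Fried food"] },
  { value: "Toast", synonyms: ["Toast", "Toast Bread", "Toast Meal"] },
  { value: "Sandwich", synonyms: ["Sandwich", "Sandwich Bread", "Sandwich Meal"] },
];

const invalid = foods.filter((f) => !f.synonyms.includes(f.value));
console.log(invalid.length === 0 ? "all entries valid" : invalid); // all entries valid
```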

After pasting the JSON data above, we also check the Fuzzy Matching checkbox, as it enables the agent to recognize the annotated value in the intent even when it is incomplete or slightly misspelled in the end-user’s text.

After saving the entity values above, the agent would immediately be re-trained using the new values added here and once the training is completed, we can test by typing a text in the input field at the right section.

Responses within this intent would be gotten from our previously created function using the intent’s fulfillment webhook; however, we add the following response to serve as a fallback to be used whenever the webhook does not execute successfully.

I currently can't find your requested meal. Would you like to place an order for another meal?

We would also modify the code of the existing cloud function to fetch a single requested meal, as it now handles requests from two intents.

require("dotenv").config();

exports.foodFunction = async (req, res) => {
  const { MongoClient } = require("mongodb");
  const CONNECTION_URI = process.env.MONGODB_URI;

  // initiate a connection to the deployed mongodb cluster
  const client = new MongoClient(CONNECTION_URI, {
    useNewUrlParser: true,
  });

  client.connect((err) => {
    if (err) {
      res
        .status(500)
        .send({ status: "MONGODB CONNECTION REFUSED", error: err });
    }

    const collection = client.db(process.env.DATABASE_NAME).collection("Meals");
    const { displayName } = req.body.queryResult.intent;
    const result = [];

    switch (displayName) {
      case "list-available-meals":
        const data = collection.find({});
        const meals = [
          {
            text: {
              text: [
                `We currently have the following 20 meals on our menu list. Which would you like to request for?`,
              ],
            },
          },
        ];

        result.push(
          data.forEach((item) => {
            const { name, description, price, availableUnits, image_uri } = item;

            const card = {
              card: {
                title: `${name} at $${price}`,
                subtitle: description,
                imageUri: image_uri,
              },
            };

            meals.push(card);
          })
        );

        return Promise.all(result)
          .then((_) => {
            const response = {
              fulfillmentMessages: meals,
            };

            res.status(200).json(response);
          })
          .catch((e) => res.status(400).send({ error: e }));

      case "request-meal":
        const { food } = req.body.queryResult.parameters;

        collection.findOne({ name: food }, (err, data) => {
          if (err) {
            res.status(400).send({ error: err });
          }

          const { name, price, description, image_uri } = data;

          const singleCard = [
            {
              text: {
                text: [`The ${name} is currently priced at $${price}.`],
              },
            },
            {
              card: {
                title: `${name} at $${price}`,
                subtitle: description,
                imageUri: image_uri,
                buttons: [
                  {
                    text: "Pay For Meal",
                    postback: "https://google.com",
                  },
                ],
              },
            },
          ];

          res.status(200).json({ fulfillmentMessages: singleCard });
        });
        break;

      default:
        break;
    }

    client.close();
  });
};

From the code above, we can see the following new use cases that the function has now been modified to handle:

  • Multiple intents
    The cloud function now uses a switch statement, with the intent’s name used as cases. In each request payload made to a webhook, Dialogflow includes details about the intent making the request; this is where the intent name is pulled from to match the cases within the switch statement.
  • Fetch a single meal
    The Meals collection is now queried using the value extracted as a parameter from the user’s input.
  • A call-to-action button is now added to the card, which a user can use to pay for the requested meal; clicking it opens a tab in the browser. In a functioning chat assistant, this button’s postback URL should point to a checkout page, probably using a configured third-party service such as Stripe checkout.

To test this function again, we restart it from the terminal by running yarn start, so the new changes in the index.js file take effect.

Note: You don’t have to restart the terminal running the Ngrok tunnel for the new changes to take place. Ngrok would still forward requests to the updated function when the webhook is called.

Making a test sentence to the agent from the dialogflow console to order a specific meal, we can see the request-meal case within the cloud function being used and a single card getting returned as a response to be displayed.

At this point, we can be assured that the cloud function works as expected. We can now move forward to deploy the local function to the Google Cloud Functions using the following command;

gcloud functions deploy foodFunction --runtime nodejs10 --trigger-http --entry-point foodFunction --set-env-vars MONGODB_URI="MONGODB_CONNECTION_URL",DATABASE_NAME="DATABASE_NAME" --allow-unauthenticated

The command above deploys the function to Google Cloud with the flags explained below, and logs a generated URL endpoint for the deployed cloud function to the terminal.

  • NAME
    This is the name given to a cloud function when deploying it, and it is required. In our use case, the name of the cloud function when deployed is foodFunction.

  • trigger-http
    This selects HTTP as the function’s trigger type. Cloud functions with an HTTP trigger are invoked using their generated URL endpoint. The generated URLs are secured and use the HTTPS protocol.

  • entry-point
    This is the specific exported module to be deployed from the file where the functions were written.

  • set-env-vars
    These are the environment variables available to the cloud function at runtime. In our cloud function, we only access our MONGODB_URI and DATABASE_NAME values from the environment variables.

    The MongoDB connection string comes from a MongoDB cluster created on Atlas. If you need help creating a cluster, the MongoDB Getting Started section is a great resource.

  • allow-unauthenticated
    This allows the function to be invoked outside Google Cloud through the Internet using its generated endpoint, without checking whether the caller is authenticated.

Dialogflow Integrations

Dialogflow gives developers the ability to integrate a built agent into several conversational platforms, including social media platforms such as Facebook Messenger, Slack, and Telegram. Aside from the two integration platforms we used for our built agent, the Dialogflow documentation lists the available types of integrations and the platforms within each integration type.

Integrating With Google Actions

Being a product from Google’s ecosystem, agents on Dialogflow integrate seamlessly with Google Assistant in very few steps. From the Integrations tab, Google Assistant is displayed as the primary integration option for a Dialogflow agent. Clicking the Google Assistant option opens the Assistant modal, from which we click on the test app option. From there, the Actions console opens with the agent from Dialogflow launched in test mode, ready for testing using either the voice or text input option.

Integrating a Dialogflow agent with the Google Assistant is a great way to make the agent accessible to millions of Google users from their smartphones, watches, laptops, and several other connected devices. To publish the agent to the Google Assistant, the developer docs provide a detailed explanation of the deployment process.

Integrating With A Web Demo

The Web Demo, which is located in the Text-based section of the Integrations tab in the Dialogflow console, allows the built agent to be used in a web application inside an iframe window. Selecting the Web Demo option generates a URL to a page with a chat window that simulates a real-world chat application.

Note: Dialogflow’s web demo only supports text responses and does not support the display of rich messages and images. This is worth noting when using a webhook that responds with data in the rich response format.
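One common workaround is to pair the rich fulfillmentMessages with a plain fulfillmentText fallback, so that text-only surfaces such as the Web Demo still show a reply. A sketch, assuming a hypothetical meal object (the top-level field names follow Dialogflow’s webhook response format):

```javascript
// Sketch: build a webhook response carrying both a plain-text fallback
// and a rich card. Text-only integrations read fulfillmentText, while
// rich integrations render fulfillmentMessages. The meal object is hypothetical.
function buildResponse(meal) {
  return {
    fulfillmentText: `The ${meal.name} is currently priced at $${meal.price}.`,
    fulfillmentMessages: [
      {
        card: {
          title: `${meal.name} at $${meal.price}`,
          subtitle: meal.description,
          imageUri: meal.image_uri,
        },
      },
    ],
  };
}

const response = buildResponse({
  name: "Jollof Rice",
  price: 10,
  description: "A West African classic",
  image_uri: "https://example.com/jollof.jpg",
});

console.log(response.fulfillmentText); // "The Jollof Rice is currently priced at $10."
```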

Conclusion

From several surveys, we can see the effect of chat assistants on customer satisfaction when organizations incorporate them into their services. These positive metrics are expected to grow in the coming years, placing greater importance on the use of these chat assistants.

In this article, we have learned about Dialogflow and how it is providing a platform for organizations and developers to build Natural Language processing conversational chat assistants for use in their services. We also moved further to learn about its terminologies and how these terminologies apply when building a chat assistant by building a demo chat assistant using the Dialogflow console.

If a chat assistant is being built to be used at a production level, it is highly recommended that the developer(s) go through the Dialogflow best practices section of the documentation as it contains standard design guidelines and solutions to common pitfalls encountered while building a chat assistant.

The source code to the JavaScript webhook built within this article has been pushed to GitHub and can be accessed from this repository.

How to Name Product Categories

Category names on an ecommerce site have a huge impact on organic search rankings and on-site conversions.

To determine the right names:

  • Start by listing all categories on competitors’ sites and marketplaces such as Amazon. Track how many competitors use a certain category name.
  • Add additional category names that may apply to your products. Consider using a thesaurus.
  • Upload the names to Google Ads’ Keyword Planner for the search volume of each term. (Access to the Planner requires a Google Ads account.) Remember that countries and regions refer to the same item differently. “Pants” in the U.S. could be “trousers” in the U.K., for example.

Next, analyze the results. I’ll use hypothetical data, below, for yoga pants. The table shows (i) the various category names from competitors that sell yoga pants, (ii) the number of competitors using that name, and (iii) the Google search volume of that name.

I then inserted a simple equation in the “Outcome” column: Search Volume ÷ Number of Competitors. This identifies the name with the most searches and the fewest competitors. “Sweatpants” is the winner at 20,000 monthly searches per competitor, followed by “Athletic Wear” (3,000) and “Yoga Pants” (1,500).
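The Outcome calculation is easy to script. Below is a sketch in JavaScript; the search and competitor figures are hypothetical, chosen only to reproduce the outcomes quoted above:

```javascript
// Outcome = Search Volume ÷ Number of Competitors.
// Hypothetical figures matching the yoga-pants example in the text.
const candidates = [
  { name: "Sweatpants", searches: 20000, competitors: 1 },
  { name: "Athletic Wear", searches: 15000, competitors: 5 },
  { name: "Yoga Pants", searches: 9000, competitors: 6 },
];

// Compute the outcome for each name, then rank best-first.
const ranked = candidates
  .map((c) => ({ ...c, outcome: c.searches / c.competitors }))
  .sort((a, b) => b.outcome - a.outcome);

console.log(ranked.map((c) => `${c.name}: ${c.outcome}`));
// [ 'Sweatpants: 20000', 'Athletic Wear: 3000', 'Yoga Pants: 1500' ]
```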

Assigning Products

Next, assign each of your products to a category. Some products may have multiple categories. A blanket, for example, could fall under “Blankets,” “Bedding,” or “Gifts.”

You can also use the simple model above for attributes to filter on, such as color, fabric, and size. For example, a red polo shirt could have attributes of “Men,” “Polo,” “Shirt,” “Red,” and “Cotton.” The more attributes, the easier it is for shoppers to find and purchase products.
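Attribute filtering of this kind is straightforward to model. A sketch, with hypothetical product objects and attribute tags:

```javascript
// Filter a catalog by attribute tag, as described above.
// Product shapes and tags are hypothetical examples.
const products = [
  { name: "Red polo shirt", attributes: ["Men", "Polo", "Shirt", "Red", "Cotton"] },
  { name: "Blue tee", attributes: ["Men", "Shirt", "Blue", "Cotton"] },
];

const byAttribute = (items, attr) =>
  items.filter((p) => p.attributes.includes(attr));

console.log(byAttribute(products, "Red").map((p) => p.name)); // [ 'Red polo shirt' ]
```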

Third-party Channels

To pick the right category on a third-party channel, such as Amazon, add a column to the above analysis to identify if a category is available and the number of products it contains. Use the equation of Search Volume ÷ Number of Products to select the category. (Search volume on Google is a good estimate for other platforms. Amazon’s search volume is available on Jungle Scout and similar tools.)

In the example below, the “Athletic Wear” category has 2,000 products on Amazon with 15,000 monthly searches (on Google). Thus its volume of 7.5 is higher than “Pants” and “Sweatpants,” both of which have more competitor products per search.

20 Free Web Design Tools from Fall 2020

Free resources from the design community can add value to an ecommerce site. Here is a list of new web tools and design elements from fall 2020. There are designer and developer apps, coding resources, graphic tools, fonts, and more. All of these tools are free, though some also offer premium versions.

Free Design Tools

Serenade is an app to write code using natural speech. It integrates with existing tools (e.g., Visual Studio Code, Atom, and Chrome), so there’s no need to abandon your current setup. Use voice commands alongside typing, or leave your keyboard behind entirely.


Quarkly is a tool for creating websites and web apps. Use adaptive pre-made blocks to build interactive sites quickly. Apply animations, filters, blendings, transformations, and more. Code your own React-ready components. Work on a project with your colleagues. Free during beta.

Unspam.email is an online email tester tool. It analyzes an email, assigns a spam score, and predicts deliverability results with a heat map. Learn to improve the delivery of your email newsletter. Ensure everyone can read your emails with accessibility checks.

WP Umbrella monitors your WordPress sites and alerts you if anything goes wrong, from PHP errors to uptime and performance. Prevent downtime, accelerate your maintenance operations, and ease your website’s deployment flow.


Tint & Shade Generator produces tints (pure white added) and shades (pure black added) of a given hex color in 10-percent increments. It also previews tints and shades for a base color.

Phosphor is a free and open-source icon library for interfaces, diagrams, presentations, and more. It has over 4,000 icons, consistent in style and scale but flexible in sizes and weights.

Bubbles is a Chrome extension that combines video, audio, and message-based collaboration to allow users to capture, comment, and share anything they see on their screen. Have conversations in the context of what you’re looking at. Eliminate back-and-forth emails and misunderstandings.


Gazepass allows your users to log in using biometrics across all of their devices, rather than passwords. With just a few lines of code, users can sign up and log in using native biometric sensors or face identification via a webcam.

Urlcat is a tiny JavaScript library that makes building URLs convenient and prevents common mistakes.

Link hover animation provides the code for a simple animated highlight.

Swell is a headless ecommerce platform. Create fast and flexible shopping experiences without having to think about infrastructure or maintenance. Includes native subscriptions, custom content models, B2B wholesale features, and a checkout API.


IconPark gives access to more than 1,400 quality icons and an interface for customizing. Instead of using various SVG source files, attributes of a single SVG are modified to produce different themes.

BGJar is a free SVG background generator for your websites, blogs, and apps. Choose from 24 background styles and then customize.

Hitcount is a simple and modern website hit counter. Copy the code and place it on your site where you want a counter to be displayed.

Blacklight is a real-time website privacy inspector. Enter the address of any website, and Blacklight will scan it and reveal its user-tracking technologies.


Free Fonts

Autobus Omnibus is a modern, bold sans serif that’s stylish and significant.


Mango is a free geometric and minimal lowercase font. It’s fresh and futuristic — useful for logos, headlines, and motion graphic animations.


Laredo Rounded is a free rounded vintage style font created by Studio Aurora.


Skaters is a simple and chunky display font to make your casual designs stand out.


Bestermind is a stylish and elegant script font to personalize your design.


How To Use MDX Stored In Sanity In A Next.js Website

Recently, my team took on a project to build an online, video-based learning platform. The project, called Jamstack Explorers, is a Jamstack app powered by Sanity and Next.js. We knew that the success of this project relied on making the editing experience easy for collaborators from different companies and roles, as well as retaining the flexibility to add custom components as needed.

To accomplish this, we decided to author content using MDX, which is Markdown with the option to include custom components. For our audience, Markdown is a standard approach to writing content: it’s how we format GitHub comments, Notion docs, Slack messages (kinda), and many other tools. The custom MDX components are optional and their usage is similar to shortcodes in WordPress and templating languages.

To make it possible to collaborate with contributors from anywhere, we decided to use Sanity as our content management system (CMS).

But how could we write MDX in Sanity? In this tutorial, we’ll break down how we set up MDX support in Sanity, and how to load and render that MDX in a Next.js-powered website, using a reduced example.

TL;DR

If you want to jump straight to the results, here are some helpful links:

How To Write Content Using MDX In Sanity

Our first step is to get our content management workflow set up. In this section, we’ll walk through setting up a new Sanity instance, adding support for writing MDX, and creating a public, read-only API that we can use to load our content into a website for display.

Create A New Sanity Instance

If you don’t already have a Sanity instance set up, let’s start with that. If you do already have a Sanity instance, skip ahead to the next section.

Our first step is to install the Sanity CLI globally, which allows us to install, configure, and run Sanity locally.

# install the Sanity CLI
npm i -g @sanity/cli

In your project folder, create a new directory called sanity, move into it, and run Sanity’s init command to create a new project.

# create a new directory to contain Sanity files
mkdir sanity
cd sanity/
sanity init

The init command will ask a series of questions. You can choose whatever makes sense for your project, but in this example we’ll use the following options:

  • Choose a project name: Sanity Next MDX Example.
  • Choose the default dataset configuration (“production”).
  • Use the default project output path (the current directory).
  • Choose “clean project” from the template options.

Install The Markdown Plugin For Sanity

By default, Sanity doesn’t have Markdown support. Fortunately, there’s a ready-made Sanity plugin for Markdown support that we can install and configure with a single command:

# add the Markdown plugin
sanity install markdown

This command will install the plugin and add the appropriate configuration to your Sanity instance to make it available for use.

Define A Custom Schema With A Markdown Input

In Sanity, we control every content type and input using schemas. This is one of my favorite features about Sanity, because it means that I have fine-grained control over what each content type stores, how that content is processed, and even how the content preview is built.

For this example, we’re going to create a simple page structure with a title, a slug to be used in the page URL, and a content area that expects Markdown.

Create this schema by adding a new file at sanity/schemas/page.js and adding the following code:

export default {
  name: 'page',
  title: 'Page',
  type: 'document',
  fields: [
    {
      name: 'title',
      title: 'Page Title',
      type: 'string',
      validation: (Rule) => Rule.required(),
    },
    {
      name: 'slug',
      title: 'Slug',
      type: 'slug',
      validation: (Rule) => Rule.required(),
      options: {
        source: 'title',
        maxLength: 96,
      },
    },
    {
      name: 'content',
      title: 'Content',
      type: 'markdown',
    },
  ],
};

We start by giving the whole content type a name and title. The type of document tells Sanity that this should be displayed at the top level of the Sanity Studio as a content type someone can create.

Each field also needs a name, title, and type. We can optionally provide validation rules and other options, such as giving the slug a max length and allowing it to be generated from the title value.

Add A Custom Schema To Sanity’s Configuration

After our schema is defined, we need to tell Sanity to use it. We do this by importing the schema into sanity/schemas/schema.js, then adding it to the types array passed to createSchema.

  // First, we must import the schema creator
  import createSchema from 'part:@sanity/base/schema-creator';

  // Then import schema types from any plugins that might expose them
  import schemaTypes from 'all:part:@sanity/base/schema-type';

+ // Import custom schema types here
+ import page from './page';

  // Then we give our schema to the builder and provide the result to Sanity
  export default createSchema({
    // We name our schema
    name: 'default',
    // Then proceed to concatenate our document type
    // to the ones provided by any plugins that are installed
    types: schemaTypes.concat([
-     /* Your types here! */
+     page,
    ]),
  });

This puts our page schema into Sanity’s startup configuration, which means we’ll be able to create pages once we start Sanity up!

Run Sanity Studio Locally

Now that we have a schema defined and configured, we can start Sanity locally.

sanity start

Once it’s running, we can open Sanity Studio at http://localhost:3333 on our local machine.

When we visit that URL, we’ll need to log in the first time. Use your preferred account (e.g. GitHub) to authenticate. Once you get logged in, you’ll see the Studio dashboard, which looks pretty barebones.

To add a new page, click “Page”, then the pencil icon at the top-left.

Add a title and slug, then write some Markdown with MDX in the content area:

This is written in Markdown. But what’s this?

<Callout>

Oh dang! Is this a React component in the middle of our content? 😱

</Callout>

Holy buckets! That’s amazing!

Heads up! The empty line between the MDX component and the Markdown it contains is required. Otherwise the Markdown won’t be parsed. This will be fixed in MDX v2.

Once you have the content in place, click “Publish” to make it available.

Deploy The Sanity Studio To A Production URL

In order to make edits to the site’s data without having to run the code locally, we need to deploy the Sanity Studio. The Sanity CLI makes this possible with a single command:

sanity deploy

Choose a hostname for the site, which will be used in the URL. After that, it will be deployed and reachable at your own custom link.

This provides a production URL for content editors to log in and make changes to the site content.

Make Sanity Content Available Via GraphQL

Sanity ships with support for GraphQL, which we’ll use to load our page data into our site’s front-end. To enable this, we need to deploy a GraphQL API, which is another one-liner:

sanity graphql deploy

We can choose to enable a GraphQL Playground, which gives us a browser-based data explorer. This is extremely handy for testing queries.

Store the GraphQL URL — you’ll need it to load the data into Next.js!

https://sqqecrvt.api.sanity.io/v1/graphql/production/default

The GraphQL API is read-only for published content by default, so we don’t need to worry about keeping this secret — everything that this API returns is published, which means it’s what we want people to see.

Test Sanity GraphQL Queries In The Browser

By opening the URL of our GraphQL API, we’re able to test out GraphQL queries to make sure we’re getting the data we expect. These queries are copy-pasteable into our code.

To load our page data, we can build the following query using the “schema” tab at the right-hand side as a reference.

query AllPages {
  allPage {
    title
    slug {
      current
    }
    content
  }
}

This query loads all the pages published in Sanity, returning the title, current slug, and content for each. If we run this in the playground by pressing the play button, we can see our page returned.

Now that we’ve got page data with MDX in it coming back from Sanity, we’re ready to build a site using it!

In the next section, we’ll create a Next.js site that loads data from Sanity and renders our MDX content properly.

Display MDX In Next.js From Sanity

In an empty directory, start by initializing a new package.json, then install Next, React, and a package called next-mdx-remote.

# create a new package.json with the default options
npm init -y

# install the packages we need for this project
npm i next react react-dom next-mdx-remote

Inside package.json, add a script to run next dev:

  {
    "name": "sanity-next-mdx",
    "version": "1.0.0",
    "scripts": {
+     "dev": "next dev"
    },
    "author": "Jason Lengstorf <jason@lengstorf.com>",
    "license": "ISC",
    "dependencies": {
      "next": "^10.0.2",
      "next-mdx-remote": "^1.0.0",
      "react": "^17.0.1",
      "react-dom": "^17.0.1"
    }
  }

Create React Components To Use In MDX Content

In our page content, we used the <Callout> component to wrap some of our Markdown. MDX works by combining React components with Markdown, which means our first step is to define the React component our MDX expects.

Create a Callout component at src/components/callout.js:

export default function Callout({ children }) {
  return (
    <div
      style={{
        padding: '0 1rem',
        background: 'lightblue',
        border: '1px solid blue',
        borderRadius: '0.5rem',
      }}
    >
      {children}
    </div>
  );
}

This component adds a blue box around content that we want to call out for extra attention.

Send GraphQL Queries Using The Fetch API

It may not be obvious, but you don’t need a special library to send GraphQL queries! It’s possible to send a query to a GraphQL API using the browser’s built-in Fetch API.

Since we’ll be sending a few GraphQL queries in our site, let’s add a utility function that handles this so we don’t have to duplicate this code in a bunch of places.

Add a utility function to fetch Sanity data using the Fetch API at src/utils/sanity.js:

export async function getSanityContent({ query, variables = {} }) {
  const { data } = await fetch(
    'https://sqqecrvt.api.sanity.io/v1/graphql/production/default',
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        query,
        variables,
      }),
    },
  ).then((response) => response.json());

  return data;
}

The first argument is the Sanity GraphQL URL that Sanity returned when we deployed the GraphQL API.

GraphQL queries are always sent using the POST method and the application/json content type header.

The body of a GraphQL request is a stringified JSON object with two properties: query, which contains the query we want to execute as a string; and variables, which is an object containing any query variables we want to pass into the GraphQL query.

The response will be JSON, so we need to handle that in the .then for the query result, and then we can destructure the result to get to the data inside. In a production app, we’d want to check for errors in the result as well and display those errors in a helpful way, but this is a post about MDX, not GraphQL, so #yolo.
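For completeness, here is a sketch of the error check being skipped: GraphQL responses can carry an errors array alongside (or instead of) data, so a production helper would surface those before returning. The helper name below is my own:

```javascript
// Surface GraphQL errors instead of silently returning undefined data.
// Per the GraphQL spec, a response may include an `errors` array.
function unwrapGraphQLResponse(response) {
  if (response.errors && response.errors.length > 0) {
    throw new Error(response.errors.map((e) => e.message).join('; '));
  }
  return response.data;
}

// A successful response passes through:
console.log(unwrapGraphQLResponse({ data: { allPage: [] } })); // { allPage: [] }

// A failed response throws with the combined messages:
try {
  unwrapGraphQLResponse({ errors: [{ message: 'Cannot query field "titl"' }] });
} catch (err) {
  console.log(err.message); // Cannot query field "titl"
}
```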

Heads up! The Fetch API is great for simple use cases, but as your app becomes more complex you’ll probably want to look into the benefits of using a GraphQL-specific tool like Apollo or urql.

Create A Listing Of All Pages From Sanity In Next.js

To start, let’s make a list of all the pages published in Sanity, as well as a link to their slug (which won’t work just yet).

Create a new file at src/pages/index.js and put the following code inside:

import Link from 'next/link';
import { getSanityContent } from '../utils/sanity';

export default function Index({ pages }) {
  return (
    <div>
      <h1>This Site Loads MDX From Sanity.io</h1>
      <p>View any of these pages to see it in action:</p>
      <ul>
        {pages.map(({ title, slug }) => (
          <li key={slug}>
            <Link href={`/${slug}`}>
              <a>{title}</a>
            </Link>
          </li>
        ))}
      </ul>
    </div>
  );
}

export async function getStaticProps() {
  const data = await getSanityContent({
    query: `
      query AllPages {
        allPage {
          title
          slug {
            current
          }
        }
      }
    `,
  });

  const pages = data.allPage.map((page) => ({
    title: page.title,
    slug: page.slug.current,
  }));

  return {
    props: { pages },
  };
}

In getStaticProps we call the getSanityContent utility with a query that loads the title and slug of all pages in Sanity. We then map over the page data to create a simplified object with a title and slug property for each page and return that array as a pages prop.

The Index component that displays this page receives that pages prop, so we map over it to output an unordered list of links to the pages.

Start the site with npm run dev and open http://localhost:3000 to see the work in progress.

If we click a page link right now, we’ll get a 404 error. In the next section we’ll fix that!

Generate Pages Programmatically In Next.js From CMS Data

Next.js supports dynamic routes, so let’s set up a new file to catch all pages except our home page at src/pages/[page].js.

In this file, we need to tell Next what the slugs are that it needs to generate using the getStaticPaths function.

To load the static content for these pages, we need to use getStaticProps, which will receive the current page slug in params.page.

To help visualize what’s happening, we’ll pass the slug through to our page and log the props out on screen for now.

import { getSanityContent } from '../utils/sanity';

export default function Page(props) {
  return <pre>{JSON.stringify(props, null, 2)}</pre>;
}

export async function getStaticProps({ params }) {
  return {
    props: {
      slug: params.page,
    },
  };
}

export async function getStaticPaths() {
  const data = await getSanityContent({
    query: `
      query AllPages {
        allPage {
          slug {
            current
          }
        }
      }
    `,
  });

  const pages = data.allPage;

  return {
    paths: pages.map((p) => `/${p.slug.current}`),
    fallback: false,
  };
}

If the server is already running this will reload automatically. If not, run npm run dev and click one of the page links on http://localhost:3000 to see the dynamic route in action.

Load Page Data From Sanity For The Current Page Slug In Next.js

Now that we have the page slug, we can send a request to Sanity to load the content for that page.

Using the getSanityContent utility function, send a query that loads the current page using its slug, then pull out just the page’s data and return that in the props.

  export async function getStaticProps({ params }) {
+   const data = await getSanityContent({
+     query: `
+       query PageBySlug($slug: String!) {
+         allPage(where: { slug: { current: { eq: $slug } } }) {
+           title
+           content
+         }
+       }
+     `,
+     variables: {
+       slug: params.page,
+     },
+   });
+
+   const [pageData] = data.allPage;

    return {
      props: {
-       slug: params.page,
+       pageData,
      },
    };
  }

After reloading the page, we can see that the MDX content is loaded, but it hasn’t been processed yet.

Render MDX From A CMS In Next.js With Next-mdx-remote

To render the MDX, we need to perform two steps:

  1. For the build-time processing of MDX, we need to render the MDX to a string. This will turn the Markdown into HTML and ensure that the React components are executable. This is done by passing the content as a string into renderToString along with an object containing the React components we want to be available in MDX content.

  2. For the client-side rendering of MDX, we hydrate the MDX by passing in the rendered string and the React components. This makes the components available to the browser and unlocks interactivity and React features.

While this might feel like doing the work twice, these are two distinct processes that allow us to both create fully rendered HTML markup that works without JavaScript enabled and the dynamic, client-side functionality that JavaScript provides.

Make the following changes to src/pages/[page].js to render and hydrate MDX:

+ import hydrate from 'next-mdx-remote/hydrate';
+ import renderToString from 'next-mdx-remote/render-to-string';

  import { getSanityContent } from '../utils/sanity';

+ import Callout from '../components/callout';

- export default function Page(props) {
-   return <pre>{JSON.stringify(props, null, 2)}</pre>;
- }
+ export default function Page({ title, content }) {
+   const renderedContent = hydrate(content, {
+     components: {
+       Callout,
+     },
+   });
+
+   return (
+     <div>
+       <h1>{title}</h1>
+       {renderedContent}
+     </div>
+   );
+ }

  export async function getStaticProps({ params }) {
    const data = await getSanityContent({
      query: `
        query PageBySlug($slug: String!) {
          allPage(where: { slug: { current: { eq: $slug } } }) {
            title
            content
          }
        }
      `,
      variables: {
        slug: params.page,
      },
    });

    const [pageData] = data.allPage;

+   const content = await renderToString(pageData.content, {
+     components: { Callout },
+   });

    return {
      props: {
-       pageData,
+       title: pageData.title,
+       content,
      },
    };
  }

  export async function getStaticPaths() {
    const data = await getSanityContent({
      query: `
        query AllPages {
          allPage {
            slug {
              current
            }
          }
        }
      `,
    });

    const pages = data.allPage;

    return {
      paths: pages.map((p) => `/${p.slug.current}`),
      fallback: false,
    };
  }

After saving these changes, reload the browser and we can see the page content being rendered properly, custom React components and all!

Use MDX With Sanity And Next.js For Flexible Content Workflows

Now that this code is set up, content editors can quickly write content using MDX to enable the speed of Markdown with the flexibility of custom React components, all from Sanity! The site is set up to generate all the pages published in Sanity, so unless we want to add new custom components we don’t need to touch the Next.js code at all to publish new pages.

What I love about this workflow is that it lets me keep my favorite parts of several tools: I really like writing content in Markdown, but my content also needs more flexibility than the standard Markdown syntax provides; I like building websites with React, but I don’t like managing content in Git.

Beyond this, I also have access to the huge amount of customization made available in both the Sanity and React ecosystems, which feels like having my cake and eating it, too.

If you’re looking for a new content management workflow, I hope you enjoy this one as much as I do!

What’s Next?

Now that you’ve got a Next site using MDX from Sanity, you may want to go further with these tutorials and resources:

What will you build with this workflow? Let me know on Twitter!

We Need You In The Smashing Family

At Smashing, we are looking for a friendly, reliable and passionate person to drive the sales and management of sponsorship and advertising. We work with small and big companies to help them get exposure and have their voice heard across a number of different media — from this very magazine to our online conferences, meet-ups and workshops. This includes:

We sincerely hope to find someone who knows and understands the web community we publish for. A person who is able to bring onboard advertisers and sponsors that will be helpful to our audience, and who will benefit from the exposure and visibility at Smashing. We are looking for a person with experience in nurturing long-term relationships with advertisers, while not being afraid to push for new sales.

We are a small family of 12, and we’ve all been working remotely for years now. By joining our team, you will have the opportunity to shape the role and work with the Magazine as well as the Events team to create sponsorship opportunities that truly benefit both sides of the arrangement. We also would be open to outsourcing this work to another company or working with someone on a freelance basis who provides these services to other companies.

What’s In It For You?

  • A small, friendly, inclusive and diverse team that is aligned and very committed to doing great work;
  • The ability to shape your work in a way that would work best for you;
  • No lengthy meetings or micro-management: we do everything to ensure you can do your best work.

Role And Responsibilities

  • You’ll work with your existing contacts (and those Smashing has already made) and find new contacts to sell advertising and sponsorship across the range of our products;
  • You’ll manage sponsors and advertisers once they come on board, ensuring that expectations are managed and deadlines on both sides are understood;
  • You’ll explore creative partnerships to ensure that sponsors get the exposure they need while staying true to the principles that Smashing stands for;
  • You’ll work closely with our team to ensure that our commitments to sponsors are possible to fulfill given time and team availability;
  • You’ll think creatively about how we maximize sponsorship opportunities across our different outlets.

We’d Like You To:

  • Have good written English and the ability to communicate clearly with sponsors from around the world;
  • Be able to manage a flexible schedule in order to make calls to sponsors in timezones including the US West Coast;
  • Be happy working in an asynchronous way, mostly via writing (we use Slack and Notion), given the distributed nature of the team and sponsors;
  • Be conversant with web technologies to the extent of understanding who would be a good fit as a sponsor;
  • Ideally, have existing connections with web companies;
  • Be comfortable working fully remote, and probably full-time. (Again, we would also be open to outsourcing this work to another company or working with someone on a freelance basis who provides these services to other companies.)

A Bit About Smashing

At Smashing, we focus on bringing quality content to web designers and developers, and on supporting our community. The community around Smashing is indeed very important to us. They tell us when they like what we are doing, and also when they do not!

We are always looking for new ways to reach out to our community. Over the past year, we’ve taken conferences online and started running online workshops in response to the pandemic. Things will likely change over the coming year too, and we are keen to bring our existing sponsors along with us and continue to think creatively about how we can offer good value to them in a changing world.

Yet again, we are a very small team of dedicated people — fully distributed even before the pandemic. The majority of the team is in Europe, but we also have team members in the USA and Hong Kong. Therefore, location tends to be less important than an ability to work in a way that respects the time lag when dealing with multiple time zones.

Contact Details

If you are interested, please drop us an email at recruiting@smashing-media.com, tell us a bit about yourself and your experience, and why you’d like to be a part of the Smashing family. We can’t wait to hear from you!

My Top 6 Conversion Tactics for 2021

Despite the unprecedented changes in 2020, the standard ecommerce usability requirements remain true: simplify navigation, ensure mobile-friendliness, and streamline the checkout process.

But there are plenty more conversion tactics merchants should deploy heading into 2021. Here are my top six below.

6 Conversion Tactics for 2021

Targeted email campaigns. Sending the same email to your entire list isn’t the best practice. Targeting customers based on purchases and interests produces more conversions. There’s little reason for a sporting goods store to promote basketball to those interested only in golf. Segment subscribers to better target what each wants.

You can always include links to other products in the footer. And continue emailing the entire database for storewide sales and unique features.

An attractive rewards program. Loyalty programs reward customers based on the amount of money they spend. Take things up a notch by providing additional benefits, such as first dibs on limited and exclusive items, special pricing on select goods, and free shipping after a set period of buying.

Place what you stand for front and center. 2020’s events caused many consumers to scrutinize companies’ ideals. Make sure shoppers know of your key charities and causes. More people than ever want to patronize stores whose priorities align with their own.

Avoid politics and religion, however, unless they are the focus of your business.

Spotlight your staff. Consumers want to know who their money helps — from the cleaning crew to tech support. Take a cue from Crutchfield and Zappos. Both regularly feature employees and emphasize their importance to the company.

Photo of several Crutchfield employees

Crutchfield incorporates employee photos, videos, and personal stories.

Emphasize value. Give your shoppers more purchasing power. Consumers are looking for the best overall value, not necessarily the cheapest items. Subscriptions are helpful for replenishable products. Volume pricing can entice people to buy in quantities and allow families and friends to group their purchases, resulting in higher average order values for the business and lower prices for customers.

Offer many ways to pay. Roughly 70 percent of Digital Commerce 360’s top 1,000 stores accept PayPal. Loyal Apple users enjoy the convenience and enhanced security of Apple Pay. Other popular methods for mobile shoppers include Amazon Payments, Stripe, and Google Pay. Buy now, pay later is increasingly popular.

In other words, people have preferred ways to pay, and it’s not always a credit card. Offering multiple payment methods can help almost every demographic. Limited payment options can result in a lost sale.

Agency CEO: Adopt an ‘Abundance Mentality’

Online selling is increasingly competitive, pushing merchants toward selfish and shortsighted decisions. That’s according to Corey Blake, CEO of MWI, a digital marketing agency.

“When we have a scarcity outlook with shortsightedness,” he told me, “we’re scraping and clawing to keep it all because we’re worried about losing money. A better approach is an abundance mentality, saying, ‘I’m trying to provide value to the world, and as long as I’m doing that, I’m going to find customers.’”

Blake’s approach has worked for MWI. Founded in 1999, it now operates on three continents, serving diverse clients with search engine optimization, advertising, content marketing, and more.

He and I recently spoke about ecommerce marketing circa late 2020. What follows is our entire audio conversation and a transcript, edited for clarity and length.

Eric Bandholz: Give us an overview of your company.

Corey Blake: MWI is a boutique digital marketing agency. We launched in 1999. We’ve got a presence in the U.S., Asia, and Europe. We do everything from SEO and advertising to website design and development, public relations, and content marketing.

We were exclusively an SEO firm until about 2014. Then we got into web development, advertising, PR, and content.

Bandholz: It seems like the ecommerce entrepreneurs I speak with no longer emphasize SEO. They focus on social media, paid search, influencers, or affiliate marketing. Is it possible to get strong organic search rankings even now, in 2020?

Blake: There’s so much competition for organic search. I would never tell small operations, “Invest all of your marketing budget into SEO.” That’s because it’s going to take a lot of time. Meanwhile, your competitors are likely getting immediate traffic from paying influencers and running ads on Facebook, YouTube, and Instagram.

However, I see value in implementing SEO strategies after that initial startup phase, when you’re generating revenue and your marketing budget is presumably bigger.

SEO is a content marketing play. Beardbrand does an amazing job at creating consistent content. It’s SEO gold when you link it back to your website.

Bandholz: You’re the only company I know with a three-letter domain: MWI. That had to be expensive.

Blake: It was just timing. My business partner, Josh Steimle, started the company when I was 13, back in 1999. It was called Mind Wire Interactive. He reached out to the guy that owned Mwi.com, who said, more or less, “We don’t use it. You can have it.” That’s how it came to be.

Bandholz: Your agency encounters many businesses. How do ecommerce companies fail at marketing?

Blake: More ecommerce folks should read a book called “The Third Door.” It emphasizes alternative ways to reach a goal. If the first or second door is closed, try the third.

As it applies to ecommerce, there’s always a way to succeed that most people aren’t seeing. There’s a door to increase sales, to increase opportunity, to increase brand awareness.

For example, companies come to us and say, “We’ve made this one video, and we’ve shared it on Instagram and YouTube, but we’re just not seeing any traction.”

Our response is, “Have you repurposed that video? Rather than one 60-second video, why not 10 6-second videos? Then post on a bunch of different platforms. Generate some PR around it. Cast a much wider net.”

Ecommerce businesses need to find that third door.

Bandholz: My number one hack to branding and link building is to reach out to platforms we use and offer our company as a case study. Beardbrand has been a case study on Typeform, Storemapper, Gleam, and Yotpo, among others. Anyone who needs a case study, we’re quick to raise our hands. ShipStation, the shipping software, has featured us for years. They are now broadcasting it in a national TV commercial.

Blake: You’re playing the long game, which is smart. You need those critical short-term wins, too, of course. If you can balance your marketing strategies with short-term wins and a long-term digital footprint, like you’re doing, it’s invaluable. Those two things used together can make a brand unstoppable.

Bandholz: We reached out to Typeform and said, “Hey, we’re doing something really unique with your software.” And they said, “That’s cool. No one is doing that. Let’s do a case study on it.” You need that vision to be okay with the fact that not everything will have a measurable, immediate return.

A lot of companies in our space go to Klaviyo.com. They will see Beardbrand there because we did a case study with Klaviyo. Plus, for us, it’s good karma. If you help people out, good things seem to happen.

Blake: Cool. It’s about building win-win partnerships and doing it with an abundance mentality. I’m stealing both of those concepts from “The 7 Habits of Highly Effective People,” the book. The abundance mentality is the ability to look at a partnership and say, “There’s enough good business to go around as long as we’re trying our best.”

If we’re doing our best and it’s not working, then it’s a bad idea, or our process is wrong.

When we have a scarcity outlook with shortsightedness, we’re scraping and clawing to keep it all because we’re worried about losing money. A better approach is an abundance mentality, saying, “I’m trying to provide value to the world, and as long as I’m doing that, I’m going to find customers.”

We’ve taken that approach with MWI, our agency. We’re not forcing it. It’s who I am. It’s who Josh is. It’s what our brand is. It’s the people we hire.

When we do business that way, it tends to remove stress and worry. That’s what we’re trying to do.

Bandholz: How can folks learn more about you and MWI?

Blake: The agency, again, is a three-letter domain Mwi.com. You can also reach me on LinkedIn. Beyond marketing, we have The Hope Strategy Podcast, where we focus on optimism and an abundance mentality.

So, MWI is for the business stuff. But if you’re looking for some good conversations from smart people, try the podcast.

Join Us For Smashing Meets Happy Holidays

If you are missing your festive meetups this year, or just fancy seeing some friendly faces and learning some new things, join us on December 17th for another Smashing Meets event.

Tickets are only 10 USD (and free for our lovely Smashing Members). The fun starts at 9:00 AM ET (Eastern Time), or 15:00 CET (Central European Time), on December 17th.

Ok. This is important. Smashing Meets by @smashingconf was soooo much fun. I will have to tune in whenever the timezone suits, it was an absolute blast!!!

— Mandy Michael (@Mandy_Kerr) May 19, 2020

This time, we will have talks from three speakers: Adekunle Oduye, Ben Hong, and Michelle Barker. There will be an interface design challenge and a chance to network and meet other attendees. Just like an in-person meetup, but you won’t have to go out in the cold!

If you want to know more about how our Smashing Meets events work, you can read a review of a previous event, watch some of the videos, or just head on over to the event page and get a ticket! I hope to see you there.

How To Migrate From WordPress To The Eleventy Static Site Generator

Eleventy is a static site generator. We’re going to delve into why you’d want to use a static site generator, get into the nitty-gritty of converting a simple WordPress site to Eleventy, and talk about the pros and cons of managing content this way. Let’s go!

What Is A Static Site Generator?

I started my web development career decades ago in the mid-1990s when HTML and CSS were the only things you needed to get a website up and running. Those simple, static websites were fast and responsive. Fast forward to the present day, though, and a simple website can be pretty complicated.

In the case of WordPress, let’s think through what it takes to render a web page. WordPress server-side PHP, running on a host’s servers, does the heavy lifting of querying a MySQL database for metadata and content, chooses the right versions of images stored on a static file system, and merges it all into a theme-based template before returning it to the browser. It’s a dynamic process for every page request, though most of the web pages I’ve seen generated by WordPress aren’t really that dynamic. Most visitors, if not all, experience identical content.

Static site generators flip the model right back to that decades-old approach. Instead of assembling web pages dynamically, static site generators take content in the form of Markdown, merge it with templates, and create static web pages. This process happens outside of the request loop when users are browsing your site. All content has been pre-generated and is served lightning-fast upon each request. Web servers are quite literally doing what they advertise: serving. No database. No third-party plugins. Just pure HTML, CSS, JavaScript, and images. This simplified tech stack also equates to a smaller attack surface for hackers. There’s little server-side infrastructure to exploit, so your site is inherently more secure.
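The build-time merge described above can be sketched in a few lines of JavaScript. This is purely a conceptual illustration, not actual Eleventy code; the template and page data are made up for the example.

```javascript
// Conceptual sketch of a static site generator's core loop: content is
// merged into a template once at build time, not on every request.
const template = ({ title, body }) =>
  `<html><head><title>${title}</title></head><body>${body}</body></html>`;

const pages = [
  { title: "Home", body: "<h1>Welcome</h1>" },
  { title: "About", body: "<h1>About me</h1>" },
];

// "Build" every page up front; a web server then serves these strings
// as plain .html files, with no database or runtime templating involved.
const builtPages = pages.map(template);
```

A real generator reads Markdown files and richer templates from disk, but the essential idea is the same: all the work happens before the first visitor arrives.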

Leading static site generators are feature-rich, too, and that can make a compelling argument for bidding adieu to the tech stacks that are hallmarks of modern content management systems.

If you’ve been in this industry for a while, you may remember Macromedia’s (pre-Adobe) Dreamweaver product. I loved the concept of library items and templates, specifically how they let me create consistency across multiple web pages. In the case of Eleventy, the concepts of templates, filters, shortcodes, and plugins are close analogs. I got started on this whole journey after reading about Smashing’s enterprise conversion to the JAMstack. I also read Mathias Biilmann & Phil Hawksworth’s free book called Modern Web Development on the JAMstack and knew I was ready to roll up my sleeves and convert something of my own.

Why Not Use A Static Site Generator?

Static site generators require a bit of a learning curve. You won’t easily be able to hand content entry off to non-technical editors, and specific use cases may preclude you from using one. Most of the work I’ll show is done in Markdown and via the command line. That said, there are many options for using static site generators in conjunction with dynamic data, e-commerce, commenting, and rating systems.

You don’t have to convert your entire site over all at once, either. If you have a complicated setup, you might start small and see how you feel about static site generation before putting together a plan to solve something at an enterprise scale. You can also keep using WordPress as a best-in-class headless content management system and use an SSG to serve WordPress content.

How I Chose Eleventy As A Static Site Generator

Do a quick search for popular static site generators and you’ll find many great options to start with: Eleventy, Gatsby, Hugo, and Jekyll were leading contenders on my list. How to choose? I did what came naturally and asked some friends. Eleventy was a clear leader in my Twitter poll, but what clinched it was a comment that said “@eleven_ty feels very approachable if one doesn’t know what one is doing.” Hey, that’s me! I can unhappily get caught up in analysis paralysis. Not today… it felt good to choose Eleventy based on a poll and a comment. Since then, I’ve converted four WordPress sites to Eleventy, using GitHub to store the code and Netlify to securely serve the files. That’s exactly what we’re going to do today, so let’s roll up our sleeves and dive in!

Getting Started: Bootstrapping The Initial Site

Eleventy has a great collection of starter projects. We’ll use Dan Urbanowicz’s eleventy-netlify-boilerplate as a starting point, advertised as a “template for building a simple blog website with Eleventy and deploying it to Netlify. Includes Netlify CMS and Netlify Forms.” Click “Deploy to netlify” to get started. You’ll be prompted to connect Netlify to GitHub, name your repository (I’m calling mine smashing-eleventy-dawson), and then “Save & Deploy.”

With that done, a few things happened:

  1. The boilerplate project was added to your GitHub account.
  2. Netlify assigned a dynamic name to the project, built it, and deployed it.
  3. Netlify configured the project to use Identity (if you want to use CMS features) and Forms (a simple contact form).

As the screenshot suggests, you can procure or map a domain to the project, and also secure the site with HTTPS. The latter feature was a really compelling selling point for me since my host had been charging an exorbitant fee for SSL. On Netlify, it’s free.

I clicked Site Settings, then Change Site Name to create a more appropriate name for my site. As much as I liked jovial-goldberg-e9f7e9, elizabeth-dawson-piano is more appropriate. After all, that’s the site we’re converting! When I visit elizabeth-dawson-piano.netlify.app, I see the boilerplate content. Awesome!

Let’s download the new repository to our local machine so we can start customizing the site. My GitHub repository for this project gives me the git clone command I can use in Visual Studio Code’s terminal to copy the files:

Then we follow the remaining instructions in the boilerplate’s README file to install dependencies locally, edit the _data/metadata.json file to match the project and run the project locally.

  • npm install @11ty/eleventy
  • npm install
  • npx eleventy --serve --quiet

With that last command, Eleventy launches the local development site at localhost:8080 and starts watching for changes.

Preserving WordPress Posts, Pages, And Images

The site we’re converting from is an existing WordPress site at elizabethrdawson.wordpress.com. Although the site is simple, it’d be great to leverage as much of that existing content as possible. Nobody really likes to copy and paste that much, right? WordPress makes it easy using its export function.

Export Content gives me a zip file containing an XML extract of the site content. Export Media Library gives me a zip file of the site’s images. The site that I’ve chosen to use as a model for this exercise is a simple 3-page site, and it’s hosted on WordPress.com. If you’re self-hosting, you can go to Tools > Export to get the XML extract, but depending on your host, you may need to use FTP to download the images.

If you open the XML file in your editor, it’s going to be of little use to you. We need a way to get individual posts into Markdown, which is the language we’re going to use with Eleventy. Lucky for us, there’s a package for converting WordPress posts and pages to Markdown. Clone that repository to your machine and put the XML file in the same directory. Your directory listing should look something like this:

If you want to extract posts from the XML, this will work out of the box. However, our sample site has three pages, so we need to make a small adjustment. On line 39 of parser.js, change “post” to “page” before continuing.

Make sure you do an “npm install” in the wordpress-export-to-markdown directory, then enter “node index.js” and follow the prompts.

That process created three files for me: welcome.md, about.md, and contact.md. In each, there’s front matter that describes the page’s title and date, plus the Markdown of the content extracted from the XML. “Front matter” may be a new term for you; if you open the sample .md files in the “pages” directory, you’ll see a block of data at the top of each file. Eleventy supports a variety of front matter to help customize your site, and title and date are just the beginning. In the sample pages, you’ll see this in the front matter section:

eleventyNavigation:
  key: Home
  order: 0

Using this syntax, you can have pages automatically added to the site’s navigation. I wanted to preserve this with my new pages, so I copied and pasted the content of the pages into the existing boilerplate .md files for home, contact, and about. Our sample site won’t have a blog for now, so I’m deleting the .md files from the “posts” directory, too. Now my local preview site looks like this, so we’re getting there!

This seems like a fine time to commit and push the updates to GitHub. A few things happen when I commit updates. Upon notification from GitHub that updates were made, Netlify runs the build and updates the live site. It’s the same process that happens locally when you’re updating and saving files: Eleventy converts the Markdown files to HTML pages. In fact, if you look in your _site directory locally, you’ll see the HTML version of your website, ready for static serving. So, as I navigate to elizabeth-dawson-piano.netlify.app shortly after committing, I see the same updates I saw locally.

Adding Images

We’ll use images from the original site. In the .eleventy.js file, you’ll see that static image assets should go in the static/img folder. Each page will have a hero image, and here’s where front matter works really well. In the front matter section of each page, I’ll add a reference to the hero image:

hero: /static/img/performance.jpg

Eleventy keeps page layouts in the _includes/layouts folder. base.njk is used by all page types, so we’ll add this code just under the navigation since that’s where we want our hero image.

{% if (hero) %}
<img class="page-hero" src="{{ hero }}" alt="Hero image for {{ title }}" />
{% endif %}

I also included an image tag for the picture of Elizabeth on the About page, using a CSS class to align it and give it proper padding. Now’s a good time to commit and see exactly what changed.

Embedding A YouTube Player With A Plugin

There are a few YouTube videos on the home page. Let’s use a plugin to create YouTube’s embed code automatically. eleventy-plugin-youtube-embed is a great option for this. The installation instructions are pretty clear: install the package with npm and then include it in our .eleventy.js file. Without any further changes, those YouTube URLs are transformed into embedded players. (see commit)
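For reference, the wiring-up step looks roughly like this in .eleventy.js. This is a sketch based on the plugin’s name as given above; check the package’s README for its current API before copying it.

```javascript
// .eleventy.js (sketch) — registering the YouTube embed plugin.
// Assumes `npm install eleventy-plugin-youtube-embed` has been run.
const embedYouTube = require("eleventy-plugin-youtube-embed");

module.exports = function (eleventyConfig) {
  // Plain YouTube URLs in Markdown output are transformed into embedded players.
  eleventyConfig.addPlugin(embedYouTube);

  // ...the boilerplate's existing configuration continues here...
};
```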

Using Collections And Filters

We don’t need a blog for this site, but we do need a way to let people know about upcoming events. Our events — for all intents and purposes — will be just like blog posts. Each has a title, a description, and a date.

There are a few steps we need to create this new collection-based page:

  • Create a new events.md file in our pages directory.
  • Add a few events to our posts directory. I’ve added .md files for a holiday concert, a spring concert, and a fall recital.
  • Create a collection definition in .eleventy.js so we can treat these events as a collection. Here’s how the collection is defined: we gather all Markdown files in the posts directory and filter out anything that doesn’t have a location specified in the front matter.
eleventyConfig.addCollection("events", (collection) =>
  collection.getFilteredByGlob("posts/*.md").filter(
    (post) => (post.data.location ? post : false)
  )
);
  • Add a reference to the collection to our events.md file, showing each event as an entry in a table. Here’s what iterating over a collection looks like:
<table>
  <thead>
    <tr>
      <th>Date</th>
      <th>Title</th>
      <th>Location</th>
    </tr>
  </thead>
  <tbody>
    {%- for post in collections.events -%}
    <tr>
      <td>{{ post.date }}</td>
      <td><a href="{{ post.url }}">{{ post.data.title }}</a></td>
      <td>{{ post.data.location }}</td>
    </tr>
    {%- endfor -%}
  </tbody>
</table>

However, our date formatting looks pretty bad.

Luckily, the boilerplate .eleventy.js file already has a filter titled readableDate. It’s easy to use filters on content in Markdown files and templates:

{{ post.date | readableDate }}

Now, our dates are properly formatted! Eleventy’s filter documentation goes into more depth on what filters are available in the framework, and how you can add your own. (see: commit)

Polishing The Site Design With CSS

Okay, so now we have a pretty solid site created. We have pages, hero images, an events list, and a contact form. We’re not constrained by the choice of any theme, so we can do whatever we want with the site’s design… the sky is the limit! It’s up to you to make your site performant, responsive, and aesthetically pleasing. I made some styling and markup changes to get things to our final commit.

Now we can tell the world about all of our hard work. Let’s publish this site.

Publishing The Site

Oh, but wait. It’s already published! We’ve been working in this nice workflow all along, where our updates to GitHub automatically propagate to Netlify and get rebuilt into fresh, fast HTML. Updates are as easy as a git push. Netlify detects the changes from Git, processes Markdown into HTML, and serves the static site. When you’re done and ready for a custom domain, Netlify lets you use your existing domain for free. Visit Site Settings > Domain Management for all the details, including how you can leverage Netlify’s free HTTPS certificate with your custom domain.

Advanced: Images, Contact Forms, And Content Management

This was a simple site with only a few images. You may have a more complicated site, though. Netlify’s Large Media service allows you to upload full-resolution images to GitHub, and stores a pointer to the image in Large Media. That way, your GitHub repository is not jam-packed with image data, and you can easily add markup to your site to request optimized crops and sizes of images at request time. I tried this on my own larger sites and was really happy with the responsiveness and ease of setup.

Remember that contact form that was installed with our boilerplate? It just works. When you submit the contact form, you’ll see submissions in Netlify’s administration section. Select “Forms” for your site. You can configure Netlify to email you when you get a new form submission, and you can also add a custom confirmation page in your form’s code. Create a page in your site at /contact/success, for example, and then within your form tag (in form.njk), add action="/contact/success" to redirect users there once the form has been submitted.

The boilerplate also configures the site to be used with Netlify’s content manager. Configuring this to work well for a non-technical person is beyond the scope of the article, but you can define templates and have updates made in Netlify’s content manager sync back to GitHub and trigger automatic redeploys of your site. If you’re comfortable with the workflow of making updates in markdown and pushing them to GitHub, though, this capability is likely something you don’t need.

Further Reading

Here are some links to resources used throughout this tutorial, and some other more advanced concepts if you want to dive deeper.

5 Content Marketing Ideas for January 2021

January 2021 offers content marketers a fresh start: build communities, stream live videos, write about national holidays, report industry news, or visualize interesting industry data.

Content marketing is the act of curating or creating content such as articles, podcasts, videos, and graphics and then publishing and promoting that content to attract, engage, and retain an audience of customers and potential customers.

What follows are five content marketing ideas you can try this January.

1. Create a Content-driven Community

“In today’s highly digital and connected society, it’s funny to think people can still feel disconnected from others. With so many people who communicate online, behind screens, this connected world can actually feel rather lonely at times. This goes for personal relationships as well as business relationships — specifically between brands and their customers as well as brands and their employees,” wrote Kristen Baker, a HubSpot marketing manager, in the company’s “Ultimate Guide to Community Management.”

“So, what is it that has people feeling a disconnect to others and the companies they do business with?” asked Baker. “It’s a lack of community.”

Content marketing has always been a driver of community. Think about how Netflix and Mustache, a content marketing agency, created a “community” across several social media platforms using the @NetflixIsAJoke handle.

Publishing funny videos, memes, and posts, Netflix garnered almost 3.5 million followers, fans, and subscribers across Facebook, Instagram, Twitter, and YouTube.

Screenshot of Netflix Is a Joke

Netflix built a community of social media followers and fans across several platforms.

For your company’s January 2021 content marketing, consider taking this one step further and create a private, content-driven community for your customers.

You could use a Facebook Group, or, better still, start a community using Mighty Networks, Zapnito, Tribe, or similar platforms.

To attract members, invite each new customer to join your private community when she makes a purchase. Then fill your community with compelling content.

2. Start a Live Video Series

January is a time to try new things. For many of your customers, those new things might come in the form of a New Year’s resolution. So why not do something similar and resolve to provide a live stream video at least once per week for all of 2021?

Your weekly live stream could be focused on the products you sell. For example, the 44 East Boutique in Meridian, Idaho, live streams regularly — showing products or even helping customers shop via Facebook Live and Instagram TV.

Screenshot of 44 East Boutique's Instagram page

44 East Boutique started live streaming during the Covid shutdown and has continued after the store reopened.

Kayloni Perry and her mother and business partner, Cheryl Jones, turned to live streaming in the midst of the coronavirus shutdown. They have since continued to live stream for as much as two hours at a time.

For your live stream, you could:

  • Promote products,
  • Interview customers or industry experts,
  • Teach skills related to the products you sell,
  • Offer up industry news,
  • Tell jokes (see Netflix community mentioned above).

3. Focus on National Holidays

The month of January is full of national holidays, including New Year’s Day, Martin Luther King, Jr. Day, and lesser-known days, such as National Bloody Mary Day, National Science Fiction Day, and National Hat Day.

Image of a fireworks display

New Year’s Day, which is often celebrated with fireworks, is an example of a holiday you might write about.

For your company’s January 2021 content marketing, pick one or two of these holidays and create articles that describe the history of the event, offer suggestions of how to celebrate, or connect the holiday to your business or your products.

Here are a few examples.

  • An online Christian bookstore might publish “How Faith Impacted the Civil Rights Movement: An MLK Day Retrospective.”
  • A fitness equipment retailer might write, “The New Year’s Resolution Bootcamp Guide.”
  • An apparel store could create a video called, “How to Put a Lid on National Hat Day.”

If these topics seem too serious, you could always try National Belly Laugh Day on January 24. For this one, you might publish a few jokes.

4. Industry News Stories

In January 2021, ask yourself if your business has an opportunity to become a real news source for your industry.

This could be a challenge. It would require a relatively large investment in your content marketing, but it may be rewarding both in terms of audience engagement and search engine benefits.

As an example, look at this recent article from the blog of Keen footwear, “Kids In Costa Rica Want Their Rivers Back.”


A child works along the river. The image comes from Keen’s news-like coverage of a conservation program it helped to fund.

The article is consistent with Keen’s brand, which includes a strong focus on conservation. It is also similar to a feature in a traditional magazine.

Here are a couple of example paragraphs from the Keen article.

“Meet Natalie. She is 8 years old and lives in the Osa Peninsula, Costa Rica, one of the most biodiverse places on Earth. Her school was canceled for two weeks, and instead of sleeping in or watching TV, she came out in the field with us every day at 7 a.m., always with a big smile and hug, to help plant native trees to reforest the river on her great-grandmother’s cattle farm. Her dream is to see the river healthy and clean again so she can swim and bathe in it, just like her grandmother did when she was a little girl…

“The idea to restore [the] river came about because Osa Conservation has been working since 2014 with a dedicated group of local citizen scientists and schoolchildren to monitor river water quality. We love it when kids raise their hands with curious faces to learn more about the great diversity of wildlife that exists in the rivers, excitedly ruffle through river rocks and leaves to find insect larvae, or debate the results of a water pH analysis.”

5. Chart of the Week/Month

If a picture is worth 1,000 words, perhaps a chart of the week or month is worth a few thousand.

Here the idea is to surface interesting or surprising facts related to the industry your business serves in the form of a chart.

Imagine that you are the content marketer for an online store specializing in educational toys and games for children. You could publish a chart like the one the Pew Research Center released in October 2020 titled “Most Parents of K-12 Students Learning Online Worry About Them Falling Behind.”


“Majorities of parents of K-12 students say that, compared with before the coronavirus outbreak, they are more concerned,” according to the Pew Research Center.

Your weekly or monthly chart might include curated data from any number of sources that you combine to create an interesting chart or original research drawn from your own surveys.