MCE Entitlement Consumption Considerations & Changes

Traditionally, Marketing Cloud Engagement (MCE) deployments were concerned with the tenant’s license, message consumption, and total Contact count for billing purposes. ListEngage has learned of a Knowledge Article (KA) indicating an expansion of these billed items, raising concerns around cost management. Per the article, three new areas of a deployment will incur consumption charges:

● Data Extension Storage
● Automations
● API Calls

In this article, each aspect of an MCE deployment is examined with respect to these new consumption considerations. It is not clear when Salesforce will begin billing for these consumption entitlements, but with the introduction of the KA, ListEngage is proceeding as if these charges will be realized in the near future.

Data Extension Storage Updates

The KA describes a 200 GB storage limit for an Enterprise Edition tenant that has purchased 10,000 additional Contacts: the base entitlement affords 100 GB of storage, and a 0.01 GB multiplier applied to each additional Contact grants another 100 GB (10,000 × 0.01 GB). The 200 GB total is potentially low compared to the volume of campaign, audience, and engagement data accumulated in a medium-scale tenant that has been operating for a considerable time.
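
To estimate the entitlement for other Contact counts, the KA’s formula can be expressed in a few lines of Python. The 100 GB base and 0.01 GB-per-Contact figures come from the KA example above; confirm your tenant’s actual entitlement with Salesforce:

```python
# Storage entitlement per the KA's Enterprise Edition example:
# 100 GB base, plus 0.01 GB for each additional Contact purchased.
def storage_entitlement_gb(additional_contacts: int,
                           base_gb: float = 100.0,
                           gb_per_contact: float = 0.01) -> float:
    return base_gb + additional_contacts * gb_per_contact

print(storage_entitlement_gb(10_000))  # 200.0 GB, matching the KA example
```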

To evaluate a current or planned deployment, look beyond the raw record counts in your organization’s various Data Extensions (DEs) and instead describe each DE through the context in which it is used. For example, a DE populated with purchase-history records might be evaluated for offer qualification at send time and leveraged by multiple campaigns throughout the year. When considering how to minimize the cost of maintaining this DE, weigh its use case in the greater context of the MCE deployment on an ongoing basis.

Controlling storage cost can be accomplished by asking two questions:

1. Should the DE exist over a long period of time, or a short period?
2. If the DE exists over a long period of time, what is the minimum length of time any record must be available within the DE?

The MCE Data Retention Policy feature is key to controlling cost when answering either of these questions. If a DE should only exist for a short period, its Retention Policy should be configured to automatically delete the Data Extension after that time. This is done using the “All records and data extensions” option, with a time limit ending just after the DE is no longer needed. For example, if the DE is part of the audience data for sends within a campaign expected to run for 30 days, the limit would be set at or around the 31st day. Similarly, if the DE is persistent, consider how long individual records are used. The “Individual Records” option lets you define a limit after which each record is removed, measured from when that record was created.
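
For teams that create Data Extensions programmatically, retention can also be set at creation time through the SOAP API. Below is a rough Python sketch of such a create call, not a definitive implementation: the subdomain and token are placeholders, the DE name and field layout are hypothetical, and the retention property names reflect our reading of the partner API, so verify them against your account’s WSDL before use:

```python
import requests

SUBDOMAIN = "YOUR_SUBDOMAIN"   # placeholder
ACCESS_TOKEN = "YOUR_TOKEN"    # obtain via the documented v2/token endpoint

# Create a short-lived campaign DE that deletes itself after 31 days
# ("All records and data extensions" behavior). For per-record expiry
# instead, set RowBasedRetention to true.
# NOTE: retention property names/values are our best reading of the
# partner API -- verify against the WSDL for your account.
envelope = f"""<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
  <s:Header>
    <fueloauth xmlns="http://exacttarget.com">{ACCESS_TOKEN}</fueloauth>
  </s:Header>
  <s:Body>
    <CreateRequest xmlns="http://exacttarget.com/wsdl/partnerAPI">
      <Objects xsi:type="DataExtension"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
        <CustomerKey>Spring_Campaign_Audience</CustomerKey>
        <Name>Spring Campaign Audience</Name>
        <DataRetentionPeriodLength>31</DataRetentionPeriodLength>
        <DataRetentionPeriod>Days</DataRetentionPeriod>
        <RowBasedRetention>false</RowBasedRetention>
        <DeleteAtEndOfRetentionPeriod>true</DeleteAtEndOfRetentionPeriod>
        <Fields>
          <Field>
            <CustomerKey>SubscriberKey</CustomerKey>
            <Name>SubscriberKey</Name>
            <FieldType>Text</FieldType>
            <MaxLength>254</MaxLength>
            <IsPrimaryKey>true</IsPrimaryKey>
            <IsRequired>true</IsRequired>
          </Field>
        </Fields>
      </Objects>
    </CreateRequest>
  </s:Body>
</s:Envelope>"""

resp = requests.post(
    f"https://{SUBDOMAIN}.soap.marketingcloudapis.com/Service.asmx",
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8", "SOAPAction": "Create"},
)
print(resp.status_code)  # inspect the XML body for OverallStatus
```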

A very common scenario in MCE is the accumulation of a large amount of data in the Send Log. The Send Log’s information is valuable to many downstream procedures, so it can be difficult to identify a single period for how long its records should persist. Controlling the Send Log’s data accumulation could be handled in the following manner:

● Separate downstream consumers from direct access to the Send Log Data Extension

● Use a daily (or, for some use cases, weekly) recurring Automation with a Query Activity that moves records out of the Send Log into another “staging” Data Extension, preferably at a time of low sending activity (see the sketch after this list)

● This allows the Send Log to use a Data Retention Policy that deletes individual records after 3 days, leaving a buffer beyond the daily transfer of new records to the staging DE

● Point downstream consumers at the staging DE; if those consumers have competing schedules, repeat the process with further Query Activities to build differentiated staging DEs that accommodate tailored Retention Policies
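
Here is a minimal sketch of the transfer step, assuming a Send Log DE named SendLog to which an AddedAt date field has been added, and a staging DE named SendLog_Staging configured as the Query Activity’s target with the Append update type (all names hypothetical). Python is used only to keep the snippet self-contained; the SQL string is what you would paste into the Query Activity:

```python
# The Query Activity itself is configured in Automation Studio; the SQL
# below is its content. DE names (SendLog, SendLog_Staging) and the
# AddedAt date field are hypothetical -- adjust to your own Send Log
# template.
STAGING_QUERY = """
SELECT
    s.JobID,
    s.ListID,
    s.BatchID,
    s.SubID AS SubscriberKey,
    s.TriggeredSendID,
    s.AddedAt
FROM SendLog s
WHERE s.AddedAt >= DATEADD(DAY, -1, GETUTCDATE())  -- rows from the last day
"""

# Target Data Extension: SendLog_Staging, update type "Append". The 3-day
# retention on SendLog leaves a buffer if a daily run is missed.
print(STAGING_QUERY)
```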

In almost all use cases, this approach will reduce GB consumption over an annual term, despite the creation of additional Data Extensions. As an added bonus, this model is also optimal for tenant performance and send speed.


Automation

Automation consumption is more straightforward. In this example, an Automation is used to import data from the Enhanced FTP. The Automation runs hourly, checking for a file that is dropped on the FTP only twice per day; 22 of the 24 daily Automation Instances produce an error because the file has not been refreshed.

Question: Is each recurrence (instance) of an Automation truly needed? 

Preferable Approaches:
● A Triggered Automation tied to the file drops, running just twice a day, i.e., only when necessary; or
● Migrate Automation-based Campaigns to Journey Builder; or
● Leverage Script- or API-based actions to start an Automation on demand instead of via excessive recurrence (see the sketch below)
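
As a rough illustration of the last option, the sketch below starts an Automation on demand through the SOAP API’s Perform action. The subdomain, credentials, and the Automation’s CustomerKey (Daily_FTP_Import) are placeholder assumptions, and error handling is omitted for brevity:

```python
import requests

SUBDOMAIN = "YOUR_SUBDOMAIN"          # placeholder
CLIENT_ID = "YOUR_CLIENT_ID"          # placeholder
CLIENT_SECRET = "YOUR_CLIENT_SECRET"  # placeholder
AUTOMATION_KEY = "Daily_FTP_Import"   # hypothetical Automation CustomerKey

# 1) OAuth token from the documented v2/token endpoint
token = requests.post(
    f"https://{SUBDOMAIN}.auth.marketingcloudapis.com/v2/token",
    json={"grant_type": "client_credentials",
          "client_id": CLIENT_ID, "client_secret": CLIENT_SECRET},
).json()["access_token"]

# 2) SOAP Perform with Action "start" on the Automation object
envelope = f"""<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
  <s:Header>
    <fueloauth xmlns="http://exacttarget.com">{token}</fueloauth>
  </s:Header>
  <s:Body>
    <PerformRequestMsg xmlns="http://exacttarget.com/wsdl/partnerAPI">
      <Action>start</Action>
      <Definitions>
        <Definition xsi:type="Automation"
                    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
          <CustomerKey>{AUTOMATION_KEY}</CustomerKey>
        </Definition>
      </Definitions>
    </PerformRequestMsg>
  </s:Body>
</s:Envelope>"""

resp = requests.post(
    f"https://{SUBDOMAIN}.soap.marketingcloudapis.com/Service.asmx",
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "Perform"},
)
print(resp.status_code, resp.text[:500])  # inspect OverallStatus in the XML
```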

API

Many API-based use cases have comparable functionality available through other MCE features. It may be convenient for your development team to leverage APIs, and your knowledge of MCE can supplement their efforts.

Example

While the MCE API can be used to pull records from behavioral data (Clicks, etc.) or Data Extensions, doing so at high volumes can result in excessive API utilization. If the external system does not need that data in real time, you can instead move it in bulk on an hourly or daily schedule via an Automation leveraging a Data Extract Activity.
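
To make the trade-off concrete, the sketch below pulls one page of rows through the documented REST rowset endpoint and estimates how many API calls a full pull would take; every page fetched is a separate call against the entitlement, which is exactly the consumption the scheduled Data Extract avoids. The subdomain, token, and DE key (Purchase_History) are placeholders, and the response field names reflect our reading of the documentation:

```python
import math
import requests

SUBDOMAIN = "YOUR_SUBDOMAIN"   # placeholder
ACCESS_TOKEN = "YOUR_TOKEN"    # obtain via the documented v2/token endpoint
DE_KEY = "Purchase_History"    # hypothetical Data Extension CustomerKey

# One page of rows from the DE; the response's "count" reports total rows.
resp = requests.get(
    f"https://{SUBDOMAIN}.rest.marketingcloudapis.com"
    f"/data/v1/customobjectdata/key/{DE_KEY}/rowset",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
).json()

rows = resp.get("count", 0)
page_size = resp.get("pageSize", 50)
calls_per_pull = math.ceil(rows / page_size) if rows else 0

# Polling hourly multiplies this by 24 every day -- the consumption a
# daily Data Extract Activity avoids.
print(f"{rows} rows -> {calls_per_pull} API calls per full pull")
```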

Checking for failed delivery of transactional messages

The Transactional Messaging API, unlike older variants of the triggered send APIs, is integrated with the Event Notification Service (a.k.a. webhooks). By enabling bounce and not-sent event notifications for a triggered send, the external application can be notified when events occur instead of issuing a series of Retrieve requests every few minutes.
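
A minimal sketch of such a receiver is below, using Flask. The route, port, and event handling are illustrative assumptions; verify the exact ENS payload shapes and eventCategoryType values against Salesforce’s Event Notification Service documentation:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/mce-events", methods=["POST"])
def mce_events():
    payload = request.get_json(force=True)

    # One-time handshake: when the callback is registered, ENS POSTs a
    # verificationKey, which must then be confirmed through the ens-verify
    # REST endpoint before the subscription becomes active.
    if isinstance(payload, dict) and "verificationKey" in payload:
        print("Verification key received:", payload["verificationKey"])
        return jsonify({"status": "received"}), 200

    # Normal operation: ENS delivers batches of events.
    for event in payload:
        category = event.get("eventCategoryType", "")
        if category in ("TransactionalSendEvents.EmailBounced",
                        "TransactionalSendEvents.EmailNotSent"):
            # React to the failure (queue a retry, alert, etc.)
            print("Delivery failure:", category, event.get("compositeId"))

    return jsonify({"status": "ok"}), 200

if __name__ == "__main__":
    app.run(port=8443)  # production endpoints must be served over HTTPS
```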

*The thoughts in this blog are based on current information available in the KA. Additional information will be included as it becomes available, and final details, including costs, will come from Salesforce.

About the Author
Fred Homner

Fred brings more than 13 years of experience in Customer Success and Technology/Product at ListEngage. He is a Subject Matter Expert on the Marketing Cloud platform (backend and frontend), with a deep understanding that helps customers optimize their data, send management, and complex business needs.
