Friday, July 31, 2015

VEST Report: Competition in B2B Marketing Automation Isn't About Features

Yesterday I released the mid-year edition of the VEST Report on B2B marketing automation vendors, thereby meeting my self-imposed deadline of July 31. Look here for more information or to make a purchase.

Updating the report gives a nice overview of recent industry developments. Here are some observations:

- market positions are pretty stable. The only new vendor to make a splash recently has been SharpSpring, which went from zero to 500 agency clients in the past year. This puts them among the top 3 leaders in the small business sector. Otherwise, the top players have remained the same: Infusionsoft, HubSpot, Act-On, Salesforce Pardot, Marketo, Oracle Eloqua, and Adobe. Maybe RedPoint has crept up to a leader position, but they don’t share enough business information for me to know. Open source vendor Mautic has some interesting potential but it’s too soon to see any actual impact.

- products are pretty stable, too. The VEST entries showed very little change in the features reported by the various vendors since the last report. This isn't bad: it's simply that the standard features are now widely understood and vendors have had time to add them. The only major changes captured in the new report are the custom table abilities added by Marketo and Ontraport.

- the real action is outside the products. Probably the most interesting trend is integration of marketing automation with retargeting and display ad vendors, which has been announced in various forms by Marketo, Oracle, and Adobe. That, of course, relates to the convergence of martech and ad tech into “madtech” that I've written about before. The other big trends are systems for marketing agencies (either focused products like SharpSpring or added features and partner programs by the major vendors) and education programs for users (something that major vendors have long done but that others like Autopilot* and Mautic are also expanding). Both agencies and education are ways to support industry growth by overcoming the lack of marketers who can effectively use marketing automation systems.

- the really real action is elsewhere. Lest you think I’m just plain cranky, be assured that I see lots of exciting things happening in predictive analytics, data aggregation and enrichment, automated intelligence, and other areas. Even B2C marketing automation is showing some interesting new life. But even though B2B marketing automation revenues are still growing nicely**, the industry itself is looking pretty stable these days.



__________________________________________________________________________
*Not in the VEST by their choice.
** I'm estimating 40% growth in 2015, although it’s now harder to know because so much revenue is hidden inside companies like Oracle, Salesforce, and IBM that don’t report it separately.

Thursday, July 23, 2015

Design Your Best Marketing Technology Stack and Plan the Transition: Sneak Peek at FlipMyFunnel Conference

Picture posted by Terminus

I can’t decide what’s more exciting about next month’s FlipMyFunnel conference in Atlanta (register here and use the code DR50 for a 50% discount): the opportunity to interact with a great collection of speakers and attendees, or seeing what the conference organizers at Terminus do with the notion of MarTech Stack Jenga. Based on one cryptic Twitter picture, they’re up to something big.

My own contribution will be a presentation on designing your marketing stack. This is something I’ve done for years as a consultant but it’s now an especially hot topic. Here are some of the key points I’ll be making:

- the stack is based on your business and marketing strategies. I’ve described the importance of strategy before but have now refined my explanation to show how marketing programs, business requirements, and functional requirements connect overall marketing strategy with martech. The picture below also highlights the importance of planning for future business, marketing, and martech developments.


And I’ve provided a sample template for organizing your requirements by system.
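Purely as an illustration (this is not the actual template from the presentation, and the systems and entries are invented), requirements organized by system might be captured in a structure like this:

```python
# Hypothetical requirements-by-system layout: each system maps to the
# marketing programs it supports and the functional requirements they imply.
stack_requirements = {
    "marketing automation": {
        "marketing programs":      ["nurture campaigns", "lead scoring"],
        "functional requirements": ["email engine", "behavior tracking", "CRM sync"],
    },
    "CRM": {
        "marketing programs":      ["sales follow-up"],
        "functional requirements": ["lead routing", "activity capture"],
    },
}
```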

- a winning stack is efficient as well as functional. I'll present a checklist for evaluating your stack design along those dimensions.

- how you draw the stack makes a difference. I’ll argue that a diagram which shows relationships between systems is more helpful than one that simply lists the different components. In the example below, the flow highlights the isolation of sales and service from the rest of the stack – a critical weakness that isn’t apparent when you look at the systems only.


- transition planning must be systematic as well. Companies struggle with transition planning even more than they struggle with stack design. The goal is to sequence the stack changes so that each new system adds the greatest value with the least disruption. This requires understanding which system changes support each improvement: which improvements become possible after changing one system, which after a second, which after a third, and so on. The worksheet lets you explore different sequences so you can pick the best one, as sketched below.
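As a minimal sketch of the sequencing idea (the systems, improvements, and dependencies are made up; the real worksheet is not code), you can enumerate possible orderings of system changes and count how many improvements each step unlocks:

```python
from itertools import permutations

# Hypothetical dependencies: each improvement lists the systems that must
# change before it becomes possible.
improvements = {
    "behavioral lead scoring":  {"CRM"},
    "triggered nurture emails": {"marketing automation"},
    "cross-channel reporting":  {"CRM", "marketing automation", "data warehouse"},
}

def improvements_by_step(sequence):
    """Count how many improvements are unlocked after each system change."""
    changed, unlocked = set(), []
    for system in sequence:
        changed.add(system)
        unlocked.append(sum(needs <= changed for needs in improvements.values()))
    return unlocked

# Compare every possible sequence of system changes.
for seq in permutations(["CRM", "marketing automation", "data warehouse"]):
    print(seq, improvements_by_step(seq))
```

The sequence whose early steps unlock the most value with the least disruption is the one to pick.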

This will be easier to understand in person than in writing. Don't take my word for it: join us in Atlanta and see for yourself.

Friday, July 17, 2015

Predictive Analytics: Should Automated Content Selection Work by Segment or Individual?

Two vendors made the same point with me this week, which is reason enough for a blog post in mid-July. The point was the difference between basing content selection on individuals and on segments. I have never considered the distinction to be especially important, since segment membership is determined by individual behaviors and individual-level decisions are guided by behavior patterns of groups. But the two vendors in question (Evergage and Jetlore) and another I spoke with earlier (Sailthru) were downright religious about the superiority of their approach (individual-level selections in every case). So I thought it worth some discussion.

First, let’s clarify the topic. The distinction these vendors were making is between selecting content separately for each individual and selecting the same content for all members of a segment. Of course, customers are assigned to segments based on their individual behavior and other attributes, but once someone is in a segment, the segment-based system ignores individual differences. Among segment-based systems, collaborative filtering uses product selections almost exclusively: this is the classic “customers who looked at this product also considered these products” approach, which doesn’t take into account other aspects of the customer’s history. Other methods build segments based on customer life stage, demographics, and similar broad attributes. It’s possible to build segments based on very detailed behavioral differences, but that’s likely to create too many segments to be practical.

At an operational level, the individual-level systems use automated analytics to rank all possible content choices for each individual using that individual’s data. Segment-based systems either use rules to select content for each segment or use automated analytics to rank content choices for the segment as a whole. The individual-level approach makes the most sense when there are many content choices to consider, as with retail merchandise or entertainment (books, music, movies, etc.). Those are cases where getting precisely the right content in front of the customer is much more effective than offering everyone the most commonly-selected items. Retail and entertainment marketers also usually have detailed history which supports accurate predictions of what the customer will want. Segment-based systems work best when only a few choices are available.  This means a separate segment can be created for each item or, more realistically, segments come first and items are created to serve them.
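To make the operational contrast concrete, here is a minimal sketch (the names and scoring logic are hypothetical, not any vendor’s actual method): the individual-level system scores every candidate item against one customer’s own data, while the segment-based system returns the same content for everyone who matches a segment rule.

```python
def select_for_individual(customer, content_items, score):
    """Individual-level: rank every candidate item using this customer's own data."""
    return max(content_items, key=lambda item: score(customer, item))

def select_for_segment(customer, segment_rules, default=None):
    """Segment-based: everyone who falls into a segment gets that segment's content."""
    for in_segment, content in segment_rules:
        if in_segment(customer):
            return content
    return default
```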

So does the entire debate really come down to using individual-level systems when there are lots of choices and segment-based systems when there are only a few? Not really: collaborative filtering can also handle massive numbers of options with great accuracy. The difference is that collaborative filtering doesn’t really consider much beyond a particular product choice, while a sophisticated individual-level system will consider other factors including the current context and the customer’s history. Done correctly, this should yield more appropriate selections. On the other hand, individual-level approaches require more data and more complex analytics, so there will be cases where a segment-based method is ultimately more appropriate.

Moreover, the two approaches are as much complementary as competitive. A segment can indicate customer state, such as just-acquired, satisfied, or churn-risk, which constrains the contents considered for offer by the individual-level system.* Or a segment-level system could choose the type of message to send but let the individual-level system pick the specific contents. Dynamic content within email campaigns often works exactly this way.

In fact, I’d argue that state-based segmentation is essential for individual-level optimization because states provide a framework to organize the masses of detailed customer data. Without tagging the customer’s current state during each event, it would be very difficult for even the most sophisticated analytical system to see the larger arc of the customer life cycle or to understand the relationship between specific offers and long-term outcomes.
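A rough sketch of the combined approach described above (the states and offers are invented for illustration): the customer’s lifecycle state narrows the candidate offers, and an individual-level model ranks whatever remains.

```python
# Hypothetical state-to-offer mapping; a real system would maintain many more.
offers_by_state = {
    "just-acquired": ["welcome series", "onboarding webinar"],
    "satisfied":     ["cross-sell offer", "loyalty program invite"],
    "churn-risk":    ["win-back discount", "concierge outreach"],
}

def choose_offer(customer, state, score):
    """State (segment) constrains the candidates; the individual-level model picks."""
    candidates = offers_by_state.get(state, [])
    return max(candidates, key=lambda offer: score(customer, offer), default=None)
```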

All this has practical implications for marketers considering these systems.

- for individual-level systems, make sure they can look beyond predicting the highest immediate response rate to measuring impact on long-term objectives such as conversion or lifetime value.

- for segment-level systems, make sure they can take into account the customer’s past behaviors and attributes, not just the products they have recently purchased or considered.

- for all types of systems, assess how they track and guide the customer through the stages of her long-term journey.

You'll want to consider other differences between content selection systems, such as the number of items they can manage, how quickly they return selections, how they incorporate items without a sales history, and what data they consider in their analysis. Just remember that making selections isn’t an end in itself: you want to make choices that will create the greatest long-term value. To do that, it’s not a choice between individual-level and segment-level analysis. You need both.

____________________________________________________________________________
* See my June 25 post for a more detailed discussion of state-based campaigns.

Tuesday, July 07, 2015

Does Future Marketing Technology Require Perfect Data?


I mentioned in my last post that I’ve started to think in terms of three realities: today (the next two years), tomorrow (two to five years out), and later (after five years). Like the famous New Yorker magazine cover that showed a detailed view of Manhattan and an increasingly vague view of more distant regions, our picture of the immediate future is much more nuanced than what happens farther out. One result is an apparent assumption that future technology will work much better than today’s technology – not because anyone really thinks that future technology will be perfect, but because we can’t see where its imperfections will appear.

I’ve been thinking about this because so many of my own predictions are premised on increasingly detailed knowledge about customers and prospects. Both the “madtech” vision of broad access to third-party data and the “robotech” vision of delegating decisions to machines assume that effectively complete data will be available about each customer. But a quick look at today’s data shows that is far from true. Here are some factoids I’ve been gathering to illustrate the point:


- 37% of mobile ad locations are accurate to within 100 meters (Thinknear)

- 30-55% match rates for B2C individual-level onboarding (LiveRamp)

- 16-29% match rates for B2B individual-level data enrichment (Raab Associates client tests)

- 14% match rates and low predictive value for B2B account-level intent data (Infer)

And this doesn’t even begin to address predictive modeling, where even a 10x lift vs average still implies many errors at the individual level.
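To put rough numbers on that point (the 1% base rate is my assumption for illustration, not a figure from any of the sources above):

```python
base_rate = 0.01                   # assume 1% of all prospects respond
lift = 10                          # model's top group responds at 10x the average
top_group_rate = base_rate * lift  # 10% of the "best" prospects respond
print(f"{1 - top_group_rate:.0%} of individually targeted prospects still don't respond")
```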

Contemplating these results does give me pause. At some point, poor data means that theoretically possible approaches are not practical because of low coverage or insufficient performance. Those constraints won’t magically vanish in the future, even though they’re not visible at this distance.

Being a technology optimist, I assume that data will get better over time. But I can’t cite much evidence to support my optimism.  If anything, the number of new data sources is outstripping improvements in existing sources. The true core challenge is identity resolution, which means associating data from different sources with the right individual profile. Cross-device matching is the current focus of this discussion but covers just part of the problem.
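For what identity resolution means in practice, here is a toy sketch (deterministic matching on a hashed email, with invented field names): real systems add probabilistic rules and cross-device graphs, and still miss plenty, as the match rates above suggest.

```python
from collections import defaultdict

def resolve(records):
    """Group (source, record) pairs into profiles by a shared deterministic key."""
    profiles = defaultdict(list)
    for source, record in records:
        key = record.get("email_hash")  # no shared key -> the record stays unmatched
        profiles[key or f"unmatched:{id(record)}"].append((source, record))
    return profiles
```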

It’s a safe bet that perfect data won’t be available in two years or five years or probably ever. But the real question is whether enough good data will be available to support the futures I’ve been forecasting.

I think a realistic view is that some data will be more available than other data, and, as a result, some portions of the visions will happen while others do not. Customer data is likely to be richer than prospect data, since customers will grant permission to link with external data sources (or take actions that make linking easier even without their permission). Sharing among complementary companies – for example, airlines and hotels – will be easier to negotiate than sharing with anyone through public exchanges. Data about objects, such as cars or groceries or homes, should be less sensitive than data about individuals (even though there’s obviously a close relationship between objects and their owners). Data about public behaviors, such as travel and store visits, is less sensitive than data about private matters such as health care.  (See this recent Altimeter Group report for more information on consumer attitudes to privacy.)

In short, the future will remain unevenly distributed, as William Gibson observed. Marketers and the technologists who support them need both the ideal vision of how things would work in a world of perfect data (which isn’t the same as a perfect world!) and the realistic understanding of what’s likely to be practical within their planning horizon. They can then aggressively pursue opportunities revealed by the vision without chasing chimeras that will never appear. This pursuit is essential: tomorrow always comes, but the future won’t happen by itself.




Wednesday, July 01, 2015

Marketing Beyond MadTech: What Happens When The Robots Take Over?

I’ve recently found myself bouncing between three worlds:

- today’s world, where I spend my time reviewing software and helping marketers choose martech products. Since most of that discussion is currently phrased in terms of building a marketing stack, let’s call it the world of “stacktech”.

- tomorrow’s world, about two to five years in the future.  This is dominated by the merger between martech and adtech, a.k.a. “madtech”. The trends shaping this world are well known and many people agree on the broad outlines of what it will look like. I spend my time filling in the details since details will determine which tools and skills marketers need for success.

- the future world, out five years and beyond.* There’s much less agreement on how this will look and it’s arguably too far away for most marketers to worry about. But I do have a vision which I think may be useful to vendors and managers making long term plans. Since the dominant feature of this world will be an expanded role for machines, I’ll call it “robotech”.

Each of these worlds is very different from the others. Today’s marketing stack is still largely about tools to manage direct interactions between customers and the company. It works with the company’s own data and primarily through company-owned media like email and Web sites. Customer activities with anyone other than the company are largely invisible to the company.

By contrast, advertising and social messages in the "madtech" world are tightly integrated with company-owned channels and all customer behaviors are visible (for a price). The technical symbol of this transition is the change I wrote about last week from linear, company-driven campaign flows to customer-triggered experience plays.

The "robotech" world brings yet another radical shift.  In this future, humans have delegated increasing numbers of day-to-day decisions to their machines. My recent speeches have illustrated this with a vignette about a person in headed home in her self-driving car: she works quietly in her virtual office while her devices debate whether to stop for fuel, buy her coffee, avoid donuts, and get milk for breakfast. Only once the machines have reached a consensus do they inform her of the decision.

The example is trivial but the implications are profound. When machines buy on behalf of their owners, marketers will sell to the machines. Since the machines will decide on the basis of algorithms, marketing becomes a matter of understanding and appealing to those algorithms. We already do this today in specialized areas like search engine optimization (“selling” to Google for a higher ranking) and programmatic media buying (providing more data about impressions so they earn higher bids). This sort of marketing is fundamentally different from both stacktech and madtech. My rough calculations show that nearly half of all consumer expenditures could eventually shift to machine control.
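As a hypothetical sketch of what selling to an algorithm might look like (the weights and offer fields are invented): the agent applies a fixed scoring rule to the structured data each seller exposes, so the marketer’s job is to supply data that scores well, not to craft a persuasive message.

```python
def agent_choose(offers, prefs):
    """A purchasing agent ranks competing offers using its owner's preference weights."""
    def score(offer):
        return (prefs["price_weight"]  * -offer["price"]
              + prefs["rating_weight"] *  offer["rating"]
              + prefs["speed_weight"]  * -offer["delivery_days"])
    return max(offers, key=score)

# Example: the agent picks where to buy coffee on the way home.
best = agent_choose(
    [{"seller": "A", "price": 4.50, "rating": 4.6, "delivery_days": 0},
     {"seller": "B", "price": 3.75, "rating": 4.1, "delivery_days": 0}],
    {"price_weight": 1.0, "rating_weight": 2.0, "speed_weight": 0.5},
)
```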

Humans still play an important role in the robotech world. It’s not just that they’re paying the bills for purchases by autonomous agents – a relationship familiar to any parent of a teenager. It’s also that humans are choosing the agents themselves. This is essentially a subscription: people will pay for a service that manages individual purchases. Since the details of each agent’s algorithm will be too hard to evaluate directly, the subscriptions will ultimately be purchased on the basis of trust. This is a classic goal for traditional brand marketing but quite a change from the madtech focus on optimizing shorter-term metrics such as response or immediate revenue.

I don’t think the rebirth of brand marketing will mean a return to the simple-minded glories of the Mad Men era – we’ll still have all that data and all those channels to work with. But it might just possibly mean a less frantic urge to respond to every twist in the customer journey, replaced by broader, more stable messages aimed at building brand trust and a long-term relationship. In a world where customers increasingly filter out marketing messages and rely on machines to manage many steps in their customer journey, marketing approaches that deliver a few general messages may ultimately be the best use marketers can make of the limited customer attention they have available.

In sum, the transition from madtech to robotech will be just as wrenching as the transition from stacktech to madtech. Marketers should recognize that both are coming, even if it's too soon to prepare for the robotech world. The time for that will come very quickly, and it's always good to have at least thought about it in advance.

_____________________________________________________________________
* Serious planners think much further out, in terms of decades. But I don’t think anything usefully concrete can be predicted that far in advance.