Wednesday, December 06, 2017

Here's a Game to Illustrate Strategic Planning

My wife is working on a Ph.D. in education and recently took a course on strategic planning for academic institutions. Her final project included creating a game to help illustrate the course lessons. What she came up with struck me as applicable to planning in all industries, so I thought I’d share it here.

The fundamental challenge she faced in designing the game was to communicate key concepts about strategic planning. The main message was that strategic planning is about choosing among different strategies to find the one that best matches available resources. That’s pretty abstract, so she made it concrete by presenting game players with a collection of alternative strategies, each on a card of its own. She then created a second set of cards that listed actions available to the players. Each action card showed which strategies the action supported and what resources it required. There were four resources: money, faculty, students, and administrative staff.  To keep things simple, she assumed that total resources were fixed, that each strategy contributed equally to the ultimate goal, and that each action contributed equally to whichever strategies it supported. 

In other words, the components of the game were:

- One goal. In the case of my wife’s game, the goal was to achieve a “top ten” ranking for a particular department within a university. (It was a good goal because it was easily understood and measured.)

- Four strategies. In my wife’s game, the options were to build up the department, cooperate with other departments at the university, cooperate with other universities, or promote the department to the media and general public.

- A dozen actions. Each action supported at least one strategy (scored with 1 or 0) and consumed some quantity of the four resources (scored from 0 to 3 for each resource). Actions were things like “run a conference”, “set up a cross-disciplinary course” and “finance faculty research”.

- Four resources, each assigned an available quantity (i.e., budget).

As you can tell from the description, the action cards are the central feature of the game: each card lists which strategies the action supports and how much of each resource it consumes.


The fundamental game mechanic was to pick a set of actions. These were scored by counting how many supported each strategy and how much of each resource they consumed, and the resource totals couldn't exceed the available quantity of each resource. Consider the scoring for a set of three actions.

In this particular example, all three actions support "cooperate with other departments", while two support "build department" and one each supports "cooperate with other universities" and "promote to public". Resource needs were money=8, faculty=6, students=5, and administration=1. Someone with these cards could choose "cooperate with other departments" as the best strategy, if the resources permitted. But if they were limited to 7 points for each resource, they might swap the "fund scholarship" card for the "extracurricular enrichment" card, which uses less money even though it consumes more of the other resources. That works because, with a budget of 7 for each resource, the player can afford to increase spending in the other categories.
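
To make the scoring mechanic concrete, here's a minimal Python sketch. The card names, support flags, resource costs, and the budget of 7 per resource are invented for illustration (they don't reproduce the actual deck); the point is just to show how a hand of action cards is tallied against strategies and resources, and how a simple search can find the best feasible hand, which is roughly the winning condition in some of the game variations described further below.

```python
from itertools import combinations

STRATEGIES = ["build department", "cooperate with other departments",
              "cooperate with other universities", "promote to public"]
RESOURCES = ["money", "faculty", "students", "administration"]

# Hypothetical action cards: which strategies each supports and how much of
# each resource it consumes (0-3). All values below are invented.
ACTIONS = {
    "run a conference":           {"supports": {"cooperate with other universities", "promote to public"},
                                   "costs": {"money": 3, "faculty": 2, "students": 1, "administration": 2}},
    "cross-disciplinary course":  {"supports": {"build department", "cooperate with other departments"},
                                   "costs": {"money": 1, "faculty": 2, "students": 2, "administration": 1}},
    "finance faculty research":   {"supports": {"build department"},
                                   "costs": {"money": 3, "faculty": 1, "students": 0, "administration": 0}},
    "fund scholarship":           {"supports": {"build department", "cooperate with other departments"},
                                   "costs": {"money": 3, "faculty": 0, "students": 1, "administration": 0}},
    "extracurricular enrichment": {"supports": {"cooperate with other departments", "promote to public"},
                                   "costs": {"money": 1, "faculty": 2, "students": 2, "administration": 1}},
}

BUDGET = {"money": 7, "faculty": 7, "students": 7, "administration": 7}  # fixed resources


def score_hand(card_names):
    """Tally strategy support and resource use for a set of action cards."""
    support = {s: 0 for s in STRATEGIES}
    used = {r: 0 for r in RESOURCES}
    for name in card_names:
        card = ACTIONS[name]
        for s in card["supports"]:
            support[s] += 1
        for r, qty in card["costs"].items():
            used[r] += qty
    feasible = all(used[r] <= BUDGET[r] for r in RESOURCES)
    return support, used, feasible


def best_feasible_hand(hand_size=3):
    """Brute force: the hand whose best-supported strategy scores highest
    without exceeding the budget."""
    best = None
    for names in combinations(ACTIONS, hand_size):
        support, used, feasible = score_hand(names)
        if not feasible:
            continue
        top = max(support.values())
        if best is None or top > best[0]:
            best = (top, names, support, used)
    return best


if __name__ == "__main__":
    support, used, feasible = score_hand(["cross-disciplinary course",
                                          "fund scholarship",
                                          "extracurricular enrichment"])
    print(support, used, "within budget" if feasible else "over budget")
    print(best_feasible_hand())
```

Swapping a money-heavy card for one that leans on under-used resources, the trade described above, shows up directly in the `used` totals.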


As this example suggests, the goal of the game is to get players to think about the relations among strategies, actions and resources, and in particular how to choose actions that fit with strategies and resources.

Although the basic scoring approach is built into the game, there are many ways my wife could have played it:

- Predefine available resources and let different players draw different action cards.  They would then decide which strategy best fit the available cards and resources. 

- Give different strategy cards to different players and put all action cards face up on the table.  Players then each choose one action card in turn, trying to assemble the best set of actions for their assigned strategy.

- Randomly select the resource levels at the start of the game and let all players use all action cards.  The winner is whoever first finds a combination of actions that yields the most points for any strategy without exceeding the resources available.

- Split the class into two teams, give each team two strategy cards and a set of action cards, and let the winner be whichever team finds the combination of actions that comes closest to using all available resources. (That's the one she chose.)

Other rules are possible, along with refinements such as making some strategies more valuable than others at reaching the goal and making some actions more effective than others at supporting a given strategy. But my wife had ten minutes to explain, play, score and discuss the game, so the simplifications made sense for her situation.

What I like about this game is that it clearly identifies the elements of the strategic planning process and shows how they’re related. Specifically, it highlights that:

- different strategies can reach the same goal. Identifying available strategies and choosing among them is an important part of strategic planning that's often not clearly recognized.

- different actions can support different strategies. This has two implications: strategies are initially chosen in part based on what actions are available and, later, actions are evaluated based on how well they support the chosen strategy.

- different actions can compete for the same resources. In the short run, the combination of actions must be chosen to maximize the value achieved from the resources available. In the long run, resources are not fixed, so organizations can decide which resources they need to support the actions they need for strategic success.

- different strategies are best suited to different combinations of resources. This is the ultimate message of the game. Actions are just intermediaries to help understand how specific strategies and specific resources are related.

I hope you find this interesting and perhaps even useful. It’s more thought experiment than actual game.  But if you’re inspired to create your own physical version, do send me pictures.

Friday, December 01, 2017

2017 Retrospective: Things I Didn't Predict


It's the time of year when people make predictions. It's not my favorite exercise: the best prediction is always that things will continue as they are, but what's really interesting is change – and significant change is inherently unpredictable. (See Nassim Nicholas Taleb's The Black Swan and Philip Tetlock's Superforecasting on those topics.)

So I think instead I’ll take a look at surprising changes that already happened. I’ve covered many of these in the daily newsletter of the Customer Data Platform Institute (click here to subscribe for free). In no particular order, things I didn’t quite expect this year include:

- Pushback against the walled garden vendors (Facebook, Google, Amazon, Apple, etc.). Those firms continue to dominate life online, and advertising and ecommerce in particular. (Did you know that Amazon accounted for more than half of all Black Friday sales last week?) But the usual whining about their power from competitors and ad buyers has recently been joined by increasing concerns among the public, media, and government. What's most surprising is that it took so long for the government to recognize the power those companies have accrued and the very real threat they pose to governmental authority. (See Martin Gurri's The Revolt of the Public for by far the best explanation I've seen of how the Internet affects politics.) On the other hand, the concentrated power of the Web giants means they could easily be converted into agents of control if the government took over. Don't think this hasn't occurred to certain (perhaps most) people in Washington. Perhaps that's why they're not interested in breaking them up. Consistent with this thought: the FCC plan to end Net Neutrality will give much more power to cable companies, which as highly regulated utilities have a long history of working closely with government authorities. It's pitifully easy to imagine the cable companies as enthusiastic censors of unapproved messages.

- Growth in alternative personal data sources. Daily press announcements include a constant stream of news from companies that have found some new way to accumulate data about where people are going, who they meet, what they're buying, what they plan to buy, what content they're consuming, and pretty much everything else. Location data is especially common, derived from mobile apps that most people surely don't realize are tracking them. But I've seen other creative approaches such as scanning purchase receipts (in return for a small financial reward, of course) and even using satellite photos to track store foot traffic. In-store technology such as beacons and wifi track behaviors even more precisely, and I've seen some fascinating (and frightening) claims about visual technologies that capture people's emotions as well as identities. Combine those technologies with ubiquitous high resolution cameras, both mounted on walls and built into mobile devices, and the potential to know exactly who does and thinks what is all too real. Cross-device matching and cross-channel identity matching (a.k.a. "onboarding") are part of this too.

- Growth in voice interfaces. Voice interfaces don't have the grand social implications of the preceding items but it’s still worth noting that voice-activated devices (Amazon Alexa and friends) and interfaces (Siri, Cortana, etc.) have grown more quickly than I anticipated. The change does add new challenges for marketers who were already having a hard time figuring out where to put ads on a mobile phone screen.  With voice, they have no screen at all.  Having your phone read ads to you, or perhaps worse sing a catchy jingle, will be pretty annoying. To take a more positive view: voice interfaces will force innovation in how marketers sell and put a premium on agent-based services that make more decisions for consumers. Of course, that's only positive if the agents actually work in consumers’ interest. If the agents also serve other masters – such as companies that pay them to send business their way – consumers can easily be harmed. But at least they’ll have more time for things other than shopping.

- Retailers focus on convenience. Speaking of shopping: retailers with physical stores continue to panic about the growth of Amazon and other ecommerce vendors. What I find surprising isn’t the panic, but that their main reaction has been to introduce innovations like “BOPIS” (buy online, pick up in store) that focus on shopper convenience. Nothing will ever be more convenient than ordering remotely and having stuff delivered, so this is a game they’re guaranteed to lose. It’s clear to me that the future of in-store retail depends on creating entertaining, enjoyable experiences, and specifically on human interactions that online merchants can’t duplicate. Those interactions could be with store personnel, friends, and other shoppers. I’ll violate my rule against predictions and say here that categories of in-store retail that aren’t inherently entertaining will vanish (think: grocery shopping except maybe for fresh produce).  But I'll hedge my bet by not saying when.

- Marketing clouds increase their share. I don’t recall seeing any actual data on this, but my distinct impression from talking with buyers is that the major marketing clouds (Adobe, Salesforce, Oracle) are being bought by more companies. This shouldn’t have surprised me: I’ve spoken for years about “Raab’s Law”, which says that integrated suites always beat best-of-breed components in the long run. It seemed briefly that marketing technology would be different, largely because cloud-based systems make integration so much easier than before.  The vast profusion of specialized martech systems seemed to support this view. But it’s clear we’re now seeing “martech fatigue” set in, as marketers tire of purchasing an endless array of new systems they then barely use. One bolt of lightning to illuminate this was a recent Gartner survey that found martech spend is now falling. This bodes ill for independent martech vendors and suggests that the long-awaited consolidation may finally be at hand. The question really is how long the momentum of wilder times will carry many of today’s martech point solutions before they fall.

- Interest in self-service technology. I’ve seen several recent announcements related to the idea that marketers and non-technical users in other departments will develop their own systems using varying types of advanced technology. Chatbots, predictive models, and entire business process integrations have all been offered as things business users could create for themselves, often with a little help from artificially intelligent friends. Bosh, I say. Marketers are already overwhelmed by the complexity of their tools and in particular by the challenges of connecting separate systems. The cloud and AI might make this easier but they don’t make it easy. The growth in marketing clouds shows marketers voting with their budgets to avoid integration. To be clear, what surprises me is that people think self-service will work or even that it’s desirable. They should know better.

- No Customer Data Platform acquisitions. Okay, maybe I'm the only person who thinks about this. But it's still pretty odd that none of the big marketing clouds has yet purchased a CDP vendor. (The only deal I recall was Campaign Monitor buying Tagga and those are not major players.) I can think of many ways to explain this: cloud vendors don't see the problem, they think they've already solved it, they don't want to admit they haven't solved it, they don't want to reengineer existing products to use a CDP, they prefer to buy companies with large market share, they think they can build their own CDPs, etc. But, ultimately, the big marketing clouds haven't purchased a CDP because their clients haven't pushed them for one. The marketing clouds will act quickly once they start losing deals because clients want CDP functions the vendors can't provide. Maybe the limited degree of integration within the existing marketing cloud architectures is enough, or maybe buyers don't realize it's inadequate until after they've made the purchase. I fully expect acquisitions to happen – oops, another prediction – but not very soon. And if the clouds continue to increase their share without adding a CDP, maybe it won't happen at all.

- No uptake on the idea of "personal network effects". There's no question that I'm the only person thinking about this one. But I continue to believe the concept (described here) is central to understanding how things really work in today's online economy. I'm surprised other people haven't picked up on it, especially as they pay more attention to the power of the walled garden vendors. For example, anti-trust regulators are struggling with the fact that firms like Google, Facebook and Amazon are not monopolies in the conventional sense, but are missing the point that such firms can monopolize the attention (and, thus, purchases) of individual consumers. If anybody wants to co-author a Harvard Business Review article on this, let me know. (I'm serious.)

So much for surprises. Lest you get the impression that I’m always wrong, I'll list some things that haven’t surprised me one bit.

- No pressure on privacy. While many people keep expecting consumers to really start caring about personal privacy, I’ve never seen any reason to think that will happen. Experience has shown that even the smallest reward is enough for people to expose pretty much everything there is to know about themselves. Sometimes it doesn’t even require paying money; just a bit more convenience or recognition is enough. If anything, people are getting more used to everything being public and thus less concerned about keeping anything private. They suspect, probably correctly, that it’s a losing battle.

- Lack of unified customer data. Marketers have talked for years about the need for a complete customer view and the integrated, omni-channel customer experience that's needed to support it. It's possible there's actually been some progress: while surveys used to show 10-15% of marketers said they had a unified view, I'm now often seeing figures in the 25-40% range. I don't believe the real numbers are anywhere near that high but maybe they indicate a little improvement. Even so, given the importance assigned to the topic, you might expect the problem would mostly be solved by now. I'm not surprised it hasn't been: building a unified view is tough, more because of organizational obstacles than a lack of technology. So things will continue to move slowly.

- AI bubble remains unburst. Surely it's time for people to stop getting excited every time they hear the term "artificial intelligence"? Apparently not; pretty much every new product I see announced, including vacuum cleaners and doorknobs, has an AI component. People will eventually expect AI to be built into everything, just as they take electricity, plastics, and other past miracle technologies for granted. But it takes a long time for people to recognize what's possible, so I'm not surprised they still find even basic AI features to be amazing.

Wednesday, November 22, 2017

Do Customer Data Platforms Need Identity Matching? The Answer May Surprise You.

I spend a lot of time with vendors trying to decide whether they are, or should be, a Customer Data Platform. I also spend a lot of time with marketers trying to decide which CDPs might be right for them. One topic that’s common in both discussions is whether a CDP needs to include identity resolution – that is, the ability to decide which identifiers (name/address, phone number, email, cookie ID, etc.) belong to the same person.

It seems like an odd question. After all, the core purpose of a CDP is to build a unified customer database, which requires connecting those identifiers so data about each customer can be brought together. So surely identity resolution is required.

Turns out, not so much. There are actually several reasons.

- Some marketers don't need it. Companies that deal only in a single channel often have just one identifier per customer.  For example, Web-only companies might use just a cookie ID.  True, channel-specific identifiers sometimes change (e.g., cookies get deleted).  But there may be no practical way to link old and new identifiers when that happens, or marketers may simply not care.  A more common situation is that companies have already built an identity resolution process, often because they're dealing with customers who identify themselves by logging in or who transact through accounts. Financial institutions, for example, often know exactly who they're dealing with because all transactions are associated with an account that's linked to a customer's master record (or perhaps not linked because the customer prefers it that way). Even when identity resolution is complicated, mature companies often (well, sometimes) have established processes to apply a customer ID to all data before it reaches the CDP. In any of these cases, the CDP can use the ID it's given and not need an identity resolution process of its own.

- Some marketers can only use it if it's perfect. Again, think of a financial institution: it can't afford to guess who's trying to take money out of an account, so it requires the customer to identify herself before making a transaction. In many other circumstances, absolute certainty isn't required but a false association could be embarrassing or annoying enough that the company isn't willing to risk it. In those cases, all that's needed is an ability to "stitch" together identifiers based on definite connections. That might mean two devices are linked because they both sent emails using the same email address, or an email and phone number linked because someone entered them both into a registration form. Almost every CDP has this sort of "deterministic" linking capability, which is so straightforward that it barely counts as identity resolution in the broader sense. (The sketch after this list shows one way this kind of deterministic stitching can work.)

- Specialized software already exists. The main type of matching that CDPs do internally – beyond simple stitching – is "fuzzy" matching.  This applies rules to decide when two similar-looking records really refer to the same person. It's most commonly applied to names and postal addresses, which are often captured inconsistently from one source to the next. It might sometimes be applied to other types of data, such as different forms of an email address (e.g. draab@raabassociates.com and draab@raabassociatesinc.com). The technology for this sort of matching gets very complicated very quickly, and it's something that specialized vendors offer either for purchase or as a service. So CDP vendors can quite reasonably argue they needn't build this for themselves but should simply integrate an external product. (The sketch after this list also includes a crude similarity check to show the basic idea.)

- Much identity resolution requires external data. This is the heart of the matter.  Most of the really interesting identity resolution today involves linking different devices or linking across channels when there’s no known connection. This sort of “probabilistic” linking is generally done by vendors who capture huge amounts of behavioral data by tracking visitors to popular Web sites or users of popular mobile applications, or by gathering deterministic links from many different sources. They then build giant databases (or "graphs" if you want to sound trendy) with these connections.  Even matching of offline names and addresses usually requires external data, both to standardize the inputs (to make fuzzy matching more accurate) and to incorporate information such as address and name changes that cannot be known by inspecting the data itself.  In all these situations, marketers need to use the external vendors’ data to find connections that don’t exist within the marketers’ own, much more limited information. If the external vendor provides matching functions in addition to the data, the CDP is relieved of the need to do the matching internally.
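
To show what the basic capability amounts to, here is a minimal sketch of deterministic identity stitching, with a crude string-similarity check standing in for the "fuzzy" side. It illustrates the general techniques only, not any particular CDP's implementation, and all the identifiers and thresholds are invented.

```python
from difflib import SequenceMatcher


class IdentityGraph:
    """Deterministic identity stitching: identifiers observed together
    (e.g., an email and a phone number on the same registration form)
    are merged into one customer cluster via union-find."""

    def __init__(self):
        self.parent = {}

    def _find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path compression
            x = self.parent[x]
        return x

    def link(self, id_a, id_b):
        """Record a definite connection between two identifiers."""
        ra, rb = self._find(id_a), self._find(id_b)
        if ra != rb:
            self.parent[rb] = ra

    def same_person(self, id_a, id_b):
        return self._find(id_a) == self._find(id_b)


def looks_similar(a, b, threshold=0.85):
    """Crude stand-in for fuzzy matching: flag two strings (names, emails,
    addresses) whose similarity ratio exceeds a threshold. Real fuzzy
    matching adds standardization, reference data, and per-field rules."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold


if __name__ == "__main__":
    graph = IdentityGraph()
    # Hypothetical deterministic observations:
    graph.link("cookie:abc123", "email:jane@example.com")       # login on laptop
    graph.link("device:phone-42", "email:jane@example.com")     # login on phone
    graph.link("email:jane@example.com", "phone:+1-555-0100")   # registration form

    print(graph.same_person("cookie:abc123", "device:phone-42"))
    print(looks_similar("draab@raabassociates.com", "draab@raabassociatesinc.com"))
```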

In short, there’s a surprisingly strong case that identity resolution isn’t a required feature in a CDP.  All the CDP really needs is basic stitching and connections to external services for more advanced approaches.  As cross-device and cross-channel matching become more important, CDPs will be more reliant on external vendors no matter what capabilities they’ve built for themselves. One important qualifier is the CDP implementation team still needs expertise in matching, so they can help clients set it up properly. But while it’s great to find a CDP vendor with its own matching technology, lack of that technology shouldn’t exclude a vendor from being considered a CDP.

Thursday, November 09, 2017

No, Users Shouldn't Write Their Own Software

Salesforce this week announced “myEinstein” self-service artificial intelligence features to let non-technical users build predictive models and chatbots. My immediate reaction was that's a bad idea: top-of-the-head objections include duplicated effort, wasted time, and the potential for really bad results. I'm sure I could find other concerns if I thought about it, but today’s world brings a constant stream of new things to worry about, so I didn’t bother. But then today’s news described an “Everyone Can Code” initiative from Apple, which raised essentially the same issue in even clearer terms: should people create their own software?

I thought this idea had died a well-deserved death decades ago. There was a brief period when people thought that "computer literacy" would join reading, writing, and arithmetic as basic skills required for modern life. But soon they realized that you can run a computer using software someone else wrote!* That made the idea of everyone writing their own programs seem obviously foolish – specifically because of duplicated effort, wasted time, and the potential for really bad results. It took IT departments much longer to come around to the notion of buying packaged software instead of writing their own but even that battle has now mostly been won. Today, smart IT groups only create systems to do things that are unique to their business and provide significant competitive advantage.

But the idea of non-technical workers creating their own systems isn't just about packaged vs. self-written software. It generally arises from a perception that corporate systems don’t meet workers’ needs: either because the corporate systems are inadequate or because corporate IT is hard to work with and has other priorities. Faced with such obstacles to getting their jobs done, the more motivated and technically adept users will create their own systems, often working with tools like spreadsheets that aren’t really appropriate but have the unbeatable advantage of being available.

Such user-built systems frequently grow to support work groups or even departments, especially at smaller companies. They’re much disliked by corporate IT, sometimes for turf protection but mostly because they pose very real dangers to security, compliance, reliability, and business continuity. Personal development on a platform like myEinstein poses many of the same risks, although the data within Salesforce is probably more secure than data held on someone’s personal computer or mobile phone.

Oddly enough, marketing departments have been a little less prone to this sort of guerilla IT development than some other groups. The main reason is probably that modern marketing revolves around customer data and customer-facing systems, which are still managed by a corporate resource (not necessarily IT: could be Web development, marketing ops, or an outside vendor). In addition, the easy availability of Software as a Service packages has meant that even rogue marketers are using software built by professionals. (Although once you get beyond customer data to things like planning and budgeting, it’s spreadsheets all the way.)

This is what makes the notion of systems like myEinstein so dangerous (and I don’t mean to pick on Salesforce in particular; I’m sure other vendors have similar ideas in development). Because those systems are directly tied into corporate databases, they remove the firewall that (mostly) separated customer data and processes from end-user developers. This opens up all sorts of opportunities for well-intentioned workers to cause damage.

But let's assume there are enough guardrails in place to avoid the obvious security and customer treatment risks. Personal systems have a more fundamental problem: they're personal. That means they can only manage processes that are within the developer's personal control. But customer experiences span multiple users, departments, and systems. This means they must be built cooperatively and deployed across the enterprise. The IT department doesn't have to be in charge but some corporate governance is needed. It also means there's significant complexity to manage, which requires trained professionals to oversee the process. The challenges and risks of building complex systems are simply too great to let individual users create them on their own.

None of this should be interpreted to suggest that AI has no place in marketing technology. AI can definitely help marketers manage greater complexity, for example by creating more detailed segmentations and running more optimization tests than humans can manage by themselves. AI can also help technology professionals by taking over tasks that require much skill but limited creativity: for example, see Qubole, which creates an “autonomous data platform" that is “context-aware, self-managing, and self-learning”. I still have little doubt that AI will eventually manage end-to-end customer experiences with little direct human input (although still under human supervision and, one hopes, with an occasional injection of human insight). Indeed, recent discussions of AI systems that create other AI systems suggest autonomous marketing systems might be closer than it seems.

Of course, self-improving AI is the stuff of nightmares for people like Nick Bostrom, who suspect it poses an existential threat to humanity. He may well be right but it’s still probably inevitable that marketers will unleash autonomous marketing systems as soon as they’re able. At that point, we can expect the AI to quickly lock out any personally developed myEinstein-type systems because they won’t properly coordinate with the AI’s grand scheme. So perhaps that problem will solve itself.

Looking still further ahead, if the computers really take over most of our work, people might take up programming purely as an amusement. The AIs would presumably tolerate this but carefully isolate the human-written programs from systems that do real work, neatly reversing the “AI in a box” isolation that Bostrom and others suggest as a way to keep the AIs from harming us. It doesn’t get much more ironic than that: everyone writing programs that computers ignore completely. Maybe that’s the future Apple’s “Everyone Can Code” is really leading up to.

____________________________________________________________
*Little did we know.  It turned out that far from requiring a new skill, computers reduced the need for reading, writing, and math.

Monday, November 06, 2017

TrenDemon and Adinton Offer Attribution Options

I wrote a couple weeks ago about the importance of attribution as a guide for artificial intelligence-driven marketing. One implication was I should pay more attention to attribution systems. Here’s a quick look at two products that tackle different parts of the attribution problem: content measurement and advertising measurement.

TrenDemon

Let’s start with TrenDemon. Its specialty is measuring the impact of marketing content on long B2B sales cycles. It does this by placing a tag on client Web sites to identify visitors and track the content they consume, and then connecting client CRM systems to find which visitor companies ultimately made a purchase (or reached some other user-specified goal). Visitors are identified by company using their IP address and as individuals by tracking cookies.

TrenDemon does a bit more than correlate content consumption and final outcomes. It also identifies when each piece of content is consumed, distinguishing between the start, middle, and end of the buying journey. It also looks at other content metrics such as how many people read an item, how much time they spend with it, and how many read something else after they’re done. These and other inputs are combined to generate an attribution score for each item. The system uses the score to identify the most effective items for each journey stage and to recommend which items should be presented in the future.
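
TrenDemon doesn't publish its scoring formula, so the sketch below is purely illustrative: a generic weighted combination of the kinds of inputs just described (journey stage, readership, time spent, continuation rate, and eventual buyers touched). The field names and weights are assumptions, not TrenDemon's method.

```python
# Illustrative only: a generic weighted content-attribution score built from
# the kinds of inputs described above. Weights and fields are assumptions.
STAGE_WEIGHTS = {"start": 1.0, "middle": 1.2, "end": 1.5}


def content_score(item):
    """item: journey stage, readers, average minutes spent, continuation rate
    (share who read something else afterward), and buyers touched (readers at
    companies that later reached the goal)."""
    engagement = (item["readers"] * 0.2
                  + item["avg_minutes"] * item["readers"] * 0.1
                  + item["continuation_rate"] * item["readers"] * 0.3)
    outcome = item["buyers_touched"] * 5.0
    return STAGE_WEIGHTS[item["stage"]] * (engagement + outcome)


example = {"stage": "middle", "readers": 400, "avg_minutes": 3.2,
           "continuation_rate": 0.25, "buyers_touched": 12}
print(round(content_score(example), 1))
```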

Pricing for TrenDemon starts at $800 per month. The system was launched in early 2015 and is currently used by just over 100 companies.

Adinton

Next we have Adinton, a Barcelona-based firm that specializes in attribution for paid search and social ads. Adinton has more than 55 clients throughout Europe, mostly selling travel and insurance online. Such purchases often involve multiple Web site visits but still have a shorter buying cycle than complex B2B transactions.

Adinton has pixels to capture Web ad impressions as well as Web site visits. Like TrenDemon, it tracks site visitors over time and distinguishes between starting, middle, and finishing clicks. It also distinguishes between attributed and assisted conversions. When possible, it builds a unified picture of each visitor across devices and channels.

The system uses this data to calculate the cost of different click types, which it combines to create a "true" cost per action for each ad purchase. It compares this with the clients' target cost per action to determine where they are over- or under-investing.
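
As a rough illustration of that comparison (not Adinton's actual calculation), the snippet below derives a "true" cost per action from spend and attributed conversions and checks it against a target CPA; the campaign names and numbers are invented.

```python
# Illustrative only: compare an attributed cost per action against a target CPA.
campaigns = {
    # spend and attributed conversions (including fractional assist credit) are invented
    "adwords_brand":   {"spend": 5000.0, "attributed_conversions": 90.0, "target_cpa": 60.0},
    "facebook_retarg": {"spend": 3000.0, "attributed_conversions": 40.0, "target_cpa": 70.0},
}

for name, c in campaigns.items():
    true_cpa = c["spend"] / c["attributed_conversions"]
    verdict = "under-investing" if true_cpa < c["target_cpa"] else "over-investing"
    print(f"{name}: true CPA {true_cpa:.2f} vs target {c['target_cpa']:.2f} -> {verdict}")
```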

Adinton has API connections to gather data from Google AdWords, Facebook Ads, Bing Ads, AdRoll, RocketFuel, and other advertising channels. An autobidding system can currently adjust bids in AdWords and will add Facebook and Bing adjustments in the near future. The system also does keyword research and click fraud identification. Pricing is based on number of clicks and starts as low as $299 per month for attribution analysis, with additional fees for autobidding and click fraud modules. Adinton was founded in 2013.  It launched its first product in 2014 although attribution came later.

Further Thoughts

These two products are chosen almost at random, so I wouldn't assign any global significance to their features. But it's still intriguing that both add a first/middle/last buying stage to the analysis. It's also interesting that they occupy a middle ground between totally arbitrary attribution methodologies, such as first touch/last touch/fractional credit, and advanced algorithmic methods that attempt to calculate the true incremental impact of each touch. (Note that neither TrenDemon's nor Adinton's summary metric is presented as estimating incremental value.)
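
For readers who want to see the contrast, here's a generic sketch of the "clearly arbitrary" methods: first touch, last touch, even fractional credit, and a position-based split that weights the first and last touches more heavily, loosely analogous to the start/middle/end staging both vendors use. This is textbook logic, not either vendor's algorithm, and the sample journey is invented.

```python
def first_touch(touches):
    return {touches[0]: 1.0}


def last_touch(touches):
    return {touches[-1]: 1.0}


def fractional(touches):
    # Even (linear) credit across every touch.
    share = 1.0 / len(touches)
    credit = {}
    for t in touches:
        credit[t] = credit.get(t, 0.0) + share
    return credit


def position_based(touches, first=0.4, last=0.4):
    """U-shaped credit: fixed shares to the first and last touches,
    the remainder spread evenly across the middle."""
    if len(touches) == 1:
        return {touches[0]: 1.0}
    credit = {}
    credit[touches[0]] = credit.get(touches[0], 0.0) + first
    credit[touches[-1]] = credit.get(touches[-1], 0.0) + last
    middle = touches[1:-1]
    remainder = 1.0 - first - last
    if middle:
        share = remainder / len(middle)
        for t in middle:
            credit[t] = credit.get(t, 0.0) + share
    else:  # only two touches: split the remainder between them
        credit[touches[0]] += remainder / 2
        credit[touches[-1]] += remainder / 2
    return credit


journey = ["paid search", "blog post", "webinar", "retargeting ad"]  # invented journey
for rule in (first_touch, last_touch, fractional, position_based):
    print(rule.__name__, rule(journey))
```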

Of course, without true incremental value, neither system can claim to develop an optimal spending allocation. One interpretation might be that few marketers are ready for a full-blown algorithmic approach but many are open to something more than the clearly-arbitrary methods. So perhaps systems like TrenDemon and Adinton offer a transitional stage for marketers (and marketing AI systems) that will eventually move to a more advanced approach.

An alternative view would be that the algorithmic methods will never be reliable enough to be widely accepted. In that case, these intermediate systems are about as far as most marketers ever will, or should, go toward measuring marketing program impact. Time will tell.

Sunday, October 29, 2017

Flytxt Offers Broad and Deep Customer Management

Some of the most impressive marketing systems I’ve seen have been developed for mobile phone marketing, especially for companies that sell prepaid phones.  I don’t know why: probably some combination of intense competition, easy switching when customers have no subscription, location as a clear indicator of varying needs, immediately measurable financial impact, and lack of legacy constraints in a new industry. Many of these systems have developed outside the United States, since  prepaid phones have a smaller market share here than elsewhere.

Flytxt is a good example. Founded in India in 2008, its original clients were South Asian and African companies whose primary product was text messaging. The company has since expanded in all directions: it has clients in 50+ countries including South America and Europe plus a beachhead in the U.S.; its phone clients sell many more products than text; it has a smattering of clients in financial services and manufacturing; and it has corporate offices in Dubai and headquarters in the Netherlands.

The product itself is equally sprawling. Its architecture spans what I usually call the data, decision, and delivery layers, although Flytxt uses different language. The foundation (data) layer includes data ingestion from batch and real-time sources with support for structured, semi-structured and unstructured data, data preparation including deterministic identity stitching, and a Hadoop-based data store. The intelligence (decision) layer provides rules, recommendations, visualization, packaged and custom analytics, and reporting. The application (delivery) layer supports inbound and outbound campaigns, a mobile app, and an ad server for clients who want to sell ads on their own Web sites.

To be a little more precise, Flytxt’s application layer uses API connectors to send messages to actual delivery systems such as Web sites and email engines.  Most enterprises prefer this approach because they have sophisticated delivery systems in place and use them for other purposes beyond marketing messaging.

And while we're being precise: Flytxt isn't a Customer Data Platform because it doesn't give external systems direct access to its unified customer data store.  But it does provide APIs to extract reports and selected data elements and can build custom connectors as needed. So it could probably pass as a CDP for most purposes.

Given the breadth of Flytxt's features, you might expect the individual features to be relatively shallow. Not so. The system has advanced capabilities throughout. Examples include anonymizing personally identifiable information before sharing customer data; multiple language versions attached to a single offer; rewards linked to offers; contact frequency limits by channel across all campaigns; rule- and machine learning-based recommendations; six standard predictive models plus tools to create custom models; automated control groups in outbound campaigns; real-time event-based program triggers; and a mobile app with customer support, account management, chat, personalization, and transaction capabilities. The roadmap is also impressive, including automated segment discovery and autonomous agents to find next best actions.

What particularly caught my eye was Flytxt's ability to integrate context with offer selection.  Real-time programs are connected to touchpoints such as a Web site.  When a customer appears, Flytxt identifies the customer, looks up her history and segment data, infers intent from the current behavior and context (such as location), and returns the appropriate offer for the current situation.  The offer and message can be further personalized based on customer data.

This ability to tailor behaviors to the current context is critical for reacting to customer needs and taking advantage of the opportunities those needs create. It’s not unique to Flytxt but it's also not standard among customer interaction systems. Many systems could probably achieve similar outcomes by applying standard offer arbitration techniques, which generally define the available offers in a particular situation and pick the highest value offer for the current customer. But explicitly relating the choice to context strikes me as an improvement because it clarifies what marketers should consider in setting up their rules.
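
That standard arbitration pattern is easy to sketch: filter the offer catalog down to what's eligible in the current context, score each remaining offer for this customer, and return the top scorer. The sketch below is a generic illustration built on invented offers, rules, and customer data, not Flytxt's implementation.

```python
# Generic offer arbitration sketch: eligibility rules filter by context,
# then the highest-scoring eligible offer wins. All offers, rules, and
# customer data below are invented for illustration.
OFFERS = [
    {"name": "airport lounge data pass", "channels": {"app"}, "locations": {"airport"},
     "segments": {"frequent_traveler"}, "base_value": 8.0},
    {"name": "weekend data booster", "channels": {"web", "app"}, "locations": None,
     "segments": None, "base_value": 5.0},
    {"name": "win-back discount", "channels": {"web"}, "locations": None,
     "segments": {"churn_risk"}, "base_value": 9.0},
]


def eligible(offer, context, customer):
    if context["channel"] not in offer["channels"]:
        return False
    if offer["locations"] and context.get("location") not in offer["locations"]:
        return False
    if offer["segments"] and not (offer["segments"] & customer["segments"]):
        return False
    return True


def score(offer, customer):
    # Simple value model: base value weighted by the customer's predicted response.
    return offer["base_value"] * customer["response_propensity"].get(offer["name"], 0.5)


def next_best_offer(context, customer):
    candidates = [o for o in OFFERS if eligible(o, context, customer)]
    return max(candidates, key=lambda o: score(o, customer), default=None)


customer = {"segments": {"frequent_traveler"},
            "response_propensity": {"airport lounge data pass": 0.9,
                                    "weekend data booster": 0.4}}
context = {"channel": "app", "location": "airport"}
best = next_best_offer(context, customer)
print(best["name"] if best else "no eligible offer")
```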

On the other hand, Flytxt doesn't place its programs or offers into the larger context of the customer lifecycle.  This means it's up to marketers to manually ensure that messages reflect consistent treatment based on the customer's lifecycle stage.  Then again, few other products do this either...although I believe that will change fairly soon as the need for the lifecycle framework becomes more apparent.

Flytxt currently has more than 100 enterprise clients. Pricing is based on number of customers, revenue-gain sharing, or both. Starting price is around $300,000 per year and can reach several million dollars.

Sunday, October 22, 2017

When to Use a Proof of Concept in Marketing Software Selection -- And When Not

“I used to hate POCs (Proof of Concepts) but now I love them,” a Customer Data Platform vendor told me recently. “We do POCs all the time,” another said when I raised the possibility on behalf of a client.

Two comments could be a coincidence.  (Three make a Trend.)  But, as the first vendor indicated, POCs have traditionally been something vendors really disliked. So even the possibility that they’ve become more tolerable is worth exploring.

We should start by defining the term.  Proof of Concept is a demonstration that something is possible. In technology in general, the POC is usually an experimental system that performs a critical function that had not previously been achieved.  A similar definition applies to software development. In the context of marketing systems, though, a POC is usually not so much an experiment as a partial implementation of an existing product.  What's being proven is the system's ability to execute key functions on the buyer's own data and/or systems. The distinction is subtle but important because it puts the focus on meeting the client's needs.  

Of course, software buyers have always watched system demonstrations.  Savvy buyers have insisted that demonstrations execute scenarios based on their own business processes.  A carefully crafted set of scenarios can give a clear picture of how well a system does what the client wants.  Scenarios are especially instructive if the user can operate the system herself instead of just watching a salesperson.  What scenarios don’t illustrate is loading a buyer’s data into the system or the preparation needed to make that data usable. That’s where the POC comes in.

The cost of loading client data was the reason most vendors disliked POCs. Back in the day, it required detailed analysis of the source data and hand-tuning of the transformation processes to put the data into the vendor’s database.  Today this is much easier because source systems are usually more accessible and marketing systems – at least if they’re Customer Data Platforms – have features that make transformation and mapping much more efficient.

The ultimate example of easier data loads is the one-click connection between many marketing automation and CRM "platforms" and applications that are pre-integrated with those platforms. The simplicity is possible because the platforms and the apps are cloud-based, Software as a Service products.  This means there are no custom implementations or client-run systems to connect. Effortless connections let many vendors offer free trials, since little or no vendor labor is involved in loading a client's data.

In fact, free trials are problematic precisely because so little work goes into setting them up. Some buyers are diligent about testing their free trial system and get real value from the experience. But many set up a free trial and then don't use it, or use it briefly without putting in the effort to learn how the system works.  This means that all but the simplest products don’t get a meaningful test and users often underestimate the value of a system because they haven’t learned what it can do.

POCs are not quite the same as free trials because they require more effort from the vendor to set up.  In return, most vendors will require a corresponding effort from the buyer to test the POC system.  On balance that’s a good thing since it ensures that both parties will learn from the project.

Should a POC be part of every vendor selection process? Not at all.  POCs answer some important questions, including how easily the vendor can load source data and what it’s like to use the system with your own data.  A POC makes sense when those are critical uncertainties.  But it’s also possible to answer some of those questions without a POC, based on reviews of system documentation, demonstrations, and scenarios. If a POC can’t add significant new information, it’s not worth the time and trouble.

Also remember that the POC loads only a subset of the buyer’s data. This means it won't show how the system handles other important tasks including  matching customer identities across systems, resolving conflicts between data from different sources, and aggregating data from multiple systems. Nor will working with sample data resolve questions about scalability, speed, and change management. The POC probably won’t include fine-tuning of data structures such as summary views and derived variables, even though these can greatly impact performance. Nor will it test advanced features related to data access by external systems.

Answering those sorts of questions requires a more extensive implementation.  This can be done with a pilot project or during initial phases of a production installation. Buyers with serious concerns about such requirements should insist on this sort of testing or negotiate contracts with performance guarantees to ensure they’re not stuck with an inadequate solution.

POCs have their downsides as well. They require time and effort from buyers, extend the purchasing process, and may limit how many systems are considered in depth.  They also favor systems that are easy to deploy and learn, even though such systems might lack the sophistication or depth of features that will ultimately be more important for success.

In short, POCs are not right for everyone. But it’s good to know they’re more available than before. Keep them in mind as an option when you have questions that a POC is equipped to answer.