July 31, 2019
Part I: Marketers Need to Educate Themselves Now on the Issues Surrounding Big Tech and Potential Data Issues
Mark Robinson

Thoughtful federal privacy laws are needed to avoid potentially disastrous impacts on our industry.

CHANGE IS COMING

I’m increasingly troubled by the drumbeat of voices advocating for the breakup of “big tech,” specifically Google, Facebook and Amazon. Data-related privacy breaches, algorithmic censorship and market domination have politicians ramping up their rhetoric and staking out positions on all of these issues. For the same reasons, the FTC and the U.S. Justice Department are gearing up for investigations related to antitrust and privacy. And The Wall Street Journal, The New York Times, Ad Age and many other media and business outlets are covering this story with keen interest. But as all of this plays out in the news, I hear plenty of half-truths and factual omissions that, in my opinion, target the digital marketing industry somewhat unfairly and have the potential to negatively impact businesses and consumers alike.

BIG TECH AND ANTITRUST

Specifically, on the issue of antitrust, I feel like I need a few more things in order to provide a well-informed opinion. Things like…a law degree, a list of all of the business divisions for each of the companies being scrutinized, documentation surrounding each division’s organic conception or acquisition, a list of competitors for each of these divisions, a history of market-share percentages for each of the divisions and an analysis of each division’s long-term ability to charge above-market prices or limit competition.

To be honest, it sounds like more than I can accomplish in the time I’ve currently allocated to this particular topic. So, I’m thinking that I’ll leave that well-informed opinion for another time. And maybe by the time I’m ready to tackle it, I’ll also have an answer to the question that’s really been in the back of my mind: “If the ‘algorithmic censorship’ in question benefited certain political powers (as opposed to having the potential to negatively impact future elections for the same individuals), would the looming antitrust investigations actually be looming at all?”

THOUGHTFUL PRIVACY LAWS NEEDED

But on the topic of privacy, I can’t imagine there’s much of an argument against the need to address the issue. Currently, there’s no single, federal-level set of governing laws in the United States that informs consumers of their privacy rights. And because of the issues highlighted by high-profile examples like Cambridge Analytica’s misuse of Facebook data, it’s clear that there’s a need for a comprehensive set of federal laws to guide companies as well. And while I’ll be the first to say that the issue needs to be addressed, I’m also concerned that there’s a risk of overcompensation that could negatively impact businesses and consumers alike. The pressure to go further is already building, as evidenced in this New York Times article, which states that the “[fines and measures] which the FTC’s commissioners approved in a 3-to-2 vote this month, drew sharp criticism for not going far enough in curbing the data habits of the world’s largest social media company.”

THERE’S NO GOING BACK NOW

Here’s the thing. The genie is out of the bottle. As much as some like to bash these big tech giants as invaders of privacy and killers of competitive options, many businesses — big and small — are utterly dependent on the data collected by the big tech companies. It is the enabler of efficiently targeted, increasingly personalized and highly relevant marketing messages, delivered to the right person, in the right place and at the right time. And it is essential to deliver the budgetary efficiencies that companies now demand from their marketing teams.

DATA ENABLES PERSONALIZATION, AND PERSONALIZATION IS GOOD

But it’s not just about what’s good for business. What the Big Tech Giant Slayers fail to realize is that more personalized and specific targeting can improve online user experiences in a number of ways. And once a person understands that no personally identifiable information is necessary to deliver a personalized experience (including ad targeting), the topic becomes less scary and we can consider all of the benefits that personalization offers. First, through the use of data, algorithms and thoughtful audience targeting, marketing messages can introduce business managers to products and services that could potentially solve a critical business need.

Additionally, using the same techniques, a website that a user visits regularly can identify areas of interest or frequently visited pages and create custom navigation for that user, helping them find the content they’re looking for more quickly. And finally, when I’m looking at my favorite news site, I would rather see an ad promoting the latest SCUBA diving technology or a live-aboard dive cruise (things that I aspire to enjoy someday after my boys have finished consuming all of my money) than an ad for a new pickup truck or a local Shakespeare Festival (random topics in which I have no interest). With these examples in mind, why would anyone choose a non-personalized experience, or choose to see an ad or message that has nothing to do with who they are, what they do or what they’re interested in?
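For the technically curious, here’s a minimal sketch of what PII-free personalization can look like in practice. It isn’t any particular vendor’s implementation; the storage key and category names are hypothetical, and the anonymous visit tally never leaves the visitor’s own browser.

```typescript
// A minimal sketch (hypothetical names, not a vendor implementation) of
// PII-free personalization: visit counts per content category are kept in
// the visitor's own browser via localStorage, so no name, email or other
// personally identifiable information is involved.

const STORAGE_KEY = "categoryVisitCounts"; // hypothetical first-party storage key

type VisitCounts = Record<string, number>;

function loadCounts(): VisitCounts {
  const raw = localStorage.getItem(STORAGE_KEY);
  return raw ? (JSON.parse(raw) as VisitCounts) : {};
}

// Call once per page view with the page's content category, e.g. "scuba-travel".
function recordVisit(category: string): void {
  const counts = loadCounts();
  counts[category] = (counts[category] ?? 0) + 1;
  localStorage.setItem(STORAGE_KEY, JSON.stringify(counts));
}

// Return the visitor's most-viewed categories so the site can surface
// shortcut navigation to the sections they actually use.
function topCategories(limit = 3): string[] {
  return Object.entries(loadCounts())
    .sort(([, a], [, b]) => b - a)
    .slice(0, limit)
    .map(([category]) => category);
}

// Example: build a "Your shortcuts" navigation strip.
recordVisit("scuba-travel");
console.log("Personalized nav:", topCategories());
```

A “your shortcuts” menu built this way knows which sections a visitor uses most, but it never knows who that visitor is.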

FACEBOOK ALLOWED WHAT, NOW?

Actually, it’s not too difficult to understand why some might err on the side of caution, choosing anonymity over a customized user experience, especially those who hear stories like the Cambridge Analytica scandal. In this example, data was obtained by a third-party developer through an online personality quiz that a user could take by logging in through the Facebook Login API (a tool that lets people simply log in to a website or app using their Facebook account instead of creating new credentials). Upon login, the Facebook platform notified the user that the developer would be given access to the user’s data, as well as the data of the user’s Facebook friends. According to the Cambridge Analytica whistleblower, that data was then shared with Cambridge Analytica for psychographic profiling and the microtargeting of audience segments with test messaging and imagery related to a “deep state” government conspiracy, NSA spying, people scaling border walls and more, with the ultimate goal of changing “people’s perception of what was happening in America to make them more open to an alt-right vision.”
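For readers curious about what that login step looks like from the developer’s side, here’s a simplified, illustrative sketch using the Facebook JavaScript SDK. The app ID is a placeholder and the permissions shown are examples only; the point is that everything a third-party app can read is governed by the scopes the user approves in that single login dialog (and, under the platform rules of that era, those scopes could reach friends’ data as well).

```typescript
// Illustrative only: a scoped login request with the Facebook JavaScript SDK.
// "YOUR_APP_ID" is a placeholder and the permission list is an example; the
// data a third-party app can read is limited to the scopes the user approves
// in the login dialog that this call opens.

declare const FB: any; // global object provided by the Facebook JS SDK script tag

FB.init({ appId: "YOUR_APP_ID", version: "v2.0" }); // version shown for illustration

FB.login(
  (response: any) => {
    if (response.authResponse) {
      // Once the user clicks through, the app can call the Graph API for
      // whatever the granted permissions cover.
      FB.api("/me", { fields: "id,name" }, (me: any) => {
        console.log("Quiz taker logged in as:", me.name);
      });
    }
  },
  { scope: "public_profile,user_friends" } // the permissions requested at login
);
```

That one consent dialog was the entire gate between a personality quiz and the data it was allowed to harvest.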

WHAT WERE THEY THINKING?

Again, this is why it’s clear that there’s a need for a comprehensive set of federal laws to guide U.S. companies. Even if Facebook directly notified users that the app would be given access to the user’s data, as well as the data of the user’s Facebook friends, how could Facebook think that this was a good idea? In what way would they benefit from sharing their users’ personal data? Why would they think that any company that obtained the data would consider it to be sacred, protecting it from potential misuse? Has anyone even brought up the fact that if Cambridge Analytica had developed and delivered this online quiz themselves, so that the same user data was captured through the same means directly by Cambridge Analytica, none of this would even have violated Facebook’s Terms of Service? And regardless of the fact that they disclosed it in the fine print of their Terms of Service, under what framework of intelligent logic does Facebook believe that one of their users should have the ability to release the personal data of potentially hundreds or thousands of their friends? Not to mention the fact that this release of data was granted for the purpose of gaining access to an online personality quiz. (I’m guessing all of this was considered in the massive $5 billion fine handed down by the FTC.)

CAUTION! DO NOT IRON SHIRT WHILE WEARING

Honestly, I feel like I must be missing some sort of critical detail that would exonerate the naïve mindset (that’s the nicest way I could possibly say it) that the Cambridge Analytica quiz takers must have had. As I noted earlier, the Facebook platform notified each user logging in through the Facebook Login API that the developer would be given access to the user’s data, as well as the data of the user’s Facebook friends. And still, 270,000 users made the choice to log in through Facebook, granting the developer of the quiz access to the profile data of 50 million users! Why would any Facebook user release all of their personal data to a random online quiz developer? Or why would they think it’s OK to grant that same developer access to the personal data of every one of their Facebook friends? Again, unless I’m missing something, the fact that 270,000 users answered these questions wrong by continuing with the quiz delivers a critical insight related to the need for privacy laws. A comprehensive, well-defined framework of regulation is needed, and it’s needed fast, in order to protect our general population from…well…themselves. Suddenly, I’m reminded of every seemingly over-the-top warning label that shouldn’t need to exist but is actually necessary.
