So, here’s an interesting question: how do you apply that to your marketing and to your startup as a whole? What are your marketing white belt techniques? What are the problems your startup runs into most often, and how well do you do at solving them - and critically, at teaching your team to solve them?
Then take a look at the solutions in the marketing technology marketplace. How well do the various tools out there help you solve your most common martech problems? I’d predict the answer is “not very well at all.” Why? When I look at the search queries and social media comments I receive most often, they boil down to one or more of the following: How do I know what’s really working? How do I optimize my website for Google’s algorithm today? How do I show any kind of results from social media marketing? And variations thereof.
These are the problems that millions of marketers face every day. These are the white belt problems - and as an industry, we’re not delivering white belt solutions to them. Here’s the funny thing about white belt techniques in the martial arts: you never, ever stop practicing them.
My senior teachers call it polishing the mirror: making your basics better and better over time. Yes, you learn more techniques and deal with more elaborate problems and their solutions, but if you stop practicing the basics, you lose touch with your most common problems. Consider some of the problems we’re constantly chasing in marketing; let’s use optimizing your website content for search as an example. What’s the fundamental problem? We’ve got more competition than ever (including from Google itself) and we want to be found for search terms relevant to us. What’s the general solution, the white belt basic?
Create content so good that everyone relevant to you wants to share it and link to it. The fundamental, the basic, the white belt problem is that most marketing content really, really sucks. No one cares about it. No one wants it. Now, how many different SEO tools, services, agencies, and team members are focused on solving that basic, that fundamental? The answer is almost none. Tools only help you do more, faster - if you create cruddy content, tools help you create more filler, just faster. If you want your marketing to succeed, if you want your customers to be deliriously happy, if you want to make a great big pile of money, then figure out what the most important, most common problems your marketing is supposed to solve, and go solve them.
The path to black belt begins by being really, really good at the white belt techniques, and the path to becoming a master marketer is solving the biggest problems that plague your customers.
Helping agencies and freelancers prove their value is essential. Recently we discussed how reporting should be used to prove the value of an agency. Agencies are notorious for showing overwhelming amounts of activity, as if to say, “look at all the things we did for you,” but very little in the way of results that matter to the client. So I put together a brand new talk on how agencies can use data-driven marketing to showcase their value and the real results they obtain. Fundamentally, agencies need to take five steps to make this journey:
1. Become data-driven. Done right, making decisions with data lets you act faster and choose better.
2. Be crystal clear about KPIs. What’s a KPI? It’s the number you get your bonus for (or fired for).
3. Build an agency cookbook. Cookbooks set apart good agencies from great ones.
4. Use data to become proactive. Impress clients by being more proactive and pushing them forward.
5. Squeeze all the juice from your tools. You probably don’t need to buy more.
An agency that takes these steps becomes more and more valuable to its clients. For folks on the client side, these are the things you should expect of your agencies, and the things you should ask for when agencies are pitching you. Agencies not doing these things will not serve you as well as they could.
Everything can be measured. The question is whether or not we’re willing to invest the appropriate amount of time, effort, and money to measure well. Let’s take brand as an example. What’s the value or strength of a brand? Brand market research has existed for decades and has proven, unimpeachable techniques for measuring the strength of a brand. For example: telephone-poll thousands of consumers in a representative sample and run unaided recall tests like “Name your favorite brand of soda to drink.”
Branded organic search data is available to most of us. NPS data - Net Promoter Scores - measures customers’ satisfaction and willingness to recommend a company to friends and colleagues after an interaction with our brand. In fact, when you look at the modern voice of the customer, it’s difficult to argue that anything customer-related can’t be measured in some fashion. The honest, ugly reality is that when someone says something can’t be measured in marketing, what they’re really saying is that they’re unwilling to make the necessary investment to measure that thing. Market research, properly done, costs a lot of money - tens of thousands of dollars if you use a good market research firm. NPS data is pricey. Collecting all that data across your enterprise costs time, talent, money, and commitment. Why would someone be unwilling to make the investment to know what’s working?
A few common reasons:
- Sometimes it’s just not in the budget. That’s an unfortunate reality, because we’ve almost uniformly done a bad job of setting expectations about what good measurement costs. If you want your measurement to be best in class, plan to spend 25 cents on the dollar: for every $1 you spend on marketing, budget a quarter for measurement.
- Sometimes it’s because we’re afraid of what measurement will uncover. Measurement, done properly, is unbiased and reveals the good and the bad alike. In many organizations, stakeholders with vested interests in looking good no matter what the facts say can be substantial obstacles to measurement, because they know deep down that they’re all hat, no cattle.
- Sometimes it’s because we’ve failed to explain the value of measurement. If you believe something has little value, you will invest little in it. If we haven’t made a business case for measurement - such as avoiding wasted money - then we will run into substantial headwinds trying to get resources.
The bottom line is that marketing can be measured commensurate with the level of investment in measurement. The more you invest, the more you can measure.
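To make “everything can be measured” concrete, here’s a minimal sketch of the standard Net Promoter Score calculation: promoters score 9-10, detractors 0-6, and NPS is the percentage of promoters minus the percentage of detractors. The survey responses below are made up purely for illustration:

```python
# Minimal NPS sketch: standard definition is % promoters (9-10)
# minus % detractors (0-6), on a -100 to +100 scale.

def net_promoter_score(scores: list[int]) -> float:
    """Compute NPS from a list of 0-10 survey responses."""
    if not scores:
        raise ValueError("no survey responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical data: 4 promoters, 4 passives, 2 detractors
responses = [10, 9, 9, 10, 7, 8, 8, 7, 5, 3]
print(net_promoter_score(responses))  # 20.0
```

The formula is the easy part; the investment is in collecting enough responses, consistently, to make the number trustworthy.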
Fairness is a difficult subject to tackle in business, because people have many different ideas of what constitutes fair treatment. In the context of things like bank loans, citizens’ rights, and hiring, what is fair? The dictionary definition is both straightforward and unhelpful: “impartial and just treatment or behavior without favoritism or discrimination.” So what constitutes fairness in practice?
This is where things get really messy. Broadly, there are four different kinds of fairness, and each has its own implementation, advantages, and pitfalls:
- Blinded: all potentially biasing information is removed, eliminating the ability to discriminate based on the provided data
- Representative parity: samples are built to reflect the demographics of the population
- Equal opportunity: everyone who is eligible gets a shot
- Equal outcome: everyone who is eligible gets the same outcome
For example, let’s say we’re hiring a data scientist, and we want the process to be fair with respect to gender.
We have a population breakdown where 45% identifies as male, 45% identifies as female, and 10% identifies as something else or chooses not to identify. With each of these types of fairness, how would we make the first step of hiring - interviewing - fair?
- Blinded: gender and gender-adjacent data (like first names) are removed from applications.
- Representative parity: our interview pool reflects the population. If we’re in China or India, where there are roughly 115 males for every 100 females, our interview pool should look like that under representative parity.
- Equal opportunity: we interview everyone who meets the hiring criteria until we reach 45% male, 45% female, 10% other.
- Equal outcome: we interview everyone until our second-round candidates are 45% male, 45% female, 10% other.
Each of these scenarios has its drawbacks, either excluding qualified candidates or including unqualified ones. Blinded fairness doesn’t address underlying structural fairness problems: if women feel excluded from data science jobs, the pool of applicants will still reflect that bias, blinded or not. Representative parity doesn’t address the structural problem either, though it does slightly better than blinding alone. Equal opportunity may exclude qualified candidates in the majority, especially if there’s a substantial imbalance in the population, and could potentially include lower-quality candidates in the minority.
Equal outcome may achieve the intended overall quality benchmarks but could take substantially longer to get there - and depending on the imbalance, might not achieve a result in an acceptable timeframe. Why does any of this matter? These choices already mattered when humans like you and me were making the decisions, but they matter far more when machines make them algorithmically, because the type of fairness chosen - and its drawbacks - can have massive, even society-level impacts.
From determining what the minimum wage should be, to who gets hired for a job, to how a supply chain should function, fairness algorithms can either reduce biases or magnify them. How should we be thinking about these kinds of algorithms? We have to approach them by balancing our ethics and values with our business objectives. Our ethics and values will dictate which fairness approach we take.
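To make the distinctions concrete, here’s a minimal sketch of what two of these approaches - blinding and representative parity - might look like in code. All names, fields, and numbers are hypothetical; equal opportunity and equal outcome are stopping rules on the interview process rather than sampling rules, so they’re omitted here:

```python
import random

# Hypothetical applicant records; field names are illustrative only.
applicants = [
    {"name": "A. Chen", "gender": "female", "qualified": True},
    {"name": "B. Okafor", "gender": "male", "qualified": True},
    {"name": "C. Diaz", "gender": "nonbinary", "qualified": True},
    # ...imagine hundreds more records...
]

def blind(applicant: dict) -> dict:
    """Blinded: strip gender and gender-adjacent fields (like names)."""
    return {k: v for k, v in applicant.items() if k not in {"gender", "name"}}

def representative_pool(applicants: list, targets: dict, pool_size: int) -> list:
    """Representative parity: sample an interview pool whose group
    proportions match the population targets, e.g. 45/45/10."""
    pool = []
    for group, share in targets.items():
        members = [a for a in applicants if a["gender"] == group]
        k = min(len(members), round(share * pool_size))
        pool.extend(random.sample(members, k))
    return pool

blinded = [blind(a) for a in applicants]
pool = representative_pool(
    applicants, {"female": 0.45, "male": 0.45, "nonbinary": 0.10}, pool_size=20
)
```

Notice that the code forces you to write your definition of fairness down explicitly - which is exactly the point: the choice of approach is an ethics decision, not an engineering one.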
Many different simulation tools exist that can evaluate a dataset and project likely outcomes under a variety of fairness metrics, like IBM’s AI Fairness 360 Toolkit and Google’s What-If Tool. But the onus to think about and incorporate fairness techniques is on us, the humans, at each stage of decision-making in every business.
And speaking of Google: take a look at the basics of setting up goals in Google Analytics 3 (Universal Analytics, aka what 90% of marketers use) in a way that will save you time when you eventually move to Google Analytics 4. By converting all your GA3 goals to events, you’ll be prepared for Google Analytics 4 and can even begin collecting conversion data today. While Google has not given us an end date for GA3, they made it abundantly clear at the recent Google Marketing Live that anything and everything new will only be in GA4. The longer you wait to get set up, the more of a disadvantage you’ll be at compared to competitors who are feeding it data now for use later.
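Under the hood, GA4’s unit of collection is the event - there is no separate goal type. As a minimal illustration (not a recommendation to abandon gtag.js or Google Tag Manager tagging), here’s what sending a conversion event server-side through GA4’s Measurement Protocol looks like; the measurement ID and API secret are placeholders you’d replace with your own property’s values:

```python
import requests

# Placeholders: find these in GA4 Admin under your web data stream's
# Measurement Protocol API secrets.
MEASUREMENT_ID = "G-XXXXXXXXXX"
API_SECRET = "your_api_secret"

payload = {
    "client_id": "555.666",  # any stable anonymous identifier
    "events": [
        {
            # "generate_lead" is one of GA4's recommended event names
            "name": "generate_lead",
            "params": {"value": 100, "currency": "USD"},
        }
    ],
}

resp = requests.post(
    "https://www.google-analytics.com/mp/collect",
    params={"measurement_id": MEASUREMENT_ID, "api_secret": API_SECRET},
    json=payload,
    timeout=10,
)
print(resp.status_code)  # 204 means GA4 accepted the event
```

Whether you send events this way or through your tag manager, the point stands: events are GA4’s native currency, so converting your GA3 goals to events now means your conversion data starts accruing today.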
And what about “fair market value” for investors and startups? Desmoothing private market returns can dramatically alter volatility and correlation estimates. The lack of transparency in private markets has helped attract institutional investor capital through the potential for outsize returns from an inefficient market. In some ways, however, this opaqueness has created significant challenges for how asset allocators have traditionally built and managed portfolios. Traditional multi-asset portfolio management has relied heavily on modern portfolio theory, which requires accurate estimates of asset classes’ return and risk characteristics and their relationships with other asset classes.
But accurately estimating the risk of private market asset classes is difficult, because investments are only valued by the market when they’re bought or sold. In all other periods, the values of private investments are self-reported by the general partner (GP), based on “fair value” accounting principles. As covered in our new research, fair value accounting has been shown to understate quarter-to-quarter changes in the true value of private investments, which in turn leads to artificially smoothed returns and understated volatility.
For example, after “desmoothing” a quarterly return series for an example company, the estimated annualized volatility increases from 9.8% to 17.1%. The effect of desmoothing on the volatility of VC returns is even greater, more than doubling from 21.1% to 53.2%. For this reason, it’s imperative that asset allocators apply a desmoothing procedure to reported private market return series before calculating volatility, particularly when volatility is an input to asset allocation modeling. Failing to do so can lead investors to misinformed asset allocation decisions that favor larger allocations to private markets.
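The research’s exact desmoothing procedure isn’t reproduced here, but the most common approach is first-order (Geltner-style) desmoothing, which backs out the smoothing implied by the lag-1 autocorrelation of reported returns. A minimal sketch with simulated data, purely for illustration:

```python
import numpy as np

def desmooth(reported: np.ndarray) -> np.ndarray:
    """First-order (Geltner-style) desmoothing:
    r*_t = (r_t - rho * r_{t-1}) / (1 - rho),
    where rho is the lag-1 autocorrelation of reported returns."""
    rho = np.corrcoef(reported[:-1], reported[1:])[0, 1]
    return (reported[1:] - rho * reported[:-1]) / (1 - rho)

def annualized_vol(quarterly: np.ndarray) -> float:
    """Annualize quarterly volatility by the square root of 4."""
    return float(np.std(quarterly, ddof=1) * np.sqrt(4))

# Simulate a "true" return series, then smooth it the way stale
# fair-value marks do, and compare volatilities.
rng = np.random.default_rng(0)
true_r = rng.normal(0.03, 0.08, 40)  # 10 years of quarterly returns
reported = np.empty_like(true_r)
reported[0] = true_r[0]
for t in range(1, len(true_r)):
    reported[t] = 0.5 * reported[t - 1] + 0.5 * true_r[t]  # appraisal smoothing

print(annualized_vol(reported))            # understated volatility
print(annualized_vol(desmooth(reported)))  # closer to the true volatility
```

The exact multiplier depends on the estimated autocorrelation, but the direction is always the same: reported private market series understate volatility, and desmoothing corrects for that before the numbers feed an asset allocation model.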
Business in the new economy is no picnic and often unfair, so stay on top of your game and practice fairness whenever possible. Keep these tips in mind as you build your business base, fair and square.