April 23, 2024

Beyond Faster Horses (1 of 2): AI's Role in Disrupting Marketing

Join Dom Hawes as he dives deep into the transformative role of AI in marketing with industry experts Steven Millman and Jonathan Harris. Discover the metaphorical shift from "building faster horses" to "creating cars," illustrating how AI is set to revolutionise business practices beyond mere efficiency improvements.

The trio discusses the potential of AI to replace traditional marketing processes with innovative models that drastically change how businesses interact with consumers.

Tune in to explore how AI is not just accelerating existing processes but creating entirely new pathways for value creation and strategic marketing.

About Jonathan Harris 

Jonathan is the founder and CEO of Sub(x), a marketing technology provider that uses AI automation to drive revenue, growth and customer acquisition for digital subscription businesses. 

 A former investment banker at Morgan Stanley and Merrill Lynch, Jonathan launched and exited three previous businesses, and has 10 years’ experience in the application of data science in the marketing technology sector. 

 Sub(x) is transforming digital marketing using a proprietary self-learning AI autopilot, enabling businesses to optimise their online customer acquisition and revenue without the need for manual testing and experimentation. 

About Steven Millman 

Executive, Award-Winning Researcher/Data Scientist, Innovator, Inventor & Coffee Snob. Throughout Steven's career he's had a focus on quantitative/statistical analysis, survey design, research design, AI/ML, and other applied research techniques. Steven is presently serving as Global Head of Research and Data Science at Dynata, the world's sixth largest market research company, where he leads a team of over 100 researchers and data scientists. Steven is a frequent speaker and author, multiple Ogilvy award winner, patent holder, and recipient of the prestigious Chairman's Prize from the Publishing & Data Research Forum. Steven serves as a member of the Board of Trustees for the Advertising Research Foundation (ARF).

Links 

Full show notes: Unicorny.co.uk  

LinkedIn: Jonathan Harris | Steven Millman | Dom Hawes  

Website: Sub(x) | Dynata 

Sponsor: Selbey Anderson  

 

Related Unicorny episodes: 

A/B seeing ya! Is AI the end of split testing? with Julian Thorne 

"It’s a language model, stupid". How marketing should and shouldn’t use AI with Steven Millman 

Everything, Everywhere, All at Once with Steven Millman

 

 Related Marketing Trek episodes: 

Data Ethics in Marketing with Steven Millman 

Breaking The Cookie Jar with Steven Millman 

 

Other items referenced in this episode: 

The Mathematics of Machine Learning by Wale Akinfaderin  

 

Episode outline

The Role of AI in Marketing  
Dom Hawes introduces the podcast and discusses the impact of AI in marketing with Jonathan Harris, CEO of Sub(x), and Steven Millman, Global Head of Research and Data Science at Dynata.
 
Moving from Process-Driven to Outcome-Driven  
Jonathan Harris emphasizes the shift from a process-driven to an outcome-driven ecosystem in marketing, highlighting the importance of understanding the problem space and data inputs for AI to deliver value. 
 
Reimagining the Marketing Process  
The focus shifts to reimagining the marketing process and adopting a broader approach to AI implementation, aligning with the ultimate goal of selling more at a higher margin. 
 
Understanding AI Tools and Processes  
Steven Millman delves into the complexity and simplicity of AI tools and processes, emphasizing the need to understand the inputs and problem statements before leveraging AI effectively. 
 
Identifying Recurrent Problems in AI Implementation  
The conversation puts a focus on the fundamental problem of poorly understood inputs, lack of transparency, and competing objectives in AI implementation across multiple channels. 
 
Probabilistic Outputs and Input Quality  
The discussion focuses on the role of human-created inputs in AI tools like ChatGPT, emphasizing the need for marketers to ensure the best possible inputs for their AI tools. 
 
Large Language Models and Data Quality  
The conversation delves into the potential drawbacks of training large language models on too much data of questionable quality, and the need to consider the potential impact of data quantity on model performance. 
 
Caution with Synthetic Data  
The guests discuss the use of synthetic data in AI and the importance of cautious testing to ensure its effectiveness, highlighting the prevalence of snake oil solutions in the current market. 
 

The Irrelevance of the Presentation Layer  
Jonathan Harris discusses the irrelevance of the presentation layer in content consumption, emphasizing the importance of editorial teams in curating popular content. 



This podcast uses the following third-party services for analysis:

Podder - https://www.podderapp.com/privacy-policy
Chartable - https://chartable.com/privacy

Transcript

PLEASE NOTE: This transcript has been created using fireflies.ai – a transcription service. It has not been edited by a human and therefore may contain mistakes 

 
00:03 
Dom Hawes 
Welcome to Unicorny. This is a podcast about the business of marketing, how to create value, and how you can help your business win the future. And I'm your host, Dom Hawes. Not so long ago, I was out at lunch with Jonathan Harris, who is CEO of Sub(x), and we were talking about artificial intelligence in marketing, and he asked me what I was seeing, what I was hearing, and the discussions we were having generally around the topic. So I talked to him about acceleration, about optimization, about efficiency, and about how I thought AI might ultimately replace parts of the process that are currently very human. And then, channeling his inner Henry Ford, Jonathan said, these things are all about building faster horses. Someone somewhere is building a car. Oh, my God, that was a great point. It hit me really hard. Hard enough.  

 
00:56 
Dom Hawes 
I went straight back to my own office, back to my innovation team, to pose the same question, like, hey, guys, are we just building faster horses? Now, assuming the objective is to get from one place to another, well, faster horses are generally better than slow horses. So creating faster horses in itself, that's not a problem. So I went and spoke to another all-round AI, tech, and marketing guru who's been on this podcast before, Steven Millman. And he pointed out to me that you can only build a car if you've already invented the internal combustion engine. Well, I haven't. And as far as I know, when we recorded this episode, certainly no one else has. It's probably gonna happen, and when it does, then my faster horses, well, they won't be all that, but for the time being, I think we're okay.  

 
01:42 
Dom Hawes 
But that's what got me to think about this episode that you're about to listen to. Like, we all touch AI every day now, but how often do you get to think really deeply about what it is, how it works, and how it's going to affect us all? So I figured we could go for a really deep conceptual conversation about the thing hanging over all of our heads. And I knew the two absolutely ideal people to lean on, the same two I've just mentioned. Now, Steven Millman should be no stranger to your ears if you're a follower of our podcasts, both this one and Marketing Trek. He is an award-winning researcher and data scientist. He is chair of the Advertising Research Foundation's workstream on artificial intelligence, and he is Dynata's global head of research and data science.  

 
02:29 
Dom Hawes 
And Jonathan Harris, him who I was at lunch with, came into my view after I met and interviewed his colleague on this very podcast, Julian Thorne. Julian starred in the episode, by the way, called "A/B seeing ya!", when we discussed how AI was signaling an end to the A/B split test. Now, I reference that later on in today's episode without explaining it properly, but now you know the reference. Anyhow, look, Jonathan's a really big thinker. He founded Sub(x) in 2013. He's deep into AI and its impact on commercial models. So what I've got for you today is a really special treat: two mega brains discussing AI at a conceptual level to give the rest of us pointers towards the future. Now, I only just hung on to this conversation honestly by my fingertips, but I think I did it.  

 
03:16 
Dom Hawes 
And if you're looking to have the grey matter tested, I am delighted to bring you three-Michelin-star food for thought.  

 
03:23 
Dom Hawes 
Let's go meet Steven and Jonathan.  

 
03:26 
Dom Hawes 
Jonathan, I teed up the whole podcast today by talking about cars and faster horses, and I have a feeling they may be referenced a little bit later on today. Look, the metaphorical car implies total disruption, like the whole ecosystem needs to change, but it kind of feels like we're trying to get somewhere, maybe for the sake of it. What's the goal in marketing? What are we looking for from AI?  

 
03:47 
Jonathan Harris 
The challenge that I've often seen is that when we speak to partners and clients in the industry, the conversation about the value of AI inevitably lands on the outcome. So, we may talk about this later, we're moving from less of a process-driven to an outcome-driven ecosystem, where we don't have deterministic inputs, we have probabilistic outcomes. That's all well and good, but the main problem, and you kind of allude to it, is: what is the current state of play? How is the ecosystem built today, and how do you move that ecosystem so that it works well in this new environment? And typically where I tend to look, the litmus test I have, is around the inputs. So generally, silly data in, bad outcomes out, right? So bad inputs, bad outputs.  

 
04:41 
Jonathan Harris 
And so before you talk about the outcomes and the value of AI, I think you have to understand, in my view, the problem statement and the problem space, which is: how is the environment that you're currently trying to optimize for, whatever that is? We've become very good at collecting data through the customer data platforms like Segment and Permutive, but there's very little comprehension about what that complexity of data actually means for being able to achieve an AI outcome. Because the value of AI is that there is a predictability. You know, when we talk about large language models and we talk about images and text, there's a predictability to the outcome, right? So in order to have a model determine what the best outcome is, there has to be a sequence of events that you get to that rewards the model or penalizes the model,  

 
05:30 
Jonathan Harris 
if you're using reinforcement learning, and says, yeah, that's the outcome.  
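
[Editor's note: to make the reward/penalty idea concrete, here is a minimal sketch of the kind of reinforcement-learning loop Jonathan describes — an epsilon-greedy bandit choosing between marketing offers, where a conversion is the reward. The offer names and conversion rates are invented for illustration; this is not Sub(x)'s actual system.]

    import random

    # Hypothetical offers; the true conversion rates are hidden from the
    # learner and used only to simulate user behaviour.
    TRUE_RATES = {"offer_A": 0.010, "offer_B": 0.020, "offer_C": 0.015}

    counts = {o: 0 for o in TRUE_RATES}    # times each offer was shown
    values = {o: 0.0 for o in TRUE_RATES}  # running conversion-rate estimates

    def choose(epsilon=0.1):
        # Mostly exploit the best-known offer, occasionally explore.
        if random.random() < epsilon:
            return random.choice(list(TRUE_RATES))
        return max(values, key=values.get)

    for _ in range(100_000):
        offer = choose()
        # The outcome (a conversion) rewards the model; its absence penalizes it.
        reward = 1 if random.random() < TRUE_RATES[offer] else 0
        counts[offer] += 1
        values[offer] += (reward - values[offer]) / counts[offer]

    print(values)  # estimates converge toward the true rates; offer_B wins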

 
05:33 
Steven Millman 
I think a lot about outputs versus outcomes, and I think a lot of where people are not thinking about these tools correctly is that they're trying to figure out how to recreate or more efficiently get to each of the waypoints that lead them to the outcome that they want. Instead of starting with, for example, I need to get the right ad to the right person at the right time. They will start with, all right, I need to make a segment, and how can I make the segment more effective? And then I need to have a better bidding model. How do I make the bidding model more effective? And then they think, how do I now properly design the correct series of ads that I can then put into this bidding process? And so they're really focused on these waypoint entities in the space.  

 
06:19 
Steven Millman 
And this is, I think, reflective of what you were talking about with different divisions being very focused on their own points. There are ways to start with the outcome and think about, very broadly, how do I get from a creative process to delivering that ad to the right person at the right time? And what are the tools and techniques that I can use to get there, knowing that there are better tools out there?  

 
06:45 
Dom Hawes 
Ultimately, the goal is to sell more.  

 
06:46 
Dom Hawes 
At a higher margin.  

 
06:47 
Dom Hawes 
That is the ultimate outcome that everyone's after. Everything on the journey in between the starting point, which is today, and there — everything along the way is a waypoint, I guess. And I think when I think about disruption in that journey, it's: how do you get from here to selling more using a different process than we're currently using? Because it's enormously inefficient and it's enormously ineffective.  

 
07:10 
Dom Hawes 
We all know it is.  

 
07:11 
Dom Hawes 
I don't think solving efficiency on its own necessarily solves the problem. There's so much we know now about what makes communication effective that we could be building into a more algorithmic approach.  

 
07:22 
Jonathan Harris 
Yeah.  

 
07:23 
Steven Millman 
And I think the other issue is that a lot of folks who are employing new tools aren't doing it for the purpose of actually improving either outcomes or outputs. It is a performative act. They want to be seen as using these tools. Any of your listeners who are fans of Monty Python, I always think back to the machine that goes ping. It's a sketch where they're trying to impress hospital administrators, and they're saying, okay, well, we need this machine and this machine, we need a machine that goes ping. And it has no purpose, but you have to have one. And of course the administrator shows up and he says, I see you have the machine that goes ping. And AI, for a lot of companies right now, is a machine that goes ping.  

 
08:03 
Jonathan Harris 
And also, when you speak to teams about AI, there is, I think, often a belief in some way that it's mercurial, right? That there's a little bit of magic in there. You could take out a lot of the business language that we use to construct the thinking around AI and actually just replace it with the word magic, and it would probably work, right? But that's not to be disingenuous to the clients or to the people that we work with. And it's in no way to say the data science part of the business is more important or has more insight. What I think it is, actually, is a misunderstanding: machine learning and AI is just very complex math, right? It has a very defined architecture to it.  

 
08:51 
Jonathan Harris 
What that means, when you think about it, is that you have to construct the problem and the data that goes into that problem in a way that is mathematical rather than mercurial. You know, we're increasingly used to getting outcomes where you lean back and you go, that's profoundly amazing, I didn't know that could happen. But that's not actually what's going on in the background. And I think the challenge, as you say — outputs and outcomes, and the machine that goes ping — is to actually understand what goes into the process. It comes back to the problem statement. It comes back to the data, in order to help teams rethink the way in which they approach the problem. And that is often, first of all, understanding the noise that's been created in the data that they currently have.  

 
09:40 
Dom Hawes 
Isn't that approach, though — if you're looking at the process, I guess it's a good starting point. You look at the process and say, this is how we currently do things, which bits of this can we speed up and automate? And that is building a faster horse, in the definition that we're talking about today. A car is something that completely replaces it. So the objective of a company being to sell more: right now, the approach to that is to build up a sales and marketing team, to create those departments, to give them budget. They go out and do a bunch of research, they try and understand customers, et cetera, et cetera. So right now, what everyone is saying is, you know, how do we build faster horses? How do we take some of that process, shortcut it, automate it?  

 
10:14 
Dom Hawes 
My challenge, when I look at our business and I look at our clients who are trying to do marketing more effectively, is how do we do something that isn't just a machine that goes ping? Whether it does nothing or whether it just does something a little bit faster, it may as well just be a ping.  

 
10:29 
Steven Millman 
I think fundamental to this is just understanding what each of these models and processes do. Arguably, the math is actually not all that complex. It's just done at such a massive scale that it defies the ability of a human mind to understand what's actually happening through it. So any individual step is actually very simple, in the same way that the human brain is very simple if you're looking at a single neuron. You explode it out to billions, and then suddenly it becomes something that you can't imagine around, but it also increases the likelihood that you start to see emergent effects. And large language models, as we're seeing them in ChatGPT and Bard and others, are really an emergent effect. It's doing a thing that we don't exactly understand how it's happening, based on the math that underlies it.  

 
11:16 
Steven Millman 
It shouldn't be as good at what it does as it turns out to be, and it's just simply not clear why. But understanding what the tools can and cannot do allows you to apply the imagination, the ingenuity, to figure out a new way to solve a problem. That's the fundamental starting point. I can't get to the point where I say I want to build a car unless I understand internal combustion. Once I understand that I can build a machine that can produce motion from fuel, then I have a whole bunch of things that I can now imagine.  
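
[Editor's note: a minimal sketch of the "single simple step" Steven describes — one artificial neuron is nothing more than a weighted sum passed through a squashing function. The weights and inputs below are arbitrary illustrative numbers; scale this step to billions of units across many layers and the emergent behaviour he mentions appears.]

    import math

    def neuron(inputs, weights, bias):
        # One neuron: weighted sum of inputs, then a sigmoid squash.
        z = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1 / (1 + math.exp(-z))

    # Elementary arithmetic at every step; complexity comes only from scale.
    print(neuron([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], bias=0.2))  # ~0.33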

 
11:51 
Jonathan Harris 
When I began my journey with machine learning, I had to kind of create a mental model of what it was, because I'm not a data scientist by background. And for me, especially when I try and talk through some of the things that we do to teams that are not technical, at its core, we're just talking about a filter. It's a transformation layer, right? It's a transformation of data. Data comes in, it's transformed, filtered, however you'd phrase it, and something comes out. And that has many applications. It can be one language in and another language out.  

 
12:28 
Jonathan Harris 
It can be text to image these days, or it can be simpler recommendation systems, for example: behavior in, recommendation out. Coming back to the point that you made, and what I started off with: the ability to leverage anything, any form of AI, whether it's a process or a recommendation or a value that helps you do something else, somewhere else in your business — not to labor the point, but the inputs make a significant difference. They are fundamental to the whole process. And what's broken in the system at the moment is that the inputs are poorly understood, because they're coming in from multiple channels. And there is no single point of decision around that, because it comes through multiple teams, with no transparency and competing objectives. And also, there is no concept of causality and significance.  

 
13:24 
Jonathan Harris 
If I change something over here, what is the impact on something else?  

 
13:29 
Dom Hawes 
Right?  

 
13:29 
Jonathan Harris 
So that could be within your own channel: if I run this test over here, what's the implication of that somewhere else? Or, if I'm on a social media team, what's the impact of optimizing my social media strategy on the rest of the ecosystem? So that's a fundamental problem, right? And what we've discovered, when working with businesses, is that when you get them to frame what it is they currently do, you see recurrent problems: this is not a space that we can solve a problem in, the quality of the input is not consistent.  
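
[Editor's note: a hedged sketch of Jonathan's "transformation layer" mental model from a moment ago — any trained model is, at this level of abstraction, just a function that transforms inputs into outputs. The behaviour scores and offer names are invented placeholders, not a real recommender.]

    def recommend(behaviour_scores):
        # Behaviour in, recommendation out: the "filter" view of a model.
        score = sum(behaviour_scores) / len(behaviour_scores)
        return "premium_upsell" if score > 0.5 else "retention_offer"

    # The same in/out shape covers translation (language in, language out),
    # text to image, or any other transformation Jonathan lists.
    print(recommend([0.9, 0.7, 0.4]))  # -> premium_upsell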

 
14:09 
Dom Hawes 
As marketers, we should all know that when it comes to data, garbage in means garbage out. And the emphasis that both Jonathan and Steven place on inputs, therefore — well, that should strike a chord. I say should, because it's not a no-brainer. Plenty of outfits are training technology on bad, suspect, or incomplete data. Now, I say that with confidence not because I know it as a fact, but because the kind of data that we need to truly disrupt our current approach to marketing communications doesn't exist, at least not in an accessible form. Like, we can do bits and pieces, but we can't disrupt. So, sure, it's cute that AI can personalize content at scale, but being able to do that actually doesn't really change that much.  

 
14:50 
Dom Hawes 
Jonathan made the point that many of the AI-based tools we're using right now work using probabilistic outcomes. In effect, they are next-best engines: next best word, next best sentence, next best offer. You kind of get the picture. GPT, Claude, BERT, and the rest of the LLMs — well, they're very sophisticated, but they are sophisticated next-best engines. And so, to illustrate the point about probabilistic outputs and the quality of inputs: when you ask an LLM like ChatGPT to write a paragraph and you hate the output, it's not the output that's at fault, it's the inputs. And the inputs are created by humans. ChatGPT isn't bad at writing. Most people are. So as you plan building your own AI tools, make sure your inputs are the best they can possibly be.  
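
[Editor's note: a toy illustration of a "next-best engine" — softmax turns scores for candidate next words into probabilities, and the next word is sampled from them. Real LLMs perform this same shape of step over vocabularies of tens of thousands of tokens; the words and scores here are invented.]

    import math
    import random

    logits = {"horse": 2.1, "car": 1.3, "engine": 0.4}  # model scores (invented)

    def softmax(scores):
        # Exponentiate and normalize so the scores become probabilities.
        exps = {w: math.exp(s) for w, s in scores.items()}
        total = sum(exps.values())
        return {w: e / total for w, e in exps.items()}

    probs = softmax(logits)  # horse ~0.61, car ~0.28, engine ~0.11
    next_word = random.choices(list(probs), weights=list(probs.values()))[0]
    print(probs, "->", next_word)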

 
15:39 
Dom Hawes 
And Steven nailed the current state of AI in many businesses by channeling Monty Python: the machine that goes ping. It's a technological novelty that lacks substantive application. It serves as a reminder that the true value of AI lies not in its novelty, but in its ability to transform complex data into strategic insights and actions. But only if we've got the inputs. Think about your job targeting B2B professionals. What's the very best source of self-validated firmographic, demographic, behavioral and other data? Well, it's LinkedIn, of course. But be honest, don't most posts on LinkedIn make your teeth itch just a little? They're not real. They're mostly virtue signalling and self-promotion. So you're trying to understand psychology, you're trying to understand how people behave. But would you build your model on LinkedIn data, even if it was available?  

 
16:31 
Dom Hawes 
It's all about inputs, and that's what I wanted to dig into next: inputs. So the large models we all know — are they already trained on too much data of questionable quality? That's what I wanted to put to Steven and Jonathan.  

 
16:48 
Steven Millman 
This is why a lot of these models have initially taken the bigger-is-better approach: if we just scrape everything, there'll be enough information in there for me to be able to get out of it what I want to get out of it. And there's also a bigger-is-better marketing component to that, right? I mean, you should use us because we've got 180 billion parameters and they've only got 30. But, for example, in my world, it will never come up that I need to get the answer to a question in the form of an iambic hexameter. Right? Give it to me in the format of an epic poem, like Homer. That just doesn't come up. I don't need that in my data set. And the more information that's in the data set...  

 
17:32 
Steven Millman 
And again, it's still poorly understood, but it seems like the model is more likely to start to hallucinate, because it has so much more to draw on. So there's pros and cons to that. Some of these very much more tightly constrained models — Claude, for example — appear today to be performing really well compared to much larger competitors in certain use cases.  

 
17:54 
Jonathan Harris 
Always try and do is connect what we're seeing, as you've described. I mean, talking about LinkedIn in my post, in my feed, the pace of change around AI, I didn't think it could accelerate, but it has accelerated. If you take the frequency of truly transformative research that's moving this industry forward at a pace which is, you wouldn't think achievable is now accelerating. Right? So we talk about parochialness and bias. What I'm always trying to do is I'm trying to look at how does the research and the advancement in large language models actually solve real marketing problems. How do you ground what's happening? If we're talking about input data and what the state of play is currently for marketers, the horizon of what you see in your LinkedIn feed, it's not even a conversation. That level of advancement is actually not relevant.  

 
18:56 
Jonathan Harris 
That's being used in a much different way. I saw OpenAI just dropped into a — can't remember the name, Figure 01 or something — robot doing the most profound stuff. But in reality, actually, large language models — and we talked about, you know, where's the car — the rocket ship for this industry is what large language models are gonna do, what chat is gonna do, what human-interface-driven relationships with content and everything else are gonna do to the entire marketing ecosystem. So we're talking about websites, right? We're talking about apps, and we talk about what we know to be the interface today. Well, you and I spoke about it. What if those interfaces just don't exist anymore? We're talking search engine becomes answer engine becomes what next? You know, what's the relationship with product?  

 
19:43 
Jonathan Harris 
How does that change? Why is the website that you go to for your daily news the right medium? And so I think what's interesting about large language models and the research that's taking place today is that that is the true disruption, one that is going to bring a wave of customer interaction where even the problems we're trying to solve for today — right offer, right person, right time — could be redundant. Where I'm investing some of our funding — and again, we spoke about this — is into the hypothesis: what if websites don't exist? What if apps don't exist in their current form? Take those away. They are now redundant. Five years from now, what is the interface? And interestingly, the marketing landscape becomes better. And it becomes better for one important reason. You talked about behavior.  

 
20:39 
Jonathan Harris 
And if you take the behavior of someone landing on a website, we are inferring behavior — we are inferring a decision, inferring an outcome, from a combination of behaviors that are constructed into a state, into a customer state, which, however well constructed, is still inferring. And I talk about inferring behavior because, if we were right all of the time, then conversion rates would be 100%. Right? But they're not. They're 1% or 2%. So 98% of the time, for every person that you infer a behavior from, someone who looks almost identical will not take the same action. Right? So this is a game of majority and proportions. But what's happening in chat — and you talked about LinkedIn — is that chat will bring a layer of comprehension about the customer that marketers have never had.  

 
21:33 
Dom Hawes 
And that's where I was talking about LinkedIn, actually. It's about building personas and understanding real people. And, Steven, we've talked in the past about synthetic panels, right? Again, I'm thinking about our traditional process. I think when I'm thinking about synthetic panels, I'm thinking car. But I get that, actually, that's just a faster horse, because I'm still thinking about the same communication process to achieve the aim.  

 
21:53 
Steven Millman 
Yeah, it's even arguable that it's a better horse. One of the interesting things is we're seeing an emergence of people working with synthetic data. First off, synthetic data is not new. We've been using synthetic data forever. We just haven't been calling it synthetic data. So anytime we have a missing data point, and we estimate what that missing data point would have been, that's synthetic data. That is data we're using as though it's real, even though we have estimated it. We use that with multiple imputation modeling and imputation modeling in general, ascription, fusion, variable by variable modeling. These are all things that we do either to join disparate datasets that cannot reasonably be joined in any other way than probabilistically, or when we just have empty spaces in our data that we try to fill.  
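
[Editor's note: the simplest version of the imputation Steven alludes to — a missing value is estimated from the observed ones and then used as though it were real, which is synthetic data in his sense. The numbers are invented; real practice uses richer models such as multiple imputation.]

    import numpy as np

    ages = np.array([34.0, 41.0, np.nan, 29.0, np.nan, 52.0])

    # Fill each missing value with the mean of the observed values (39.0).
    filled = np.where(np.isnan(ages), np.nanmean(ages), ages)
    print(filled)  # [34. 41. 39. 29. 39. 52.] -- the 39s are synthetic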

 
22:36 
Steven Millman 
So we're trying to use these things to create synthetic data in a new way, a hopefully better way, where there's a vast amount of data to draw on for intercorrelation coming from these language models, which are fundamentally neural networks. And there is some really interesting research, but there is so much snake oil. It's the snake oil part of this that's the problem right now. So I won't name names, but I saw one recently where a company is selling the ability to create additional survey responses. So I have a survey, I get 200 responses, but it's not enough to be statistically significant, and this will give me another 100 responses. And now I can do stat tests. And they say it has a very high degree of representativeness to the data you've already got. But that's the problem.  

 
23:22 
Steven Millman 
The reason why you don't trust small data sets is because the data you've got isn't representative. So creating additional responses that look like the responses you already have isn't fundamentally different from just copying and pasting the 200. Now you've got 400. Now I can run my stat test. So we've got to be really cautious about this. There has to be a lot of testing to make sure these things are doing the right thing.  
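
[Editor's note: a small worked example of Steven's objection, with invented numbers — duplicating a sample leaves the mean unchanged but shrinks the standard error by roughly √2, so significance tests look stronger even though no new information about the population has been added.]

    import numpy as np

    rng = np.random.default_rng(0)
    sample = rng.normal(loc=5.0, scale=2.0, size=200)  # the 200 real responses
    doubled = np.concatenate([sample, sample])         # "synthetic" copies

    for name, s in [("n=200 (real)", sample), ("n=400 (duplicated)", doubled)]:
        se = s.std(ddof=1) / np.sqrt(len(s))  # standard error of the mean
        print(f"{name}: mean={s.mean():.3f}, std. error={se:.3f}")

    # Same mean, ~1/sqrt(2) the standard error: the stat test "improves"
    # without learning anything new about the population.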

 
23:46 
Dom Hawes 
And in that instance, we're going to a large data set because we're trying to model behavior to create cohorts or segments. But I'm fascinated, Jonathan — just nagging away in the back of my head is the conversation you were just having about what's the interface. Because maybe in the future we don't need to do that, like segmentation modeling. Maybe the technology, if the interface is different, will allow us to build segments of one.  

 
24:09 
Jonathan Harris 
I had this conversation with someone in my office the other day, because we work with a very big media brand, and we were talking about how this person consumed the media brand. And I said, you know, this is what we're building for. It was the "this'll never happen" comment from this guy — "I love that interface" — that made me realize that this is definitely gonna happen, right? Because, if you think, let's take content as an example. Everyone talks about the value of content. We always think about content in the context of the presentation layer: how is it being delivered? The amount of money that's spent on web design and the amount of money spent on app design is huge, right? So the question I was asked is, well, is that the best presentation layer? What actually is at the root of that?  

 
24:51 
Jonathan Harris 
Well, at the root of that is a relationship between words and an individual. That's the relationship. And we think about large language models, and we think about chat at the moment as being an interface, because that's how we're using it. But is that the interface? That doesn't necessarily have to be the interface. And so the thing that I think about is: take away the presentation layer, and think about the relationship between the consumer and the thing that they're consuming. If you go to an ecommerce website today, 90% of them look the same. You know, there's a search bar, and the product recommendation is this.  

 
25:28 
Jonathan Harris 
But if you think about how chat will change the relationship with products — that is, here is the use case of the product, find the product that will deliver the outcome that I want. So, you know, I want a barbecue, but here are all the parameters around what makes the barbecue relevant to me: where I live in the world, what size my family is, how frequently I use it. And then you think about all the other human beings interacting with that product, and then you think about the body of knowledge that then exists around an individual product. And what you realize is that the presentation layer is irrelevant.  
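
[Editor's note: a hypothetical sketch of Jonathan's "use case in, product out" idea — the buyer states the outcome and the constraints, and the system matches on those rather than on search keywords. The catalogue, fields, and values are all invented for illustration.]

    # Invented mini-catalogue of barbecues.
    BARBECUES = [
        {"name": "CompactGas 2", "serves": 4, "climate": "rainy", "price": 249},
        {"name": "BigSmoke XL", "serves": 12, "climate": "dry", "price": 899},
        {"name": "Balcony Electric", "serves": 2, "climate": "any", "price": 129},
    ]

    def match(serves, climate, budget):
        # Filter on the buyer's stated outcome, not on search keywords.
        return [b["name"] for b in BARBECUES
                if b["serves"] >= serves
                and b["climate"] in (climate, "any")
                and b["price"] <= budget]

    # "I want a barbecue for a family of four, I live somewhere rainy,
    # and my budget is 500."
    print(match(serves=4, climate="rainy", budget=500))  # ['CompactGas 2']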

 
26:10 
Dom Hawes 
I'm truly sorry to leave you on such a cliffhanger, but oh boy, what a cliffhanger that is. The presentation layer is irrelevant. In part two, which is available right now on this very platform, we're going  

 
26:21 
Dom Hawes 
To find out why.  

 
26:22 
Dom Hawes 
We're going to talk retargeting, acceleration of technology, and how we think we're training AI, but actually it's training us. You can hear that right now by pressing play on part two. You've been listening to Unicorny, and I am your host, Dom Hawes. Nicola Fairleigh is the series producer, Laura Taylor McAllister is the production assistant, Pete Allen is the editor. Unicorny is a Selbey Anderson production.  

Steven Millman

Global Head of Research & Data Science, Dynata

Executive, Award-Winning Researcher/Data Scientist, Innovator, Inventor & Coffee Snob. Throughout my career I have had a focus on quantitative/statistical analysis, survey design, research design, AI/ML, and other applied research techniques. I am presently serving as Global Head of Research and Data Science at Dynata, the world's sixth largest market research company, where I lead a team of over 100 researchers and data scientists. I am a frequent speaker and author, multiple Ogilvy award winner, patent holder, and recipient of the prestigious Chairman's Prize from the Publishing & Data Research Forum. I serve as a member of the Board of Trustees for the Advertising Research Foundation (ARF).

Jonathan Harris

Founder & CEO, sub(x)

Jonathan is the founder and CEO of Sub(x), a marketing technology provider that uses AI automation to drive revenue, growth and customer acquisition for digital subscription businesses.

A former investment banker at Morgan Stanley and Merrill Lynch, Jonathan launched and exited three previous businesses, and has 10 years’ experience in the application of data science in the marketing technology sector.

Sub(x) is transforming digital marketing using a proprietary self-learning AI autopilot, enabling businesses to optimise their online customer acquisition and revenue without the need for manual testing and experimentation.