April 25, 2024

Beyond Faster Horses (2 of 2): AI's Impact on Search and the Digital Ecosystem

In this compelling continuation of Unicorny's exploration of AI in marketing, Dom Hawes returns with part two of his discussion with Steven Millman and Jonathan Harris.

Building on the insights from part one, this episode delves into the emerging role of personal AI assistants and their potential to revolutionise consumer interaction by removing traditional interfaces.

The conversation also covers the broader impacts of AI on revenue models and the digital ecosystem, highlighting both the opportunities and ethical challenges that come with such transformative technology.

Don't miss this deep dive into how AI is reshaping not just marketing strategies but also the very fabric of digital engagement.

About Jonathan Harris 

Jonathan is the founder and CEO of Sub(x), a marketing technology provider that uses AI automation to drive revenue, growth and customer acquisition for digital subscription businesses. 

 A former investment banker at Morgan Stanley and Merrill Lynch, Jonathan launched and exited three previous businesses, and has 10 years’ experience in the application of data science in the marketing technology sector. 

 Sub(x) is transforming digital marketing using a proprietary self-learning AI autopilot, enabling businesses to optimise their online customer acquisition and revenue without the need for manual testing and experimentation. 

About Steven Millman 

Executive, Award-Winning Researcher/Data Scientist, Innovator, Inventor & Coffee Snob. Throughout Steve's career he's focused on quantitative/statistical analysis, survey design, research design, AI/ML, and other applied research techniques. Steven is presently serving as Global Head of Research and Data Science at Dynata, the world's sixth-largest market research company, where he leads a team of over 100 researchers and data scientists. Steven is a frequent speaker and author, multiple Ogilvy award winner, patent holder, and recipient of the prestigious Chairman's Prize from the Publishing & Data Research Forum. Steven serves as a member of the Board of Trustees for the Advertising Research Foundation, the ARF.

Links  

LinkedIn: Jonathan Harris | Steven Millman | Dom Hawes  

Website: Sub(x) | Dynata 

Sponsor: Selbey Anderson  

 

Related Unicorny episodes: 

A/B seeing ya! Is AI the end of split testing? with Julian Thorne 

"It’s a language model, stupid". How marketing should and shouldn’t use AI with Steven Millman 

Everything, Everywhere, All at Once with Steven Millman

 

 Related Marketing Trek episodes: 

 Data Ethics in Marketing with Steven Millman 

Breaking The Cookie Jar with Steven Millman 

 

Other items referenced in this episode: 

Sam Altman CEO of OpenAI 

Sapiens by Yuval Noah Harari 

 

 

Episode outline

 
The Rise of AI Personas  
Jonathan and Steven discuss the concept of creating AI Personas of individuals for personalized recommendations and advertising. They speculate on the future of unique personal assistants. 
 
Impact of AI on Marketing and Content Creation  
The conversation delves into the potential disruption of the marketing industry by AI, leading to the possible obsolescence of traditional content creation and marketing roles. 
 
Unintended Consequences of AI  
The discussion highlights the unintended consequences of AI development, including the training of human behaviour by AI and the potential risks of AI with physicality in our space. 
 
The Illusion of Data Protection  
The conversation starts with a discussion on the illusion of data protection and the impact of GDPR compliance. It highlights the belief that data will ultimately win over the law. 
 
The Impact of GDPR  
The discussion delves into the challenges of GDPR and how “bad actors” can find ways to bypass it. It also explores the ethical considerations and legal ways that organizations can still exploit data. 
 
The Contradictory Nature of Data Privacy  
The conversation covers the contradictory nature of data privacy, where individuals want both privacy and targeted ads. It also touches on the level of inference and insight that large language models can provide. 
 
Designing Workflows and Removing Bias  
The discussion emphasizes the importance of designing workflows around customer choice and removing bias from the environment. It also highlights the need to flatten the subjectivity embedded in data to achieve more reliable outcomes. 
 
Embracing AI and Navigating Ethical Considerations  
The conversation concludes with a focus on AI's role in marketing as a catalyst for innovation and a mirror reflecting broader societal concerns about privacy and ethics. It emphasizes the need to maintain trust and transparency with stakeholders, especially customers. 
 



This podcast uses the following third-party services for analysis:

Podder - https://www.podderapp.com/privacy-policy
Chartable - https://chartable.com/privacy

Transcript

PLEASE NOTE: This transcript has been created using fireflies.ai – a transcription service. It has not been edited by a human and therefore may contain mistakes 

 00:03 
Dom Hawes 
You are listening to Unicorny, and I am your host, Dom Hawes. If you're coming straight into this, it's part two. You're going to want to go and listen to part one first, I think, or you're going to miss all the context. And if you have already done that, welcome back. Great decision. This part is every bit as good as the first. Coming up, we're going to talk about things like AI personal assistants, how robotics companies use familiarity and cognitive bias to stop us being afraid of their products, and how revenue models are going to be impacted by AI and its effect on our ecosystem. But I left you in the last part on a massive cliffhanger when Jonathan said this.  

 
00:41 
Jonathan Harris 
And then you think about all the other human beings interacting with that product, and then you think about the body of knowledge that then exists around an individual product. And what you realize is that the presentation layer is irrelevant.  

 
00:54 
Dom Hawes 
And here's how the conversation then continued.  

 
00:58 
Jonathan Harris 
We invented the landing page because it's a curation by the editorial teams, who are very good. And actually what we've found is that editorial teams have a very high hit rate. For example, if they want to flag most popular content, the human is really good at identifying most popular content. We often get asked the question, can you help us recommend most popular content? You have to come back and say, what's most popular? Right? Is it what's trending? Or is it what the editorial teams think is going to grab the audience? Humans are really good at doing that. But the interface was designed by a bunch of engineers who said, well, actually, what we need is a place that people land on. Then social media came along. People don't land on that thing anymore.  

 
01:37 
Jonathan Harris 
They go straight into the article. And then when you look at the article, what's the article? Well, the article for 99% of content sites is just text. Font might be different. The headline might be slightly bolder. There's some ads down the right hand side, but substantially the page is text. And so what's the user doing? They're interacting with text. And then, so you take away that formatted presentation layer and you say, well, what do I, as a customer, actually want? And I'm going to go. And I think this is good for brands. We talk about trust, and we talk about integrity and authenticity, and we talk about editorial teams, and I'm talking about, obviously, media at the moment. Editorial teams are really good at doing that. That's where the intellectual capital is.  

 
02:18 
Jonathan Harris 
But I can go into a news brand and through a process, build a relationship with that data in a way that I can't do through the current medium. And I think that's where the whole industry's going.  

 
02:33 
Steven Millman 
Yeah, I think the thing that I'm looking for someone to build, and I think if I were smarter, this is what I would do. But getting to your point about does the app become irrelevant? Does this artificial layer become irrelevant? Is people starting to build AI Personas not of others, which is what people in my industry are trying to do, but about yourself? So I'm going to create an AI Persona of me, and that AI Persona of me is going to follow me around on a device that I can connect to. You know, could be my phone, could be my watch, could be some wearable. And because it has been trained on me, I've given it a lot of information about me. It can serve as a recommendation engine. It can look for the news sources I'm most likely to be interested in reading.  

 
03:16 
Steven Millman 
It will be able to make those choices in a way that is relevant for me specifically and not for anybody else, and it will also be able to help drive ads to me in that scenario. I think that kind of a unique personal assistant has got to be on the horizon. Someone's got to build that.  

 
03:37 
Dom Hawes 
That's definitely a car, because suddenly it's all pull, not push. It completely changes the dynamic.  

 
03:43 
Steven Millman 
Exactly. And so I say, gosh, you know, I need new shoes. Can you tell me what kind of shoes I should pick? Give me some ideas. Search engines are out the door at this point. Or I say, hey, tell me what's going on in politics today. I think the challenge that we're going to have when this happens, and I feel really sure this is going to happen at some point, is how does content make money when you've built this assistant layer?  

 
04:07 
Dom Hawes 
I think that's an issue way before we get to having a personal AI assistant. Even now, Google makes its money as really not much more than a glorified directory. It's not that different from the Yahoo it replaced. Its algorithm is a bit smarter, but its algorithm is biased. We know that. You take something like Perplexity and you're getting your answer on the page. And Google will ultimately have to deliver the answer on the page, not refer it to other sites. So that whole cohort of marketers that make a living creating content to try and game a search engine to get people to their site so that they can sell them something won't need to exist.  

 
04:43 
Steven Millman 
Yeah. And news aggregators, if you think about the problem news aggregators cause to news sources being able to make money, this would take that and ramp it by a million.  

 
04:53 
Jonathan Harris 
The ecosystem around search is bigger than search itself. Yeah. So that's the thing that gets disrupted. It's like, remember CDs back in the day? You'd buy CDs, but then you needed to buy a rack, and you had to buy a cleaner. So the ecosystem around the CD was bigger than the CD industry itself. But the challenge here is that no one cuts the branch they're sitting on. You're just not going to disrupt your own industry. So the edge here, as a business, is that there are going to be vendors who can build the transition journey for their clients. That's the key. Because what will happen is the environment will move at such a pace that it is systemically impossible for businesses to move at that pace. They can't. It's not reasonable to even say to a business, you have to build for this future.  

 
05:44 
Jonathan Harris 
The problem statement that I'm always thinking about is how do we build for that future? Knowing what the problem and the business is today and building that pathway through to an outcome where, for example, the layer that we currently service doesn't exist.  

 
06:02 
Steven Millman 
Right.  

 
06:02 
Jonathan Harris 
That might be five years, it might be ten years. I suspect it could be three.  

 
06:06 
Dom Hawes 
Right?  

 
06:07 
Jonathan Harris 
That's gonna happen.  

 
06:08 
Dom Hawes 
You're scaring me now, but it's a lot faster.  

 
06:11 
Jonathan Harris 
It's gonna be a lot faster.  

 
06:12 
Steven Millman 
A friend of mine, the chief research officer at the Advertising Research Foundation, the ARF, Paul Donato, likes to say that there's no such thing as current research in AI, because by the time you've published anything, it's out of date. It's no longer relevant.  

 
06:27 
Dom Hawes 
We're bumping this episode up the production schedule for the same reason: leave it too long and it's out of date. Three years is very scary for some of the things we're talking about. But like you, I wouldn't be surprised.  

 
06:39 
Steven Millman 
Last time we spoke, I made a comment that worrying about AI taking over the world and doing disastrous things is like worrying about overpopulation on Mars, right? We're just not there yet. The first thing that's actually scared me happened just recently, and it's what you brought up: loading an AI into a robot. This is where we start to get to a place where things can get bad, when you allow the AI to have physicality in our space.  

 
07:07 
Jonathan Harris 
Dexterity is the thing you don't want AI to have.  

 
07:09 
Steven Millman 
Exactly.  

 
07:10 
Jonathan Harris 
You actually don't want it to have five digits.  

 
07:13 
Steven Millman 
Never give an AI robot an opposable thumb. Yeah. Don't teach it to use tools.  

 
07:19 
Jonathan Harris 
That's the truth.  

 
07:20 
Dom Hawes 
Unfortunately, humans are greedy and someone will.  

 
07:22 
Steven Millman 
Remember what I said? Maybe five years from now, we'll have this conversation and I'll have a different feeling on the subject. Well, what was it, six months? Eight months?  

 
07:30 
Jonathan Harris 
There are two tracks taking place. There's the track that you're referring to there, and I think increasingly the system around that risk is developing. I think it'll develop faster. You look at the AI Act in Europe, it'll happen faster, but it'll always be behind the curve, because there's more capital going into solving that problem than there is going into defending against that problem.  

 
07:50 
Steven Millman 
And there's plenty of actors that aren't going to pay attention to the laws.  

 
07:53 
Jonathan Harris 
Exactly. Exactly. The interesting thing, bringing it back to the ecosystem that we're talking about, is that actually the same is true. The disruptor hasn't actually happened yet. The reason we see that video is because, you know, it's like what Boston Dynamics did over the last 20 years. Boston Dynamics was really good at making dancing robots. Right. And why is it good at making dancing robots? Because they're friendly and they're not scary. That's why they dance.  

 
08:15 
Steven Millman 
That's why they have heads.  

 
08:16 
Jonathan Harris 
That's why they have heads. Yeah. And that's why they do backflips, and why, you know, Spot does really cool stuff. That's why they do that, because it's non-threatening. In the background, Amazon is populating its warehouses with all of these robots. Right. But that's a different question. The same is true when we think about the ecosystem that we work in today, except that the disruptor hasn't happened yet. There is no talking robot, but it's happening in the background. There will come a point where the industry realizes that it is not going to claw its way back from that. It's not like paywalls. Right. So you think about paywalls: 20 years of paywalls. Kind of worked. Didn't work. Kind of worked. Someone made some money. Someone didn't make some money.  

 
09:01 
Jonathan Harris 
People are now realizing that ads are under threat, but the subscription economy's reached saturation. The streaming networks are cutting their own throats on pricing, injecting ads into paid streaming, all this kind of stuff. But that has been a fairly benign, competitive ecosystem; it's a classic competitive space. I genuinely think that we will look back in three years' time and there will be fallout at some very big enterprise-level businesses. There will be fallout because systemically they just weren't able to change fast enough. Because I think what's happening, which is different, and I know people use this a lot, but it's so true, is your gran is on ChatGPT, right? It's not what she's doing with it; it's the training of us as human beings that's happening now.  

 
09:54 
Jonathan Harris 
We're being trained to think about, as we talked about right at the beginning, the outcome. I don't care about the process. Not interested in any of that. Just give me the outcome. And that's how the customer's going to change.  

 
10:10 
Dom Hawes 
Okay. If you're in the marketing business, the last ten minutes might have been quite sobering. While better-informed people like Sam Altman are predicting that 95% of the marketing business won't exist in a few years because AI will be able to do what people currently do, or, as he said, it will handle the work, here is a scarier and much more disruptive thought. The work as it's currently done by hundreds of thousands, if not millions, of creatives, writers and designers might not exist at all. We've built a whole industry around the interfaces that have developed since April 30, 1993: the website, the video aggregator, the landing page. All of these things we access via a graphical interface on our devices. But who's to say that interface is going to survive?  

 
10:55 
Dom Hawes 
The web as we know it today is prettier and higher bandwidth, but it's not that different from the one we used 20 years ago. And that one wasn't so different from the one that first appeared in 1993. AI is going to change that. If I was smart enough, right now I'd be acting on Steven's model and building an AI avatar to have a virtual me interacting online: screening inbound, filtering, negotiating, scanning, generally acting for me. If I don't need to be online myself in the future, what happens to all those people writing content, designing ads, making and selling media? Scary thought. Answers, please, via a chat prompt to my avatar. But what we just heard from Jonathan struck me too. We think we're training AI, but actually it's training us.  

 
11:40 
Dom Hawes 
His thought reminded me of a passage in Yuval Noah Harari's Sapiens, where he says: we didn't domesticate wheat, wheat domesticated us. So it is that we think we're training large language models, but actually they're training us. That's scary. Human beings are pretty good at being victims of unintended consequences, as it turns out. Like, please don't give AI opposable thumbs. Marketers, be careful what you wish for, because in time I think we may see more than our breakfast being eaten by headless robots, and they don't need to dance anymore. But it's not all doom and gloom. Steven, maybe the law will slow things down. The language models have already ingested the copyrighted materials, of course, and I know people are starting to kick off, but I wonder which is easier to change, the law or the models?  

 
12:32 
Steven Millman 
Well, I mean, I think the laws are going to change. The problem is that the people writing the laws, as a rule, number one, don't really understand the technology, and number two, the laws, like everything else, are going to be years behind the technology. So the European Union's AI Act, which I think is now in translation, which is the final thing before it gets passed by the union, is not considering any of the stuff that happened in the last year. None of it. It's about facial recognition. It's some broad strokes about whether what the AI is attempting to do is dangerous. But it's very much looking at the threats we were thinking about three years ago.  

 
13:11 
Jonathan Harris 
It's a good example, because look at GDPR. Do you feel safer because of GDPR? Do you feel that your data is being used in a way that is truly compliant and non-invasive? No, it just doesn't feel like that. You know, accepting GDPR pop-ups on a website is almost a pain, because I know what's happening in my Instagram feed and on Twitter and LinkedIn. I know what's happening there. Stuff appears there. Right. And I've got ad blockers, and I've got protection on my Chrome browser, and all this. Genuinely, it's an intellectual debate that's had at a political level to ensure that we take the right actions. But on the ground, I do not feel that my data is protected.  

 
14:03 
Jonathan Harris 
And honestly, I don't care that much.  

 
14:06 
Dom Hawes 
This is why I think the data ultimately will win, not the law. I mean, it's like Spotify. The labels and the musicians willingly gave all of their rights away, effectively, to Spotify in the belief that the technology was going to become a discovery engine. But actually behaviour changed: people don't use it as a discovery engine, they just consume through it, and it completely changed the dynamics of intellectual property in writing music. I think we'll see the same through large language models.  

 
14:29 
Jonathan Harris 
Yes.  

 
14:29 
Steven Millman 
Through the lens of GDPR, it's very easy for bad actors to figure out ways to do exactly what GDPR is written to prevent without violating GDPR. I shouldn't say it's easy; it's not terribly hard. And so if you're a large organization, you have to either believe in the spirit of the law and attempt to refrain from doing bad things, or not. We've talked about this previously. At Dynata, we absolutely have methods, but we don't employ them, because we don't want to abrogate people's privacy. That's just our philosophy. But there are certainly legal ways that people could do that. Of course, I'm like you; I actually don't care. I kind of like the fact that my ads are targeted to me. I want to see relevant ads. Who cares? You're going to get an ad either way.  

 
15:20 
Steven Millman 
Do I want to get an ad about a new release of a fabulous scotch, or women's underwear? I mean, you can argue, knowing me, which of those I'd prefer, but I'd still like it to be targeted.  

 
15:33 
Jonathan Harris 
We're going to get banned for a trick. But the environment that we live in is highly contradictory, right? You walk down the street, you're being tracked, facially recognized, everywhere. Every transaction you make maps you. So what happens is, it's like the myth of recycling, invented by big oil to put the burden of saving the planet onto whether or not you put a bottle in the blue bin or the brown bin or the green bin. All of a sudden it's my responsibility to deal with carbon emissions. It's the same with data: the fallacy that clicking on accept in any way protects you at a kind of fundamental human level. I clicked accept on GDPR compliance pop-ups on twelve sites today. Now I'm safe. And of course, you know, they talk.  

 
16:26 
Steven Millman 
About both-ism, right? I don't want anyone to track me. I don't want anyone to know anything about me. But I will tell you when I'm leaving my house on Facebook; I will talk about my medical history on Facebook or Instagram. And I do want targeted ads. So I want both of these things at the same time, and it's very hard to peel that apart.  

 
16:47 
Jonathan Harris 
What ChatGPT knows about people today, from the questions that they ask, is a factor greater than whether I navigate through five pages on a website or buy a red dress or a blue pair of shoes or whatever. The level of inference and insight into what's driving me as an individual, what my interests are, what my expectations are, what I'm trying to achieve at any given point in time, is beyond that of whether I read this newspaper or that website or have that app. The level of engagement is different. The level of insight is different.  

 
17:26 
Steven Millman 
Today, they're not personified in a way that's usable, but that's a matter of time, without a doubt.  

 
17:31 
Dom Hawes 
Well, I'm having too many anxiety attacks here. We need to loop it back to some basics: problem statements. We started out talking about the need to define our problem statements. For listeners who want to avoid some of the calamities we've been talking about, what should they be doing? What does their problem statement look like? How do they go about defining it?  

 
17:51 
Jonathan Harris 
I'll give you an example without giving a name. It's about trying to understand the relationship between an offer and an outcome: getting the right offer to the right person at the right time. I think of websites as mazes that, at given points in time, will change the direction of the customer journey arbitrarily. There's a block here, right? Think about the editorial team, and I'll use media because it's a very good example that most people will understand. The editorial team says that piece of content is blocked. The question is: should it be blocked or not, empirically, from a data point of view, in terms of the journey, in terms of maximizing the outcome? There's content that should never be blocked.  

 
18:34 
Jonathan Harris 
Stuff that the editorial team believes should always be available because it drives traffic, because it drives ad revenue. And then you have the kind of natural journey of the customer. So, coming back to the point about what the problem statement is, and it comes back to the data: the greater the subjectivity embedded into the environment that you're putting the customer in, the harder it will ever be to deliver a truly AI-driven outcome, because there's too much subjectivity about what the journey should be baked into the data. That's the challenge, right? If I change something because I believe in it, that's the same as saying I'm going to do an A/B test because I've done some number crunching and I think that's the right A/B test to do.  

 
19:19 
Jonathan Harris 
So I think the question is: is the environment that I'm putting the customer in biased? The root environment, is it biased? If it is biased, it doesn't actually matter what you do over the top. You're moving the customer through journeys that are not necessarily intuitive to the customer. So it's a business-centric environment and not a customer-centric environment. What you should do is flatten the environment. Say, listen, I have no subjective bias over what this environment should be. I'm going to let the data tell me what the customer journey should be. Where should people be blocked? Because where you block someone depends on what their interest is and how much time they spend in that content. Why would I arbitrarily block someone who doesn't read that content most of the time? It's the wrong environment to do that.  

 
20:01 
Jonathan Harris 
So I think generally, and this is true of all businesses: flatten the environment, get rid of the subjectivity that's baked into the data. Then you can begin to build a system that uses data to deliver the outcomes.  

 
20:14 
Dom Hawes 
Okay, I like that a lot. Stop designing your workflow around what works for you. Remove barriers, remove bias, and give your customers choice. Then you'll have better data and can build a system to deliver more reliable outcomes. An example: take down the gates on your content; they're skewing your data. Steven, what's on your mind?  

 
20:35 
Steven Millman 
So, for me, I think, in answer to your question, it's about understanding the tool, and not applying a tool, just because it's new, to a problem that it is not well suited for. So when we talk about A/B testing for conversion metrics and the use of AI, that is really easy to do with machine learning. When I say easy, my team always says, don't call it easy. It's straightforward. We know how to do it, right? Building a skyscraper is not easy, but we know how to do it. Well, not us; people know how to do it. Machine learning is exceptional at these things, because machine learning is designed to look at two or more outcomes and have the machine, without human intervention, determine how one would predict the one or the other. Those are very easy to test.  

 
21:28 
Steven Millman 
It's easy to know whether or not it's working, and it's well suited to the task. When we start using things like generative AI to solve problems, we don't have easy ways, today, to test whether or not they're doing them right. They require a lot of human oversight or computational oversight, depending on the process. And they are very good, objectively very good, at certain things, but we try to apply them to everything to see if it works, and without appropriate testing, that gets hard. So I think, as we're looking at trying to build cars instead of buying more horses, we really need to spend a lot more time thinking about what's the right tool, as opposed to what's the newest tool.  
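[Editor's note] The kind of conversion prediction Steven calls "straightforward" can be illustrated with a toy sketch. This is not from the episode or any guest's system; it's a minimal logistic-regression model trained on synthetic A/B-test data, with made-up feature names, and the assumed lift for variant B exists only in the simulated data:

```python
import math
import random

def train_conversion_model(rows, epochs=200, lr=0.1):
    """Tiny logistic-regression trainer via stochastic gradient descent.
    rows: list of (features, converted) pairs, converted in {0, 1}."""
    n_features = len(rows[0][0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for x, y in rows:
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))  # predicted conversion probability
            err = p - y                     # gradient of the log-loss
            b -= lr * err
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w, b

def predict(w, b, x):
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic A/B data: features are [saw_variant_b, pages_viewed / 10].
# We assume (in the toy data only) that variant B lifts conversion.
random.seed(0)
rows = []
for _ in range(500):
    saw_b = 1.0 if random.random() < 0.5 else 0.0
    pages = random.randint(1, 10)
    p_true = 0.05 + 0.25 * saw_b + 0.02 * pages
    converted = 1 if random.random() < p_true else 0
    rows.append(([saw_b, pages / 10], converted))

w, b = train_conversion_model(rows)
p_b = predict(w, b, [1.0, 0.5])  # variant B, five pages viewed
p_a = predict(w, b, [0.0, 0.5])  # variant A, five pages viewed
print(f"P(convert | B) = {p_b:.3f}, P(convert | A) = {p_a:.3f}")
```

Because the outcome (converted or not) is observed, the model's predictions can be checked directly against held-out data, which is exactly the testability Steven contrasts with generative AI.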

 
22:11 
Jonathan Harris 
When I talk to our clients, we talk about our domain experience. Domain experience is very important, because it's not just about the data that comes in, but about the way the data is used: what's good data, what's bad data. What domain experience actually means is that we've made more mistakes, and spent more money making those mistakes, than our clients have. That's generally what happens. So where we landed was on reinforcement learning for the problem of recommendation systems. And the reason reinforcement learning is so good is because it's highly dynamic, right? It has a feedback loop. And so the problem, again, we're coming back to the problem, is that machine learning is very good at certain things.  

 
22:51 
Jonathan Harris 
Applying the right type of machine learning to a problem, with context about the data and the outcomes that you're looking to achieve, is the hard thing. It's about fitting the round peg in the round hole and making sure that's all synchronous. And I agree with you. I think the most valuable thing that we're able to do with reinforcement learning is to understand the impact of the outcomes, not only at the client level, but also what goes into those outcomes. And I think your point about outcomes and generative AI is absolutely right. I think that will be the thing that creates friction for businesses, because what businesses love is defined outcomes.  
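[Editor's note] The dynamic feedback loop Jonathan credits reinforcement learning with can be shown in its simplest form, a multi-armed bandit choosing which content to recommend. This is a toy sketch under stated assumptions, not Sub(x)'s actual system: the content names and click rates are invented, and a simulated click stands in for real user feedback:

```python
import random

def epsilon_greedy_recommender(click_rates, rounds=10000, epsilon=0.1, seed=1):
    """Minimal bandit-style recommendation loop.

    Each 'arm' is a piece of content. Every round the loop recommends an
    item, observes a simulated click (the reward), and updates a running
    estimate of that item's click rate; that observe-and-update step is
    the feedback loop that makes the approach dynamic."""
    rng = random.Random(seed)
    arms = list(click_rates)
    counts = {a: 0 for a in arms}
    values = {a: 0.0 for a in arms}  # estimated click rate per item
    for _ in range(rounds):
        if rng.random() < epsilon:
            arm = rng.choice(arms)                    # explore a random item
        else:
            arm = max(arms, key=lambda a: values[a])  # exploit the current best
        reward = 1 if rng.random() < click_rates[arm] else 0  # simulated click
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]   # incremental mean
    return max(arms, key=lambda a: values[a])

# Hypothetical catalogue; the true click rates are hidden from the agent.
best = epsilon_greedy_recommender({"politics": 0.04, "sport": 0.20, "culture": 0.07})
print(best)
```

The epsilon parameter controls the trade-off Jonathan's "feedback loop" implies: too little exploration and the system locks onto an early guess; too much and it keeps recommending content it already knows underperforms.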

 
23:29 
Steven Millman 
Yeah. And just to layer one piece on top of that: it is really hard to do machine learning without experts, right? You need your data scientists, the ones that cringed at what I just said. You need them to do machine learning. But anyone can go on ChatGPT and just do stuff without any comprehension of how these things work and what they need to do to review them. They don't understand retrieval-augmented generation. They don't understand adversarial testing. They don't understand any of the very complicated things that you need to do to figure out if what you're doing works. The very democratization of this technology, which is fabulous, I think, is what's driving a lot of the snake oil in.  

 
24:09 
Jonathan Harris 
The industry. And it will create noise.  

 
24:11 
Steven Millman 
Oh, yeah.  

 
24:12 
Jonathan Harris 
Therefore, opportunity. If you can get rid of the noise, that's where the space is.  

 
24:22 
Dom Hawes 
Well, I don't know about you, but as I said right at the start of the first part of this podcast, I only just clung onto that conversation by my fingertips. What a ride. So where are we? Well, from where I'm sitting in Unicorny Towers, I see it like this. The job of a marketer isn't to "adopt AI" at all; it's to weave it into the fabric of marketing to enhance personalization, efficiency and engagement. And we all have to do that while navigating the ethical implications of advanced data analysis. This underscores the importance of ethical considerations and continuous adaptation, because if we're going to harness AI's power to transform our customer experiences, and therefore ultimately our effectiveness, we have to do it while maintaining trust. That ain't easy.  

 
25:11 
Dom Hawes 
So I'm starting to see AI's role in marketing both as a catalyst for innovation and as a mirror reflecting broader societal concerns about privacy and ethics. As this technology continues to evolve, you and I have to be both agile and smart. We have to embrace the right new technology as it appears, ignoring hype, ignoring anything that looks like the emperor's new clothes, ignoring anything that is probably snake oil. And we've got to do it while maintaining trust and transparency with our stakeholders, and of course, among them, customers come first. Simple? Not a bit of it. I feel like I'm standing on a beach at the moment, staring out to sea. I can handle the waves of innovation as they steadily approach. They're challenging, of course, but you know what? I love a challenge, so I'm kind of finding it enjoyable.  

 
26:01 
Dom Hawes 
But I know somewhere out there is a tidal wave, and it's heading my way. That's a completely different thing. It's disruption on a scale that maybe can't be ridden out using the tools and assets that we have today. As Jonathan said to me at the lunch that birthed this very episode: if the innovation you're working on doesn't scare you at least a little bit, it's not radical enough. Building faster horses? Do that knowing someone, somewhere, is building a car. You've been listening to Unicorny, and I am your host, Dom Hawes. Nicola Fairle is the series producer. Laura Taylor McAllister is the production assistant. Pete Allen is the editor. Unicorny is a Selbey Anderson production.  


Steven Millman

Global Head of Research & Data Science, Dynata

Executive, Award-Winning Researcher/Data Scientist, Innovator, Inventor & Coffee Snob. Throughout my career I have had a focus on quantitative/statistical analysis, survey design, research design, AI/ML, and other applied research techniques. I am presently serving as Global Head of Research and Data Science at Dynata, the world's sixth-largest market research company, where I lead a team of over 100 researchers and data scientists. I am a frequent speaker and author, multiple Ogilvy award winner, patent holder, and recipient of the prestigious Chairman's Prize from the Publishing & Data Research Forum. I serve as a member of the Board of Trustees for the Advertising Research Foundation, the ARF.


Jonathan Harris

Founder & CEO, sub(x)

Jonathan is the founder and CEO of Sub(x), a marketing technology provider that uses AI automation to drive revenue, growth and customer acquisition for digital subscription businesses.

A former investment banker at Morgan Stanley and Merrill Lynch, Jonathan launched and exited three previous businesses, and has 10 years’ experience in the application of data science in the marketing technology sector.

Sub(x) is transforming digital marketing using a proprietary self-learning AI autopilot, enabling businesses to optimise their online customer acquisition and revenue without the need for manual testing and experimentation.