
The Hunch Podcast:
Carl Wocke

How can your skills and expertise be harnessed and taken further and faster while earning more money? Carl Wocke of Merlynn describes how AI-powered Digital Twins will impact work and society and considers the ethical and regulatory questions that will need to be answered.

Episode transcript

Mark Schmid

Hello, and welcome to The Hunch. In this episode, we're looking at the future of work, AI and the potential digitisation of human experience and expertise. To guide me through this topic is Carl Wocke of Merlynn. Carl has built his AI business with a mantra of "doing what I should, not what I could", and this ethos allows Merlynn to imagine a better world where the limitation of access to expertise, experience and wisdom is removed. Welcome, Carl, and thanks for joining us.

 

Carl Wocke  

Thank you. Thanks for the opportunity.

 

MS 

Tell us a little bit about how you've arrived in the world of AI. What's your background?

 

CW

I get asked that a lot. You know, my obsession has always been around AI. I'm 56 years old, I've got a number of kids, and if you were to ask my kids what I do, they might well be confused and not be able to answer that correctly either. But I think my fascination with AI started before the turn of the century. I had a dream to be able to capture expertise, to capture the essence of someone when they die, so the dream was to be able to go and reference a relationship once that relationship had passed. And ever since those days, I've been thinking up schemes and designs and approaches to use AI to access, you know, the personality of someone.

And that's eventually, practically, led to the development of our current technology set, which is really the essence of someone, but in terms of what that someone's expertise is. So yeah, I always joke that by the time I die, people will say the most useful thing I ever did was this thing of building this type of AI.

 

MS 

A legacy, hey! Fascinating, Carl. You know, the thought of replicating the essence of an individual for you didn't start with a business application; it was much more about the individual in the round. And now of course, with digital twinning and its relationship with AI, you're bringing that forward into the world of work.

 

CW 

That's right. For a quick, brief history of digital twins: the concept of digital twins has been around for a very long time. One of the early definers of the phrase was IBM, many years ago. Originally digital twins were used to do simulation modelling, so you would recreate a factory, if you will, and you would build simulators, sort of Monte Carlo-type simulators, to try and work out what the potentials are. But in essence, digital twins have evolved since then.

Certainly with AI, the version of digital twins has become a lot more sophisticated. Gartner sees, in essence, four main evolutions or phases of digital twins, the most advanced being digital twins of people, and that's pretty much the intersection with my fascination. I've always been fascinated with digitising people, specific people with specific skills, and now the philosophy or the methodology of that most advanced version of digital twins is possible.

 

MS

And when you think about digitising an individual, can you give us an example, Carl, of how you might see that deployed today, and the particular business areas that you get most excited about when you think about the potential of this?

 

CW 

I think one of the things I've learned about business, and certainly technology, has been around timing. When you look at all of the investment in things like RPA, robotic process automation, and the automation technologies, the application for a digital twin capability, once organisations have invested in automation, is now real. You're really talking about concepts of human in the loop, human decisioning, human decision automation.

So anywhere where you've got uncertainty and therefore risk, where data hasn't really figured out what the rule is, environments where that layer lands on human expertise, that's actually where your digital twins will absolutely find a home. More advanced areas like human empathy and ethics, any area where you can't necessarily codify with rules, those are areas where you find human capabilities, or human decision capabilities, missing, more specifically.

So, as you mentioned at the start, I'm from a business called Merlynn. We've played in a number of areas, and we're finding that areas where there are levels of uncertainty and risk are areas like financial services and healthcare, to name two. You've got others, like the cyber space and the law enforcement space, but to take areas like financial services and healthcare: there's still a level of dependence on that human factor. In financial services, a typical example could be where you've got things like transaction monitoring, where banks will monitor transactions for illicit transactions.

And in those environments, there's still a bottleneck around access to human expertise. So definitely, just to circle back, where you have that sort of automation, where organisations will deploy large levels of automation technology, they eventually work out that there's that need for that human in the loop type of involvement.

 

MS 

Yeah, let's just explore that human in the loop phrase a little more, if we can, Carl. What exactly does that mean in this application, and could you explain its value a little bit?

 

CW  

Human in the loop refers to having a person's input or judgement within a process, or a person's consideration in a process. So human in the loop can be anything from a second opinion on a process to an individual monitoring a process. When I'm talking about process, I'm talking about an automation level. If one was to consider that an organisation has invested in levels of automation, there's very often still a need for a human to be considered. One can also look at it from another perspective, where a lot of the AI giants out there, the Elon Musks of the world, are talking about challenges with AI and potentially handing over the reins.

At a macro level, one can understand that you don't just want to give a supercomputer full access to the levers that will affect us. But at a practical level within an organisation, in a similar way, you don't want to hand over your processes or your workflow within the organisation simply to the AI tech; you still want to bring in, or introduce, areas where human in the loop is considered.
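
As a rough, hypothetical illustration of that idea (not a description of Merlynn's actual product or any real API), a human-in-the-loop step in an automated workflow can be sketched in a few lines of Python. The function names, the toy scoring rule and the confidence threshold below are all assumptions made purely for the example:

# Minimal human-in-the-loop sketch: automation handles confident cases itself
# and escalates uncertain ones to a person (or to a digital twin of an expert).
# All names and rules below are illustrative placeholders, not a real API.

def score_transaction(txn):
    """Toy stand-in for an automated model: confidence that the transaction is fine."""
    return 0.95 if txn["amount"] < 10_000 else 0.60

def ask_human_reviewer(txn):
    """Placeholder for escalation to a human expert's judgement."""
    print(f"Escalating transaction {txn['id']} for human review")
    return "approve"  # in reality this would wait for the reviewer's decision

def process(txn, confidence_threshold=0.9):
    confidence = score_transaction(txn)
    if confidence >= confidence_threshold:
        return "approve"            # routine case: automation decides alone
    return ask_human_reviewer(txn)  # uncertain case: human in the loop

print(process({"id": "T-1", "amount": 2_500}))   # handled automatically
print(process({"id": "T-2", "amount": 50_000}))  # escalated for review

The design point is simply that the automated layer owns the routine decisions, while anything below a confidence threshold is routed to human (or digitised-expert) judgement rather than decided blindly.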

 

MS

So you mentioned earlier healthcare as a potential early adopter, and you used a phrase as well, second opinion. You could imagine that in healthcare, the digital twin may have the expertise gleaned from many kinds of healthcare experts, but you'd want that second opinion from a human in the loop as almost a safeguard, I guess, almost a confidence-giver as well.

 

CW

That's a really good way to look at it. And I think what you're touching on is the importance of the positioning of digital twin technology as well. I like to see a successful digital twin deployment as a sort of monitoring and prevention mechanism against a bad decision. So, an example would be, and maybe these are good ways to understand the concept.

One example would be, we build a fully digitised digital twin of a clinician, and this clinician can start dispensing drugs and offering advice and the like. That technology is here and we can actually do that, and that would be a careless adoption of the technology. A better adoption or use of the technology would be as a caution against the bad decision. So a bad decision could be: a call centre is called because there's been an accident, and the healthcare provider has got to approve a protocol that sees this individual going into hospital. The call centre agent looks at the rules and says no, you don't qualify, and the individual doesn't go into any kind of emergency care. A digital twin could be positioned to caution against that bad decision.

So it wouldn't make the decision to put the person into care, but it would be a second opinion on a decision where the person wouldn't be allowed into care. I suppose what I'm trying to say is that the positioning of the digital twin as a caution against a bad decision is a fantastic placement of this kind of expertise, or this kind of digital twin expertise.
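
To make that call-centre example concrete, here is a small, hypothetical Python sketch of that positioning: the twin never admits the patient itself, it only flags denials it would have decided differently so that a person re-checks them. The rule, the twin's heuristic and the field names are toy stand-ins invented for this illustration, not Merlynn's actual technology:

# Digital twin positioned as a caution against a bad decision (toy example).

def agent_decision(case):
    """The call-centre agent applies the written protocol rules."""
    return "admit" if case["severity"] >= 7 else "deny"

def twin_decision(case):
    """A digital twin of a clinician's judgement (here just a toy heuristic)."""
    return "admit" if case["severity"] >= 5 or case["age"] >= 70 else "deny"

def review(case):
    decision = agent_decision(case)
    second_opinion = twin_decision(case)
    # The twin only intervenes when a denial conflicts with its judgement.
    flag_for_human = (decision == "deny" and second_opinion == "admit")
    return {"decision": decision, "flag_for_human_review": flag_for_human}

print(review({"severity": 6, "age": 72}))  # agent denies, twin would admit -> flagged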

 

MS  

So checks and balances, as in pretty much all important functions of state and commerce, are important - I think that's loud and clear. And we've mentioned before that there are some huge global businesses that naturally are involved in deploying this technology, from financial services and healthcare, and obviously IBM and Gartner, which you referenced, are doing a lot of work in this area to understand it as well. A lot of our listeners are from SMEs, small and medium businesses. How might this technology, and this ability to kind of replicate and make your expertise available at scale, work for a smaller business?

 

CW  

I see businesses being differentiated by their ability to respond when things go off script. So a general statement could be that your smaller businesses are, by nature, more entrepreneurial and therefore better able to respond to off-script events; those are your mom-and-pop type businesses. These are very general statements, but if you actually look at the evolution of a business, it starts with a great idea and great execution, and that's entrepreneurial. Then at a point there is a need to scale, and when you start scaling a business, you have to start bringing in the automation technology.

So you need to automate to scale, to optimise costs and efficiencies and all of those good things. What happens is we see that these businesses lose the entrepreneurial edge, and part of that entrepreneurial edge is the ability to handle things going off script. With digital twin technology, you are able to automate that expertise, that off-script expertise. And I see huge opportunity for small organisations to be more flexible, to embed off-script capabilities, entrepreneurial capabilities, ethics capabilities, customer-empathy kinds of capabilities, which one would find within an entrepreneurial environment; you can embed that within your automation.

So that really gives your small organisations the ability, if they were to create scale with that entrepreneurial intelligence at scale, to very quickly compete against the large organisations.

 

MS

And expertise, we always think of that as rooted in the individual to some degree. If you are hired as an employee, and your business, through the course of your work, captures and replicates your knowledge and expertise through digital twin technology, and then you leave that business, your expertise and knowledge is retained within the business, within the digital twin. So long term, what do you think? Do you think there are question marks around who owns your knowledge as an individual, and who is able to use it, and over what time frame?

CW

That's a really interesting and important question. With digital twins, and certainly our technology, and I'm limiting the discussion to what we do, just for your listeners: we build digital twins out of real people, specific people, so we could build a digital twin out of Mark. We don't build a generalised representation of a doctor or a generalised representation of a risk manager; we build a specific individual. So the challenge that you've highlighted is that the human has created a product of their IP, of their expertise.

This is definitely going to force organisations to revisit the relationship between employer and employee, and for a number of reasons. Just to talk about the reality of the digital twins, though: the digital twins are a representation of a specific human's decisioning, and as the human's decisioning changes, the digital twin will change and age. So you can't necessarily take a digital twin away from the human - you can for a period of time, but the digital twin is still tethered for its learning and evolution to that human. So that's a practical statement. However, what now becomes possible is that, where in the past you'd have employees working in an organisation and servicing, potentially, the customers of their organisation, now a digital twin of the employee can actually go and work in a customer's organisation.

A practical example would be: if you've got a risk manager within a bank, risk managers are very expensive resources, and the bank's customers won't necessarily have access to those kinds of skills, or the resources to build that kind of capability. What could now happen is that a risk manager from a bank can be digitised and be planted into an automation layer at the customer of the bank. Now, to the question that you asked around what this is going to mean, the relationship between the business and the employee is going to be challenged, because the employee can argue that they are now adding value outside of the organisation, within a workflow outside of the organisation.

So we've had legal opinion on this some time back, and differing legal opinion, interestingly. I think when you as an employee work for an organisation, obviously what you create belongs to that organisation, within the current context of understanding. But when one expands that outwards, where I can now go and work in the customers of my organisation, that becomes challenging. And I think, back to the earlier point, we're going to have to revisit these relationships, because the value that I bring to the organisation can now be sold a thousand times outside of the organisation. So, in a way, the organisation could eventually become an agent of my skills. All very interesting concepts.

 

MS  

It is. And recently, of course, we've heard Elon Musk's maybe slightly flippant remark that with the advancement of AI, no one will need to have a job in the future. What you've laid out for us today, Carl, makes us think that you're going to need that human in the loop; you're going to retain, particularly in high-skilled areas, the human expertise and knowledge, but it will just be made much, much more available in many, many different areas via digital twin technology. But clearly, as you mentioned, there's going to have to be a whole rethink in terms of employment contracts, and in terms of regulation. I mean, how much regulation do you think there needs to be? And of course, we're speaking to you in Africa today, and we're in London. So, do you think regulation needs to be global? Will it be by country?

 

CW  

You know, one of the questions that I get asked as well is how much regulation, and do we need regulation - you absolutely need regulation. And that's really to monitor for fair practice and fair advantage and all the regulatory things that one has become accustomed to. So we need regulation without a doubt, but within very well-defined objectives, and the devil's in the details in terms of those objectives; that will form the basis of a discussion, maybe in a future Hunch episode, just understanding what the regulatory relationship with AI is, and the like. But regulation is key. So, if you look at the concept of ethics within AI, which is one of the regulatory principles and objectives, you'll find that ethics changes based on region, as you've just suggested - different regulations for different regions.

And one of the challenges that regulators are having within the AI framework, and I know this, is around ethics. To take that as a point to explore a bit further: I could have different ethics to my neighbour, and in certain respects I would have different attitudes about things. I certainly would have a different ethics profile based on a different region in the world. So ethics is an exceptionally difficult challenge that we're facing, simply because you can't necessarily codify ethics. When a regulator has got to codify a rule that must be followed by an industry, how do you codify ethics if my ethics is different from someone who lives a kilometre away from me? So we're going to have these kinds of regulatory challenges. A very novel approach to achieving some kind of regulation would be where you start to introduce customer perspectives into your organisation. So imagine a world where we create a customer advocacy capability, where you bring in digital twins of your customers to monitor for compliance, fair trade and the like. In the same way, this technology makes it possible for you to introduce regulators, in a digital guise, into your organisation. What I'm talking about today is not necessarily the best approach; I'm rather talking about what's now possible. So, is it possible to have a panel of regulators monitoring my business?

Absolutely. Whether that'll ever be practical and acceptable is another discussion. Is it possible for me to bring my customers into my organisation as digital twins to perform a similar function, to ensure that we're trading in a fair way? Absolutely. Whether that would be acceptable or not, again, is that other discussion.
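
For readers who want to picture that "panel of digital twins" monitoring idea, here is a deliberately simple, hypothetical Python sketch. The two perspectives and their rules are invented for illustration only; nothing here corresponds to a real regulator's criteria or to Merlynn's technology:

# Hypothetical "customer advocacy" monitoring layer: digital twins of customers
# and regulators review each business decision and raise objections.
advocacy_panel = {
    "customer_twin": lambda d: "fee looks unfair" if d["fee"] > 0.1 * d["amount"] else None,
    "regulator_twin": lambda d: "missing disclosure" if not d["disclosed"] else None,
}

def monitor(decision):
    """Collect objections from every twin on the panel for one decision."""
    objections = {}
    for name, perspective in advocacy_panel.items():
        concern = perspective(decision)
        if concern:
            objections[name] = concern
    return objections

print(monitor({"amount": 1000, "fee": 250, "disclosed": False}))
# -> {'customer_twin': 'fee looks unfair', 'regulator_twin': 'missing disclosure'}

The point is only the shape of the idea: each stakeholder twin applies its own perspective to the same decision, and objections are surfaced before the decision goes out, rather than after the fact.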

 

MS  

So potentially, you could have AI-powered transparency, where you create the persona of the customer, the regulator, the compliance adjudicator, and they are effectively looking at all of the decisions you're making and flagging if they think you're doing something that isn't in the customer's best interest or is not in line with regulation, and then flagging that up to the humans at the senior level to actually make a decision based on that.

But at any time, either the regulator, regional or national, or the customers can, almost in real time, come on in and take a look. So, as you said, this is dependent on a number of things, but in theory you could almost have that constant monitoring and transparency all across your business.

 

CW

Absolutely. I mean, that's a brilliant concept. And if I could simplify a view on business: that view would be true for stakeholders in the business and stakeholders outside of the business. For stakeholders in the business, you would consider that a process in a business is cross-functional. The process could start as a commitment by a salesperson and eventually end as the fulfilment in manufacturing. Using digital twin AI technology, you could create a monitoring layer across the organisation for the stakeholders to that process, which would allow the salesperson to monitor against their expectations as the process flows through the functions.

Another way to look at a business is that a large business invariably ends up in two distinct layers, one being strategy and one being operations. Strategy is informed and defined by the directors and the management of the organisation, and they really hope that it gets translated at an operational level, and we know that there's a disconnect between the two. Imagine if you're able to digitise the objectives of the management to monitor against operations. So that's inside the organisation. Outside of the organisation, the stakeholders are, as we correctly said, the regulators, but they're also ultimately your customers. Now, my view of what a future landscape could look like would be where you have your customers being the ultimate adjudicators of fair practice.

And what I mean by that is, when you start introducing concepts of cancel culture, fairness and the like, the eventual arbitrator isn't going to be a court of law; it's going to be your customer, who could deem you as not having traded fairly. A practical example in that regard would be an insurance company that has the right to refuse or repudiate a claim, but customer sentiment and public opinion could say, well, you actually should have paid that claim, and for these reasons we think you've done the wrong thing here. Now, that kind of event is then after the fact: we've now agitated the customer, we've agitated public opinion, and we're in trouble as a brand.

If we bring customer opinion upstream into the organisation, we could mitigate that risk before it becomes a claim that we should have paid but didn't. So, in terms of ultimate transparency, there is no reason why we don't bring in ultimate transparency in terms of stakeholder objectives, from within the organisation and outside of the organisation, to help monitor within our business world.

 

MS

So, in reputational terms for an organisation, it's the court of public opinion and the court of law that will ultimately make or break you, as much in an automated world as it does now. It's really interesting. And just thinking in terms of the public view again, and their kind of confidence levels and also their emotional engagement: could you see a time when "made by humans", as a label, is a differentiator for companies in an almost totally automated world?

 

CW

Absolutely. Absolutely. These things are all pendulums: we've moved from lots of human, the pendulum swung to lots of automation, and it's going to swing back towards lots of human. So absolutely, certainly, differentiation is what one is looking for. The idea that you have human touch - come and have coffee with me, let's onboard you as a customer, we'll come to you, and all of those things - is absolutely correct. However, the difference now is that we're dealing in an environment where we demand scale. Organisations and small businesses don't have the ability to rely only on practical human touch; we now need to bring in automation. So maybe halfway between full automation and human touch would be automation that involves human sentiment, human touch, the ethics and empathy type sensors. So I think the pendulum will come back to the middle.

But it's most certainly a differentiator. I mean, organisations are constantly espousing that "we see you as a person", and then it's "come to our website and have a discussion with a chatbot". So, to answer your question: is that going to be a distinction, and one that one should exploit? Absolutely.

 

MS  

And as citizens, what should we be looking out for? Merlynn is set up to do things in the right way for the benefit of all with AI technology. What should we as citizens be watching out for? What things could come back to our detriment if we don't stay alert?

 

CW  

I think being disintermediated out of a process. You mentioned Elon Musk, and he had a view that we're not necessarily all going to have to work - he's a phenomenally clever guy, and I'm certainly not going to be his intellectual equal - but I think the challenge with the concept of not having to work is that we won't be contributing anymore. And this is the logical challenge for me: if I'm not contributing, I can't be earning. Not contributing and still benefiting works very well, I think, for the primary needs, like food, water, power. But when you look at the things around you - not the luxury items, but your cars and TVs and computers, and then the better food, the better houses, etc.

You need to somehow differentiate the value that you bring, to be able to have differentiated value in terms of what you take. So the model where you say you don't have to work, but you're just going to be served by AI, I think is flawed. And we need to be very careful around that disintermediation out of processes, that loss of relevance, where my contribution is no longer needed. It speaks to the concept of building these digital twins out of what your contribution is. I've always seen a world where my digital twin goes to work; my digital twin is really a proxy of me. I have a relationship with the digital twin, in that the digital twin will do most of my work and then communicate back to me when it hasn't been able to do the work, where there's input needed, and where there's further training needed.

So what should we be looking out for? Complacency. Feeling that we're all going to be alright when AI takes over - I'm not so sure. I think the single statement would be loss of relevance. If you have loss of relevance, I don't know if you're going to be able to take value off the table anymore.

 

MS

So, you won't be rewarded for those material differentiators, and you'll also probably have an issue around your mental health in terms of feeling fulfilled and feeling that you're contributing. So yeah, that is indeed a big watch-out, Carl. We'd like to end on a positive note, though. So, I'm going to ask you what your hunch is about an AI-powered or AI-enabled future of work.

 

CW

So, I think we can all acknowledge the world around us is becoming more digital, more complex. I see a world where AI allows us to participate, where we can create digital versions of us to be able to participate. I think that's key. Our business has already started to build the pieces for this view.

So, my hunch would be that I'm not necessarily only going to be the beneficiary of this new technology, where I get a pizza delivered faster and I get more entertainment; I need to be able to participate in a world where we participate and not only consume. I think the vision that Meta, that Mark Zuckerberg, had in terms of a Metaverse - I'll probably be shut down for this - but I think that's correct. I don't necessarily think the translation that they have of that world is correct; I think there's a lot of resolution missing from that view. But I do see a digital world where that Metaverse of being served, and actually serving, in a digital form is real. I see a digital marketplace for digital twins. So I would want, and see, the ability for me to create digital twins of my skills and my expertise, placed into a sort of Amazon-type marketplace, where people can interact with and consume those skills.

Again, the pieces for all of this are there. The future world would be: I send my digital twin to work, and it works in ten different countries, in a number of different organisations, adding value within those organisations at an operational level, earning me a small revenue per use, which will rapidly add up. So, a digital world - a Metaverse-enabled world.

 

MS  

Thank you, Carl. Listeners can find out more at Merlynn, that's M-E-R-L-Y-N-N hyphen AI dot com. Thank you for listening, and Carl, thank you very much for joining us on The Hunch.

 

CW

Thank you so much.

