
HS069: Regulating AI

Greg Ferro

Johna Till Johnson


In today’s episode Greg and Johna spar over how, when, and why to regulate AI. Does early regulation lead to bad regulation? Does late regulation lead to a situation beyond democratic control? Comparing nascent regulation efforts in the EU, UK, and US, they analyze socio-legal principles like privacy and distributed liability. Most importantly, Johna drives home the point that regardless of what your government may or may not be doing, your company better get some internal AI regulations in place.

Episode Transcript

This episode was transcribed by AI and lightly formatted. We make these transcripts available to help with content accessibility and searchability but we can’t guarantee accuracy. There are likely to be errors and inaccuracies in the transcription.

Greg Ferro (00:00:00) – Welcome to Heavy Strategy, the show that's more about questions than answers. We're trying to help you think about the problem and come up with your own questions that relate to your situation. Johna and I want to take on a difficult topic today: AI and the regulation that we've seen emerge around it. And Johna and I have very divergent attitudes about this, so I want to put a trigger warning in here to start. This whole topic of regulation touches on politics and geopolitics; you can't get away from that in the current era. And it also touches on society. Is AI a societal issue? And because it's a societal issue, is it a political issue? I do want to say that both Johna and I approach this apolitically. That is, we're not particularly on one side or the other of the fence here. What we're trying to get into is: should there be regulation around AI, or should it be allowed to develop on its own, without any government control, until some point in the future? Our positions fall somewhere in the middle.

Johna Till Johnson (00:00:58) – I think that's correct. I don't think either one of us takes the position that no regulation at all is wonderful, desirable, and moreover effective. So we can scratch that off the list. If you're listening to this and you're saying, hey, I'm a free-market libertarian, well, as Greg said, there was a trigger warning there. Sorry about that. But we do think that regulation has a place in society, generally speaking, and very likely in the special case of AI.

Greg Ferro (00:01:26) – That's right. And if you are an absolutist about this, that companies should be able to do whatever they want and the market will decide, just remember: if you didn't have the appropriate health body in your country or jurisdiction, or the water control board that makes sure your water is clean, your water would not be clean and your food would not be healthy, because those companies would use the cheapest possible inputs.

Johna Till Johnson (00:01:48) – Don’t get me started, Greg.

Johna Till Johnson (00:01:49) – The usual response is, I don't need no stinking government, I'll collect my own rainwater. But as you said, we are not going to be political about this.

Greg Ferro (00:01:57) – That's exactly right. The traffic rules exist to protect everybody, not just you. So let's get into this. What we saw in the last two or three months is a whole bunch of regulation discussion. We saw the EU get together and make moves on its AI Act specifically. In the US this week there's been an executive order from the presidential arm on AI, putting out some initial guidelines on what they expect AI to do. The G7 have also announced some voluntary guidance. The UK had an AI summit of some sort, and we got to watch our Prime Minister fawning all over Elon Musk like he's not an idiot, which was really embarrassing.

Johna Till Johnson (00:02:34) – I'm glad I missed that, Greg.

Greg Ferro (00:02:37) – Yeah, it was terrible. So what we're seeing is this emergence of politics, of governments putting some controls around AI, or looking into it, or setting out a case for regulation.

Greg Ferro (00:02:49) – The question is, Johna: is this good? Is this bad? Or where are the degrees of good and bad in here?

Johna Till Johnson (00:02:56) – Well, it depends what you mean by "this." If by this you mean, from the standpoint of saying technology needs to be looked at holistically and assessed realistically, so what are the threats, what are the vulnerabilities, what are the downsides as well as the upsides, then I think that's a good thing. However, if by this you mean the specific regulations that are in place, then I have some real reservations, particularly with the executive order. My concerns are not about the notion of regulating AI, but about the way it's been gone about.

Greg Ferro (00:03:33) – So maybe we should start with the first question: should AI be regulated? Let's not talk about how yet; we'll focus on that as a second stage. Do you agree in principle that AI, as a very broad idea, should have some regulation? There is a case to be made that AI should be allowed to evolve until it reaches some sort of useful product.

Greg Ferro (00:03:53) – Today, ChatGPT has been on the market for less than 12 months. You could make an argument that ChatGPT hasn't really found its market yet; even though it's making $1 billion a year in subscriptions, is it much bigger than this? Should it be left to evolve and grow until it reaches some sort of steady state, and only then should the government jump in? Or could you take more of my position, which is that government should engage early and not let these tech companies go out of control and become too big to manage?

Johna Till Johnson (00:04:21) – I think it depends on what you define as regulation. For example, as you were posing the question, I was thinking a very reasonable regulation would be a warning of the form that the US government finally imposed on cigarette manufacturers: cigarettes are dangerous and can kill you. You can't buy a pack of cigarettes without that leaping out at you. That said, you can still buy the pack of cigarettes. And we talked about this in the prep call.

Johna Till Johnson (00:04:46) – We kind of disagreed on this basic stance, but, you know, ChatGPT can create absolute nonsense. You were calling it garbage in, gospel out. And my response was, well, for heaven's sakes, everybody knows that. And your response was: they do not. So to the extent that everybody doesn't know that, I think regulation that would require purveyors of consumer-oriented, ChatGPT-like services to clearly label not only their tools but also the product of the tools would make sense. What I'm saying in plain English is: slap a warning on ChatGPT and anything like it that says you must not trust this output, it is completely untrustworthy by design. And oh, by the way, the whole watermarking concept, which is embedded in the executive order, is a brilliant one, if somebody could figure out how to do it. So from the standpoint of warning people that this can be abused, I think that's definitely something.
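For context on what "watermarking" could even mean for text, here is a toy sketch of one approach from the research literature (green-list watermarking, in the style of Kirchenbauer et al.): the generator prefers words from a pseudo-random "green" subset of the vocabulary keyed on the previous word, and a detector measures how improbably green a document is. Everything below, the vocabulary, the key, the selection rule, is illustrative, not any vendor's actual scheme.

```python
# Toy sketch of statistical text watermarking: a generator that favors a
# keyed "green list" of words, and a detector that measures green density.
import hashlib

VOCAB = ["network", "router", "packet", "switch", "latency",
         "firewall", "protocol", "gateway", "buffer", "route"]
KEY = "shared-secret"  # both generator and detector know this

def is_green(prev_word: str, word: str) -> bool:
    """Pseudo-randomly assign roughly half the vocabulary to the green
    list, keyed on the preceding word so the split varies by position."""
    digest = hashlib.sha256(f"{KEY}:{prev_word}:{word}".encode()).digest()
    return digest[0] % 2 == 0

def watermarked_choice(prev_word: str, candidates: list[str]) -> str:
    """Pick the first green candidate; fall back to the top candidate.
    A real generator would instead bias the model's token probabilities."""
    for word in candidates:
        if is_green(prev_word, word):
            return word
    return candidates[0]

def green_fraction(text: str) -> float:
    """Detector: fraction of words that are green given their predecessor.
    Unwatermarked text should sit near 0.5; watermarked text well above."""
    words = text.split()
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return hits / max(len(words) - 1, 1)

# Build a "generated" sentence, always preferring green words.
sentence, prev = [], "<start>"
for _ in range(12):
    word = watermarked_choice(prev, VOCAB)
    sentence.append(word)
    prev = word

print(" ".join(sentence), green_fraction(" ".join(sentence)))
```

The hard part, which is why Johna hedges, is making the signal survive paraphrasing, translation, and editing; this toy detector breaks as soon as someone reorders a few words.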

Greg Ferro (00:05:44) – I think that’s a fallacious use case.

Greg Ferro (00:05:46) – Let me point you to the link that I've got here: industrial robot crushes man to death in a South Korean distribution center. This is a robot that's using image recognition to detect what's on the line. It recognized the worker as a box and then proceeded to pick him up, killing him. Is that just a terrible accident? Or is that an example, an extreme one, of the output of an AI algorithm doing image recognition failing and resulting in a human death?

Johna Till Johnson (00:06:15) – Well, I think you're playing a bait and switch here. That isn't even remotely ChatGPT; that's AI going to work in an industrial robotics setting. And let me remind you that there's plenty of regulation around industrial robotics. And this is the crux of my challenge with AI generally: it's not the AI that's the problem, it's how people use it or could use it. So just to bring up another example, I wrote an advisory a few months back called Gas Station Heroin, AI and You.

Johna Till Johnson (00:06:48) – And the problem was that you can buy entirely unregulated supplements at gas stations in the United States, and they can be one molecule off of actual heroin. You can really mess yourself up, because these things are unregulated, because technically they're not selling heroin. Now what's happening is the companies are using AI to generate more and more of these compounds, none of which are regulated. My point is, I don't really care whether the compound got generated by the use of AI or by a couple of mad scientists in a lab somewhere. The point is, you're effectively selling heroin at gas stations, which probably is a bad idea. So, back to your robot.

Greg Ferro (00:07:26) – Back to my robot.

Johna Till Johnson (00:07:27) – The point is that you should not be regulating so much the use of AI, but the use of industrial robots, and, you know…

Greg Ferro (00:07:35) – In this case, the part of the machine that led to this unfortunate person's death was the artificial intelligence, the machine learning algorithm, that was not accurate enough, not good enough.

Greg Ferro (00:07:48) – And in my mind, this is what regulation, from my reading of how…

Johna Till Johnson (00:07:52) – How does regulation make AI algorithms better?

Greg Ferro (00:07:54) – It's part of the AI regulation. I'm going to quote you here from Firefox, or rather the Mozilla Foundation. They released some guidelines; they tend to get involved in political issues, which is sort of associated with their openness. And they say they see four things to contemplate in rules or regulation for AI. First: incentivize openness and transparency. Open systems facilitate scrutiny and help foster an environment… So far, all of the AI companies have refused to engage on that point. They refuse to be open, they refuse to be transparent, because they say that's our competitive advantage, and more importantly, we can't. And this comes back to the line I brought up, garbage in, gospel out. Look, how many times have you read a Microsoft Word document where someone's trusted autocorrect to do all the spelling corrections?

Johna Till Johnson (00:08:42) – Well, coming back to this.

Johna Till Johnson (00:08:44) – This is exactly the point that I made up front that you dismissed. Openness is key. If you say, expect garbage out, here are my algorithms, this is what's going on, that's a form of openness. And that's actually very, very important, because you're simply giving people the tools to evaluate.

Greg Ferro (00:09:00) – You're going with the idea that people can smoke cigarettes and kill themselves, and that's okay. We shouldn't take cigarettes off the shelves, we shouldn't make cigarettes safer in any way by reducing tar or by making sure there are no toxic chemicals. Until just a few years ago, for example, cigarettes contained toxic chemicals that helped them burn faster.

Johna Till Johnson (00:09:16) – All I can point out is two examples of this. One is that once we started putting those warnings on, smoking began to decrease, and it's decreased out the wazoo. The second is that we're doing the same thing now by putting calorie labels on food. It's amazing the impact: people actually do change their minds when they're buying fast food.

Greg Ferro (00:09:37) – To my mind, that would be a light touch regulatory control

Johna Till Johnson (00:09:38) – Exactly. And that's why I'd put a warning…

Greg Ferro (00:09:41) – …label on the outside.

Johna Till Johnson (00:09:42) – I think we can agree on that.

Greg Ferro (00:09:44) – But then the question is how much further to go. One of the interesting ones that the Mozilla Foundation put forward said: distribute liability equitably.

Johna Till Johnson (00:09:52) – That I agree with enormously. In the United States, you know, not to get into politics, but one of the canards is that you can't sue a gun manufacturer or a gun owner for what got done with their dangerous piece of equipment, whereas you can in fact, and it has happened, sue and recover damages from a rental car agency that leased you a car. And if that car was, you know…

Greg Ferro (00:10:18) – Or an unsafe lawn mower, which causes, you know…

Johna Till Johnson (00:10:20) – It’s crazy.

Greg Ferro (00:10:22) – So yes, what I'm saying here is the complexity of AI systems necessitates a nuanced approach to liability that considers the entire value chain, from data collection to model deployment.

Greg Ferro (00:10:30) – Liability should not be concentrated but distributed, reflecting how AI is developed and brought to market, rather than…

Johna Till Johnson (00:10:36) – I think in the United States in particular, that's an excellent way to control things.

Greg Ferro (00:10:40) – Yes, that's right. The last point in the chain should not be carrying all the liability, because then the liability would have to be pushed back up the chain by each person in the chain. And the challenge there becomes: someone in the chain breaks down or goes broke, and then the liability chain is broken.

Johna Till Johnson (00:10:55) – Which is actually a very interesting point. Yes.

Greg Ferro (00:10:57) – And I think that's very reasonable too, because what we've also seen here is liability around copyright, where these companies have, by and large, taken data off the internet which is copyrighted, which belongs to somebody. My blog, for example, is showing up in ChatGPT. They've taken my content, which is copyrighted, has a copyright logo at the bottom, and it's now forming part of the knowledge base of ChatGPT.

Greg Ferro (00:11:19) – That is theft, right? But more importantly, what happens if I'm wrong? What if I've written an article…

Johna Till Johnson (00:11:26) – Which is exactly it. Side note: there's actually a wonderful book out, an AI-generated book about mushroom recognition. As everybody knows, some mushrooms can be extremely toxic. And in this AI-generated book, one of the tests they recommend to find out whether a mushroom is toxic is: eat it. And this book looks exactly like a legitimate mushroom book. You know, I read that.

Greg Ferro (00:11:53) – That is one that is the least, at least good advice. I’ve heard about mushrooms in a while.

Johna Till Johnson (00:11:57) – Right, right. Yeah.

Greg Ferro (00:12:23) – The next one they talk about is championing privacy by default. They're saying that privacy legislation must be at the forefront. Now, the use case that I would put here is that this week we saw a company called, sorry, Kochava, and they've been sued by the FTC for selling data that tracks people, to the point where it's just mind-boggling how close the tracking gets. The actual lawsuit that the FTC is bringing says they track people, whether they go to reproductive health clinics, places of worship, or other sensitive locations, and they sell that data. So people can then buy data based on, oh, you've just been to a clinic; maybe they know something personal, maybe a sexually transmitted disease or maybe a heart disease. And then insurance companies can go and get that information and decide whether to insure you or not.

Greg Ferro (00:13:12) – And I think the angle to consider here is not that the models contain this data, because that horse has bolted. It's that this allows the use of AI and machine learning to give outcomes which are abusive, if you will.

Johna Till Johnson (00:13:29) – Right. Well, the challenge that I have here, and this is, broadly speaking, the challenge with regulating AI generally: you want to regulate the outcome. The problem isn't AI abusing your privacy; the problem is your privacy getting abused. And one of the bigger challenges, and I'm going to keep narrowly focusing on the executive order because I have some real concerns about it, is that in the United States there is not a bedrock principle that privacy exists. In fact, we've recently eroded that. So there's a notion that you don't inherently have privacy, and even when the states are trying to promote it, the federal government has not taken a position the way, for example, Europe has with GDPR. So it's very difficult to layer on AI privacy, or protection of data use within AI, if there's no underlying right. So you would have to…

Greg Ferro (00:14:19) – But what you’re saying is just to me, there is not that it can’t happen, but that if you start to do this here, you actually have to do something radically different in the US context, whereas other countries would have less of a uplift here and say it’s much more likely.

Johna Till Johnson (00:14:32) – So. Exactly. And that gets back to the gas station heroin. Because the problem is, the problem isn’t that AI is being used to generate gas station heroin. The problem is that supplements are completely unregulated. So then the question is, do we want to regulate every single supplement? That’s a whole different discussion. But the point is it’s the output. It’s not the you know, it’s a different way of looking at everything, not just I like I said, now.

Greg Ferro (00:14:55) – Let me also for the people who are radical transparency, saying there is no you shouldn’t have privacy. There’s nothing to be privacy. Next time you go to the to a public toilet, I want you to imagine glass windows in between each toilet.

Greg Ferro (00:15:07) – No male and female toilets, just one with glass windows. Right? So you could actually.

Johna Till Johnson (00:15:11) – They have that in some trendy clubs in New York.

Greg Ferro (00:15:15) – So if you think that is radical transparency, right. If you’ve got nothing to hide, then you believe in toilets, you know, in public toilets, actually having glass between them so you can see what everybody’s doing. And that is an extreme example. But there is a point where something should be, you know, you don’t need to see everything that somebody is doing.

Johna Till Johnson (00:15:32) – Right. And I do think in the United States we've taken the "privacy is unnecessary" idea way, way too far. But I would like to point out something, which is, I think in the end, even though we promised no politics, this comes down to politics with a small p. I mean, this isn't Republican or Democrat, or Tory or Labour or whatever, but it does come down to fundamental premises about how society should be run.

Johna Till Johnson (00:15:56) – And I think that's part of the challenge: everybody's overburdening this notion of AI regulation with things that properly belong to a much broader, not political but social, conversation.

Greg Ferro (00:16:09) – But as always, when it comes to political change, it tends to come around some sort of a key issue or a right.

Johna Till Johnson (00:16:15) – There’s a catalyst.

Greg Ferro (00:16:16) – There’s usually a catalyst. And then that often then frames the resulting legislation, which is then not necessarily ideal. Right. But I think my point here is, is that.

Johna Till Johnson (00:16:25) – You got to start somewhere. Yes.

Greg Ferro (00:16:26) – That’s the flip side here of privacy and AI is it’s not that it’s not that they’ve got the data, although that is a worrying and a concerning thing. It’s the exploitation of that data or the the information that you can extract from that data that AI enables. That is the concern here. now, to be sure there are going to be nation states out there. So what we are seeing is the weaponization of this data collection. We’ve had nation states going around collecting data out of, you know, stealing data for quite a long time now as our cyber security was poor generally.

Greg Ferro (00:17:00) – And what. We now know is that they can now apply AI to that to find out. You know, they could go to look for people who potentially could be compromised as spies or people who’ve got access to, to classified information or.

Johna Till Johnson (00:17:11) – At a much more relevant level. One of the things that we’ve been guiding clients on, and I think it’s important to think about, is inside an enterprise, you probably have a data classification process and well in place, you basically go through and you label your data. This is either sensitive data like PII or it’s not sensitive data. And you feel like you’ve controlled it with DLP, you’re done. The challenge that AI brings to the table is that I can recreate, to a high degree of accuracy, PII and other sensitive data from non sensitive data or not sensitive data, which means that from an enterprise perspective, regardless of what’s going on with the regulations, you need to be thinking about how you’re going to address this problem. Because. So what that actually means is that if you have a data classification scheme in place and you have AI anywhere in the organization, you have to go back and rethink what you’re doing with data classification, based on the fact that I can recreate a lot of sensitive data without you even being aware of it.

Johna Till Johnson (00:18:11) – And that’s a very major shift in thinking that enterprises need to make again, regardless of of regulation.
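To make that concrete, here is a minimal sketch, in Python, of the kind of inference Johna describes: joining two datasets that each look non-sensitive on their own to re-attach names to "anonymous" records. The datasets, field names, and matching rule are all hypothetical; real attacks use richer signals and ML models rather than an exact join.

```python
# Hypothetical illustration: two datasets that each pass a naive
# "no PII here" classification, which together re-identify a person.

# Dataset A: an anonymized health survey (no names).
survey = [
    {"zip": "10001", "birth_year": 1984, "sex": "F", "condition": "cardiac"},
    {"zip": "94110", "birth_year": 1990, "sex": "M", "condition": "none"},
]

# Dataset B: an internal directory (names, but no health data).
directory = [
    {"name": "A. Example", "zip": "10001", "birth_year": 1984, "sex": "F"},
    {"name": "B. Sample", "zip": "94110", "birth_year": 1990, "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

def reidentify(survey_rows, directory_rows):
    """Join on quasi-identifiers to attach names to 'anonymous' records."""
    index = {
        tuple(d[k] for k in QUASI_IDENTIFIERS): d["name"]
        for d in directory_rows
    }
    for row in survey_rows:
        key = tuple(row[k] for k in QUASI_IDENTIFIERS)
        if key in index:
            # A sensitive attribute is now linked to a name, even though
            # neither dataset was classified as sensitive on its own.
            yield index[key], row["condition"]

for name, condition in reidentify(survey, directory):
    print(f"{name}: {condition}")
```

The well-known result behind this is Latanya Sweeney's finding that ZIP code, birth date, and sex alone uniquely identify a large share of the US population; machine learning simply widens the set of attributes that can act as quasi-identifiers.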

Greg Ferro (00:18:17) – Yeah. So in the FTC complaint against which is being accused of selling way too much information in this case, they’re saying the data could be used to track consumers to a place of worship, thus really reveal the religious beliefs and practices. The data center. The data sample identifies the mobile devices that were located at Jewish, Christian, Islamic and other religious denominations places of worships. They also then have data. They can then track you through to the place that you live, so they can track that you being in the same location over a period of time, which can reliably then be used as an identifier of this family residence that you live at, so they can now track your religious belief as well as where you live. And they then sell that information. So in the wrong hands of somebody who wants to persecute a particular subgroup, maybe it’s homelessness, maybe it’s addict addicts of some sort. You can actually find out where they are, what they leave things in the past.

Greg Ferro (00:19:08) – And this is what I’m saying is that AI enables this or allows you to take this up to another level where before it was kind of.

Johna Till Johnson (00:19:14) – It, it absolutely does. But again, it’s the same problem, which again, the solution for enterprise organizations and for folks that are thinking IT strategy is to make absolutely sure that you have in place digital ethics and AI governance, because that exact problem existed 15 years ago. Once people started handing out work phones to employees, it was then able to track wherever that phone went and it suddenly was able to notice. And this was a…

Greg Ferro (00:19:43) – Read every email, read every piece of code.

Johna Till Johnson (00:19:45) – Well, that part that’s that’s, you know, 30 years ago, 40 years ago. But with the phone, you could actually be able to see something like, oh, gee, we have an employee who is in recovery who just walked into a bar, or at least his phone walked into the bar. And the question became, as an IT person, what is my obligation, should I? So none of these.

Greg Ferro (00:20:03) – Issues on, you know, my point is, is this not that this exists as an issue, it’s whether AI changes the nature of the problem.

Johna Till Johnson (00:20:09) – But I guess the point that I’m trying to make here, Greg, is it’s all well and good for governments to regulate. And I and again, I think bad regulation is worse than no regulation is kind of the way I would put it. However, as an organization and IT organization, you don’t have the luxury of waiting until good regulation comes along. You actually have to start thinking about how you’re going to deal with these problems. So as you’re listening to this, think about how you would how you would handle these problems inside your organization. And I think one thing that really leaps out from the executive order is even if you think you aren’t using AI, you absolutely must have an AI governance in place.

Greg Ferro (00:20:46) – Well, the evolution we’re seeing in the in the market in terms of product ization is this is AI is being added to everything. Exactly. Microsoft Office, Salesforce, slack, Dropbox, they’re all using AI somewhere.

Greg Ferro (00:20:57) – Exactly that. And this has all happened very, very quickly. Typically, you know, I said as I said, ChatGPT has only been around for 12 months. This has led to a draw, a whole new fashion amongst companies to add AI to their products somewhere in, I’m sure. And you would make the point that AI’s been around for 20 or 30 years and it’s been creeping into various things. so, for example, this week, Cisco made an announcement that it’s enabling its AI sales platform for resellers. So Cisco is now going to tell resellers using AI enhanced technology to see what I mean. Cisco doesn’t have an AI product, but it does have an AI sales strategy.

Johna Till Johnson (00:21:30) – Cisco has been talking about building AI into its products and have been doing it for for at least 15 years, actually closer to 20. So the point is, ChatGPT became the catalyst. So suddenly everyone was aware of what? Of the potential of this technology. and I’m going to reiterate my point because I think it’s important if you’re listening to this and you’re going, well, I agree with Greg.

Johna Till Johnson (00:21:52) – Regulation is good. I’m glad they’re regulating or gee, I’m anti-regulation I’m not so sure it’s good that the bottom line is that actually doesn’t matter because. Because you if you’re if you’re dealing with it inside a company, you’re going to need to agitate to have AI governance put in place, not, you know, not next year, not next month, not next week, but maybe tomorrow. And in fact, if you look at the executive order, that’s a flat out requirement. I’m not going to go into the details of the ways the executive order is extremely ambiguous, but the net result of its ambiguity is there’s no possible way you can read that executive order and say that you’re immune. Yes. however, if you’re using AI and…

Greg Ferro (00:22:34) – My counter to your point here is saying yes, bad regulation is bad. But my point here is that when we didn’t put regulations around the internet, around the 2000, we ended up with these tech companies running away into directions. And now we’re trying to put society wants to see controls around organizations like TikTok and Facebook and Google and their dominance in the market.

Greg Ferro (00:22:55) – But governments are really struggling because those companies have become far too powerful in some way or their ability to hijack the democratic and the legal system and work it to their benefit. So there is an angle that you could take, and one that I’m thinking is what’s being done here, which is governments are saying in this case we have to intervene early, right? We have to put in.

Johna Till Johnson (00:23:15) – That’s absolutely what they’re taking. It’s just that the nature of the intervention is such, you know, the example I’m thinking of, Greg as you’re talking about this is oh my gosh, I have kids. Oh my gosh, the older ones smoked dope once so I’m going to lock the younger one in her room. And she will never get to go out and never get to have friends and never get to go to school. There’s a level of overreaction to admitted harms of previous eras. And, you know, again, I’m not I’m not anti-regulation per se. I give you some examples of where I could see it being highly valuable, particularly the distributed risk, which I love, and distributed liability.

Johna Till Johnson (00:23:54) – However, one of the concerns I have is reading through this regulation. It is extremely onerous in terms of what it imposes, not just on the AI companies, on literally.

Greg Ferro (00:24:04) – A signal that government is serious about this. I think what you’re seeing here is a signal from certainly the Western government that they are saying, take this back to sort of fundamental purposes and and redact it down to its basic. What it’s saying is we’re not just saying be good people, right. It’s not like don’t don’t make remember the old Google a line of, you know, don’t be.

Johna Till Johnson (00:24:25) – Don’t be evil.

Greg Ferro (00:24:26) – Right? Which they no longer subscribe to because it just got unworkable after repeated time. If they had a posted that we know that the tech companies and the venture capital and the private equity people would have just ignored it and just gone right on and done whatever they want, and it wouldn’t have achieved anything. So by putting some more, clarified, forceful, possibly bad regulation in place is probably better than not doing anything at all or doing something very low.

Johna Till Johnson (00:24:51) – So so basically, basically, I think we’ve zeroed in on our respective positions. Greg’s is that bad? Regulation is better than none at all in this mind. Is that bad? The bad regulation, can be worse than than what it’s attempting to regulate. Here’s the kicker. It doesn’t matter. You can side with Greg. You can side with me. But given where things are going in the US and outside the US, the impact on anyone doing it is that you’re going to have to spin up an AI governance group and have it be more than just ticky dot in the very near future, because keeping abreast of these regulations and the point at which they’re going to start having teeth right now, they don’t have teeth, but that’s I’m sure we can all agree that’s temporary. That’s going to be a full time job for somebody or somebody, depending on how large your organization is. And you’re going to need to be feeding this information back to people, setting corporate strategy that, you know, you want to stay ahead of the corporate strategy, because if they’re not aware of the implications of what they’re doing, the company could get in a world of hurt.

Johna Till Johnson (00:25:54) – So fundamentally, you have to stand up.

Greg Ferro (00:25:57) – So one of the one of the thing I think what you’re alluding to there is that if the regulation works in a certain way, your employer could suddenly be exposed to that as part of the blast radius. You might suddenly be on the…

Johna Till Johnson (00:26:08) – I’m saying that as written today. Yes, that is the case because the the characteristics of those for whom the regulation applies are written so broadly and so ambiguously. It’s basically everybody. And so you don’t even have a defense. As we said earlier.

Greg Ferro (00:26:24) – Ability starts in that the the you know, your company has has liability here. If you’re using AI products, you’re incurring at least some liability. And like it is with AI.

Johna Till Johnson (00:26:34) – And if you are not using AI products or claim you aren’t, check again because you probably are. So yeah.

Greg Ferro (00:26:39) – Cyber security up until cyber security, you can just do whatever. All you need to do is do enough to say, oh, we tried so hard. And now that’s been clocking up and the pressure on companies to improve the level of sight.

Greg Ferro (00:26:49) – Now we’ve got this situation where the CISO from SolarWinds was now being sued. Now there are extenuating circumstances here and all that sort of stuff. It’s not. But what we’re seeing here is just the societal push to increase cyber security because it’s unnecessary well, thing.

Johna Till Johnson (00:27:04) – And coming back to that distribution of liability, I think the big thing that’s changed in the past two years in cybersecurity is now we’re seeing that the board is liable, senior executives are liable, and companies are trying to react by saying, oh, it’s not us, it’s the CISO. But that’s not working anymore. And that’s what we’re going to see with AI.

Greg Ferro (00:27:22) – So what are two things I’d like to see you take away? Just remember that anything AI is, is a garbage in and gospel out, unless you’re there to say it’s not. There’s no question if you think of autocorrect in Microsoft Word, AI is conceptually very similar. That’s how I’ve often pitched it to, to boomers who don’t understand. I said, look, it’s just like autocorrect in Microsoft Word.

Greg Ferro (00:27:42) – Sometimes it’s right and sometimes it’s not right. And you have to be there to judge the decisions. The responsibility is still on you in that. And the other one is that perhaps in this case, the governments here are just trying to get ahead of a tech, a new emerging tech industry before it gets too large and too uncontrolled, and then they lose control of the situation. Yes. It’s possible that there’s bad regulation going on here. Probably likely. But at least they’re doing something instead of letting it go. You know, turn it into something.

Johna Till Johnson (00:28:09) – To which I would respond, listening to all this, that’s all very nice in the abstract. At a practical level, it means that you are not immune. So even if you think you’re avoiding, you’re dodging the AI bullet, you probably aren’t. And you really want to start thinking about what you’re going to do inside your company.

Greg Ferro (00:28:24) – Thanks very much for listening to Heavy Strategy. As always. It’s been such a privilege to have you with us.

Greg Ferro (00:28:28) – Johna, where can people find out more?

Johna Till Johnson (00:28:30) – Come hit us up at Nemertes.com. We have a community where we talk about this and other issues. You can just fill out the application form and we look to see you there.

Greg Ferro (00:28:39) – And I’m Greg Ferro. You can find me over on Packet Pushers with a lot of other fine free technical content. If you appreciate it today, don’t hesitate to throw your ideas out at packetpushers.net/FU. Send us your follow up and let us know what you’re thinking. Did we get this right? Do we get this wrong? You know, hopefully we found a balance through the middle of this. And if we haven’t, we apologize. We did try very hard, and we even put a trigger warning at the front. Thanks very much for listening. We’ll see you again in a couple of weeks.
