Listen: Next in Tech | Episode 160: AI use cases

While there is no end of generative AI discussion, it’s not often clear how it’s being used. Nick Patience and Alex Johnston return to explore the results of a recent study that digs into AI use cases with host Eric Hanselman. Projects are charging forward and concerns around data dominate. Getting access to data remains challenging, but, as use matures, data quality has become increasingly critical. Interestingly, trust in AI results is declining as understanding grows. Hallucination anyone?


Eric Hanselman

Welcome to Next in Tech, an S&P Global Market Intelligence podcast where the world of emerging tech lives. I'm your host, Eric Hanselman, Chief Analyst for Technology, Media and Telecom at S&P Global Market Intelligence. And today, we'll be looking at use cases for generative AI. To discuss it with me are returning guests Nick Patience and Alex Johnston.

Welcome back to the podcast to you both.

Nick Patience

Thanks, Eric. Good to be back.

Alex Johnston

Yes. Thanks for having us on, Eric.

Eric Hanselman

This is an area where we've talked about a lot of the broad aspects of generative AI. And of course, it's buzzy, generating massive amounts of hype and all sorts of activity. But there are people who are doing real work with it, and you've actually been digging into some of those perspectives with some of the Voice of the Enterprise study work. And I wanted to get into some of that data.

I guess to start, one of the things we keep kicking around a lot is what's happening with adoption? What are the levels of maturity? And really, how far have organizations got with all this? And I guess there's a bit of data we can go into there.

Alex Johnston

Yes. Thanks, Eric. Maybe I'll start on that. I think the way Nick and I have really been seeing this is that the generative AI maturity curve could be split in a number of ways. But if we look at it in terms of 5 segments, you start with the organizations that aren't seeing generative AI in use at all -- it's not in use anywhere within those organizations. And that's a tiny proportion, around 8% of the respondents in our survey data.

Just above that, you have organizations where generative AI is being used, but informally, without organizational support -- so that could be sales associates using ChatGPT to help write outreach emails, for example. That informal use is around a fifth of organizations. And the plurality of organizations are a bit ahead of that on the curve, in the sense that they're trying to make more official generative AI tools available. So there are pilots, there are proof-of-concept projects, and there's an intent to have organizational support for these capabilities. That's around 30% of organizations.

At the higher levels of maturity, you have those that have adopted generative AI partially, so it's limited to specific departments or projects, perhaps. But these are formally adopted tools -- they're officially supported. That's around 28% of organizations: partially deployed, to a limited degree, perhaps for a specific project, as mentioned. And then you have the 13% we see at the top, who have it not just formally adopted, but integrated across the entire organization.

Eric Hanselman

Interesting. Well, I guess it does speak to the level of interest and concern that there are only 8% sitting on the sidelines, if we contrast that with typical technology adoption curves. I mean, this really is everybody's in -- everybody is playing with it in some form or fashion. But it's interesting to see that there are 13% up there with an actual cross-organization plan. This is serious enough at the highest levels that people are really working with it.

Alex Johnston

You mentioned that rapid uptake, and that's really interesting. But if we look forward to the end state organizations forecast for 12 months' time, we don't actually see that many organizations expecting to shift all the way to the end state of fully adopted and integrated. So you mentioned that 13% are relatively mature. There is a surplus of organizations expecting to accelerate to that level -- those that see themselves as being there in 12 months -- but the increase we saw is overwhelmingly companies that already had some sort of capabilities partially adopted.

There's actually been a bifurcation, where we have this cohort of around 50% of companies that have adopted these tools and are rapidly maturing, and a rump that are really stuck at the stage of still making those tools available. They're getting stuck at that third stage: the ability to scale out their pilots, to shift beyond experimentation. And as Nick will likely touch on, there are obviously a number of challenges that organizations are facing. At that stage, we see a lot of concern around the infrastructure needs of generative AI capabilities as they move from pilot to production, which is quite interesting.

Eric Hanselman

Well, Nick, you'd had the concept of data dabblers in terms of how organizations were able to deal with data. And it sounds like there's that large middle that is dabbling in AI without, maybe, a plan to go forward. What are the challenges they're facing?

Nick Patience

Yes, that's right. So taking those 5 stages that Alex talked about, there are some interesting data points around the specific challenges organizations face in each of them. If you take the very earliest stage, where it's not being used, then, as you'd expect, one of the challenges is identifying use cases -- why should we be doing this? Another is around whether we have training data, and security risk comes up, as it always does.

As we move into the second stage, slightly more generative AI- or AI-specific things like bias and fairness come into play as challenges, but also cost, as companies start to realize this stuff doesn't come cheap at all, by any stretch of the imagination.

And in that critical third stage -- the hump Alex was talking about -- those infrastructure requirements are a big issue. That obviously can depend on whether you're trying to do this in a hybrid cloud environment, whether you're all on premises, whether you're in public cloud or whether the edge is an issue for you. It varies quite a lot. But data privacy is also cited as an issue, because at that point, you're starting to do proof-of-concepts with real data, and so those kinds of issues come into play.

Then in the fourth stage, the penultimate stage, it's regulatory compliance, integration, the maturity of the tools -- these are the kinds of things organizations tell us are the main challenges. And when we get to the last stage, the kind of we're-all-in stage, it does get quite specific to generative AI. So some of the challenges are around the quality of the generations being produced, or content diversity.

In other words, is this image generation tool spitting out the same thing, or more or less the same thing, 25 times? Or is there actually diversity in the images? The sustainability of generative AI also becomes an issue there, because you can imagine, if you're a reasonably sized company and you're deploying this stuff across the company, and you have ESG concerns, then that's going to come to the fore. And we here at S&P are a reasonably large company. We know where we are in this; we're using it internally and we'll be using it in a customer-facing way as well.

And when we run these kinds of challenges past the people here who are doing our generative AI implementations, they've nodded their heads and said, yes, those are pretty much in line with what we're seeing. The same goes for other organizations we've been talking to across different industries.

Eric Hanselman

Interesting. So while the hype is fueling a lot of engagement, there are still some fundamental challenges. Nick, you've talked before on the podcast about AI infrastructure and some of the concerns about what's really necessary, and about the challenges with data. Those are things that, I guess, most organizations still have to work through in order to really leverage AI capabilities and put them to work. Still some challenges remaining.

Nick Patience

Yes. And the data thing is interesting, because we've been down this route before with big data. Hadoop became a thing, what, 12 years ago or something like that. And there was never really any perceived urgency for organizations to clean up their data or do anything with it. Then the enterprise AI wave of 2015, 2016 came along, and that did put a little bit more focus on it. But still, it was mostly lip service being paid to issues around data quality, data governance and things like data integration.

And now the generative AI wave has shone a much brighter light on these issues. And maybe, just maybe, this is the catalyst for things to change, in part because, as I think we've discussed before, generative AI is the first time that AI in enterprises has become a C-level concern. And actually, on our overall list of challenges to organizational adoption of gen AI, executive support is the lowest-ranked, apart from those that have no challenges.

In other words, the lack of executive support is not an issue -- they've got lots of support for this. It's a question of security and privacy and cost; those are the much higher-ranked challenges. But it also shows that because you've got executive support -- you've got C-level concern about generative AI and its risks and opportunities -- maybe that will finally force organizations to get their data houses in order.

Eric Hanselman

As we've talked about in the past, everyone was building data lakes, lakehouses, however you wanted to characterize them. But the idea was, let's accumulate as much data as possible, without really focusing on the data quality issues. And I remember one of your examples: like any lake, it may be pristine and wonderful, but some lakes have also got old tires and shopping carts and all manner of other things in them. Now that we've got tools that are actually pulling that data and doing something with it, suddenly the quality challenges of that data have been revealed -- because realistically, we probably weren't doing analytics at the same level of intensity, and certainly not with the same depth, until all of a sudden we had a tool that was able to consume large volumes of data.

Nick Patience

Yes. And obviously, a lot of organizations will not be building their own foundation models from scratch -- the vast majority will not be. But even if they're engaged in fine-tuning or retrieval-augmented generation, RAG, all these things put a focus on the data that they're using, either to fine-tune the model or for a RAG tool to go and search and complement what's in the foundation model. So again, it's all just data, data, data.
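
A minimal sketch of the RAG flow described above, assuming a toy keyword-overlap retriever in place of a real vector-search index; every name here is illustrative rather than any particular product's API:

```python
# Minimal RAG sketch: retrieve the documents most relevant to a question,
# then prepend them to the prompt so the foundation model can ground its
# answer in the organization's own data. The keyword-overlap scorer is a
# stand-in for a real vector-search index, and all names are hypothetical --
# this shows the shape of the pattern, not a product API.

def score(question: str, document: str) -> int:
    """Toy relevance score: count of shared lowercase words."""
    return len(set(question.lower().split()) & set(document.lower().split()))

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents that best match the question."""
    return sorted(corpus, key=lambda d: score(question, d), reverse=True)[:k]

def build_prompt(question: str, passages: list[str]) -> str:
    """Assemble a grounded prompt: retrieved context first, then the question."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

if __name__ == "__main__":
    corpus = [
        "Claim turnaround averaged 12 days in Q4.",
        "The cafeteria menu rotates weekly.",
        "Fraud checks added 2 days to claim turnaround.",
    ]
    question = "What is our claim turnaround time?"
    print(build_prompt(question, retrieve(question, corpus)))
    # The assembled prompt would then be sent to the foundation model.
```

The point of the pattern is exactly the data dependency Nick flags: the model's answer can only be as good as the documents the retriever surfaces, which is why data quality keeps coming up as the gating issue.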

Eric Hanselman

Well, so are there indicators that you're seeing that are really showing where these things fit? What do you see in the data around this?

Alex Johnston

Yes, sure. I can start with the data on data, if you like -- the data sources being used to power these AI models. Obviously, IT operational data has been a big focus for a number of years, and we saw that again, with customer data and business operational data also very popular among the organizations leveraging AI. And there are areas that get slightly less focus, such as supply chain data, synthetic data and geospatial data. But as we might discuss later, synthetic data is an area where a lot of organizations see generative AI as being quite valuable as an enabler.

In terms of actual performance metrics, something quite interesting in the study, as we go into the impact of these various AI investments and look at what companies are measuring, is that a few things stood out. The first was that organizations tended to be prioritizing KPIs aligned with the value that AI was delivering -- things like employee productivity and customer satisfaction.

And often they're looking at those more than they're looking at traditional operational metrics about the AI models themselves -- model size or inferencing time, for example. That suggests organizations are seeing impact against these business objectives and are able to draw out those links, which must be far more valuable than what we saw in previous years, when there was far more focus on just the operational metrics of AI models. In terms of how companies perform against KPIs, we see that most organizations are struggling to meet cost-reduction objectives. But in other areas -- process efficiency or staff satisfaction, for example -- they're seeing quite a lot of success.

Eric Hanselman

Interesting. So I guess it gets back to what they're really applying these tools to, and what those use cases are. On the cost reduction piece -- I think a lot of times, as we look at new technologies, we say, "Oh, it must be going to reduce cost." We saw it with cloud; we've seen it with most new transitions. But I guess the bigger question is, what kinds of things are people looking at broadly in terms of those use cases? And how do they expect to apply them?

Nick Patience

Yes. So in this survey, we looked at 7 industries this time, up from 6 in recent years: retail, energy, manufacturing, telecommunications, health care, banking and insurance. We'd previously done financial services as one lump, and we split that out into 2. And I won't go into a lot of detail about each one, because that would take many, many podcasts in itself.

Looking, say, at the insurance industry, we have the top 3 at the moment -- and this is across all kinds of AI, not just generative AI: customer service interaction; cyber threat detection and response; and sales forecasting. And then we always ask organizations, what do you foresee as your top use cases in the next couple of years? And those usually change.

In the case of insurance, it changed to product recommendations, compliance, and then identity and access management. So some are customer-facing, like product recommendations, but many also have to do with internal issues. And we see similar things in other industries, such as health care and all the others, really. So yes, there's loads and loads of detail in the study.

We really encourage people to dig into the data. But as ever, AI gets quite vertical quite quickly. It did in that enterprise AI wave of 2015, 2016, and it may well do in the generative AI world. The one thing that may mitigate against that is, obviously, these broad foundation models trained on large swaths of the web that sit underneath more fine-tuned models or narrow models.

But they also open up the opportunity to do all sorts of other things that you couldn't necessarily do before. We think organizations will quite quickly want not only models trained on subsets of their own data, but models that are much more task-specific yet still backed up by this "understanding" of natural language. So the usual issues with chatbots -- where you start asking questions they don't fully understand -- hopefully you mitigate against those with the foundation model underneath.

But fundamentally, it's going to have to understand that you're in the health care industry, that you're a hospital system in the Midwest, and that these are the issues you have related to your customers -- or rather your patients, in that case. So again, I think it gets industry-specific, and we'll continue to look at these industries and others in our research.

Eric Hanselman

Yes. I mean, you look at applications like customer engagement. Actually, a previous episode was with Raul Castañon Martinez, who's been doing a lot of work on Communications Platform as a Service, Contact Center as a Service, those kinds of things. And of course, front and center is the potential for generative AI to do all of the natural language engagement.

But of course, it's one thing to understand the basic queries that are coming in; you've also got to be able to relate them to the environment you're actually supporting, whether that's insurance, health care or what have you. And that then brings in all of that data quality and data density -- how do you actually train and integrate, from maybe a large language model that does the language parsing, to something that can relate that to whatever environment the application is actually running in?

Nick Patience

Yes. And there's a temptation, obviously, when we look back at the origins of the generative AI explosion in November 2022 with ChatGPT, to look at the ChatGPT interface and think that's what everything is going to look like. And as I've said to many clients over the last 15 months, or however long it's been, don't think that this is all it can be.

Sure, there are going to be natural language interfaces, and that ability to have applications of any kind "understand" natural language is one of the key breakthroughs of generative AI. But there are other ways in which this will be implemented: other user interfaces, other workflows, processes -- all those good things that we associate with software. Software, after all, is the business of automating repetitive human tasks, and has been for 50 or 60 years; we're just doing more complex tasks now and adding the natural language element. But it will become much more task-specific, underpinning every single application that we eventually use.

Eric Hanselman

Making sense of disparate data sets -- all the things in that second wave, if we can label it that, of going beyond just the upfront work and starting to apply this in ways you can leverage to greater effect. But we've still got to overcome some of those challenges. There are broad challenges in this environment as well, I guess, Alex?

Alex Johnston

Yes. Just briefly, Nick, on your previous point about ChatGPT not being the be-all and end-all -- that's something we see as quite interesting. Initially, the conversations that Nick and I were having were around the sales and marketing implications. But actually, we see in our data on use cases a lot more focus on things like process automation and data visualization, for example.

People aren't just thinking about this as an interface with a customer or a prospective customer, but far more broadly in terms of how it can be deployed within an environment. So where there's a huge amount of data, you might want a conversational interface for parsing through that data. And there's a wide array of ways you might have models triggering models, for example. It's not just a ChatGPT-style interface.
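
A minimal sketch of that "models triggering models" idea, under the assumption that load_model is a placeholder for whatever model API an organization actually uses -- nothing here names a real product:

```python
# "Models triggering models": one model's output becomes the next model's
# input, so generative AI sits inside a workflow rather than behind a chat
# window. load_model is a hypothetical stand-in for a real model API.

from typing import Callable

Model = Callable[[str], str]  # a model maps input text to output text

def load_model(task: str) -> Model:
    """Placeholder factory: returns a fake 'model' for a named task."""
    return lambda text: f"[{task} output for: {text[:40]}]"

def pipeline(document: str) -> str:
    """Chain three task-specific models: summarize, classify, then draft."""
    summarize = load_model("summarize")
    classify = load_model("classify")
    draft = load_model("draft-response")

    summary = summarize(document)          # condense the raw document
    label = classify(summary)              # route it by topic or urgency
    return draft(summary + " | " + label)  # generate the downstream artifact

if __name__ == "__main__":
    print(pipeline("Customer reports a billing discrepancy on invoice 4417."))
```

The design point is that the chat window disappears: each model is just a step in a workflow, triggered by the output of the step before it.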

In terms of those overall challenges, Eric, a few things stood out. Nick mentioned some of the specific challenges around generative AI. But in terms of how people see AI more generally, we've seen a massive step change in perspective in areas like trust, data, and customer and staff resistance. These are major concerns that have been significantly exacerbated by the rise of generative AI. In fact, the biggest concern organizations identified overall was staff resistance, which is striking in terms of overall AI implementation issues.

Eric Hanselman

And is that evolving from the upfront "hey, AI is going to take my job" kinds of concerns? Did you get a feel for whether that's the primary concern, or is it really a broader concern about the challenges of simply integrating it into the business?

Alex Johnston

Well, obviously Nick and I have a lot of conversations with organizations, some of which are very concerned about the perspectives their staff have on AI and job role automation, with people drawing comparisons, for example, to what happened with the writers' strike in the U.S.

But interestingly -- and hopefully this puts people's fears to rest a little bit -- few organizations within our survey were specifically targeting head count reduction, and those that were tended to perform quite poorly against that metric compared to other areas.

Eric Hanselman

No. Interesting. So for those organizations that were identifying staff reductions, in fact, they weren't necessarily achieving them.

Alex Johnston

Yes. And it's quite a small proportion of organizations as well. It's one of the KPIs that was less commonly tracked: against, say, process efficiency or employee productivity, explicit head count reduction was a focus for less than 18% of the organizations that were tracking KPIs, so around 10% of the total data set.
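
For anyone following the arithmetic, those two figures together imply (assuming both describe the same respondent base) roughly what share of all respondents were tracking AI KPIs at all:

$$0.18 \times p \approx 0.10 \quad\Rightarrow\quad p \approx \frac{0.10}{0.18} \approx 0.56$$

where $p$ is the share of the full data set tracking KPIs -- a bit over half of respondents.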

Eric Hanselman

Interesting. So even for the organizations that are setting that as a goal, this really is augmentation. It's really shifting towards upskilling capabilities, which, of course, brings us to the question of skills and how organizations can actually leverage them.

Nick Patience

Indeed. We always ask questions about skills in our AI surveys, and in many other surveys. We always ask which specific skills shortages are limiting your ability to use AI, and this year we have a new one at the top of the tree: cybersecurity professionals. One of the reasons we think this has come to the fore is the general uncertainty about the security profile of generative AI.

I mean, it's that inability to control the input and, to a certain extent, the output, which is quite different from very narrow predictive models trained on much narrower data sets of your own, as it were. So cybersecurity professionals were top; architects, second; machine learning engineers, third; data scientists, fourth; app developers, fifth. So you're getting into all the classic IT roles that we've seen many times before.

Then we ask people, so what are you going to do about that? And the #1 answer was to train existing staff to learn new skills, which is a healthy sign, I guess. And then as a follow-up, because we like to keep digging in these surveys, we said, "Can you give us a list of the job titles of the people who are going to be prioritized for reskilling?" And here, the answer is quite interesting. Data analysts came top, narrowly followed by data engineers -- data engineer being a job that didn't really exist about 5 years ago. And then systems engineers, security operations folks, app developers, project managers and, as you go down the list, business analysts and so on and so forth.

Data analysts and data engineers being top speaks to what's been the forever-running theme of this podcast: the importance of data. And here, you're obviously not teaching people from scratch -- they already understand the importance of data. But there may be different kinds of skill sets they need when it comes to AI, and now especially with gen AI.

Eric Hanselman

Well, it gets back to your earlier point: now that we've got tools that are identifying and exposing the quality of the data we're dealing with, suddenly being able to deal with data is a hugely in-demand skill. So this winds up being, I guess, a bit more confirmation that these are the staff skills needed to deal with some of the data quality problems that may be lurking underneath all this.

Nick Patience

Indeed.

Eric Hanselman

And I guess it's one of those things we've seen with training from the security studies perspective. There's been a security skills gap for ages and ages, and there had been this progression: first, "hey, we're going to hire." Then hiring rolled off, because hiring was a challenge, and there's now been a shift to engaging partners and managed services for some of this. I wonder if we'll wind up seeing that here over time as well -- as it turns out to be difficult to retain folks who've got those skills, because there's such high demand, we get into one of those cycles again.

Nick Patience

It was one of the options we gave people, actually. Narrowly behind training existing staff was utilizing IT integrators or consultants -- that was second on the list. The third was hiring staff from outside. And the last one on the list was moving to different technologies. That's not really an option; this thing is here to stay. You want to get skills from inside or outside, or a combination of both.

Eric Hanselman

Yes, you've got to get the skills from somewhere. Well, again, it may be heartening to know there's a realization that they do need partners, and that managed services have a role to play there, as is often the case. What about the role of regulatory bodies and all the potential regulatory impacts that are out there -- and simply getting this out to employees and customers and gaining trust, to overcome some of that employee resistance you're identifying in the study?

Nick Patience

Yes. The resistance and trust issues are interesting ones. We ask a question where we try to contrast: how trusting are you of predictions made by your organization's AI, and how trusting are you of predictions made by AIs from organizations other than your own? We've asked this for a number of years. And if we look at the difference between the 2023 study and the 2024 study -- bearing in mind that the actual surveying for the 2023 study was done more or less before ChatGPT came out.

The levels of trust have declined, and it's quite marked. If we look at the option "I trust AI predictions completely," the numbers for your own organization have dropped from 25% down to 18%, and for third parties, from 24% to 19%. At the next one down, "I mostly trust," for your own organization, that's fallen a couple of percentage points.

And for AIs from outside the organization, it's fallen by 5 percentage points. So trust may be declining partly because this always happens when you have a new technology: it's exciting, and it's also, for some people, scary. And I think that just generates mistrust, to coin a phrase, which isn't what you'd hope for with generative AI.

Eric Hanselman

We love that. But it's interesting that that's showing up as a difference. Again, I guess we're now at a point at which everybody can spell hallucination. A lot of the concerns cropped up with the realities of what actually comes out of these tools -- people have played with them, and you see the levels of adoption and experimentation that are there. But it does seem like we've gotten to a point at which there is much broader social and community understanding of what the potential limitations are. So it sounds like maybe that's really what's coming into play.

Nick Patience

Yes. Especially with things like OpenAI and what happened there in November, with the board-level shenanigans -- that became the world's biggest business story for a few days. I mean, this stuff has become so mainstream. And we've also seen lots of interest from regulators and governments. People have seen Senate committees and other committees, with people sitting at the tables predicting the end of the world because of AI, and all this other kind of stuff. So regulation is another aspect of what we asked about.

Most people are in favor of regulation, whether strongly or somewhat. And it's happening anyway, because obviously the EU AI Act is progressing; it will become law gradually through bits of '24 and into 2025. Then we ask, what kind of impact do you think it's going to have? Almost a third of our respondents say a significant impact, and 43% say a moderate impact. So people are prepared for this, but I think they also want protection from what they might see as some of its scarier aspects.

We also then do what we usually do and ask, okay, so what are you as an organization doing about this? How are you preparing for AI regulation? What are you doing now, and what do you plan to do over the next 12 months? The top answer is increased investment in AI governance -- that's AI governance tools, but also policies. About 37% chose that for now, and about 46% chose it for the next 12 months.

Seeking outside consultants comes next; then increasing the size of compliance and legal teams; and then changing scope, maybe pulling back on some initiatives. The penultimate one was increasing lobbying efforts, which is really only for the largest companies, and then changing vendor selection criteria. But for almost all of these, respondents predicted their adoption as techniques to cope with regulation would increase over the next 12 months. So I think we can expect a lot more focus -- whether on compliance teams or AI governance technologies and policies, all those kinds of things -- before we come back to do the survey again.

Eric Hanselman

A lot more impact that's rolling into this.

Nick Patience

Yes.

Eric Hanselman

We're also just about at time. So are there other things you want to cover or close with?

Alex Johnston

Sort of got through most of it, I think.


Eric Hanselman

Interesting. So all of this from a governance perspective really is coming together in ways that certainly are going to have significant impacts going forward.

Nick Patience

Yes, indeed. And this is, as I say, part of our ongoing research. We're going to be looking at the governance issues, but also at the technology issues and the spending issues, all these kinds of things, as we go through 2024. So this survey, the Voice of the Enterprise: AI Use Cases survey, is out now for our customers, and they've been digging into it. But it fits into, as everybody would expect, a large research calendar ahead of us. That includes another survey in the middle of the year on AI infrastructure -- we've done podcasts on that subject before, but we're going back to it.

And we're going to change that a little bit and get much more involved; it will include a lot more of our data center analysts in that survey as well. So that's going to be really interesting stuff coming in Q3. I also wanted to highlight for listeners that we do a market forecast for generative AI. We first published it in June of last year, we updated it in November, and we'll come back with another update sometime in Q1, and we're also looking to expand it. So there are a lot of other things coming out from us, including some very specific reports on the image generation market, the video generation market, gen AI ops, all these other things. So here at 451 Research and S&P Global Market Intelligence, I think it's safe to say we're fully on top of what's going to happen in AI in 2024 and beyond.

Eric Hanselman

A lot is happening. Well, to your point, it is this horizontal technology that's touching so many things -- I joke about it on the podcast all the time, but it really is touching so many different aspects, whether it's customer and employee experience, the broader operational pieces, everything from security to infrastructure and beyond. So there's a lot to keep talking about, and I guess many more appearances to come, I hope, for the both of you.

Nick Patience

I think so. It's truly a general purpose technology.

Eric Hanselman

Well, lots of things to discuss, but we are at time for this episode. Thank you both for being back.

Nick Patience

Thanks, Eric.

Alex Johnston

Thanks very much, Eric.

Eric Hanselman

And that is it for this episode of Next in Tech. Thanks to our audience for staying with us, and thanks to our production team, including Caroline Wright and Kaitlin Buckley on the Marketing and Events teams and our agency partner, The 199. I hope you'll join us for our next episode, where we'll be talking about what's happening in some of the infrastructure that supports AI. We'll be looking at the Cloud Native Computing Foundation's KubeCon + CloudNativeCon conference that's coming up -- a lot of the things that are going to be building the foundations on which AI is built. I hope you'll join us then, because there is always something Next in Tech.

Copyright © 2024 by S&P Global Market Intelligence, a division of S&P Global Inc. All rights reserved.

These materials have been prepared solely for information purposes based upon information generally available to the public and from sources believed to be reliable. No content (including index data, ratings, credit-related analyses and data, research, model, software or other application or output therefrom) or any part thereof (Content) may be modified, reverse engineered, reproduced or distributed in any form by any means, or stored in a database or retrieval system, without the prior written permission of S&P Global Market Intelligence or its affiliates (collectively, S&P Global). The Content shall not be used for any unlawful or unauthorized purposes. S&P Global and any third-party providers, (collectively S&P Global Parties) do not guarantee the accuracy, completeness, timeliness or availability of the Content. S&P Global Parties are not responsible for any errors or omissions, regardless of the cause, for the results obtained from the use of the Content. THE CONTENT IS PROVIDED ON "AS IS" BASIS. S&P GLOBAL PARTIES DISCLAIM ANY AND ALL EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, ANY WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE OR USE, FREEDOM FROM BUGS, SOFTWARE ERRORS OR DEFECTS, THAT THE CONTENT'S FUNCTIONING WILL BE UNINTERRUPTED OR THAT THE CONTENT WILL OPERATE WITH ANY SOFTWARE OR HARDWARE CONFIGURATION. In no event shall S&P Global Parties be liable to any party for any direct, indirect, incidental, exemplary, compensatory, punitive, special or consequential damages, costs, expenses, legal fees, or losses (including, without limitation, lost income or lost profits and opportunity costs or losses caused by negligence) in connection with any use of the Content even if advised of the possibility of such damages. S&P Global Market Intelligence's opinions, quotes and credit-related and other analyses are statements of opinion as of the date they are expressed and not statements of fact or recommendations to purchase, hold, or sell any securities or to make any investment decisions, and do not address the suitability of any security. S&P Global Market Intelligence may provide index data. Direct investment in an index is not possible. Exposure to an asset class represented by an index is available through investable instruments based on that index. S&P Global Market Intelligence assumes no obligation to update the Content following publication in any form or format. The Content should not be relied on and is not a substitute for the skill, judgment and experience of the user, its management, employees, advisors and/or clients when making investment and other business decisions. S&P Global Market Intelligence does not act as a fiduciary or an investment advisor except where registered as such. S&P Global keeps certain activities of its divisions separate from each other in order to preserve the independence and objectivity of their respective activities. As a result, certain divisions of S&P Global may have information that is not available to other S&P Global divisions. S&P Global has established policies and procedures to maintain the confidentiality of certain nonpublic information received in connection with each analytical process.

S&P Global may receive compensation for its ratings and certain analyses, normally from issuers or underwriters of securities or from obligors. S&P Global reserves the right to disseminate its opinions and analyses. S&P Global's public ratings and analyses are made available on its Web sites, www.standardandpoors.com (free of charge), and www.ratingsdirect.com and www.globalcreditportal.com (subscription), and may be distributed through other means, including via S&P Global publications and third-party redistributors. Additional information about our ratings fees is available at www.standardandpoors.com/usratingsfees.
