Listen: Next in Tech | Episode 168: AI Data Strategies

As organizations push to realize the benefits of AI, they are increasingly challenged to deliver the data needed to fuel their initiatives. While most have data repositories to draw from, the quality and accessibility of that data can be an issue. Shiv Trisal of Databricks and John Schirripa of S&P Global Market Intelligence join host Eric Hanselman to explore the nature of these challenges and ways to address them. Democratized access to data is one of the major benefits of AI tooling and has to be a primary goal, but it is not always easy to achieve. It means pulling complex data sets together and wrapping the necessary controls around them. Governance concerns demand better data quality at the outset, to make the path to implementation simpler.

This week’s guests:
Shiv Trisal, global industry lead for manufacturing and energy at Databricks
John Schirripa, VP, product management, distribution solutions, at S&P Global Market Intelligence

Subscribe to Next in Tech

Eric Hanselman

Welcome to Next in Tech, an S&P Global Market Intelligence podcast where the world of emerging tech lives. I'm your host, Eric Hanselman, Chief Analyst for Technology, Media and Telecom at S&P Global Market Intelligence.

And today, we're going to be talking about AI data, its management and strategies. Joining me to discuss this are Shiv Trisal, the global industry lead for manufacturing and energy at Databricks; and John Schirripa, one of the product managers on our distribution solutions team. Welcome to you both.

Shiv Trisal

Glad to be here, Eric. Thanks for having me.

John Schirripa

Hi, Eric. I'm glad to be here with you and Shiv.

Eric Hanselman

And this is a continuation of a set of discussions we've been having around AI and its various impacts. But when we start thinking about what really is the lifeblood of AI, it's the data. The challenge, of course, is that when we look at where most enterprises are and what they need to move forward in effective ways, to really start delivering some real value from AI, they face a set of challenges. And the biggest one tends to be issues around data.

And straight out of the box, the first of those is access to data. It's a stubborn problem in really getting to generative AI or large language models -- generative AI kind of broadly -- but getting access to that data is a challenge for many. And I'm curious, what do you see as the largest problem in sourcing data and delivering performant access to it, to really get enterprises moving forward on their AI journey?

John Schirripa

I'll start with that question. What I think about here falls under the term AI-ready data, although that's a more recent term, right? All analysis needs data that's prepared, structured, brought together, well organized and properly described for use in various workflows. And how do you bring that data together, when you have so many disparate sources out there, and make it workflow-ready for yourself? It's a problem that's been around for many years.

I think it's gotten a lot better over the last few years, especially with things like marketplaces, where larger data vendors are doing a great job of aggregating all these disparate data sources on top of more core or foundational data sets from companies like us here at S&P Global Market Intelligence. We've seen the journey of getting access to data and into client workflows and have built ecosystems to assist with that, right? You need to explore, discover, evaluate, analyze and then get access to that data. And we've built tools to facilitate that process from end to end.

Shiv Trisal

Yes. Just to add to John's point: we live in this world of generative AI, and the way to think about generative AI in the enterprise is to really focus on 2 things. One is democratization and the other is innovation. And when we think about democratization, it can't be just for technical users. It has to be for all users, regardless of technical skill, in order for them to make better decisions.

Everyone's been thinking about this ChatGPT-type experience when it comes to generative AI. At Databricks, we find that the fundamental value of generative AI is that it continuously learns the structure of your organization's data, versus requiring, as in the past, a specialized team to go model that information. And that is an order-of-magnitude improvement in the accessibility of the data.

And if you think about this fundamental unlock that generative AI makes possible, it enables you to truly democratize information so that nontechnical users and domain experts can go interact with these data sets regardless of any technical barrier or coding ability. In the past, we've seen a lot of organizations tend to go toward dashboards.

And I work with a lot of customers who tell me, "Hey, I've built out like over 2,500 dashboards to give people access to information." And I think some of them are useful, right, or maybe a square root of that 2,500, maybe 50 of those are actually useful. But I think what people are now looking for is to be able to interact with their data and do complex reasoning with it. And I think that's where we are living in this new paradigm where access to data becomes easier because of the ability of generative AI to go learn the structure of your data and make that available.

And then on the user front, you're able to interact with your data in a chat-style interface that allows you to ask questions 1, 2, 3, 4 levels deep and get the information you need to make better decisions. And I think that's going to have a huge impact on how organizations are run in the future of work.

Eric Hanselman

Well, the point that you make is that the dashboard approach is certainly useful. But of course, it abstracts away what the end users need from the team that actually has to build it. You've got to go build a dashboard, there's some latency in actually putting that together, and it doesn't allow that level of querying afterwards to do the what-ifs that go beyond it. And that democratization, the point that you're making, really is the key aspect of that.

Shiv Trisal

Yes, absolutely. And in the past, what has happened -- and the reason for the existence of over 2,000 dashboards in an organization, not all of them useful -- is that in order to explain a metric that you saw in one primary dashboard, you've had to create 2, 3, 4, 5 more dashboards to explain that first dashboard, right?

Eric Hanselman

So dashboards on top of dashboards.

Shiv Trisal

Yes, it's very Christopher Nolan, if you think about it, right? A dream within a dream. But yes, that's fundamentally what has happened over the last 3 years, and now there's this new capability. It's not all sunshine and rainbows, in the sense that you can't just bring this in and naively throw all of this technology at the data; there are some governance aspects to it.

But fundamentally, the promise of this technology is really opening it up to users beyond just the coders in the IT department, and making data sets easier to model with these capabilities -- complex data sets that sit in documents or text, or sensor data that comes from IoT equipment. All of that can now have a much lower barrier to modeling the information and making it usable. And I think we're seeing some really good results with customers in that area.

Eric Hanselman

You're busting some of the balloons here, Shiv. I mean...

John Schirripa

I was going to say the same, right? He threw out the G word that I was going to throw out there. When you say democratization, you need to have that element of governance there as well, to ensure that the policies around the data, the rights around the data, et cetera, are all well established. And I guess when you're building the hundreds or thousands of dashboards, you're basically building governance into each use case, which means that clients or your users don't get exactly what they need. But yes, as you do democratize, you want to make sure that clear governance layer is there.

Eric Hanselman

Well, you raise a really interesting point there. As our listeners will know, I'm at the RSA Security Conference this week, and of course, the topic of effective management from a security perspective is a big part of it. Governance is a big deal: understanding access. You have to have an understanding of the governance, you have to have sufficient classification, you have to have all these other aspects in place -- the metadata -- to allow you to manage it effectively.

But I want to dig down just a little bit, before we get into those, on data quality. Of course, one of the challenges that we see in enterprise adoption is that generative models can now pull in such large volumes of data, and in the enthusiasm to get as much data as possible, ensuring the quality of the data they're pulling in gets overlooked. Many organizations have built data lakes, consolidated all of their data assets and brought them all together, and now they have a generative model that they can turn loose on that data. And they're realizing -- in fact, one of my colleagues on our AI and analytics team is fond of saying that those data lakes are just like lakes in real life: there's a lot of water, there's a lot of fish, but there are also probably some old shopping carts, some discarded tires and some other things in there that, to extend the metaphor, may not be what you actually want as part of your training corpus. So quality has got to present some challenges as well.

John Schirripa

Absolutely. And I think you need to make sure that you're working with reliable sources for that data. To your point, a data lake is just like a lake, right? But is there a nice GPS map of the contours of that lake -- where all the rocks are, where all the shallow water is, et cetera? How well is that lake documented? How good are the definitions? Do you understand the data lineage, et cetera, to help you navigate your way through that lake?

To me, that's a key part of ensuring that you're getting data quality. We already talked about the governance aspect of it: How does the supplier ensure accuracy, consistency and integrity? And what have they done to help you integrate that data? Are they doing any type of curation or standardization of the data? What level of cleaning and review do they do before it gets to you?

Eric Hanselman

Well, here I'm heading back into the sunshine and rainbows aspect. One of the things we also hear is, well, of course, we've now got models that will help us clean the data -- a lot of this happy expectation that it's just a simple thing. You dump it through a set of filters, the model turns out something wonderful, and now you've got good data. Not really quite that easy, right?

Shiv Trisal

Yes. There are a couple of things I'll talk about with respect to quality, and then I'll come back to the question you posed there. We have to realize that the goal here for data quality, in my opinion, should be around synchronization: having operational systems and AI systems -- and we are seeing more and more of them -- work off the same set of data.

At the end of the day, that's what I feel a lot of data quality is about. Now we can measure it in different ways: accuracy, completeness, reliability and so on. But fundamentally, the goal is to achieve a level of synchronization so that people who are looking at data in applications and people who are looking at data in AI systems are fundamentally working off of the same set of information. And why has that not been the case for so many years? Because events and data are not treated as first-class citizens in most enterprise applications, right?

And this is something I've seen across the board over the many years I've been in the industry. Think about 2 fundamental problems. One is applications: they are built with a very finite data model, right? They want a very specific set of inputs and convert them to a very specific set of outputs. AI doesn't work like that. There is more than a finite set of inputs; you would take any signal that you can get, any high-value signal, to improve the predictability of your output.

In that sense, it operates in a completely different paradigm. And I think the other aspect of this is that 2 different applications may sync the same data very, very differently. I work a lot with IoT data: in one plant of a manufacturing network, you can be looking at the same sensor recorded at a millisecond frequency versus something that's recorded once a second, right?

So there are very, very different units of measure, and applications think of the same data in different ways depending on where it's coming from. By having visibility into data integrity checks at every step of the data value chain, companies can achieve much better synchronization of their information, much faster root cause analysis of any data quality issues and a more accurate assessment of all of the downstream data products. And by data products, I mean the predictions they're making, the reports they're building off of that data, dashboards and even now generative AI applications that are consumed by end users.

And I think that's where this whole aspect of data quality is absolutely paramount and coming even more into the spotlight, given the nature of generative AI and how pervasive it is. This is something that organizations have been struggling with for a while; generative AI just brings way more attention to the issue, and it needs a way more comprehensive, much deeper and wider approach to actually improve the quality of information over time.
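To make those integrity checks concrete, here is a minimal Python sketch of per-step checks of the kind Shiv describes: completeness, duplicate detection and a consistency (range) check. It is an illustration only; the column names, bounds and pandas-based approach are assumptions, not a Databricks or S&P Global implementation.

```python
import pandas as pd

def integrity_report(df: pd.DataFrame) -> dict:
    """Minimal per-step data integrity checks: completeness,
    duplicate detection and a physical-range consistency check."""
    report = {
        # Completeness: fraction of non-null values per column
        "completeness": (1 - df.isna().mean()).round(3).to_dict(),
        # Uniqueness: duplicate rows often signal ingestion replays
        "duplicate_rows": int(df.duplicated().sum()),
    }
    # Consistency: flag readings outside a plausible physical range
    # (column name and bounds are hypothetical)
    if "temperature_c" in df.columns:
        vals = df["temperature_c"].dropna()
        report["out_of_range_temps"] = int((~vals.between(-40, 150)).sum())
    return report

# Two plants reporting the same sensor type, with typical defects:
# a replayed row, a missing value and an implausible reading
readings = pd.DataFrame({
    "sensor_id": ["a1", "a1", "b2", "b2"],
    "temperature_c": [21.5, 21.5, None, 900.0],
})
print(integrity_report(readings))
```

Running a report like this at each hop in the value chain is what makes root cause analysis fast: you can see at which step the completeness or range counts degraded.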

Eric Hanselman

It's that when you look at such large ingest capacity, we've now got to be that much more careful about what's actually feeding in, and data quality and that curation of data become much more significant. John, is that the point that you were making?

John Schirripa

It's the foundation. Data is the foundation; it's the first building block in establishing your workflow, whether it's GenAI or an LLM, right? We used to say garbage in, garbage out. Now I think we use the H word, hallucination. To avoid that, it all starts with data as the foundation: making sure that you're comfortable with that data as you embark on your journey.

Shiv Trisal

And I would say this, just to add to that point: one thing that I hear a lot is that the usual data quality solutions, approaches and practices have mostly focused over the last few years on structured data, right? Things that come in tables, in rows and columns. But most AI work actually happens on unstructured data, where there haven't been a lot of real solutions around data quality for a very, very long time -- things that you can depend on.

And this is where I was coming at it from: if you look at it from an LLM or generative AI perspective, it just puts more attention on the governance, quality and curation of unstructured data, because you have to go way more upstream to try to understand it. Say you're trying to analyze PDF documents; they come in various structures. If you're trying to understand text, there is no finite set of values for text. People are fat-fingering in whatever they can, and you have to understand the structure of that in a better way.

And it gets even more complex with data sets like those from IoT sensors, where each sensor has a different function depending on the equipment it's on. Every organization that I speak to has one common theme: unstructured data is growing 2 to 5x every year. And the management and quality of this curation process -- converting unstructured data into usable data products -- is super important, especially in the age of generative AI.
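As one concrete picture of that curation step, here is a minimal Python sketch that turns a PDF into page-level records a downstream pipeline could govern, using the open source pypdf library. The file name and record fields are hypothetical illustrations, not a specific product workflow.

```python
from pypdf import PdfReader  # open source PDF text extraction

def pdf_to_records(path: str) -> list[dict]:
    """Convert one PDF into page-level records so downstream answers
    can cite a source page -- a first step from unstructured data
    toward a governable data product."""
    reader = PdfReader(path)
    records = []
    for page_number, page in enumerate(reader.pages, start=1):
        text = page.extract_text() or ""
        records.append({
            "source": path,                  # provenance: the origin file
            "page": page_number,             # enables page-level citation
            "text": " ".join(text.split()),  # normalize whitespace
            "char_count": len(text),         # crude quality signal
        })
    # Simple quality gate: drop pages with no recoverable text
    return [r for r in records if r["char_count"] > 0]

records = pdf_to_records("quality_manual.pdf")  # hypothetical file
print(len(records), "usable pages")
```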

Eric Hanselman

And there's tremendous value there. But as you're pointing out, understanding what the nature of unstructured data is and how you actually put it to work is a challenge.

Shiv Trisal

Yes, absolutely. And one of the things that we've really invested heavily in at Databricks is looking at the management and governance of data and the development and governance of AI and, instead of treating them as 2 different disciplines, having them tightly knitted and integrated together. The only way you can be successful is if you understand the data that you built your AI systems on; if there is no synchronization there, it's going to be very hard to explain.

You may be able to build a proof of concept here or there that showcases, hey, I can return a page and section of a PDF document. But when it comes to productionizing some of these capabilities, you're going to have a hard time if you think of these as 2 separate disciplines. They have to be tightly integrated together.

John Schirripa

Yes, 100% agree. And it rolls into something else we've been discussing: data provenance. You need to make sure that you have data trustworthiness, and you need transparency and auditability back to the source, because there's another word I think we'll get into as well: compliance, right? You need to ensure that what you're putting into the model is data that you can trust, that you have the rights to use and that you can be held accountable for.
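One way to picture the provenance John describes is a minimal lineage record attached to every data set that feeds a model, sketched here as a Python dataclass. The field and data set names are hypothetical, not any particular product's schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    """Lineage entry for one data set feeding a model: where it came
    from, what was done to it and whether we have the rights to use it."""
    dataset: str                 # e.g. a sensor-readings table (hypothetical)
    source_system: str           # system of origin, for auditability
    rights_to_use: bool          # licensing/compliance gate
    transformations: list[str] = field(default_factory=list)
    ingested_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = ProvenanceRecord(
    dataset="plant_a_sensor_readings",   # hypothetical names throughout
    source_system="factory_historian",
    rights_to_use=True,
    transformations=["dedupe", "normalize_units"],
)
print(record)
```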

Eric Hanselman

Yes. That's, again, been a topic of a lot of discussion here at the RSA Conference: we've come to the table with the expectations of the ChatGPTs of the world, where all you have to do is train it on the Internet and then you're done. Which, of course, raises more than a handful of problems. You need to actually have an understanding of what went into the model, both at the outset and over time, because we're facing what is a very different set of compliance constraints.

When we start looking at AI models, you have to have an understanding of what's gone into them, both from a permissions perspective -- there's a lot of discussion about a bill of materials for AI data -- and beyond. Those are things that we want to look at, and understanding data provenance is, of course, critical.

Shiv Trisal

Yes. I mean, think about it: one aspect is compliance, traceability and auditability. I personally think this is even broader than that. There are real economic and strategic value levers that are probably even more important than just the compliance aspect of it.

Just to give you an example: I work a lot in the manufacturing industry, where quality requirements in almost every corner of the industry are increasing. Look at tighter emissions regulations; people want to be much clearer on what kinds of materials are used in products. And the customer expectation is, hey, at the end of the day, I paid for the product -- some of this complex machinery and these materials run into millions of dollars per SKU -- and I want to make sure that I'm getting the highest-quality product that I can.

So imagine you're trying to train models that help quality control professionals identify and scrap potentially defective products earlier in the production process, so that valuable manufacturing resources are not wasted and, more importantly, defective products are not ending up in the hands of customers. This is what we would call an industrial AI system. And an industrial AI system for decision-making will be trained mostly on unstructured data, like we discussed: think sensors, images, videos, text, documents and even more complex sources.

With a mission-critical use case like this -- one that impacts not only compliance but also the safety of operations, the quality of the product and the productivity of the employees on the shop floor -- there's a cost to poor predictions, and poor data quality can cost millions of dollars.

And with the stakes so high, you don't want to have bad data, or to have no visibility into what data has gone into training these models, because you want to make sure that if there is a success, you're able to replicate it, first of all. And second, if something is not working, you're able to trace back where in that value chain things went wrong or can be improved. And you want to do that really, really quickly, in an integrated way.

My whole view on this is that the promise of AI, no matter which industry -- I took an example in the industrial space -- cannot be realized with shortcuts in governance. And with the criticality of the problems being solved, particularly in the manufacturing and energy industries, the industry just needs a way more comprehensive approach to governing this entire AI workflow, one that covers all data types, whether structured, semi-structured or unstructured, and even governs the AI artifacts, right?

So it's not just about data governance but also about AI governance: governing features and governing models to improve the explainability, traceability and reproducibility over the life cycle of these things is super critical. And I think it's getting more attention given how much this capability of AI is being democratized across the world.

Eric Hanselman

Well, as you're saying, it extends well beyond manufacturing and industrial uses. The same kinds of things -- the ability to understand how well that decision process is actually proceeding and to look back on it -- apply whether it's a quality control model that's being managed or decision tools at work in an enterprise. All have that same need for that set of capabilities.

John Schirripa

And how do you ensure you're keeping biases out of that training material? Shiv, in the example you provided, let's say you're only training on one type of sensor, and there are 10 different ones that could potentially be used, right?

Eric Hanselman

Yes, similar kinds of problems. Well, to step back: How should enterprises really be looking at expectations around data? What are the things that they should be planning for as they start looking at data strategies and turning this into real operational AI capabilities?

John Schirripa

Great question. And I think it leads to a couple of different points. You want to look at the interoperability of the data and, obviously, accuracy, transparency, auditability and compliance, right? All those things come to mind. These are the aspects that users need to think about as they move into using data.

Eric Hanselman

Well, it's a much more comprehensive approach. Shiv, to your example at the outset, we've come from an environment in which we were looking at relatively defined paths. Whether or not that happened to be a dashboard, the output of data was fairly constrained, and that meant we could constrain what that universe was. But in democratizing use, we need to open it up to a much broader range of possibilities.

Shiv Trisal

Yes. And there are a couple of different aspects to that. If you're looking at data strategies, you have to look at data and AI strategies; you can't think of one without the other, because the cart is going to lead the horse -- in this case, it already is. So it really becomes a question of what you're trying to do as an organization. You're trying to mobilize the organization's domain expertise. The way I think about that is: Are you looking to provide actionable, trustworthy information that's trained on the entirety of your company's knowledge base? I think that's super important to really articulate well.

If you're trying to do that, then cost is a factor, which it always is, but so is protecting what makes you unique, right? Deep know-how about your products, your technologies, your domain knowledge, your end-use markets. On open source models -- this question always comes back: What do you think about these models and how they're performing? I think open source models are comparable in performance, and the gap is getting narrower and narrower. In fact, we've seen situations where we've trained them to be task-specific, and open source models really outperform in task-specific or domain-specific scenarios. And I think that's great.

And any paradigm right now has to stand the test of what the economics and performance will look like in production, not just in a handful of proofs of concept. It's one thing, like I said, to return a page and section from a PDF document versus building systems that can provide specific intelligence unique to your data and that are capable of complex reasoning, answering questions using the entirety of your company's knowledge base.

Overall, if you're trying to build a data strategy in the age of generative AI, my advice is to really understand the value that generative AI brings in being able to curate data sets much better, much faster and much more economically, so that you can open data up to a broader range of users, not just technical users, help them make better business decisions and make them 10 to 100x more productive in what they're trying to do -- because that's what this technology is all about. And as you're doing that, understand the impact of unstructured data and the amount of unstructured data being used to deliver these solutions: how do you manage and govern that and tightly integrate it with the governance of how you're developing these complex AI systems?

John Schirripa

Great answer, right? I mean simply stated, right? It all goes back to the business goal of the use case and then you build from there.

Eric Hanselman

Well, it gets back to an expectation of much more agile use. Flexibility comes into this, and the ability to really deliver, I guess, on a scale that we really haven't considered in terms of how we integrate it into the business and who's actually going to be leveraging these capabilities. There's clearly a lot more in this conversation that we could head into, but unfortunately, we are at time for today.

I want to do a couple of different things. First, thank you both for all the insights. And second, point out a set of resources coming up on May 14: we have a Voice from the Markets panel discussion all about cloud management and AI strategies. Hopefully, our listeners will be able to pick up on that, and on a lot of the work that the 2 of you have been doing as well.

John Schirripa

Great. Thank you, Eric. Appreciate it. Great conversation. Shiv, I look forward to maybe meeting you at the Databricks Summit in June.

Shiv Trisal

Yes, John, absolutely. It's the biggest AI conference on the planet, and it runs June 10 to 13 in San Francisco. So see you, John, for sure, at the Databricks Data and AI Summit. We're going to have over 10,000 people there, and it's going to cover a lot of what we discussed. The cool thing about Databricks is that we bring innovations that address these pain points, so if you want a first take on what we're working on, make it to San Francisco, June 10 to 13. Looking forward to seeing you there, John, and hopefully many of the listeners.

John Schirripa

Yes. And we'll be highlighting a few of the key things we're working on around data access through Delta Sharing. We've been a longtime Databricks partner and have an application called Workbench, which is S&P data white-labeled on top of Databricks, providing a notebook-driven sandbox for people to look at data, get a good understanding of it and get comfortable with it before they incorporate it into their workflows. So I encourage folks to take a look at that.
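For listeners curious what Delta Sharing access looks like in practice, here is a minimal sketch using the open source delta-sharing Python connector. The profile path and the share, schema and table names are placeholders, not S&P's actual shares.

```python
import delta_sharing  # open source connector: pip install delta-sharing

# A profile file issued by the data provider holds the sharing server
# endpoint and a bearer token (path is a placeholder).
profile = "config.share"

# List every table the profile can see
client = delta_sharing.SharingClient(profile)
print(client.list_all_tables())

# Address a table as <profile>#<share>.<schema>.<table> and load it
table_url = f"{profile}#example_share.example_schema.example_table"
df = delta_sharing.load_as_pandas(table_url)
print(df.head())
```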

Eric Hanselman

That's great. We'll point them in that direction. And that is it for this episode of Next in Tech. Thanks to our audience for staying with us, and thanks to our production team, including Caroline Wright, Sophie Carr and [ Kate Asplin ] on the Marketing and Events teams, and our agency partner, the 199. Please keep in mind that statements made by persons who are not S&P Global Market Intelligence employees represent their own views and are not necessarily the views of S&P Global Market Intelligence.

I hope you'll join us for our next episode, where we're going to be digging into AI energy use: some projections from recent study work we've been doing and some thoughts about how we start to address one of the many other facets of AI and its capabilities. I hope you join us then, because there is always something Next in Tech.

Copyright © 2024 by S&P Global Market Intelligence, a division of S&P Global Inc. All rights reserved.

These materials have been prepared solely for information purposes based upon information generally available to the public and from sources believed to be reliable. No content (including index data, ratings, credit-related analyses and data, research, model, software or other application or output therefrom) or any part thereof (Content) may be modified, reverse engineered, reproduced or distributed in any form by any means, or stored in a database or retrieval system, without the prior written permission of S&P Global Market Intelligence or its affiliates (collectively, S&P Global). The Content shall not be used for any unlawful or unauthorized purposes. S&P Global and any third-party providers, (collectively S&P Global Parties) do not guarantee the accuracy, completeness, timeliness or availability of the Content. S&P Global Parties are not responsible for any errors or omissions, regardless of the cause, for the results obtained from the use of the Content. THE CONTENT IS PROVIDED ON "AS IS" BASIS. S&P GLOBAL PARTIES DISCLAIM ANY AND ALL EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, ANY WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE OR USE, FREEDOM FROM BUGS, SOFTWARE ERRORS OR DEFECTS, THAT THE CONTENT'S FUNCTIONING WILL BE UNINTERRUPTED OR THAT THE CONTENT WILL OPERATE WITH ANY SOFTWARE OR HARDWARE CONFIGURATION. In no event shall S&P Global Parties be liable to any party for any direct, indirect, incidental, exemplary, compensatory, punitive, special or consequential damages, costs, expenses, legal fees, or losses (including, without limitation, lost income or lost profits and opportunity costs or losses caused by negligence) in connection with any use of the Content even if advised of the possibility of such damages. S&P Global Market Intelligence's opinions, quotes and credit-related and other analyses are statements of opinion as of the date they are expressed and not statements of fact or recommendations to purchase, hold, or sell any securities or to make any investment decisions, and do not address the suitability of any security. S&P Global Market Intelligence may provide index data. Direct investment in an index is not possible. Exposure to an asset class represented by an index is available through investable instruments based on that index. S&P Global Market Intelligence assumes no obligation to update the Content following publication in any form or format. The Content should not be relied on and is not a substitute for the skill, judgment and experience of the user, its management, employees, advisors and/or clients when making investment and other business decisions. S&P Global Market Intelligence does not act as a fiduciary or an investment advisor except where registered as such. S&P Global keeps certain activities of its divisions separate from each other in order to preserve the independence and objectivity of their respective activities. As a result, certain divisions of S&P Global may have information that is not available to other S&P Global divisions. S&P Global has established policies and procedures to maintain the confidentiality of certain nonpublic information received in connection with each analytical process.

S&P Global may receive compensation for its ratings and certain analyses, normally from issuers or underwriters of securities or from obligors. S&P Global reserves the right to disseminate its opinions and analyses. S&P Global's public ratings and analyses are made available on its Web sites, www.standardandpoors.com  (free of charge), and www.ratingsdirect.com  and www.globalcreditportal.com (subscription), and may be distributed through other means, including via S&P Global publications and third-party redistributors. Additional information about our ratings fees is available at www.standardandpoors.com/usratingsfees.
