Listen: Next in Tech | Ep. 175: Metaverse Advances

Discover the resurgence of the metaverse and its impact on the industrial sector. Join analysts Neil Barbour and Ian Hughes on the Next in Tech podcast as they share their insights from the 15th anniversary of Augmented World Expo USA. Hear how augmented reality tools are transforming the way we interact with technology and which advancements in semiconductors, remote rendering, and cloud streaming are driving the adoption of the metaverse in retail applications.


Attendees

Eric Hanselman

Ian Hughes

Neil Barbour

Presentation

Eric Hanselman

Welcome to Next in Tech, an S&P Global Market Intelligence podcast where the world of emerging tech lives. I'm your host, Eric Hanselman, Chief Analyst for Technology, Media and Telecom at S&P Global Market Intelligence. And today, we're going to be talking about what's going on in the metaverse and to discuss it with me, I've got returning guests Ian Hughes and Neil Barbour. Welcome back to the podcast to you both.

Ian Hughes

Thank you very much. Good to be here.

Neil Barbour

Thanks, Eric.

Question and Answer

Eric Hanselman

So you're both fresh off some metaverse-related travel, and there's a lot of research going on. Maybe we should start with some of the research pieces. Neil, you're working on some revenue forecasts for the metaverse. I'm curious what you're seeing. Give us a little background.

Neil Barbour

Yes, that's right, Eric. We're looking at the metaverse coming out of a disillusionment trough that followed the surge in revenue we saw during the pandemic. Headset sales were up, and a lot of metaverse-leaning experiences grew over the pandemic, including Roblox, and Fortnite did okay for itself. At the same time, industrial and enterprise-leaning metaverse experiences, hardware and solutions started to ramp as well.

Around 2022, some of the air started to come out of that market. There was some overpromising and underdelivering across the stack. But in 2023, we started to see more technologies come online. We're looking specifically at the Apple Vision Pro and, as I'm sure we'll talk about later, a lot of the new companies, ideas and solutions coming out of Augmented World Expo.

So we're back on track for a pretty steep growth curve for the metaverse revenue segment over the next 5 years. We're tracking that quarter to quarter, and we're already seeing some of it come to life on the ground in early 2024. Moving forward, there's still potential for that to be realized.

Eric Hanselman

So rumors of the death of the metaverse were exaggerated?

Neil Barbour

Sure, so to speak. I mean, it's a slow, steady climb, but there's still plenty of road to run here.

Eric Hanselman

Well, it's one of those things we've talked about on previous episodes. You mentioned a certain amount of disillusionment that was out there after some of the heady promises. In some circles, avatars actually have legs now, and different parts of this are maturing. But there's so much going on outside the piece people tend to jump to when talking about the metaverse, which is the full simulated existence.

In fact, there are so many practical uses taking place now. And of course, you mentioned Augmented World Expo. It seems like that was the point at which you really got to see who's back in and what's taking place, the fact that you've got a conference where attendance was up and a lot of people were there. Maybe we should dive into what you both saw and what your impressions were.

Ian Hughes

Yes, sure. Augmented World Expo has been going on for 15 years now. So again, as you said, people say the metaverse is dead; if anything, it's been around for a very long while in all sorts of shapes and sizes. One of the interesting things this year, more so than ever, was a focus on the history of this. We constantly have these peaks and troughs, and quite often, at the peak, everyone kind of forgets all the things that happened before. So you get all the "this is the first x to do y" type claims.

But this year, they were inducting 101 Hall of Famers, people going way back into the '60s who had worked on and changed the way we can interact with data and with one another. So whether you use the metaverse word or not, you had people like Professor Tom Furness, known as the godfather of VR. He started work on heads-up displays in fighter cockpits, which is kind of where this all began, and then moved to trying to understand how to re-present all the complexity of information you'd have on a massive set of dials to a fighter pilot while, as he said, you're mostly unconscious flying at high G.

The idea is to give the pilot the right information in a way that works for their brain, not the other way around, not "here's a bunch of dials, learn them." That entire lineage has moved outward from the military. He started loads of companies and loads of unusual things, all looking at how this tech and all these advances can improve us and improve humanity.

So it's not just human interfaces but humane interfaces, considering people. I've probably mentioned lots of times before that all this stuff is about people. Having that lineage there, people from back then willing to put up with everyone saying "we've done this for the first time" when they haven't, reminds everyone that there's a huge great family of things going on.

Eric Hanselman

We have been doing this for a while, and there are lessons that hopefully will carry through. The point you made about human interfaces is fundamental to a lot of what we've been thinking about. As we've discussed, a lot of this metaverse transition is about its ability to be an environment in which you can present information in ways that wind up being more useful for people, whether that happens to be pilots flying planes or anyone else getting the right information in the right format so they can use it and work with it effectively.

Neil Barbour

Yes. We really saw the rubber hit the road on that for a number of companies on the show floor, particularly Qualcomm, NTT and Xreal. They're transitioning what has been a VR headset-focused market, on the hardware side in particular, away from putting a big, heavy, hot, power-hungry VR device on your head and moving it toward smart glasses, something that rests easily on your ears and nose and gives you more digestible heads-up-display-style information instead of immersing you in another world.

And I think a lot of market observers, myself included, have seen this as the end goal of the AR/VR phase. Perhaps we're in a more exploratory VR phase now, learning the nuts and bolts of what it is to interact in a virtual space, and then a lot of the VR parts get stripped away while the AR parts remain. That's where you see the glasses potentially become a smartphone replacement, not as a 5-year story, but maybe over a 10- or 15-year story. On the show floor, we saw a lot of advancements in waveguides and integrations with machine learning and gen AI technology to deliver on the kinds of promises that have been made in the smart glasses space for, what, decades now?

Eric Hanselman

It's interesting. You mentioned Qualcomm, and it seems like we're now in a phase where a lot of the technologies that have been leveraged for smartphones, the high-performance rendering capabilities from companies like Qualcomm, MediaTek and, maybe to a lesser degree, Intel, are starting to turn up in wearables in their various forms, integrating what had been smartphone capabilities into something a little more lightweight.

Neil Barbour

Yes. And I think that's why Qualcomm's name rises to the top here in a lot of cases: their Arm-based designs are very power conscious. The key advantage of the Snapdragon processors in the market is how power efficient they are. That's really important in the smart glasses space, where you're not going to have a lot of room to hide a battery, so the battery is going to be smaller. And if the smart glasses are going to be your main device throughout the day, you arguably need more battery life than you would in your phone, because they're up and working all the time.

But there are definitely other companies in play here as well. As you said, MediaTek and Intel. NVIDIA certainly isn't done with this race either; they're still involved at the edges, selling directly into some of the more advanced headsets to power the graphics inside them. What we're also seeing right now is that a lot of forward-looking devices share the processing with a phone, so you have perhaps half the rendering done on the glasses and half on the phone.

That's not great, first of all, for true portability. Potentially your phone's in your pocket all the time, but maybe it's not. It's also not good for the phone's battery life; you don't want to be sacrificing one device's power for another. One of the interesting solutions to that problem we found on the floor is that Xreal has started producing a phone-sized tablet that powers its smart glasses exclusively. The trade-off is that now you're essentially carrying 2 phones around in your pocket all day long.

Eric Hanselman

A phone and a tablet, well -- but it does get into that question of the rendering power that's necessary, or just computational power in general. I bring up MediaTek because there was a lot of discussion about their latest device and the fact that they went with larger Arm cores in their multicore product.

And it really is one of those things where you think about the kind of horsepower we're now expecting our devices to have. We're getting to a point at which, when you start dropping whatever the heck AI means into all of this, there's more computational work, more power and more capability that has to go into it as well. I'm sure you saw no small amount of AI when you were out in Long Beach.

Ian Hughes

There was a lot of AI, obviously.

Eric Hanselman

Strangely enough.

Ian Hughes

But just on that rendering power piece, there are also a lot more things about cloud streaming content to whatever device. It's the sort of thing NVIDIA has been doing, where you don't need a powerful device, you just need a screen and a communication channel. And Hololight has created a cloud streaming rendering service for heavy-duty engineering models. The complexity of the things you're trying to render in the industrial space, whether on a headset or on a screen, has typically required a high-end desktop with a big graphics card to power it.

Instead, they render it in the cloud and stream it to you, and handle the management of that, so you actually get the full-beans engineering model delivered to you. We're seeing a lot of those things. And then, of course, there's also potential for AI, not just on your device but in the environment you're streaming from, to do even more with the things being sent to you. We see that kind of blended piece; this is proper distributed computing. It's not just on the device and it's not just in the cloud.

Eric Hanselman

And it starts to overcome what was historically the challenge with cloud-based rendering: the laws-of-physics problem that if the rendering takes place far away, it takes a long time for the photons to actually get to you, and you get lag. But now we've got computational capacity that's widely dispersed enough and well connected enough that we're starting to overcome some of those latency issues.

Ian Hughes

Yes. And you also have the device itself, which has a degree of prediction about what's coming next and what it's seeing. In general gaming rigs, for instance, the graphics card renders frames that have never been sent to it; by filling in the gaps, it keeps the framerate up. And I should say, we've been talking about the hardware, but the whole software stack that's there to support the metaverse and the things spun off from it is again maturing.

We saw a whole bunch of announcements at the show about the next level of software engineering stack to build with. Take 8th Wall: it used to be kind of like typing HTML into something, but now it's a full WYSIWYG editor. I mean, they all end up looking like Unity and Unreal, but they're about saying this is a spatial application, this piece is going to be here, and we're going to size this and put this over there.

Snapchat is doing the same thing, and Zappar is doing a similar thing for WebXR. These are all rich toolkits that let you create incredibly complex content, not necessarily simply, but incredibly complex content, to be delivered on whatever platform, whether it's one of the headsets or your phone. And that level of maturity matters: they don't put that kind of investment into those tools unless there's a reason, unless people are going to use them and they can attract the brands, the enterprises or the consumers to do things with them.

Eric Hanselman

This has taken a step beyond some of those tools that came out of gaming environments. You mentioned Unreal and Unity, frameworks that started out in that world, and now we're getting to that next stage, something that's a bit more general purpose, it sounds like.

Ian Hughes

Well, to some degree. Unreal and Unity also have tools for these spatial environments, and they're starting to support the USD format and things like that. But if you take Snapchat, Snapchat's there for a particular set of use cases, isn't it? So understanding what a facial mesh is and how you're going to replace something and make an unusual mask or a quirky, quick app for somebody is one thing; being able to deliver across all the WebXR browsers, which is what Apple were going for, is another. It's the level of intuitive tooling that then does all the complicated bits for you.

Eric Hanselman

Right. I kind of took the glut of new development platforms as a sign that Unity and Unreal were not meeting the needs of AR/XR/VR developers, that these are really drilling down into very specific use cases. I don't know, Ian, you spent more time in that space than I did. Did you see a lot of the gen AI story being told in these special-purpose platforms?

Ian Hughes

It was certainly starting to. Snapchat in particular, I think, was putting significant effort into helping generate content using components of gen AI. But it also goes back to what's on the device: trying to do diffusion modeling of live video and make changes based on that. At the moment, a lot of what you do in Snapchat is really a cut-and-paste over a piece of video; it's a filter. This next level of filter isn't just a filter. It's dynamically changing every single pixel at a very fast rate.

They're trying to get faster and faster, but on the device. So if you want the film you're shooting to look like a Van Gogh, it will. If you want it [ 3 cubits-ed ], it will. They're combining that kind of rendering technology, and a lot of these tools inherently have some sort of copilot or assistance capability built in. And people like Roblox are busy talking about full-on gen AI to build the entire experience. I'd say obviously, but that is what they're doing.

Eric Hanselman

It seems like platforms like Roblox -- and actually, I want to give a shout-out to the Metaverse Digest that you've been putting together. It sounds like you've got a set of folks heading toward these environments with complex tools, some going to platforms like Roblox, and some pretty mainstream organizations starting to leverage these environments, the likes of a Walmart.

Ian Hughes

Absolutely, Walmart, Ikea, people like that. As I mentioned on the history piece, there are lots of people apparently doing things "for the first time" again, but many of these brand activations and engagements with these platforms have happened in some shape or form before. That's not to say they shouldn't happen again, and it's all good. But one of the reasons we produce this monthly digest, not necessarily of all the things that have been going on, is that so many things are going on.

But we try and cover the full breadth of the metaverse: everything from the deepest industrial use case to enterprise, B2B, B2C, what consumers are doing, what countries are doing, country-level activations of things. There's Wales, which has a "visit Wales" experience on the Spatial platform, almost like a tourist guide.

So we try and gather all those together and put them alongside things like the forecasts, the numbers and the other pieces we're doing, to get it all in one digestible piece. And if you look at it, you don't often get things like Kung Fu Panda next door to Siemens' industrial metaverse. That's not what coverage normally looks like. But we're trying to show those kinds of crossovers, and there are some. They exist.

Eric Hanselman

Do you ever get Kung Fu Panda speaking Welsh? [Foreign Language] to you all.

Ian Hughes

[Foreign Language], well done, yes.

Eric Hanselman

I did -- like, limited amount of Welsh, so.

Neil Barbour

Yes. And the producer of the Walmart crossover with Roblox, a production company named Sawhorse, was on the show floor. They're kind of a transmedia advertising agency, and they were at the center of what's probably not a new phenomenon but maybe a growing one: content producers and creative types swirling around this industry, trying to figure out how to push more traditional media or retail properties into this new virtual opportunity, particularly to target Gen Z and younger audiences that may not be seeing the entire brand landscape on traditional media.

Ian Hughes

Well, that opens up some really interesting possibilities, because you've now got viewing environments that don't have to be entirely headset-based. There are other options for how to deliver that, so it seems like that's opening up opportunities in terms of expanding the audience as well.

Neil Barbour

There are also the things helping people capture the spaces and objects around them in ways other than just a photo, which they then want to share with people. So there are the Gaussian splats of the world, and there are many applications for doing Gaussian splatting. We saw a lot of those at the show as well, helping people and brands say, well, this is a very accurate model that can be experienced, not just a 3D model. It's an accurate place or thing or shop or product that can then be dropped into a multitude of applications, including all the software development platforms we were just talking about.

All those things help people understand that the spatial world, to use Apple's word, is something we respond to as humans because we are spatial beings. All of that gets assisted by technologies like Gaussian splatting or NeRF, both of which are heavily AI-focused, though not necessarily gen AI-based, taking some content and working out from a few photos how to make a proper 3D environment from that thing.

And so, "come and join me in my garden" kind of thing. You can do that with people. And for all -- whilst I love the more fanciful end of the metaverse, where you can be anything and have anything, everyone needs to start and does start in what they know, in the buildings they know, in the place that they know, in the products they know. And that's obviously what retail wants to do. They want to sell you the products they know so they can build these experiences, but from what they've already got.

Eric Hanselman

That's a point about thinking through the journey, ensuring that some of those first steps are into an environment that's identifiable, that people can specifically relate to. I'm curious, do you think that's the best first path? Again, so much of what we've talked about, and I think people's first reactions to the metaverse, are about the more fanciful capabilities, but there's so much more in the practical here-and-now capabilities that we've got direct access to.

Ian Hughes

I think everyone starts -- whatever you're doing, you start with that solid base. If you go into an enterprise and say we're going to do a metaverse thing, everyone will model their office, guaranteed. And then they'll go, why have we done that? We can fly now; we don't need those desks and panels. But if it's quick to do that, which it is becoming, because you just scan around with an iPhone and you've got a model and off you go, then people can make that leap quicker and start finding all the unusually expressive ways to use this infinite canvas of infinite things and an infinite number of pixels. But I think that starting point is a psychological thing.

Neil Barbour

Yes, Ian and I have this -- maybe debate is too strong a word. But if Walmart were to move all of its retail into the "metaverse" or virtualize a store, would people be better served or more comfortable in a model of an existing store, where they walk around, they might know where things are, they put items into their virtual cart, and eventually the goods are sent to them physically by truck or mail, or they go pick them up as they do now when they order from Walmart?

Or would people be better served by a virtual experience where it's easy to grasp all the items in front of you in some more abstract interface that's easier to manipulate than walking around an entire store? That's a debate these brands and retail presences are going to have with themselves and with their audiences over the next 5 to 10 years.

Eric Hanselman

It opens up that question of how you address people's ability to work with the interface. Do you start with something familiar, the walking-around piece and the clunky "point at the thing, drop it in my cart," click-here kind of thing, or how quickly can you get them into an easier way to actually get to the goods, whether that means changing the structure of the environment or the mechanisms they're using?

A lot of this really starts with what constituency you're actually trying to address, because some folks are going to grasp some of these mechanisms more rapidly than others. And I guess it's a question of do you have to do one, can you do more than one, and how do you make decisions around that?

Ian Hughes

That's back to human interfaces and understanding not just all humans, but the one currently engaging with the experience. That's potentially a great place for AI and gen AI to sit: to respond, going back to the history of this, to the needs of the pilot, or in this case the consumer. What are they frustrated by, what's not working for them, and what can we give them to make it better?

And that's not a mad suggestion, because it's what we see in collaborative robotics. We've talked before about how the robot adjusts to the skill of the person engaging with it: if they can go faster, it goes faster; if they're going slower, it goes slower. Those kinds of adaptive behaviors are really the benefit of all this tech, to help us, not to help it.

Eric Hanselman

I keep getting back to the Clippy model, "It looks like you're shopping for pots and pans today."

Ian Hughes

Yes. And, well, there are good ways and bad ways of doing that.

Eric Hanselman

Absolutely true. Well, and you've brought us right back to the beginning; we've done a full 360 here. So much of this is adapting to what's useful for the person using the environment at that point, whether it's the heads-up display or shopping for things. So much of it comes down to making it useful for the human who actually has to work in it.

Ian Hughes

Yes. And again, going back to Augmented World Expo, all the talks, of which there were many, were being AI-transcribed and translated into 144 languages live. You don't have to have a fancy headset, really. You just open your phone and you get the subtitles in your language. It's adjusting to the person: you pick the language you want, and it doesn't matter whether it's doing 1 or 2 languages or all of them; it doesn't care, it just does the lot. That's one of those things large language models are particularly good at, those kinds of translations, good enough for a conference or a conversation.

Eric Hanselman

And we've gotten to our own Babelfish, wow.

Neil Barbour

And the answer's there: you don't need the headset. That's true, and one of the great parts of a lot of these technologies is that they diffuse across multiple platforms. But the company doing the translation, Xreal, is selling its own heads-up smart glasses that will do the translation inside the lenses. What a lot of these AR/VR/XR hardware vendors are trying to underline, particularly in this moment, is that this hardware is the ideal platform for gen AI, in their words.

The headset sees what you see. It gets the context of the space around you, so when you ask it a question, that context already feeds in through front-facing cameras or sensors or what have you, and you don't have to give it a long text description. What is this in front of me? Who is that? When's the last time I saw that person? Where's my next direction? Give me some context of the world around me. So the smart glasses, in particular, are supposed to be that next step: not only do the translations exist and you can access them, they're already being spit into your eyeballs.

Eric Hanselman

We're translating that street sign as you're looking at it.

Neil Barbour

As you're looking at it.

Eric Hanselman

Well, man, a lot of interesting progress. I will point our listeners at the Metaverse Digest to get the latest and greatest of what's going on, but we are at time. I want to thank you both for being on. There are so many more conversations around this to be had, and I'm sure we'll have you both back on. But thank you very much for being here today.

Ian Hughes

Thank you. Been great.

Neil Barbour

Thanks, Eric.

Eric Hanselman

Never enough time to go as far as we'd like, but that is it for this episode of Next in Tech. Thanks to our audience for staying with us. Thanks to our production team, including Sophie Carr, [ Gary Sussman ] and [ Kate Aspen ], the Marketing and Events team, and our agency partner, the One Nine Nine.

I hope you'll join us for our next episode, where we're going to be talking about cyber issues around hybrid warfare and some of the things we've taken on. It's actually a revisiting of a conversation we had at the RSA Conference. I hope you'll join us then, because there is always something Next in Tech.
