Alphabet Inc.'s Google LLC brings a wealth of assets — including research and development leadership in AI/machine learning models and expertise in data engineering — to the challenge of bringing AI ideas to fruition. Google is infusing the capabilities of its multimodal (text, audio, video, images and code) Gemini large language model into its developer, productivity, database and security suites. It is aiming to attract net new customers and partners to its ecosystem and improve its traction in the enterprise.
A new marketing push, an expanded sales organization and a fleet of former Microsoft Corp. executives are part of the effort. Many of the 30,000 attendees at the recent Google Cloud Next event in Las Vegas were new to the company's cloud platform. For 2023, Google Cloud's annual run rate reached $37 billion, five times the 2018 level and 26% higher than in 2022.
Google has long taken the "if you build it, they will come" approach, relying on engineering know-how to deliver some of the world's most popular consumer software and services, such as Search, Chrome, Gmail, Maps and Translate. The company grew up on AI and ML but in the cloud era has struggled to compete in the enterprise with Amazon Web Services (with its loyal fan base and scrappy innovation) and Microsoft (with its big-business application and partner acumen). Now, as generative AI shows its promise for transforming a range of business processes and industries, Google is digging in with IP at the infrastructure, platform and application level to help companies accelerate their adoption of the technology using its models and tools.
Context
Google's focus is on business outcomes, such as better customer experience, stickier services and more efficient supply chain management. This is where the company concentrated its messaging at Google Cloud Next, with dozens of customer stories documenting productivity gains and cost savings.
As AWS proved in the early years of the cloud, a vibrant ecosystem can multiply opportunities and create a flywheel of experimentation, adoption and expansion, as long as customers stay engaged. Gaps remain in terms of the data preparation needed for effective AI deployment and measuring return on AI investment, but Google is investing across the board to ensure it remains a dominant player.
Google bets on Gemini
Google's announcement of the public preview of Gemini Pro 1.5 underpinned most of the other Next 2024 product developments. Gemini 1.5, which was trained using Google's own tensor processing unit (TPU) accelerators and not GPUs, comes standard with a 128,000-token context window. A version with up to 1 million tokens is in private preview. The latter translates to approximately one hour of video, 11 hours of audio, or 700,000 words.
A year ago, similar models had windows of less than 10,000 tokens, and one Googler mentioned that company researchers have been operating Gemini with a context window of up to 10 million tokens. In addition to Gemini's size, its multimodality (the ability to combine text, image and video into a single prompt and reason across those modes) is key and can enable a new type of task automation.
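As a back-of-envelope check on the figures above, the cited equivalences imply roughly 1.4 tokens per English word and about 91,000 tokens per hour of audio. A minimal sketch of that arithmetic (the ratios are derived from the numbers in this article, not from any published tokenizer specification):

```python
# Back-of-envelope arithmetic for the context-window figures cited above:
# a 1-million-token window is said to hold roughly 700,000 words,
# 11 hours of audio, or one hour of video.

ONE_MILLION_TOKENS = 1_000_000

# Implied tokens per word (~1.4, in line with common English tokenizers)
tokens_per_word = ONE_MILLION_TOKENS / 700_000

# Implied token budget per hour of audio (~91,000)
tokens_per_audio_hour = ONE_MILLION_TOKENS / 11

# How much larger the previewed window is than the standard 128,000-token one
growth_vs_standard = ONE_MILLION_TOKENS / 128_000

print(f"{tokens_per_word:.2f} tokens/word")
print(f"{tokens_per_audio_hour:,.0f} tokens per audio hour")
print(f"{growth_vs_standard:.1f}x the 128,000-token window")
```

By the same ratio, the 10-million-token research window mentioned above would hold on the order of 7 million words.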
The other major AI announcement centered on what Google calls agents — conversational AI operatives that can respond to natural language prompts or do work on behalf of other agents to automate tasks. For example, an agent could use an API call to Gemini to summarize text, then to Google's Imagen image-generation model for a picture, and then to a text-to-speech engine to create a podcast of the text with the image as the podcast logo.
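The podcast example above reduces to a simple pipeline pattern: one agent orchestrates sequential calls to several models, feeding each output into the next step. The sketch below illustrates that control flow only; the three model calls are stubbed out, whereas in practice each would be an API request to Gemini, Imagen and a text-to-speech service respectively.

```python
# Illustrative sketch of the agent pattern described above: one agent
# chains three capabilities -- summarize text, generate an image, then
# synthesize speech. All three functions are local stand-ins, not real
# Google Cloud API calls.

def summarize(text: str) -> str:
    # Stand-in for a Gemini summarization call: keep the first sentence.
    return text.split(".")[0] + "."

def generate_image(prompt: str) -> bytes:
    # Stand-in for an Imagen call; returns placeholder image bytes.
    return f"<image for: {prompt}>".encode()

def text_to_speech(text: str) -> bytes:
    # Stand-in for a text-to-speech call; returns placeholder audio bytes.
    return f"<audio of: {text}>".encode()

def podcast_agent(article: str) -> dict:
    """Chain the three capabilities into a single automated task."""
    summary = summarize(article)
    logo = generate_image(summary)   # podcast logo from the summary
    audio = text_to_speech(summary)  # podcast audio from the summary
    return {"summary": summary, "logo": logo, "audio": audio}

episode = podcast_agent("Gemini 1.5 adds a million-token window. Details follow.")
print(episode["summary"])
```

The point of the pattern is that the orchestration logic, not any single model, is the "agent"; each step can be swapped for a different model without changing the pipeline.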
The Vertex AI Agent Builder works as a pipeline orchestrator for creating agents in Vertex AI, Google Cloud's platform for creating, deploying and managing AI models and applications. Agents can be configured either by using the Vertex AI Agent Builder's prebuilt tools and APIs covering things like search, conversation and language understanding capabilities, or by custom coding that integrates with open-source frameworks such as LangChain, which is offered as a service within Vertex AI. The Builder also includes a Vertex AI Search RAG (retrieval augmented generation) system to ground language models in enterprise data.
Google also announced Grounding on Google Search in public preview, where Google web search is used as a data source for grounding model responses, as well as the general availability of Imagen 2.0, which can generate short live video clips from text prompts.
Workspace adds AI-powered Vids, real-time translation
To enhance its Workspace productivity suite, Google is adding intelligent agents, custom-built and AI-powered processes, and new opportunities to create and collaborate. The company claims more than 10 million paying customers and 3 billion monthly active users worldwide. At the Next conference, it unveiled a series of feature updates, commercial add-ons and a new video app.
Gemini for Google Workspace builds the company's newest and most capable AI model directly into Gmail, Docs, Slides, Sheets and Meet. Gemini is also a stand-alone chat experience that can assist with research, analysis, content creation and project management tasks. It was launched as a premium feature for Enterprise and Business plans, priced at $30 and $20 per user, per month, respectively, with an annual commitment.
A new addition to Workspace is an AI-powered video editor called Vids, currently in alpha testing with select customers and set for a June release in Workspace Labs. Vids leverages Gemini to generate narrative outlines, storyboarding and style customization for video creation. New add-ons for Meet and Chat, priced at $10 per user, per month, introduce AI-driven features such as notetaking (in preview), translation across 69 languages in Meet (coming in June), and translation for Chat along with on-demand conversation summaries (later this year).
Google wants to empower customers to tune their chosen foundation model with their own organizational data to create employee agents built directly into Workspace. Leveraging its AI-optimized infrastructure and a Workspace add-on framework, these agents will comprehend multimodal information and reason across varied inputs. Applications range from optimizing processes and managing routine tasks to editing and translating communications, as well as responding to queries from Workspace users.
A big challenge for Google will be in messaging this shift. To justify the price premiums, customers will need to see the impact on their own human resources, marketing, sales and other workflows, which will require Google to come up with specific prompt libraries and integrations with third-party apps for each persona, and provide education around what assistant-driven workflows look like. The pace of product innovation is, however, impressive.
AI Hypercomputer and Gemini
AI Hypercomputer is Google's integrated stack of AI-optimized processors, servers, storage, networking and software delivered via a flexible consumption model. More a system-level concept than a product, it refers to an end-to-end architecture with a well-defined control plane that orchestrates and schedules software processes to work most efficiently on the underlying hardware. Customers should not commit to an AI model, Google says — models are changing and improving too fast to pin your company's future on one or another. What is needed is a substrate that can adapt to future model developments, and this is where open architectures play a role.
Performance-optimized hardware enhancements include the general availability of Google's own TPU v5p (the "p" is for performance) with four times the capacity of the TPU v4. The TPU v5p is the processor used to train and serve the latest Gemini models, as well as "hero workloads" from Anthropic and other foundation model builders. A3 Mega VMs powered by NVIDIA H100 Tensor Core GPUs have double the networking bandwidth of the A3 VMs introduced in 2023.
Hyperdisk Storage Pools with advanced capacity make it possible to access block storage from a pool that is not explicitly tied to VMs, reducing total cost of ownership by delivering storage volumes based on workload demand rather than having to purchase and provision in advance. Google claims to be the first major cloud provider to offer this capability as a first-party service. JetStream is a throughput- and memory-optimized inference engine for large language models (LLMs) that Google says offers higher performance per dollar on open models such as Gemma (Google's open-weights sibling of Gemini) running on JAX and PyTorch/XLA.
For developers, Gemini Code Assist, a competitor to Microsoft's OpenAI-powered GitHub Copilot Enterprise, is a code agent designed to enable faster environment setup, better unit test coverage and automated code review. It inherits the million-token context window being previewed for Gemini 1.5 Pro, which Google claims enables more accurate code suggestions and reasoning over larger proprietary code bases. Newly announced at Next was Gemini Cloud Assist, an application life-cycle management tool that enables real-time querying and troubleshooting based on an organization's runtimes and environments.
BigQuery's expanded remit
With respect to Google's database offerings, the message resonating at Next was the melding of data and AI. Historically used for data warehousing, the BigQuery SQL engine is now being positioned as a unified analytics platform, bringing all of the company's analytics services under one BigQuery roof. It is the culmination of a significant development effort spanning the past year and a half.
A core element of BigQuery is its common storage layer, BigLake, announced in April 2022, which provides unified storage access regardless of compute engine or data type. In that context, notable announcements at Next included BigQuery support for Apache Spark, which aligns with BigQuery's multiple compute engine strategy, enabled by a newly released unified runtime metastore that provides universal data mapping for table definitions and enforces policy controls for data access. Other processing updates include the ability to run continuous SQL queries on data as it arrives, with results routed to services such as Vertex AI and Bigtable, and a new managed service for Apache Kafka for streaming data.
On the AI front, BigQuery is getting direct integration with Vertex AI, including the ability to carry out multimodal data analytics on unstructured data such as documents, audio and video. Integration of Google's Gemini LLM is pervasive within BigQuery, including BigQuery Data Canvas (which allows data engineers to create complex data processing pipelines within a guided user environment based on Gemini) and BigQuery for SQL and Python code assist.
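The BigQuery-Vertex integration surfaces in SQL through BigQuery ML's ML.GENERATE_TEXT table function, which sends rows through a Gemini-backed remote model. A hedged sketch of composing such a query follows; the project, dataset, model and column names are hypothetical placeholders, and the exact option set depends on the model version in use.

```python
# Hedged sketch: building a BigQuery ML ML.GENERATE_TEXT statement that
# asks a Gemini-backed remote model to summarize a table's text column.
# All resource names below (my_proj.my_dataset.*) are hypothetical.

def generate_text_sql(model: str, source_table: str,
                      temperature: float = 0.2) -> str:
    """Compose an ML.GENERATE_TEXT query over a table's text column."""
    return f"""
SELECT ml_generate_text_result
FROM ML.GENERATE_TEXT(
  MODEL `{model}`,
  (SELECT CONCAT('Summarize: ', review_text) AS prompt
   FROM `{source_table}`),
  STRUCT({temperature} AS temperature))
""".strip()

sql = generate_text_sql("my_proj.my_dataset.gemini_model",
                        "my_proj.my_dataset.reviews")
print(sql)
```

In production the statement would be submitted via the BigQuery client rather than printed, but the shape illustrates the point of the integration: generative inference expressed as ordinary SQL over warehouse data.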
Barely two years old, AlloyDB (Google's fully managed PostgreSQL database service) has received significant development attention. Last year, the company announced AlloyDB AI, a portfolio of AI tools and capabilities to better enable the building of generative AI applications. This year, Google is furthering that AI integration. Vector support has been added across all Google databases, and for AlloyDB specifically, Google has added improved vector indexing that is compatible with pgvector and leverages tree-based indexing for significant performance gains, according to the company. Moreover, Google Cloud has added a new model endpoint manager that makes it easier to connect to Vertex AI, third-party and customer AI models.
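The workload those indexes accelerate is nearest-neighbor search over embeddings stored alongside ordinary rows. The toy sketch below shows the semantics with a brute-force cosine-distance scan; a pgvector-compatible index exists precisely to make this lookup sublinear instead of scanning every row. The sample rows and three-dimensional vectors are invented for illustration.

```python
# Toy illustration of vector search semantics: rank stored rows by cosine
# distance between their embedding and a query embedding. A real AlloyDB /
# pgvector index avoids this brute-force scan; the math is the same.
import math

def cosine_distance(a: list[float], b: list[float]) -> float:
    """1 - cosine similarity; smaller means more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

# (row label, embedding) pairs -- invented 3-d vectors for illustration
rows = [
    ("cheap red wine", [0.9, 0.1, 0.0]),
    ("vintage champagne", [0.1, 0.9, 0.2]),
    ("table grapes", [0.4, 0.2, 0.8]),
]

def nearest(query_vec: list[float], k: int = 1) -> list[tuple]:
    """Return the k rows whose embeddings are closest to the query."""
    return sorted(rows, key=lambda r: cosine_distance(query_vec, r[1]))[:k]

print(nearest([0.85, 0.15, 0.05]))
```

In an application, the query vector would come from the same embedding model used to index the rows, typically served through Vertex AI or, with the new endpoint manager, a third-party model.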
Google showcases security portfolio progress
Google released a dizzying array of security-related announcements at Cloud Next, spanning AI, security operations, secure browsing, trusted cloud, Google Workspace, network security and data security. Highlights included general availability of Security Command Center Enterprise (SCCE), a multicloud risk-management offering that combines cloud security and enterprise security operations.
Additionally, the company shared a preview of Gemini in SCCE, which enables security teams to search for threats and other security events using natural language, quickly summarize critical misconfigurations and vulnerability alerts, and generate attack paths to help them understand and remediate cloud risks.
The company also announced a preview of Mandiant Hunt for SCCE, which allows customers to leverage elite-level Mandiant analysts on demand, to aid in incident response and threat hunting. Gemini in Threat Intelligence enables analysts to ask Gemini for the latest threat intelligence from Mandiant, including detection of indicators that their environment has been compromised. In preview is the ability for customers to conduct searches across Mandiant's threat intelligence repository using natural language.
Gemini's ability to summarize OSINT (open-source intelligence) reports from VirusTotal directly in the platform is now generally available. Google also announced GA of Chrome Enterprise Premium secure browser, Google Cloud Next Generation Firewall Enterprise and Cloud Armor Enterprise, plus several confidential computing and data security improvements.
Partnerships capture AI/ML life-cycle opportunities
Throughout Next, Google highlighted the critical importance of partners in delivering "last mile" AI value. SaaS providers/ISVs, systems integrators and developers are the stars of the generative AI show. While foundation model (Hugging Face, Anthropic, AI21 Labs, Cohere, etc.) and infrastructure/platform providers are also key partners, their roles lie in providing the enabling technology for the tightly integrated Google Cloud AI stack. The services- and application-centric partners help the company in its ongoing pivot from a tech-heavy "supply the tools and let the customer build it" posture to a more enterprise-oriented business outcomes approach.
To target enterprises more directly, the company will foreground vertically focused, outcome-based commercial propositions and take them to market as a consulting sell in conjunction with global SIs, many of which were key sponsors of Next, including Accenture PLC, Capgemini SE, Cognizant Technology Solutions, Deloitte, HCL Technologies Ltd., McKinsey and Tata Consultancy Services Ltd.
Simplifying customers' access to technology through a platform framework and managed services makes it easier to get up and running on Google infrastructure once the business value, use cases and applications are identified. The company's own generative AI cloud consulting team offers training, solution creation and operational readiness services to both partners and selected enterprises, and has positioned itself as a go-to-market "sell with" companion as part of its 100% partner-attached strategy.
The company announced enhanced credentials for partners such as a new generative AI services specialization (including expanded access to AI resources), generative AI delivery excellence and technical bootcamps, and a new program for technology co-development with ISVs via the Google Cloud Tech Expert community.
Customers and partners we met with at Google Cloud Next underscored the importance of structuring data in a way that allows it to be usefully accessed, ingested and used for inference by AI tooling. Carrefour Group, for example, spent three years on its database structures and tagging regime, which enabled it to quickly build services to take advantage of AI. The lesson here is that enterprises can take a deliberate, phased approach focused on different parts of the business (such as social or customer experience) rather than having to address the entire organization.
This article was published by S&P Global Market Intelligence and not by S&P Global Ratings, which is a separately managed division of S&P Global.
451 Research is a technology research group within S&P Global Market Intelligence. For more about the group, please refer to the 451 Research overview and contact page.