AWS re:Invent 2024: Builder ethos embraces simplicity

AWS re:Invent 2024 this past week ushered in a transformative chapter for both Amazon Web Services Inc. and the broader tech ecosystem.

This year’s event marked the debut of Matt Garman as chief executive of AWS, stepping into a role that aligns with what John Furrier aptly describes as a “wartime CEO” — a technically adept leader and trusted consigliere. Garman’s keynote set the tone for AWS’ strategic focus: doubling down on core infrastructure capabilities across silicon, compute, storage and networking, while extending its Graviton chip playbook to graphics processing units and potentially large language models.

Key highlights included the unveiling of six new large language models, formerly codenamed “Olympus” and now named Nova, introduced by none other than Andy Jassy, the godfather of cloud and now CEO of Amazon.com Inc., who made a cameo appearance with a substantive keynote. Jassy’s return underscored Amazon’s commitment to innovation at large scale, bridging its heritage in infrastructure with the future of AI.

A notable emphasis on simplification was palpable at this year’s event. Though the company remains steadfast in offering primitives, or granular services, and service flexibility, it’s now streamlining AI, analytics and data workflows into a unified view. This vision materialized with advancements in SageMaker, positioning it as a hub for machine learning, analytics and AI. Additionally, the event showcased a deluge of new features and services across AWS’ expansive ecosystem.

In this Breaking Analysis, we explore the critical takeaways from re:Invent 2024. Leveraging insights from Enterprise Technology Research data, we’ll unpack AWS’ strategy, the implications for the broader ecosystem, and how we believe the next several years will unfold.

Amazon’s legacy of ‘primitives, not frameworks’ is evolving

One of the recurring themes in AWS’ journey has been its “primitives for builders” ethos — a foundational philosophy shaped by Amazon Chief Technology Officer Werner Vogels. At re:Invent 2021, this philosophy came under the spotlight as AWS navigated a market shift from tech-first early adopters to mainstream corporate developers who demand simplicity and pre-integrated solutions.

During the event, Vogels articulated the essence of AWS’ approach: Primitives provide unparalleled choice and power, enabling builders to craft virtually anything. However, he also acknowledged a critical tradeoff: As primitives proliferate without being inherently designed to fit together, they introduce complexity. This bottom-up approach starkly contrasts with the top-down strategy employed by competitors such as Microsoft Corp., which prioritizes simplification and abstracts away the complexity of underlying primitives.

Key takeaways:

  • AWS’ bottom-up philosophy: Vogels emphasized that frameworks often take years to perfect, by which time market dynamics may have shifted. Primitives, in contrast, offer immediate flexibility and enable innovation, but at the cost of added complexity.
  • Competing philosophies: AWS’ reliance on primitives underscores a bottom-up strategy, which prioritizes developer control and flexibility, deferring simplification. Microsoft’s top-down approach flips this, starting with simplicity and hiding the complexity of primitives behind polished frameworks.
  • Evolving market demands: The shift from tech-savvy builders to corporate developers requires AWS to balance its traditional ethos with growing demands for streamlined, pre-opinionated solutions. This tension has been a defining narrative in recent years.

Context and broader implications:

Former AWS CEO Adam Selipsky’s tenure attempted to address this tension by introducing a simplification narrative, beginning at re:Invent 2021. However, these efforts evoke what Bill Gates said many years ago: Radically shifting an established architecture or business model is like kicking dead whales down the beach.

In other words, it’s a monumental shift that takes time to execute fully. Now, under Garman’s leadership, AWS is positioned to rationalize these seemingly competing philosophies, leveraging its traditional strengths while addressing the complexity concerns of mainstream enterprises.

AWS’ challenge remains clear: Integrating its extensive portfolio of primitives into cohesive solutions without sacrificing the flexibility that has long defined its identity. As the market continues to demand simplification, AWS must reconcile these competing forces to maintain its leadership.

Matt Garman at the helm

AWS, under its new CEO Garman, is reasserting its dominance in core infrastructure while embracing the simplification imperative for certain parts of its estate. This marks a significant evolution, blending AWS’ traditional ethos of “primitives for builders” with a shift toward pre-integrated, simplified solutions tailored for corporate developers.

One of the standout strategies involves leveraging Nitro and applying the Graviton playbook — originally devised to lower x86 compute costs — to GPUs and perhaps even LLMs. This year, AWS introduced six new LLMs, called Nova, along with enhancements in SageMaker. These announcements underscore AWS’ commitment to lowering the cost of AI training and inference while reducing its reliance on Nvidia Corp.’s expensive GPUs. With Graviton, Trainium and Inferentia silicon offerings, AWS is making strides toward a more independent and cost-efficient AI strategy.

This approach has also allowed AWS to get on the generative AI training and inference learning curve at lower cost, using its own infrastructure (silicon and networking). We’ll come back to this point later in the post, but we see this as a significant competitive differentiator.

Key takeaways:

  • Simplification versus choice: Amazon is trying to do both. By no means is it steering away from its Unix-inspired ethos of providing granular primitives. But it’s adding a more refined focus on simplicity and integration, which resonates with mainstream corporate developers.
  • Composable infrastructure: AWS is able to deliver on a framework for composable infrastructure, breaking free from constraints tied to Nvidia’s allocation and scarcity challenges. This is explicitly enabled by Nitro and its internal silicon capabilities.
  • SageMaker enhancements: The new SageMaker integrates data, analytics and AI more cohesively, with improved usability. AWS is converging these areas to better serve developers and enterprises alike.
  • Nova models: Six new homegrown LLMs deliver enhanced price-performance, expanding on AWS’ Graviton-inspired approach to silicon.
  • Q for developers: While Q Business remains in its early stages, Q Developer shows promise in simplifying the software development lifecycle and is more advanced, in our view.
  • Data harmonization gaps: Despite progress, AWS still lacks a unified data source akin to Salesforce’s Data Cloud, leaving room for improvement in establishing a coherent source of truth.

The broader perspective:

The irony of this shift is that AWS’ original “primitives-first” ethos left the perception that it was lagging behind in LLM infrastructure and, like others, was forced to rely on Nvidia. However, for the past two years, AWS has been able to gain critical learnings by evolving its AI infrastructure, using its lower-cost chips. Moreover, Garman’s leadership is now reaping the benefits of predecessor Selipsky’s simplification groundwork. AWS’ renewed focus on user-friendly, integrated solutions positions it to compete more effectively in the evolving enterprise market, in our view.

As AWS evolves, the simplification-and-choice positioning will remain central to its strategy. We’ll continue monitoring how this balance shapes AWS’ trajectory in AI, analytics and infrastructure.

AWS leverages silicon and systems expertise to challenge Nvidia’s dominance

Garman brings a deeply technical background, having led EC2 for years. His leadership reflects AWS’ strategy of combining technical depth with a pragmatic approach to infrastructure. His tenure marks a shift from Selipsky’s operational focus, positioning AWS to capitalize on its extensive systems expertise. As the industry increasingly relies on AI-driven innovation, AWS is reasserting its dominance in silicon, networking and infrastructure, aiming to reduce dependency on Nvidia’s GPUs for AI training and inference.

This strategy stands in contrast to Microsoft’s approach, which has been limited by GPU constraints and a reliance on Nvidia’s supply chain. AWS’ silicon strategy changed dramatically when it started working with Annapurna Labs and eventually acquired the firm in early 2015.

Its silicon roadmap, including Graviton, Trainium and Inferentia, is focused on lowering costs and optimizing performance for AI workloads. The result? AWS is able to position itself aggressively as a leader in scalable, cost-effective AI infrastructure. Microsoft got started much later in its silicon journey and as such is more exposed to Nvidia’s GPU supply.

Google LLC, like Amazon, has been on a custom silicon journey for more than a decade with its tensor processing unit, or TPU, first deployed around 2015. Subsequently, Google has rolled out other silicon innovations and holds a leading position in the space.


Key takeaways: AWS, Microsoft and Google in the silicon race

  • Reducing Nvidia reliance: AWS’ silicon offerings, particularly Inferentia and Trainium, are designed to lower the cost of AI training and inference while reducing reliance on Nvidia’s GPUs. Notably, AWS has achieved significant milestones in running advanced models such as Claude 3.5 on its silicon, a step toward challenging Nvidia’s inference dominance.
  • Microsoft’s GPU constraints: Microsoft in our view faces certain challenges in the AI space because of its dependence on Nvidia. Limited GPU access forced Microsoft to prioritize applications such as Bing before scaling Copilot functionality across Office and Azure. This sequencing highlights the challenges of scaling gen AI while reliant on external supply chains.
  • Google’s strategic lead: Google’s early investment in TPUs and high-performance networking allowed it to embed gen AI capabilities throughout its products. Unlike AWS and Microsoft, Google anticipated the infrastructure needs of AI and began building its accelerator and networking capabilities years ahead. This foresight enabled Google to operationalize gen AI ahead of competitors.

AWS’ competitive edge: systems expertise and Nitro

AWS’ Nitro system is a core differentiator. The company is able to deliver high-performance, Ethernet-based networking infrastructure optimized for AI workloads, avoiding reliance on Nvidia’s InfiniBand infrastructure. Nitro exemplifies AWS’ ability to integrate hardware and software into cohesive systems and, to a certain degree, replicate Nvidia’s systems-level approach to AI. Though Nvidia continues to lead in training at large scale — leveraging software beyond CUDA to optimize massive clusters — AWS is catching up by porting its entire Model Garden to Trainium-based clusters.


The Claude 3.5 milestone and Anthropic partnership

Anthropic PBC, with AWS as its primary cloud sponsor, represents a pivotal partnership. Though we believe much of the $8 billion investment in Anthropic includes cloud credits, the collaboration showcases AWS’ ability to support advanced models such as Claude 3.5 on Inferentia. This capability is a key step toward cracking Nvidia’s moat around inference, which has historically been a bottleneck for cloud providers. In short, despite some naysayers, we believe the Anthropic investment is working for both companies.


Broader implications: AI and the future of cloud infrastructure

The gen AI revolution is reshaping every phase of the software development lifecycle. AWS, once perceived as a laggard in AI, now appears well-positioned to compete at large scale. By leveraging Nitro and its silicon expertise, AWS is aligning its infrastructure to support gen AI workloads effectively, even as it plays catch-up in areas such as large-scale training.

However, AWS’ systems expertise gives it a critical edge. While Microsoft struggles with GPU constraints and Google continues to lead in AI functionality, AWS is building a roadmap to independence. By optimizing its silicon and networking capabilities, AWS is on track to challenge Nvidia’s dominance and expand its AI offerings within its own ecosystem.


The road ahead

AWS’ focus on silicon and systems integration signals a long-term strategy to redefine AI infrastructure. Though Nvidia remains the leader in large-scale training, AWS’ accelerated progress in silicon and networking could erode that dominance, at least within the AWS installed base. With its Model Garden and advanced development tools maturing, AWS is proving that even lower-cost infrastructure can drive meaningful innovation.

Garman’s leadership will be pivotal in navigating this transition. AWS is no longer just following Nvidia’s footsteps; it is leveraging its systems expertise to chart a path forward that could reshape the competitive landscape of AI infrastructure.

AWS’ large language models – from Alexa to Olympus to Nova

At re:Invent, the introduction of the Nova family of gen AI models marked a significant milestone for AWS, signaling its intention to build leadership in the foundation model space. Jassy’s return to the stage to unveil these models was both a symbolic and strategic move, emphasizing AWS’ commitment to innovation and reinforcing its cultural and technical legacy.

The Nova family, much like Graviton’s positioning against x86 chips, is designed to provide customers with optionality and competitive price-performance. AWS indicated to theCUBE Research that Nova models will not be exclusively tied to any AWS services, maintaining its ethos of flexibility and customer choice.

This, however, could also indicate an internal bias against Nova, which comes out of Amazon.com’s Alexa team. Our understanding is there were at least two competing teams for LLM leadership and it was decided that the Nova team was further along, so it won the resource battle.

Nonetheless, the roadmap and benchmarks for these models demonstrate Amazon’s ambition to compete head-on with industry leaders such as OpenAI, Google and Anthropic in gen AI, again, providing potentially lower-cost solutions for customers.


Key takeaways: The Nova family breakdown

  1. Diverse model offerings (a usage sketch follows this list):
    • Nova Micro: A text-only model designed for lightweight, cost-effective applications.
    • Nova Lite: A multimodal model supporting text, image and video, priced for accessibility.
    • Nova Pro: A balanced multimodal model offering advanced capabilities, including a planned upgrade to a 5 million-token context window.
    • Nova Premier (2025): A sophisticated model designed for complex tasks like model distillation, launching in Q1 2025.
    • Nova Canvas: Focused on image generation, competing strongly against benchmarks such as DALL-E and Stable Diffusion.
    • Nova Reel: Specializing in video generation, offering advanced features like zoom, 360-degree rotation and watermarking.
  2. Roadmap:
    • Q1 2025: Introduction of speech models.
    • Mid- to late 2025: Launch of multimodal-to-multimodal models capable of processing and outputting text, speech, image and video.
  3. Price-performance leadership:
    • Initial performance indicators and early demos highlight the competitive pricing of Nova models.
    • The planned 5 million-token context window for Nova Pro vastly outpaces competitors such as Gemini Ultra, providing greater flexibility for applications requiring extended context.
  4. Training infrastructure:
    • Pretraining for Nova models was conducted on Nvidia hardware, but fine-tuning and preference optimization are now performed on Trainium, signaling AWS’ gradual shift away from Nvidia.

Why Nova matters for AWS

AWS’ entry into the foundation model game can be considered a strategic necessity. Just as the acquisition of Annapurna Labs laid the groundwork for in-house silicon development (such as Nitro and Graviton), the Nova family provides AWS with critical skills in gen AI, ensuring it remains competitive in a rapidly evolving market. This approach also positions AWS to offer differentiated solutions while gradually reducing its dependence on third parties for LLMs and training infrastructure.


Cost-effectiveness and technical implications

The fine-tuning of Nova models on Trainium represents a deliberate effort by AWS to control costs and build independence in its AI strategy. As noted, though pretraining on Nvidia hardware remains standard due to its compute intensity, fine-tuning and inference — less resource-intensive phases — are being transitioned to AWS’ silicon.

This shift highlights AWS’ focus on cost-effectiveness, contrasting sharply with the high costs associated with training competitors’ models. For instance, Google’s Gemini Ultra reportedly cost nearly $200 million to train, underscoring the financial burden of building large-scale gen AI models.


The competitive landscape: Nova versus industry giants

  1. Flexibility and optionality: AWS favors Anthropic, but its decision to offer other industry models reflects its strategy of enabling customer choice, a core tenet of its overall approach.
  2. Long-term vision: The roadmap for multimodal-to-multimodal models demonstrates AWS’ intention to try to lead in advanced gen AI capabilities.
  3. Strategic comparisons: Nova’s design parallels Graviton’s approach to x86 CPUs, providing a signal that AWS views foundation models as essential to its infrastructure portfolio.

The road ahead: AWS’ gen AI ambitions

AWS appears committed to a long-term strategy of innovation in gen AI. While still leveraging Nvidia for pretraining, its investments in Trainium and other in-house silicon solutions position it to gradually reduce reliance on external vendors. The Nova models are an essential step in building the technical and operational capabilities required to compete in an increasingly AI-driven world.

With a clear roadmap, competitive pricing and a focus on optionality, AWS is signaling that it is not just participating in the gen AI race but is determined to shape its trajectory.

AWS SageMaker evolves – toward a unified data platform

AWS continues to push forward in the data and analytics space, balancing its core ethos of powerful primitives with the industry’s demand for simplification. At re:Invent, Amazon unveiled significant updates to SageMaker, signaling its ambition to redefine the data and analytics landscape. Historically, AWS’ data platform struggled to compete as a cohesive solution. The updates position SageMaker as a robust, unified platform, aimed at simplifying workflows for data and analytics professionals while preserving flexibility for developers.


Key takeaways: a unified SageMaker ecosystem

  1. Unification across personas:
    • The new SageMaker Studio integrates previously disparate tools into a cohesive platform, catering to a broad range of personas, from data engineers to business analysts.
    • S3 Iceberg tables: A critical advancement, enabling managed Iceberg tables with quasi-unified metadata that bridges operational and technical data, incorporating Amazon DataZone within SageMaker Catalog.
    • Bedrock integration: Amazon integrates Bedrock into its new SageMaker platform through the Amazon Bedrock IDE, which is now part of Amazon SageMaker Unified Studio (preview).
    • The promised general availability of read/write capabilities for Iceberg tables benefits customers because it doesn’t exist in competing platforms today (a minimal sketch of working with S3-backed Iceberg tables follows this list). Our understanding is Databricks Inc. is closing in on unifying Delta and Iceberg, with read/write capabilities targeted for early next year. It’s unclear what this means for Snowflake Inc., Polaris and Horizon. We reached out to Snowflake for an on-the-record comment, which we summarize at the end of this section.
  2. Addressing historical shortcomings:
    • AWS has long been seen as the “redheaded stepchild” in data and analytics due to fragmented tools and a lack of cohesion compared to competitors such as Snowflake, Databricks and Microsoft.
    • This new platform makes strides in unifying the storage and metadata layers, critical for transforming data into a strategic, unified asset.
  3. Convergence of data, analytics and AI:
    • The convergence of AI and analytics tools within SageMaker allows for a seamless pipeline, from ingesting and refining data to analytics feeding feature stores and LLMs.
    • SageMaker Lakehouse now bridges the gap between storage and analytics, while integrating personas ensures accessibility across the enterprise.
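To ground the read/write point, here is a minimal sketch, assuming Athena as the SQL engine over S3-backed Iceberg tables; the database, bucket and table names are illustrative placeholders rather than anything AWS announced.

```python
# Minimal sketch, assuming Athena as the SQL engine over S3-backed Iceberg
# tables. Database, bucket and table names are illustrative placeholders.
import boto3

athena = boto3.client("athena", region_name="us-east-1")

def run(query: str) -> str:
    """Submit a query to Athena and return the execution ID (polling omitted)."""
    result = athena.start_query_execution(
        QueryString=query,
        QueryExecutionContext={"Database": "analytics_db"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )
    return result["QueryExecutionId"]

# Create a managed Iceberg table backed by S3
run("""
    CREATE TABLE IF NOT EXISTS orders_iceberg (
        order_id string,
        amount   double,
        order_ts timestamp
    )
    LOCATION 's3://example-lake/orders_iceberg/'
    TBLPROPERTIES ('table_type' = 'ICEBERG')
""")

# The read/write point: Iceberg tables support row-level mutation, not just reads
run("UPDATE orders_iceberg SET amount = amount * 1.1 WHERE order_id = 'o-123'")
```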

Summary of Snowflake’s take on the AWS announcements: Snowflake underscored its growing partnership with AWS, citing 68% year-over-year growth in bookings and recognition with nine AWS re:Invent 2024 awards, including “Global Data and Analytics ISV Partner of the Year.” The collaboration spans integrations like Snowpark for Python with SageMaker and joint work on Apache Iceberg to enhance data interoperability, reduce complexity and mitigate vendor lock-in.

Snowflake’s unified platform approach, supported by its Datavolo acquisition, accelerates AI application development and maintains robust governance through its Horizon Catalog. Highlighting AWS SageMaker’s recent improvements and its shared commitment to Apache Iceberg, Snowflake announced a multiyear partnership with Anthropic to integrate Claude 3.5 models into its Cortex AI platform on AWS and expressed plans to deepen collaboration in analytics and AI innovation.

Our take is that it’s no surprise Snowflake would put forth an optimistic stance. Moreover, Snowflake and AWS (and Databricks) have shown this is not a zero-sum game. At the same time, we continue to believe Snowflake’s primary competitive advantage is its integrated experience and the quality of its core database. As we’ve previously discussed, the value is shifting up the stack toward the governance catalog, which is becoming more open. This in our view will push Snowflake in new directions, which its response acknowledges.

From the Databricks perspective, we believe what AWS announced validates much of what Databricks CEO Ali Ghodsi has been espousing: that open formats will win and customers want to bring any compute to any data source.


The competitive landscape: metadata as a key battleground

As AWS moves more toward a unified data platform, metadata emerges as the key value layer. This harmonization or semantic layer transforms technical metadata into business-relevant information — people, places, things and their relationships.

  • The metadata gap: Today, no conventional data platform has fully addressed the harmonization layer. Leaders in this direction include Salesforce Inc., Celonis SE, RelationalAI Inc. and Palantir Technologies Inc., each within their specific contexts.
  • The opportunity: AWS’ updates to SageMaker suggest it’s targeting this market, evolving from an infrastructure-first mindset to a business process-driven approach, bridging SQL systems with knowledge graphs.

Why metadata and semantic layers matter

  1. From strings to things: Traditional platforms focus on data snapshots (strings), representing past states. The next step is modeling real-world entities (people, places, activities) and their relationships to enable forward-looking insights (a toy sketch follows this list).
  2. Processes and knowledge graphs: Incorporating business processes into this semantic layer shifts the focus from infrastructure to applications, creating a cohesive model of the business that drives insights and automation.
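The toy sketch below, using entirely made-up entities, illustrates the shift: the same fact held as a flat record versus as typed entities and named relationships that can be traversed.

```python
# Toy illustration of "strings to things," using entirely made-up entities:
# the same fact as a flat snapshot row versus as typed entities connected by
# named relationships in a tiny (subject, predicate, object) triple store.
from collections import defaultdict

# "Strings": a snapshot row, meaningful only by column convention
row = {"cust": "ACME", "sku": "GPU-88", "qty": 12, "region": "EMEA"}

# "Things": entities and relationships that model the business directly
triples = [
    ("ACME", "is_a", "Customer"),
    ("GPU-88", "is_a", "Product"),
    ("ACME", "located_in", "EMEA"),
    ("ACME", "ordered", "GPU-88"),
]

graph = defaultdict(list)
for subject, predicate, obj in triples:
    graph[subject].append((predicate, obj))

# A graph answers "what do we know about ACME?" without hard-coded joins
for predicate, obj in graph["ACME"]:
    print(f"ACME --{predicate}--> {obj}")
```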

Challenges and market implications

Skepticism remains about AWS’ ability to execute at this level of integration. Industry experts have previously doubted Amazon’s capability to unify its platform effectively. However, the reimagined SageMaker, with its unified storage and metadata layers, represents a meaningful step forward.

The updates will intensify competition with Snowflake, Databricks and others in the battle for enterprise data dominance. As AWS builds out its platform, market confusion could grow as enterprises weigh the benefits of SageMaker versus other well-established data platforms.


The road ahead: AWS’ data ambitions

The new and improved SageMaker signals AWS’ commitment to delivering a unified platform that simplifies workflows without sacrificing power. While competitors like Snowflake and Databricks have led the charge in unifying tools and storage, AWS is rapidly closing the gap by converging data, analytics and AI capabilities under one umbrella.

AWS’ roadmap points toward greater integration, including knowledge graph capabilities and semantic layers. By bridging its infrastructure expertise with application-centric design, AWS is positioned to redefine how enterprises harness the power of their data.

This evolution will determine whether AWS can shift from a fragmented toolkit provider to a leader in unified data platforms. The stakes are high because we believe agentic architectures ultimately won’t scale without this harmonization capability. Amazon, in our view, is laying the foundation with SageMaker, and AWS is prepared to compete at the highest levels of the data ecosystem.

Q Developer: Ushering in a usability revolution for cloud and AI throughout the software development lifecycle

AWS continues to bridge its core ethos of “primitives for builders” with the need for usability and integration. A standout example of this strategy is Q Developer, a suite aimed at transforming the software development lifecycle by introducing agents that automate noncoding tasks such as documentation, unit testing and code reviews. This represents a major step forward in simplifying cloud and AI development, making these capabilities more accessible to developers and enterprises alike.

At the same time, AWS is pursuing legacy modernization efforts, targeting workloads like mainframes, Windows/.NET, and VMware, with varying degrees of success and challenges. Though these efforts showcase AWS’ ambition, the real headline is the growing ecosystem around gen AI-powered development agents, which AWS is integrating into its services, signaling a new era of productivity and accessibility for developers.


Key takeaways: Q Developer and the gen AI ecosystem

  1. Revolutionizing developer productivity:
    • Q Developer integrates AI agents to handle noncoding activities, significantly easing the burden on developers and enhancing productivity.
    • The suite exposes AWS services in a way that allows third-party gen AI development agents (e.g., Poolside, Cursor, Replit) to fit seamlessly into AWS’ ecosystem.
    • This combination marks a usability revolution, making cloud services more consumable for developers and aligning with broader trends in low-code/no-code tools.
  2. Legacy modernization efforts:
    • AWS continues its push to modernize legacy workloads, particularly:
      • Mainframes: Though AWS claims to compress migration timelines from years to quarters, the success in this area remains to be seen. We are skeptical that this will move the needle.
      • Windows/.NET: AWS has become a major destination for Windows workloads, competing directly with Microsoft in retaining these applications.
      • VMware: Targeting VMware environments appears more straightforward, leveraging AWS’ existing relationships and tooling to migrate customers from VMware Cloud on AWS.
  3. Long-term vision for proactive management:
    • AWS’ telemetry data collection aims to proactively manage services by identifying issues, suggesting fixes and even automating remediation. However, this remains an ambitious, long-term journey due to the complexity of building coherent models from a fragmented set of services.
  4. The killer app for gen AI: software development agents:
    • Gen AI-powered development agents are emerging as a killer app for gen AI. These agents assist developers in navigating AWS services more easily, creating a synergy between automation tools and AWS primitives.
    • The combination of gen AI development agents and AWS services is one of the most significant advancements in cloud development usability since the inception of the cloud.

Developers: from cloud kingmakers to AI kingmakers?

As cloud evolves, so too does the role of developers. While they have long been the “kingmakers” in cloud, the rise of gen AI and low-code tools signals a shift in focus toward data-centric application development. In this context:

  • Microsoft and Salesforce advantage: Tools such as Microsoft’s Power Platform and Salesforce’s Agentforce leverage coherent sources of truth, enabling graphical user interface-based drag-and-drop application development. These platforms offer a strong foundation for low-code developers, enabling them to build data-driven applications efficiently.
  • AWS’ challenge: AWS must balance its traditional developer-centric approach with the growing demand for low-code solutions that empower business users and analysts.

Harmonizing data and metadata: The future of AI development

The interplay between gen AI agents and harmonized metadata is central to this usability revolution. By integrating operational and technical metadata, AWS is creating a framework where agents can navigate seamlessly across datasets and services. This approach aligns with the broader vision of an AI-driven development ecosystem:

  • Unified metadata: The harmonization of operational and technical metadata acts as a central traffic system, guiding agents and ensuring a coherent source of truth.
  • AI-driven insights: By embedding intelligence into development tools, AWS is enabling developers to focus on innovation while offloading mundane tasks to AI agents.

The big takeaway: a usability revolution in cloud development

AWS’ Q Developer and its integration of gen AI agents signal a paradigm shift in how cloud development is approached. By combining the power of primitives with enhanced usability, AWS is paving the way for a more accessible and productive development environment. This effort positions AWS to compete not just with traditional cloud rivals but also with platforms that emphasize low-code, AI-driven workflows.

As the market evolves, the battle will increasingly focus on usability and integration, where AWS, Microsoft, Salesforce and others compete to define the next generation of software development. AWS’ success will hinge on its ability to harmonize its deep infrastructure expertise with the growing demand for developer and enterprise simplicity.

An early foray into Copilot-style information access

Q Business represents a nascent attempt to bring “Copilot”-like functionality to the enterprise, echoing Microsoft 365 Copilot’s promise. This early iteration from AWS seeks to unify collaboration and business data under a single index, enabling LLM queries. The potential benefits are substantial but, in our view, will take more time to materialize:

Key features

  • Unified index: Consolidates collaboration and business data, facilitating seamless LLM queries.
  • Next-gen RPA automations: Empowers the creation of advanced automations with generative UI, overseen by agents.
  • ISV ecosystem: Offers a platform for independent software vendors to build upon.

Challenges ahead

However, Q Business faces significant hurdles:

  • Entity disambiguation: Q Business lacks a layer that converts vector indexes into meaningful entities (people, places, things and activities).
  • Data integration: Requires seamless integration of operational/analytic data (such as Salesforce Data Cloud) and collaboration data (for example, Glean, Microsoft Graph) to unlock its full potential.

As Q Business continues to evolve, addressing these challenges will be crucial to realizing its vision of streamlined information access and automation.

Tracking momentum and penetration in the ML/AI landscape

The chart below provides a classic view into spending momentum and market penetration within the machine learning and AI ecosystem. Using ETR data from 1,775 enterprise information technology decision-makers, the vertical axis reflects Net Score, a measure of spending momentum — essentially the percentage of customers increasing their spending on a platform. The horizontal axis, labeled Overlap, represents penetration within the surveyed accounts. Together, these metrics highlight both platform momentum and adoption across the ML/AI market. The red line at 40% on the vertical axis represents a highly elevated Net Score.
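To make the arithmetic concrete, the sketch below computes a simplified Net Score from hypothetical survey counts: the share of respondents increasing spend minus the share decreasing. Note that ETR’s actual methodology also accounts for new adoptions and replacements; the figures here are invented purely for illustration.

```python
# Simplified illustration of the Net Score arithmetic: the share of customers
# increasing spend minus the share decreasing. ETR's actual methodology also
# weighs new adoptions and replacements; these survey counts are invented.
def net_score(increasing: int, flat: int, decreasing: int) -> float:
    total = increasing + flat + decreasing
    return 100.0 * (increasing - decreasing) / total

# A hypothetical platform with 300 citations: 150 spending more, 120 flat,
# 30 spending less -> Net Score of 40, right at the "elevated" red line
print(net_score(increasing=150, flat=120, decreasing=30))  # 40.0
```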


Key insights: leaders and emerging contenders

  1. Dominance in momentum and penetration:
    • OpenAI and Microsoft continue to dominate the horizontal axis with strong penetration across enterprise accounts, reflecting widespread adoption of their AI solutions.
    • Meta Platforms Inc.’s Llama makes a significant leap, overtaking both Microsoft and Anthropic in spending momentum for the first time. This signals increased activity and interest around Llama, particularly among enterprises experimenting with open foundational models.
  2. Anthropic’s position through Bedrock:
    • Anthropic maintains strong momentum, ranking close to Meta Llama, Microsoft and OpenAI. A significant portion of Anthropic’s enterprise traction is facilitated through AWS Bedrock, highlighting Amazon’s role in extending Anthropic’s reach within its ecosystem.
  3. Google’s steady climb:
    • Google’s momentum in the ML/AI sector has been notable since the October 2022 survey. Google has shown steady increases in Net Score, peaking at 62% in the July survey and pulling back to 50% in the latest polling, which is still highly elevated with a meaningful presence in the data set.
  4. Gen AI’s role in data platform players:
    • Databricks and Snowflake are solidifying their positions within the ML/AI market. Both companies are leveraging their data estate strategies, with Databricks building on its established ML/AI foundation and Snowflake entering the space more recently but showing encouraging momentum.
    • Their integration of gen AI capabilities underscores the importance of AI in driving data platform differentiation.
  5. Legacy AI players:
    • Companies such as Dataiku Inc., DataRobot Inc., C3 AI Inc. and Anaconda Inc. continue to hold their positions but lag behind in terms of both momentum and penetration compared with newer gen AI-driven players.
    • IBM Corp.’s Watson shows a slight uptick in momentum, signaling renewed interest, but IBM Granite and other recent offerings are not yet prominent in this data.
    • Oracle Corp. is a player within its confined ecosystem.

Notable observations and gaps:

  • Amazon Nova: Though AWS Bedrock is driving Anthropic adoption, Amazon’s Nova models are not yet represented in this data.
  • Legacy players in niches: Legacy AI platforms, including Oracle and IBM Watson, show limited but noticeable presence. IBM Watson’s uptick is worth watching for potential resurgence.
  • Emerging dominance of Meta Llama: Meta’s rapid rise in momentum reflects the growing appeal of its Llama models, likely driven by the accessibility and versatility of open foundational models. This momentum is a key development to monitor.

Framing the competitive landscape:

  1. Momentum leaders:
    • OpenAI, Microsoft, Meta Llama and Anthropic dominate the high-momentum, high-spending velocity space, reflecting their leadership in both adoption and innovation.
  2. Data platform leaders:
    • Databricks and Snowflake are leveraging their data-first strategies to carve out a significant space in gen AI and ML/AI, aligning their tools with enterprise AI demand.
    • As enterprises integrate gen AI into their workflows, these data platforms will likely see sustained momentum.
  3. The legacy guard:
    • Players like IBM Watson and Oracle represent legacy AI platforms attempting to maintain relevance, but their influence remains limited in the context of modern gen AI leaders with greater account penetration. That said, both companies command premium pricing, sell on value and create significant pull-through for other products and services that drive revenue.

This snapshot of the ML/AI sector highlights an evolving competitive landscape where foundational model players such as OpenAI, Meta Llama and Anthropic are redefining the space. Data-centric platforms such as Databricks and Snowflake are integrating AI to remain competitive, while legacy players such as IBM and Oracle are fighting to stay relevant.

AWS Nova models may move the needle, but Anthropic will likely continue to be the dominant platform in the ETR data. AWS, in our view, will continue to exert influence through Bedrock and its other partnerships, and its optionality strategy is playing out as planned. As enterprises increasingly view AI as foundational to their strategies, the momentum and penetration of these platforms will shape the next wave of innovation in the ML/AI ecosystem.

The emerging harmonization layer: Toward a virtual assembly line of expertise

The evolving software stack (shown below) is pushing beyond traditional silos of infrastructure, platform services and software-as-a-service applications. We believe the imperative for the next decade is the creation of a harmonization layer — a transformative new abstraction that orchestrates end-to-end business processes, aligning activities such as demand forecasting, internal operations and supplier coordination into a seamless whole.

We describe this vision as a “virtual assembly line of expertise,” which seeks to achieve for managerial and collaborative work what the physical assembly line did for manufacturing a century ago: Optimize throughput, eliminate inefficiencies and enable expertise to flow seamlessly across the organization.


Key takeaways: The role and evolution of the harmonization layer

  1. From silos to orchestration:
    • For 60 years, enterprise IT has been characterized by islands of applications and analytics. Although these have enabled incremental automation, they lack the ability to connect processes into a coherent, predictive framework.
    • The harmonization layer aims to unify these silos, spreading a layer of intelligence across disparate systems and turning fragmented work cells into a predictive assembly line.
  2. Foundations of the new application platform:
    • The harmonization layer is built on top of coherent data foundations like open table formats, or OTFs, such as Iceberg tables, but it requires an equally coherent metadata layer. This layer transforms operational and technical data into a knowledge graph of the business, linking people, places, activities and processes.
  3. Emerging players and approaches:
    • Companies such as Celonis, RelationalAI and EnterpriseWeb LLC are pioneering this area, focusing on integrating and optimizing business processes.
    • Collaborative tools such as Microsoft Graph and startups such as Glean Technologies Inc. are developing knowledge graphs that contextualize operational and collaborative data, bridging the gap between disparate systems. We’ve also seen Kubiya.ai show some early promise in developer environments.
  4. Agentic frameworks and governance:
    • The rise of AI agents necessitates robust governance frameworks to manage agents, enforce access control and ensure aligned outcomes.
    • While Amazon’s Q Business is indexing operational and collaborative data into a vector index, more sophisticated platforms such as Glean’s and Microsoft’s tools are advancing toward creating knowledge graphs that enable deeper contextualization and insights.

Transformative potential and industry challenges

  1. Automation’s untapped opportunity:
    • A significant portion of enterprise processes remains non-automated, representing a 10x multiplier opportunity. SaaS and custom applications have only scratched the surface, leaving vast areas for innovation in end-to-end process alignment.
    • This opportunity underscores the importance of the harmonization layer as the key to unlocking exponential productivity gains.
  2. The decade-long journey:
    • The top layers of the emerging stack, represented by green areas in the diagram, remain undefined and are the focus of intense innovation. Startups, SaaS companies and hyperscalers such as AWS, Google and Microsoft are all vying to define this market.
    • Building a cohesive harmonization layer will require sustained efforts over the next decade, with incremental advancements shaping the journey.
  3. Knowledge graphs as a core component:
    • Knowledge graphs represent a critical building block of the harmonization layer, contextualizing data and processes for actionable insights. For example:
      • Celonis integrates business process intelligence.
      • Glean focuses on collaborative data.
      • Kubiya Inc. applies these concepts to the developer lifecycle.

A virtual assembly line of expertise: the new paradigm

This new harmonization layer represents a seismic shift in enterprise IT. By extending the concepts of databases and enterprise resource planning systems (for example, Oracle and SAP SE) across the entire data and application estate, this layer creates a predictive, interconnected framework that transcends traditional silos.

  • From snapshots to predictions: Moving beyond static snapshots of processes, the harmonization layer enables dynamic predictions and real-time alignment of activities.
  • From strings to activities: The focus shifts from data as isolated strings to interconnected entities (people, places, things) and their activities.

Closing perspective: The long road to transformation

The emergence of the harmonization layer is nothing short of transformative, redefining how enterprises align their processes and leverage expertise. However, this evolution is still in its early stages, with significant challenges in governance, agent integration and metadata harmonization.

As AWS, Microsoft and startups such as Glean, Celonis and Kubiya push the boundaries of this sector, the journey will define the next generation of application platforms. Though the payoff may take a decade or more, the potential to revolutionize enterprise productivity and collaboration is undeniable.

What do you think? Did you catch the action at AWS re:Invent 2024? What caught your attention that we didn’t touch upon? Are there things in this research note you disagree with or to which you can add context?

Let us know.

Photo: Robert Hof/SiliconANGLE
Disclaimer: All statements made regarding companies or securities are strictly beliefs, points of view and opinions held by SiliconANGLE Media, Enterprise Technology Research, other guests on theCUBE and guest writers. Such statements are not recommendations by these individuals to buy, sell or hold any security. The content presented does not constitute investment advice and should not be used as the basis for any investment decision. You and only you are responsible for your investment decisions.
Disclosure: Many of the companies cited in Breaking Analysis are sponsors of theCUBE and/or clients of Wikibon. None of these firms or other companies have any editorial control over or advanced viewing of what’s published in Breaking Analysis.
