Nearly 7 in 10 IT leaders believe AI-enabled technologies will make teams more efficient, but 52% say their organizations don’t yet have the IT infrastructure needed
AMD (NASDAQ: AMD) released the findings from a new survey of global IT leaders[i] which found that 3 in 4 IT leaders are optimistic about the potential benefits of AI — from increased employee efficiency to automated cybersecurity solutions — and more than 2 in 3 are increasing investments in AI technologies. However, while AI presents clear opportunities for organizations to become more productive, efficient, and secure, IT leaders expressed uncertainty about their AI adoption timelines due to a lack of implementation roadmaps and the overall readiness of their existing hardware and technology stacks.
AMD commissioned the survey of 2,500 IT leaders across the United States, United Kingdom, Germany, France, and Japan to understand how AI technologies are reshaping the workplace, how IT leaders are planning their AI technology and related client hardware roadmaps, and what their biggest challenges are for adoption. Despite some hesitation around security and a perception that training the workforce would be burdensome, it became clear that organizations that have already implemented AI solutions are seeing a positive impact, and organizations that delay risk being left behind. Of the organizations prioritizing AI deployments, 90% report already seeing increased workplace efficiency.
“There is a benefit to being an early AI adopter,” said Matthew Unangst, senior director, commercial client and workstation, AMD. “IT leaders are seeing the benefits of AI-enabled solutions, but their enterprises need to outline a more focused plan for implementation or risk falling behind. Open software ecosystems, with high-performance hardware, are essential, and AMD believes in a multi-faceted approach of leveraging AI IP across our full portfolio of products to the benefit of our partners and customers.”
Future of AI-Powered Computing for the Enterprise
To ensure IT leaders have the best computing platform as they implement AI solutions, AMD is focused on developing cutting-edge solutions with AI capabilities across our product portfolio – from the cloud to the edge to endpoints – while working in close collaboration with open industry-standard software.
This year, AMD launched the first AMD Ryzen™ 7040 Series processors with select models featuring a Ryzen™ AI Engine with support for Windows Studio Effects, along with Ryzen AI developer tools – delivering unique experiences not currently available on other x86 processors[ii] and paving the way for greater AI capabilities directly on laptops.
A dedicated AI engine for mobile PCs is complementary to cloud-based AI and essential to the adoption of AI applications in the workplace. It has the potential to:
- Enable more personalized, secure experiences for employees by running AI models locally.
- Enhance the laptop’s power efficiency, which means better employee productivity and connectivity.
- Increase the overall bandwidth for a business to run AI workloads by enabling the laptop to run next-generation software.
For businesses that also want to run AI workloads in their on-premises data centers, having up-to-date infrastructure is critical. By upgrading a data center to modern AMD EPYC™ processors, customers could reduce the number of racks needed in their existing infrastructure by up to 70%.[iii]
AMD also recently shared details about its AMD Instinct™ MI300X accelerator (192 GB) based on AMD CDNA™ 3 accelerator architecture, which will be the world’s most advanced accelerator for generative AI,[iv] and will provide the compute and memory efficiency needed for large language model training and inference for generative AI workloads.
To complement the hardware, AMD is bringing an open, ready, and established AI software platform to market through the AMD ROCm™ software ecosystem for data center accelerators.
View the full report to learn more.
Supporting Resources
- Learn more about AMD AI Solutions
- Become a fan of AMD on Facebook
- Follow AMD on Twitter
[i] Online survey conducted by Edelman Data & Intelligence and commissioned by AMD, from May 3 to May 25, 2023, among 2,500 IT Decision Makers in the U.S., U.K., Germany, France, and Japan.
[ii] As of May 2023, AMD has the first and only available dedicated AI engine on an x86 Windows processor, where ‘dedicated AI engine’ is defined as an AI engine that has no function other than to process AI inference models and is part of the x86 processor die. For detailed information, please check: https://www.amd.com/en/products/ryzen-ai. PHX-3.
[iii] In a server refresh scenario, a 2P AMD EPYC 32-core 9334 CPU-powered server solution replaces a 5.5-year-old 2P server based on the 16-core Intel Xeon Gold 6143 CPU. To deliver 80,000 units of integer performance, the AMD EPYC 9334 solution takes an estimated 296 fewer servers (111 AMD servers vs. 407 Intel servers), 5,920 fewer cores, and 70% less space (6 AMD racks vs. 20 Intel racks), with a $2.5 million (62%) lower 3-year TCO than the legacy Intel-based server solution. The new AMD solution TCO is comprised of the server cost (CapEx) and power (OpEx); the legacy Intel solution TCO consists of OpEx only (the extended warranty cost and power). Over the 3 years of this analysis, the AMD-powered servers use 65% less power, with an estimated cost of $284,054 vs. $822,797 for the Intel-based servers, using a PUE of 1.7 and an estimated US power cost of $0.128/kWh, saving $538,743 over the 3 years of this analysis. The 2P EPYC solution also provides estimated greenhouse gas emissions avoided of 1,571 MTCO2e (1,731 US tons) over the 3 years of this analysis, or 577 US tons of CO2 annually, equivalent to the annual carbon sequestration of 628 acres of US forest. SP5TCO-055.
[iv] The AMD Instinct™ MI300X accelerator is based on AMD CDNA™ 3 5nm FinFET process technology with 3D chiplet stacking, utilizes high-speed AMD Infinity Fabric technology, and has 192 GB of HBM3 memory capacity (vs. 80 GB for the Nvidia Hopper H100) with 5.218 TB/s of sustained peak memory bandwidth, higher than the highest-bandwidth Nvidia Hopper H100 GPU. MI300-09.