Meta is investing billions of dollars in Nvidia’s widely used graphics processing units (GPUs), which are integral to artificial intelligence (AI) research and development. Mark Zuckerberg revealed in an Instagram Reels post that Meta’s AI-focused “future roadmap” requires building out a substantial compute infrastructure. By the end of 2024, that infrastructure is expected to include 350,000 of Nvidia’s H100 graphics cards.
While Zuckerberg did not specify how many GPUs have already been acquired, analysts estimate the cost of Nvidia’s H100 at between $25,000 and $30,000 per unit, which would put Meta’s total close to $9 billion if prices are at the lower end. Zuckerberg added that Meta’s compute infrastructure will include “almost 600k H100 equivalents of compute if you include other GPUs.” In December, companies including Meta, OpenAI, and Microsoft said they intend to use AMD’s new Instinct MI300X AI chips.
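The back-of-the-envelope estimate above can be checked with a quick calculation, using only the figures reported in the article (350,000 H100s at an analyst-estimated $25,000 to $30,000 each):

```python
# Rough estimate of Meta's H100 spend, using the figures cited in the article.
H100_COUNT = 350_000                     # GPUs planned by end of 2024
PRICE_LOW, PRICE_HIGH = 25_000, 30_000   # analyst per-unit estimates (USD)

low = H100_COUNT * PRICE_LOW    # lower-bound total spend
high = H100_COUNT * PRICE_HIGH  # upper-bound total spend
print(f"Estimated spend: ${low / 1e9:.2f}B to ${high / 1e9:.2f}B")
# → Estimated spend: $8.75B to $10.50B
```

At the lower end this lands at $8.75 billion, consistent with the “close to $9 billion” figure above.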
These heavy-duty chips are crucial to Meta’s pursuit of artificial general intelligence (AGI), a long-term vision for the company. Meta’s chief scientist, Yann LeCun, emphasized the significance of GPUs, stating that more of them will be needed for AGI research. The commitment to AI is reflected in Meta’s projected expenses of $94 billion to $99 billion for 2024, with a focus on expanding compute capacity.
Zuckerberg also announced Meta’s plan to “open source responsibly” its yet-to-be-developed “general intelligence,” in line with the company’s approach to its Llama family of large language models. Meta is currently training Llama 3 and aims to deepen collaboration between its Fundamental AI Research (FAIR) team and its GenAI research team. The move is expected to accelerate progress, with FAIR becoming a sister organization to GenAI, the AI product division.