NextFin News - On January 28, 2026, Celestica Inc. reported record-breaking fourth-quarter results for 2025, even as the company addressed intensifying scrutiny regarding its pivotal role in the Google AI server supply chain. During the earnings call held in Toronto, CEO Rob Mionis announced that the company had exceeded revenue expectations, reaching $3.65 billion for the quarter, a 44% year-over-year increase. However, the financial triumph was overshadowed by recent industry reports suggesting that Google has begun diversifying its assembly partners for its proprietary Tensor Processing Units (TPUs), potentially shifting a portion of the volume to competitors like Inventec. In response, Mionis unveiled an aggressive 2026 roadmap, raising the company's annual revenue guidance to $17 billion and committing $1 billion in capital expenditures to solidify its position across a broader spectrum of hyperscale data center technologies.
The strategic pivot comes at a critical juncture for Celestica, which has historically relied heavily on a small cohort of major cloud service providers. According to a report by DigiTimes, Google is reportedly expanding its manufacturing base for TPU v5 and v6 accelerators to mitigate supply chain risks, a move that initially triggered a 10% drop in Celestica’s stock price earlier in January. Despite these headwinds, Celestica’s management maintains that the company retains the majority of TPU assembly volumes due to superior production yields and deep-tier integration. The company is now actively balancing its "anchor tenant" relationship with Google against a rapidly expanding portfolio of AI-driven networking solutions, including the newly launched 1.6TbE data center switches designed for massive machine learning workloads.
From an analytical perspective, Celestica’s current trajectory reflects a sophisticated transition from a traditional Electronic Manufacturing Services (EMS) provider to a high-value technology partner. The company’s Connectivity & Cloud Solutions (CCS) segment, which grew significantly in 2025, is no longer just about assembly; it is increasingly defined by proprietary design and engineering services. By focusing on "white-box" switches—hardware that allows hyperscalers to run their own software—Celestica has captured a high-margin niche that is less commoditized than standard server assembly. According to Bank of America, Celestica is positioned to be a primary beneficiary of the AI-driven upgrade cycle, with earnings projected to grow at an annual rate of 47% through 2027.
The decision to increase capital expenditure to $1 billion for 2026 is a clear signal of intent to the market, and it aligns with the Trump administration's emphasis on domestic and allied-nation manufacturing resilience. This investment is expected to be funded entirely through operating cash flow, demonstrating the strength of the company's balance sheet. By expanding its global footprint, particularly in Southeast Asia and North America, Celestica is positioning itself to capture demand from other hyperscalers such as Meta and Microsoft, which are also seeking to reduce their reliance on traditional Tier-1 server OEMs in favor of custom ASIC (Application-Specific Integrated Circuit) solutions.
However, the "Google risk" remains a double-edged sword. While the reported diversification of TPU orders to Inventec represents a competitive threat, it also forces Celestica to accelerate its innovation cycle. The company's shift toward 800G and 1.6T networking platforms provides a defensive moat, as these technologies require higher technical precision and yield rates than standard compute nodes. Furthermore, the potential collaboration between OpenAI and Broadcom presents a multibillion-dollar opportunity for Celestica in 2027, as the company is uniquely equipped to handle the complex systems integration required for next-generation AI clusters.
Looking ahead, the primary challenge for Mionis and his leadership team will be managing the margin profile during this transition. While AI networking hardware typically commands higher margins than server assembly, the R&D costs associated with staying at the forefront of 1.6T technology are substantial. Investors will likely monitor the company's ability to maintain its adjusted EPS guidance of $8.75 for 2026. If Celestica can successfully transition from being a "Google-centric" supplier to a universal architect for the AI hyperscale era, it will likely see a significant re-rating of its valuation multiples; the stock currently trades at a forward P/E of 39.01, well above its historical five-year average.
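As a rough illustration of the valuation arithmetic, the two figures above can be combined under the standard definition of forward P/E (share price divided by expected forward earnings per share). The implied share price below is a back-of-the-envelope derivation from the article's numbers, not a quoted market price:

```python
# Back-of-the-envelope check of the valuation figures cited above,
# assuming the standard definition: forward P/E = price / forward EPS.
forward_pe = 39.01          # forward price-to-earnings multiple
eps_guidance_2026 = 8.75    # 2026 adjusted EPS guidance, in USD

# Share price consistent with both figures
implied_price = forward_pe * eps_guidance_2026

print(f"Implied share price: ${implied_price:.2f}")  # roughly $341.34
```

A re-rating of the multiple would move this implied price even with EPS held constant, which is why the article frames multiple expansion, rather than earnings alone, as the upside case.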
Ultimately, the evolution of Celestica’s role in the AI supply chain serves as a microcosm for the broader industry. As hyperscalers move toward vertically integrated, custom-silicon environments, the value is shifting away from simple assembly toward complex systems engineering and high-speed interconnectivity. Celestica’s proactive guidance raise and massive capex commitment suggest that while the Google relationship is evolving, the company’s broader engagement with the AI infrastructure boom is only just beginning.
Explore more exclusive insights at nextfin.ai.
