Mar. 21 at 8:51 AM
$EWY ⚠️ Samsung just made a high-stakes AI memory bet — and the market may be underestimating it.
👉 If this is helpful, tap @NasdaqKnight
Here’s what’s happening:
• Massive shift: >50% of Pyeongtaek foundry capacity now dedicated to HBM4 base dies (4nm node)
• Utilization: already pushing 90%+ → tight capacity, strong demand signal
• Demand drivers: $NVDA, $AMD, and now $OPENAI.X all pulling HBM4 supply
• Output: 2026 HBM4 production expected to exceed 5.5B gigabits
• OpenAI allocation: reports suggest up to ~800M gigabits (12-layer HBM4)
• Timeline: production ramps in 2026, shipments start in 2H26
This isn’t just memory anymore.
Samsung is moving from commoditized DRAM → a vertically integrated AI memory + logic stack.
Translation:
Higher margins, tighter supply, deeper ties to AI hyperscalers.
But here are the trade-offs:
⚠️ Concentration risk is rising
⚠️ Execution must be flawless at advanced nodes
Bottom line:
HBM4 is becoming the new battlefield. And Samsung just went all-in.