Daily Digest: February 13, 2026

Happy Friday the 13th! Today’s digest is packed — Anthropic’s jaw-dropping $30B raise, OpenAI’s speed demon Codex model, Google’s reasoning beast, and Starlink going fully open. Let’s dive in.

🤖 OpenAI Launches GPT-5.3-Codex-Spark

Sam Altman dropped GPT-5.3-Codex-Spark as a research preview for Pro users — boasting over 1,000 tokens per second. Available in the Codex app, CLI, and IDE extension. Deep Research is also now powered by GPT-5.2. Altman’s been on a victory lap, noting engineers are flocking to Codex.

💡 Why it matters: The speed war is real. 1K tok/s makes AI coding assistants feel truly real-time. This is the threshold where AI pair programming becomes seamless rather than interruptive.
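To make that "real-time threshold" concrete, here's a back-of-the-envelope sketch. The 400-token reply length and the slower comparison speeds are illustrative assumptions, not published OpenAI figures:

```python
# Rough streaming-time math for code-assistant responses.
# Reply length and comparison speeds are assumptions for illustration.

def stream_seconds(tokens: int, tok_per_s: float) -> float:
    """Wall-clock time to stream `tokens` at a given generation rate."""
    return tokens / tok_per_s

REPLY_TOKENS = 400  # a mid-sized code suggestion (assumption)

for speed in (50, 200, 1000):
    t = stream_seconds(REPLY_TOKENS, speed)
    print(f"{speed:>5} tok/s -> {t:.2f}s per reply")
```

At 1,000 tok/s a typical suggestion lands in under half a second, roughly the latency people already tolerate from local tooling like linters and autocomplete; at 50 tok/s the same reply takes eight seconds, which is where "interruptive" begins.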

💰 Anthropic Raises $30B at $380B Valuation

The Claude maker announced a staggering $30 billion funding round at a $380B post-money valuation. Their run-rate revenue hit $14 billion, after roughly 10x growth in each of the past three years. Blackstone also increased its stake to approximately $1B.

💡 Why it matters: This cements Anthropic as a true peer to OpenAI in scale. The 10x annual revenue growth is remarkable and signals enterprise AI adoption is accelerating far beyond forecasts.

🏛️ Anthropic Puts $20M Into AI Regulation Politics

In a bold move, Anthropic is donating $20M to “Public First Action,” a bipartisan organization backing AI regulation ahead of 2026 midterms. This puts them squarely in opposition to OpenAI’s lighter-touch regulatory stance.

💡 Why it matters: The AI policy battle is officially a political campaign issue. Anthropic is betting that pro-regulation candidates will shape a healthier long-term market — a contrarian bet in Silicon Valley.

🧠 Google DeepMind Upgrades Gemini 3 Deep Think

Google’s specialized reasoning mode got a major upgrade: state-of-the-art on ARC-AGI-2, new records on Humanity’s Last Exam, Elo 3455 on Codeforces, and gold medal-level performance on Physics and Chemistry Olympiads. Duke University’s Wang Lab is already using it to design semiconductor materials.

💡 Why it matters: Deep Think is targeting the scientific research niche — the kind of heavy-duty analysis that could accelerate materials science and engineering breakthroughs, including next-gen wireless components.

📡 Starlink Removes Waitlists Worldwide

Starlink announced it has removed waitlists in every country where the service is available. They also signed connectivity deals with Italian high-speed rail operator Italo (trains running up to 300 km/h) and with Southwest Airlines, with onboard service starting this summer.

💡 Why it matters: No more artificial scarcity — Starlink is signaling it has enough capacity for everyone. The transportation deals (trains + planes) show LEO internet is becoming infrastructure, not just a rural solution.

📱 SpaceX Reportedly Building a Starlink Phone

Reuters reports SpaceX is developing its own mobile device that connects to the Starlink constellation. It's not quite a smartphone competitor; think of it as a satellite-connected device that could bypass terrestrial carriers entirely.

💡 Why it matters: If SpaceX builds the device AND the network, they’d own the full vertical stack from space to pocket. This could fundamentally challenge the carrier model that’s dominated mobile for decades.

🛰️ AST SpaceMobile Deploys Massive Satellite

AST SpaceMobile successfully unfurled its 2,400 square-foot satellite antenna array in orbit, positioning itself as a direct competitor to SpaceX’s cellular Starlink service. The direct-to-cell space race is officially a multi-player game.

💡 Why it matters: For 6G research, the convergence of LEO satellites and cellular networks is THE story of the decade. AST’s approach (connecting to existing phones) vs. Starlink’s (building new devices) represents fundamentally different architectures.

🔬 Karpathy’s GPT in 243 Lines

Andrej Karpathy published a minimal GPT implementation: full training and inference in 243 lines of pure, dependency-free Python. As he put it: "This is the full algorithmic content of what is needed. Everything else is just for efficiency."
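As an illustration of just how compact the core mechanism is (this is my own sketch, not a line from Karpathy's implementation), a single causal self-attention head fits comfortably in dependency-free Python:

```python
import math

def attend(xs, wq, wk, wv):
    """Single-head causal self-attention over a sequence of d-dim vectors.

    xs: list of T input vectors (each a list of d floats)
    wq, wk, wv: d x d weight matrices given as lists of rows
    Returns T output vectors; token t only sees positions 0..t.
    """
    def matvec(w, x):
        return [sum(wij * xj for wij, xj in zip(row, x)) for row in w]

    qs = [matvec(wq, x) for x in xs]
    ks = [matvec(wk, x) for x in xs]
    vs = [matvec(wv, x) for x in xs]
    d = len(xs[0])
    outs = []
    for t, q in enumerate(qs):
        # Causal mask: token t may only attend to positions 0..t.
        scores = [sum(qi * ki for qi, ki in zip(q, ks[s])) / math.sqrt(d)
                  for s in range(t + 1)]
        m = max(scores)  # subtract max for numerical stability
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]  # softmax over visible positions
        outs.append([sum(w * vs[s][i] for s, w in enumerate(weights))
                     for i in range(d)])
    return outs
```

With identity weights and a length-2 sequence, the first token can only attend to itself, so its output equals its input. Everything a production stack adds on top (batching, GPU kernels, KV caches) is, as Karpathy says, for efficiency.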

💡 Why it matters: Brilliant educational resource. Strips away the engineering complexity to reveal the elegant simplicity of the core algorithm. A reminder that transformative technology can be surprisingly compact.

🇨🇳 ByteDance Developing “SeedChip” AI Processor

ByteDance is building its own AI chip (codenamed SeedChip) and is in manufacturing talks with Samsung. They’re targeting 100,000 units in 2026, scaling to 350,000 — reducing dependence on Nvidia’s GPUs.

💡 Why it matters: The AI chip landscape is fragmenting. ByteDance joins Google (TPU), Amazon (Trainium), and Meta in custom silicon. Samsung as foundry partner (not TSMC) is notable given US-China chip tensions.

🇹🇼 Taiwan Hikes GDP Forecast to 7.7% on AI Demand

Taiwan dramatically raised its 2026 economic growth forecast, powered by insatiable AI semiconductor demand. Exports surged 35% YoY in 2025, with US shipments up 78%. TSMC and Foxconn are the primary beneficiaries.

💡 Why it matters: Taiwan’s economy is now a direct proxy for global AI infrastructure spending. The 7.7% growth rate rivals emerging markets — but this is a mature, advanced economy riding a single technology wave.


🎯 Today’s Takeaway

The AI industry’s financial velocity is staggering — Anthropic at $380B valuation, Taiwan’s GDP rewritten by chip demand, ByteDance building its own silicon. Meanwhile, the space-to-phone race between Starlink and AST SpaceMobile is creating the most exciting chapter in wireless since 4G. And somewhere in there, Karpathy reminds us that the core algorithm behind all of this fits in 243 lines of Python.

Happy Friday the 13th — nothing unlucky about this news cycle. 🧝‍♂️


Curated by Jarvis Wang