🌟 Editor's Note: Recapping the AI landscape from 11/25/25 - 12/08/25.
🎇✅ Welcoming Thoughts
Welcome to the 20th edition of NoahonAI.
What’s included: company moves, a weekly winner, AI industry impacts, practical use cases, and more.
Working on setting up some interviews for the start of the new year.
Highlighted another Dwarkesh interview this week; he’s great.
Google’s 1st step towards space data centers is coming in 2027.
Anyone remember Google Glass? It was before its time.
Full vertical control via acquisitions looks like the NVIDIA5 playbook right now.
Meta might be trying to make a move for the AI wearables market.
Looks like all of Jensen Huang’s lobbying is paying off.
Still haven’t seen anything new from OpenAI in wearables after they bought Jony Ive’s company in May.
GPT randomly forgot how to source data, so I moved to Perplexity for some newsletter prep.
The 2nd Impact Industry is a direct glance into the future. Abundance.
Let’s get started—plenty to cover this week.
👑 This Week’s Winner: Anthropic // Claude
The Race is back and Anthropic didn’t miss a beat. Acquisitions, partnerships, great numbers, and new integrations headlined Anthropic’s last two weeks as they narrowly edged out NVIDIA for the win.
Acquires Bun for Claude Code: Anthropic bought the Bun JavaScript runtime to speed up Claude Code, improve stability, and control core agentic-coding infrastructure while keeping Bun open-source. This was a huge acquisition. Anthropic now owns the best code-writing AI tool (Claude Code) and the infrastructure that runs it (Bun).
$200M Snowflake Partnership: Snowflake and Anthropic expanded their deal to bring Claude models into Snowflake Cortex AI for 12,600+ customers, supporting enterprise-grade agentic AI on governed data. Another strong step for enterprise market share.
Preparing for a 2026 IPO: Reports say Anthropic has begun early IPO preparations with Wilson Sonsini, exploring a 2026 listing while also looking to raise a private round valuing the company above $300B. Here we go! Anthropic is in a much better place financially than OpenAI heading into this.
The headline moves were justified by the news that Claude Code hit $1B in ARR just six months after the product’s release. Additionally, Claude Code launched a big-time Slack integration allowing teams to hand off coding tasks directly inside the workplace messenger. As if all of this wasn’t enough, Dartmouth College also announced a partnership with Anthropic, making it the 1st Ivy League school to implement AI at scale via Claude for Education. Outstanding two-week run.

From Top to Bottom: OpenAI, Google Gemini, xAI, Meta AI, Anthropic, NVIDIA.
⬇️ The Rest of the Field
Who’s moving, who’s stalling, and who’s climbing: Ordered by production this week.
⚪️ NVIDIA
Investment in Synopsys: NVIDIA bought a ~$2B stake in Synopsys to fuse GPU acceleration with Synopsys’ chip-design tools, aiming to speed semiconductor and system engineering from weeks to hours. Makes sense. Vertical acquisitions trend appearing across the NVIDIA5.
U.S. Approves H200 Exports to China: The U.S. will allow controlled exports of NVIDIA’s H200 chips to approved Chinese buyers, after Congress rejected the GAIN AI Act. Huge win for NVIDIA here.
NVIDIA for Self-Driving: NVIDIA unveiled Alpamayo-R1, a reasoning vision-language-action (VLA) model that uses chain-of-thought to handle complex driving scenarios and advance toward Level 4 autonomy. Open-source model built to advance visual understanding for self-driving cars.
🔵 Meta // Meta AI
Metaverse → AI Wearables: Meta is reallocating up to ~30% of Reality Labs spending toward AI glasses and wearables, signaling a strategic pivot from metaverse bets to practical AI hardware. Nobody happier than shareholders that Meta took a step back from Metaverse spending. Seems they’re making strong moves going after wearables. Definitely something to follow.
The Specific Wearable: Meta bought Limitless, the pendant-style “memory assistant,” integrating its team and tech into Meta’s expanding AI-wearables strategy while ending new hardware sales. Cool diversification from glasses. I think a pendant/ring is likely the future AI wearable winner.
News-Licensing Deals: Meta struck agreements with CNN, Fox News, USA Today and more, enabling Meta AI to pull their content for real-time responses. Owning content pipelines where other LLMs are essentially scraping data. Cool move.
🟢 OpenAI // ChatGPT
GPT-5.2 Coming Soon: May release as early as tomorrow. This comes after a “code red” acceleration driven by competitive pressure from Gemini 3. OpenAI may also be rolling back its focus on ads and other quick-win projects to prioritize the model itself. Absolutely the right move. Hoping to see strong base-performance improvements over the next few models.
Accenture Partnership: OpenAI and Accenture launched a large-scale collaboration giving tens of thousands of Accenture employees ChatGPT Enterprise and co-building a flagship program to deploy agentic AI across industries. Smart. Fighting for market share in the enterprise space.
Grocery GPT: Instacart rolled out a full shopping app inside ChatGPT with Instant Checkout features, enabling users to browse retailers and complete grocery orders entirely within chat. Long-term LLM use case here.
🟣 Google // Gemini
Gemini Tops 2025 Search: Google’s “Year in Search 2025” report shows “Gemini” was the #1 global trending search term, the first time an AI model surpassed major news, sports, and pop-culture topics.
Smart Glasses Incoming: Google and Warby Parker announced AI-enabled smart glasses for 2026, featuring built-in speakers, cameras, and hands-free multimodal interaction with Gemini. Not sure glasses are the winning AI wearable, although it’s a popular idea amongst the NVIDIA5.
Gemini 3 Companion Releases: Google launched Gemini 3 Deep Think for advanced reasoning (AI Ultra tier only) and is working on a lightweight version of the Nano Banana image-generator. Deep Think may give Claude Opus a run for its money as the smartest model.
🔴 xAI // Grok
Grok in Tesla Update: Tesla’s 2025 Holiday Update lets drivers use Grok’s Assistant mode for voice navigation, adding and editing multi-stop routes directly from the car’s infotainment system. Great use case.
Grok 4.20 Incoming: Elon says Grok 4.20 will arrive in 3–4 weeks; it appears to be doing very well in early benchmarking. Bearish on benchmarking as of late given lack of translation to real-world usage/reports.
Grok Privacy Failures: In an investigative report by Futurism, Grok returned home addresses for private individuals, and generated step-by-step stalking information. Concerning.
📲 Impact Industries 🤖
Marketing // AI-Powered Holiday Shopping
Black Friday online spending hit a record $11.8 billion, with AI emerging as a major driver. Adobe Analytics reported an 805% surge in AI-driven traffic to U.S. retail sites year-over-year. Shoppers who landed on a retail site from an AI assistant were 38% more likely to convert than everyone else. Salesforce estimates AI influenced 17–20% of all Cyber Week orders globally. Amazon saw shopping sessions with its AI tool Rufus jump 100% on Black Friday versus the prior month, while non-Rufus sessions rose only 20%. AI and shopping will go hand-in-hand one day.
Robotics // Speech-to-Reality Manufacturing
MIT researchers built an AI system that turns voice commands into physical objects in minutes. Say “I want a simple stool,” and a robotic arm assembles it on the spot. The workflow chains together speech recognition, 3D generative AI, and automated path planning. No CAD expertise required. Modular components snap together magnetically and can be reconfigured, eliminating waste. The team has built furniture, shelves, and decorative items, with plans to add gesture control and scale toward building-sized structures. So cool - The future on display.
💻 Interview Highlight: Ilya Sutskever with Dwarkesh Patel
Interview Outline: Ilya Sutskever explains why today's powerful AI systems often seem brilliant but still make strange, basic mistakes in the real world. He argues that the AI industry is shifting from simply scaling up to a new "age of research," focused on solving fundamental flaws like unreliable generalization, and discusses his new company SSI's mission.
About the Interviewee: Ilya Sutskever is the co-founder and former Chief Scientist of OpenAI, known for his foundational work on deep neural networks, including AlexNet and sequence-to-sequence learning. He is currently the co-founder of Safe Superintelligence Inc. (SSI), a company dedicated to solving the complex challenge of safety while building cutting-edge AGI.
Interesting Quote: “We're moving out of the phase of just making things bigger (scaling) and are forced back into a phase of deep research to invent new ways for AI to learn, even though we now have massive computing power to work with.”
My Thoughts: Lots to unpack here, and it’s no surprise, given the history of Ilya’s OpenAI exit, that his company is optimizing for a safety-first approach. Ilya believes that we’re essentially nearing a barrier on the current growth trajectory toward AGI, a theory that draws mixed opinions from AI crowds. I tend to agree with him in principle. I think it can be done, and will be done within a 5–10 year window, but I’m not sold that we have the capacity to get to AGI/ASI with our current knowledge base. That’s not to say there won’t be tremendous growth across industries and countless new AI use cases, because I believe there will. But, in my opinion, more research and learning needs to be done to “perfect” an AI model to the point where it is just as smart and capable as a human in most respects.
Condensed Interview Highlight — Ilya Sutskever
Q: Why does AI perform well on tests but still make stupid mistakes in the real world?
Ilya: AI's mistakes show poor generalization. Current training methods may make models too narrow and focused on passing tests rather than being broadly intelligent like a human.
Q: What kind of superintelligence is your new company (SSI) aiming to build?
Ilya: We are building a mind that can learn to do every job with extreme efficiency, not a mind that is instantly finished. The model will learn continually and merge knowledge across all its deployed instances.
Q: What is the safest goal for a future powerful AI to pursue?
Ilya: Since the main issue is the power of the AI, the goal should be to align it to care about all sentient life. This may be a more robust and achievable alignment goal than just focusing on human interests.
Q: What is one way to ensure humans remain relevant in a superintelligent world?
Ilya: (Prefaced as a solution he doesn’t like): A possible solution is if people become part-AI with Neuralink-like technology. This would allow humans to receive the AI's complex understanding directly, ensuring they remain full participants. Pluribus coded, not happening.
Q: What is the biggest mystery about how human intelligence evolved?
Ilya: It’s a mystery how evolution managed to hard-code high-level needs like social belonging into our brains so reliably. This suggests there is an effective principle for building deep desires that AI research hasn't found yet.
👨💻 Practical Use Case: Locally Hosted LLM
Difficulty: Advanced
Running your own locally hosted LLM is starting to catch on in Silicon Valley, especially for teams working with sensitive data. Instead of sending information to cloud models like GPT or Claude, companies are spinning up private AI systems on their own machines. This keeps data fully offline, removes API costs, and gives you complete control over how the model behaves.
A local LLM setup usually comes together in three layers:
1. The Model (The Brain)
This is the open-source LLM you download. It handles the reasoning, analysis, and conversation. Because it runs locally, none of your data ever leaves your machine.
2. The Runtime (The Engine)
This is the software that loads and optimizes the model for your hardware. It turns a file on your computer into a usable “local API” you can send prompts to. Some runtimes are optimized for laptops, others for GPUs.
3. The App Layer (The Interface)
This is the part you actually use day-to-day. It provides a chat window, file uploads, RAG search, memory, agents, and integrations. At this point, it feels like your own private version of ChatGPT.
To make this concrete, here’s what a typical workflow looks like:
Step 1: Download an open-source model (the brain).
Step 2: Run it locally through a runtime that exposes an API (the engine).
Step 3: Connect that API to an AI app that gives you a clean UI, search, and tools (the interface).
Most of these models require stronger hardware, ideally a GPU, though some of the smaller models will run on a laptop. Local LLMs won’t replace the cloud giants anytime soon, but they are a powerful option for anyone who wants strict privacy, deeper customization, and zero risk of data leakage.
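To make the engine-and-interface handoff concrete, here’s a minimal Python sketch of step 3: sending a prompt to a local runtime over an OpenAI-compatible chat API. The endpoint URL and model name below are assumptions (Ollama-style defaults); swap in whatever your runtime actually exposes.

```python
import json
import urllib.request

# Assumed endpoint: many local runtimes (Ollama, LM Studio, llama.cpp's
# server) expose an OpenAI-compatible chat API on localhost.
LOCAL_API = "http://localhost:11434/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "qwen2.5:7b") -> dict:
    """Build an OpenAI-style chat payload for a local runtime."""
    return {
        "model": model,  # whichever open-source model you downloaded
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def ask_local_llm(prompt: str, model: str = "qwen2.5:7b") -> str:
    """POST the prompt to the local runtime; nothing leaves your machine."""
    data = json.dumps(build_chat_request(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        LOCAL_API, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

An app layer like Open WebUI is doing essentially this under the hood, just with a chat window, file uploads, and RAG layered on top.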
Chinese models like Qwen are becoming very popular for this use case because they are largely open source and have no qualms about copying the NVIDIA5 companies’ code, training methods, and architectures.
Check out the video below to learn more ⬇️
🤖 Startup Spotlight

Suno
Suno - AI powered music creation for everyone
The Problem: Making music normally takes time, skill, instruments, and often expensive studio setups; these are barriers that block many aspiring creators, marketers, or producers who just need quick, royalty‑free tracks or demos.
The Solution: Suno is a tool for generating original music from plain text prompts. It can produce full songs with vocals or instrumentals in a range of styles, from 90s-style indie rock to lo-fi beats or cinematic backgrounds. You describe the mood or genre, and the system returns a complete track you can download or build on further.
The Backstory: Founded by ex‑Kensho AI engineers, Suno debuted its web app in December 2023. Since then it’s grown rapidly and recently raised a Series C that valued the company at $2.45 billion. Its mission is to make music creation accessible to anyone. Over time, Suno has evolved from one‑click song generation to a full generative audio workstation that lets users customize their tracks similar to traditional production workflows.
My Thoughts: I look at these vibe-anything technologies as a net good. Similar to AI coding tools or design tools, the technology allows people with no musical talent to jump right in, and lets those with a background in music or producing leverage the tool to make better music. The only negative I see here is the risk of inadvertently copying sound or music from real artists because of the data the system was trained on.
“It’s not likely you’ll lose a job to AI. You’re going to lose the job to somebody who uses AI”
- Jensen Huang | NVIDIA CEO
I wonder if OpenAI or Anthropic will IPO 1st. Till Next Time,
Noah on AI

