
Most important AI updates of the week, 21st September to 28th September 2025 [Livestreams]

Ecosystem Wars, Out-of-Touch Tech, and helping subscribers get AI Girlfriends

It takes time to create work that’s clear, independent, and genuinely useful. If you’ve found value in this newsletter, consider becoming a paid subscriber. It helps me dive deeper into research, reach more people, stay free from ads/hidden agendas, and supports my crippling chocolate milk addiction. We run on a “pay what you can” model—so if you believe in the mission, there’s likely a plan that fits (over here).

Every subscription helps me stay independent, avoid clickbait, and focus on depth over noise, and I deeply appreciate everyone who chooses to support our cult.

Help me buy chocolate milk

PS – Supporting this work doesn’t have to come out of your pocket. If you read this as part of your professional development, you can use this email template to request reimbursement for your subscription.

Every month, the Chocolate Milk Cult reaches over a million Builders, Investors, Policy Makers, Leaders, and more. If you’d like to meet other members of our community, please fill out this contact form here (I will never sell your data nor will I make intros w/o your explicit permission)- https://forms.gle/Pi1pGLuS1FmzXoLr6


Thanks to everyone for showing up to the livestream. Mark your calendars for 8 PM EST, Sundays, to make sure you can come in live and ask questions.

Bring your moms and grandmoms into my cult.

Share

Before you begin, here is your obligatory reminder to adopt my foster monkey Floop. He’s affectionate, relaxed and can adjust to other pets, kids, or people. No real reason not to adopt him. So if you’re around NYC, and want a very low maintenance but affectionate cat— then consider adopting him here.

Community Spotlight: Me in SF/Vegas

I’m travelling through on the following dates—

  1. San Francisco, 4th-14th October.

  2. Vegas, 27th-30th October (I’ll be speaking at the Put Data First conference).

If you’re there or come to NYC in the interim, shoot me a text and let’s meet.

If you’re doing interesting work and would like to be featured in the spotlight section, just drop your introduction in the comments/by reaching out to me. There are no rules- you could talk about a paper you’ve written, an interesting project you’ve worked on, some personal challenge you’re working on, ask me to promote your company/product, or anything else you consider important. The goal is to get to know you better, and possibly connect you with interesting people in our chocolate milk cult. No costs/obligations are attached.

Additional Recommendations (not in Livestream)

  1. “From a mobile clinic in Africa to hospitals in the West - How permission, power and care collide”: A wonderfully haunting and thought-provoking piece, as

    ‘s work tends to be. Her anecdote about the woman who was scared to receive free treatment because her husband wouldn’t have approved of the examination process really makes you take a step back. This is true for tech as well: how do we make sure that tech isn’t just available, but accessible in a way that suits people, especially when the builders of tech are so disconnected from so many of their users?

  2. “FOD#119: Quantum Whispers in the GPU Roar”: An excellent roundup of major developments from Ksenia. Very interesting note on Quantum Computing unlocking inference scaling.

  3. “Hundreds of Google AI Workers Were Fired Amid Fight Over Working Conditions”: Conflicts around labor and AI continue to be a flashpoint. This is a space to monitor very closely.

  4. “Arc’teryx Is Cooked in China”: A very interesting piece on generational differences by

    . It’s interesting how the West is starting to love larger-than-life CEOs while China is shifting back toward softer ones.

  5. Why Women Investors Outperform Men and What Wall Street Still Doesn’t See.

  6. This IG post on the cultural differences between American and Chinese AI Labs was very interesting, and mirrors a lot of what we’ve been talking about (more accurately, what we ripped off from

    ).

Companion Guide to the Livestream

This guide expands the core ideas and structures them for deeper reflection. Watch the full stream for tone, nuance, and side-commentary.

1. NVIDIA’s $100B Bet on OpenAI—Ponzi Scheme or Not?

Everyone online is screaming “Ponzi.” And on the surface, the circular flow looks exactly like one: OpenAI borrows against the future, pays Oracle for compute, Oracle pays NVIDIA for GPUs, and then NVIDIA turns around and shovels $100B into OpenAI. A neat little ouroboros of money.

The AI Bubble grows Ponzi Scheme Symptoms

But here’s the real play: it’s not about cash today—it’s about ecosystem lock-in to prevent competitors from commoditizing NVIDIA’s value chain.

  • Competitors are circling. Cerebras, running GPT-OSS, demonstrated that you don’t actually need NVIDIA’s GPUs to run AI models, exposing a clear gap (edge compute/lighter-weight models) where NVIDIA could be beaten. That’s a chink in the armor that can blow open if not handled properly, and the AI inference startups are circling. Meanwhile, ARM is building alliances to challenge CUDA itself, the quiet chokehold NVIDIA has had on software optimization for a decade. For the first time in years, NVIDIA felt pressure.

  • The empire response. If OpenAI is joined to NVIDIA at the hip—co-developing, optimizing every model for their silicon—then every other lab gets trapped too (OpenAI builds cutting-edge models around this hardware → the research community builds around OpenAI → leaving the ecosystem means missing out on the benefits of OSS). Once OpenAI builds on CUDA, Anthropic, Gemini, or anyone else risks falling behind unless they do the same. That’s why NVIDIA is spending margins now: to prevent defection later, especially from the cloud hyperscalers, which are all building their own chips.

  • Ponzi or empire? A Ponzi is pure money-circulation with no value creation. This isn’t that. It’s empire-building: NVIDIA is laying down toll roads for the future. Even if they don’t directly profit from every model, they’ll own the highways everything runs on. That’s why valuations spike; markets aren’t pricing today’s revenue, they’re pricing tomorrow’s lock-in. Similarly, OpenAI and Oracle are betting that being early players in this ecosystem will elevate their ability to capture value down the line.

The risk isn’t that NVIDIA collapses tomorrow—it’s that the strategy works, and the rest of the ecosystem wakes up one day to realize they’re tenants in Jensen’s house.

Read more—

  1. This article by

    on why CUDA makes bank for NVIDIA.

  2. This deep dive on why people default to scaling even when it isn’t the most efficient.

2. The Flotilla, and Tech Funding the Military-Industrial Complex

Amid all the market talk, one story cuts darker: a humanitarian flotilla to Gaza was hit by drones. No casualties, but the drones jammed signals, damaged the ship, and exposed the obvious truth—autonomous systems are already spilling into conflict zones in ways that have nothing to do with “security” and everything to do with disruption.

This is what the military–tech alignment actually produces: not precision, not deterrence, but fragile automated systems wielded against civilians. For all the high-minded language about defense, what we saw was food shipments jammed and bombed.

It’s a reminder: every time Silicon Valley cozies up to the defense industry, the rhetoric will be “safety.” The reality is profit. Freedom isn’t profitable; control is.

And this is why adversarial research matters. Unraveling signal jammers, perturbing drone guidance—these aren’t theoretical exercises. They’re the only real checks on a runaway market where “autonomous weapons” means “cheap ways to harass the powerless.”

Read more—

  1. How to Automatically Jailbreak OpenAI’s o1

  2. How to break Signal Jammers and Internet Censorship bots

  3. Algorithmic Arms Race: How Tech is Fueling Weapons Systems and Mass Surveillance

  4. How Amazon Uses AI to Crush Labor Movements

3. The Sutton and Dwarkesh Podcast + Why AGI Is the Wrong Discussion

I was asked to comment on the conversation between Dwarkesh Patel and Richard Sutton. Personally, I couldn’t really sit through it because the conversation felt useless to me. The problem isn’t that either of them is stupid or “wrong”; it’s that their conversation had very little insight into the realities of building AI systems (and how they will evolve in the future).

Sutton’s flaw: he thinks like a pure scientist. What architecture is theoretically best? What might intelligence mean in an abstract sense? Fine questions if you live in academia. Useless if you’re negotiating hardware budgets and timelines.

Dwarkesh’s flaw: he doesn’t push hard enough. His interviews skim high-level abstractions and stop there. Maybe that’s the cost of booking big names—you avoid confrontation so they’ll come back. But the result is predictable: pleasant conversations that generate little new insight.

And then there’s AGI—the word that keeps the entire discourse floating. Its brilliance isn’t as a technical definition (there isn’t one). It’s as a CEO tool. Leave it vague, and you can always move the goalposts. “We’ll get AGI in five years” means nothing when you never defined it in the first place. Vagueness is deliberate: it keeps investors hooked, critics trapped in strawman debates, and timelines conveniently unfalsifiable.

If you’re building or investing today, you don’t ask, “what is intelligence?” You ask:

  • What feature can ship in 7 months?

  • What compounding advantage will exist in 14 months?

  • What infrastructure position will be unassailable in 5 years?

AGI isn’t a strategy—it’s a story. The real game is played in ROI, lock-in, and timelines. My recommendation is to focus on that.

4. The Iqidis ROI Rule: When to Build vs. When to Improve

Every startup wrestles with the same tension: do we polish what we’ve already shipped, or do we chase the next shiny feature? Most teams frame it as a philosophical dilemma—product quality versus innovation speed. At Iqidis (best Legal AI of all time), we stripped it down to a simple rule: ROI decides everything.

Here’s the framework:

  • If churn risk is real, you improve. Example: if a core feature is broken enough that customers might leave, you fix it—even if only one or two accounts are affected. Losing them costs more than the dev time to patch the weakness.

  • If major upside is on the line, you build. Example: a single enterprise deal that depends on OCR integration? Drop everything. Ship it, even if it means shelving your formatting upgrades for three months. The potential revenue dwarfs the cost.

  • If the two conflict with each other (limited resources, unlimited wants), then it comes down to speed (how long will each take to complete?) and the cost of putting one off (can we convince the user to stay even if the improvement/feature isn’t there for a few weeks?). This is where good customer relationships, trust, and sound ROI judgment become key. With very large fixes, I also recommend getting a quick win first, so your customers can see progress being made.

Notice what’s missing: any discussion of aesthetics, “neatness,” or “completeness.” Those are luxuries. The calculation is ruthless:

  • What does it cost in time and dollars?

  • What’s the expected return?

  • Will customers leave if we don’t?

  • Will new customers arrive if we do?

If the answer to either churn-prevention or upside-capture is yes, it gets priority. If not, it waits. This is why Iqidis deliberately avoided “feature beauty contests.” You don’t pit “build X” against “fix Y.” You put both into a balance sheet of cost, time, churn risk, and revenue impact. One always weighs heavier.
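To make the framework concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the backlog items, the dollar figures, and the single priority score are invented for illustration, not Iqidis’s actual tooling), but it captures the rule: price every candidate in churn risk, upside, and dev time, and let the heavier side win.

```python
from dataclasses import dataclass

# Hypothetical illustration of the ROI rule above. The items and numbers
# are made up; the point is pricing "build X" vs. "fix Y" on one scale.

@dataclass
class Candidate:
    name: str
    dev_weeks: float        # estimated time to ship
    churn_risk_usd: float   # revenue at risk if we DON'T do it
    upside_usd: float       # new revenue unlocked if we DO it

    def priority(self) -> float:
        # Expected dollars protected or gained per week of dev time.
        return (self.churn_risk_usd + self.upside_usd) / self.dev_weeks

backlog = [
    Candidate("Fix broken export", dev_weeks=2, churn_risk_usd=60_000, upside_usd=0),
    Candidate("OCR integration", dev_weeks=12, churn_risk_usd=0, upside_usd=500_000),
    Candidate("Formatting polish", dev_weeks=4, churn_risk_usd=0, upside_usd=10_000),
]

for c in sorted(backlog, key=Candidate.priority, reverse=True):
    print(f"{c.name:<20} ${c.priority():>9,.0f} per dev-week")
```

With these made-up numbers, the OCR feature ($41,667 per dev-week) narrowly beats the churn fix ($30,000 per dev-week), while the polish work ($2,500 per dev-week) waits. That is exactly the kind of balance-sheet comparison described above, with no beauty contest in sight.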

Read more— nothing directly, but

has some pretty interesting insights on tech management that might help you start thinking along those lines.

5. India’s AI Gamble: Not Missed, but Misframed

Did India “miss the AI train?” The danger isn’t being late—it’s boarding the train facing the wrong direction.

The current trajectory: Indian AI is overwhelmingly service-oriented. Most of the ecosystem exists to fulfill Western demand—building tools for U.S. and European firms, tailoring models to foreign legal or corporate contexts, chasing outsourced contracts. That earns revenue, but it cements India as a subcontractor rather than a sovereign builder.

The contrast with China: China didn’t outpace the West in AI by chance. They spent decades laying foundation—middle class expansion, technical education pipelines, national infrastructure, and, most importantly, talent retention. China actively pulled back diaspora researchers; India, meanwhile, celebrates when its best engineers leave for Google or Meta. That divergence compounds.

What India actually needs:

  • Open source ecosystems: not just consumption of Western code, but vibrant communities that create standards.

  • Social safety nets: so risk-taking is possible without catastrophic downside. Without them, ambitious founders play it safe or leave.

  • Technical skills at scale: not just elite IIT pipelines but broad-based competency across engineering tiers.

If India doesn’t build its own core ecosystems, it will remain the world’s AI call center—vital, yes, but never sovereign.

Read more—

  1. What Allowed Bell Labs to Invent the Future

  2. How to Encourage Startups and Innovation in 2025

6. Tech’s Tone-Deaf Drift

This week’s smaller stories show how tech is burning goodwill. The industry keeps proving it can’t read the room.

Anthropic’s settlement with artists: finally, a precedent that data isn’t just free-for-all loot. Artists will be paid for work that trained models. That shouldn’t be radical, but in this industry it is. The bigger implication: if courts and contracts start enforcing stakeholder inclusion, the next generation of AI won’t just be co-developed with chipmakers—it’ll have to be co-developed with people. That would be a tectonic shift in incentives.

Friend AI backlash: a gadget nobody asked for, sold as your “friend who won’t ghost you.” Subway ads in New York got vandalized because people instinctively recoiled. The tech fantasy—everyone will embrace intimacy with bots—collided with the real-world reaction: creepiness and contempt.

Meta’s “Vibes”: a factory of “cotton-candy engagement”. AI-generated sludge optimized for time-on-platform. It’s not even pretending to build something useful. This is what happens when a trillion-dollar company decides attention itself is the product, no matter how empty.

ChatGPT’s “Pulse”: less egregious, but part of a pattern of wanting engagement over all else. Pitched as personalized news and search, but the underlying goal is obvious: build an ad network and control the discovery layer.

The pattern is simple: engagement over substance, narratives over value. And it’s not just bad optics. It’s dangerous. Public trust in AI is brittle, and the industry is squandering it for cheap dopamine hits.

Personally, I find it hard to take seriously an industry that is so hollow and spineless that it builds and celebrates easy slop over risk and adventure. We could be doing so much more, and I hate that I’m expected to look up to and celebrate people who aren’t.

This whole thing reminds me of one of my favorite quotes by Kierkegaard- “Let others complain that the age is wicked; my complaint is that it is paltry; for it lacks passion. Men’s thoughts are thin and flimsy like lace, they are themselves pitiable like the lacemakers. The thoughts of their hearts are too paltry to be sinful. For a worm it might be regarded as a sin to harbor such thoughts, but not for a being made in the image of God. Their lusts are dull and sluggish, their passions sleepy... This is the reason my soul always turns back to the Old Testament and to Shakespeare. I feel that those who speak there are at least human beings: they hate, they love, they murder their enemies, and curse their descendants throughout all generations, they sin.”

Paltry and passionless is how I would describe so many in this industry. Not stupid, perhaps even accomplished, but incredibly boring.

Read more:

  1. took no prisoners in her writeup on Vibes. This para was brilliant (so is the rest of the piece, which I highly recommend reading)— “Spinning up factories of thought to make cotton candy – hollow calories, hollow culture – is not my idea of meaning and success. This is why I love sci-fi thought-experiments: it forces first principles. How would you build a new world? If we could start fresh, what do we want our factories to output – disposable dopamine, or durable capability? We are laying gigawatt rails; let’s demand payloads worthy of them: energy breakthroughs, disease models, planetary-scale science, and yes, tools that make people deeper, not just more engaged.”

  2. AI Hate is a Billion Dollar Opportunity.

7. The Companion AI Black Hole

Everyone laughs at it. Anime girlfriends, virtual boyfriends, AI “partners” that whisper sweet nothings into lonely timelines. It’s meme fuel. It’s creepy. And yet—users are spending twelve hours a day inside these apps. Twelve hours. That’s not a gimmick. That’s addiction-level retention.

What makes this sector fascinating is the mismatch between cultural weight and analytic silence. You can’t find serious AI analysts studying it; they dismiss it as fringe. Meanwhile, Musk is leaning hard into otaku fantasies, and platforms like Azimov are clocking engagement numbers most social apps would kill for.

Why does it work? The honest answer: we don’t know. Maybe it’s the customization. Maybe it’s the low-friction intimacy. Maybe it’s the sheer absence of judgment. But something here is powerful enough to bind users tighter than any productivity app, tighter even than most games.

That blind spot matters. Entire ecosystems can grow in the corners people mock. Ignore it, and by the time you look up, someone’s built a billion-dollar empire out of digital companionship. Whether you find it sad or absurd is irrelevant; the retention data speaks louder.

I’m also curious about how this changes the dynamics of parasocial relationships. Will AI companions reduce the number of parasocial relationships online? Or might they be another tool that amplifies them? So many questions need to be answered here. Fortunately, some cultists have proudly volunteered to get AI partners and let me know how it goes. If anyone else wants to join, please go ahead.

Read more—

  1. had a post about Azimov recently that’s worth checking out. He shares a lot of interesting VC trends and market information, so he’s worth a follow if you’re interested in the space.

8. Future Architectures: Reasoning Inside the Latent Space

The last segment of the stream jumped from culture back into research. Google’s test-time diffusion and Meta’s Code World Models point to where the next wave of architectures is heading: models that reason while generating, not after.

Here’s the shift:

  • Old paradigm: feed input → decode output in one shot.

  • New paradigm: generate candidate paths inside the latent space → critique them → iterate → only then decode.

This is mid-generational reasoning. Instead of praying the model gets it right in one pass, you let it simulate futures in miniature, weigh them, and refine on the fly. Think of it as an inner loop of critique, running before anything ever leaves the latent space.
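To make that loop concrete, here is a toy sketch in Python. The propose/critique/refine/decode functions are stand-ins I invented (random vectors and a trivial critic), not any lab’s actual architecture; the point is the control flow, where candidates are scored and pruned inside the latent space, and decoding to text happens exactly once, at the end.

```python
import random

# Toy sketch of "reasoning inside the latent space". All functions here are
# invented stand-ins; only the control flow mirrors the idea described above.

def propose_latents(prompt_state, n=4):
    # Stand-in for sampling n candidate reasoning paths in latent space.
    return [[random.gauss(0, 1) for _ in range(8)] for _ in range(n)]

def critic_score(latent):
    # Stand-in for a reward/critic model scoring a candidate path.
    return -sum(x * x for x in latent)  # toy rule: prefer "stable" paths

def refine(latent, step=0.5):
    # Stand-in for one refinement step (e.g., a diffusion-style denoise).
    return [x * (1 - step) for x in latent]

def decode(latent):
    # Decoding to text happens only once, after the inner loop finishes.
    return f"answer derived from latent {latent[:2]}..."

def generate(prompt_state, iterations=3):
    candidates = propose_latents(prompt_state)
    for _ in range(iterations):
        # Inner loop of critique: refine, score, and prune weak paths
        # before anything ever leaves the latent space.
        candidates = [refine(c) for c in candidates]
        candidates.sort(key=critic_score, reverse=True)
        candidates = candidates[: max(1, len(candidates) // 2)]
    return decode(candidates[0])

print(generate("some prompt"))
```

Note where the cost lands: every extra candidate and every refinement pass multiplies memory traffic and critic calls, which is exactly why memory bandwidth and reward-model integration become the bottlenecks listed below.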

Why it matters:

  • Accuracy: hallucinations get filtered earlier, before they calcify into text.

  • Flexibility: multiple reasoning chains can be explored in parallel, improving robustness.

  • Hardware demands: memory bandwidth and reward model integration become the new bottlenecks.

We’re already seeing light versions of this baked into startups—reward-guided retrieval, path scoring, lightweight inner-loops. But at research scale, this could redefine the frontier.

Inference scaling got everyone’s attention because it saved money. This does something bigger: it changes what’s possible to generate. The future isn’t “bigger LLMs with longer contexts.” It’s architectures that can think in their own hidden space before they ever speak.

That’s the next competitive terrain. And the labs that master it first will own not just the rails, but the very logic of how machines reason.


Subscribe to support AI Made Simple and help us deliver more quality information to you-

Flexible pricing available—pay what matches your budget here.

Thank you for being here, and I hope you have a wonderful day.

Dev <3

If you liked this article and wish to share it, please refer to the following guidelines.

Share

That is it for this piece. I appreciate your time. As always, if you’re interested in working with me or checking out my other work, my links will be at the end of this email/post. And if you found value in this write-up, I would appreciate you sharing it with more people. It is word-of-mouth referrals like yours that help me grow. The best way to share testimonials is to share articles and tag me in your post so I can see/share it.

Reach out to me

Use the links below to check out my other content, learn more about tutoring, reach out to me about projects, or just to say hi.

Small Snippets about Tech, AI and Machine Learning over here

AI Newsletter- https://artificialintelligencemadesimple.substack.com/

My grandma’s favorite Tech Newsletter- https://codinginterviewsmadesimple.substack.com/

My (imaginary) sister’s favorite MLOps Podcast-

Check out my other articles on Medium: https://machine-learning-made-simple.medium.com/

My YouTube: https://www.youtube.com/@ChocolateMilkCultLeader/

Reach out to me on LinkedIn. Let’s connect: https://www.linkedin.com/in/devansh-devansh-516004168/

My Instagram: https://www.instagram.com/iseethings404/

My Twitter: https://twitter.com/Machine01776819
