It takes time to create work that’s clear, independent, and genuinely useful. If you’ve found value in this newsletter, consider becoming a paid subscriber. It helps me dive deeper into research, reach more people, stay free from ads/hidden agendas, and supports my crippling chocolate milk addiction. We run on a “pay what you can” model—so if you believe in the mission, there’s likely a plan that fits (over here).
Every subscription helps me stay independent, avoid clickbait, and focus on depth over noise, and I deeply appreciate everyone who chooses to support our cult.
PS – Supporting this work doesn’t have to come out of your pocket. If you read this as part of your professional development, you can use this email template to request reimbursement for your subscription.
Every month, the Chocolate Milk Cult reaches over a million Builders, Investors, Policy Makers, Leaders, and more. If you’d like to meet other members of our community, please fill out this contact form here (I will never sell your data nor will I make intros w/o your explicit permission)- https://forms.gle/Pi1pGLuS1FmzXoLr6
Thanks to everyone for showing up the live-stream. Mark your calendars for 8 PM EST, Sundays, to make sure you can come in live and ask questions.
Community Spotlight: James Wang
 has released a masterpiece of a book on Amazon called “What You Need to Know About AI.” James is one of the most insightful writers and thinkers I know (it’s why I share/cross post his work so often). What You Need to Know About AI doesn’t just tell you about AI; it gives you simple, powerful ways to understand its real impact on your job and the world. This is the most essential, grounded, and genuinely useful guide you will find on the most important technology of our time.I’ve ordered my copy and you guys absolutely should too. The book is so heat that it was number one on Amazon after release. Can’t recommend this enough.
If you’re doing interesting work and would like to be featured in the spotlight section, just drop your introduction in the comments/by reaching out to me. There are no rules- you could talk about a paper you’ve written, an interesting project you’ve worked on, some personal challenge you’re working on, ask me to promote your company/product, or anything else you consider important. The goal is to get to know you better, and possibly connect you with interesting people in our chocolate milk cult. No costs/obligations are attached.
Additional Recommendations (not in Livestream)
I found a Gemini feature so good, I deleted a bunch of apps— not ground breaking, but a few interesting use cases.
Local prediction-learning in high-dimensional spaces enables neural networks to plan: “Planning and problem solving are cornerstones of higher brain function. But we do not know how the brain does that. We show that learning of a suitable cognitive map of the problem space suffices. Furthermore, this can be reduced to learning to predict the next observation through local synaptic plasticity. Importantly, the resulting cognitive map encodes relations between actions and observations, and its emergent high-dimensional geometry provides a sense of direction for reaching distant goals. This quasi-Euclidean sense of direction provides a simple heuristic for online planning that works almost as well as the best offline planning algorithms from AI. If the problem space is a physical space, this method automatically extracts structural regularities from the sequence of observations that it receives so that it can generalize to unseen parts. This speeds up learning of navigation in 2D mazes and the locomotion with complex actuator systems, such as legged bodies. The cognitive map learner that we propose does not require a teacher, similar to self-attention networks (Transformers). But in contrast to Transformers, it does not require backpropagation of errors or very large datasets for learning. Hence it provides a blue-print for future energy-efficient neuromorphic hardware that acquires advanced cognitive capabilities through autonomous on-chip learning.”
The Hardware Lottery: “Hardware, systems and algorithms research communities have historically had different incentive structures and fluctuating motivation to engage with each other explicitly. This historical treatment is odd given that hardware and software have frequently determined which research ideas succeed (and fail). This essay introduces the term hardware lottery to describe when a research idea wins because it is suited to the available software and hardware and not because the idea is superior to alternative research directions. Examples from early computer science history illustrate how hardware lotteries can delay research progress by casting successful ideas as failures. These lessons are particularly salient given the advent of domain specialized hardware which make it increasingly costly to stray off of the beaten path of research ideas. This essay posits that the gains from progress in computing are likely to become even more uneven, with certain research directions moving into the fast-lane while progress on others is further obstructed.”
A great roundup by
: Google’s Zero-Shot Video Breakthrough and Cursor’s Internal Build Playbook- wrote this beauty Deep Dive on Memory (Primer). Highly recommend it given how important this problem is.
 ⚡ N8N Workflow Collection & Documentation: A professionally organized collection of 2,057 n8n workflows with a lightning-fast documentation system that provides instant search, analysis, and browsing capabilities.
PPO for LLMs: A Guide for Normal People. I haven’t read everything here yet since I was on the flight, but
always brings heat and whatever I did read was great.
Companion Guide to the Livestream
This guide expands the core ideas and structures them for deeper reflection. Watch the full stream for tone, nuance, and side-commentary
Real Applications Change Everything
Last week, during the
stream, I casually mentioned that Google’s quantum computing team had something major dropping soon: quantum error correction breakthroughs under wraps, just working through the PR logistics. This was before any media outlet touched it/days even before Google’s official announcement.I keep telling y’all, if you want to stay ahead of the curve, this newsletter is best place for information that noone else gets access to.
Now here’s what actually dropped: Quantum Echoes, an algorithm that computes molecular structures roughly 13,000 times faster than traditional computers. That number alone doesn’t tell you why this matters, though. The real story is what kind of problem they solved.
Quantum computing has been trapped in what I call the toy problem loop for years. Researchers would demonstrate quantum advantage on problems specifically chosen because quantum computers excel at them, but nobody outside the field could point to a use case and say, “Oh, that’s why we need this.”
You couldn’t go to a manager, put your head on the block, and say give me funding for five years and we’ll have X result by quarter Y.
Toy problems don’t generate investor patience. They generate skepticism.
Molecular structure prediction changes that equation completely. If you can simulate how molecules interact—their attributes, properties, behaviors—you’ve unlocked massive implications for:
Drug discovery
Clean energy research
Fusion control
Basically anywhere chemistry matters
Which is everywhere. This isn’t impressive for quantum insiders; this is impressive for people who write checks.
And that’s the real breakthrough here: visible use cases create patience, and patience creates capital flow, and capital flow creates breakthroughs.
Think about the AI infrastructure explosion post-ChatGPT. We had language models before GPT. We had Google’s Pathways Architecture: technically more impressive than early GPT, multimodal, multi-turn, never released to the public. Nobody cared.
But ChatGPT writes mediocre emails, and suddenly investors can see the future.
Mediocre emails today, customized emails that reference current events in three years. The investment thesis writes itself.
That’s not hype; that’s human psychology. People don’t invest in what they can’t envision. Quantum just gave everyone something to envision. When you can hold onto something tangible, even if it’s not perfect yet, you’re willing to put money in and wait.
That waiting period is where the weird, edgy breakthroughs happen: the stuff you wouldn’t have thought would work but does.
So when Google releases Quantum Echoes, they’re not just announcing faster computation. They’re buying years of runway.
Read more:
Google Research – Quantum Echoes and verifiable quantum advantage (Oct 22, 2025)
Why AI will unlock Quantum Error Correction in 2025 (+ other major AI trends).
The New Playbook: Dependency Economics
OpenAI is acquiring everything. Statsig, Johnny Ive collaborations, software companies. They’re embedding ChatGPT into every screen and context they can touch.
Stripe integrations. Slack. Walmart partnerships.
The strategy is obvious if you’ve been watching: control access points before the IPO.
Because here’s the thing: OpenAI’s financials have glaring weaknesses. Everyone in the Valley knows this. But if you can show explosive usage everywhere, you shift the narrative from “Are you profitable?” to “You’re generating value for the world, and value eventually circles back to you.”
Same logic that kept Amazon afloat when they weren’t making money. Usage becomes the proxy for inevitability.
But this isn’t the 2010s blitzscaling playbook anymore. That era was about expansion without profitability:
100x your revenue by bleeding money on discounts
Add new cities at any cost
Show growth above everything else
Investors hoped usage would translate to profits. It didn’t.
Uber, Airbnb, neither captured the same flywheel that Amazon or Facebook did because they lacked true network effects. Your Uber ride doesn’t get better because more people use Uber. Your Facebook feed does get better because more people are on Facebook.
What we’re seeing now is dependency economics: the 2020s attempt to replicate that platform-era magic through a different mechanism.
You don’t need network effects if you engineer irreplaceability.
Anthropic does this with Claude Code and enterprise lock-in
ChatGPT does it through consumer ubiquity
The goal is identical: become so embedded in your customers’ workflows that switching costs are prohibitive
Eventually, you’re the only supplier. You can charge whatever you want because they’re stuck.
I’m living proof this works. I use AI tools aggressively. OpenAI could charge me $2,000 a month and I’d pay without thinking. $20,000? I’d have questions.
But the dependency is real. That’s the upper limit they’re probing for: how far can you push before people break?
But here’s the deeper question, the one that keeps me up.
What gets commoditized next?
Pre-internet, information had premium value. The internet obliterated that. Information became cheap, and entire industries built on information scarcity collapsed. We didn’t see it coming because we were focused on features, not on how fundamental value perception would shift.
AI agents are doing the same thing right now, and we’re not asking the right question.
Not “Will OpenAI crack unit economics?” but “Will they rewire what we value so fundamentally that profitability becomes inevitable—not because they fixed costs, but because what we’re willing to pay for aligns perfectly with what their tools provide?”
I don’t have a clean answer yet. But I think it’s the right question.
Read more / verify:
Stripe + OpenAI launch Instant Checkout integration (Oct 2025)
Walmart partners with OpenAI to launch ChatGPT-powered shopping assistant (Oct 14, 2025)
Our analysis of the Chatbot ecosystem and what their usage patterns reveal.
The Data Engineering Gap
US and EU are moving toward a joint AI governance framework: cross-border data flows, biometric data standards, the usual regulatory convergence.
The obvious play here:
Synthetic data startups
Differential privacy tools
Anonymization services
Regulation creates mandatory demand. If you’re an investor, look there.
But here’s what I’m actually interested in, and I’m thinking about this live: there’s no good end-to-end data engineering startup.
Plenty of companies do point solutions. Synthetic data generation, anonymization, specific data sourcing. But nobody’s built the platform that:
Takes massive unstructured data
Blends it intelligently
Anonymizes it properly
Structures it for use
Delivers it at scale, regularly
The infrastructure for selling data to AI companies is fragmented and manual.
That’s a second-order dependency play. If AI companies need data, whoever controls the data pipeline captures them. And right now, that pipeline is wide open.
If you’re a developer and can’t raise money for this, open-source it. You’ll get attention. Turn that attention into value: consulting, enterprise versions, whatever. But keeping it closed won’t do anything for you.
This opportunity exists because of dependency economics. Regulation forces compliance, compliance requires infrastructure, infrastructure becomes the new chokepoint.
Same pattern, one layer down.
Read more / verify:
EU AI Regulatory Framework – European Commission (2025)
AGI is a Sham
AGI is a sham. It doesn’t exist. There’s no good definition for it, and that’s on purpose.
You know how I know people aren’t serious about AGI?
Because if researchers claiming to work on AGI actually cared about building AGI, they’d at least try to define it rigorously. Even if the definitions were incomplete, they’d be there. We’d have:
Frameworks
Benchmarks
Falsifiable criteria
We don’t.
AGI stays vague, incomplete, a shifting goalpost, because that’s how you raise money without timelines or hard stops. You can claim everything is progress toward AGI. You can say “We’re getting closer” without ever being wrong.
Perfect unfalsifiable fundraising narrative.
Trillions of dollars of workflow change, coming soon, no specific date, just keep the checks flowing.
Browser Overreach and the Limits of Expansion
ChatGPT Atlas dropped, and immediately people started pointing out the security vulnerabilities: how you can embed malicious code in websites, and suddenly Atlas is sending your bank information to attackers.
This isn’t a minor bug. This is a fundamental misunderstanding of what browsers are.
A browser isn’t just an interface. A browser has access to:
Your computer
Your memory
Your cookies
Everything that makes it a browser
You can inject behaviors that fundamentally compromise the system. OpenAI isn’t taking this seriously enough. Neither did Perplexity when they tried this.
I think they’re underestimating how much security engineering goes into browsers versus building AI. These are different skill sets, different threat models, different disciplines. Chrome’s security team has spent decades hardening that surface.
You don’t just bolt AI onto a browser and call it done.
The risk-to-reward here just isn’t worth it. General LLM interfaces work well enough. I don’t see a use case compelling enough to justify the exposure. Maybe for specific workflows, niche tools, but as a default browser?
No. Google prints money with search, but I don’t think you want to get into the browser game with agentic AI. That’s reckless.
This connects to a broader pattern. Tech is too insular right now.
AI mostly sells to AI. Tech sells to tech. We’re building for ourselves, and that creates blind spots: weird assumptions about what the rest of the world wants or needs.
We need more cross-domain collaboration. More AI experts embedded in healthcare, logistics, manufacturing. Not just partnerships. Actual integration.
Because right now, we’re circle-jerking ourselves into irrelevance.
Read more / verify:
TechRadar – OpenAI’s Atlas browser has major security flaws, researchers warn (Oct 25, 2025)
Hardware: The Hidden Efficiency Crisis
Alibaba released a paper called Aegon. I have no idea how to pronounce it, but the work is critical. They figured out how to serve LLMs with dramatically better throughput and lower latency by improving GPU resource pooling.
Here’s the problem they’re solving.
Right now, if you buy 100 GPU units worth of resources, you’re only using 60-70% of that capacity. The rest is lost to overhead:
Memory crashes
Poor resource management
Communication bottlenecks between GPUs
That’s why, in our inference scaling paper, the first step isn’t algorithmic optimization. It’s just making your GPUs work better.
When you reduce that overhead, you’re directly cutting costs. More time on GPUs, more memory crashes, those translate to real money.
Alibaba’s research shows how to:
Pool GPU resources more effectively
Minimize idle time
Handle failures gracefully
Not sexy. Not AGI. But it’s the infrastructure work that actually matters.
I haven’t done a full breakdown yet because it’s more software engineering than pure AI, and I need to read it a few more times to fully understand the technical details. I’m not going to come here and give you a ChatGPT summary.
But if you’re in this space, read the Aegon paper. Worth your time.
Separately, NVIDIA continues to dominate open-source investments: more than anyone else in the world right now. That’s their moat-building strategy. As long as we don’t let NVIDIA color how that open-source work evolves, as long as we keep pushing research in other directions, their contributions are net positive.
But we need to stay vigilant. Open-source controlled by one vendor isn’t really open.
Apple announced a new chip for neural accelerators. I don’t have all the details yet, but I’m watching how they plan to integrate this into their hardware ecosystem and whether they’ll start selling it externally.
We’re already seeing an arms race between Google’s TPUs and NVIDIA’s GPUs. If Apple and the ASIC providers enter that fight seriously, the dynamics shift.
Heterogeneous computing: mixing GPUs, TPUs, ASICs in one rack, could open entirely new frontiers.
Right now, our racks are homogenous. Same chip type, stacked GPUs, brutal memory management challenges. But what if you could:
Mix hardware types dynamically
Repurpose older chips instead of dumping them as e-waste
Extend the lifeline of your investments
Make the system less brittle
That’s not just efficiency. That’s environmental. That’s accessibility.
I really wish we put more effort into heterogeneous computing. But the industry is consolidating instead.
Read more / verify:
What’s Being Hollowed Out
Meta just laid off a significant portion of their research team, including people working on Coconut: cutting-edge work on tokenization, questioning how we represent text at the foundational level.
This is the kind of research that doesn’t pay off immediately but reshapes entire paradigms five years out.
And Zuckerberg is firing them.
I don’t know what direction Meta is headed. I genuinely don’t. They’re hollowing out research, doubling down on what, exactly? Short-term product iterations?
It’s concerning. Coconut’s work was legitimately interesting. Losing that team doesn’t just hurt Meta. It fragments the research community. Those people go somewhere else, sure, but the continuity is broken.
Read more / verify:
Meta cuts 600 roles in AI research division – Reuters (Oct 22, 2025)
Subscribe to support AI Made Simple and help us deliver more quality information to you-
Flexible pricing available—pay what matches your budget here.
Thank you for being here, and I hope you have a wonderful day.
Dev <3
If you liked this article and wish to share it, please refer to the following guidelines.
That is it for this piece. I appreciate your time. As always, if you’re interested in working with me or checking out my other work, my links will be at the end of this email/post. And if you found value in this write-up, I would appreciate you sharing it with more people. It is word-of-mouth referrals like yours that help me grow. The best way to share testimonials is to share articles and tag me in your post so I can see/share it.
Reach out to me
Use the links below to check out my other content, learn more about tutoring, reach out to me about projects, or just to say hi.
Small Snippets about Tech, AI and Machine Learning over here
AI Newsletter- https://artificialintelligencemadesimple.substack.com/
My grandma’s favorite Tech Newsletter- https://codinginterviewsmadesimple.substack.com/
My (imaginary) sister’s favorite MLOps Podcast-
Check out my other articles on Medium. :
https://machine-learning-made-simple.medium.com/
My YouTube: https://www.youtube.com/@ChocolateMilkCultLeader/
Reach out to me on LinkedIn. Let’s connect: https://www.linkedin.com/in/devansh-devansh-516004168/
My Instagram: https://www.instagram.com/iseethings404/
My Twitter: https://twitter.com/Machine01776819



![AGI is Tech Bro Word Soup[Thoughts]](https://substackcdn.com/image/fetch/$s_!n9T_!,w_1300,h_650,c_fill,f_auto,q_auto:good,fl_progressive:steep,g_auto/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50866404-2040-4675-9bb7-e3911aabaf0e_500x597.jpeg)








