It takes time to create work that’s clear, independent, and genuinely useful. If you’ve found value in this newsletter, consider becoming a paid subscriber. It helps me dive deeper into research, reach more people, stay free from ads/hidden agendas, and supports my crippling chocolate milk addiction. We run on a “pay what you can” model—so if you believe in the mission, there’s likely a plan that fits (over here).
Every subscription helps me stay independent, avoid clickbait, and focus on depth over noise, and I deeply appreciate everyone who chooses to support our cult.
PS – Supporting this work doesn’t have to come out of your pocket. If you read this as part of your professional development, you can use this email template to request reimbursement for your subscription.
Every month, the Chocolate Milk Cult reaches over a million Builders, Investors, Policy Makers, Leaders, and more. If you’d like to meet other members of our community, please fill out this contact form here (I will never sell your data nor will I make intros w/o your explicit permission)- https://forms.gle/Pi1pGLuS1FmzXoLr6
Thanks to everyone for showing up to the livestream. Mark your calendars for 8 PM EST, Sundays, to make sure you can come in live and ask questions.
Community Spotlight: AI Infra Summit
I’ll be speaking at the amazing AI Infra Summit in San Francisco on 7th November (late notice, I know; I’ve had a lot going on). I’ll be speaking about the AI infrastructure buildouts happening and what they mean for the economy (bubble or nah). Get your tickets over here.
Paying Subs of the CMC— we have been given a few complimentary tickets. If you’d like to attend this conference, send an email to devansh@svam.com with the subject AI INFRA CONFERENCE to be added to the raffle. We will select up to 10 people for free tickets (each worth 400 USD). Looking forward to seeing you guys there. Make sure you send the email from the address tied to your subscription so the system can match you.
(I’ll be speaking around 2:45 PM if you want to catch my talk.)
If you’re doing interesting work and would like to be featured in the spotlight section, just drop your introduction in the comments or reach out to me directly. There are no rules- you could talk about a paper you’ve written, an interesting project you’ve worked on, a personal challenge you’re tackling, ask me to promote your company/product, or anything else you consider important. The goal is to get to know you better, and possibly connect you with interesting people in our chocolate milk cult. No costs/obligations are attached.
Additional Recommendations (not in Livestream)
California passes frontier‑AI guardrails. Governor Newsom signed SB 53 – the Transparency in Frontier AI Act – requiring companies developing advanced models to implement safety testing and reporting protocols. It’s one of the first state‑level laws aimed at pre‑deployment AI safety.
Protecting kids from AI and deepfakes. California also signed bills that require age verification for minors using generative AI chatbots, mandate suicide/self‑harm protocols and impose penalties on anyone creating sexual deepfakes of minors. The law goes further than federal proposals and hints at a patchwork future for US AI regulation.
UMG × Stability AI: a licensed AI music alliance. Universal Music Group and Stability AI announced a strategic partnership on 30 October. They will jointly develop next‑generation AI music tools that are fully licensed and built in collaboration with UMG artists. It’s a sign that the music industry is moving from lawsuits to co‑creation.
Optical engines hit 12.5 GHz. Researchers at Tsinghua University unveiled an Optical Feature Extraction Engine that processes data at 12.5 GHz, demonstrating blazing‑fast AI computation using light rather than electricity. The integrated diffraction modules point toward practical optical accelerators.
AI Policy Primer (#22) is a fantastic roundup by one of my favorite sources for research on how people are using AI outside of tech, and an insanely good writeup on the research being done outside the standard LLM models.
The Gap Between Measurement and Meaning by
is a good analysis of why medical AI evals (which are great) need a stronger interpretive framework so that people can actually understand and act on them. “We are getting closer to figuring out how to measure some of the metrics in healthcare AI, which is fantastic. But I worry that without the institutional knowledge and resources to translate testing and evaluation into action, we won’t be able to meaningfully use the results, even if the results themselves are meaningful.”
Intel Q3 2025 Earnings: Doing Fine by
is a great analysis of Intel’s earnings report.
The Artificial Investor #65: What a week in Silicon Valley taught us about the status of AI - part 2 by
is great coverage of important updates by a very smart investor.
Why Trading the Market Well is Like Racing SailGP Boats is a very interesting analogy for trading. I don’t know enough about trading to confirm it, but it was interesting to think about. Great work.
The AI Compute Paradox: Why Optimizations (Likely) Won’t Save Us From the Scaling Storm is a great breakdown of the test-time scaling work by
Truth, information, and control architecture by
is a great analysis of the impact of AI-generated content as a way of controlling the flow of information.
Companion Guide to the Livestream
This guide expands the core ideas and structures them for deeper reflection. Watch the full stream for tone, nuance, and side-commentary.
1. OpenAI’s Restructure — Capital Meets Compute
The Event — Reuters confirmed that OpenAI and Microsoft have restructured their deal to pave the way for an IPO targeting a $1 trillion valuation. Microsoft converted its equity stake into non-voting shares while OpenAI’s nonprofit parent retained control. Sam Altman and Greg Brockman reportedly began secondary sales, freeing liquidity without losing command of governance.
Why it matters — The IPO isn’t about raising money—it’s about access. OpenAI needs the ability to finance compute expansion beyond what Azure can front. Going public turns retail speculation into industrial capital, letting the company fund data-centre build-outs directly rather than leasing Microsoft’s GPU capacity.
Strategic context — This is the template for how cognitive infrastructure will be financed: data-centre equity backed by compute futures. The restructure allows OpenAI to buy GPUs from Nvidia and AMD directly and use long-term vendor financing models similar to airlines buying aircraft.
Insight — The product isn’t ChatGPT; it’s sovereign compute. This IPO will monetize infrastructure that manufactures cognition.
2. The Small-Model Revolution — Efficiency Becomes Intelligence
The Event — Anthropic’s Claude Haiku 4.5 achieved near-frontier coding accuracy at one-third the cost and twice the speed of Sonnet 4. Meanwhile, IBM’s Granite 4.0 introduced a suite of open-source “Nano” models (350M–1.5B parameters) that run directly on local devices or browsers.
Systemic Trend — The “bigger-is-better” logic of 2022–2024 is dead. Architecture and routing now matter more than raw parameter count. Mixture-of-Experts, low-rank adaptation, and quantization have produced a new equilibrium: 90% of performance for <10% of the cost. Epoch AI’s analysis shows that inference cost per token has dropped by nearly an order of magnitude since 2023. I also expect this to shift more AI offline, changing the dynamics of AI deployed in low-tech environments (where the investment in proper security might not be worthwhile).
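To make the cost side of that equilibrium concrete, here is a minimal NumPy sketch of the low-rank adaptation (LoRA) idea mentioned above. The dimensions, rank, and scaling factor are made-up illustrative values, not the configuration of any particular model; the point is simply how small the trainable update is relative to the full weight matrix.

```python
import numpy as np

# Illustrative sketch (not tied to any real model): LoRA replaces a full update
# of a d x k weight matrix with two small factors B (d x r) and A (r x k),
# where r << min(d, k), so only B and A are trained.
d, k, r = 4096, 4096, 16            # made-up dimensions for illustration
alpha = 32                          # LoRA scaling hyperparameter (also made up)

W = np.random.randn(d, k)           # frozen pretrained weight
A = np.random.randn(r, k) * 0.01    # trainable low-rank factor
B = np.zeros((d, r))                # trainable low-rank factor, starts at zero

def adapted_forward(x):
    """Forward pass using the adapted weight W + (alpha / r) * B @ A."""
    return x @ (W + (alpha / r) * (B @ A)).T

y = adapted_forward(np.random.randn(2, k))   # (2, d) output, same shape as the full model

full_params = W.size
lora_params = A.size + B.size
print(f"trainable parameters: {lora_params:,} vs {full_params:,} "
      f"({100 * lora_params / full_params:.2f}% of full fine-tuning)")
```

With these toy numbers, the two LoRA factors amount to well under 1% of the full matrix’s parameters, which is the kind of asymmetry behind the “90% of performance for <10% of the cost” shift above; quantization and MoE routing attack the inference side in a similar spirit.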
Implications — The IPO math from Section 1 depends on these cost curves. As models become cheaper and faster, profitability scales non-linearly. Small models also enable local inference, moving AI from cloud monopolies to distributed edge ecosystems.
Insight — Efficiency is the new scale. In the next cycle, whoever controls distillation pipelines and on-device optimization frameworks will control intelligence distribution.
3. Consumer Interface Flip — Sora 2 and the New Media Physics
The Event — The Verge reported that Sora 2, OpenAI’s generative-video app, surpassed 1 million downloads within five days of its September 30 launch, eclipsing ChatGPT’s early growth.
Why it matters — Text is plateauing; vision is exploding. Sora 2 turns prompt engineering into content creation, allowing users to generate short-form video clips and cameo-level avatars that plug into existing social platforms.
Strategic context — Every Sora session feeds OpenAI’s multimodal training datasets, making it a feedback loop for learning how humans visualize information. This is not just a product—it’s a data flywheel disguised as entertainment.
Insight — Generative video is becoming the interface layer for cognitive systems. Whoever owns the world’s visual-behavior dataset will define the language of culture itself.
4. Hardware Gravity — CUDA, Optics, and the $5 Trillion Wall
The Event — On 29 October, Nvidia’s market cap hit $5 trillion (Reuters). The company’s dominance comes not from chips but from CUDA, the software stack binding its ecosystem.
Economic mechanics — Nvidia’s moat isn’t performance; it’s switching cost. Photonic challengers can’t yet interface with CUDA workloads, forcing companies to stay locked into the GPU supply chain even when better physics exist.
Insight — CUDA has become the monetary policy of AI. Until open-standard optics or neuromorphic chips reach parity in tooling, the entire intelligence economy will orbit Nvidia’s software gravity.
My analysis of the importance of cross-chip communication—
This technology will create the next Nvidia
It takes time to create work that’s clear, independent, and genuinely useful. If you’ve found value in this newsletter, consider becoming a paid subscriber. It helps me dive deeper into research, reach more people, stay free from ads/hidden agendas, and supports my crippling chocolate milk addiction.
6. Culture and IP — From Litigation to Collaboration
The Event — Universal Music Group × Stability AI formed a partnership to co-develop licensed AI-music tools, embedding royalties at the dataset level and integrating artists directly into training pipelines.
Why it matters — The creative industry’s posture has shifted from suing AI to owning it. By designing royalty protocols inside model architectures, UMG and Stability AI created a sustainable model for rights attribution.
Insight — Ownership is becoming a protocol, not a courtroom. Expect other creative industries—film, gaming, publishing—to follow this détente once watermarking and payment APIs mature.
We spoke about a similar development during our livestream conversation with
How One of Tech's Best Journalists keeps up with AI
Haven’t been streaming as much (always on the road), so I’m going to mix the Community Spotlights in with the other posts to make sure we can still give everyone the shoutouts.
7. The Fear Economy — “Ban Super-Intelligence”
The Event — In late October, a coalition of technologists—Yoshua Bengio, Geoff Hinton, Steve Wozniak, and Steve Bannon—signed an open letter calling for a ban on “super-intelligent AI.” The statement lives at superintelligence-statement.org.
Analysis — The petition mirrors the failed “pause” letter of 2023. It offers no enforcement framework, only moral theater. While policymakers debate hypothetical extinction, the real safety work happens quietly inside alignment teams at OpenAI and Anthropic.
Insight — Fear is the cheapest form of regulation. Every time the media amplifies it, engineers retreat from transparency to self-protection. Real safety will come from auditability, not alarms.
Read more—
Subscribe to support AI Made Simple and help us deliver more quality information to you-
Flexible pricing available—pay what matches your budget here.
Thank you for being here, and I hope you have a wonderful day.
Dev <3
If you liked this article and wish to share it, please refer to the following guidelines.
That is it for this piece. I appreciate your time. As always, if you’re interested in working with me or checking out my other work, my links will be at the end of this email/post. And if you found value in this write-up, I would appreciate you sharing it with more people. It is word-of-mouth referrals like yours that help me grow. The best way to share testimonials is to share articles and tag me in your post so I can see/share it.
Reach out to me
Use the links below to check out my other content, learn more about tutoring, reach out to me about projects, or just to say hi.
Small Snippets about Tech, AI and Machine Learning over here
AI Newsletter- https://artificialintelligencemadesimple.substack.com/
My grandma’s favorite Tech Newsletter- https://codinginterviewsmadesimple.substack.com/
My (imaginary) sister’s favorite MLOps Podcast-
Check out my other articles on Medium:
https://machine-learning-made-simple.medium.com/
My YouTube: https://www.youtube.com/@ChocolateMilkCultLeader/
Reach out to me on LinkedIn. Let’s connect: https://www.linkedin.com/in/devansh-devansh-516004168/
My Instagram: https://www.instagram.com/iseethings404/
My Twitter: https://twitter.com/Machine01776819




![Why the AI Pause is Misguided [Thoughts]](https://substackcdn.com/image/fetch/$s_!pJzL!,w_1300,h_650,c_fill,f_auto,q_auto:good,fl_progressive:steep,g_auto/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7d005a6-bf93-4b4c-bf72-94c56a846cde_720x932.jpeg)
![AI Ethics has a clickbait problem [Thoughts]](https://substackcdn.com/image/fetch/$s_!zhls!,w_1300,h_650,c_fill,f_auto,q_auto:good,fl_progressive:steep,g_auto/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98f6e382-41cb-4ddf-8b40-cf1d2c948b6b_500x500.jpeg)
![Why Morally Aligned LLMs Solve Nothing [Thoughts]](https://substackcdn.com/image/fetch/$s_!DsyU!,w_1300,h_650,c_fill,f_auto,q_auto:good,fl_progressive:steep,g_auto/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F852fd155-8e56-4ad6-a268-ff7b5745aa6d_500x500.jpeg)