8 Comments
Michael Woudenberg:

I do think the biggest issue in regulation is that we really won't like holding the mirror up to ourselves when we don't like the output of the algorithm. I fear the inclination will be to 'shoot the messenger' that is AI instead of actually addressing the systemic issues that AI uncovers.

Devansh:

True. It's much easier to blame the technology than to fix the underlying system

Michael Woudenberg:

That's where the regulation becomes a huge issue.

Bilbo'sBitch:

If we're defining "AI" as the LLMs used today in ChatGPT, it's just crap, and it will derail 'real AI' for 50 years.

You don't solve "general AI" by throwing random crap at a wall like an ape, then picking and choosing the tea leaves and calling the garbage "hallucinations".

Only the promise of HOMO-GROOMING of the "CHILDREN" keeps Sam-Altman all of chat-GPT close to the hearts of the World Elite homo-pedo child f*ckers;

Logan Thorneloe:

I've always found it comical that we're only now getting around to regulating AI when it has already been controlling far more than the general public realizes for years.

Andrew Smith:

I need to write about Bell Labs soon!

Mykola Rabchevskiy:

Protecting users from the flaws of neural network systems ("LLMs", "generative AI", "foundation models") is undoubtedly necessary. However, the mere existence of rules regulating developers' activities does not automatically guarantee that the declared goals will be achieved.

See https://agieng.substack.com/p/reflections-on-an-open-letter-gary

Bilbo'sBitch:

AI is bullshit

The best way to regulate it is NOT to use it.
