I do think the biggest issue in regulation is that we really don't want to hold the mirror up to ourselves when we don't like the output of the algorithm. I fear the inclination will be to 'shoot the messenger' that is AI instead of actually addressing the systemic issues that AI uncovers.
True. It's much easier to blame the technology than to fix the underlying system.
That's where the regulation becomes a huge issue.
If we're defining "AI" as the LLMs used today by ChatGPT, which are just crap and will derail "real AI" for 50 years...
You don't solve general AI by throwing random crap at a wall like an ape, then picking and choosing the tea leaves and calling the garbage "hallucinations".
I've always thought it comical that we're now getting into regulating AI when it's been controlling a lot more than the general public thinks for years already.
I need to write about Bell Labs soon!
Protecting users from flaws in neural network systems (“LLM”, “Generative AI”, “foundation models”) is undoubtedly a necessary thing. However, the mere existence of rules regulating developers' activities does not automatically guarantee the achievement of the declared goals.
See https://agieng.substack.com/p/reflections-on-an-open-letter-gary
AI is bullshit.
The best way to regulate it is NOT to use it.