by frisco on 11/1/23, 7:31 PM with 86 comments
by dang on 11/2/23, 12:02 AM
Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence - https://news.ycombinator.com/item?id=38067314 - Oct 2023 (334 comments)
by qualifiedai on 11/1/23, 7:36 PM
According to the EO's guidelines on compute, something like GPT-4 probably falls under the reporting requirements. Also, in the last 10 years GPU compute capabilities grew 1000x. What will things look like even 2 or 5 years from now?
Edit: yes, regulations are necessary but we should regulate applications of AI, not fundamental research in it.
by brotchie on 11/1/23, 7:44 PM
Talk about snatching defeat from the jaws of victory... damn
by WestCoastJustin on 11/1/23, 7:44 PM
by andrewmutz on 11/1/23, 8:04 PM
I have been afraid of over-regulation of AI but standards and testing environments don't sound so bad.
It does not sound like they are implementing legal regulations that will protect incumbents at the expense of AI innovation, at least at this point.
by MeImCounting on 11/1/23, 8:04 PM
by bediger4000 on 11/1/23, 7:37 PM
by p0w3n3d on 11/1/23, 8:19 PM
Thou shalt not make a machine in the likeness of a human mind
I guess we're heading for spice then
by mortallywounded on 11/1/23, 8:17 PM
by robbywashere_ on 11/1/23, 9:04 PM
by facu17y on 11/1/23, 8:28 PM
"Even in feedback loop systems where a model might "learn" from the outcomes of its actions, this learning is typically constrained by the objectives set by human operators. The model itself doesn't have the ability to decide what it wants to learn or how it wants to act; it's merely optimizing for a function that was determined by its creators.
Furthermore, any tendency to "meander and drift outside the scope of their original objective" would generally be considered a bug rather than a feature indicative of agency. Such behavior usually implies that the system is not performing as intended and needs to be corrected or constrained.
In summary, while machine learning models are becoming increasingly sophisticated and capable, they do not possess agency in the way living organisms do. Their actions are a result of algorithms and programming, not independent thought or desire. As a result, questions about their "autonomy" are often less about the models themselves developing agency and more about the ethical and practical implications of the tasks we delegate to them."
The above is from the horse's mouth (ChatGPT4)
My commentary:
We have yet to achieve the kind of agency a jellyfish has, which operates with a nervous system comprising roughly 10K neurons (vs 100B in humans) and no brain at all. We have not yet been able to replicate the agency present in even a simple nervous system.
I would say even an amoeba has more agency than a $1B+ OpenAI model, since the amoeba can feed itself and grow in numbers far more successfully and sustainably in the wild, with all the unpredictability of its environment, than an OpenAI-based AI agent, which ends up stuck in loops or derailed.
What is my point?
We're jumping the gun with these regulations. That's all I'm saying. Not that we shouldn't keep an eye on things, maintain a healthy amount of concern, and make sure we stay on top of it, but we are clearly jumping the gun, since the AI agents so far are unable to compete with a jellyfish in open-ended survival mode (not to be confused with Minecraft survival mode) due to the AI's lack of agency (as a unitary agent and as a collective).
by lewhoo on 11/1/23, 9:11 PM
by codingdave on 11/1/23, 10:16 PM
So I'm assuming some of you have seen more details - can someone share where they can be found?
by lordleft on 11/1/23, 7:59 PM
by paganel on 11/1/23, 8:16 PM
by cmxch on 11/2/23, 12:16 AM
It can only help existing companies to stifle competition and guarantee revenue.
by boredumb on 11/1/23, 8:10 PM
by tap-snap-or-nap on 11/1/23, 8:58 PM
by w4ffl35 on 11/1/23, 7:53 PM