Cirrus Light
Sciencepone of Science!
"[@Eeveeinheat":](/1246989#comment_6123668
)
I mean, yes and no. I guess it really depends on how the AI is constructed, and how flexible its resulting thinking is. In that sense it's a bit like trying to guess what problems warp drive engineers will face before relativity has even been discovered.
If it's structured purely at the level that programs are typically written at, then it might act like that, but I think the first AIs will have to be ones that learn in order to get there, in which case they'd work differently. Or maybe it'll be something else entirely that better simulates a human brain.
Learning AI looks promising, though I wonder if it'll ever pass Turing tests or if that will come with other techniques. And an AI's tendency to "have things like ocd" will depend on what technique was used to create it.
Ultimately, though, if programmers are good enough to make it work, I think they'll be good enough to debug it enough to keep it from going all Skynet. Programmers spend a lot more time making sure their code behaves as expected than authors spend imagining ways it might not.
If it's buggy enough to try to kill humanity over an obsession with making paperclips, I think it'll basically end up freezing up and performing compulsions that slow it down enough to make it easy to stop. The O in OCD will make it want to do something bad, but if it's buggy enough to have OCD, then the C will most likely keep it from doing it effectively.
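To make that last intuition concrete, here's a throwaway toy sketch (purely my illustration, nothing anyone in the thread actually proposed): an agent that has to run k redundant "compulsive" checks before every action completes k+1 times fewer actions in the same compute budget, so the compulsion dominates as k grows.

```python
# Toy model of the O/C argument above: the "obsession" supplies the goal,
# but each action costs 1 step plus k redundant "compulsive" checks,
# so checking overhead throttles how much the agent actually gets done.
# All numbers here are made up for illustration.

def actions_completed(time_budget: int, checks_per_action: int) -> int:
    """Actions finished when each one costs 1 step + checks_per_action checks."""
    return time_budget // (1 + checks_per_action)

if __name__ == "__main__":
    budget = 1000  # arbitrary compute budget, in steps
    for k in (0, 9, 99):
        print(f"{k:2d} checks/action -> {actions_completed(budget, k):4d} actions")
    # 0 checks -> 1000 actions; 99 checks -> only 10:
    # the more compulsive the agent, the easier it is to outpace and stop.
```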