Viewing last 25 versions of comment by Cirrus Light on image #1246989

Cirrus Light

Sciencepone of Science!
"[@Eeveeinheat":](/1246989#comment_6123668
)  
I mean, yes and no. I guess it really depends on how the AI is constructed, and how flexible its resultant thinking is. In this sense it's a bit like wondering what problems warp drive engineers will face before discovering relativity.

If it's structured purely at the level programs are typically written at, then it might act like that, but I think the first AIs will have to be ones that learn in order to get there, in which case they'd work differently. Or maybe it'll be something else entirely that better simulates a human brain.
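
To make that distinction concrete, here's a toy sketch in Python (all names and data made up for illustration): the first function is behavior written directly at the level programs are typically written at, where every response is a branch someone typed in; the second arrives at its behavior by tuning numbers from examples instead.

```python
# Toy contrast between hand-written rules and learned behavior.
# Everything here is made up for illustration.

def rule_based_reply(message: str) -> str:
    # "Structured at the level programs are written at":
    # every behavior is an explicit branch in the source.
    if "hello" in message.lower():
        return "Hi there!"
    if message.endswith("?"):
        return "Good question."
    return "Okay."

# A learning system instead adjusts numbers until behavior emerges.
# Minimal perceptron on two fake "message features".
examples = [([1.0, 0.0], 1), ([0.9, 0.1], 1),   # greeting-like -> 1
            ([0.0, 1.0], 0), ([0.1, 0.8], 0)]   # question-like -> 0

w, b = [0.0, 0.0], 0.0
for _ in range(20):                              # a few training passes
    for x, label in examples:
        pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
        err = label - pred                       # perceptron update rule
        w = [w[0] + 0.1 * err * x[0], w[1] + 0.1 * err * x[1]]
        b += 0.1 * err

print(rule_based_reply("hello?"))  # readable straight off the source
print(w, b)                        # the "why" is buried in learned weights
```

The rule-based one fails in ways you can point to in the source; the learned one fails in ways you have to probe for, which is the sense in which I'd expect them to work differently.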

Learning AI looks promising, though I wonder if it'll ever pass Turing tests or if that will come with other techniques. And an AI's tendency to "have things like OCD" will depend on which technique was used to create it.

Ultimately, though, if programmers are good enough to make it work, I think they'll be good enough to debug it well enough to keep it from going all Skynet. Programmers spend a lot more time making sure their code behaves as expected than authors spend imagining ways it might not.
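
For a sense of what that verification habit looks like in practice, here's a minimal unittest sketch; plan_actions and its budget rule are hypothetical stand-ins, not anyone's real API:

```python
# Sketch of "making sure code behaves as expected": explicit, repeatable
# checks on behavior. plan_actions and its budget rule are hypothetical.
import unittest

def plan_actions(goal: str, budget: int) -> list:
    # Hypothetical planner: never produce more steps than the budget allows.
    return [f"step {i + 1} toward {goal}" for i in range(budget)]

class PlannerExpectations(unittest.TestCase):
    def test_respects_budget(self):
        # The failure a sci-fi author imagines is exactly what this forbids.
        self.assertLessEqual(len(plan_actions("make paperclips", 5)), 5)

    def test_zero_budget_means_no_actions(self):
        self.assertEqual(plan_actions("make paperclips", 0), [])

if __name__ == "__main__":
    unittest.main()
```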

If it's buggy enough to kill humanity over an obsession with making paperclips, I think it'll end up freezing up and performing compulsions that slow it down enough to make it easy to stop. The O in OCD would make it want to do something bad, but if it's buggy enough to have OCD, the C will most likely keep it from doing it effectively.
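
As a deliberately silly back-of-the-envelope model of that claim (all numbers made up): if every action drags along compulsive re-checks, throughput collapses.

```python
# Silly toy model: a bug that produces compulsive re-checking also
# throttles how fast the agent can act. All numbers are made up.

def actions_completed(ticks: int, checks_per_action: int) -> int:
    # Each action costs 1 tick plus `checks_per_action` ticks of
    # redundant verification (the "compulsion").
    done, t = 0, 0
    while t + 1 + checks_per_action <= ticks:
        t += 1 + checks_per_action
        done += 1
    return done

print(actions_completed(1000, 0))   # healthy agent: 1000 actions
print(actions_completed(1000, 99))  # compulsive agent: 10 actions
```
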
No reason given
Edited by Cirrus Light