Uploaded by Badumsquish
2960x2550 PNG 566 kB
Description
Pillow talk.
Tags
safe, artist:badumsquish, derpibooru exclusive, oc, oc only, oc:yastic, living dakimakura, object pony, original species, pony, g4, bed, blanket, body pillow, dialogue, female, high res, lidded eyes, looking at you, mare, philosophy, philosophy in the comments, pillow, pillow talk, ponified, pun, smiling, solo, speech bubble, talking to viewer, visual pun
Source
not provided yet
That’s the thing, though – the first true ASI will not be designed by humans. Instead, advanced neural networks, which are not even on par with general AI, will get advanced enough to create AI all by themselves. It’s unlikely a human could manually code an AI greater than the human brain, so some automated process will be needed, or at the very least a team of hundreds of people. My money is on evolutionary computation mixed with deep neural networks.
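As a toy illustration of the “evolutionary computation mixed with deep neural networks” idea, here’s a minimal neuroevolution sketch in Python. Everything in it (the one-neuron “network”, the fitness function, the hyperparameters) is a made-up example, not how a real system would be built:

```python
import random

random.seed(0)

# Toy neuroevolution: evolve the weights of a tiny one-neuron "network"
# to approximate y = 2*x + 1 on a few sample points. The network shape,
# fitness function, and hyperparameters are all illustrative choices.

SAMPLES = [(x, 2 * x + 1) for x in range(-3, 4)]

def predict(genome, x):
    w, b = genome
    return w * x + b

def fitness(genome):
    # Negative squared error: higher is better.
    return -sum((predict(genome, x) - y) ** 2 for x, y in SAMPLES)

def mutate(genome, scale=0.1):
    # Add small Gaussian noise to each weight.
    return tuple(g + random.gauss(0, scale) for g in genome)

def evolve(pop_size=20, generations=200):
    population = [(random.uniform(-1, 1), random.uniform(-1, 1))
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        elite = population[: pop_size // 4]          # keep the best quarter
        population = elite + [mutate(random.choice(elite))
                              for _ in range(pop_size - len(elite))]
    return max(population, key=fitness)

best = evolve()
print(best)  # should land near w=2, b=1
```

Real neuroevolution systems (NEAT, for example) also evolve the network topology rather than just the weights, but the select-and-mutate loop has the same basic shape.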
I mean, yes and no. I guess it really depends on how the AI is constructed, and how flexible its resultant thinking is. In this sense it’s a bit like wondering what problems warp drive engineers will face before discovering relativity.
If it’s structured purely at the level programs are typically written at, then it might act like that, but I think the first AIs will have to be ones that learn in order to get there, in which case they’d work differently. Or maybe it’ll be something else entirely that better simulates a human brain.
Learning AI looks promising, though I wonder if it’ll ever pass Turing tests or if that will come with other techniques. And an AI’s tendency to “have things like OCD” will depend on what technique was used to create it.
Ultimately, though, if programmers are good enough to make it work, I think they’ll be good enough to debug it enough to keep it from going all Skynet. Programmers spend a lot more time making sure their code behaves as expected than authors spend imagining ways it might not.
If it’s buggy enough to kill humanity over an obsession to make paperclips, I think it’ll end up freezing up and performing compulsions that slow it down enough to make it easy to stop, basically. The O in OCD will make it want to do something bad, but if it’s buggy enough to have OCD, then the C will keep it from doing it effectively, most likely.
You’re humanizing AI too much. It isn’t intelligence that makes something do something; it’s motivation and various other emotions and biological drives. An ASI, by definition, WOULD be smarter than any human – but think of it as a human with very narrow OCD. People with OCD are not stupid, are they? That’s how you need to think of this.
Humans wouldn’t cause the apocalypse for paperclips because we have a better grasp of everything around us, as opposed to machines that just have one task.
If a machine is intelligent enough to adapt to all kinds of circumstances around it as would be necessary to cause an apocalypse, then it should (hopefully) be intelligent enough to know that what it’s doing isn’t “helping humanity” as it was meant to.
Computers aren’t smart, they just think fast. That’s the basis of a paperclip apocalypse machine; it’s not smart so it destroys everything for paperclips. But if it’s smart enough to destroy everything, hopefully it’d be smart enough to realize that, for example, there will be no paperclip market if all the humans are dead.
So it’s not impossible, I just don’t think it’s as likely as people think.
Also I need to read Friendship is Optimal sometime.
I for one welcome our new pony AI overlord.
Artificial Super-intelligence is a terrifying prospect. Due to incremental self-improvement, a ‘paperclip optimization’ type AI could very well spell the end of the human race without even truly meaning to. Heck, even Celestia AI is quite a realistic scenario, as she is technically a ‘paperclip optimization’ AI.
Either that or it’ll be like Metal Gear Rising :D
RULES OF NATURE!!!
Nice.
Also someone really needs to get around to vectoring the Squirrel and Responsible Disclosure badges >.>
I want to see a screenshot of that.
Three rows of badges on mobile…
The crazy thing is that it’s exponential once machines create machines – just a runaway cycle. It’ll be hard for humanity to really take advantage of potential like that, though; I’ll just be happy if humanity survives the growing pains. Automation is already taking jobs away – a lot of phone operators lost their jobs last century – and automated cars will do away with a significant chunk of the economy. It won’t be long after that that most jobs go the same way.
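The “runaway cycle” intuition is just compound growth. A minimal sketch with made-up numbers, assuming each machine can build one copy of itself per cycle:

```python
# Toy illustration of the "machines building machines" runaway cycle:
# if each machine builds one copy of itself per cycle, the population
# doubles every cycle. The starting count and cycle count are made up.

machines = 1
history = [machines]
for cycle in range(10):
    machines *= 2          # every existing machine builds one more
    history.append(machines)

print(history)  # 1, 2, 4, ... 1024 after ten doubling cycles
```

Ten cycles takes one machine past a thousand; thirty would take it past a billion, which is why the transition could outpace any human planning horizon.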
Never been a fan of communism, but when millions become unemployed through no fault of their own – and that’s only the first wave, with millions more to follow – I hate to say it, but a base living wage might be necessary. Though I fear how humans will respond. Humanity will face a great test of character and trial of will in the decades ahead. I hope it comes out on top.
A lot of third-world countries are the way they are because they’ve already failed that test. Why work incredibly hard and do amazing things and create wonders and marvels when you could sit in your shack and live on tea? (Bolivia is largely that way, I hear from people who’ve lived there for years.)
Though I think it will. STEM people especially love what they do, and will hopefully do it anyway, if only so they can live well instead of just at a minimum living wage.
Others not working holds us back when there’s no wealth to do things like develop aerospace technology, scan the stars, land on the moon, and cross the globe in hours. But hopefully the amazing industrial power of machines building machines, all in love with serving humanity, will offset – and probably completely overshadow – the reduced work ethic. What is the labor of 200 million to the industry of a billion intelligent machines?
But the transition period will be rough, when the incentive to work is reduced but there are not yet a billion intelligent machines.
She’s at your service :D
@Cirrus Light
I’m one of the ones who’s looking forward to AIs that advanced and thinks it’ll be a great, bright future :D
Would totally ||hump/cuddle/||chat with for hours.
Nah my intent with that text was “life only sucks if you let it suck”.
Or if you’re a living dakimakura || you suck || and then they die?
Accurate
In a nutshell