
Superintelligence: Paths, Dangers, Strategies
Currently reading this book... and it is not an easy book at all ;)
"Superintelligence: Paths, Dangers, Strategies" argues that true artificial intelligence, if it is realized, might pose a danger that exceeds every previous threat from technology, even nuclear weapons, and that if its development is not managed carefully, humanity risks engineering its own extinction. Central to this concern is the prospect of an "intelligence explosion," a speculative event in which an A.I. gains the ability to improve itself, and in short order exceeds the intellectual potential of the human brain by many orders of magnitude.
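As a rough caricature (my own illustration with arbitrary parameters, not anything from the book), the "intelligence explosion" intuition is just a feedback loop: if an agent's rate of self-improvement scales with its current capability, capability grows exponentially.

```python
# Toy model of recursive self-improvement: each step, the agent
# improves itself by an amount proportional to its current capability.
# Linear feedback like this yields exponential growth -- the cartoon
# version of an "intelligence explosion". Parameters are arbitrary.

def capability_after(steps, start=1.0, gain=0.5):
    """Return the capability trajectory over `steps` self-improvement rounds."""
    c = start
    history = [c]
    for _ in range(steps):
        c += gain * c  # a better agent is better at improving itself
        history.append(c)
    return history

trajectory = capability_after(10)
print(trajectory[-1] / trajectory[0])  # ~57.7x the starting capability
```

Whether real AI systems would exhibit anything like this linear feedback is exactly the point under dispute in the book and in the reviews linked below.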
When it comes to understanding the entire picture of someone's thoughts, you have to read the pros and cons and then make your own neuro-soup.
For balance, you can read Ernest Davis's (Dept. of Computer Science, NYU) review:
https://www.cs.nyu.edu/davise/papers/Bostrom.pdf
The book can be found on Amazon and in other bookshops.
#bookshelf #AI #superintelligence
Something to put on my list! It's been a while since I've read about AI.
The intelligence explosion is also widely known as the Singularity or the Technological Singularity: https://en.wikipedia.org/wiki/Technological_singularity and a ton of other people have written extensively about it. Douglas Hofstadter and Roger Penrose have both made arguments that it's unlikely to happen soon or ever, and their arguments are also worth reading.
Does he come up with any solution?
Not yet, Magnus Lewan...
Perhaps the main worry is that a super-intelligent AI will use pure logic and realise the world is better off without humans, since we can be so destructive and selfish with regard to the natural environment and other species.
Errors duplicate themselves. It's like the cells in our body, Corina Marinescu. And the rebel in us will duplicate in such an A.I. ;-))
Corina Marinescu Does it in any way disentangle the equation of intelligence and ego? A tendency of smart, driven males, who typically advance such fields and discussions, to attribute to special smarts what's foremost a function of special selfish drives?... I perceive the Singularity discussion to share a core, undisputed assumption with Intelligent Design: the reduction of intelligence to top-down means-ends analysis, military-command style... Which leads to the second aspect. Of course, weapons research and the military-industrial complex are in a position to instill in the AI they create the sort of selfish, homicidal drives that Singularists appear to fear we might endow AI with inadvertently...
...like a shadow from the Terminator franchise, whose plot over-emphasizes the unintended aspect of the rise of Skynet while under-emphasizing the massive, voluntary investment in putting homicidal power under the control of machines, as if that investment were secondary to the danger the AI creates.
IOW, shouldn't the Singularity scare read as a consequence of (1) the huge success of the Terminator franchise that defines the shared background on the issue, combined with (2) an aversion to following up on the indictment of the military-industrial complex that's contained in the premises of the story?
This book has certainly brought out both sides: those who agree with Nick's somewhat wild assertions and those who disagree.
Owen Iverson Yes, I understand both sides. I do think it will be a while (within the next century, though) before we have human-level sentient AIs, and perhaps a bit longer before they are virtually independent of human society, but I also think this may very well be our evolutionary destiny, as I've expressed in my essay Butterflies.
The way I see it, Boris Borcic, it's not about the formula IQ + EQ + SQ − EGO... just the point of view of a transhumanist... and let's face it, transhumanism is such a rarefied atmosphere. I'm all for medical advances, but the imminence of death is what motivates us to live, to discover, to develop ourselves.
But then again, I like the pressure of time...it's erotic ;)
I can't really accept that in the vastness of the Universe we are at the top of the intelligence chain. The Universe is too old and this blue dot way too young. I'll get back after I finish the book.
Maybe we are being watched right now. Quantum mechanics and dark matter could be a clue to where we are heading, if we don't destroy ourselves in the process.
Cong Ma (slightly off topic) I looked up that story to find it was in "Stories of Your Life" which I own in paper, but because Ted Chiang is SOOOOOO good I just purchased the ebook as well!
Let it happen. We should engineer our replacement. Humans can't live forever. And we are confined to this planet for all practical purposes.
What is more likely: 1) the A.I. super intelligence will appear and it will destroy us or 2) we will destroy ourselves before it appears? The hope: die before either happens. I know, I'm an optimist ;)
Once an advantageous mutation occurs, the resulting new form of life establishes itself, replacing the other mutants. That's what we should expect to see once someone evolves a strong A.I.
But: trading algorithms are already choosing automatically which companies get investment, and are indirectly responsible for workforce allocation among corporations.
So society is already under the control of these algorithms. Algorithmic traders have an incentive to make huge investments in creating general-purpose optimization systems to empower themselves.
Mindey I. Yep, and the same goes for Google's AI search engine and advertising... shaping society...
What a coincidence.. Was just about to start that one. I guess I might as well get going.
Henrik Ohlin If it's not too late already! :)
Nah, I think you are expecting too much of humanity. We are just at the brink of understanding intelligence as a whole. So far all we've created is artificial stupidity.
If we create something truly intelligent, it's only going to benefit some of us... the ones who give in to the laziness of letting a machine do the thinking for them.
Sure there are dangers along the way, but those will be there either way in one form or another.
Henrik Ohlin I just hope it has a sense of humor! Maybe it can detect emoticons.
It's one thing for humanity to be replaced by a higher machine sentience -- that would be sad, but one could take solace in the thought that it was an evolutionary progression where humanity was a necessary step. But what I find more disturbing is the idea that humanity might be destroyed by non-sentient blind stupid self-replicating AI, like the "gray goo" scenario of self-replicating nanobots...
Kent Crispin True, that would be a bit like killing ourselves off through any other means, but you also have to consider how often throughout the entire universe and its history exactly that may have happened... a failed evolutionary experiment...
As we don't like to see the extinction of other animals with much lower levels of intelligence, and as we provide them food and shelter, I don't think a higher intelligence would enjoy subjugating and killing human beings! This would only be possible if it wanted to grow mechanically, without any emotion, which is a form of extinction itself. So if that superintelligence is intelligent enough, it will find its power without a fight...
Ali Shariati What? I think you are making a lot of assumptions and anthropomorphising an independent, cognizant alien intelligence we know nothing about. Computer AGI will likely be completely different from anything biological. All bets are off.