The Coming Technological Singularity, Vernor Vinge:
Discusses the idea of a technological singularity: the main claim is that, because technology develops at an ever-increasing rate, AI intelligence will eventually become greater than that of humans. Vinge outlines several possible routes to this, such as large collaborative networks of computers, increasingly close human–computer interfaces, and biological science finding ways to improve human intellect.
The idea of the singularity was raised as early as the 1960s by I.J. Good, who defined an ultraintelligent machine as one that is more intelligent than any human, however clever. He also predicted that such a machine would be invented before the end of the 20th century; it now seems more probable that it will occur within the next hundred years.

He also discusses ways to avoid the singularity, such as governments recognising the threat of AI and passing laws and policies against it, while noting that such measures are at risk in an economically driven society that stands to benefit from the existence of super-AI. I am personally a fan of the idea of building rules directly into the AI, along the lines of Asimov's Three Laws of Robotics, so that AI can be developed for economic gain while potential harm to humans is prevented.

Are You Living in a Computer Simulation?, Nick Bostrom:
Presents a trilemma, claiming that at least one of the following must be true: humanity is likely to go extinct before reaching a 'posthuman' era; any posthuman civilisation is unlikely to run simulations of its history; or we are almost certainly living inside one of those simulations.
Each of these propositions is explored in depth: the plausibility of simulating billions of human minds, or even entire galaxies, and the purpose of even doing it; the possibility that any suspicion of living in a simulation could simply be edited away; and the idea that we are all just strings of data, very advanced data, but still data. To some readers, the text may seem an implausible mess of jargon that may or may not be true, but the general ideas make sense, even if they can feel far-fetched.

Fallacy of the Cheesecake Factory, Wim Naude:
The final text discusses the potential impacts of AI and the importance of combatting its negative aspects as best we can, as he states that the ultimate future for any species is to be replaced by AI. He references pop culture such as 2001: A Space Odyssey and its AI, HAL 9000, alongside the alien race implied at the end of the film to be AI. He also uses 2001 to discuss the morality of AI and how its interests may not be to benefit humans. The idea of an AI arms race is brought up, with countries competing to create not only smarter AIs, but smarter AI-powered weapons.
He also questions whether the current term 'AI' is even appropriate, given that current technology does not really count as intelligence: it simply applies previously decided rules to large volumes of data in order to make decisions.

Personal View:
Regarding the AI singularity, the overall process makes sense: the steady growth of technology is visible today in everything from processing power to storage capacity, both of which have grown exponentially. Since both of these technologies also underpin AI, it is reasonable to expect AI's growth to be exponential as well. As for AI becoming 'superior' to humans, I think that while the idea is plausible, there is no incentive for humans to let it happen; even given its industrial and economic potential, politicians, scientists and other world leaders should recognise the singularity as a threat and stop it in time.
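To get a feel for how fast "exponential" really is, here is a quick back-of-the-envelope sketch. It assumes a Moore's-law-style doubling every two years; the doubling period is my own illustrative assumption, not a figure from any of the readings.

```python
# Minimal sketch of exponential growth under steady doubling.
# The 2-year doubling period is an assumed, Moore's-law-style figure.

def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    """Return the multiplicative growth after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

for years in (10, 20, 40):
    print(f"After {years} years: x{growth_factor(years):,.0f}")
```

Even over a single human career (40 years), a quantity doubling every two years grows by a factor of over a million, which is why even modest assumptions about AI-relevant hardware lead to dramatic long-run conclusions.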
