Do we live in a computer simulation? Are we simulated by advanced descendants of an original race? Can we win the race against robots? Will machines hold supremacy over the world?

Artificial intelligence has no single, definitive definition, but it is closely tied to machine learning, which connects ideas from biology, such as neural networks, with mathematics, such as Bayes' theorem. One proposed definition is machines that act intelligently and can make the right decisions in uncertain circumstances. These machines use algorithms that let them learn from data and hence produce independent output. I read three articles, and all three make very different propositions regarding artificial intelligence.
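The Bayes' theorem connection mentioned above can be illustrated with a minimal sketch of how a machine updates a belief from data; the spam-filter setting and all probabilities below are invented for illustration, not taken from any of the articles.

```python
# Minimal illustration of Bayes' theorem as used in machine learning:
# update a prior belief with evidence to obtain a posterior belief.
# All probability values here are invented for illustration.

def posterior(prior, likelihood, evidence):
    """P(H|E) = P(E|H) * P(H) / P(E)"""
    return likelihood * prior / evidence

# Prior belief that an email is spam.
p_spam = 0.2
# How often the word "free" appears in spam email.
p_free_given_spam = 0.6
# Total probability of seeing "free" in any email
# (law of total probability over spam and non-spam).
p_free = 0.2 * 0.6 + 0.8 * 0.1

p_spam_given_free = posterior(p_spam, p_free_given_spam, p_free)
print(round(p_spam_given_free, 2))  # 0.6
```

Seeing the word triples the machine's belief that the email is spam, which is the sense in which such a system "makes a decision under uncertainty."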

The first proposition is that the human species is very likely to go extinct before reaching a posthuman stage, the era of a mature civilization in which humankind's capabilities radically exceed those of present humans, limited only by physical laws and by material and energy constraints. On the other side, the article introduces the idea that AI is not even close to attaining human intelligence; it is just another domain-specific competency whose usefulness depends on the veracity of its datasets. I feel this is how humans are expected to think: investigate, weigh the pros and cons logically or mathematically, and then make a decision. Hence, I would say the purpose of AI is not to replace human intelligence but to complement it. The second proposition is that a posthuman civilization is extremely unlikely to run a significant number of simulations of its evolutionary history. For the simulation scenario to hold, the number of ancestor simulations created by such civilizations would have to be very large, yet most individuals may lack the interest and desire to run ancestor simulations, or simply cannot afford to. Since posthuman societies should be expected to be very different from human societies, this is a real possibility. And the third proposition is that we are almost certainly living in a computer simulation: the fraction of all people with our kind of experiences that are living in a simulation is very close to one. Nick Bostrom does not conclude in favor of any one of them but rather argues, by elimination, that at least one of the three propositions must be true.
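Bostrom's claim that the fraction of simulated observers is "very close to one" follows from a simple ratio, sketched below in Python. The variable names follow the spirit of his argument, but the numerical values plugged in are illustrative, not taken from the paper.

```python
# Sketch of the simulation-argument fraction.
# f_p : fraction of civilizations that reach a posthuman stage
#       and choose to run ancestor simulations
# n   : average number of ancestor simulations each such
#       civilization runs
# The example values below are illustrative only.

def fraction_simulated(f_p, n):
    """Fraction of all observers with human-type experiences
    who live inside a simulation."""
    return (f_p * n) / (f_p * n + 1)

# Even a tiny f_p is swamped by a huge n:
print(fraction_simulated(0.001, 1_000_000))
```

With these toy numbers the fraction is about 0.999, which shows why, unless propositions one or two hold, the third follows almost automatically.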

The second paper starts by stating that technology is the wildcard in humanity's survival, and that it has impacted a large number of areas concerning us, mainly employment. The main question of the paper is whether technology is going to solve all the wicked problems that exist, or whether it is going to destroy humanity, intentionally or unintentionally. It starts by stating that Narrow AI has the intelligence of an abacus and will never attain AGI (Artificial General Intelligence). The main research fields are computer vision, natural language processing, and speech recognition. The opening covers only the history of technology and how we reached the stage of artificial intelligence through mathematical concepts, neural networks, and features of machine learning. But what is machine learning? The article would define it as giving the machine assumptions and algorithms, with certain predictions as the output; it all depends on the accuracy and effectiveness of the data that is provided. It is said that the most valuable resource on earth is now data, not oil, as it can swing millions of people to the other side of a debate. Misinformation and fake news have been around ever since technology started growing. The paper then discusses how different companies made use of artificial intelligence in different aspects of their operations, but also had to introduce additional terms and policies to protect their customers' data and prevent misinformation. Now that we have passed many stages, we need to re-skill and re-tool to reach a whole new stage. The difference between a robot and AI is then clearly defined: robots can use AI, but AI is not itself part of automation technology. And the rise of the gig economy is not far away. The paper concludes that AI won't cause huge job losses; rather, it will create more jobs.
The whole flavor of this paper is captured by Yudkowsky's quip: "if you're baking a cheesecake, how large a cheesecake you can bake depends on your intelligence. A Superintelligence could build enormous cheesecakes – cheesecakes the size of cities. And Moore's Law keeps dropping the cost of computing power. By golly, the future will be full of giant cheesecakes!"
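The Moore's Law reference can be made concrete with a toy projection; the starting cost and the roughly two-year halving period are illustrative assumptions, not figures from the paper.

```python
# Toy Moore's-Law projection: the cost per unit of computing
# power halves roughly every two years. Numbers are illustrative.

def cost_after(years, start_cost=1.0, halving_period=2.0):
    """Projected cost after `years`, assuming periodic halving."""
    return start_cost * 0.5 ** (years / halving_period)

print(cost_after(20))  # 1/1024 of the starting cost
```

Two decades of halving cuts the cost by a factor of about a thousand, which is the exponential trend behind the "giant cheesecakes" joke.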

And the third paper discusses the acceleration of technological progress over the centuries. The main arguments here are that it is doubtful that superintelligent computers smarter than humans will be developed soon, but that large computer networks might turn out to be superhumanly intelligent entities. On the other hand, it also argues that user interfaces may reach a level at which the combined human-computer system can justifiably be called superhumanly intelligent; these methods are called intelligence amplification, and biologists are still researching ways to improve natural human intellect. "Superhuman" refers to ability or power beyond that of an ordinary human. But these developments are unlikely to occur before 2030, as most of them depend on hardware and the fourth path cannot be hastened. The argument starts with an analogy to nature: animals find ways of surviving in their niche surroundings and sometimes simply depend on natural selection. Humans, by contrast, have the added benefit of analyzing situations using a code of conduct. We have control over certain situations and their different outcomes. To make this process more effective and accurate, we need to find a way of developing superhuman intelligence, to take full advantage of voluntary actions. But from an ordinary human's point of view, the exponential growth will be unfamiliar, and we might lose touch with previous rules and codes of conduct. This is when the idea of the singularity is introduced; it can be described as the point at which existing models are discarded and a new set of rules takes hold. The desired outcome is a computer with ultra-intelligent skills that can surpass all the intellectual activities of any person. Such machines could then be used to develop and evolve still more powerful superhuman computers, making them the last invention of man.
I feel this paper is on Picasso's side when he said, "everything you can imagine is real," as it discusses how science fiction and comic writers have come up with spectacular automation in their creative endeavours. And especially in today's generation, ideas spread faster than ever, including radical ones that might once have been considered outlandish. The important question is how the idea of the singularity will arrive; it will surely come faster than any previous technical revolution. But what's next? As expected, the aftermath will be filled with both optimism and criticism. How much more practical is the singularity compared with other technological developments? It can act as a threat to the human status quo, yet give more accurate, safe, and precise results. Is the singularity something we should all strive for, given that humans are built to always want progress? The answer lies in the argument that the singularity revolves around humans' natural competitiveness and the future possibilities of technology. But at the end of the day, humans are responsible for initiating it all; we have hit the trigger point of the singularity and there is no going back now. One alternative path is to create composite systems that depend largely on biological life for guidance and features, requiring no new implementations in hardware. But I feel the singularity will trump this, as it sounds relatively more practical and possible, and there are valid reasons for that. Hence, the singularity is an inevitable concept.

No, the ideas described in the articles are not outlandish and might well happen. I wouldn't say they are completely offbeat, as Nick Bostrom pretty much convinced me that we are living in a simulation, though I feel that many exceptions and anomalies were set aside to reach his conclusion, which makes it less accurate and reduces the probability of it happening. The other ideas all make sense, but I feel none of them will apply in the short term; they surely remain possibilities, because technology has risen exponentially since the start. There might be an increase in social and ethical issues, but there hasn't been a decline in technological development. I feel some arguments are just a few facts based on statistics, discussed in light of how technological development affects our day-to-day life.

Artificial intelligence: one of the articles likes to define it as a general-purpose technology akin to electricity, while another argues that intelligence itself is hard to define but offers a general definition, machines that act intelligently and can make the right decisions in uncertain circumstances. But I feel it should now be described as going beyond humans' ability and power to perform tasks across multiple domains, making things more efficient and effortless without entirely replacing human labor, because I am sure there are some things only humans are capable of doing. Yes, it naturally leads to the singularity, because that is the whole reason AI is growing and expanding so rapidly. I feel it needs to be divided into subcategories depending on its real-life applications. This is why researchers keep going on and on, improving or coming up with new features that might change future generations' outlook on the world.

I would like to quote something that Ford said: "It is not us versus the AIs, which has been the theme of many AI futurist dystopian movies. We are going to merge with it. We already have."
