ITGS Online

‘hanging out the dirty linen’ to delve into the ethics of IT’s role in society.

ITGS Online (weekly)

Posted from Diigo. The rest of ITGSonline group favorite links are here.

Is the Matrix real?

Do we live in a computer simulation? Are we simulated by advanced descendants of an original race? Can we win the race against robots? Will machines hold supremacy over the world?

Artificial intelligence has no single definite definition, but it clearly corresponds to machine learning, which connects certain features of biology, such as neural networks, to mathematics, such as Bayes’ theorem. One proposed definition is machines that act intelligently and can make the right decision in uncertain circumstances. These machines use algorithms that help them learn from data and hence produce their output independently. I read three articles, and all three have very different propositions regarding artificial intelligence.
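Bayes’ theorem is the mathematical piece that lets a machine “make the right decision in uncertain circumstances”. As a rough sketch (the spam-filter scenario and all of the numbers below are my own invented assumptions, not taken from the articles):

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
# Invented example: how likely is a message to be spam,
# given that it contains the word "free"?
p_spam = 0.4             # prior: P(spam)
p_free_given_spam = 0.6  # likelihood: P("free" | spam)
p_free_given_ham = 0.05  # likelihood: P("free" | not spam)

# Total probability of seeing "free" at all
p_free = p_free_given_spam * p_spam + p_free_given_ham * (1 - p_spam)

# Posterior: P(spam | "free")
p_spam_given_free = p_free_given_spam * p_spam / p_free
print(round(p_spam_given_free, 3))  # 0.889
```

A single uncertain clue pushes the machine's belief from 40% to roughly 89%, which is the sense in which such systems "decide" under uncertainty.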

One of them starts with the proposition that the human species is very likely to go extinct before reaching a posthuman stage, which refers to the era of a mature civilization whose capabilities radically exceed those of present humans and are limited only by physical laws and by material and energy constraints. Alongside this, the article introduces the idea that AI is not even close to attaining human intelligence; it is just another domain-specific competency whose functionality depends on the veracity of its datasets. I feel this is how humans are expected to think: investigate, weigh up the pros and cons logically or mathematically, and then make a decision. Hence, I would say the purpose of AI is not to replace human intelligence but to complement it. The second proposition is that a posthuman civilization is extremely unlikely to run a significant number of simulations of its evolutionary history. For the simulation argument to hold, the number of ancestor simulations created by such civilizations would have to be very large, yet most individuals may lack the interest and desire to run ancestor simulations, or simply cannot afford to. Anyone looking forward to the posthuman stage should also expect posthuman societies to be very different from human societies. The third proposition is that we are almost certainly living in a computer simulation: the fraction of all people with our kind of experiences who are living in a simulation is very close to one. Using this process of elimination, Nick Bostrom does not conclude in favour of any single proposition, but rather states that at least one of the three must be true.
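Bostrom’s claim that the fraction is “very close to one” is simple arithmetic. A toy sketch (the number of simulations is an invented placeholder, purely to show the shape of the argument):

```python
# If even one posthuman civilization runs many ancestor simulations,
# simulated observers vastly outnumber those in the single real history.
real_histories = 1
ancestor_simulations = 1_000_000  # invented, purely illustrative

simulated_observers = ancestor_simulations * real_histories
fraction_simulated = simulated_observers / (simulated_observers + real_histories)
print(fraction_simulated)  # very close to one
```

However large or small the invented number, as long as simulations outnumber real histories by orders of magnitude, the fraction is driven toward one, which is all the third proposition needs.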

The second paper starts by stating that technology is the wild card in humanity’s survival, and that it has impacted a large number of areas concerning us, mainly employment. The main question of the paper is whether technology is going to solve all the wicked problems that exist, or whether it is going to destroy humanity, intentionally or unintentionally. It starts by stating that narrow AI has the intelligence of an abacus and will never attain AGI (artificial general intelligence). The main research fields are computer vision, natural language processing, and speech recognition. The opening sections cover the history of technology and how we reached the stage of artificial intelligence through mathematical concepts, neural networks, and features of machine learning. But what is machine learning? The article would define it as giving the machine assumptions and algorithms, from which certain predictions are the output; it all depends on the accuracy and effectiveness of the data provided. It is said that the most valuable resource on earth is now data, not oil, as it can sway millions of people to the other side of a debate. Misinformation and fake news have been around ever since technology started growing. The paper then discusses how different companies made use of artificial intelligence in different aspects of their functioning, but also had to introduce additional terms and policies to protect their customers’ data and prevent misinformation. Now that we have passed many stages, we need to re-skill and re-tool to reach a whole new stage. The difference between a robot and AI is also clearly defined: robots can use AI, but AI is not itself a robot. And the introduction of the gig economy is not far away. The paper concludes that AI won’t cause huge job losses; rather, it creates more jobs.
The paper takes its name from Yudkowsky’s “cheesecake fallacy”: ‘If you’re baking a cheesecake, how large a cheesecake you can bake depends on your intelligence. A superintelligence could build enormous cheesecakes, cheesecakes the size of cities. And Moore’s Law keeps dropping the cost of computing power. By golly, the future will be full of giant cheesecakes!’ – Yudkowsky
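The “Moore’s Law keeps dropping the cost” part of the quote compounds surprisingly fast. A toy calculation (the roughly two-year halving period is the textbook framing of Moore’s Law; the starting cost is an invented placeholder):

```python
# Cost per unit of computing power roughly halves every two years.
start_cost = 100.0     # invented starting cost per unit of compute
years = 20
halvings = years // 2  # one halving per two-year period

final_cost = start_cost / 2 ** halvings
print(final_cost)  # about a thousandfold cheaper after 20 years
```

Ten halvings is a factor of 1024, which is why arguments built on extrapolating computing costs reach such extreme conclusions (and, as the “fallacy” label suggests, why they deserve scrutiny).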

The third paper discusses the acceleration of technological progress over the centuries. The main arguments are that, while it is doubtful that superintelligent computers smarter than humans will be developed soon, large computer networks might turn out to be superhumanly intelligent entities. It also argues that user interfaces may reach a level at which the human-computer combination can justifiably be called superhumanly intelligent; these methods are called intelligence amplification, and biologists are still researching ways to improve natural human intellect. “Superhuman” here refers to ability or power beyond that of an ordinary human. These routes are unlikely to arrive before 2030, as most of them depend on hardware progress, and the fourth cannot be hastened. The argument starts from a comparison with natural fauna: animals find ways of surviving in their niche surroundings and sometimes just depend on natural selection, whereas humans have the added benefit of analyzing situations using a code of conduct. We have control over certain situations and their different outcomes. To make this process more effective and accurate, we need to find a way of developing superhuman intelligence and take full advantage of voluntary action. But from an ordinary human’s point of view, the exponential growth will feel unfamiliar, and we might lose touch with previous rules and codes of conduct. This is where the idea of the singularity is introduced: the point at which existing models are discarded and a new set of rules takes over. The goal is a computer with ultra-intelligent skills that can surpass all the intellectual activities of an ordinary man. Such machines could then be used to develop and evolve ever more powerful superhuman computers, and would be the last invention of man.
I feel this paper is on Picasso’s side when he said, “Everything you can imagine is real”, as it discusses how science fiction and comic writers have come up with spectacular automation in their creative endeavours. Especially in today’s generation, ideas spread faster than ever, including radical ones that might now be considered basic. The important part is how the singularity will arrive; it will surely come faster than any previous technical revolution. But what comes next? As expected, it will be met with both optimism and criticism. How much more practical and better is the singularity compared to other technological developments? It can act as a threat to the human status quo, but it can also give more accurate, safe, and precise results. Is the singularity something we should all strive for, given that humans are built to always want progress? The answer lies in the argument that the singularity revolves around humans’ natural competitiveness and the future possibilities of technology. At the end of the day, humans are responsible for initiating it all; we have hit the trigger point of the singularity and there is no going back now. An alternative route that might be taken is to create composite systems that largely depend on biological life for guidance and features, needing no new hardware implementations. But I feel the singularity will trump this, as it sounds comparatively more practical and possible, and there are valid reasons for that. Hence, the singularity is an inevitable concept.

No, the ideas described in the articles are not outlandish and might well happen. I wouldn’t say they are completely offbeat, as Nick Bostrom pretty much convinced me that we are living in a simulation, though I feel that many exceptions and anomalies were set aside to reach his conclusion, which makes it less accurate and reduces the probability of it happening. The other arguments all make sense, but I feel none of them will apply in the short term, although they surely are possibilities, because technology has risen exponentially since the start; there may have been an increase in social and ethical issues, but there has been no decline in technological development. Some arguments are just facts based on statistics, discussed with the support of examples of technological development affecting our day-to-day lives.

As for artificial intelligence, one of the articles defines it as a general-purpose technology akin to electricity; another argues that intelligence itself is hard to define, but offers the general definition of machines that act intelligently and can make the right decisions in uncertain circumstances. I feel it should now be described as going beyond humans’ ability and power to perform tasks in multiple domains, making things more efficient and effortless without entirely replacing human labour, because I am sure there are some things only humans are capable of doing. Yes, AI naturally leads to the singularity, because that is the whole reason it is growing and expanding so rapidly. I feel it needs to be divided into subcategories depending on its real-life applications. This is why researchers keep going on and on, improving or coming up with new features which might change future generations’ outlook on the world.

I would like to quote something that Ford said: “It is not us versus the AIs, which has been the theme of many AI futurist dystopian movies. We are going to merge with it. We already have.”

Technological Singularity: 3 AI Articles.

The Coming Technological Singularity, Vernor Vinge:
Discusses the idea of technological singularities, the main idea being that AI intelligence will eventually be greater than that of humans, due to the constantly increasing rate at which technology develops. Discusses possible paths to this, such as giant collaborative networks of AI, a merging of human and AI, and biological science adopting AI technology.
The idea of the singularity appeared as early as the 1960s, when I.J. Good defined ultraintelligent machines as machines that can surpass the intellectual activities of any man, however clever. He also predicted that one of these machines would be invented before the end of the 20th century; it may be more probable to occur within the next 100 years.

He discusses ideas for avoiding the singularity, such as government recognition of AI threats and laws and policies against it, but also the risks of these measures, since our economically driven society will benefit from the existence of super-AI. I am personally a fan of the idea of adding built-in rules to AI, such as Asimov’s ruleset, to allow AI to be developed economically while preventing potential harm to humans.

Are you Living in a Computer Simulation, Nick Bostrom:
Presents the idea that humanity is likely either to go extinct before reaching a ‘posthuman’ era, or to be unlikely to run simulations of its history, or to be currently living inside one of those simulations; at least one of these, he argues, must be true.
Each of the ideas and explanations is incredibly complex, discussing in depth the plausibility of simulating billions of human minds, or even entire galaxies, and the purpose of doing so at all; the possibility that any ideas about it being a simulation could simply be edited away; and the idea that we are all just strings of data, very advanced data, but still data. To some, the text may just be an implausible mess of jargon that may or may not actually be true, but the general ideas make sense, even if they can be far-fetched.

Fallacy of the Cheesecake Factory, Wim Naude:
The final text discusses the potential impacts that AI may have and the importance of combating its negative aspects as best we can, as the author states that the ultimate future for any species is to be replaced by AI, referencing pop culture like 2001: A Space Odyssey and the AI HAL 9000, alongside the alien race who turn out to be AI at the end of the movie. He also uses 2001: A Space Odyssey to discuss the morality of AI, and how its interests may not be to benefit humans. The idea of an AI arms race is brought up, with countries competing to create both smarter AIs and smarter AI-powered weapons.
He also questions whether the current term “AI” is even appropriate, since current AI technology doesn’t count as intelligence: it is merely the process of acting upon a large swarm of data to make decisions from previously decided rules.

Personal View:
Regarding the AI singularity, the process itself makes sense: the gradual growth of technology is visibly present today, from processing power to storage space, both of which have grown exponentially. Both of these technologies also underpin AI, suggesting that its growth will be exponential too. Regarding AI becoming ‘superior’ to humans, I think that whilst the idea is plausible, there is no incentive for humans to let it happen; even given its industrial and economic potential, politicians, scientists and other world leaders should see the singularity as a threat and stop it in time.

Models And Simulations


The simulations I tried were Bouncy Maps –!/bouncymaps/WORLD/1147138151 – and PhET. I feel that they have really easy and efficient user interfaces, as you can play around on the web itself, whereas certain other simulations require you to download or embed one. The PhET one is a little tough to get your head around, but as I have used it in physics lessons, I found it easier. For a beginner, I would recommend playing around with the modifiable variables to understand any concept; using the play/pause/slow-motion options gives users a chance to improve their understanding of the concepts. The second one I used was the Bouncy Map. It was odd that a normal map acts as the default page rather than the bouncy map; perhaps that is to emphasize how effective a bouncy map can be. It is easy to work with: the zooming in and out can get a little out of control, but changing the variables and reading data is really easy.

These simulations improve your understanding, but they might convey misinformation or a misinterpretation of certain concepts because they are animated, and they can limit your critical thinking and visualizing skills. Simulations can also miss out on exceptional features (anomalies) of certain concepts, or might be outdated. They are good as a supporting source in research or teaching, but you can’t depend on them: they are models and simulations meant only to improve your understanding, and they cannot replace real-life demonstrations.

Another interesting simulation is Desmos, which is usually used to graph equations and helps a lot in visualizing complex graphs during maths lessons. It is easy to use and provides you with all the mathematical symbols and expressions you need.

Hello all 5 people who read my posts that aren’t me, I’m back from the dead to write a new blog post on the topic of simulations and models except this time, I’ll be using a different medium: Infographics.

When tasked with making infographics for the topic, I decided to use Canva instead of hand (mouse?) formatting the infographics in Photoshop, so that I could spend less time on colour theory, alignment, stylization and the intricacies of making an infographic myself, and more on research. With this came the requirement to learn how Canva’s interface worked, some aspects of which I’ll be criticising.

The first thing I noticed about Canva was the volume of pre-existing infographic templates. This made it very easy to get straight into the writing process without having to spend much time thinking about how I wanted my infographic to look. Apart from infographic templates, Canva also has an extensive gallery of icons and elements that can be used by simply dragging and dropping them onto the infographic itself.

The first model I looked at was Bouncy Maps, a data mapper that rescales the size of countries based on their percentage share of a statistic.
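The rescaling Bouncy Maps performs amounts to proportional area scaling. A minimal sketch of the idea (the country names, figures and total drawing area are invented placeholders, not Bouncy Maps’ actual code or data):

```python
# Give each country a drawn area proportional to its share of a statistic.
values = {"Country A": 50, "Country B": 30, "Country C": 20}  # invented data
total_map_area = 1000.0  # invented drawing area, arbitrary units

total = sum(values.values())
areas = {name: total_map_area * v / total for name, v in values.items()}
print(areas)  # {'Country A': 500.0, 'Country B': 300.0, 'Country C': 200.0}
```

A country with half the world total ends up drawn at half the map area, which is exactly the visual comparison the bouncy map makes.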

Link to pdf copy: Bouncy Maps

The second model I looked at was a cycling simulation that used a road traffic simulation for its base.

Link to pdf copy: Cycling Sim

Infographic link


PHeT atomic interaction simulations:

This simulation was quite interesting for me, since I’d used other simulations from the same website in physics classes before; those were for much simpler topics, or for exploring radioactivity, which isn’t possible to do in Singapore. It was therefore quite familiar to me, but I can imagine that for a first-time user all the buttons and variables you can control could be a bit daunting. However, I think that is the main advantage of the simulation: you can obtain technically “experimental” values from it really easily and quickly.


My first impression was that the website was very clean-looking and didn’t have many words. It took me a while to realise that there were actually many other variables I could display, and that I could also show variables specific to the USA, such as the electoral college (which is strange considering that the company is European). Overall, this model was quite concise and effective at what it was designed to do, since it was intuitive to navigate and easy to understand what the sizes of countries meant.



(Links to open the Bouncy Maps and PhET infographics are above)

Computer models and simulations can challenge our learning, but they can also really help us understand and visualize things more clearly. Bouncy Maps and PhET simulations are quite different, but both are very successful at getting their points across. Through Bouncy Maps you can explore data from all around the world, and through the sizes of the different countries you can see how they stand in comparison to the rest of the world. If you scroll down, below the maps there is a table showing the actual numbers and percentage values for every country in the world; you can even download spreadsheets with data from over 50 years for any further research. As a simulation, Bouncy Maps is very successful in conveying the information it wants to share.

PhET’s simulations are used for physics, chemistry, maths, biology, and earth sciences. Of their over 150 simulations, more than two-thirds are physics related. These simulations allow anyone to experience and learn science whenever they want and from wherever they are. The simulations are very clear and include different stages. They help students a lot, as some experiments can’t be carried out in the classroom, or are simply not very accurate when performed there. In these simulated experiments there is no uncertainty, and everyone has access to every experiment they need.

Personally, I have used both of them and think that they are both very successful and useful. I tend to use PhET more than Bouncy Maps, as it helps me when I’m studying physics. Although the information one can obtain from Bouncy Maps isn’t so relevant to what I do, I still enjoy looking through it, as I find it all very interesting. I also like that you have the possibility to download data from previous years; it is a good source of information, and you can be certain that the data is accurate.

PhET acts more like a simulation: it is interactive and helps you learn. There are different scenarios which are imitations of experiments or situations, and by interacting with the software, the user experiences those situations remotely. Bouncy Maps, however, is more of a model. Although the maps move around, you can’t really interact with the software; it is static in that sense and simply presents data to the user. Bouncy Maps are structured representations of different sets of data. Models and simulations are both very helpful when it comes to learning, in different situations and for different subjects or topics.

Bouncy Maps:!/bouncymaps/world/-2102779804

PhET Simulations:

