The Philosophical Argument Against A.I. Killing Us All…?

“there’s another counterargument to be made based on the philosophy of ethics. Until an A.I. has feelings, it’s going to be unable to want to do anything at all, let alone act counter to humanity’s interests and fight off human resistance. Wanting is essential to any kind of independent action. And the minute an A.I. wants anything, it will live in a universe with rewards and punishments—including punishments from us for behaving badly. In order to survive in a world dominated by humans, a nascent A.I. will have to develop a humanlike moral sense that certain things are right and others are wrong”

The Philosophical Argument Against A.I. Killing Us All
http://www.slate.com/articles/technology/future_tense/2016/04/the_philosophical_argument_against_artificial_intelligence_killing_us_all.html
via Instapaper

Will machines be better at STEM tasks than humans?

“right now we’re training our kids for the opposite. We’re emphasizing science, technology, engineering and mathematics. While those skills might be necessary to get us to the point of singularity, they’re also ripe for obsolescence once we’re there. Machines will be better at math, science and engineering than we will.”

The funny things happening on the way to singularity
http://social.techcrunch.com/2016/04/09/the-funny-things-happening-on-the-way-to-singularity/
via Instapaper

Here's my take