The end of human labour is inevitable; here's why
OK. So, you might look at state-of-the-art A.I. and say "oh, this uses too much power compared to a human brain" or "this takes too many examples compared to a human brain".
So far, correct.
But there are 7.6 billion humans. If an A.I. watches all of them, all of the time (easy to imagine, given that around two billion of us already carry two or three competing A.I. in our pockets, forever listening for an activation keyword), then it has an enormous set of examples with which to train the machine mind.
"But," you ask, "what about the power consumption?"
Humans cost a bare minimum of $1.25 per day, even if they're literally slaves and you only pay for food and (minimal) shelter. Solar power can be as cheap as 2.99¢/kWh.
Combined, that means any A.I. which draws less than 1.742 kilowatts of continuous power per human-equivalent-part is beating the cheapest possible human: $1.25 per day buys about 41.8 kWh at that solar price, which, spread over 24 hours, works out to 1.742 kW. By way of comparison, Google's first-generation Tensor Processing Unit uses 40 W when busy, and in the domain of Go it is about 174,969 times as cost efficient as a minimum-cost human, because four of them working together as one machine can teach itself to play Go better than the best human in… three days.
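For the curious, here's a minimal Python sketch of that arithmetic. The wage floor, solar price, TPU wattage, and three-day training run are the figures above; the 33-year human "training time" (roughly the age of a top champion) is an assumption of mine, one set of inputs that lands close to the ~175,000× figure rather than a number from any official source.

```python
# A rough sketch of the cost comparison above. The $1.25/day wage floor,
# 2.99¢/kWh solar price, 40 W TPU draw, and 3-day training run are from
# the post; the 33-year human "training time" is an assumption chosen
# because it approximately reproduces the post's ~175,000x figure.

WAGE_FLOOR_USD_PER_DAY = 1.25      # cheapest possible human
SOLAR_USD_PER_KWH = 0.0299        # cheapest quoted solar power
HOURS_PER_DAY = 24

# Break-even: how much continuous power can an A.I. draw before its
# electricity bill exceeds the cheapest possible human's daily cost?
kwh_per_day = WAGE_FLOOR_USD_PER_DAY / SOLAR_USD_PER_KWH  # ~41.8 kWh/day
break_even_kw = kwh_per_day / HOURS_PER_DAY               # ~1.742 kW
print(f"Break-even power: {break_even_kw:.3f} kW per human-equivalent-part")

# Cost-efficiency ratio in the domain of Go (assumptions flagged above).
TPU_WATTS = 40                     # first-generation TPU, busy
TRAINING_DAYS = 3                  # the three-day Go training run
HUMAN_TRAINING_YEARS = 33          # assumed: ~age of a top human champion

human_cost = WAGE_FLOOR_USD_PER_DAY * HUMAN_TRAINING_YEARS * 365.25
tpu_kwh = (TPU_WATTS / 1000) * TRAINING_DAYS * HOURS_PER_DAY
tpu_cost = tpu_kwh * SOLAR_USD_PER_KWH
print(f"Cost ratio: {human_cost / tpu_cost:,.0f}x")       # ~175,000x

# Note: counting all four TPUs (160 W) divides the ratio by four.
```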
And don't forget that it's reasonable for an A.I. to have as many human-equivalent-parts as there are humans performing whichever skill is being fully automated.
Skill. Not sector, not factory, skill.
And when one skill is automated away, when the people who performed that skill go off to retrain on something else, no matter where they are or what they do, there will be an A.I. watching them and learning with them.
Is there a way out?
Sure. All you have to do is make sure you learn a skill nobody else is learning.
Unfortunately, there is a reason why "thinking outside the box" is such a business cliché: humans suck at that style of thinking, even when we know what it is and why it's important. We're too social; we copy each other, and we create by remixing more than by genuinely innovating, even when we think we have something new.
Computers are, ironically, better than humans at thinking outside the box: two of the issues in Concrete Problems in AI Safety are there precisely because machines easily stray outside the boxes we were thinking within when we gave them orders. (I suspect that one of the things which forces A.I. to need far more examples to learn things than we humans do is that they have zero preconceived notions, and must therefore be equally open-minded to all possibilities.)
Worse, no matter how creative you are, if other humans see you performing a skill that machines have yet to master, those humans will copy you… and then the machines, even today's machines, will rapidly learn from all those enthusiastic humans, so gleeful about their new trick for staying one step ahead of the machines, the new skill they can point to and say "look, humans are special, computers can't do this", right up until the computers do it.
Original post timestamp: Fri, 17 Nov 2017 11:36:34 +0000
Tags: AI, Technology
Categories: Futurology