Race Against the Machine Part 2

Rational choice theory is a cornerstone of conventional economic thinking. It states that:

“Individuals always make prudent and logical decisions. These decisions provide people with the greatest benefit or satisfaction — given the choices available — and are also in their highest self-interest.”

[Image: Stephen Hawking, Elon Musk, and Bill Gates]

Presumably Stephen Hawking, Elon Musk, and Bill Gates had something like this in mind when they published an open letter in January 2015 urging that artificial intelligence R&D should focus “not only on making AI more capable, but also on maximizing the societal benefit.” To execute on this imperative, they urged an interdisciplinary collaboration among “economics, law and philosophy, computer security, formal methods and, of course, various branches of AI itself.” (Since its release, the letter has garnered another 8,000 signatures — you can sign it, too, if you like.)

The letter’s steady, rational four paragraphs praise how technology has benefited the human race, and anticipate more of the same in the future, but its reception and the authors’ comments in other contexts are not so measured. As a result, the letter has become a cheering section for those who think humanity is losing its race against the robots.

Consider, for example, the following from an Observer article:

“Success in creating AI would be the biggest event in human history,” wrote Stephen Hawking in an op-ed, which appeared in The Independent in 2014. “Unfortunately, it might also be the last, unless we learn how to avoid the risks.” Professor Hawking added in a 2014 interview with BBC, “humans, limited by slow biological evolution, couldn’t compete and would be superseded by A.I.”

Elon Musk called the prospect of artificial intelligence “our greatest existential threat” in a 2014 interview with MIT students at the AeroAstro Centennial Symposium. “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.” Mr. Musk cites his decision to invest in the Artificial Intelligence firm, DeepMind, as a means to “just keep an eye on what’s going on with artificial intelligence. I think there is potentially a dangerous outcome there.”

Microsoft co-founder Bill Gates has also expressed concerns about Artificial Intelligence. During a Q&A session on Reddit in January 2015, Mr. Gates said, “I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

Or consider this Elon Musk comment in Vanity Fair:

In a startling public reproach to his friends and fellow techies, Musk warned that they could be creating the means of their own destruction. He told Bloomberg’s Ashlee Vance, the author of the biography Elon Musk, that he was afraid that his friend Larry Page, a co-founder of Google and now the C.E.O. of its parent company, Alphabet, could have perfectly good intentions but still “produce something evil by accident”—including, possibly, “a fleet of artificial intelligence-enhanced robots capable of destroying mankind.”

In other words, Hawking, Gates, and Musk aren’t just worried about machines taking over jobs; they’re worried about the end of the world — or at least the human race. This Washington Post op-ed thinks that might not be such a bad thing:

When a technology is so obviously dangerous — like nuclear energy or synthetic biology — humanity has an imperative to consider dystopian predictions of the future. But it also has an imperative to push on, to reach its full potential. While it’s scary, sure, that humans may no longer be the smartest life forms in the room a generation from now, should we really be that concerned? Seems like we’ve already done a pretty good job of finishing off the planet anyway. If anything, we should be welcoming our AI masters to arrive sooner rather than later.

Or consider this open letter written back to Hawking, Gates, and Musk, which basically says forget the fear mongering — it’s going to happen no matter what you think:

Progress is inevitable, even if it is reached by accident and happenstance. Even if we do not intend to, sentient AI is something that will inevitably be created, be it through the evolution of a learning AI, or as a byproduct of some research. No treaty or coalition can stop it, no matter what you think. I just pray you do not go from educated men to fear mongers when it happens.

As usual, we’re at an ideological impasse, with both sides responding not so much according to the pros and cons but according to their predispositions. This article suggests a way through the impasse:

At the beginning of this article, we asked if the pessimists or optimists would be right.

There is a third option, though: one where we move from building jobs around processes and tasks, a solution that is optimal for neither human nor machine, to building jobs around problems.

The article is long, well-researched, and… well, very rational. Too bad that — conventional thinking aside — other research shows we rarely act from a rational outlook when it comes to jobs and the economy… or anything else, for that matter.

More on that next time.

Race Against the Machine

For the past several years, two MIT big thinkers[1] have been the go-to authorities in the scramble to explain how robotics, artificial intelligence, and big data are revolutionizing the economy and the working world. Their two books were published four and six years ago — so yesterday in the world of technology — but they were remarkably prescient when written, and have not diminished in relevance. They are:

Race Against the Machine: How the Digital Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy (2012)

The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (2014)

Click here for a chapter-by-chapter digest of The Second Machine Age, written by an all-star cast of economic commentators. Among other things, they acknowledge the authors’ view that neoliberal capitalism has not fared well in its dealings with the technological juggernaut, but in the absence of a better alternative, we might as well continue to ride the horse in the direction it’s going.

While admitting that History (not human choice) is “littered with unintended… side effects of well-intentioned social and economic policies,” the authors cite Tim O’Reilly[2] in favor of pushing forward with technology’s momentum rather than clinging to the past or present. They suggest that we should let the technologies do their work and just find ways to deal with the consequences. They are “skeptical of efforts to come up with fundamental alternatives to capitalism.”

David Rotman, editor of the MIT Technology Review, cites The Second Machine Age extensively in an excellent, longer article, “How Technology is Destroying Jobs.” Although the article is packed with contrary analysis and opinion, the following excerpts emphasize what many might consider the shadowy side of the street (compared to the sunny side we looked at in the past couple of posts). I added the headings below to emphasize that many of the general economic themes we’ve been talking about also apply to the specific dynamics of the job market.

It used to be that economic growth — including wealth creation — also created more jobs. It doesn’t work that way any more. Perhaps the most damning piece of evidence, according to Brynjolfsson, is a chart that only an economist could love. In economics, productivity—the amount of economic value created for a given unit of input, such as an hour of labor—is a crucial indicator of growth and wealth creation. It is a measure of progress. On the chart Brynjolfsson likes to show, separate lines represent productivity and total employment in the United States.

For years after World War II, the two lines closely tracked each other, with increases in jobs corresponding to increases in productivity. The pattern is clear: as businesses generated more value from their workers, the country as a whole became richer, which fueled more economic activity and created even more jobs. Then, beginning in 2000, the lines diverge; productivity continues to rise robustly, but employment suddenly wilts. By 2011, a significant gap appears between the two lines, showing economic growth with no parallel increase in job creation. Brynjolfsson and McAfee call it the “great decoupling.” And Brynjolfsson says he is confident that technology is behind both the healthy growth in productivity and the weak growth in jobs.

A rising economic tide no longer floats all boats. The result is a skewed allocation of the rewards of growth away from jobs — i.e., economic inequality. The contention that automation and digital technologies are partly responsible for today’s lack of jobs has obviously touched a raw nerve for many worried about their own employment. But this is only one consequence of what Brynjolfsson and McAfee see as a broader trend. The rapid acceleration of technological progress, they say, has greatly widened the gap between economic winners and losers—the income inequalities that many economists have worried about for decades.

“[S]teadily rising productivity raised all boats for much of the 20th century,” [Brynjolfsson] says. “Many people, especially economists, jumped to the conclusion that was just the way the world worked. I used to say that if we took care of productivity, everything else would take care of itself; it was the single most important economic statistic. But that’s no longer true.” He adds, “It’s one of the dirty secrets of economics: technology progress does grow the economy and create wealth, but there is no economic law that says everyone will benefit.” In other words, in the race against the machine, some are likely to win while many others lose.

That robots, automation, and software can replace people might seem obvious to anyone who’s worked in automotive manufacturing or as a travel agent. But Brynjolfsson and McAfee’s claim is more troubling and controversial. They believe that rapid technological change has been destroying jobs faster than it is creating them, contributing to the stagnation of median income and the growth of inequality in the United States.

Meanwhile, technology is taking over the jobs that are left — blue collar, white collar, and even the professions. [I]mpressive advances in computer technology—from improved industrial robotics to automated translation services—are largely behind the sluggish employment growth of the last 10 to 15 years. Even more ominous for workers, the MIT academics foresee dismal prospects for many types of jobs as these powerful new technologies are increasingly adopted not only in manufacturing, clerical, and retail work but in professions such as law, financial services, education, and medicine.

Technologies like the Web, artificial intelligence, big data, and improved analytics—all made possible by the ever increasing availability of cheap computing power and storage capacity—are automating many routine tasks. Countless traditional white-collar jobs, such as many in the post office and in customer service, have disappeared.

New technologies are “encroaching into human skills in a way that is completely unprecedented,” McAfee says, and many middle-class jobs are right in the bull’s-eye; even relatively high-skill work in education, medicine, and law is affected.

We’ll visit the shadowy side of the street again next time.

[1] Erik Brynjolfsson is director of the MIT Center for Digital Business, and Andrew McAfee is a principal research scientist at MIT who studies how digital technologies are changing business, the economy, and society.

[2] According to his official bio on his website, Tim O’Reilly “is the founder and CEO of O’Reilly Media, Inc. His original business plan was simply ‘interesting work for interesting people,’ and that’s worked out pretty well. O’Reilly Media delivers online learning, publishes books, runs conferences, urges companies to create more value than they capture, and tries to change the world by spreading and amplifying the knowledge of innovators.”

Bright Sunshiny Day Cont’d.

[Image: David Lee’s TED talk]

Last time, we heard David Lee[1] express his conviction that, far from destroying human jobs, robotic technology will unleash human creativity on a wonderful new world of work. His perspective is so remarkably and refreshingly upbeat that I thought we’d let him continue where he left off last week:

“I think it’s important to recognize that we brought this problem on ourselves. And it’s not just because, you know, we are the one building the robots. But even though most jobs left the factory decades ago, we still hold on to this factory mindset of standardization and de-skilling. We still define jobs around procedural tasks and then pay people for the number of hours that they perform these tasks. We’ve created narrow job definitions like cashier, loan processor or taxi driver and then asked people to form entire careers around these singular tasks.

“These choices have left us with actually two dangerous side effects. The first is that these narrowly defined jobs will be the first to be displaced by robots, because single-task robots are just the easiest kinds to build. But second, we have accidentally made it so that millions of workers around the world have unbelievably boring working lives.

“Let’s take the example of a call center agent. Over the last few decades, we brag about lower operating costs because we’ve taken most of the need for brainpower out of the person and put it into the system. For most of their day, they click on screens, they read scripts. They act more like machines than humans. And unfortunately, over the next few years, as our technology gets more advanced, they, along with people like clerks and bookkeepers, will see the vast majority of their work disappear.

“To counteract this, we have to start creating new jobs that are less centered on the tasks that a person does and more focused on the skills that a person brings to work. For example, robots are great at repetitive and constrained work, but human beings have an amazing ability to bring together capability with creativity when faced with problems that we’ve never seen before.

“We need to realistically think about the tasks that will be disappearing over the next few years and start planning for more meaningful, more valuable work that should replace it. We need to create environments where both human beings and robots thrive. I say, let’s give more work to the robots, and let’s start with the work that we absolutely hate doing. Here, robot, process this painfully idiotic report.

“And for the human beings, we should follow the advice from Harry Davis at the University of Chicago. He says we have to make it so that people don’t leave too much of themselves in the trunk of their car. I mean, human beings are amazing on weekends. Think about the people that you know and what they do on Saturdays. They’re artists, carpenters, chefs and athletes. But on Monday, they’re back to being Junior HR Specialist and Systems Analyst 3.

“You know, these narrow job titles not only sound boring, but they’re actually a subtle encouragement for people to make narrow and boring job contributions. But I’ve seen firsthand that when you invite people to be more, they can amaze us with how much more they can be.

“[The key is] to turn dreams into a reality. And that dreaming is an important part of what separates us from machines. For now, our machines do not get frustrated, they do not get annoyed, and they certainly don’t imagine.

“But we, as human beings — we feel pain, we get frustrated. And it’s when we’re most annoyed and most curious that we’re motivated to dig into a problem and create change. Our imaginations are the birthplace of new products, new services, and even new industries.

“If we really want to robot-proof our jobs, we, as leaders, need to get out of the mindset of telling people what to do and instead start asking them what problems they’re inspired to solve and what talents they want to bring to work. Because when you can bring your Saturday self to work on Wednesdays, you’ll look forward to Mondays more, and those feelings that we have about Mondays are part of what makes us human.”

We’ll give the other side equal time next week.

[1] David Lee is Vice President of Innovation and the Strategic Enterprise Fund for UPS.

Gonna Be a Bright, Bright, Sunshiny Day

We met Sebastian Thrun last time. He’s a bright guy with a sunshiny disposition who’s not worried about robots and artificial intelligence taking over all the good jobs, even his own. Instead, he’s perfectly okay if technology eliminates most of what he does every day because he believes human ingenuity will fill the vacuum with something better. This is from his conversation with TED curator Chris Anderson:

“If I look at my own job as a CEO, I would say 90 percent of my work is repetitive, I don’t enjoy it, I spend about four hours per day on stupid, repetitive email. And I’m burning to have something that helps me get rid of this. Why? Because I believe all of us are insanely creative… What this will empower is to turn this creativity into action.

“We’ve unleashed this amazing creativity by de-slaving us from farming and later, of course, from factory work and have invented so many things. It’s going to be even better, in my opinion. And there’s going to be great side effects. One of the side effects will be that things like food and medical supply and education and shelter and transportation will all become much more affordable to all of us, not just the rich people.”

Anderson sums it up this way:

“So the jobs that are getting lost, in a way, even though it’s going to be painful, humans are capable of more than those jobs. This is the dream. The dream is that humans can rise to just a new level of empowerment and discovery. That’s the dream.”

Another bright guy with a sunshiny disposition is David Lee, Vice President of Innovation and the Strategic Enterprise Fund for UPS. He, too, shares the dream that technology will turn human creativity loose on a whole new kind of working world. Here’s his TED talk (click the image):

[Image: David Lee’s TED talk]

Like Sebastian Thrun, he’s no Pollyanna:  he understands that yes, technology threatens jobs:

“There’s a lot of valid concern these days that our technology is getting so smart that we’ve put ourselves on the path to a jobless future. And I think the example of a self-driving car is actually the easiest one to see. So these are going to be fantastic for all kinds of different reasons. But did you know that ‘driver’ is actually the most common job in 29 of the 50 US states? What’s going to happen to these jobs when we’re no longer driving our cars or cooking our food or even diagnosing our own diseases?

“Well, a recent study from Forrester Research goes so far to predict that 25 million jobs might disappear over the next 10 years. To put that in perspective, that’s three times as many jobs lost in the aftermath of the financial crisis. And it’s not just blue-collar jobs that are at risk. On Wall Street and across Silicon Valley, we are seeing tremendous gains in the quality of analysis and decision-making because of machine learning. So even the smartest, highest-paid people will be affected by this change.

“What’s clear is that no matter what your job is, at least some, if not all of your work, is going to be done by a robot or software in the next few years.”

But that’s not the end of the story. Like Thrun, he believes that the rise of the robots will clear the way for unprecedented levels of human creativity — provided we move fast:

“The good news is that we have faced down and recovered two mass extinctions of jobs before. From 1870 to 1970, the percent of American workers based on farms fell by 90 percent, and then again from 1950 to 2010, the percent of Americans working in factories fell by 75 percent. The challenge we face this time, however, is one of time. We had a hundred years to move from farms to factories, and then 60 years to fully build out a service economy.

“The rate of change today suggests that we may only have 10 or 15 years to adjust, and if we don’t react fast enough, that means by the time today’s elementary-school students are college-aged, we could be living in a world that’s robotic, largely unemployed and stuck in kind of un-great depression.

“But I don’t think it has to be this way. You see, I work in innovation, and part of my job is to shape how large companies apply new technologies. Certainly some of these technologies are even specifically designed to replace human workers. But I believe that if we start taking steps right now to change the nature of work, we can not only create environments where people love coming to work but also generate the innovation that we need to replace the millions of jobs that will be lost to technology.

“I believe that the key to preventing our jobless future is to rediscover what makes us human, and to create a new generation of human-centered jobs that allow us to unlock the hidden talents and passions that we carry with us every day.”

More from David Lee next time.

If all this bright sunshiny perspective made you think of that old tune, you might treat yourself to a listen. It’s short, you’ve got time.

And for a look at a current legal challenge to the “gig economy” across the pond, check out this Economist article from earlier this week.

Learning to Learn

“I didn’t know robots had advanced so far,” a reader remarked after last week’s post about how computers are displacing knowledge workers. What changed to make that happen? The machines learned how to learn. This is from “Artificial Intelligence Goes Bilingual—Without a Dictionary,” Science Magazine, Nov. 28, 2017.

“Imagine that you give one person lots of Chinese books and lots of Arabic books—none of them overlapping—and the person has to learn to translate Chinese to Arabic. That seems impossible, right?” says… Mikel Artetxe, a computer scientist at the University of the Basque Country (UPV) in San Sebastián, Spain. “But we show that a computer can do that.”

Most machine learning—in which neural networks and other computer algorithms learn from experience—is “supervised.” A computer makes a guess, receives the right answer, and adjusts its process accordingly. That works well when teaching a computer to translate between, say, English and French, because many documents exist in both languages. It doesn’t work so well for rare languages, or for popular ones without many parallel texts.

[This learning technique is called] unsupervised machine learning. [A computer using this technique] constructs bilingual dictionaries without the aid of a human teacher telling them when their guesses are right.
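If you’re wondering what “a computer makes a guess, receives the right answer, and adjusts its process accordingly” looks like in practice, here’s a toy Python sketch (my own illustration, not the researchers’ system) of a supervised learner nudging a single number toward the right answers. The unsupervised approach described above is the hard part: no right answers are supplied at all, and the algorithm has to discover the structure, such as word correspondences across languages, on its own.

```python
# Toy supervised learning: guess, check against the right answer, adjust.
# Illustrative only -- not the translation research described above.

# Training data: the "right answers" follow the hidden rule y = 3 * x.
examples = [(x, 3 * x) for x in range(1, 11)]

weight = 0.0          # the machine's initial guess about the rule
learning_rate = 0.01

for epoch in range(200):
    for x, right_answer in examples:
        guess = weight * x                   # make a guess
        error = guess - right_answer         # compare it with the right answer
        weight -= learning_rate * error * x  # adjust the process accordingly

print(f"Learned weight: {weight:.3f}")       # ends up very close to 3.0
```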

Hmmm… I could have used that last year, when my wife and I spent three months visiting our daughter in South Korea. The Korean language is ridiculously complex; I never got much past “good morning.”

[Image: AlphaGo match on TV]

Go matches were a standard offering on the gym TVs where I worked out in Seoul. (Imagine two guys in black suits staring intently at a game board — not exactly a riveting workout visual.) Like the Korean language, Go is also ridiculously complex, and mysterious, too:  the masters seem to make moves more intuitively than analytically. But the days of human Go supremacy are over. Google wizard and overall overachiever Sebastian Thrun[1] explains why in this conversation with TED Curator Chris Anderson:

[Image: Sebastian Thrun’s TED conversation]

“Artificial intelligence and machine learning is about 60 years old and has not had a great day in its past until recently. And the reason is that today, we have reached a scale of computing and datasets that was necessary to make machines smart. The new thing now is that computers can find their own rules. So instead of an expert deciphering, step by step, a rule for every contingency, what you do now is you give the computer examples and have it infer its own rules.

“A really good example is AlphaGo. Normally, in game playing, you would really write down all the rules, but in AlphaGo’s case, the system looked over a million games and was able to infer its own rules and then beat the world’s residing Go champion. That is exciting, because it relieves the software engineer of the need of being super smart, and pushes the burden towards the data.

“20 years ago the computers were as big as a cockroach brain. Now they are powerful enough to really emulate specialized human thinking. And then the computers take advantage of the fact that they can look at much more data than people can. AlphaGo looked at more than a million games.  No human expert can ever study a million games. So as a result, the computer can find rules that even people can’t find.”

Thrun made those comments in April 2017. AlphaGo’s championship reign was short-lived:  six months later it lost big to a new cyber challenger that taught itself without reviewing all that data. This is from “AlphaGo Zero Shows Machines Can Become Superhuman Without Any Help,” MIT Technology Review, October 18, 2017.

AlphaGo wasn’t the best Go player on the planet for very long. A new version of the masterful AI program has emerged, and it’s a monster. In a head-to-head matchup, AlphaGo Zero defeated the original program by 100 games to none.

Whereas the original AlphaGo learned by ingesting data from hundreds of thousands of games played by human experts, AlphaGo Zero started with nothing but a blank board and the rules of the game. It learned simply by playing millions of games against itself, using what it learned in each game to improve.

The new program represents a step forward in the quest to build machines that are truly intelligent. That’s because machines will need to figure out solutions to difficult problems even when there isn’t a large amount of training data to learn from.

“The most striking thing is we don’t need any human data anymore,” says Demis Hassabis, CEO and cofounder of DeepMind [the creators of AlphaGo Zero].

“By not using human data or human expertise, we’ve actually removed the constraints of human knowledge,” says David Silver, the lead researcher at DeepMind and a professor at University College London. “It’s able to create knowledge for itself from first principles.”

Did you catch that? “We’ve removed the constraints of human knowledge.” Wow. No wonder computers are elbowing all those knowledge workers out of the way.
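For the curious, here is the idea of “playing millions of games against itself” in miniature: a toy Python sketch, my own illustration rather than DeepMind’s method (which pairs deep neural networks with Monte Carlo tree search), that teaches itself a trivial take-the-last-stone game from nothing but the rules and the outcomes of its own games.

```python
import random
from collections import defaultdict

# Toy "self-play" learner for a trivial game: players alternate taking 1 or 2
# stones, and whoever takes the last stone wins. A sketch of the idea only --
# AlphaGo Zero's real method is far more sophisticated.

N_STONES = 7
q = defaultdict(float)   # learned value of (stones_left, move) for the player to move
counts = defaultdict(int)

def choose(stones, explore=0.1):
    """Pick a move: mostly the best one learned so far, occasionally a random one."""
    moves = [m for m in (1, 2) if m <= stones]
    if random.random() < explore:
        return random.choice(moves)
    return max(moves, key=lambda m: q[(stones, m)])

def self_play_game():
    """Play one game against itself; return the move history and the winner."""
    history, stones, player = [], N_STONES, 0
    while True:
        move = choose(stones)
        history.append((player, stones, move))
        stones -= move
        if stones == 0:
            return history, player        # this player took the last stone
        player = 1 - player

def train(games=20000):
    for _ in range(games):
        history, winner = self_play_game()
        for player, stones, move in history:
            reward = 1.0 if player == winner else -1.0
            counts[(stones, move)] += 1
            # nudge the stored value toward the outcome actually observed
            q[(stones, move)] += (reward - q[(stones, move)]) / counts[(stones, move)]

train()
for stones in range(1, N_STONES + 1):
    print(f"{stones} stones left -> learned move: take {choose(stones, explore=0.0)}")
```

Even at this toy scale the point stands: nobody tells the program which moves are good. It works that out for itself, which is the property Hassabis and Silver are describing.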

What’s left for humans to do? We’ll hear from Sebastian Thrun and others on that topic next time.

[1] Sebastian Thrun’s TED bio describes him as “an educator, entrepreneur and troublemaker. After a long life as a professor at Stanford University, Thrun resigned from tenure to join Google. At Google, he founded Google X, home to self-driving cars and many other moonshot technologies. Thrun also founded Udacity, an online university with worldwide reach, and Kitty Hawk, a ‘flying car’ company. He has authored 11 books, 400 papers, holds 3 doctorates and has won numerous awards.”

The Super Bowl Ad That Was Too True To Be Too Funny

[Image: Sprint Super Bowl ad]

Did you see the Sprint Super Bowl ad (click the image), where a scientist gets laughed out of his lab by his impertinent artificially intelligent robots? It was funny, but in that groaning kind of way when humor is just a bit too true. Let’s break down the punchline:  “My coworkers,” says the scientist, talking about robots, “laughed at me.” He responds to the robotic peer pressure with the human feeling of shame, and changes his cell phone provider to conform.

Wow. Get used to it. It could happen to you. True, the robots’ sense of humor was pretty immature. He chastises them: “Guys, it wasn’t that funny.” But they’ll learn — that’s what artificial intelligence does — it learns, really fast. They’ll be doing sarcasm and irony soon — that is, when they’re not busy passing a university entrance exam, managing an investment portfolio, developing business strategy, practicing medicine, practicing law, writing up your news feeds, creating art, composing music… and generally doing all those other things everybody knew all along that robots surely would never be able to do.

Miami lawyer Luis Salazar used to think that way, until he met Ross. This is from a NY Times article from last March:

“Skeptical at first, he tested Ross against himself. After 10 hours of searching online legal databases, he found a case whose facts nearly mirrored the one he was working on. Ross found that case almost instantly.”

Ross is not a human. “He” never went to law school, never took a legal methods class, never learned to do research, never had a professor or partner critique his legal writing. “He” is machine intelligence. Not only did he find the clincher case in a fraction of the time Salazar did, he also did a nice job of writing up a legal memo:

“Mr. Salazar has been particularly impressed by a legal memo service that Ross is developing. Type in a legal question and Ross replies a day later with a few paragraphs summarizing the answer and a two-page explanatory memo.

“The results, he said, are indistinguishable from a memo written by a lawyer. ‘That blew me away,’ Mr. Salazar said. ‘It’s kind of scary. If it gets better, a lot of people could lose their jobs.’”

Yes, scary — especially when you consider the cost of legal research:  click here and enter “legal research” in the search field. Among other things, you’ll get an article about Ross and another about the cost of legal research. If Ross is that good, he could save a lot of firms a lot of money… and eliminate a lot of jobs along the way. (The Ross Intelligence website is worth a visit — there’s attorney Salazar on video, and an impressive banner of early adopting law firms, with a lot of names you’ll recognize.)

And speaking of things that were never supposed to happen, the NY Times article cites a McKinsey report estimating that, using technology then available, 23 percent of a lawyer’s work could be fully automated. Given the explosion of AI in the past year, we are likely already well beyond that percentage.

How are you going to compete with that? You’re not. Consider this story from a source we’ve visited several times already (the book Plutocrats by Chrystia Freeland):

“In 2010, DLA Piper faced a court-imposed deadline of searching through 570,000 documents in one week. The firm… hired Clearwell, a Silicon Valley e-discovery company. Clearwell software did the job in two days. DLA Piper lawyers spent one day going through the results. After three days of work, the firm responded to the judge’s order with 3,070 documents. A decade ago, DLA Piper would have employed thirty associates full-time for six months to do that work.”

Note the date:  that happened eight years ago. Today, the whole thing would happen a lot faster, with much less human involvement.

I tried to get a robot to write this blog post, but didn’t succeed. Articoolo.com looked promising:  “Stop wasting your time,” its website trumpets, “let us do the writing for you!” The company is obviously fully in tune with the freelance job market we’ve been talking about:  “You no longer have to wait for someone on the other side of the world to write, proofread and send the content to you.” I tried a few topic entries, but the best it could do was admit that it had written an article that wasn’t up to standards, so sorry… But then, it’s only available in beta. Give it time to learn.

I also sent an inquiry to the people at Ross Intelligence, asking if Ross could write an article about itself. I never heard back — he’s probably too busy signing up more firms to hire him.

More on robots and artificial intelligence next time.

The Super Bowl of Economics: Capitalism vs. Technology


Technology is the odds-on favorite.

In the multi-author collection Does Capitalism Have a Future?, Randall Collins, Emeritus Professor of Sociology at the University of Pennsylvania, observes that capitalism is subject to a “long-term structural weakness,” namely “the technological displacement of labor by machines.”

Technology eliminating jobs is nothing new. From the end of the 18th Century through the end of the 20th, the Industrial Revolution swept a huge number of manual labor jobs into the dustbin of history. It didn’t happen instantly:  at the turn of the 20th Century, 40% of the U.S. workforce still worked on the farm. A half century later, that figure was 16%.

I grew up in rural Minnesota, where farm kids did chores before school, town kids baled hay for summer jobs, and everybody watched the weather and asked how the crops were doing. We didn’t know we were a vanishing species. In fact, “learning a trade” so you could “work with your hands” was still a moral and societal virtue. I chose carpentry. It was my first full-time job after I graduated with a liberal arts degree.

Another half century later, at the start of the 21st Century, less than 2% of the U.S. workforce was still on the farm. In my hometown, our GI fathers beat their swords into plowshares, then my generation moved to the city and melted the plows down into silicon. And now the technological revolution is doing the same thing to mental labor that the Industrial revolution did to manual labor — only it’s doing it way faster, even though most of us aren’t aware that “knowledge workers” are a vanishing species. The following is from The Stupidity Paradox:  The Power and Pitfalls of Functional Stupidity at Work:

“1962… was the year the management thinker Peter Drucker was asked by The New York Times to write about what the economy would look like in 1980. One big change he foresaw was the rise of the new type of employee he called ‘knowledge workers.’

“A few years ago, Stephen Sweet and Peter Meiksins decided they wanted to track the changing nature of work in the new knowledge intensive economy. These two US labour sociologists assembled large-scale statistical databases as well as research reports from hundreds of workplaces. What they found surprised them. A new economy full of knowledge workers was nowhere to be found.

“The researchers summarized their unexpected finding this way:  for every well-paid programmer working at a firm like Microsoft, there are three people flipping burgers at a restaurant like McDonald’s. It seems that in the ‘knowledge’ economy, low-level service jobs still dominate.

“A report by the US Bureau of Labor Statistics painted an even bleaker picture. One third of the US workforce was made up of three occupational groups:  office and administrative support, sales and related occupations, and food preparation and related work.”

And now — guess what? — those non-knowledge workers flipping your burgers might not be human. This is from “Robots Will Transform Fast Food” in this month’s The Atlantic:

“According to Michael Chui, a partner at the McKinsey Global Institute, many tasks in the food-service and accommodation industry are exactly the kind that are easily automated. Chui’s latest research estimates that 54 percent of the tasks workers perform in American restaurants and hotels could be automated using currently available technologies—making it the fourth-most-automatable sector in the U.S.

“Robots have arrived in American restaurants and hotels for the same reasons they first arrived on factory floors. The cost of machines, even sophisticated ones, has fallen significantly in recent years, dropping 40 percent since 2005, according to the Boston Consulting Group.

“‘We think we’ve hit the point where labor-wage rates are now making automation of those tasks make a lot more sense,’ Bob Wright, the chief operations officer of Wendy’s, said in a conference call with investors last February, referring to jobs that feature ‘repetitive production tasks.’

“The international chain CaliBurger, for example, will soon install Flippy, a robot that can flip 150 burgers an hour.”

That’s Flippy’s picture at the top of this post. Burger flippers are going the way of farmers — the Flippies of the world are busy eliminating one of the three main occupational groups in the U.S. And again, a lot of us aren’t aware this is going on.

Burger flipping may be particularly amenable to automation, but what about knowledge-based jobs that surely a robot couldn’t do — like, let’s say, writing this column, or managing a corporation, or even… practicing law?

More to come.

Check out Kevin’s latest LinkedIn Pulse article:  Leadership and Life Lessons From an Elite Athlete and a Dying Man.