The Future Of Work Is Physical

Chris Kalaboukis
4 min read · Apr 11, 2017

A few months ago, I attended an event on the future of conversational interfaces, and one of the speakers presented the state of the art in cloud robotics, a fairly new field. Before cloud robotics, robots were programmed and acted individually. With cloud robotics, the robots can learn from each other: since the intelligence powering them lives in the cloud, anything one connected robot learns, every other robot in the group learns immediately.
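To make that shared-brain idea concrete, here is a minimal sketch of the arrangement. The names here (CloudBrain, Robot, grasp_strategies) are purely hypothetical, invented for illustration; this is not anything the speaker actually showed.

```python
# A minimal sketch of the cloud-robotics idea: the "brain" lives in one
# shared place, and every robot reads from and writes to it.

class CloudBrain:
    """Shared knowledge store that all connected robots use (hypothetical)."""

    def __init__(self):
        self.grasp_strategies = {}  # object type -> learned grasp parameters

    def report(self, object_type, strategy):
        # One robot's finding becomes available to every other robot.
        self.grasp_strategies[object_type] = strategy

    def lookup(self, object_type):
        return self.grasp_strategies.get(object_type)


class Robot:
    """A robot with no local smarts: it defers to the shared cloud brain."""

    def __init__(self, name, brain):
        self.name = name
        self.brain = brain

    def learn_to_grasp(self, object_type, strategy):
        print(f"{self.name} learned how to grasp a {object_type}")
        self.brain.report(object_type, strategy)

    def grasp(self, object_type):
        strategy = self.brain.lookup(object_type)
        if strategy is None:
            print(f"{self.name} has no strategy for a {object_type} yet")
        else:
            print(f"{self.name} grasps the {object_type} using {strategy}")


brain = CloudBrain()
robot_a = Robot("robot-a", brain)
robot_b = Robot("robot-b", brain)

# robot-a works out how to pick up a wine glass...
robot_a.learn_to_grasp("wine glass", {"grip_force": 0.2, "approach": "stem"})

# ...and robot-b immediately benefits, having never handled one itself.
robot_b.grasp("wine glass")
```

In a real deployment the shared store would sit behind a cloud service rather than inside one process, but the effect is the same: one robot's finding becomes every robot's finding.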

One of the examples the researcher presented was the simple task of clearing a table. You know, something most of us humans can do once we hit kindergarten age. The ability to walk up to a table, recognize and grasp the glasses, plates, cutlery, placemats, and other objects on it, and place them in a bin without breaking them is something a human can easily be taught by the age of six. They can even put them in the dishwasher.

Not so with robots. He described how even a task this simple for a human would require immense processing power for a robot to complete. First, the robot would need to recognize everything on the table using computer vision. It would need to “see” everything on the table and work out where one item ends and another begins among the patterned plates, placemats, and centerpiece. In order to do this, it would need to…
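To give a feel for just that first step, here is a minimal sketch of “recognize everything on the table” using an off-the-shelf pretrained detector (torchvision’s COCO-trained Faster R-CNN, assuming torchvision 0.13 or later). This is only an illustration, not the system the researcher demonstrated, and "table.jpg" is a placeholder for a photo of the cluttered table.

```python
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights,
    fasterrcnn_resnet50_fpn,
)
from torchvision.transforms.functional import convert_image_dtype

# Load a detector pretrained on COCO, which happens to include tableware
# classes such as "cup", "fork", "knife", "spoon", and "bowl".
weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights)
model.eval()

# "table.jpg" stands in for an image captured by the robot's camera.
image = convert_image_dtype(read_image("table.jpg"), torch.float)

with torch.no_grad():
    prediction = model([image])[0]

# Keep only the confident detections and print what the robot "sees".
labels = weights.meta["categories"]
for label, score, box in zip(
    prediction["labels"], prediction["scores"], prediction["boxes"]
):
    if score.item() > 0.7:
        print(f"{labels[label.item()]}: {score.item():.2f} at {box.tolist()}")
```

Even this only gets the robot a list of labeled boxes; knowing how to grasp each item without breaking it is a separate, harder problem.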
