The present and future of AI

Finale Doshi-Velez on how AI is shaping our lives and how we can shape AI.

Finale Doshi-Velez, the John L. Loeb Professor of Engineering and Applied Sciences. (Photo courtesy of Eliza Grinnell/Harvard SEAS)

How has artificial intelligence changed and shaped our world over the last five years? How will AI continue to impact our lives in the coming years? Those were the questions addressed in the most recent report from the One Hundred Year Study on Artificial Intelligence (AI100), an ongoing project hosted at Stanford University that will study the status of AI technology and its impacts on the world over the next 100 years.

The 2021 report is the second in a series that will be released every five years until 2116. Titled “Gathering Strength, Gathering Storms,” the report explores the various ways AI is increasingly touching people’s lives in settings that range from movie recommendations and voice assistants to autonomous driving and automated medical diagnoses.

Barbara Grosz, the Higgins Research Professor of Natural Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), is a member of the standing committee overseeing the AI100 project, and Finale Doshi-Velez, Gordon McKay Professor of Computer Science, is part of the panel of interdisciplinary researchers who wrote this year’s report.

We spoke with Doshi-Velez about the report, what it says about the role AI is currently playing in our lives, and how it will change in the future.  

Q: Let's start with a snapshot: What is the current state of AI and its potential?

Doshi-Velez: Some of the biggest changes in the last five years have been how well AIs now perform in large data regimes on specific types of tasks.  We've seen [DeepMind’s] AlphaZero become the best Go player entirely through self-play, and everyday uses of AI such as grammar checks and autocomplete, automatic personal photo organization and search, and speech recognition become commonplace for large numbers of people.  

In terms of potential, I'm most excited about AIs that might augment and assist people.  They can be used to drive insights in drug discovery, help with decision making such as identifying a menu of likely treatment options for patients, and provide basic assistance, such as lane keeping while driving or text-to-speech based on images from a phone for the visually impaired.  In many situations, people and AIs have complementary strengths. I think we're getting closer to unlocking the potential of people and AI teams.

There's a much greater recognition that we should not be waiting for AI tools to become mainstream before making sure they are ethical.

Q: Over the course of 100 years, these reports will tell the story of AI and its evolving role in society. Even though there have only been two reports, what's the story so far?

There's actually a lot of change even in five years.  The first report is fairly rosy.  For example, it mentions how algorithmic risk assessments may mitigate the human biases of judges.  The second has a much more mixed view.  I think this comes from the fact that as AI tools have come into the mainstream — both in higher stakes and everyday settings — we are appropriately much less willing to tolerate flaws, especially discriminatory ones. There have also been questions of information and disinformation control as people get their news, social media, and entertainment via searches and rankings personalized to them. So, there's a much greater recognition that we should not be waiting for AI tools to become mainstream before making sure they are ethical.

Q: What is the responsibility of institutes of higher education in preparing students and the next generation of computer scientists for the future of AI and its impact on society?

First, I'll say that the need to understand the basics of AI and data science starts much earlier than higher education!  Children are being exposed to AIs as soon as they click on videos on YouTube or browse photo albums. They need to understand aspects of AI such as how their actions affect future recommendations.

But for computer science students in college, I think a key thing that future engineers need to realize is when to demand input and how to talk across disciplinary boundaries to get at often difficult-to-quantify notions of safety, equity, fairness, etc.  I'm really excited that Harvard has the Embedded EthiCS program to provide some of this education.  Of course, this is an addition to standard good engineering practices like building robust models, validating them, and so forth, which is all a bit harder with AI.

I think a key thing that future engineers need to realize is when to demand input and how to talk across disciplinary boundaries to get at often difficult-to-quantify notions of safety, equity, fairness, etc. 

Q: Your work focuses on machine learning with applications to healthcare, which is also an area of focus of this report. What is the state of AI in healthcare? 

A lot of AI in healthcare has been on the business end, used for optimizing billing, scheduling surgeries, that sort of thing.  When it comes to AI for better patient care, which is what we usually think about, there are few legal, regulatory, and financial incentives to do so, and many disincentives. Still, there's been slow but steady integration of AI-based tools, often in the form of risk scoring and alert systems.

In the near future, two applications that I'm really excited about are triage in low-resource settings — having AIs do initial reads of pathology slides, for example, if there are not enough pathologists, or get an initial check of whether a mole looks suspicious — and ways in which AIs can help identify promising treatment options for discussion with a clinician team and patient.

Q: Any predictions for the next report?

I'll be keen to see where currently nascent AI regulation initiatives have gotten to. Accountability is such a difficult question in AI; it's tricky to nurture both innovation and basic protections. Perhaps the most important innovation will be in approaches for AI accountability.

The WIRED Guide to Robots

Modern robots are not unlike toddlers: It’s hilarious to watch them fall over, but deep down we know that if we laugh too hard, they might develop a complex and grow up to start World War III. None of humanity’s creations inspires such a confusing mix of awe, admiration, and fear: We want robots to make our lives easier and safer, yet we can’t quite bring ourselves to trust them. We’re crafting them in our own image, yet we are terrified they’ll supplant us.

But that trepidation is no obstacle to the booming field of robotics. Robots have finally grown smart enough and physically capable enough to make their way out of factories and labs to walk and roll and even leap among us. The machines have arrived.

You may be worried a robot is going to steal your job, and we get that. This is capitalism, after all, and automation is inevitable. But you may be more likely to work alongside a robot in the near future than have one replace you. And even better news: You’re more likely to make friends with a robot than have one murder you. Hooray for the future!

The Complete History And Future of Robots

The definition of “robot” has been confusing from the very beginning. The word first appeared in 1921, in Karel Čapek’s play R.U.R., or Rossum's Universal Robots. “Robot” comes from the Czech for “forced labor.” These robots were robots more in spirit than form, though. They looked like humans, and instead of being made of metal, they were made of chemical batter. The robots were far more efficient than their human counterparts, and also way more murder-y—they ended up going on a killing spree.

R.U.R. would establish the trope of the Not-to-Be-Trusted Machine (e.g., Terminator, The Stepford Wives, Blade Runner, etc.) that continues to this day—which is not to say pop culture hasn’t embraced friendlier robots. Think Rosie from The Jetsons. (Ornery, sure, but certainly not homicidal.) And it doesn’t get much family-friendlier than Robin Williams as Bicentennial Man.

The real-world definition of “robot” is just as slippery as those fictional depictions. Ask 10 roboticists and you’ll get 10 answers—how autonomous does it need to be, for instance. But they do agree on some general guidelines: A robot is an intelligent, physically embodied machine. A robot can perform tasks autonomously to some degree. And a robot can sense and manipulate its environment.

Think of a simple drone that you pilot around. That’s no robot. But give a drone the power to take off and land on its own and sense objects and suddenly it’s a lot more robot-ish. It’s the intelligence and sensing and autonomy that’s key.
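
Those three guidelines — sense, decide, act — map onto what roboticists call the sense-plan-act loop. Below is a minimal sketch in Python; the sensor reading, the 1.0-meter safety margin, and the command names are invented for illustration, not taken from any particular robot:

```python
def sense(environment):
    # Return the distance (in meters) to the nearest obstacle.
    # On a real robot this would come from lidar, cameras, or bump sensors.
    return environment["obstacle_distance"]

def plan(distance, safe_distance=1.0):
    # Decide autonomously: keep moving if the path is clear, otherwise turn.
    return "forward" if distance > safe_distance else "turn"

def act(command):
    # Manipulate the environment -- here, just report the chosen motion.
    return f"executing: {command}"

# One pass through the loop: the machine senses, decides, and acts on its own.
world = {"obstacle_distance": 0.4}
print(act(plan(sense(world))))  # a 0.4 m reading is inside the safety margin
```

A piloted drone skips the `plan` step entirely — a human does the deciding — which is why it falls outside the definition.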

But it wasn’t until the 1960s that a company built something that started meeting those guidelines. That’s when SRI International in Silicon Valley developed Shakey, the first truly mobile and perceptive robot. This tower on wheels was well-named—awkward, slow, twitchy. Equipped with a camera and bump sensors, Shakey could navigate a complex environment. It wasn’t a particularly confident-looking machine, but it was the beginning of the robotic revolution.

Around the time Shakey was trembling about, robot arms were beginning to transform manufacturing. The first among them was Unimate, which welded auto bodies. Today, its descendants rule car factories, performing tedious, dangerous tasks with far more precision and speed than any human could muster. Even though they’re stuck in place, they still very much fit our definition of a robot—they’re intelligent machines that sense and manipulate their environment.

Robots, though, remained largely confined to factories and labs, where they either rolled about or were stuck in place lifting objects. Then, in the mid-1980s Honda started up a humanoid robotics program. It developed P3, which could walk pretty darn good and also wave and shake hands, much to the delight of a roomful of suits. The work would culminate in Asimo, the famed biped, which once tried to take out President Obama with a well-kicked soccer ball. (OK, perhaps it was more innocent than that.)

Today, advanced robots are popping up everywhere. For that you can thank three technologies in particular: sensors, actuators, and AI.

So, sensors. Machines that roll on sidewalks to deliver falafel can only navigate our world thanks in large part to the 2004 Darpa Grand Challenge, in which teams of roboticists cobbled together self-driving cars to race through the desert. Their secret? Lidar, which shoots out lasers to build a 3-D map of the world. The ensuing private-sector race to develop self-driving cars has dramatically driven down the price of lidar, to the point that engineers can create perceptive robots on the (relative) cheap.
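
At its core, a lidar scan is just a list of (angle, range) pairs, and the "3-D map" is what you get by converting those polar readings into coordinates. A minimal 2-D sketch of that conversion — the sample beam readings are invented:

```python
import math

def scan_to_points(scan):
    """Convert lidar readings of (angle_radians, range_meters)
    into (x, y) points in the robot's own frame of reference."""
    return [(r * math.cos(a), r * math.sin(a)) for a, r in scan]

# Three sample beams: straight ahead, 45 degrees left, 90 degrees left.
scan = [(0.0, 5.0), (math.pi / 4, 1.0), (math.pi / 2, 2.0)]
for x, y in scan_to_points(scan):
    print(f"x={x:.2f} m, y={y:.2f} m")
```

Real systems do this for hundreds of thousands of beams per second, in three dimensions, and then stitch successive scans together as the vehicle moves.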

Lidar is often combined with something called machine vision—2-D or 3-D cameras that allow the robot to build an even better picture of its world. You know how Facebook automatically recognizes your mug and tags you in pictures? Same principle with robots. Fancy algorithms allow them to pick out certain landmarks or objects.

Sensors are what keep robots from smashing into things. They’re why a robot mule of sorts can keep an eye on you, following you and schlepping your stuff around; machine vision also allows robots to scan cherry trees to determine where best to shake them, helping fill massive labor gaps in agriculture.

New technologies promise to let robots sense the world in ways that are far beyond humans’ capabilities. We’re talking about seeing around corners: At MIT, researchers have developed a system that watches the floor at the corner of, say, a hallway, and picks out subtle movements being reflected from the other side that the piddling human eye can’t see. Such technology could one day ensure that robots don’t crash into humans in labyrinthine buildings, and even allow self-driving cars to see occluded scenes.

Within each of these robots is the next secret ingredient: the actuator, which is a fancy word for the combo electric motor and gearbox that you’ll find in a robot’s joint. It’s this actuator that determines how strong a robot is and how smoothly or not smoothly it moves. Without actuators, robots would crumple like rag dolls. Even relatively simple robots like Roombas owe their existence to actuators. Self-driving cars, too, are loaded with the things.

Actuators are great for powering massive robot arms on a car assembly line, but a newish field, known as soft robotics, is devoted to creating actuators that operate on a whole new level. Unlike mule robots, soft robots are generally squishy, and use air or oil to get themselves moving. So for instance, one particular kind of robot muscle uses electrodes to squeeze a pouch of oil, expanding and contracting to tug on weights. Unlike with bulky traditional actuators, you could stack a bunch of these to magnify the strength: A robot named Kengoro, for instance, moves with 116 actuators that tug on cables, allowing the machine to do unsettlingly human maneuvers like pushups. It’s a far more natural-looking form of movement than what you’d get with traditional electric motors housed in the joints.

And then there’s Boston Dynamics, which created the Atlas humanoid robot for the Darpa Robotics Challenge in 2013. At first, university robotics research teams struggled to get the machine to tackle the basic tasks of the original 2013 challenge and the finals round in 2015, like turning valves and opening doors. But Boston Dynamics has since turned Atlas into a marvel that can do backflips, far outpacing other bipeds that still have a hard time walking. (Unlike the Terminator, though, it does not pack heat.) Boston Dynamics has also begun leasing a quadruped robot called Spot, which can recover in unsettling fashion when humans kick or tug on it. That kind of stability will be key if we want to build a world where we don’t spend all our time helping robots out of jams. And it’s all thanks to the humble actuator.

At the same time that robots like Atlas and Spot are getting more physically robust, they’re getting smarter, thanks to AI. Robotics seems to be reaching an inflection point, where processing power and artificial intelligence are combining to truly ensmarten the machines. And for the machines, just as in humans, the senses and intelligence are inseparable—if you pick up a fake apple and don’t realize it’s plastic before shoving it in your mouth, you’re not very smart.

This is a fascinating frontier in robotics (replicating the sense of touch, not eating fake apples). A company called SynTouch, for instance, has developed robotic fingertips that can detect a range of sensations, from temperature to coarseness. Another robot fingertip from Columbia University replicates touch with light, so in a sense it sees touch: It’s embedded with 32 photodiodes and 30 LEDs, overlaid with a skin of silicone. When that skin is deformed, the photodiodes detect how light from the LEDs changes to pinpoint where exactly you touched the fingertip, and how hard.
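
The basic trick — turning changes in light into a touch location — can be approximated with a weighted centroid: each photodiode "votes" for its own position, weighted by how much its reading changed. This toy sketch uses an invented four-diode layout and invented readings, not the Columbia sensor's actual geometry or algorithm:

```python
def locate_touch(diode_positions, baseline, pressed):
    """Estimate where the skin was touched as the centroid of photodiode
    positions, weighted by how much each diode's light level changed."""
    changes = [abs(p - b) for p, b in zip(pressed, baseline)]
    total = sum(changes)
    if total == 0:
        return None  # no deformation detected anywhere
    x = sum(pos[0] * c for pos, c in zip(diode_positions, changes)) / total
    y = sum(pos[1] * c for pos, c in zip(diode_positions, changes)) / total
    return (x, y)

# Four diodes at the corners of a 1 cm square; a press near the right edge
# dims the two right-hand diodes the most.
diodes   = [(0, 0), (1, 0), (0, 1), (1, 1)]
baseline = [100, 100, 100, 100]
pressed  = [95, 60, 95, 60]
print(locate_touch(diodes, baseline, pressed))  # lands near the right edge
```

The size of the total change also gives a rough proxy for how hard the press was.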

Far from the hulking dullards that lift car doors on automotive assembly lines, the robots of tomorrow will be very sensitive indeed.

Increasingly sophisticated machines may populate our world, but for robots to be really useful, they’ll have to become more self-sufficient. After all, it would be impossible to program a home robot with the instructions for gripping each and every object it ever might encounter. You want it to learn on its own, and that is where advances in artificial intelligence come in.

Take Brett. In a UC Berkeley lab, the humanoid robot has taught itself to conquer one of those children’s puzzles where you cram pegs into different shaped holes. It did so by trial and error through a process called reinforcement learning. No one told it how to get a square peg into a square hole, just that it needed to. So by making random movements and getting a digital reward (basically, yes, do that kind of thing again) each time it got closer to success, Brett learned something new on its own. The process is super slow, sure, but with time roboticists will hone the machines’ ability to teach themselves novel skills in novel environments, which is pivotal if we don’t want to get stuck babysitting them.
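
That trial-and-error loop can be caricatured in a few lines: pick actions (mostly the ones that have paid off, sometimes at random), collect a reward, and update a running average. This is a toy bandit-style sketch, not Berkeley's actual algorithm — the action names and reward values are invented:

```python
import random

def learn_by_trial_and_error(actions, reward_fn, episodes=500, epsilon=0.2, seed=0):
    """Reinforcement learning in miniature: try actions, track each one's
    average reward, and increasingly repeat whatever has paid off."""
    rng = random.Random(seed)
    totals = {a: 0.0 for a in actions}
    counts = {a: 0 for a in actions}

    def average(a):
        return totals[a] / counts[a] if counts[a] else 0.0

    for _ in range(episodes):
        if rng.random() < epsilon:
            action = rng.choice(actions)        # explore: a random movement
        else:
            action = max(actions, key=average)  # exploit: best average so far
        totals[action] += reward_fn(action)     # the digital "yes, do that again"
        counts[action] += 1
    return max(actions, key=average)

# Toy stand-in for the peg puzzle: moving toward the square hole pays off most.
rewards = {"toward_square_hole": 1.0, "toward_round_hole": 0.2, "flail": 0.0}
print(learn_by_trial_and_error(list(rewards), lambda a: rewards[a]))
```

Brett's real task has a continuous space of arm motions rather than three named actions, which is a large part of why the process is so slow in practice.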

Another tack here is to have a digital version of a robot train first in simulation, then port what it has learned to the physical robot in a lab. Over at Google, researchers used motion-capture videos of dogs to program a simulated dog, then used reinforcement learning to get a simulated four-legged robot to teach itself to make the same movements. That is, even though both have four legs, the robot’s body is mechanically distinct from a dog’s, so they move in distinct ways. But after many random movements, the simulated robot got enough rewards to match the simulated dog. Then the researchers transferred that knowledge to the real robot in the lab, and sure enough, the thing could walk—in fact, it walked even faster than the robot manufacturer’s default gait, though in fairness it was less stable.

They may be getting smarter day by day, but for the near future we are going to have to babysit the robots. As advanced as they’ve become, they still struggle to navigate our world. They plunge into fountains, for instance. So the solution, at least for the short term, is to set up call centers where robots can phone humans to help them out in a pinch. For example, Tug the hospital robot can call for help if it’s roaming the halls at night and there’s no human around to move a cart blocking its path. The operator would then teleoperate the robot around the obstruction.

Speaking of hospital robots. When the coronavirus crisis took hold in early 2020, a group of roboticists saw an opportunity: Robots are the perfect coworkers in a pandemic. Engineers must use the crisis, they argued in an editorial, to supercharge the development of medical robots, which never get sick and can do the dull, dirty, and dangerous work that puts human medical workers in harm’s way. Robot helpers could take patients’ temperatures and deliver drugs, for instance. This would free up human doctors and nurses to do what they do best: problem-solving and being empathetic with patients, skills that robots may never be able to replicate.

The rapidly developing relationship between humans and robots is so complex that it has spawned its own field, known as human-robot interaction. The overarching challenge is this: It’s easy enough to adapt robots to get along with humans—make them soft and give them a sense of touch—but it’s another issue entirely to train humans to get along with the machines. With Tug the hospital robot, for example, doctors and nurses learn to treat it like a grandparent—get the hell out of its way and help it get unstuck if you have to. We also have to manage our expectations: Robots like Atlas may seem advanced, but they’re far from the autonomous wonders you might think.

What humanity has done is essentially invented a new species, and now we’re maybe having a little buyers’ remorse. Namely, what if the robots steal all our jobs? Not even white-collar workers are safe from hyper-intelligent AI, after all.

A lot of smart people are thinking about the singularity, when the machines grow advanced enough to make humanity obsolete. That will result in a massive societal realignment and species-wide existential crisis. What will we do if we no longer have to work? How does income inequality look anything other than exponentially more dire as industries replace people with machines?

These seem like far-out problems, but now is the time to start pondering them. Which you might consider an upside to the killer-robot narrative that Hollywood has fed us all these years: The machines may be limited at the moment, but we as a society need to think seriously about how much power we want to cede. Take San Francisco, for instance, which is exploring the idea of a robot tax, which would force companies to pay up when they displace human workers.

I can’t sit here and promise you that the robots won’t one day turn us all into batteries, but the more realistic scenario is that, unlike in the world of R.U.R., humans and robots are poised to live in harmony—because it’s already happening. This is the idea of multiplicity, that you’re more likely to work alongside a robot than be replaced by one. If your car has adaptive cruise control, you’re already doing this, letting the robot handle the boring highway work while you take over for the complexity of city driving. The fact that the US economy ground to a standstill during the coronavirus pandemic made it abundantly clear that robots are nowhere near ready to replace humans en masse.

The machines promise to change virtually every aspect of human life, from health care to transportation to work. Should they help us drive? Absolutely. (They will, though, have to make the decision to sometimes kill , but the benefits of precision driving far outweigh the risks.) Should they replace nurses and cops? Maybe not—certain jobs may always require a human touch.

One thing is abundantly clear: The machines have arrived. Now we have to figure out how to handle the responsibility of having invented a whole new species.

If You Want a Robot to Learn Better, Be a Jerk to It A good way to make a robot learn is to do the work in simulation, so the machine doesn’t accidentally hurt itself. Even better, you can give it tough love by trying to knock objects out of its hand.

Spot the Robot Dog Trots Into the Big, Bad World Boston Dynamics' creation is starting to sniff out its role in the workforce: as a helpful canine that still sometimes needs you to hold its paw.

Finally, a Robot That Moves Kind of Like a Tongue Octopus arms and elephant trunks and human tongues move in a fascinating way, which has now inspired a fascinating new kind of robot.

Robots Are Fueling the Quiet Ascendance of the Electric Motor For something born over a century ago, the electric motor really hasn’t fully extended its wings. The problem? Fossil fuels are just too easy, and for the time being, cheap. But now, it’s actually robots, with their actuators, that are fueling the secret ascendance of the electric motor.

This Robot Fish Powers Itself With Fake Blood A robot lionfish uses a rudimentary vasculature and “blood” to both energize itself and hydraulically power its fins.

Inside the Amazon Warehouse Where Humans and Machines Become One In an Amazon sorting center, a swarm of robots works alongside humans. Here’s what that says about Amazon—and the future of work.

This guide was last updated on April 13, 2020.

The robot revolution has arrived

Machines now perform all sorts of tasks: They clean big stores, patrol borders, and help children with autism. But will they improve our lives?

If you’re like most people, you’ve probably never met a robot. But you will.

I met one on a windy, bright day last January, on the short-grass prairie near Colorado’s border with Kansas, in the company of a rail-thin 31-year-old from San Francisco named Noah Ready-Campbell. To the south, wind turbines stretched to the horizon in uneven ranks, like a silent army of gleaming three-armed giants. In front of me was a hole that would become the foundation for another one.

A Caterpillar 336 excavator was digging that hole—62 feet in diameter, with walls that slope up at a 34-degree angle, and a floor 10 feet deep and almost perfectly level. The Cat piled the dug-up earth on a spot where it wouldn’t get in the way; it would start a new pile when necessary. Every dip, dig, raise, turn, and drop of the 41-ton machine required firm control and well-tuned judgment. In North America, skilled excavator operators earn as much as $100,000 a year.

The seat in this excavator, though, was empty. The operator lay on the cab’s roof. It had no hands; three snaky black cables linked it directly to the excavator’s control system. It had no eyes or ears either, since it used lasers, GPS, video cameras, and gyroscope-like sensors that estimate an object’s orientation in space to watch over its work. Ready-Campbell, co-founder of a San Francisco company called Built Robotics, clomped across the coarse dirt, climbed onto the excavator, and lifted the lid of a fancy luggage carrier on the roof. Inside was his company’s product—a 200-pound device that does work that once required a human being.

“This is where the AI runs,” he said, pointing into the collection of circuit boards, wires, and metal boxes that made up the machine: Sensors to tell it where it is, cameras to let it see, controllers to send its commands to the excavator, communication devices that allow humans to monitor it, and the processor where its artificial intelligence, or AI, makes the decisions a human driver would. “These control signals get passed down to the computers that usually respond to the joysticks and pedals in the cab.”

Some roboticists believe people are more comfortable around robots that look like Curi, from the Socially Intelligent Machines Lab at Georgia Tech. If a robot seems too much like a human, they say, people’s acceptance can plummet into “the uncanny valley,” Masahiro Mori’s term for our feelings when a robot seems less like an enhanced machine and more like a disturbingly diminished human—or a corpse.

Others create machines that imitate humans in detail—like Harmony, an expressive talking head that attaches to a silicone and steel sex doll made by Abyss Creations in San Marcos, California.

When I was a child in the 20th century, hoping to encounter a robot when I grew up, I expected it would look and act human, like C-3PO from Star Wars. Instead, the real robots that were being set up in factories were very different. Today millions of these industrial machines bolt, weld, paint, and do other repetitive, assembly-line tasks. Often fenced off to keep the remaining human workers safe, they are what roboticist Andrea Thomaz at the University of Texas has called “mute and brute” behemoths.

Ready-Campbell’s device isn’t like that (although the Cat did have the words “CAUTION Robotic Equipment Moves Without Warning” stamped on its side). And of course it isn’t like C-3PO, either. It is, instead, a new kind of robot, far from human but still smart, adept, and mobile. Once rare, these devices—designed to “live” and work with people who have never met a robot—are migrating steadily into daily life.

Already, in 2020, robots take inventory and clean floors in Walmart. They shelve goods and fetch them for mailing in warehouses. They cut lettuce and pick apples and even raspberries. They help autistic children socialize and stroke victims regain the use of their limbs. They patrol borders and, in the case of Israel’s Harop drone, attack targets they deem hostile. Robots arrange flowers, perform religious ceremonies, do stand-up comedy, and serve as sexual partners.

Image: a faceless robot in a blue plastic apron packs fried chicken between two people.

And that was before the COVID-19 pandemic. Suddenly, replacing people with robots—an idea majorities of people around the world dislike, according to polls—looks medically wise, if not essential. (Read more about the skyrocketing demand for robots during the pandemic.)

Robots now deliver food in Milton Keynes, England, tote supplies in a Dallas hospital, disinfect patients’ rooms in China and Europe, and wander parks in Singapore, nagging pedestrians to maintain social distance.

This past spring, in the middle of a global economic collapse, the robotmakers I’d contacted in 2019, when I started working on this article, said they were getting more, not fewer, inquiries from potential customers. The pandemic has made more people realize that “automation is going to be a part of work,” Ready-Campbell told me in May. “The driver of that had been efficiency and productivity, but now there’s this other layer to it, which is health and safety.”

Even before the COVID crisis added its impetus, technological trends were accelerating the creation of robots that could fan out into our lives. Mechanical parts got lighter, cheaper, and sturdier. Electronics packed more computing power into smaller packages. Breakthroughs let engineers put powerful data-crunching tools into robot bodies. Better digital communications let them keep some robot “brains” in a computer elsewhere—or connect a simple robot to hundreds of others, letting them share a collective intelligence, like a beehive’s.

The workplace of the near future “will be an ecosystem of humans and robots working together to maximize efficiency,” said Ahti Heinla, co-founder of the Skype internet-call platform, now co-founder and chief technology officer of Starship Technologies, whose six-wheeled, self-driving delivery robots are rolling around Milton Keynes and other cities in Europe and the United States.

Robots take inventory and clean at big stores. They patrol borders, perform religious ceremonies, and help autistic children.

“We’ve gotten used to having machine intelligence that we can carry around with us,” said Manuela Veloso, an AI roboticist at Carnegie Mellon University in Pittsburgh. She held up her smartphone. “Now we’re going to have to get used to intelligence that has a body and moves around without us.”

Outside her office, her team’s “cobots”—collaborative robots—roam the halls, guiding visitors and delivering paperwork. They look like iPads on wheeled display stands. But they move about on their own, even taking elevators when they need to (they beep and flash a polite request to nearby humans to push the buttons for them).

“It’s an inevitable fact that we are going to have machines, artificial creatures, that will be a part of our daily life,” Veloso said. “When you start accepting robots around you, like a third species, along with pets and humans, you want to relate to them.”

We’re all going to have to figure out how. “People have to understand that this isn’t science fiction; it’s not something that’s going to happen 20 years from now,” Veloso said. “It’s started to happen.”

Vidal Pérez likes his new co-worker.

A robot on four legs walking down a street.

For seven years, working for Taylor Farms in Salinas, California, the 34-year-old used a seven-inch knife to cut lettuce. Bending at the waist, over and over, he would slice off a head of romaine or iceberg, shear off imperfect leaves, and toss it into a bin.

Since 2016, though, a robot has done the slicing. It’s a 28-foot-long, tractorlike harvester that moves steadily down the rows in a cloud of mist from the high-pressure water jet it uses to cut off a lettuce head every time its sensor detects one. The cut lettuce falls onto a sloped conveyor belt that carries it up to the harvester’s platform, where a team of about 20 workers sorts it into bins.

I met Pérez early one morning in June 2019, as he took a break from working a 22-acre field of romaine destined for Taylor’s fast-food and grocery store customers. A couple hundred yards away, another crew of lettuce cutters hunched over the plants, knives flashing as they worked in the old pre-robot style.

“This is better, because you get a lot more tired cutting lettuce with a knife than with this machine,” Pérez said. Riding on the robot, he rotates bins on the conveyor belt. Not all the workers prefer the new system, he said. “Some people want to stay with what they know. And some get bored with standing on the machine, since they’re used to moving all the time through a field.”

man in helmet and blue armor holding a power tool.

Some humans make use of wearable robots, or exoskeletons—combinations of sensors, computers, and motors. Arms with hooks attached, demonstrated by Sarcos Robotics engineer Fletcher Garrison, can lift up to 200 pounds—perhaps as an aid to airport luggage handlers.

a mechanical device encompassing a leg as a person walks on a treadmill

Yukio Taguchi, a 59-year-old paraplegic, wears HAL (Hybrid Assistive Limb), developed by Cyberdyne. Taguchi was a surfer and snowboarder for more than 30 years. After a spinal cord injury, he began to train with HAL two times a month at Tsukuba Robocare Center in Tsukuba, Japan.

robotic hand reaching for an apple.

Taylor Farms is one of the first major California agricultural companies to invest in robotic farming. “We’re going through a generational change … in agriculture,” Taylor Farms California president Mark Borman told me while we drove from the field in his pickup. As older workers leave, younger people aren’t choosing to fill the backbreaking jobs. A worldwide turn toward restrictions on cross-border migration, accelerated by COVID fears, hasn’t helped either. Farming around the world is being roboticized, Borman said. “We’re growing, our workforce is shrinking, so robots present an opportunity that’s good for both of us.”

It was a refrain I heard often last year from employers in farming and construction, manufacturing and health care: We’re giving tasks to robots because we can’t find people to do them.

At the wind farm site in Colorado, executives from the Mortenson Company, a Minneapolis-based construction firm that has hired Built’s robots since 2018, told me about a dire shortage of skilled workers in their industry. Built robots dug 21 foundations at the wind farm.

“Operators will say things like, Oh, hey, here come the job killers,” said Derek Smith, lean innovation manager for Mortenson. “But after they see that the robot takes away a lot of repetitive work and they still have plenty to do, that shifts pretty quickly.”

Once the robot excavator finished the dig we’d watched, a human on a bulldozer smoothed out the work and made ramps. “On this job, we have 229 foundations, and every one is basically the same spec,” Smith said. “We want to take away tasks that are repetitive. Then our operators concentrate on the tasks that involve more art.”

The pandemic’s tsunami of job losses hasn’t changed this outlook, robotmakers and users told me. “Even with a very high unemployment rate, you can’t just snap your fingers and fill jobs that need highly specialized skills, because we don’t have the people that have the training,” said Ben Wolff, chairman and CEO of Sarcos Robotics.

The Utah-based firm makes wearable robots called exoskeletons, which add the strength and precision of a machine to a worker’s movements. Delta Air Lines had just begun to test a Sarcos device with aircraft mechanics when the pandemic decimated air travel.

When I reached Wolff last spring, he was upbeat. “There is a short-term slowdown, but long term we expect more business,” he said.

Most employers are now looking to reduce contact among employees, and a device that lets one do the work of two might help. Since the pandemic began, Wolff told me, Sarcos has seen a jump in inquiries, some from companies he didn’t expect—for example, a major electronics firm, a pharmaceutical company, a meat-packer. The electronics- and pillmakers wanted to move heavy supplies with fewer people. The meat-packer was interested in spreading out its crowded workers.


The RBO Hand 3 uses compressed air in its silicone fingers. When the robot grasps an apple, a flower, or a human hand, the fingers naturally take the shape of the thing grasped. The physics of the situation allows versatility. This “soft robotics” approach to design can create cheaper, more versatile machines—which humans will like. “People are more comfortable with humanlike robot hands,” says roboticist Steffen Puhlmann.

In a world that now fears human contact, it won’t be easy to fill jobs caring for children or the elderly. Maja Matarić, a computer scientist and roboticist at the University of Southern California, develops “socially assistive robots”—machines that do social support rather than physical labor. One of her lab’s projects, for example, is a robot coach that leads an elderly user through an exercise routine, then encourages the human to go outside and walk.

“It says, ‘I can’t go outside, but why don’t you take a walk and tell me about it?’” Matarić told me. The robot is a white plastic head, torso, and arms that sits atop a rolling metal stand. But its sensors and software allow it to do some of what a human coach would do—for example, saying, “Bend your left forearm inward a little,” during exercise, or “Nice job!” afterward.

We walked around her lab—a warren of young people in cubicles, working on the technologies that might let a robot help keep the conversation going in a support group, for example, or respond in a way that makes a human feel like the machine is empathizing. I asked Matarić if people ever got creeped out at the thought of a machine watching over Grandpa.

“We’re not replacing caregivers,” she said. “We’re filling a gap. Grown-up children can’t be there with elderly parents. And the people who take care of other people in this country are underpaid and underappreciated. Until that changes, using robots is what we’ll have to do.”

Days after I visited Matarić’s lab, in a different world 20 miles due south of the university, hundreds of longshoremen were marching against robots. This was in the San Pedro section of Los Angeles, where container cranes tower over a landscape of warehouses and docks and modest residential streets. Generations of people in this tight-knit community have worked as longshoremen on the docks. The current generation didn’t like a plan to bring robot cargo handlers to the port’s largest terminal, even though such machines already are common in ports worldwide, including others in the Los Angeles area.

Tall black metal robot with a tool in its right hand.

Designers shape each robot according to its duties—and the needs of people it works with. The five-foot-nine-inch, 222-pound HRP-5P, developed at Japan’s National Institute of Advanced Industrial Science and Technology, has arms, legs, and a head and handles heavy loads in places such as construction sites and shipyards.

cylindrical white robot next to security guard.

In contrast, SQ-2, a security robot, is limbless and quietly unassuming at slightly more than four feet tall and 143 pounds. Its shape accommodates a 360-degree camera, a laser mapping system, and a computer that allows the robot to patrol on its own.

The dockworkers don’t expect the world to stop changing, said Joe Buscaino, who represents San Pedro on the Los Angeles City Council. San Pedro has gone through economic upheavals before, as fishing, canning, and shipbuilding boomed and busted. The problem with robots, Buscaino told me, is the speed with which employers are dropping them into workers’ lives.

“Years ago my dad saw that fishing was coming to an end, so he got a job in a bakery,” he said. “He was able to transition. But automation has the ability to take jobs overnight.”

Economists disagree a great deal about how much and how soon robots will affect future jobs. But many experts do agree on one thing: Some workers will have a much harder time adapting to robots.

“The evidence is fairly clear that we have many, many fewer blue-collar production jobs, assembly jobs, in industries that are adopting robots,” said Daron Acemoglu, an economist at MIT who has studied the effects of robots and other automation. “That doesn’t mean that future technology cannot create jobs. But the notion that we’re going to adopt automation technologies left, right, and center and also create lots of jobs is a purposefully misleading and incorrect fantasy.”

For all the optimism of investors, researchers, and entrepreneurs at start-ups, many people, such as Buscaino, worry about a future full of robots. They fear robots won’t take over just grunt work but the whole job, or at least the parts of it that are challenging, honorable—and well paid. (The latter process is prevalent enough that economists have a name for it: “de-skilling.”) People also fear robots will make work more stressful, perhaps even more dangerous.

robot and man working together.

Beth Gutelius, an urban planner and economist at the University of Illinois at Chicago who has researched the warehouse industry, told me about one warehouse she visited after it introduced robots. The robots were quickly delivering goods to humans for packing, and this was saving the workers a lot of walking back and forth. It also made them feel rushed and eliminated their chance to speak to one another.


Employers should consider that this kind of stress on employees “is not healthy, and it’s real, and it has impacts on the well-being of the workers,” said Dawn Castillo, an epidemiologist who manages occupational robot research at the National Institute for Occupational Safety and Health at the CDC. The Center for Occupational Robotics Research actually expects robot-related deaths “will likely increase over time,” according to its website. This is because there are more robots in more places with each passing year, but also because robots are working in new settings—where they meet people who don’t know what to expect and situations that their designers didn’t necessarily anticipate.

In San Pedro, after Buscaino won a city council vote to block the automation plan, the International Longshore and Warehouse Union negotiated what the union’s local chapter president called a “bittersweet” deal with Maersk, the Danish conglomerate that operates the container terminal. The dockworkers agreed to end the fight against robots in exchange for 450 mechanics getting “upskilled”: trained to work on the robots. Another 450 workers will be “reskilled”: trained to work at new, tech-friendly jobs.

How effective all that retraining will be, especially for middle-aged workers, remains to be seen, Buscaino said. A friend of his is a mechanic, whose background with cars and trucks leaves him well positioned to add robot maintenance to his skills. On the other hand, “my brother-in-law Dominic, who is a longshoreman today, he has no clue how to work on these robots. And he’s 56.”

The word “robot” is precisely 100 years old this year. It was coined by the Czech writer Karel Čapek, in a play that set the template for a century’s machine dreams and nightmares. The robots in that play, R.U.R., look and act like people, do all the work of humans—and wipe out the human race before the curtain falls.

man working on assembly line.

Robot partners come in many forms. At Fluidics Instruments in Eindhoven, Netherlands, an employee works with seven robot arms to assemble parts for oil and gas burners. Like traditional factory robots, these cobots are efficient and precise—able to produce a thousand nozzles an hour. But unlike older machines, they adapt quickly to changed specs or a new task.

Nurses in blue garb exiting an elevator with a white robot on wheels.

At Medical City Heart Hospital in Dallas, nurses work with Moxi, a robot built to learn and then perform tasks that take nurses away from patients, such as fetching supplies, delivering lab samples, and removing bags of soiled linens.

Ever since, imaginary robots from the Terminator to Japan’s Astro Boy to those Star Wars droids have had a huge influence on the plans of robotmakers. They also have shaped the public’s expectations of what robots are and what they can do.

Tensho Goto is a monk in the Rinzai school of Japanese Zen Buddhism. A vigorous, sturdy man with a cheerful manner, Goto met me in a spare, elegant room at Kodai-ji, the 17th-century temple in Kyoto where he is the chief steward. He seemed the picture of tradition. Yet he has been dreaming of robots for many years. It began decades ago, when he read about artificial minds and thought about reproducing the Buddha himself in silicone, plastic, and metal. With android versions of the sages, he said, Buddhists could “hear their words directly.”

Once he began collaborating with roboticists at Osaka University, though, robot reality dampened the robot dream. He learned that “as AI technology exists today, it is impossible to create human intelligence, let alone the personages of those who have attained enlightenment.” But like many roboticists, he didn’t give up, instead settling for what is possible today.

It stands at one end of a white-walled room on the temple grounds: a metal and silicone incarnation of Kannon, the deity who in Japanese Buddhism embodies compassion and mercy. For centuries, temples and shrines have used statues to attract people and get them to focus on Buddhist tenets. “Now, for the first time, a statue moves,” Goto said.

Mindar, as the robot is called, delivers prerecorded sermons in a forceful, not-quite-human female voice, gently gesticulating with her arms and turning her head from side to side to survey the audience. When her eyes fall on you, you feel something—but it isn’t her intelligence. There is no AI in Mindar. Goto hopes that will change over time, and that his moving statue will become capable of holding conversations with people and answering their religious questions.

man working on computer and five robots by his desk.

Robot soccer players have been taking the field since 1996 as part of an international league called RoboCup. Pitting the robot teams against one another in local, regional, and world championships is part fun and part research for roboticists around the world—even though humans will remain better at the game for decades to come. Here, Ishan Durugkar, a Ph.D. student in computer science at the University of Texas, prepares to put his school’s squad, the UT Austin Villa robot soccer team, through some drills.

Across the Pacific, in a nondescript house in a quiet suburb of San Diego, I met a man who seeks to provide a different kind of intimate experience with robots. Artist Matt McMullen is CEO of a company called Abyss Creations, which makes realistic, life-size sex dolls. McMullen leads a team of programmers, robotics specialists, special-effects experts, engineers, and artists who create robot companions that can appeal to hearts and minds as well as sex organs.

The company has made silicone-skin, steel-skeleton RealDolls for more than a decade. They go for about $4,000. But these days, for an additional $8,000, a customer receives a robotic head packed with electronics that power facial expressions, a voice, and an artificial intelligence that can be programmed via a smartphone app.

Like Siri or Alexa, the doll’s AI gets to know the user via the commands and questions he or she gives it. Below the neck, for now, the robot is still a doll—its arms and legs move only when the user manipulates them.

“We don’t today have a real artificial intelligence that resembles a human mind,” McMullen acknowledges. “But I think we will. I think that is inevitable.” He has no doubt the market is there. “I think there are people who can greatly benefit from robots that look like people,” he said.

We are getting attached already to ones that don’t look much like us at all.


Military units have held funerals for bomb-clearing robots blown up in action. Nurses in hospitals tease their robot colleagues. People in experiments have declined to rat out their robot teammates. As robots get more lifelike, people probably will invest them with even more affection and trust—too much, perhaps. The influence of fantasy robots leads people to think that today’s real machines are far more capable than they really are. Adapting well to their presence among us, experts told me, must start with realistic expectations.

Robots can be programmed or trained to do a well-defined task—dig a foundation, harvest lettuce—better or at least more consistently than humans can. But none can equal the human mind’s ability to do a lot of different tasks, especially unexpected ones. None has yet mastered common sense.

Today’s robots can’t match human hands either, said Chico Marks, a manufacturing engineering manager at Subaru’s auto plant in Lafayette, Indiana. The plant, like those of all carmakers, has used standard industrial robots for decades. It’s now gradually adding new types, for tasks such as moving self-guided carts that take parts around the plant. Marks showed me a combination of wires that would snake through a curving section near a future car’s rear door.

“Routing a wiring harness into a vehicle is not something that lends itself well to automation,” Marks said. “It requires a human brain and tactile feedback to know it’s in the right place and connected.”

Robot legs aren’t any better. In 1996 Veloso, the Carnegie Mellon AI roboticist, was part of a challenge to create robots that would play soccer better than humans by 2050. She was one of a group of researchers that year who created the RoboCup tournament to spur progress. Today RoboCup is a well-loved tradition for engineers on several continents, but no one, including Veloso, expects robots to play soccer better than humans anytime soon.

“It’s crazy how sophisticated our bodies are as machines,” she said. “We’re very good at handling gravity, dealing with forces as we walk, being pushed and keeping our balance. It’s going to be many years before a bipedal robot can walk as well as a person.”

A woman in a wheelchair typing on a laptop and smiling.

Through “tele-operation”—controlling a robot remotely via computer, smartphone, or even just eye movements—robots that navigate human spaces have expanded opportunities for people who are disabled. Though her mobility is limited by a neuromuscular disorder, Nozomi Murata, 34, works as a secretary in a Tokyo office via an OriHime robot created by OryLab. She tele-operates the robot from her home elsewhere in the city.

robot with white body holding a tray.

In the Minato City section of Tokyo, Murata’s tele-operated OriHime robot greets its inventor, Kentaro Yoshifuji, co-founder and CEO of OryLaboratory, which makes the robot. Yoshifuji created the device to alleviate loneliness by giving people a robotic means to connect directly with one another.

Robots are not going to be artificial people. We need to adapt to them, as Veloso said, as to a different species—and most robotmakers are working hard to engineer robots that make allowances for our human feelings. At the wind farm site, I learned that “bouncing” the toothed bucket of a big excavator against the ground is a sign of inexperience in a human operator. (The resulting jolt can actually injure the person in the cab.) To a robot excavator, the bounce makes little difference. Yet Built Robotics changed its robot’s algorithms to avoid bounce, because it looks bad to human professionals, and Mortenson wants workers of all species to get along.

It’s not just people who change as robots come on line. Taylor Farms, Borman told me, is working on a new light bulb–shaped lettuce with a longer stalk. It won’t taste or feel different; that shape is just easier for a robot to cut.

Bossa Nova Robotics makes a robot that roams thousands of stores in North America, including 500 Walmarts, scanning shelves to track inventory. The firm’s engineers asked themselves how friendly and approachable their robot should look. In the end it looks like a portable air conditioner with a six-and-a-half-foot-high periscope attached—no face or eyes.

“It’s a tool,” explained Sarjoun Skaff, Bossa Nova’s co-founder and chief technology officer. He and the other engineers wanted shoppers and workers to like the machine, but not too much. Too industrial or too strange, and shoppers would flee. Too friendly, and people would chat and play with it and slow down its work. In the long run, Skaff told me, robots and people will settle on “a common set of human-robot interaction conventions” that will enable humans to know “how to interpret what the robot is doing and how to behave around it.” But for now, robotmakers and ordinary people are feeling their way there.

Outside Tokyo, at the factory of Glory, a maker of money-handling devices, I stopped at a workstation where a nine-member team was assembling a coin-change machine. A plastic-sheathed sheet of paper displayed photos and names of three women, two men, and four robots.

The gleaming white, two-armed robots, which looked a little like the offspring of a refrigerator and WALL·E, were named after currencies. As I watched the team swiftly add parts to a coin changer, a robot named Dollar needed help a couple of times—once when it couldn’t peel the backing off a sticker. A red light near its station went on, and a human quickly left his own spot on the line to fix the problem.

Dollar has cameras on its “wrists,” but it also has a head with two camera eyes. “Conceptually it is meant to be a human-shaped robot,” explained manager Toshifumi Kobayashi. “So it has a head.”

That little accommodation didn’t immediately convince the real humans, said Shota Akasaka, 32, a boyish and smiling team leader. “I was really not sure that it would be able to do human work, that it would be able to screw in a screw,” he said. “When I saw the screw go in perfectly, I realized we were at the dawn of a new era.”

In a conference room northeast of Tokyo, I learned what it’s like to work with a robot in the closest way: by wearing it.

machine going between two rows of peach trees.

The exoskeleton, manufactured by a Japanese firm called Cyberdyne, consisted of two connected white tubes that curved across my back, a belt at my waist, and two straps on my thighs. It felt like being strapped into a parachute or an amusement park ride. I bent at the waist to lift a 40-pound container of water, which should have hurt my lower back. Instead, a computer in the tubes used the change in position to deduce that I was lifting an object, and motors kicked in to assist me. (More advanced users would have worn electrodes so the device could read the signals their brain was sending to their muscles.)

The robot was designed to assist only my back muscles; when I squatted and put the effort into my legs, as you’re supposed to, the device didn’t help much. Still, when it worked, it seemed like a magic trick—I felt the weight, then I didn’t.

Cyberdyne sees a large market in medical rehabilitation; it also makes a lower-limb exoskeleton that is being used to help people regain the use of their own legs. For many of its products, “another market will be for workers, so they can work longer and without risking injuries,” Cyberdyne spokesman Yudai Katami said.

Sarcos Robotics, the other maker of exoskeletons, is thinking along similar lines. One purpose of his devices, said CEO Wolff, was “allowing humans to be more productive so they can keep up with the machines that enable automation.”


Will we adapt to the machines more than they adapt to us? We might be asked to. Roboticists dream of machines that make life better, but companies sometimes have incentives to install robots that don’t. Robots, after all, don’t need paid vacations or medical insurance. Beyond that, many nations get a lot of tax revenue from labor, while encouraging automation with tax breaks and other incentives. Companies thus save money by cutting employees and adding robots.

“You get a lot of subsidies for installing equipment, especially digital equipment and robots,” Acemoglu said. “So that encourages firms to go for machines rather than humans, even if machines are no better.” Robots also are just more exciting than mere humans.

There is “a particular zeitgeist among many technologists and managers that humans are troublesome,” Acemoglu said. There’s this feeling of, “You don’t need them. They make mistakes. They make demands. Let’s go for automation.”

After Noah Ready-Campbell decided to go into construction robots, his father, Scott Campbell, spent more than three hours on a car ride gently asking him if this was really such a good idea. The elder Campbell, who used to work in construction himself, now represents the town of St. Johnsbury in Vermont’s general assembly. He quickly came to believe in his son’s work, but his constituents worry about robots, he told me, and it’s not all about economics. Perhaps it will be possible to give all our work to robots someday—even the work of religious ministry, even “sex work.” But Campbell’s constituents want to keep something for humanity: the work that makes humans feel valued.

robot with human face and monk praying together.

Mindar—a robotic incarnation of Kannon, the deity of mercy and compassion in Japanese Buddhism—faces Tensho Goto, a monk at the Kodai-ji temple in Kyoto, Japan. Mindar, created by a team led by roboticist Hiroshi Ishiguro of Osaka University, can recite Buddhist teachings.

“What is important about work is not what you get for it but what you become by doing it,” Campbell said. “I feel like it’s profoundly true. That’s the most important thing about doing a job.”

A century after they were first dreamed up for the stage, real robots are making life easier and safer for some people. They’re also making it a bit more robot-like. For many companies, that’s part of the attraction.

“Right now every construction site is different, and every operator is an artist,” said Gaurav Kikani, Built Robotics’ vice president for strategy, operations, and finance. Operators like the variety; employers not so much. They save time and money when they know that a task is done the same way every time and doesn’t depend on an individual’s decisions. Though construction sites will always need human adaptability and ingenuity for some tasks, “with robots we see an opportunity to standardize practices and create efficiencies for the tasks where robots are appropriate,” Kikani said.

In the moments when someone has to decide whose preferences ought to prevail, technology itself has no answers. However far they advance, there’s one task that robots won’t help us solve: Deciding how, when, and where to use them.

Copyright © 1996-2015 National Geographic Society Copyright © 2015-2024 National Geographic Partners, LLC. All rights reserved

How artificial intelligence is transforming the world

Darrell M. West (Senior Fellow, Center for Technology Innovation; Douglas Dillon Chair in Governmental Studies) and John R. Allen

April 24, 2018

Artificial intelligence (AI) is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decision making—and already it is transforming every walk of life. In this report, Darrell West and John Allen discuss AI’s application across a variety of sectors, address issues in its development, and offer recommendations for getting the most out of AI while still protecting important human values.

Table of Contents
I. Qualities of artificial intelligence
II. Applications in diverse sectors
III. Policy, regulatory, and ethical issues
IV. Recommendations
V. Conclusion


Most people are not very familiar with the concept of artificial intelligence (AI). As an illustration, when 1,500 senior business leaders in the United States were asked about AI in 2017, only 17 percent said they were familiar with it. 1 A number of them were not sure what it was or how it would affect their particular companies. They understood there was considerable potential for altering business processes, but were not clear how AI could be deployed within their own organizations.

Despite this widespread lack of familiarity, AI is a technology that is transforming every walk of life. It is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decisionmaking. Our hope through this comprehensive overview is to explain AI to an audience of policymakers, opinion leaders, and interested observers, and demonstrate how AI already is altering the world and raising important questions for society, the economy, and governance.

In this paper, we discuss novel applications in finance, national security, health care, criminal justice, transportation, and smart cities, and address issues such as data access problems, algorithmic bias, AI ethics and transparency, and legal liability for AI decisions. We contrast the regulatory approaches of the U.S. and European Union, and close by making a number of recommendations for getting the most out of AI while still protecting important human values. 2

In order to maximize AI benefits, we recommend nine steps for going forward:

  • Encourage greater data access for researchers without compromising users’ personal privacy,
  • invest more government funding in unclassified AI research,
  • promote new models of digital education and AI workforce development so employees have the skills needed in the 21st-century economy,
  • create a federal AI advisory committee to make policy recommendations,
  • engage with state and local officials so they enact effective policies,
  • regulate broad AI principles rather than specific algorithms,
  • take bias complaints seriously so AI does not replicate historic injustice, unfairness, or discrimination in data or algorithms,
  • maintain mechanisms for human oversight and control, and
  • penalize malicious AI behavior and promote cybersecurity.

Qualities of artificial intelligence

Although there is no uniformly agreed upon definition, AI generally is thought to refer to “machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment and intention.” 3  According to researchers Shubhendu and Vijay, these software systems “make decisions which normally require [a] human level of expertise” and help people anticipate problems or deal with issues as they come up. 4 As such, they operate in an intentional, intelligent, and adaptive manner.

Intentionality

Artificial intelligence algorithms are designed to make decisions, often using real-time data. They are unlike passive machines that are capable only of mechanical or predetermined responses. Using sensors, digital data, or remote inputs, they combine information from a variety of different sources, analyze the material instantly, and act on the insights derived from those data. With massive improvements in storage systems, processing speeds, and analytic techniques, they are capable of tremendous sophistication in analysis and decisionmaking.

Artificial intelligence is already altering the world and raising important questions for society, the economy, and governance.

Intelligence

AI generally is undertaken in conjunction with machine learning and data analytics. 5 Machine learning takes data and looks for underlying trends. If it spots something that is relevant for a practical problem, software designers can take that knowledge and use it to analyze specific issues. All that is required are data that are sufficiently robust that algorithms can discern useful patterns. Data can come in the form of digital information, satellite imagery, visual information, text, or unstructured data.
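"Looking for underlying trends" in data can be illustrated with the simplest possible case: fitting a straight line by ordinary least squares. This is a toy sketch, not what production machine-learning systems do, but the core idea — infer a pattern from data, then apply it to new cases — is the same.

```python
def fit_trend(xs, ys):
    """Ordinary least squares fit of y = slope * x + intercept:
    the simplest form of discerning an underlying trend in data."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance of x and y, and variance of x, determine the slope.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept
```

Once the trend is fitted, the same coefficients can score data the system has never seen — the step the report describes as applying discovered knowledge to specific issues.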

Adaptability

AI systems have the ability to learn and adapt as they make decisions. In the transportation area, for example, semi-autonomous vehicles have tools that let drivers and vehicles know about upcoming congestion, potholes, highway construction, or other possible traffic impediments. Vehicles can take advantage of the experience of other vehicles on the road, without human involvement, and the entire corpus of their achieved “experience” is immediately and fully transferable to other similarly configured vehicles. Their advanced algorithms, sensors, and cameras incorporate experience in current operations, and use dashboards and visual displays to present information in real time so human drivers are able to make sense of ongoing traffic and vehicular conditions. And in the case of fully autonomous vehicles, advanced systems can completely control the car or truck, and make all the navigational decisions.


Applications in diverse sectors

AI is not a futuristic vision, but rather something that is here today and being integrated with and deployed into a variety of sectors. This includes fields such as finance, national security, health care, criminal justice, transportation, and smart cities. There are numerous examples where AI already is making an impact on the world and augmenting human capabilities in significant ways. 6

One of the reasons for the growing role of AI is the tremendous opportunities for economic development that it presents. A project undertaken by PriceWaterhouseCoopers estimated that “artificial intelligence technologies could increase global GDP by $15.7 trillion, a full 14%, by 2030.” 7 That includes advances of $7 trillion in China, $3.7 trillion in North America, $1.8 trillion in Northern Europe, $1.2 trillion for Africa and Oceania, $0.9 trillion in the rest of Asia outside of China, $0.7 trillion in Southern Europe, and $0.5 trillion in Latin America. China is making rapid strides because it has set a national goal of investing $150 billion in AI and becoming the global leader in this area by 2030.

Meanwhile, a McKinsey Global Institute study of China found that “AI-led automation can give the Chinese economy a productivity injection that would add 0.8 to 1.4 percentage points to GDP growth annually, depending on the speed of adoption.” 8 Although its authors found that China currently lags the United States and the United Kingdom in AI deployment, the sheer size of its AI market gives that country tremendous opportunities for pilot testing and future development.

Finance

Investments in financial AI in the United States tripled between 2013 and 2014 to a total of $12.2 billion. 9 According to observers in that sector, “Decisions about loans are now being made by software that can take into account a variety of finely parsed data about a borrower, rather than just a credit score and a background check.” 10 In addition, there are so-called robo-advisers that “create personalized investment portfolios, obviating the need for stockbrokers and financial advisers.” 11 These advances are designed to take the emotion out of investing and undertake decisions based on analytical considerations, and make these choices in a matter of minutes.

A prominent example of this is taking place in stock exchanges, where high-frequency trading by machines has replaced much of human decisionmaking. People submit buy and sell orders, and computers match them in the blink of an eye without human intervention. Machines can spot trading inefficiencies or market differentials on a very small scale and execute trades that make money according to investor instructions. 12 Powered in some places by advanced quantum computing, these tools have much greater capacities for storing information because of their emphasis not on a zero or a one, but on “quantum bits” that can store multiple values in each location. 13 That dramatically increases storage capacity and decreases processing times.
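The matching of buy and sell orders described above can be sketched as a minimal price-priority loop: pair the highest bid with the lowest ask while the bid meets or exceeds the ask. Real exchange matching engines also track order size, time priority, and order types; this sketch keeps only the core idea.

```python
def match_orders(buys, sells):
    """Minimal price-priority matching sketch: repeatedly pair the
    highest bid with the lowest ask while a trade is possible."""
    buys = sorted(buys, reverse=True)   # highest bid first
    sells = sorted(sells)               # lowest ask first
    trades = []
    while buys and sells and buys[0] >= sells[0]:
        # A trade occurs whenever the best bid covers the best ask.
        trades.append((buys.pop(0), sells.pop(0)))
    return trades
```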

Fraud detection represents another way AI is helpful in financial systems. It sometimes is difficult to discern fraudulent activities in large organizations, but AI can identify abnormalities, outliers, or deviant cases requiring additional investigation. That helps managers find problems early in the cycle, before they reach dangerous levels. 14
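The outlier-flagging idea behind AI fraud detection can be illustrated with a simple z-score filter: transactions far from the mean, measured in standard deviations, are flagged for investigation. Production systems use far richer features and models, so this is illustrative only.

```python
def flag_outliers(amounts, threshold=3.0):
    """Flag transaction indices whose z-score exceeds the threshold --
    a minimal stand-in for AI-based anomaly detection."""
    n = len(amounts)
    mean = sum(amounts) / n
    variance = sum((a - mean) ** 2 for a in amounts) / n
    std = variance ** 0.5
    # Flag any amount more than `threshold` standard deviations from the mean.
    return [i for i, a in enumerate(amounts)
            if std > 0 and abs(a - mean) / std > threshold]
```

A batch of routine $100 transactions with a single $10,000 charge mixed in would flag only the large charge, surfacing it early in the cycle for a human investigator.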

National security

AI plays a substantial role in national defense. Through its Project Maven, the American military is deploying AI “to sift through the massive troves of data and video captured by surveillance and then alert human analysts of patterns or when there is abnormal or suspicious activity.” 15 According to Deputy Secretary of Defense Patrick Shanahan, the goal of emerging technologies in this area is “to meet our warfighters’ needs and to increase [the] speed and agility [of] technology development and procurement.” 16

Artificial intelligence will accelerate the traditional process of warfare so rapidly that a new term has been coined: hyperwar.

The big data analytics associated with AI will profoundly affect intelligence analysis, as massive amounts of data are sifted in near real time—if not eventually in real time—thereby providing commanders and their staffs a level of intelligence analysis and productivity heretofore unseen. Command and control will similarly be affected as human commanders delegate certain routine, and in special circumstances, key decisions to AI platforms, reducing dramatically the time associated with the decision and subsequent action. In the end, warfare is a time competitive process, where the side able to decide the fastest and move most quickly to execution will generally prevail. Indeed, artificially intelligent intelligence systems, tied to AI-assisted command and control systems, can move decision support and decisionmaking to a speed vastly superior to the speeds of the traditional means of waging war. So fast will be this process, especially if coupled to automatic decisions to launch artificially intelligent autonomous weapons systems capable of lethal outcomes, that a new term has been coined specifically to embrace the speed at which war will be waged: hyperwar.

While the ethical and legal debate is raging over whether America will ever wage war with artificially intelligent autonomous lethal systems, the Chinese and Russians are not nearly so mired in this debate, and we should anticipate our need to defend against these systems operating at hyperwar speeds. The challenge in the West of where to position “humans in the loop” in a hyperwar scenario will ultimately dictate the West’s capacity to be competitive in this new form of conflict. 17

Just as AI will profoundly affect the speed of warfare, the proliferation of zero day or zero second cyber threats as well as polymorphic malware will challenge even the most sophisticated signature-based cyber protection. This forces significant improvement to existing cyber defenses. Increasingly, vulnerable systems are migrating, and will need to shift to a layered approach to cybersecurity with cloud-based, cognitive AI platforms. This approach moves the community toward a “thinking” defensive capability that can defend networks through constant training on known threats. This capability includes DNA-level analysis of heretofore unknown code, with the possibility of recognizing and stopping inbound malicious code by recognizing a string component of the file. This is how certain key U.S.-based systems stopped the debilitating “WannaCry” and “Petya” viruses.

Preparing for hyperwar and defending critical cyber networks must become a high priority because China, Russia, North Korea, and other countries are putting substantial resources into AI. In 2017, China’s State Council issued a plan for the country to “build a domestic industry worth almost $150 billion” by 2030. 18 As an example of the possibilities, the Chinese search firm Baidu has pioneered a facial recognition application that finds missing people. In addition, cities such as Shenzhen are providing up to $1 million to support AI labs. That country hopes AI will provide security, combat terrorism, and improve speech recognition programs. 19 The dual-use nature of many AI algorithms will mean AI research focused on one sector of society can be rapidly modified for use in the security sector as well. 20

Health care

AI tools are helping designers improve computational sophistication in health care. For example, Merantix is a German company that applies deep learning to medical issues. It has an application in medical imaging that “detects lymph nodes in the human body in Computer Tomography (CT) images.” 21 According to its developers, the key is labeling the nodes and identifying small lesions or growths that could be problematic. Humans can do this, but radiologists charge $100 per hour and may be able to carefully read only four images an hour. If there were 10,000 images, the cost of this process would be $250,000, which is prohibitively expensive if done by humans.

What deep learning can do in this situation is train computers on data sets to learn what a normal-looking versus an irregular-appearing lymph node is. After doing that through imaging exercises and honing the accuracy of the labeling, radiological imaging specialists can apply this knowledge to actual patients and determine the extent to which someone is at risk of cancerous lymph nodes. Since only a few are likely to test positive, it is a matter of identifying the unhealthy versus healthy node.
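The train-on-labeled-examples, then-classify-new-cases workflow described above can be sketched with a toy single-feature logistic classifier. This is a heavily simplified stand-in: real medical imaging systems use deep convolutional networks over pixels, and the single "node measurement" feature here is purely hypothetical.

```python
import math


def train_classifier(features, labels, lr=0.1, epochs=500):
    """Toy logistic classifier trained by stochastic gradient descent.
    Each feature is a single hypothetical node measurement; each label
    is 1 (irregular-appearing) or 0 (normal-looking)."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted probability
            # Gradient step toward the correct label.
            w += lr * (y - p) * x
            b += lr * (y - p)
    return w, b


def predict(w, b, x):
    """Classify a new measurement with the trained weights."""
    return 1 if 1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5 else 0
```

After training on labeled examples, the model classifies measurements it has never seen — the same pattern of honing accuracy on labeled images and then applying the result to actual patients.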

AI has been applied to congestive heart failure as well, an illness that afflicts 10 percent of senior citizens and costs $35 billion each year in the United States. AI tools are helpful because they “predict in advance potential challenges ahead and allocate resources to patient education, sensing, and proactive interventions that keep patients out of the hospital.” 22

Criminal justice

AI is being deployed in the criminal justice area. The city of Chicago has developed an AI-driven “Strategic Subject List” that analyzes people who have been arrested for their risk of becoming future perpetrators. It ranks 400,000 people on a scale of 0 to 500, using items such as age, criminal activity, victimization, drug arrest records, and gang affiliation. In looking at the data, analysts found that youth is a strong predictor of violence, being a shooting victim is associated with becoming a future perpetrator, gang affiliation has little predictive value, and drug arrests are not significantly associated with future criminal activity. 23
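A risk score of the kind described can be sketched as a weighted sum of feature indicators, clipped to the 0–500 scale. The actual Strategic Subject List model is not public; the feature names and weights below are entirely hypothetical and for illustration only.

```python
def risk_score(person, weights, scale=500):
    """Hypothetical weighted-sum risk score clipped to [0, scale].
    `person` maps feature names to values; `weights` maps feature
    names to (illustrative, made-up) point values."""
    raw = sum(weights.get(feature, 0) * value
              for feature, value in person.items())
    return max(0, min(scale, round(raw)))
```

The debate in the surrounding text is precisely about which features belong in such a model and how heavily each should count — choices this sketch makes visible as the `weights` table.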

Judicial experts claim AI programs reduce human bias in law enforcement and lead to a fairer sentencing system. R Street Institute Associate Caleb Watney writes:

Empirically grounded questions of predictive risk analysis play to the strengths of machine learning, automated reasoning and other forms of AI. One machine-learning policy simulation concluded that such programs could be used to cut crime up to 24.8 percent with no change in jailing rates, or reduce jail populations by up to 42 percent with no increase in crime rates. 24

However, critics worry that AI algorithms represent “a secret system to punish citizens for crimes they haven’t yet committed. The risk scores have been used numerous times to guide large-scale roundups.” 25 The fear is that such tools target people of color unfairly and have not helped Chicago reduce the murder wave that has plagued it in recent years.

Despite these concerns, other countries are moving ahead with rapid deployment in this area. In China, for example, companies already have “considerable resources and access to voices, faces and other biometric data in vast quantities, which would help them develop their technologies.” 26 New technologies make it possible to match images and voices with other types of information, and to use AI on these combined data sets to improve law enforcement and national security. Through its “Sharp Eyes” program, Chinese law enforcement is matching video images, social media activity, online purchases, travel records, and personal identity into a “police cloud.” This integrated database enables authorities to keep track of criminals, potential law-breakers, and terrorists. 27 Put differently, China has become the world’s leading AI-powered surveillance state.

Transportation

Transportation represents an area where AI and machine learning are producing major innovations. Research by Cameron Kerry and Jack Karsten of the Brookings Institution has found that over $80 billion was invested in autonomous vehicle technology between August 2014 and June 2017. Those investments include applications both for autonomous driving and the core technologies vital to that sector. 28

Autonomous vehicles—cars, trucks, buses, and drone delivery systems—use advanced technological capabilities. Those features include automated vehicle guidance and braking, lane-changing systems, the use of cameras and sensors for collision avoidance, the use of AI to analyze information in real time, and the use of high-performance computing and deep learning systems to adapt to new circumstances through detailed maps. 29

Light detection and ranging systems (LIDARs) and AI are key to navigation and collision avoidance. LIDAR systems combine light and radar instruments. Mounted on top of vehicles, they image the full 360-degree environment, using radar and light beams to measure the speed and distance of surrounding objects. Along with sensors placed on the front, sides, and back of the vehicle, these instruments provide information that keeps fast-moving cars and trucks in their own lane, helps them avoid other vehicles, applies brakes and steering when needed, and does so instantly so as to avoid accidents.
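The distance measurement itself rests on simple time-of-flight physics: the unit times a light pulse's round trip to an object and back, and the one-way distance is half the round-trip time multiplied by the speed of light.

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second


def lidar_distance(round_trip_seconds):
    """One-way distance to an object from a pulse's round-trip time.
    The pulse travels out and back, so distance is half of c * t."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0
```

A pulse returning after 200 nanoseconds implies an object roughly 30 meters away, which is why the system can react to nearby vehicles essentially instantly.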

Advanced software enables cars to learn from the experiences of other vehicles on the road and adjust their guidance systems as weather, driving, or road conditions change. This means that software is the key—not the physical car or truck itself.

Since these cameras and sensors compile a huge amount of information and need to process it instantly to avoid the car in the next lane, autonomous vehicles require high-performance computing, advanced algorithms, and deep learning systems to adapt to new scenarios. This means that software is the key, not the physical car or truck itself. 30 Advanced software enables cars to learn from the experiences of other vehicles on the road and adjust their guidance systems as weather, driving, or road conditions change. 31

Ride-sharing companies are very interested in autonomous vehicles. They see advantages in terms of customer service and labor productivity. All of the major ride-sharing companies are exploring driverless cars. The surge of car-sharing and taxi services—such as Uber and Lyft in the United States, Daimler’s Mytaxi and Hailo service in Great Britain, and Didi Chuxing in China—demonstrate the opportunities of this transportation option. Uber recently signed an agreement to purchase 24,000 autonomous cars from Volvo for its ride-sharing service. 32

However, the ride-sharing firm suffered a setback in March 2018 when one of its autonomous vehicles in Arizona hit and killed a pedestrian. Uber and several auto manufacturers immediately suspended testing and launched investigations into what went wrong and how the fatality could have occurred. 33 Both industry and consumers want reassurance that the technology is safe and able to deliver on its stated promises. Unless there are persuasive answers, this accident could slow AI advancements in the transportation sector.

Smart cities

Metropolitan governments are using AI to improve urban service delivery. For example, according to Kevin Desouza, Rashmi Krishnamurthy, and Gregory Dawson:

The Cincinnati Fire Department is using data analytics to optimize medical emergency responses. The new analytics system recommends to the dispatcher an appropriate response to a medical emergency call—whether a patient can be treated on-site or needs to be taken to the hospital—by taking into account several factors, such as the type of call, location, weather, and similar calls. 34

Since it fields 80,000 requests each year, Cincinnati officials are deploying this technology to prioritize responses and determine the best ways to handle emergencies. They see AI as a way to deal with large volumes of data and figure out efficient ways of responding to public requests. Rather than address service issues in an ad hoc manner, authorities are trying to be proactive in how they provide urban services.

Cincinnati is not alone. A number of metropolitan areas are adopting smart city applications that use AI to improve service delivery, environmental planning, resource management, energy utilization, and crime prevention, among other things. For its smart cities index, the magazine Fast Company ranked American locales and found Seattle, Boston, San Francisco, Washington, D.C., and New York City as the top adopters. Seattle, for example, has embraced sustainability and is using AI to manage energy usage and resource management. Boston has launched a “City Hall To Go” that makes sure underserved communities receive needed public services. It also has deployed “cameras and inductive loops to manage traffic and acoustic sensors to identify gun shots.” San Francisco has certified 203 buildings as meeting LEED sustainability standards. 35

Through these and other means, metropolitan areas are leading the country in the deployment of AI solutions. Indeed, according to a National League of Cities report, 66 percent of American cities are investing in smart city technology. Among the top applications noted in the report are “smart meters for utilities, intelligent traffic signals, e-governance applications, Wi-Fi kiosks, and radio frequency identification sensors in pavement.” 36

Policy, regulatory, and ethical issues

These examples from a variety of sectors demonstrate how AI is transforming many walks of human existence. The increasing penetration of AI and autonomous devices into many aspects of life is altering basic operations and decisionmaking within organizations, and improving efficiency and response times.

At the same time, though, these developments raise important policy, regulatory, and ethical issues. For example, how should we promote data access? How do we guard against biased or unfair data used in algorithms? What types of ethical principles are introduced through software programming, and how transparent should designers be about their choices? What about questions of legal liability in cases where algorithms cause harm? 37

The increasing penetration of AI into many aspects of life is altering decisionmaking within organizations and improving efficiency. At the same time, though, these developments raise important policy, regulatory, and ethical issues.

Data access problems

The key to getting the most out of AI is having a “data-friendly ecosystem with unified standards and cross-platform sharing.” AI depends on data that can be analyzed in real time and brought to bear on concrete problems. Having data that are “accessible for exploration” in the research community is a prerequisite for successful AI development. 38

According to a McKinsey Global Institute study, nations that promote open data sources and data sharing are the ones most likely to see AI advances. In this regard, the United States has a substantial advantage over China. Global ratings on data openness show that the U.S. ranks eighth overall in the world, compared to 93rd for China. 39

But right now, the United States does not have a coherent national data strategy. There are few protocols for promoting research access or platforms that make it possible to gain new insights from proprietary data. It is not always clear who owns data or how much belongs in the public sphere. These uncertainties limit the innovation economy and act as a drag on academic research. In the following section, we outline ways to improve data access for researchers.

Biases in data and algorithms

In some instances, certain AI systems are thought to have enabled discriminatory or biased practices. 40 For example, Airbnb has been accused of having homeowners on its platform who discriminate against racial minorities. A research project undertaken by the Harvard Business School found that “Airbnb users with distinctly African American names were roughly 16 percent less likely to be accepted as guests than those with distinctly white names.” 41

Racial issues also come up with facial recognition software. Most such systems operate by comparing a person’s face to a range of faces in a large database. As pointed out by Joy Buolamwini of the Algorithmic Justice League, “If your facial recognition data contains mostly Caucasian faces, that’s what your program will learn to recognize.” 42 Unless the databases have access to diverse data, these programs perform poorly when attempting to recognize African-American or Asian-American features.
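Buolamwini's observation is typically surfaced by the basic audit step of computing accuracy separately per demographic group: an aggregate accuracy figure can hide large per-group gaps created by unbalanced training data. A minimal sketch of that audit, over hypothetical (group, was-the-match-correct) records:

```python
def accuracy_by_group(records):
    """Recognition accuracy broken out per demographic group.
    `records` is a list of (group, is_correct) pairs."""
    totals, correct = {}, {}
    for group, is_correct in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (1 if is_correct else 0)
    return {g: correct[g] / totals[g] for g in totals}
```

A system that is 90 percent accurate on one group and 50 percent on another might still report a healthy overall accuracy, which is why disaggregated evaluation matters.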

Many historical data sets reflect traditional values, which may or may not represent the preferences wanted in a current system. As Buolamwini notes, such an approach risks repeating inequities of the past:

The rise of automation and the increased reliance on algorithms for high-stakes decisions such as whether someone get insurance or not, your likelihood to default on a loan or somebody’s risk of recidivism means this is something that needs to be addressed. Even admissions decisions are increasingly automated—what school our children go to and what opportunities they have. We don’t have to bring the structural inequalities of the past into the future we create. 43

AI ethics and transparency

Algorithms embed ethical considerations and value choices into program decisions. As such, these systems raise questions concerning the criteria used in automated decisionmaking. Some people want to have a better understanding of how algorithms function and what choices are being made. 44

In the United States, many urban schools use algorithms for enrollment decisions based on a variety of considerations, such as parent preferences, neighborhood qualities, income level, and demographic background. According to Brookings researcher Jon Valant, the New Orleans–based Bricolage Academy “gives priority to economically disadvantaged applicants for up to 33 percent of available seats. In practice, though, most cities have opted for categories that prioritize siblings of current students, children of school employees, and families that live in school’s broad geographic area.” 45 Enrollment choices can be expected to be very different when considerations of this sort come into play.

Depending on how AI systems are set up, they can facilitate the redlining of mortgage applications, help people discriminate against individuals they don’t like, or help screen or build rosters of individuals based on unfair criteria. The types of considerations that go into programming decisions matter a lot in terms of how the systems operate and how they affect customers. 46

For these reasons, the EU is implementing the General Data Protection Regulation (GDPR) in May 2018. The rules specify that people have “the right to opt out of personally tailored ads” and “can contest ‘legal or similarly significant’ decisions made by algorithms and appeal for human intervention” in the form of an explanation of how the algorithm generated a particular outcome. Each guideline is designed to ensure the protection of personal data and provide individuals with information on how the “black box” operates. 47
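One simple way to produce the kind of explanation GDPR envisions — how an algorithm generated a particular outcome — is a per-feature contribution breakdown for a linear scoring model. This sketch assumes a linear model; explaining more complex "black box" models requires additional machinery.

```python
def explain_decision(weights, features):
    """Break a linear model's score into per-feature contributions,
    ranked by magnitude, so a person can see what drove the outcome."""
    contributions = {name: weights[name] * value
                     for name, value in features.items() if name in weights}
    # Largest-magnitude contributions first, positive or negative.
    return sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
```

For a loan applicant, such a breakdown would show, for example, that income pushed the score up while outstanding debt pushed it down — the sort of account a person could then contest.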

Legal liability

There are questions concerning the legal liability of AI systems. If there are harms or infractions (or fatalities in the case of driverless cars), the operators of the algorithm likely will fall under product liability rules. A body of case law has shown that the situation’s facts and circumstances determine liability and influence the kind of penalties that are imposed. Those can range from civil fines to imprisonment for major harms. 48 The Uber-related fatality in Arizona will be an important test case for legal liability. The state actively recruited Uber to test its autonomous vehicles and gave the company considerable latitude in terms of road testing. It remains to be seen if there will be lawsuits in this case and who is sued: the human backup driver, the state of Arizona, the Phoenix suburb where the accident took place, Uber, software developers, or the auto manufacturer. Given the multiple people and organizations involved in the road testing, there are many legal questions to be resolved.

In non-transportation areas, digital platforms often have limited liability for what happens on their sites. For example, in the case of Airbnb, the firm “requires that people agree to waive their right to sue, or to join in any class-action lawsuit or class-action arbitration, to use the service.” By demanding that its users sacrifice basic rights, the company limits consumer protections and therefore curtails the ability of people to fight discrimination arising from unfair algorithms. 49 But whether the principle of neutral networks holds up in many sectors is yet to be determined on a widespread basis.

Recommendations

In order to balance innovation with basic human values, we propose a number of recommendations for moving forward with AI. This includes improving data access, increasing government investment in AI, promoting AI workforce development, creating a federal advisory committee, engaging with state and local officials to ensure they enact effective policies, regulating broad objectives as opposed to specific algorithms, taking bias seriously as an AI issue, maintaining mechanisms for human control and oversight, and penalizing malicious behavior and promoting cybersecurity.

Improving data access

The United States should develop a data strategy that promotes innovation and consumer protection. Right now, there are no uniform standards in terms of data access, data sharing, or data protection. Almost all the data are proprietary in nature and not shared very broadly with the research community, and this limits innovation and system design. AI requires data to test and improve its learning capacity. 50 Without structured and unstructured data sets, it will be nearly impossible to gain the full benefits of artificial intelligence.

In general, the research community needs better access to government and business data, although with appropriate safeguards to make sure researchers do not misuse data in the way Cambridge Analytica did with Facebook information. There is a variety of ways researchers could gain data access. One is through voluntary agreements with companies holding proprietary data. Facebook, for example, recently announced a partnership with Stanford economist Raj Chetty to use its social media data to explore inequality. 51 As part of the arrangement, researchers were required to undergo background checks and could only access data from secured sites in order to protect user privacy and security.

In the U.S., there are no uniform standards in terms of data access, data sharing, or data protection. Almost all the data are proprietary in nature and not shared very broadly with the research community, and this limits innovation and system design.

Google long has made available search results in aggregated form for researchers and the general public. Through its “Trends” site, scholars can analyze topics such as interest in Trump, views about democracy, and perspectives on the overall economy. 52 That helps people track movements in public interest and identify topics that galvanize the general public.

Twitter makes much of its tweets available to researchers through application programming interfaces, commonly referred to as APIs. These tools help people outside the company build application software and make use of data from its social media platform. They can study patterns of social media communications and see how people are commenting on or reacting to current events.

In some sectors where there is a discernible public benefit, governments can facilitate collaboration by building infrastructure that shares data. For example, the National Cancer Institute has pioneered a data-sharing protocol through which certified researchers can query its health data, using de-identified information drawn from clinical data, claims information, and drug therapies. That enables researchers to evaluate efficacy and effectiveness, and to make recommendations regarding the best medical approaches, without compromising the privacy of individual patients.
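The de-identification step such a protocol relies on can be sketched simply: drop direct identifiers and replace the patient ID with a salted hash, so a patient's records can still be linked together without revealing who the patient is. The field names and salt below are hypothetical, for illustration only:

```python
# Illustrative de-identification of a clinical record before sharing.
# Field names and the salt are invented; real protocols (e.g., HIPAA
# Safe Harbor) remove a much longer list of identifiers.
import hashlib

SALT = b"institution-secret-salt"  # kept by the data holder, never shared
DIRECT_IDENTIFIERS = {"name", "address", "phone"}

def deidentify(record):
    # Drop direct identifiers, then replace the patient ID with a
    # salted hash so records remain linkable but not re-identifiable.
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    raw = SALT + record["patient_id"].encode()
    clean["patient_id"] = hashlib.sha256(raw).hexdigest()[:16]
    return clean

rec = {"patient_id": "P-1001", "name": "Jane Doe", "phone": "555-0100",
       "diagnosis": "C50.9", "drug": "tamoxifen"}
out = deidentify(rec)
print(out)
```

The same hash is produced for every record belonging to "P-1001", which is what lets researchers study treatment trajectories across de-identified rows.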

There could be public-private data partnerships that combine government and business data sets to improve system performance. For example, cities could integrate information from ride-sharing services with their own data on social service locations, bus lines, mass transit, and highway congestion to improve transportation. That would help metropolitan areas deal with traffic tie-ups and assist in highway and mass transit planning.

Some combination of these approaches would improve data access for researchers, the government, and the business community, without impinging on personal privacy. As noted by Ian Buck, the vice president of NVIDIA, “Data is the fuel that drives the AI engine. The federal government has access to vast sources of information. Opening access to that data will help us get insights that will transform the U.S. economy.” 53 Through its Data.gov portal, the federal government already has put over 230,000 data sets into the public domain, and this has propelled innovation and aided improvements in AI and data analytic technologies. 54 The private sector also needs to facilitate research data access so that society can achieve the full benefits of artificial intelligence.

Increase government investment in AI

According to Greg Brockman, the co-founder of OpenAI, the U.S. federal government invests only $1.1 billion in non-classified AI technology. 55 That is far lower than the amount being spent by China or other leading nations in this area of research. That shortfall is noteworthy because the economic payoffs of AI are substantial. In order to boost economic development and social innovation, federal officials need to increase investment in artificial intelligence and data analytics. Higher investment is likely to pay for itself many times over in economic and social benefits. 56

Promote digital education and workforce development

As AI applications accelerate across many sectors, it is vital that we reimagine our educational institutions for a world where AI will be ubiquitous and students need a different kind of training than they currently receive. Right now, many students do not receive instruction in the kinds of skills that will be needed in an AI-dominated landscape. For example, there currently are shortages of data scientists, computer scientists, engineers, coders, and platform developers; unless our educational system generates more people with these capabilities, AI development will be constrained.

For these reasons, both state and federal governments have been investing in AI human capital. For example, in 2017, the National Science Foundation funded over 6,500 graduate students in computer-related fields and has launched several new initiatives designed to encourage data and computer science at all levels from pre-K to higher and continuing education. 57 The goal is to build a larger pipeline of AI and data analytic personnel so that the United States can reap the full advantages of the knowledge revolution.

But there also needs to be substantial changes in the process of learning itself. It is not just technical skills that are needed in an AI world but skills of critical reasoning, collaboration, design, visual display of information, and independent thinking, among others. AI will reconfigure how society and the economy operate, and there needs to be “big picture” thinking on what this will mean for ethics, governance, and societal impact. People will need the ability to think broadly about many questions and integrate knowledge from a number of different areas.

One example of new ways to prepare students for a digital future is IBM’s Teacher Advisor program, which uses Watson’s free online tools to help teachers bring the latest knowledge into the classroom. These tools enable instructors to develop new lesson plans in STEM and non-STEM fields, find relevant instructional videos, and help students get the most out of the classroom. 58 As such, they are precursors of the new educational environments that need to be created.

Create a federal AI advisory committee

Federal officials need to think about how they deal with artificial intelligence. As noted previously, there are many issues ranging from the need for improved data access to addressing issues of bias and discrimination. It is vital that these and other concerns be considered so we gain the full benefits of this emerging technology.

In order to move forward in this area, several members of Congress have introduced the “Future of Artificial Intelligence Act,” a bill designed to establish broad policy and legal principles for AI. It proposes the secretary of commerce create a federal advisory committee on the development and implementation of artificial intelligence. The legislation provides a mechanism for the federal government to get advice on ways to promote a “climate of investment and innovation to ensure the global competitiveness of the United States,” “optimize the development of artificial intelligence to address the potential growth, restructuring, or other changes in the United States workforce,” “support the unbiased development and application of artificial intelligence,” and “protect the privacy rights of individuals.” 59

The specific questions the committee is asked to address include the following: competitiveness, workforce impact, education, ethics training, data sharing, international cooperation, accountability, machine learning bias, rural impact, government efficiency, investment climate, job impact, bias, and consumer impact. The committee is directed to submit a report to Congress and the administration 540 days after enactment regarding any legislative or administrative action needed on AI.

This legislation is a step in the right direction, although the field is moving so rapidly that we would recommend shortening the reporting timeline from 540 days to 180 days. Waiting nearly two years for a committee report will certainly result in missed opportunities and a lack of action on important issues. Given rapid advances in the field, having a much quicker turnaround time on the committee analysis would be quite beneficial.

Engage with state and local officials

States and localities also are taking action on AI. For example, the New York City Council unanimously passed a bill that directed the mayor to form a taskforce that would “monitor the fairness and validity of algorithms used by municipal agencies.” 60 The city employs algorithms to “determine if a lower bail will be assigned to an indigent defendant, where firehouses are established, student placement for public schools, assessing teacher performance, identifying Medicaid fraud and determine where crime will happen next.” 61

According to the legislation’s developers, city officials want to know how these algorithms work and make sure there is sufficient AI transparency and accountability. In addition, there is concern regarding the fairness and biases of AI algorithms, so the taskforce has been directed to analyze these issues and make recommendations regarding future usage. It is scheduled to report back to the mayor on a range of AI policy, legal, and regulatory issues by late 2019.

Some observers already are worrying that the taskforce won’t go far enough in holding algorithms accountable. For example, Julia Powles of Cornell Tech and New York University argues that the bill originally required companies to make the AI source code available to the public for inspection, and that there be simulations of its decisionmaking using actual data. After criticism of those provisions, however, former Councilman James Vacca dropped the requirements in favor of a task force studying these issues. He and other city officials were concerned that publication of proprietary information on algorithms would slow innovation and make it difficult to find AI vendors who would work with the city. 62 It remains to be seen how this local task force will balance issues of innovation, privacy, and transparency.

Regulate broad objectives more than specific algorithms

The European Union has taken a restrictive stance on these issues of data collection and analysis. 63 It has rules limiting the ability of companies to collect data on road conditions and map street views. Because many of these countries worry that people’s personal information in unencrypted Wi-Fi networks is swept up in overall data collection, the EU has fined technology firms, demanded copies of data, and placed limits on the material collected. 64 This has made it more difficult for technology companies operating there to develop the high-definition maps required for autonomous vehicles.

The General Data Protection Regulation (GDPR) being implemented in Europe places severe restrictions on the use of artificial intelligence and machine learning. According to published guidelines, “Regulations prohibit any automated decision that ‘significantly affects’ EU citizens. This includes techniques that evaluates a person’s ‘performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.’” 65 In addition, these new rules give citizens the right to review how digital services make specific algorithmic choices that affect them.

If interpreted stringently, these rules will make it difficult for European software designers (and American designers who work with European counterparts) to incorporate artificial intelligence and high-definition mapping in autonomous vehicles. Central to navigation in these cars and trucks is tracking location and movements. Without high-definition maps containing geo-coded data and the deep learning that makes use of this information, fully autonomous driving will stagnate in Europe. Through this and other data protection actions, the European Union is putting its manufacturers and software designers at a significant disadvantage to the rest of the world.

It makes more sense to think about the broad objectives desired in AI and enact policies that advance them, as opposed to governments trying to crack open the “black boxes” and see exactly how specific algorithms operate. Regulating individual algorithms will limit innovation and make it difficult for companies to make use of artificial intelligence.

Take biases seriously

Bias and discrimination are serious issues for AI. There already have been a number of cases of unfair treatment linked to historic data, and steps need to be undertaken to make sure that does not become prevalent in artificial intelligence. Existing statutes governing discrimination in the physical economy need to be extended to digital platforms. That will help protect consumers and build confidence in these systems as a whole.

For these advances to be widely adopted, more transparency is needed in how AI systems operate. Andrew Burt of Immuta argues, “The key problem confronting predictive analytics is really transparency. We’re in a world where data science operations are taking on increasingly important tasks, and the only thing holding them back is going to be how well the data scientists who train the models can explain what it is their models are doing.” 66

Maintain mechanisms for human oversight and control

Some individuals have argued that there need to be avenues for humans to exercise oversight and control of AI systems. For example, Allen Institute for Artificial Intelligence CEO Oren Etzioni argues there should be rules for regulating these systems. First, he says, AI must be governed by all the laws that already have been developed for human behavior, including regulations concerning “cyberbullying, stock manipulation or terrorist threats,” as well as “entrap[ping] people into committing crimes.” Second, he believes that these systems should disclose they are automated systems and not human beings. Third, he states, “An A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information.” 67 His rationale is that these tools store so much data that people have to be cognizant of the privacy risks posed by AI.

In the same vein, the IEEE Global Initiative has ethical guidelines for AI and autonomous systems. Its experts suggest that these models be programmed with consideration for widely accepted human norms and rules for behavior. AI algorithms need to take into account the importance of these norms, how norm conflict can be resolved, and ways these systems can be transparent about norm resolution. Software designs should be programmed for “nondeception” and “honesty,” according to ethics experts. When failures occur, there must be mitigation mechanisms to deal with the consequences. In particular, AI must be sensitive to problems such as bias, discrimination, and fairness. 68

A group of machine learning experts claim it is possible to automate ethical decisionmaking. Using the trolley problem as a moral dilemma, they ask the following question: If an autonomous car goes out of control, should it be programmed to kill its own passengers or the pedestrians who are crossing the street? They devised a “voting-based system” that asked 1.3 million people to assess alternative scenarios, summarized the overall choices, and applied the overall perspective of these individuals to a range of vehicular possibilities. That allowed them to automate ethical decisionmaking in AI algorithms, taking public preferences into account. 69 This procedure, of course, does not reduce the tragedy involved in any kind of fatality, such as seen in the Uber case, but it provides a mechanism to help AI developers incorporate ethical considerations in their planning.
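A toy version of such a voting-based aggregation might look like the following. The scenario and outcome labels are invented for illustration; the actual study used a far richer preference model than simple majority counting:

```python
# Toy sketch of aggregating many people's votes over moral dilemmas
# into a per-scenario policy, via simple majority. Scenario labels
# are invented; this is not the study's actual method.
from collections import Counter

def aggregate_votes(votes):
    """votes: iterable of (scenario, chosen_outcome) pairs.
    Returns the majority-preferred outcome for each scenario."""
    tallies = {}
    for scenario, outcome in votes:
        tallies.setdefault(scenario, Counter())[outcome] += 1
    return {s: c.most_common(1)[0][0] for s, c in tallies.items()}

votes = [
    ("swerve_or_stay", "protect_pedestrians"),
    ("swerve_or_stay", "protect_pedestrians"),
    ("swerve_or_stay", "protect_passengers"),
]
policy = aggregate_votes(votes)
print(policy["swerve_or_stay"])
```

The resulting policy table is what an AI developer would consult when encoding public preferences into a vehicle's decision rules.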

Penalize malicious behavior and promote cybersecurity

As with any emerging technology, it is important to discourage malicious treatment designed to trick software or use it for undesirable ends. 70 This is especially important given the dual-use aspects of AI, where the same tool can be used for beneficial or malicious purposes. The malevolent use of AI exposes individuals and organizations to unnecessary risks and undermines the virtues of the emerging technology. This includes behaviors such as hacking, manipulating algorithms, compromising privacy and confidentiality, or stealing identities. Efforts to hijack AI in order to solicit confidential information should be seriously penalized as a way to deter such actions. 71

In a rapidly changing world with many entities having advanced computing capabilities, there needs to be serious attention devoted to cybersecurity. Countries have to be careful to safeguard their own systems and keep other nations from damaging their security. 72 According to the U.S. Department of Homeland Security, a major American bank receives around 11 million calls a week at its service center. In order to protect its telephony from denial of service attacks, it uses a “machine learning-based policy engine [that] blocks more than 120,000 calls per month based on voice firewall policies including harassing callers, robocalls and potential fraudulent calls.” 73 This represents a way in which machine learning can help defend technology systems from malevolent attacks.
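A heavily simplified, rule-based stand-in for such a policy engine could be sketched as follows. The feature names and thresholds are invented, and a production system would score calls with a trained model rather than fixed rules:

```python
# Hedged sketch of a call-blocking "policy engine" in the spirit of the
# voice-firewall example. All signals and thresholds are hypothetical.
def should_block(call):
    """Score a call on simple risk signals; block at score >= 2."""
    score = 0
    if call.get("calls_last_hour", 0) > 50:        # robocall-like volume
        score += 2
    if call.get("spoofed_caller_id"):              # common fraud signal
        score += 2
    if call.get("prior_harassment_reports", 0) > 0:
        score += 1
    return score >= 2

print(should_block({"calls_last_hour": 120}))         # high-volume caller
print(should_block({"prior_harassment_reports": 1}))  # one report alone
```

A real deployment would replace the hand-set weights with model outputs learned from labeled call histories, but the filtering structure is similar.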

To summarize, the world is on the cusp of revolutionizing many sectors through artificial intelligence and data analytics. There already are significant deployments in finance, national security, health care, criminal justice, transportation, and smart cities that have altered decisionmaking, business models, risk mitigation, and system performance. These developments are generating substantial economic and social benefits.

Yet the manner in which AI systems unfold has major implications for society as a whole. It matters how policy issues are addressed, ethical conflicts are reconciled, legal realities are resolved, and how much transparency is required in AI and data analytic solutions. 74 Human choices about software development affect the way in which decisions are made and the manner in which they are integrated into organizational routines. Exactly how these processes are executed needs to be better understood because they will have substantial impact on the general public soon, and for the foreseeable future. AI may well be a revolution in human affairs, and become the single most influential human innovation in history.

Note: We appreciate the research assistance of Grace Gilberg, Jack Karsten, Hillary Schaub, and Kristjan Tomasson on this project.

The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Support for this publication was generously provided by Amazon. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment. 

John R. Allen is a member of the Board of Advisors of Amida Technology and on the Board of Directors of Spark Cognition. Both companies work in fields discussed in this piece.

  • Thomas Davenport, Jeff Loucks, and David Schatsky, “Bullish on the Business Value of Cognitive” (Deloitte, 2017), p. 3 (www2.deloitte.com/us/en/pages/deloitte-analytics/articles/cognitive-technology-adoption-survey.html).
  • Luke Dormehl, Thinking Machines: The Quest for Artificial Intelligence—and Where It’s Taking Us Next (New York: Penguin–TarcherPerigee, 2017).
  • Shubhendu and Vijay, “Applicability of Artificial Intelligence in Different Fields of Life.”
  • Andrew McAfee and Erik Brynjolfsson, Machine Platform Crowd: Harnessing Our Digital Future (New York: Norton, 2017).
  • Portions of this paper draw on Darrell M. West, The Future of Work: Robots, AI, and Automation , Brookings Institution Press, 2018.
  • PriceWaterhouseCoopers, “Sizing the Prize: What’s the Real Value of AI for Your Business and How Can You Capitalise?” 2017.
  • Dominic Barton, Jonathan Woetzel, Jeongmin Seong, and Qinzheng Tian, “Artificial Intelligence: Implications for China” (New York: McKinsey Global Institute, April 2017), p. 1.
  • Nathaniel Popper, “Stocks and Bots,” New York Times Magazine , February 28, 2016.
  • Michael Lewis, Flash Boys: A Wall Street Revolt (New York: Norton, 2015).
  • Cade Metz, “In Quantum Computing Race, Yale Professors Battle Tech Giants,” New York Times , November 14, 2017, p. B3.
  • Executive Office of the President, “Artificial Intelligence, Automation, and the Economy,” December 2016, pp. 27-28.
  • Christian Davenport, “Future Wars May Depend as Much on Algorithms as on Ammunition, Report Says,” Washington Post , December 3, 2017.
  • John R. Allen and Amir Husain, “On Hyperwar,” Naval Institute Proceedings , July 17, 2017, pp. 30-36.
  • Paul Mozur, “China Sets Goal to Lead in Artificial Intelligence,” New York Times , July 21, 2017, p. B1.
  • Paul Mozur and John Markoff, “Is China Outsmarting American Artificial Intelligence?” New York Times , May 28, 2017.
  • Economist , “America v China: The Battle for Digital Supremacy,” March 15, 2018.
  • Rasmus Rothe, “Applying Deep Learning to Real-World Problems,” Medium , May 23, 2017.
  • Eric Horvitz, “Reflections on the Status and Future of Artificial Intelligence,” Testimony before the U.S. Senate Subcommittee on Space, Science, and Competitiveness, November 30, 2016, p. 5.
  • Jeff Asher and Rob Arthur, “Inside the Algorithm That Tries to Predict Gun Violence in Chicago,” New York Times Upshot , June 13, 2017.
  • Caleb Watney, “It’s Time for our Justice System to Embrace Artificial Intelligence,” TechTank (blog), Brookings Institution, July 20, 2017.
  • Asher and Arthur, “Inside the Algorithm That Tries to Predict Gun Violence in Chicago.”
  • Paul Mozur and Keith Bradsher, “China’s A.I. Advances Help Its Tech Industry, and State Security,” New York Times , December 3, 2017.
  • Simon Denyer, “China’s Watchful Eye,” Washington Post , January 7, 2018.
  • Cameron Kerry and Jack Karsten, “Gauging Investment in Self-Driving Cars,” Brookings Institution, October 16, 2017.
  • Portions of this section are drawn from Darrell M. West, “Driverless Cars in China, Europe, Japan, Korea, and the United States,” Brookings Institution, September 2016.
  • Yuming Ge, Xiaoman Liu, Libo Tang, and Darrell M. West, “Smart Transportation in China and the United States,” Center for Technology Innovation, Brookings Institution, December 2017.
  • Peter Holley, “Uber Signs Deal to Buy 24,000 Autonomous Vehicles from Volvo,” Washington Post , November 20, 2017.
  • Daisuke Wakabayashi, “Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam,” New York Times , March 19, 2018.
  • Kevin Desouza, Rashmi Krishnamurthy, and Gregory Dawson, “Learning from Public Sector Experimentation with Artificial Intelligence,” TechTank (blog), Brookings Institution, June 23, 2017.
  • Boyd Cohen, “The 10 Smartest Cities in North America,” Fast Company , November 14, 2013.
  • Teena Maddox, “66% of US Cities Are Investing in Smart City Technology,” TechRepublic , November 6, 2017.
  • Osonde Osoba and William Welser IV, “The Risks of Artificial Intelligence to Security and the Future of Work” (Santa Monica, Calif.: RAND Corp., December 2017) (www.rand.org/pubs/perspectives/PE237.html).
  • Ibid., p. 7.
  • Dominic Barton, Jonathan Woetzel, Jeongmin Seong, and Qinzheng Tian, “Artificial Intelligence: Implications for China” (New York: McKinsey Global Institute, April 2017), p. 7.
  • Executive Office of the President, “Preparing for the Future of Artificial Intelligence,” October 2016, pp. 30-31.
  • Elaine Glusac, “As Airbnb Grows, So Do Claims of Discrimination,” New York Times , June 21, 2016.
  • “Joy Buolamwini,” Bloomberg Businessweek , July 3, 2017, p. 80.
  • Mark Purdy and Paul Daugherty, “Why Artificial Intelligence is the Future of Growth,” Accenture, 2016.
  • Jon Valant, “Integrating Charter Schools and Choice-Based Education Systems,” Brown Center Chalkboard blog, Brookings Institution, June 23, 2017.
  • Tucker, “‘A White Mask Worked Better.’”
  • Cliff Kuang, “Can A.I. Be Taught to Explain Itself?” New York Times Magazine , November 21, 2017.
  • Yale Law School Information Society Project, “Governing Machine Learning,” September 2017.
  • Katie Benner, “Airbnb Vows to Fight Racism, But Its Users Can’t Sue to Prompt Fairness,” New York Times , June 19, 2016.
  • Executive Office of the President, “Artificial Intelligence, Automation, and the Economy” and “Preparing for the Future of Artificial Intelligence.”
  • Nancy Scolar, “Facebook’s Next Project: American Inequality,” Politico , February 19, 2018.
  • Darrell M. West, “What Internet Search Data Reveals about Donald Trump’s First Year in Office,” Brookings Institution policy report, January 17, 2018.
  • Ian Buck, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” February 14, 2018.
  • Keith Nakasone, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” March 7, 2018.
  • Greg Brockman, “The Dawn of Artificial Intelligence,” Testimony before U.S. Senate Subcommittee on Space, Science, and Competitiveness, November 30, 2016.
  • Amir Khosrowshahi, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” February 14, 2018.
  • James Kurose, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” March 7, 2018.
  • Stephen Noonoo, “Teachers Can Now Use IBM’s Watson to Search for Free Lesson Plans,” EdSurge , September 13, 2017.
  • Congress.gov, “H.R. 4625 FUTURE of Artificial Intelligence Act of 2017,” December 12, 2017.
  • Elizabeth Zima, “Could New York City’s AI Transparency Bill Be a Model for the Country?” Government Technology , January 4, 2018.
  • Julia Powles, “New York City’s Bold, Flawed Attempt to Make Algorithms Accountable,” New Yorker , December 20, 2017.
  • Sheera Frenkel, “Tech Giants Brace for Europe’s New Data Privacy Rules,” New York Times , January 28, 2018.
  • Claire Miller and Kevin O’Brien, “Germany’s Complicated Relationship with Google Street View,” New York Times , April 23, 2013.
  • Cade Metz, “Artificial Intelligence is Setting Up the Internet for a Huge Clash with Europe,” Wired , July 11, 2016.
  • Eric Siegel, “Predictive Analytics Interview Series: Andrew Burt,” Predictive Analytics Times , June 14, 2017.
  • Oren Etzioni, “How to Regulate Artificial Intelligence,” New York Times , September 1, 2017.
  • “Ethical Considerations in Artificial Intelligence and Autonomous Systems,” unpublished paper. IEEE Global Initiative, 2018.
  • Ritesh Noothigattu, Snehalkumar Gaikwad, Edmond Awad, Sohan Dsouza, Iyad Rahwan, Pradeep Ravikumar, and Ariel Procaccia, “A Voting-Based System for Ethical Decision Making,” Computers and Society , September 20, 2017 (www.media.mit.edu/publications/a-voting-based-system-for-ethical-decision-making/).
  • Miles Brundage, et al., “The Malicious Use of Artificial Intelligence,” University of Oxford unpublished paper, February 2018.
  • John Markoff, “As Artificial Intelligence Evolves, So Does Its Criminal Potential,” New York Times, October 24, 2016, p. B3.
  • Economist , “The Challenger: Technopolitics,” March 17, 2018.
  • Douglas Maughan, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” March 7, 2018.
  • Levi Tillemann and Colin McCormick, “Roadmapping a U.S.-German Agenda for Artificial Intelligence Policy,” New American Foundation, March 2017.

How we built a robot that can evolve – and why it won’t take over the world

Lecturer in mechatronics, University of Cambridge

Disclosure statement

Fumiya Iida receives funding from the Swiss National Science Foundation.

The latest research on robots is often described as if it were a step on the inexorable march toward a robot apocalypse straight out of the Terminator films. While there are risks in developing artificial intelligence that need to be taken seriously, reacting to every development in robotics with undue fear could stifle research and creativity.

For example, creating artificial intelligence that can design future versions of itself – effectively a robot that can reproduce and evolve – might help us discover innovations that humans might not consider on their own. It would need to be carefully monitored and controlled but, rather than something to fear, it could lead us to a greater understanding of the physical world and our own development.

Unnatural selection

Using artificial intelligence to improve a design by repeatedly copying it and adding a small change each time (iterative design) is not a novel approach, but it has so far been restricted to computer simulations. By modelling a group of lifeforms that can reproduce, you can simulate a process that’s similar to the natural selection of real biological evolution. The individuals that are most successful are more likely to reproduce and spread their own particular design. So after a number of generations you will eventually have an optimised version of the lifeform that a human designer may not have come across on their own.

Computer simulations of natural selection and evolution come with a series of advantages. Theoretically, the only limit to the number of generations and how fast they are produced is the computer’s speed. Models without promise can be easily discarded while potentially fruitful designs can be explored rapidly. And there is no need for a large supply of raw materials because computer memory is abundant, cheap, and takes up very little space.

The problem is that the simulated lifeforms may bear little resemblance to what can exist in the real world. Physical robots that can actually be built, meanwhile, are traditionally stuck in one shape for their entire lifecycle.

To overcome these issues, my colleagues and I have built a “mother” robot that can manufacture its own “children” without human intervention, as reported recently in PLOS One. We programmed it to produce simple robots, consisting of between one and five plastic cubes with a small motor inside, which are capable of crawling. The children are then autonomously tested to see which designs perform best.

Based on these results, the mother then produced a second generation using principles based on natural selection. It used the “virtual DNA” of the best first-generation children as a starting point for its designs in order to pass down preferential traits. The process was repeated hundreds of times and eventually the fittest individuals in the last generation performed a set locomotion task twice as quickly as the fittest individuals in the first generation.
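The selection loop described above can be sketched in code. The genome encoding and fitness function below are invented stand-ins (a real "child" is physically built and timed while crawling), but the select-and-mutate structure is the same:

```python
# Toy re-creation of the mother robot's selection loop: each "child"
# genome is a list of 1-5 module phases; fitness is a placeholder
# score, not a physical crawling test. Encoding and fitness are
# invented for illustration.
import random

random.seed(0)

def fitness(genome):
    # Placeholder: reward more modules, with phases near 0.5.
    return sum(1.0 - abs(phase - 0.5) for phase in genome)

def mutate(genome):
    # Perturb each phase slightly; occasionally add a cube module.
    child = [min(1.0, max(0.0, p + random.gauss(0, 0.1))) for p in genome]
    if len(child) < 5 and random.random() < 0.2:
        child.append(random.random())
    return child

# First generation: 20 random children of 1-5 modules each.
population = [[random.random() for _ in range(random.randint(1, 5))]
              for _ in range(20)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]   # "virtual DNA" of the fittest children
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

best = max(population, key=fitness)
print(round(fitness(best), 2))
```

Swapping the placeholder fitness for a measured locomotion time is what turns this simulation into the physical experiment the article describes.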

The mother of invention

By allowing the mother to restlessly create hundreds of new shapes and gait patterns for her children, she produced designs that a human engineer might not have been able to build. The most interesting and important thing about this is that she effectively demonstrated creativity.

Unlike conventional mechanical systems such as packaging robots in factories, which repeat the same motions programmed by humans, our mother robot was able to autonomously construct children without influence from human designers. As a result, she can “invent” novel designs.

At the moment the children are too simple and restricted to become mothers themselves, so we don’t have a complete copy of natural evolution. As the technology advances, however, there’s no reason why this couldn’t happen in the future.

But isn’t it too dangerous to have robots evolving by themselves? We believe not. The aim of our research is to engineer the underlying mechanisms of creativity. We wanted to know how machines can handle unknown objects, how new ideas and designs can emerge from a statistical process, and how much time, energy, raw materials and other resources are needed to create anything truly novel.

The robot children created so far have given us some surprises with unique designs and motions that human engineers would be unlikely to consider in the first instance. But engineering is a bottom-up process to build up technology piece-by-piece by understanding why and how things work. So unlike biological creatures, our evolving robots are still, and will always be, within our expected boundaries and control.


The Future of Robotics: How Robots Will Transform Our Lives

What comes to mind when you hear the word “robot”? Do you picture a metallic humanoid in a spaceship in the distant future? Perhaps you imagine a dystopian future where humanity is enslaved by its robot overlords. Or maybe you think of an automobile assembly line with robot-like machines putting cars together.

Whatever you think, one thing is sure: robots are here to stay. Fortunately, it seems likely that robots will be more about doing repetitive or dangerous tasks than seizing supreme executive power. Let’s take a look at robotics: we’ll define and classify the term, examine the role of Artificial Intelligence in the field, and consider the future of robotics and how it will change our lives.

Robotics is the engineering branch that deals with the conception, design, construction, operation, application, and usage of robots. Digging a little deeper, we see that a robot is defined as an automatically operated machine that independently carries out a series of actions and does work usually accomplished by a human.

Incidentally, robots don’t have to resemble humans, although some do; look at images of automobile assembly lines for proof. Robots that appear human are typically referred to as “androids.” Designers often make their creations appear human so that people feel more at ease around them, but the effect isn’t guaranteed: some people find robots, especially ones that resemble people, creepy.

Robots are versatile machines, evidenced by their wide variety of forms and functions. Here's a list of a few kinds of robots we see today:

  • Healthcare: Robots in the healthcare industry do everything from assisting in surgery to physical therapy to help people walk to moving through hospitals and delivering essential supplies such as meds or linens. Healthcare robots have even contributed to the ongoing fight against the pandemic, filling and sealing testing swabs and producing respirators.
  • Homelife: You need look no further than a Roomba to find a robot in someone's house. But home robots do more now than vacuum floors; they can mow lawns or integrate with voice assistants like Alexa.
  • Manufacturing: The field of manufacturing was the first to adopt robots, such as the automobile assembly line machines we previously mentioned. Industrial robots handle various tasks, including arc welding, material handling, steel cutting, and food packaging.
  • Logistics: Everybody wants their online orders delivered on time, if not sooner. So companies employ robots to stack warehouse shelves, retrieve goods, and even conduct short-range deliveries.
  • Space Exploration: Mars explorers such as Sojourner and Perseverance are robots. The Hubble telescope is classified as a robot, as are deep space probes like Voyager and Cassini.
  • Military: Robots handle dangerous tasks, and few environments are more dangerous than modern warfare. Consequently, the military has a diverse selection of robots equipped to address many of the riskier jobs associated with war. Examples include the Centaur, an explosive detection and disposal robot that looks for mines and IEDs; the MUTT, which follows soldiers around and totes their gear; and SAFFiR, which fights fires that break out on naval vessels.
  • Entertainment: We already have toy robots, robot statues, and robot restaurants. As robots become more sophisticated, expect their entertainment value to rise accordingly.
  • Travel: We only need to say three words: self-driving vehicles.


Like any innovation, robots have their pluses and minuses. Here’s a breakdown of the good and the bad about robots and the future of robotics.

Advantages

  • They work in hazardous environments: Why risk human lives when you can send a robot in to do the job? Consider how preferable it is to have a robot fighting a fire or working on a nuclear reactor core.
  • They’re cost-effective: Robots don’t take sick days or coffee breaks, nor do they need perks like life insurance, paid time off, or healthcare offerings like dental and vision.
  • They increase productivity: Robots are wired to perform repetitive tasks ad infinitum; the human brain is not. Industries use robots to accomplish the tedious, redundant work, freeing employees to tackle more challenging tasks and even learn new skills.
  • They offer better quality assurance: Vigilance decrement is a lapse in concentration that hits workers who repeatedly perform the same functions. As the human’s concentration level drops, the likelihood of errors, poor results, or even accidents increases. Robots perform repetitive tasks flawlessly without having their performance slip due to boredom.

Disadvantages

  • They incur steep startup costs: Robot implementation is an investment risk, and it costs a lot up front. Although most manufacturers eventually recoup their investment over the long run, it's expensive in the short term. However, this is a common obstacle in any new technological implementation, like setting up a wireless network or performing a cloud migration.
  • They might take away jobs: Yes, robots have replaced some workers in certain situations, such as assembly lines. Whenever the business sector incorporates game-changing technology, some jobs become casualties. However, this disadvantage may be overstated, because robot implementation typically creates greater demand for people to support the technology, which brings up the final disadvantage.
  • They require companies to hire skilled support staff: This drawback is good news for potential employees, but bad news for thrifty-minded companies. Robots require programmers, operators, and repair personnel. While job seekers may rejoice, the prospect of having to recruit professionals (and pay professional-level salaries!) may serve as an impediment to implementing robots.

Artificial Intelligence (AI) increases human-robot interaction, collaboration opportunities, and quality. The industrial sector already has co-bots, which are robots that work alongside humans to perform testing and assembly.

Advances in AI help robots mimic human behavior more closely, which was a key goal in creating them in the first place. Robots that act and think more like people can integrate better into the workforce and bring a level of efficiency unmatched by human employees.

Robot designers use Artificial Intelligence to give their creations enhanced capabilities like:

  • Computer Vision: Robots can identify and recognize objects they encounter, discern details, and learn how to navigate around or avoid specific items.
  • Manipulation: AI helps robots gain the fine motor skills needed to grasp objects without destroying the item.
  • Motion Control and Navigation: Robots no longer need humans to guide them along paths and process flows. AI enables robots to analyze their environment and self-navigate. This capability even applies to the virtual world of software. AI helps robot software processes avoid flow bottlenecks or process exceptions.
  • Natural Language Processing (NLP) and Real-World Perception: Artificial Intelligence and Machine Learning (ML) help robots better understand their surroundings, recognize and identify patterns, and comprehend data. These improvements increase the robot’s autonomy and decrease reliance on human agents.
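
To make the motion-control and navigation point concrete, here is a toy path planner (a hypothetical illustration, not any particular robot's software stack): given a map of its environment, the robot finds a collision-free route with breadth-first search.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over a grid map.
    grid: list of strings where '#' marks an obstacle.
    Returns a shortest list of (row, col) cells from start
    to goal, or None if no collision-free route exists."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}   # also serves as the visited set
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Walk the parent links back to reconstruct the route
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#'
                    and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None

grid = ["....",
        ".##.",
        "....",
        "...."]
path = plan_path(grid, (0, 0), (3, 3))
```

Real navigation stacks add continuous motion, sensor uncertainty, and moving obstacles, but the map-then-search structure is the same basic idea.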

Software robots are computer programs that perform tasks without human intervention, such as web crawlers or chatbots . These robots are entirely virtual and not considered actual robots since they have no physical characteristics.

This technology shouldn't be confused with robotic software, which is loaded into a robot and determines its programming. The overlap between the two is natural, though, since in both cases the software helps the entity (robot or computer program) perform its functions independently of human interaction.
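
As a concrete (and deliberately tiny) example of a software robot, here is a keyword-matching chatbot; the rules and canned replies are invented for illustration:

```python
# Minimal rule-based "software robot": a keyword-matching chatbot.
RULES = {
    "hours": "We are open 9am-5pm, Monday to Friday.",
    "price": "Pricing information is available on our website.",
    "human": "Transferring you to a human agent.",
}

def reply(message):
    """Return the canned answer for the first keyword found,
    or a fallback when no rule matches."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return "Sorry, I didn't understand. Try asking about hours or price."

answer = reply("What are your hours?")
```

Production chatbots replace the keyword table with NLP models, but the shape is the same: map an incoming message to an action without a human in the loop.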

Thanks to improved sensor technology and remarkable advances in Machine Learning and Artificial Intelligence, robots will keep evolving from mere rote machines into collaborators with cognitive functions. These and other associated fields are on an upward trajectory, and robotics will benefit significantly from these strides.

We can expect to see greater numbers of increasingly sophisticated robots incorporated into more areas of life, working with humans. Contrary to dystopian-minded prophets of doom, these improved robots will not simply replace workers. Industries rise and fall, and some become obsolete in the face of new technologies, but those same technologies bring new opportunities for employment and education.

That’s the case with robots. Perhaps there will be fewer human workers welding automobile frames, but there will be a greater need for skilled technicians to program, maintain, and repair the machines. In many cases, this means that employees could receive valuable in-house training and upskilling, giving them a set of skills that could apply to robot programming and maintenance and other fields and industries.

Robots will increase economic growth and productivity and create new career opportunities for many people worldwide. Even so, warnings persist about massive job losses, with forecasts of 20 million manufacturing jobs lost by 2030, or of 30% of all jobs being automated by 2030.

But thanks to the consistent levels of precision that robots offer, we can look forward to robots handling more of the burdensome, redundant manual labor tasks, making transportation work more efficiently, improving healthcare, and freeing people to improve themselves. But, of course, time will tell how this all works out.


Robots and your job: how automation is changing the workplace

A robotic arm prepares a cappuccino at the Barney Barista Bar of the Swiss F&P Robotics company in Zurich, Switzerland. Image: REUTERS/Arnd Wiegmann

Knowledge @Wharton

  • A new survey-based study has explored how automation is changing the workplace.
  • Despite popular belief, robots are not replacing workers; the data show that increased automation actually leads to more hiring overall.
  • However, because the technology reduces human error, managers of high-skilled workers may not be needed as much.
  • Lynn Wu, co-author of “The Robot Revolution” study, encourages leaders to prepare for automation to maximize the new benefits.

If you’re worried that robots are coming for your job, you can relax — unless you’re a manager.

A new survey-based study explains how automation is reshaping the workplace in unexpected ways. Robots can improve efficiency and quality, reduce costs, and even help create more jobs for their human counterparts. But more robots can also reduce the need for managers.

The study is titled “The Robot Revolution: Managerial and Employment Consequences for Firms.” The co-authors are Lynn Wu, professor of operations, information and decisions at Wharton; Bryan Hong, professor of entrepreneurship and management at the University of Missouri-Kansas City’s Bloch School of Management; and Jay Dixon, an economist with Statistics Canada. The researchers said the study, which analyzed five years’ worth of data on businesses in the Canadian economy, is the most comprehensive of its kind on how automation affects employment, labor, strategic priorities, and other aspects of the workplace.

Wu recently spoke with Knowledge@Wharton about the paper and its implications for firms.

More robots, more workers

Contrary to popular belief, robots are not replacing workers. While there is some shedding of employees when firms adopt robots, the data show that increased automation leads to more hiring overall. That’s because robot-adopting firms become so much more productive that they need more people to meet the increased demand in production, Wu explained.

“Any employment loss in our data we found came from the non-adopting firms,” she said. “These firms became less productive, relative to the adopters. They lost their competitive advantage and, as a result, they had to lay off workers.”

Figure: Total employment, time-indexed dummy regression coefficient plot (NALMF sample).

Armed with facts about automation, firms need to consider a bigger-picture strategy when bringing in robots, she said.

“The story is really about how do you leverage technology better to become more productive, to become more competitive? And how do you change your managerial firm practices so you can get the most out of your robot technologies?” Wu said.

Robots render some managers obsolete

Certain kinds of managers become superfluous as businesses increase automation, according to the study. The drop is simply an effect of modern technology, Wu said. As different tasks and processes are automated, human error is drastically reduced. So, too, is the need for close monitoring of that work by managers.

“Technology can generate reports on what the robots did, what material they used, and they can aggregate it at the firm level, division level, to get lots of different operational metrics very easily,” Wu said. “And those are the kinds of things that managers tend to do.”

But it’s a bit more complex than that. The managerial decrease comes from the changing composition of employment. Although robot adoption results in increased employment, the increase is not uniform across skills, Wu said. Low-skilled workers, such as box packers, and high-skilled workers, such as engineers, grow in numbers, but middle-skilled workers become endangered.

“When you see a huge decrease in middle-skilled work and an increase in those extremes — high- and low-skilled labor — it means the type of managers you need to manage this new workforce will be different.”

Someone who supervises low-skilled workers can manage a lot more people when the firm brings in robots because of the standardization and efficiency. But the change is more ambiguous for managers of high-skilled workers. Those employees are typically responsible for innovation, rather than operations, which is harder to measure.

“Highly-skilled professionals are very good at what they do, better than their managers. They don’t need managers to tell them how to do their jobs or make sure they arrive to work on time,” Wu said. “Managing high-skilled workers is much more like coaching or advising. Managers advise them to help them to achieve the best they can at work, and that kind of skill is very different from supervising work.”

The revolution is inevitable

Wu said the robot revolution is “inevitable” given the leaps in artificial intelligence, machine learning, and other technologies rapidly transforming the workplace. She encouraged business leaders to embrace the change and explore strategies to maximize the benefits. For example, the study found that robots were associated with greater use of performance-based pay because automation reduces variance. In other words, it’s easier to meet production quotas when robots are on the job. Robots also reduce workplace injuries, according to the study.

“In the next couple of years, you’re going to see huge industry turbulence, if you haven’t seen it already,” she said. “The firms that figure it out, either by luck or by ingenuity, are going to kill it. And the firms that don’t figure it out are not.”


Wu also urged business leaders to be mindful of their low-skilled workers. As a company automates and the middle-skill level shrinks, those entry-level workers lose upward mobility.

“In our old paradigm, we don’t expect people to stay on those jobs forever,” she said. “But now you notice that the career ladder is broken. There is no middle skill to go to. There are no supervisory jobs to go to. That means that the contract, where there is an implicit understanding that you will move up eventually from your low-skilled work, needs to be revisited because it’s changing.”

Ideas Made to Matter

A new study measures the actual impact of robots on jobs. It’s significant.

Jul 29, 2020

Machines replacing humans in the workplace has been a perpetual concern since the Industrial Revolution, and an increasing topic of discussion with the rise of automation in the last few decades. But so far hype has outweighed information about how automation — particularly robots, which do not need humans to operate — actually affects employment and wages.

The recently published paper,  “Robots and Jobs: Evidence from U.S. Labor Markets,” by MIT professor Daron Acemoglu and Boston University professor Pascual Restrepo, PhD ’16, finds that industrial robots do have a negative impact on workers.

The researchers found that for every robot added per 1,000 workers in the U.S., wages decline by 0.42% and the employment-to-population ratio goes down by 0.2 percentage points — to date, this means the loss of about 400,000 jobs. The impact is more sizable within the areas where robots are deployed: adding one more robot in a commuting zone (geographic areas used for economic analysis) reduces employment by six workers in that area.

To conduct their research, the economists created a model in which robots and workers compete for the production of certain tasks.
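
The general idea of such a task-competition model can be sketched in a few lines (this is my illustrative simplification, not the paper's formal model): each task goes to whichever factor performs it more cheaply, so as robots get cheaper they displace workers from the tasks where machines have an edge.

```python
# Toy task-based model: each production task is assigned to the
# cheaper factor. Cheaper robots shift tasks (and labor demand)
# away from workers -- the displacement effect.
# Task names, difficulties, and costs are invented for illustration.

def allocate(tasks, wage, robot_cost):
    """tasks: dict of task -> relative difficulty for a robot.
    A task is automated when the robot's effective cost beats the wage."""
    return {t: ("robot" if robot_cost * difficulty < wage else "worker")
            for t, difficulty in tasks.items()}

tasks = {"welding": 0.5, "painting": 0.8, "design": 3.0, "repair": 2.0}
before = allocate(tasks, wage=1.0, robot_cost=1.0)
after = allocate(tasks, wage=1.0, robot_cost=0.4)   # robots get cheaper

worker_tasks_before = sum(v == "worker" for v in before.values())
worker_tasks_after = sum(v == "worker" for v in after.values())
```

In the paper's richer version, automation also raises productivity, which can create new tasks for workers; the net employment effect depends on which force dominates.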

Industries are adopting robots to varying degrees, and the effects differ across regions and groups. The automotive industry has adopted robots more than any other sector, and workers who earn lower or middle incomes, perform manual labor, or live in the Rust Belt and Texas are among those most likely to have their work affected by robots.

“It’s obviously a very important issue given all of the anxiety and excitement about robots,” Acemoglu said. “Our evidence shows that robots increase productivity. They are very important for continued growth and for firms, but at the same time they destroy jobs and they reduce labor demand. Those effects of robots also need to be taken into account.”

“That doesn't mean we should be opposed to robots, but it does imply that a more holistic understanding of what their effects are needs to be part of the discussion … automation technologies generally don't bring shared prosperity by themselves,” he said. “They need to be combined with other technological changes that create jobs.”

Industrial robots are automatically controlled, reprogrammable, multipurpose machines that can do a variety of things like welding, painting, and packaging. They are fully autonomous and don’t need humans to operate them. Industrial robots grew fourfold in the U.S. between 1993 and 2007, Acemoglu and Restrepo write, to a rate of one robot per thousand workers. Europe is slightly ahead of the U.S. in industrial robot adoption; the rate there grew to 1.6 robots per thousand workers during that time span.

Improvements in technology adversely affect wages and employment through the displacement effect , in which robots or other automation complete tasks formerly done by workers. Technology also has more positive productivity effects by making tasks easier to complete or creating new jobs and tasks for workers. The researchers said automation technologies always create both displacement and productivity effects, but robots create a stronger displacement effect. 

Acemoglu and Restrepo looked at robot use in 19 industries, as well as census and American Community Survey data for 722 commuting zones, finding a negative relationship between a commuting zone’s exposure to robots and its post-1990 labor market outcomes.


Between 1990 and 2007, the increase in robots (about one per thousand workers) reduced the average employment-to-population ratio in a zone by 0.39 percentage points, and average wages by 0.77%, compared to commuting zones with no exposure to robots, they found. This implies that adding one robot to an area reduces employment in that area by about six workers.

But what happens in one geographic area affects the economy as a whole, and robots in one area can create positive spillovers. These benefits for the rest of the economy include reducing the prices of goods and creating shared capital income gains. Including this spillover, one robot per thousand workers has slightly less of an impact on the population as a whole, leading to an overall 0.2 percentage point reduction in the employment-to-population ratio, and reducing wages by 0.42%. Thus, adding one robot reduces employment nationwide by 3.3 workers.
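For concreteness, the local and national estimates above amount to simple linear scaling of the study's per-robot coefficients. The sketch below is illustrative only: the function name and the assumption of pure linearity are ours for exposition, not part of the researchers' model.

```python
# Illustrative arithmetic applying the study's point estimates, which are
# quoted per one additional robot per thousand workers. Treating the
# effects as linear is a simplifying assumption for exposition.

def robot_effect(extra_robots_per_thousand, ratio_drop_pp, wage_drop_pct):
    """Scale the per-unit estimates by the change in robot density."""
    return (extra_robots_per_thousand * ratio_drop_pp,
            extra_robots_per_thousand * wage_drop_pct)

# Commuting-zone (local) estimates: -0.39 pp employment ratio, -0.77% wages
local = robot_effect(1.0, 0.39, 0.77)

# National estimates, after positive spillovers such as cheaper goods and
# shared capital income gains: -0.2 pp employment ratio, -0.42% wages
national = robot_effect(1.0, 0.20, 0.42)

print(f"local:    -{local[0]} pp employment ratio, -{local[1]}% wages")
print(f"national: -{national[0]} pp employment ratio, -{national[1]}% wages")

# In job counts, the paper reports roughly 6 jobs lost per robot locally
# and about 3.3 per robot nationwide once spillovers are included.
```

The gap between the two rows is the spillover: the displacement effect dominates where the robot is installed, while the productivity gains are spread across the whole economy.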

In a separate study of robot adoption in France , Acemoglu and his co-authors found that French manufacturing firms that added robots became more productive and profitable, but that increases in robot use led to a decline in employment industrywide.

Disproportionate impacts

The impact of robots varies among industries, geographic areas, and population groups. Unsurprisingly, the effect is concentrated in manufacturing. The automotive industry has adopted robots more than any other industry, the researchers write, employing 38% of existing robots, with adoption rates of up to 7.5 robots per thousand workers.

The electronics industry employs 15% of robots, while plastics and chemicals employ 10%. Employees in these industries saw the most negative effects, and the researchers also estimate negative effects for workers in construction, retail, and personal services. While the automotive industry adopted robots at a quicker pace and to a greater degree than other sectors, that industry did not drive the study's results: the impact of robots was consistent when it was taken out of the equation, the researchers write.


Robots are most likely to affect routine manual occupations and lower- and middle-class workers, particularly blue-collar workers such as machinists, assemblers, material handlers, and welders, Acemoglu and Restrepo write. Both men and women are affected by robot adoption, though men slightly more so. For men, impacts are seen most in manufacturing jobs; for women, in non-manufacturing jobs.

Robots negatively affect workers at all education levels, though workers without college degrees are affected far more than those with a college degree or more. The researchers also found that robot adoption does not have a positive effect on workers with master's or other advanced degrees, which could indicate that, unlike other technologies, industrial robots do not directly complement high-skill workers.

Some parts of the United States saw relatively little robot adoption, while other states, including Kentucky, Louisiana, Missouri, Texas, and Virginia, adopted robots on the order of two to five per thousand workers. In some parts of Texas, that number rises to five to 10 per thousand workers, the researchers found. Detroit was the commuting zone with the highest exposure to robots.

Overall, robots have a mixed effect: replacing jobs that relatively high-wage manufacturing employees used to perform, while also making firms more efficient and more productive, Acemoglu said. Some areas are most affected by the mixed impact of robots. “In the U.S., especially in the industrial heartland, we find that the displacement effect is large,” he said. “When those jobs disappear, those workers go and take other jobs from lower wage workers. It has a negative effect, and demand goes down for some of the retail jobs and other service jobs.”

Acemoglu and Restrepo emphasize that looking at the future effect of robots includes a great deal of uncertainty, and it is possible the impact on employment and wages could change when robots become more widespread. Industries adopting more robots over the last few decades could have experienced other factors, like declining demand or international competition, and commuting zones could be affected by other negative shocks.


But the researchers said their paper is the first step in exploring the implications of automation, which will become increasingly widespread. There are relatively few robots in the U.S. economy today and the economic impacts could be just beginning.   

Robotic technology is expected to keep expanding, with an aggressive scenario predicting that robots will quadruple worldwide by 2025. This would mean 5.25 more robots per thousand workers in the U.S., and by the researchers’ estimate, a 1 percentage point lower employment-to-population ratio, and 2% lower wage growth between 2015 and 2025. In a more conservative scenario, the stock of robots could increase slightly less than threefold, leading to a 0.6 percentage point decline in the employment-to-population ratio and 1% lower wage growth.
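Those scenario figures follow from scaling the national per-robot estimates by the projected increase in robot density. A rough sketch, again assuming linear extrapolation (the constants come from the article; the scaling itself is a simplification of the researchers' projections):

```python
# Rough, linear extrapolation of the national per-robot estimates to the
# two adoption scenarios described above. The published projections are
# the researchers'; treating them as purely linear is our simplification.

RATIO_PP_PER_ROBOT = 0.20   # employment-to-population drop (pp) per robot/1,000 workers
WAGE_PCT_PER_ROBOT = 0.42   # wage-growth drag (%) per robot/1,000 workers

def scenario(extra_robots_per_thousand):
    return (extra_robots_per_thousand * RATIO_PP_PER_ROBOT,
            extra_robots_per_thousand * WAGE_PCT_PER_ROBOT)

aggressive = scenario(5.25)   # world stock quadruples: +5.25 robots per 1,000 U.S. workers
conservative = scenario(3.0)  # slightly under threefold growth (approximated as 3.0 here)

print(f"aggressive:   -{aggressive[0]:.2f} pp employment ratio, -{aggressive[1]:.2f}% wage growth")
print(f"conservative: -{conservative[0]:.2f} pp employment ratio, -{conservative[1]:.2f}% wage growth")

# Aggressive: about 1.05 pp and 2.2%, in line with the article's ~1 pp and ~2%.
# Conservative: about 0.6 pp, matching the article; the wage figure here
# (~1.3%) overshoots the reported 1%, a reminder that the published
# estimates are not a pure linear scaling of the per-robot coefficients.
```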

The economic crisis spurred by the COVID-19 pandemic will further exacerbate the good and bad impacts of robots and technology, Acemoglu said. “The good because we are really dependent on digital technologies. If we didn't have these advanced digital technologies, we wouldn't be able to use Zoom or other things for teaching and teleconferencing. We would not be able to keep factories going in many areas because workers haven't fully gotten back to work,” he said. “But at the same time, by the same token, this increases the demand for automation. If the automation process was going too far or had some negative effects, as we find, then those are going to get multiplied as well. So we need to take those into account.”

Read "Robots and Jobs: Evidence from U.S. Labor Markets" 


Life Kit

The physical sensations of watching a total solar eclipse


Regina G. Barber

Science writer David Baron witnesses his first total solar eclipse in Aruba, 1998. He says seeing one is "like you've left the solar system and are looking back from some other world." (Photo: Paul Myers)

David Baron can pinpoint the first time he got addicted to chasing total solar eclipses, when the moon completely covers up the sun. It was 1998 and he was on the Caribbean island of Aruba. "It changed my life. It was the most spectacular thing I'd ever seen," he says.

Baron, author of the 2017 book American Eclipse: A Nation's Epic Race to Catch the Shadow of the Moon and Win the Glory of the World , wants others to witness its majesty too. On April 8, millions of people across North America will get that chance — a total solar eclipse will appear in the sky. Baron promises it will be a surreal, otherworldly experience. "It's like you've left the solar system and are looking back from some other world."

Baron, who is a former NPR science reporter, talks to Life Kit about what to expect when viewing a total solar eclipse, including the sensations you may feel and the strange lighting effects in the sky. This interview has been edited for length and clarity.

Baron views the beginning of a solar eclipse with friends in Western Australia in 2023. Baron says getting to see the solar corona during a total eclipse is "the most dazzling sight in the heavens." (Photographs by David Baron; Bronson Arcuri, Kara Frame, CJ Riculan/NPR; collage by Becky Harlan/NPR)

What does it feel like to experience a total solar eclipse — those few precious minutes when the moon completely covers up the sun?

It is beautiful and absolutely magnificent. It comes on all of a sudden. As soon as the moon blocks the last rays of the sun, you're plunged into this weird twilight in the middle of the day. You look up and the blue sky has been torn away. On any given day, the blue sky overhead acts as a screen that keeps us from seeing what's in space. And suddenly that's gone. So you can look into the middle of the solar system and see the sun and the planets together.

Can you tell me about the sounds and the emotions you're feeling?

A total solar eclipse is so much more than something you just see with your eyes. It's something you experience with your whole body. [With the drop in sunlight], birds will be going crazy. Crickets may be chirping. If you're around other people, they're going to be screaming and crying [with all their emotions from seeing the eclipse]. The air temperature drops because the sunlight suddenly turns off. And you're immersed in the moon's shadow. It doesn't feel real.


In your 2017 Ted Talk , you said you felt like your eyesight was failing in the moments before totality. Can you go into that a little more?

The lighting effects are very weird. Before you get to the total eclipse, you have a progressive partial eclipse as the moon slowly covers the sun. So over the course of an hour [or so], the sunlight will be very slowly dimming. It's as if you're in a room in a house and someone is very slowly turning down the dimmer switch. For most of that time your eyes are adjusting and you don't notice it. But then there's a point at which the light's getting so dim that your eyes can't adjust, and weird things happen. Your eyes are less able to see color. It's as if the landscape is losing its color. Also there's an effect where the shadows get very strange.

Crescent-shaped shadows cast by the solar eclipse before it reaches totality appear on a board at an eclipse-viewing event in Antelope, Ore., 2017. (Kara Frame and CJ Riculan/NPR)

You see these crescents on the ground.

There are two things that happen. One is if you look under a tree, the spaces between leaves or branches will act as pinhole projectors. So you'll see tiny little crescents everywhere. But there's another effect. As the sun goes from this big orb in the sky to something much smaller, shadows grow sharper. As you're nearing the total eclipse, if you have the sun behind you and you look at your shadow on the ground, you might see individual hairs on your head. It's just very odd.

Some people might say that seeing the partial eclipse is just as good. They don't need to go to the path of totality.

A partial solar eclipse is a very interesting experience. If you're in an area where you see a deep partial eclipse, the sun will become a crescent like the moon. You can only look at it with eye protection. Don't look at it with the naked eye . The light can get eerie. It's fun, but it is not a thousandth as good as a total eclipse.

A total eclipse is a fundamentally different experience, because it's only when the moon completely blocks the sun that you can actually take off the eclipse glasses and look with the naked eye at the sun.

And you will see a sun you've never seen before. That bright surface is gone. What you're actually looking at is the sun's outer atmosphere, the solar corona. It's the most dazzling sight in the heavens. It's this beautiful textured thing. It looks sort of like a wreath or a crown made out of tinsel or strands of silk. It shimmers in space. The shape is constantly changing. And you will only see that if you're in the path of the total eclipse.


So looking at a partial eclipse is not the same?

It is not at all the same. Drive those few miles. Get into the path of totality.

This is really your chance to see a total eclipse. The next one isn't happening across the U.S. for another 20 years.

The next significant total solar eclipse in the United States won't be until 2045. That one will go from California to Florida and will cross my home state of Colorado. I've got it on my calendar.

The digital story was written by Malaka Gharib and edited by Sylvie Douglis and Meghan Keane. The visual editor is Beck Harlan. We'd love to hear from you. Leave us a voicemail at 202-216-9823, or email us at [email protected].

Listen to Life Kit on Apple Podcasts and Spotify , and sign up for our newsletter .

NPR will be sharing highlights here from across the NPR Network throughout the day Monday if you're unable to get out and see it in real time.

Correction April 3, 2024

In a previous audio version of this story, we made reference to an upcoming 2025 total solar eclipse. The solar eclipse in question will take place in 2045.


Guest Essay

When I Became a Birder, Almost Everything Else Fell Into Place

An illustration showing a birder standing quietly looking through binoculars in four scenes. In the third scene, he says, “Amazing.”

Mr. Yong is a science writer whose most recent book, “An Immense World,” investigates animal perception.

Last September, I drove to a protected wetland near my home in Oakland, Calif., walked to the end of a pier and started looking at birds. Throughout the summer, I was breaking in my first pair of binoculars, a Sibley field guide and the Merlin song-identification app, but always while hiking or walking the dog. On that pier, for the first time, I had gone somewhere solely to watch birds.

In some birding circles, people say that anyone who looks at birds is a birder — a kind, inclusive sentiment that overlooks the forces that create and shape subcultures. Anyone can dance, but not everyone would identify as a dancer, because the term suggests, if not skill, then at least effort and intent. Similarly, I’ve cared about birds and other animals for my entire life, and I’ve written about them throughout my two decades as a science writer, but I mark the moment when I specifically chose to devote time and energy to them as the moment I became a birder.

Since then, my birder derangement syndrome has progressed at an alarming pace. Seven months ago, I was still seeing very common birds for the first time. Since then, I’ve seen 452 species, including 337 in the United States, and 307 this year alone. I can reliably identify a few dozen species by ear. I can tell apart greater and lesser yellowlegs, house and purple finches, Cooper’s and sharp-shinned hawks. (Don’t talk to me about gulls; I’m working on the gulls.) I keep abreast of eBird’s rare bird alerts and have spent many days — some glorious, others frustrating — looking for said rare birds. I know what it means to dip, to twitch, to pish. I’ve gone owling.

I didn’t start from scratch. A career spent writing about nature gave me enough avian biology and taxonomy to roughly know the habitats and silhouettes of the major groups. Journalism taught me how to familiarize myself with unfamiliar territory very quickly. I crowdsourced tips on the social media platform Bluesky. I went out with experienced birders to learn how they move through a landscape and what cues they attend to.

I studied up on birds that are famously difficult to identify so that when I first saw them in the field, I had an inkling of what they were without having to check a field guide. I used the many tools now available to novices: EBird shows where other birders go and reveals how different species navigate space and time; Merlin is best known as an identification app but is secretly an incredible encyclopedia; Birding Quiz lets you practice identifying species based on fleeting glances at bad angles.

This all sounds rather extra, and birding is often defined by its excesses. At its worst, it becomes an empty process of collection that turns living things into abstract numbers on meaningless lists. But even that style of birding is harder without knowledge. To find the birds, you have to know them. And in the process of knowing them, much else falls into place.

Birding has tripled the time I spend outdoors. It has pushed me to explore Oakland in ways I never would have: Amazing hot spots lurk within industrial areas, sewage treatment plants and random residential parks. It has proved more meditative than meditation. While birding, I seem impervious to heat, cold, hunger and thirst. My senses focus resolutely on the present, and the usual hubbub in my head becomes quiet. When I spot a species for the first time — a lifer — I course with adrenaline while being utterly serene.

I also feel a much deeper connection to the natural world, which I have long written about but always remained slightly distant from. I knew that the loggerhead shrike — a small but ferocious songbird — impales the bodies of its prey on spikes. I’ve now seen one doing that with my own eyes. I know where to find the shrikes and what they sound like. Countless fragments of unrooted trivia that rattled around my brain are now grounded in place, time and experience.

When I step out my door in the morning, I take an aural census of the neighborhood, tuning in to the chatter of creatures that were always there and that I might have previously overlooked. The passing of the seasons feels more granular, marked by the arrival and disappearance of particular species instead of much slower changes in day length, temperature and greenery. I find myself noticing small shifts in the weather and small differences in habitat. I think about the tides.

So much more of the natural world feels close and accessible now. When I started birding, I remember thinking that I’d never see most of the species in my field guide. Sure, backyard birds like robins and western bluebirds would be easy, but not black skimmers or peregrine falcons or loggerhead shrikes. I had internalized the idea of nature as distant and remote — the province of nature documentaries and far-flung vacations. But in the past six months, I’ve seen soaring golden eagles, heard duetting great horned owls, watched dancing sandhill cranes and marveled at diving Pacific loons, all within an hour of my house. “I’ll never see that” has turned into “Where can I find that?”

Of course, having the time to bird is an immense privilege. As a freelancer, I have total control over my hours and my ability to get out in the field. “Are you a retiree?” a fellow birder recently asked me. “You’re birding like a retiree.” I laughed, but the comment spoke to the idea that things like birding are what you do when you’re not working, not being productive.

I reject that. These recent years have taught me that I’m less when I’m not actively looking after myself, that I have value to my world and my community beyond ceaseless production and that pursuits like birding that foster joy, wonder and connection to place are not sidebars to a fulfilled life but their essence.

It’s easy to think of birding as an escape from reality. Instead, I see it as immersion in the true reality. I don’t need to know who the main characters are on social media and what everyone is saying about them, when I can instead spend an hour trying to find a rare sparrow. It’s very clear to me which of those two activities is the more ridiculous. It’s not the one with the sparrow.

More of those sparrows are imminent. I’m about to witness my first spring migration as warblers and other delights pass through the Bay Area. Birds I’ve seen only in drab grays are about to don their spectacular breeding plumages. Familiar species are about to burst out in new tunes that I’ll have to learn. I have my first lazuli bunting to see, my first blue grosbeak to find, my first least terns to photograph. I can’t wait.


