Automating humans

Most jobs are still out of reach of robots, which lack the dexterity required on an assembly line or the social grace needed on a customer service call. But in some cases, the humans doing this work are themselves being automated as if they were machines.

What’s happening: Even the most vigilant supervisor can only watch over a few workers at one time. But now, increasingly cheap AI systems can monitor every employee in a store, at a call center or on a factory floor, flagging their failures in real time and learning from their triumphs to optimize an entire workforce.

  • A network of surveillance cameras hooked up to special software can tally the seconds of each worker’s bathroom break or time each step of their work.
  • It can also keep workers safe, automatically detecting the absence of hard hats and gloves, for example, or people straying into the path of dangerous machines.
  • In some call centers, AI listens in on every conversation, cataloging every word, who said it and how, and then scoring each agent.

Why it matters: Companies can use this data to juice workers’ productivity and efficiency. Eventually, they could gather enough data from humans to train machines to mimic them.

“How often is an employee going out to smoke a cigarette? How long a lunch are they taking? How long are they sitting in the lunchroom?” These are the questions clients want answered with AI software, says Kim Hartman, CEO of Surveillance Secure, a D.C.-area company that installs security systems.

  • Hartman says his company has put in video analytics for several area retailers and restaurants that wanted to monitor their employees’ productivity.

In a handful of U.S. factories, cameras have been installed over assembly-line workers’ heads as they put together car parts or electronics.

  • Software developed by Drishti, a Silicon Valley startup, watches these assemblers work, timing each step and checking for mistakes.
  • The videos let supervisors quickly figure out where something went wrong and teach a worker how to avoid repeating an error, says Drishti CEO Prasad Akella. The footage can also be used to train new hires.
  • And since AI is constantly watching the video streams, it can extract valuable data about timing and actions across the entire assembly line, which can inform new ways of assigning work (a toy example of that kind of timing analysis follows this list).
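
The timing data Akella describes boils down to simple aggregation once a video system has emitted events. The sketch below is a toy illustration, not Drishti’s software: it assumes a hypothetical list of (station, step, start, end) detections and summarizes cycle times per step, the kind of summary that could flag a slow station or an unusually long step.

```python
# Toy illustration (not Drishti's product): summarize per-step cycle times
# from timestamped action events that a video-analytics system might emit.
# The events below are hard-coded, hypothetical detections.
from collections import defaultdict
from statistics import mean

# (station, step, start_seconds, end_seconds)
events = [
    ("station_1", "pick_part",   0.0,  4.2),
    ("station_1", "fasten_bolt", 4.2, 11.8),
    ("station_1", "pick_part",  60.0, 63.9),
    ("station_1", "fasten_bolt", 63.9, 75.5),
    ("station_2", "inspect",     5.0, 14.0),
    ("station_2", "inspect",    65.0, 72.5),
]

durations = defaultdict(list)
for station, step, start, end in events:
    durations[(station, step)].append(end - start)

# Average, fastest and slowest time for each step at each station --
# the kind of summary a supervisor might use to rebalance the line.
for (station, step), times in sorted(durations.items()):
    print(f"{station:10s} {step:12s} "
          f"avg {mean(times):5.1f}s  min {min(times):5.1f}s  "
          f"max {max(times):5.1f}s  n={len(times)}")
```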

“The most programmable machine on the planet today is still the human.”

— Drishti CEO Prasad Akella

“Employers and companies attempting to extract more value from their labor force by making that labor more efficient is nothing new,” says Jess Kutch, co-founder of Coworker.org, a nonprofit that helps workers organize. A century ago, managers used stopwatches to pursue efficiency under the banner of “scientific management,” or Taylorism.

But extreme monitoring enabled by new technologies can be inhumane, Kutch says.

  • “In low-wage work we’re seeing a lot more decisions that were made by a middle manager being outsourced to an algorithm,” says Aiha Nguyen of the research organization Data & Society.
  • “What workers are seeing, and have a fear of, is arbitrarily speeding up workplaces,” Nguyen tells Axios.

The creators of AI monitoring tools argue that their software benefits employers and employees.

  • Drishti provides workers and supervisors with valuable feedback, Akella says. Its software can call out high performers, reward efficiency-improving creativity and even keep workers from hurting themselves.
  • Akella argues that employees won’t be forced to work much faster and harder because turning up the heat would introduce unacceptable errors.
  • Call center agents monitored by AI software from CallMiner prefer being graded by an “impartial computer” over a human supervisor, says CTO Jeff Gallino.

What’s next: Extensive AI-annotated video or audio data about how people work is a potential gold mine for automation developers.

  • Robots are still too klutzy to take over assembly lines built for humans but could learn how to put together products in a machine-only environment.
  • Gallino says CallMiner could use information gathered from human agents to automate the “boring” parts of customer service calls.

Go deeper: Automated management for call centers (NYT)

 

Fake data for real AI

AI systems have an endless appetite for data. For an autonomous car’s camera to identify pedestrians every time — not just nearly every time — its software needs to have studied countless examples of people standing, walking and running near roads.

Yes, but: Gathering and labeling those images is expensive and time consuming, and in some cases impossible. (Imagine staging a huge car crash.) So companies are teaching AI systems with fake photos and videos, sometimes also generated by AI, that stand in for the real thing.

The big picture: A few weeks ago, I wrote about the synthetic realities that surround us. Here, the machines that we now rely on — or may soon — are also learning inside their own simulated worlds.

How it works: Software that has been fed tons of human-labeled photos and videos can deduce the shapes, colors and movements that correspond, say, to a pedestrian.

  • But there’s an ever-present danger that the car will come across a person in a setting unlike any it’s seen before and, disastrously, fail to recognize them.
  • That’s where synthetic data can fill the gap. Computers can generate millions of scenes that an actual car might not experience, even after a million driving hours (a bare-bones sketch of the idea follows this list).
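
Here is a bare-bones sketch of that idea, with no claim about any vendor’s pipeline: a toy “renderer” drops a bright rectangle, standing in for a pedestrian, onto a random background. Because the scene is generated rather than photographed, the ground-truth label comes for free, and producing a million examples is just a bigger loop.

```python
# Minimal sketch of synthetic data generation (hypothetical, not a real
# vendor pipeline): render simple scenes and record the ground-truth
# bounding box automatically -- no human labeling required.
import numpy as np

rng = np.random.default_rng(0)

def synthetic_scene(size=64):
    """Return (image, bounding_box) with a bright rectangle standing in for a pedestrian."""
    img = rng.uniform(0.0, 0.4, (size, size))        # dim, noisy background
    h = int(rng.integers(10, 20))                     # "pedestrian" height
    w = int(rng.integers(4, 8))                       # "pedestrian" width
    y = int(rng.integers(0, size - h))
    x = int(rng.integers(0, size - w))
    img[y:y + h, x:x + w] = rng.uniform(0.7, 1.0)     # bright figure
    return img, (x, y, w, h)                          # label produced by the renderer itself

# A thousand labeled scenes as easily as ten -- vary the loop, not the labeling crew.
dataset = [synthetic_scene() for _ in range(1000)]
images = np.stack([img for img, _ in dataset])
boxes = np.array([box for _, box in dataset])
print(images.shape, boxes.shape)   # (1000, 64, 64) (1000, 4)
```

In practice, a detector trained partly on generated scenes is still checked against real photographs; the synthetic set mainly fills in the situations the real data misses.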

What’s happening: Startups like Landing.ai, AI.Reverie, CVEDIA and ANYVERSE can create super-realistic scenes and objects for AI systems to learn from.

  • Nvidia and others make synthetic worlds for digital versions of robots to play in, where they can test changes or learn new tricks to help them navigate the real world.
  • And autonomous vehicle makers like Waymo build their own simulations to train or test their driving software.

Synthetic data is useful for any AI system that interacts with the world — not just cars.

  • In health care, made-up data can substitute for sensitive information about patients, mirroring characteristics of the population without revealing private details.
  • In manufacturing, “if you’re doing visual inspection on smartphones, you don’t have a million pictures of scratched smartphones,” says Andrew Ng, founder of Landing.ai and former AI head at Google and Baidu. “If you can get something to work with just 100 or 10 images, it breaks open a lot of new applications.”
  • In robotics, it’s helpful to imitate hard-to-find conditions. “It’s very expensive to go out and vary the lighting in the real world, and you can’t vary the lighting in an outdoor scene,” says Mike Skolones, director of simulation technology at Nvidia. But you can in a simulator.

“We’re still in the early days,” says Evan Nisselson of LDV Capital, a venture firm that invests in visual technology.

  • But, he says, synthetic data keeps getting closer to reality.
  • Generative adversarial networks — the same AI technology that drives most deepfakes — have helped vault synthetic data to new heights of realism (a minimal sketch of the adversarial setup follows).
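
To make that mention concrete, here is a minimal, hypothetical illustration of the adversarial setup in PyTorch: a generator learns to produce synthetic one-dimensional samples that a discriminator can no longer distinguish from “real” ones. Production systems generate imagery rather than single numbers, but the training game is the same.

```python
# Bare-bones GAN sketch (illustrative only): the generator G tries to fool
# the discriminator D, and D's feedback nudges G's fakes toward the real
# data distribution -- here, a simple Gaussian with mean 4.0 and std 1.5.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n=128):
    # "Real" data the generator never sees directly, only through D's gradients.
    return torch.randn(n, 1) * 1.5 + 4.0

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # sample -> realness logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(3000):
    # Discriminator: label real samples 1 and generated samples 0.
    real = real_batch()
    fake = G(torch.randn(128, 8)).detach()
    loss_d = bce(D(real), torch.ones(128, 1)) + bce(D(fake), torch.zeros(128, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: try to make the discriminator call its fakes real.
    fake = G(torch.randn(128, 8))
    loss_g = bce(D(fake), torch.ones(128, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

samples = G(torch.randn(1000, 8))
print(f"generated mean {samples.mean().item():.2f}, std {samples.std().item():.2f} (target: 4.00, 1.50)")
```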

 

The deepfake threat to evidence

As deepfakes become more convincing and people are increasingly aware of them, the realistic AI-generated videos, images and audio threaten to disrupt crucial evidence at the center of the legal system.

Why it matters: Leaning on key videos in a court case — like a smartphone recording of a police shooting, for example — could become more difficult if jurors are more suspicious of them by default, or if lawyers call them into question by raising the possibility that they are deepfakes.

What’s happening: Elected officials, experts and the press have been warning about the potential fallout of deepfakes for businesses and elections. But apart from a few high-profile examples, the tech so far has been used almost exclusively for porn, according to a landmark new report from Deeptrace Labs.

  • Plus, when President Trump and his supporters throw around accusations of “fake news” to discredit information that they don’t like, it can deepen the atmosphere of distrust.
  • All this could lead jurors or attorneys to falsely assume that a real video is faked, says Riana Pfefferkorn, associate director of surveillance and cybersecurity at Stanford’s Center for Internet and Society.

“This is dangerous in the courtroom context because the ultimate goal of the courts is to seek out truth,” says Pfefferkorn, who recently wrote an article about deepfakes in the courtroom for the Washington State Bar magazine.

  • “My fear is that the cultural worry could be weaponized to discredit [videos] and lead jurors to discount evidence that is authentic,” she tells Axios.
  • If a video’s authenticity comes into question, the burden shifts to the side that introduced it to prove it’s not fake — which can be expensive and take a long time.

Already, people accused of possessing child porn often claim that it’s computer-generated, says Hany Farid, a digital forensics expert at UC Berkeley. “I expect that in this and other realms, the rise of AI-synthesized content will increase the likelihood and efficacy of those claiming that real content is fake.”

Source: axios.com
