www.theverge.com/2018/7/3/17530232/self-driving-ai-winter-full-autonomy-waymo-tesla-uber
Top Highlights
On its face, full autonomy seems closer than ever. Waymo is already testing cars on limited-but-public roads in Arizona. Tesla and a host of other imitators already sell a limited form of Autopilot, counting on drivers to intervene if anything unexpected happens. There have been a few crashes, some deadly, but as long as the systems keep improving, the logic goes, we can’t be that far from not having to intervene at all.
As self-trained systems grapple with the chaos of the real world, experts like NYU’s Gary Marcus are bracing for a painful recalibration in expectations, a correction sometimes called “AI winter.”
deep learning — a method that uses layered machine-learning algorithms to extract structured information from massive data sets
But deep learning requires massive amounts of training data to work properly, incorporating nearly every scenario the algorithm will encounter.
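(Not from the article: a minimal sketch of what “layered machine-learning algorithms” trained on labeled data can look like, assuming a toy numpy setup; the data, layer sizes, and labeling rule are all invented for illustration.)

```python
# Illustrative sketch only: a tiny two-layer ("deep") classifier trained on
# labeled examples. The stacked layers are what the article calls deep
# learning; its accuracy depends entirely on how much of the input space the
# labeled training data covers.
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled data: 4-dimensional "features" for two classes (0 and 1).
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # hypothetical labeling rule

# Two weight matrices -- the "layers" in layered machine learning.
W1 = rng.normal(scale=0.1, size=(4, 16))
W2 = rng.normal(scale=0.1, size=(16, 2))

def forward(X):
    h = np.maximum(X @ W1, 0)                    # hidden layer (ReLU)
    logits = h @ W2                              # output layer
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, p / p.sum(axis=1, keepdims=True)   # softmax probabilities

lr = 0.1
for step in range(500):                          # gradient-descent training loop
    h, p = forward(X)
    grad_logits = p.copy()
    grad_logits[np.arange(len(y)), y] -= 1       # d(cross-entropy)/d(logits)
    grad_logits /= len(y)
    grad_W2 = h.T @ grad_logits
    grad_h = (grad_logits @ W2.T) * (h > 0)      # backprop through the ReLU
    grad_W1 = X.T @ grad_h
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

_, p = forward(X)
print("training accuracy:", (p.argmax(axis=1) == y).mean())
```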
Marcus describes this kind of task as “interpolation,” taking a survey of all the images labeled “ocelot” and deciding whether the new picture belongs in the group.
but it places a hard limit on how far a given algorithm can reach. The same algorithm can’t recognize an ocelot unless it’s seen thousands of pictures of an ocelot — even if it’s seen pictures of housecats and jaguars, and knows ocelots are somewhere in between. That process, called “generalization,” requires a different set of skills.
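(Not from the article: a rough sketch of the interpolation limit Marcus describes, using made-up feature vectors; a nearest-neighbor classifier stands in for the “survey of labeled images” and can only answer with labels it has already seen.)

```python
# Illustrative sketch: a nearest-neighbor classifier over made-up 2-D
# "features". It interpolates among labels it has seen, but it can never
# produce a label ("ocelot") that was absent from its training data.
import numpy as np

train_features = np.array([
    [1.0, 1.0], [1.2, 0.9],   # hypothetical housecat examples
    [5.0, 5.0], [4.8, 5.2],   # hypothetical jaguar examples
])
train_labels = ["housecat", "housecat", "jaguar", "jaguar"]

def classify(x):
    # Pick the label of the closest training example -- pure interpolation.
    distances = np.linalg.norm(train_features - x, axis=1)
    return train_labels[int(distances.argmin())]

# An "ocelot" plausibly sits between housecats and jaguars in feature space,
# but the classifier has no such label to give, so it answers with one it has.
ocelot_like = np.array([3.0, 3.0])
print(classify(ocelot_like))   # prints "housecat" or "jaguar", never "ocelot"
```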
but recent research has shown that conventional deep learning is even worse at generalizing than we thought
When you’re talking to a person online, you don’t just want them to rehash earlier conversations. You want them to respond to what you’re saying, drawing on broader conversational skills to produce a response that’s unique to you
Is autonomy an interpolation problem or a generalization problem?
Or will they run into the generalization problem like chat bots?
We’ve never been able to automate driving at this level before, so we don’t know what kind of task it is
To the extent that it’s about identifying familiar objects and following rules, existing technologies should be up to the task.
But Marcus worries that driving well in accident-prone scenarios may be more complicated than the industry wants to admit. “To the extent that surprising new things happen, it’s not a good thing for deep learning.”
According to the NTSB report, Uber’s software misidentified the woman as an unknown object, then a vehicle, then finally as a bicycle, updating its projections each time.
Each accident seems like an edge case, the kind of thing engineers couldn’t be expected to predict in advance. But nearly every car accident involves some sort of unforeseen circumstance, and without the power to generalize, self-driving cars will have to confront each of these scenarios as if for the first time.
The result would be a string of fluke-y accidents that don’t get less common or less dangerous as time goes on.
Andrew Ng argues the problem is less about building a perfect driving system than training bystanders to anticipate self-driving behavior. In other words, we can make roads safe for the cars instead of the other way around.
“I think many AV teams could handle a pogo stick user in pedestrian crosswalk,” Ng told me. “Having said that, bouncing on a pogo stick in the middle of a highway would be really dangerous.”
“Rather than building AI to solve the pogo stick problem, we should partner with the government to ask people to be lawful and considerate,” he said. “Safety isn’t just about the quality of the AI technology.”
many companies have shifted to rule-based AI, an older technique that lets engineers hard-code specific behaviors or logic into an otherwise self-directed system.
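(Not from the article: one hypothetical way to read “hard-code specific behaviors or logic into an otherwise self-directed system” — a hand-written rule layer that can veto whatever a learned model proposes. The function names and thresholds below are invented, not any company’s actual stack.)

```python
# Hypothetical sketch of rule-based overrides wrapped around a learned policy:
# the learned model proposes an action, then hard-coded rules override it when
# explicitly coded conditions hold. Names and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Perception:
    object_ahead: bool            # anything detected in the vehicle's path
    distance_m: float             # range to the nearest object in the path
    classifier_confidence: float  # learned model's confidence in its label

def learned_policy(p: Perception) -> str:
    # Stand-in for a trained model's proposed action.
    return "brake" if p.object_ahead and p.classifier_confidence > 0.8 else "maintain_speed"

def rule_layer(p: Perception, proposed: str) -> str:
    # Hard-coded rule: brake for *any* object closer than 20 m,
    # whether or not the learned model recognizes or trusts it.
    if p.object_ahead and p.distance_m < 20.0:
        return "brake"
    return proposed

scene = Perception(object_ahead=True, distance_m=12.0, classifier_confidence=0.3)
print(rule_layer(scene, learned_policy(scene)))   # "brake": the rule overrides the model
```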