The humans in Wall-E are shown to be mindless consumers.
By contrast, several of the robots have personalities and do things because they want to, not because they’re programmed to.
It does make a certain amount of sense (and is of course convenient for all the biological humans involved) to treat robots behaving robotically as inhuman tools, but to view the robots that display human communication and individual choice as peers. It would be difficult to implement, though. When exactly does a robot pass from robotic to human? How would you test for human qualities?
I think that by treating Auto as the story's villain, though, the film sidesteps human responsibility for the situation. We might intellectually understand that it's really President Shelby Forthright's fault for ordering Auto never to return, but the emotional victory is won against Auto, and it's never directly tied back to the President.
Auto serves as a kind of scapegoat for the whole immense process of abusing and nearly destroying Earth, severing life from its connections to the planet, and then putting it all out of mind. But all the evidence tells us that Auto could not have been responsible for any of these actions, much less chosen to perpetrate them. He was a simple robot, just following orders.
The willingness of the remaining humans to push all that responsibility onto Auto is concerning – not because it's unfair to Auto, who after all seems to be a non-sentient machine, but because the people so blatantly abdicate responsibility rather than truly reckoning with their complicity.