Like most programmers, I've always been interested in AI, or Artificial Intelligence. Countless books and movies have been based on machines that gain the intelligence of humans. Usually, the plot involves the machines turning against their creators.
But right now, AI is being put to other uses: machine vision, language translation, even managing restaurants.
The above tasks are accomplished quite well by humans, but we do err on occasion. To err is human; we all accept that (to some degree). But what happens when machines start to make the same mistakes? Will we accept it? I'm not sure that we will. I think we will demand machines that don't make the same mistakes we do. What we really want is Super Artificial Intelligence: machines that are superior to us.
That doesn't bode well for the day it's Humanity vs. The Machine.