The very idea of a rule against hurting humans implies that a robot knows:
1. What hurting means
   Is it pain? Death? Financial impact? What about indirect effects? If I help human 1 build a better mousetrap, I am indirectly harming some other human's way of life.
2. What people are
3. Where they are
These are highly nontrivial problems. In fact, they are unsolvable to any degree of certainty. They only make sense in a *science fiction* book in which a highly talented author is telling you a story. In the real world, they are meaningless because of their computational intractability.
In the real world, we use codes of ethics and morality. Such codes recognize that there are no absolutes, and that sometimes a decision that will ultimately cause harm to someone is inevitable.
So can we please stop with these damned laws already?