Asimov altered the rules in later stories to improve them, but they were never infallible.
“A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
Something smart enough to understand what a human is, what harm is, and how to prevent it is unlikely to be controllable at a low enough level for the rule to hold.
They're a fun device to write stories around, though.
I don’t know about blasphemy. The whole point of the three laws (in-universe) was that they were only the best human roboticists could do; all of the stories where they’re relevant are about how humans and robots ‘break’ them all the time.