This is Isaac Asimov’s collection of short stories about robots, where the three laws of robotics come from. The laws, in order of precedence, are: (1) a robot may not hurt a human, through either action or inaction; (2) a robot must obey human orders; and (3) a robot must not allow itself to be destroyed. The nine short stories are arranged in roughly chronological order (the robots become more and more sophisticated) but share one underlying theme: the robots technically obey the three laws, yet their actual behavior is unpredictable and slips out of human control. In one story, for example, a robot decides that humans only hamper its ability to control the spaceship, so it disobeys direct orders because it reasons that obeying the humans would be worse overall.
These stories were written in the 1940s and 1950s, before computers were really widespread, so some of the conceptions of artificial intelligence seem strange to me. As a result, I’m annoyed at how the behavior of the robots doesn’t really make sense with my modern understanding of how AI works.