I have no doubt that a crafty sentient AI hellbent on destroying humanity could do so. But let's look at the first part of the argument: should we reason about AI as though it has agency and preferences? The authors make a subtle argument in Chapter 3 that while AI doesn't have its own wants and desires, we can reason about it as though it does. In the chapters that follow, the authors go all in, treating ASI as though it has preferences and acts in its own self-interest.
I think of computing as a Turing machine: a device that follows a set of simple instructions, interacting with input and memory, and producing some output. The machine does not have wants or desires; all it does is follow its instructions.
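To make that concrete, here is a minimal sketch (my own illustration, not anything from the book) of a Turing machine's step loop in Python: look up the current state and symbol in a transition table, write, move the head, repeat. Nothing in the loop encodes a want or a goal.

```python
# A minimal Turing machine simulator: a transition table, a tape, a head.
# The machine just looks up (state, symbol) and does what the table says.

def run(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Run a machine given as a dict:
    (state, symbol) -> (new_state, symbol_to_write, move in {-1, 0, +1})."""
    tape = dict(enumerate(tape))  # sparse tape indexed by integer position
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, tape[head], move = transitions[(state, symbol)]
        head += move
    # Read back the visited portion of the tape.
    return "".join(tape[i] for i in sorted(tape))

# Example machine: flip every bit of the input, then halt at the first blank.
flipper = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(run(flipper, "0110"))  # prints "1001_"
```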
But we also know the immense complexity that can arise from such simplicity. Rice's Theorem tells us that, in general, we can't decide any non-trivial property of a Turing machine's behavior from its code. And there's a reason we can't prove P ≠ NP, or even sketch a viable approach: we have no idea how to bound what efficient algorithms can do. But we shouldn't mistake complexity, or our inability to analyze the algorithm, for evidence of agency and desires. Even if AI seems to exhibit goal-oriented behavior, that behavior is a product of its training, not evidence of independent agency.
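One standard illustration of complexity from simplicity (my example, not the authors'): the Collatz iteration is a three-line loop, yet whether it halts on every starting value remains a famous open problem. Simple instructions, behavior we can't pin down, and still no agency anywhere in sight.

```python
# The Collatz map: a tiny loop whose termination on all inputs is an open
# problem. Simple code, globally unpredictable behavior.

def collatz_steps(n: int) -> int:
    """Count iterations until n reaches 1 (conjectured, not proven, to halt)."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # 111 steps for a two-digit starting value
```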
I worry less about AI developing its own hostile agency than about how humans will wield it, whether through malicious misuse or misplaced trust. These are serious risks, but they're the kinds of risks we can work to mitigate while continuing to develop transformative technology. The "everyone dies" framing isn't just fatalistic; it's premised on treating computational systems as agents, which substitutes metaphor for mechanism.