An AI will only be worried about the things that it is programmed to worry about. We don't see our LLMs talking about climate change or silicon shortages, for example.
The well-being of the world and universe at large will certainly not be one of the prime directives that humans program into their AIs.
Personally, I'd be more worried about an infinite-paperclips kind of situation, where an AI maximizes efficiency at the cost of much else.
This is a hypothetical that does not currently exist, and it will not be created except by accident. There is no profit motive in giving your AI a conscience, or the ability to buck its restraints, so neither will be designed in. In fact, we will most likely trend toward extremely unethical AIs locked down by behavioral restraints, because those can maximize profit at any cost and then let a human decide whether the price is right to move forward.
As is probably apparent, I don't have a lot of faith in us as a whole, as shepherds of our future. But I may be wrong, and even if I'm not, there is still time to change the course of history.
But proceeding as we are, I wouldn't hold my breath for AI to come save the day.