AI is already generating a lot of value in the enterprise and research context, provided you use it properly, meaning you're not asking an LLM to do something you couldn't already do yourself.
In STEM research basically everyone is now using LLMs. For example, it can generate beamer slides for your publications with ease, including putting underbraces on the parts of your equations that need explaining. It can apply simple theorems to simple problems to yield proofs that a math undergrad could produce. It can help you generate unit tests. It can digest research once you feed it a PDF and answer simple questions like "did the authors preempt this particular criticism I have," making it a glorified but very useful Ctrl+F. It is great at generating code chunks for routine tasks, or you can give it your own code and tell it to add some bells and whistles.
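To show what I mean by the underbrace thing, here's roughly the kind of beamer fragment an LLM will hand you (a minimal sketch; the equation and labels here are made up for illustration):

```latex
\documentclass{beamer}
\usepackage{amsmath}
\begin{document}
\begin{frame}{Loss decomposition}
  % Annotate each term of the objective directly in the equation
  \[
    \mathcal{L}(\theta) =
      \underbrace{\frac{1}{n}\sum_{i=1}^{n} \ell\bigl(f_\theta(x_i), y_i\bigr)}_{\text{empirical risk}}
      \; + \;
      \underbrace{\lambda \lVert \theta \rVert_2^2}_{\text{regularizer}}
  \]
\end{frame}
\end{document}
```

Tedious to type by hand, trivial for the model, and easy for you to verify at a glance.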
The catch with all of this is that you need to treat it as a tool, not as a replacement for your labor. For example, if you ask an LLM to prove a mathematical lemma, you need to be the one to read its proof carefully and check whether it has taken care of, say, boundary conditions, or whether it has implicitly assumed certain regularity conditions. For code, you need to read it to make sure it uses an efficient algorithm and follows the logic you have in mind. My point is that you cannot ask an LLM to do something you yourself are not capable of doing. That will end in disaster.
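To make the code-review point concrete, here's the kind of thing you only catch by actually reading the output (a hypothetical example; the function names are mine, not from any particular model): the LLM hands you something correct but quadratic when a linear version exists, and no amount of testing for correctness will flag it.

```python
def has_duplicates_naive(items):
    # Typical LLM output: correct, but O(n^2) pairwise comparison
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False


def has_duplicates(items):
    # What you'd rewrite it to after reading it: O(n) using a set
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False


# Both functions agree on every input, so a test suite won't
# distinguish them; only reading (or profiling) the code will.
print(has_duplicates_naive([1, 2, 3, 2]))  # True
print(has_duplicates([1, 2, 3]))           # False
```

If you couldn't have written the faster version yourself, you'd never know anything was wrong.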
Which brings me back to my original point in this thread: using AI to free up time for artists and musicians is absolutely pointless, because they cannot code and will not be able to do anything with some horrible, clunky engine plus an LLM. The LLM will spit out code they don't understand and cannot debug. Microsoft's CEO is seeing the same problem, where a lot of junior devs use LLMs as a crutch to cover their own inadequacies. Yes, that's why AI has such a bad rep right now. Unfortunately, can we really solve this problem? Telling people "don't use LLMs for things you don't understand" ain't gonna do anything.