My personal take is that the current generation of generative models has peaked, for the reasons stated in the video (diminishing returns). This generation will still be useful, but progress-wise it'll be a dead end.
In the future, however, I believe models with a different architecture will cause a breakthrough, performing better with less training. And probably with lower energy requirements, too.