I guess Altman thought "The AI race comes first. If OpenAI loses the race, there'll be nothing left to keep safe." But OpenAI is rich. They can afford to devote a portion of their resources to safety research.
What if he thinks that AI improvement won't be exponential? What if he thinks it'll be slow enough that OpenAI can start focusing on AI safety once superintelligence is visible on the horizon? That focusing on safety now is premature? That would certainly be a difference of opinion with Sutskever and Leike.
I think AI safety is key. I won't be surprised if Sutskever and Leike go to Google or Anthropic.
I was curious whether Google and Anthropic have AI safety initiatives. I did a quick search and saw this –
For Anthropic, my quick search turned up nothing.