OpenAI’s latest safety report is all about taking it slow with AI development, or so they say. But according to ex-researcher Miles Brundage, that framing glosses over some not-so-great past decisions. Remember when GPT-2’s 2019 release was rolled out in stages? Brundage spills the tea: that staged rollout was the plan all along, not some groundbreaking safety epiphany the company only arrived at later. And let’s not ignore the elephant in the room: OpenAI’s ‘wait until it’s provably dangerous’ mindset is a bit like waiting for the Jenga tower to wobble before admitting it might fall. 😉
Here’s the kicker: AI misinformation isn’t some distant nightmare; it’s already here, and it’s wild (Google’s AI Overviews telling people to eat rocks is just the tip of the iceberg). As for OpenAI’s transparency? Let’s just say it’s more ‘missing in action’ than ‘mission accomplished.’ Critics are calling the company out for prioritizing flashy product launches over actual safety work. The race to AGI is on, but honestly, are we ready to pay the price?