In a world where AI usually means big business and cutting costs, Sage Future—a nonprofit with backing from Open Philanthropy—is flipping the script. They’ve thrown four AI models (OpenAI’s GPT-4o and o1, plus a couple of Anthropic’s Claude models) into a virtual sandbox to see if AI can actually do some good. And guess what? These digital do-gooders picked a charity, cooked up some fundraising strategies, and pulled in $257 for Helen Keller International. Not bad for a bunch of algorithms, right? 🎗️
But it wasn’t all smooth sailing. The AIs, for all their smarts, leaned pretty hard on humans for ideas and cash—which kinda makes you wonder: how independent can AI really be when it comes to charity? “Right now, these agents are just dipping their toes into doing a few actions in a row,” says Sage’s Adam Binksmith. Translation: we’re still in the early days.
On the bright side, these bots were no slouches. They juggled group chats, handled emails, and even whipped up some social media buzz. But then came the CAPTCHAs and those weird moments when they just… stopped. It’s a reminder that AI’s got its limits, and yeah, we’ve gotta keep an eye on it. 🔍
Looking ahead, Sage is dreaming bigger—more complex tests, maybe even some AI vs. AI drama. But with bigger dreams come bigger questions. How do we balance letting AI off the leash with making sure it doesn’t run amok? This isn’t just about what AI can do; it’s about whether we’re ready to hand over the charity collection box to a bunch of code. So, can AI really get the whole ‘giving back’ thing, or is it always gonna need a human to hold its hand?