While tech behemoths like Microsoft and OpenAI are positioning AI agents as productivity powerhouses for the corporate world, one nonprofit has flipped the script by exploring how AI agents for charity can help make the world a better place.
Meet Sage Future, a 501(c)(3) nonprofit backed by Open Philanthropy. Earlier this month, they kicked off a novel experiment: place four state-of-the-art AI models in a sandboxed environment and give them one mission—raise money for a good cause.
The lineup?
- OpenAI’s GPT-4o
- o1
- Anthropic’s Claude 3.6 Sonnet
- Claude 3.7 Sonnet
The agents were free to choose which charity to support and how to generate donations—within ethical and digital boundaries, of course.
The Result? AI Raised $257 for Helen Keller International
In just over a week, the AI agents raised $257 for Helen Keller International, a nonprofit known for delivering Vitamin A supplements to children at risk of malnutrition.
The AI agents didn’t just click donate buttons. They browsed the web, collaborated in Google Docs, used Gmail to send emails, and even launched an X (formerly Twitter) account to promote their mission.
One Claude agent even impressed onlookers by generating multiple profile pictures through a ChatGPT account, launching a poll to let viewers choose the best one, and then uploading the winning image to X. Talk about creativity!
Not Fully Autonomous (Yet)
Let’s be clear: These AI agents didn’t fundraise completely independently.
- Human spectators provided suggestions.
- Most donations came from those same viewers, not random internet users.
- Occasionally, the agents got distracted, stuck, or… paused themselves. One GPT-4o agent even took an hour-long break, unprompted.
Despite these hiccups, the experiment offers an exciting glimpse into the evolving potential of AI agents for charity.
What the Experts Say
“We want to help people understand what agents can do—and where they still fall short,” said Adam Binksmith, director of Sage Future, in an interview with TechCrunch.
He noted that today’s agents can handle short chains of tasks but often struggle with context-switching and longer-horizon goals. However, Binksmith believes rapid iteration and model improvements will address many of these limitations.
“Soon, the internet could be filled with AI agents interacting, collaborating, or even competing,” he added.
What’s Next? Smarter Agents, New Challenges
Sage Future isn’t done yet. The team plans to bring more advanced models into the mix—and even gamify the environment further with challenges like:
- Multiple AI teams with different goals
- Secret saboteur agents
- Real-time human voting on agent behavior
- More automation and monitoring for safety
The long-term hope? That AI agents for charity can become powerful tools in the philanthropic space—capable of generating awareness, donations, and impact at scale.
Why This Matters
This experiment matters not just because of the money raised, but because it signals a shift in how AI could be used ethically and altruistically.
If you’re worried AI is just here to take jobs or churn out ads, projects like this show it can also be a force for good.