Intentional intelligence: AI for good grantmakers

Dec 1, 2025 | Article

Article written by Dan Whitty, Senior Information Security Manager at Good Grants.

AI capabilities are advancing at an astonishing pace, leaving grantmakers divided: some eager to experiment, others hesitant to engage at all. Yet faster tools don’t automatically create better outcomes, especially in the context of responsible grantmaking.

A helpful parallel comes from Greek mythology:

When the Trojan Horse was welcomed into Troy, the danger wasn’t the wooden structure itself. The trouble lay in the assumptions the city made about what it contained and what it was permitted to do.

That’s the real lesson: The horse was just a vessel. The failure came from allowing it inside without scrutiny or boundaries.

Modern AI poses a similar challenge. It’s neither inherently beneficial nor harmful; it’s simply powerful. And in the world of grants management, the smartest approach is clear: intelligence shouldn’t be automatic; it should be intentional.

At Good Grants, our AI features are built around that principle. They’re not switched on by default. They don’t train on shared client data. And they never operate without transparency or oversight.

Grantmaking platforms steward uniquely sensitive, human-centred information, such as:

  • Applicant stories and funding rationales
  • Personal and organisational details
  • Reviewer comments and scoring
  • Funding recommendations
  • Committee notes and deliberations

This isn’t generic operational data—it’s the foundation of fair, trusted grant decisions.

If AI were enabled everywhere by default, organisations couldn’t confidently assure reviewers, applicants or stakeholders where data was processed or how it influenced decisions. In grantmaking, that uncertainty—not AI itself—is the real risk.

How to use AI safely in a grantmaking cycle

These practical, non-technical steps apply to any grants program introducing AI support:

1. Activate AI only where it serves the program’s purpose
Enable AI for summaries, theme extraction, consistency checks and constructive feedback, not for decision-making.

2. Maintain human decision authority
AI can help interpret, but reviewers, panels and staff remain accountable for outcomes.

3. Remove identifiable information before analysis
Protect applicants by masking personal or organisational data where possible.
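As a minimal sketch of the masking idea in step 3, a program could substitute labelled placeholders for identifiable details before any text reaches an AI service. The patterns and labels below are illustrative assumptions, not part of any Good Grants feature; real-world redaction needs far more robust detection (names, addresses, organisation identifiers).

```python
import re

# Illustrative-only patterns; production redaction needs broader coverage.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def mask_pii(text: str) -> str:
    """Replace matched identifiers with labelled placeholders before AI analysis."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(mask_pii("Contact the applicant at jane@example.org or +44 20 7946 0958."))
```

The point is the workflow, not the regexes: masking happens as a separate, auditable step before analysis, so the original application data never leaves the grantmaker's control.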

4. Treat AI as an enhancement, not an autopilot
It expands your capacity and improves efficiency—but it shouldn’t replace thoughtful review.

5. Ask vendors tough questions
How do they isolate AI data? What guardrails prevent models from training on sensitive content?

6. Document when and why AI is used
Clear records allow you to explain, audit and defend decisions later—no guesswork required.
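Step 6 can be as simple as writing a structured record each time AI assists a decision. The field names below are hypothetical, chosen for illustration rather than taken from any Good Grants schema; the point is that each record ties the AI-assisted step to a purpose and an accountable human reviewer.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Hypothetical audit-record structure; field names are illustrative only.
@dataclass
class AIUsageRecord:
    application_id: str
    feature: str          # e.g. "summary", "theme_extraction"
    purpose: str          # why AI was used at this step
    human_reviewer: str   # who remains accountable for the outcome
    timestamp: str        # when the AI-assisted step occurred

def log_ai_usage(record: AIUsageRecord) -> str:
    """Serialise the record so AI-assisted steps can be explained and audited later."""
    return json.dumps(asdict(record))

entry = AIUsageRecord(
    application_id="APP-1042",
    feature="summary",
    purpose="Condense a long application for the review panel",
    human_reviewer="j.doe",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(log_ai_usage(entry))
```

Kept alongside the grant record, entries like this let an organisation answer later questions about where AI was used and who signed off, with no guesswork.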

As AI continues to evolve, grantmakers don’t need to avoid it or adopt it blindly. The real advantage comes from using AI with clarity, consent and strong governance.

By turning AI on with purpose, safeguarding sensitive program data and ensuring human judgment remains central, organisations can unlock AI as a force multiplier, supporting better grantmaking without compromising trust.
