by Lindsay Nash | Feb 3, 2026 | Article
AI is no longer an out-of-reach consideration for grantmakers; it is already here, built into the tools we use every day. From drafting documents and emails to analysing applications and reports at scale, AI is transforming how grantmakers and funders work.
But the lightning-fast adoption of intelligent technologies poses new and unique challenges.
Used well, AI can offer exceptional support by reducing admin burden and helping grant teams surface insights more quickly. But used poorly, AI can erode trust, obscure decision-making, reveal sensitive data to third parties and distance people from the good decision-making that sits at the heart of responsible grantmaking.
The goal of AI adoption should not be to automate funding decisions but to strengthen fairness, accountability and human oversight. For grantmakers, how AI is used matters just as much as whether it is used at all.
For funding programs built on stewardship and impact, responsible, human-centred AI adoption truly matters.
The rise of AI has understandably raised concerns around data privacy, security and transparency—especially in grantmaking, where sensitive organisational, personal and financial information is routinely shared.
Grants programs are trust-based systems. Applicants need confidence that their information is handled securely and that decisions are fair, explainable and grounded in human judgement. When AI is perceived as opaque, automated or biased, it can undermine years of credibility.
Uncontrolled or poorly implemented AI can introduce risks into the grants lifecycle, including hidden bias, unclear decision logic and over-reliance on automated outputs.
For grantmakers, the challenge lies in gaining efficiency without losing accountability, explainability or the human oversight that underpins responsible funding decisions.
When used well, AI does not replace human expertise; it supports it. A human-centred approach to AI focuses on helping people make better decisions, not making decisions on their behalf.
In a grants management context, this might include pre-screening submissions for completeness, flagging missing attachments or generating clearly labelled summaries for human review.
AI shouldn’t decide which projects receive funding. Instead, it should reduce manual effort and cognitive load so assessors and panels can focus on what matters most: qualitative review, discussion, equity considerations and alignment with program objectives.
Good Grants applies this philosophy by embedding optional AI features within structured, human-led workflows. The result is scale with control. Grantmakers remain accountable, reviewers remain decision-makers and applicants experience a fair and transparent process.
This is where explainable AI becomes essential. Any AI-assisted insight should be understandable, reviewable and open to challenge. If a grants team cannot explain how a tool supports an outcome, it should not be part of the funding process.
Responsible AI starts with clear purpose and boundaries.
For example, imagine that a national grant program receives more than 1,500 applications in a funding round. Administrators use AI-assisted tools to pre-check submissions for completeness, required attachments and unusually short responses. The system highlights potential issues but does not score, rank or prioritise applications.
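To make this concrete, here is a minimal sketch of what such a pre-check pass could look like. It is illustrative only: the record structure, field names and thresholds are hypothetical, and the checks only flag issues for human review, never scoring or ranking anything.

```python
# Minimal sketch of an AI-assisted pre-check pass, assuming a simple
# application record; field names and thresholds here are hypothetical.
from dataclasses import dataclass, field

MIN_RESPONSE_WORDS = 30  # hypothetical threshold for "unusually short"

@dataclass
class Application:
    app_id: str
    responses: dict[str, str]                     # question -> applicant answer
    attachments: list[str] = field(default_factory=list)

REQUIRED_QUESTIONS = ["project_summary", "budget_justification"]
REQUIRED_ATTACHMENTS = ["budget.xlsx"]

def pre_check(app: Application) -> list[str]:
    """Flag potential issues for human review. Never scores or ranks."""
    flags = []
    for q in REQUIRED_QUESTIONS:
        answer = app.responses.get(q, "").strip()
        if not answer:
            flags.append(f"{app.app_id}: '{q}' is missing or empty")
        elif len(answer.split()) < MIN_RESPONSE_WORDS:
            flags.append(f"{app.app_id}: '{q}' is unusually short")
    for name in REQUIRED_ATTACHMENTS:
        if name not in app.attachments:
            flags.append(f"{app.app_id}: required attachment '{name}' not found")
    return flags  # surfaced to administrators; assessors decide what matters

# Example: an incomplete application surfaces several flags for a person to review.
app = Application("GR-0001", responses={"project_summary": "We plant trees."})
for flag in pre_check(app):
    print(flag)
```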
Assessors receive clean, standardised applications, along with optional AI-generated summaries clearly labelled as support material. Assessors can ignore, edit or challenge these summaries at any time and always refer back to the original content. All assessment and funding recommendations remain fully human-led.
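The key design choice here is that the summary travels as clearly labelled support material, kept separate from and subordinate to the applicant's own words. A minimal sketch, with hypothetical field names:

```python
# Hypothetical structure for keeping AI summaries separate from, and
# subordinate to, the original application content.
from dataclasses import dataclass

@dataclass
class AssistiveSummary:
    application_id: str
    text: str
    label: str = "AI-generated support material"  # always shown to assessors
    dismissed: bool = False                        # assessors may set it aside

    def dismiss(self) -> None:
        """Ignore the summary; the original application remains the record."""
        self.dismissed = True
```

However an assessor edits or dismisses the summary, the underlying application is never altered; it stays the single source of truth.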
The result is responsible generative AI in practice: supportive, transparent and optional.
(Get more example AI prompts for Good Grants!)
Grantmakers do not need to be AI experts to adopt intelligent technology responsibly. A handful of core principles go a long way.
1. Define what AI can and cannot do
Be explicit. For example, AI should not assess merit, rank applications or determine funding outcomes. Document boundaries internally and communicate them clearly to reviewers and stakeholders.
2. Keep humans in the loop
Every AI-supported output should be reviewable, editable and dismissible by a person. Human oversight remains central to all decisions.
3. Prioritise explainability
Choose AI tools that provide clarity and transparency. If you cannot explain how an output was generated, it does not belong in a high-stakes grants environment.
4. Communicate openly with your review team
Transparency builds trust. Let reviewers know when AI is used, how it supports them and how they remain in control. Need a framework? Check out the UK Information Commissioner's Office guidance on explaining decisions made with AI.
5. Start small and measure impact
Introduce AI in low-risk areas such as administrative checks or reporting support. Evaluate outcomes before expanding its role.
When done well, responsible AI does more than save time; it strengthens confidence. Applicants trust that their proposals are reviewed fairly, reviewers feel supported rather than replaced, and grant managers gain clarity, consistency and control.
In Good Grants, AI features are optional and human-controlled. Grantmakers choose whether to enable them, decide how they are used and retain full oversight—all within a secure environment designed to protect program data and community trust.
When humans remain in control of intelligent technology, people are empowered to make funding decisions that truly matter and drive impact.