by Lindsay Nash | Mar 31, 2026 | Article
Artificial intelligence is no longer on the horizon for grantmakers. It is here, fast becoming embedded in the tools many nonprofits use every day. From summarising applications and reviewing reports to flagging incomplete submissions, AI can genuinely reduce the administrative load on stretched grant teams.
But with that opportunity comes a responsibility that grantmakers cannot afford to overlook: protecting the privacy and security of the sensitive data entrusted to them by applicants, grantees and communities.
AI governance does not have to be complicated. With a clear framework and the right platform, nonprofits can embrace AI without compromising data privacy, community trust or program integrity.
Grant programs are built on trust. When applicants share personal, financial and organisational information in good faith, they expect it to be handled securely and used only for the purpose it was collected. If that trust is broken, whether through a data breach, an opaque AI process or misuse of information, the consequences can be severe: reputational damage, legal exposure and, most importantly, harm to the very communities grantmakers exist to serve.
As global data privacy regulation continues to expand, from the EU GDPR and UK GDPR to emerging frameworks across North America and Asia-Pacific, grantmakers face increasing compliance obligations. AI has added a new dimension to this landscape because AI tools that process personal data are themselves subject to privacy law.
AI tools that operate without clear governance can introduce a range of risks into the grantmaking process: applicant data sent to external AI providers without consent, opaque processes that applicants cannot scrutinise, exposure of personally identifiable information, and automated judgements that displace human decision-making.
These are not hypothetical risks. Stanford HAI’s 2025 AI Index Report tracks a sharp year-on-year rise in AI-related privacy and security incidents, a sobering reminder that the gap between risk awareness and action has real consequences.
Whether your organisation is just beginning to explore AI or already using it across your grantmaking cycle, a simple governance framework will help you stay in control.
NTEN’s AI for Nonprofits Resource Hub is a helpful starting point, covering overarching principles, tool evaluation, data privacy and IT governance. Here is how to apply each area in a grantmaking context.
Start by articulating what AI can and cannot do in your program. For example: AI may support application review by summarising submissions, but it may not score, rank or prioritise applicants. Document these boundaries and share them with all staff and reviewers before any tool goes live.
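One way to make such boundaries enforceable rather than aspirational is to encode them where staff tooling can check them. The sketch below is purely illustrative: the task names and default-deny policy are hypothetical examples, not features of any particular platform.

```python
# Hypothetical sketch: a documented AI-use policy expressed as a
# simple allow/deny check. Anything not explicitly permitted is denied.
ALLOWED_AI_TASKS = {"summarise_submission", "flag_incomplete_fields"}
PROHIBITED_AI_TASKS = {"score_applicant", "rank_applicants", "prioritise_applicants"}

def is_ai_task_permitted(task: str) -> bool:
    """Return True only for tasks the documented policy explicitly allows."""
    if task in PROHIBITED_AI_TASKS:
        return False
    # Default-deny: undocumented tasks are treated as prohibited.
    return task in ALLOWED_AI_TASKS

print(is_ai_task_permitted("summarise_submission"))  # True
print(is_ai_task_permitted("rank_applicants"))       # False
```

A default-deny rule mirrors the governance advice above: new AI capabilities must be added to the documented policy before they can be used, rather than being permitted by omission.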
Before deploying any AI feature, ask the vendor directly: Does this tool send data to external AI providers? Is processing contained within a secure environment? Can I choose which AI model is used? The answers will determine whether the tool is safe for grant program use.
Only collect data you genuinely need. Audit your application fields regularly and remove anything that is not necessary for grant review or compliance. Where AI is analysing submissions, ensure that personally identifiable information is masked wherever possible.
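Masking can often be automated before a submission ever reaches an AI step. The sketch below shows one minimal approach using pattern-based redaction; the patterns and example text are illustrative assumptions, and real programs should audit their own fields to decide what counts as personally identifiable information.

```python
import re

# Hypothetical sketch: redact common PII patterns in free-text answers
# before passing them to an AI summarisation step.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def mask_pii(text: str) -> str:
    """Replace any matched PII with a labelled placeholder token."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

answer = "Contact Jane at jane@example.org or +44 20 7946 0958."
print(mask_pii(answer))
# Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].
```

Pattern-based redaction only catches predictable formats; structured fields (dedicated email or phone inputs) are easier to mask reliably than free text, which is another argument for auditing application fields regularly.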
Every AI-generated output should be reviewable, editable and dismissible by a person. AI should reduce the cognitive load on grant teams, not replace their judgement. Assessors must remain the decision-makers, with full access to original submissions at all times.
AI governance is not a one-time exercise. Train your staff on the AI policy at the start of each grant cycle, include it in reviewer onboarding and revisit it annually as tools and regulations evolve.
Your grants management platform is the foundation on which any AI governance framework sits. The right platform will make responsible AI adoption straightforward rather than stressful. Look for a few non-negotiables: processing contained within a secure environment, no sharing of applicant data with external model providers, control over which AI model is used, AI features that can be switched on and off, and human review of every AI-generated output.
Good Grants takes this approach with its optional, privacy-first AI tools. All processing happens within a secure virtual private cloud, applicant data is never shared with external model providers and grantmakers choose exactly when AI features are switched on.
For a practical look at how AI can support day-to-day grantmaking tasks, see our guide to building trust in grantmaking with human-centred AI.
AI governance does not need to be a barrier to innovation. For grantmakers, a clear, practical approach to data privacy protects applicants, preserves community trust and positions your organisation to use AI confidently and responsibly.
The grantmakers who get this right will build stronger, more transparent programs that funders, applicants and communities can genuinely rely on.
Start with your principles. Choose your tools carefully. And keep people at the heart of every decision.