AI governance made simple: How nonprofits can protect data privacy in the age of AI

Mar 31, 2026 | Article

Artificial intelligence is no longer on the horizon for grantmakers. It is here, fast becoming embedded in the tools many nonprofits use every day. From summarising applications and reviewing reports to flagging incomplete submissions, AI can genuinely reduce the administrative load on stretched grant teams.

But with that opportunity comes a responsibility that grantmakers cannot afford to overlook: protecting the privacy and security of the sensitive data entrusted to them by applicants, grantees and communities.

AI governance does not have to be complicated. With a clear framework and the right platform, nonprofits can embrace AI without compromising data privacy, community trust or program integrity.

Why data privacy matters in grantmaking

Grant programs are built on trust. When applicants share personal, financial and organisational information in good faith, they expect it to be handled securely and used only for the purpose it was collected. If that trust is broken, whether through a data breach, an opaque AI process or misuse of information, the consequences can be severe: reputational damage, legal exposure and, most importantly, harm to the very communities grantmakers exist to serve.

As global data privacy regulation continues to expand, from GDPR in the UK and EU to emerging frameworks across North America and Asia-Pacific, grantmakers face increasing compliance obligations. AI has added a new dimension to this landscape because AI tools that process personal data are themselves subject to privacy law.

The risks of unmanaged AI in grant programs

AI tools that operate without clear governance can introduce a range of risks into the grantmaking process:

  • Data exposure. If AI tools send applicant data to external servers or third-party model providers, your organisation may be unknowingly transferring sensitive information outside your control and potentially breaching data protection obligations.
  • Opaque decision-making. AI-generated outputs that lack explainability can make it difficult to justify funding decisions to applicants, boards or regulators, which undermines transparency and fairness.
  • Unintended bias. AI models trained on historical data can reflect existing inequities. Without human oversight, this can skew recommendations in ways that disadvantage already-marginalised applicant groups.
  • Staff and reviewer uncertainty. Without a clear policy, staff may use AI tools inconsistently or avoid them altogether, creating uneven processes and missed opportunities.

These are not hypothetical risks. Stanford HAI’s 2025 AI Index Report tracks a sharp year-on-year rise in AI-related privacy and security incidents, a sobering reminder that the gap between risk awareness and action has real consequences.

A practical AI governance framework for grantmakers

Whether your organisation is just beginning to explore AI or already using it across your grantmaking cycle, a simple governance framework will help you stay in control.

NTEN’s AI for Nonprofits Resource Hub is a helpful starting point, covering overarching principles, tool evaluation, data privacy and IT governance. Here is how to apply each area in a grantmaking context.

1. Define your principles before you adopt any tool

Start by articulating what AI can and cannot do in your program. For example: AI may support application review by summarising submissions, but it may not score, rank or prioritise applicants. Document these boundaries and share them with all staff and reviewers before any tool goes live.

2. Evaluate tools for privacy by default

Before deploying any AI feature, ask the vendor directly: Does this tool send data to external AI providers? Is processing contained within a secure environment? Can I choose which AI model is used? The answers will determine whether the tool is safe for grant program use.

3. Apply data minimisation principles

Only collect data you genuinely need. Audit your application fields regularly and remove anything that is not necessary for grant review or compliance. Where AI is analysing submissions, ensure that personally identifiable information is masked wherever possible.
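
To illustrate what masking can look like in practice, here is a minimal sketch in Python. It is a hypothetical pre-processing step, not a feature of any particular grants platform, and the simple patterns shown here are illustrative only; reliable PII detection in a production system needs a more robust approach.

    import re

    # Hypothetical example: mask obvious identifiers before text is sent to an AI tool.
    # These patterns are deliberately simple and will not catch every case.
    PII_PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    }

    def mask_pii(text: str) -> str:
        """Replace matched identifiers with placeholder tokens."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[{label.upper()} REMOVED]", text)
        return text

    submission = "Contact Jane Doe at jane.doe@example.org or +44 20 7946 0958."
    print(mask_pii(submission))
    # -> Contact Jane Doe at [EMAIL REMOVED] or [PHONE REMOVED].

The point is not the specific code but the principle: whatever tooling you use, identifiers should be stripped or replaced before submission text ever reaches an AI model.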

4. Always keep humans in the loop and in control

Every AI-generated output should be reviewable, editable and dismissible by a person. AI should reduce the cognitive load on grant teams, not replace their judgement. Assessors must remain the decision-makers, with full access to original submissions at all times.

5. Train your team and revisit your policy regularly

AI governance is not a one-time exercise. Train your staff on the AI policy at the start of each grant cycle, include it in reviewer onboarding and revisit it annually as tools and regulations evolve.

What to look for in an AI-ready grants platform

Your grants management platform is the foundation on which any AI governance framework sits. The right platform will make responsible AI adoption straightforward rather than stressful. Look for a few non-negotiables:

  • Privacy-first AI processing. All AI operations should run within a secure, private environment — not shared across client data or sent to external model providers.
  • Opt-in, not opt-out. AI features should be optional and configurable, so your organisation retains full control over when and how they are used.
  • Transparency and audit trails. You should be able to see exactly what AI has processed, when and how — supporting explainability and compliance.
  • Compliance coverage. Look for platforms built to meet GDPR, SOC 2 and other relevant standards, with data residency options if required.

Good Grants takes this approach with its optional, privacy-first AI tools. All processing happens within a secure virtual private cloud, applicant data is never shared with external model providers and grantmakers choose exactly when AI features are switched on.

For a practical look at how AI can support day-to-day grantmaking tasks, see our guide to building trust in grantmaking with human-centred AI.

The bottom line on AI governance for grantmakers

AI governance does not need to be a barrier to innovation. For grantmakers, a clear, practical approach to data privacy protects applicants, preserves community trust and positions your organisation to use AI confidently and responsibly.

The grantmakers who get this right will build stronger, more transparent programs that funders, applicants and communities can genuinely rely on.

Start with your principles. Choose your tools carefully. And keep people at the heart of every decision.
