A guide to ActionFunder’s core scoring criteria, how to set your own funder criteria, and advice on writing effective score descriptors. Read more about the AI Scoring Tool here.
1. How scoring works
Every application processed by the AI Scoring Tool is scored on a scale of 1 to 5 against each criterion. Each score reflects the quality and strength of evidence found within the application itself, and is accompanied by a written rationale drawn from the applicant’s responses.
- Score 1 – Little or no evidence. Significant concerns or gaps are present.
- Score 2 – Some limited evidence, but the application falls short in important ways. Clarifications are likely to be needed.
- Score 3 – Adequate evidence. The criterion is broadly met, but some detail or specificity is missing.
- Score 4 – Good evidence. The criterion is well met, with only minor gaps or areas for improvement.
- Score 5 – Strong, detailed evidence. The criteria is fully and compellingly met.
There are two types of criteria used in the scoring process: ActionFunder’s core criteria (applied to every project automatically) and funder-set criteria (defined by you to reflect your fund’s priorities). Both are described in detail below.
2. ActionFunder’s core scoring criteria
ActionFunder’s core criteria are applied automatically to every application across all funds. They are designed to assess the fundamental quality of each application and whether the proposed project is realistic and deliverable. There are two core criteria: Quality of Application and Attainability.
| 1. Quality of Application* | |
| Consider: This criterion assesses how well the application has been written. It considers: effective and full use of the word count; accuracy of spelling, grammar, and punctuation; clarity and completeness of responses; and whether all questions have been fully and directly answered. | |
| Score | Descriptor |
| 1 | The application is very poorly written with significant spelling, grammar, or punctuation errors throughout. Responses are unclear, incomplete, or do not address the questions asked. Word count is substantially underused or the content is padded without substance. |
| 2 | The application has notable issues with writing quality that affect clarity. Some questions are only partially answered or responses lack sufficient detail. Errors are frequent enough to hinder understanding. |
| 3 | The application is adequately written and generally understandable. Most questions are answered but some responses could be more complete or better articulated. Minor errors are present but do not significantly impact readability. |
| 4 | The application is well written and easy to follow. Questions are answered clearly and with appropriate detail. Word count is used effectively. Errors are minimal and do not detract from the overall quality. |
| 5 | The application is excellently written throughout, with no or negligible errors. All questions are answered fully and with clarity. The word count is used to maximum effect, with every response adding meaningful information. |
*Please note: The ‘Quality of Application’ criterion applies to written applications only. If your fund accepts video pitches, a separate criterion is used to assess these.
| 2. Attainability | |
| Consider: This criterion evaluates whether the project is realistic and deliverable within the proposed timeframe and budget. It considers: the feasibility of the project plan; the appropriateness of budget allocation; and the capacity and experience of the organisation to deliver. | |
| Score | Descriptor |
| 1 | The project plan is vague or unrealistic. There is no credible budget, or the budget bears no relation to the activities described. There is little or no evidence that the organisation has the capacity or experience to deliver the project. |
| 2 | The project plan has significant gaps and the timeline appears unachievable. The budget lacks detail or appears poorly considered. There are notable concerns about the organisation’s ability to deliver. |
| 3 | The project plan is broadly feasible but may have some gaps or risks that are not addressed. The budget is generally appropriate but lacks some detail. The organisation appears capable but evidence of capacity is limited. |
| 4 | The project plan is realistic and well thought through. The budget is appropriate and reasonably detailed. There is good evidence that the organisation has the experience and capacity to deliver the project. |
| 5 | The project plan is highly credible with a clear, achievable timeline. The budget is detailed, realistic, and well justified. There is strong evidence that the organisation has the skills, experience, and infrastructure to deliver the project successfully. |
3. Setting your own funder criteria
In addition to ActionFunder’s core criteria, you can define your own criteria to reflect what matters most to your fund. Funder-set criteria allow you to assess applications against your specific priorities – whether that’s geographic focus, thematic alignment, target beneficiaries, or anything else that’s central to your fund’s mission.
Each criterion you create should follow the same structure: a name, a brief description of what to consider, and clear scoring descriptors for each score from 1 to 5. The more specific and detailed your descriptors, the more accurately the tool will be able to assess applications against them.
| Good to know: Your criteria should be assessable from the project pitch itself. Avoid criteria that require information the applicant hasn’t been asked to provide – if it’s not in the application, the AI won’t have the information it needs to score it. |
The example below illustrates the level of detail that works well for funder-set criteria.
| Community Impact | |
| Consider: Is there clear evidence of community need? Has the applicant engaged with the community they intend to serve? Are the proposed outcomes realistic and measurable? Will the project create lasting benefit beyond the funding period? | |
| Score | Descriptor |
| 1 | No evidence of community need or benefit. The application does not describe who the project will help or how. There is no indication that the community has been consulted. |
| 2 | Vague mention of community benefit with little supporting detail. The target community is poorly defined and there is no evidence of consultation or engagement. Proposed outcomes are unclear or unmeasurable. |
| 3 | Some evidence of community need with a reasonable plan to address it. The target community is identified but evidence of need may rely on assumptions rather than data or consultation. Outcomes are stated but could be more specific or measurable. |
| 4 | Clear evidence of community need supported by data or consultation. The project has a well-thought-out plan with defined, measurable outcomes. There is a reasonable explanation of how the community will benefit both during and after the funding period. |
| 5 | Compelling evidence of significant community need supported by robust data, consultation, or lived experience. The project presents a detailed and innovative plan with strong, measurable outcomes. There is a clear strategy for sustaining impact beyond the funded period. |
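To make the shape of a funder-set criterion concrete, the Community Impact example above can be sketched as simple structured data: a name, a "consider" description, and one distinct descriptor per score from 1 to 5. The field names here are purely illustrative – they are not ActionFunder's actual format, and you would set all of this through the platform rather than in code.

```python
# A hypothetical representation of a funder-set criterion, assuming the
# name / consider / five-descriptor structure described in this article.
# Field names are illustrative, not ActionFunder's real data model.
criterion = {
    "name": "Community Impact",
    "consider": (
        "Is there clear evidence of community need? Has the applicant "
        "engaged with the community they intend to serve?"
    ),
    "descriptors": {
        1: "No evidence of community need or benefit.",
        2: "Vague mention of community benefit with little supporting detail.",
        3: "Some evidence of need with a reasonable plan to address it.",
        4: "Clear evidence of need supported by data or consultation.",
        5: "Compelling evidence of need with a strategy for lasting impact.",
    },
}

# Sanity checks: every score from 1 to 5 is covered, and each score level
# has its own distinct descriptor (see the tips on distinct levels below).
assert sorted(criterion["descriptors"]) == [1, 2, 3, 4, 5]
assert len(set(criterion["descriptors"].values())) == 5
```

The checks at the end mirror the advice in the next section: all five score levels should be present, and no two levels should share the same wording.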
4. Tips for writing effective criteria
Well-written criteria make a significant difference to the quality of the AI’s assessments. Here are some principles to keep in mind when setting your funder criteria.
✅ Be specific about what you’re looking for. Vague criteria produce vague assessments. The more clearly you describe what a strong or weak response looks like at each score level, the more accurately the tool can apply it.
✅ Make each score level meaningfully distinct. Each descriptor from 1 to 5 should represent a clear step up in quality. Avoid overlapping language that could apply to more than one score level.
✅ Anchor descriptors in evidence. Rather than saying “the applicant demonstrates community engagement,” say “there is evidence of consultation with the community, supported by data, quotes, or examples.” Evidence-based descriptors are easier to assess objectively.
✅ Keep criteria assessable from the application. Only set criteria that can be evaluated from what applicants have been asked to write. If your application form doesn’t include a question about a particular topic, the AI won’t have the information it needs to score it.
✅ Align criteria with your application questions. If you ask applicants about sustainability, make sure you have a criterion that assesses their answer. There should be a clear line between what you ask and what you assess.
✅ Use plain language. Avoid jargon or sector-specific terminology that could be interpreted in different ways. Clear, plain language in your criteria leads to more consistent scoring.
✅ Consider your fund’s context. Think about what a realistic Score 5 application looks like given your fund’s typical applicants. For grassroots or early-stage organisations, your expectations may differ from a fund aimed at more established charities.
| Need help setting your criteria? Speak to your Customer Success Manager, who can review your draft criteria and offer suggestions. |
| Related article: The AI Scoring Tool. Read our overview of what the tool does, how it works, and why funders choose to use it. |