The Chan Zuckerberg Initiative’s core values center on people, technology, collaboration, and open science. We adhere to those values in both proposal selection and evaluation of progress.
Applications will be evaluated on the current and potential impact, documentation, user support, usability, and reliability of the plugin(s) involved, as well as the feasibility of the proposal—each of which will be assessed through quantitative and qualitative factors. Relevant materials will be provided by the applicants and obtained by CZI from publicly available sources where possible (e.g., the napari hub, GitHub, image.sc). Independent expert review will be solicited, and final decisions will be made by CZI staff in consultation with our expert advisors.
Impact will assess the potential of the project to provide easy access to reproducible, quantitative bioimage analysis, in alignment with our mission to support the science and technology that will make it possible to cure, prevent, or manage all diseases by the end of the century. Impact will be assessed qualitatively and quantitatively. Reviewers will evaluate:
- Usage of the plugin as evidenced through downloads, citations, and other metrics.
- Alignment of the plugin(s) to areas currently prioritized by CZI Science.
- Scalability of impact through interoperability with other napari plugins and scientific python tools.
- Opportunity for the plan of work to improve the impact of the plugin(s).
Documentation will assess the resources available to users of the plugin(s) to evaluate whether the plugin is relevant to their needs, how to use the plugin, and where to find additional resources. Reviewers will evaluate:
- The comprehensiveness of the plugin’s description and metadata on the napari hub.
- The availability and discoverability of in-depth guides and tutorials.
- Opportunity for the plan of work to improve the documentation of the plugin(s).
User support will assess the availability and quality of support channels for users of the plugin. Reviewers will evaluate:
- Clear presence of user support channels.
- Timeliness of responses by the plugin developer and user community to bug reports, feature requests, and questions.
- Opportunity for the plan of work to improve the user support of the plugin(s).
Usability will assess the opportunity for the project to ensure the plugin(s) can be used by researchers with a wide range of physical abilities, computing resources, and programming knowledge. Reviewers will evaluate:
- Ease of installation of the plugin through the standalone napari desktop application.
- Conformance to accessibility (a11y) best practices in user interface design.
- Opportunity for the plan of work to improve the usability of the plugin(s).
Reliability will assess the capability of the plugin(s) to deliver a reliable user experience and reliable scientific results. Reviewers will evaluate:
- Availability of the plugin in the standalone napari desktop application (through conda-forge).
- Adoption of npe2, the second generation napari plugin engine.
- Existence and coverage of automated testing.
- Availability of source code under an OSI-approved open source software license.
- Opportunity for the plan of work to improve the reliability of the plugin(s).
Feasibility will assess the plan of work described in the proposal and whether it can be accomplished given the requested budget, additional funding sources, and key personnel involved. Reviewers will evaluate the following based on qualitative materials:
- Specificity and clarity of plan of work to be accomplished.
- Proposed use of funds (relative to plan of work).
- Likelihood of the work being accomplished.
- Plan for tracking and validating progress against goals.
Alongside qualitative materials, expert evaluation may utilize metrics when available and applicable such as:
- Number of downloads of the plugin(s) from PyPI and conda-forge.
- Mentions of the plugin(s) in public forums such as Twitter and image.sc.
- Traffic and engagement of users on the napari hub.
- Number of bug reports, feature requests, and pull requests by the user community.
- Volume of support requests in public forums and timeliness of responses.
- Number of citations or mentions of the software project in scientific literature.
There is no expectation of any specific number of awards for this RFA program. The Chan Zuckerberg Initiative reserves the sole right to not recommend the funding of any applications. CZI does not provide individual feedback on decisions for unfunded proposals.