Introduction

Institutional core facilities, also known as shared research resources or recharge centers, play a crucial role in ensuring that scientists from any field, at any rank, and regardless of their funding source, have access to high-quality instrumentation, expertise, and best practices for the technologies needed to conduct research experiments.1 Core facilities are thus pivotal anchors for collaboration and cutting-edge science within an institution. As technological innovation continues to accelerate in certain sectors,2,3 core facilities aligned with their university’s strategic initiatives serve as a “force multiplier.”2 Zwick specifically argues that investment in the technology and services within cores has widespread impacts: it facilitates the recruitment and retention of faculty, provides critical education opportunities for trainees, develops collaborative research teams, reduces the space needed for high-end instrumentation across campus, and increases the competitiveness of grants, which can draw on the expertise of core staff and equipment to demonstrate the institution’s ability to support a research project.

In addition to housing high-end instrumentation and technical expertise, core facilities serve as critical gateways and advisors in the conduct of rigorous, reproducible, and transparent (RRT) research within their respective fields. In one study, over 98% of core directors and staff reported using at least one tool per day to support RRT within their core, including established standard operating procedures (SOPs), quality control documentation, data management policies, training, technical support, and equipment management plans.2 In the same survey and a follow-up conducted several years later, core leadership identified two activities that would directly affect RRT outcomes for the research their cores support: mandatory consultations with core staff early in the experimental design stage, and method validation.3,4 In a separate study, conducted primarily at European universities, a majority of core facility directors reported difficulty motivating users to follow best practices.5 While core directors and staff spend considerable time institutionalizing best practices in RRT within their cores, ensuring customers’ adherence to best practices in research design and analysis upstream or downstream of the work conducted within the core remains a universal challenge.

Assessing experimental design and practicality through pilot and feasibility studies (PFS) is considered a crucial milestone in scientific inquiry.6–8 Defined as “a small-scale test of the methods and procedures to be used on a larger scale if the pilot study demonstrates that these methods and procedures can work,”9 a pilot study serves as a critical test of an experimental protocol’s feasibility. Often at the cutting edge of biomedical science and translational discovery, pilot studies may be high-risk but also high-reward. The importance of PFS has been endorsed by the National Institutes of Health (NIH) through investments in small- and early-stage grant mechanisms to fund exploratory research (including the R03, R21, R33, R34/U34, P20, UH2, and R61 programs).10 Furthermore, NIH-funded Clinical and Translational Science Award (CTSA) hubs and P30 Center Core Grant mechanisms require the establishment of a pilot studies module at each location to fund institutional PFS.11 While each program differs slightly, all provide general support for the conceptual, research design, and proof-of-concept phases of an experiment, which can then underpin subsequent research proposals (e.g., R34) or evaluate the feasibility of an approach to conducting a clinical trial (e.g., U34). Moreover, many principal investigators consider preliminary data necessary for a competitive R01 proposal and for demonstrating the rigor and reproducibility of a research project.10

The costs of purchasing capital equipment and employing specialized technical staff to operationalize these experiments are typically beyond the scope of most pilot and feasibility funding sources. Some PFS programs address this by providing funding restricted to projects that use specific core facilities, to be spent only on invoices issued by those facilities; colloquially, these are known as core voucher programs. While the authors are aware of several such programs at other academic institutions, only one has been described in the literature: a core voucher program operated at the University of California, Los Angeles through a Clinical and Translational Science Institute grant specifically for funding translational science in core facilities.12

We describe a pilot program that funds data generation and collection conducted solely within the biomedical core facilities at the University of North Carolina at Chapel Hill (UNC-CH) School of Medicine and that incorporates the recommended mandate for core facility project consultations.3,4 We also provide return on investment (ROI) measures from data collected after the first two funding years, as well as recommendations and considerations for research leadership at other institutions that wish to develop similar programs.

Materials and Methods

Program Development, Description, and Evolution

The UNC-CH Office of Research Technologies (ORT) is a division of the UNC-CH School of Medicine (SOM) Office of Research. The SOM Office of Research broadly supports research within the SOM through infrastructure and space management, education, faculty development, faculty recruitment and retention, long-term strategic planning, promotion of research integrity, and access to cutting-edge technology for SOM Principal Investigators (PIs). The ORT is responsible for the latter aim. To this end, ORT established the Core Facility Advocacy Committee (CFAC) in 2014. The CFAC is composed of five SOM faculty members, each with expertise in one of the following technological areas: Animal Models, Biochemistry, Clinical/Translational, Genomics, and Imaging. The committee also includes stakeholders from ORT and the SOM Office of Research, as well as ad hoc members from other units that support core facilities across the broader UNC-CH campus. The CFAC manages an annual budget of approximately $1.2 million to invest in core facilities located within the SOM or serving a large number of SOM faculty. A major portion of the annual budget supports research core facilities directly: purchasing or cost-sharing capital equipment, providing institutional support funds for state and federal instrumentation grant opportunities, subsidizing new method development, and serving other general core needs, including emergency equipment repair and replacement.

In addition to providing funds to modernize core equipment and technological services, the UNC-CH ORT and CFAC developed and managed a core voucher program in Fiscal Years (FY) 2020, 2023, 2024, and 2025. Each awarded voucher provided an applicant with up to $10,000 in funding to be spent directly at core facilities housed within the UNC-CH SOM. Applications were considered for research proposals that would generate pilot data for currently unfunded research projects, develop new methodologies, or develop multi-core pipelines. Program changes were implemented in subsequent years to increase its strategic impact. For example, starting in FY23, applicants were additionally required to be new core customers or to propose experiments using an instrument or service they had not previously utilized. The program was established as a strategic investment in the cores, both to build a stronger user base and to establish core facilities and technologies as foundational resources for new grant submissions arising from the pilot data. We chose not to target specific funding mechanisms with this program because we wanted the impact to be broad, encompassing a wide range of core facilities within the UNC-CH SOM. Institutions with fewer cores, or those that wish to steer faculty toward specific strategic cores or grant mechanisms, may find such targeting helpful.

The CFAC and the UNC SOM Office of Research determined the program’s annual budget. As shown in Table 1, program funding increased from FY20 through FY24. In FY25, funding was reduced due to anticipated competing financial priorities.

Table 1. Number of applications received and awarded in all voucher award funding cycles
Fiscal Year Total Number of Applications Total Number of Applications Awarded Award Percentage Amount Awarded Cores Utilized
2020 120 27 22% $230,343 19
2023 67 28 42% $253,688 26
2024 87 30 34% $282,303 25
2025 85 18 21% $168,474 21

Application Requirements and Process

Applications for this funding program were accepted and evaluated annually through a single request for applications. Applicants were limited to employees of UNC-CH but could be PIs of any faculty rank, research staff, or trainees (postdoctoral fellows or graduate students). Applicants were required to submit a detailed budget, a project timeline, and a one-page NIH-style specific aims document. If the total proposed project budget exceeded program funding limits, applicants were required to specify the funding source that would cover the remaining expenses for project completion. Broad eligibility was intentional and strategic, ensuring that access to pilot data was agnostic to the role or rank of the applicant. Table 2 provides a breakdown of the number of projects funded by rank or title, with the number of applications received shown in parentheses.

Table 2. Distribution of the number of funded projects by title/rank and fiscal year, with the number of applications received shown in parentheses
Fiscal Year Total Number of Applications Graduate Student Postdoctoral Fellow Clinical or Research Fellow Assistant Professor Research or Clinical Assistant Professor Research or Clinical Associate Professor Associate Professor Professor Res. Spec. or Staff Scientist Instructor
2020 120 3(17) 4(19) 0(2) 7(34) 2(11) 0(4) 4(18) 5(14) 0(1)
2023 67 4(6) 0(2) 1(2) 13(25) 1(1) 0(0) 9(17) 0(5) 1(2)
2024 87 6(17) 7(21) 1(2) 6(16) 1(2) 1(3) 2(14) 3(5) 2(6) 1(1)
2025 85 1(12) 3(20) 1(1) 4(16) 1(8) 1(2) 4(13) 2(8) 0(3)

Additionally, applicants were required to consult with core directors before submitting their proposals. Starting in FY23, two new requirements were implemented: 1) applicants submitted attestation statements, signed by the director of each core, certifying that the proposal met the core’s standards for RRT in research; and 2) applicants signed a form confirming that they understood the guidelines for authorship and acknowledgment of core facility resources and committing to ensure those guidelines were met in any publications resulting from the project.

The request for applications was sent to directors of cores eligible for participation in the program, department chairs, center directors, and the UNC Postdoctoral Association for dissemination to their respective listservs, with applications accepted during a 6-week window. Core directors were given the authority to set additional internal deadlines to ensure sufficient time to meet with interested applicants before the program deadline. Applications received between FY20 and FY25 proposed the use of 52 core facilities across a broad range of technological categories, as shown in Table 3 below. The sum of core facility counts across applications exceeds the total number of applications received because many proposals were multi-core or pipeline projects; in fact, 34.8% of all applications proposed to use more than one UNC-CH core facility.

Seven UNC-CH SOM core facilities accounted for over half of the total applications received (Table 3), with the highest representation from a small subset of cores that offer critical infrastructure technologies and services in proteomics and metabolomics, genomic sequencing, histology, microscopy, and flow cytometry. A similar asymmetric distribution of core facility representation was reported in a previous evaluation of core voucher programs13 and likely reflects both core demand and the success of advertising efforts by the stakeholders of specific core facilities.

Table 3. Total number of applications per UNC Chapel Hill core facility proposed through the Core Voucher Program between FY20 and FY25
Core Name Category Number of Applications
Michael Hooker Metabolomics and Proteomics Core Biochemistry 73
High-Throughput Sequencing Facility Genomics 56
Pathology Services Core Animal Models 43
Advanced Analytics Genomics 32
Microscopy Services Laboratory Imaging, Cytometry, and Microscopy 25
Flow Cytometry Core Imaging, Cytometry, and Microscopy 20
Histology Research Core Animal Models 19
Hooker Imaging Core Imaging, Cytometry, and Microscopy 17
Bioinformatics and Analytics Research Collaborative Genomics 16
Peptide Expression and Purification and Macromolecular X-Ray Crystallography Core Biochemistry 16
BRIC Small Animal Imaging Facility Imaging, Cytometry, and Microscopy 16
Biomolecular NMR Laboratory Biochemistry 14
Mass Cytometry Core Imaging, Cytometry, and Microscopy 14
Neuroscience Microscopy Core Imaging, Cytometry, and Microscopy 13
Preclinical Research Unit Animal Models 11
Macromolecular Interactions Facility Biochemistry 11
CGIBD Histology Core Animal Models 9
Microbiome Core Genomics 8
Human Pluripotent Stem Cell Core Animal Models 8
CRISPR Core Genomics 7
Translational Genomics Laboratory Genomics 7
Animal Models Core Animal Models 7
R.L. Juliano Structural Bioinformatics Core Biochemistry 7
BRIC Center for Animal MRI Imaging, Cytometry, and Microscopy 6
Immune Monitoring and Genomics Facility Genomics 5
Vector Core Genomics 5
Zebrafish Aquaculture Core Animal Models 5
Cryo-EM Core Biochemistry 5
High-Throughput Peptide Synthesis Core Biochemistry 5
UNC NeuroTools Genomics 4
LCCC Tissue Procurement Facility Clinical/Translational 4
MLI Tissue Procurement and Cell Culture Facility Clinical/Translational 4
Tissue Culture Facility Clinical/Translational 4
BRIC Radiochemistry Imaging, Cytometry, and Microscopy 4
IDDRC Mouse Behavioral Phenotyping Core Animal Models 3
Systems Genetics Core Facility Animal Models 3
Metabolism and Metabolomics Core Biochemistry 3
Connected Health Applications and Interventions Core Clinical/Translational 3
Respiratory TRACTS Core Clinical/Translational 3
BRIC Human Imaging Imaging, Cytometry, and Microscopy 3
Mammalian Genotyping Core Genomics 2
Animal Clinical Chemistry Core Animal Models 2
Cardiovascular Physiology and Phenotyping Animal Models 2
Gnotobiotic Rodent Resource Center Animal Models 2
Clinical Genomic Analysis Core Clinical/Translational 2
Lenti shRNA Core Genomics 1
Vironomics Core Genomics 1
Animal Metabolism and Phenotyping Animal Models 1
Research Radiation Core Animal Models 1
CFAR Clinical Pharmacology and Analytical Chemistry Biochemistry 1
Nanomedicine Characterization Core Biochemistry 1
CIDD Data Science Core Clinical/Translational 1

Review Process

Applications received two independent reviews by UNC-CH SOM faculty using a scoring rubric modeled after the NIH study section system; this mechanism was familiar to the reviewers and emphasized the significance, innovation, and feasibility of the proposed study. One reviewer per application was a CFAC member, and CFAC members recruited secondary reviewers as volunteers. Proposals were scored on Scientific Innovation, Significance, and Approach, and reviewers provided written feedback on each proposal’s strengths and weaknesses and its appropriateness for this funding mechanism. Numerical scores for each application were averaged and ranked. Additional non-scored considerations included whether the applicant was a faculty member or trainee, whether the application would support method development that the core could scale into a marketable service if successful, whether there was a significant discrepancy between reviewers (defined as a difference of more than 2 points between the two reviewers’ total scores for a single application), and whether any core was over-represented among awarded applications. The CFAC made final funding decisions during a committee meeting. A minimal sketch of this averaging-and-flagging logic appears below.
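The following Python sketch illustrates the score averaging, ranking, and discrepancy flagging described above. It is a hypothetical illustration only: the committee’s process was manual, and the class names and threshold variable here are our own.

```python
# Hypothetical sketch of the review logic described above; the committee's
# actual process was manual. Names and structures are illustrative.
from dataclasses import dataclass

DISCREPANCY_THRESHOLD = 2.0  # reviews differing by >2 total points were flagged


@dataclass
class Application:
    applicant: str
    reviewer_1_total: float  # total across Innovation, Significance, Approach
    reviewer_2_total: float

    @property
    def mean_score(self) -> float:
        return (self.reviewer_1_total + self.reviewer_2_total) / 2

    @property
    def discrepant(self) -> bool:
        return abs(self.reviewer_1_total - self.reviewer_2_total) > DISCREPANCY_THRESHOLD


def rank_applications(apps: list[Application]) -> list[Application]:
    # NIH-style scores: lower is better, so rank ascending by mean score.
    return sorted(apps, key=lambda a: a.mean_score)


if __name__ == "__main__":
    submitted = [
        Application("Proposal A", 3.0, 4.0),
        Application("Proposal B", 2.0, 5.0),  # >2-point gap: flag for discussion
    ]
    for app in rank_applications(submitted):
        flag = " (score discrepancy)" if app.discrepant else ""
        print(f"{app.applicant}: mean {app.mean_score:.1f}{flag}")
```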

In FY20, core directors were also included in the review process because of their technological expertise. However, feedback from the core directors indicated that they experienced a conflict of interest when making funding decisions for their own customers. Thus, core directors were not included in the official review process in subsequent fiscal years. Instead, core facility directors were required to sign off on all applications, indicating that they had met with the applicant and that the applicant’s project met the core’s standards for RRT.

Program Follow-Up

Approximately 12 months post-award, a one-year progress report was requested from all awardees. Awardees were asked to provide a scientific update on the experiments conducted and to discuss the program’s impact to date on scholarly output, including:

  • number of publications in preparation, under review, or published containing data obtained through the voucher program funding

  • number of, and year one direct costs for, grants in preparation, under review, or awarded whose submissions contained pilot data funded through the voucher program. For awarded grants, identifying information internal to the UNC-CH grant management system was requested

  • number of trainees (undergraduate students, graduate students, and postdoctoral scholars) who assisted with the project

  • any new methodologies or core services that arose from the project

  • whether the applicant planned to continue using the core in the future

This timeline was lengthened for the first iteration of the program, which occurred during the initial phases of the SARS-CoV-2 pandemic, when UNC-CH closed non-essential research operations. For FY20, initial one-year progress report data were collected approximately 18 months post-award. Beginning in FY24, an additional outcome was collected: oral symposia or posters presented.

Final reports detailing updates to the above metrics were collected 24 months post-award. For FY20 voucher program recipients, in line with the delay in collecting the initial progress reports, final reports were collected 36 months post-award. At least three attempts were made to collect data at each time point. If a recipient left UNC-CH during the reporting period, efforts were made to identify a current contact e-mail address.
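As a hypothetical sketch of how the reporting metrics above might be captured programmatically (the program collected reports by e-mail; all field names here are our own):

```python
# Hypothetical record for the progress/final report metrics listed above;
# the actual program collected these by e-mail, and field names are illustrative.
from dataclasses import dataclass, field


@dataclass
class VoucherReport:
    months_post_award: int  # 12 or 24 (18 and 36 for the FY20 cohort)
    papers: dict[str, int] = field(default_factory=lambda: {
        "in_preparation": 0, "under_review": 0, "published": 0})
    grants: dict[str, int] = field(default_factory=lambda: {
        "in_preparation": 0, "under_review": 0, "awarded": 0})
    awarded_grant_y1_direct_costs: float = 0.0  # summed over awarded grants
    trainees_assisting: int = 0  # undergraduates, graduate students, postdocs
    new_core_services: list[str] = field(default_factory=list)
    will_continue_using_core: bool = True
    presentations: int = 0  # posters or oral symposia (collected from FY24 on)


# Example: a 24-month final report with one published paper and one awarded grant.
report = VoucherReport(
    months_post_award=24,
    papers={"in_preparation": 1, "under_review": 0, "published": 1},
    grants={"in_preparation": 0, "under_review": 2, "awarded": 1},
    awarded_grant_y1_direct_costs=250_000.0,
    trainees_assisting=2,
)
print(report.grants["awarded"], report.awarded_grant_y1_direct_costs)
```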

Results

Program Evaluation

Return on investment (ROI) was evaluated using data from the final reports. Primary data of interest included the type and year one direct costs of awarded grants that used data collected through a voucher project; publications that resulted from voucher-funded data; the number and type of trainees whose education was supported through their effort on a voucher project; and new services or methods developed for the core. These data points were chosen based on feedback from the leadership that funded this program, as they were meaningful metrics for understanding the program’s short-term impact and importance. The data for the ROI analysis presented here were sourced from the FY20 and FY23 final reports for the core voucher program. Project completion rates were evaluated based on responses to requests for either progress or final reports. For FY20, 24 of 27 awardees provided a report at least once; for FY23, 26 of 28 awardees did so. Five of the non-completions were due to the voucher recipient leaving UNC-CH during the award period, while one recipient failed to provide either the progress or final report for the voucher award.

Grant award data were confirmed through the UNC-CH institutional award management system. Only grants indicated as submitted to the sponsor and under review at the time of the first report were evaluated here. As shown in Table 4, the data collected during the FY20 and FY23 voucher programs led to 14 funded grants and an additional 20 grants under revision or review. Collectively, the funded grants had year one direct costs of $4,804,913, yielding $9.93 in grant funding for every $1.00 invested in the voucher program (ROI = 1:9.93). The grants awarded also varied by type. Of the 14 funded grants, eight were for R01 research projects, three were for the exploratory/developmental R21 mechanism, one was for a grant from a private foundation, one was an F31 predoctoral fellowship, one was for an R35 Outstanding Investigator Award, and one was for a T32 institutional predoctoral award.
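Explicitly, the headline figure is simple arithmetic over the Table 4 totals (the FY20-only figure discussed later follows the same formula):

\[
\mathrm{ROI} = \frac{\text{year one direct costs of awarded grants}}{\text{total voucher spending}} = \frac{\$4{,}804{,}913}{\$483{,}722} \approx 9.93,
\qquad
\mathrm{ROI}_{\mathrm{FY20}} = \frac{\$3{,}417{,}842}{\$230{,}034} \approx 14.86.
\]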

Table 4. Grants awarded that utilized data collected through the voucher program
Fiscal Year Total Spending Grants Awarded Grants Under Review/In Prep at time of Final Report Year 1 Direct Costs of Awarded Grants ROI
2020 $230,034 10* 3* $3,417,842 1:14.86
2023 $253,688 4+ 17+ $1,387,071 1:5.47
Total $483,722 14 20 $4,804,913 1:9.93

*Data for FY20 award recipients were collected 36 months post-award.
+Data for FY23 award recipients were collected 24 months post-award.

Differences between FY20 and FY23 existed at the time of the final report in the numbers of grants awarded versus grants under review or still in preparation. Of the 13 grants awarded or under review/in preparation as of the FY20 final report, ten (77%) had been awarded and three (23%) were pending. For FY23, at the time of the final report, 4 of 21 (19%) had been awarded, with 17 (81%) still in preparation or under review. In considering these differences, we remind readers that the FY20 final report was collected approximately 36 months post-award, while the FY23 final report was collected 24 months post-award. This extra year was meaningful on the timeline of grant submission and review and likely explains the shift in grant status.

The voucher program’s ROI increases to 1:14.9 ($14.90 in grant funding for every $1.00 invested) when the sample is restricted to the FY20 voucher awards. This increase, corresponding to one additional year of post-award follow-up, suggests that the program’s actual impact on the university may be substantially larger than initially observed. The contribution of time-to-final-report to this difference in ROI between the cohorts is even more apparent when considering the inversion in the number of grants awarded versus those under review or in preparation (3 and 17 for FY20 and FY23, respectively) at the time of data collection.

Another metric evaluated was the number of papers published, in preparation, or under review that contained data collected as part of a voucher project. Across FY20 and FY23, 16 papers that included voucher-funded data were published, with an additional 14 papers in preparation or under review (Table 5). Again, the differing ratios of papers published to papers in preparation or under review between the two fiscal years are attributable to the different timelines at which final reports were collected for FY20 (36 months post-award) and FY23 (24 months post-award).

Table 5. Papers published with data collected through the voucher program at the time of the final report
Fiscal Year Papers Published Papers in Preparation or Review
2020 15* 2*
2023 1+ 12+
Total 16 14

*Data for FY20 award recipients were collected 36 months post-award.
+Data for FY23 award recipients were collected 24 months post-award.

In addition to measures of investment yield and scholarly output, we were also interested in the impact of this core voucher program on education, one of the three pillars of the UNC-CH SOM’s mission. Across both fiscal years, voucher projects were executed with the help of six undergraduates, 23 graduate students, 12 postdoctoral scholars, and one clinical research fellow (Table 6). These trainees received specific or additional training in sample preparation, equipment utilization, and data analysis, as well as experience in data dissemination through manuscript preparation, poster presentations, and oral symposia. The significance of the research projects was also reflected in several awards presented to the trainees. Two graduate students whose dissertations included data collected through the voucher program received UNC Impact Awards, which recognize exceptional graduate students whose dissertation research “contributes to the educational, economic, physical, social or cultural well-being of North Carolina communities and citizens.”13

Table 6. Number and categorization of trainee participants in the funded voucher program projects (FY20 and FY23)
Fiscal Year Undergraduates Graduate Students Postdoctoral Scholars Clinical Research Fellow
2020 1 13 5 0
2023 5 10 7 1
Total 6 23 12 1

We also assessed the benefits afforded to the participating core facilities themselves. Funding from the FY20 and FY23 voucher programs resulted in 21 new core customers, with the increase in the latter cohort attributable to that iteration’s new-customer requirement. Additionally, some voucher projects allowed core facilities to develop and optimize methods, resulting in new core services available to the broader research community. In total, nine new core services, animal models, and protocols have been developed since the program’s inception (Table 7).

Table 7. New customers and novel core services generated following participation in the core voucher program
Fiscal Year New Core Customers Novel Method/Technique/Model Development/Optimization
2020 4 6
2023 17 3
Total 21 9

Discussion

There is clear evidence that pilot and feasibility funding awards positively impact the universities that fund them by increasing the likelihood of securing sponsored awards. For example, the use of internal pilot grants and mock grant reviews at a school of nursing increased external funding and other scholarly metrics, such as peer-reviewed publications, compared with investigators who did not use these opportunities.14 An evaluation of a CTSA-funded pilot project program that provided both PFS funding and mentorship found a return of $9.36 in grant funding for every $1.00 spent on initial pilot funding.15 Core facility-focused voucher programs also exist at other institutions, including Johns Hopkins University and Duke University. However, to the authors’ knowledge, this is the first manuscript to describe the implementation and return on investment of a core voucher program outside of a clinical/translational institute, as well as the first to specifically and intentionally require core directors to discuss rigor, reproducibility, and transparency with applicants regarding the research conducted within the core.

The UNC-CH core facility voucher program has substantially enhanced the research and education enterprises of UNC-CH, resulting in a nearly 10-fold ROI on grant funding from the first two program years alone, contributing to numerous scholarly publications, and supporting the education and professional development of multiple trainees and faculty. This program has also contributed to developing a sustainable customer base by increasing the number of customers at each core facility and by facilitating collaborations between core facilities and voucher award recipients.

Our program is distinguished from similar programs in two ways. First, we did not limit applicants by rank and allowed applications from trainees. Both trainees and their faculty mentors have anecdotally highlighted this as an impactful aspect of the program. One graduate student reported, “This project has been an invaluable opportunity for me to learn new techniques and expand my skillset beyond doing molecular biology and biochemical techniques during my PhD training.” A faculty member likewise highlighted the benefit of this opportunity for their graduate student: “This project offered [applicant] the opportunity to hone grant writing skills, build an experimental plan for a model system outside of her expertise, coordinate the execution of the project and gain valuable skills conducting assays with mammalian tissue. She had no prior experience in designing a mouse study, which will be valuable for her future studies.”

Second, requiring project consultations with core directors was a strategic approach to addressing concerns about scientific RRT in the funded studies. Though we have not yet explicitly measured whether these consultations improve RRT outcomes, our core directors take their position at the forefront of scientific RRT seriously. Many UNC-CH core directors have voluntarily developed and posted statements describing their core’s specific RRT practices, as well as best practices in sample preparation and data analysis that may be the responsibility of the data originator. The emphasis on RRT in this program aligns with the broader UNC-CH core ecosystem’s focus on this critical component of research.

Limitations and Challenges

These positive findings notwithstanding, our analysis of the core voucher program’s impact has clear limitations. Several variables reported in this paper (new grant funding, trainees supported, new core customers, and new core services) can only be correlated with, not causally attributed to, receipt of voucher program funding, owing to the lack of control conditions in this dataset. Additionally, following the effects of voucher funding for only two years post-award may be insufficient to fully capture a voucher program’s impact. Data collected at the two-year time point (FY23 cohort) indicated that many papers and grants using voucher-funded data were still in preparation or under review, whereas data collected at 36 months post-award (FY20 cohort) were more resolved, with fewer grants and papers awaiting acceptance. Institutions considering a voucher program should ensure that their plan for evaluating the program’s impact balances the scientific timeline with the resources available to support outreach, data collection, and data management. Finally, though the program was designed to address an opportunity to improve RRT in research, we were unable to directly measure the impact of core consultations on RRT for the funded projects.

The administration of the core voucher program has undergone continuous improvement across its iterations. Some changes responded to additional restrictions or priorities from the UNC School of Medicine Dean’s Office, which funds this program. Specifically, the funding unit intended for funds to be spent by the end of the same fiscal year in which the awards were made, which was a challenge for many projects. The time from the request for applications to the announcement of funding decisions was typically 60-75 days. This time was needed to organize and distribute the applications to the reviewers; for the reviewers (all faculty with active research programs, teaching obligations, clinical obligations, and other administrative and supervisory roles) to score their assigned applications; to establish the internal reporting and monitoring tools to track the awards; and to communicate funding decisions with individualized feedback and instruction for each application. This timeline, however, typically left less than seven months in the fiscal year to conduct the proposed experiments. Inevitably, project delays arose for some proposals. While some delays could not be anticipated (e.g., the university’s closure of non-emergency operations during the early stages of the SARS-CoV-2 pandemic), others were due to experimental challenges. To meet demand, our office developed a process for receiving and accommodating well-justified project extension requests of up to one year. We recommend that units considering a core voucher program, particularly those with a similar fiscal- or calendar-year funding timeline, develop an upfront plan for receiving and evaluating extension requests. We also developed a policy for transferring funds to an account managed at the departmental level, though this carries the challenge of ensuring the funds are fully utilized and spent only on the intended core facility deliverables.

Another aspect of program administration was tracking and communicating award balances to voucher recipients. Through FY25, the ORT held the funds for voucher recipients and monitored monthly spending. It became apparent that some recipients, particularly trainees, were inexperienced in managing project budgets. Our office therefore developed reporting tools to communicate award balances and details of charges against the original award amount; a minimal sketch of this kind of balance reporting follows below. This added transparency not only allowed our office to plan for fund carryover requests but also gave voucher recipients the tools to manage their project budgets, reducing accidental overspending or underspending. The extended communication between program administration and awardees also provided additional opportunities to remind recipients of the acknowledgment requirements of both the core facilities involved in their research projects and the office funding the voucher program. However, managing and disbursing funds through our office increases the administrative and accounting support required to track awards and correct erroneous transactions. Units considering a core voucher program should weigh the availability of administrative support when deciding between transferring funds to the awardee’s department, and accepting the risk that funds will be spent in ways the program did not intend, or managing funds centrally through the office funding the program.
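As an illustration of the balance reporting described above (a minimal sketch only; the program’s actual tools are internal, and all names here are our own):

```python
# Hypothetical sketch of award-balance reporting as described above;
# the program's actual tools are internal, and these names are illustrative.
def award_balance(award_amount: float, core_charges: list[float]) -> dict:
    """Summarize spending against a voucher award from posted core invoices."""
    spent = sum(core_charges)
    return {
        "award_amount": award_amount,
        "spent_to_date": round(spent, 2),
        "remaining_balance": round(award_amount - spent, 2),
        "overspent": spent > award_amount,  # flag for follow-up with the recipient
    }


# Example: a $10,000 voucher with three core invoices posted so far.
print(award_balance(10_000.00, [2_450.00, 3_125.50, 1_200.00]))
```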

Conclusion

Despite statistical limitations in drawing causal conclusions about the program’s influence, anecdotal and post-award follow-up evidence suggest that the program is impactful for those who receive funds. Investing in core voucher programs may be increasingly important today, as changing federal funding priorities require prospective PIs to adopt new research models or technologies, a task often most cost-effectively accomplished through core facilities. We believe that investing in programs like the core voucher program not only supports critical research in faculty laboratories but also lays the foundation for collaboration with core facilities within an institution.


Acknowledgements

The authors thank Dr. Blossom Damania and the UNC-CH School of Medicine Office of Research for funding this program. We also acknowledge Annabelle Stein, who assisted in program development, Leah Combs and Skylar Gudasz for providing administrative support, the members of the UNC-CH Core Facility Advocacy Committee, specifically Dr. Richard Cheney, Dr. Li Qian, Dr. John Sondek, Dr. Terry Furey, Dr. Dominic Ciavatta, Dr. Zoe McElligott, Dr. Neeta Vora, and Dr. Chad Pecot as well as the numerous faculty recruited for secondary review of these applications over the years. We thank Dr. Robert Lang for his thoughtful review of this manuscript. Finally, we thank the UNC-CH SOM core facilities and their directors who have supported and invested their time in this program since its inception.

Financial Support / Conflict of Interest Disclosures

The authors declare no financial support or associated conflicts of interest.