Lessons learned from using linked administrative data to evaluate the Family Nurse Partnership in England and Scotland

Main Article Content

Francesca L Cavallaro
Rebecca Cannings-John
Fiona Lugg-Widger
Ruth Gilbert
Eilis Kennedy
Sally Kendall
Michael Robling
Katie L Harron


“Big data” – including linked administrative data – can be exploited to evaluate interventions for maternal and child health, providing time- and cost-effective alternatives to randomised controlled trials. However, using these data to evaluate population-level interventions can be challenging.

We aimed to inform future evaluations of complex interventions by describing sources of bias, lessons learned, and suggestions for improvements, based on two observational studies using linked administrative data from health, education and social care sectors to evaluate the Family Nurse Partnership (FNP) in England and Scotland.

We first considered how different sources of potential bias within the administrative data could affect results of the evaluations. We explored how each study design addressed these sources of bias using maternal confounders captured in the data. We then determined what additional information could be captured at each step of the complex intervention to enable analysts to minimise bias and maximise comparability between intervention and usual care groups, so that any observed differences can be attributed to the intervention.

Lessons learned include the need for i) detailed data on intervention activity (dates/geography) and usual care; ii) improved information on data linkage quality to accurately characterise control groups; iii) more efficient provision of linked data to ensure timeliness of results; iv) better measurement of confounding characteristics affecting who is eligible, approached and enrolled.

Linked administrative data are a valuable resource for evaluations of the FNP national programme and other complex population-level interventions. However, information on local programme delivery and usual care is required to account for biases that characterise those who receive the intervention, and to inform understanding of mechanisms of effect. National, ongoing, robust evaluations of complex public health interventions would be more achievable if programme implementation were integrated with improved national and local data collection and robust quasi-experimental designs.


“Big data” – including administrative data – offers promising avenues for evaluating child health interventions. Although randomised controlled trials (RCTs) are the gold standard for intervention evaluation, they are costly and time-consuming, and they include only a selected subset of service users who consent to enrolment. Observational studies using routinely collected administrative data offer potentially cost- and time-saving alternatives to RCTs, with the advantage of data being available for whole populations eligible for regional or national interventions. These studies offer exciting opportunities for ongoing evaluation of existing population health interventions to inform policy-making in a timely way, and large sample sizes enable detection of effects in subgroups or for rare outcomes. While administrative data can also be used to support long-term follow-up in RCTs, a growing number of observational evaluations in maternal and child health are designed using cohorts derived entirely from unconsented use of de-identified administrative data [1-4]. This includes two observational studies evaluating an intensive home-visiting programme for vulnerable younger mothers – the Family Nurse Partnership (FNP) – in England and Scotland [5, 6].

The main limitation of observational studies using administrative data to evaluate complex interventions is that researchers cannot randomly assign participants to ‘intervention’ and ‘control’ groups. Randomisation ensures those who do and do not receive an intervention are comparable at baseline, enabling observed differences to be attributed to the intervention. In contrast, when using observational data, important differences often exist between individuals who do and do not participate in an intervention, introducing the possibility of confounding (or indication bias). For example, if practitioners target enrolment to women with increased risks of adverse maternal and child health outcomes (e.g. with complex social needs), expectant mothers who are enrolled in the FNP will be more vulnerable than those not enrolled, leading to indication bias. Several quasi-experimental methods have been developed to help replicate randomisation in observational studies and minimise bias associated with confounding [7, 8]. Some of these methods can account for unmeasured as well as measured confounders; others rely on the assumption that all characteristics affecting intervention assignment and outcome have been measured [9]. Replication studies have shown that some RCT results can be reproduced with these methods using administrative health data [10, 11].

Linked administrative data from health, education and social care sectors have been used to support FNP evaluations in both England and Scotland (Table 1). The FNP aims to improve child health and development through intensive home-visiting from a dedicated Family Nurse [12]. It is usually offered to pregnant women aged ≤19, up to 28 weeks of pregnancy, although these conditions have been relaxed recently. The FNP has a comparatively strong evidence base, with three US RCTs showing benefits for maternal and child health outcomes [13-17]. In England, an RCT showed no evidence for an effect on short- and medium-term primary outcomes (including birthweight and maltreatment outcomes by age six), but did provide evidence of benefit on secondary outcomes including child development outcomes [18, 19]. Our observational evaluations were conducted to capture the real-world effect of the FNP, including smaller effects and effects among particularly vulnerable young mothers.

Study design / approach for dealing with confounding
- England: Propensity score matched cohorts comparing outcomes for mothers (and their children) ever enrolled in FNP with similar mothers (based on characteristics at enrolment) who were eligible but not enrolled, within the same area and time.
- Scotland: Cohort study comparing outcomes for mothers (and their children) ever enrolled in FNP with mothers eligible for FNP who were pregnant in a time/area when FNP was not offered. Regression models adjusted for maternal and infant characteristics.

Definition of cases (FNP mothers)
- England: Women aged 13-19 years at last menstrual period; enrolled in the FNP up to 28 weeks gestation; first delivery with live birth in an English NHS hospital (eligible if previous pregnancy ended in miscarriage or termination, but not if previous stillbirth).
- Scotland: Women aged ≤19 years at last menstrual period; enrolled in the FNP up to 28 weeks gestation; first-time mother-to-be (eligible if previous pregnancy ended in miscarriage, stillbirth or termination); living in an FNP-recruiting NHS Health board area.

Definition of controls
- England: Women aged 13-19 years at last menstrual period; antenatal booking appointment up to 28 weeks gestation; living in an FNP catchment area at the time of booking appointment; first live birth in an English NHS hospital (no previous deliveries).
- Scotland: Women aged ≤19 years at last menstrual period; antenatal booking appointment up to 28 weeks gestation; living in an FNP catchment area when FNP recruitment was not offered (in the 12 months prior to initiation of FNP recruitment, post FNP recruitment, or between periods of FNP recruitment during temporary suspensions due to caseload capacity); first live birth (no previous live births).

Study dates
- England: Births between 1 April 2010 and 31 March 2017 (FNP mothers and controls).
- Scotland: FNP mothers with antenatal booking appointment between 1 January 2010 and 31 March 2016; controls eligible for FNP with antenatal booking appointment between 1 January 2009 and 31 March 2016.

Geographical coverage
- England: 136/152 local authorities in England with an active FNP site.
- Scotland: 10/14 NHS Health boards in Scotland.

Data approvals for unconsented use of data
- England: Nottingham Research Ethics Committee, Department for Education, NHS Digital, and Confidentiality Advisory Group.
- Scotland: Public Benefit and Privacy Panel (PBPP) NHS and the Scottish Government Education Analytical Services (EAS); ethical review not required by South East Scotland Research Ethics Service.

Data sources for mothers and children
- England: Hospital Episode Statistics (HES); FNP Information System (IS); National Pupil Database (NPD).
- Scotland: NHS Scotland health datasets; FNP Scottish Information System (SIS); Education Analytical Services (EAS).

Maternal characteristics adjusted for
- England, health characteristics: Age at last menstrual period; Ethnicity; Area-level deprivation; Gestation at antenatal booking appointment; History of unplanned mental health-, adversity- and chronic condition-related hospital admissions*; History of Accident & Emergency attendance*.
- Scotland, health characteristics: Age at last menstrual period; Ethnicity; Area-level deprivation; Gestational age at antenatal booking; Ever dispensed medication for asthma or depression; Diabetes at antenatal booking appointment; Body Mass Index at antenatal booking; Current smoker at booking appointment; Drug misuse during pregnancy; Ever injected illegal drugs prior to pregnancy; Alcohol consumed in a typical week (recorded at booking appointment); Previous pregnancy.
- England, social care characteristics: Ever had a child protection plan or been a child looked after.
- Scotland, social care characteristics: Ever been on the child protection register; Looked after child before/at booking appointment.
- England, educational characteristics: Ever recorded as having Special Educational Needs; Ever received Free School Meals; Ever in IDACI bottom decile; Educational attainment at Key Stage 2 and 4; Ever excluded, in pupil referral unit or alternative provision; Ever persistently absent in a term.
- Scotland, educational characteristics: Ever had additional student needs; Ever received Free School Meals; Left school by antenatal booking appointment; Ever excluded from school.
- England, geographic characteristic: FNP site area.
- Scotland, geographic characteristic: Health board of residence.

Child outcomes described
- England: Preterm birth; Low birthweight; Mode of delivery; Stillbirth; Discharge to social services at birth; Unplanned hospital admissions for injury or maltreatment**; Unplanned hospital admissions (any diagnosis)**; Accident & Emergency attendances**; Referral to outpatient services (uptake and non-attendance)**; Looked after status***; Child in Need status***; Death**; Good level of development in the early years assessment; Educational attainment at Key Stage 1; Special Educational Needs status***; School attendance***.
- Scotland: Breastfeeding (at birth and at 6-8 weeks); Birthweight; Passive smoking in the home; Safe home environment†††; Preterm birth; Body Mass Index†††; Gross/fine motor skills; Registered with/attended dentist; Hospital admission for dental procedure†††; Hospital admission for serious injuries†††; Accident & Emergency attendances†††; Accidental injuries†††; Child development concerns†, †† (personal/social and behavioural difficulty; speech, language and communication concern; physical or motor impairment; vision concern/impairment; hearing concern/impairment); Student need concern††; Other student need††; More able pupil††; Child attainment at Primary 4 (5-6 years); Child protection investigations†††; Investigations requiring a case conference†††; Type of concern identified at case conference†††; Length of time on child protection register†††; Child registered as result of conferences†††; Child de-registered†††; Looked after status†, ††; Placement†††; Placed for adoption†††.

Maternal outcomes described
- England: Accident & Emergency attendances**; Unplanned adversity-related hospital admissions after childbirth**; Unplanned hospital admissions (any diagnosis) after childbirth**; Subsequent birth within 18 months of childbirth**; Death**; Return to education; Educational attainment at Key Stage 4 (where applicable).
- Scotland: Alcohol/substance misuse during pregnancy; Childcare use; Return to education within 24 months of child birth; Educational attainment; Subsequent birth†††; Inter-pregnancy interval†††.
Table 1: Description of observational studies evaluating the Family Nurse Partnership (FNP) using de-identified, linked administrative data in England and Scotland. *in the 2 years prior to 20 weeks of pregnancy. Adversity-related admissions include diagnoses of self-harm, substance misuse, and violence. **up to 2 years and 7 years after childbirth. ***between starting school and age 7. †up to 27–30 months after childbirth. ††4–5 years after childbirth. †††up to 2 years and 5–6 years of age after childbirth.

The objective of this paper is to describe some sources of bias common in observational evaluations using administrative data, using an exemplar of the English and Scottish evaluations of FNP that used cross-sectoral, linked administrative data. We describe the lessons learned from these evaluations to inform future studies evaluating child health interventions.



We determined the sources of bias and challenges inherent in observational evaluations using administrative data, describing lessons learned from our joint experience in conducting evaluations of the FNP in England and Scotland. Our aim was to help inform other studies using administrative data to evaluate complex interventions in health. We first considered different sources of bias within the data, systematically assessing potential biases chronologically at each stage of recruitment, enrolment, and data collection, and how these could affect evaluation results. We explored how each study design addressed these sources of bias using maternal characteristics captured in the data. We then determined what additional information could be collected at each step of the complex intervention to enable researchers to minimise bias and maximise comparability between intervention and control groups, to enable better estimation of intervention effects. The next section describes the data sources and study design used in each evaluation.

Data sources and study design

Both studies used similar approaches to construct a retrospective cohort of adolescent mothers, using de-identified, linked administrative data (Table 1). We used Hospital Episode Statistics in England and Maternity Inpatient and Day Case (SMR02) in Scotland, to identify all births in NHS hospitals to mothers aged ≤19 in similar time periods. Our studies included over 110,000 mothers (c. 26,000 in FNP) in England and over 8,000 (c. 3,000 in FNP) in Scotland [20, 21]. Mothers were linked to their children in hospital data [21, 22]. In Scotland, all mothers and children were additionally linked to General/Acute Inpatient and Day Case (SMR01), Child Health Systems Programme Pre-School and School. Mothers and their children were linked to education and children’s social care information (National Pupil Database in England, and Education Analytical Services in Scotland), including Children in Need and Children Looked After returns [5, 6]. Mothers and children enrolled in FNP were identified through linkage of hospital data to the FNP (Scottish) Information System.

We used two approaches to minimise biases due to differences between those enrolled or not enrolled in FNP (Table 2). In each study, we made use of information recorded in administrative data on characteristics likely to affect whether mothers were eligible, approached or enrolled in FNP (Figure 1). In England, we used propensity score matching of mothers enrolled in FNP to non-enrolled controls in the same time period and area (Table 1), based on characteristics recorded before enrolment or 20 weeks of pregnancy. Under this approach, observed differences in maternal and child outcomes can be attributed to FNP enrolment, provided there is no unmeasured confounding [9].
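As a minimal sketch of the mechanics (on synthetic data; the variable names, the data-generating model and the 0.2-SD caliper are illustrative assumptions, not the study's specification), propensity score matching involves estimating each mother's probability of enrolment from pre-enrolment characteristics, matching enrolled to non-enrolled records on that score, and checking covariate balance:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical maternal characteristics measured before enrolment
age = rng.integers(13, 20, n).astype(float)   # age at last menstrual period
deprivation = rng.normal(0, 1, n)             # area-level deprivation score

# Simulated enrolment: younger, more deprived mothers more likely enrolled
true_logit = -0.3 * (age - 16) + 0.8 * deprivation - 1.0
enrolled = rng.random(n) < 1 / (1 + np.exp(-true_logit))

def propensity_logit(X, y, iters=25):
    """Logistic regression via Newton-Raphson; returns the linear predictor."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta = np.zeros(X1.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X1 @ beta))
        H = X1.T @ (X1 * (p * (1 - p))[:, None])
        beta += np.linalg.solve(H, X1.T @ (y - p))
    return X1 @ beta

ps = propensity_logit(np.column_stack([age, deprivation]), enrolled.astype(float))

# Greedy 1:1 nearest-neighbour matching on the propensity score logit,
# without replacement, within a caliper of 0.2 standard deviations
caliper = 0.2 * ps.std()
controls = list(np.where(~enrolled)[0])
pairs = []
for t in np.where(enrolled)[0]:
    d = np.abs(ps[controls] - ps[t])
    j = int(d.argmin())
    if d[j] <= caliper:
        pairs.append((t, controls.pop(j)))

# Balance check: the standardised mean difference in deprivation
# should shrink after matching
smd_before = (deprivation[enrolled].mean()
              - deprivation[~enrolled].mean()) / deprivation.std()
matched_t = [t for t, c in pairs]
matched_c = [c for t, c in pairs]
smd_after = (deprivation[matched_t].mean()
             - deprivation[matched_c].mean()) / deprivation.std()
```

In the actual evaluation, matching was additionally constrained to the same area and time period and used many more characteristics (Table 1); the sketch only shows the estimate-match-check-balance cycle.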

Figure 1: Study designs for the evaluation of the FNP in England and Scotland, with potential biases.

Indication bias due to FNP nurses deciding which mothers to approach (unmeasured confounding)
- Description: Family nurses prioritise the more vulnerable mothers among those meeting eligibility criteria, so those enrolled may have been more likely than those not enrolled to experience adverse outcomes.
- Impact on effect estimates: Underestimation of the effect of the intervention.
- Information needed to avoid or assess likely bias: Knowledge of which characteristics were prioritised for enrolment in each site (including start and end dates of these prioritisation criteria); availability of data on these characteristics and other important maternal characteristics for adjustment purposes.

Misclassification bias of eligibility for FNP
- Description: In analyses, mothers may have been assigned to different groups than the ones they should be in, because eligibility is incorrectly defined.
- Impact on effect estimates: Bias in either/both directions: random misclassification is likely to underestimate the intervention effect, but systematic misclassification may under- or over-estimate it.
- Information needed to avoid or assess likely bias: Detailed recording of programme meta-data, including site activity dates and geography, in order to correctly define eligible groups of mothers who were and were not enrolled or eligible for the intervention.

Consent bias for enrolment in FNP
- Description: Mothers who were offered the intervention but declined may have been different to those who were not offered the intervention.
- Impact on effect estimates: Bias in either/both directions. Those who were offered the intervention but declined may be a mixture of the most vulnerable and the least vulnerable mothers.
- Information needed to avoid or assess likely bias: Individual-level or aggregate data on characteristics of all mothers-to-be offered enrolment, and of those who declined vs. accepted enrolment.

Linkage bias
- Description: Linkage error (e.g. missed links or false links*) can mean that subgroups of the population were differentially excluded from the analysis cohort, or had missing data on variables obtained through linkage. Missed links can also lead to misclassification bias (see above).
- Impact on effect estimates: Bias in either/both directions. It is difficult to ascertain the direction of effect, particularly when there are multiple complex linkages and the impacts of linkage errors work in opposite directions.
- Information needed to avoid or assess likely bias: Detailed information about the characteristics of mothers more or less likely to link (subgroup-specific linkage rates), in order to identify groups that might be most affected by linkage error.

Measurement bias
- Description: Usual care for mothers not enrolled was not captured; some outcomes were measured by different professionals depending on whether the mother was enrolled in the intervention or not.
- Impact on effect estimates: Bias in either direction. FNP nurses may have been more likely to record positive outcomes if they had built a stronger relationship with enrolled mothers, but might also have been more likely to pick up on areas of need (ascertainment/surveillance bias).
- Information needed to avoid or assess likely bias: Improved, high-quality data on community health contacts at the individual level (including e.g. public health or adolescent pregnancy midwife services, average number of health visitor contacts, number of children's centres).
Table 2: Potential sources of bias in evaluations of FNP in England and Scotland using linked administrative data and information needed to assess their likely extent. *Missed links occur when a mother in the FNP Information System data is not linked to her health/education record and therefore appears twice in the data – once as an FNP mother with no linked health/education data, and once in the health data as being a mother who was not enrolled in the FNP; false links are likely to be less common, and occur when an FNP record is linked to the wrong health/education record, causing a mother not enrolled in the FNP to appear as though she was enrolled.

In Scotland, we used a different natural experiment study design to compare mothers enrolled in FNP with all mothers who met FNP eligibility criteria but who were pregnant at a time when the programme was not recruiting in their area (Table 1). Mothers enrolled in FNP and controls were not matched. This study used multivariable regression to adjust for characteristics measured at antenatal booking appointment that differed between mothers enrolled in FNP and controls, aiming to ensure comparability between groups.
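The logic of regression adjustment can be illustrated with a small synthetic example (all names, coefficients and the single-confounder setup below are hypothetical, not study estimates): a naive comparison of group means is distorted by a vulnerability measure that drives both enrolment and outcomes, while a multivariable regression including that measure recovers the simulated intervention effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Hypothetical confounder driving both enrolment and the outcome
vulnerability = rng.normal(0, 1, n)
p_enrol = 1 / (1 + np.exp(-(vulnerability - 0.5)))
enrolled = (rng.random(n) < p_enrol).astype(float)

# Simulated outcome: true intervention effect is +0.3, but vulnerability
# lowers the outcome, so enrolled mothers look worse when unadjusted
outcome = 0.3 * enrolled - 1.0 * vulnerability + rng.normal(0, 1, n)

# Naive (confounded) comparison of group means
naive_diff = outcome[enrolled == 1].mean() - outcome[enrolled == 0].mean()

# Multivariable regression (OLS) adjusting for the measured confounder
X = np.column_stack([np.ones(n), enrolled, vulnerability])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
adjusted_effect = coef[1]  # close to the simulated +0.3
```

The Scottish evaluation adjusted for many more booking-appointment characteristics; the point of the sketch is only that adjustment removes confounding by measured variables, and cannot address unmeasured ones.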

Both methods have advantages and disadvantages. The unmatched comparison retained all mothers enrolled in FNP in the analysis, but excluded those who were eligible but not enrolled during a time when FNP was offered. The controls were all eligible mothers at times when FNP was not enrolling mothers into the programme, some of whom would have been more vulnerable mothers likely to be enrolled had the FNP been offered [20]. The propensity score analysis more closely matched intervention and control mothers, but may limit the generalisability of findings by excluding some mothers enrolled in FNP.

Patient and public involvement

In England, we held several workshops with young mothers while designing our study, including mothers who were and were not enrolled in the FNP. Both studies engaged a lay representative on the Study Steering Committees. In Scotland, patient and public engagement was carried out separately by the Scottish Government [23]. Much of the engagement work that the study team conducted as part of the RCT 2-6 year follow-up [19] (running in parallel) was transferable to the Scottish evaluation.


Potential sources of bias in FNP evaluations

There were important differences in the characteristics of mothers enrolled in the FNP and those who were not, in England and Scotland (Appendix Tables 1 and 2) [20]. Achieving comparability between these two groups was at the core of the study design for each evaluation. Table 2 summarises the potential sources of bias arising and their likely impact on effect estimates. Biases such as misclassification or consent bias are not intrinsic to administrative data, but in practice often affect evaluation studies using such data. In addition, there may be other biases operating that we did not identify. The following sections explore how each study design addressed these potential sources of bias.

Indication bias due to unmeasured confounders

Our different approaches – propensity score matching in England and unmatched adjustment for maternal characteristics in Scotland – both aimed to ensure comparability between mothers enrolled in FNP and controls, and therefore to minimise biases due to confounding in order to attribute observed differences to the intervention effect. However, assessing the extent to which indication bias was avoided was challenging: although the propensity score matching approach achieved balanced characteristics for measured variables, it was by definition not possible to evaluate the balance between groups in terms of unmeasured characteristics. We cannot know if groups were balanced on other important characteristics also associated with both enrolment and outcomes. For example, some important vulnerabilities (such as family violence) may not be disclosed until a trusting relationship has been built with providers, and may not be captured in administrative data at all [24-26].

In England, FNP eligibility criteria were broad (all first-time mothers aged ≤19 living in an FNP site catchment area and enrolling before 28 weeks of pregnancy were eligible). Since resources were insufficient to guarantee universal offer (only ~25% of eligible mothers were enrolled), individual FNP sites were encouraged to develop their own local criteria for targeting, with many sites prioritising younger adolescent mothers. Knowledge of sites’ targeting strategies over time would have helped us assess to what extent these strategies were successful in enrolling their target group and in improving outcomes.

Misclassification bias due to lack of programme delivery data

In both England and Scotland, we needed to define the population of teenage mothers who would have been eligible for the FNP, but who were not enrolled due to living in an area in which the FNP was not offered at the time of their pregnancy. If information on recruitment dates was inaccurate, misclassification bias could occur, where mothers were categorised as being eligible for the FNP when they were not, or vice versa (Figure 1). Site activity dates and geography were key to defining these populations, but this information was not readily available and is not typically captured in administrative datasets. In England, the FNP was rolled out in >130 local authorities, at different times. In Scotland, the FNP was rolled out in 10 health boards over a six-year period with different teams and cohorts occurring within sites and over time. Sites merged and split over time, site boundaries moved, and sites discontinued or joined the FNP at different times.

To address this challenge, in England we drew up a tentative list of site dates and catchment areas based on FNP Information System data, which was reviewed in detailed conversation with the FNP National Unit. In Scotland, the distribution of enrolment in FNP across health board areas over time had been compiled during the assessment of the evaluation [27], but required further detail from the FNP Scottish Information System team and verification after the enrolment dates had been received. Dates when recruitment was temporarily suspended due to caseload capacity being reached were also ascertained.

Consent bias due to lack of information on mothers who declined the intervention

It was not possible in either country to identify eligible mothers who were offered enrolment but did not consent to participate. In England, data on mothers who declined the programme were not collected; as a result, these mothers were included in the control group. In Scotland, the Public Benefit and Privacy Panel did not permit the unconsented linkage of data on individuals who had declined the programme, even though data on these individuals were available. This could lead to consent bias: if mothers who declined were more vulnerable than those who accepted, the intervention effect would be underestimated. English FNP sites had limited aggregate information on these mothers. Some sites reported that, although a small number were particularly vulnerable (e.g. involved with social care services), the majority of mothers who declined had strong social support.

Missing data due to linkage bias

Linkages of health, education and social care data were performed by NHS Digital and the Department for Education in England, and Electronic Data Research and Innovation Service in Scotland. These organisations provided limited information on linkage quality, which limited our ability to assess the extent to which linkage error may have caused bias. In England, 83% of adolescent mothers in our cohort linked to the National Pupil Database. Some unlinked mothers would genuinely not have been captured in this database due to attending an independent school or a school in a different country. We were unable to evaluate the extent of missed links (mothers who were in the National Pupil Database, but who we could not link) among the 17% of unlinked mothers. For Scotland, match rates for linkage were not provided; as the cohort was created from the health records, we assumed all records were linked to the health datasets. However, 14% of mothers were potentially not linked to any Education Analytical Services dataset.

The extent to which these missed links lead to bias depends on how the unlinked records are dealt with in analysis [28]. Determining the potential direction of bias is complex, particularly when successive linkages are performed (such as FNP data linked to health data, then to educational data). In both countries, the control group was created by excluding those who had linked to FNP Information Systems (Figure 1). In England, hospital records for the 1.5% of FNP mothers who did not link to Hospital Episode Statistics would mistakenly have been treated as belonging to the control group. Similarly in Scotland, the 1.5% of FNP mothers who did not link to SMR02 would have been excluded from the FNP arm [21]. This lack of certainty around the “true” denominator means that linkage errors could contribute to misclassification bias (Table 2).

Bias may be introduced if the success of linkage depends on characteristics associated with outcomes. Individuals who should have, but did not, link (missed links) may have higher rates of adverse outcomes [29]. For example, children of Black or Asian ethnicity often have lower linkage rates and ethnic group is associated with risk of adverse outcomes [30]. Differential exclusion of some groups due to missed links may therefore underestimate the intervention effect.
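A back-of-envelope calculation (with hypothetical outcome risks and a deliberately exaggerated missed-link rate, chosen only for illustration) shows how missed links that misclassify intervention mothers as controls dilute a risk difference toward the null:

```python
# Hypothetical outcome risks: 15% among FNP mothers, 20% among true controls
p_fnp, p_ctrl = 0.15, 0.20
n_fnp, n_ctrl = 3_000, 9_000
missed = 0.10  # fraction of FNP mothers whose FNP record fails to link

true_diff = p_fnp - p_ctrl  # -0.05: the effect we would like to estimate

# Missed-link FNP mothers are counted as controls, so the observed control
# risk is a mixture of true controls and misclassified FNP mothers
obs_ctrl = (n_ctrl * p_ctrl + missed * n_fnp * p_fnp) / (n_ctrl + missed * n_fnp)
obs_diff = p_fnp - obs_ctrl  # approx -0.048: attenuated toward zero
```

In the actual evaluations only around 1.5% of FNP mothers failed to link, so the dilution would be far smaller, but the direction of the bias is the same.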

Missing data – a problem that is well characterised in observational studies – may be introduced in linked administrative data studies when records fail to link. Moreover, certain characteristics were only available for a sub-sample of mothers or their children because different data sources covered different periods. In both countries, we used a complete case analysis whereby education and social care outcomes were evaluated for mothers/children linking to the relevant records (as well as multiple imputation to retain mothers with missing data as a secondary/sensitivity analysis).

In England, we attempted to identify groups of mothers who were more at risk of linkage bias or missing data by comparing the characteristics of FNP mothers who did and did not link with hospital and educational records (Appendix Table 3).

Outcome ascertainment bias and interpretation of outcomes reported in administrative data

Outcomes measured differently between the FNP and control groups may induce outcome ascertainment bias. For example, increased contact with families enrolled in FNP may lead to lower thresholds for referrals to social services; an observed lack of effect, or even an increased risk of maltreatment in the FNP group, therefore makes it difficult to determine whether the true risk of maltreatment was lower, similar, or higher than in the control group. Moreover, in Scotland, child health outcomes measured at 10 days and 6-8 weeks postpartum were recorded by Family Nurses for enrolled mothers and by health visitors for controls, introducing further potential ascertainment bias if, for example, Family Nurses were more likely to record previously known issues not obvious during the checks, or less likely to record issues that were already being managed.

Outcomes captured in administrative data may also be proxies for the outcomes of real interest, making interpretation challenging for several reasons. First, determining whether an outcome reflects a positive or negative effect can be complex. For example, higher child A&E attendance rates may represent a higher incidence of accidents, or more appropriate care-seeking behaviour by parents. This challenge is not specific to observational studies: indeed, the England RCT highlighted difficulties in interpreting maltreatment outcomes recorded in administrative data [19]. In Scotland, outcomes for which the study team were unable to pre-specify a hypothesised direction of effect were considered descriptive.

Some outcomes which are central to the FNP logic model – e.g. quality of parent-child relationships – were not captured in administrative data. Valid and reliable assessment of subjective and behavioural outcomes, often central to home-visiting programmes, usually requires prospective measurement using specialist tools; such measures are not usually recorded in routine datasets.

Lastly, ascertainment of usual care received by control mothers is important for interpreting results. Usual care for adolescent mothers differs substantially between local authorities (including varying numbers of health visitor contacts and additional services) [31, 32]. In England, although national data on health visiting are collected, they are not yet complete, nor disaggregated by maternal age [33, 34]. In Scotland, community health data are underdeveloped compared to hospital data. Bespoke data collection was not feasible within the timeframe of our studies: we were therefore unable to include a quantitative measure of usual care in our models, limiting the precision of our intervention effect estimates. Understanding variations in usual care provision among both mothers enrolled in FNP and controls is necessary to better estimate the incremental effect of FNP and to account for any unexpected variation in usual care during the evaluation period. Such information would allow more nuanced interpretation of results, including, for example, whether the programme worked better in one local area than another.

Data approval and access delays

It took four years in England and five years in Scotland from data applications being submitted to the final linked dataset being available for analysis (Appendix Tables 4 and 5). Although not inherent to administrative data, delays are a widespread issue across countries with large administrative datasets: lengthy application processes and delays in receiving administrative data have been widely documented [35-38]. Cross-sectoral data linkage adds further delays, including data providers sending identifier information to trusted third parties for linkage, and migrating data to a single trusted research environment. In Scotland, requirements to create additional data sharing/processing agreements, memoranda of understanding for each of the 10 health boards, and a leaflet on data usage for new FNP clients [21] contributed to delays. Redeployment of staff during the pandemic, and while waiting for data, caused further delays. These delays encroached on analysis time: linked data became available only one month before the initial grant endpoint in England, an insufficient period within which to deliver results based on extensive administrative data cleaning, assessment of linkage quality, construction of study cohorts, and optimisation of quasi-experimental approaches.

Discussion: suggested improvements for observational evaluations of complex interventions using linked administrative data

Lessons learned and suggestions for using administrative data to evaluate complex interventions are summarised in Table 3.

Challenge: Evaluation of complex interventions requires detailed national and local data on programme implementation – who is eligible, approached and enrolled in the intervention – with similar information for usual care. This information is crucial to minimise biases, enable fair and robust comparisons, and increase confidence that differences in outcomes can be attributed to the intervention, rather than to the characteristics of the people selected for it.
Suggestions: Researchers should work in partnership with practitioners, commissioners and communities to ensure that evaluations are integrated into the design and implementation of interventions. Programme managers and care leads should document detailed information about programme delivery and usual care (including activity dates and catchment area), across local areas and over time. Programme managers should ensure detailed information is recorded on the characteristics of those who are approached and offered an intervention, and of those who declined. Programme managers should provide consistent guidelines about programme targeting and prioritisation where resources are insufficient for a universal offer. Targeting should be documented in detail, including where guidelines change over time or differ across local areas.

Challenge: Information on data linkage quality can be limited, making it challenging to define accurate denominators and comparator groups.
Suggestion: Linkage organisations should provide detailed data on linkage quality (see the GUILD reporting guidance [41]).

Challenge: Constructing a comparable control group is limited by measured characteristics, introducing the possibility of unmeasured confounding.
Suggestion: Researchers should assess the likelihood of unmeasured confounding.

Challenge: Interpreting outcomes reported in administrative data – particularly regarding health or social services contact – is challenging without accurate and complete measures of need.
Suggestion: Researchers should conduct, and funders should fund, process evaluations and qualitative studies alongside quantitative impact analyses.

Challenge: Data approval and access delays may substantially reduce data analysis time, even when applications are submitted several years before the planned grant start date.
Suggestion: Data providers should streamline processes to minimise data access delays and enable timely evidence for policy-making.

Table 3: Challenges, lessons learned, and suggestions for improvement for observational studies using administrative data to evaluate complex interventions.

Assess the likelihood of unmeasured confounding

As well as careful comparisons between the characteristics of cases and controls at baseline/enrolment, we suggest researchers reflect on which unavailable characteristics would have been important to control for, and use sensitivity analyses with different control cohorts as well as – where possible – alternative approaches altogether to examine the stability of results (e.g. sensitivity methods accounting for correlations of unmeasured confounders, calculation of E-values, or quantitative bias analyses [39]).
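As an illustration of one of these checks, the E-value of VanderWeele and Ding [39] can be computed directly from a point-estimate risk ratio. The sketch below is illustrative only (the risk ratio shown is not a result from either evaluation):

```python
import math

def e_value(rr: float) -> float:
    """E-value for a point-estimate risk ratio (VanderWeele & Ding [39]).

    The E-value is the minimum strength of association, on the risk-ratio
    scale, that an unmeasured confounder would need with both the
    intervention and the outcome to fully explain away the observed effect.
    """
    if rr <= 0:
        raise ValueError("risk ratio must be positive")
    if rr < 1:          # protective effects: invert before applying the formula
        rr = 1 / rr
    return rr + math.sqrt(rr * (rr - 1))

# Hypothetical observed risk ratio of 1.5 for an outcome
print(round(e_value(1.5), 2))  # 2.37
```

An E-value of 2.37 would mean an unmeasured confounder associated with both enrolment and the outcome by risk ratios of at least 2.37 each could explain away the estimate; weaker confounding could not.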

Document programme delivery information and usual care

Programme managers should document intervention delivery prospectively, including activity dates, catchment areas and eligibility. Usual care should also be documented by care leads in a complete, standardised way, by area and over time. A searchable, central repository for reporting programme activity would support knowledge of programme delivery and usual care, and help identify eligible populations in quasi-experimental methods. Such documentation would facilitate ongoing evaluations of what works, where, and for whom, for all interventions, contributing to a paradigm shift toward a culture of embedded, near real-time evaluation supporting evidence-based policy-making.

Document programme targeting

Guidelines for targeting interventions to the eligible population should be determined consistently within local areas, and explicitly documented by programme managers and care leads. This would enable researchers to understand which key characteristics need to be adjusted for, and support evaluation of which prioritisation strategies are most effective. Targeting information could be enhanced using linked primary care data and information on household members (e.g. fathers), filling important gaps in our understanding of how children and families interact with services [40].

Provide data on linkage quality

Detailed conversations with organisations performing linkages are crucial for understanding linkage approaches and the decisions that may lead to linkage errors. Linkage organisations should provide data on linkage quality (e.g. match strength per linkage step, stratified by important characteristics, and the criteria used at each step [41]) to help researchers better understand the linkage rates obtained. Identifying biases from linkage error can be complex and study-specific; however, examining the percentage of missed links stratified by important characteristics is one initial step researchers can take to help identify potential bias due to the exclusion of some groups [29, 30]. We also encourage researchers to report linkage rates to enable comparisons between specific populations or datasets.
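The stratified check described above amounts to tabulating the share of successfully linked records within each subgroup. A minimal sketch, using synthetic records rather than any real linkage output:

```python
from collections import defaultdict

def linkage_rates(records):
    """Linkage rate per subgroup.

    `records` is an iterable of (group, linked) pairs, where `linked` is a
    bool indicating whether the record matched in the linked dataset.
    Returns a dict mapping each group to its proportion of linked records.
    """
    totals = defaultdict(int)
    linked = defaultdict(int)
    for group, is_linked in records:
        totals[group] += 1
        linked[group] += int(is_linked)
    return {g: linked[g] / totals[g] for g in totals}

# Synthetic example: linkage stratified by a characteristic such as ethnic group
records = ([("A", True)] * 95 + [("A", False)] * 5
           + [("B", True)] * 80 + [("B", False)] * 20)
print(linkage_rates(records))  # {'A': 0.95, 'B': 0.8}
```

A markedly lower rate in one stratum (here, group B) flags a group at risk of under-representation in the linked cohort, and hence a potential source of bias.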

Conduct process evaluation and qualitative research alongside quantitative evaluation

Researchers should conduct process evaluations and qualitative research – funded by research funders – to provide a better understanding of mechanisms of effect and explanations of observed effects [42]. For example, in-depth interviews with parents, nurses and commissioners would contribute to an explanatory model of how data are collected and used on the ground, how programme criteria are developed, and the extent to which families are involved in developing new programmes that meet their needs. In addition, collecting richer quantitative data on short- and long-term outcomes thought to be affected by early interventions (such as parent-child interaction, and child emotional and developmental outcomes) would help us understand programme effects on these important outcomes.

Streamline processes to minimise data access delays

Recommendations for improving timeliness of access to data for research in the public good have been outlined elsewhere, including streamlining applications across different data providers, consolidating trusted research environments to enable reuse of linked data, and increasing capacity among data providers [37, 43]. Reducing data access delays is crucial for more efficient use of research funds, and more timely research findings to support evidence-based policy-making (for observational studies and RCTs using administrative data).


Linkage of administrative data presents exciting opportunities for efficient evaluation of large-scale, complex public health interventions [44]. However, a lack of detailed data on how programmes are defined and how they are adapted locally, alongside the other important challenges outlined here, limits the success of these approaches. This can lead to difficulties in interpreting results, contradictory or unintuitive findings, and continuing uncertainty about the effectiveness of interventions [45, 46].

Improved information on programme delivery, targeting, and important confounders, alongside careful design of observational evaluations, implementation of quasi-experimental methods and interpretation of results, could help facilitate ongoing evaluations that are integrated into the design and roll-out of large-scale interventions. Integration of research into system-wide practice is key: innovative approaches such as experimental birth cohorts that are designed to evaluate local interventions in real time may also help generate meaningful evidence on the effectiveness of programmes to improve maternal and child health [47, 48]. Reducing data delays would also help realise the efficiency of using administrative data rather than conducting RCTs. Findings of intervention evaluations should help stimulate exploration with practitioners about how programmes can be improved. These suggestions are particularly important for understanding the effectiveness of large new investments such as the Start for Life offer in England.


Acknowledgements

The authors are grateful to Jan van der Meulen for his contribution to identifying and dealing with challenges in the England evaluation.

Statement on conflicts of interests

The authors declare no competing interests.

Ethics statement

In England, support was obtained from Nottingham Research Ethics Committee [ref 18/EM/0014], NHS Digital [ref NIC136916], the Department for Education [ref DR190430.02] and the Confidentiality Advisory Group [ref 18/CAG/0013]. In Scotland, NHS Ethical Review was not required by South-East Scotland Research Ethics Service as it was considered a service evaluation. Approval from the Public Benefit and Privacy Panel was granted [ref 1516-0040], and from the Education Analytical Services.

Data availability statement

We are unable to share the individual data used for this study. English Hospital Episode Statistics and FNP data can be requested through NHS Digital, National Pupil Database data can be requested through the Department for Education. Scottish health and education data can be requested through the electronic Data Research and Innovation Service, Public Health Scotland.


Abbreviations

FNP Family Nurse Partnership
NHS National Health Service
RCT Randomised Controlled Trial

Funding Statement

The FNP evaluation in England was funded by the NIHR Health Services Research and Delivery panel (17/99/19), and the FNP evaluation in Scotland was funded by the Scottish Government. This research was supported in part by the NIHR Great Ormond Street Hospital Biomedical Research Centre. The Centre for Trials Research at Cardiff University receives funding from Health and Care Research Wales and Cancer Research UK. The work was undertaken with the support of The Centre for the Development and Evaluation of Complex Interventions for Public Health Improvement (DECIPHer), a UK Clinical Research Collaboration (UKCRC) Public Health Research Centre of Excellence. KH is funded in part by the Wellcome Trust (212953/Z/18/Z). RG is funded in part through the NIHR Children and Families Policy Research Unit and Senior Investigator Award, and the Health Data Research UK (grant No. LOND1), which is funded by the UK Medical Research Council and eight other funders.


References

  1. Green BL, Sanders MB, Tarte J. Using administrative data to evaluate the effectiveness of the Healthy Families Oregon home visiting program: 2-year impacts on child maltreatment & service utilization. Children and Youth Services Review. 2017;75:77–86. 10.1016/j.childyouth.2017.02.019

  2. Horak S, Ward C. Evaluating a state child care assistance program using administrative data. Evaluation and Program Planning. 2022;92:102094. 10.1016/j.evalprogplan.2022.102094

  3. Sabo S, Butler M, McCue K, Wightman P, Pilling V, Celaya M, et al. Evaluation protocol to assess maternal and child health outcomes using administrative data: a community health worker home visiting programme. BMJ Open. 2019;9(12):e031780. Epub 2019/12/13. 10.1136/bmjopen-2019-031780. PubMed 31826891; PubMed Central PMCID: PMC6924704.

  4. Welsh Government. Analysis of Flying Start outcomes using linked data: emerging findings 2021 [June 2022]. Available from: https://gov.wales/sites/default/files/pdf-versions/2021/2/4/1614245731/analysis-flying-start-outcomes-using-linked-data-childcare-and-foundation-phase-baseline-assessments.pdf.

  5. Cavallaro FL, Gilbert R, Wijlaars L, Kennedy E, Swarbrick A, van der Meulen J, et al. Evaluating the real-world implementation of the Family Nurse Partnership in England: protocol for a data linkage study. BMJ Open. 2020;10(5):e038530. 10.1136/bmjopen-2020-038530.

  6. Lugg-Widger FV, Robling M, Lau M, Paranjothy S, Pell J, Sanders J, et al. Evaluation of the effectiveness of the Family Nurse Partnership home visiting programme in first time young mothers in Scotland: a protocol for a natural experiment. International Journal of Population Data Science. 2020;5(1). 10.23889/ijpds.v5i1.1154.

  7. Tugwell P, Knottnerus JA, McGowan J, Tricco A. Big-5 Quasi-Experimental designs. Journal of Clinical Epidemiology. 2017;89:1–3. 10.1016/j.jclinepi.2017.09.010.

  8. Craig P, Cooper C, Gunnell D, Haw S, Lawson K, Macintyre S, et al. Using natural experiments to evaluate population health interventions: new Medical Research Council guidance. Journal of Epidemiology and Community Health. 2012;66(12):1182. 10.1136/jech-2011-200375.

  9. Austin PC. An Introduction to Propensity Score Methods for Reducing the Effects of Confounding in Observational Studies. Multivariate Behav Res. 2011;46(3):399-424. Epub 06/08. 10.1080/00273171.2011.568786. PubMed 21818162.

  10. Sims S, Anders J, Zieger L. The Internal Validity of the School-Level Comparative Interrupted Time Series Design: Evidence From Four New Within-Study Comparisons. Journal of Research on Educational Effectiveness. 2022:1-22. 10.1080/19345747.2022.2051652.

  11. Wing K, Williamson E, Carpenter JR, Wise L, Schneeweiss S, Smeeth L, et al. Medications for chronic obstructive pulmonary disease: a historical non-interventional cohort study with validation against RCT results. 2021;25:51. 10.3310/hta25510.

  12. Olds DL, Hill PL, O’Brien R, Racine D, Moritz P. Taking preventive intervention to scale: The nurse-family partnership. Cognitive and Behavioral Practice. 2003;10(4):278–90. 10.1016/S1077-7229(03)80046-9.

  13. Eckenrode J, Ganzel B, Henderson CR, Jr., Smith E, Olds DL, Powers J, et al. Preventing child abuse and neglect with a program of nurse home visitation: the limiting effects of domestic violence. Jama. 2000;284(11):1385-91. Epub 2000/09/16. 10.1001/jama.284.11.1385. PubMed 10989400.

  14. Olds DL, Henderson CR, Jr., Kitzman H. Does prenatal and infancy nurse home visitation have enduring effects on qualities of parental caregiving and child health at 25 to 50 months of life? Pediatrics. 1994;93(1):89-98. Epub 1994/01/01. PubMed 8265329.

  15. Olds DL, Kitzman H, Hanks C, Cole R, Anson E, Sidora-Arcoleo K, et al. Effects of nurse home visiting on maternal and child functioning: age-9 follow-up of a randomized trial. Pediatrics. 2007;120(4):e832–45. Epub 2007/10/03. 10.1542/peds.2006-2111. PubMed 17908740; PubMed Central PMCID: PMC2839449.

  16. Kitzman H, Olds DL, Knudtson MD, Cole R, Anson E, Smith JA, et al. Prenatal and/or Infancy Nurse Home Visiting and 18-Year Outcomes of a Randomized Trial. Pediatrics. 2019. Epub 2019/11/22. 10.1542/peds.2018-3876. PubMed 31748254.

  17. Olds DL, Robinson J, Pettitt L, Luckey DW, Holmberg J, Ng RK, et al. Effects of home visits by paraprofessionals and by nurses: age 4 follow-up results of a randomized trial. Pediatrics. 2004;114(6):1560–8. Epub 2004/12/03. 10.1542/peds.2004-0961. PubMed 15574615.

  18. Robling M, Bekkers M-J, Bell K, Butler CC, Cannings-John R, Channon S, et al. Effectiveness of a nurse-led intensive home-visitation programme for first-time teenage mothers (Building Blocks): a pragmatic randomised controlled trial. The Lancet. 2016;387(10014):146–55. 10.1016/S0140-6736(15)00392-X

  19. Robling M, Lugg-Widger F, Cannings-John R, Sanders J, Angel L, Channon S, et al. The Family Nurse Partnership to reduce maltreatment and improve child health and development in young children:the BB:2–6 routine data-linkage follow-up to earlier RCT. Public Health Research. 2021;9(2). 10.3310/phr09020

  20. Cavallaro F, Gilbert R, Wijlaars L, Harron K. OP44 Are the most vulnerable mothers in England being targeted for additional support during pregnancy and early motherhood? An analysis of characteristics of enrolment in the family nurse partnership using linked administrative data. Journal of Epidemiology and Community Health. 2021;75(Suppl 1):A21–A. 10.1136/jech-2021-SSMabstracts.44

  21. Cannings-John R, Lugg-Widger F, Lau M, Paranjothy S, Pell J, Sanders J, et al. Family Nurse Partnership evaluation: methods and process: Scottish Government; 2020 [June 2022]. Available from: https://www.gov.scot/publications/evaluation-family-nurse-partnership-scotland-methods-paper-process-success-linkages/pages/1/.

  22. Harron K, Gilbert R, Cromwell D, van der Meulen J. Linking Data for Mothers and Babies in De-Identified Electronic Health Data. PLOS ONE. 2016;11(10):e0164667. 10.1371/journal.pone.0164667.

  23. Revaluation of the Family Nurse Partnership in Scotland: Scottish Government; 2019 [June 2022]. Available from: https://www.gov.scot/publications/revaluation-family-nurse-partnership-scotland/.

  24. Feder GS, Hutson M, Ramsay J, Taket AR. Women exposed to intimate partner violence: expectations and experiences when they encounter health care professionals: a meta-analysis of qualitative studies. Arch Intern Med. 2006;166(1):22-37. Epub 2006/01/13. 10.1001/archinte.166.1.22. PubMed 16401807.

  25. McTavish JR, Kimber M, Devries K, Colombini M, MacGregor JCD, Wathen N, et al. Children’s and caregivers’ perspectives about mandatory reporting of child maltreatment: a meta-synthesis of qualitative studies. BMJ Open. 2019;9(4):e025741. 10.1136/bmjopen-2018-025741

  26. Lewis NV, Feder GS, Howarth E, Szilassy E, McTavish JR, MacMillan HL, et al. Identification and initial response to children’s exposure to intimate partner violence: a qualitative synthesis of the perspectives of children, mothers and professionals. BMJ Open. 2018;8(4):e019761. 10.1136/bmjopen-2017-019761

  27. Wimbush E, Craig P, Jepson R. Evaluability assessment of the Family Nurse Partnership in Scotland: NHS Health Scotland; 2015 [June 2022]. Available from: http://www.healthscotland.com/documents/26102.aspx.

  28. Doidge JC, Harron KL. Reflections on modern methods: linkage error bias. International Journal of Epidemiology. 2019. 10.1093/ije/dyz203.

  29. Libuy N, Harron K, Gilbert R, Caulton R, Cameron E, Blackburn R. Linking education and hospital data in England: linkage process and quality. International Journal of Population Data Science. 2021;6(1). 10.23889/ijpds.v6i1.1671

  30. Grath-Lone LM, Libuy N, Etoori D, Blackburn R, Gilbert R, Harron K. Ethnic bias in data linkage. Lancet Digit Health. 2021;3(6):e339. 10.1016/s2589-7500(21)00081-9. PubMed 34045000.

  31. Bennett V. Continuing the mandation of the universal five health visiting checks: Public Health England; 2017 [March 2020]. Available from: https://publichealthmatters.blog.gov.uk/2017/03/01/continuing-the-mandation-of-the-universal-five-health-visiting-checks/.

  32. Robling M, Cannings-John R, Channon S, Hood K, Moody G, Poole R, et al. What is usual care for teenagers expecting their first child in England? A process evaluation using key informant mapping and participant survey as part of the Building Blocks randomised controlled trial of specialist home visiting. BMJ Open. 2018;8(5):e020152. 10.1136/bmjopen-2017-020152.

  33. Fraser C, Harron K, Barlow J, Bennett S, Woods G, Shand J, et al. Variation in health visiting contacts for children in England: cross-sectional analysis of the 2–21/2 year review using administrative data (Community Services Dataset, CSDS). BMJ Open. 2022;12(2):e053884. 10.1136/bmjopen-2021-053884.

  34. Public Health England. Health visitor service delivery metrics: 2019 to 2020 2021. Available from: https://www.gov.uk/government/statistics/health-visitor-service-delivery-metrics-2019-to-2020.

  35. Morris H, Lanati S, Gilbert R. Challenges of administrative data linkages: experiences of Administrative Data Research Centre for England (ADRC-E) researchers. International Journal of Population Data Science. 2018;3(2). 10.23889/ijpds.v3i2.566.

  36. Dattani N, Hardelid P, Davey J, Gilbert R, et al. Accessing electronic administrative health data for research takes time. Archives of Disease in Childhood. 2013;98(5):391–2. 10.1136/archdischild-2013-303730

  37. Taylor JA, Crowe S, Espuny Pujol F, Franklin RC, Feltbower RG, Norman LJ, et al. The road to hell is paved with good intentions: the experience of applying for national data for linkage and suggestions for improvement. BMJ Open. 2021;11(8):e047575. Epub 2021/08/21. 10.1136/bmjopen-2020-047575. PubMed 34413101; PubMed Central PMCID: PMC8378388.

  38. Cavallaro F, Lugg-Widger F, Cannings-John R, Harron K. Reducing barriers to data access for research in the public interest—lessons from covid-19 BMJ2020. Available from: https://blogs.bmj.com/bmj/2020/07/06/reducing-barriers-to-data-access-for-research-in-the-public-interest-lessons-from-covid-19/.

  39. VanderWeele TJ, Ding P. Sensitivity Analysis in Observational Research: Introducing the E-Value. Annals of Internal Medicine. 2017;167(4):268–74. 10.7326/m16-2607

  40. Lut I, Harron K, Hardelid P, O’Brien M, Woodman J. ‘What about the dads?’ Linking fathers and children in administrative data: A systematic scoping review. Big Data & Society. 2022;9(1):20539517211069299. 10.1177/20539517211069299

  41. Gilbert R, Lafferty R, Hagger-Johnson G, Harron K, Zhang LC, Smith P, et al. GUILD: GUidance for Information about Linking Data sets. Journal of public health (Oxford, England). 2018;40(1):191-8. Epub 2017/04/04. 10.1093/pubmed/fdx037. PubMed 28369581; PubMed Central PMCID: PMC5896589.

  42. Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008;337:a1655. 10.1136/bmj.a1655

  43. Goldacre B, Morley J. Better, Broader, Safer: Using health data for research and analysis. A review commissioned by the Secretary of State for Health and Social Care: Department of Health and Social Care; 2022. Available from: https://www.gov.uk/government/publications/better-broader-safer-using-health-data-for-research-and-analysis.

  44. Craig P, Katikireddi SV, Leyland A, Popham F. Natural Experiments: An Overview of Methods, Approaches, and Contributions to Public Health Intervention Research. Annu Rev Public Health. 2017;38:39-56. Epub 2017/01/27. 10.1146/annurev-publhealth-031816-044327. PubMed 28125392; PubMed Central PMCID: PMC6485604.

  45. Belsky J, Melhuish E, Barnes J, Leyland AH, Romaniuk H. Effects of Sure Start local programmes on children and families: early findings from a quasi-experimental, cross sectional study. BMJ. 2006;332(7556):1476. 10.1136/bmj.38853.451748.2F.

  46. Cattan S, Conti G, Farquharson C, Ginja R. The health effects of Sure Start London: Institute of Fiscal Studies; 2019 [June 2022]. Available from: https://dera.ioe.ac.uk/33510/1/R155-The-health-effects-of-Sure-Start.pdf.

  47. Dickerson J, Bird PK, McEachan RRC, Pickett KE, Waiblinger D, Uphoff E, et al. Born in Bradford’s Better Start: an experimental birth cohort study to evaluate the impact of early life interventions. BMC Public Health. 2016;16(1):711. 10.1186/s12889-016-3318-0

  48. Dickerson J, Bird PK, Bryant M, Dharni N, Bridges S, Willan K, et al. Integrating research and system-wide practice in public health: lessons learnt from Better Start Bradford. BMC Public Health. 2019;19(1):260. 10.1186/s12889-019-6554-2


Article Details

How to Cite
Cavallaro, F., Cannings-John, R., Lugg-Widger, F., Gilbert, R., Kennedy, E., Kendall, S., Robling, M. and Harron, K. (2023) “Lessons learned from using linked administrative data to evaluate the Family Nurse Partnership in England and Scotland”, International Journal of Population Data Science, 8(1). doi: 10.23889/ijpds.v8i1.2113.
