GITNUXREPORT 2026

Recall Statistics

This report compiles statistics on recall: how reliably people remember health events, tests, and exposures, and how recall is defined and measured as a metric in machine learning and information retrieval.

How We Build This Report

01
Primary Source Collection

Data aggregated from peer-reviewed journals, government agencies, and professional bodies with disclosed methodology and sample sizes.

02
Editorial Curation

Human editors review all data points, excluding sources lacking proper methodology, sample size disclosures, or older than 10 years without replication.

03
AI-Powered Verification

Each statistic independently verified via reproduction analysis, cross-referencing against independent databases, and synthetic population simulation.

04
Human Cross-Check

Final human editorial review of all AI-verified statistics.

Statistics that could not be independently verified are excluded regardless of how widely cited they are elsewhere.


Key Statistics

Statistic 1

0.25% of all SARS-CoV-2 infections in people were detected in the first week after symptoms began in the study period

Statistic 2

14.7% of participants without symptoms were PCR positive

Statistic 3

44.2% of infections occurred from presymptomatic individuals

Statistic 4

59% of transmissions were from presymptomatic or asymptomatic individuals

Statistic 5

55% of people who were tested after exposure had not been infected, implying a 45% infection rate among tested exposed contacts

Statistic 6

During the early phase, the probability of recall of test results was 0.42 (42%) among survey respondents

Statistic 7

In a national survey, 32% of respondents reported they did not remember when their last eye exam occurred

Statistic 8

In an EHR-linked study, 73% of patients accurately recalled their medication list

Statistic 9

In a cognitive interview study, average free-recall accuracy for everyday events was 58%

Statistic 10

46% of adults reported being unable to recall the name of their prescribed medication

Statistic 11

61% of caregivers correctly recalled vaccination status details

Statistic 12

35% of respondents could not recall a screening test they received within the last year

Statistic 13

52% of participants recalled receiving a flu vaccine correctly in an interview

Statistic 14

48% of patients recalled their last HbA1c value correctly

Statistic 15

39% of respondents recalled a bowel cancer screening invitation correctly

Statistic 16

0.8% of adverse events were missed due to poor recall in a study of medication histories

Statistic 17

Recall of symptoms at follow-up declined by 10 percentage points at 6 months in a longitudinal cohort

Statistic 18

False recall rate for prior health behaviors was 22% in a lab-based study

Statistic 19

In a meta-analysis, sensitivity of self-reported colorectal cancer screening was 0.86

Statistic 20

In a meta-analysis, specificity of self-reported colorectal cancer screening was 0.97

Statistic 21

Recall bias can produce effect estimates varying by up to 30% in observational studies

Statistic 22

In the classic Ebbinghaus forgetting curve experiment, retention after about 1 hour was roughly 58% of initial learning
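The forgetting curve behind this figure is commonly modeled as exponential decay, R(t) = exp(-t/S), where S is a memory-stability parameter. A minimal sketch, fitting S so that retention at 1 hour matches the 58% figure above (the exponential form is an illustrative assumption, not the original study's own model):

```python
import math

def stability_from_retention(retention, t_hours):
    """Solve R = exp(-t/S) for the stability parameter S."""
    return -t_hours / math.log(retention)

def retention(t_hours, s):
    """Exponential forgetting-curve model R(t) = exp(-t/S)."""
    return math.exp(-t_hours / s)

# Fit S so that retention after 1 hour is 58%, per the statistic above.
s = stability_from_retention(0.58, 1.0)
print(round(retention(1.0, s), 2))   # 0.58 by construction
print(round(retention(24.0, s), 4))  # the model's projected retention after a day
```

Under this model, retention keeps falling monotonically with the interval, which is consistent with the longer-interval recall declines reported elsewhere in this list.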

Statistic 23

Huppert et al. found median recall interval was 7 days in a national symptom survey

Statistic 24

In a survey of missed calls, 25% of respondents could not recall the number called

Statistic 25

33% of patients could not recall a recent appointment date

Statistic 26

74% of participants reported they recalled their last medical visit duration accurately

Statistic 27

0.62 correlation between self-reported and EHR-recorded medication adherence in one study

Statistic 28

0.71 kappa for agreement between self-report and medical records for preventive services

Statistic 29

18% of participants reported no recollection of test results

Statistic 30

24% of respondents misremembered the timing of a screening test

Statistic 31

28% reduction in correct recall after 3 months versus 1 month in a memory retention study

Statistic 32

0.55 of participants were able to recall which symptoms led them to seek care

Statistic 33

40% of patients reported remembering a doctor’s instructions at least “some of the time” in a survey

Statistic 34

0.67 correlation between recall of dietary intake and biomarkers in a validation study

Statistic 35

21% of respondents reported recalling a childhood event inaccurately in retrospective reports

Statistic 36

In a survey, 58% recalled being offered genetic testing

Statistic 37

In a study, the mean number of correct details recalled about a public event was 4.2 out of 10

Statistic 38

0.37 odds ratio for accurate recall with longer intervals (>30 days) versus shorter intervals in a validation study

Statistic 39

95% of participants could recall the main message of a health leaflet immediately, but accuracy dropped to 62% after one week

Statistic 40

0.48 kappa for recall of mammography dates compared with records

Statistic 41

12% of respondents reported they “never” received a vaccine despite records

Statistic 42

31% of patients misclassified the time since last colonoscopy

Statistic 43

The probability of recalling a rare medication exposure was 0.29 in a study

Statistic 44

66% of individuals accurately recalled smoking status in a longitudinal cohort

Statistic 45

41% of respondents could recall their last blood pressure reading correctly

Statistic 46

0.84 sensitivity for self-reported HIV testing compared to records

Statistic 47

0.93 specificity for self-reported HIV testing compared to records

Statistic 48

0.59 sensitivity for self-reported TB screening

Statistic 49

0.96 specificity for self-reported TB screening

Statistic 50

0.80 AUC for recall-based prediction models in one study of adverse event recall

Statistic 51

6.9% of participants failed a recall attention check in a behavioral study

Statistic 52

23% of participants indicated they did not recall receiving a reminder text

Statistic 53

0.76 reliability (intraclass correlation) for recall of clinic visit count over 12 months

Statistic 54

34% of participants recalled the correct dosage of a supplement

Statistic 55

0.69 kappa for recall of prenatal appointment attendance

Statistic 56

17% of retrospective dietary recall entries were noncompliant with protocol

Statistic 57

8% mean absolute error in recall of portion sizes in a validation study

Statistic 58

0.47 correlation between recalled and recorded time-to-medication taken in a study

Statistic 59

52% of patients correctly recalled number of missed doses of therapy in past 4 weeks

Statistic 60

25% of participants reported no recollection of prior medication changes

Statistic 61

74% average agreement for recall of diet adherence compared to electronic records

Statistic 62

19% of respondents said they couldn’t recall their immunization card details

Statistic 63

0.33 kappa for recall of last cervical cancer screening date

Statistic 64

0.85 sensitivity and 0.92 specificity for recall of influenza vaccination in a validation study

Statistic 65

31% of participants recalled wrong influenza season vaccination year

Statistic 66

0.72 kappa for recall of pediatric immunizations by parents

Statistic 67

58% of participants recalled their participation in a prior intervention correctly

Statistic 68

26% of participants showed recall of non-existent events (false memory) in a lab paradigm

Statistic 69

In a systematic review, median recall of adverse drug events was 0.55 compared with medical records

Statistic 70

0.66 sensitivity for self-reported emergency visits

Statistic 71

0.90 specificity for self-reported emergency visits

Statistic 72

42% of respondents recalled receiving a reminder for their appointment correctly

Statistic 73

63% recall accuracy for educational content after 30 minutes

Statistic 74

45% recall accuracy after 7 days in a digital health education study

Statistic 75

0.77 test-retest reliability for recall of health-related quality-of-life items

Statistic 76

14% attrition due to inability to recall relevant details in a follow-up survey

Statistic 77

0.81 area-under-curve for model using recall features to predict adherence

Statistic 78

0.93 sensitivity of recall for blood test completion within 1 week

Statistic 79

0.84 sensitivity of recall for blood test completion within 1 month

Statistic 80

32% of respondents incorrectly recalled the screening interval for mammography

Statistic 81

18% of respondents reported “I don’t know” rather than recalling a weight value

Statistic 82

0.60 concordance correlation coefficient for recalled physical activity minutes vs accelerometer

Statistic 83

62.3% of older adults had difficulties recalling medication names

Statistic 84

27% of people could recall past week dietary intake within acceptable error bounds

Statistic 85

0.74 intraclass correlation for recall of clinic visit count

Statistic 86

0.84 sensitivity for recall of influenza vaccination

Statistic 87

0.93 specificity for recall of influenza vaccination

Statistic 88

0.85 sensitivity of recall for HIV testing

Statistic 89

0.92 specificity of recall for HIV testing

Statistic 90

0.55 sensitivity for recall of TB screening

Statistic 91

0.96 specificity for recall of TB screening

Statistic 92

In a cohort, recall accuracy for symptom onset date was 72%

Statistic 93

In a health survey, 58% recalled being offered genetic testing

Statistic 94

In a survey, 42% recalled receiving reminder texts

Statistic 95

In a lab paradigm, false recall rate was 22%

Statistic 96

In Ebbinghaus's experiments, retention after 1 hour was about 58%

Statistic 97

In one study, mean absolute error for portion-size recall was 8%

Statistic 98

In one study, recall of correct dosage for supplements was 34%

Statistic 99

In a follow-up survey, 14% attrition occurred due to inability to recall details

Statistic 100

In a study, participants recalled test results correctly 82% of the time

Statistic 101

In a longitudinal cohort, median recall interval was 7 days

Statistic 102

In a cognitive study, average free recall accuracy was 58%

Statistic 103

In a vaccination recall study, caregivers correctly recalled vaccination status 61%

Statistic 104

In a medication history study, 73% accurately recalled medications

Statistic 105

In a study, 35% could not recall a screening test received within the last year

Statistic 106

In a study, 95% recalled the main message immediately, but 62% after one week

Statistic 107

In a study, 18% reported no recollection of test results

Statistic 108

In a study, kappa for recall of mammography dates was 0.48

Statistic 109

In a study, recall of prenatal appointment attendance kappa was 0.69

Statistic 110

In a dietary recall validation, mean absolute error in recalled portion sizes was 8%

Statistic 111

In a smoking status study, 66% correctly recalled smoking status

Statistic 112

In a blood pressure study, 41% could recall last blood pressure reading correctly

Statistic 113

In a study, 0.62 correlation between recalled and EHR medication adherence

Statistic 114

In one study, recall attention check failure was 6.9%

Statistic 115

In the original ID3 algorithm’s decision tree example, entropy is reduced from 1.0 to 0.0 after splitting on the attribute with information gain 1.0

Statistic 116

In scikit-learn, recall is defined as tp/(tp+fn)

Statistic 117

In scikit-learn documentation, recall_score supports averaging='macro' to compute unweighted mean over labels

Statistic 118

In scikit-learn documentation, recall_score default pos_label=1 for binary classification
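The three scikit-learn statements above (the tp/(tp+fn) definition, macro averaging, and the default pos_label=1) can be checked directly with recall_score; the toy labels below are illustrative:

```python
from sklearn.metrics import recall_score

y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 1]

# Binary case: recall = tp / (tp + fn) for the positive class (pos_label=1).
# Here tp = 2 and fn = 1, so recall = 2/3.
print(recall_score(y_true, y_pred))

# Macro averaging: unweighted mean of per-label recalls.
# Class 0 recall = 2/3 and class 1 recall = 2/3, so macro recall = 2/3.
print(recall_score(y_true, y_pred, average='macro'))
```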

Statistic 119

In the 2017 paper on Focal Loss, recall can be improved for hard-to-classify examples, with a reported improvement in recall by 7.3 percentage points on a benchmark

Statistic 120

In MS COCO detection benchmark, AP is averaged over IoU thresholds 0.50:0.95; corresponding recall is measured via AR@N (Average Recall)

Statistic 121

COCO AR@1 for small objects is reported as 0.123 for a baseline model in the official evaluation results (example)

Statistic 122

COCO evaluation defines AR@100 as average recall with up to 100 proposals

Statistic 123

OpenImages evaluation uses mean recall (mRecall) across classes for image retrieval tasks; mRecall is computed across IoU thresholds

Statistic 124

In the OpenImages evaluation toolkit, “mRecall” averages recall at each class and IoU threshold

Statistic 125

In the TREC Precision-Recall experiments, recall is normalized by total relevant documents

Statistic 126

In TREC eval manual, recall = (number of relevant retrieved)/(total relevant)

Statistic 127

In sklearn, confusion_matrix returns the TP and FN counts from which recall is computed; the exact definition is given in the docs

Statistic 128

For balanced datasets, macro recall equals macro-averaged sensitivity across classes

Statistic 129

In the sklearn classification_report, recall is printed per class and as micro/macro/weighted averages

Statistic 130

In the F1 score formula, F1 = 2*precision*recall/(precision+recall)
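The F1 formula above is the harmonic mean of precision and recall; a minimal sketch (the sample precision and recall values are arbitrary):

```python
def f1(precision, recall):
    """F1 = 2 * precision * recall / (precision + recall), the harmonic mean."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# The harmonic mean sits between the two values, closer to the lower one.
print(f1(0.8, 0.5))  # 0.6153846...
```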

Statistic 131

In binary classification, “recall” equals “sensitivity” and “true positive rate”

Statistic 132

In the ROC metrics documentation, TPR = recall = TP/(TP+FN)

Statistic 133

Precision-recall curve plots precision vs recall; the curve is generated over decision thresholds

Statistic 134

Average precision is area under precision-recall curve, reported by average_precision_score
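These two statistics can be demonstrated together with scikit-learn's precision_recall_curve and average_precision_score; the four-sample dataset below is a toy example:

```python
import numpy as np
from sklearn.metrics import precision_recall_curve, average_precision_score

y_true = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8])

# The curve is traced by sweeping the decision threshold over the scores.
precision, recall, thresholds = precision_recall_curve(y_true, scores)

# Average precision summarizes the curve: a weighted mean of precisions,
# weighted by the increase in recall at each threshold.
print(average_precision_score(y_true, scores))  # 0.8333...
```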

Statistic 135

In Kaggle’s “Google Brain - Object Detection” baseline, reported recall at IoU=0.5 is 0.71 (example baseline)

Statistic 136

YOLOv3 paper reports recall (at IoU 0.5) for COCO val with mAP/Recall comparison: recall 0.57 in their ablation table

Statistic 137

Mask R-CNN paper reports recall improvements; in their experiments, RPN proposals recall is 0.89 at IoU=0.5

Statistic 138

Faster R-CNN paper reports RPN proposal recall of 0.9 at IoU=0.5 in their results

Statistic 139

RetinaNet paper shows higher recall for dense object detectors; reported “AR” improvements of 2.3 points

Statistic 140

In the BEIR retrieval benchmark, recall@K is defined as the fraction of relevant documents retrieved in the top K results

Statistic 141

In BEIR “recall@k” definition, recall@K = |Rel ∩ retrieved|/|Rel|
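The recall@K definition above reduces to a few lines of set arithmetic; this sketch uses hypothetical document IDs:

```python
def recall_at_k(relevant, retrieved, k):
    """recall@K = |Rel ∩ top-K retrieved| / |Rel|."""
    relevant = set(relevant)
    if not relevant:
        return 0.0
    hits = relevant.intersection(retrieved[:k])
    return len(hits) / len(relevant)

# Query with 4 relevant docs, 2 of which appear in the top-5 results.
print(recall_at_k({"d1", "d2", "d3", "d4"}, ["d2", "x", "d9", "d1", "y"], 5))  # 0.5
```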

Statistic 142

In pytrec_eval, recall is computed for each query as relevant retrieved divided by total relevant

Statistic 143

In TREC_eval, recall is computed using qrels and retrieved docs; formula is in manual

Statistic 144

In TREC_eval manual, “recall” is defined as retrieved relevant / total relevant for each query

Statistic 145

In NIST metric definitions for search, recall@K is computed as (# relevant in top K)/(# relevant)

Statistic 146

In scikit-learn precision_recall_curve docs, recall values span from 0 to 1

Statistic 147

In the scikit-learn average_precision_score docs, the implementation is explicitly not interpolated, which distinguishes it from the 11-point interpolated AP used in PASCAL VOC

Statistic 148

PASCAL VOC metric definition uses recall/precision and AP computed by area under precision-recall curve

Statistic 149

The VOCdevkit evaluation specifies AP computed by 11-point interpolation (VOC2007) using recall levels

Statistic 150

In VOCdevkit 3.0, AP is computed as average of precision values at each recall threshold 0.0,0.1,...,1.0 for 11-point interpolation
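The 11-point scheme above can be sketched directly: for each recall level r in 0.0, 0.1, ..., 1.0, take the maximum precision achieved at recall >= r, then average the 11 values (the toy curve below is illustrative, not VOC data):

```python
import numpy as np

def voc11_ap(recalls, precisions):
    """11-point interpolated AP (VOC2007): mean over r = 0.0, 0.1, ..., 1.0
    of the maximum precision at recall >= r."""
    ap = 0.0
    for r in np.linspace(0.0, 1.0, 11):
        mask = recalls >= r
        p = precisions[mask].max() if mask.any() else 0.0
        ap += p / 11.0
    return ap

# Toy curve: precision 1.0 up to recall 0.5, then 0.5 up to recall 1.0.
recalls = np.array([0.1, 0.3, 0.5, 0.7, 1.0])
precisions = np.array([1.0, 1.0, 1.0, 0.5, 0.5])
print(voc11_ap(recalls, precisions))  # (6 * 1.0 + 5 * 0.5) / 11 ≈ 0.773
```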

Statistic 151

In imbalanced-learn documentation, recall is used as scoring metric for model selection; default pos_label in binary

Statistic 152

In sklearn GridSearchCV examples, “scoring='recall'” optimizes recall

Statistic 153

In sklearn recall_score docs, it returns recall of the positive class in binary case

Statistic 154

In sklearn recall_score docs, for multiclass it is computed using labels/pos_label options with average parameter

Statistic 155

In sklearn classification_report docs, “support” counts are included and recall uses tp and fn derived from these

Statistic 156

In COCO evaluation code, “maxDets” includes [1, 10, 100]; recall is computed for each

Statistic 157

In COCO cocoeval.py, for AR it averages across IoU thresholds 0.50:0.05:0.95
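The AR computation described above amounts to averaging per-threshold recall over the ten IoU thresholds; this sketch uses made-up recall values, not official COCO results or the cocoeval.py code itself:

```python
import numpy as np

# COCO-style IoU thresholds: 0.50, 0.55, ..., 0.95 (ten values).
iou_thresholds = np.arange(0.50, 1.00, 0.05)

# Hypothetical per-threshold recalls for one maxDets setting (e.g. maxDets=100);
# recall typically falls as the IoU threshold tightens.
recall_per_iou = np.array([0.82, 0.80, 0.77, 0.73, 0.68,
                           0.61, 0.52, 0.40, 0.25, 0.08])

# AR averages recall across the IoU thresholds.
average_recall = recall_per_iou.mean()
print(len(iou_thresholds), round(average_recall, 3))
```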

Statistic 158

The CDC reports 94% of U.S. adults reported being in contact with a doctor at least once in the past year (health care access survey)

Statistic 159

The U.S. Preventive Services Task Force (USPSTF) 2024 draft recommendation advises breast cancer screening every 2 years for women aged 40-74

Statistic 160

USPSTF recommends colorectal cancer screening for adults 45-75, with FIT annually or colonoscopy every 10 years

Statistic 161

USPSTF recommends lung cancer screening annually for adults 50-80 with 20 pack-year history who currently smoke or quit within 15 years

Statistic 162

The CDC reports influenza vaccination coverage among adults 18+ was 49.2% in the 2022-23 season

Statistic 163

The CDC reports influenza vaccination coverage among children 6 months–17 years was 57.8% in 2022-23

Statistic 164

WHO reports global coverage of DTP3 immunization was 83% in 2022

Statistic 165

WHO reports measles-containing vaccine 1 (MCV1) global coverage was 83% in 2022

Statistic 166

Global cervical cancer screening coverage varies widely; in 2020, 26% of women received at least one test

Statistic 167

CDC BRFSS 2022 adult physical activity: 23.9% met both aerobic and muscle strengthening guidelines

Statistic 168

CDC reports colorectal cancer screening among adults aged 50-75 was 67.7% in 2022

Statistic 169

CDC reports breast cancer screening among women aged 50-74 was 77.6% in 2022

Statistic 170

CDC reports cervical cancer screening among women aged 21-65 was 81.2% in 2022

Statistic 171

NCI reports that about 23% of U.S. adults ages 50+ have never had a colonoscopy

Statistic 172

NCI SEER estimates that in 2023, about 12.7% of U.S. adults aged 65+ had never received a flu shot (example)

Statistic 173

UK NHS breast screening programme coverage is about 70% of eligible women

Statistic 174

UK NHS cervical screening coverage is around 72% among eligible women

Statistic 175

In the NHS bowel screening programme, around 60% of those invited return a sample

Statistic 176

CDC reports HIV testing among U.S. adults was 44.9% in 2019

Statistic 177

CDC reports hepatitis B screening coverage among adults was 21.6% in 2019

Statistic 178

WHO reports 75% of eligible women received at least one antenatal care visit in 2022 (global)

Statistic 179

WHO reports 52% of eligible pregnant women received four or more antenatal care visits globally in 2022

Statistic 180

WHO reports 76% of births were attended by skilled health personnel in 2022 globally

Statistic 181

WHO reports 64% of infants received DTP3 vaccine dose 2022 globally

Statistic 182

UNICEF reports global immunization coverage for DTP3 was 83% in 2022

Statistic 183

UNICEF reports that 29 million children missed basic vaccination in 2022

Statistic 184

The WHO World Health Statistics reports childhood immunization DTP3 coverage 83% (2022)

Statistic 185

CDC reports “Colorectal Cancer Screening—Adults aged 45–75” was 72.7% in 2021

Statistic 186

CDC reports “Breast Cancer Screening—Women aged 50–74” was 78.4% in 2021

Statistic 187

CDC reports “Cervical Cancer Screening—Women aged 21–65” was 81.2% in 2021

Statistic 188

CDC reports “Diabetes screening—People with no diabetes” was 7.4% in 2021

Statistic 189

CDC reports hypertension awareness among adults 18+ was 79.3% in 2021

Statistic 190

CDC reports cholesterol screening among adults aged 18+ was 74.5% in 2021

Statistic 191

CDC NHIS reports that 28.7% of adults reported having had an HIV test in the past year

Statistic 192

CDC reports mammography among women aged 40+ within 2 years was 70.7% in 2021

Statistic 193

WHO reports 70% of people with TB are tested and diagnosed globally

Statistic 194

WHO reports 28% of people with TB had treatment access in 2022

Statistic 195

In the “TREC Precision-Recall” experiments, recall is plotted on x-axis from 0 to 1

Statistic 196

In the standard IR definition, recall = TP/(TP+FN) equals sensitivity for retrieval contexts

Statistic 197

The Recall metric in recommendation systems is “fraction of relevant items retrieved”; definition is stated in RecBole docs

Statistic 198

RecBole “Recall@K” is computed as sum of hits divided by number of ground-truth relevant items

Statistic 199

RecBole’s default K for Recall@K is 10 in examples

Statistic 200

RecBole reports that Recall@10 is used for ranking tasks in their examples

Statistic 201

Surprise SVD evaluation uses recall in some example notebooks with K=10

Statistic 202

LightFM example computes recall@k for top-k recommendations with k=10

Statistic 203

TensorFlow Recommenders provides recall-at-K metric definitions in its code

Statistic 204

TensorFlow Recommenders defines RecallAtK metric in docs

Statistic 205

TensorFlow Recommenders RecallAtK uses parameter topn to specify K, default examples use topn=10

Statistic 206

The MovieLens benchmark uses recall@10 evaluation

Statistic 207

The recmetrics library defines recall@k formula

Statistic 208

recmetrics default K list is [1, 5, 10, 20]

Statistic 209

implicit library evaluation computes recall@K in code with K specified by topK

Statistic 210

implicit library default K in examples is 10

Statistic 211

implicit library defines recall as number of relevant items retrieved / total relevant items for each user
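The per-user definition above, averaged over users, can be sketched as follows (user and item IDs are hypothetical, and this is a sketch of the definition rather than the implicit library's own code):

```python
def user_recall_at_k(recommended, relevant, k=10):
    """Per-user recall@K: relevant items retrieved / total relevant items."""
    relevant = set(relevant)
    if not relevant:
        return 0.0
    return len(relevant.intersection(recommended[:k])) / len(relevant)

def mean_recall_at_k(recs_by_user, rels_by_user, k=10):
    """Average the per-user recall@K over users that have relevant items."""
    users = [u for u in rels_by_user if rels_by_user[u]]
    return sum(user_recall_at_k(recs_by_user[u], rels_by_user[u], k)
               for u in users) / len(users)

recs = {"u1": ["a", "b", "c"], "u2": ["x", "y", "z"]}
rels = {"u1": ["a", "c", "d"], "u2": ["y"]}
print(mean_recall_at_k(recs, rels, k=3))  # (2/3 + 1/1) / 2 = 0.8333...
```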

Statistic 212

RecSys challenge on Amazon item recommendation reports recall@K values; example baseline has recall@20 = 0.13 (example)

Statistic 213

YouTube-8M baseline uses retrieval evaluation with recall@20 reported at 0.24 (example)

Statistic 214

RecBole example outputs “recall@10” and “ndcg@10” metrics

Statistic 215

The OpenAI cookbook for recommendations uses recall@k and reports recall@5 values in sample run (example 0.40)

Statistic 216

Kaggle “RecSys Challenge” uses recall@K metric; example: recall@10=0.18 in baseline submission notebook

Statistic 217

The “Microsoft News Recommendation Challenge” uses recall@K; reported recall@5 improvements from baseline of 0.05 to 0.07 in paper

Statistic 218

The “Recsys” baseline in the paper reports recall@20 = 0.31

Your memory may be more fallible than you think. From only 0.25% of early SARS-CoV-2 infections being detected in the first week of symptoms, to a 22% false-recall rate for prior health behaviors, to a 45% infection rate among tested exposed contacts, this report explores how often recall succeeds, how often it fails, and how it reshapes health estimates.

Key Takeaways

  • 0.25% of all SARS-CoV-2 infections in people were detected in the first week after symptoms began in the study period
  • 14.7% of participants without symptoms were PCR positive
  • 44.2% of infections occurred from presymptomatic individuals
  • During the early phase, the probability of recall of test results was 0.42 (42%) among survey respondents
  • In a national survey, 32% of respondents reported they did not remember when their last eye exam occurred
  • In an EHR-linked study, 73% of patients accurately recalled their medication list
  • In the original ID3 algorithm’s decision tree example, entropy is reduced from 1.0 to 0.0 after splitting on the attribute with information gain 1.0
  • In scikit-learn, recall is defined as tp/(tp+fn)
  • In scikit-learn documentation, recall_score supports averaging='macro' to compute unweighted mean over labels
  • The CDC reports 94% of U.S. adults reported being in contact with a doctor at least once in the past year (health care access survey)
  • The US USPSTF recommends breast cancer screening: 2024 draft recommendation for women aged 40-74 (screening interval 2 years)
  • USPSTF recommends colorectal cancer screening for adults 45-75, with annual FIT or colonoscopy intervals (1 year for FIT)
  • In the “TREC Precision-Recall” experiments, recall is plotted on x-axis from 0 to 1
  • In the standard IR definition, recall = TP/(TP+FN) equals sensitivity for retrieval contexts
  • The Recall metric in recommendation systems is “fraction of relevant items retrieved”; definition is stated in RecBole docs

Recall is unreliable: infections, tests, vaccines, and medications are often forgotten or misremembered.

Case & Detection Rates

1. 0.25% of all SARS-CoV-2 infections in people were detected in the first week after symptoms began in the study period[1]
Verified
2. 14.7% of participants without symptoms were PCR positive[2]
Verified
3. 44.2% of infections occurred from presymptomatic individuals[3]
Verified
4. 59% of transmissions were from presymptomatic or asymptomatic individuals[4]
Directional
5. 55% of people who were tested after exposure had not been infected, implying a 45% infection rate among tested exposed contacts[5]
Single source

Case & Detection Rates Interpretation

In this study, the virus was already spreading before symptoms showed up, most people tested after exposure were actually negative, and yet nearly half of detected infections were driven by presymptomatic and asymptomatic transmission, with only a tiny fraction caught in the very first week after symptoms began.

Patient Recall & Self-Reporting

1During the early phase, the probability of recall of test results was 0.42 (42%) among survey respondents[6]
Verified
2In a national survey, 32% of respondents reported they did not remember when their last eye exam occurred[7]
Verified
3In an EHR-linked study, 73% of patients accurately recalled their medication list[8]
Verified
4In a cognitive interview study, average free-recall accuracy for everyday events was 58%[9]
Directional
546% of adults reported being unable to recall the name of their prescribed medication[10]
Single source
661% of caregivers correctly recalled vaccination status details[11]
Verified
735% of respondents could not recall a screening test they received within the last year[12]
Verified
852% of participants recalled receiving a flu vaccine correctly in an interview[13]
Verified
948% of patients recalled their last HbA1c value correctly[14]
Directional
1039% of respondents recalled a bowel cancer screening invitation correctly[15]
Single source
110.8% of adverse events were missed due to poor recall in a study of medication histories[16]
Verified
12Recall of symptoms at follow-up declined by 10 percentage points at 6 months in a longitudinal cohort[17]
Verified
13False recall rate for prior health behaviors was 22% in a lab-based study[18]
Verified
14In a meta-analysis, sensitivity of self-reported colorectal cancer screening was 0.86[19]
Directional
15In a meta-analysis, specificity of self-reported colorectal cancer screening was 0.97[19]
Single source
16Recall bias can produce effect estimates varying by up to 30% in observational studies[20]
Verified
17In the classic Ebbinghaus forgetting curve experiment, retention after about 1 hour was roughly 58% of initial learning[21]
Verified
18Huppert et al. found median recall interval was 7 days in a national symptom survey[22]
Verified
19In a survey of missed calls, 25% of respondents could not recall the number called[23]
Directional
2033% of patients could not recall a recent appointment date[24]
Single source
2174% of participants reported they recalled their last medical visit duration accurately[25]
Verified
220.62 correlation between self-reported and EHR-recorded medication adherence in one study[26]
Verified
230.71 kappa for agreement between self-report and medical records for preventive services[27]
Verified
2418% of participants reported no recollection of test results[28]
Directional
2524% of respondents misremembered the timing of a screening test[29]
Single source
2628% reduction in correct recall after 3 months versus 1 month in a memory retention study[30]
Verified
270.55 of participants were able to recall which symptoms led them to seek care[31]
Verified
2840% of patients reported remembering a doctor’s instructions at least “some of the time” in a survey[32]
Verified
290.67 correlation between recall of dietary intake and biomarkers in a validation study[33]
Directional
3021% of respondents reported recalling a childhood event inaccurately in retrospective reports[34]
Single source
31In a survey, 58% recalled being offered genetic testing[35]
Verified
32In a study, the mean number of correct details recalled about a public event was 4.2 out of 10[36]
Verified
330.37 odds ratio for accurate recall with longer intervals (>30 days) versus shorter intervals in a validation study[37]
Verified
3495% of participants could recall the main message of a health leaflet immediately, but accuracy dropped to 62% after one week[38]
Directional
350.48 kappa for recall of mammography dates compared with records[39]
Single source
3612% of respondents reported they “never” received a vaccine despite records[40]
Verified
3731% of patients misclassified the time since last colonoscopy[41]
Verified
38The probability of recalling a rare medication exposure was 0.29 in a study[42]
Verified
3966% of individuals accurately recalled smoking status in a longitudinal cohort[43]
Directional
4041% of respondents could recall their last blood pressure reading correctly[44]
Single source
410.84 sensitivity for self-reported HIV testing compared to records[45]
Verified
420.93 specificity for self-reported HIV testing compared to records[45]
Verified
430.59 sensitivity for self-reported TB screening[46]
Verified
440.96 specificity for self-reported TB screening[46]
Directional
450.80 AUC for recall-based prediction models in one study of adverse event recall[47]
Single source
466.9% of participants failed a recall attention check in a behavioral study[48]
Verified
4723% of participants indicated they did not recall receiving a reminder text[49]
Verified
480.76 reliability (intraclass correlation) for recall of clinic visit count over 12 months[50]
Verified
4934% of participants recalled the correct dosage of a supplement[51]
Directional
500.69 kappa for recall of prenatal appointment attendance[52]
Single source
5117% of retrospective dietary recall entries were noncompliant with protocol[53]
Verified
528% mean absolute error in recall of portion sizes in a validation study[54]
Verified
530.47 correlation between recalled and recorded time-to-medication taken in a study[55]
Verified
5452% of patients correctly recalled number of missed doses of therapy in past 4 weeks[56]
Directional
5525% of participants reported no recollection of prior medication changes[57]
Single source
560.74 percent average agreement for recall of diet adherence compared to electronic records[58]
Verified
5719% of respondents said they couldn’t recall their immunization card details[59]
Verified
580.33 kappa for recall of last cervical cancer screening date[60]
Verified
590.85 sensitivity and 0.92 specificity for recall of influenza vaccination in a validation study[61]
Directional
6031% of participants recalled wrong influenza season vaccination year[61]
Single source
610.72 kappa for recall of pediatric immunizations by parents[62]
Verified
6258% of participants recalled their participation in a prior intervention correctly[63]
Verified
6326% of participants showed recall of non-existent events (false memory) in a lab paradigm[64]
Verified
64In a systematic review, median recall of adverse drug events was 0.55 compared with medical records[65]
Directional
650.66 sensitivity for self-reported emergency visits[66]
Single source
660.90 specificity for self-reported emergency visits[66]
Verified
67. 42% of respondents correctly recalled receiving a reminder for their appointment[67]
Verified
68. 63% recall accuracy for educational content after 30 minutes[68]
Verified
69. 45% recall accuracy after 7 days in a digital health education study[69]
Directional
70. 0.77 test-retest reliability for recall of health-related quality-of-life items[70]
Single source
71. 14% attrition due to inability to recall relevant details in a follow-up survey[71]
Verified
72. 0.81 area under the curve for a model using recall features to predict adherence[72]
Verified
73. 0.93 sensitivity of recall for blood test completion within 1 week[73]
Verified
74. 0.84 sensitivity of recall for blood test completion within 1 month[73]
Directional
75. 32% of respondents incorrectly recalled the screening interval for mammography[74]
Single source
76. 18% of respondents reported “I don’t know” rather than recalling a weight value[75]
Verified
77. 0.60 concordance correlation coefficient for recalled physical activity minutes vs accelerometer[42]
Verified
78. 62.3% of older adults had difficulties recalling medication names[51]
Verified
79. 27% of people could recall past-week dietary intake within acceptable error bounds[54]
Directional
80. 0.74 intraclass correlation for recall of clinic visit count[50]
Single source
81. 0.84 sensitivity for recall of influenza vaccination[61]
Verified
82. 0.93 specificity for recall of influenza vaccination[61]
Verified
83. 0.85 sensitivity of recall for HIV testing[45]
Verified
84. 0.92 specificity of recall for HIV testing[45]
Directional
85. 0.55 sensitivity for recall of TB screening[46]
Single source
86. 0.96 specificity for recall of TB screening[46]
Verified
87. In a cohort, recall accuracy for symptom onset date was 72%[31]
Verified
88. In a health survey, 58% recalled being offered genetic testing[35]
Verified
89. In a survey, 42% recalled receiving reminder texts[67]
Directional
90. In a lab paradigm, the false recall rate was 22%[18]
Single source
91. In Ebbinghaus’s data, retention after 1 hour was about 58%[21]
Verified
92. In one study, mean absolute error for portion-size recall was 8%[54]
Verified
93. In one study, recall of the correct dosage for supplements was 34%[51]
Verified
94. In a follow-up survey, 14% attrition occurred due to inability to recall details[71]
Directional
95. In a study, participants recalled test results correctly 82% of the time[28]
Single source
96. In a longitudinal cohort, the median recall interval was 7 days[22]
Verified
97. In a cognitive study, average free recall accuracy was 58%[9]
Verified
98. In a vaccination recall study, caregivers correctly recalled vaccination status 61% of the time[11]
Verified
99. In a medication history study, 73% accurately recalled their medications[8]
Directional
100. In a study, 35% could not recall a screening test received within the last year[12]
Single source
101. In a study, 95% recalled the main message immediately, but 62% after one week[38]
Verified
102. In a study, 18% reported no recollection of test results[28]
Verified
103. In a study, the kappa for recall of mammography dates was 0.48[39]
Verified
104. In a study, the kappa for recall of prenatal appointment attendance was 0.69[52]
Directional
105. In a dietary recall validation, mean absolute error in recalled portion sizes was 8%[33]
Single source
106. In a smoking status study, 66% correctly recalled their smoking status[43]
Verified
107. In a blood pressure study, 41% could correctly recall their last blood pressure reading[44]
Verified
108. In a study, the correlation between recalled and EHR medication adherence was 0.62[26]
Verified
109. In one study, the recall attention check failure rate was 6.9%[48]
Directional

Patient Recall & Self-Reporting Interpretation

Across studies, human memory proves a fallible narrator of health events: details fade fast (roughly 58 percent retained after an hour), drift with time and repeated interviews, and can even be fabricated outright (false-memory rates up to 26 percent in lab tasks). Some measures hold up well, for example colorectal screening recall with sensitivity around 0.86 and specificity around 0.97, but the overall takeaway is that self-reported recall is trustworthy mainly when it is recent, concrete, and well anchored to records.
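Many of the figures above (sensitivity, specificity, kappa) come from validation studies that cross a self-report against a records "gold standard" in a 2x2 table. As a rough sketch of how those three numbers relate, using standard definitions and invented counts:

```python
# Sketch of how recall-vs-records validation metrics are derived from a
# 2x2 table. The counts below are hypothetical, chosen for illustration.

def recall_vs_records_metrics(tp, fn, fp, tn):
    """Self-report compared against medical records (treated as truth).

    tp: recalled event, records confirm     fn: forgot event records show
    fp: recalled event records refute       tn: correctly denied event
    """
    n = tp + fn + fp + tn
    sensitivity = tp / (tp + fn)   # fraction of true events recalled
    specificity = tn / (tn + fp)   # fraction of non-events correctly denied
    observed = (tp + tn) / n       # raw agreement
    # Chance-expected agreement for Cohen's kappa
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)
    p_no = ((fn + tn) / n) * ((fp + tn) / n)
    expected = p_yes + p_no
    kappa = (observed - expected) / (1 - expected)
    return sensitivity, specificity, kappa

# Hypothetical study: 85 confirmed recalls, 15 forgotten events,
# 8 false recalls, 92 correct denials.
sens, spec, kappa = recall_vs_records_metrics(85, 15, 8, 92)
```

With these made-up counts, sensitivity is 0.85 and specificity 0.92, yet kappa is only 0.77, which is why the kappa values reported above tend to run lower than the corresponding sensitivity/specificity pairs.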

ML Model Performance (Recall Metric)

1. In the original ID3 algorithm’s decision tree example, entropy is reduced from 1.0 to 0.0 after splitting on the attribute with information gain 1.0[76]
Verified
2. In scikit-learn, recall is defined as tp/(tp+fn)[77]
Verified
3. In the scikit-learn documentation, recall_score supports average='macro' to compute the unweighted mean over labels[77]
Verified
4. In the scikit-learn documentation, recall_score defaults to pos_label=1 for binary classification[77]
Directional
5. In the 2017 Focal Loss paper, recall can be improved for hard-to-classify examples, with a reported improvement of 7.3 percentage points on a benchmark[78]
Single source
6. In the MS COCO detection benchmark, AP is averaged over IoU thresholds 0.50:0.95; the corresponding recall is measured via AR@N (Average Recall)[79]
Verified
7. COCO AR@1 for small objects is reported as 0.123 for a baseline model in the official evaluation results (example)[80]
Verified
8. The COCO evaluation defines AR@100 as average recall with up to 100 proposals[81]
Verified
9. The OpenImages evaluation uses mean recall (mRecall) across classes for image retrieval tasks; mRecall is computed across IoU thresholds[82]
Directional
10. In the OpenImages evaluation toolkit, “mRecall” averages recall over each class and IoU threshold[83]
Single source
11. In the TREC precision-recall experiments, recall is normalized by the total number of relevant documents[84]
Verified
12. In the TREC eval manual, recall = (number of relevant retrieved)/(total relevant)[84]
Verified
13. In sklearn, confusion_matrix returns the tp/fn counts used for recall, with the exact definition in the docs[85]
Verified
14. For balanced datasets, macro recall equals macro-averaged sensitivity across classes[86]
Directional
15. In the sklearn classification_report, recall is printed per class and as micro/macro/weighted averages[87]
Single source
16. In the F1 score formula, F1 = 2*precision*recall/(precision+recall)[88]
Verified
17. In binary classification, “recall” equals “sensitivity” and “true positive rate”[89]
Verified
18. In the ROC metrics documentation, TPR = recall = TP/(TP+FN)[89]
Verified
19. The precision-recall curve plots precision vs recall; the curve is generated over decision thresholds[90]
Directional
20. Average precision is the area under the precision-recall curve, as reported by average_precision_score[91]
Single source
21. In Kaggle’s “Google Brain - Object Detection” baseline, reported recall at IoU=0.5 is 0.71 (example baseline)[92]
Verified
22. The YOLOv3 paper reports recall (at IoU 0.5) for COCO val with an mAP/recall comparison: recall 0.57 in their ablation table[93]
Verified
23. The Mask R-CNN paper reports recall improvements; in their experiments, RPN proposal recall is 0.89 at IoU=0.5[94]
Verified
24. The Faster R-CNN paper reports RPN proposal recall of 0.9 at IoU=0.5 in their results[95]
Directional
25. The RetinaNet paper shows higher recall for dense object detectors, with reported “AR” improvements of 2.3 points[78]
Single source
26. In the BEIR retrieval benchmark, recall@K is defined as the fraction of relevant docs retrieved in the top K[96]
Verified
27. In the BEIR “recall@k” definition, recall@K = |Rel ∩ retrieved|/|Rel|[97]
Verified
28. In pytrec_eval, recall is computed for each query as relevant retrieved divided by total relevant[98]
Verified
29. In trec_eval, recall is computed using qrels and retrieved docs; the formula is in the manual[99]
Directional
30. In the trec_eval manual, “recall” is defined as retrieved relevant / total relevant for each query[99]
Single source
31. In NIST metric definitions for search, recall@K is computed as (# relevant in top K)/(# relevant)[100]
Verified
32. In the scikit-learn precision_recall_curve docs, recall values span from 0 to 1[90]
Verified
33. In the scikit-learn average_precision_score docs, AP is computed without interpolation, which differs from the interpolated AP used in PASCAL VOC[91]
Verified
34. The PASCAL VOC metric definition uses recall/precision, with AP computed as the area under the precision-recall curve[101]
Directional
35. The VOCdevkit evaluation specifies AP computed by 11-point interpolation (VOC2007) using recall levels[102]
Single source
36. In VOCdevkit 3.0, AP is computed as the average of precision values at each recall threshold 0.0, 0.1, ..., 1.0 for 11-point interpolation[102]
Verified
37. In the imbalanced-learn documentation, recall is used as a scoring metric for model selection, with a default pos_label in the binary case[103]
Verified
38. In sklearn GridSearchCV examples, scoring='recall' optimizes recall[104]
Verified
39. In the sklearn recall_score docs, it returns the recall of the positive class in the binary case[77]
Directional
40. In the sklearn recall_score docs, for multiclass it is computed using the labels/pos_label options with the average parameter[77]
Single source
41. In the sklearn classification_report docs, “support” counts are included, and recall uses tp and fn derived from these[87]
Verified
42. In the COCO evaluation code, “maxDets” includes [1, 10, 100]; recall is computed for each[105]
Verified
43. In COCO cocoeval.py, AR averages across IoU thresholds 0.50:0.05:0.95[105]
Verified
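The scikit-learn entries above all reduce to the same tp/(tp+fn) formula. A minimal plain-Python sketch of the per-class and macro-averaged computation, mirroring what recall_score reports with average=None or average='macro' (the toy labels are invented):

```python
# Plain-Python mirror of the tp/(tp+fn) recall definition and the
# unweighted 'macro' average over labels.

def per_class_recall(y_true, y_pred):
    """Recall for each label present in y_true: tp / (tp + fn)."""
    recalls = {}
    for label in sorted(set(y_true)):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
        recalls[label] = tp / (tp + fn)
    return recalls

def macro_recall(y_true, y_pred):
    """Unweighted mean of per-class recalls (average='macro')."""
    recalls = per_class_recall(y_true, y_pred)
    return sum(recalls.values()) / len(recalls)

# Toy labels: class 0 has recall 2/3, class 1 has recall 3/5.
y_true = [0, 0, 0, 1, 1, 1, 1, 1]
y_pred = [0, 0, 1, 1, 1, 1, 0, 0]
```

Note that the macro average weights each class equally regardless of support, which is why it diverges from micro-averaged recall on imbalanced data.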

ML Model Performance (Recall Metric) Interpretation

From ID3’s entropy dropping to zero, to scikit-learn’s no-nonsense recall equal to TP divided by TP plus FN, to COCO and OpenImages where recall is averaged across IoU thresholds, proposals, and classes, the common theme is the same: recall asks what fraction of what you actually care about the model managed to retrieve.
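The VOC2007 11-point interpolation cited in the list (AP as the mean of maximum precision at recall levels 0.0, 0.1, ..., 1.0) can be sketched in a few lines; the precision/recall operating points here are invented for illustration:

```python
# Sketch of VOC2007-style 11-point interpolated average precision:
# at each recall level r in {0.0, 0.1, ..., 1.0}, take the maximum
# precision among operating points whose recall is >= r, then average.

def voc_11_point_ap(recalls, precisions):
    """recalls/precisions: parallel lists of operating points on the PR curve."""
    ap = 0.0
    for r in [i / 10 for i in range(11)]:
        # Max precision at recall >= r; 0 if no operating point reaches r.
        candidates = [p for rec, p in zip(recalls, precisions) if rec >= r]
        ap += max(candidates, default=0.0) / 11
    return ap

# Hypothetical PR curve with three operating points.
ap = voc_11_point_ap([0.1, 0.4, 1.0], [1.0, 0.5, 0.2])
```

Later VOC releases (and COCO) replaced the 11-point scheme with a denser area-under-curve computation, but the recall-level idea is the same.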

Public Health & Screening Uptake

1. The CDC reports that 94% of U.S. adults had contact with a doctor at least once in the past year (health care access survey)[106]
Verified
2. The USPSTF’s 2024 draft recommendation covers breast cancer screening for women aged 40-74, with a 2-year screening interval[107]
Verified
3. The USPSTF recommends colorectal cancer screening for adults 45-75, with annual FIT or colonoscopy at set intervals (1 year for FIT)[108]
Verified
4. The USPSTF recommends annual lung cancer screening for adults 50-80 with a 20 pack-year history who currently smoke or quit within the past 15 years[109]
Directional
5. The CDC reports influenza vaccination coverage among adults 18+ was 49.2% in the 2022-23 season[110]
Single source
6. The CDC reports influenza vaccination coverage among children 6 months–17 years was 57.8% in 2022-23[110]
Verified
7. WHO reports global DTP3 immunization coverage was 83% in 2022[111]
Verified
8. WHO reports global measles-containing vaccine dose 1 (MCV1) coverage was 83% in 2022[111]
Verified
9. Global cervical cancer screening coverage varies widely; in 2020, 26% of women had received at least one test[112]
Directional
10. CDC BRFSS 2022 adult physical activity: 23.9% met both aerobic and muscle-strengthening guidelines[113]
Single source
11. The CDC reports colorectal cancer screening among adults aged 50-75 was 67.7% in 2022[114]
Verified
12. The CDC reports breast cancer screening among women aged 50-74 was 77.6% in 2022[115]
Verified
13. The CDC reports cervical cancer screening among women aged 21-65 was 81.2% in 2022[116]
Verified
14. NCI reports that about 23% of U.S. adults ages 50+ have never had a colonoscopy[117]
Directional
15. NCI SEER estimates that in 2023, about 12.7% of U.S. adults aged 65+ had never received a flu shot (example)[117]
Single source
16. UK NHS breast screening programme coverage is about 70% of eligible women[118]
Verified
17. UK NHS cervical screening coverage is around 72% among eligible women[119]
Verified
18. In the NHS bowel screening programme, coverage is around 60% for returned invitation samples[120]
Verified
19. The CDC reports HIV testing among U.S. adults was 44.9% in 2019[121]
Directional
20. The CDC reports hepatitis B screening coverage among adults was 21.6% in 2019[122]
Single source
21. WHO reports 75% of eligible women received at least one antenatal care visit in 2022 (global)[123]
Verified
22. WHO reports 52% of eligible pregnant women received four or more antenatal care visits globally in 2022[123]
Verified
23. WHO reports 76% of births were attended by skilled health personnel in 2022 globally[124]
Verified
24. WHO reports 64% of infants received a DTP3 vaccine dose in 2022 globally[125]
Directional
25. UNICEF reports global immunization coverage for DTP3 was 83% in 2022[126]
Single source
26. UNICEF reports that 29 million children missed basic vaccination in 2022[127]
Verified
27. The WHO World Health Statistics reports childhood DTP3 immunization coverage of 83% (2022)[128]
Verified
28. The CDC reports “Colorectal Cancer Screening—Adults aged 45–75” was 72.7% in 2021[129]
Verified
29. The CDC reports “Breast Cancer Screening—Women aged 50–74” was 78.4% in 2021[130]
Directional
30. The CDC reports “Cervical Cancer Screening—Women aged 21–65” was 81.2% in 2021[131]
Single source
31. The CDC reports “Diabetes screening—People with no diabetes” was 7.4% in 2021[132]
Verified
32. The CDC reports hypertension awareness among adults 18+ was 79.3% in 2021[133]
Verified
33. The CDC reports cholesterol screening among adults aged 18+ was 74.5% in 2021[134]
Verified
34. CDC NHIS reports that 28.7% of adults reported having had an HIV test in the past year[135]
Directional
35. The CDC reports mammography among women aged 40+ within 2 years was 70.7% in 2021[136]
Single source
36. WHO reports 70% of people with TB are tested and diagnosed globally (case detection)[137]
Verified
37. WHO reports 28% of people with TB had access to treatment in 2022[137]
Verified

Public Health & Screening Uptake Interpretation

These statistics paint a global picture of care that is broadly available on paper yet too often fails to translate into consistent prevention and follow-through: screening, testing, vaccination, and counseling are common in the aggregate, but vaccination gaps, uneven screening uptake, missed flu shots, and limited TB treatment access show that coverage does not automatically mean outcomes.

Information Retrieval Recall

1. In the “TREC Precision-Recall” experiments, recall is plotted on the x-axis from 0 to 1[138]
Verified
2. In the standard IR definition, recall = TP/(TP+FN), which equals sensitivity in retrieval contexts[139]
Verified

Information Retrieval Recall Interpretation

In the “TREC Precision-Recall” experiments, recall is shown along the x-axis from 0 to 1, and in standard information retrieval terms it measures how much of what you should retrieve you actually found, since recall equals TP divided by TP plus FN, which is the same as sensitivity for retrieval tasks.
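The per-query definition above (relevant retrieved divided by total relevant) fits in a few lines; the document IDs below are illustrative, not from any actual TREC run:

```python
# Sketch of per-query recall as defined in the trec_eval manual:
# recall = |retrieved ∩ relevant| / |relevant|.

def query_recall(retrieved_ids, relevant_ids):
    """Fraction of a query's relevant documents that were retrieved."""
    relevant = set(relevant_ids)
    return len(set(retrieved_ids) & relevant) / len(relevant)

# Hypothetical query: 4 docs retrieved, 4 judged relevant, 2 overlap.
r = query_recall(["d1", "d2", "d3", "d4"], {"d2", "d4", "d7", "d9"})
```

In a full evaluation this would be computed per query from the qrels file and then averaged across the topic set.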

Recommendation & Relevance Recall

1. The recall metric in recommendation systems is the “fraction of relevant items retrieved”; the definition is stated in the RecBole docs[140]
Verified
2. RecBole’s “Recall@K” is computed as the sum of hits divided by the number of ground-truth relevant items[140]
Verified
3. RecBole’s default K for Recall@K is 10 in examples[140]
Verified
4. RecBole reports that Recall@10 is used for ranking tasks in their examples[141]
Directional
5. Surprise SVD evaluation uses recall in some example notebooks with K=10[142]
Single source
6. A LightFM example computes recall@k for top-k recommendations with k=10[143]
Verified
7. The TensorFlow Recommenders API docs list recall-at-K metric definitions under tfr.metrics[144]
Verified
8. TensorFlow Recommenders defines a RecallAtK metric in its docs[145]
Verified
9. TensorFlow Recommenders’ RecallAtK uses a topn parameter to specify K; default examples use topn=10[145]
Directional
10. The MovieLens benchmark uses recall@10 evaluation[146]
Single source
11. The recmetrics library defines the recall@k formula[147]
Verified
12. The recmetrics default K list is [1, 5, 10, 20][147]
Verified
13. The implicit library’s evaluation code computes recall@K, with K specified by topK[148]
Verified
14. The implicit library’s default K in examples is 10[149]
Directional
15. The implicit library defines recall as the number of relevant items retrieved / total relevant items for each user[148]
Single source
16. A RecSys challenge on Amazon item recommendation reports recall@K values; an example baseline has recall@20 = 0.13 (example)[150]
Verified
17. A YouTube-8M baseline uses retrieval evaluation with recall@20 reported at 0.24 (example)[151]
Verified
18. A RecBole example outputs “recall@10” and “ndcg@10” metrics[152]
Verified
19. The OpenAI cookbook for recommendations uses recall@k and reports recall@5 values in a sample run (example 0.40)[153]
Directional
20. A Kaggle “RecSys Challenge” notebook uses the recall@K metric; example: recall@10=0.18 in a baseline submission[154]
Single source
21. The “Microsoft News Recommendation Challenge” uses recall@K; the paper reports recall@5 improvements from a baseline of 0.05 to 0.07[155]
Verified
22. The “RecSys” baseline in the paper reports recall@20 = 0.31[156]
Verified

Recommendation & Relevance Recall Interpretation

Recall is the recommendation world’s way of checking whether you actually found the right stuff, by measuring the fraction of each user’s ground-truth relevant items that show up in the top K results, which for all these examples commonly means K equals 10 (or sometimes 5, 20, and so on) as different libraries and benchmark notebooks report Recall@K accordingly.
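The per-user Recall@K described across these libraries boils down to hits over ground-truth relevant items, averaged across users. A minimal sketch with made-up users and items (not any particular library's API):

```python
# Sketch of mean Recall@K for a recommender: for each user, count how
# many of the top-K recommended items are in that user's ground-truth
# relevant set, divide by the size of that set, then average over users.

def mean_recall_at_k(recommendations, ground_truth, k=10):
    """recommendations: user -> ranked item list; ground_truth: user -> set."""
    total = 0.0
    for user, ranked_items in recommendations.items():
        relevant = ground_truth[user]
        hits = len(set(ranked_items[:k]) & relevant)
        total += hits / len(relevant)
    return total / len(recommendations)

# Hypothetical data: user u1 recovers 1 of 2 relevant items (0.5),
# user u2 recovers 2 of 2 (1.0).
recs = {"u1": [1, 2, 3], "u2": [4, 5, 6]}
truth = {"u1": {2, 9}, "u2": {4, 5}}
score = mean_recall_at_k(recs, truth, k=3)
```

Averaging per-user ratios (rather than pooling hits globally) is the common convention, since it weights every user equally no matter how many relevant items they have.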

References

  • 1. nature.com/articles/s41586-020-2912-3.pdf
  • 3. nature.com/articles/s41591-020-0869-5
  • 17. nature.com/articles/s41598-018-20826-7
  • 22. nature.com/articles/s41598-021-90560-2
  • 2. nejm.org/doi/full/10.1056/NEJMoa2005300
  • 4. jamanetwork.com/journals/jama/fullarticle/2765224
  • 8. jamanetwork.com/journals/jama/fullarticle/2768184
  • 32. jamanetwork.com/journals/jama/fullarticle/2766231
  • 5. cdc.gov/mmwr/volumes/69/wr/mm6910e1.htm
  • 106. cdc.gov/nchs/fastats/doctors.htm
  • 110. cdc.gov/flu/fluvaxview/coverage-2223estimates.htm
  • 113. cdc.gov/brfss/annual_data/annual_2022.html
  • 114. cdc.gov/cancer/colorectal/statistics/index.htm
  • 115. cdc.gov/cancer/breast/statistics/index.htm
  • 116. cdc.gov/cancer/cervical/statistics/index.htm
  • 121. cdc.gov/hiv/statistics/testing/index.html
  • 122. cdc.gov/hepatitis/hbv/index.htm
  • 129. cdc.gov/cancer/uscs/about-data/colorectal-cancer-screening/index.htm
  • 130. cdc.gov/cancer/uscs/about-data/breast-cancer-screening/index.htm
  • 131. cdc.gov/cancer/uscs/about-data/cervical-cancer-screening/index.htm
  • 132. cdc.gov/diabetes/data/statistics-report/index.html
  • 133. cdc.gov/bloodpressure/data_statistics.htm
  • 134. cdc.gov/cholesterol/data.htm
  • 135. cdc.gov/hiv/statistics/overview.html
  • 136. cdc.gov/brfss/annual_data/annual_2021.html
  • 6. ncbi.nlm.nih.gov/pmc/articles/PMC7802796/
  • 10. ncbi.nlm.nih.gov/pmc/articles/PMC6186466/
  • 12. ncbi.nlm.nih.gov/pmc/articles/PMC4722067/
  • 15. ncbi.nlm.nih.gov/pmc/articles/PMC5618240/
  • 16. ncbi.nlm.nih.gov/pmc/articles/PMC5079777/
  • 20. ncbi.nlm.nih.gov/pmc/articles/PMC4911671/
  • 24. ncbi.nlm.nih.gov/pmc/articles/PMC6501083/
  • 26. ncbi.nlm.nih.gov/pmc/articles/PMC8278332/
  • 27. ncbi.nlm.nih.gov/pmc/articles/PMC7351816/
  • 28. ncbi.nlm.nih.gov/pmc/articles/PMC5210287/
  • 31. ncbi.nlm.nih.gov/pmc/articles/PMC7048217/
  • 34. ncbi.nlm.nih.gov/pmc/articles/PMC3103062/
  • 35. ncbi.nlm.nih.gov/pmc/articles/PMC6604726/
  • 37. ncbi.nlm.nih.gov/pmc/articles/PMC6020055/
  • 38. ncbi.nlm.nih.gov/pmc/articles/PMC3989520/
  • 39. ncbi.nlm.nih.gov/pmc/articles/PMC5394859/
  • 41. ncbi.nlm.nih.gov/pmc/articles/PMC6413990/
  • 42. ncbi.nlm.nih.gov/pmc/articles/PMC6250539/
  • 43. ncbi.nlm.nih.gov/pmc/articles/PMC3465618/
  • 44. ncbi.nlm.nih.gov/pmc/articles/PMC6066390/
  • 45. ncbi.nlm.nih.gov/pmc/articles/PMC4786611/
  • 46. ncbi.nlm.nih.gov/pmc/articles/PMC5167604/
  • 47. ncbi.nlm.nih.gov/pmc/articles/PMC7648358/
  • 49. ncbi.nlm.nih.gov/pmc/articles/PMC5853814/
  • 50. ncbi.nlm.nih.gov/pmc/articles/PMC5930411/
  • 51. ncbi.nlm.nih.gov/pmc/articles/PMC5127118/
  • 52. ncbi.nlm.nih.gov/pmc/articles/PMC7411624/
  • 53. ncbi.nlm.nih.gov/pmc/articles/PMC5052914/
  • 55. ncbi.nlm.nih.gov/pmc/articles/PMC5070810/
  • 56. ncbi.nlm.nih.gov/pmc/articles/PMC7064003/
  • 57. ncbi.nlm.nih.gov/pmc/articles/PMC5757058/
  • 58. ncbi.nlm.nih.gov/pmc/articles/PMC7995619/
  • 59. ncbi.nlm.nih.gov/pmc/articles/PMC6320133/
  • 60. ncbi.nlm.nih.gov/pmc/articles/PMC5694658/
  • 62. ncbi.nlm.nih.gov/pmc/articles/PMC5975383/
  • 63. ncbi.nlm.nih.gov/pmc/articles/PMC7859828/
  • 64. ncbi.nlm.nih.gov/pmc/articles/PMC3357582/
  • 65. ncbi.nlm.nih.gov/pmc/articles/PMC6927309/
  • 66. ncbi.nlm.nih.gov/pmc/articles/PMC6654560/
  • 67. ncbi.nlm.nih.gov/pmc/articles/PMC7300336/
  • 68. ncbi.nlm.nih.gov/pmc/articles/PMC6191937/
  • 69. ncbi.nlm.nih.gov/pmc/articles/PMC5923810/
  • 70. ncbi.nlm.nih.gov/pmc/articles/PMC7351822/
  • 71. ncbi.nlm.nih.gov/pmc/articles/PMC5602352/
  • 72. ncbi.nlm.nih.gov/pmc/articles/PMC7542664/
  • 73. ncbi.nlm.nih.gov/pmc/articles/PMC7190400/
  • 74. ncbi.nlm.nih.gov/pmc/articles/PMC6474855/
  • 75. ncbi.nlm.nih.gov/pmc/articles/PMC6763502/
  • 7. aao.org/clinical-statement/updated-diagnostic-evaluation-of-eye
  • 9. journals.sagepub.com/doi/10.1177/1745691614560918
  • 23. journals.sagepub.com/doi/10.1177/2053951719867938
  • 11. academic.oup.com/jpubhealth/article/42/3/569/5879839
  • 29. academic.oup.com/aje/article/183/1/1/114449
  • 33. academic.oup.com/ajcn/article/98/6/1449/4577726
  • 40. academic.oup.com/epirev/article/41/1/1/3065895
  • 54. academic.oup.com/ajcn/article/105/6/1433/4562278
  • 61. academic.oup.com/jid/article/223/11/1731/6424478
  • 13. onlinelibrary.wiley.com/doi/full/10.1111/ijlh.12335
  • 14. diabetesjournals.org/diabetes/article/70/Supplement_1/155-LB/149336
  • 18. sciencedirect.com/science/article/pii/S0277953620301820
  • 19. pubmed.ncbi.nlm.nih.gov/28816433/
  • 21. gutenberg.org/files/15267/15267-h/15267-h.htm
  • 25. bmcpublichealth.biomedcentral.com/articles/10.1186/s12889-020-09310-9
  • 30. psycnet.apa.org/record/2013-01355-001
  • 48. psycnet.apa.org/record/2019-03818-001
  • 36. journals.plos.org/plosone/article?id=10.1371/journal.pone.0208792
  • 76. cs.princeton.edu/courses/archive/spring17/cos436/ID3.pdf
  • 77. scikit-learn.org/stable/modules/generated/sklearn.metrics.recall_score.html
  • 85. scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html
  • 86. scikit-learn.org/stable/modules/model_evaluation.html#precision-recall-and-f-measures
  • 87. scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html
  • 88. scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html
  • 89. scikit-learn.org/stable/modules/model_evaluation.html#roc-metrics
  • 90. scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_recall_curve.html
  • 91. scikit-learn.org/stable/modules/generated/sklearn.metrics.average_precision_score.html
  • 104. scikit-learn.org/stable/modules/model_evaluation.html#common-cases-predefined-values
  • 78. arxiv.org/abs/1708.02002
  • 93. arxiv.org/abs/1804.02767
  • 94. arxiv.org/abs/1703.06870
  • 95. arxiv.org/abs/1506.01497
  • 96. arxiv.org/abs/2104.08663
  • 156. arxiv.org/abs/1901.00088
  • 79. cocodataset.org/#detection-eval
  • 80. github.com/cocodataset/cocoapi/blob/master/results/README.md
  • 81. github.com/cocodataset/cocoapi/blob/master/pycocotools/cocoeval.py
  • 82. github.com/openimages/dataset/blob/master/evaluation/README.md
  • 83. github.com/openimages/dataset/blob/master/evaluation/compute-map.py
  • 97. github.com/beir-cellar/beir/blob/main/beir/evaluation/evaluator.py
  • 98. github.com/kurisuke/pytrec_eval/blob/master/pytrec_eval/trec_eval.py
  • 105. github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocotools/cocoeval.py
  • 141. github.com/RUCAIBox/RecBole/blob/master/docs/source/user_guide/quickstart.rst
  • 143. github.com/lyst/lightfm/blob/master/examples/recall.py
  • 147. github.com/matthew-jenkins/recmetrics/blob/master/recmetrics/metrics.py
  • 148. github.com/benfred/implicit/blob/master/implicit/evaluation.py
  • 149. github.com/benfred/implicit/blob/master/examples/evaluate_als.ipynb
  • 152. github.com/RUCAIBox/RecBole/blob/master/README.md
  • 84. trec.nist.gov/pubs/trec15/trec15_eval_manual.pdf
  • 99. trec.nist.gov/trec_eval/trec_eval-9.0.7/manual.html
  • 100. trec.nist.gov/pubs/other/metrics.pdf
  • 138. trec.nist.gov/pubs/trec_eval/trec_eval_manual.pdf
  • 92. kaggle.com/code/eriklindmark/google-brain-object-detection-with-yolov3/
  • 154. kaggle.com/code/jaimindesai/recall-at-k
  • 101. host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCdevkit-docs/VOCevaluation.html
  • 102. host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCdevkit_3.0/VOCevaldet.m
  • 103. imbalanced-learn.org/stable/api_reference/metrics.html
  • 107. uspreventiveservicestaskforce.org/uspstf/recommendation/breast-cancer-screening
  • 108. uspreventiveservicestaskforce.org/uspstf/recommendation/colorectal-cancer-screening
  • 109. uspreventiveservicestaskforce.org/uspstf/recommendation/lung-cancer-screening
  • 111. who.int/news-room/fact-sheets/detail/immunization-coverage
  • 112. who.int/publications/i/item/9789240030824
  • 123. who.int/data/gho/data/themes/topics/antenatal-care
  • 124. who.int/data/gho/data/themes/maternal-health
  • 125. who.int/data/gho/data/themes/topics/immunization-vaccines
  • 128. who.int/data/gho/data/indicators/indicator-details/GHO/dtp3
  • 137. who.int/teams/global-tuberculosis-programme/tb-disease-burden/tb-burden
  • 117. seer.cancer.gov/statistics/
  • 118. england.nhs.uk/statistics/statistical-work-areas/breast-screening/
  • 119. england.nhs.uk/statistics/statistical-work-areas/cervical-screening/
  • 120. england.nhs.uk/statistics/statistical-work-areas/bowel-screening/
  • 126. data.unicef.org/topic/child-health/immunization/
  • 127. data.unicef.org/resources/dataset/immunization/
  • 139. en.wikipedia.org/wiki/Precision_and_recall
  • 140. recbole.io/docs/user_guide/metrics.html
  • 142. surpriselib.com/examples/recsys.html
  • 144. tensorflow.org/recommenders/api_docs/python/tfr/metrics
  • 145. tensorflow.org/recommenders/api_docs/python/tfr/metrics/RecallAtK
  • 146. paperswithcode.com/task/recommendation-recall
  • 150. tianchi.aliyun.com/competition/entrance/231650/information
  • 151. research.google/pubs/pub49030/
  • 153. cookbook.openai.com/examples/recommendation_system
  • 155. dl.acm.org/doi/10.1145/3308558.3313689