Data validity is essential for cancer registries to ensure data quality for research and interventions. This study
evaluated the repeatability of manual coding of cancer reports in the South African National Cancer Registry (NCR).
This cross-sectional study used the Delphi technique to classify 48 generic tumour sites into sites that would be most likely
(“difficult”) and least likely (“not difficult”) to give rise to discordant results among coders. Reports received from the Charlotte
Maxeke Academic Hospital were manually recoded by five coders (2 301 reports, i.e. approximately 400 reports each) for intra-coder
agreement, and by four coders (400 reports) for inter-coder agreement. Unweighted kappa statistics were calculated
and interpreted using Byrt’s criteria. After four rounds of the Delphi technique, consensus was reached on the classification of
91.7% (44/48) of the sites. The remaining four sites were classified according to modal expert opinion. The overall kappa
was higher for intra-coder agreement (0.92) than for inter-coder agreement (0.89). “Not difficult” tumour sites showed better
agreement than “difficult” tumour sites. Ten sites (skin other, basal cell carcinoma of the skin, connective tissue, other specified,
lung, colorectal, prostate, oesophagus, naso-oropharynx and primary site unknown) were among the top 80% of misclassified sites.
The repeatability of manual coding at the NCR was rated as “good” according to Byrt’s criteria. Misclassified sites should be
prioritised for coder training and for strengthening the quality assurance system.
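As an illustration of the agreement statistic named above, the following is a minimal sketch of unweighted (Cohen's) kappa for two coders, assuming each report receives a single tumour-site code. The function name and the example codes are hypothetical and are not taken from the study.

```python
from collections import Counter

def cohen_kappa(codes_a, codes_b):
    """Unweighted (Cohen's) kappa between two coders' tumour-site codes."""
    assert len(codes_a) == len(codes_b) and codes_a, "need paired, non-empty codings"
    n = len(codes_a)
    # Observed agreement: proportion of reports the two coders coded identically.
    p_o = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Chance agreement: expected overlap given each coder's marginal code frequencies.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: four reports, one discordant site code.
kappa = cohen_kappa(["lung", "prostate", "lung", "skin"],
                    ["lung", "prostate", "colon", "skin"])
```

Values near 1 indicate agreement well beyond chance; interpretive cut-offs such as Byrt's criteria map ranges of kappa to qualitative labels like "good".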