Repeatability of manual coding of cancer reports in the South African National Cancer Registry, 2010

dc.contributor.author Dube, Nomathemba Michell
dc.contributor.author Girdler-Brown, B.V. (Brendan)
dc.contributor.author Tint, Khin-San
dc.contributor.author Kellett, Patricia
dc.date.accessioned 2014-04-22T12:21:08Z
dc.date.available 2014-04-22T12:21:08Z
dc.date.issued 2013
dc.description This study was carried out by Nomathemba Dube in partial fulfilment of the requirements for her Master’s degree in Public Health in the School of Health Systems and Public Health at the University of Pretoria. en_US
dc.description.abstract Data validity is an essential aspect of cancer registries in ensuring data quality for research and interventions. This study evaluated the repeatability of manual coding of cancer reports in the South African National Cancer Registry (NCR). This cross-sectional study used the Delphi technique to classify 48 generic tumour sites into sites that would be most likely (“difficult”) and least likely (“not difficult”) to give rise to discordant results among coders. Reports received from the Charlotte Maxeke Academic Hospital were manually recoded by five coders (2 301 reports, i.e. approximately 400 reports each) for intra-coder agreement; and by four coders (400 reports) for inter-coder agreement. Unweighted kappa statistics were calculated and interpreted using Byrt’s criteria. After four rounds of the Delphi technique, consensus was reached on the classification of 91.7% (44/48) of the sites. The remaining four sites were classified according to modal expert opinion. The overall kappa was higher for intra-coder agreement (0.92) than for inter-coder agreement (0.89). “Not difficult” tumour sites showed better agreement than “difficult” tumour sites. Ten sites (skin other, basal cell carcinoma of the skin, connective tissue, other specified, lung, colorectal, prostate, oesophagus, naso-oropharynx and primary site unknown) were among the top 80% of misclassified sites. The repeatability of manual coding at the NCR was rated as “good” according to Byrt’s criteria. Misclassified sites should be prioritised for coder training and the strengthening of the quality assurance system. en_US
dc.description.librarian am2014 en_US
dc.description.librarian ay2014
dc.description.sponsorship The South African Field Epidemiology and Laboratory Training Programme (SAFELTP), funded by the Centers for Disease Control and Prevention (CDC) en_US
dc.description.uri http://www.sajei.co.za/index.php/SAJEI en_US
dc.identifier.citation Dube, N, Girdler-Brown, B, Tint, K & Kellett, P 2013, 'Repeatability of manual coding of cancer reports in the South African National Cancer Registry, 2010', Southern African Journal of Epidemiology and Infection, vol. 28, no. 3, pp. 157-165. en_US
dc.identifier.issn 1015-8782 (print)
dc.identifier.issn 2220-1084 (online)
dc.identifier.uri http://hdl.handle.net/2263/39678
dc.language.iso en en_US
dc.publisher MedPharm Publications en_US
dc.rights © SAJEI en_US
dc.subject Cancer reports en_US
dc.subject Manual coding en_US
dc.subject Repeatability en_US
dc.subject Kappa score en_US
dc.subject South African National Cancer Registry (NCR) en_US
dc.subject.lcsh Cancer -- Research -- South Africa en
dc.subject.lcsh Delphi method en
dc.title Repeatability of manual coding of cancer reports in the South African National Cancer Registry, 2010 en_US
dc.type Article en_US

