Randomized controlled trials (RCTs), studies in which participants are randomly allocated to an experimental or control condition to evaluate intervention efficacy or other outcomes, have been critical to advancing new therapeutic approaches. For example, a 2017 trial evaluated whether anatomical implants were aesthetically superior to round implants in breast augmentation and found no difference between the two [1]. Despite the methodological rigor commonly associated with RCTs across medical specialties, such studies are uncommon and difficult to conduct in plastic surgery [2]. In a study of 3 plastic surgery journals, only 1.83% of all publications were RCTs [3], and other investigators have reported that approximately 85% of aesthetic surgery articles are case reports, expert reviews, or opinion pieces [4]. Despite these findings, the annual output of plastic surgery RCTs appears to be increasing across a wide array of topics [5].
With the increasing number of RCTs being conducted and published in the field, clear, accurate, and complete reporting of trial information is critical for readers to properly evaluate a trial's methodology and arrive at appropriate conclusions about its merits and applicability to patients. The Consolidated Standards of Reporting Trials (CONSORT) statement [6] provides guidance on reporting important trial details, such as randomization, blinding, and allocation concealment. CONSORT has been widely adopted by medical journals, and adherence to reporting all CONSORT items is either required or recommended by many of them; the CONSORT website lists 585 journals as endorsers [7]. In a previous investigation, we found that 35% of plastic surgery journals mentioned CONSORT in their instructions to authors [8], and Agha et al. [9] reported that 23% of surgery journals mentioned the CONSORT statement. CONSORT has also been used as explicit criteria for evaluating the completeness of reporting of randomized trials. In plastic surgery, Karri [5] found wide variation in CONSORT item reporting: sample size calculation was reported in only 13% of trials, the randomization method in 30%, allocation concealment in 19%, and blinding in 52%. Agha et al. found that, in a sample of plastic surgery trials, the median CONSORT score was 11.5 of 23 items, with compliance being poorest for intervention/comparator details (7%), randomization implementation (11%), and blinding (26%) [10]. The extremely low rate of intervention reporting signals the need for further evaluation.
The importance of thorough intervention reporting is widely recognized, and a reporting guideline was subsequently created to address CONSORT's limited guidance on describing trial interventions. This guideline, the Template for Intervention Description and Replication (TIDieR) checklist [11], contains the minimum recommended items for describing an intervention: why, what, who provided, how, where, when and how much, tailoring, modifications, and how well the intervention was adhered to. Such detail enhances reproducibility and permits health providers to make better judgments about applying these interventions in their clinical practice. In the present study, we will apply the TIDieR checklist to evaluate the completeness of intervention reporting of RCTs in plastic surgery. This study will expand current knowledge by elucidating the specific components that require improvement and by establishing a baseline for future comparisons. The primary objective of this study is to evaluate the completeness of intervention reporting. The secondary objectives are to evaluate whether particular trial characteristics are associated with improved intervention reporting and whether publication of TIDieR led to improvements in intervention reporting in prominent plastic surgery journals.
This cross-sectional review protocol will be published a priori on the Open Science Framework (https://osf.io).
SJ and MV will search PubMed (which includes the MEDLINE collection) for RCTs published in plastic surgery journals. PubMed is an internationally recognized database and has been used previously in similar methodological reviews. To perform this search, we will use PubMed's clinical trial filter (Clinical Trial[ptyp]) and limit by publication date. The following journals will be included: Plastic and Reconstructive Surgery; Journal of Plastic, Reconstructive & Aesthetic Surgery; Annals of Plastic Surgery; Aesthetic Surgery Journal; Aesthetic Plastic Surgery; Journal of Craniofacial Surgery; The Cleft Palate-Craniofacial Journal; Journal of Reconstructive Microsurgery; and JAMA Facial Plastic Surgery. Journal selection began with the top 15 journals in the 2019 Google Scholar h5-index "Plastic and Reconstructive Surgery" subcategory; journals that were not specific to plastic surgery (e.g., Ophthalmic Plastic & Reconstructive Surgery) were excluded. We selected Google Scholar Metrics for journal identification because the platform is open access and not paywalled, unlike other journal ranking platforms. SJ and MV will search these journals on PubMed to identify RCTs published from January 1, 2011 to December 31, 2013 and from January 1, 2016 to December 31, 2018. These dates were selected because TIDieR was published in March 2014, and we wanted a cohort of studies before publication of TIDieR and a cohort following its publication for comparison. The gap between TIDieR publication and the start of the second search period allows 21 months for TIDieR to take effect.
The searches will use each journal's electronic International Standard Serial Number (ISSN) rather than its name. The following search strings will be deployed:
Pre-TIDieR search string: (((((((("1075-1270"[Journal] OR "1748-6815"[Journal]) OR "0148-7043"[Journal]) OR "1090-820X"[Journal]) OR "0364-216X"[Journal]) OR "1049-2275"[Journal]) OR "1055-6656"[Journal]) OR "0743-684X"[Journal]) OR "0032-1052"[Journal]) OR "2168-6076"[Journal] AND (Clinical Trial[ptyp] AND ("2011/01/01"[PDAT] : "2013/12/31"[PDAT]))
Post-TIDieR search string: (((((((("1075-1270"[Journal] OR "1748-6815"[Journal]) OR "0148-7043"[Journal]) OR "1090-820X"[Journal]) OR "0364-216X"[Journal]) OR "1049-2275"[Journal]) OR "1055-6656"[Journal]) OR "0743-684X"[Journal]) OR "0032-1052"[Journal]) OR "2168-6076"[Journal] AND (Clinical Trial[ptyp] AND ("2016/01/01"[PDAT] : "2018/12/31"[PDAT]))
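For illustration only (this step is not part of the protocol itself), the searches above could also be executed programmatically through the NCBI E-utilities; the sketch below uses Biopython's Entrez module, and the contact email is a placeholder.

```python
# Illustrative sketch, assuming Biopython is installed; the protocol specifies the
# PubMed queries shown above, which are passed here verbatim as the search term.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # placeholder; NCBI requires a contact email

PRE_TIDIER_QUERY = (
    '(((((((("1075-1270"[Journal] OR "1748-6815"[Journal]) OR "0148-7043"[Journal]) '
    'OR "1090-820X"[Journal]) OR "0364-216X"[Journal]) OR "1049-2275"[Journal]) '
    'OR "1055-6656"[Journal]) OR "0743-684X"[Journal]) OR "0032-1052"[Journal]) '
    'OR "2168-6076"[Journal] AND (Clinical Trial[ptyp] '
    'AND ("2011/01/01"[PDAT] : "2013/12/31"[PDAT]))'
)
# The post-TIDieR query is identical apart from the 2016-2018 publication dates.

def search_pubmed(query, retmax=1000):
    """Run a PubMed search and return the list of matching PMIDs."""
    handle = Entrez.esearch(db="pubmed", term=query, retmax=retmax)
    record = Entrez.read(handle)
    handle.close()
    return record["IdList"]

pre_tidier_pmids = search_pubmed(PRE_TIDIER_QUERY)
print(len(pre_tidier_pmids), "records retrieved for the pre-TIDieR period")
```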
Following the execution of these searches, we will take a random sample of 150 records from each search, for a total of 300 records to screen.
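As a minimal sketch of this sampling step, assuming the two PMID lists returned by the searches above are available (the fixed seed is an illustrative assumption for reproducibility and is not mandated by the protocol):

```python
import random

def sample_records(pmids, n=150, seed=2019):
    """Randomly sample n PMIDs for screening; the seed value is an illustrative assumption."""
    rng = random.Random(seed)
    return rng.sample(pmids, n)

# e.g., applied to the PMID lists returned by the two searches:
# records_to_screen = sample_records(pre_tidier_pmids) + sample_records(post_tidier_pmids)
# len(records_to_screen) == 300
```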
We will include all types of randomized controlled trials, including parallel-arm, crossover, and cluster trials. Nonrandomized studies, observational studies, subgroup analyses, cost-effectiveness studies, animal studies, cadaver studies, letters to the editor, follow-up studies, secondary analyses, systematic reviews, meta-analyses, and case reports will be excluded. Other study types may be excluded but will be evaluated on a case-by-case basis, and the protocol will be amended as needed. Finally, studies must be published in English to be included.
The secondary outcomes will be to (1) evaluate factors associated with intervention reporting (journal, number of authors, month and year of publication, sample size, type of study, type of intervention, hypothesis, type of blinding used, trial location (US or international), number of conducting centers, funding source, endorsement of CONSORT or TIDieR by authors, endorsement of CONSORT or TIDieR by journals, data source (primary paper, website, published protocol or previously published paper, or online supplementary material), and whether the trial is registered) and (2) compare the completeness of intervention reporting before and after TIDieR publication.
A training module will be developed by SJ and MV prior to screening and data extraction. All investigators will participate in the training module prior to study commencement. The module will include an overview of the study, a review of Hoffmann et al. [11] including a brief review of all 12 checklist items, and data extraction for one RCT, led by SJ but performed as a group. Following training, each of the investigators tasked with data extraction (SE and SR) will extract data from a second RCT in a masked fashion. Data will be compared, and discrepancies will be discussed in an effort to calibrate inter-rater agreement. Next, SE and SR will extract data from three additional RCTs, again in a masked fashion, and results will be discussed. We will estimate interrater agreement, and if it is acceptable, these investigators will continue with data extraction. If interrater reliability is not adequate, we will repeat this training process. We will define adequate interrater reliability as a weighted kappa (κ) ≥ 0.70.
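A minimal sketch of the agreement check is shown below, assuming scikit-learn is available; linear weights and the 0/1/2 item scoring in the example are illustrative assumptions, since the protocol does not specify the weighting scheme or scoring scale.

```python
from sklearn.metrics import cohen_kappa_score

def adequate_agreement(rater_a, rater_b, threshold=0.70):
    """Return the weighted kappa for two raters' item-level scores and whether it
    meets the prespecified 0.70 threshold. Linear weights are an assumption."""
    kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")
    return kappa, kappa >= threshold

# Hypothetical item-level scores (e.g., 0 = not reported, 1 = partial, 2 = fully reported)
kappa, ok = adequate_agreement([2, 1, 0, 2, 1], [2, 1, 1, 2, 1])
print(f"weighted kappa = {kappa:.2f}; adequate = {ok}")
```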
Rayyan, a systematic review application, will be used to screen PubMed records for eligibility. SE and SR will screen all records in an independent, masked fashion. Following the initial screening exercise, these investigators will hold a consensus meeting to resolve any disagreements in screening. RA and MV will be available for third-party adjudication on an as-needed basis.
Investigators will use a Google Form designed by SJ. The form contains all 12 checklist items as described by Hoffmann et al. [11], as well as the title of the study, name of the journal, number of authors, month and year of publication, sample size, type of study, type of intervention, hypothesis, type of blinding used, trial location (US or international), number of conducting centers, funding source, endorsement of CONSORT or TIDieR by authors, endorsement of CONSORT or TIDieR by journals, data source (primary paper, website, published protocol or previously published paper, or online supplementary material), whether the trial is registered, and, if so, the registry name and number.
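For reference, one extracted record per trial could be represented as a simple data structure; the field names below are illustrative paraphrases of the form items, not the exact Google Form labels.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class ExtractionRecord:
    """One extracted record per trial; field names are illustrative, not the exact form labels."""
    title: str
    journal: str
    n_authors: int
    month_year: str
    sample_size: int
    study_type: str
    intervention_type: str
    hypothesis: str
    blinding: str
    location_us: bool
    n_centers: int
    funding_source: str
    consort_tidier_endorsed_by_authors: bool
    consort_tidier_endorsed_by_journal: bool
    data_source: str                 # primary paper, website, protocol, prior paper, or supplement
    registered: bool
    registry_name: Optional[str] = None
    registry_number: Optional[str] = None
    tidier_items: Dict[int, int] = field(default_factory=dict)  # TIDieR item number -> score
```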
All data extraction will be performed by the same investigators (SE and SR) who performed the initial screening. As before, they will be blinded to the other investigator's decisions. Following data extraction and scoring, both investigators will meet and resolve any discrepancies. RA and MV will be available for third-party adjudication on an as-needed basis.
Descriptive statistics (frequencies and percentages) will be used to summarize adherence to individual checklist items. Visual inspection of histograms of the TIDieR summary scores and the Shapiro-Wilk test will be used to assess normality. We will report means (standard deviations) if normality is upheld or medians (interquartile ranges) if it is violated. We will conduct a multiple regression in which composite TIDieR scores are regressed on the extracted variables (listed above) to determine whether particular characteristics are associated with improved intervention reporting. Finally, we will apply a regression discontinuity design to evaluate the effectiveness of TIDieR as an intervention for improving reporting practices.
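The analysis plan could be carried out along the following lines; this is a sketch only, assuming pandas, SciPy, and statsmodels are available, using a simplified set of hypothetical covariate names and approximating the regression discontinuity with a publication-date term interacted with a post-TIDieR indicator.

```python
import pandas as pd
from scipy.stats import shapiro
import statsmodels.formula.api as smf

def analyze(df: pd.DataFrame):
    """df is assumed to hold one row per trial with a composite 'tidier_score',
    a datetime 'pub_date', and extracted covariates (names here are hypothetical)."""
    # Normality check on composite TIDieR scores (alongside visual inspection of histograms)
    stat, p = shapiro(df["tidier_score"])
    if p > 0.05:
        print("mean (SD):", df["tidier_score"].mean(), df["tidier_score"].std())
    else:
        print("median (IQR):", df["tidier_score"].median(),
              df["tidier_score"].quantile([0.25, 0.75]).tolist())

    # Multiple regression of composite scores on trial characteristics (covariates abbreviated)
    ols = smf.ols("tidier_score ~ sample_size + registered + n_centers", data=df).fit()
    print(ols.summary())

    # Regression-discontinuity-style model around the March 2014 TIDieR publication date
    cutoff = pd.Timestamp("2014-03-01")
    df = df.assign(
        months_from_cutoff=(df["pub_date"] - cutoff).dt.days / 30.44,
        post_tidier=(df["pub_date"] >= cutoff).astype(int),
    )
    rdd = smf.ols("tidier_score ~ months_from_cutoff * post_tidier", data=df).fit()
    print(rdd.summary())
```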
We plan to publish this review in a peer-reviewed journal. We may also present this review at local and/or national conferences.
The authors declare no conflicts of interest.
Ethical approval was not needed for this study.
This investigation will be funded by an Oklahoma State University Presidential Research Grant.
SJ and MV devised this study. SJ and MV drafted this article and all authors revised it and approved the final version for publication.
None.
Matt Vassar and Sam Jellison are the guarantors of this study.
Name of the registry: N/A
Unique identifying number or registration ID: N/A
Hyperlink to the registration (must be publicly accessible): N/A
Hidalgo, D.A. and Weinstein, A.L. (2017). Intraoperative comparison of anatomical versus round implants in breast augmentation. Plast. Reconstr. Surg. 139: 587–596.
Hassanein, A.H., Herrera, F.A. and Hassanein, O. (2011). Challenges of randomized controlled trial design in plastic surgery. Can. J. Plast. Surg. 19: e28–e29. [PubMed]
Momeni, A., Becker, A., Antes, G., Diener, M.K., Blümle, A. and Stark, B.G. (2009). Evidence-based plastic surgery: controlled trials in three plastic surgical journals (1990 to 2005). Ann. Plast. Surg. 62: 293–296. [PubMed]
Chang, E.Y., Pannucci, C.J. and Wilkins, E.G. (2009). Quality of clinical studies in aesthetic surgery journals: a 10-year review. Aesth. Surg. J. 29: 144–147.
Karri, V. (2006). Randomised clinical trials in plastic surgery: survey of output and quality of reporting. J. Plast. Reconstr. Aesthet. Surg. 59: 787–796.
Schulz, K.F., Altman, D.G., Moher, D. and CONSORT Group (2011). CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. Int. J. Surg. 9: 672–677.
Consort - Endorsers, n.d. http://www.consort-statement.org/about-consort/endorsers (accessed June 5, 2019).
Checketts, J.X., Cook, C., Imani, S., Duckett, L. and Vassar, M. (2018). An evaluation of reporting guidelines and clinical trial registry requirements among plastic surgery journals. Ann. Plast. Surg. 81: 215–219. [PubMed]
Agha, R.A., Barai, I., Rajmohan, S., Lee, S., Anwar, M.O., Fowler, A.J., Orgill, D.P. and Altman, D.G. (2017). Support for reporting guidelines in surgical journals needs improvement: a systematic review. Int. J. Surg. 45: 14–17. [PubMed]
Agha, R.A., Camm, C.F., Doganay, E., Edison, E., Siddiqui, M.R.S. and Orgill, D.P. (2014). Randomised controlled trials in plastic surgery: a systematic review of reporting quality. Eur. J. Plast. Surg. 37: 55–62. [PubMed]
Hoffmann, T.C., Glasziou, P.P., Boutron, I., Milne, R., Perera, R., Moher, D., Altman, D.G., Barbour, V., Macdonald, H., Johnston, M., Lamb, S.E., Dixon-Woods, M., McCulloch, P., Wyatt, J.C., Chan, A.-W. and Michie, S. (2014). Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ 348: g1687.
Supplementary data to this article can be found online at https://s3-eu-west-1.amazonaws.com/ubiquity-partner-network/ijs/journal/ijsp/IJSP42.zip.