This tutorial introduces the definition, process, and tools of quality assessment in a systematic literature review.
If you are new to my channel, you can check out the previous episodes together with this one to get started with the systematic literature review as a research approach.
EP11 Systematic Literature Review Planning: workflow, literature scoping, and review protocol (https://youtu.be/qukb-VytjxQ)
EP12 Develop search strategy: fishing relevant literature for your research (https://youtu.be/9cH5I03jbg0)
EP13 Literature screening: inclusion and exclusion (https://youtu.be/BCdveqka-E4)
You can browse my other research-sharing videos in this YouTube playlist (https://www.youtube.com/playlist?list...)
Please subscribe if you want to be notified when I publish new videos on YouTube.
Quality assessment in systematic literature review
1. Quality assessment in SLR
Systematic Literature Review Workshop (SLRW)
Dr. Jingjing Lin
Assistant Professor
Centre of IT-based Education
Toyohashi University of Technology
lin.jingjing.qc@tut.jp
2. Don't judge a book by its cover. (English idiom)
Do not take published studies at face value. (Boland, Cherry, and Dickson, 2017, p.109)
3. Which side are you on?
• "It is published in Nature! It must be a good quality study. I have no doubt of it."
• "It is published in Nature! But the sample is rather smaller than usual; I doubt its generalizability."
(Boland, Cherry, and Dickson, 2017)
4. Why do you need QA in your SLR?
• Develop a greater understanding of your studies and others' results.
• Distinguish between good-quality and poor-quality studies.
• Draw more meaningful conclusions from the data.
• Acquire critical appraisal skills.
• Validated SLR checklists often include QA as a criterion when evaluating the SLR's own quality.
(Boland, Cherry, and Dickson, 2017)
5. What is quality?
Quality of individual studies: the degree to which a study employs measures to minimize error and bias in its design, conduct, and analysis (Khan et al., 2003, p.39).
Quality of your SLR: checked against validated SLR checklist tools.
(Boland, Cherry, and Dickson, 2017)
6. “This is a good quality study”.
Translated into:
• “I am confident that this study’s design, conduct and analysis are
robust to provide results that are credible, trustworthy and
generalizable, and are highly likely to be a true representation of the
results of the tested intervention, phenomenon, or exposure.”
(Boland, Cherry, and Dickson, 2017)
7. When to do QA
Before data extraction:
• When you intend to exclude poor-quality studies from your SLR.
During data extraction: NA
After data extraction:
• You will be blind to the quality of individual studies when extracting data, and your report is likely to be biased.
• Your greater familiarity with the pooled studies can help you answer the QA questions.
(Boland, Cherry, and Dickson, 2017)
8. Main elements of QA
Selection bias
• What: Is the sample representative of the population?
• Significance: Generalizability, transferability.
Allocation bias
• What: How do participants get assigned to the treatment group? Does any human influence or interfere with that allocation?
• Significance: The type of design shapes the allocation strategy; the stronger the design, the less bias from allocation to treatment.
Performance bias
• What: Is anyone aware of the treatment, or is everyone blinded (participants, intervention providers, study investigators)? Can such awareness or blinding cause bias in the study?
Detection bias
• What: Are the people who measure the outcome aware of the treatment, or are they blinded?
Attrition bias
• What: The proportion of participants who stopped the treatment, either by dropping out themselves or by being withdrawn by the study.
• Significance: Weakens generalizability, gives insight into compliance rates, and can leave treatment groups unequal as participant numbers change.
Reporting bias
• What: Are all outcomes stated to be measured actually reported, whether or not they favor the authors? Were some results measured post hoc to enhance favorable outcomes?
• Significance: Reasons for failing to report all stated outcomes need to be provided; otherwise the treatment appears more favorable than it really is.
Confounders
• What: Are participants' characteristics similar across all treatments (gender, age, health status, social status, etc.)?
• Significance: Participants should be equally balanced; otherwise there is a risk that results are biased in favor of one group over the other.
Concurrent or subsequent intervention
• What: Did participants receive any other treatment besides the study intervention?
• Significance: Participants should be treated in the same way to reduce the risk of other effects weakening the study intervention's effect.
Analysis
• What: Were the data for all participants, including those who withdrew, included in the final analysis?
• Significance: If missing data for some participants are not considered, published results will not properly reflect the results of the study.
Funding bias
• What: Who funded the study?
• Significance: Funders may want the study to favor positive outcomes over negative ones.
QA tools ask questions about bias. Bias types are many; according to EUnetHTA (2015), assessment of risk of bias covers at least six of the types above.
(Boland, Cherry, and Dickson, 2017)
9. Selection bias
Is the sample representative of the population?
Duckling image from: https://pixabay.com/illustrations/ptak-brzydkie-kacz%C4%85tko-bajki-2468019/
10. Allocation bias
How do participants get assigned to the treatment group?
Treatment Group: Ducks are fed only company A's duck food.
Control Group: Ducks are fed the usual farm food.
11. Performance bias
Is anyone aware of the treatment, or is everyone blinded? (participants, intervention providers, study investigators)
The Hawthorne Effect/Observer Effect
• Participant: "I am eating healthy food. I will run more to get fit!"
• Intervention provider: "I need to give special care to this fat duck because the investigators said it is the most important duck."
• Study investigator: "Need to feed this fat duck from the treatment group better."
Image by Clker-Free-Vector-Images from Pixabay
Image by ChaminaGallery from Pixabay
12. Detection bias
Are the people who measure the outcome aware of the treatment, or are they blinded?
"This fat duck's performance is indeed better than the other ducks'!"
Image by Clker-Free-Vector-Images from Pixabay
13. Attrition bias
The proportion of participants who stopped the treatment, either by dropping out themselves or by being withdrawn by the study.
14. Reporting bias
Are all outcomes stated to be measured actually reported, whether or not they favor the authors? Were some results measured post hoc to enhance favorable outcomes?
Control Group: Ducks are fed the usual farm food.
16. QA steps in SLR
1. Note the design(s) of your included studies.
2. Identify the type(s) of QA tool(s) to suit your review.
3. Choose the appropriate QA tool(s).
4. Carry out QA using the appropriate tool(s).
5. Tabulate and summarize results of your QA.
6. Think about how your QA results might impact on the conclusions
and recommendations of your SLR.
(Boland, Cherry, and Dickson, 2017)
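Step 5 (tabulate and summarize your QA results) can be sketched programmatically. Below is a minimal Python sketch assuming a simple yes/unclear/no checklist; the study names, criteria, and ratings are invented for illustration:

```python
# Hypothetical sketch of tabulating QA results (step 5).
# Studies, criteria, and ratings below are invented examples.
ratings = {"yes": 1, "unclear": 0.5, "no": 0}

# Each included study is rated against the same checklist criteria.
qa_results = {
    "Study A": {"selection": "yes", "attrition": "unclear", "reporting": "yes"},
    "Study B": {"selection": "no", "attrition": "yes", "reporting": "unclear"},
}

def summarize(results):
    """Return each study's share of criteria rated positively."""
    summary = {}
    for study, answers in results.items():
        score = sum(ratings[a] for a in answers.values())
        summary[study] = round(score / len(answers), 2)
    return summary

print(summarize(qa_results))  # {'Study A': 0.83, 'Study B': 0.5}
```

Most reviewers do this in a spreadsheet; a script only pays off when the checklist or the number of included studies is large.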
17. It is the study design that guides your choice of
QA tool, not the review topic area.
(Boland, Cherry, and Dickson, 2017, p.112)
18. Different study designs: Levels of evidence
Level 1 (high strength):
• RCT (randomization: yes; control: yes)
• Meta-analysis of level 1 studies with homogeneous results (no; no)
Level 2:
• Prospective cohort study (comparative, therapeutic) (no; yes)
• Meta-analysis of level 1 or level 2 studies with heterogeneous results (no; no)
Level 3:
• Retrospective cohort study (no; yes)
• Case-control study (no; yes)
• Meta-analysis of level 3 studies (no; no)
Level 4:
• Case series (no; no)
Level 5 (low strength):
• Case report (no; no)
• Expert opinion (no; no)
• Personal observation (no; no)
The strength of the experimental design is therefore largely reliant on 4 factors: Randomization, Control Groups, Sample Size, and Generalizability.
https://www.hydroassoc.org/research-101-levels-of-evidence-in-hydrocephalus-clinical-research-studies/
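The hierarchy above can be expressed as a small lookup table, for example to compare included studies by evidence strength. A minimal Python sketch; the design labels are simplified assumptions, not an official coding scheme:

```python
# Hypothetical sketch of the levels-of-evidence hierarchy
# (1 = strongest, 5 = weakest); labels are simplified.
EVIDENCE_LEVELS = {
    "rct": 1,
    "prospective cohort": 2,
    "retrospective cohort": 3,
    "case-control": 3,
    "case series": 4,
    "case report": 5,
    "expert opinion": 5,
}

def stronger(design_a, design_b):
    """Return the design with the stronger evidence (lower level number)."""
    return min(design_a, design_b, key=EVIDENCE_LEVELS.get)

print(stronger("case series", "rct"))  # rct
```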
20. Non-randomized studies (NRS)
e.g., Each participant is assigned to a treatment group based on the previous participant’s assignment.
https://images.app.goo.gl/chfsWhrDCLU63Mnn8
21. Cohort study (prospective comparative study or retrospective cohort study)
https://images.app.goo.gl/jwsJcGd4zUeste2QA https://images.app.goo.gl/zyZ9R2WqETHFbGXC8
22. Case-control
A group of participants with a particular condition is matched for age and other characteristics with a control group of participants who do not have the condition.
https://images.app.goo.gl/qpDvU3mfhchGXx6a8
23. Case series
A person (or series of people) who has been given a similar treatment is followed for a specific time period.
https://images.app.goo.gl/YKr9n7YZ56FN7zwP6
24. Cross-sectional
Data are collected from a number of people or other sources (e.g., databases) at one point in time.
https://images.app.goo.gl/rVqGvD9yGgPuZ5gDA
26. QA steps in SLR (recap; see slide 16 for the six steps)
(Boland, Cherry, and Dickson, 2017)
27. QA tools
• ROBINS-I tool, previously known as ACROBAT-NRSI (A Cochrane Risk of Bias Assessment Tool)
• the Berger/ISPOR questionnaire
• the Cowley checklist
• the Downs-Black checklist
• EPHPP (Effective Public Health Practice Project Quality Assessment Tool)
• the GRACE checklist (Good ReseArch for Comparative Effectiveness)
• MINORS (Methodological Index for Non-randomised Studies)
• NOS (Newcastle-Ottawa Scale)
• the Reisch-Tyson checklist
• RoBANS (Risk of Bias Assessment Tool for Non-randomised Studies)
• TFCPS (Task Force on Community Preventive Services)
(EUnetHTA, 2015)
28. QA tools: items, year, and applicable study designs
• ROBINS-I tool, previously ACROBAT-NRSI (A Cochrane Risk of Bias Assessment Tool): 22-29 items, 2014; NRS (incl. cohort studies and case-control studies), RCT
• Berger/ISPOR questionnaire: 33 items
• Cowley checklist: 13 items, 1995; NRS, RCT, uncontrolled case series
• Downs-Black checklist: 27 items, 1998; NRS, RCT
• EPHPP (Effective Public Health Practice Project Quality Assessment Tool)
• GRACE checklist (Good ReseArch for Comparative Effectiveness): 11 items
• MINORS (Methodological Index for Non-randomised Studies): 12 items; NRS, RCT
• NOS (Newcastle-Ottawa Scale): 8 items; NRS
• Reisch-Tyson checklist: 57 items, 1989; any study design
• RoBANS (Risk of Bias Assessment Tool for Non-randomised Studies): 8 items, 2013; NRS (not applicable to non-comparative studies)
• TFCPS (Task Force on Community Preventive Services): 26+23 items; NRS, RCT
• Thomas: 21 items, no date; any study design
• Zaza et al.: 22 items, 2000; any study design
• Critical Appraisal Skills Programme (CASP): 2013; NRS, RCT, SLR
• Centre for Reviews and Dissemination guidance (health care studies): 2009; SLR guideline, guides to criteria important in the assessment of studies
• Joanna Briggs Institute (health care studies): 2014; SLR guideline, a set of appraisal tools for a range of study designs
• Social Care Institute for Excellence (SCIE): 2010; SLR guideline, minimum generic criteria for assessing quality of primary research
(EUnetHTA, 2015; Boland, Cherry, & Dickson, 2017)
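Matching tools to study designs (steps 2-3 of the QA process) is essentially a filter over this list. A minimal Python sketch; the coverage sets below are abbreviated from the list above and simplified for illustration:

```python
# Hypothetical sketch of matching QA tools to study designs.
# Tool names come from the slide; coverage is simplified.
TOOLS = {
    "ROBINS-I": {"nrs", "rct"},
    "Downs-Black": {"nrs", "rct"},
    "NOS": {"nrs"},
    "Reisch-Tyson": {"any"},  # stated as applicable to any study design
}

def tools_for(design):
    """List the tools whose stated coverage includes the given design."""
    return sorted(
        tool for tool, designs in TOOLS.items()
        if design in designs or "any" in designs
    )

print(tools_for("rct"))  # ['Downs-Black', 'ROBINS-I', 'Reisch-Tyson']
```

In practice you would read each candidate tool's own documentation before committing; item counts and scope differ considerably between tools.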
30. QA steps in SLR (recap; see slide 16 for the six steps)
(Boland, Cherry, and Dickson, 2017)
33. References
• Boland, A., Cherry, G., & Dickson, R. (Eds.). (2017). Doing a
systematic review: A student's guide. Sage.
• EUnetHTA. (2015). Internal validity of non-randomised studies (NRS) on interventions.
• Peterson, J., Welch, V., Losos, M., & Tugwell, P. (2011). The Newcastle-Ottawa Scale (NOS) for assessing the quality of nonrandomised studies in meta-analyses. Ottawa: Ottawa Hospital Research Institute.