


 

                            The Process of EBL: Critical Appraisal of the Evidence



 


 

Tutorial goals applicable to this lesson: 

  • Evaluate the applicability of evidence-based practice for your setting.

 

Objectives met by this lesson:  

  • Critically appraise resource documents and data for their validity and applicability to your setting.
  • Discuss barriers to implementation of evidence-based practice for your setting.

Question 7 text     

Total time required: 30 minutes for prerequisite reading, 20 minutes for reading through the module, and 90 minutes for the hands-on component.

 

Reading assignments prior to module:  Booth & Brice Ch. 9: Appraising the Evidence

 

Greenhalgh T, Taylor R. How to read a paper: Assessing the methodological quality of published papers (BMJ series). http://bmj.bmjjournals.com/cgi/content/full/315/7103/305

NOTE: This series, available entirely online (see the EBM section of the tutorial’s Bibliography page for the series URL), may help you to consider and evaluate how the LIS question sets were created. You will be asked later to reflect on how well the questions provided here work. In addition to evaluating the usefulness of the question sets, you will be asked to reflect on how such questions might help to evaluate the sources we use frequently, such as surveys on listservs and ‘how we done it good’ articles that are not scholarly.

 


 

Take a minute to write about a recent experience you have had (or are aware of) where you or others used published LIS literature to support decision-making.  How were you made aware of the information? Was the information directly applicable to your own situation?  (short essay question box here)

 

This module is about finding evidence and then evaluating what you find, in order to avoid ‘reinventing the wheel’ with library projects. Why not use what you can of others’ experiences? We do that all the time anyway, sharing experiences and lessons learned at conferences, on listservs, and (less often) more formally, through the published literature.

 

By the time you finish the module, you’ll be able to critically appraise published LIS research for its validity and applicability. Following your hands-on practice of these skills, you will reflect on your experiences and assess how well the proposed evaluative questions function with LIS literature. You’ll also have an opportunity to assess their use with information available through other, frequently used channels, such as LIS listservs.

 


Remember the 5-step process of EBL as it has been described:

1. Formulate a clearly-stated, answerable question 
2. Track down the best evidence
3. Critically appraise the evidence
4. Apply to clinical practice
5. Evaluate performance

Recall that in the hands-on component of the previous module, you constructed a search grid, proposed search terms and phrases, and ran a quick search in CSA LISA to see what your search strategy would retrieve. You also evaluated the performance of the resource (LISA), reflecting on the experience, how the ‘well built question’ functioned to assist in the process, and how you might improve on your retrieval.

In this module, we’ll build directly on the previous hands-on experience, adding in the next step: critical evaluation of what you find.

Unlike the field of medicine, the literature with potential application to LIS practices is not overwhelmingly vast. As discussed in the previous module, it is more likely to be scattered among the resources of disparate disciplines than to be concentrated in a few, highly-regarded repositories. No matter where it's located, its application to practice must be preceded by a critical appraisal process – or we risk accepting biased information, potentially wasting time and resources.

The usefulness of any evidence is directly related to its validity, reliability, and applicability. Validity concerns a study's freedom from bias. Reliability concerns the 'trustworthiness' of a study's results - in other words, whether the study would obtain the same (or similar) results if the same conditions were replicated (reproducibility). Questions of applicability ask whether the intervention being tested has made a measurable difference for the chosen population and in the particular setting; often, this difference is expressed statistically.

An important thing to note, especially if little evidence can be found, is that even a poorly designed or less than rigorously conducted research study may still provide useful information. Being aware of a study's weaknesses (and most studies have them!) is simply part of the assessment process. Of course, one major 'weakness', if it can be called that, is that research findings from another location can never perfectly match your own setting; they must therefore be appraised with an important question in mind: how can findings from another setting be applied to your own?

Checklists for critical appraisal have been created in every field doing evidence-based practice, and are a key component of the model. The existence of such carefully-created criteria for evaluation assists the reader in a number of ways, including helping to remember important criteria, reducing uncertainty, and providing ways to measure or evaluate a body of data using uniform standards. In fact, previously established methods for appraisal have led to the creation of guidelines and standards, as is the case in EBM, where uniform critical appraisals of sets of research findings (for example, systematic reviews or even meta-analyses) have helped to establish "gold standards" for diagnosis and treatment. Perhaps you can understand, reading this, why it would be difficult to overstate the importance of critical appraisal!

This tutorial does not address methods of evaluating statistical data, though future changes may include it.

The checklist included here is from Evidence-based Practice for Information Professionals: a handbook, chapter 9 (Booth & Brice, 2004). It is most suitable for evaluating a user study.

 

 

Twelve questions to help you make sense of a user study

 

A. Is the study a close representation of the truth?

1. Does the study address a closely focused issue?
2. Does the study position itself in the context of other studies?
3. Is there a direct comparison that provides an additional frame of reference?
4. Were those involved in collection of data also involved in delivering a service to the user group?
5. Were the methods used in selecting the users appropriate and clearly described?
6. Was the planned sample of users representative of all users (actual and eligible) who might be included in the study?

 

B. Are the results credible and repeatable?

7. What was the response rate and how representative was it of the population under study?
8. Are the results complete and have they been analyzed in an easily interpretable way?
9. Are any limitations in the methodology (that might have influenced results) identified and discussed?

 

C. Will the results help you in your own practice?

10. Can the results be applied to your local population?
11. What are the implications of the study for your practice?

 - in terms of current deployment of services?
 - in terms of cost?
 - in terms of the expectations or needs of your users?

12. What additional information do you need to obtain locally to assist you in responding to the findings of this study?

 

Other questions to ask:

13. Does the research design appear to fit the topic of the study?
14. What, if any, potential bias could be present?
15. Does the author discuss and account for weaknesses of the study?
16. Are methods discussed in a transparent fashion, so that the study can be evaluated on that basis?

 

A second checklist is intended for use in evaluating a needs analysis (from Booth & Brice, 2004).

Twelve questions to help you make sense of a needs analysis.

 

A. Is the study a close representation of the truth?

1. Does the study address a closely focused issue?
2. Does the study position itself in the context of other studies?
3. Is there a direct comparison that provides an additional frame of reference?
4. Were those involved in collection of data also involved in delivering a service to the user group?
5. Were the methods used in selecting the users appropriate and clearly described?
6. Was the planned sample of users representative of all users (actual and eligible) who might be included in the study?

 

B. Are the results credible and repeatable?

7. What was the response rate and how representative was it of the population under study?
8. Are the results complete and have they been analyzed in an easily interpretable way?
9. What attempts have been made to ensure reliability of responses?

 

C. Will the results help you in your own practice?

10. Can the results be applied to your local population?
11. What are the implications of the study for your practice?

 - in terms of current deployment of services?
 - in terms of cost?
 - in terms of the expectations or needs of your users?

12. What additional information do you need to obtain locally to assist you in responding to the findings of this study?

 

A third set, adapted from Trisha Greenhalgh’s paper on assessing clinical systematic reviews, is still under development for LIS studies:

 

Thirteen questions to help you make sense of a systematic review.

 

A. Is the study a close representation of the truth?

1. Can you find an important question which the systematic review addressed?

2. Does the review address a closely focused issue?

3. Was a thorough search done of the appropriate databases, and were other potentially important sources explored?

4. Were the methods used in selecting the studies appropriate and clearly described?

5. Was the planned sample of studies representative of all studies (actual and eligible) that might be included in the review?

 

B. Are the results credible and repeatable?

6. Was methodological quality assessed and the trials weighted accordingly?

7. How sensitive are the results to the way the review has been done?

8. Have the numerical results been interpreted with common sense and due regard to the broader aspects of the problem?

9. Are the results complete and have they been analyzed in an easily interpretable way?

10. Are weaknesses such as biased sampling methods discussed in assessing the studies included in the review?

 

C. Will the results help you in your own practice?

11. Can the results be applied to your local population?
12. What are the implications of the study for your practice?

 - in terms of current deployment of services?
 - in terms of cost?
 - in terms of the expectations or needs of your users?

13. What information do you need to obtain locally to assist you in responding to the findings of this study?

 

 

 
 

Hands-on practice

 

This hands-on experience is intended to build directly on the hands-on exercise you did for the last module, where you built a search grid, proposed terms and phrases, and conducted several short searches in order to evaluate that part of the process.  Now, you are asked to select two of the articles you found during your search, and – using one or more of the evaluative question sets you just saw in the main part of this module – critically evaluate them.  Finally, you will once again reflect on your experiences, sharing those thoughts with others through the class bulletin board.

 

Instructions for this assignment

 

Your responses should be brief but comprehensive, covering each of the questions asked by the question set you decide to use.  Use a word processing program to complete this assignment.  When you’re done, post your document to the bulletin board (link).

 

What should be included in your response:

 

  • Looking back at module 3, determine what type of research study would be most appropriate to the well-built question. Explain your choice.
  • Provide the citation and abstract for your chosen article.
  • List the evaluative question set you consider appropriate.
  • Using a critical appraisal checklist, evaluate the research study.  It will help with this process if you copy and paste your chosen checklist into your document.
  • What is your conclusion about the validity, reliability, and applicability of this research article in answering your question? Please support your answer with direct references to the text of the article when possible and appropriate.
  • How well do you think the proposed evaluative questions function with LIS literature?
  • Lastly, apply what you have practiced in evaluating published LIS literature by considering one of the following documents, each the result of a survey conducted on the MedLib-L listserv:

 

1. http://listserv.buffalo.edu/cgi-bin/wa?A2=ind0103B&L=MEDLIB-L&P=R5865&I=-3

 

2. http://listserv.buffalo.edu/cgi-bin/wa?A2=ind0506A&L=MEDLIB-L&P=R3408&I=-3

 

3. http://listserv.buffalo.edu/cgi-bin/wa?A2=ind0201B&L=MEDLIB-L&P=R607&I=-3

 

  • Summarize your thoughts about the process of evaluating the information in these survey response documents. Be sure to include the URL of your chosen document with your summary. Answer the following questions:

 - Could you answer all the questions from any of the evaluative question lists?
 - What is your evaluation of the evidence provided?
 - How would you use the evidence provided?

 

 

 

 

 

 

Your work will be assessed based on its content, organization, and clarity.  The form I’ll be using for that assessment is provided below.

 

Possible points: 100  

A      86-100 points

B       70-85 points

C      58-69 points

D      50-57 points

 

Note: This tutorial is intended to encourage thought and discussion, so there is no ‘grade’ assigned except to allow you to self-assess your learning.  If you are not satisfied with your performance, you may wish to review the concepts we’ve covered in the pertinent areas. 

 

Assessment Criteria

Each criterion below is worth 10 points, awarded at one of three levels: 10, 5, or 1.

Content

1. Extent to which the directions for the assignment are followed (e.g., pasting in citations)

 - 10 points: All directions are followed completely.
 - 5 points: Most directions are followed (no more than 2 missing).
 - 1 point: Few of the directions are followed (more than 2 missing).

2. Completeness of responses to the evaluative questions, and extent to which responses are supported by reference to the text as appropriate

 - 10 points: For the question sets, every question is considered. When this is not possible, an explanation is provided. Responses are well supported by references to the text as appropriate.
 - 5 points: For the question sets, some questions (1-2) are not answered, without explanation. Responses are sometimes supported by direct references to the text.
 - 1 point: For the question sets, some questions (more than 2) are not answered, without explanation. Responses are not supported by direct references to the text.

3. Evidence of well thought-out evaluations and responses to the questions

 - 10 points: Answers to the evaluative questions are accurate in reflecting the quality of evidence provided by the materials being evaluated.
 - 5 points: Answers to the evaluative questions are usually accurate in reflecting the quality of evidence provided by the materials being evaluated.
 - 1 point: Answers to the evaluative questions are often inaccurate in reflecting the quality of evidence provided by the materials being evaluated.

4. Extent to which reflective statements incorporate consideration of the key issues discussed thus far in the tutorial and readings

 - 10 points: Answers clearly reflect consideration of the key issues discussed thus far in the tutorial and readings.
 - 5 points: While answers reflect some consideration of key issues discussed thus far in the tutorial and readings, they are incomplete.
 - 1 point: Answers show little or no consideration of key issues discussed thus far in the tutorial and readings.

5. Assessment of the chosen article’s validity, reliability, and applicability

 - 10 points: Summary is complete and well-considered, incorporating all of the findings of the critical evaluation process.
 - 5 points: Summary is incomplete but well-considered, incorporating most of the findings of the critical evaluation process.
 - 1 point: Summary is both incomplete and ill-considered, failing to incorporate many elements of the critical evaluation process.

6. Assessment of non-published evidence

 - 10 points: Summary is complete and well-considered, answering all questions thoughtfully.
 - 5 points: Summary is incomplete but well-considered, or does not attempt to answer all the questions.
 - 1 point: Summary is both incomplete and ill-considered, or does not attempt to answer most of the questions.

7. Choice of type of research study for the question

 - 10 points: Choice is logical and well-supported.
 - 5 points: Choice is logical but not well-supported.
 - 1 point: Choice is neither logical nor well-supported.

8. Suitability of evaluative question set for the need

 - 10 points: Choice is logical and well-supported.
 - 5 points: Choice is logical but not well-supported.
 - 1 point: Choice is neither logical nor well-supported.

Organization and clarity

9. Extent to which the document organizes responses

 - 10 points: The document is well-organized, systematically addressing questions posed by the assignment.
 - 5 points: The document is mostly well-organized, but could be improved upon.
 - 1 point: The document is not well-organized, failing to systematically address questions posed by the assignment.

10. Clarity with which responses are written, so that they are easily comprehensible by the reader

 - 10 points: The writing is clear and ideas are easily comprehensible.
 - 5 points: The writing is sometimes unclear, making it harder to understand the points being made.
 - 1 point: The writing is unclear and confusing, leaving the reader in doubt about the writer’s conclusions.

 

 

 

 

 

 

 04/16/2006
