Augusta University

Psychological Sciences

A one-stop-shop for finding journal articles, reference material, and more for psychology topics.

Tips for evaluating research articles for use

How do you find the time to determine whether a 20-page research article is useful for your topic? By reading strategically!

A scholarly research article typically contains an Abstract, Introduction, Literature Review, Methodology, Results, Discussion/Conclusion, and Reference List; some articles have additional components. You don't need to read the entire article to determine whether it could be helpful to you. You can strategically read certain parts.

 

Determine if a research article is potentially useful for your topic:
  1. Read the abstract for a general summary of the journal article. Do the abstract and introduction seem relevant to your research question?
    If YES, move on to step 2.
    If NO, discard.
  2. Jump to the discussion section (this might have a conclusion sub-section) at the end of the article to get a summary of the article's findings. Is the content relevant?
    If YES, move on to step 3.
    If NO, discard. 
  3. Hop back up to the introduction. Skim the beginning of the introduction to see how the author plans to approach the subject. At this point, you should have a good idea of what the article is about and whether it is useful for your topic.

 

If you have determined the article to be potentially useful:
  • Critically read the article from beginning to end, spending as much time as necessary to do so:
    • What is the author's position? Does their argument seem valid?
    • Does the author seem neutral or too biased? 
  • Take notes on the source, keeping your research question in mind so your notes are relevant.

Adapted from Meriam Library, California State University, Chico (2018) and Western University (2012)

Evaluate sources with the CRAAP Test

[Infographic: "Take the CRAAP Test" (picture via ThePinsta.com)]
The CRAAP Test

Librarians in the US and around the world recommend the CRAAP Test: it's applicable to all source types, and it's easy to remember!

Zoom into the "Take the CRAAP Test" infographic on the right and save a copy for yourself.

Alternatively, download a PDF at the end of this box.

C stands for
Currency

The timeliness of the information

  • When was the information published or posted?
  • Has the information been revised or updated?
  • Does your topic require current information, or will older sources work as well?
  • Are the links functional?
R stands for
Relevance

The importance of the information for your needs

  • Does the information relate to your topic or answer your question?
  • Who is the intended audience?
  • Is the information at an appropriate level (i.e. not too elementary or advanced for your needs)?
  • Have you looked at a variety of sources before determining this is one you will use?
  • Would you be comfortable citing this source in your research paper?
A stands for
Authority

The source of the information

  • Who is the author/publisher/source/sponsor?
  • What are the author's credentials or organizational affiliations?
  • Is the author qualified to write on the topic?
  • Is there contact information, such as a publisher or email address?
  • Does the URL reveal anything about the author or source (e.g., .com, .edu, .gov, .org, .net)?
A stands for
Accuracy

The reliability, truthfulness, and correctness of the content

  • Where does the information come from?
  • Is the information supported by evidence?
  • Has the information been reviewed or refereed?
  • Can you verify any of the information in another source or from personal knowledge?
  • Does the language or tone seem unbiased and free of emotion?
  • Are there spelling, grammar or typographical errors?
P stands for
Purpose

The reason the information exists

  • What is the purpose of the information? Is it to inform, teach, sell, entertain or persuade?
  • Is the information fact, opinion, or propaganda?
  • Does the point of view appear objective and impartial?
  • Are there political, ideological, cultural, religious, institutional or personal biases?

"CRAAP Test" infographic and wording reproduced from UC San Diego Library, who adapted it from CSU Chicco

Questions to Ask When Critiquing a Research Article

The following is a list of useful questions to ask when you're critiquing an article. It is not exhaustive, nor is it necessary to answer every question, but it provides examples of the types of questions you should ask when looking at how well a study was conducted and reported.

 

Introduction

Problem

1.     Is there a statement of the problem?

2.     Is the problem “researchable”? That is, can it be investigated through the collection and analysis of data?

3.     Is background information on the problem presented?

4.     Is the significance of the problem discussed?

5.     Does the problem statement indicate the variables of interest and the specific relationship between those variables that is investigated? When necessary, are the variables directly or operationally defined?

Review of Related Literature

1.     Is the review comprehensive?

2.     Are all cited references relevant to the problem under investigation?

3.     Are most of the sources primary, i.e., are there only a few or no secondary sources?

4.     Have the references been critically analyzed and the results of various studies compared and contrasted, i.e., is the review more than a series of abstracts or annotations?

5.     Does the review conclude with a brief summary of the literature and its implications for the problem investigated?

6.     Do the implications discussed form an empirical or theoretical rationale for the hypotheses which follow?

Hypotheses

1.     Are specific questions to be answered listed or specific hypotheses to be tested stated?

2.     Does each hypothesis state an expected relationship or difference?

3.     If necessary, are variables directly or operationally defined?

4.     Is each hypothesis testable?

Method

Subjects

1.     Are the size and major characteristics of the population studied described?

2.     If a sample was selected, is the method of selecting the sample clearly described?

3.     Is the method of sample selection described one that is likely to result in a representative, unbiased sample?

4.     Did the researcher avoid the use of volunteers?

5.     Are the size and major characteristics of the sample described?

6.     Does the sample size meet the suggested guideline for minimum sample size appropriate for the method of research represented?    

Instruments

1.     Is the rationale given for the selection of the instruments (or measurements) used?

2.     Is each instrument described in terms of purpose and content?

3.     Are the instruments appropriate for measuring the intended variables?

4.     Is evidence presented that indicates that each instrument is appropriate for the sample under study?

5.     Is instrument validity discussed and coefficients given if appropriate?

6.     Is reliability discussed in terms of type and size of reliability coefficients?

7.     If appropriate, are subtest reliabilities given?

8.     If an instrument was developed specifically for the study, are the procedures involved in its development and validation described?

9.     If an instrument was developed specifically for the study, are administration, scoring or tabulating, and interpretation procedures fully described?

Design and Procedure

1.     Is the design appropriate for answering the questions or testing the hypotheses of the study?

2.     Are the procedures described in sufficient detail to permit them to be replicated by another researcher?

3.     If a pilot study was conducted, are its execution and results described as well as its impact on the subsequent study?

4.     Are the control procedures described?

5.     Did the researcher discuss or account for any potentially confounding variables that he or she was unable to control for?

Results

1.     Are appropriate descriptive or inferential statistics presented?

2.     Was the probability level, α, at which the results of the tests of significance were evaluated, specified in advance of the data analyses?

3.     If parametric tests were used, is there evidence that the researcher avoided violating the required assumptions for parametric tests?

4.     Are the tests of significance described appropriate, given the hypotheses and design of the study?

5.     Was every hypothesis tested?

6.     Are the tests of significance interpreted using the appropriate degrees of freedom?

7.     Are the results clearly presented?

8.     Are the tables and figures (if any) well organized and easy to understand?

9.     Are the data in each table and figure described in the text?

Discussion (Conclusions and Recommendations)

1.     Is each result discussed in terms of the original hypothesis to which it relates?

2.     Is each result discussed in terms of its agreement or disagreement with previous results obtained by other researchers in other studies?

3.     Are generalizations consistent with the results?

4.     Are the possible effects of uncontrolled variables on the results discussed?

5.     Are theoretical and practical implications of the findings discussed?

6.     Are recommendations for future action made?

7.     Are the suggestions for future action based on practical significance or on statistical significance only, i.e., has the author avoided confusing practical and statistical significance?

8.     Are recommendations for future research made?
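Several of the questions above (a significance level specified in advance, appropriate degrees of freedom, and practical versus statistical significance) can be made concrete with a small worked example. The sketch below uses only Python's standard library; the scores, the α level, and the critical t value are made up purely for illustration, not drawn from any real study:

```python
import math
from statistics import mean, variance

def pooled_sd(a, b):
    """Pooled standard deviation of two independent samples."""
    na, nb = len(a), len(b)
    return math.sqrt(((na - 1) * variance(a) + (nb - 1) * variance(b))
                     / (na + nb - 2))

def t_statistic(a, b):
    """Two-sample t statistic (equal-variance form), df = na + nb - 2."""
    sp = pooled_sd(a, b)
    return (mean(a) - mean(b)) / (sp * math.sqrt(1 / len(a) + 1 / len(b)))

def cohens_d(a, b):
    """Standardized mean difference: an effect size, independent of sample size."""
    return (mean(a) - mean(b)) / pooled_sd(a, b)

# Hypothetical scores for two groups of 5 (invented for illustration)
treatment = [5, 6, 7, 8, 9]
control = [1, 2, 3, 4, 5]

ALPHA = 0.05        # significance level fixed BEFORE looking at the data
T_CRITICAL = 2.306  # two-tailed critical t for alpha = .05, df = 8 (from a t table)

t = t_statistic(treatment, control)   # ≈ 4.0 on df = 5 + 5 - 2 = 8
d = cohens_d(treatment, control)      # ≈ 2.53, a very large effect
statistically_significant = abs(t) > T_CRITICAL
```

Note that statistical and practical significance can come apart: with a large enough sample, a trivially small effect (tiny d) can still clear the critical t value, which is why question 7 asks whether the author kept the two ideas separate.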

Adapted from Critique of a Research Article Activity. Retrieved from http://web.csulb.edu/~arezaei/EDP520/critique.htm.

Librarian

Erin Prentiss
she/her/hers
Contact:
706-667-4901