Using generative AI (GenAI) to produce your work, or submitting AI-generated content without your lecturer's permission and full, proper acknowledgment, is a breach of academic integrity. GenAI tools often produce output that sounds credible but is inaccurate, false, simplistic, out of date or biased. GenAI tools often do not identify or link to the original sources of information, so being able to evaluate the credibility of GenAI output is an essential part of academic writing.
Generative AI tools are not trained every day. How old is the training data used for this tool?
Has the training data been updated recently? If not, how does this affect the value of the output?
Does this type of information get updated? Does this matter for your topic?
Is there likely to be more recent information available elsewhere in scholarly sources, e.g. a journal article or book?
The response you receive also depends on the prompts you enter and your own understanding of the topic.
Is the output relevant to your assignment, or does it only give basic definitions of concepts to help you understand the topic? Is better information likely to be available elsewhere?
GenAI output is often very general in nature, lacking in-depth detail. How detailed is the output?
Is the output aimed at the correct audience, e.g. an academic audience rather than beginners or the general public?
Typically, generative AI tools will not disclose the sources of the information in the output.
Who wrote it? What are their qualifications? Where do they work?
Are they likely to have a good understanding of this field?
Is the information in the original source peer-reviewed?
There are also copyright concerns: the original authors of material used in training are not acknowledged by the GenAI tool.
GenAI output is frequently inaccurate, biased or completely incorrect.
Is the information reliable? Are there any obvious errors or missing facts?
Do the conclusions match the data?
Is there any bias? Have all sides been considered?
Can you find the original source? Can you verify the information given by the GenAI tool?
References given to support the output are frequently inaccurate or completely fabricated, so verify their accuracy via the library or Google Scholar.
Would it be better to consult more scholarly sources, e.g. peer-reviewed journal articles or books?
AI tools are only as accurate as their training data, which may contain biases that influence the generated output.
The following article describes the problems with 'references' generated by ChatGPT and the unreliability of the model for conducting research.
de Grijs, R. (2023, April 27). Guest post: Artificial intelligence not yet intelligent enough to be a trusted research aid. The Scholarly Kitchen. https://scholarlykitchen.sspnet.org/2023/04/27/guest-post-artificial-intelligence-not-yet-intelligent-enough-to-be-a-trusted-research-aid/?informz=1&nbd=&nbd_source=informz
Federation University Australia acknowledges the Traditional Custodians of the lands and waters where our campuses, centres and field stations are located and we pay our respects to Elders past and present, and extend our respect to all Aboriginal and Torres Strait Islander and First Nations Peoples.