Using generative AI to write your work, or using any AI-generated content without full and proper acknowledgment, is a breach of academic integrity. Generative AI content is often unreliable: these tools frequently produce content that sounds credible but is inaccurate, false, simplistic, out of date, or biased. GenAI will often not identify or link to the original sources. Being able to evaluate the credibility of your sources is an essential aspect of academic writing.
When was this source published?
How old are the references and data used?
Has this source, or its data, been updated?
Does this type of information get updated?
Is there likely to be more recent information available elsewhere in scholarly sources, e.g. a journal article or book?
The information that generative AI tools are trained on may not be current. They are not retrained continuously, so their knowledge may stop at a certain cut-off date, meaning they lack information about recent events or sources. Even if a tool has been updated, it may not be clear when that happened.
Is this information relevant to your assignment? Is there likely to be better information?
Is this aimed at the correct audience?
Content from generative AI tools can be generic in nature and may not be at a suitable academic level for university. The response you receive also depends on the prompts you enter, and writing quality prompts usually requires an understanding of how the tool works, knowledge of the subject matter, and critical thinking.
Who wrote it? What are their qualifications?
Where do they work? Who do they work for?
Are they likely to have a good understanding of this field?
Typically, generative AI tools will not disclose where they got their information from. This means we do not know who created the information used, so we cannot assess their qualifications, experience, or likelihood of being an expert in their field. Even when asked to include references and citations, these are frequently inaccurate or entirely fabricated. Additionally, there are concerns around the copyright of AI-generated content: the tools are trained on content created by people, but those people are not credited or acknowledged by the AI tool.
Is the information reliable?
Can you find the original source?
What is the quality of the presentation? Are there significant errors?
Do the conclusions match the data?
Have all sides been considered?
AI-generated content has been shown to be frequently inaccurate, biased, or completely incorrect. Any claim made in an AI response needs to be independently checked for accuracy.
Why has the article been written?
Is there any obvious bias? Is the author or their employer likely to benefit from the recommendations?
Is it recommending a particular course of action or therapy? Does the data support this? Are any alternatives considered?
AI tools are only as accurate as the information they are trained on, and the algorithms that create the responses may have in-built biases. There have been a number of examples of AI-generated text containing significant bias. Additionally, these are commercial tools and may therefore be influenced by the desire to make a profit. A useful way to think about AI output is to ask: what might a plausible-sounding response to this prompt look like?
The following article describes the problems with 'references' generated by ChatGPT and the unreliability of the model for conducting research.
de Grijs, R. (2023, April 27). Guest post: Artificial intelligence not yet intelligent enough to be a trusted research aid. The Scholarly Kitchen. https://scholarlykitchen.sspnet.org/2023/04/27/guest-post-artificial-intelligence-not-yet-intelligent-enough-to-be-a-trusted-research-aid/