
Generative artificial intelligence: Use at University

Ethical considerations

GenAI output can be useful and creative, but you still need to evaluate the tool's output, as well as the ethical context of the tool and its data.

The datasets behind these tools are often scraped from the open web and may contain little academic content, since much of it sits behind paywalls in academic databases.

The ownership of a tool, and who developed it, can affect the filters, algorithms, and training data behind it, and therefore its output. Some tools are designed to meet customer expectations rather than to provide accurate, credible, or scholarly information, and their output and data may reflect the ideological stance of their owner.

Check FedCite or the pages in this guide for advice on how to cite or acknowledge your use of these tools.

Remember that submitting work created by another person or by AI (ChatGPT or other generative AI tools) as your own, without authorisation and acknowledgement, is academic misconduct and may result in disciplinary action. If you need extra support, contact the Library or the ASK desk on any campus, Studiosity, Turnitin (text matching), or the Writing Centre. See also: Declaration of indirect GenAI use and Referencing direct GenAI use.
There are numerous copyright considerations when using a GenAI tool, ranging from the tool being trained on content used without permission to the copyright ownership of the output. Don't upload copyright material to public GenAI tools, for example to summarise PDFs, images, audio, or visual recordings, including sources created by your teacher for Moodle. Do so only if the terms and conditions of the content allow it, and then only with locked-down private GenAI tools (e.g., the Federation University version of MS Copilot). See also: the Copyright tab in this guide.
Can you locate information about the data a tool is learning from: the datasets, their limitations, and the year range of the data?
Tools trained on open web data will reflect the content of the open web, including biases (e.g., gender or racial), misinformation, and fact.
Don't upload any personal information, as your content may be used to train the AI tool. The output of a tool can change even when you use the same prompt: one user may get a different response than another, and responses can change over time.
Tools that are free today may incur a licence fee in the future.

Green Artificial Intelligence “… incorporates sustainable practices and techniques in model design, training, and deployment that aim to reduce the associated environmental cost and carbon footprint” (Bolón-Canedo et al., 2024, p. 1). Green AI therefore needs to balance growth in capability and accuracy against a reduced carbon footprint, by using less energy and fewer natural resources such as water (Bolón-Canedo et al., 2024; Stanford University Human-Centered Artificial Intelligence, 2024).

For more detail read the following research articles:

Alzoubi, Y. I., & Mishra, A. (2024). Green artificial intelligence initiatives: Potentials and challenges. Journal of Cleaner Production, 468, Article 143090. https://doi.org/10.1016/j.jclepro.2024.143090

Bolón-Canedo, V., Morán-Fernández, L., Cancela, B., & Alonso-Betanzos, A. (2024). A review of green artificial intelligence: Towards a more sustainable future. Neurocomputing, 599, Article 128096. https://doi.org/10.1016/j.neucom.2024.128096

Lannelongue, L., Grealey, J., & Inouye, M. (2021). Green algorithms: Quantifying the carbon footprint of computation. Advanced Science, 8(12), Article 2100707. https://doi.org/10.1002/advs.202100707

Stanford University Human-Centered Artificial Intelligence. (2024). Artificial intelligence index report 2024. https://aiindex.stanford.edu/wp-content/uploads/2024/05/HAI_AI-Index-Report-2024.pdf

Tabbakh, A., Al Amin, L., Islam, M., Mahmud, G. M. I., Chowdhury, I. K., & Mukta, M. S. H. (2024). Towards sustainable AI: A comprehensive framework for green AI. Discover Sustainability, 5(1), 408-414. https://doi.org/10.1007/s43621-024-00641-4

Verdecchia, R., Sallou, J., & Cruz, L. (2023). A systematic review of green AI. WIREs Data Mining and Knowledge Discovery, 13(4), Article e1507. https://doi.org/10.1002/widm.1507

Artificial Intelligence and Academic Integrity video with a quiz

Sweetman, R. (2023). Some harm considerations of large language models (LLMs) [H5P].  eCampusOntario H5P Studio. https://h5pstudio.ecampusontario.ca/content/51741


Recommended books