Generative Artificial Intelligence Policy

Section 1 - Preamble

(1) Charles Darwin University (‘the University’, ‘CDU’) is innovative and open to new technologies. Generative artificial intelligence (gen AI) is an emerging technology with social, environmental, and academic implications. The University is committed to educating employees and students about gen AI and instilling a culture of responsible and informed gen AI use.

Section 2 - Purpose

(2) This policy sets out principles for the use of generative artificial intelligence (gen AI) at the University.

Section 3 - Scope

(3) This policy applies to all employees and students of the University.

Section 4 - Policy

(4) The University recognises the potential offered by generative artificial intelligence (gen AI) for innovation and enhancement, as well as its disruptive challenges. The University will explore the use of gen AI in education, research, and our ways of working while maintaining excellence and integrity in education, research, and operations.

(5) The University will encourage exploration, innovation, and continuous learning in gen AI and ensure that our students, employees, and graduates understand and are competent in the ethical and effective use of gen AI.

(6) The University will set clear expectations and provide guidance, resources, and training to employees and students around the use of gen AI. Guidance will be targeted to the expectations and requirements of disciplines and functions.

(7) The University will seek to mitigate potential risks arising from inequitable access to gen AI.

(8) Where gen AI systems are used in the University, we will:

  1. ensure the systems are transparent and decision-making by gen AI is understandable and explainable;
  2. inform stakeholders how, when, and where gen AI systems operate and are used;
  3. implement robust security measures to safeguard and monitor gen AI systems and ensure their use is consistent with our commitments to cybersecurity, privacy, and safety; and
  4. be aware of, and seek to rectify, algorithmic bias which may result in unintended outcomes with serious, incorrect, and unjustified effects on groups and individuals.

Platforms and usage

(9) Gen AI platforms vary according to their functionalities, accessibility, data protection, and transparency.

(10) The University will assess new gen AI platforms as they are released and may provide guidance on them or restrict access to them from the University network.

  1. The University does not generally maintain a static list of “approved”, “preferred”, or “endorsed” gen AI platforms.
  2. Guidance may be issued from time to time based on risk, suitability, or specific use cases, and access may be restricted where material concerns are identified.

(11) Approval by the Design Authority is required prior to the adoption of new gen AI platforms, including new products from gen AI platforms the University already uses.

  1. This requirement applies where platforms are to be institutionally implemented, integrated with University systems, or used to process University data.
  2. Decisions may be made to restrict or block platforms that present unacceptable risks relating to security, privacy, data sovereignty, legal compliance, or ethical use, or that may breach the Information Security and Access Policy.

Integrity

(12) Gen AI platforms may be trained on incorrect, out-of-date, and biased data.

(13) Gen AI platforms may ‘hallucinate’, referring to non-existent sources and facts that the platform has invented.

(14) The University will ensure employees and students understand the risks and limitations of using gen AI platforms, including that their outputs may contain hallucinated, incorrect, out-of-date, or biased information.

(15) Gen AI platforms are designed to present their responses in an authoritative and confident tone, regardless of accuracy. Employees and students should verify any information provided by gen AI platforms to avoid using or submitting false information.

(16) Content produced by gen AI platforms may infringe copyright or create ambiguity around ownership, especially in research or collaborative outputs. Employees and students must comply with the Copyright Policy and Intellectual Property Policy, especially when developing learning, teaching, or research materials.

(17) Student use of gen AI platforms for their studies will be guided by academic teaching employees and higher degree by research (HDR) supervisors. Teachers and supervisors will provide advice and information about gen AI use in classes, communication with students, and unit information, as appropriate. Topics that such guidance may cover include:

  1. the learning activities and assessments for which students may use gen AI platforms;
  2. the extent to which gen AI platforms may be used;
  3. how students should use gen AI platforms in the unit;
  4. when and how gen AI use must be referenced in submitted work; and
  5. the risks, limitations, and potential pitfalls of gen AI.

(18) Students and employees must be transparent about and disclose gen AI use and should refer to the Academic Integrity Policy for more guidance on academic integrity.

(19) When using gen AI, researchers must ensure they observe expected standards of integrity in the conduct of research as specified in the Responsible Conduct of Research Policy and associated Research Data Management Procedure and Authorship and Dissemination of Research Procedure.

Data, privacy, and security

(20) Many gen AI platforms incorporate data from user prompts into their models and may reproduce it in responses to other users.

(21) Employees and students must ensure they do not put confidential or sensitive information or intellectual property into gen AI platforms. Employees must comply with the Information Security and Access Policy and the Privacy and Confidentiality Policy, which aligns with the Australian Privacy Principles (APPs) of the Privacy Act 1988. Examples of confidential and sensitive information include, but are not limited to:

  1. names, addresses, telephone numbers, email addresses;
  2. date of birth, student or employee identification numbers;
  3. photographs, video, or audio recordings where a person can be identified;
  4. employment, educational, or health information;
  5. location data, online identifiers, or opinions linked to an identifiable person;
  6. assessment and examination questions and answers; and
  7. confidential and proprietary information of the University.

(22) Researchers must ensure protection of unpublished or sensitive work by not uploading it into gen AI platforms without assurance from the vendor that the data will not be re-used for training future gen AI models.

Social and environmental justice

(23) The University acknowledges that many gen AI platforms require a subscription or other payment.

(24) The University will strive to prioritise equitable access to gen AI platforms. If students will be expected to pay for access to a gen AI model as part of a course or unit, this will be specified in the relevant information before enrolment.

(25) The University acknowledges that all technologies have the potential to bring about negative environmental effects. Gen AI platforms rely on extensive data centres, consume substantial electricity and water, and are responsible for significant greenhouse gas emissions and the use of other natural resources.

(26) Employees and students of the University will use gen AI platforms judiciously in order to minimise the adverse environmental impacts of such use.

Teaching and learning

(27) Gen AI challenges the reliability of traditional assessments due to its ability to rapidly produce content that appears to be of high quality.

(28) The University will be agile in its approach to assessment and is committed to reviewing relevant processes, strategies, and practices to ensure they remain current and fit for purpose.

(29) The University will enact strategies to:

  1. ensure employees and students:
    1. understand the implications of using gen AI;
    2. understand and comply with the Academic Integrity Policy;
    3. become competent, capable, adaptable, and ethical users of gen AI; and
  2. assure learning through robust assessment design and review in accordance with the Higher Education Assessment (Coursework) Policy and Procedure and VET Assessment System Policy and Procedure, as applicable.

(30) Any use of gen AI to mark assessments or provide feedback on student work must be in accordance with the Data, privacy, and security section of this policy.

  1. Gen AI platforms are not qualified assessors under the VET Trainer and Assessor Qualifications, Competency and Industry Currency Procedure. VET competency judgements must be made by a qualified assessor. 

Research

(31) Gen AI tools may improve efficiency and productivity by streamlining critical processes, including the analysis of data, the synthesis and summarisation of content, and assistance with the preparation of outputs.

(32) Researchers may use gen AI tools within the parameters of the principles set out in the Australian Code for the Responsible Conduct of Research (2018), related University governance documents, relevant legislation and regulation, and funding body and publisher requirements.

(33) There are circumstances in which the use of gen AI is never acceptable. These include the conduct of peer review, the generation of the substantive content of research outputs (including HDR theses), and the writing of the critical components of ethics applications.

(34) Researchers must ensure they do not input material (including data or unpublished manuscripts) without considering privacy, security, legal, and ethical implications.

(35) Researchers remain accountable for their own work, including work produced with the assistance of gen AI tools, and must take responsibility for the integrity of the content created by gen AI.

(36) Researchers who use gen AI must be transparent in documenting its use, including by describing which gen AI tools were used, when and how they were used, and the effect of their use on the research process.

(37) Subject to the conditions above, HDR candidates may consider using gen AI tools where appropriate. They must first consult with their supervisor on the benefits and risks of using gen AI tools and develop a plan that outlines how the tool will be used, its intended impact on research quality, and how principles of research integrity will be observed.

Section 5 - Non-Compliance

(38) Non-compliance with governance documents is considered a breach of the Code of Conduct - Employees or the Code of Conduct - Students, as applicable, and is treated seriously by the University. Reports of concerns about non-compliance will be managed in accordance with the applicable disciplinary procedures outlined in the Charles Darwin University and Union Enterprise Agreement 2025 and the Code of Conduct - Students.

(39) Complaints may be raised in accordance with the Complaints and Grievance Policy and Procedure - Employees and Complaints Policy - Students.

(40) All employees have an individual responsibility to raise any suspicion, allegation or report of fraud or corruption in accordance with the Fraud and Corruption Control Policy and Whistleblower Reporting (Improper Conduct) Procedure.