Assessment 2 : ICT902 Artificial Intelligence and Machine Learning

Semester 1 2026

Assessment 2 - Ethical Considerations in AI Solutions – 30% Guide

(Open Assessment)

Submission Deadline: Sunday 19 April 2026 11:59 PM

Total Assessment weighting – 30%

Purpose of this assessment

This assessment aims to develop students' ability to critically examine ethical issues arising from the development and deployment of Artificial Intelligence (AI) technologies in real-world contexts. By engaging with a contemporary AI application or scenario, students will apply conceptual knowledge, ethical reasoning frameworks, and analytical thinking to identify potential risks, biases, governance challenges, and societal implications. The task requires students to evaluate the responsible use of AI systems, assess their broader organisational and social impact, and formulate well-reasoned, evidence-based recommendations. Through this process, students will strengthen their capacity for critical reflection, professional judgment, and the production of a structured academic report that demonstrates ethical awareness and responsible AI practice in complex decision-making environments.

Demonstrate achievement of this learning outcome:

ULO 4: Explore and critically assess the ethical considerations surrounding the development and deployment of AI technologies in society.

Task description:

This is an individual assessment designed to evaluate students' ability to critically analyse ethical considerations arising from the development and deployment of Artificial Intelligence (AI) systems in real-world contexts. Students will be assigned a contemporary AI application or scenario situated within a realistic organisational or societal setting. The scenario will outline the purpose of the AI system, its operational environment, relevant stakeholders, data usage practices, and key ethical concerns.

The task requires students to produce a comprehensive written report of approximately 2000 words that demonstrates critical analysis, ethical reasoning, conceptual understanding, and professional academic communication. The report must evaluate the ethical implications of the AI application, assess risks and governance challenges, and propose well-reasoned recommendations to support responsible AI development and deployment.

This assessment aims to simulate a professional ethical review of an AI solution. Students will be required to:

  • Analyse the assigned AI scenario and identify key ethical issues and stakeholders
  • Examine potential risks including bias, fairness, transparency, accountability, privacy, and societal impact
  • Evaluate the organisational, regulatory, and governance implications of deploying the AI system
  • Propose evidence-based recommendations to mitigate ethical risks and enhance responsible AI practice
  • Critically reflect on the broader implications of AI adoption in organisational and societal contexts

To complete this assessment, students are required to:

  1. Critically analyse the assigned AI application or scenario, identifying ethical risks and contextual factors.
  2. Evaluate potential impacts on individuals, organisations, and society, including issues of fairness, bias, privacy, and accountability.
  3. Apply relevant ethical principles, governance considerations, and responsible AI concepts introduced in the unit.
  4. Develop structured, evidence-based recommendations to improve ethical design, implementation, and oversight of the AI system.
  5. Demonstrate academic rigour through clear argumentation, appropriate referencing, and integration of scholarly or industry sources.

The final submission must include:

  • A structured written report (PDF, approximately 2000 words) analysing the AI scenario, evaluating ethical implications, and presenting recommendations.
  • Appropriate academic references supporting ethical analysis and argumentation.

This assessment aims to help students:

  • Critically evaluate ethical challenges in AI development and deployment.
  • Apply responsible AI principles to realistic organisational scenarios.
  • Formulate structured and evidence-based ethical recommendations.
  • Communicate complex ethical and technical considerations in a professional academic format.
  • Demonstrate readiness to engage responsibly with AI technologies in professional practice.

Structure:

This assessment must be submitted in an academic report format, including the provided assessment cover sheet from the ICT902 Moodle page. The report should include an introduction, main body, conclusion, recommendations, and a reference list.

Formatting: This assessment should be approximately 2,000 words, written in either Calibri or Times New Roman font, size 12. The document should be double-spaced, with a minimum of ten references in APA 7 format. The assessment carries a total weight of 30%.

  • Headings and Subheadings: Use a clear and consistent hierarchy for headings and subheadings. For instance, main headings should be in bold and a larger font size, while subheadings should be bold with a smaller font size.
  • Font Style and Size: Ensure consistency in font style (e.g., Calibri or Times New Roman) and size (12-point for body text). The entire document should be double-spaced with uniform paragraph spacing.
  • Alignment: Keep body text left-aligned for better readability, with consistent margins and paragraph spacing for a neat, organised appearance.
  • Proofreading and Editing: Review your work for grammatical errors and clarity. Consider using tools like Grammarly or peer feedback to improve writing quality.

Due Date: Week 7

Resources Available: Lecture slides and notes from weeks one to ten. Videos available in the "Readings and Viewings" section of OASIS.

Guide: The guide below provides a concise overview of how to approach the assessment.

  1. SCI Cover Page: (No word count)
  2. Table of Contents: (No word count, structured outline)
  3. Introduction (300–350 words)
  • Introduce the assigned AI application or scenario and its relevance in today's data-driven environment.
  • Highlight the growing importance of ethical oversight in AI development and deployment.
  • Outline how the report will examine the AI system, analyse ethical risks, evaluate governance implications, and propose responsible AI recommendations.
  4. Body of the Report:

Paragraph 1: AI System and Impact Analysis (400–450 words)

  • Describe the nature and purpose of the AI system (e.g., predictive model, automated decision system, intelligent assistant).
  • Identify affected stakeholders, including individuals, organisations, and society.
  • Assess potential impacts such as fairness concerns, bias, privacy risks, transparency limitations, or accountability gaps.
  • Discuss possible regulatory or compliance implications where relevant.

Paragraph 2: Ethical Risk and Governance Evaluation (450–500 words)

  • Identify key ethical risks associated with the AI system, including data quality concerns, bias in training data, model opacity, or decision-making autonomy.
  • Evaluate organisational responsibility, governance structures, and oversight mechanisms.
  • Assess risk severity, potential harm, and long-term societal implications.
  • Integrate relevant ethical principles and responsible AI concepts introduced in the unit.

Paragraph 3: Responsible AI Strategy and Mitigation Framework (450–500 words)

  • Propose a structured strategy to mitigate identified ethical risks.
  • Recommend governance mechanisms, transparency practices, monitoring processes, or policy interventions.
  • Discuss accountability structures, fairness auditing, and continuous evaluation mechanisms.
  • Justify recommendations using scholarly and industry sources.
  5. Conclusion (200–250 words)
  • Summarise the key ethical concerns identified in the analysis.
  • Reflect on the importance of responsible AI governance in sustaining trust and legitimacy.
  • Emphasise the long-term benefits of embedding ethical principles into AI system design and deployment.
  6. Recommendations (250–300 words)
  • Provide clear, actionable recommendations to strengthen ethical AI development and oversight.
  • Suggest improvements to governance frameworks, data practices, transparency mechanisms, or monitoring processes.
  • Present a structured roadmap for responsible and sustainable AI implementation.
  7. References (No word count)
  • Minimum 10 credible academic and industry sources cited in APA 7 format.
  • Sources may include academic journals, government guidelines, industry frameworks, and professional standards related to AI ethics and governance.

Total Word Count: 2,000 words

This includes ethical analysis, governance evaluation, responsible AI strategy development, and structured recommendations. The report must integrate conceptual understanding and critical reasoning aligned with ethical principles and responsible AI practice discussed in the unit.

Assessment submission

Before the due date, each student is allowed three (3) submission attempts, providing an opportunity to check for unintended plagiarism using text-matching software. Review the similarity report, make any necessary revisions, and ensure your final submission reflects your original work. If the similarity score is 31% or higher, revise the content before making your final submission, as high similarity may indicate academic misconduct.

Academic Integrity and Misconduct

Students must submit original work and uphold academic integrity at Southern Cross Institute (SCI). The Academic Integrity Policy and Procedure outlines the principles of academic honesty and details the consequences of misconduct, including plagiarism, recycling, fabrication of information, collusion, cheating in examinations, contract cheating, misuse of artificial intelligence tools, and other dishonest behaviour. SCI utilises Turnitin to encourage proper citation practices and to detect potential academic misconduct.

Ethical Use of Generative Artificial Intelligence (GenAI) Tools

Refer to the Quick Guide for Students created by the Learning Support Team for best practices in using GenAI tools. While GenAI can assist with idea generation, structuring, and drafting, students must carefully review, paraphrase, and properly reference any AI-generated content if used. Overreliance on AI may raise academic integrity concerns, such as the inclusion of fabricated information.

Creating a reference to ChatGPT or other AI models and software

As per American Psychological Association (2020), the reference and in-text citations for ChatGPT are formatted as follows:

OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat

  • Parenthetical citation: (OpenAI, 2023)
  • Narrative citation: OpenAI (2023)

Note: Although this example focuses on ChatGPT, the format can be adapted to other large language models (e.g., Bard), algorithms, and similar software.

For further details, please refer to the ICT902 Unit Outline and ICT902 Unit Assessment Guide for additional information or contact your Lecturer. Please refer to the next page for the marking rubric for Assessment 2.

Rubric for Assessment 2 - Ethical Considerations in AI Solutions (30%) – OPEN ASSESSMENT

Grade bands: Fail (0–49%), Pass (50–64%), Credit (65–74%), Distinction (75–84%), High Distinction (85–100%)

Research Quality and Solution Feasibility (10%)
  • Fail: Insufficient research conducted, with few or no feasible solutions presented. Relies on non-credible or unsupported ideas.
  • Pass: Adequate research conducted, presenting moderately feasible solutions supported by credible sources.
  • Credit: Above-average quality of research, presenting feasible solutions backed by good sources.
  • Distinction: Very good research, presenting highly feasible solutions supported by strong academic and professional sources.
  • High Distinction: Exceptional research, presenting innovative and highly feasible solutions with extensive academic support.

Understanding of the challenge in terms of theories and concepts (20%)
  • Fail: Has not adequately understood the challenge in terms of the theories and concepts studied (e.g. has used terminology incorrectly, or the design/prototype is based on theoretically/conceptually incorrect assumptions or misconceives the issue/problem).
  • Pass: Has adequately understood the challenge in terms of the theories and concepts studied (e.g. has correctly used terminology and the design/prototype is based on theoretically/conceptually correct assumptions).
  • Credit: Has adequately understood the challenge in terms of the theories and concepts studied, to an above-average standard.
  • Distinction: Has adequately understood the challenge in terms of the theories and concepts studied, to a very good standard.
  • High Distinction: Has adequately understood the challenge in terms of the theories and concepts studied, to an exceptional standard.

Coherence of analysis justifying design/prototype (10%)
  • Fail: The rationale for the design/prototype is illogical and/or poorly reasoned (e.g. because it relies on unfounded assumptions or misunderstands the theories and concepts applied).
  • Pass: The rationale for the design/prototype is mostly logical and well-reasoned.
  • Credit: The rationale for the design/prototype is logical and well-reasoned to an above-average standard.
  • Distinction: The rationale for the design/prototype is logical and well-reasoned to a very good standard.
  • High Distinction: The rationale for the design/prototype is logical and well-reasoned to an exceptional standard.

Support for design/prototype (10%)
  • Fail: The design/prototype is insufficiently supported by theory and/or evidence.
  • Pass: The design/prototype is supported by theory and/or evidence.
  • Credit: The design/prototype is supported by theory and/or evidence to an above-average standard.
  • Distinction: The design/prototype is supported by theory and/or evidence to a very good standard.
  • High Distinction: The design/prototype is supported by theory and/or evidence to an exceptional standard.

Note: The following report is provided as a sample for reference purposes only.

Ethical Considerations in AI-Based Recruitment Systems

1. Introduction (≈320 words)

Artificial Intelligence (AI) has become a transformative force in modern organisations, particularly in automating decision-making processes such as recruitment. AI-driven hiring systems are increasingly used to screen resumes, rank candidates, and even conduct automated interviews. These systems promise efficiency, cost reduction, and improved decision-making by analysing large volumes of applicant data. However, their deployment raises significant ethical concerns, particularly regarding fairness, transparency, accountability, and privacy.

In today’s data-driven environment, organisations rely heavily on algorithmic systems to enhance operational efficiency. However, as AI systems become more autonomous, the risk of unintended bias and ethical violations also increases. Recruitment is a critical organisational function that directly impacts individuals’ livelihoods and societal equality. Therefore, ethical oversight in AI-based hiring is essential to ensure that these technologies do not reinforce discrimination or undermine trust.

This report critically examines an AI-based recruitment system used by a multinational organisation to automate candidate screening. The system uses historical hiring data to train machine learning models that evaluate applicants based on resumes, assessments, and behavioural indicators.

The report will analyse the system’s functionality and its impact on stakeholders, identify key ethical risks such as bias and lack of transparency, and evaluate governance challenges. It will further propose a responsible AI strategy, including mitigation measures and governance frameworks, to ensure ethical deployment. Finally, the report will provide actionable recommendations to enhance fairness, accountability, and trust in AI-driven hiring systems.

2. AI System and Impact Analysis (≈430 words)

The AI recruitment system under consideration is a predictive machine learning model designed to evaluate job applicants. It processes resumes, educational backgrounds, work experience, and psychometric test results to generate a suitability score. The system is trained using historical hiring data, which includes profiles of previously successful employees.

Stakeholders

The system affects multiple stakeholders:

  • Job applicants (primary stakeholders)
  • Human resource professionals
  • Organisation management
  • Society at large, particularly marginalised groups

Potential Impacts

1. Bias and Fairness Issues
The system relies on historical data, which may reflect past hiring biases. If previous hiring practices favoured certain demographics (e.g., gender, ethnicity, or educational background), the AI may replicate and even amplify these biases. This can lead to the unfair exclusion of qualified candidates.

2. Privacy Concerns
The system collects and processes sensitive personal data, including behavioural assessments and possibly social media data. Improper handling or lack of consent can violate privacy rights and data protection regulations.

3. Lack of Transparency
AI models, particularly complex ones like neural networks, often function as “black boxes.” Applicants may not understand why they were rejected, leading to reduced trust and potential legal challenges.

4. Accountability Gaps
When decisions are made by AI, it becomes unclear who is responsible for errors or discrimination—the developers, the organisation, or the system itself.

5. Societal Impact
At a broader level, biased AI hiring systems can reinforce systemic inequalities, limiting opportunities for underrepresented groups and negatively affecting social mobility.

Regulatory Implications

The system must comply with data protection laws such as the GDPR (where applicable) and anti-discrimination laws. Failure to ensure fairness and transparency may result in legal penalties and reputational damage.

3. Ethical Risk and Governance Evaluation (≈480 words)

Key Ethical Risks

1. Data Bias and Quality Issues
Training data is often the root cause of bias. If the dataset is skewed or incomplete, the model will produce discriminatory outcomes. This violates ethical principles of fairness and equality.

2. Algorithmic Opacity
Lack of explainability makes it difficult to audit decisions. This challenges the principle of transparency and prevents stakeholders from questioning outcomes.

3. Automation Bias
Human recruiters may overly rely on AI recommendations, assuming they are objective. This reduces critical human oversight and increases the risk of unethical decisions.

4. Privacy Violations
The use of personal and behavioural data raises concerns about consent, data minimisation, and misuse.

Governance Challenges

1. Lack of Clear Accountability
Organisations often lack defined roles for AI oversight. Without clear accountability structures, ethical risks may go unaddressed.

2. Insufficient Regulation Compliance
AI systems evolve rapidly, while regulations lag behind. Organisations may struggle to ensure compliance with emerging AI governance standards.

3. Weak Monitoring Mechanisms
Many organisations deploy AI systems without continuous monitoring, leading to undetected biases over time.

Ethical Principles Applied

  • Fairness: Ensuring equal treatment of all candidates
  • Transparency: Making AI decisions understandable
  • Accountability: Assigning responsibility for outcomes
  • Privacy: Protecting personal data
  • Non-maleficence: Avoiding harm to individuals

Risk Severity

The risks associated with AI hiring systems are high-impact, as they directly affect individuals’ employment opportunities. Long-term societal consequences include increased inequality and loss of trust in technology.

4. Responsible AI Strategy and Mitigation Framework (≈480 words)

To address these ethical risks, a structured responsible AI framework is essential.

1. Bias Mitigation Strategies

  • Use diverse and representative datasets
  • Conduct fairness audits regularly
  • Implement bias detection algorithms
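To make the idea of a fairness audit concrete, the sketch below compares selection rates across demographic groups and computes the demographic parity difference, one common statistical fairness metric. The outcome data, group names, and the 0.1 review threshold are illustrative assumptions, not values from any real recruitment system.

```python
# Illustrative fairness audit: compare AI screening selection rates by group.
# All data and the review threshold below are assumptions for illustration.

def selection_rate(outcomes):
    """Fraction of candidates in a group marked as selected (1)."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(outcomes_by_group):
    """Largest gap in selection rate between any two groups.
    A value near 0 suggests similar treatment across groups."""
    rates = [selection_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# 1 = shortlisted by the AI screener, 0 = rejected (synthetic data)
outcomes = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # selection rate 0.625
    "group_b": [0, 1, 0, 0, 1, 0, 0, 0],  # selection rate 0.25
}

gap = demographic_parity_difference(outcomes)
print(f"Demographic parity difference: {gap:.3f}")
if gap > 0.1:  # assumed audit threshold for this example
    print("Gap exceeds threshold: flag model for bias review")
```

A regular audit of this kind, run on each model release and on fresh applicant cohorts, is one way to operationalise the monitoring practices discussed in this section.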

2. Transparency and Explainability

  • Adopt Explainable AI (XAI) techniques
  • Provide applicants with clear feedback on decisions
  • Document model decision processes

3. Governance and Oversight

  • Establish an AI Ethics Committee
  • Define clear accountability roles
  • Implement ethical review processes before deployment

4. Privacy Protection Measures

  • Apply data minimisation principles
  • Ensure informed consent
  • Use data anonymisation techniques
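The data minimisation and anonymisation points above can be sketched as a pseudonymisation step applied before applicant data reaches the screening model: direct identifiers are replaced with a salted hash, and only the fields the model needs are retained. The field names, allow-list, and salt handling here are illustrative assumptions; a production system would need secure salt management and a documented retention policy.

```python
import hashlib

# Illustrative pseudonymisation and data minimisation for applicant records.
# Field names and salt handling are assumptions for illustration only.

SALT = b"example-salt-change-me"  # in practice: secret and securely stored
MODEL_FIELDS = {"years_experience", "qualification_level"}  # assumed allow-list

def pseudonymise(record):
    """Return a minimal record: allow-listed fields plus a hashed token."""
    token = hashlib.sha256(SALT + record["email"].encode()).hexdigest()[:16]
    minimal = {k: v for k, v in record.items() if k in MODEL_FIELDS}
    minimal["applicant_token"] = token  # links back to the applicant only via the salt
    return minimal

applicant = {
    "name": "Jane Example",
    "email": "jane@example.com",
    "years_experience": 6,
    "qualification_level": "Masters",
    "social_media_handle": "@jane",  # collected but not needed by the model: dropped
}

print(pseudonymise(applicant))
```

The design choice here is that the screening model never sees names, emails, or behavioural side-channels such as social media handles, which directly supports the consent and minimisation principles listed above.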

5. Human-in-the-Loop Approach

  • Maintain human oversight in final decisions
  • Encourage recruiters to critically evaluate AI outputs

6. Continuous Monitoring

  • Regularly test models for bias and accuracy
  • Update models to reflect changing societal norms

Justification

These strategies align with global AI ethics frameworks such as OECD AI Principles and industry best practices. Implementing them enhances trust, reduces risk, and ensures compliance with legal standards.

5. Conclusion (≈220 words)

AI-based recruitment systems offer significant benefits in efficiency and scalability, but they also introduce complex ethical challenges. This report identified key issues such as bias, lack of transparency, privacy concerns, and accountability gaps. These risks not only affect individual job applicants but also have broader societal implications, including reinforcing inequality and reducing trust in AI systems.

The analysis highlights the importance of integrating ethical considerations into every stage of AI development and deployment. Responsible AI governance is essential to ensure that these systems operate fairly, transparently, and accountably. Without proper oversight, the benefits of AI may be overshadowed by unintended harm.

Organisations must adopt proactive strategies, including bias mitigation, transparency measures, and strong governance frameworks, to address these challenges. Embedding ethical principles into AI systems is not only a moral obligation but also a strategic necessity for maintaining organisational reputation and legal compliance.

Ultimately, responsible AI practices will enable organisations to leverage technological advancements while safeguarding human values and societal well-being.

6. Recommendations (≈270 words)

To ensure ethical and responsible deployment of AI-based recruitment systems, the following recommendations are proposed:

  1. Implement Fairness Audits
    Regularly evaluate AI models for bias using statistical fairness metrics to ensure equitable outcomes.
  2. Adopt Explainable AI Tools
    Provide transparency by enabling stakeholders to understand how decisions are made.
  3. Strengthen Governance Frameworks
    Establish dedicated AI ethics committees and define accountability structures within the organisation.
  4. Enhance Data Governance
    Ensure high-quality, diverse datasets and implement strict data privacy policies.
  5. Maintain Human Oversight
    Adopt a human-in-the-loop approach to prevent over-reliance on automated decisions.
  6. Continuous Monitoring and Improvement
    Regularly update AI systems to adapt to societal and organisational changes.
  7. Compliance with Regulations
    Align AI practices with legal frameworks and ethical guidelines to avoid legal and reputational risks.
  8. Stakeholder Engagement
    Involve diverse stakeholders in AI design and evaluation processes to ensure inclusivity.

These recommendations provide a structured roadmap for organisations to develop and deploy AI systems responsibly, ensuring fairness, accountability, and trust.

7. References (APA 7 – Sample)

  • Floridi, L., et al. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707.
  • Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399.
  • OECD. (2019). OECD Principles on Artificial Intelligence. OECD Publishing.
  • European Commission. (2021). Ethics guidelines for trustworthy AI.
  • Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11), 501–507.
  • Binns, R. (2018). Fairness in machine learning: Lessons from political philosophy. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 149–159.
  • Barocas, S., & Selbst, A. D. (2016). Big data's disparate impact. California Law Review, 104(3), 671–732.
  • OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat
  • Dignum, V. (2019). Responsible artificial intelligence: How to develop and use AI in a responsible way. Springer.
  • Russell, S., & Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Pearson.
