Semester 1 2026
Assessment 2 - Ethical Considerations in AI Solutions – 30% Guide
(Open Assessment)
Submission Deadline: Sunday 19 April 2026 11:59 PM
Total Assessment weighting – 30%
This assessment aims to develop students' ability to critically examine ethical issues arising from the development and deployment of Artificial Intelligence (AI) technologies in real-world contexts. By engaging with a contemporary AI application or scenario, students will apply conceptual knowledge, ethical reasoning frameworks, and analytical thinking to identify potential risks, biases, governance challenges, and societal implications. The task requires students to evaluate the responsible use of AI systems, assess their broader organisational and social impact, and formulate well-reasoned, evidence-based recommendations. Through this process, students will strengthen their capacity for critical reflection, professional judgement, and the production of a structured academic report that demonstrates ethical awareness and responsible AI practice in complex decision-making environments.
ULO 4: Explore and critically assess the ethical considerations surrounding the development and deployment of AI technologies in society.
This is an individual assessment designed to evaluate students' ability to critically analyse ethical considerations arising from the development and deployment of Artificial Intelligence (AI) systems in real-world contexts. Students will be assigned a contemporary AI application or scenario situated within a realistic organisational or societal setting. The scenario will outline the purpose of the AI system, its operational environment, relevant stakeholders, data usage practices, and key ethical concerns.
The task requires students to produce a comprehensive written report of approximately 2000 words that demonstrates critical analysis, ethical reasoning, conceptual understanding, and professional academic communication. The report must evaluate the ethical implications of the AI application, assess risks and governance challenges, and propose well-reasoned recommendations to support responsible AI development and deployment.
This assessment aims to simulate a professional ethical review of an AI solution. To complete it, students are required to:
The final submission must include:
This assessment aims to help students:
This assessment must be submitted in an academic report format, including the provided assessment cover sheet from the ICT 902 Moodle page. The report should include an introduction, main body, conclusion, recommendations, and a reference list.
Formatting: The report should be approximately 2,000 words, in either Calibri or Times New Roman font, size 12, double-spaced, with a minimum of ten references in APA 7 format. The assessment carries a total weight of 30%.
Due Date: Week 7
Resources Available: Lecture slides and notes from weeks one to ten. Videos available in the "Readings and Viewings" section of OASIS.
Guide: The guide below provides a concise overview of how to approach the assessment.
Total Word Count: 2,000 words
This includes ethical analysis, governance evaluation, responsible AI strategy development, and structured recommendations. The report must integrate conceptual understanding and critical reasoning aligned with ethical principles and responsible AI practice discussed in the unit.
Before the due date, each student is allowed three (3) submission attempts, providing an opportunity to check for unintended plagiarism using text-matching software. Review the similarity report, make any necessary revisions, and ensure your final submission reflects your own original work. If the similarity score is 31% or higher, revise the content before making your final submission, as high similarity may indicate academic misconduct.
Students must submit original work and uphold academic integrity at Southern Cross Institute (SCI). The Academic Integrity Policy and Procedure outlines the principles of academic honesty and details the consequences of misconduct, including plagiarism, recycling, fabricating information, collusion, cheating in examinations, contract cheating, misuse of artificial intelligence tools, dishonest behaviour, etc. SCI utilises Turnitin to encourage proper citation practices and to detect potential academic misconduct.
Refer to the Quick Guide for Students created by the Learning Support Team for best practices in using GenAI tools. While GenAI can assist with idea generation, structuring, and drafting, students must carefully review, paraphrase, and properly reference any AI-generated content they use. Overreliance on AI may raise academic integrity concerns, such as the inclusion of fabricated information.
As per the American Psychological Association (2020), the reference and in-text citations for ChatGPT are formatted as follows:
OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat
Note: Although the example here focuses on ChatGPT, this format can be adapted to other large language models (e.g., Bard), algorithms, and similar software.
For further details, please refer to the ICT902 Unit Outline and ICT902 Unit Assessment Guide for additional information or contact your Lecturer. Please refer to the next page for the marking rubric for Assessment 2.
| Criteria | Fail (0 – 49%) | Pass (50 - 64%) | Credit (65-74%) | Distinction (75-84%) | High Distinction (85 – 100%) |
|---|---|---|---|---|---|
| Research Quality and Solution Feasibility (10%) | Insufficient research conducted, with few or no feasible solutions presented. Relies on non-credible or unsupported ideas. | Adequate research conducted, presenting moderately feasible solutions supported by credible sources. | Above-average quality of research, presenting feasible solutions backed by good sources. | Very good research, presenting highly feasible solutions supported by strong academic and professional sources. | Exceptional research, presenting innovative and highly feasible solutions with extensive academic support. |
| Understanding of the challenge in terms of theories and concepts (20%) | Has not adequately understood the challenge in terms of the theories and concepts studied (e.g., has used terminology incorrectly, or the design/prototype is based on theoretically/conceptually incorrect assumptions, or has misconceived the issue/problem) | Has adequately understood the challenge in terms of the theories and concepts studied (e.g., has used terminology correctly, and the design/prototype is based on theoretically/conceptually correct assumptions) | Has adequately understood the challenge in terms of the theories and concepts studied to an above-average standard | Has adequately understood the challenge in terms of the theories and concepts studied to a very good standard | Has adequately understood the challenge in terms of the theories and concepts studied to an exceptional standard |
| Coherence of analysis justifying design/prototype (10%) | The rationale for the design/prototype is illogical and/or poorly reasoned (e.g., because it relies on unfounded assumptions or misunderstands the theories and concepts applied) | The rationale for the design/prototype is mostly logical and well-reasoned | The rationale for the design/prototype is logical and well-reasoned to an above-average standard | The rationale for the design/prototype is logical and well-reasoned to a very good standard | The rationale for the design/prototype is logical and well-reasoned to an exceptional standard |
| Support for design/prototype (10%) | The design/prototype is insufficiently supported by theory and/or evidence | The design/prototype is supported by theory and/or evidence | The design/prototype is supported by theory and/or evidence to an above-average standard | The design/prototype is supported by theory and/or evidence to a very good standard | The design/prototype is supported by theory and/or evidence to an exceptional standard |
Note: The report below is provided as a sample for reference purposes only.

Artificial Intelligence (AI) has become a transformative force in modern organisations, particularly in automating decision-making processes such as recruitment. AI-driven hiring systems are increasingly used to screen resumes, rank candidates, and even conduct automated interviews. These systems promise efficiency, cost reduction, and improved decision-making by analysing large volumes of applicant data. However, their deployment raises significant ethical concerns, particularly regarding fairness, transparency, accountability, and privacy.
In today’s data-driven environment, organisations rely heavily on algorithmic systems to enhance operational efficiency. However, as AI systems become more autonomous, the risk of unintended bias and ethical violations also increases. Recruitment is a critical organisational function that directly impacts individuals’ livelihoods and societal equality. Therefore, ethical oversight in AI-based hiring is essential to ensure that these technologies do not reinforce discrimination or undermine trust.
This report critically examines an AI-based recruitment system used by a multinational organisation to automate candidate screening. The system uses historical hiring data to train machine learning models that evaluate applicants based on resumes, assessments, and behavioural indicators.
The report will analyse the system’s functionality and its impact on stakeholders, identify key ethical risks such as bias and lack of transparency, and evaluate governance challenges. It will further propose a responsible AI strategy, including mitigation measures and governance frameworks, to ensure ethical deployment. Finally, the report will provide actionable recommendations to enhance fairness, accountability, and trust in AI-driven hiring systems.
The AI recruitment system under consideration is a predictive machine learning model designed to evaluate job applicants. It processes resumes, educational backgrounds, work experience, and psychometric test results to generate a suitability score. The system is trained using historical hiring data, which includes profiles of previously successful employees.
Stakeholders
The system affects multiple stakeholders:
Potential Impacts
1. Bias and Fairness Issues
The system relies on historical data, which may reflect past hiring biases. If previous hiring practices favoured certain demographics (e.g., gender, ethnicity, or educational background), the AI may replicate and even amplify these biases, leading to the unfair exclusion of qualified candidates.
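This kind of bias can be checked quantitatively. The sketch below, a minimal illustration in plain Python with invented data and group labels, computes selection rates per demographic group and the disparate-impact ratio; a ratio below 0.8 (the "four-fifths" heuristic often used as a screening threshold in employment contexts) would flag the system for closer fairness review.

```python
# Minimal disparate-impact check on a screening system's decisions.
# The (group, selected) outcomes below are invented for illustration only.
from collections import defaultdict

outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# Count applicants and selections per group.
totals, selected = defaultdict(int), defaultdict(int)
for group, was_selected in outcomes:
    totals[group] += 1
    selected[group] += was_selected

# Selection rate per group.
rates = {g: selected[g] / totals[g] for g in totals}

# Disparate-impact ratio: lowest selection rate over highest.
ratio = min(rates.values()) / max(rates.values())

print(rates)            # {'group_a': 0.75, 'group_b': 0.25}
print(round(ratio, 2))  # 0.33 -> well below 0.8, warrants review
```

A check like this is only a first screen: a low ratio does not prove discrimination, and a passing ratio does not prove fairness, but it gives reviewers a concrete, auditable starting point.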
2. Privacy Concerns
The system collects and processes sensitive personal data, including behavioural assessments and possibly social media data. Improper handling or lack of consent can violate privacy rights and data protection regulations.
3. Lack of Transparency
AI models, particularly complex ones like neural networks, often function as “black boxes.” Applicants may not understand why they were rejected, leading to reduced trust and potential legal challenges.
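One way to reduce this opacity, at least for simpler scoring models, is to decompose each applicant's score into per-feature contributions. The sketch below assumes a hypothetical linear suitability model (the feature names and weights are invented for illustration); because each contribution is simply weight × feature value, a rejected applicant can be told which factors drove the outcome.

```python
# Per-feature contribution breakdown for a linear suitability score.
# Feature names and weights are hypothetical, for illustration only.
weights = {"years_experience": 0.4, "assessment_score": 0.5, "referrals": 0.1}

def explain_score(applicant: dict) -> tuple[float, dict]:
    """Return the total score and each feature's contribution to it."""
    contributions = {f: weights[f] * applicant.get(f, 0.0) for f in weights}
    return sum(contributions.values()), contributions

score, parts = explain_score(
    {"years_experience": 3.0, "assessment_score": 0.7, "referrals": 1.0}
)
print(round(score, 2))  # 1.65
print({f: round(v, 2) for f, v in parts.items()})
# {'years_experience': 1.2, 'assessment_score': 0.35, 'referrals': 0.1}
```

Complex models such as neural networks do not decompose this cleanly, which is why post-hoc explanation techniques (or deliberately simpler, interpretable models) are often required for high-stakes decisions like hiring.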
4. Accountability Gaps
When decisions are made by AI, it becomes unclear who is responsible for errors or discrimination—the developers, the organisation, or the system itself.
5. Societal Impact
At a broader level, biased AI hiring systems can reinforce systemic inequalities, limiting opportunities for underrepresented groups and negatively affecting social mobility.
Regulatory Implications
The system must comply with data protection laws such as the General Data Protection Regulation (GDPR), where applicable, as well as anti-discrimination laws. Failure to ensure fairness and transparency may result in legal penalties and reputational damage.
1. Data Bias and Quality Issues
Training data is often the root cause of bias. If the dataset is skewed or incomplete, the model will produce discriminatory outcomes. This violates ethical principles of fairness and equality.
2. Algorithmic Opacity
Lack of explainability makes it difficult to audit decisions. This challenges the principle of transparency and prevents stakeholders from questioning outcomes.
3. Automation Bias
Human recruiters may overly rely on AI recommendations, assuming they are objective. This reduces critical human oversight and increases the risk of unethical decisions.
4. Privacy Violations
The use of personal and behavioural data raises concerns about consent, data minimisation, and misuse.
Governance Challenges
1. Lack of Clear Accountability
Organisations often lack defined roles for AI oversight. Without clear accountability structures, ethical risks may go unaddressed.
2. Insufficient Regulation Compliance
AI systems evolve rapidly, while regulations lag behind. Organisations may struggle to ensure compliance with emerging AI governance standards.
3. Weak Monitoring Mechanisms
Many organisations deploy AI systems without continuous monitoring, leading to undetected biases over time.
Ethical Principles Applied
Risk Severity
The risks associated with AI hiring systems are high-impact, as they directly affect individuals’ employment opportunities. Long-term societal consequences include increased inequality and loss of trust in technology.
To address these ethical risks, a structured responsible AI framework is essential.
1. Bias Mitigation Strategies
2. Transparency and Explainability
3. Governance and Oversight
4. Privacy Protection Measures
5. Human-in-the-Loop Approach
6. Continuous Monitoring
Justification
These strategies align with global AI ethics frameworks such as the OECD AI Principles and with industry best practices. Implementing them enhances trust, reduces risk, and supports compliance with legal standards.
AI-based recruitment systems offer significant benefits in efficiency and scalability, but they also introduce complex ethical challenges. This report identified key issues such as bias, lack of transparency, privacy concerns, and accountability gaps. These risks not only affect individual job applicants but also have broader societal implications, including reinforcing inequality and reducing trust in AI systems.
The analysis highlights the importance of integrating ethical considerations into every stage of AI development and deployment. Responsible AI governance is essential to ensure that these systems operate fairly, transparently, and accountably. Without proper oversight, the benefits of AI may be overshadowed by unintended harm.
Organisations must adopt proactive strategies, including bias mitigation, transparency measures, and strong governance frameworks, to address these challenges. Embedding ethical principles into AI systems is not only a moral obligation but also a strategic necessity for maintaining organisational reputation and legal compliance.
Ultimately, responsible AI practices will enable organisations to leverage technological advancements while safeguarding human values and societal well-being.
To ensure ethical and responsible deployment of AI-based recruitment systems, the following recommendations are proposed:
These recommendations provide a structured roadmap for organisations to develop and deploy AI systems responsibly, ensuring fairness, accountability, and trust.