1. What is the primary ethical concern when using AI in healthcare?
a) Speed of diagnosis
b) Cost of implementation
c) Patient privacy and data security
d) Availability of AI tools
Answer: c) Patient privacy and data security
2. Which principle ensures that AI systems in healthcare do not discriminate against patients based on race, gender, or socioeconomic status?
a) Transparency
b) Fairness
c) Accountability
d) Autonomy
Answer: b) Fairness
3. What is “algorithmic bias” in the context of AI in healthcare?
a) Errors caused by outdated algorithms
b) Bias introduced due to unrepresentative training data
c) Over-reliance on AI systems by clinicians
d) Lack of transparency in decision-making
Answer: b) Bias introduced due to unrepresentative training data (see the audit sketch below)
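To make this concrete, here is a minimal Python sketch (the data, group labels, and predictions are invented for the example, not drawn from any real system) that disaggregates a model's accuracy by demographic group; a large gap between groups is one practical signal of bias introduced by unrepresentative training data.

```python
# Minimal sketch, assuming hypothetical prediction records: disaggregate a
# model's accuracy by demographic group to surface gaps that unrepresentative
# training data can cause. Group labels and values are illustrative only.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, y_true, y_pred) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        hits[group] += int(y_true == y_pred)
    return {group: hits[group] / totals[group] for group in totals}

# Invented predictions from a hypothetical diagnostic model.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 1, 1),
]
for group, acc in accuracy_by_group(records).items():
    print(f"{group}: accuracy = {acc:.2f}")  # a large gap flags potential bias
```

In practice the same disaggregation would be run on held-out clinical data with several fairness metrics, but the core move is simply to report performance per group rather than a single overall number.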
4. Which ethical framework emphasizes the importance of informed consent when using AI for patient care?
a) Deontology
b) Utilitarianism
c) Virtue ethics
d) Principlism
Answer: d) Principlism
5. What is a potential risk of over-relying on AI systems for medical diagnoses?
a) Increased patient satisfaction
b) Loss of clinician expertise
c) Faster treatment delivery
d) Improved accuracy
Answer: b) Loss of clinician expertise
6. Which regulation focuses on protecting patient data in the European Union?
a) HIPAA
b) GDPR
c) PIPEDA
d) CCPA
Answer: b) GDPR
7. What is the role of “explainability” in ethical AI use in healthcare?
a) To ensure AI decisions are transparent and understandable
b) To reduce the cost of AI implementation
c) To increase the speed of AI processing
d) To improve the accuracy of AI predictions
Answer: a) To ensure AI decisions are transparent and understandable
8. Which of the following is an ethical challenge of using AI in mental health care?
a) Lack of qualified therapists
b) Inability to detect subtle emotional cues
c) High costs of therapy
d) Over-prescription of medication
Answer: b) Inability to detect subtle emotional cues
9. What does “data minimization” refer to in the context of AI ethics?
a) Collecting as much data as possible for better AI performance
b) Limiting data collection to what is necessary for the task
c) Deleting all patient data after use
d) Sharing data with third parties
Answer: b) Limiting data collection to what is necessary for the task (see the sketch below)
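As a hedged illustration of data minimization, the following sketch keeps only the fields a hypothetical readmission-risk model needs and discards everything else before the data leave the clinical system; the field names and the task are assumptions made for the example.

```python
# Minimal sketch of data minimization, assuming a hypothetical readmission-risk
# task: keep only the fields the model actually needs and drop the rest before
# the data leave the clinical system. Field names are illustrative assumptions.
REQUIRED_FIELDS = {"age", "diagnosis_code", "length_of_stay_days"}

def minimize(record: dict) -> dict:
    """Return a copy of the record restricted to the required fields."""
    return {key: value for key, value in record.items() if key in REQUIRED_FIELDS}

full_record = {
    "name": "Jane Doe",              # not needed for the task -> dropped
    "address": "12 Example Street",  # not needed -> dropped
    "age": 67,
    "diagnosis_code": "I50.9",
    "length_of_stay_days": 4,
}
print(minimize(full_record))
# {'age': 67, 'diagnosis_code': 'I50.9', 'length_of_stay_days': 4}
```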
10. Which ethical issue arises when AI systems replace human caregivers in elderly care?
a) Increased efficiency
b) Dehumanization of care
c) Lower healthcare costs
d) Improved patient outcomes
Answer: b) Dehumanization of care
11. What is the main goal of “value alignment” in AI ethics?
a) Aligning AI goals with organizational profits
b) Ensuring AI systems reflect societal values and ethics
c) Reducing the computational power required by AI
d) Increasing AI adoption rates
Answer: b) Ensuring AI systems reflect societal values and ethics
12. Which of the following is a key ethical consideration when deploying AI in low-resource healthcare settings?
a) Ensuring equitable access to AI tools
b) Maximizing AI complexity
c) Prioritizing high-income patients
d) Focusing solely on urban areas
Answer: a) Ensuring equitable access to AI tools
13. What is “surveillance capitalism” in the context of AI in healthcare?
a) Using AI to monitor patient behavior for profit
b) Deploying AI systems in public hospitals
c) Training AI models on surveillance footage
d) Monitoring healthcare workers’ performance
Answer: a) Using AI to monitor patient behavior for profit
14. Which ethical principle requires AI systems to be accountable for their decisions?
a) Beneficence
b) Non-maleficence
c) Accountability
d) Justice
Answer: c) Accountability
15. What is a potential consequence of AI systems making incorrect medical predictions?
a) Increased trust in AI
b) Harm to patients and reputational damage
c) Lower healthcare costs
d) Faster treatment delivery
Answer: b) Harm to patients and reputational damage
16. Which of the following is an example of “informed consent” in AI-driven healthcare?
a) Patients agreeing to share their data for AI training
b) Clinicians choosing which AI tools to use
c) Hospitals purchasing AI systems
d) Governments regulating AI use
Answer: a) Patients agreeing to share their data for AI training
17. What is the ethical concern of using AI to predict patient life expectancy?
a) It may lead to improved palliative care
b) It could result in discrimination or denial of care
c) It increases patient autonomy
d) It reduces healthcare costs
Answer: b) It could result in discrimination or denial of care
18. Which ethical issue arises from the lack of diversity in AI training datasets?
a) Improved model accuracy
b) Algorithmic bias and unfair outcomes
c) Faster AI deployment
d) Increased transparency
Answer: b) Algorithmic bias and unfair outcomes
19. What is the role of “stakeholder engagement” in ethical AI development?
a) To ensure AI systems meet the needs of all stakeholders
b) To maximize profits for AI developers
c) To reduce the time needed for AI deployment
d) To focus solely on technical aspects
Answer: a) To ensure AI systems meet the needs of all stakeholders
20. Which of the following best describes “moral responsibility” in AI ethics?
a) Holding AI systems accountable for their actions
b) Assigning responsibility to humans for AI outcomes
c) Ignoring ethical considerations in AI design
d) Focusing only on legal compliance
Answer: b) Assigning responsibility to humans for AI outcomes
21. What is a potential ethical issue with using AI for resource allocation in healthcare?
a) Increased efficiency
b) Unfair prioritization of certain patient groups
c) Lower operational costs
d) Improved patient outcomes
Answer: b) Unfair prioritization of certain patient groups
22. Which ethical principle emphasizes avoiding harm to patients in AI-driven healthcare?
a) Beneficence
b) Non-maleficence
c) Autonomy
d) Justice
Answer: b) Non-maleficence
23. What is the ethical concern of using AI to automate clinical decision-making?
a) Increased patient satisfaction
b) Erosion of clinician-patient trust
c) Faster treatment delivery
d) Reduced healthcare costs
Answer: b) Erosion of clinician-patient trust
24. Which of the following is a key ethical consideration when sharing patient data with AI developers?
a) Ensuring data anonymization
b) Maximizing data volume
c) Focusing on financial benefits
d) Ignoring patient consent
Answer: a) Ensuring data anonymization (see the pseudonymization sketch below)
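The following is a minimal sketch of one anonymization step, pseudonymizing a direct identifier with a keyed hash before records are shared with a developer; the identifiers and field names are hypothetical, and real projects should follow a recognized de-identification standard (for example HIPAA Safe Harbor or expert determination) rather than rely on hashing alone.

```python
# Minimal sketch, not a complete de-identification pipeline: replace a direct
# identifier with a keyed hash (pseudonymization) before sharing the record.
# Identifiers and field names are hypothetical.
import hashlib
import hmac
import os

SECRET_KEY = os.urandom(32)  # held by the data controller, never shared

def pseudonymize(patient_id: str) -> str:
    """Return a stable pseudonym for the identifier under SECRET_KEY."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

record = {"patient_id": "MRN-001234", "age_band": "60-69", "diagnosis_code": "E11.9"}
shared = {**record, "patient_id": pseudonymize(record["patient_id"])}
print(shared)  # the original MRN never leaves the data controller
```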
25. What is the ethical implication of AI systems replacing human judgment in critical care?
a) Improved accuracy
b) Loss of empathy and human connection
c) Faster decision-making
d) Lower costs
Answer: b) Loss of empathy and human connection
Section 1: Bias & Fairness in AI
1. An AI diagnostic tool performs poorly on minority populations due to underrepresentation in training data. What ethical issue arises?
a) Lack of transparency
b) Algorithmic bias
c) Data privacy violation
d) Over-reliance on AI
Answer: b) Algorithmic bias
Explanation: Bias occurs when training data lacks diversity, leading to inequitable outcomes. This violates the principle of justice in healthcare.
2. A hospital uses an AI system to prioritize patients for surgery, but it disproportionately delays care for elderly patients. What is the ethical concern?
a) Lack of accountability
b) Age discrimination
c) Data security breach
d) Informed consent
Answer: b) Age discrimination
Explanation: Deprioritizing care on the basis of age violates fairness and non-maleficence; AI systems must be audited for such biases (a short audit sketch follows).
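As a hedged illustration of such an audit, the sketch below compares selection rates across age bands for a hypothetical surgery-prioritization model, a simple demographic-parity style check; the data, age bands, and the point at which a ratio becomes concerning are all assumptions.

```python
# Minimal sketch, assuming invented decisions from a hypothetical
# surgery-prioritization model: compare selection rates across age bands,
# a simple demographic-parity style check for age-related disparity.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (age_band, prioritized) with prioritized in {0, 1}."""
    chosen, totals = defaultdict(int), defaultdict(int)
    for age_band, prioritized in decisions:
        totals[age_band] += 1
        chosen[age_band] += prioritized
    return {band: chosen[band] / totals[band] for band in totals}

decisions = [("18-64", 1), ("18-64", 1), ("18-64", 0), ("18-64", 1),
             ("65+", 0), ("65+", 1), ("65+", 0), ("65+", 0)]
rates = selection_rates(decisions)
print(rates)                                      # e.g. {'18-64': 0.75, '65+': 0.25}
print(min(rates.values()) / max(rates.values()))  # ratios far below 1 warrant review
```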
Section 2: Transparency & Explainability
3. A doctor uses an AI tool to diagnose cancer but cannot explain how it reached its conclusion. What ethical principle is compromised?
a) Autonomy
b) Transparency
c) Justice
d) Beneficence
Answer: b) Transparency
Explanation: Clinicians and patients must understand AI decisions to trust and validate them. Lack of explainability undermines accountability.
4. A patient refuses treatment recommended by an AI system because they don’t understand its reasoning. What should the doctor do?
a) Override the refusal for patient safety
b) Respect the refusal and seek alternatives
c) Explain the AI’s decision in simpler terms
d) Refer the patient to another specialist
Answer: c) Explain the AI’s decision in simpler terms
Explanation: Patients have the right to understand their care. Simplifying AI outputs aligns with patient-centered care.
Section 3: Privacy & Data Security
5. An AI system trained on patient data is sold to a private company without patient consent. What ethical violation occurs?
a) Breach of confidentiality
b) Lack of transparency
c) Algorithmic bias
d) Over-reliance on AI
Answer: a) Breach of confidentiality
Explanation: Selling or sharing patient data without consent violates privacy and autonomy and breaches data-protection rules such as the GDPR and HIPAA.
6. A hospital’s AI system is hacked, exposing sensitive patient data. Who is ethically responsible?
a) The AI developer
b) The hospital’s IT team
c) The patients
d) The government
Answer: b) The hospital’s IT team
Explanation: The hospital, as the healthcare provider, bears primary responsibility for safeguarding patient data through its IT and security teams; failing to secure AI systems breaches non-maleficence.
Section 4: Accountability & Liability
7. An AI system misdiagnoses a patient, leading to harm. Who is liable?
a) The AI developer
b) The treating physician
c) The hospital
d) All of the above
Answer: d) All of the above
Explanation: Liability may be shared among developers, clinicians, and institutions, depending on the error’s cause (e.g., algorithm flaw, misuse).
8. A doctor blindly follows an AI recommendation without clinical judgment, resulting in patient harm. What ethical principle is violated?
a) Autonomy
b) Accountability
c) Justice
d) Beneficence
Answer: b) Accountability
Explanation: Clinicians must exercise independent judgment. Over-reliance on AI without oversight breaches professional accountability.
Section 5: Patient Autonomy & Consent
9. A patient is unaware that their data is being used to train an AI system. What ethical issue arises?
a) Lack of transparency
b) Breach of confidentiality
c) Lack of informed consent
d) Algorithmic bias
Answer: c) Lack of informed consent
Explanation: Patients must be informed of, and consent to, how their data are used, in line with the principle of autonomy and with GDPR requirements.
10. A patient refuses AI-assisted surgery, preferring a human surgeon. What is the ethical response?
a) Override the refusal for better outcomes
b) Respect the refusal and proceed without AI
c) Explain the benefits of AI to change their mind
d) Refer the patient to another hospital
Answer: b) Respect the refusal and proceed without AI
Explanation: Patients have the right to refuse AI involvement in their care, consistent with autonomy and informed consent.
Section 6: Clinical Decision-Making & Trust
11. A doctor disagrees with an AI diagnosis but follows it to avoid liability. What ethical concern arises?
a) Over-reliance on AI
b) Lack of transparency
c) Breach of confidentiality
d) Algorithmic bias
Answer: a) Over-reliance on AI
Explanation: Blindly following AI undermines clinical judgment and patient-centered care.
12. A patient distrusts an AI recommendation because it conflicts with their doctor’s opinion. What should the doctor do?
a) Override the AI and follow their own judgment
b) Explain the AI’s reasoning and seek consensus
c) Refer the patient to another specialist
d) Discontinue AI use entirely
Answer: b) Explain the AI’s reasoning and seek consensus
Explanation: Building trust requires transparency and shared decision-making, aligning with patient autonomy.
Section 7: Emerging Issues in AI Healthcare
13. A company markets an unproven AI diagnostic tool directly to patients. What ethical issue arises?
a) Lack of accountability
b) Exploitation of vulnerable populations
c) Breach of confidentiality
d) Over-reliance on AI
Answer: b) Exploitation of vulnerable populations
Explanation: Marketing unproven tools exploits patient trust and violates non-maleficence.
14. An AI system recommends expensive treatments that benefit the hospital financially. What ethical concern arises?
a) Conflict of interest
b) Lack of transparency
c) Algorithmic bias
d) Breach of confidentiality
Answer: a) Conflict of interest
Explanation: Financial incentives can bias AI recommendations, undermining justice and beneficence.
1. Ethical Principles and Frameworks
- Which ethical principle is most relevant to ensuring that AI in healthcare does not cause harm?
- A) Autonomy
- B) Beneficence
- C) Non-maleficence
- D) Justice
- Answer: C) Non-maleficence
- Which organization published the guidance “Ethics and Governance of Artificial Intelligence for Health”, which sets out ethical principles for AI in medicine and healthcare?
- A) World Health Organization (WHO)
- B) American Medical Association (AMA)
- C) UNESCO
- D) European Medicines Agency (EMA)
- Answer: A) World Health Organization (WHO)
- What does the principle of justice in AI ethics primarily address?
- A) Fair allocation of resources and benefits
- B) The right of patients to refuse AI-assisted treatment
- C) AI’s ability to outperform human doctors
- D) Profitability of AI-driven healthcare tools
- Answer: A) Fair allocation of resources and benefits
- Which of the following is NOT a core ethical principle in AI healthcare applications?
- A) Transparency
- B) Efficiency
- C) Accountability
- D) Equity
- Answer: B) Efficiency
- The Ethics Guidelines for Trustworthy AI, published by the European Commission’s High-Level Expert Group on AI, emphasize which key concept?
- A) AI should be completely autonomous
- B) AI should replace human healthcare workers
- C) AI should be lawful, ethical, and robust
- D) AI should only be used in private healthcare settings
- Answer: C) AI should be lawful, ethical, and robust
2. Bias and Fairness in AI
- Algorithmic bias in AI healthcare can lead to:
- A) Increased accuracy in diagnoses
- B) Fairer treatment outcomes
- C) Disparities in medical decision-making
- D) Increased acceptance of AI in medicine
- Answer: C) Disparities in medical decision-making
- What is one common cause of bias in AI healthcare systems?
- A) Too much data
- B) Use of diverse training datasets
- C) Underrepresentation of certain populations in training data
- D) Too many regulations
- Answer: C) Underrepresentation of certain populations in training data
- Which of the following strategies can help reduce bias in AI healthcare applications?
- A) Using historical medical records exclusively
- B) Ensuring diverse and representative training data
- C) Ignoring sensitive demographic factors
- D) Relying only on AI without human oversight
- Answer: B) Ensuring diverse and representative training data
- In AI-driven diagnostics, racial bias has been observed in which of the following medical areas?
- A) Predictive modeling for kidney disease
- B) Cancer detection in imaging
- C) Skin disease recognition
- D) All of the above
- Answer: D) All of the above
- What is a major concern with using historical patient data to train AI models?
- A) It increases computational costs
- B) It may reinforce existing biases
- C) It slows down AI decision-making
- D) It makes AI less explainable
- Answer: B) It may reinforce existing biases
3. Privacy and Data Security
- What is the primary legal framework governing patient data protection in the European Union?
- A) HIPAA
- B) GDPR
- C) FDA Regulations
- D) CCPA
- Answer: B) GDPR
- The Health Insurance Portability and Accountability Act (HIPAA) primarily applies to:
- A) The European Union
- B) The United States
- C) All AI-driven healthcare technologies globally
- D) Only government healthcare institutions
- Answer: B) The United States
- What is the primary ethical issue related to AI and patient privacy?
- A) AI can make more accurate predictions than doctors
- B) AI can access and share sensitive health data
- C) AI lacks emotional intelligence
- D) AI replaces human medical professionals
- Answer: B) AI can access and share sensitive health data
- In AI-driven healthcare, “informed consent” refers to:
- A) Patients agreeing to undergo AI-based treatment
- B) Patients being informed about how AI uses their data
- C) Doctors making decisions without consulting patients
- D) AI making decisions autonomously
- Answer: B) Patients being informed about how AI uses their data
- Which of the following is an important aspect of secure AI healthcare systems?
- A) Open access to all patient records
- B) Data encryption and anonymization
- C) Removing all regulations on AI use
- D) Selling patient data to research companies
- Answer: B) Data encryption and anonymization (see the encryption sketch below)
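As a hedged example of encryption at rest, the sketch below encrypts a record with the third-party Python `cryptography` package before storage; the record fields are hypothetical, and key management (rotation, access control, hardware security modules) is deliberately out of scope.

```python
# Minimal sketch of encryption at rest, using the third-party `cryptography`
# package (pip install cryptography). Record fields are hypothetical; key
# management (rotation, access control, HSMs) is out of scope here.
import json

from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, held in a key-management service
fernet = Fernet(key)

record = {"patient_id": "pseudonym-7f3a", "diagnosis_code": "E11.9"}
ciphertext = fernet.encrypt(json.dumps(record).encode())

# Only holders of the key can recover the record.
restored = json.loads(fernet.decrypt(ciphertext).decode())
assert restored == record
```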
4. Accountability and Decision-Making
- If an AI system misdiagnoses a patient, who is primarily accountable?
- A) The patient
- B) The AI itself
- C) The developers and healthcare providers
- D) No one, since AI is independent
- Answer: C) The developers and healthcare providers
- The concept of explainability in AI ethics refers to:
- A) AI systems providing clear reasons for their decisions
- B) AI making decisions without human involvement
- C) AI performing tasks faster than humans
- D) AI having complete autonomy over medical procedures
- Answer: A) AI systems providing clear reasons for their decisions (see the sketch below)
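To illustrate one simple form of explainability, the sketch below reports per-feature contributions for a hypothetical linear risk score so a reader can see which inputs drove the output; the feature names and weights are invented for the example and do not correspond to any validated model.

```python
# Minimal sketch of one simple form of explainability: report per-feature
# contributions to a hypothetical linear risk score so a reader can see which
# inputs drove the output. Feature names and weights are invented.
WEIGHTS = {"age": 0.03, "systolic_bp": 0.02, "hba1c": 0.45}
INTERCEPT = -4.0

def explain(features: dict) -> dict:
    """Return each feature's contribution to the raw (pre-threshold) score."""
    return {name: WEIGHTS[name] * value for name, value in features.items()}

patient = {"age": 70, "systolic_bp": 150, "hba1c": 8.2}
contributions = explain(patient)
score = INTERCEPT + sum(contributions.values())

for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:12s} {value:+.2f}")
print(f"{'raw score':12s} {score:+.2f}")
```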
- The black box problem in AI ethics refers to:
- A) AI systems lacking transparency in decision-making
- B) AI only working in certain hospitals
- C) AI replacing all human doctors
- D) AI refusing to make ethical decisions
- Answer: A) AI systems lacking transparency in decision-making
- Which ethical principle is violated when an AI system gives different treatment recommendations for two patients with similar conditions?
- A) Autonomy
- B) Beneficence
- C) Justice
- D) Non-maleficence
- Answer: C) Justice
- Which of the following can improve AI accountability in healthcare?
- A) Making AI decisions fully autonomous
- B) Implementing regular audits and human oversight
- C) Keeping AI decision-making processes confidential
- D) Using proprietary algorithms without review
- Answer: B) Implementing regular audits and human oversight
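As a closing illustration of audits and human oversight, the sketch below routes low-confidence AI outputs to a clinician and records every decision in an audit log for later review; the confidence threshold, labels, fields, and workflow are assumptions made for the example.

```python
# Minimal sketch of human oversight with an audit trail: low-confidence AI
# outputs are escalated to a clinician, and every decision is logged for later
# review. The threshold, labels, and fields are illustrative assumptions.
import json
import time

CONFIDENCE_THRESHOLD = 0.90
audit_log = []

def decide(case_id, ai_label, ai_confidence, clinician_review=None):
    """Return the final label, escalating low-confidence cases to a human."""
    needs_review = ai_confidence < CONFIDENCE_THRESHOLD
    final_label = clinician_review if (needs_review and clinician_review) else ai_label
    audit_log.append({
        "timestamp": time.time(),
        "case_id": case_id,
        "ai_label": ai_label,
        "ai_confidence": ai_confidence,
        "escalated_to_clinician": needs_review,
        "final_label": final_label,
    })
    return final_label

decide("case-001", "benign", 0.97)                                # accepted as-is
decide("case-002", "malignant", 0.62, clinician_review="benign")  # human overrides
print(json.dumps(audit_log, indent=2))
```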