Artificial Intelligence & the Capacity for Discrimination: The Imperative Need for Frameworks, Diverse Teams & Human Accountability
Technology and Society | Received: 18 Sep 2024 | Accepted: 09 Oct 2024 | Published online: 10 Oct 2024
The increasing integration of Artificial Intelligence (AI) across industries has raised concerns about how these systems can perpetuate discrimination, particularly in fields like employment, healthcare, and public policy. Academic and business perspectives on AI discrimination alike emphasize the need for global policy coordination and ethical oversight to mitigate biased outcomes, and call on technical innovators to build contingencies that better protect people as AI's reach expands. Central to this discussion are constructs such as biased datasets, algorithmic transparency, and the global governance of AI systems; mishandled, each can become a harmful drawback of these systems. Without adequate data governance and transparency, AI systems can perpetuate discrimination.
AI's ability to discriminate stems primarily from biased data and the opacity of machine learning models, necessitating proactive research and policy implementation on a global scale. These frameworks must transcend the limitations of the experiences or perspectives of their programmers to ensure that AI innovations are ethically sound and that their use in global organizations adheres to principles of fairness and accountability. This synthesis will explore how these articles advocate for comprehensive, continuous monitoring of AI systems and policies that address both local and international concerns, offering a roadmap for organizations to innovate responsibly while mitigating the risks of AI-driven discrimination.
The existing research on AI discrimination spans both theoretical and practical domains: academic research focuses on the conceptual underpinnings of bias in AI systems, while business research emphasizes actionable strategies for mitigating those biases. From an academic perspective, studies such as those by Ajunwa [1] and Binns, et al. [2] concentrate on the origins of AI bias, particularly how historical data inputs lead to discriminatory outcomes. These works highlight the complex sociotechnical systems that feed into AI, revealing that bias is not just a technical issue but a socio-ethical challenge requiring rigorous data governance and ethical frameworks. Binns, et al. [2], for example, emphasize the importance of embedding ethics into AI development, while Ajunwa [1] stresses the necessity of regulatory oversight to prevent AI from perpetuating workplace discrimination. This theoretical discourse is foundational to understanding why AI bias occurs and offers a base for creating ethical AI systems that are more equitable across industries. Regulatory instruments such as the AI Risk Management Framework could address these biases, but the framework's voluntary nature leaves gaps that should not exist.

In contrast, business research such as Westerman's [3] work in Harvard Business Review and Floridi and Cowls' [4] global policy analysis focuses on the practical implications of AI bias for organizations. These studies pivot from theoretical discussion to implementation strategy, offering businesses solutions for mitigating AI bias through risk management frameworks, real-time monitoring, and policy alignment. Westerman [3] points out the importance of continuous auditing and transparency in AI systems, advocating practical steps organizations can adopt to reduce discriminatory practices. Floridi and Cowls [4] extend this by discussing how global coordination is necessary for AI governance, particularly in multinational organizations where AI impacts cross national borders. While academic research provides a foundation for understanding the origins of AI discrimination, business research supplies actionable strategies to implement within an organizational framework, emphasizing real-time adjustments and governance as critical to addressing AI bias.

The iTutor Group case serves as a cautionary tale about the unintended consequences of AI in recruitment. Despite the excitement surrounding Applicant Tracking Systems (ATS), such systems can still encode the discriminatory practices programmed into them by their human users; technology is often only as smart as those using the tool. The company's AI-powered recruiting software, designed to streamline and enhance hiring, instead produced systematic age discrimination. According to the EEOC's lawsuit, iTutorGroup programmed its tutor application software to automatically reject female applicants aged 55 or older and male applicants aged 60 or older, rejecting more than 200 qualified applicants based in the United States because of their age [5]. The incident underscores the risks of embedding human biases into automated systems and highlights the need for rigorous oversight and regular audits of AI systems to detect and correct such biases.

The case also highlights the legal and regulatory risks organizations face when deploying AI technologies, risks often introduced by end users who build their own prejudices into a system meant to bring efficiency to an organization. The settlement with the EEOC not only imposed a financial penalty on iTutor Group but also required the company to implement new anti-discrimination policies and practices, underscoring the importance of compliance with anti-discrimination law and the role of governance frameworks in managing AI deployment in business processes. The case thus illustrates the tension between the efficiency gains offered by AI and the potential loss of fairness and human judgment in decision-making: while AI can significantly streamline operations, over-reliance on it can produce dehumanized and unjust outcomes. Maintaining a balance between automation and human oversight is increasingly necessary, particularly for decisions with profound implications for individuals' lives and careers.
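Discrimination of this kind is detectable well before a lawsuit if selection rates are audited by group. A common screening heuristic is the EEOC's "four-fifths rule," under which a selection rate for any group below 80% of the highest group's rate is treated as preliminary evidence of adverse impact. The sketch below applies that rule to hypothetical applicant records; the age bands, data, and threshold handling are illustrative assumptions, not the EEOC's formal methodology.

```python
from collections import Counter

# Hypothetical applicant records: (age_band, was_selected).
applicants = [
    ("under_40", True), ("under_40", True), ("under_40", False),
    ("under_40", True), ("40_54", True), ("40_54", False),
    ("55_plus", False), ("55_plus", False), ("55_plus", True),
    ("55_plus", False),
]

def selection_rates(records):
    """Selection rate (selected / applicants) for each group."""
    totals, selected = Counter(), Counter()
    for group, was_selected in records:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_flags(rates, threshold=0.8):
    """Flag any group whose rate is below 80% of the best group's rate."""
    best = max(rates.values())
    return {g: r / best < threshold for g, r in rates.items()}

rates = selection_rates(applicants)
for group, flagged in four_fifths_flags(rates).items():
    status = "possible adverse impact" if flagged else "ok"
    print(f"{group}: rate={rates[group]:.2f} ({status})")
```

A routine audit of this kind, run over an ATS's actual decisions, would have surfaced the pattern in the iTutor Group case long before 200 rejections accumulated.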
One key area of interest in AI discrimination research is the nature and impact of data bias. Theoretical discussions, such as those by Ajunwa [1] and Binns, et al. [2], focus on how historical and systemic biases embedded in training datasets perpetuate discrimination. AI systems are often designed using data that reflects existing social inequalities, leading to biased outcomes when these models are deployed in real-world applications like hiring or healthcare. This concern over biased datasets underscores the critical need for ethical data curation: AI systems must be trained on representative and diverse data to prevent skewed outcomes. Theoretical research here seeks to understand how data choices affect AI decisions and what moral responsibilities fall on those developing these systems.
Another area of interest is algorithmic transparency and accountability, which addresses how AI systems make decisions and how those processes can be traced and evaluated. Both Floridi and Cowls [4] and Westerman [3] emphasize that AI systems need to be transparent, especially as they become integral to business operations and public policy. The difficulty lies in balancing the complexity of AI algorithms with the need for interpretability, ensuring that AI decision-making can be audited for fairness. Theoretical questions arise around how to design algorithms that are not only efficient but also comprehensible and accountable. This field also explores the implications of "black-box" AI models, whose opaque decision-making makes it challenging to detect and mitigate bias, creating a feedback loop in which biased data multiplies within itself without human intervention.

A third theoretical focus is global policy and governance in AI. Given the transnational nature of AI technologies, there is a pressing need for global frameworks that ensure the ethical use of AI across jurisdictions. Floridi and Cowls [4] highlight the importance of international coordination, particularly as AI operates beyond the borders of individual countries. The challenge is developing governance models that harmonize regulatory standards while respecting national legal and ethical differences. A notable example of governance is the AI RMF; if its adoption were mandatory, as compliance processes are under data privacy law, it could be far more effective. Theoretical research in this area explores how nations can collaborate to prevent discrimination and ensure fairness in AI deployment on a global scale, particularly as AI's influence grows in both the public and private sectors.

The literature on AI discrimination, informed by theoretical and empirical research, emphasizes key constructs such as bias in datasets, algorithmic transparency, and ethical governance. A central theme is how AI systems inherit biases from the data used to train them, a construct consistently highlighted in studies like those by Ajunwa [1] and Binns, et al. [2]. These biases arise from historical patterns of discrimination in various domains, which are reflected in the datasets that AI models use. As a result, AI systems often reinforce and perpetuate these inequalities, particularly in critical fields like employment and healthcare. Theoretical discussions about bias focus on how data choice and preprocessing affect outcomes, leading to calls for diverse and representative datasets that minimize the potential for skewed or discriminatory results.

Another important concept is algorithmic transparency and explainability, which relates to how AI systems make decisions. The complexity of machine learning algorithms, especially those that function as "black boxes," makes it difficult for users and regulators to understand how AI arrives at its conclusions. This lack of transparency complicates efforts to audit AI systems for bias and ensure accountability, as discussed by Floridi and Cowls [4] and Westerman [3]. Transparency also links to fairness: without a clear understanding of how decisions are made, it becomes difficult to ensure equitable outcomes. Research in this area pushes for more interpretable AI systems and for policies that require organizations to audit and explain their AI tools' decision-making.

The construct of global governance and policy frameworks is likewise central to discussions of AI discrimination. Given that AI technologies often operate across borders, internationally coordinated policy responses become paramount, as argued by Floridi and Cowls [4]. This concept revolves around creating regulatory frameworks that address AI's global impact while considering regional legal and ethical differences. Such frameworks are essential for managing risks like discrimination and ensuring that AI implementations align with societal values. The research suggests that addressing AI bias and discrimination requires collaboration across industries, governments, and academia to develop policies that balance innovation with ethical considerations.

The articles referenced demonstrate several innovative concepts for managing AI and technology in global organizations, particularly regarding policy implementation, maintenance, and contingency planning for addressing AI discrimination. One key innovation is the emphasis on transparent algorithmic governance. Both Floridi and Cowls [4] and Westerman [3] stress the importance of frameworks that hold AI systems accountable through continuous monitoring and audit trails. This approach aligns with global policy trends in which businesses and governments are expected to maintain transparency in AI decision-making. Such transparency is essential not only to mitigate AI discrimination but also to build public trust and comply with international regulations.

Another innovative concept is the embedding of ethical frameworks into AI development. As highlighted by Binns, et al. [2], there is a growing movement toward integrating ethics into AI systems from the ground up rather than as an afterthought. This proactive approach ensures that considerations of fairness, bias, and discrimination are central to AI development in global organizations, which is crucial for maintaining ethical standards across industries and geographies where AI may operate in diverse cultural and legal environments. Ajunwa [1] similarly advocates regulatory frameworks that enforce fairness and equality in AI usage, particularly in employment, where AI systems risk reinforcing historical biases.

These articles also underscore global policy coordination as a key innovation for managing AI risks. Floridi and Cowls [4] discuss how global organizations must adopt unified policies to prevent discrepancies across regions. The global nature of AI requires an international approach to governance, ensuring that ethical considerations and compliance standards are maintained worldwide. This approach helps businesses navigate different regulatory landscapes while ensuring AI systems do not perpetuate discrimination across borders. By harmonizing AI policy globally, organizations can better manage AI innovations while maintaining contingency plans for unforeseen risks related to bias and discrimination.

To promote transparency in AI decision-making, several frameworks and methods have been developed to make AI outputs more understandable to non-technical stakeholders. One popular method is Local Interpretable Model-agnostic Explanations (LIME), which explains the predictions of machine learning models by approximating them locally with an interpretable surrogate model. This helps non-technical stakeholders understand the reasoning behind specific AI predictions, thereby fostering greater trust [6]. By focusing on the local behavior of the model, LIME provides an easy-to-grasp explanation of how an AI system arrived at a given decision.
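As a concrete illustration, the sketch below trains a classifier on synthetic data and asks LIME for a local explanation of a single prediction. It assumes the open-source lime and scikit-learn packages; the feature names, data, and hiring framing are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer  # pip install lime

rng = np.random.default_rng(0)
feature_names = ["years_experience", "num_certifications", "test_score"]
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # synthetic "advance" labels

model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["reject", "advance"],
    mode="classification",
)

# Fit a local, interpretable surrogate around one applicant's prediction.
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=3)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")  # signed local contribution to "advance"
```

The printed feature weights are exactly the kind of plain-language artifact a non-technical reviewer can inspect: they state which inputs pushed this one decision up or down, without requiring any understanding of the underlying model.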
Another method is the use of Model Cards, proposed by Google AI, which standardize documentation for machine learning models. These cards provide a concise summary of a model's purpose, the data it was trained on, performance results, and intended use cases. Their simplicity and clarity help non-technical users understand a model's risks, limitations, and potential biases [7]. This form of documentation is a valuable tool for fostering transparency, especially in contexts where discrimination or bias is a concern.
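A model card need not be elaborate. The sketch below captures the core fields of Mitchell, et al.'s [7] proposal as a plain data structure; the field names paraphrase the paper's section headings, and the filled-in values describe a hypothetical hiring model.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Lightweight model card, loosely following Mitchell, et al. [7]."""
    model_name: str
    intended_use: str
    training_data: str
    evaluation_data: str
    metrics: dict = field(default_factory=dict)                 # overall scores
    disaggregated_metrics: dict = field(default_factory=dict)  # per-group scores
    limitations: str = ""
    ethical_considerations: str = ""

card = ModelCard(
    model_name="resume-screener-v2",
    intended_use="Rank applications for human review; not for auto-rejection.",
    training_data="2018-2023 hiring records (see accompanying datasheet).",
    evaluation_data="Held-out 2024 applications.",
    metrics={"accuracy": 0.91},
    disaggregated_metrics={"age_under_40": 0.93, "age_40_plus": 0.85},
    limitations="Accuracy gap across age bands; requires human review.",
    ethical_considerations="Audited quarterly for adverse impact.",
)
print(card)
```

Note how the disaggregated metrics field forces per-group performance into the open: the hypothetical accuracy gap across age bands is visible to any reader, technical or not.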
In addition, decision trees, flowcharts, and Human-in-the-Loop (HITL) systems offer further methods for enhancing transparency in AI. Decision trees break down decision-making processes into visual diagrams, helping stakeholders understand the internal workings of rule-based AI systems. HITL frameworks, meanwhile, involve human oversight at key decision points in AI processes, allowing stakeholders to audit and modify AI outputs and ensuring greater accountability [8]. Implementing these frameworks alongside ethical standards such as the AI Risk Management Framework (AI RMF) promotes fairness and keeps AI decisions accessible to non-technical users [9].
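One common HITL pattern routes low-confidence predictions to a human reviewer rather than acting on them automatically. The sketch below is a minimal version of that pattern; the thresholds, labels, and review-queue interface are illustrative assumptions rather than a standard API.

```python
review_queue = []  # stand-in for a real human-review workflow

def hitl_decide(proba_advance, low=0.30, high=0.70):
    """Act automatically only when the model is confident; otherwise defer."""
    if proba_advance >= high:
        return "advance"
    if proba_advance <= low:
        return "reject_pending_review"  # even rejections get a human check
    return "route_to_human"

for applicant_id, proba in [("a1", 0.95), ("a2", 0.55), ("a3", 0.10)]:
    decision = hitl_decide(proba)
    if decision != "advance":
        # A human reviewer audits queued items and may override the model,
        # closing the accountability loop the HITL literature describes.
        review_queue.append((applicant_id, proba, decision))
    print(applicant_id, decision)

print("awaiting human review:", review_queue)
```

The design choice worth noting is that rejections are never fully automated here: in hiring-style settings, the costliest errors are silent exclusions, so those are exactly the decisions deferred to a person.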
The articles collectively advocate a thoughtful approach to research, evaluation, and implementation by emphasizing the importance of staying ahead in a rapidly evolving landscape. They highlight that forward-thinking research involves actively seeking out the latest trends, technologies, and methodologies rather than waiting for them to emerge. This attitude allows organizations and individuals to anticipate changes and adapt more effectively, yielding a competitive advantage. By engaging in continuous research, stakeholders can identify potential opportunities and challenges early, facilitating more informed decision-making and strategic planning.

The articles also stress the value of rigorous evaluation processes in assessing the viability and impact of innovative ideas. Proactive evaluation involves systematically analyzing the potential benefits and risks of an innovation before fully committing resources, ensuring that new initiatives are not only feasible but also aligned with overarching goals and objectives. By implementing a structured evaluation framework, organizations can mitigate risks and optimize the implementation of innovative ideas, enhancing their chances of success. Together, proactive research and evaluation create a dynamic environment where innovative concepts can be effectively assessed, refined, and integrated into existing systems, driving progress and fostering sustainable growth.
The articles addressing AI discrimination provide valuable insights into both the beneficial and detrimental aspects of IT policy and strategy for organizations. On the beneficial side, algorithmic transparency and ethical governance frameworks are widely advocated as critical strategies for mitigating AI bias. Floridi and Cowls [4] emphasize the importance of global governance structures to ensure that AI systems operate fairly and ethically across borders. This global approach helps organizations align their AI operations with international standards, reducing the risk of discriminatory outcomes and fostering AI innovation while maintaining public trust. Likewise, Ajunwa [1] highlights how proactive regulatory frameworks can prevent AI systems from perpetuating bias, especially in high-stakes sectors like employment. Organizations that adopt such frameworks not only comply with evolving regulations but also enhance their reputation for fairness and inclusivity; the AI RMF, for example, often takes center stage in discussions of how to balance AI innovation against these obligations.

The research also reveals detrimental consequences when policies are poorly implemented or insufficiently monitored. Binns, et al. [2] warn that without adequate ethical oversight, AI systems can perpetuate harmful biases, particularly when relying on biased datasets. This underscores the risk of implementing AI without appropriate checks and balances, leading to discrimination in areas like hiring or healthcare. Such imbalances have already surfaced in practice: Wójcik's [10] study of algorithmic discrimination in healthcare found that minority patients were more likely to be labeled with intellectual disabilities because the underlying data did not account for cultural differences, language barriers, access to stable education, and other variables. Westerman [3] identifies gaps in real-time monitoring as another significant issue: organizations may deploy AI systems without ongoing evaluation of their impacts, and these lapses can cause long-term reputational and financial damage if discriminatory outcomes are discovered too late. While IT policy can drive innovation and operational efficiency, its implementation must be backed by rigorous, proactive measures to ensure that AI enhances, rather than hinders, organizational fairness.

The articles approach IT policy for managing technology and innovation from a global perspective, advocating international coordination and shared governance to address AI discrimination. Floridi and Cowls [4] highlight the necessity of globally harmonized policies, particularly as AI systems are deployed across borders and affect sectors from healthcare to public policy. This perspective acknowledges that countries have diverse regulatory frameworks and that a unified approach is essential to holding AI systems to consistent ethical and fairness standards. The authors argue that without such coordination, AI technologies could exacerbate inequalities across regions, reinforcing systemic biases and limiting the global scalability of ethical AI systems. This call for transnational AI governance underscores the importance of creating policies that transcend national boundaries, so that organizations deploying AI globally can maintain compliance and ethical integrity.

The research by Ajunwa [1] and Westerman [3] complements this global approach by discussing the roles of forward-thinking regulation and corporate governance. Ajunwa focuses on the regulatory aspects of AI bias in the U.S. employment sector, but the principles she outlines, such as ethical data collection and anti-discrimination frameworks, apply on a global scale. Westerman [3] goes further, advocating continuous monitoring and auditing of AI systems so that organizations worldwide remain adaptable to both local and international regulatory shifts. Together, these articles argue that managing AI innovation requires a combination of technical oversight, ethical frameworks, and flexible IT policies that adapt to the evolving global landscape. This synthesis of proactive policy implementation and global governance provides a comprehensive framework for managing the risks of AI discrimination while promoting innovation on an international level.
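In practice, the "continuous monitoring and auditing" these authors call for starts with an append-only record of every automated decision. The sketch below logs a timestamp, model version, and a hash of the inputs for each decision so that auditors can later reconstruct and challenge outcomes; the record fields and JSONL file format are illustrative assumptions, not a regulatory standard.

```python
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = "decision_audit.jsonl"

def log_decision(model_version, features, decision, path=AUDIT_LOG):
    """Append an auditable record of one automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the raw inputs so auditors can verify records without
        # storing sensitive applicant data in the log itself.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "decision": decision,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

print(log_decision("screener-v2", {"years_experience": 7}, "advance"))
```

Even a log this simple makes after-the-fact adverse-impact analysis possible, because every decision is tied to the exact model version that produced it.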
Promoting innovation & protecting fairness

Balancing the promotion of innovation with the protection of fairness in AI is a complex yet essential task. On the one hand, AI systems drive innovation by optimizing processes, personalizing solutions, and improving decision-making in fields like healthcare, finance, and education. On the other, unchecked innovation can lead to unintended consequences, such as reinforced societal biases, discrimination, and a lack of transparency. To address this, a regulatory framework must encourage innovation while embedding fairness at the core of AI development.
A key approach is the implementation of ethical guidelines and frameworks that prioritize fairness, such as the Artificial Intelligence Risk Management Framework (AI RMF) [9]. The framework highlights the importance of managing AI risks to prevent harmful biases and ensure fairness without stifling innovation, advocating continuous monitoring and assessment of AI systems so that they evolve in line with ethical considerations while maintaining innovative momentum. Integrating human oversight into AI decision-making, such as through HITL systems, strikes a balance between automated efficiency and human judgment: it allows for accountability and adjustment when bias or fairness issues arise, promoting fairness without hindering AI's innovative capabilities [8]. Through HITL, AI systems can be adapted to avoid biases while still leveraging machine learning's potential. Promoting innovation in AI therefore requires a multi-faceted approach that combines transparency, regulatory frameworks, and human oversight to ensure that technological advances do not compromise fairness.

Policymakers play a crucial role in preventing AI discrimination by establishing comprehensive regulatory frameworks for ethical AI development and deployment. One key recommendation is to enact legislation mandating algorithmic transparency, requiring organizations to disclose how their AI systems operate, including the data used for training and the decision-making processes involved. This transparency can be enforced through regular audits conducted by independent third parties to assess AI systems for potential biases and discriminatory practices. Additionally, establishing an ethical AI framework that incorporates diverse stakeholder input, including affected communities, civil rights organizations, and technology experts, can help ensure that the perspectives of marginalized groups are considered during AI development [11].

Another vital action is to promote the use of fairness metrics in AI systems. Policymakers should mandate that organizations employ standardized metrics to evaluate and mitigate bias in their algorithms. Such metrics can reveal disparities in how different demographic groups are treated by AI systems, allowing developers to adjust their algorithms accordingly; the sketch following this paragraph shows two of the simplest. Furthermore, international collaboration is essential, as AI discrimination is a global issue that transcends borders. Establishing a global consortium for AI ethics could facilitate knowledge sharing, best practices, and standardized regulations that address discrimination in AI. Such a consortium could also work toward a universal framework for ethical AI deployment that prioritizes human rights and equality [2].
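Two of the most widely used group-fairness metrics, the demographic parity difference and the disparate impact ratio, can be computed in a few lines. The sketch below implements them by hand on hypothetical predictions; in practice, libraries such as Fairlearn or AIF360 provide vetted implementations.

```python
def group_rates(y_pred, groups):
    """Positive-prediction rate for each demographic group."""
    rates = {}
    for g in sorted(set(groups)):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    return rates

def demographic_parity_difference(rates):
    """Gap between the most- and least-favored groups (0 means parity)."""
    return max(rates.values()) - min(rates.values())

def disparate_impact_ratio(rates):
    """Ratio of the worst rate to the best; below 0.8 is a common red flag."""
    return min(rates.values()) / max(rates.values())

y_pred = [1, 0, 1, 1, 0, 0, 1, 0]                  # hypothetical model outputs
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]  # hypothetical group labels
rates = group_rates(y_pred, groups)
print(rates)                                   # {'A': 0.75, 'B': 0.25}
print(demographic_parity_difference(rates))    # 0.5
print(disparate_impact_ratio(rates))           # 0.333... -> flag
```

Standardizing on metrics like these gives regulators and developers a shared, auditable vocabulary: a mandated threshold on the disparate impact ratio is testable in a way that a qualitative fairness pledge is not.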
In conclusion, preventing AI discrimination requires a multifaceted approach that combines legislative action, regulatory frameworks, and ethical guidelines for AI development.
By promoting algorithmic transparency and mandating independent audits, policymakers can ensure that AI systems are held accountable for their decision-making processes. Additionally, implementing standardized fairness metrics will help identify and mitigate biases, leading to more equitable outcomes for all individuals. The exploration of AI through the lens of ethical considerations and frameworks like the AI RMF reveals the intricate balance between fostering innovation and ensuring fairness. The insights derived from case studies illustrate the importance of incorporating ethical guidelines at every stage of AI development to mitigate the risk of discrimination and bias. By recognizing the significance of algorithmic transparency and stakeholder engagement, organizations can develop AI systems that not only drive technological advancement but also uphold the values of equity and justice.

The path forward requires a commitment to continuous evaluation and adaptation of AI systems, alongside the integration of human oversight mechanisms. A unified approach to ethical AI development that includes diverse stakeholder input can significantly enhance the fairness and inclusivity of AI technologies. The key takeaway is that proactive and collaborative measures must be taken to foster an AI landscape that prioritizes human rights and equality, ensuring that these powerful tools benefit everyone without perpetuating existing disparities. Ultimately, successful AI deployment hinges on a collaborative approach that values both technological progress and ethical responsibility; by prioritizing these aspects, we can harness AI's potential to create a more inclusive and equitable future.
1. Ajunwa I. Artificial intelligence and the challenges of workplace discrimination. SSRN Electron J. 2020.
2. Binns R, Veale M, Van Kleek M. Integrating ethics in AI development: A qualitative study. BMC Med Ethics. 2022;23:100.
3. Westerman G. How to implement digital transformation successfully. Harv Bus Rev. 2020.
4. Floridi L, Cowls J. The global impact of artificial intelligence on public policy. Sustainability. 2020;12(17):7076.
5. U.S. Equal Employment Opportunity Commission. iTutorGroup to pay $365,000 to settle EEOC discriminatory hiring suit. US EEOC. 2023 Sep 11. Available from: https://www.eeoc.gov/newsroom/itutorgroup-pay-365000-settle-eeoc-discriminatory-hiring-suit
6. Ribeiro MT, Singh S, Guestrin C. "Why should I trust you?" Explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; 2016 Aug 13-17; San Francisco, CA. p. 1135-44. Available from: https://www.kdd.org/kdd2016/papers/files/rfp0573-ribeiroA.pdf
7. Mitchell M, Wu S, Zaldivar A, Barnes P, Vasserman L, Hutchinson B, et al. Model cards for model reporting. In: Proceedings of the Conference on Fairness, Accountability, and Transparency; 2019 Jan 29-Feb 1; Atlanta, GA. p. 220-9. Available from: https://dl.acm.org/doi/10.1145/3287560.3287596
8. Zanzotto FM. Human-in-the-loop artificial intelligence. J Artif Intell Res. 2019;64:243-52.
9. National Institute of Standards and Technology (NIST). Artificial Intelligence Risk Management Framework (AI RMF) 1.0. 2023. Available from: https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1270.pdf
10. Wójcik MA. Algorithmic discrimination in health care: an EU law perspective. Health Hum Rights. 2022;24(1):93-103. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9212826/
11. Raji ID, Buolamwini J. Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial AI products. In: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society; 2019 Jan 27-28; Honolulu, HI. Available from: https://dl.acm.org/doi/10.1145/3306618.3314244
National University, Kuinua Tech LLC, USA
Address Correspondence:
Destiny J Hunter, National University, Kuinua Tech LLC, USA, Email: admin@kuinuatech.org; destiny.morgan@kuinuatechllc.org
How to cite this article:
Hunter DJ. Artificial Intelligence & the Capacity for Discrimination: The Imperative Need for Frameworks, Diverse Teams & Human Accountability. IgMin Res. 2024 Oct 10;2(10):801-806. IgMin ID: igmin250; DOI: 10.61927/igmin250; Available at: igmin.link/p250
Copyright: © 2024 Hunter DJ. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.