
Focused on Science, Technology, Engineering, and Medicine (STEM) | ISSN: 2995-8067


IgMin Research | A multidisciplinary open-access journal dedicated to advancing research and knowledge across the broad fields of Science, Technology, Engineering, and Medicine (STEM).


Engineering Group | Review Article | Article ID: igmin250

Artificial Intelligence & the Capacity for Discrimination: The Imperative Need for Frameworks, Diverse Teams & Human Accountability

Technology and Society

Affiliation

    National University, Kuinua Tech LLC, USA

Abstract

The increasing integration of Artificial Intelligence (AI) across industries has raised concerns that these systems can perpetuate discrimination, particularly in fields such as employment, healthcare, and public policy. Multiple academic and business perspectives on AI discrimination, focusing on the need for global policy coordination and ethical oversight to mitigate biased outcomes, call on technical innovators to create contingencies that better protect humanity's experience with AI's ever-expanding reach. Key factors such as biased datasets, opaque algorithms, and weak global governance of AI systems can each undermine these systems; without adequate data governance and transparency, AI systems can perpetuate discrimination.
AI's capacity to discriminate stems primarily from biased data and the opacity of machine learning models, necessitating proactive research and policy implementation on a global scale. These frameworks must transcend the limitations of their programmers' experiences and perspectives to ensure that AI innovations are ethically sound and that their use in global organizations adheres to principles of fairness and accountability. This synthesis explores how these articles advocate comprehensive, continuous monitoring of AI systems and policies that address both local and international concerns, offering a roadmap for organizations to innovate responsibly while mitigating the risks of AI-driven discrimination.
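The "continuous monitoring" the abstract calls for can be made concrete with a simple group-outcome audit. The sketch below is illustrative only and is not from the article: it computes the gap in selection rates between demographic groups (a demographic-parity check) on hypothetical screening outcomes; the group names, data, and 0.2 threshold are all assumptions for the example.

```python
def selection_rate(decisions):
    """Fraction of positive (e.g. 'advance to interview') decisions."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in selection rate between any two groups."""
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical screening outcomes (1 = advanced, 0 = rejected)
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6 of 8 advance
    "group_b": [0, 1, 0, 0, 1, 0, 0, 0],  # 2 of 8 advance
}

gap = demographic_parity_gap(outcomes)
print(f"demographic parity gap: {gap:.2f}")
# An audit might flag gaps above a chosen threshold; the ratio-based
# "four-fifths rule" used in US employment law is a related test.
if gap > 0.2:
    print("flag: selection rates differ materially across groups")
```

Run periodically over live decisions, a check like this turns the abstract's abstract demand for oversight into a measurable, loggable signal that a deployed model's outcomes are drifting apart across groups.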


