Balancing Act: Exploring the Interplay Between Human Judgment and Artificial Intelligence in Problem-solving, Creativity, and Decision-making
Cognitive Resonance | Received: 15 Feb 2024 · Accepted: 22 Mar 2024 · Published online: 25 Mar 2024
Focused on science, technology, engineering, and medicine (STEM) | ISSN: 2995-8067
This study explores the repercussions of excessive reliance on Artificial Intelligence (AI) on human cognitive processes, specifically problem-solving, creativity, and decision-making. Employing qualitative semi-structured interviews and Interpretative Phenomenological Analysis (IPA), it examines the challenges and risks stemming from an overemphasis on AI. The research illuminates a nuanced landscape: while AI streamlines problem-solving tasks and provides valuable support, there is a crucial need to safeguard human judgment and intuition. In the realm of creativity, divergent viewpoints emerge, underscoring concerns about AI’s potential limitations and advocating for a harmonious interplay between AI-generated suggestions and individual creative thought. Regarding decision-making, participants recognize AI’s utility but underscore the necessity of blending AI insights with critical thinking and consideration of unique circumstances. They caution against complacency, advocating a judicious equilibrium between AI guidance and individual expertise. The study contributes multifaceted insights into the complexities of AI-human interaction, uncovering contrasting perspectives on AI’s impacts across the problem-solving, creativity, and decision-making domains. In doing so, it advances understanding of how AI integration influences cognitive processes and offers practical implications for fostering a balanced approach. Its methodology, combining qualitative interviews with IPA, yields rich data that support a deeper understanding of the subject matter. This research promotes awareness of the risks associated with overreliance on AI and advocates a mindful integration that upholds human agency while leveraging AI capabilities effectively.
In recent years, there has been rapid advancement in Artificial Intelligence (AI), revolutionizing various aspects of our lives. AI has transformed industries, improved efficiency, and provided innovative solutions to complex problems. The term AI now encompasses the broad concept of intelligent machines with operational and social implications, projected to reach a market value of 3 trillion by 2024 [
]. As the abundance of information continues to expand, humans are increasingly relying on AI systems for various aspects of their lives, including research, work, entertainment, and education [ - ]. Since “Just google it” became ingrained in our digital age, individuals instinctively turn to search engines for quick answers to their queries. As AI continues to progress, there is a growing concern about overdependence on AI technologies and their potential impact on human cognition in terms of creativity, problem-solving, and decision-making processes. Carr [
] believes that “we are evolving from being cultivators of personal knowledge to being hunters and gatherers in the electronic data forest... dazzled by the net’s treasures, we are blind to the damage we may be doing to our intellectual lives”. Siemens, et al. [ ] respond to this by stating, “AI is not a future technology. It is already present in our daily lives, often shaping, behind the scenes, the types of information we encounter”. Through AI, “machines have evolved to the point where they can now do what we might think of as complex cognitive work, such as math equations, recognizing language and speech, and writing” [ ]. Beyond that, Jeste, et al. [ ] argue that AI intelligence “does not best represent the technological needs of advancing society because it is ‘wisdom’, rather than intelligence, that is associated with greater well-being, happiness, health, and perhaps even longevity of the individual and society. Thus, the future need in technology is for artificial wisdom (AW)”.

AI-Augmented Minds, commonly known as Augmented Intelligence [
], is a term that describes the complex and beneficial interaction between humans and AI technologies. It refers to how AI can help and improve human cognitive processes by integrating AI intelligence into various aspects of human thinking, problem-solving, decision-making, and creativity to make them more effective and productive [ ]. Hence, Cremer & Kasparov [ ] emphasize the positive potential of AI and counter the pessimistic view that AI will have detrimental effects on society and organizations. They highlight the belief that AI has the capacity to enhance productivity and automate routine cognitive tasks, which can ultimately be beneficial rather than threatening. To achieve that, AW systems will be built upon neurobiological models of human wisdom. These systems should possess the ability to: a) learn from experience and rectify errors, b) demonstrate compassionate, impartial, and ethical behaviors, and c) recognize human emotions and assist users in managing their emotions and making wise choices [ ].

This study proposes the term “Artificial-Intelligence-Minds” or “AI-Minds,” which refers to the phenomenon where individuals excessively rely on AI tools and systems as a source of information, guidance, and decision-making, potentially altering their intellectual characteristics and cognitive processes. This overreliance on AI raises questions about the extent to which it may shape human thinking patterns, influence higher-order thinking skills, and impact the overall cognitive abilities of individuals. By considering the implications of the “Just google it” mentality and exploring the concept of AI-Minds, we can gain insights into the potential risks associated with overreliance on AI and the ways in which AI integration might shape human cognition. In addressing human concerns regarding the power dynamics between humans and intelligent machines, it is important to emphasize that AI should function solely as a service provider to humans [
]. This highlights the significance of adhering to ethical principles and acknowledging the value of rational decision-making in the context of AI-human interactions [ ].

The limited research on the impact of overreliance on AI technologies on human cognition, specifically on creativity, problem-solving, and decision-making, hinders our comprehensive understanding of its consequences and implications. While some studies focus on the benefits of AI augmentation, there is a lack of empirical evidence and theoretical frameworks examining the risks and limitations of excessive reliance on AI. This gap inhibits the development of responsible and ethical AI utilization and calls for investigations into the potential drawbacks of AI integration.
Understanding the impact of overreliance on AI is crucial for informing the design, implementation, and regulation of AI technologies. In recent years, the proliferation of AI technologies has led to an unprecedented reliance on them for various cognitive tasks, ranging from simple problem-solving to complex decision-making processes. However, this increased dependence raises concerns about its impact on human cognition and decision-making abilities.
As humans delegate more cognitive tasks to AI systems, questions arise about the consequences of such reliance on creativity, problem-solving, and decision-making processes. The transition from AI-augmented minds to AI-centric minds, where individuals excessively rely on AI for cognitive tasks, presents a paradigm shift with profound implications. This shift prompts the need for a critical examination of the consequences, risks, and limitations of this dependency on AI, particularly in shaping human thinking patterns and higher-order cognitive skills.
This study aims to fill this research gap by critically examining the impact of overreliance on AI technologies on human cognition, with a specific focus on problem-solving, creativity, and decision-making. By elucidating the ramifications of excessive reliance on AI, the research seeks to inform responsible and ethical AI utilization practices. Importantly, the study does not delve into the technical aspects of AI development or implementation but instead focuses solely on the cognitive processes of individuals and the associated risks posed by overreliance on AI.
Theoretical approach: This study draws upon Technological Determinism (TD), which holds that technology plays a significant role in shaping society and human behavior [
- ]. The adoption of disruptive innovations such as AI offers both opportunities and threats for all stakeholders involved [ ]. Despite the criticisms of TD, it remains prevalent as analysts rely on it to understand the integration of advancing technologies such as AI in diverse social contexts, as well as the reactions and responses we all encounter when faced with novel machines and alternative methods of accomplishing tasks [ , ]. Cremer & Kasparov [ ] acknowledge that the initial stages of implementing and developing new technology can be disruptive but argue that the true value of AI often becomes apparent over time.

Another theoretical approach in the current study is Augmented Cognition Theory (ACT). This theory focuses on designing technologies, including AI, to enhance human cognitive abilities [
]. It emphasizes the idea that technology can be used to augment human cognitive processes, such as perception, attention, memory, and problem-solving [ ]. By leveraging AI capabilities, such as data processing, pattern recognition, and information retrieval, systems can provide support and assistance to individuals, ultimately enhancing their cognitive performance [ ]. Typically, an augmented cognition system is composed of three primary elements: cognitive state sensors, adaptation strategies, and control systems [ ]. Cognitive state sensors are devices that assess the user’s cognitive and emotional states by analyzing behavioral, physiological, and neurophysiological signals [ ]. Adaptation strategies are techniques that adjust the interaction between the user and the system based on the user’s state, such as modifying the presentation of information, offering feedback, or providing assistance [ ]. Control systems are algorithms that coordinate the sensors and adaptation strategies to enhance human performance and optimize the user experience [ ].

In the context of this study, both TD and ACT provide a theoretical lens to examine how the increasing reliance on AI technologies may lead to a shift from AI-Augmented minds, where AI enhances human cognition, to AI-Minds, where humans excessively rely on AI for cognitive processes. Both perspectives help in understanding the potential consequences and implications of such a shift in human problem-solving, creativity, and decision-making.
To begin, it is important to grasp the concept of cognition and how it can be coordinated between humans and AI. According to Korteling, et al. [
], “for the time being, AI systems will have fundamentally different cognitive qualities and abilities than biological systems”. Cognition can be defined as “the sensory processes, general operations, and complex integrated activities involved in interacting with information” [ ]. Siemens, et al. further elaborate that sensory processes encompass vision, perception, and attention, while general operations involve language, memory, recognition, recall, and information seeking and management behaviors. Complex integrated activities include reasoning, judgment, decision making, problem-solving, sensemaking, and creativity [ ]. They also emphasize the cognitive tasks that can be augmented by AI or performed by humans (Figure 1).

One critical feature of human cognition is problem-solving, which is “the process of constructing and applying mental representations of problems to finding solutions to those problems that are encountered in nearly every context” [
]. Problem-solving can be characterized by two fundamental attributes: 1) it involves constructing a mental representation of the given problem situation based on the provided information, and 2) it often relies on retrieving problem schemas or previously stored problem-solving experiences from the solver’s memory [ ]. In contrast, AI excels at processing and analyzing vast amounts of data quickly and accurately, as well as utilizing machine learning algorithms to identify patterns, make predictions, and extract insights from structured and unstructured data [ , ].

Another distinctive feature of human cognition is creativity, which is a complex process of the human mind that is usually associated with problem-solving [
]. The concept of creativity encompasses both cognitive and embodied aspects of human thought and action [ ]. Additionally, it extends beyond being solely an interpersonal skill, as it is influenced by an individual’s cognition, personality, motivation, background, and the specific context in which it is expressed [ ]. Boden [ ] classified creativity into three distinct types, each characterized by unique methods for generating novel ideas. The first type involves the construction of new ideas by combining familiar concepts in unexpected or unconventional ways. The second and third types, known as exploratory and transformational creativity, are closely linked and entail establishing fresh connections between familiar ideas or exploring and transforming existing concepts. On the other hand, AI creativity heavily relies on training data and predefined algorithms. AI can generate outputs that may appear creative, but they are essentially based on patterns and combinations learned from the provided data [ , ]. Furthermore, AI creativity is limited in its ability to generate truly original and unconventional ideas or solutions [ , ]. However, the development of AI has recently highlighted serious limitations in human rationality and shown that computers can be highly creative [ ].

The third important feature of human cognition is decision-making, which is a natural result of problem-solving and creativity. Decision-making is an essential skill that plays a pivotal role in our daily lives, enabling us to adapt to our surroundings and exercise autonomy [
]. It entails the capacity to select among multiple options, and researchers from various disciplines have examined and explored this process through different theoretical perspectives [ ]. AI, on the other hand, usually follows predefined algorithms and rules to make decisions. AI uses mathematical models, statistical techniques, and logical reasoning to evaluate options and select the most optimal solution based on given criteria [ , ]. Some scholars argue that in specific domains, AI has demonstrated its superiority over human decision-making, such as in politics, where advanced strategic thinking and analysis of extensive data are required [ ].

Given the focus on responsible and ethical AI utilization, Siemens, et al. [
] argue that addressing bias, ethics, suitability, and long-term impacts on individuals and society is of utmost importance. It is crucial to recognize that biases present in AI systems can also influence human systems. Therefore, it is essential to shed light on how AI already affects complex knowledge processes in order to mitigate its influence [ ]. At the 41st session of the General Conference of UNESCO, held in Paris from November 9 to 24, 2021, the profound and dynamic impacts of AI on societies, the environment, ecosystems, and human lives were acknowledged. It was recognized that AI’s influence on human thinking, interaction, and decision-making, as well as its effects on education, sciences, culture, and communication, can be both positive and negative [ ]. Therefore, it is crucial to address ethical concerns related to informed consent, data ownership, algorithmic accountability, and the possibility of unintended consequences [ - ]. However, Héder [ ] argues that “the current wave of Artificial Intelligence Ethics Guidelines can be understood as desperate attempts to achieve social control over a technology that appears to be as autonomous as no other”.

Through the theoretical lenses of Technological Determinism (TD), Augmented Cognition Theory (ACT), and the work of Siemens, et al. [
], this study introduces the concept of “AI-Minds” to describe the excessive reliance on AI technologies. The shift from AI-Augmented minds to AI-Minds can be influenced by technological advancements, societal perspectives, and the widespread use of AI in daily life. Excessive reliance on AI technologies can have implications for cognitive processes such as problem-solving, creativity, and decision-making, as well as for social interactions in the broader social context. Additionally, ethical concerns, privacy issues, and other potential risks may arise as a result of this shift (Figure 2).

Several studies have emphasized the consequences of overreliance on AI and its impacts on humans. The gap between humans and cognitive technologies, such as AI, is narrowing, and individuals are increasingly open to incorporating intelligent robots into even the most personal aspects of their lives [
, ]. However, previous research has highlighted a concerning issue in human-AI decision-making teams known as overreliance. Overreliance occurs when individuals continue to trust and agree with AI even when it is incorrect [ ]. Surprisingly, providing explanations for AI predictions does not mitigate overreliance compared to solely presenting predictions [ ]. Some theories suggest that overreliance stems from cognitive biases or misjudged trust, implying that it is an inherent characteristic of human cognition [ ]. Overreliance on AI can hinder the development of individuals’ creativity. When AI offers predetermined answers and dictates the learning process, individuals may have limited opportunities for independent problem-solving and creative exploration [ , , , ].

For example, Chong, et al. [
] examined how positive and negative experiences influence confidence levels and decision-making. The findings revealed that human self-confidence, rather than confidence in AI, plays a crucial role in determining the acceptance or rejection of AI suggestions. Additionally, the research identified a tendency for humans to misattribute blame to themselves, leading to a negative cycle of relying on underperforming AI. The study emphasizes the importance of effectively calibrating human self-confidence for successful AI-assisted decision-making.

The study by Buçinca, et al. [ ] found that cognitive forcing was more effective than simple explainable AI approaches in reducing overreliance. The researchers also examined whether the interventions benefited people with different levels of need for cognition, which measures their motivation for engaging in mental effort. On average, participants with higher levels of need for cognition benefited more from cognitive forcing interventions. This research indicates that human cognitive motivation plays a role in moderating the effectiveness of explainable AI solutions. Similarly, Schemmer, et al. [
] observed that humans sometimes struggle to ignore incorrect AI advice, leading to an overreliance on AI. The desired outcome should be to empower humans to discern the quality of AI advice and make better decisions based on it, rather than blindly relying on it.

Furthermore, Vorobeva, et al. [
] found that the presence of AI has negative consequences for individuals engaged in thinking tasks rather than feeling tasks. This is attributed to the adverse impact on their perceived ability or relative performance. The study suggests that these detrimental effects occur specifically when people compare their own abilities to those of AI. Moreover, the study by Schelble, et al. [ ] found that perceiving a teammate as artificial led to worse performance compared to perceiving them as human. The perceived artificiality did not affect shared mental model similarity, but it did impact participants’ perception of team cognition. Individual performance mediated the effect of perceived teammate artificiality on perceived team cognition.

In another study, Bakpayev, et al. [
] found that consumers have positive attitudes towards human-created and AI-created cognitive-oriented advertising, but AI-created emotion-oriented content receives lower evaluations. Programmatic creative ads work well for rational appeals and utilitarian products, but not for emotional appeals and hedonic products. Human input is necessary for creating emotion-oriented advertisements. Similarly, the study by Jakesch, et al. [ ] shows that people struggle to identify AI-generated text and are often misled by intuitive but flawed heuristics. AI systems can exploit these heuristics to produce text that appears remarkably human. This raises concerns about the impact of AI-generated text on human cognition, emphasizing the need to reorient AI language system development to support human cognition and decision-making.

In order to accomplish the goals of this research, a qualitative methodology is employed. Semi-structured interviews were conducted with a carefully chosen group of five individuals who are avid users of AI. Semi-structured interviews were chosen as the primary data collection method to allow participants the flexibility to express their experiences and viewpoints in a conversational manner [
]. This approach facilitates the collection of rich, detailed narratives, providing nuanced insights into the complex interplay between individuals and AI technologies.

The deliberate engagement with a carefully chosen group of five avid AI users was motivated by the aim to delve deeply into the perspectives of individuals who have extensive experience with AI technologies. This strategic sampling ensures a focused exploration of the phenomenon under investigation, as these participants are likely to offer unique insights based on their substantial interaction with AI in cognitive processes.
The subsequent step in the research design involves the application of Interpretative Phenomenological Analysis (IPA) for rigorous data analysis [
]. IPA was selected for its suitability in uncovering the common essences of human experiences and delving into individual interpretations in depth [ ]. This method aligns with the qualitative nature of the study, aiming to capture the complexities associated with overdependence on AI technologies in cognitive processes.

By utilizing IPA, this research seeks to go beyond surface-level observations, providing a comprehensive exploration of participants’ experiences, perceptions, and interpretations. The methodological choice of IPA enhances the study’s ability to uncover subtle nuances and variations in participants’ responses, contributing to a more profound understanding of the implications of AI overreliance on cognitive processes. The process for conducting IPA, as outlined by Squires [
], includes the following steps:

Articulating the research problem: This study aims to critically examine the concept of “AI-Minds” and explore the implications of the increasing overreliance on AI technologies in cognitive processes, such as creativity, problem-solving, and decision-making.
Recruiting participants: In IPA research, the focus is on obtaining comprehensive and detailed data on participants’ perceptions and interpretations of a specific experience. Therefore, a deliberate choice is made to have a small sample size, consisting of five heavy AI users, to gather in-depth insights.
Collecting data: Semi-structured interviews are conducted [
] to elicit rich, detailed, first-person accounts of participants’ experiences related to the phenomenon under investigation.

Analyzing the data: The thematic analysis method is employed [
], involving several steps. The researcher begins by thoroughly reading and annotating key ideas and thoughts from one case’s transcript. This process is repeated for each case, identifying emergent themes and subthemes. To facilitate cross-case analysis, a table is created to organize the identified themes from each case.

Writing up the findings and discussion: The findings are presented systematically, highlighting each subtheme individually and establishing connections to the existing literature within the same section. The narratives incorporate the perspectives of each participant, ensuring clear links that connect and relate the themes to the overall analysis.
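As an illustrative aside, the cross-case table used in the analysis step can be sketched in a few lines of code. This is only a minimal sketch of the bookkeeping involved; the participant labels and theme names below are hypothetical placeholders, not the study’s actual data:

```python
from collections import defaultdict

# Hypothetical emergent themes noted per case (participant) during
# the annotation pass; in practice these come from the transcripts.
case_themes = {
    "T": ["AI eases problem-solving", "overreliance risk"],
    "S": ["time savings", "overreliance risk"],
    "R": ["AI eases problem-solving"],
}

def build_cross_case_table(case_themes):
    """Map each emergent theme to the cases in which it appears,
    supporting cross-case comparison of themes."""
    table = defaultdict(list)
    for case, themes in case_themes.items():
        for theme in themes:
            table[theme].append(case)
    return dict(table)

print(build_cross_case_table(case_themes))
```

The inversion from per-case theme lists to a theme-by-case table is what lets the analyst see at a glance which themes recur across participants.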
The selection of participants for this study followed a purposive sampling method [
] to ensure the inclusion of individuals with relevant experience and expertise in using AI technologies for cognitive tasks such as problem-solving, creativity, and decision-making. Table 1 summarises participants’ profiles.

The target population consisted of avid users of AI technologies, representing diverse backgrounds, professions, and age groups. The sample size was determined to be five participants, considering the qualitative nature of the study and the objective of obtaining in-depth insights into the phenomenon of overreliance on AI technologies. The criteria for participant selection included the following:
The recruitment process involved a combination of strategies. Firstly, professional networks, online communities, and AI-related forums were explored to identify potential participants. Snowball sampling techniques were also employed [
], where participants were encouraged to suggest other individuals who met the selection criteria.

Ethical considerations were given utmost importance throughout the research process. All participants were provided with informed consent forms detailing the purpose, procedures, and protection of their confidentiality and anonymity. The data collected were treated with strict confidentiality, and measures were taken to ensure that participants’ privacy was maintained during the reporting and dissemination of findings.
In the case of IPA, the research questions are generally open-ended and exploratory in nature, allowing for a detailed and nuanced exploration of the participants’ experiences and perspectives [
]. The following research questions guided the data collection and analysis process:

To what extent do individuals rely on AI technologies in their cognitive processes, such as problem-solving, creativity, and decision-making?

The data collected through semi-structured interviews underwent a rigorous analysis using IPA. Three main themes and relevant sub-themes emerged as follows (Table 2):
These main themes encapsulate the profound impact of AI on problem-solving, creativity, and decision-making processes, along with the challenges and considerations inherent in integrating AI. Additionally, they underscore the ethical considerations and the critical need for achieving a delicate balance between human judgment and AI assistance.
Table 2 serves as a comprehensive overview, providing a structured framework for understanding the multifaceted implications of AI integration across various cognitive domains. It enables researchers and practitioners to grasp the nuances of each theme and sub-theme, facilitating deeper insights into the complex interplay between AI and human cognition.
Impact of AI on problem-solving, creativity, and decision-making
The impact of AI on problem-solving processes: Data analysis suggests that AI has had a positive impact on the problem-solving processes of participants, making tasks easier, saving time and effort, enhancing access to information, and providing support in various aspects of problem-solving:
- T: “As a user of AI, my problem-solving process has significantly evolved. AI enhances my access to information, making my research faster and more accurate. It automates routine tasks, giving me more time to focus on the crux of my problems. It provides valuable insights by analyzing vast data, helping me understand patterns and trends.”
- S: “Using artificial intelligence-based software and tools has greatly helped me in accomplishing many tasks at work more quickly... they have significantly and positively saved my time and effort.”
- R: “Artificial intelligence technologies make it lighter for me to solve problems, as there is now always a solution to any possible problem I could face.”
- J: “It made it easier for me to solve complex problems.”
However, there is also a recognition of the need to maintain human judgment and intuition, as well as a potential caution regarding overreliance on AI during problem-solving processes:
- A: “It made the process much easier. I don’t have to think hard.”
- T: “I understand that AI complements, not replaces, my judgment and intuition in problem-solving.”
- S: “I believe these tools and technologies have significantly and positively saved my time and effort, but on the other hand, they have reduced the role of humans in problem-solving processes.”
- R: “Artificial intelligence technologies make it lighter for me to solve problems, as there is now always a solution to any possible problem I could face.”
- J: “It made me sometimes somewhat rely on it completely.”
Participants’ perceptions of AI’s impact on creativity vary. Older participants see AI as an enhancer that expands their creative capabilities, provides fresh ideas, and facilitates learning:
- A: “When I ask AI, it gives me enough answers and ideas to start with.”
- T: “AI has been instrumental in expanding my creative capabilities... AI exposes me to diverse ideas”
- S: “I believe that artificial intelligence… play a significant role in enhancing creativity, elevating thinking abilities, and facilitating learning from the solutions it provides… it expands my horizons and generates new ideas for me.”
However, most participants expressed concerns about overreliance on AI’s suggestions and its potential role in limiting creativity:
- T: “There have been instances where AI seems to curb my creativity... Sometimes, I catch myself relying heavily on AI suggestions, which may discourage my original thinking.”
- R: “It may have restricted some of my creative ideas, but I don’t want to blame it all on AI technologies.”
- J: “Yes, sometimes AI made me less creative because I simply just don’t think when using it.”
Data analysis suggests that participants perceived AI as a valuable tool for informing decisions. Many of them provided illustrative examples:
- A: “In my work, if I am stuck with something or run out of ideas, I ask AI.”
- T: “AI has significantly influenced my decision-making process in numerous ways. One specific example, I used a robot financial advisor… It helped me understand market trends, risks, and projected returns, which influenced my decision on where to invest my savings.”
- S: “I have used an artificial intelligence tool that helps me test and statistically analyze data, providing me with decisions and theoretical analysis based on that data… I have used chatbot systems, to answer many questions that I relied on for decision-making. I have also utilized various data analysis tools to generate supportive visualizations for decision-making.”
- J: “AI can sometimes share really interesting and unthinkable ideas I take inspiration from.”
On the other hand, most of them emphasize the importance of combining AI recommendations with their own judgment, critical thinking, and consideration of unique needs and values. There is also a tendency to rely on AI for decision-making in certain cases, such as a lack of preference or knowledge:
- A: “If I don’t have a preference or I am not sure, I usually go with AI recommendations.”
- T: “I don’t solely rely on AI recommendations… Hence, I find that AI can provide valuable insights and data-driven predictions that I may not be able to generate on my own. For instance, when shopping online, I appreciate AI-generated recommendations based on my previous purchases and browsing history.”
- S: “I do not solely rely on artificial intelligence recommendations… but I benefit from them to guide me towards making what I consider to be appropriate decisions. However, there may be certain factors or insights that I perceive but the tool may not, due to the data it has been trained on.”
- R: “I may be influenced by it at times, but I do not make my final decisions rely on AI recommendations.”
- J: “I generally use it as a tool to help me make more informed decisions.”
Older participants discussed how AI has improved their capacity to generate innovative solutions, noting that it has facilitated the discovery of multiple solutions, helped discern patterns and trends, and provided new perspectives. AI’s ability to analyze data quickly and offer insights is seen as a valuable asset in the pursuit of innovation:
- A: “I believe it helps here. It can provide many solutions.”
- T: “AI has been instrumental in improving my ability to find innovative solutions. For example, AI-powered data analysis tools have helped me discern patterns and trends in data that weren’t immediately apparent.”
- S: “Using artificial intelligence-based software and tools has improved my ability to find innovative solutions by facilitating the discovery of innovative solutions.”
However, there is a cautionary note regarding overreliance on AI-generated solutions. Participants recognize the risk of becoming complacent or limited in their thinking if they rely solely on AI. They emphasize the importance of balancing AI insights with their own intuition, expertise, and critical thinking. They value their own unique perspectives and consider AI suggestions as a starting point:
- A: “I personally weighed every solution AI provided me with. Then I can choose.”
- T: “It’s easy to fall into the trap of relying solely on AI-generated solutions. There’s a risk of becoming complacent and not pushing myself to think beyond what the AI suggests, which could stifle truly innovative thinking… Balancing my own intuition and expertise with the insights provided by AI is a constant process… I recognize that AI doesn’t have the full context of human experiences, emotions, and values that often play a role in decision-making.”
- S: “AI is more comprehensive compared to the data and expertise I have. However, I also acknowledge that there may be limitations in generating certain answers because there might be other data that these models haven’t been trained on. Ultimately, I rely on my own experience to make the final decision.”
- J: “I don’t totally rely on the answers AI gives me, I’d rather use my own answers and take the answers that the AI gives me with a grain of salt… It can sometimes suggest basic answers that don’t help me at all or suggest answers I’ve already thought of, that I deem not good enough.”
Challenges and navigation in integrating AI: The data suggest that integrating AI into problem-solving, creativity, or decision-making processes can present challenges and obstacles. These challenges include receiving illogical responses, AI not understanding ideas or questions, overreliance on AI, and dealing with the limitations of AI:
- A: “Sometimes AI gives me illogical responses... Sometimes it did not grasp my idea or my question.”
- T: “Yes, I have faced a few challenges when integrating AI into my problem-solving, creative, and decision-making processes. One of the main challenges has been the risk of overreliance on AI… Another challenge is dealing with the limitations of AI. AI is as good as the data it’s trained on, and it can sometimes miss nuances or make errors.”
- S: “One challenge is the time constraint in learning how to effectively utilize artificial intelligence-based tools. These tools are often described as ‘easy to use but hard to master.’”
- J: “Yes, it can sometimes be hard to navigate AI.”
On the other hand, navigating these challenges involves strategies such as questioning AI-generated solutions, staying updated on AI tools, continuously learning, and striking a balance between AI and human intuition:
- T: “I’ve navigated these challenges through a combination of self-awareness, critical thinking, and continual learning. To prevent overreliance on AI, I remind myself regularly to question AI-generated solutions... I try to utilize AI as a supplement to, rather than a replacement for, my own creativity and problem-solving skills… As for the limitations of AI, I make a point of staying updated on the strengths and weaknesses of the AI tools I use. I try to understand the underlying principles and biases that might influence their outputs. This helps me balance them with my own judgment and expertise.”
- S: “It requires a significant amount of learning, knowledge, and reading about the mechanics and usage of these tools.”
- J: “I took advice from people wiser than me and then formed my own solution.”
AI has the potential to help individuals explore alternative perspectives and generate new ideas. AI exposes them to diverse content and perspectives, which can broaden their thinking:
- A: “AI helps me explore alternative perspectives and generate new ideas.”
- T: “I do find that AI helps me explore alternative perspectives and generate new ideas. One clear example is AI algorithms in social media platforms and news aggregators… This diversity of content often sparks new ideas and gives me different perspectives.”
- S: “Yes, artificial intelligence has helped me in understanding and exploring different perspectives on specific problems or datasets. It has aided me in generating ideas and deriving solutions.”
- R: “Yes, as there are now many platforms and programs that help to clarify the available perspectives and be inspired to generate new ideas.”
- J: “Definitely! It helps me explore alternative perspectives and generate new ideas.”
However, there is also a risk that AI algorithms, if not properly managed, may reinforce existing patterns of thinking and limit exposure to diverse perspectives. It is important to strike a balance and ensure that AI is utilized in a way that encourages exploration and the generation of new ideas:
- T: “While AI has the capacity to expose me to new ideas, there is also a risk that it might primarily reinforce existing patterns of thinking. Many AI algorithms are designed to personalize content based on past behaviors and preferences. If not properly managed, this can result in an ‘echo chamber effect’ where I’m mostly exposed to views and ideas that align with my existing beliefs, potentially limiting my exposure to diverse perspectives.”
- S: “I find that when properly utilized, AI tools not only enhance existing thinking patterns but also add value by introducing new perspectives and approaches.”
Assessing the reliability and accuracy of AI-generated solutions involves considering factors such as the source and reputation of the AI tool, understanding limitations, verifying suggestions through research or human expertise, and seeking external validation:
- A: “If I have some knowledge about the problem, usually I analyze the AI responses. But if I don’t, I usually go with what AI has suggested. If the problem is too important for me, I do some research to confirm the accuracy of AI suggestions.”
- T: “Assessing the reliability and accuracy of AI-generated solutions or suggestions requires a mix of technical understanding and critical thinking. Firstly, I consider the source and reputation of the AI tool… I also keep in mind the inherent limitations of AI. I understand that AI operates based on the data it’s been trained on and may not account for unique or exceptional circumstances… If an AI-generated solution seems off or contrary to my own knowledge or intuition, I take the time to verify it using other sources or seek human expertise.”
- S: “Reliability is a challenging criterion to assess, but generally, the more algorithms and models learn from data during experimental usage periods, the more reliable they become.”
- R: “I feel like our minds have been programmed and subconsciously trained to have high confidence in AI solutions without questioning.”
- J: “It depends on how much information they give me!”
Trust in AI can be influenced by factors such as the track record of the AI tool, transparency, complexity of the decision, data quality, and alignment with expert opinions. It is important to balance the insights provided by AI with one’s own expertise and judgment, as well as to continuously engage with AI tools and provide feedback to improve their accuracy:
- A: “I know that AI is based on a huge amount of data throughout human history. Its expertise is incomparable to humans.”
- T: “Several factors influence my trust in AI when making important decisions. The first is the track record of the AI tool… Secondly, the transparency of the AI system plays a significant role… The complexity of the decision at hand also influences my trust as for routine or data-driven decisions, I’m more comfortable relying on AI… Lastly, external validation or verification can increase my trust in AI… I always ensure to balance the insights provided by AI with my own expertise and judgment.”
- S: “The data I use can positively or negatively impact my confidence in AI decision support systems. If my data is of high quality, the decisions derived from AI systems are usually effective, and I can trust them and vice versa.”
- J: “The realism of the answers they give me.”
The ‘black box effect’ is a recognized phenomenon, where the lack of transparency in AI’s reasoning or processes can impact confidence in using AI for problem-solving or decision-making. While some individuals may still rely on AI’s answers if they are logical, others highlight the importance of understanding the underlying processes and seeking additional verification or using transparent AI systems:
- A: “Sometimes I don’t understand the way It works.”
- T: “Yes! One example was when I was utilizing an AI-powered recommendation system to make movie recommendations. The system gave me recommendations, but I didn’t understand why it was recommending specific films!”
- S: “Yes, the internal processes occurring during the learning or training phase of artificial intelligence models cannot be understood by the human mind.”
- R: “I was having a conversation with a robot on an application I use very often, and I was shocked by the amount of information it knew about me, my life, and the people close to me, it was also suggesting solutions to problems that had taken place between us!”
The impact of the ‘black box effect’ can vary, leading to caution, curiosity, questioning, or a combination of these responses:
- A: “I don’t have to trust it entirely. If it provides a logical answer to me, I go with it.”
- T: “If I don’t understand how an AI reached a certain conclusion, I may be cautious to depend on its results... Transparency is essential for building trust… I usually augment the AI’s findings with extra study or seek a second opinion… I prioritize the use of AI tools with a track record of dependability and accuracy, even if their inner workings are not totally apparent.”
- S: “Considering the significant advancements in artificial intelligence techniques and tools, along with their training on massive datasets, I still have confidence in many of the solutions and answers provided by these tools, regardless of some perplexing responses.”
- R: “It makes me think and question more things about AI and how it’s made and what is behind it.”
- J: “I generally use AI to help me solve complex math problems and it does the job pretty well and gives me accurate answers.”
Ethical considerations and navigating biases in AI usage: Interviewees stressed the importance of ethical considerations when using AI, involving aspects such as respect for intellectual property, fairness, prejudice, data security, privacy, honesty, responsibility, and avoiding biases or discriminatory conclusions:
- A: “I care about copyrights and others’ intellectual properties.”
- T: “First, I consider the AI system’s fairness and prejudice… Second, I explore the question of data security and privacy… Finally, I consider honesty and responsibility.”
- S: “It is important for me to maintain my privacy by not providing sensitive or confidential data as inputs to these tools… I also want to know if these tools store any data about me, my device, or my phone for later use in advertising, content personalization, or other purposes.”
- R: “I am against racism and prejudice between people.”
- J: “I refuse to use biased sources and racist answers.”
Accordingly, they usually navigate potential biases or ethical dilemmas through approaches such as diverse data training, critical thinking, continuous learning, engagement with experts or standards, privacy protection, adherence to ethical norms and best practices, and expressing their opinions against racism and prejudice. The emphasis is on responsible and ethical use of AI in problem-solving or decision-making scenarios:
- A: “I do my best not to infringe others’ rights.”
- T: “When using AI, navigating possible biases or ethical quandaries involves a combination of critical thinking, continuous learning, and contact with experts or standards. To reduce any biases, I try to employ AI systems that have been trained on varied and representative data. If the AI’s data or the way it processes that data might result in unfair or discriminatory conclusions, I look for alternate tools or solutions. When faced with an ethical quandary, such as determining how much personal data to share with an AI, I assess the possible advantages against the potential hazards. I also refer to ethical norms and best... Overall, the concepts of justice, openness, and privacy protection lead my approach to ethical issues while employing.”
- S: “Depending on the nature and usage of the tool, sometimes there may be bias towards a specific race, religion, or culture. I avoid using these tools in problem-solving or decision-making related to such issues. I try to focus the use of these tools on solving problems and making decisions in academic and scientific research, data-related issues, student-related matters, educational processes, and others.”
- R: “I would usually try to express my opinion if there is a place for that.”
- J: “I try hard to avoid them.”
The data suggest that all interviewees recognize the importance of achieving a balance between human judgment and AI assistance in problem-solving, creativity, and decision-making processes. AI is seen as a valuable tool that can provide insights and efficiency, but it is not considered a replacement for human judgment. Human intuition, critical thinking, creativity, and ethical considerations are viewed as irreplaceable and essential in complex or nuanced situations. The individuals strive to leverage the strengths of AI while maintaining their own expertise and insights. Caution against overreliance on AI and the need for a balanced approach are emphasized:
- A: “It is important to achieve balance. AI here is to help not to dictate. Possibly in the future, it will do! Have you seen the movies where robots dictate the earth?”
- T: “In my problem-solving, creative, and decision-making processes, I would define the balance between human judgment and AI aid as a synergistic collaboration… I do not consider AI to be a replacement for human judgment, but rather a tool that supplements it. Human intuition, knowledge, and context awareness are irreplaceable and critical, especially when dealing with difficult or nuanced situations, or when empathy and ethical considerations are involved. As a result, while I frequently use AI, I always include my own critical thinking and creativity into the process.”
- S: “Balance remains a requirement in everything, and it is a non-negotiable demand when using artificial intelligence tools. Initially, a person may be amazed by the level of advancement and progress these tools have achieved. However, upon further reading and exploration, they will find that these tools are limited by the type and nature of the data they were trained on. Human judgment remains crucial in assessing the decisions made by artificial intelligence. It is important not to rely solely on artificial intelligence decisions, as they are susceptible to errors, even if at a very small percentage.”
- R: “Currently, our minds have become very dependent on technology and artificial intelligence. We think that we are the ones who create and remember, while artificial intelligence technologies are the ones who do all this for us.”
- J: “I try not to use AI relentlessly to the point where I can’t form my own way to solve problems and navigate creative situations, but I’d try to balance my use for AI and my own brain.”
This study aimed to understand the impact of overreliance on AI technologies on human cognition, specifically in problem-solving, creativity, and decision-making, through qualitative research using semi-structured interviews. The IPA analysis revealed three main themes and their respective sub-themes.
The first theme explored the impact of AI on problem-solving, creativity, and decision-making processes. Participants acknowledged that AI has had a positive influence on problem-solving by making tasks easier, saving time, and providing support. However, they also recognized the importance of maintaining human judgment and intuition, sounding a cautionary note about overreliance on AI during problem-solving. The role of AI in enhancing or hindering creativity was perceived differently among participants. Older individuals saw AI as an enhancer that expanded their creative capabilities, provided fresh ideas, and facilitated learning, likely owing to their professional experience. However, many participants expressed concerns about the potential limitations of overreliance on AI suggestions and its potential role in limiting creativity. In terms of decision-making, participants perceived AI as a valuable tool that informed decisions. They emphasized the importance of combining AI recommendations with their own judgment, critical thinking, and consideration of unique needs and values. However, there was also a tendency to rely on AI for decision-making in cases where preferences or knowledge were lacking. AI was seen as having a positive impact on generating innovative solutions by facilitating the discovery of multiple options, discerning patterns and trends, and providing new perspectives. Nevertheless, participants cautioned against becoming complacent or limited in their thinking through sole reliance on AI-generated solutions. They stressed the need to balance AI insights with their own intuition, expertise, and critical thinking, valuing their own unique perspectives while considering AI suggestions as a starting point.
The second theme delved into the challenges and considerations involved in integrating AI. Participants highlighted various challenges, including receiving illogical responses, AI not understanding ideas or questions, overreliance on AI, and dealing with the limitations of AI. To navigate these challenges, strategies such as questioning AI-generated solutions, staying updated on AI tools, continuous learning, and striking a balance between AI and human intuition were employed. AI was recognized for its potential to influence perspectives and idea generation. It exposed individuals to diverse content and perspectives, thereby broadening their thinking. However, there was also a risk that AI algorithms, if not properly managed, may reinforce existing patterns of thinking and limit exposure to diverse perspectives. Achieving a balance and utilizing AI in a way that encourages exploration and the generation of new ideas was deemed important. Participants also discussed the evaluation of reliability and trust in AI-generated solutions. Factors such as the source and reputation of the AI tool, understanding limitations, verifying suggestions through research or human expertise, and seeking external validation were taken into consideration. Trust in AI was influenced by the track record of the AI tool, transparency, complexity of the decision, data quality, and alignment with expert opinions. The importance of balancing AI insights with one’s own expertise and judgment, as well as actively engaging with AI tools and providing feedback for improvement, was emphasized. The “black box effect,” referring to the lack of transparency in AI’s reasoning or processes, was recognized as having an impact on confidence in using AI for problem-solving or decision-making. 
While some individuals may still rely on AI’s answers if they are logical, others highlighted the importance of understanding the underlying processes and seeking additional verification or using transparent AI systems. The impact of the “black box effect” varied, leading to caution, curiosity, questioning, or a combination of these responses.
The third theme centered around ethical considerations and the balance between human judgment and AI assistance. Interviewees emphasized the importance of ethical considerations when using AI, encompassing aspects such as respect for intellectual property, fairness, prejudice, data security, privacy, honesty, responsibility, and avoiding biases or discriminatory conclusions. To address potential biases or ethical dilemmas, approaches such as diverse data training, critical thinking, continuous learning, engagement with experts or standards, privacy protection, adherence to ethical norms and best practices, and expressing opinions against racism and prejudice were employed. Maintaining a balance between human judgment and AI assistance was recognized as crucial in problem-solving, creativity, and decision-making processes. AI was seen as a valuable tool that provides insights and efficiency but is not a replacement for human judgment. Human intuition, critical thinking, creativity, and ethical considerations were viewed as irreplaceable and essential in complex or nuanced situations. Participants aimed to leverage the strengths of AI while preserving their own expertise and insights. The caution against overreliance on AI and the need for a balanced approach were emphasized throughout the findings.
To reflect on these findings, the current study adopts two theoretical approaches, Technological Determinism (TD) and Augmented Cognition Theory (ACT), to understand the role of AI in shaping society and human behavior [ , ]. The findings of the current study discuss how the increasing reliance on AI technologies may lead to a shift from AI-Augmented minds, where AI enhances human cognition, to AI-Minds, where humans excessively rely on AI for cognitive processes [ , ]. The findings also highlight that human cognition involves problem-solving, creativity, and decision-making, while AI excels at data processing and pattern recognition [ - ]. Additionally, the findings acknowledge the ethical concerns and potential risks associated with overreliance on AI [ - ]. The concept of ‘AI-Minds’ is introduced to describe the excessive reliance on AI technologies, which can have implications for cognitive processes [ ]. Overreliance on AI can hinder creativity and independent problem-solving [ , , , ]. Previous studies highlight the consequences of overreliance on AI and the need to calibrate human self-confidence for successful AI-assisted decision-making [ , , ]. It has been found that cognitive forcing interventions and effective discernment of AI advice can mitigate overreliance [ , ]. The presence of AI can negatively impact individuals’ perceived ability and team performance in thinking tasks [ , ]. Furthermore, AI-generated content in advertising and text can have varying effects on consumer attitudes and human cognition [ , ]. These studies collectively emphasize the importance of responsible and ethical AI utilization, addressing biases, and considering the impact of AI on individuals and society [ , ].
All in all, the study highlights the positive impact of AI on problem-solving, creativity, and decision-making, while cautioning against overreliance. Participants stressed the need for a balanced approach, combining AI with human judgment. Challenges in AI integration were identified, along with strategies to navigate them. Ethical considerations, including fairness and privacy, were emphasized, with a call for responsible AI utilization. The concept of ‘AI-Minds’ was introduced to describe excessive reliance on AI. Previous research underscores the importance of calibrating human self-confidence and discerning AI advice effectively. Addressing biases and considering AI’s impact on individuals and society is crucial for ethical AI utilization.
For implications, the study sheds light on the potential pitfalls of leaning too heavily on AI. It’s a call for individuals to be aware of these risks and to tread carefully in their reliance on AI technologies.
Participants voiced concerns about relying too much on AI, emphasizing the importance of holding onto human judgment and intuition. This suggests a need for a balanced approach, where AI insights complement individual cognitive abilities and expertise.
While some participants recognized AI as a creativity booster, there were worries about its potential to stifle creativity. This underlines the importance of striking a balance between AI suggestions and our innate creative thinking to nurture our imaginative capabilities.
Participants further valued AI as a decision-making tool but emphasized the need to incorporate human judgment, critical thinking, and an understanding of unique needs and values. This points to the necessity for a comprehensive decision-making process that marries AI recommendations with individual input.
The findings stress the importance of actively engaging with AI technologies and steering clear of complacency or narrow thinking reliant solely on AI-generated solutions. This underscores the need to continuously exercise our cognitive abilities, expertise, and critical thinking alongside the assistance of AI.
Some limitations are associated with the current study. First, participants’ responses during interviews may have been influenced by social desirability bias or their own interpretation of the research topic. They may have provided responses they perceived as more socially acceptable or desirable, leading to potential inaccuracies or limitations in the data collected. Second, the study relied solely on qualitative data obtained through interviews. While qualitative data offer in-depth insights and rich descriptions, they lack statistical rigor and quantifiable measures, which limits the ability to establish statistical relationships or draw precise conclusions. Finally, the study focused specifically on the impact of overreliance on AI on problem-solving, creativity, and decision-making processes. It did not explore other potential implications of overreliance on AI, such as its effects on job displacement, social interactions, or ethical considerations beyond the realm of cognition. This limited scope restricts a comprehensive understanding of the broader implications of overreliance on AI. Hence, it is important to consider these limitations when interpreting the study’s findings and to recognize the need for further research that addresses them and builds a more comprehensive understanding of the topic.
The study’s findings underscore the risks and challenges that come with relying too heavily on AI. While participants recognized AI’s positive impact on problem-solving, creativity, and decision-making, they also voiced concerns about leaning too much on AI and its limitations.
When it comes to problem-solving, participants appreciated how AI made tasks easier, saved time, and offered support. But they also stressed the importance of not letting AI overshadow human judgment and intuition in solving problems.
In terms of creativity, participants viewed AI as a tool that boosted their creative abilities and aided in learning. Yet, they worried about AI potentially stifling creativity and emphasized the need to balance AI suggestions with their creative thinking.
Regarding decision-making, participants valued AI’s role in providing insights but emphasized the necessity of pairing AI recommendations with their critical thinking and consideration of individual needs and values. While some leaned on AI when lacking preferences or knowledge, they recognized the importance of human input for well-rounded decision-making.
Participants cautioned against blindly relying on AI-generated solutions, urging for a balance between AI insights and their expertise and intuition. Despite AI’s capacity to spark innovative solutions and offer fresh perspectives, participants stressed the need to maintain a balance between AI and human cognitive abilities.
These findings highlight the potential downsides of excessive reliance on AI, including limitations on creativity and the importance of human judgment in decision-making. The study underscores the importance of individuals being aware of their reliance on AI and actively engaging their cognitive skills alongside AI technologies.
Data availability: The data that support the findings of this study are available upon reasonable request from the corresponding author.
Andreu-Perez J, Deligianni F, Ravi D, Yang GZ. Artificial Intelligence and Robotics. arXiv preprint arXiv:1803.10813. 2018; 1-44. https://doi.org/10.48550/arXiv.1803.10813
Al-Zahrani AM. The impact of generative AI tools on researchers and research: Implications for academia in higher education. Innovations in Education and Teaching International. 2023; 1-15. https://doi.org/10.1080/14703297.2023.2271445
Al-Zahrani AM. From Traditionalism to Algorithms: Embracing Artificial Intelligence for Effective University Teaching and Learning. Educational Technology at IgMin. 2024; 2(2): 102-112. https://doi.org/10.61927/igmin151
Dong Y, Hou J, Zhang N, Zhang M. Research on How Human Intelligence, Consciousness, and Cognitive Computing Affect the Development of Artificial Intelligence. 2020; 1680845: 1-10.
Carr N. The Shallows: What the Internet Is Doing to Our Brains. New York, London: W.W. Norton & Company. 2010.
Siemens G, Marmolejo-Ramos F, Gabriel F, Medeiros K, Marrone R, Joksimovic S, De Laat M. Human and Artificial Cognition. Computers and Education: Artificial Intelligence. 2022; 3: 100107. https://doi.org/10.1016/j.caeai.2022.100107
Cremer DD, Kasparov G. AI Should Augment Human Intelligence, Not Replace It. Harvard Business Review. 2021. https://hbr.org/2021/03/ai-should-augment-human-intelligence-not-replace-it
Jeste DV, Graham SA, Nguyen TT, Depp CA, Lee EE, Kim HC. Beyond artificial intelligence: exploring artificial wisdom. Int Psychogeriatr. 2020 Aug;32(8):993-1001. doi: 10.1017/S1041610220000927. Epub 2020 Jun 25. PMID: 32583762; PMCID: PMC7942180.
Sadiku MNO, Musa SM. Augmented Intelligence. In M. N. O. Sadiku & S. M. Musa (Eds.), A Primer on Multiple Intelligences. 2021; 191-199. Springer International Publishing. https://doi.org/10.1007/978-3-030-77584-1_15
Drew R. Technological Determinism. In A Companion to Popular Culture. 2016; 165-183. https://doi.org/10.1002/9781118883341.ch10
Hallström J. Embodying the Past, Designing the Future: Technological Determinism Reconsidered in Technology Education. International Journal of Technology and Design Education. 2022; 32(1): 17-31. https://doi.org/10.1007/s10798-020-09600-2
Moore PT, Pham HV. Informatics and the Challenge of Determinism. Sci. 2020; 1-32. https://doi.org/10.20944/preprints202007.0530.v1
Héder M. AI and the Resurrection of Technological Determinism. Információs Társadalom. 2021; 21(2): 119-130. https://doi.org/10.22503/inftars.xxi.2021.2.8
Wyatt S. Technological Determinism is Dead; Long Live Technological Determinism. The Handbook of Science and Technology Studies. 2008; 3: 165-180.
Stanney K, Winslow B, Hale K, Schmorrow D. Augmented Cognition. In APA Handbook of Human Systems Integration. 2015; 329-343. American Psychological Association. https://doi.org/10.1037/14528-021
Stanney KM, Schmorrow DD, Johnston M, Fuchs S, Jones D, Hale KS, Young P. Augmented Cognition: An Overview. Reviews of Human Factors and Ergonomics. 2009; 5(1): 195-224. https://doi.org/10.1518/155723409x448062
Korteling JE, Van De Boer-Visschedijk GC, Blankendaal RAM, Boonekamp RC, Eikelboom AR. Human- versus Artificial Intelligence [Conceptual Analysis]. Frontiers in Artificial Intelligence. 2021; 4. https://doi.org/10.3389/frai.2021.622364
Jonassen DH, Hung W. Problem Solving. In N. M. Seel (Ed.), Encyclopedia of the Sciences of Learning. 2012; 2680-2683. Springer US. https://doi.org/10.1007/978-1-4419-1428-6_208
Suh B. When Should You Use AI to Solve Problems? Harvard Business Review. 2021. https://hbr.org/2021/02/when-should-you-use-ai-to-solve-problems
Creely E, Henriksen D, Henderson M. Artificial Intelligence, Creativity, and Education: Critical Questions for Researchers And Educators. Society for Information Technology & Teacher Education International Conference. 2023. New Orleans, LA, United States. https://www.learntechlib.org/p/221998
Sternberg RJ, Lubart TI, Kaufman JC, Pretz JE. Creativity. In K. J. Holyoak & R. G. Morrison (Eds.), The Cambridge handbook of thinking and reasoning. 2005; 351-369. New York: Cambridge University Press.
Boden MA. Creativity and Artificial Intelligence. Artificial Intelligence. 1998; 103(1-2): 347-356.
Anantrasirichai N, Bull D. Artificial Intelligence in the Creative Industries: A Review. Artificial Intelligence Review. 2022; 55(1): 589-656. https://doi.org/10.1007/s10462-021-10039-7
Gobet F, Sala G. How Artificial Intelligence Can Help Us Understand Human Creativity. Front Psychol. 2019 Jun 19;10:1401. doi: 10.3389/fpsyg.2019.01401. PMID: 31275212; PMCID: PMC6594218.
Morelli M, Casagrande M, Forte G. Decision Making: a Theoretical Review. Integr Psychol Behav Sci. 2022 Sep;56(3):609-629. doi: 10.1007/s12124-021-09669-x. Epub 2021 Nov 15. PMID: 34780011.
Chong L, Zhang G, Goucher-Lambert K, Kotovsky K, Cagan J. Human Confidence in Artificial Intelligence and in Themselves: The Evolution and Impact of Confidence on Adoption of AI Advice. Computers in Human Behavior. 2022; 127: 107018. https://doi.org/10.1016/j.chb.2021.107018
Colson E. What AI-Driven Decision Making Looks Like. Harvard Business Review. 2019. https://hbr.org/2019/07/what-ai-driven-decision-making-looks-like
Meissner P, Keding C. The Human Factor in AI-Based Decision-Making. MIT Sloan Management Review. Massachusetts Institute of Technology. 2021. https://sloanreview.mit.edu/article/the-human-factor-in-ai-based-decision-making/
Sætra HS. A shallow defence of a technocracy of artificial intelligence: Examining the political harms of algorithmic governance in the domain of government. Technol Soc. 2020 Aug;62:101283. doi: 10.1016/j.techsoc.2020.101283. Epub 2020 Jun 8. PMID: 32536737; PMCID: PMC7278651.
The Different Ways AI Makes Decisions Compared to Humans. Surfactants. 2022. https://www.surfactants.net/the-different-ways-ai-makes-decisions-compared-to-humans/
Recommendation on the Ethics of Artificial Intelligence. UNESCO. 2022. https://unesdoc.unesco.org/ark:/48223/pf0000381137
Farisco M, Evers K, Salles A. Towards Establishing Criteria for the Ethical Analysis of Artificial Intelligence. Sci Eng Ethics. 2020 Oct;26(5):2413-2425. doi: 10.1007/s11948-020-00238-w. PMID: 32638285; PMCID: PMC7550314.
Kerr A, Barry M, Kelleher JC. Expectations of Artificial Intelligence and the Performativity of Ethics: Implications for communication governance. Big Data & Society. 2020; 7(1): 205395172091593. https://doi.org/10.1177/2053951720915939
Mökander J, Floridi L. Ethics-Based Auditing to Develop Trustworthy AI. Minds and Machines. 2021; 31(2): 323-327. https://doi.org/10.1007/s11023-021-09557-8
Owe A, Baum SD. Moral consideration of Nonhumans in the Ethics of Artificial Intelligence. AI and Ethics. 2021; 1(4): 517–528. https://doi.org/10.1007/s43681-021-00065-0
Ryan M, Antoniou J, Brooks L, Jiya T, Macnish K, Stahl B. Research and Practice of AI Ethics: A Case Study Approach Juxtaposing Academic Discourse with Organisational Reality. Sci Eng Ethics. 2021 Mar 8;27(2):16. doi: 10.1007/s11948-021-00293-x. PMID: 33686527; PMCID: PMC7977017.
Stahl BC, Antoniou J, Ryan M, Macnish K, Jiya T. Organisational Responses to the Ethical Issues of Artificial Intelligence. AI & Society. 2021; 37(1):23–37. https://doi.org/10.1007/s00146-021-01148-6
Zhou J, Chen F, Berry A, Reed MR, Zhang S, Savage S. A Survey on Ethical Principles of AI and Implementations. 2020. https://doi.org/10.1109/ssci47803.2020.9308437
Kuzior A, Kwilinski A. Cognitive Technologies and Artificial Intelligence in Social Perception. Management Systems in Production Engineering. 2022; 30(2): 109-115. https://doi.org/10.2478/mspe-2022-0014
Zhao G, Li Y, Xu Q. From Emotion AI to Cognitive AI. International Journal of Network Dynamics and Intelligence. 2022; 1(1): 65-72. https://doi.org/10.53941/ijndi0101006
Vasconcelos H, Jörke M, Grunde-McLaughlin M, Gerstenberg T, Bernstein MS, Krishna R. Explanations Can Reduce Overreliance on AI Systems During Decision-Making. ACM Hum.-Comput. Interact. 7(CSCW1), Article 129. 2023. https://doi.org/10.1145/3579605
Halina M. Insightful Artificial Intelligence. Mind & Language. 2021; 36(2):315–329. https://doi.org/10.1111/mila.12321
Li S, Ren X, Schweizer K, Brinthaupt TM, Wang T. Executive Functions as Predictors of Critical Thinking: Behavioral and Neural Evidence. Learning and Instruction. 2021; 71: 101376. https://doi.org/10.1016/j.learninstruc.2020.101376
Buçinca Z, Malaya MB, Gajos KZ. To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-Assisted Decision-Making. Proceedings of the ACM on Human-Computer Interaction. 2021; 5(CSCW1): 1-21. https://doi.org/10.1145/3449287
Schemmer M, Hemmer P, Kühl N, Benz C, Satzger G. Should I Follow AI-based Advice? Measuring Appropriate Reliance in Human-AI Decision-Making. arXiv preprint arXiv:2204.06916. 2022. https://doi.org/10.48550/arXiv.2204.06916
Vorobeva D, El Fassi Y, Pinto CD, Hildebrand D, Herter MM, Mattila AS. Thinking Skills Don’t Protect Service Workers from Replacement by Artificial Intelligence. Journal of Service Research. 2022; 25(4):601-613. https://doi.org/10.1177/10946705221104312
Schelble BG, Flathmann C, McNeese NJ, O’Neill T, Pak R, Namara M. Investigating the Effects of Perceived Teammate Artificiality on Human Performance and Cognition. International Journal of Human–Computer Interaction. 2022; 1-16. https://doi.org/10.1080/10447318.2022.2085191
Bakpayev M, Baek TH, van Esch P, Yoon S. Programmatic Creative: AI Can Think but it Cannot Feel. Australasian Marketing Journal. 2022; 30(1): 90-95. https://doi.org/10.1016/j.ausmj.2020.04.002
Jakesch M, Hancock JT, Naaman M. Human heuristics for AI-generated language are flawed. Proc Natl Acad Sci U S A. 2023 Mar 14;120(11):e2208839120. doi: 10.1073/pnas.2208839120. Epub 2023 Mar 7. PMID: 36881628; PMCID: PMC10089155.
Mertens DM. Research and Evaluation in Education and Psychology: Integrating Diversity with Quantitative, Qualitative, and Mixed Methods (2nd). Thousand Oaks, Calif., London: Sage Publications. 2005.
Squires V, Okoko JM, Tunison S, Walker KD. Interpretative Phenomenological Analysis. Varieties of Qualitative Research Methods: Selected Contextual Perspectives. Springer International Publishing. 2023; 269-274. https://doi.org/10.1007/978-3-031-04394-9_43
University of Jeddah, Jeddah, Saudi Arabia
Address Correspondence:
Abdulrahman M Al-Zahrani, University of Jeddah, Jeddah, Saudi Arabia, Email: ammzahrani@uj.edu.sa
How to cite this article:
Al-Zahrani AM. Balancing Act: Exploring the Interplay Between Human Judgment and Artificial Intelligence in Problem-solving, Creativity, and Decision-making. IgMin Res. 25 Mar, 2024; 2(3): 145-158. IgMin ID: igmin158; DOI:10.61927/igmin158; Available at: igmin.link/p158
Copyright: © 2024 Al-Zahrani AM. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.