Recently, I read a research paper titled “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking” by Prof. Dr. Michael Gerlich. It is one of those research pieces I find extremely interesting and wanted to dig into more deeply to gather further insights. So I reached out to the researcher and asked for his time for an email interview; fortunately, he agreed.
Research scientists like Dr. Gerlich are my intellectual heroes who, despite their towering intellect and busy schedules, remain grounded and approachable. Their humility really stands out when they patiently explain complex concepts in simple terms, ensuring that knowledge isn’t confined to academia but becomes a bridge that connects us all.
It’s one of those opportunities I truly love and cherish – getting a chance to talk to my heroes and gain a glimpse into their thought process. Scroll down to explore the fascinating interview below!
Can you elaborate on the key findings from your study, “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking”? What initially inspired you to explore the relationship between AI usage and critical thinking?
I found it particularly illuminating to observe how individuals appear to turn to AI tools as a quick fix for tasks that traditionally required deliberate effort and critical reflection. The study’s key finding is that increased dependency on AI tools correlates with a decline in independent critical thinking skills. Initially, I wanted to uncover whether these tools enhance human cognition by augmenting our capabilities or whether they lead to a form of cognitive outsourcing that erodes mental acuity over time. This concern arose from a growing body of literature on digital distractions and technological dependencies, which often highlights the ways new technologies can modify human attention spans and problem-solving approaches.
At all the universities where I have taught, a distinct reduction in students’ critical thinking could be observed with the emergence of ChatGPT. Colleagues from other universities confirmed similar observations. Curiosity about how this phenomenon shapes contemporary decision-making spurred me to conduct a focused examination of AI’s influence on our capacity to scrutinise information, differentiate reliable data from misleading content, and persist in complex problem-solving tasks.
Your study shows that younger participants tend to rely more on AI tools and exhibit lower critical thinking scores. What factors do you believe contribute to this generational difference in AI usage?
The generational difference in AI usage, where younger participants seemed far more reliant on AI tools and demonstrated lower critical thinking scores, appears driven by familiarity with continuous connectivity and digital services from an early age. Many younger individuals grew up in environments saturated with smartphones, interactive apps, and smart devices. These technologies often encourage instant solutions and discourage slower, reflective thinking. In addition, the immediacy and convenience of AI-driven tools appeal to younger cohorts, who may perceive traditional problem-solving strategies as time-consuming or antiquated. This inclination towards rapid resolution perhaps undermines the cultivation of deeper reasoning abilities. The main problem probably relates to the misuse of generative AI.
Young people seem to have become used to asking Google, or now an AI, a question that leads directly to the solution or answer to a task, instead of investigating potential sources with the help of AI and carrying out the critical analysis and conclusions themselves. Older generations have been applying critical thinking for longer and are used to the process. This past experience does not mean that older people are “safe” from reduced critical thinking. It might impact them as well – in the future.
What are the potential long-term cognitive effects of extensive AI usage, particularly when it comes to skills like memory, attention and problem-solving? Do you foresee a future where people will no longer need these skills due to AI’s dominance?
Current generative AI models like ChatGPT are constructed to make the user happy, and in doing so they feed the confirmation bias we all have. This will clearly have negative long-term effects on users. But beyond that, extensive AI usage could have a myriad of cognitive effects, especially regarding memory, attention, and problem-solving. Relying on AI to store and retrieve information might impair the practice of memory recall; diminished attention spans may become more widespread when people feel less compelled to concentrate on tasks that can be completed automatically; and problem-solving abilities could atrophy in the absence of consistent, deliberate practice. I do not predict that individuals will completely abandon these skills, even if AI reaches levels of sophistication that automate most tasks. Humans have always sought mastery over their mental faculties, and these core skills remain necessary for ethical decision-making and for unforeseen crises that might exceed an AI’s operational scope. Nonetheless, some aspects of cognition, particularly those associated with rote memorisation, may become relegated to AI support, requiring new forms of education that emphasise advanced judgement and context-specific reasoning. Given what we know of human behaviour, though, a general reduction of critical thinking across wider society seems not unlikely.
Your study also touches on algorithmic bias and transparency issues with AI tools. How do you think we can address these ethical concerns as AI becomes more integrated into decision-making processes across industries?
Addressing concerns around algorithmic bias and transparency is paramount. Many AI systems learn from historical data, which can contain latent prejudices. Mitigation efforts should involve multi-stakeholder engagement, thorough audits of AI models, legal frameworks that ensure accountability, and educational initiatives that promote data literacy. When AI becomes embedded in multiple industries, from healthcare to finance, robust oversight and interpretability become crucial to maintaining public trust. Recent scholarship points to the value of including ethicists and social scientists in AI design processes. Greater diversity among developers and improved regulation can create an environment in which algorithmic biases are identified early and rectified, rather than being perpetuated or amplified.
The concept of technological singularity – when AI surpasses human intelligence – is often discussed in the AI community. There is also no doubt that AI systems are equipped with powerful computational resources, which make them excel in certain areas, such as medicine, mathematics, or quantum physics. So, if AI does exceed human knowledge, what would it mean for humanity?
As for the concept of technological singularity, should AI surpass human intelligence in certain respects, its impact on humanity hinges on the directions we set for its deployment. AI that exceeds human knowledge could theoretically revolutionise research and application in fields as diverse as biomedical engineering and environmental conservation. The potential for addressing large-scale global challenges is immense. However, caution is advisable given how the asymmetry in intelligence might introduce new forms of dependence, shifting the centre of authority from human oversight to AI-driven processes. Maintaining human agency, ethical guidelines, and democratic oversight should remain priorities, ensuring that AI-driven innovations continue to serve rather than subjugate societies. Considering this, keeping the critical thinking part on the human side of an AI-human partnership must be a priority. The relationship between extensive (wrong) use of AI and reduced critical thinking becomes even more worrying.
There is growing interest in AI systems that can detect or simulate human emotions. How do you think AI’s understanding of human emotions will affect human-AI interactions, and what ethical concerns does this raise?
AI systems that attempt to detect or simulate human emotions raise questions about how machines will influence interpersonal relationships. Interactions might become more frictionless if an AI can sense frustration or confusion and adapt accordingly. Yet ethical concerns surface regarding privacy, autonomy, and manipulation, particularly if AI is embedded in marketing or social media contexts. The notion of machines interpreting or imitating human emotional states blurs the line between empathy as a human capacity and empathy as an AI-facilitated simulation. Societies must establish clear standards about data collection, consent, and the permissible applications of emotion-sensing technologies. A recent study of mine, “Societal Perceptions and Acceptance of Virtual Humans: Trust and Ethics across Different Contexts” (2024, https://doi.org/10.3390/socsci13100516), shows that, depending on the social context, participants might have enormous reservations about such (fake) simulated emotions.
Extending the previous question, will the future of creativity in AI lie in collaboration between humans and AI, or do you believe AI could develop its own “creative” capabilities in the near future?
This is probably one of the most difficult questions to answer. What defines creativity? Creativity in AI, whether it emerges from collaboration with humans or from the AI’s own algorithms, remains an intriguing terrain. Collaborative endeavours, where humans direct, critique, or refine AI-generated outputs, might spark new forms of art or scientific breakthroughs. Still, the long-term potential for AI to generate ideas autonomously cannot be dismissed, especially as neural network models grow in sophistication. Genuine creativity often entails a sense of context, cultural awareness, and the ability to produce unexpected insights. If AI systems develop capacities to integrate contextual knowledge with novel idea generation, they might exhibit forms of creativity that differ from human standards. Ultimately, that is what we expect from an artificial intelligence.
What are your other interests besides academic commitments and research responsibilities… reading, painting, gardening, skiing maybe?
Outside academic work and research, I enjoy spending time with my amazing family: my smart wife and my outstanding daughters. I enjoy the outdoors – hiking, skiing, and motocross as well. Let us not forget travelling, reading, and experiencing new cultures (though there is usually never enough time to dive deep). Whether strolling through a new city or experiencing its local restaurants, I appreciate how each environment broadens my perspective.
What advice would you give to aspiring doctoral students interested in researching the impact of AI on consumer behavior, marketing and society?
To aspiring doctoral students interested in the impact of AI on consumer behaviour, marketing, and society, I would recommend immersing themselves in research at the juncture of technology and social sciences. Developing a sound understanding of social and behavioural science theory and empirical research methods will equip them to interpret complex data responsibly. Engaging with interdisciplinary teams, such as those that include computer scientists, sociologists, and ethicists, sharpens one’s ability to examine AI’s influence on individual decisions and broader societal norms. Just today, a new study of mine addressing this topic was published: “The Shifting Influence: Comparing AI Tools and Human Influencers in Consumer Decision-Making” (2025, https://doi.org/10.3390/ai6010011).
Quick bits:
If AI could have a “brainstorming” session with humans, what do you think it would pitch as the next big idea?
If AI could have a “brainstorming” session with humans, it might pitch ideas about how to solve pressing global problems, such as climate change or pandemic response, harnessing data-driven insights that surpass the limits of conventional human analysis.
If AI could teach us one thing about human thinking, what do you think it would choose to explain?
If AI could teach us one thing about human thinking, it might underscore the inherent biases and heuristics that shape our decisions, encouraging us to be more methodical and evidence-based.
What if one day AI asks us, ‘What’s it like to forget something?’ – how would we explain that?
If one day AI asked, “What is it like to forget something?”, we could attempt to explain it as the unravelling of mental connections that once held meaning, which stands in contrast to a system that can store and retrieve data indefinitely.
If AI could dream, what do you think it would dream about? Solving problems or just finding the perfect Wi-Fi signal?
If AI could dream, it might indeed dream of infinite problem-solving frontiers or an idealised operational environment where connectivity never fails.
If AI had a favorite hobby, do you think it would be coding, learning new algorithms or just watching humans solve problems?
If AI had a favourite hobby, it might well be the pursuit of yet more efficient algorithms, although it might also find fascination in observing and analysing how humans approach puzzles, and ironically, how we endeavour to find meaning beyond mechanical computation.
(Wow! Thank you, Professor Gerlich, for an incredibly inspiring conversation! Your work is a true source of inspiration. We eagerly anticipate our next visit to witness more of your innovative research. Until then, we extend our best wishes for your continued success in all your future endeavours.)