I recently spoke to a group of university educators worried about students' constant use of AI in their coursework. They feared that students, a future generation of industry workers, would become overly dependent on AI. The main question was: Should we try to intervene? (See Owoc et al., 2019, for a discussion.)
While the question is complex, I will suggest one way to look at it that I hope reflects a more balanced view of the implications of our everyday AI usage. The idea is to view AI the same way we view any past technology that brought about a “Kuhnian” revolution (Kuhn, 1997) in how we live and think.
A chatbot is excited to solve your problem.
Source: Nick Kabrel
Consider the printing press: critics feared it would overwhelm society with unfiltered ideas and undermine scholarly authority. Radio faced backlash for distracting families and spreading sensationalism. Television has long been blamed for eroding attention spans and critical thinking (perhaps for good reasons). Finally, and most importantly, the internet and smartphones—perhaps the last big tech revolution before LLMs—provoked debates over information overload and social media harm (Carr, 2020).
My point is that it is not the technology itself that makes your mind sharper or duller; it is the way you engage with it. Take the internet: in developed countries, nearly everyone has access to it, and it is hard even to imagine the amount of useful information it contains. Some argue that this equal accessibility should also result in equal opportunities for learning (Haq et al., 2023, p. 76). Yet intelligence and expertise vary widely, not because of inherent IQ, but because of how knowledge is pursued and applied. Why? Again, I believe it depends largely on how people use the technology.
Some people use the internet to find great resources: databases, books, educational videos, and so on. Others use it to scroll through TikTok and watch videos of cats. When it comes to modern chatbots, I believe the same story applies. It is not up to AI whether you become smarter or dumber by using it; it is up to you. Let me show you what I mean with an example.
Example #1: AI Usage That Hinders Creativity and Agency
Suppose an HR manager at a company needs to create an event to enhance the well-being of their co-workers. They enter the following prompt into a chatbot: “I’m an HR manager at [company name]. Design a two-hour event for a team of software engineers that will enhance their well-being. Provide me with a detailed plan for the event, with timestamps and concrete descriptions of activities.”
As you can see, all the creative work in this example is outsourced to the chatbot. Yes, the person will probably accomplish the task quickly and effectively. But what is the value for their professional growth (assuming they want to grow, of course)? It’s not about task accomplishment; it’s about development. So, how could you use LLMs without sacrificing your creativity and agency?
My answer: by resisting the temptation to receive ready-made answers and solutions. For me, one of the most beneficial features of LLMs is that they can provide guidance aligned with your specific context.
For centuries, methods like Socratic questioning have fueled critical thinking and intellectual growth (Zare & Mukundan, 2015). Today, LLMs offer a free, tireless “sparring partner” to probe your ideas, sharpen your reasoning, and eliminate weak concepts before they fail in the real world (Laak & Aru, 2024). How, then, can our HR manager harness LLMs to enhance their creativity and agency instead of blunting them?
Example #2: AI Usage That Promotes Creativity and Agency
The HR manager enters the following prompt into a chatbot: “I’m an HR manager at [company name], tasked with designing and leading a two-hour event to improve the well-being of the software engineering team. Guide my learning and planning. I’ll ask basic questions about well-being programs and strategies and what makes them effective, but don’t give me direct solutions or specific plans. Instead, use clear, thought-provoking questions to help me develop my own ideas, critique my suggestions by pointing out potential gaps, and ensure your responses foster my independence, creativity, and curiosity.”
A chatbot is eager to guide you in your learning process.
Source: Nick Kabrel
By the end of the process, the HR manager in the second example will arguably have grown significantly as a professional, despite relying on AI for support. This case illustrates how AI’s impact can vary widely depending on how it is used.
AI as a Guide, Not a Problem-Solver
Some people argue that restricting students’ AI use is necessary to prevent intellectual laziness and ensure they learn essential skills (Yu, 2023). I disagree. The AI revolution is unstoppable, and banning its use is futile. There is another, perhaps more provocative, approach. I tell my students: “Please, use AI (if you want). But use it not as a substitute for your own brain; use it to amplify your learning in ways previously impossible. Instead of treating it as an answer-giver and problem-solver, use it as a personal tutor, a Socratic opponent, an impartial argument analyst, or a guide that pushes you toward your zone of proximal development.”
It’s akin to a parent solving a problem for a child who is capable of tackling it alone, which hinders the child’s development. A good parent, instead, identifies the child’s zone of proximal development (Vygotsky, 1978) and nudges them toward it. Similarly, you can treat AI as a bad parent who solves your problems and takes your development away from you, or as a mentor who helps you identify where to go and how to grow on your own.
References
Owoc, M. L., Sawicka, A., & Weichbroth, P. (2019, August). Artificial intelligence technologies in education: Benefits, challenges and strategies of implementation. In IFIP International Workshop on Artificial Intelligence for Knowledge Management (pp. 37–58). Cham: Springer International Publishing.
Kuhn, T. S. (1997). The structure of scientific revolutions (Vol. 962). Chicago: University of Chicago Press.
Carr, N. (2020). The shallows: What the Internet is doing to our brains. W. W. Norton & Company.
Zare, P., & Mukundan, J. (2015). The use of Socratic method as a teaching/learning tool to develop students’ critical thinking: A review of literature. Language in India, 15(6), 256–265.
Laak, K. J., & Aru, J. (2024). AI and personalized learning: Bridging the gap with modern educational goals. arXiv preprint arXiv:2404.02798.
Yu, H. (2023). Reflection on whether Chat GPT should be banned by academia from the perspective of education and teaching. Frontiers in Psychology, 14, 1181712.
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.
Haq et al. (2023). Dialogues on AI, society, and what comes next. Atlantic ReThink.