When new tools are developed, they replace a skill set previously needed to accomplish a task. We begin using the new tool, and gradually the muscle memory, dexterity, and cognitive acuity we once brought to that task atrophy. We become rusty, or struggle to retain the skill at all.
Use it or lose it, as they say.
That’s a concern many have with the rise of generative AI, especially among knowledge workers. How much is my dependence on Claude, Perplexity, NotebookLM, or Gemini undermining my ability to think critically? Am I becoming a less capable writer? Is my proficiency in synthesizing ideas and restructuring them in creative ways diminishing?
Well, according to recent research from Microsoft and Carnegie Mellon University, we are right to worry about the impact our growing reliance on AI tools is having on our cognitive abilities and independent problem-solving.
At least, depending on how you use them.
The study of 319 knowledge workers found that those with higher confidence in AI tools self-reported less critical thinking, while those who maintained healthy confidence in their own abilities engaged more deeply with AI-generated outputs.
In other words, those who use AI tools uncritically—mistaking copy-pasting with minor modifications for critical thought and accepting AI outputs without proper scrutiny—tend to produce lower-quality work. Generative AI tools are powerful allies, but only if we apply thorough human judgment when evaluating their outputs.
This new wave of tools is as exciting as it is overwhelming, but as knowledge workers we must not let the cognitive skills at the core of what we do go neglected in the name of presumed productivity.
When using a tool such as Perplexity Pro (a massive upgrade over the free version, and my favorite research and writing tool alongside Claude), go deeper after it delivers its answer. Click on the sources it links to. Find the relevant information and verify it. Assess how reputable the sources are and whether the output accurately reflects what each source is actually saying.
As a human, you have the kind of judgment that generative AI still struggles to replicate. Call it the part that separates us from the machine.
This bevy of AI tools can comb through miles of data and mountains of information in seconds, helping us narrow down what is likely most relevant to our task. It falls to us, as knowledge workers, to examine those synthesized outputs and evaluate their validity and utility.
That is how we keep our critical thinking skills sharp. And it’s how we keep our values and goals aligned in what should be human-centered activities.