Generative AI & Education

Are We Getting Smarter or Just More Dependent?

Generative AI in Education and Tech Careers: Boon or Crutch?

Generative AI (GenAI) has burst onto the scene in classrooms, code editors, and security labs alike. It promises to help students learn faster, write code for us, and even hunt hackers – but is it truly making us smarter and more efficient, or just more dependent on a digital crutch? In this deep dive, we’ll explore how GenAI is impacting student learning (especially in tough STEM subjects), what it means for software development and cybersecurity careers, and what the future might hold. Buckle up – we’ll separate hype from reality (with a bit of humor to keep things lively) and cite the experts along the way.

1) Student Learning in the AI Era: Improved Outcomes or Lazy Learners?

The arrival of AI tools like ChatGPT in late 2022 sparked both excitement and panic in education. Students suddenly had a tireless, know-it-all study buddy available 24/7. Need help with an advanced probability proof, debugging a network configuration, or writing a cybersecurity report? Just ask the bot. But does this actually improve learning – or are students outsourcing their brainwork and undermining their own education?

Recent surveys indicate AI usage among students is sky-high. In one poll, 89% of respondents admitted using ChatGPT for homework. Students are using GenAI to draft essays, solve problem sets, and even answer take-home exams. It’s not hard to see the appeal – why struggle for hours when an AI can spit out an answer in seconds?

Some students report they've gotten great grades with AI-generated work – after all, ChatGPT famously passed a Wharton MBA exam with a B- grade. But getting the right answer ≠ learning. Early studies suggest heavy reliance on AI may boost short-term results while hurting long-term understanding. One study found students who practiced math with ChatGPT solved more problems during practice but scored 17% lower on the exam later compared to a control group. Essentially, they breezed through homework with AI help, then bombed the test.

The researchers likened it to pilots relying on autopilot: if you let the AI do all the flying, your own skills atrophy. Students with ChatGPT often just asked for the answer rather than working through the problem themselves. The AI “crutch” can yield impressive homework but shallow learning. And the bot is frequently wrong – one study found its step-by-step solutions were incorrect 42% of the time in math. It sounds authoritative, but it can quietly drive your algebra into a ditch.

This mismatch between “good enough for homework” and “not good enough for real understanding” is a huge concern in tough fields like advanced math, networking, or cybersecurity. Some professors note how students are skipping textbooks, office hours, or good old problem-solving in favor of instant AI. It’s a double-edged sword: yes, AI can provide quick explanations, but overuse leads to superficial knowledge – and that shows up on tough exams, labs, or real-world tasks.

Personally, I recall the pre-ChatGPT days of my EC327 and EC330 courses at BU, suffering through code compilers, data structures, and algorithmic proofs. There were times I wished for a magical AI to handle everything, but now I’m grateful I learned the fundamentals the old-school way – because they truly stuck with me. Some current students say they rely on ChatGPT to solve those same classes’ homework. Sure, they breeze through, but do they really grasp what a “graph algorithm” is or how dynamic memory works? Probably not. As soon as it gets tough, they consult their best friend AI – ironically short-circuiting their own ability to build a deep mental model of the material. That’s the big risk: you pass the course on autopilot but never learn to fly.

In short, AI can boost productivity and provide support – but over-reliance can result in shallow learning. Savvy students are learning to strike a balance: using ChatGPT to check work or get hints, but doing the mental heavy lifting themselves. Meanwhile, educators scramble to adapt assignments, some going “ChatGPT-proof” or reintroducing more in-person exams to ensure real understanding is measured. We’re in a transition, and only time (and test scores) will tell if the new AI-savvy generation actually retains the knowledge they “learned.”

2) AI Coding Agents: Will They Replace Devs or Supercharge Them?

Let’s move from the classroom to the coder’s cubicle. AI coding agents like GitHub Copilot or OpenAI Codex are basically “auto-complete on steroids.” They can generate code from comments, suggest bug fixes, and handle routine tasks – so does that mean software engineers are obsolete?

Experts say we still need humans, but the work is changing. AI can automate a lot of routine coding – the boilerplate, straightforward logic, or repetitive tasks. That might threaten some entry-level dev roles, because what used to take 3 junior developers might now take 1 developer with AI assistance. On the other hand, actual software creation goes beyond just spitting out code: it requires design, architecture, security audits, user empathy, and creativity.

AI tools shine at generating code for well-defined problems, but they struggle with truly novel scenarios or complex system contexts. They also can produce insecure or buggy code if you don’t verify. So while they reduce tedium, devs must still do the high-level thinking and debugging. The future probably holds fewer “pure coding” roles but more “AI-augmented engineering” positions – imagine a developer managing multiple AI tools, reviewing their outputs, integrating components, and using creativity to tackle the last mile. Those who rely too heavily on AI from day one may never build core skills, so when the AI fails, they’re stuck. But those who treat AI as a power tool while still learning the fundamentals will thrive. Productivity could skyrocket, but so must human oversight. That’s the sweet spot: use AI to write code faster, but never forget how to code without it.
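To make the “verify before you trust” point concrete, here’s a hypothetical sketch (the snippet, table, and names are invented for illustration, not taken from any real assistant) of the kind of plausible-looking code an AI might suggest – SQL built by string formatting – next to the parameterized version a reviewing developer should insist on:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

# What an assistant might plausibly suggest: SQL built by string formatting.
# It "works" on normal input but is wide open to injection.
def find_user_unsafe(name):
    return conn.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

# What a human reviewer should insist on: a parameterized query,
# so the input is always treated as data, never as SQL.
def find_user_safe(name):
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
print(len(find_user_unsafe(payload)))  # 2 – the injection dumps every row
print(len(find_user_safe(payload)))    # 0 – the payload is treated as a literal name
```

The unsafe version passes a casual glance and a happy-path test, which is exactly why AI-generated code needs a skeptical human in the loop.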

3) Cybersecurity & Networking: Can AI Outsmart Hackers?

If any field could use superhuman help, it’s security. Threats evolve fast, and there's a talent shortage. AI can help spot anomalies in huge logs, scan for known vulnerabilities, and even propose some fixes. However, generative models like ChatGPT lack real-world agency: they can’t literally hack systems or interpret complex hardware specifics on the fly. They also rely on training data, so brand-new “zero-day” exploits might fly right under an AI’s radar. True hacking or in-depth debugging requires creativity and context, something AI doesn’t fully possess (yet).
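As a toy illustration of the log-triage work AI can help automate (the log lines and threshold here are made up), even a few lines of scripting can flag a brute-force-style pattern – the AI’s added value is doing this at scale, across messier real-world formats:

```python
from collections import Counter

# Hypothetical auth-log lines; real formats vary by system.
logs = [
    "Failed password for root from 203.0.113.9",
    "Failed password for admin from 203.0.113.9",
    "Accepted password for alice from 198.51.100.4",
    "Failed password for root from 203.0.113.9",
]

def flag_brute_force(lines, threshold=3):
    """Flag source IPs with at least `threshold` failed logins."""
    fails = Counter(
        line.rsplit(" ", 1)[-1]          # last token = source IP
        for line in lines
        if line.startswith("Failed")
    )
    return [ip for ip, count in fails.items() if count >= threshold]

print(flag_brute_force(logs))  # ['203.0.113.9']
```

The catch is the same as elsewhere: the tool can count anomalies, but deciding whether a flagged IP is an attacker, a misbehaving script, or a typo-prone admin still takes human context.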

Personally, I once tried using ChatGPT to help with a port-hacking exercise in a cybersecurity class. It gave me bits of code, but they were disjointed and often incorrect. I wasted weeks feeding it errors, hoping it’d self-correct. Eventually, I realized a combination of the professor’s notes, official documentation, and rewatching an MIT “Computer Systems Security” lecture was infinitely better. Yes, ChatGPT gave me some partial leads, but wrestling with the AI ended up taking longer than if I’d just studied the authoritative sources from day one.

That’s the paradox: in advanced fields like cybersecurity, relying on AI can be a time sink when it starts hallucinating. Or you might become complacent, never fully learning how to analyze packets or debug a misconfiguration. So you pass a subpar assignment with AI’s half-baked solution, but do you truly understand firewalls, encryption, or intrusion detection? Probably not. Meanwhile, a competitor who studied meticulously from the textbook and labs might not have AI’s shortcuts, but they’ll develop a deeper skillset that stays with them for years.

In short, AI is a powerful assistant but no silver bullet in cybersecurity and networking. It can handle routine tasks, but humans are still the key to creative, context-driven solutions – the kind that catch novel attacks or debug hardware quirks. If you never learn the low-level details yourself, you might be left behind when AI faces a truly unique challenge.

4) The Future: Job Landscape & Education in 5–10 Years

In the next decade, we’ll likely see a workforce that’s AI-augmented at every level. Basic coding or formulaic tasks might be mostly done by AI tools. Entry-level dev and engineering roles could shrink or transform. Students who used AI all through college but never built solid foundations might struggle to adapt. But new roles will also appear: “AI supervisor,” “prompt engineer,” or “ML-based system architect.” Meanwhile, education might shift to emphasize managing AI outputs rather than raw memorization. Will that make us “smarter” or “lazier”? Possibly both – depends on how we harness it.

Experts are divided, but many say the best approach is responsible, balanced AI usage. Let AI handle drudge work so humans can focus on creativity, design, and deeper problem-solving. However, if we rely on AI from day one for everything, we risk producing a generation that can’t handle complex tasks without their AI sidekick. And if AI coding agents or security scripts become the norm, true skill might only reside in those who occasionally step away from the bot to learn it manually. Over time, we might see a big skill gap between those who can do the heavy lifting themselves and those who only know how to ask the AI for answers.

5) Conclusion: A Double-Edged Sword We Should Wield Wisely

Ultimately, Generative AI can be an amazing accelerator if used carefully. It can speed up learning, help debug code, and provide near-infinite practice problems. But the dark side is that overuse or misuse can undermine real understanding and skill growth. The same story plays out across STEM fields, software development, and security: AI’s an extraordinary tool, but no standalone expert. The best results come when humans stay in the driver’s seat and let AI handle mundane tasks, not the thinking itself.

So, are we getting “smarter” or “dumber” with AI? Maybe we’re both – or rather, it’s up to how we use it. The student who invests time learning fundamentals and consults AI sparingly may end up more efficient and more knowledgeable. The one who copies AI solutions blindly might coast through classes but be unprepared for real-world challenges. In five or ten years, the difference between those two approaches will become starkly obvious in the workforce. Let’s hope we aim for synergy rather than a total trade-off of convenience for competence.

At the end of the day, technology changes, but core understanding and critical thinking remain priceless. By all means, ask ChatGPT for that snippet of code, but please also open your textbook and labs. If you do, you’ll be unstoppable. If you don’t, the day your AI tool fails, you may wonder where your own knowledge went.

—Carther Theogene, February 2025


References

  • Barshay, J. (2024). Kids who use ChatGPT as a study assistant do worse on tests. The Hechinger Report/Popular Science.
  • Study.com Survey via Forbes (2023). 89% of students admit to using ChatGPT for homework.
  • Orosz, G. & Osmani, A. (2023). How AI-assisted coding will change software engineering. Pragmatic Engineer.
  • Zinkula, J. & Mok, A. (2023). 10 jobs AI is most likely to replace. Business Insider.
  • Walker, K. (2023). Which Programming Jobs Are Likely To Be Replaced By AI? CodeOp Blog.
  • Horrocks, D. (2024). Why Copilot is Making Programmers Worse at Programming. The Angry Dev.
  • Wiz.io (2023). Will AI Replace Cybersecurity? No, but it will change it.
  • BlueGoat Cyber (2023). How ChatGPT Aids in Penetration Testing.
  • Business Insider (2023). ChatGPT passed a Wharton MBA exam.
  • Heikkinen, N. (2022). Student caught submitting AI-written essay. Insider.
  • Tierney, L. (2024). The intersection of AI use in education and plagiarism. EdNC.
  • Katrina, W. (2023). CodeOp CEO on AI-proof skills.
  • SentinelOne & Redscan (2023). ChatGPT Security Risks.
  • NCSC UK (2023). AI and Cyber Security: what you need to know.
  • Spitzer, M. (2019). Digital Dementia.
