James Anderson: AI Makes Developers Dumb


People often talk about the productivity gains that come from LLMs, and it would be disingenuous of me to dismiss them. It’s true. You can be productive with an LLM-assisted workflow, but that same workflow could also be making you dumber.

There’s a reason I say this. Over time, you develop a reliance on LLM tools, to the point where it starts to become hard for you to work without one.

I got into software engineering because I love building things and figuring out how stuff works. That means that I enjoy partaking in the laborious process of pressing buttons on my keyboard to form blocks of code.

LLM-assisted workflows take this away. Instead of the satisfaction of figuring out a problem by hand, one simply asks the LLM to take a guess.

Instead of understanding why things work in the way they do, you become dependent on an assistant to tell you what you should do.

Some people might not enjoy writing their own code. If that’s the case, as harsh as it may seem, I would say that they’re trying to work in a field that isn’t for them. Maybe you’re just here for the money? Fair enough. That happens with every profession, and it generally shows through one’s enthusiasm and demeanor.

The best engineers I’ve met are people who will spend hours at the weekend building their own version of a tool or software. Heck, that’s where you get innovation and advancements from. You can’t find performance improvements without a good understanding of how a system works, otherwise you’re just shooting in the dark.

There is a concept called “Copilot Lag”. It refers to a state where, after each action, an engineer pauses, waiting for something to prompt them on what to do next. There is no self-sufficiency, just the act of waiting for an AI to tell them what should come next. It’s akin to where a junior in the field might start - relying on their more senior colleagues to guide them on how to proceed.

It’s a real thing.

Eons ago, I used to use GitHub Copilot in VS Code. When I look back on that time period now, I’m amazed I didn’t do more damage to my knowledge retention.

Over time, I started to forget basic foundational elements of the languages I worked with. I started to forget parts of the syntax and how basic statements are used. It’s pretty embarrassing to look back and think about how I was eroding the knowledge I had gathered just because I wanted a short-term speed increase.

That’s the reality of what happens when you use Copilot for a year. You start to forget things, because you are no longer having to think about what you’re doing in the same way that you would when you try to figure out how to solve a problem yourself.

It was actually a video by ThePrimeagen that made me realise this and confront reality. He had a clip from one of his streams where he was talking about Copilot lag. What a wake-up call that was!

I stopped using LLM assistants for coding after that, and I’m glad I did.

To give an example, compilers are an area that I find super interesting. I had actually tried to work through Thorsten Ball’s Writing An Interpreter In Go at the time. But it was completely pointless. Instead of me learning about the topics and techniques in the book, Copilot was just outputting the code for me. Sure, it might feel cool that you just wrote a parser, but could you do it again if you turned Copilot off? Probably not. You also lose the chance to learn about concepts like memory management or data-oriented design, because Copilot just gives you some code that it thinks might work, instead of you researching topics and understanding nuances.
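To give a sense of what you miss out on, the book has you build a lexer and parser by hand. This is a rough sketch of that flavour of work - not the book’s actual code, just a hand-rolled lexer for a toy arithmetic language - the kind of thing that only teaches you anything if you write it yourself:

```go
package main

import "fmt"

// TokenType names the kinds of tokens our toy language knows about.
type TokenType string

const (
	INT   TokenType = "INT"
	PLUS  TokenType = "PLUS"
	TIMES TokenType = "TIMES"
	EOF   TokenType = "EOF"
)

type Token struct {
	Type    TokenType
	Literal string
}

// Lex walks the input byte by byte and emits tokens, ending with EOF.
func Lex(input string) []Token {
	var toks []Token
	i := 0
	for i < len(input) {
		c := input[i]
		switch {
		case c == ' ':
			i++ // skip whitespace
		case c == '+':
			toks = append(toks, Token{PLUS, "+"})
			i++
		case c == '*':
			toks = append(toks, Token{TIMES, "*"})
			i++
		case c >= '0' && c <= '9':
			// consume a full run of digits as one INT token
			start := i
			for i < len(input) && input[i] >= '0' && input[i] <= '9' {
				i++
			}
			toks = append(toks, Token{INT, input[start:i]})
		default:
			i++ // this sketch silently skips anything it doesn't recognise
		}
	}
	return append(toks, Token{EOF, ""})
}

func main() {
	for _, t := range Lex("12 + 3 * 4") {
		fmt.Printf("%s(%q) ", t.Type, t.Literal)
	}
	fmt.Println()
}
```

Trivial as it looks, deciding how to batch digits into one token, or what to do with unrecognised input, is exactly the kind of judgment you never develop if an assistant writes it for you.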

That actually leads into another angle. Research. This time with a more positive spin on AI.

It’s true. LLMs are useful. They’re like a search engine. We used to use Stack Overflow to get help with a programming problem. Since LLMs are trained on all that data, they can be effective tools to learn more about a concept. But only if you use them with an inquisitive mindset and don’t trust their output.

They’re notorious for making crap up because, well, that’s how LLMs work by design, which means they’re probably making up nonsense half the time. It’s all patterns and token sequences, not real statements by people who are insanely knowledgeable. An LLM is trained on content created by people who know what they’re talking about, but it regurgitates that content in a manner that differs from the source material.

Anyway, interrogating an LLM’s responses and trying to figure out why it recommends certain approaches is the only real way to get benefits from one. Treat it like a conversation with someone, where you’re trying to understand why they like a certain technique. If you don’t understand why something is being suggested, or what it’s actually doing under the hood, then you have failed.

And make notes! Lots of them! I recently started playing around with Zig, and I constantly make notes on things that I am learning about the language, especially considering it’s my first time dealing with manual memory management. Notes make a useful reference point when you’re stuck, and maybe even something to share with others!

I wrote this post on my morning commute, and my Tube has arrived at its destination, so I’ll leave it here.