How AI changed the way we learn to code

If you are interested in how programming will be taught, and how AI (understood as the mass adoption of generative AI tools) has changed the way we learn to program, this video is for you. It is very good, but too long for a LinkedIn post, so I have summarized it here:

  • Stack Overflow usage has declined sharply.

  • We are in a phase of AI expansion, but it will eventually reach maturity; growth is not linear or exponential forever. As a reference, he uses the smartphone market.

  • Answers to Freddy’s question on Twitter: how has the way we learn to program changed since the arrival of ChatGPT, GitHub Copilot, and LLMs?

1) It feels more agile, but you cannot use it without knowing how to program. Even to ask questions, you must know what to ask; AI is not useful without a solid foundation in code and systems.

2) You can ask questions without being judged. He discusses the problems of the university and where we should go from here.

3) It complements classes and helps students better understand what was explained, like a tutor or a private teacher.

4) LLMs output code that is too advanced, because they perform a weighted analysis of all the existing code culture. When you are just starting to learn, that is frustrating; the best way to start is the sequential path.

5) Syntax has become less relevant; logic and problem-solving tools now matter more.

6) Using AI to create reports or write documentation makes sense, but only if you verify the result and audit the process. Copy/paste does not work in the long run.

  • He mentions an article on the challenges teachers face. Short-term: changing how exams are evaluated, and exposing students to the capabilities and limitations of AI. Long-term: explaining fundamentals and ethical problems, creating AI-proof tasks, doing more oral exams, giving students personalized help, helping instructors with time-consuming tasks, focusing more on reading and critiquing code, creating more open-ended tasks, and having students collaborate with AI.

  • What is programming? First of all, AI doesn’t replace anyone; it multiplies what we already are. In other words, there are no good answers if you don’t know what a good question is. Programming is not writing code; writing code is the result of programming. Programming is understanding a problem with maximum clarity and zero ambiguity, which is why solid fundamentals remain important. The real problem: if you know the destination, you can discover the way to get there; if you don’t, there is no way to find it. You can’t expect an LLM to program an app like Uber from scratch, because it won’t understand it.

  • One of the most demoralizing things when you’re starting to program is the small syntax errors you don’t yet have the background to understand. The way compilers explain errors is often useless, and it makes you feel like you’re not cut out for this. On the syntax side, LLMs help a lot: knowing syntax is not a sign of being a great programmer, it’s a limitation of how we currently build software, and one we are now eliminating. Rookie mistakes that used to take hours now take seconds.
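A minimal sketch of the kind of rookie mistake he means, using Python as an assumed example language: a single forgotten parenthesis produces an error message that says *that* something is wrong, but not *why* or how to fix it, which is exactly the gap an LLM-style explanation fills.

```python
# Hypothetical beginner snippet with one forgotten closing parenthesis.
broken_source = "print(len('hello'"

try:
    # compile() parses the code without running it, so we can
    # observe the error message the interpreter would show.
    compile(broken_source, "<lesson>", "exec")
except SyntaxError as err:
    # Depending on the Python version, this prints something like
    # "'(' was never closed" or "unexpected EOF while parsing":
    # accurate, but not much guidance for a first-week student.
    print(f"SyntaxError: {err.msg}")
```

The point is not that the compiler is wrong, but that its diagnostics assume context a beginner does not yet have.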

  • He mentions an article about the hardest part of creating software: understanding the complete system. The technology behind LLMs is not capable of building complete systems, and our minds need the concept of a complete system to build complete software solutions.

  • Final message for instructors: AI is not going away. To a large extent, educating means motivating people to learn.

Check out the video here.
