  • Large language models can generate defensive code, but if you’ve never written defensively yourself and you learn to program primarily with AI assistance, your software will probably remain fragile.

    This is the thesis of the argument, and it's completely unfounded. "AI can't create antifragile code" — why not? Effective tests and debug-time checks, at this point, come straight from Claude without me even prompting for them. And even if you're rolling the code yourself, you can use AI to throw a hundred prompts at it, asking "does this make sense? Are there any flaws here? What remains untested or out of scope that I'm not considering?" — like a juiced-up static analyzer. (A sketch of what I mean by debug-time checks is below.)
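
    For anyone unfamiliar with the jargon: "defensive code" and "debug-time checks" look roughly like this. A minimal illustrative sketch, not code from the article — the function and names are hypothetical:

    ```python
    # Hypothetical example of defensive coding plus a debug-time check.

    def average(values: list[float]) -> float:
        # Defensive check: fail loudly on bad input instead of
        # silently dividing by zero further down.
        if not values:
            raise ValueError("average() requires a non-empty list")

        result = sum(values) / len(values)

        # Debug-time check: assert statements are stripped when Python
        # runs with -O, so this costs nothing in production but catches
        # logic errors during development.
        assert min(values) <= result <= max(values), "mean outside value range"

        return result
    ```

    This is exactly the boilerplate-ish safety scaffolding that current models emit by default, which is why the "your software will probably remain fragile" claim doesn't hold up for me.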