A programming language is a language designed to be read by both humans and machines. It is an intermediary that saves us (the programmers) from manually reducing all of our high-level designs to raw boolean logic. But it is also structured so that a machine can interpret it unambiguously.
When you write a program, your code has two audiences: humans and machines.
When we alienate one audience, the machines, the result is that the machine either does something wrong or cannot do anything at all. When a programmer makes this kind of error, we call it a bug. Bugs are bad.
But a mistake all too often made is for programmers, intentionally or unwittingly, to alienate the other audience. I recently inherited a codebase that had a single function whose body was 2,996 lines long. It had high cyclomatic complexity (lots of control structures), hundreds of variables, and crazy indenting. For the most part, it worked fine. The computer understood it. But adding features and finding and fixing bugs has been a tremendous chore. This program was not written for two audiences. It was only written for one.
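To make the contrast concrete, here is a small, entirely hypothetical pair of functions (invented for illustration, not taken from the codebase above). Both compute the same result, but only the second is written for the human audience.

```python
# Written only for the machine: the computer runs it fine,
# but a human must decode what a, b, 2, 1, and 0 mean.
def f(a, b):
    return 2 if a > 100 and b else (1 if a > 100 or b else 0)


# The same logic, written for both audiences.
# (These names and the shipping-tier scenario are assumptions for the example.)
FREE_SHIPPING = 2
DISCOUNTED_SHIPPING = 1
STANDARD_SHIPPING = 0

def shipping_tier(order_total, is_member):
    """Members with large orders ship free; either condition alone earns a discount."""
    large_order = order_total > 100
    if large_order and is_member:
        return FREE_SHIPPING
    if large_order or is_member:
        return DISCOUNTED_SHIPPING
    return STANDARD_SHIPPING
```

Both versions satisfy the machine equally well; only one remains legible to the programmer who opens the file a year later.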
If I rewind the clock, I can remember numerous occasions where I wrote cryptic code, failed to document, and committed other such sins. And I can think of many times when I've returned to my own code weeks, months, or years later and have had to re-invest substantial time in figuring out what I wrote. My code was not written for two audiences.
Perhaps what we need is a new term: a counterpart to "bug" that describes the code's failure to remain semantically transparent to humans, and not just to computers. Because, like computers, programmers give up when they pick up a piece of code and find it indecipherable.
Unlike computers, they may just go build an alternative.