I started learning to program when I was about ten years old. For a long time I used quite an archaic language exclusively. This language has more than its fair share of quirks, and even now, fifteen years later, I feel as though I'm only just getting a good handle on it. One of the main reasons for this is its age. It's been around for a very long time, and that brings with it a reassuring stability and maturity. However, it also brings legacy and cruft, which means it can take many years of experience to gain a full understanding of it. In addition to its age, there are several other factors which contribute to its general eccentricity.
Probably the most intimidating aspect of this language is simply its appearance. While any programming language can seem rather esoteric at first glance, learning a few keywords quickly makes it possible to at least guess at the meaning of code. This one, however, eschews words almost entirely in favour of symbolic representation. You must become familiar with quite a varied array of symbols before it's possible to trace through the code and form a clear idea of its function. To add a further layer of complexity, the meaning of a particular symbol is often context-sensitive; its function in one part of the program might be substantially different to its function elsewhere. It goes without saying that this can be off-putting to a lot of learners, even if the learning curve isn't quite as steep as this description might suggest.
Unlike most other programming languages, the few keywords this one does use aren't just in English; basic Italian, French and German vocabulary quickly becomes important when using it. As inconvenient as all of this may seem, with a little practice the notation fades into the background and becomes second nature. Now the more interesting foibles can start to emerge...
Learning to program in this language is most easily done by studying others' code. There are plenty of textbooks available, although they're often more concerned with understanding existing code than with writing your own. An abundance of easily available code means that, in my opinion, once you have a rough idea of the notation it's best to just delve straight in. The problem here is that, due to the age of the language and the frequent complete lack of comments or documentation in programs, it can be difficult to interpret an author's intent correctly. The conventions of any language change over time, and with one as long-lived as this, even simple code can get lost in translation.
Finally, as if it needed any more unusual features, code in this language has to be run on very specific hardware, which differs from typical computing hardware in many respects. For starters, the machines that run the code often need a lot more human supervision than typical computing platforms. On top of this, the same code can produce substantially different results when run on different systems; in fact, it can produce varied results each time it's run on the same system. There's an interesting corollary to this: these systems are often pleasantly forgiving when dealing with code that contains errors, although equally they can introduce errors of their own when executing it.
Rather than talk any more about this language, it's probably about time I showed an example of a program written in it, so click here to read part two of this article.