Programming History – How Programmers Worked with Expensive, Rare, Room-Sized Computers
history
I assume they weren't able to sit in front of a computer all day the way we do today. So how did they write their programs? Did they write them out on paper and type them in later, when a computer became available? And how did they do their testing?
Related Solutions
You might find this Wikipedia article to be interesting and informative.
Microsoft started development of the .NET Framework in the late 1990s, originally under the name Next Generation Windows Services (NGWS). By late 2000, the first beta versions of .NET 1.0 had been released.
An old press release for the .NET family alludes to its previous title of Next Generation Windows Services (NGWS). If sarcasm is more your cup of tea, this announcement from The Register is interesting as well.
And according to this Wikipedia article on Microsoft codenames, it appears that .NET/NGWS went by the names Lightning and Project 42.
Project Lightning was the original codename for the Common Language Runtime in 1997. The team was based in Building 42, hence Project 42. "Next Generation Windows Services" appeared in the earliest press releases about the upcoming platform.
Wikipedia links to an interview of Jay Roxe and an article from The Age as evidence for this information.
Jay tells us that development had begun in earnest at least by 1997, as that's when he joined the team:
OK, well let me give you the history. I joined what is now the .NET Framework team, or the Common Language Runtime team, back in November of 1997. [This was] back when it was called Project Lightning, then it became COM+, then it became Project 42, then we had this nice little re-org that made it Project 21 – we lost half the team.
And so, I wrote things like String and StringBuilder, and I wrote the initial implementation (although I did not own it forever) of all of the base types like Int [16, 32, and 64], and double, and all of those. I did some of the work on Object and was Dev Lead for the System.IO classes, the globalization, and a bunch of the collections work as well.
A blog post by Jason Zander on an unrelated topic gives us the interesting tidbit of information that the "Lightning" codename was chosen by the founder of the CLR team, Mike Toutonghi:
The original name of the CLR team (chosen by team founder and former Microsoft Distinguished Engineer Mike Toutonghi) was "Lightning". Larry Sullivan's dev team created an ntsd extension dll to help facilitate the bootstrapping of v1.0. We called it strike.dll (get it? "Lightning Strike"? yeah, I know, ba'dump bum).
And James Kovacs's C#/.NET History Lesson fills in a few more of the gaps. This Stack Overflow question is also worth a read, for those interested in history.
Technically speaking, it was more a case of "software rot". The flight control software was recycled from the earlier Ariane 4 rocket, a sensible move given how expensive it is to develop software, especially mission-critical software, which must be tested and verified to far more rigorous standards than most commercial software needs to be.
Unfortunately, nobody bothered to test what effect the change in operating environment would have; or if they did, the testing wasn't thorough enough to catch the problem.
The software was built on the assumption that certain parameters (thrust, acceleration, fuel consumption rates, vibration levels, and so on) would never exceed certain values. In normal flight on an Ariane 4 this wasn't a problem, because those parameters would never reach invalid values unless something had already gone spectacularly wrong. The Ariane 5, however, is much more powerful, and values that would have seemed absurd on the 4 could quite easily occur on the 5.
I'm not sure which parameter it was that went out of range (it might have been acceleration, I'd have to check), but when it did, the software couldn't cope: it suffered an arithmetic overflow for which insufficient error-checking and recovery code had been implemented. The guidance computer started sending garbage to the engine nozzle gimbals, which in turn began pointing the engine nozzle more or less randomly. The rocket started to tumble and break up, and the automatic self-destruct system detected that the rocket was in an unsafe, irrecoverable attitude and finished the job.
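For the curious, the failure class here was an unchecked narrowing conversion: a wide floating-point value was squeezed into a 16-bit signed integer without anyone verifying it would fit. The real flight code was Ada, and the values below are invented purely for illustration, but a rough sketch of the difference between a blind conversion and a defensive one looks like this:

```python
# Illustrative sketch only: the real flight code was Ada, and all of the
# values here are made up. The point is the failure class - an unchecked
# narrowing conversion from a wide float to a 16-bit signed integer.

INT16_MIN, INT16_MAX = -(2 ** 15), 2 ** 15 - 1

def convert_naive(v: float) -> int:
    """Blind truncation to 16 bits: out-of-range input silently wraps."""
    n = int(v) & 0xFFFF
    return n - 0x10000 if n > INT16_MAX else n

def convert_checked(v: float) -> int:
    """Defensive version: refuse values a 16-bit integer cannot hold,
    forcing the caller to handle the out-of-range case explicitly."""
    if not INT16_MIN <= v <= INT16_MAX:
        raise OverflowError(f"{v} does not fit in a signed 16-bit integer")
    return int(v)

# A reading at "Ariane 4 scale" fits; one at "Ariane 5 scale" wraps into
# garbage with no error raised - the kind of nonsense that ends up being
# fed to whatever is downstream.
print(convert_naive(12000.0))   # 12000 - fine
print(convert_naive(64000.0))   # -1536 - garbage, silently
```

In the actual incident the conversion did raise an exception, but there was no handler for it, so the inertial reference system simply shut down; either way, the lesson is that the out-of-range case has to be dealt with deliberately rather than assumed away.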
To be honest, this incident probably didn't teach any new lessons; these kinds of problems had been unearthed before in all manner of systems, and there were already strategies in place for finding and fixing such errors. What the incident did do was ram home the point that being lax in following those strategies can have enormous consequences: in this case, millions of dollars of destroyed hardware, some extremely pissed-off customers, and an ugly dent in the reputation of Arianespace.
This particular case was especially glaring because a shortcut taken to save money ended up costing a huge amount, both in money and in lost reputation. If the software had been tested as robustly in a simulated Ariane 5 environment as it had been when it was originally developed for the Ariane 4, the error would surely have come to light long before the software was installed in launch hardware and put in command of an actual flight. Moreover, if a developer had deliberately thrown some nonsense input at the software, the error might even have been caught back in the Ariane 4 era, since it would have exposed the inadequacy of the error recovery that was in place.
So in short, it didn't really teach new lessons, but it rammed home the dangers of forgetting old ones. It also demonstrated that the environment within which a software system operates is every bit as important as the software itself. Just because the software is verifiably correct for environment X doesn't mean it's fit for purpose in the similar but distinct environment Y. Finally, it highlighted how important it is for mission-critical software to be robust enough to deal with circumstances that supposedly can't happen.
Contrast flight 501 with Apollo 11 and its computer problems. While the LGC (the Lunar Module's guidance computer) software suffered a serious glitch during the landing, it was designed to be extremely robust: it remained operational in spite of the program alarms that were triggered, never put the astronauts in danger, and was still able to complete its mission.
Related Topics
- How did programming work when programmers used punchcards
- Programming History – Why Object-Oriented Paradigms Took Long to Go Mainstream
- History of Debugging – Techniques Before Protected Memory
- Why were punch cards used for programming
- Version Control – How Did Version Control Work on Microcomputers in the 80s and 90s?
Best Answer
Circa 1974, you'd sit at a convenient desk and write your program out longhand on paper. You'd test it by walking through it in your head using test data. When you were satisfied that your program was correct, you'd go to the punch card room and transcribe your program onto punch cards, one 80-character line per card. You'd also punch cards for any data your program might need. Then you'd punch a few incredibly cryptic cards in Job Control Language (JCL) that would tell the computer how to compile and run your program, and what input/output devices it would use. Finally, you'd take your cards to the 'IO Window', where you'd hand them to a clerk.
When your turn came, the clerk would load your cards into a hopper and push a button to tell the computer to start reading them. The output of your program would generally go to a line printer or a drum plotter. When your program was done, the clerk would collect your cards and your hard-copy output and put them in a pigeonhole where you could pick them up. You'd pick up the output, review the results, and repeat the process. It would take anywhere from 20 minutes to 24 hours for a complete cycle. You can probably imagine that you were not happy when you found that the only output was a printed message from the compiler telling you that your program had a syntax error.
You might also have access to a computer through a teletype, so you could actually have an interactive session with a remote computer. However, typing on a teletype was physically painful (very stiff keys, and loud), so you still generally wrote and tested your program on paper first.
By 1976, UNIX systems and minicomputers like the PDP-11/70 were becoming more common. You usually worked in a room full of video terminals with 25x80 character displays, connected to the computer via serial lines. Crude, but not too dissimilar from working at a command prompt today. Most editors back then were pretty crappy, though. Vi was an amazing improvement.