|
Post by Obviousman on Sept 30, 2011 8:56:02 GMT -4
Thanks Jay! Nice to see the detail which I didn't understand.
|
|
|
Post by JayUtah on Sept 30, 2011 14:50:15 GMT -4
In order to describe accurately what happened to the computer, you have to know how the computer works. So you either must lay a tremendous amount of groundwork, or you must simplify the discussion. If Eyles is speaking to a symposium of guidance system engineers, he can presume we're familiar with digital controls, interrupts, instrument interfaces, and so forth. If he's speaking to a class of sixth-graders, he may not be inclined or able to teach them enough theoretical computer science to understand the whole problem, so the "Radar overloaded the computer" story will have to suffice.
That's the big hole in Patrick1000's approach to historical analysis: he is unwilling to consider the interpretive effect of gloss, simplification, constraints on elaboration, and tone. He expects that this will affect only the inconsequential details of related historical accounts. When, for example, we say that real-time tasks were prioritized, there are a number of ways we can interpret that in detail with respect to the AGC: pre-emptive or cooperative multitasking, waitlist processing, or the restart scenario that purges jobs not "blessed" for that program mode.
Many people instinctively understand the notion of priority, so we can say that under extreme load, high-priority work is preserved. But does that mean moving the task ahead in a round-robin scheduling scheme? Or does it mean moving a waitlist task farther up in the waitlist queue? Or does it mean flagging the task as restartable so that a soft reset keeps it resident? All these are subtle differences in the notion of "priority," and to assume the effects of one where another is actually the one at play is to risk misunderstanding how the system works.
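To make the distinction concrete, here is a toy sketch in Python. It is emphatically not the AGC Executive; it's just an illustration of two of those senses of "priority" (ordering jobs by priority, and protecting jobs across a soft restart), and every name in it is invented for the example.

    import heapq

    class ToyExecutive:
        """Illustrative only; the real Executive worked nothing like this."""

        def __init__(self):
            self._jobs = []   # min-heap: a lower number means higher priority
            self._seq = 0     # tie-breaker so equal priorities stay in order

        def schedule(self, priority, name, blessed=False):
            # 'blessed' marks a job that should survive a soft restart
            heapq.heappush(self._jobs, (priority, self._seq, name, blessed))
            self._seq += 1

        def run_next(self):
            # Sense 1 of "priority": the highest-priority job runs first
            if self._jobs:
                priority, _, name, _ = heapq.heappop(self._jobs)
                print("running", name, "at priority", priority)

        def soft_restart(self):
            # Sense 3 of "priority": a restart purges every job not blessed
            self._jobs = [job for job in self._jobs if job[3]]
            heapq.heapify(self._jobs)

    agc = ToyExecutive()
    agc.schedule(20, "display update")
    agc.schedule(10, "guidance cycle", blessed=True)
    agc.soft_restart()   # the display job vanishes; the guidance job survives
    agc.run_next()       # prints: running guidance cycle at priority 10

The point is not the code, but that "keep the important work" can legitimately mean either the ordering or the purging, and the two behave very differently under load.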
Now the reasonable listener will understand that he's been simplified to. He'll grasp that his objection likely has its answer in a nuance he would come to understand only after study and discussion. But the conspiracy theorist will complain about being misled, or find some nefarious purpose behind the "inconsistency" inherent in different glosses. He'll insist that only one interpretation (his) is correct.
The MIT programmers were actually fairly well equipped with development tools, simulation tools, testing tools, and so forth. Then as now, emulating the AGC on a more capable platform wasn't too bad a chore.
What the MIT programmers had back then, that modern programmers don't have today, is time. They were given weeks to perfect what a modern programmer would be expected to churn out in a couple of days. Consequently the MIT people had time to think, test, design, redesign, analyze, optimize, and improve their code.
Computer programming is a common avocation today, and it's tempting to compare today's practice with that of the Draper lab. But today's software is developed on breakneck timetables compared to back then, and with much less preparation. Today we lean on rapid prototyping and ad-hoc, "agile" methods that dispense with a lot of design exercise. In the 1960s there was more activity on the chalkboard. Not because code was expensive to produce, but because the programmers then had time to delve, to muse, and generally to think hard about their programs at a more conceptual level.
And yes, the hardware of a spaceborne computer has to pass physical muster. A slow computer is tolerable. A small computer is tolerable. A broken or flaky computer is not. So computing power is not as important as computing reliability. A computer that plods slowly along while the world ends around it will always be more spaceworthy than a supercharged CPU and gigabytes of RAM that crap out at the first little bump.
|
|
|
Post by Obviousman on Sept 30, 2011 17:42:38 GMT -4
Not to disparage today's software designers, but another thing about the MIT people was they had to be lean; there was limited space for the code, so it could not be bloated or inefficient.
|
|
|
Post by randombloke on Sept 30, 2011 19:39:48 GMT -4
It's perfectly possible to write lean, efficient code today; you just have to find a corporation willing to drop legacy support and similar compatibility issues. This is probably why Google hardly ever release desktop apps.
|
|
|
Post by ka9q on Sept 30, 2011 21:00:03 GMT -4
Not to disparage today's software designers, but another thing about the MIT people was they had to be lean; there was limited space for the code, so it could not be bloated or inefficient. In fact, they were so good at cramming all that functionality into such a limited space that many of the deniers simply refuse to believe it was even possible.
|
|
|
Post by coelacanth on Sept 30, 2011 23:15:44 GMT -4
When something is cheap, it tends to get consumed more. You can have software engineers beat their brains out trying to get their code to be leaner and leaner; is it worth it? If all of the users of the software have multi-gigahertz processors sitting on their desks, with lots of RAM and disk space, there may not be much point to it. I write code in an interpreted language; it's typically used a small number of times, then thrown away. I could probably make it hundreds of times more efficient in terms of the execution time and memory required, but why spend the effort? I've got computers that run it quickly, so there is simply no reason for me to spend several days to make it take up kilobytes instead of megabytes and run twelve minutes faster. Code that is "inefficient" may (at least in some cases) be the economically efficient solution to the problem at hand.
|
|
|
Post by twik on Oct 11, 2011 16:07:34 GMT -4
Well, P1K got suspended at JREF, which takes some doing. Wonder where the poor guy will end up next?
|
|
|
Post by abaddon on Oct 11, 2011 16:47:59 GMT -4
Well, P1K got suspended at JREF, which takes some doing. Wonder where the poor guy will end up next? DI, ATS, or GLP at a guess.
|
|
|
Post by captain swoop on Oct 11, 2011 17:02:10 GMT -4
See, even when the Troll is banished, his thread lives on, providing some real information. That's why I like this site.
|
|
|
Post by Nowhere Man on Oct 11, 2011 17:21:59 GMT -4
He's only suspended; he'll be back. R.A.F. got it too, along with two others.
Fatrick1000 moved over to the "Why science and religion are not compatible" thread a day or two ago. He knows as much about evolution as he knows about Apollo, at about the same accuracy level.
Fred
|
|
|
Post by ka9q on Oct 11, 2011 20:38:24 GMT -4
What the MIT programmers had back then, that modern programmers don't have today, is time. They were given weeks to perfect what a modern programmer would be expected to churn out in a couple of days. Consequently the MIT people had time to think, test, design, redesign, analyze, optimize, and improve their code.

And they had time to devise more elegant solutions that were often smaller, more reliable and easier to debug. Elegance in programming -- what some programmers might define as software as a work of art, not just a purely functional entity -- is almost a lost art today.

When you can take the time to strive for elegance, many other things fall into place. You look for and find the best algorithms, and every introductory CS student is taught that the better algorithm wins over brute force almost every time. You generally find workable ways to deal with limited resources (CPU time, memory) without having to sacrifice that elegance. You get a program that's easy to understand, which is important for finding bugs now and adding features later. And you get a program that's generally more reliable than something you throw together with a minimum of work.

This is not to say that every programmer today should always strive for elegance. If you're writing a program that will be run once and then thrown away, your time writing it is more important than the computer's time and memory in executing it. But when you want something that will last, elegance is usually worth it.

I really noticed a sea change when we went from 16-bit CPUs to 32-bit CPUs. In a 64KB address space, you didn't have the luxury of implementing something in half a dozen slightly different ways; you had to pick the most general way and make that be the only way. You had to think carefully about the functions you provided and whether they really were both necessary and simple and general enough to be worth the valuable memory space they occupied. When we got 32-bit machines, everything changed. Yes, I breathed a sigh of relief as did everyone else, but I think we also lost something.
|
|
|
Post by JayUtah on Oct 12, 2011 18:01:08 GMT -4
Elegance in programming -- what some programmers might define as software as a work of art... Eh, sort of. Elegance in any form of creative engineering, including computer programming, generally follows certain ideals.
- It solves the problem, not the solution.
This seldom occurs among platform evangelists: people who advocate only one narrow form of computer software technology for all applications. Elegant solutions derive clearly from the functional requirements, not from the vocabulary of the predetermined technology.
One of my clients was trying to do Unix system programming in Java. They were a Java language house, and that's what they knew. But Java is specifically intended to be isolated from the underlying system, while system programming is meant to be very intimate with it. Hence they spent all their programming time bashing problematic and error-prone holes through the Java abstraction so that they could get done what they needed to.
"Solving the solution" is the phrase I apply to spending all your time dealing with the behavior of your desired enabling technology, rather than solving the problem the customer wants solved.
The AGC was an elegant solution because it provided the solution from the bare hardware up. When abstract elements such as the Executive or the interpreter were added, they were added only because they were recognized as the best way to solve the specific problems for that solution. They were synthesized from the requirements, not brought along as baggage.
- It is complete and concise.
The elegant solution contains everything you need and nothing you don't. This is what we mean when we say the Apollo lunar module is an elegant flying machine. It is meant to do one task, to do only that task, and to do that task very well even if something goes wrong.
Designs should be as simple as possible, but no simpler. We point to Google's map-reduce algorithm as a simple, yet powerful and extensive tool. To map something means to apply the same operation to each of a list of something. For example, to format a mailing list you would apply the same formatting template to each of the entries in the list. This is attractive in the high-performance computing world because a mappable problem can be broken down and each map operation assigned to its own processor.
Let L be the set of contacts, {a, b, c, d, ...}. Let f be a function that returns a contact properly formatted for an address label. The mapping M = f:L is the set {f(a), f(b), f(c), f(d),...}.
To reduce a set of something means to replace it with a simpler form or representation. For some contact on our mailing list, the individual's name may be represented by a long string of arbitrarily written text, such as "Mr. Harry S Truman, President". A function to reduce that may produce the string of tokens <salutation> <first-name> <additional-names> <surname> <title>. That in turn may be acceptable input to another function that returns the token <name>. And that may be passed to the formatter mapping function to help determine where to start a new line on the address label.
A more practical reduction would be taking in a representation of an audio signal from a phone line and reducing it to a high-level representation of the phrase "account balance," which can be used to route a caller's transaction to the appropriate software module or human operator.
Or to revisit the prior example, you can have a reduction operator L' = insert(L, a) which returns the list L with the item a inserted into its proper position, say in alphabetical order.
The ability to think about solutions in terms of abstract operations like this, and then to actually compose the program out of them, is one possible measure of simplicity.
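A toy rendering of that composition in Python (the contact data and label formatting here are invented for the example, and this is of course not Google's actual MapReduce machinery):

    from functools import reduce

    contacts = ["harry s truman", "bess truman", "margaret truman"]

    def format_label(contact):
        # the map step: apply the same formatting function to every entry
        return contact.title() + "\n<street>\n<city, state, zip>"

    labels = list(map(format_label, contacts))    # M = f:L applied to every element

    def insert_sorted(ordered, item):
        # the reduce step: fold one item at a time into an ordered list,
        # i.e., repeated application of L' = insert(L, a)
        return sorted(ordered + [item])

    mailing_order = reduce(insert_sorted, contacts, [])
    print(labels[0])
    print(mailing_order)

Each piece is independently testable, and the map step parallelizes trivially, which is exactly why the high-performance world likes this decomposition.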
Inelegant systems tend to err on the side of wastefulness and complexity. The AGC designers could have said, "We have 93 programs, therefore we need 93 core sets in erasable memory." But instead they analyzed the programs and found that not all of them needed core sets, and not all the programs could run at the same time anyway. So they settled on a number of core sets that fit the problem.
- It is evidently correct.
Elegant software shows you that it solves the problem. You can tell what it's doing by looking at it, and you can tell that it's doing the right thing.
This is why I lament that computer science curricula rarely teach functional programming any more. Functional programming teaches you to break down problems in a different way than imperative programming does. Functional programmers are more likely to decompose a problem correctly, usually into its base case and then its compositional case. Their code then consists of a small number of well-partitioned, stateless, visibly correct cases, even if they are implemented iteratively.
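A trivial sketch of that decomposition, in Python rather than a proper functional language (the example is mine, invented purely for illustration):

    def total(items):
        if not items:              # base case: an empty list sums to zero
            return 0
        head, *tail = items
        return head + total(tail)  # compositional case: the first item plus the rest

    print(total([3, 1, 4, 1, 5]))  # 14

Both cases are visible at a glance, and each can be argued correct on its own.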
In contrast, many recent graduates -- many of whom are brought up on Agile, "grope your way along" methods -- produce code that may pass a small set of functional tests, but is not visibly verifiable or comprehensible.
The AGC created a certain elegance through the interpreter and high-level data structures such as vectors. You could see in the code things like VCOMP and VMAG that referred to the "computer within a computer" to compute vector complements, magnitudes, dot-products, and other operations from which larger guidance solutions were made. You could see the logic of the solution at the vector-arithmetic level, rather than have to wade through a combination of high-level logic and low-level vector-component arithmetic. Modularity is one of many properties of computer programs that makes them transparent -- i.e., easy for a human to follow.
There's a caveat there. The biggest mistake made in commercial computer programming today is premature optimization. That is, programmers believe they know how their algorithm will perform, and take steps to improve the performance without first gathering data. The result is an algorithm that is visibly more complex than the simple one. Not only is it more difficult to implement correctly, test completely, and maintain profitably, it is often actually slower in the common case than the brute-force algorithm would have been. When I advise clients, I give them some rules of thumb.
- Simple algorithms work faster when N is small. N is always small.
N here is the problem size, such as "Alphabetize the names of the 20 million people in the city." That takes a certain amount of time on a certain computer. What if you double the number of people it has to alphabetize? Does your processing time double? Or does it increase proportionally according to some other function of N? That behavior is embodied in the algorithm you choose, and the field of "computational complexity" studies that behavior.
However, in practical terms the advantages of an algorithm that scales better as N grows are almost always eaten up by the constant overhead that complex algorithms impose on smaller problems. N hardly ever varies that much in practice. You might choose a well-behaved sorting algorithm, believing that the rewards when N grows from 100 to 1,000 will be worth the effort. But nearly every time, your well-behaved algorithm takes about as long as, or longer than, a simple one to sort 100 items. And N never gets to 1,000; it wallows around in the 100-300 range.
Simple algorithms are easier for programmers to understand, and therefore cheaper to implement and debug. When you see so-called well-behaved algorithms that cache intermediate results or rely on preserving complex state, you find that all the bugs gravitate toward the add-ons designed to speed the code up.
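A rough illustration in Python (timings vary from machine to machine, and the example is deliberately simple-minded):

    import bisect
    import random
    import timeit

    names = ["name%03d" % i for i in range(200)]   # N wallows around 200
    random.shuffle(names)
    target = "name150"

    # Brute force: just scan the list.
    scan = timeit.timeit(lambda: target in names, number=10000)

    # "Well-behaved": sort, then binary search. Great asymptotics,
    # but the setup cost is paid on every one-off query.
    def sort_then_bisect():
        ordered = sorted(names)
        i = bisect.bisect_left(ordered, target)
        return i < len(ordered) and ordered[i] == target

    clever = timeit.timeit(sort_then_bisect, number=10000)

    print("linear scan   : %.4f s" % scan)
    print("sort + bisect : %.4f s" % clever)

At this size the scan wins comfortably. The clever version pays off only if the sorted order is built once and reused, and that cached state is exactly where the bugs tend to collect.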
- Use brute force until measurements show that brute force is inadequate.
Most programmers are notoriously bad at knowing where their programs slow down, and most project managers have no clue how much performance they actually need to attain.
The surprising economics of computer programming these days is that it's often cheaper to buy better hardware than to maintain software that tries too hard. A programmer costs his employer on the order of $120,000 a year, or about $500 a day. If you instruct a programmer to spend a week devising and debugging a complex algorithm to optimize for, say, storage space, you'll spend $2,500 on that solution. $2,500 can buy a lot of RAM or hard disk space.
- Don't be clever; be clear.
Clarity is the zen of elegance. Too many of my clients believe that in order to be elegant, their solutions have to exploit some little-known feature or side effect, create some fascinating new interaction, or solve a straightforward problem in some hitherto untried way. That's fine if your goal is to give yourself something fun to blog about. But if you want to make money and avoid errors, stick with the things you know work.
IBM's virtualizing mainframe operating systems are a good example, however, of a clever yet elegant solution. Their problem was to maximize the use of tremendously expensive hardware when no single software solution would be right. The answer was to virtualize the hardware in software.
Exactly. Commercial software engineering these days is more about getting to market in less time than about creating the elegant solution. You have to argue persuasively to project managers that a little attention toward elegance up front will pay dividends in the future. But the type of programming that the AGC exemplifies is still practiced today. You still have critical embedded systems such as flight controllers and x-ray machine controllers that need elegance in order to achieve rigor.

We may have 32-bit address spaces today, but that's not yet license to implement the same thing a dozen slightly different ways. That's a dozen slightly different implementations that have to be tested, and a dozen slightly different methods that may confuse a new programmer. Lots of RAM should let you have 93 core sets, one for each program; that eliminates a what-if in your design. But it shouldn't be license to write 200 more programs you don't need.
|
|
|
Post by ka9q on Oct 13, 2011 5:42:04 GMT -4
There's a caveat there. The biggest mistake made in commercial computer programming today is premature optimization. Agreed; I say this a lot too. It's often true, but not always. I can cite several examples of major breakthroughs resulting from the discovery of lower-complexity algorithms, i.e., algorithms whose operations increase less quickly with N even though they may have a larger fixed constant. The first example that comes to mind is the fast Fourier transform (FFT), but there are others. Still, I'll be the first to write a simple linear search instead of calling a more efficient but more complex one when I am sure that my value of N is small. Good comments overall, Jay.
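For the curious, here is a quick toy check in Python with numpy (assuming numpy is installed; nothing rigorous, just showing that the O(N log N) FFT computes the same thing as the brute-force O(N^2) transform taken straight from the definition):

    import numpy as np

    def naive_dft(x):
        # O(N^2): build the full transform matrix and multiply
        N = len(x)
        n = np.arange(N)
        k = n.reshape((N, 1))
        return np.exp(-2j * np.pi * k * n / N) @ x

    x = np.random.random(256)
    print(np.allclose(naive_dft(x), np.fft.fft(x)))   # True

Same answer, wildly different cost once N gets big.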
|
|
|
Post by coelacanth on Oct 13, 2011 7:30:03 GMT -4
Simple algorithms work faster when N is small. N is always small. Hey, your N may be small! I have some jobs from 2003 that would still be running now if I used a small-N algorithm.
|
|
|
Post by ka9q on Oct 13, 2011 14:22:30 GMT -4
Jay has a point, though, when you consider how important SIMD machines have become in recent years. They don't change the order of your algorithm, but they do divide N by some fixed constant.
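Crudely, it's the difference between touching one element per step and handing the whole array to vectorized hardware. A rough sketch with numpy, whose compiled inner loops generally exploit SIMD where the build allows (the exact numbers will vary):

    import timeit
    import numpy as np

    a = np.random.random(1_000_000)
    b = np.random.random(1_000_000)

    def one_at_a_time():
        # the way a plain interpreted loop sees the problem
        return [x + y for x, y in zip(a, b)]

    def whole_array():
        # still O(N) work, but many elements are handled per step
        return a + b

    print("element loop : %.4f s" % timeit.timeit(one_at_a_time, number=3))
    print("vectorized   : %.4f s" % timeit.timeit(whole_array, number=3))

The order of the algorithm doesn't change; the constant in front of it does. (To be fair, much of the gap in this particular comparison is interpreter overhead rather than SIMD alone.)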
|
|