|
Post by ka9q on Jan 1, 2012 19:27:42 GMT -4
That's as good a reason as any!
Edit: to remove quoted author, to remove unintended implication he was speaking in the first person.
|
|
|
Post by nomuse on Jan 1, 2012 19:43:17 GMT -4
Yah. Devil in the details. I've never found a clear way to express it, but I've noticed over and over again a certain hierarchy of observational skills.
You've got the Apollo Denier, who believes they have a better eye for detail than most people -- allowing them to notice "flaws" and "anomalies" that few other people have seen. In their own ranking, they sit at the top for accurate and detailed observation, with both supporters of the official story and the great unwashed public below them in skill.
These are the folks who constantly go on about "This looks exactly like..." -- two mountains that are similar, two photographs that are similar, a lens flare that looks sort of like a light fixture, a torn scrap of kapton that looks like an 8 x 10 glossy, a bit of movement that looks like what they think theatrical flying looks like.
The problem is that, in true Dunning-Kruger fashion, their observational skill is actually well below average. Or, to be fairer, whatever skill they have in actually seeing what is on the print is overshadowed by rampant pareidolia and an urge to make the worst possible inferences (i.e. studio lights, wires, etc.)
It leads to a rather odd form of talking past each other when you try to debate one of these people over what they think they are seeing. As you attempt to point out how the surface resemblance fails when a closer look is taken (for instance, a direct overlay of the mountains in question, or a blink comparison of the "identical" photographs), the Apollo Denier remains stuck in the first iteration.
To them, it is inconceivable that you saw what they saw then moved on beyond it. To them, the only possibility is that you simply aren't sharp enough to make the first observation; that your observational skills are too dull (or too dulled by your emotional need to support the Official Story) to see the "identical" mountain or the "flying on a wire" the Apollo Denier is pointing out.
(It isn't just raw skill...there is more here than solving the Sunday Supplement "One of these drawings is different" puzzle. Understanding why the lens flare is not a light requires real world skills beyond knowing that people filming movies often use lights. It involves knowing, for instance, that such fixtures have specific shapes and a pentagonal frame is not one of them. And being familiar with the look of artifacts of internal reflection in a camera, their various sources, their shapes, their geometric relationships, etc.)
|
|
|
Post by Vincent McConnell on Jan 1, 2012 20:20:02 GMT -4
Another one. This one with a sun. Don't ask about the Gemini One label in the corner. I'll explain if you really want to know, but for the most part, it's a stupid story.
Okay, off the top of my head, the depth of field is way too shallow, especially if the "sun" is supposed to be the sole light source. The shallow DOF is understandable if the scene is a tabletop miniature, but you would never get away with representing it as a lunar landscape. The DOF could be improved by stopping down the lens aperture, but you would lose the blooming effect on the sun (it would get smaller due to decreased exposure), and you would lose most of the lens flare. That's all I have for now.
Could you elaborate as to what "Shallow Depth of Field" means a little more? I plan on building another tabletop lunar miniature and I'd like to take realistic photos for fun. So if I start with the miniature, what is the process I should go through, in detail, to make my photos look like they were taken on the surface of the moon?
|
|
|
Post by trebor on Jan 1, 2012 22:14:58 GMT -4
|
|
|
Post by chrlz on Jan 2, 2012 2:13:27 GMT -4
Okay, off the top of my head, the depth of field is way too shallow, especially if the "sun" is supposed to be the sole light source. The shallow DOF is understandable if the scene is a tabletop miniature, but you would never get away with representing it as a lunar landscape. The DOF could be improved by stopping down the lens aperture, but you would lose the blooming effect on the sun (it would get smaller due to decreased exposure), and you would lose most of the lens flare. That's all I have for now. Could you elaborate as to what "Shallow Depth of Field" means a little more? I plan on building another tabletop lunar miniature and I'd like to take realistic photos for fun. So if I start with the miniature, what is the process I should go through, in detail, to make my photos look like they were taken on the surface of the moon?
Check out the first link given above by Trebor, or, if you really want to do this properly and see the sort of (mathematical) complexity involved in just this single aspect, try this (eek!)..
If you are using a miniature you are going to have a lot of trouble with that aspect, as your camera will be quite close to the subject, which reduces the apparent depth of field - all other things being equal.. The other things? Different cameras have different sensor/film sizes, can be fitted with different lenses with different focal lengths, can be set to different aperture settings, and are finally focused on the region of interest. ALL of those things affect the 'depth of field', and they interlock in complex ways.
So the fact that you are shooting a miniature will be given away in ways you may not expect. Almost all aspects of the camera setup have visible characteristics and telltales - so if you try to pretend you are shooting medium format film, when you actually shoot with a 35mm or small compact digital, it will be spotted. Similarly, if you say that something is distant and it isn't, or you say the lens was set at f8 when it was really f4... again, the truth will be spotted in numerous ways. I'm not just talking about EXIF data (which can be faked reasonably easily and doesn't apply to film images anyway) - a lot of this comes back to photogrammetry, including very subtle visible characteristics like noise/grain textures, and even the nature, as well as the amount, of the blurriness of distant vs. nearby objects (eg Google 'bokeh' and 'circles of confusion'). Someone with wide experience with cameras will spot this stuff in a split second, and if it faced real forensic investigation.. not a chance.
And of course all of the original Apollo film frames have been available for public and scientific scrutiny since the missions came back... The NASA folks have very happily allowed the film to be re-scanned as technology has gotten better over the years, sometimes revealing details that were not spotted at the time of the missions (eg Venus in some frames).
And we haven't even started on the lighting. You're going to need a single, very bright, very distant light source (eg the Sun would work well  ...) and it will have to match the apparent diameter of the sun, otherwise... well, look up 'penumbra'.. Best of luck!
PS - sorry for jumping the gun, AtomicDog!
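That penumbra point can be put in rough numbers - a quick sketch in Python, where the lamp size and distances are made-up illustrative figures, not anything from the photographic record:

```python
def penumbra_width_m(source_diameter_m, source_distance_m, shadow_throw_m):
    """Width of a shadow's fuzzy edge: the light source's angular size
    (small-angle approximation) times the distance the shadow is cast."""
    return shadow_throw_m * (source_diameter_m / source_distance_m)

# The Sun: ~1.39e9 m across at ~1.5e11 m -> angular size ~0.0093 rad (~0.5 deg).
sun_edge = penumbra_width_m(1.39e9, 1.5e11, 2.0)   # shadow cast 2 m past the object
# A hypothetical 0.5 m studio lamp hung 5 m away, same 2 m shadow throw:
lamp_edge = penumbra_width_m(0.5, 5.0, 2.0)

print(f"Sun:  penumbra ~{sun_edge * 1000:.0f} mm")   # crisp edge
print(f"Lamp: penumbra ~{lamp_edge * 1000:.0f} mm")  # visibly soft edge
```

Matching the Sun's hard-edged shadows with an artificial source means either shrinking the source or moving it far away - and both cost you an enormous amount of light.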
|
|
|
Post by ka9q on Jan 2, 2012 11:16:23 GMT -4
Another way to look at the depth of field problem with scale models is to realize that your camera's focal length, image size and working wavelength aren't being scaled down along with the model's dimensions. If you could make a tiny working ultraviolet camera you could avoid the problem.
A more practical workaround is to use a very small f-stop on the lens. This in turn requires a fast sensor and/or intense lighting and/or long exposure times.
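The point that nothing scales down can be made concrete with the standard thin-lens depth-of-field approximation - a sketch only, where the 80 mm lens, f/8, focus distances and 0.05 mm circle of confusion are assumed illustrative values, not Apollo camera settings:

```python
def depth_of_field_mm(focal_mm, f_number, focus_mm, coc_mm):
    """Near and far limits of acceptable focus, via the hyperfocal distance."""
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = h * focus_mm / (h + focus_mm - focal_mm)
    far = h * focus_mm / (h - focus_mm + focal_mm)
    return near, far

# Same lens and aperture; a real scene focused at 10 m vs a 1:10 model at 1 m.
near_full, far_full = depth_of_field_mm(80, 8, 10_000, 0.05)
near_model, far_model = depth_of_field_mm(80, 8, 1_000, 0.05)

print(f"Full scale: sharp from {near_full / 1000:.1f} m to {far_full / 1000:.1f} m")
print(f"1:10 model: sharp from {near_model:.0f} mm to {far_model:.0f} mm")
# Even multiplied back up by 10, the model's slab of sharp focus is a small
# fraction of the real scene's - the giveaway described above.
```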
|
|
|
Post by chrlz on Jan 2, 2012 16:56:23 GMT -4
Another way to look at the depth of field problem with scale models is to realize that your camera's focal length, image size and working wavelength aren't being scaled down along with the model's dimensions. If you could make a tiny working ultraviolet camera you could avoid the problem. A more practical workaround is to use a very small f-stop on the lens. This in turn requires a fast sensor and/or intense lighting and/or long exposure times.
Yes, but as I mentioned above, that approach will bite you in other ways - a very small aperture brings on other issues like diffraction losses, which will manifest as a loss of sharpness. Also, the smaller the camera, the smaller the sensor. Smaller sensors have lower resolution, less dynamic range and higher noise levels, especially when used at higher sensitivities (higher ISO settings). To get anything near the images created by a Hasselblad loaded with medium format film and shot in sunlight in an environment with no atmosphere, you need: - a Hasselblad loaded with medium format film and shot in sunlight in ..
Having said that, I understand Tsialkovsky will be back any moment now, with some samples of his faked images... 
|
|
|
Post by twik on Jan 2, 2012 17:37:43 GMT -4
Ah, yes. I'm sure Mr. T. is only waiting until the holidays are over.
|
|
|
Post by Glom on Jan 2, 2012 18:55:40 GMT -4
Could you elaborate as to what "Shallow Depth of Field" means a little more?
Depth of field refers to the range of distances over which objects remain in focus. A shallow depth of field means that you need to set your focus much more precisely to get your subject in focus, because a little bit closer or further will bring it out of focus. By contrast, a large depth of field means that you can have objects at quite a range of distances all in focus.
The cause is f-stop. A narrow aperture allows for a larger depth of field, useful for the kind of photography done on the Moon. A wider aperture gives a shallower depth of field, which is the price you pay for more exposure, but it is also quite useful artistically in types of photography like portraiture, where you want the focus to be all on the subject and not include anything else. This page from Clavius gives a demonstration.
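One way to put numbers on the narrow-aperture point is the hyperfocal distance: focus there and everything from half that distance to infinity is acceptably sharp. A sketch, with an assumed 60 mm lens and a rule-of-thumb 0.06 mm circle of confusion for medium format (illustrative figures, not the mission settings):

```python
def hyperfocal_m(focal_mm, f_number, coc_mm):
    """Hyperfocal distance in metres for a simple thin-lens model."""
    return (focal_mm ** 2 / (f_number * coc_mm) + focal_mm) / 1000.0

for n in (2.8, 5.6, 11, 22):
    h = hyperfocal_m(60, n, 0.06)
    print(f"f/{n}: focus at {h:.1f} m -> sharp from {h / 2:.1f} m to infinity")
```

Stopping down pulls the hyperfocal distance in dramatically, which is why a small aperture suits scenes where everything from the near ground to the horizon should be in focus.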
|
|
|
Post by nomuse on Jan 2, 2012 19:26:54 GMT -4
Yes, but as I mentioned above, that approach will bite you in other ways - a very small aperture brings on other issues like diffraction losses which will manifest in loss of sharpness. Also, the smaller the camera the smaller the sensor. Smaller sensors have lower resolution, less dynamic range and higher noise levels, especially when used at higher sensitivities (higher iso settings). To get anything near the images created by a Hasselblad loaded with medium format film and shot in sunlight in an environment with no atmosphere, you need: - a Hasselblad loaded with medium format film and shot in sunlight in ..
Aha! Can you explain in more detail? This seems to make an intuitive sense but I'd like to hear more. Would help me understand why those few films that have tried to accurately produce the look of the Apollo landscape were forced to use very intense light sources.
(Actually, I guess my intuition is off. DOF goes up with a narrower aperture -- I learned THAT lesson when shooting 1/72d scale military models with my old Minolta 201 and a couple of macro lenses. But a larger lens, to gather more light in the first place, is going to show more chromatic aberration and various multi-element effects, right?)
So with a really small aperture, anyhow, are we talking diffraction effects because the geometry of the aperture is getting too close to the wavelength of the light in question? Or am I talking through my hat?
|
|
|
Post by ka9q on Jan 2, 2012 23:19:57 GMT -4
So with a really small aperture, anyhow, are we talking diffraction effects because the geometry of the aperture is getting too close to the wavelength of the light in question? Or am I talking through my hat?
The aperture doesn't actually have to get down to the size of the wavelength for the diffraction limit to become noticeable - at that point the diffraction blur would cover the entire image! The closer it gets, the greater the problem. Here's the actual formula: the minimum angle that the camera can resolve is given by arcsin(1.22 * lambda/D), where lambda is the wavelength and D is the diameter of the objective in the same units.
Think about it this way. To resolve a small detail in a scene, the light from points across the detail must take paths through the objective lens that vary by at least a significant fraction of a wavelength; otherwise the system won't be able to tell them apart. The bigger the lens, the smaller the angle by which the incoming light can vary for that requirement to be met. Conversely, as the lens is made smaller, e.g. by stopping it down, the incoming light must come from larger and larger angles for that requirement to be met.
You can also work around the problem by using light of shorter wavelengths, hence my comment about using ultraviolet light. It would be difficult to keep the scene in color, though.
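That formula is easy to play with numerically - a sketch where the aperture diameters and wavelengths are just illustrative picks:

```python
import math

def min_resolvable_angle_rad(wavelength_m, aperture_m):
    """Rayleigh criterion: smallest angle a circular aperture can resolve."""
    return math.asin(1.22 * wavelength_m / aperture_m)

GREEN = 550e-9  # mid-visible wavelength, in metres
for d_mm in (25, 10, 2):
    theta = min_resolvable_angle_rad(GREEN, d_mm / 1000)
    print(f"{d_mm:>2} mm aperture: ~{math.degrees(theta) * 3600:.0f} arcsec")

# Shorter wavelengths resolve finer detail through the same aperture,
# which is the ultraviolet trick mentioned above:
uv_theta = min_resolvable_angle_rad(300e-9, 2 / 1000)
```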
|
|
|
Post by nomuse on Jan 3, 2012 4:24:51 GMT -4
Got it. Or in terms I understand, something like the Nyquist limit. So how important is lambda in real-world situations? Would you notice that a red object was more blurred than a blue one, or is the spectrum of visible light simply too narrow?
At least when I was shooting models I was able to lock down the camera and use 2-second exposures -- otherwise I could never have done it with 50 watt halogen R30's. Not sure you could get your astronaut-actors to stand still that long, though. And motion picture -- like the scenes in "From the Earth to the Moon" -- wouldn't work so hot.
|
|
|
Post by chrlz on Jan 3, 2012 6:09:18 GMT -4
Yes, but as I mentioned above, that approach will bite you in other ways - a very small aperture brings on other issues like diffraction losses which will manifest in loss of sharpness. Also, the smaller the camera the smaller the sensor. Smaller sensors have lower resolution, less dynamic range and higher noise levels, especially when used at higher sensitivities (higher iso settings). To get anything near the images created by a Hasselblad loaded with medium format film and shot in sunlight in an environment with no atmosphere, you need: - a Hasselblad loaded with medium format film and shot in sunlight in ..
Aha! Can you explain in more detail? This seems to make an intuitive sense but I'd like to hear more. It would help me understand why those few films that have tried to accurately reproduce the look of the Apollo landscape were forced to use very intense light sources.
Not quite sure what you wanted explained - maybe ka9q has covered it..? But you've asked me about photography (fool!) so I will answer in great detail anyway! If it's about the lighting and how the film records it..
Back in the good ole days, when we had films like Ektachrome, Kodachrome (transparency or 'positive' films) and Kodacolor (print or 'negative' film), us photography nuts had to very quickly learn about the limitations and best applications of the different types of film. I picked those three because they are probably the best examples to show the *huge* differences in the way they would record any given scene. Things like the size and quality (texture) of the film grain (K'color - large and clumpy; E'chrome - small and restrained; K'chrome - virtually non-existent), the subtle differences in colours (K'chrome - neutral, realistic, long-lasting, sometimes slightly reddish skintones; E'chrome - a bit bluish/pinkish; K'color - very good yellows and skintones), and most importantly their latitude and dynamic range. 
To cut it pretty short, Kodachrome and Ektachrome were both pretty sensitive to correct exposure (limited dynamic range, i.e. the ability to record very dark and very light tones in the same scene) and were also *bad* films for overexposure (hence the terribly washed-out and 'bloomed' images with the sun in them). Kodacolor has a wider dynamic range and handles overexposure better (so it might have been a more suitable film for Apollo in some ways - but it had other problems and I am already digressing..!). But anyway, a photographer from this era (puts hand up!) will be able to pick most images by their 'look', and tell you that one was originally shot on Ekta/K'chrome/whatever. That is MUCH more difficult nowadays, with digital cameras producing much more neutral (boring?) images. (There's a whole new set of problems with digital though, so now we can pick what size the sensor is..)
Anyway.. When you use a film like Ektachrome in anything other than bright lighting, its wheels fall off. Noisy shadows, color goes off, etc. So it really *needed* the Sun - artificial lighting just isn't the same. It's all about quantity of photons! Contrarily, Ekta doesn't handle overexposure well, and anything that exceeds its linear range rapidly gets blown to hell - white, washed-out, bloomed, irrecoverably burnt-out areas. Very characteristic - if those images had been shot on Kodachrome or Kodacolor they would have been noticeably different (not necessarily better!).
Then you have the basic problems of trying to duplicate the lighting on the Moon. You have an intensely bright, very distant and small light source. That means the light rays are almost perfectly parallel - so tiny (yet measurable) penumbras will exist, and there will be single shadows only (except as follows..). And of course not only are there no competing light sources (except the very obvious ones of the astronauts' space suits, the LM, the equipment, any relevant rocks or hills), there is also the huge upwardly reflecting source of the ground, and the heiligenschein effect. All of these factors are obvious or detectable to greater or lesser degrees in the Apollo photographic record. To duplicate that lighting effectively would be pretty much impossible for anything but the most simple posed scene, let alone for hundreds of overlapping scenes across a huge area, some of which were also being filmed by other cameras... And if you are trying to duplicate this stuff using lesser light sources, there are other telltales if the camera is shot at either low shutter speeds (eg movement blur or lack thereof) or very wide apertures (different bokeh and depth of field) that will catch you out - you simply can't win. Does that help? If not, feel free to rein me in to the bit you are most interested in..!
Yes and no... It really depends on the lens design. Yes, as a rule, a larger (better light-gathering) lens will have more CA (and other design problems) than a smaller one, all things being equal (which they never are..!). However, larger lenses are easier to machine accurately - tiny lenses are often not so good simply because they are not precise enough in either grinding or alignment.. But you are right on the money about the more elements, the more flare - that was a bit of a problem with the Hasselblads, made much worse by those horrid reseau plates, which, in my highly opinionated hindsight and given our current knowledge of photogrammetry, were a crappy idea, I reckon. (I'd be interested to hear Jay's take on that..)
Finally, the larger the 'format' of the camera, the shallower the depth of field appears to be - little digitals have very wide depth of field, while large format cameras have a much shallower d-o-f. Very useful if you wish to isolate your subject artistically, not very useful if you are shooting macros... Well, that's sorta right - there's a great coverage of the diffraction topic here: Cambridge in Colour - Diffraction. For 35mm format, diffraction used to become an issue at around f16/f22, medium format a bit higher. For compact digital cameras, with their often tiny lenses and small sensors, it can kick in at f8 or even f5.6/f4!! That's why you rarely see these cameras offering smaller apertures.
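Those f-number thresholds fall out of comparing the Airy disk diameter (roughly 2.44 x wavelength x f-number) with each format's circle of confusion - a sketch using common rule-of-thumb CoC values, not measured figures:

```python
def airy_disk_um(f_number, wavelength_nm=550):
    """Approximate diameter of the diffraction blur spot at the film/sensor plane."""
    return 2.44 * wavelength_nm * f_number / 1000.0

# Rule-of-thumb circles of confusion: ~30 um for 35mm film, ~6 um for a small compact sensor.
for n in (4, 5.6, 8, 11, 16, 22):
    print(f"f/{n:>4}: diffraction spot ~{airy_disk_um(n):.0f} um")
# The spot passes a compact's ~6 um between f/4 and f/5.6, but only
# approaches 35mm film's ~30 um around f/22 - matching the stops quoted above.
```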
|
|
|
Post by nomuse on Jan 3, 2012 7:30:17 GMT -4
Heh. I truly love these places where the basic physics of the universe shows up in unexpected -- even counter-intuitive -- ways right in the middle of something you interact with daily.
|
|
|
Post by chrlz on Jan 3, 2012 7:41:20 GMT -4
Heh. I truly love these places where the basic physics of the universe shows up in unexpected -- even counter-intuitive -- ways right in the middle of something you interact with daily. ;D By the way, I apologise for not noticing your italicised text and thereby spending most of that lengthy post answering something you didn't ask... 
|
|