Post by raven on Dec 16, 2011 16:13:25 GMT -4
In the National Geographic I own from December 1969, there are 21 Hasselblad photos, almost all taken on the way to the Moon or on its surface, and several run larger than a full-page spread. LIFE magazine also published Hasselblad photos, several of them not in the National Geographic article, and many of those are also larger than one page. And, as mentioned earlier, there was an advertisement in the National Geographic for 68 of the photos on slides. That represents quite a large chunk of all the Apollo 11 Hasselblad images taken.
Post by gillianren on Dec 16, 2011 16:17:12 GMT -4
For heaven's sake, I have books from the early '70s which are full of Apollo pictures. I have dozens of the things around here, and my collection is quite small.
Post by JayUtah on Dec 16, 2011 16:52:08 GMT -4
I too collect historic Apollo books from the 1970s. I have several that are folio size and have high-quality reproductions.
Post by tedward on Dec 16, 2011 16:54:36 GMT -4
Patrick Moore discussed it with Neil Armstrong in 1970 on that programme that gave playdor so much angst; Armstrong mentions the colour photographs returned from the Moon.
It also brings up a nice point about the colour change due to the angle of the sun, as compared with Apollo 12.
Post by nomuse on Dec 16, 2011 17:01:41 GMT -4
"The idea of Google Earth is very old - from the 1970s. Then mostly architects designing buildings used this method. They had to digitize the topographic map - take an aerial photograph, tilt it 90 degrees and then take pictures from the ground to show trees and buildings and fix them to the xyz-coordinates of the topomap. Then they had designed a new building separately ... or they had a miniature model of the building. This was then transferred to the image."

Ridiculous. You can't rotate the subject in a photograph by 90 degrees! What is actually done is that height information taken from various sources is combined with texture information. In the more primitive versions, there is no appropriate texture on the vertical faces. If you are working from aerial photographs taken at an oblique angle, you can texture parts of one visible face (but you still don't get 360-degree coverage), and the results are really, really obvious. I mean, stupid-looking. Clearly artificial.

For comparison, the way some later driving games have done it is essentially to play Google Street View, but with vehicles set up to take radar or laser mapping at the same time. I have a friend who was working on this sort of mapping and reconstruction in the Valley of the Kings. In that case, very specifically, images are taken at street level and matched to the 3D reconstructions developed from multiple depth maps. A version of this going around now abstracts (using edge-detection algorithms and similar) depth data from multiple photographs. It's a largely automated, computer-intensive version of what photogrammetry has been doing for a long time. You are still restricted to what can be interpolated from the available camera angles, however.

The one advantage attempts to simulate the lunar surface have is that, boulders aside, it is gently rolling terrain with no undercuts. That means you can actually represent the depth with a single depth map -- a technique impossible for an object like the LM.
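The height-plus-texture pipeline described above can be sketched in a few lines. This is a toy illustration only, not anyone's actual tool; the function and array names are invented for the example, and numpy is assumed:

```python
import numpy as np

def heightmap_to_mesh(height):
    """Turn a height field into draped-terrain geometry.

    Each grid point (x, y) carries exactly one elevation, which is why
    a single depth/height map can describe gently rolling terrain but
    never an undercut or overhang (two surfaces over the same point).
    Returns vertex positions and the UVs that project a flat aerial
    texture straight down onto them -- the projection that stretches
    texels badly on any steep face.
    """
    h, w = height.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # One vertex per grid point: (x, y, elevation)
    verts = np.stack([xs, ys, height], axis=-1).reshape(-1, 3)
    # UVs in [0, 1]: a straight top-down projection of the aerial photo
    uvs = np.stack([xs / max(w - 1, 1), ys / max(h - 1, 1)],
                   axis=-1).reshape(-1, 2)
    return verts, uvs

# Toy 3x3 "gently rolling" patch with a central rise
height = np.array([[0.0, 1.0, 0.0],
                   [1.0, 2.0, 1.0],
                   [0.0, 1.0, 0.0]])
verts, uvs = heightmap_to_mesh(height)
```

An overhang would need two different elevations over the same (x, y) cell, which this representation simply cannot store; that is the whole force of the "no undercuts" observation.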
"For me to prepare Apollo kind of pictures would take 1 day per flight (I just need the LM and space suits). The only problem is that there seem also to be pictures taken from the Moon's surface from AS 15-17 (probably by robot cameras). For example, the huge stones at the Hadley Rim walls seem to be real and too detailed for the Panoramic camera."

I haven't seen mesh models of that level of detail for the A7L suit yet. I can think of two fully-rigged models of an Apollo-style suit, but neither has the detail required (and neither is that close a resemblance, either). This is a non-trivial modeling task, and what is worse is that with anything less than a fully modern render machine you are going to have to bake a normal map or similar to transfer the necessary modeled detail to a mesh you can actually work with. Still, the modeling and texture needs fade to nothing beside the need to run a radiosity solution. I've been through the process of faking inter-object reflections (in -- shudder -- Bryce) and it is a challenge involving a whole bunch of carefully parameterized extra lights and way, way, way more cycles of render time. If you walked up to WETA today and said "gimme a realistic CGI of astronauts on the lunar surface" they'd say "sure ... give us ten months." Not quite the trivial task you are making it out to be.

"The moon picture books published in the early 1970s were either Disney kind of books with very bad quality pics (from a gravel pit and some with lamp lighting - some from the Langley flight simulator) or geological books where only Metric/Panoramic camera pictures were used. Apollo was a Moon mapping mission - very good topographic maps were generated ... and at least the latest missions took pictures also from the surface (without men)."

No. You are wrong, and I have the pictures in my hands to prove it. As do most of the other posters on this thread.
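For reference, the radiosity solution mentioned above amounts to solving B = E + rho * (F @ B) for the patch radiosities, where every patch's bounced light feeds every other patch. A toy two-patch sketch with made-up numbers, assuming numpy; this illustrates the equation, not any particular renderer:

```python
import numpy as np

def solve_radiosity(F, emission, reflectance, iters=50):
    """Jacobi iteration on the classic radiosity equation

        B = E + rho * (F @ B)

    where B is patch radiosity, E is emitted light, rho is patch
    reflectance and F[i, j] is the form factor from patch j seen by
    patch i. Because every patch couples to every other, the cost
    grows quickly with scene size -- which is why faking inter-object
    bounce with hand-placed fill lights is such a chore.
    """
    B = emission.astype(float).copy()
    for _ in range(iters):
        B = emission + reflectance * (F @ B)
    return B

# Toy scene: patch 0 is a light source, patch 1 is a grey surface.
# Half of patch 0's light reaches patch 1, and patch 1 reflects half
# of what it receives.
F = np.array([[0.0, 0.5],
              [0.5, 0.0]])
E = np.array([1.0, 0.0])
rho = np.array([0.0, 0.5])
B = solve_radiosity(F, E, rho)   # converges to [1.0, 0.25]
```

Even this two-patch case needs iteration; a full lunar-surface scene with thousands of patches is the "way, way more cycles of render time" problem in miniature.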
Post by twik on Dec 16, 2011 17:02:43 GMT -4
"The idea of Google Earth is very old - from the 1970s. Then mostly architects designing buildings used this method. They had to digitize the topographic map - take an aerial photograph, tilt it 90 degrees and then take pictures from the ground to show trees and buildings and fix them to the xyz-coordinates of the topomap. Then they had designed a new building separately ... or they had a miniature model of the building."

Well, yeah. Because we know that the REAL hard part of designing a building is creating a topographic map of the pre-existing site. The actual design of the building is trivial compared to that.

My parents' house was built in the mid-'60s. Gosh, I can remember being so excited to see the 3-D computer graphics of how it was going to look, and the excitement when the mapping plane circled the site. Or not. We got blueprints.
Post by Tsialkovsky on Dec 16, 2011 19:31:51 GMT -4
"nomuse" you continue misinterpreting my text. I told that in my model you take the background from 3D, you tilt it 90 degrees and after that add all the foreground - it is not necessary to tilt that part because it is taken from ground level. Of course I can produce the whole landscape 360 deg from background - why not. You can take a pic from LM, fix it to the map coordinate with 3 points and generate a final still pic - then you go e.g. to another side of LM and take another pic and fix it again to the map and your traces are there from the previous pic, and so on. And the background is always correct as it comes from the 3D. AS11-12 have not been done like this as the landscape is so simple (no mountains). This was how the architects did the landscape models - and still do.
How did you get the mesh models to my description - in a case that the building is not yet constructed you should do that, but if you have real LM and astronauts, you just take their photos.
I have gone through certain places (as AS-15 pics) and there is something wrong e.g. in Front (southern part of mission). But as I said, I am rather sure that also ground level pictures have been obtained at least from Hadley Rill. There was a US plan to send a high quality scanner satellite to moon after LO - maybe it took those pics.
In early pages here was a guy who also did 3D models and compared them with ground pictures ... what happened to him? That was I joined this forum, but he is out now.
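Read charitably, "fix it to the map coordinates with 3 points" describes a 2-D affine registration: three non-collinear point pairs determine such a transform uniquely. A sketch under that assumption (numpy; all names invented for the example):

```python
import numpy as np

def affine_from_3_points(src, dst):
    """Solve the 2-D affine transform (rotation, scale, shear and
    translation) carrying three source points exactly onto three
    target points. Three non-collinear pairs pin it down uniquely;
    a fourth point would already over-determine it.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    A = np.hstack([src, np.ones((3, 1))])  # augment with 1s for translation
    M = np.linalg.solve(A, dst).T          # 2x3 transform matrix
    return M

def apply_affine(M, pts):
    """Map photo coordinates into map coordinates."""
    pts = np.asarray(pts, float)
    return pts @ M[:, :2].T + M[:, 2]

# Register a photo against surveyed map coordinates: here the three
# control points differ by a pure shift of (10, 20).
src = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
dst = [(10.0, 20.0), (110.0, 20.0), (10.0, 120.0)]
M = affine_from_3_points(src, dst)
```

Note that an affine map cannot account for perspective or terrain relief, which is rather the point of the objections in this thread: registering ground-level photos of real 3-D terrain takes far more than three tie points.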
Post by gillianren on Dec 16, 2011 19:40:34 GMT -4
How about you address the very, very basic point that you are wrong about where, when, and how the photos were released?
Post by nomuse on Dec 16, 2011 22:52:47 GMT -4
"nomuse" you continue misinterpreting my text. I told that in my model you take the background from 3D, you tilt it 90 degrees and after that add all the foreground - it is not necessary to tilt that part because it is taken from ground level. And the background is not taken from ground level, thus any significant elevation experiences texture stretch. This is basic stuff. Of course I can produce the whole landscape 360 deg from background - why not. You can take a pic from LM, fix it to the map coordinate with 3 points and generate a final still pic - then you go e.g. to another side of LM and take another pic and fix it again to the map and your traces are there from the previous pic, and so on. One pic of the model or mock-up LM for almost every photograph, actually. The angle it is viewed at is visibly different from picture to picture. This is not un-doable, of course. You need to match up the rotations applied to the camera filming the model or mock-up LM with the movement and orientation in the simulated 3d space. They have some pretty good motion match software for that sort of thing these days. Tougher in the 70's -- Dykstraflex is the state of the art. So you'd have to develop a system to coordinate the numerically-controlled camera with the parameters in the 3d software. And you've still got compositing to worry about, which was very hard to achieve cleanly. Multiple layers, too, as between sets of photographs there are visible shifts within foreground elements. So we are talking something like the old stat camera days at Disney, with multiple elements on plates you can shift and swap to create that illusion of depth. And, well, NASA wouldn't do it that way. Nobody would. Trumbull and Adler and, heck, Inishiro Honda were all model makers. They would have made scale mock-ups of the lunar surface, with teams of artists working FROM (not simply mechanically adapting) the best photo reference available. 
I mean ... the first CGI spaceship to really take the screen was in The Last Starfighter (and it didn't look that great, either). Doing this sort of thing in the '70s is jumping the gun on decades of technological development.

"And the background is always correct as it comes from the 3D. AS11-12 have not been done like this as the landscape is so simple (no mountains). This was how the architects did the landscape models - and still do. How did you get the mesh models to my description - in a case that the building is not yet constructed you should do that, but if you have real LM and astronauts, you just take their photos. I have gone through certain places (as AS-15 pics) and there is something wrong e.g. in Front (southern part of mission). But as I said, I am rather sure that also ground level pictures have been obtained at least from Hadley Rill. There was a US plan to send a high quality scanner satellite to moon after LO - maybe it took those pics. In early pages here was a guy who also did 3D models and compared them with ground pictures ... what happened to him? That was I joined this forum, but he is out now."

Well, I got the mesh models from you suggesting you could throw together convincing images in an afternoon. Unless you happen to have an LM in your backyard! Also, if you keep it in 3D you have control over all the elements and you have no compositing stage. Which is not a small deal when talking about matching up lighting on a model or mock-up with lighting that is baked into the textures you've derived from photographs taken by a satellite.
Post by echnaton on Dec 17, 2011 0:07:38 GMT -4
"nomuse" you continue misinterpreting my text. I told that in my model you take the background from 3D, you tilt it 90 degrees and after that You must meet the empirical requirement of showing the photos were hoaxed before the speculation of how they were otherwise made has any meaning. Your failure to do that leaves the rest of your complaints as irrelevant.
Post by JayUtah on Dec 17, 2011 0:57:48 GMT -4
"nomuse" you continue misinterpreting my text. I told that in my model you take the background from 3D, you tilt it 90 degrees... He's not misinterpreting your text; you're being unclear. What exactly is being "tilted 90 degrees?" And about which axis is it being rotated?
Post by Jason Thompson on Dec 17, 2011 4:11:13 GMT -4
"It is strange - I was using and teaching computer graphics in the early 1970s ..."

Then show how such technology could be used to generate the Apollo images.

A few weeks after the missions. You are confusing 'published' with 'every picture being printed in a free magazine supplement'. You might have had to go and get the pictures from a particular source, but they were available. And I repeat: are you conceding that your original statement about them not being published until the 1990s was incorrect?

New to you does not mean new to everyone. What we have now is new high-res scans of the pictures being made available as internet capabilities make such high-res images easier and quicker to download. Remember when downloading a five-minute, massively compressed, 400 x 300 pixel video online took an hour or so? Now we can have HD YouTube videos. Technology improves, so what we can get online in terms of high-res video and pictures improves.

Oh, very well done. Some photographs taken from orbit at the same time as the EVA photos on the surface have the same shadows, and that is somehow suspect? Really? So what? It was useful in the two later missions.

Again I ask: how were those panoramic and mapping pictures taken? The cameras' film canisters had to be returned to Earth for processing: a procedure very well documented as being performed by an astronaut on an EVA, retrieving them from the camera mounted in the SIM bay of the service module.

The photographic and film trail for that shows quite clearly that what was retrieved by the Russians was a boilerplate which had so many obvious differences from what was stacked and launched on Apollo 13 that no one with eyes could seriously confuse them.
Post by Jason Thompson on Dec 17, 2011 4:12:56 GMT -4
"The moon picture books published in the early 1970s were either Disney kind of books with very bad quality pics (from a gravel pit and some with lamp lighting - some from the Langley flight simulator) or geological books where only Metric/Panoramic camera pictures were used."

Absolute rubbish. Why do you insist on simply lying about material we know was available at that time?
Post by ka9q on Dec 17, 2011 7:04:42 GMT -4
"My touchstone moment at the moment is from Apollo 13 (1995): Tom Hanks dreaming he is on the Moon, and he picks up a handful of lunar soil. And the moment he pours a little from his hand, the illusion is shattered in a cloud of, well, dust."

I had exactly the same reaction to that scene. But it wasn't just the dust. The lighting wasn't right either. In fact, I don't think I've ever seen the lighting in a fictional lunar scene match actual Apollo images. Even Magnificent Desolation had that problem, especially in the fictional "rover wipeout" sequence near the end. The lighting is too soft; it just doesn't look like sunlight. I'm not sure why, as you'd think it would be simple to use one very bright spotlight and to cover the studio walls and ceiling with black fabric to block scattering. Could it be that it's just not possible to uniformly illuminate a large set with just one spotlight set back at a sufficient distance?
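One plausible answer to that closing question is plain inverse-square falloff: a lamp close enough to fit in a studio lights the near edge of the set much harder than the far edge, while the Sun, effectively at one fixed distance for the whole scene, does not. A back-of-envelope sketch, with the geometry simplified to a lamp on the set's long axis (the numbers and names are illustrative only):

```python
def falloff_ratio(lamp_distance, set_width):
    """Relative illumination of the set's far edge versus its near
    edge for a single point-like lamp, by the inverse-square law.
    The Sun's equivalent ratio is 1.0 to better than one part in a
    million for any set you could build.
    """
    near = lamp_distance
    far = lamp_distance + set_width
    return (near / far) ** 2

# A 30 m set lit from 30 m away: the far edge gets only 1/4 the
# light of the near edge. Even from 300 m -- longer than any
# soundstage -- the ratio is still only about 0.83.
ratio_close = falloff_ratio(30.0, 30.0)
ratio_far = falloff_ratio(300.0, 30.0)
```

So the "just move the light far away" fix runs into the walls of any real building long before the falloff becomes invisible, quite apart from the beam divergence and scatter problems ka9q mentions.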
Post by gwiz on Dec 17, 2011 7:15:20 GMT -4
"The moon picture books published in the early 1970s were either Disney kind of books with very bad quality pics (from a gravel pit and some with lamp lighting - some from the Langley flight simulator) or geological books where only Metric/Panoramic camera pictures were used."

Apart from National Geographic and Life, mentioned above, the technical magazine Aviation Week included a good-quality colour section of photos after every mission. I don't think they missed any of the pictures that are now well known. For instance, the hoax theory's C-rock picture was used as an Av Week cover - curiously without the C, which is thus proved to be an artefact of a later scan.

Within a few months of each mission, NASA published a preliminary science report with a lot more pictures, typically the ones of less general interest unless you are a geologist, such as the in situ documentation of the retrieved rocks. NASA also published a large-format book - Apollo Expeditions to the Moon, 1975 - again with a lot of very good quality photos.