Post by mcclellan on Nov 1, 2011 10:13:39 GMT -4
People see UFOs all the time, and many of them can swear by God that they have even seen the aliens. Should we take their word as the truth? How can we be sure people didn't see a satellite? This kind of "witness evidence" isn't proof of anything. The only kind of evidence I would accept is concrete calculations based on concrete films, not just someone's "word of honor".

You challenged us to provide witnesses who saw Apollo spacecraft in orbit and when we do you shift the goalposts. It is apparent you are an intellectual coward.

How can people be sure they see a rocket and not just a satellite when they see a little flying dot in the sky? And this is what I have addressed. Could you swear, every time you saw a flying object in the sky, that it was a rocket or a space station? No, you couldn't. There were plenty of satellites in orbit already in 1969 which could have been mistakenly taken for an Apollo rocket.
Post by Jason Thompson on Nov 1, 2011 10:13:46 GMT -4
If you think that is a validation then you have no clue how real research and analysis is done, mcclellan.
The liftoff acceleration test IS a validated method, using a fixed scale next to a moving object. The same analysis performed on all 12 Apollo Saturn V launches gives the same result. The same analysis performed on every other launch film of any given rocket gives the same result. The method is validated and it works.

A method that gives what looks like a correct answer in one of two examples is NOT thereby validated, especially when it is applied to different stages of the flight. As has already been pointed out to you, the Ares I-X footage and the Saturn V footage analyses use different parts of the plume and are applied when the rocket is doing different things. There is not only no validation, there is no repetition of the method to any properly acceptable degree.
Post by Jason Thompson on Nov 1, 2011 10:14:59 GMT -4
There were plenty of satellites in orbit already in 1969 which could have been mistakenly taken for an Apollo rocket.

Care to back up that assertion with evidence of any satellite that could be visually mistaken for an Apollo craft, especially given data about its location and what it should be doing at that time?
Post by Jason Thompson on Nov 1, 2011 10:17:04 GMT -4
Furthermore, when everything except your analysis agrees with the published data, where is the most likely error to be found?
Post by tedward on Nov 1, 2011 10:30:59 GMT -4
How can people be sure they see a rocket and not just a satellite when they see a little flying dot in the sky? And this is what I have addressed. Could you swear, every time you saw a flying object in the sky, that it was a rocket or a space station? No, you couldn't. There were plenty of satellites in orbit already in 1969 which could have been mistakenly taken for an Apollo rocket.

This is what I do when I look out for the ISS. I look up its passes, I check my watch, I go outside, and as if by magic it appears at the point indicated by the known information, and it lasts for as long as predicted before the Earth's shadow has it. What were the orbital elements of the other satellites? You say there were plenty, so I assume you have examined them; it should be easy for you. BTW, in case you missed it: how does a rocket perform with half a load?
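For anyone who wants to try tedward's check themselves, here is a minimal sketch of the pass-prediction step in Python using the skyfield library and ISS elements from CelesTrak; the observer coordinates and date range are placeholders to replace with your own:

```python
# Minimal sketch: predict when the ISS rises above 10 degrees for an
# observer, as described above. Location and dates are placeholders.
from skyfield.api import load, wgs84

ts = load.timescale()
url = 'https://celestrak.org/NORAD/elements/gp.php?GROUP=stations&FORMAT=tle'
iss = {sat.name: sat for sat in load.tle_file(url)}['ISS (ZARYA)']

observer = wgs84.latlon(51.48, 0.0)            # placeholder: Greenwich
t0, t1 = ts.utc(2024, 6, 1), ts.utc(2024, 6, 2)

times, events = iss.find_events(observer, t0, t1, altitude_degrees=10.0)
for t, e in zip(times, events):
    # Note: a pass is only *visible* if the station is also sunlit and
    # the sky is dark; this sketch reports geometry only.
    print(t.utc_strftime('%Y-%m-%d %H:%M:%S'),
          ('rise above 10 deg', 'culminate', 'set below 10 deg')[e])
```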
Post by JayUtah on Nov 1, 2011 10:34:16 GMT -4
Give me a link then to where you have posted that specific reply. It's hard to keep track of so many pages and so many posts.

apollohoax.proboards.com/index.cgi?action=gotopost&board=theories&thread=2732&post=94492

This is the last time I will hold your hand. You are the one opening and attempting to pursue several simultaneous propositions, so the workload is largely your own doing. It is up to you to follow the discussion you started. And since you admit difficulty in following the discussion, I presume you have conceded your claim that you have adequately addressed all the objections to your claims. Your admission means you cannot possibly know that you have.

Finally, it is not up to me to make and defend some counter-claim regarding the cone. You don't get to assume it's a Mach cone until it's proven to be something else. Instead you're responsible for explaining how it can possibly be a Mach cone when it fails to exhibit the expected properties. You clearly don't know what those properties are.

Mach waves are not ordinarily visible to the naked eye. Mach waves are pressure variations in a fluid. If the fluid is transparent, Mach waves are almost never visible. The disputed effect appears to be an opaque reflective cloud. When Mach waves are occasionally visible to the naked eye (such as in a detonation wave front seen edge-on), they are visible only as a refraction effect, not because of any supposedly entrained aerosol.

Mach waves can only be visualized by very specialized photography. The only method available in 1969 to visualize Mach waves was schlieren photography, which places constraints on the type of light source and on the arrangement among light source, camera, and subject, and requires specialized equipment. Mach waves cannot ordinarily be visualized in, or extracted from, normal photography such as was used to take the disputed photo. The photo in question is not a schlieren photo, as the subject in such photography becomes a silhouette.

Mach waves are lines, not zones of uniform optical density. The disputed effect displays uniform optical density throughout its spatial extent. This departs markedly from Mach waves, which, in three dimensions, are thin shells of sharply compressed fluid behind which the fluid gradually returns to its ambient pressure. In most practical vehicles this relaxation is then interrupted by the next Mach wave exuding from the next appropriate surface. Mach wave visualizations should therefore exhibit a falloff behind the initial sectioned wave front.

Mach waves are sharply demarcated. The disputed effect shows turbulent fluid boundaries, even allowing for the optical dispersion in the photograph. Local variations in the boundary of the disputed effect cannot be attributed to optical or film-grain dispersion. Mach waves are compression wave fronts visualized edge-on and are not subject to turbulent behavior.

Mach waves exude roughly equally from every Prandtl-Meyer expansion opportunity. The disputed effect has its apex at the vehicle's nose. But as is evident in the vehicle's geometry, and as confirmed in film of the Saturn V breaking the sound barrier, other such P-M surfaces exist. The major P-M convexities are the CM/SM interface, the SLA, and the S-II/S-IVB adapter. Minor convexities also occur at the S-IC outboard engine fairings and the tail fins. The effect is not seen at these locations even though the locations themselves are clearly visible in the photo.

You are the one arguing that the photo depicts a Mach cone. Therefore you have the burden to show how it can still be a Mach cone while violating these criteria. It is not my responsibility, or anyone else's, to determine what it is instead, even if I think I know. In general you suffer from an acute desire to shift the burden of proof. You need to research why that's not allowed in a serious investigation.
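For reference, the geometry behind that last criterion: in steady supersonic flow every weak disturbance propagates along a Mach line whose angle from the flow direction depends only on the local Mach number,

$$\sin\mu = \frac{1}{M} \quad\Longrightarrow\quad \mu = \arcsin\!\left(\frac{1}{M}\right)$$

so at, say, M = 1.2 every convex corner on the vehicle would shed a wave at roughly the same angle, μ ≈ 56°; nothing singles out the nose as the only surface that should show one. (The Mach number here is illustrative, not measured from the film.)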
Post by chew on Nov 1, 2011 10:47:01 GMT -4
You challenged us to provide witnesses who saw Apollo spacecraft in orbit and when we do you shift the goalposts. It is apparent you are an intellectual coward. How can people be sure they see a rocket and not just a satellite when they see a little flying dot in the sky?

NASA published the planned ground tracks of their launches and disseminated them throughout the world. When you see something extremely bright, brighter than anything you've seen before, following the published path through the sky, you can be 100% sure it is the reported launch. Nor would I need to have seen it myself: it was a widely reported launch with published information on where one could see it. And my dad saw it.

The disparity between the quality of evidence you demand from us and the quality of evidence you accept as proof of a hoax is ridiculous. Is it so hard to believe that something very bright, sighted by numerous people following the path it was announced to follow, is exactly what you were told it is? You accept without question the math of some nut on the internet, yet you will not accept numerous counter-examples of a higher speed, and simple eyewitness accounts and photographs of Apollo in orbit.

No, there were no satellites that could be mistaken for the third stage of a Saturn V, Instrument Unit, Spacecraft/LM Adapter, Service Module, and Command Module all combined into one humongous spacecraft. It was huge and it was very bright.
Post by JayUtah on Nov 1, 2011 10:49:17 GMT -4
Incorrect. There are no flaws in this method, it worked perfectly well when applied on Ares I-X...

I identified at least three flaws in my list. You have not yet addressed them, so you don't get to say there are none. I'm not being ridiculous. I'm being thorough. Remember, engineering analysis is what I do for a living. While I'm thinking about it, I've asked you numerous times what your qualifications are in this field. Your training and experience in scientific or engineering fields is important in helping the reader decide how well you're able to determine that a method is valid.

There are two other photo analysis methods on the table that do not suffer from methodological flaws and which confirm the expected results. The first is the liftoff footage. We know from the circumstances that the line of sight is almost exactly at right angles to the direction of travel. We also have an independent time base. And finally we have an independent scale reference of exceptional fidelity, known to be in the same location as the vehicle. The second is the sound-barrier footage. When a vehicle breaks the local sound barrier, there is a visible effect that is seen on film in the Apollo 11 ascent. We know what the effect is and we know when it occurs. It occurs precisely as the official dynamic model predicts it should. So we have two independent confirmations of the standard model in Apollo 11's case.

No. Getting the expected answer in one case does not validate the method. You got the expected answer in one case and an unexpected answer in another case. And in the anomalous case there are additional checks that contradict your method and confirm the expected results. In the real world that is very solid proof that your method is invalid. If you instead argue that your method is the only one that's valid, you have additional validation steps to perform. One test case -- especially for a method designed to model a dynamic circumstance -- is not a sufficient validation exercise, especially when there are numerous qualitative inconsistencies between the test case and the subject case.
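As a concrete illustration of the liftoff-footage method JayUtah describes (fixed scale, independent time base, line of sight perpendicular to travel), here is a minimal sketch in Python; the scale size and all tracking numbers are hypothetical placeholders, not measurements from any actual film:

```python
# Sketch of the liftoff-footage method: track one vehicle feature frame by
# frame, convert pixels to metres via a fixed scale reference in the same
# plane as the vehicle, then fit position vs. time to recover acceleration.
import numpy as np

SCALE_LENGTH_M = 110.0    # known size of the fixed scale (hypothetical)
SCALE_LENGTH_PX = 550.0   # its measured extent in the frame, pixels
M_PER_PX = SCALE_LENGTH_M / SCALE_LENGTH_PX   # 0.2 m per pixel

# Feature position sampled every 0.5 s (independent time base), in pixels.
t = np.arange(0.0, 5.5, 0.5)                  # seconds
feature_px = np.array([0.0, 1.3, 5.1, 11.2, 19.9, 31.3,
                       45.0, 61.2, 80.1, 101.2, 125.0])  # hypothetical

y = feature_px * M_PER_PX                     # metres climbed
coeffs = np.polyfit(t, y, 2)                  # y ~ (a/2)t^2 + v0*t + y0
print(f"estimated acceleration: {2 * coeffs[0]:.2f} m/s^2")
```

The point of the fixed scale is that the pixel-to-metre conversion depends on a stationary object of known size, not on any feature of the moving vehicle or its plume.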
Post by grashtel on Nov 1, 2011 10:50:41 GMT -4
Incorrect. There are no flaws in this method, it worked perfectly well when applied on Ares I-X, so don't be ridiculous. The method is thereby validated.

Have you ever heard the phrase "a stopped (analog) clock is right twice a day"? Just because a method works once in one particular situation doesn't mean it will work in even a slightly different one, which is why proper validation involves checking a method against multiple different situations, not just one.
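grashtel's stopped-clock point can be made numerically: a single check at the right moment "validates" a completely broken method, and only checks at other times expose it. A toy illustration, purely hypothetical:

```python
from datetime import datetime, timedelta

def stopped_clock(_now: datetime) -> datetime:
    """A broken 'method': always returns the same reading."""
    return datetime(2011, 11, 1, 10, 40)

# Validate at several different times, not just one.
start = datetime(2011, 11, 1, 10, 40)
for hours in range(4):
    now = start + timedelta(hours=hours)
    print(now.time(), "correct?", stopped_clock(now) == now)
# Only the first check passes; a single-check "validation" was worthless.
```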
Post by Jason Thompson on Nov 1, 2011 10:59:14 GMT -4
By the way, here is an exaggerated but nonetheless illustrative example of how trigonometry affects the proportion of shadow length to object length, and of why you cannot assume that just because the shadow moves its own length the object has done likewise. You can see that in the 45-degree illumination case, the shadow cast by the object after it moves one object length upward still overlaps the shadow from the first position, and that in the more sharply angled case, the object has moved forward twice its own length even though the shadow has only moved one shadow length sideways.
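The geometry is easy to check numerically: for light arriving at elevation angle theta, a vertical displacement d of the object shifts its shadow horizontally by d / tan(theta), so shadow travel equals object travel only at 45 degrees. A small sketch, with angles chosen purely for illustration:

```python
import math

def shadow_shift(object_shift_m: float, sun_elevation_deg: float) -> float:
    """Horizontal shadow displacement for a vertical object displacement."""
    return object_shift_m / math.tan(math.radians(sun_elevation_deg))

for elev in (45.0, 63.43):   # tan(63.43 deg) ~ 2
    print(f"elevation {elev:>5} deg: object moves 1.00 m, "
          f"shadow moves {shadow_shift(1.0, elev):.2f} m")
# At 45 deg the shadow moves 1.00 m; at ~63 deg it moves only 0.50 m,
# i.e. the object travels twice the distance the shadow does.
```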
Post by echnaton on Nov 1, 2011 11:33:10 GMT -4
You keep using that word. I do not think it means what you think it means.
Post by Mr Gorsky on Nov 1, 2011 11:46:35 GMT -4
Incorrect. There are no flaws in this method, it worked perfectly well when applied on Ares I-X, so don't be ridiculous. The method is thereby validated.

A few weeks ago I wrote my very first computer program as part of the Computer Science degree I have just started. It was a pretty simple program in which you enter two numbers and the computer adds them together and displays the result. The lecturer had deliberately left a bug in the program so that it added the second number entered to itself every time, rather than adding the two together. As long as the user enters the same number twice, the program appears to work flawlessly, because it produces the correct answer. You sound like the user of that program: you have worked through it, got the answer you expected, and assumed that it was correct. Life is never that simple.
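A sketch of the kind of bug Mr Gorsky describes, as a hypothetical reconstruction since the original exercise code isn't shown:

```python
def buggy_add(a: float, b: float) -> float:
    return b + b          # bug: should be a + b

# Testing only with equal inputs masks the bug:
print(buggy_add(3, 3))    # 6 -- looks correct
# A different pair exposes it:
print(buggy_add(2, 5))    # 10 -- should be 7
```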
Post by nomuse on Nov 1, 2011 12:40:13 GMT -4
Does JayUtah's link provide enough names for you? If so, are you ready to answer my original question?

People see UFOs all the time, and many of them can swear by God that they have even seen the aliens. Should we take their word as the truth? How can we be sure people didn't see a satellite? This kind of "witness evidence" isn't proof of anything. The only kind of evidence I would accept is concrete calculations based on concrete films, not just someone's "word of honor".

You mean a handful of observers, with stories differing in many basic details, is equivalent to mass observations that generally agree in detail? According to you, then, the Elvis who is still alive and hiding in a trailer park in Jersey is just as well-established as the Elvis who headlined "Aloha from Hawaii."
Post by JayUtah on Nov 1, 2011 12:45:24 GMT -4
Life is never that simple.

Indeed not. Computer programmers sometimes have to develop innovative ways to test complicated computer programs whose ordinary inputs, outputs, and internal state can't be easily examined. The old Netscape browser had a set of "cheat codes" you could type into the URL bar to examine the internal state of the program. There was also a TCP/IP port in the debug version of the browser that one could connect to and have a conversation with parts of the program. Similarly, other kinds of engineers have to come up with innovative ways to measure the performance of their products in order to determine whether the design hypothesis was valid.

Photographic analysis in forensic engineering arose because happenstance events began to be caught on film, and because it became more feasible to put cameras in places we want to observe. It evolved into a science as we discovered that what we intuitively believed about photographic evidence turned out to be simplistic. Intuitively we believe that we can correct perspective merely by elongating the image along the foreshortened axis, and that all the proportions will remain intact. No! The difference between affine space and projective space will eventually bite you, just as it bit poor Jack White in front of the Congressional committee.

Because of the many ways we can make mistakes, methods for creating and validating predictive models have to be robust, not just cursory. First you have to determine whether you're modeling behavior or process. In materials science we often model behavior, because we know that modeling process will be tedious and computationally expensive, and will yield only marginal accuracy above a pure behavior model. That is, if we obtain samples of steel rod of varying diameter, and we measure the yield strength versus the cross-sectional area and determine that the behavior is roughly linear, then we may be safe using a linear model -- not because the underlying process is linear by nature, but because the numbers are close enough. We may determine that behavior is linear for very small diameters but becomes logarithmic for very large diameters. Often we'll then just blend algebraically between the two models for the disputed middle ground. This method works because it's predictive -- that is, the numbers work out. But it's intellectually unsatisfying because the model doesn't reveal anything about the underlying process.

A model that is designed to follow and account for the various physical forces bears the burden to determine what they are, determine how they can be measured or estimated, determine to what extent measurements may legitimately vary from the true value, and then properly express the mathematical relationship among them. Validating such a model is a two-pronged approach: first you analyze the method qualitatively to determine whether all the pertinent factors have been included; then you compare the model against real-world behavior.

mcclellan's method fails several qualitative tests of validity. Assuming, for example, that a fluid-dynamic feature on the edge of a rocket plume represents a suitable fixed reference against which to measure velocity is folly. If one knows enough about rocket plumes, one realizes that even in the short term they are not fixed references of any kind.

A quantitative test must employ more than one data point. f(x) = x and f(x) = x² produce the same result when x = 1, but that doesn't make them equivalent models. Knowing that rocket velocity must be a third-order behavior, perturbed by atmospheric interaction, the proponent has to determine that his measurement method works for some large set of suitable inputs -- in this case, rocket altitudes, plume geometries, and rocket attitudes. One data point for one rocket flight doesn't prove that the mathematical kernel of the model fits the real world.

Further, one would have to know that solid-fueled rockets typically have conical nozzles while liquid-fueled rockets typically have parabolic nozzles, and that this has a dramatic effect on plume geometry and exhaust velocity -- and especially upon fluid shear within the plume. It is the proponent's responsibility to show by measurement what that effect would be on his model. A "validation" in one type of engine doesn't prove that it works for all engine types.

And still further, the dynamic behavior of a rocket during staging is dramatically different from that of a rocket in accelerated flight. During staging the plume is not likely to be propulsive, and the rocket is not accelerating or maintaining a steady-state velocity. A staging rocket is undergoing a number of state changes. Trying one's model during such a sequence of state changes and purporting that it also applies during steady cruise flight is invalid.

And on and on. Coming up with good methods and models is very difficult. It's the hard part of science.
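JayUtah's f(x) = x versus f(x) = x² point can be shown in a few lines: the two models are indistinguishable at the single test input x = 1 and diverge everywhere else, which is why one matching case validates nothing. Purely illustrative:

```python
def linear(x: float) -> float:
    return x

def square(x: float) -> float:
    return x ** 2

print(linear(1.0), square(1.0))     # 1.0 1.0 -- agree at the lone test point
for x in (0.5, 2.0, 4.0):
    print(x, linear(x), square(x))  # diverge at every other input
```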
Post by nomuse on Nov 1, 2011 12:48:56 GMT -4
No, right now you have a dubious homemade method whose flaws you have not addressed, and we have the full weight of an entire industry.

Incorrect. There are no flaws in this method, it worked perfectly well when applied on Ares I-X, so don't be ridiculous. The method is thereby validated.

I just pulled my grandfather's old watch out and had a look at it. The watch reads 10:40. According to both the clock on my computer and my cell phone (both updated automatically), the actual time is 9:47. From that, should I perhaps assume my grandfather's watch is running a little fast but is otherwise in fine shape for an old watch? Please think about that for a moment -- and consider how it applies to your case.