An Examination of Post-Classical Physics


An Examination of Post-Classical Physics

Unread post by Keith Ness » Sun Aug 17, 2014 3:23 pm

Introduction

Unlike in other fields, in science we must always accept only the simplest theory which best fits all the properly interpreted, properly represented empirical evidence which we have been able to acquire to date (fitness trumps simplicity; and fitness and simplicity break each other’s ties). Some may argue that falsifiability is also a requirement. However, I would argue that falsifiability is merely another trump; that a falsifiable, unfalsified theory beats any theory which is either falsified or unfalsifiable, while the simplest, best-fitting among unfalsifiable theories is still the best among unfalsifiable theories. I would also add, though it should go without saying, that science also requires a theory be logically viable. With these things in mind, I now point out some serious scientific limitations to post-classical physics.

Chapter 1: Basic Problems

First of all, I concede that we can explain all four states of the bucket and water in Newton’s bucket experiment (1) without using an absolute spatial frame of reference. For example, measurements of inertia in the water and bucket will indicate all blocked linear motion and establish that the bucket and water must be spinning, even when they are spinning with each other (and even if we ignore the inertia of Earth’s gravity). But even when we just consider a rocket in free space, undergoing inertia in one given direction due to thrust, which then experiences a cessation of all inertia without any inertia in any other direction, we can, from the well-tested laws of motion, deduce that the thrust stopped, and that the rocket is still moving in that given direction and at the same speed as when the inertia stopped. And, even though there is no relative motion and no inertia from changes in motion in the exceptional case of one object in a perfectly circular orbit around another, with both objects always keeping the same face to each other, there is still inertia in the form of surface gravity. As such, we could still deduce their rotational and revolutionary rates from measuring their surface gravities. Furthermore, by sending a rocket from one of the two objects to between the two objects, and thrusting the rocket in various directions, we could deduce the directions of revolution (and rotation) of the two objects from the rocket’s resulting trajectories. So, with sufficient measurement of inertia and its consequences across time and space, we can always establish motion without referring to an absolute spatial frame.

However, this in no way demonstrates the premise, universal to all the many forms of Einsteinian relativity, that there is no absolute time and/or space (2), or that localized rates of time and/or space are even possible, let alone that localized rates of time and/or space exist. Furthermore, there are kinds of absolute time and space which would exist even if localized rates of time and/or space did exist. For example, a common rate of time between any set of objects is a universal, or absolute, rate of time for that set of objects. With that in mind, how do we know two objects are existing in different rates of time if there is no common rate of time between them? For example, if you say one object aged more than another in the last ten years, then the ten year period is a common rate of time between them. You need a common rate of time (such as the ten year period, in this example) in order to compare the aging rates of two objects, period; you can’t make a valid comparison without it. If there were no way to make valid comparisons, then absolutely nothing would be the slightest bit predictable. The same logic applies to space.

Einstein’s assertion that there is no absolute simultaneity (3) is even more basically invalid: simultaneity of events is relative to the observer only if the observer does not take into account and factor out the linear speeds, directions of motion, and distances, relative to the observer, of the phenomena carrying the information about the events to the observer. And even when observers don’t take such things into account, for observers on Earth directly observing events on Earth, light is so fast that it carries information practically instantly anyway.

Next, while De Sitter’s binary stars demonstrated that light’s vacuum linear speed is, like that of a normal wave, independent of the source, Einstein’s notion that the linear speed of light is independent of the observer (4) (5), a notion which some textbooks and professors still promote to this day (6) (7), is in an entirely different situation. Not only is such a notion incredibly far-fetched and problematic, it is also resoundingly falsified by simple astronomical observations, such as those performed by an astronomer named Ole Roemer back in the 1600s (8).

In Roemer’s observations, he noted that, the faster Earth was moving towards Jupiter, the less time it took Io to pass behind Jupiter, in exact correlation with the linear speed at which Earth was moving towards Jupiter; and, conversely, the faster Earth was moving away from Jupiter, the more time it took Io to pass behind Jupiter, in exact correlation with the linear speed at which Earth was moving away from Jupiter. The simplest theory which explains the observed variations in the duration of Io’s passage behind Jupiter is that the linear speed of the Earth, relative to the light waves carrying the information about the passage of Io behind Jupiter, was modifying the linear speed of those light waves relative to Earth. In other words, just like everything else in the known universe, the linear speed of light is dependent on the observer.
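The raw size of the timing effect Roemer measured can be checked with simple bookkeeping that is neutral between interpretations: it only tracks how the Earth–Jupiter distance changes during one Io orbit, so each successive eclipse signal has a little more (or less) distance to cover. The figures below are standard textbook values, not taken from the post.

```python
# Rough numerical check of the timing shift in Roemer's Io observations.
# Assumed textbook figures: Io's orbital period ~1.769 days, Earth's
# orbital speed ~29.8 km/s, light's vacuum speed ~2.998e8 m/s.
C = 2.998e8                  # light's vacuum speed, m/s
IO_PERIOD = 1.769 * 86400    # Io's orbital period, s (~152,800 s)
V_EARTH = 29.8e3             # Earth's orbital speed, m/s

def eclipse_interval_shift(radial_speed):
    """Change in the observed interval between successive Io eclipses
    when Earth moves at radial_speed along the Earth-Jupiter line
    (positive = receding): the next signal must cover the extra
    distance radial_speed * IO_PERIOD."""
    return IO_PERIOD * radial_speed / C

# Maximum effect: Earth moving directly away from Jupiter.
shift = eclipse_interval_shift(V_EARTH)
print(f"interval shift: {shift:.1f} s per orbit")  # roughly 15 s
```

At Earth's full orbital speed this comes to about fifteen seconds per Io orbit, which accumulates into the minutes-long seasonal discrepancy Roemer reported.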

Roemer’s observations also falsify Einstein’s similarly far-fetched, problematic proposition that an object’s rate of time slows down as its linear speed increases towards the linear speed of light in a vacuum, stops when it reaches the linear speed of light in a vacuum, and would go backwards if the object were to go any faster, but the object can’t go any faster, because an object going backwards in time would violate causality (9). The question is, whose causality would it be violating? Certainly not the causality of its own inertial frame of reference, because Einstein also proposes that the rate of time of any single inertial frame of reference appears constant to observers in that inertial frame of reference (10). Thus, Einstein must be talking about the causalities of inertial frames of reference relative to which the object is moving faster than light’s vacuum linear speed. In other words, if an object were moving faster than light’s vacuum linear speed relative to an observer, then that, according to Einstein, would be a violation of causality. And, since we know that light’s linear speed is dependent on the observer, we know that, whenever the Earth gets closer to another light source, such as the Sun or another planet in our Solar system, as the Earth orbits around the Sun, the linear speed of the light heading towards Earth from that light source is, relative to the observer on Earth, faster than light’s vacuum linear speed.

Chapter 2: Evidence

Then there’s the scant evidence which people have misinterpreted and/or misrepresented in claiming it confirms or proves Einsteinian relativity.

For example, the Hafele & Keating experiments (11), with atomic clocks on airplanes, were only capable of demonstrating effects on a scale of nanoseconds, due to their relatively small potential for speed and altitude variations and flight durations. The engineer A.G. Kelly has also fairly convincingly demonstrated that their results had been unwarrantedly manipulated from the raw data, and that the raw data was not compatible with Einsteinian relativity (12).
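Whatever one makes of the underlying theory, the nanosecond scale claimed above can be confirmed with the standard low-speed expansion of the dilation formula the experimenters themselves used; the flight figures are assumed round numbers comparable to Hafele & Keating, not data from the experiment.

```python
# Order-of-magnitude check that airliner clock experiments live at
# nanosecond scale. Assumed round figures: ~250 m/s cruise speed,
# ~50 hours total flight time.
C = 2.998e8        # light's vacuum speed, m/s
v = 250.0          # airliner speed, m/s
t = 50 * 3600.0    # flight duration, s

# Low-speed expansion of the kinematic dilation factor: dt/t ~ v^2 / (2 c^2)
dt = t * v**2 / (2 * C**2)
print(f"accumulated offset: {dt * 1e9:.0f} ns")
```

The kinematic term alone works out to a few tens of nanoseconds over the whole flight, which is why the experiment's margin for error is so narrow.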

The GPS system, for its part, is classified, run by the US DoD, and was developed under Cold War ultra-secrecy with the objective of nuclear deterrence (13) (14) (15). Why, when time discrepancies due to localized rates of time would supposedly show up increasingly, dramatically, and over a relatively short amount of time as a satellite orbits the Earth, are we so fixated on a classified government system of satellites to give us an answer? Why are there no discussions of dedicated science satellite missions to test for localized rates of time, for example? Furthermore, Bernard Burchell makes some solid points on GPS at his website (16), where he goes into how consumer GPS receivers don’t have atomic clocks, how the satellite clocks are centrally reset weekly anyways, and how varying latitudinal rotational rates and varying altitudes of receivers relative to satellites would make relativistic correction around the world so complex that receiver-side relativistic corrections, which do not occur, would be necessary.
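For context, the figure usually quoted in GPS discussions, a predicted offset of roughly 38 microseconds per day, comes from the standard formulas; the sketch below reproduces that prediction under assumed textbook orbital values. It shows the size of the claim at issue, not evidence either way.

```python
# The relativistic offset usually quoted for GPS clocks (~38 us/day),
# computed from the standard formulas. This is the *prediction* under
# discussion, not independent evidence; orbital figures are assumed
# textbook values.
import math

C = 2.998e8          # light's vacuum speed, m/s
GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6    # ground clock radius, m
R_GPS = 2.656e7      # GPS orbital radius, m

v_orbit = math.sqrt(GM / R_GPS)              # ~3.87 km/s

# Gravitational term: higher potential -> satellite clock predicted fast.
grav = GM * (1 / R_EARTH - 1 / R_GPS) / C**2
# Kinematic term: orbital speed -> satellite clock predicted slow.
kin = v_orbit**2 / (2 * C**2)

per_day = (grav - kin) * 86400 * 1e6         # microseconds per day
print(f"net predicted offset: {per_day:.1f} us/day")
```

The two terms pull in opposite directions, with the gravitational term dominating at GPS altitude.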

For another example, light may bend in the presence of gravity (17) or not (18), but even if it does, I’ve never seen anything indicating that gravity must warp time-space in order for light to do so. It’s not surprising that gravity could indirectly influence light through confounding variables, such as coronas, atmospheres (even Mercury’s tenuous atmosphere), and plasmas in magnetospheres and stellar winds (19). It’s also not surprising that gravity, as a distinctly different type of energy more associated with matter, could directly influence light without warping time-space (regardless of whether light has mass or not), given that matter does in the cases of reflection, refraction, etc.

And it’s like this across the range of claims of evidence supporting Einsteinian relativity, from the earliest to the present. Despite the claims, there is no resoundingly clear, accessible, widespread evidence of the kind we would expect to be necessary to justify accepting so complicated and far-fetched a theory as Einsteinian relativity as suddenly and strongly as people have.

Chapter 3: Superluminal Motion?

Astronomers report plasmas blasting out of some quasars faster than light’s vacuum linear speed (20). The explanations provided by promoters of Einsteinian relativity suggest that the linear speeds are optical illusions, because the blasts are heading in our direction, and the angles of the blasts are relatively close to our line of sight, and, so, as the plasma follows behind the light it emits towards us, the light it emits is compressed (21) (22). However, this is a basic feature of astronomical observation generally. We would, for example, take into account, and factor out, Earth’s angle of motion and linear speed relative to Jupiter and Io in order to assess the rate of Io passing behind Jupiter. If we were not to, then we would have to interpret the compression and expansion of the light train from Jupiter and Io to Earth, resulting in different apparent rates of time for Io passing behind Jupiter, as being due to something else, such as that Io was speeding up and slowing down that much for some reason. So, astronomers must take into account the angle of motion of the object they’re observing in order to get an accurately comparable measure of its linear speed in any situation. It’s obvious. Thus, I highly doubt that they had not already done so in the cases of superluminal plasma observations, and the first source I refer to in this chapter confirms this doubt.
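For reference, the projection formula which sources (21) (22) invoke can be sketched with illustrative numbers (no specific quasar is being modeled here): a blob at true speed below c, aimed close to the line of sight, yields an apparent transverse speed above c.

```python
# The apparent-transverse-speed formula cited for "superluminal" jets:
# v_app = v sin(theta) / (1 - (v/c) cos(theta)), for a blob moving at
# true speed v at angle theta from the line of sight. Example numbers
# are illustrative, not from any specific observation.
import math

def apparent_speed(beta, theta_deg):
    """Apparent transverse speed, in units of c, for true speed beta*c
    at angle theta_deg from the line of sight."""
    th = math.radians(theta_deg)
    return beta * math.sin(th) / (1 - beta * math.cos(th))

print(apparent_speed(0.95, 10.0))  # ~2.6: sub-light blob, apparent > c
```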

The Big Bang theory interprets the redshift of distant galaxies, which increases with distance from us, as evidence that the space in the universe itself is expanding, and expanded from a single point in the past. However, astronomers routinely observe some redshifts in those distant galaxies so high that, if the redshifts were due to motion of the galaxies away from us, then they’d be moving away from us well faster than light’s vacuum linear speed (23). Promoters of Einsteinian relativity, tending to prefer a stretchy time-space structure Big Bang universe, accept the redshift as due to motion away from us, and suggest that there is a comoving coordinate system (24) (25), in which an object has two types of location, coordinate and physical. They further say that the observations are due only to the physical locations moving apart that fast, while the coordinate distance between us and those galaxies is remaining largely the same as the structure of time-space expands faster than light, and that makes the observations not a violation of Einsteinian relativity.
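For concreteness, the arithmetic behind "well faster than light's vacuum linear speed" can be sketched: reading a redshift z as straight Doppler motion via the naive formula v = cz exceeds c for any z above 1, while the special-relativistic Doppler formula never does, which is exactly the tension the comoving-coordinate account is meant to dissolve. The value z = 3 below is illustrative, not a specific observation.

```python
# Two ways of converting a redshift z into a recession speed (in units
# of c). z = 3 is an illustrative value.
def naive_velocity(z):
    """v/c under the naive Doppler reading v = c*z."""
    return z

def relativistic_velocity(z):
    """v/c under the special-relativistic Doppler formula,
    v/c = ((1+z)^2 - 1) / ((1+z)^2 + 1), always below 1."""
    s = (1 + z) ** 2
    return (s - 1) / (s + 1)

z = 3.0
print(naive_velocity(z), relativistic_velocity(z))  # 3.0  0.882...
```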

However, Einstein also proposes, in his general relativity, that the massiveness of a mass-bearing object in any inertial frame of reference appears constant to observers in that inertial frame of reference (26), so it’s only a matter of the observer’s perspective in both his special and general relativity. Furthermore, the physical locations those promoters mention still represent valid frames of reference with respect to each other; therefore, the physical linear speeds which those promoters ascribe to those distant galaxies relative to our physical location would still represent a violation of Einsteinian relativity, even if a second, coordinate system of locations did exist.

Chapter 4: Intergalactic Redshift

While it’s true that at least part of the universe might be expanding without Einsteinian relativity, direct observation of the proposed motion of those distant galaxies is impossible at the distances we are from them within any near-term time scale (27), so the redshift ascribed to expansion of the universe could be due to something else. However, with the Big Bang theory being the most popular recessional velocity theory, it is likely that most arguments against the fitness of alternatives to recessional velocity have been developed by supporters of the Big Bang theory, which is in turn based on Einsteinian relativity. In light of the discussion of Einsteinian relativity above, we should thus question those arguments with particular keenness. And when we do, we find they fail scrutiny.

For example, supporters of recessional velocity sometimes refer to Olbers’ paradox, of a sky not filled with light at night, as evidence against a static, infinite universe, and in favor of an expanding, finite universe (28). They say things like, “If the universe is infinite in extent, then its luminosity should also be infinite” (29). However, waves dissipate in amplitude according to an inverse square law; and all materials, including stars, absorb, re-radiate, and reflect some electromagnetic energy away from the observer. If you extend these properties out to infinity along with the light-producing capacity of the universe, then you end up with infinite light-dissipating, absorbing, and reflecting capacities, as well as infinite light-emitting capacities, and the average lightness of the infinite universe is determined by the ratio of these two opposing characteristics. For though a characteristic might extend across space forever, it might do so at a finite, variable amount per finite volume.

For an example of how the ratios of these opposing characteristics could work out, imagine we are looking out in one direction, and in the first ten parsecs out from us there are ten stars of the same amplitude, evenly spaced (one star per parsec) on our line of sight. There would be ten times as many stars on that line within ten parsecs of us as within one parsec of us; yet, thanks to the inverse square law by which light dissipates, there wouldn’t even be twice as much amplitude reaching us from that whole line of ten stars as reaches us from the star one parsec away alone. Actually, when I plot the total amplitude reaching us on a graph, extending this same rate and spacing of stars out to thirty parsecs, the total amplitude reaching us still doesn’t exceed about 1.7 times the amplitude reaching us from the nearest star in that line alone. The same is true when I extend the graph out to one hundred parsecs. Finally, as those extensions suggest, the plot appears to approach a horizontal asymptote, which would mean that the amplitude would increase forever with increasing distance and still never cross a certain fixed threshold.
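The plot described above can be reproduced in a few lines (following the post's usage of "amplitude" for the inverse-square quantity): identical stars one parsec apart along a single line of sight, each contributing 1/n² relative to the nearest star. The running total stays bounded however far the line extends.

```python
# Numerical version of the line-of-sight scenario: identical stars
# spaced one parsec apart along one line of sight, the n-th star
# contributing 1/n^2 (inverse square law) relative to the nearest one.
def total_relative_amplitude(n_stars):
    return sum(1.0 / n**2 for n in range(1, n_stars + 1))

for n in (10, 30, 100, 10_000):
    print(n, round(total_relative_amplitude(n), 4))

# The partial sums converge toward pi^2/6 ~ 1.6449, so the total never
# crosses that threshold no matter how far the line of stars extends.
```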

Regardless of what the plot does, though, I offer another, simpler, more general perspective. When you have finite light, it is easy to see that you have to take into account finite amplitude dissipation with distance travelled to determine a specific, finite, observed brightness (which may be less than what the naked eye can see); and, on the other hand, if light sources are distributed in space at a finite rate per finite volume, then you also have to accept that you only have infinite light when you take into account the stars at "infinite distance" from us. And, in that case, you also have to take into account "infinite amplitude dissipation". And, if you say star A's light added a specific, finite, greater-than-zero amount to the amplitude of star B's light, then star A's light cannot possibly have undergone infinite amplitude dissipation.

Beyond that, supporters of recessional velocity point out a variety of other reasons why known mechanisms would not work as alternatives (30) (31) (32), and conclude that, short of a novel mechanism never yet put forward, the observed redshift must be due to recessional velocity. Some of the reasons they present again seem invalid; for example, if particles are more or less evenly distributed throughout space, then I would expect the changes the particles cause to light’s momentum to even out across wave-fronts over time, and thus that changes in momentum alone would not cause blurriness to increase with distance, much like how, under the right conditions, light can go through air, glass, and water without significant loss of clarity. But, regardless, one of the possibilities those reasons do not exclude is that the difference in gravity, density, temperature, and/or chemical composition between intergalactic and intragalactic space could account for the difference in redshift of light from the two media. All that possibility needs is a description of the mechanism behind it, perhaps something like what Ashmore (33) is working on.

By contrast, the Big Bang theory faces a much more challenging problem set. Einsteinian relativity, on which the Big Bang theory is based, is invalid in a variety of ways, as discussed above. And how would we even directly observe comoving coordinate systems, or the mysterious dark energy? Further, a time before time and a space beyond space are oxymorons, which on their own make an infinite universe more fit than a finite universe. Yes, the statement, “there was no time before time X,” is using a time coordinate system to describe a time before time X, and is therefore self-contradictory; the same logic applies to space. So, while neither a finite nor an infinite universe may be falsifiable, a universe which is infinite in all directions of time and space is the only logically viable option.

Chapter 5: Quantum Mechanics

Quantum mechanics theorists inaccurately claim that their electron beam double-slit experiments (34) (35) (36) demonstrate that a particle is simultaneously both a wave and a particle. Any waves naturally interfere with themselves after passing through any two slits of sufficient size and proximity to each other in any sufficiently thin, flat sheet of any sufficiently solid material. In the electron beam double-slit experiments, they don’t use just any particles, they use charged particles; and they don’t use just any materials for the borders between slits, they use a charged wire. So the patterns the charged particles make on the detector screen could be due to interactions of the charged particles’ electromagnetic fields with the charged wire’s electromagnetic field. It should also be obvious that if you diffract molecules with a standing wave of light, then the problem remains the same. There are so many uncontrolled confounding variables in such experiments that their interpretations are unconvincing.
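The uncontested starting point of the paragraph above, that any classical wave interferes after passing through two slits, can be sketched numerically. The slit separation and wavelength below are illustrative values, and no claim is made here about electrons.

```python
# Ideal far-field two-slit interference for a classical wave, ignoring
# the single-slit envelope: relative intensity cos^2(pi*d*sin(theta)/lam)
# for slit separation d and wavelength lam (illustrative values).
import math

def intensity(theta, d, lam):
    """Relative two-slit intensity at angle theta from the axis."""
    return math.cos(math.pi * d * math.sin(theta) / lam) ** 2

d, lam = 10e-6, 500e-9          # 10 um slit separation, 500 nm wave
print(intensity(0.0, d, lam))               # central maximum: 1.0
first_null = math.asin(lam / (2 * d))       # angle of first dark fringe
print(round(intensity(first_null, d, lam), 6))
```

Bright and dark fringes alternate wherever the path difference d·sin(theta) hits whole and half multiples of the wavelength, for any classical wave whatsoever.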

There are more recent quantum mechanics interpretations which are even less credible. For example, Kim et al.’s “Delayed Choice Quantum Eraser” experiment comes with the amazing interpretation that its results are based on the consciousness of the observer (37) (38). Yet, if you look at the setup, you can clearly see that basic physics determines the outcomes. They take two wave trains of light coming out of the two slits, and send them through a non-linear optical crystal, a common industrial tool (39), to get two pairs of beams, which they then split and send in various directions, various numbers of times. Two go to detectors without overlapping beams at, or lenses between them and, their detector; thus they should exhibit no interference at their detector, and, surprise, surprise, they don’t. If you have overlapping beams, then you get interference (40), so the two pairs of beams which they overlap and then send to detectors together should interfere, and, surprise, surprise, they do. If you cross two beams, then they interfere with each other while crossing, so the other two beams they send through a lens to focus near their detector should exhibit interference at their detector, and, surprise, surprise, they do. While I find it hard to believe that they didn’t know this before even going into the experiment, and find it curious that they would deliberately present a scientifically invalid interpretation as scientifically valid, for now I merely point out that this interpretation making popular circulation without any significant resistance completely fails scientific standards.

Conclusion

Considering all of the above, there are serious scientific limitations to post-classical physics. I am concerned that, if we languish inside these limitations long enough, then increasingly unscientific methods will increasingly become the most widely accepted scientific methods.

References

1. [Online] http://functionspace.org/articles/zen/9.
2. [Online] http://en.wikipedia.org/wiki/Absolute_time_and_space.
3. [Online] http://en.wikipedia.org/wiki/Relativity_of_simultaneity.
4. [Online] http://www.bartleby.com/173/7.html.
5. [Online] http://www.relativityoflight.com/EinsteinMaxwell.pdf.
6. Henderson, Hugh. Kaplan SAT Subject Test Physics 2013-2014. New York : Kaplan, 2012.
7. [Online] http://www.phys.unsw.edu.au/einsteinlig ... _logic.htm.
8. [Online] http://pw1.netcom.com/~sbyers11/litespd_vs_sr.htm.
9. [Online] http://en.wikipedia.org/wiki/Special_relativity.
10. [Online] http://www.dummies.com/how-to/content/e ... ivity.html.
11. [Online] http://hyperphysics.phy-astr.gsu.edu/hb ... irtim.html.
12. [Online] http://www.cartesio-episteme.net/H%26KPaper.htm.
13. [Online] http://www.nyjournalofbooks.com/review/ ... martphones.
14. [Online] http://courses.washington.edu/gis250/lessons/gps/.
15. [Online] http://en.wikipedia.org/wiki/Global_Positioning_System.
16. [Online] http://www.alternativephysics.org/book/GPSmythology.htm.
17. [Online] http://www.earthmagazine.org/article/be ... relativity.
18. [Online] http://www.newtonphysics.on.ca/eclipse/index.html.
19. [Online] http://en.wikipedia.org/wiki/Stellar_magnetic_field.
20. [Online] http://physicsfromtheedge.blogspot.com/ ... s-ftl.html.
21. [Online] http://math.ucr.edu/home/baez/physics/R ... minal.html.
22. [Online] http://en.wikipedia.org/wiki/Superluminal_motion.
23. [Online] http://www.youtube.com/watch?v=myjaVI7_6Is.
24. [Online] http://en.wikipedia.org/wiki/Big_Bang.
25. [Online] http://en.wikipedia.org/wiki/Faster-than-light.
26. [Online] http://abyss.uoregon.edu/~js/21st_centu ... lec06.html.
27. [Online] http://abyss.uoregon.edu/~js/cosmo/lectures/lec14.html.
28. [Online] http://en.wikipedia.org/wiki/Olbers%27_paradox.
29. [Online] http://www.astro.utu.fi/~cflynn/astroII/l12.html.
30. [Online] http://news.sciencemag.org/2001/06/tire ... s-re-tired.
31. [Online] http://en.wikipedia.org/wiki/Tired_ligh ... e-Zwicky-6.
32. [Online] http://www.astro.ucla.edu/~wright/tiredlit.htm.
33. [Online] http://www.lyndonashmore.com.
34. [Online] http://en.wikipedia.org/wiki/Double-slit_experiment.
35. [Online] http://philsci-archive.pitt.edu/3816/.
36. [Online] http://www.hitachi.com/rd/portal/resear ... eslit.html.
37. [Online] http://www.youtube.com/watch?v=sfeoE1arF0I.
38. [Online] http://en.wikipedia.org/wiki/Delayed_ch ... tum_eraser.
39. [Online] http://www.directindustry.com/prod/cohe ... 69569.html.
40. [Online] http://www.madsci.org/posts/archives/20 ... .Ph.r.html.


Re: An Examination of Post-Classical Physics

Unread post by Zyxzevn » Mon Aug 18, 2014 11:02 am

Keith, can I separate your post into different parts?
Science introduction
I don't think science is much unlike other fields, and I don't think that the "simplest" theory should be the only accepted theory. The simplest theory is the one that is best to work with. If we use gravity as an example, and I am building a house, no one would care if I used g = 9.81 m/s² instead of Newton's or even Einstein's gravity formulas.
And maybe there is even a fractal gravity theory that we don't know about, that is much more exact.
So my idea is that the most practical theory should be preferred, but behind it there might be a more complex phenomenon.


Relativity
You might be interested in my post on Relativity.
http://www.thunderbolts.info/wp/forum/phpB ... =3&t=15109
This is about Special and General relativity.

I think it would be interesting to understand the assumptions that were made to construct these models.
General relativity assumes that we can use formulas that manipulate space-time coordinate systems.
Special relativity assumes that the physics of light is reversible: the speed of light is constant at the receiver, as it is at the sender.

The problem you described is with the time-space manipulation by gravity proposed in General Relativity.
The reason it is there is that Einstein (and Feynman) proposed that some change in time is needed for the conservation of energy. When light goes to a different potential in gravity, this should give a change in the frequency of light. But from measurements this does not appear to be true, so there should be another way energy is conserved.
(See relativity thread)

Red-shift
I think that the redshift is a simple mirage. Somehow the light changes in frequency through something other than expansion. And this makes everything a lot simpler.
I have started a thread somewhere that discusses that too:
http://www.thunderbolts.info/wp/forum/phpB ... =3&t=14811

Wave-particle duality of electron
I am a fan of Quantum Physics, and I think that it is the basis from which the other models should be derived.
There is certainly some kind of wave phenomenon going on with the electron. And I think there is an instant connection between two electrons over a long distance, which is visible in the spin property of the electrons.

I think that the best model of light is that photons are actually non-existent and are just mathematical constructs. And a similar construction might be true of electrons. So why do we see particle-like phenomena? That is because all interactions between systems are always in packages. This also explains time: time is related to the number of packages that are exchanged.
So, from my point of view, we should construct our world from what we know from quantum physics.

Alternatively there is the unquantum effect, which is still partially unexplained by my ideas. ;-)
http://www.thunderbolts.info/wp/forum/phpB ... =3&t=14973


Re: An Examination of Post-Classical Physics

Unread post by Keith Ness » Tue Aug 19, 2014 2:34 pm

Thank you for your polite response.

"I don't think science is much unlike other fields, and I don't think that the "simplest" theory should be the only accepted theory."

Making simplicity second only to fitness is the only sensible prioritization. Make simplicity less important than that, and you'll be wasting time; make simplicity more important than that, and you'll be outright wrong. In fields where simplicity is not explicitly prioritized in precisely this way, it is more likely to be misprioritized at great expense to human progress. Recall the hostility towards Galileo's promotion of Copernican heliocentrism, which was only better than Ptolemaic geocentrism because it was simpler and at least equally fit.

"And maybe there is even a fractal gravity theory that we don't know about, that is much more exact.."

Well, exactness is a matter of fitness, and therefore influences what we accept as the simplest possible theory.

"...the most practical theory should be preferred, but behind it there might be a more complex phenomenon."

This is in agreement with my description of science: simplicity is second only to fitness, so science is not biased against discovering more complex phenomena.

Thanks for your links; even on first glance, they have provided me with some good reminders.


Re: An Examination of Post-Classical Physics

Unread post by ZenMonkeyNZ » Sat Sep 20, 2014 4:57 am

You may well be aware of this, but if not: look to Karl Popper's Logic of [Scientific] Research (Logik der Forschung) for the best definition of simplicity and the role it plays in science. I have not come across a better explanation of simplicity in 20 years of study of the philosophy of science.

Nice summary of ideas. Thanks for the post.

PS — nice to see some examination of the Newtonian concept of absolute space and frames of reference


Re: An Examination of Post-Classical Physics

Unread post by Keith Ness » Mon Nov 10, 2014 6:41 pm

Thanks. I have not read that yet. From what I have read of Popper, he generally seems to be on the right track.

Does anyone want to comment on my example application of the inverse square law of amplitude dissipation to Olbers' paradox? It likely empirically invalidates Olbers' paradox more succinctly and compellingly than anything else I've ever seen.


Re: An Examination of Post-Classical Physics

Unread post by Plasmatic » Sun Nov 16, 2014 7:02 pm

Popper's book is "Logic of Scientific Discovery", not "Research"...
"Logic is the art of non-contradictory identification"......" I am therefore Ill think"
Ayn Rand
"It is the mark of an educated mind to be able to entertain a thought without accepting it."
Aristotle


Re: An Examination of Post-Classical Physics

Unread post by Zendo » Mon Nov 17, 2014 12:04 am

Keith Ness wrote:Thanks. I have not read that yet. From what I have read of Popper, he generally seems to be on the right track.

Does anyone want to comment on my example application of the inverse square law of amplitude dissipation to Olbers' paradox? It likely empirically invalidates Olbers' paradox more succinctly and compellingly than anything else I've ever seen.
I think Olbers' paradox could be explained by an electromagnetic wave losing energy the further it travels (also known as tired light), or, as you say, by the inverse square law of amplitude dissipation. This would explain the dark spots of the cosmos without having to invoke the Big Bang.

It's interesting to note that all imaging techniques we have for taking pictures of the cosmos require some sort of exposure time to "capture enough light" from distant objects. While this sounds like a given, it lends credence to the idea that we need a certain amount of kinetic energy from an object before it pops up as a result in our telescopes (a small nudge to Planck's early loading theory of light here).

To make sense of the faint and distant signals sent from the Rosetta probe, for example, the signalling source needs to be accurately pointed at a very high gain antenna here on Earth. While a sun or galactic core is an incredible source of EM radiation, its signal is not of infinite extent and power, much like our probe out there.

With what we know in total about imaging the cosmos, it's nonsensical to even use this paradox as evidence, yet Wikipedia says: "The darkness of the night sky is one of the pieces of evidence for a non-static universe such as the Big Bang model"

It's like being in a pitch-black room and explaining: "The reason I can't see anything with my eyes right now is that every light source around me is rushing away from me so fast that all the light that should reach me doesn't." :lol:

nick c
Site Admin
Posts: 2483
Joined: Sun Mar 16, 2008 8:12 pm
Location: connecticut

Re: An Examination of Post-Classical Physics

Unread post by nick c » Mon Nov 17, 2014 10:55 am

Zendo,
Scott's view of Olbers' paradox:
http://electric-cosmos.org/Olber.pdf

Zendo
Posts: 78
Joined: Thu Apr 03, 2014 2:57 pm

Re: An Examination of Post-Classical Physics

Unread post by Zendo » Mon Nov 17, 2014 2:06 pm

nick c wrote:Zendo,
Scott's view of Olber's:
http://electric-cosmos.org/Olber.pdf
Ahh, thanks! He managed to put it very briefly and nicely :) It's funny because I first saw the "paradox" mentioned in this thread. It's such a simple problem that gets taken way out of proportion in the Wikipedia article :lol:

Keith Ness
Posts: 41
Joined: Fri Jun 06, 2014 6:53 am

Re: An Examination of Post-Classical Physics

Unread post by Keith Ness » Tue Nov 25, 2014 12:07 pm

Zendo wrote:
I think Olbers' paradox could be explained by an electromagnetic wave losing energy the further it travels (also known as tired light) or, as you say, by the inverse square law of amplitude dissipation.
I would like to point out that there are two types of energy here, amplitude and frequency, and tired light theory deals strictly with loss of frequency, not amplitude.

Zendo
Posts: 78
Joined: Thu Apr 03, 2014 2:57 pm

Re: An Examination of Post-Classical Physics

Unread post by Zendo » Wed Nov 26, 2014 2:24 am

Keith Ness wrote:
Zendo wrote:
I think Olbers' paradox could be explained by an electromagnetic wave losing energy the further it travels (also known as tired light) or, as you say, by the inverse square law of amplitude dissipation.
I would like to point out that there are two types of energy here, amplitude and frequency, and tired light theory deals strictly with loss of frequency, not amplitude.
Yes, very important distinction. I had actually thought tired light involved both frequency and amplitude loss, but it's only a hypothesis to explain frequency shifting over large distances.

Also I remember this paper: http://arxiv.org/abs/astro-ph/0401420 - Which I guess is a more likely candidate than tired light in terms of explaining cosmological redshift.

ZenMonkeyNZ
Posts: 63
Joined: Tue Nov 19, 2013 7:19 am

Re: An Examination of Post-Classical Physics

Unread post by ZenMonkeyNZ » Wed Nov 26, 2014 6:01 pm

Plasmatic wrote:Popper's book is "Logic of Scientific Discovery" not "research"...
I gave the literal translation. In German, Forschung is usually translated as research. The German title does not include "scientific" either, which is why I square-bracketed it.

Zendo wrote: Yes, very important distinction. I actually thought tired light was related to frequency and amplitude loss. It's only a hypothesis to explain frequency shifting over large distances.

Also I remember this paper: http://arxiv.org/abs/astro-ph/0401420 - Which I guess is a more likely candidate than tired light in terms of explaining cosmological redshift.
There are several different tired light models. These include explanations involving photon-photon interaction, inelastic collisions between photons and free electrons or between photons and molecules, interaction between the light emitted by galaxies and the molecules or ions of the intergalactic medium [1], and so on. I have read a good summary account of the different models but cannot find the source at the moment, sorry.

[1] Andre Assis, Relational Mechanics, see p. 355 footnotes for further papers relating to tired light models.

scowie
Posts: 91
Joined: Tue Jan 21, 2014 8:31 am

Re: An Examination of Post-Classical Physics

Unread post by scowie » Thu Nov 27, 2014 5:34 pm

In reply to the opening post...

I agreed with almost everything you wrote, with the exception of this:
"De Sitter’s binary stars demonstrated that light’s vacuum linear speed is, like that of a normal wave, independent of the source"

He did no such thing. He claimed that if light's speed were dependent on the speed of the source, we would see a double image of visual binary components; a vacuous claim. For binary components to be visually distinguishable, the system needs to be close enough to Earth or the components need a large enough separation (which means low orbital velocities). For a double image to be detectable, you need the exact opposite conditions. Light from a star moving towards us in its orbit would need time to catch up with the light it emitted when moving away, which would require either a nearby system with orbital velocities greater than any we have ever discovered, or binaries of the sort we have discovered being visually distinguishable at ranges at which they are not, even today. De Sitter proved nothing at all, except maybe that the scientific community won't look at your claims too critically if you are supporting relativity. It is quite possible that some variable stars are actually [non-visual] binaries, due to the light-bunching that results from the speed of light being additive: http://www.datasync.com/~rsf1/binaries.htm

I would say that it was Bryan G. Wallace who showed us back in the sixties that on an extra-terrestrial stage, light's speed is very much dependent on the speed of the source. When radar signals were bounced off the planet Venus from multiple radar stations around the globe simultaneously, he found that the delays of the signals from different stations suggested that these signals had the velocity of the earth's surface added to them (http://www.ritz-btr.narod.ru/wallace.pdf). Here on earth, the illusion of a constant speed of light emerges due to the influence of the combined electromagnetic fields of all the earth's matter. Down here, the earth's surface effectively takes over as the source for any light sources that are moving relative to it.

I reckon the speed of light being additive is evident on a cosmic stage too. Imo it explains the time dilation we see in type 1a supernovae [more sensibly than expanding spacetime does!]. At the beginning of a supernova, the luminous matter has a greater velocity than it does nearer the end, due to the slowing effect of the surrounding interstellar matter. As a result, the light emitted near the start races ahead of the light emitted later, so the light curve of the supernova is stretched out, more so the further away it is.
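Here's a back-of-the-envelope sketch of that c+v arithmetic. To be clear, this is the ballistic-light hypothesis under discussion, not mainstream physics, and the velocity excesses and distances below are invented purely for illustration; the qualitative point is that the lag between early (faster) and late (slower) light grows with distance.

```python
# Ballistic sketch: each ray keeps its emission speed c + v forever.
c = 3.0e8  # m/s

def extra_delay(d, v_fast, v_slow):
    """Extra arrival delay, over distance d, of the later light
    (source excess v_slow) relative to the earlier light (v_fast)."""
    return d / (c + v_slow) - d / (c + v_fast)

ly = 9.46e15  # metres per light-year

# Invented velocity excesses; only the trend with distance matters.
for d_mly in (10, 100, 1000):  # millions of light-years
    dt = extra_delay(d_mly * 1e6 * ly, v_fast=30.0, v_slow=3.0)
    print(f"{d_mly:>5} Mly : later light lags by an extra {dt:.3e} s")
```

Since the delay is proportional to distance, the apparent stretching of the light curve would grow with distance under this assumption.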

Another observation relating to the type 1a supernova's use as a standard candle can shed light on the variance of the speed of light. This is the observation that led the mainstream into believing the universe's alleged expansion is accelerating...

Type 1a supernovae are assumed to explode with the same absolute magnitude each time, so their luminosities can supposedly tell us how far away they are, independently of their redshifts. According to the mainstream, the redshift-distance relationship between different type 1a supernovae can therefore tell us about the universe's historical rate of expansion. If the rate were constant, a supernova's redshift would be directly proportional to its luminosity-inferred distance. What we actually found, though, was that the supernovae with the greatest redshifts had lower luminosities than expected. The mainstream attributes this to an increased rate of expansion over time, hence the contrivance of dark energy. I, on the other hand, attribute it to the intergalactic medium's effect on the velocity of the light...

Light with a velocity greater than c relative to the intergalactic medium gets gradually slowed towards c through its interaction with this medium (known as an extinction shift). Redshifts are proportional to distance (imo caused by light's interaction with intergalactic molecular hydrogen), whereas luminosity is inversely proportional to the travel time of the light (rays of light diverge at a constant rate, so more time for divergence means lower luminosity). The further away the supernova is (measured by its redshift), the more its "super-luminal" light has been slowed down by the IGM, thereby causing its luminosity to drop to a greater degree than its frequency. No dark energy required.

Keith Ness
Posts: 41
Joined: Fri Jun 06, 2014 6:53 am

Re: An Examination of Post-Classical Physics

Unread post by Keith Ness » Fri Dec 05, 2014 3:10 am

scowie:

It's good to see criticism of De Sitter; I can't remember seeing much, if any, previously. As for Wallace, the last time I read his articles, it looked to me as though he was focused on the effect of Earth's and Venus's speeds on the radar wave's speed, while Shapiro was focused on the effect of the Sun's gravity on the radar wave's speed, so Shapiro failed to understand what data Wallace was looking for, and Wallace never got that data (http://bourabai.kz/wallace/farce06.htm). I also couldn't tell if Wallace was speaking of c+v for the source, or just the observer. It's good to see he actually published some work on what data he was able to observe, and that he did seem focused on the speed of the source.

Thanks.
