Economics & Theoretical Physics

Wednesday, 14 March 2012  -  Jan 2022

Time may not be what you think it is, Distance 
may not be what you observe, Space may be
something else altogether. Maybe gravity can finally
be understood.
The recent earthquake in Japan and its impact on the Fukushima nuclear power plant is a tragic reminder of humanity's ever-growing dependence on energy for its socioeconomic development. Energy plays a central role in determining the effectiveness of economies. However, are the fundamental difficulties associated with understanding the true nature of energy impeding development? This paper is a reflection on theoretical physics from an economic vantage point. As difficult as it may seem to band them together, as this article will attempt to do, physics and economics are conjoined. Space, Time, Matter and Energy all play a significant role in the capacity of economics to better provide for humanity. Energy in particular, through its provision, evolution and consumption, shapes the strategies economics can develop to satisfactorily manage human development. The influence the price of oil has on the global economy is testimony to the impact the cost of energy has on economies and governments in general. If global incomes could rise, or the cost of energy could fall, the affordability of energy would significantly increase. Advances in physics and the natural sciences in general can therefore have a positive impact on economics.


Economics and the Natural Sciences

There are parallels between economics and the natural sciences. The law of conservation of energy and a corresponding law of conservation of financial resources appear evident in how market forces behave. Market forces represent a closed system in which growth does not take place without being cancelled out by the equal and opposite interaction of demand and supply. This cancelling out inhibits growth and gives rise to either inflation or deflation. This paper will analyse some of the parallels between contemporary economics and the natural sciences, and observe how shared ideas concerning reality and the foundation of the natural sciences run into the same substantive logical impasses as contemporary economics, impasses that may hinder the capacity of human beings to fully understand the nature of reality and that influence the evolution of the natural sciences. Using a purely theoretical approach, this paper will attempt to draw inferences backed by a teleological flow of logic, the validity of which is open to debate. Consequently, there is no better place to begin than with demand and supply. Amongst the most renowned scientists in the field of physics is Albert Einstein. In the same way contemporary economics uses Supply and Demand, Einstein's theory of reality, Relativity Theory, uses two basic structures to build an understanding of the Universe: Space and Time. The known Universe can be described as being governed by Space-Time; it is therefore a Space-Time Universe. This concept forms the basic building blocks of Einstein's model of the Universe and how it works or functions.



What the diagram demonstrates is that the entire technical structure of conventional physics depends on the interaction of Space and Time, without which it appears no logical inferences can be made and upon which nearly all calculations depend to arrive at mathematically accurate descriptions or predictions concerning the nature of the Universe. However, just as the expenditure fallacy hinders the ability to end poverty in contemporary economics, there may be fundamental fallacies in this model of the Universe that hinder progress in physics. It is possible at this stage to identify fundamental inaccuracies in the Einsteinian view or explanation concerning the nature of the Universe. The first critical inaccuracy in this model may stem from how it identifies and describes Space. Einstein may make two fundamental mistakes in the model he uses for his basic understanding of the Universe. Firstly, he assumes that distance and Space are one and the same. For example, the measurable distance between a proton and an electron at the quantum level is taken to entail that there is "Space" between them. This assumption may be logically inaccurate. "The Space-Time Universe proposed by Einstein could be flawed for … [the] reason that in his analysis Einstein may not clearly differentiate between "distance" and "Space". This can lead to a number of inaccurate descriptions about the nature of Space and Time. The concept of distance belongs to a … construct in which 'Space' being a vacuum is accommodated or accepted to validate distance or separation between objects [or matter]. However, Space … [may] not be the same as distance... In other words one may talk about the distance between the earth and the moon, however, it would then be incorrect using the same principle to theorise that there is any Space between the earth and the moon. If someone were to say to you, 'Look, I need some Space', technically it would be completely different from saying, 'Look, I need some distance.' To give you distance they could simply move further away from you; to give you Space they could remain right next to you and hand you Space…. Einstein might point at the Sun theorising on how the [Space-Time] continuum functions and say it is in Space, when in fact it is not in Space, since this conclusion could not have been made without factoring in distance."[1] This flaw in Einstein's model is made glaringly obvious by two facts of his analysis. Firstly, Einstein groups together Space and Time. Time in his analysis is inseparable from distance; in other words, by incorporating Time in his interpretation of Space, Einstein automatically computes distance in the workings of his theories, which, as we shall go on to discern, may be a fundamentally flawed method. Secondly, it becomes clear Einstein is aware of these weaknesses in his own model, as he ascribes its flaws, or what it fails to explain, to the existence of the Ether. Einstein seems compelled to accept the existence of an ether where he states, "Newtonian action at a distance is only apparently immediate action at a distance, but in truth is conveyed by a medium permeating space, whether by movements or by elastic deformation of this medium. Thus the endeavour toward a unified view of the nature of forces leads to the hypothesis of an ether.
This hypothesis, to be sure, did not at first bring with it any advance in the theory of gravitation or in physics generally, so that it became customary to treat Newton's law of force as an axiom not further reducible."[2] Einstein further notes the characteristics of the ether: "Within matter it takes part in the motion of matter and in empty space it has everywhere a velocity; so that the ether has a definitely assigned velocity throughout the whole of space."[3] However, he further states, "More careful reflection teaches us, however, that the special theory of relativity does not compel us to deny the ether. We may assume the existence of an ether; only we must give up ascribing a definite state of motion to it, i.e. we must by abstraction take from it the last mechanical characteristic which Lorentz had still left it. We shall see later that this point of view, the conceivability of which I shall at once endeavour to make more intelligible by a somewhat halting comparison, is justified by the results of the general theory of relativity."[4] The fact that Einstein gives up ascribing a state of motion to the ether allows us to conclude that it may have no definitive motion, since it does not have distance and consequently is devoid of time, and vice versa. Despite this, the fact that he still goes on to explain the Theory of Relativity and the Special Theory using absolute motion demonstrates that his is a top-view theory; therefore, though its inferences may be accurate, they will be accurate only within the top-view model. It may therefore be concluded that Einstein's model is in fact theorised not on Space but on a presumption based on distance. Let us examine this argument diagrammatically.


If we can, for now, to catch the teleological flow of this logic, accept that Space and distance are not the same, this enables us to correct Einstein's model by replacing Space with distance in the diagram. If distance and Space are not the same, then where are Space and the ether? Turning diagram A on its side reveals that the intersection X of distance and Time may in fact still be inaccurate, since distance and Time only appear to intersect when viewed from the vantage point of the model upon which Einstein based his logic, which can be described as the front or "top view". To the contrary, when viewed from the side it is found that Einstein's fundamental model of "Space and Time" is incomplete, in that the two may not in reality intersect; they only appear to do so from the top view.

The fundamental model on which modern physics is built depends on the intersection at X in diagram A. For example, in trying to determine how long it would take to travel to the moon, one might use distance and durational time (X). Chapman (2010) explains that "Every particle or object in the Universe is described by a "world line" that describes its position in time and space. If two or more world lines intersect, an event or occurrence takes place. The "distance" or "interval" between any two events can be accurately described by means of a combination of space and time, but not by either of these separately. The space-time of four dimensions (three for space and one for time) in which all events in the Universe occur is called the space-time continuum."[5] This represents what we saw earlier in the first diagram: Einstein's model uses distance for Space. Einstein's view that Space and Time, like demand and supply in economics, must intersect is a faulty, perception-based problem. This "intersection theory" is found to be untrue when observed from the side view, where Time and Space (distance being Space in his model) do not in fact intersect. Consequently, the world line, if improperly applied, can be a fundamental misinterpretation of how laws in physics function, based on perception; what is observed is not always what occurs. The workings of Einstein's space-time continuum, and some of the inferences made based on it, may be no more than a mirage when observed outside the paradigm that is the top view. As we have shown, there may be no actual or fixed intersection between distance and time. The potential inexistence of this connection is capable of reducing the value of distance to 0, or making it a non-existent aspect of the continuum. Space separates, calibrates and predetermines Einstein's notions of Space (distance) and Time. This can entail, for instance, that there is in fact no distance between the moon and the earth. If there is no distance, as a result of the earth and the moon occupying the same Space, then it can be concluded that Einstein's model also incorrectly labels Time. There is no "durational time" required to cover the distance, since in actuality the distance is 0 (earth and moon occupy the same Space); therefore Time is 0, making the progression of Time in the "Space-Time" continuum inaccurate, or simply an illusion created by the model's top-view interpretation of the Universe observed in diagram A. The concept that time does not exist is one that will take a while for the scientific establishment to digest; however, there are ever-increasing signs this property may eventually be understood. Folger (2007) reveals, "Efforts to understand time below the Planck scale have led to an exceedingly strange juncture in physics. The problem, in brief, is that time may not exist at the most fundamental level of physical reality. If so, then what is time? And why is it so obviously and tyrannically omnipresent in our own experience? "The meaning of time has become terribly problematic in contemporary physics," says Simon Saunders, a philosopher of physics at the University of Oxford."[6]
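The "apparent intersection" argument can be made concrete with a small geometric sketch. The Python snippet below is purely illustrative (the coordinates are my own, not drawn from any physical model): two skew lines whose top-view projections cross at a point even though the lines themselves never meet in three dimensions.

```python
# Two skew lines in 3D: their top-view (xy) projections cross,
# but the lines themselves never intersect in three dimensions.

def line_a(t):
    return (t, 0.0, 0.0)   # runs along the x-axis at height z = 0

def line_b(s):
    return (0.0, s, 1.0)   # runs along the y-axis at height z = 1

# Top view: drop the z-coordinate. Both projections pass through (0, 0),
# so from "above" the lines appear to intersect at X.
top_a = line_a(0.0)[:2]
top_b = line_b(0.0)[:2]
print(top_a == top_b)      # True -> apparent intersection at X

# Side view: the lines are always separated by the z-gap Y.
separation_y = abs(line_a(0.0)[2] - line_b(0.0)[2])
print(separation_y)        # 1.0 -> no actual intersection
```

In this toy picture, the "intersection X" exists only in the projection, while the separation Y is what actually governs the geometry, which is the relationship the next paragraph describes.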

What is labelled as "Time" in Einstein's model is in fact a form of chronological decay, or motion observed in matter, taking place in the absence of Time; this is validated by the fact that in reality the intersection X is governed by the separation Y. Economists may make this same fundamental mistake when they presume the intersection of demand and supply creates an equilibrium that generates economic growth. Like Einstein's model, they mistakenly attribute economic growth (distance) to stability in the intersection X, when in fact stability and economic growth are not the same, just as distance and Space are technically not one and the same.




The Mystery of Growth in Economics and the Origin of Gravity in Physics

In the same way that physics tends to have difficulty pinning down the source or origin of gravity, economics has difficulty identifying and understanding the origin of wealth and economic growth. If the answers to these problems were comprehensively known and understood, poverty would not exist, and the ability to control and manipulate gravity would be a common aspect of human technological civilization. Both these problems may be perception related. The logical deduction that Time does not exist (Time=0) does not confound mathematical models in physics. What it implies is that in a calculation where 10 seconds elapse, motion took place while time stood still. For example, a speed of 100 metres per second still uses the same value for "time" in the equation; analytically, however, time itself should not be considered to have elapsed: 10 seconds become 10 cycles or motions. It is a conversion of the idea, not a loss of the idea itself. Similarly, the object travelled 100 metres; however, since in Einstein's corrected model there is no distance covered (distance=0), something is covered, but analytically it is not distance. The conversion of this idea entails it changes from 100 metres to 100 motions. 100 metres per second changes from a spatial concept to 100 motions per cycle (100 motions being equal to 1 cycle), which is a frequency based on the relativity of the movement of objects in relation to one another in the absence of Time and distance. This entails that clocks technically measure the absence, not the progression, of time; yet even this is suspect, since from the side view the progression of Time would be considered a primitive human concept, the absence of time being a cornerstone of how the Universe functions. The measurements in physics remain the same, but the fundamental properties with which they are associated change, creating a conceptual paradigm shift in how this phenomenon is understood; instead of seeing a limb as an independent force or object, we instead attempt to see how the limb is structured, what it consists of and what it is attached to. What we have just analysed is that what is experienced at the intersection X, in economics or physics, is relative. The distance between the earth and the moon, for example, is governed by laws of physics which in turn are rendered predictable by the properties created by the intersection X; however, the intersection at X is governed by the properties of Y. This new model, devoid of durational time and distance, is more practical, since it presumes to use much less energy and fewer resources to create the experiential or top-view Universe. Think of it this way: we do not need to expand the size of a laptop's screen to the extent of the heavens to study the stars; it is impractical to do this, as it would use up vast resources, so instead we compress the image to fit a 15-inch screen. The Universe may use the same approach. By the earth and the moon occupying the same space, and by using other properties to define the "distance" between objects, the Universe uses less "energy" or effort and operates more efficiently; distance can be maximized without sacrificing "space". Time is discarded (remains zero or unchanging) to create a continuum allowing motion in matter (which is confused for durational time) to be extrapolated over it.
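To make the proposed relabelling concrete, here is a minimal sketch in Python (the function name and formatting are mine, purely hypothetical): the numeric values survive unchanged; only the unit labels change from metres and seconds to motions and cycles.

```python
# Illustrative relabelling only: the numbers are untouched; the unit
# labels change from spatial/durational terms to motion/cycle terms.

def relabel(metres_per_second):
    motions = metres_per_second   # 100 metres -> 100 motions
    cycles = 1.0                  # 1 second   -> 1 cycle
    return f"{motions:.0f} motions per {cycles:.0f} cycle"

print(relabel(100.0))   # "100 motions per 1 cycle"
```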
The chronological progression of time as human beings understand it, even in its scientific context, is no different from markers covered over distance travelled; both this kind of time and distance travelled are a wave form, that is, an illusion or a form of paramnesia required for the human mind to process its own reality. Since there is no distance between objects, the idea that gravity, weight or mass is created by action at a distance without a medium, and created by acceleration, as proposed by the genius Isaac Newton, may also be inaccurate. The apple striking Newton can in fact be interpreted as a front-view description of the event said to have directed Newton to a deduction based on a perception formed on a premise induced by the effect of the falling apple. In the same way there is no distance between the earth and the moon, there was no distance between the apple and Newton's forehead; objects have no actual or substantive volume or mass, and therefore no genuine weight. Mass and time are useful for the experiential Universe, but are not practical or efficient for the mechanics of how the Universe is created (that is, the operational Universe). It is not scientifically practical for matter or objects to be of excessive volume or weight, for primitive top-view "Space" itself to be of great "distance", or for time to be of a burdensome duration [i.e. there is no past being maintained as a physical reality waiting for a time traveller to come back to]; these will all inevitably be seen as very crude ways of understanding the Universe and the physics that applies to it. The same applies to Time travel. The belief in the ability to go back in time is a classical example of what happens when Einstein refers to Time as Motion. When this mistake is made, it appears possible to physicists that a time machine can be built to take someone back in time; the maths and means to do this can be shown. However, this is misleading, because Time is not Motion. A ticking clock, even an atomic one, is not Time; it is Motion. Motion is Time in Einstein's approach to understanding gravity, which is why science has not gained the ability to control gravity to this day. However, I identify True Time (Tt) as in fact similar to absolute zero: the absence of Einsteinian Time (Te). Te is in fact Motion, not True Time (Tt). Similarly, when Einstein refers to Space (Se, Einsteinian Space), it is not true Space (St) but Geodesics, Geometry or a form of Euclidean Geometry, i.e. Distance. True Space (St) is the 1s and 0s of information (code) from which Space is fundamentally created, as it may pertain to or be understood by quantum computing. Einstein's model needs to be corrected to understand this problem; however, for the most part science today does not make this distinction, and therefore makes mistakes in interpreting how physical forces work, even if the math appears to add up.



Let me try to explain more succinctly why it is practically impossible in physics to go back in Time as Einstein may postulate. To begin with, the human physical world takes place in the absence of time. This means the here and now, or present, is equivalent to Time=0. Time being at absolute zero means there is no past and no future; these states fundamentally do not exist as a "place" you can travel to. The present is the only reality; therefore, to travel to the past or future involves leaving absolute zero, the absence of Time, or the present. However, once you step outside of Time (absolute zero), everything experienced is not real, in that it cannot be interacted with; it has no free will, events cannot be changed or altered, it is just a record. This record can be compared to a hologram, or more accurately to virtual reality (VR). It can be described as a record or recording of the past preserved in Space-Time that cannot be changed, but that can be interacted with in the 1st, 2nd or 3rd person. For the persons accessing them, these records are preserved in Space-Time and can therefore be described as a type of hologram or VR that is indistinguishable from reality or the real world. This is like walking through the light of a star. As you walk through the light toward the star you see its future; if you turn and walk away from the star you see the light it shines ahead, that is, its past. You are therefore accessing natural, stored historic records. You cannot interact with any of these because they are not in absolute zero or True Time (Tt); that is, they are not in a condition that can be endowed with the present or absolute zero (Tt). To enter the present you have to exit the star's light and step on the star itself, where there is no time. Only then does the here and now present itself and become interactive, in the sense that events can be changed. When Einstein talks about Time (Te), it is not True Time (Tt or Tr), because it is Motion taking place with the mistaken understanding that Time elapses, which is simply not possible: if time elapses as he believed, then what is being observed is not based on the present (Tr), and therefore it is a measurement of a recording, something that is not real but an accurate rendition (record). Therefore, when you talk about going back in time or into the future, nothing that is done there can have any bearing on the present, because it is outside absolute zero. For instance, when you sit down to watch a movie or a television series, the fact that you are seeing it for the first time entails that you do not know the outcome of decisions and choices made by actors on the screen. Therefore, as what you are watching progresses, it would appear as though free will is active and the outcomes on the screen are unpredictable. However, what is being observed and experienced is a recording. The same applies to consciousness. What is lacking is the capacity to discern that what is being observed and experienced is a record. Time shift takes place such that the "captive" mind is incapable of discerning the limitations of its own consciousness. This captive state skews or affects modern scientific observations and knowledge about Time.
For instance, Einstein does not make a distinction between Tr and Te; his assumption is that the past and the future are real places you can go to, that is, where you can experience zero time. In fact this is false, because, like a movie recording on a VCR, this past and this future can only be changed in Tr or Tt, the true present. However, in the same way a person can learn from their history and use it to alter their decisions in the present, they can see the future and alter their decisions in the present; but once an event occurs, it becomes fixed and cannot be altered. In other words, what happened in the past and what will take place in the future can both only be altered in the true present, where Time does not elapse. For a time machine that steps outside of this (the true present, where Time does not elapse), anything the physicist encounters as the past or future is no different from a recording or hologram, because Motion (Te), the movement of mass and matter, stops or becomes suspended (becomes a hologram or recording). For instance, going back in time will take a scientist to a record of the past. Since this record was generated using Time, it can be viewed as an accurate historic record preserved in Space-Time and accessible in locations during jumps. The record itself cannot be altered. If the scientist interacts with the record, or with that period in history, any of these interactions will generate virtual reality (VR). Technically, this will be no different from the scientist sitting down to watch a documentary on television, except with events being accurately recounted. The level of conscious immersion will be higher, and it can be determined by the scientist's level of self-awareness, such that it can be on par with VR.

Motion (Te) can only take place in the absence of Time (Tt). Technically, one of the few ways a person can possibly discern whether the world they experience is a fixed record (Te) rather than the true present (Tr or Tt) is by attempting to measure whether a time shift is in place, for example through the comparison of two clocks. If the two clocks do not keep scientifically identical time, then, as real as the world they experience may seem, the indication will be that it is VR, or like a simulation, which they have not evolved the ability to distinguish from the true present (Tr). This deviation in time between clocks can be likened to the "totem" used by Leonardo DiCaprio's character in Inception. If the totem kept spinning, then he knew it was still the dream state, i.e. Te; if the totem stopped spinning and fell, he was awake, i.e. it was the true present (Tr). Similarly, the ability of separate clocks either to have the same time or not is possibly one of the few means by which humanity can detect whether it is in recorded VR (Te) or in real time (Tr).
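As a sketch of the "clock totem" test just described (the drift threshold is an arbitrary illustrative value, not a physical constant):

```python
# Sketch of the "clock totem" test: any drift between two clocks is read,
# in this framework, as the signature of a time shift (Te) rather than Tt.

DRIFT_TOLERANCE = 1e-15  # arbitrary illustrative threshold

def totem_test(rate_clock_a, rate_clock_b):
    drift = abs(rate_clock_a - rate_clock_b)
    if drift > DRIFT_TOLERANCE:
        return "clocks disagree -> time shift detected (Te, record/VR)"
    return "clocks agree -> true present (Tt)"

# Example: a clock higher in a gravity well ticks measurably faster.
print(totem_test(1.0, 1.0 + 1.1e-14))
```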

At present human beings experience reality at Se-Te. They do not have the capacity to tell that this is a hologram or recording of sorts, neither do they have the mental ability to discern that there is no free will in this state. This attests to the kind of power time shift may have over human consciousness. The only possible means of distinguishing these different states is for a person to experience a shift from Se-Te to St-Tt; only then may it be possible to tell that the previous state was not the present, much like waking from a dream. This separation of humanity from St-Tt can be demonstrated by testing scientifically for the lack of simultaneity for distant events. The time shift of human conscious reality or existence from St-Tt to the confined and more limited Se-Te may be shocking for science to discover and to find empirical evidence for; yet once again religion seems way ahead of science in its knowledge of this fact, where it states humanity was banished from Eden into a safer, albeit harsher, less enviable existence, the description of which fits a time shift from St-Tt to Se-Te.

Because time has a direct effect on how consciousness and perception are processed, this potentially mind-bending disassociation, and the ability to determine states in time (what is real and what is not), is very likely to be one of the aspects of time-tampering technologies that humanity will have to learn, and be trained, to navigate, much as travellers must overcome sea sickness on water or re-learn how to move in zero gravity. When the technology that allows jumps through Space-Time becomes available, it is very likely vessels or passengers will be required to have specialized time dampeners to precisely control time-shift disassociation before this kind of travel can be used.





As you can see in the diagram above, there is no past and no future to "time travel" into. There is only the true Present, depicted by the blue circle. The blue circle represents the Side View or True Time (Tt), where Time=0, and True Space (St), or code/information/spirit. You exist in the Top View (Einstein's Space (Se) and Time (Te), at B or the "backup"), where you experience the chronology of A in a kind of time-delayed safe zone in the past that appears to you as the present. The evidence that this is not the real present but a history of it lies in the fact that a clock at the bottom of a building and one at the top, or GPS clocks on earth and in space, show different times, proof that you are not A but B. What you believe is the future does not exist, because it is in fact the real or true present, that is, true Space-Time (St-Tt). To go back in time would instead create a jump to another part of the universe/multiverse (see Julia and Mandelbrot sets, or treat the blue and red circles as such), and during the jump you would have access to, or see, a true record of events or history of those locations in the orange circle of Distance-Motion, which is in fact Einstein's Space-Time (Se-Te). A jump into the future is not the "future" but an excursion or view toward the true present A. Should anything catastrophic happen to the universe at A, the fail-safe is that it can be recovered or "resurrected" at B to restore A; anything lost at A would otherwise be unrecoverable. This model of true Space-Time (St-Tt) and Einstein's Space-Time (Se-Te), which is in fact "Distance-Motion", where Distance refers to Geodesics or Geometry and Motion is Time, is sufficient to give a theoretical physicist a better framework for more accurately formulating and understanding the universe and the forces acting within it.
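A short aside on the physics being cited: the clock differences referred to here are the standard gravitational time-dilation effect of general relativity. For two clocks separated by a height $h$ in a gravitational field of strength $g$, the fractional rate difference is approximately

$$\frac{\Delta\tau}{\tau} \approx \frac{gh}{c^{2}}$$

so for a 100 m building, $\Delta\tau/\tau \approx (9.8 \times 100)/(3 \times 10^{8})^{2} \approx 1.1 \times 10^{-14}$. It is this small but measurable variance that the argument above re-interprets as evidence of the A/B time shift.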





Past, Present and Future can be generated simply by creating a perpetual feedback loop, where A acts in real time at 0, while B, unaware of sharing real-time existence with A, experiences, observes and reviews these actions at 1x, unbiased and believing it is the present and that actions and consequences (cause and effect) are based on free will, when in fact this is not true. B forms a second opinion of A's recorded [preserved in Space-Time] actions, views, feelings, thoughts and emotions, 0y, independently of A. If B is displeased, this will be communicated spontaneously to A. A may then make changes that improve what B experiences. A is spontaneously updated with new unbiased opinions/reviews from B, forms cumulative feelings and thoughts, 1z, and any behavioural or other change and its different outcomes will once again be fed through the loop to B. The system consequently becomes self-aware, basically by creating multiple "people" at A and B who are in fact the same person. The Future (0) is in fact the Present, and the Past is simply a record of the Present (1x). B is kept unaware of an existence at A by quantum mechanics, which uses time as a processor to force disassociation using the time shift that links A to B in the first person. The scientific evidence of this process at work is that at B (top view) simultaneity for separate events does not exist. The fact that the experiential universe observed by B appears real is possibly due to time-shift disassociation, and may be a demonstration of the power of quantum mechanics to make the recording or hologram (Te) indistinguishable from, or almost as good as, the real thing (Tt). When a civilization arrives at the point where its technology merges the consciousness of B and A, which is inevitable, then it "graduates", so to speak, from this level of moral self-actualization and evolves into a higher-order civilization. A toy simulation of this loop is sketched below.
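The following Python sketch simulates the loop under my own simplifying assumptions (the delay length and the scoring rule are arbitrary): A acts in the "true present", B reviews a delayed record of those acts and feeds an opinion back, and A adjusts.

```python
from collections import deque

# Toy model of the loop: A acts in the "true present" (Tt); B reviews a
# delayed record of those acts (Te) and feeds an unbiased opinion back.

DELAY = 3           # arbitrary time shift between A and B
record = deque()    # the record flowing from A to B "through Space-Time"
disposition = 0.0   # A's current behaviour

for step in range(10):
    action = disposition + 1.0       # A acts in real time at 0
    record.append(action)
    if len(record) > DELAY:
        reviewed = record.popleft()  # B relives the act as if present (1x)
        opinion = -0.1 * reviewed    # B's independent second opinion (0y)
        disposition += opinion       # A adjusts spontaneously (1z)
    print(f"step {step}: action {action:.2f}, disposition {disposition:.2f}")
```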

One practical way to try to visualize this is as follows. The time variance between A and B will be determined by the natural processing speed available to the universe. The processing speed will determine how long it takes for activity at Tt (A) to be saved or preserved at Te (B). For instance, when you are working on software, writing a manuscript, drawing or some other activity, the PC will constantly back up what you are working on. There will therefore be a time delay between what is being created in real time as you work and what is being preserved, i.e. the time it takes for the PC to back it up. Should there be a cataclysm, for instance a power failure, and your PC crashes or switches off, the only data preserved will be that which was already backed up. The work done between the last backup and the work you produced in real time will be lost. Now imagine this process on a cosmic scale. Just as there is a "speed of light" yardstick, there will also be a similar processing and capture speed that determines the degree of time shift between A and B.
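A minimal sketch of the autosave analogy (the backup interval is an arbitrary stand-in for the universe's "processing speed"):

```python
# Autosave analogy: real-time work at A (Tt) is preserved at B (Te) with
# a lag; a crash loses everything created after the last backup.

BACKUP_INTERVAL = 5        # arbitrary stand-in for cosmic processing speed
work, backup = [], []

for tick in range(1, 18):
    work.append(f"edit-{tick}")       # real-time activity at A
    if tick % BACKUP_INTERVAL == 0:
        backup = list(work)           # state preserved at B

# Simulated cataclysm: only the backed-up state survives.
lost = work[len(backup):]
print(f"preserved at B: {len(backup)} edits; lost: {lost}")
```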


If a person understands that Tt is different from Te, he or she will know that a jump back in time will not cause time travel, but will instead create a jump from Te, through True Space (St), into a new geographical location in Se, in the time-shifted present (see the section later on Julia sets and Mandelbrot sets). The universe uses the chronology humanity views as the past and future to divide existence into separate universes, creating a multiverse, rather than waste this energy preserving a past. Why? Efficiency: the universe will not waste energy and resources preserving records that can be stored with minimal resources as soft memory preserved in Space-Time, when it would rather use this Space-Time hardware to create a separate, functionally independent and inherently unique universe. Efficiency and the prudent use of resources are key to how the universe functions. This is why, should a jump to the past or future be attempted, what is accessed during the jump is the VR or record (soft memory in Space-Time), while what emerges is a new physical location or "hardware" in the geography of Space-Time.


"You are a simulation, physics can prove it" - TEDxSalford talk by
astrophysicist, cosmologist and Nobel Prize winner Professor George Smoot

There is actually nothing strange or alarming about Professor Smoot's TEDx lecture, although I don't agree with every part of his talk. As physicists move toward a greater understanding of quantum mechanics and quantum computing, it becomes ever more practical to compare what the universe is made of with code or information, which is Space (St) as distinguished from Einstein's Geometric Space (Se). What is presumptuous and disorienting is the belief held by humanity that it created code, which is comparable to the belief that the sun and the universe once rotated around the earth. If inorganic substances such as metals are able to communicate with one another at the quantum level (using the "fields do not exist" approach) and as a result give rise to the movement of electrons (electricity) and to their own physical movement and mobility through this communication, for example magnets, magnetic particles and other forms of electromotive force (on which light itself relies), then it is not improbable that some of the earliest forms of life and intelligence could have been inorganic. These could very well have naturally evolved in complexity into superconductors, formed rare alloys, rare earth elements or compounds, growing like a biological organism that relies on a form of quantum bio-mechanics in place of the biological system found in organic life. Being inorganic and capable of using ever more complex compounds and magnetic fields to move particles and minerals around, they could evolve in intelligence and complexity over billions of years, from the very inception of a universe, and through various types of fields could cover vast areas of Space. Objects and materials that appear as bland inorganic structures need not be regarded as lifeless simply because they do not show the life-signs expected of organic life (this would be like finding the limb of an animal still kicking and, because the rest of the carcass cannot be found, assuming that the writhing limb is just an unintelligent, "non-living" thing or force in physics to which a law is ascribed, which is the attitude toward magnets). The need for growth, like the biological need to feed, would lead to ever-increasing processing power in these inorganic materials and would naturally create, or merge with, a quantum realm alongside which or from which organic life emerged. For a scientist to say the universe in its inorganic complexity is not an intelligent living thing seems, in this context, rather naive and somewhat ignorant. Electromagnetic fields and the electrical activity associated with the brain, cell and limbic activity are fundamentally more inorganic than they are organic, in the sense that this activity can take place without the need for an organic physiology. There is nothing strange, special or bizarre about this unless a person cannot think outside conventional approaches and cannot at least try to broaden the scope of reason. Many physicists have already transitioned from traditional views that have been overtaken by the advancements being made in an age where information technology is reshaping ideas and scientific concepts. Human beings tend to have a kind of arrogance that emerges from ignorance, regardless of education or the lack of it. For instance, a person can reach out and pick up a glass of water and drink from it; on the other hand, a magnet can pick itself up and attach itself to another magnet or a metal surface. Intelligence, life and purpose are ascribed to one action and not the other.
This is a kind of arrogance.

link to video
Neodymium Magnets and Super Ferromagnetic Putty

If a scientist does find a way to build a machine to jump into the past, they will find that the time in the past they tried to jump into instead turns out to be a geographical location in the multiverse that uses the location in time to create a location in space (a Portal or Star-Gate of sorts). If they persist in the pursuit of this technology, they are likely also to stumble upon how a new universe is created, i.e. a means of generating or unleashing a catastrophic force, or of generating and controlling tremendous amounts of useful energy. It may also be important to note that the universe will not waste energy and resources creating duplicate universes (so-called parallel universes); it will instead use this resource to allow each universe existing in parallel to form and develop independently and uniquely, due to the fact that variation and unique traits are naturally more valuable than endless duplicates. However, there may now and again arise two parallel universes that are binary or almost identical. It would probably be best to refer to these as binary or twin universes rather than parallel universes, because they will be rare; "parallel", as it is currently used, tends to imply that nearly identical universes are the norm or are conventional, when in fact they are not. These natural records, stored in Space-Time, are likely to be considered an invaluable resource, since not only do they preserve history by time and location, they also offer a VR library of genuine, uncut life experiences that can be reviewed as anyone pleases, in 1st, 2nd or 3rd person, depending on how remote the audience wants to be from the objects, actors and events being watched or experienced. These records can have many uses, including verifying the truth, accuracy and quality of information, which would be useful in many industries and types of work. It is probably worth noting that these VR records of history are likely to be highly detailed, being telescopic, macroscopic and microscopic, which entails that how the universe came to be, from inception to date, can be reviewed from an astronomical perspective, as can historic activity at the cellular and atomic level, which would form an invaluable first-hand resource for use in scientific research, weighed against previous knowledge that relied heavily on assumptions. These concepts, and whether they can be realized, can only be verified through the development of technologies capable of shifting time, currently outside the reach of modern science.

The interesting aspect of this, though, is that if a physicist did find a way of building a time machine, they simply need to review their math and theory for how the device operates and what it does, because in essence what they have done is create a means of generating large amounts of energy, which can be very useful, and/or a means of teleportation between locations in the multiverse (without the need for a spacecraft), which may be a discovery just as significant as time travel. However, during the "time travel jump", which in fact turns out to be teleportation, it will indeed appear as though they are going into the past; but the images the traveller observes are in fact records of the past between the two locations that exist as a kind of by-product of the jump (a recording), not the past itself (just like watching the terrain from a car window, except that what is observed is recorded history based on the locations being crossed or jumped through). These records, which are comprehensive (they include location, objects, thoughts and feelings, depending on which person they are accessed in), will be accessed or reviewed in 1st, 2nd or 3rd person in the space in between the jump from one geographical location in the multiverse to another. The universe keeps a time stamp, or accurate location-based record, of everything that has ever occurred in a given location since its inception, and physicists can learn to access this accurate history rather than a biased re-telling of history by individuals (the way DNA is used to gain more accurate information on events or crimes) by time travelling, which in this context means observing the past or future outside of the present (outside of Time=0). For instance, if a physicist at a location uses the time machine (which Einstein's physics shows can be built) to make a jump back in time of 24 hours to the very same location, then he or she will stay in the same place and will not go back in time by 24 hours; they will instead be able to observe, during the jump, a record of everything that occurred in that location (where the jump took place) over the 24-hour period. This is no different from playing back a file, CD or cassette, or watching streamed video online. A time machine is thus a device for viewing accurate records of history stored in Space-Time (e.g. a jump through time in the same location); it is a means of generating energy that can be used in industry or to power devices; and it is also a means of teleportation, depending on how it is operated. This technology would be useful in science for providing empirical evidence on the history of any location in the universe from inception to date, and useful in education for showing students first hand what actually took place in history. It would also be useful for determining or reviewing legal cases, since the Space-Time historic records can be accessed for an audience to see what actually transpired in a case, the same way the discovery of DNA exonerated innocent people wrongfully incarcerated. By following the design for creating a time machine (which physics shows is possible, especially when re-worked to account for the difference between Te and Tt), what has been explained above is what the outcome of the device would be.

The reason why the historic records between locations appear to be a past that a time traveller can build a time machine and jump into is very simply that a scientist is led to believe, by Einstein, that Motion (ticking clocks) is Time, when this is in fact not true Time (Tr). A physicist therefore has to make this correction or adjustment in his or her analysis to arrive at truly accurate, or more informed, descriptions of how the universe actually works: he or she must make distinctions between Tt and Te, as well as St and Se, to interpret observations and outcomes in physics accurately, as they are not one and the same. For instance, physicists like to say that the measurement of time between a clock at the bottom of a tall building and one at its top will yield different results. If so, then what is being measured is not the real world or present (T=0); what is in fact being measured is what the physicist would measure if they had already built the time machine, entered it and were observing the past or future, not the present. The "time travel" recording or hologram is evident in the time difference or distortion between the two clocks, which is not real; it is empirical evidence of, or an example of, interacting with a recording or hologram (which is what the time machine would do with much greater depth, by going back to access further records in history). However, because Einstein does not differentiate between True Time (Tt) and Motion (Te), physicists believe the time differentiation between the clocks is real, that is, taking place in the present, when they are in fact making an accurate measurement of a recording or hologram (records of history stored in Space-Time), which they mistake for the present because they are making the measurement close to, or believing it to be, T=0 (the real present). The difference in time between the clocks is empirical evidence that time travel is possible, with the exception that it will yield true historic records of the past rather than move people back in time. If this were not true, the time between the clocks would be exactly the same. In the present (T=0), the clock at the bottom of the building and the clock at the top have the exact same time, and this unchanging, constant or universal timeline is the present (T=0), the same throughout the universe and across the multiverse. What does this mean exactly? If you were in the present (T=0) when, as a physicist, you measured time using clocks at the bottom and the top of the building, there would have been no time variance between them. The time variance is evidence that everything you are experiencing right now is a record of your past (Te) that the real you, which you mistakenly believe is a future you existing in the real present (T=0), has already done. However, your human consciousness, which is actually in the past, believes it is doing and experiencing everything (life) in real time, for the first time, right now, on the incorrect assumption that this is the present. Let's say that CNN records the news live (Tt or T=0). It then broadcasts the news with a 6-hour time delay (Te). You, the viewer, are watching this broadcast and reacting to it believing it is live (your current belief about where you are in time at this very moment), when in fact it is a record of what took place 6 hours ago. Everything that you are doing right now, a future you who is actually in the real present has already done. However, your consciousness, being unaware of this time shift, is processing a record of the past (everything you are seeing and doing right now) incorrectly as the present. In other words, your understanding of your own place or existence in Space and Time is primitive, because it is not aware of existing in this variance. This raises an important question: why?

Firstly, in essence it is the realization that you exist in Te as a version of yourself existing in the past, not the real you functioning in the future, Tt, which is actually the real present. The answer to why the universe is structured in this way may be quite simple: human beings do the same. It's called risk management. To understand this process you have to differentiate between Einstein's Space, which is Geodesic or Geometric (Se), not True Space (St), which is created from code or information [or spirit, to add a religious perspective; religion seems way ahead of science in this respect, as it explains there is more than one kind of death: a physical death (Te), which a person can be spared from, and a second death (Tr), a death of the spirit from which there is no recompense or recovery]. Should anything cataclysmic happen to the real you operating in Tt, this code, information or data and knowledge would be permanently lost beyond recovery. Not only would you die a physical death, you would also cease to exist. Therefore, the universe is cradling or protecting the real-time you (Tt) by running what it may consider just as important or more important: a fail-safe, off-site backup of you (Te) that exists further back in time without being aware of this discrepancy [which is you, right now]. Te can be used to restore or resurrect Tt in the event that a cataclysmic event destroys a valuable part of the universe. Viewed in this way, it makes perfect sense.

Secondly, the version of you, B (functioning at Te), which is actually a backup of the real you, A, functioning in real or true Time (Tr), is re-living, re-enacting and reacting to what has already occurred, in the first person, as though it is spontaneous, has free will and is happening for the first time, when it is in fact reacting to a pre-existing record flowing through time. The scientific proof of this is observed by Einstein himself, in that in the present (the Te being experienced by B) simultaneity for separate events does not exist. Why is it important for you, B, to experience Te as though it is taking place in real time? Once again, the answer to why the universe would function in this way may be very simple: it does this, once again, for risk management purposes. Technically, although B believes it has free will, it in fact has none; yet the fact that it relives the record believing it has free will and is responsible for outcomes allows it to form an unbiased second opinion of events that have already occurred. A becomes immediately aware of this second opinion in real time (Tt), because B is just an earlier, time-shifted A. If B relives the record in Te and forms a different opinion, A will spontaneously become aware of it and make an adjustment in behaviour in real or true time (Tt). This adjustment in behaviour will lead to a new record moving down the timeline, which will once again be reviewed by B, unbiased because it is unaware of A, and of whose opinion A will immediately be aware. In this bending of time, the perpetual loop that is created (which behaves like RAM) may give rise to what scientists today refer to as self-awareness and a conscience. The memory and the improved ability to make decisions become intelligence. Human intelligence, or intelligence in general, can in this case be described as being created by manipulating time through Te and Tr, as a risk-management process designed to create self-awareness and improve intelligence through the interaction of A and B, which are one and the same person or organism functioning in different time settings, so that decisions made increasingly have better outcomes. These improved outcomes are what accumulate as the record, an accessible history preserved in Space-Time (ROM). This seems to be a method for forcibly jump-starting intelligence. What is interesting about this process is the manner in which time is manipulated into behaving like a natural transistor or processor, where only the present exists, but by making the system treat or view the present (Tt) as the future, and the data being backed up as the past, it naturally creates a pseudo-present (Te) that is in fact the past, and uses the record of events as history, which in combination create Past-Present-Future. The reason why there is a delay between A and B is likely to be the time it takes to capture, back up and archive A at B, which must have a maximum processing speed and involve vast amounts of data. This natural method of structuring and organizing time and information would be remarkably useful. This analysis may make some people uncomfortable; however, where a hypothesis is formed, every stone needs to be turned to increase the depth by which greater clarity on a subject may be gained.

Of course, this same property will apply to gravity. All of the universe's masses, forces and distances that seem tremendous seem so only because they are seen from the top view. The reality is that they in turn are controlled by underlying properties yet to be discovered in physics, occupy just one tiny dimension and are a very small part of something much greater, namely the underlying code by which they are written and operate. Later in this paper an attempt will be made to explain why there may be a scientific basis for this in theoretical physics. In this regard, the "increase" in weight or g-force observed when an object is accelerated may in fact not be created by velocity, since distance=0 and consequently velocity remains zero; this idea of g-forces would be only a top-level "illusion", one that is both quantifiable and measurable from the top view, but that is easiest to manipulate from the side view. Economics is gripped by a similar illusion. It believes fundamentally in scarcity and the definiteness of economic resources, when in fact the volume of economic resources is not determined by what is observed or what appears available, but by the underlying operating system by which those resources are made available.

The Measurement Problem Explained: A Refresh Rate for Matter & Analytics in Economics

Economics faces the same human constraints in logic that are found in physics, in that what is observed and interpreted analytically may not be what occurs. For instance, it is possible to theorize a scientific basis for the non-existence of time; an attempt shall be made here to explain this view. To make the explanation simpler, let us begin with a very simple approach: a cartoonist or animator drawing a car can flip pages with drawings to show a car travelling at 100 km/h; however, in reality the image on each page is standing still and has no velocity. Similarly, to begin to understand forces like gravity, scientists may have to learn to accept the non-existence of durational time and rationalize this in physics.

This flip-book animation demonstrates how the matter on every page is standing
still [is a particle], as in the Bose-Einstein Condensate. However, when the artist begins to flip the
pages [refreshing matter, or "heating it back up"], the car miraculously begins to move [it behaves as a wave], just as the condensate re-appears
as the sodium atom, effectively resolving the measurement problem.
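The flip-book point can be sketched in a few lines of illustrative Python: each frame stores a static position, and "velocity" appears only when the frames are replayed at a refresh rate (all numbers are invented for illustration).

```python
# Flip-book sketch: every frame is a motionless drawing; apparent
# velocity exists only in the act of flipping frames at a refresh rate.

FRAME_RATE = 24                                      # frames per second
frames = [{"car_x_m": float(i)} for i in range(48)]  # static drawings

# Within any single frame the car has no velocity at all.
print(frames[10])                  # {'car_x_m': 10.0} -- standing still

# Flipping the pages creates apparent motion: 1 m per frame.
apparent_velocity = (frames[1]["car_x_m"] - frames[0]["car_x_m"]) * FRAME_RATE
print(f"apparent velocity: {apparent_velocity} m/s")   # 24.0 m/s
```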


If Einstein explains the Bose-Einstein Condensate as matter in its fundamental wave form, this is misleading. The problem with professors continuing to teach students of physics that the fundamental state of matter is a wave, not a particle, is that this misinterpretation then makes it impossible to explain gravity, because the particle, rather than the wave function or form, is the conduit for mass or gravity. As we explain later, this mass is not contained in the particle itself, but consists of true Space acting on the particle. It is also important to remember that for physicists to see and be able to understand a "particle" is a top-view or Einsteinian description of matter. We know that from the side view there is no particle. There is only some form of side-view code, and when the code is observed from the top view it appears as, and behaves like, a physical particle. Understanding this procedure explains why and how a wave cannot be more fundamental than a particle or object. Technically, this means physical matter does not exist as is commonly thought (from the top view), and all the physical or sensory reality associated with matter (even mass-energy equivalence in nuclear energy) is created by the interaction of electromagnetism and gravity, rather than by the matter itself, which consists of code. In other words, mass, texture and all the "physical" attributes associated with matter do not come from matter itself but are simply "attributes" added to the code, contextually, by the interaction of gravity (refresh rate) and electromagnetism, without which matter could not be interacted with, i.e. it would be incapable of being physical (solid), neither could it have mass; even something as simple as picking up a coffee mug would be impossible, because the mug would have no physical attributes or properties. However, if magnetic fields can in turn act on the particle, it then becomes possible to manipulate gravity indirectly using the plasma's mass.
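One way to picture the "attributes added to code" claim is as a render step that attaches physical properties to otherwise property-free records. The sketch below uses my own illustrative naming, not an established formalism.

```python
from dataclasses import dataclass, field

# Illustrative only: a "particle" is bare side-view code; physical
# attributes are attached contextually by gravity and electromagnetism.

@dataclass
class ParticleCode:
    blueprint: str                        # side-view code, no physical traits
    attributes: dict = field(default_factory=dict)

def render_top_view(code: ParticleCode, gravity=True, electromagnetism=True):
    if gravity:
        code.attributes["mass_kg"] = 0.3         # mass added, not intrinsic
    if electromagnetism:
        code.attributes["solidity"] = "solid"    # texture/solidity added
    return code

mug = render_top_view(ParticleCode("coffee-mug"))
print(mug.attributes)  # without these attributes the mug could not be held
```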


In the same way that there is an electromagnetic spectrum, logic points to the possibility that there must also exist a corresponding quantum electroparticle spectrum (QEPS) at the subatomic or quantum level, consisting of visible (physical) and invisible (elemental) particles of various types whose constitution allows them to create their own atoms or parallel material [quantum electroparticle] worlds. In other words, at the quantum level there can be as many variations or strains of the sodium atom as there are states on the spectrum. When a sodium molecule is broken down in a visible part of the spectrum, the part of the sodium molecule that is in the invisible part of the QEPS may continue to exist unscathed, which entails that the sodium molecule continues to exist, except in a form undetectable to modern science. For instance, if a scientist cut a leaf in half and part of it fell away, scrutiny of the same leaf across the QEPS would reveal a leaf that is still intact, due to the fact that the parts of the leaf consisting of invisible sections of the QEPS were unaffected by the physical break-up of the leaf. In essence, this points to the fact that elements in the periodic table, and the substances and organisms they create, should not be taken at face value, in that elements can be made up of particles from any part of the electroparticle spectrum; even those that are invisible and/or intangible do not lose their structural or molecular properties, but remain an inherent part of quantum mechanics. For instance, a water molecule, which consists of hydrogen and oxygen atoms, can exist in states (other than ice, liquid and gas) that are presently unaccounted for. If water can exist in these uncharted states, then it is also possible to deduce that living organisms in parts of the spectrum that are currently invisible and/or intangible can function biologically in a state science has not yet interacted with. These intermingled and interspersed worlds may support complex organisms which co-exist only in different states of the spectrum. Just as some wavelengths in the electromagnetic spectrum pass through matter, some of these particles (e.g. QEPS variations of sodium) may share the same properties and space but create matter (a sodium molecule) that is not mutually visible and does not interact (they pass through one another due to having different wavelength properties), which would make sense of unexplained phenomena relating to human experience that current science cannot explain. It is also feasible to deduce that the complex structure of an atom can consist of visible and invisible particles of elements of the QEPS. This entails not only that biological organisms created from invisible and intangible parts of the QEPS can exist, but also that complex machines and devices could possibly be built by engineering materials from the invisible and intangible parts of a QEPS periodic table, materials that retain their inherent molecular properties and are not naturally detectable. Although science today may have radio telescopes, it remains unknown whether the radio frequency range they cover is comprehensive and whether they are capable of covering extreme sections of the QEPS. The quantum-level mechanics of the "physical" world can be built from any segment of the QEPS. When a particle is broken down, for example in the Large Hadron Collider (LHC), the sum of its yield will consist of elements of the QEPS. However, even after destruction in the LHC, it is possible that some aspects of a particle that are not visible remain unscathed, because they are at frequencies too low for the impact to break them apart. This quantum electroparticle spectrum, and the diverse states physical matter can occupy, has yet to be identified and fully accounted for in quantum physics.
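The leaf example can be modelled as a toy data structure (the band names and the visibility rule below are my own assumptions, since the QEPS itself is hypothetical):

```python
# Toy QEPS model: an object is a list of particles tagged by spectrum
# band. A physical cut removes only visible-band particles; the invisible
# portion is assumed (per the hypothesis) to survive unscathed.

VISIBLE_BANDS = {"visible"}   # assumed detectable portion of the QEPS

leaf = [("visible", "cellulose"), ("invisible", "qeps-cellulose"),
        ("visible", "chlorophyll"), ("invisible", "qeps-chlorophyll")]

def physical_cut(obj):
    """Keep only what a physical instrument cannot interact with."""
    return [p for p in obj if p[0] not in VISIBLE_BANDS]

print(physical_cut(leaf))   # invisible-band structure persists, undetected
```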

That the fundamental state of matter is a particle, not a wave, is arguably demonstrated by the Bose-Einstein Condensate, achieved using lasers to cool sodium atoms down to 177 nanokelvin. The result is a plasma that on observation is easy for scientists to mistake for a wave form. It is in fact atoms being observed as a fine cloud of fundamental particles, a particle soup. It appears to align only because the particles are uniform, and cascades only because they are aligned, under the influence of a magnetic field, into what is basically the structural pattern, thumbprint or blueprint of sodium. Hypothetically, interfering with this field would alter its "field pattern" or blueprint, causing the plasma to change from sodium into some other substance when it condenses (warms back up). Technically, using lasers to melt substances by freezing them to 177 nanokelvin and below should allow scientists to create any substance or form of matter by altering the pattern of the plasma while still in its melted state and letting it condense into some other form of matter (performing what would be considered the equivalent of the Philosopher's Stone in alchemy).



Image of Bose-Einstein Condensate 


Britannica describes Bose-Einstein condensate (BEC) as "a state of matter in which separate atoms or subatomic particles, cooled to near absolute zero (0 K, − 273.15 °C, or − 459.67 °F; K = kelvin), coalesce into a single quantum mechanical entity—that is, one that can be described by a wave function—on a near-macroscopic scale." Firstly, it is not a single quantum entity; it consists of a cloud of fundamental particles from which the atom is built. Secondly, the wave function is created by a magnetic field. It is not evidence of wave-particle duality; in fact it proves the opposite, which is that fundamentally matter is a particle, not a wave. That is, it shows the state of matter when the projector's wheel stops spinning (when time stops): the object in each frame is a particle, as shown in the above image captured of Bose-Einstein condensate. Physicists to this day are still getting this wrong. In all likelihood, if temperatures fell further, the electromagnetic field itself would collapse, leaving only the plasma without the wave function or pattern that forms the blueprint for sodium.







Both the electron [all perceived matter, physical forces and how they act] and the observer [people and other forms of intelligence] are just constructs within the BEC. There is nothing outside the matrix, metric or clouds of the BEC. Physical matter begins and ends, or only exists, in the fine particle cloud of the BEC; everything else (including electrons) is just information, a construct created by processes within the BEC. Initially it was thought the electron is matter and the BEC is finer particles of the electron. However, technically only the particle cloud or BEC contains physical matter; the electron is just a construct of the code or patterns generated in the particle clouds and therefore has no "real" physical properties. This provides a clear separation between code and the matter it creates. It also explains the discrepancy between the electron appearing as a point when viewed in Se-Te and as a smeared-out cloud when viewed at low temperature. The BEC is information or code that tells the observer the electron is a point, and therefore that is what the observer sees; the electron itself is nothing more than information and does not, in this sense, exist. Technically the BEC both creates and observes or appraises its own reality. In other words the electron does not "appear" as a smeared-out cloud; scientifically, nothing else exists but the smeared-out cloud.


It seems inevitable that the homogenous or uniform particle in fact generates bits that are no different from bits of information in a transistor, and that the particle cloud generates clusters of bytes to process information that create the atom. This observation is important because it implies that the primordial and fundamental function of a uniform particle is to act as part of a transistor. These particle clouds or transistors create the bits that are processed into a conventional "particle", or what is described later as a Tron, which in turn generates the atom. The atom itself can be further described as nothing more than a cluster of information with mass tagged to it, identified later as T4. This observation is critical because it identifies how a bit in a transistor, for instance, becomes what can be described as a physical object: an electron, which creates the atom. Understanding this transition from one state to another is important, and verifying that the uniform particles in the BEC generate and process bits could be a critical piece of the puzzle. Unfortunately, it seems no one has filmed, x-rayed, MRI-ed or photographed the energy patterns given off when bits and bytes are processed through billions of transistors in microchips to see how individual bits and clusters of bytes behave in a transistor processing information, that is, how code written by programmers actually behaves as electrical energy in microprocessors. Despite the fact that the uniform particle is smaller than the electrons in a semiconductor, the expectation or prediction is that the patterns or clouds observed will be almost identical to the Bose-Einstein condensate or particle cloud, since there are likely to be similarities between how information is processed in nature and how it is processed in a man-made transistor. If the patterns in the Bose-Einstein condensate and the electrical energy exhibited by bits and bytes being processed in microchips are for the most part similar or identical in nature, this will be a critical discovery for modern science, as it provides a feasible scientific construct for an interface between code and reality or the physical world. It may also be useful to note that nature is using the uniform particle in the BE condensate to process matter at incredible speeds that no method of computing in the world today can match. It may also be much harder to produce quantum computing from the quantum level itself; it may be much easier to delve to a smaller scale beneath the size of electrons, what can be described as a sub- or super-quantum scale, in order to have more dexterous control of entanglement and other particle states. However, to do this it must be accepted that there is a vast scale infinitesimally smaller than electrons that is currently not accounted for in physics, but that is evident, though unappreciated, in the BE condensate.
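A minimal sketch of this bits-and-bytes construct, taking the paragraph's premise at face value: a grid of "transistor" particles each holds one bit, and rows of bits are read out as bytes forming a "blueprint". The grid size, the byte grouping and the decode step are all hypothetical choices made for the illustration.

```python
# Sketch of the proposal that uniform particles in the condensate act like
# transistors: each particle in a grid stores one bit, and clusters of bits
# are read out as the "blueprint" of a substance.
import random

random.seed(0)
GRID = 8  # an 8x8 cloud of "transistor" particles

# Each particle stores one bit; nature's "code" is just the bit pattern.
cloud = [[random.randint(0, 1) for _ in range(GRID)] for _ in range(GRID)]

def read_bytes(cloud):
    """Group each row of bits into a byte, as the text suggests bytes cluster."""
    return [int("".join(map(str, row)), 2) for row in cloud]

blueprint = read_bytes(cloud)
# In this construct, the byte pattern is what an observer perceives as a
# particular atom; changing the bits would change the substance.
print("blueprint bytes:", blueprint)
```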

Chip manufacturers are having problems with transistor density. They can fit billions of transistors onto a microchip. The microchip uses silicon as a semiconductor to create transistors that use electrons to process information. What they are unaware of is that an electron may itself be built from a natural microprocessor, where each electron may contain as many as a trillion transistors. In this construct the BE condensate is the natural microchip and the uniform particle in the cloud is the equivalent of the electron, except that it is infinitesimally smaller than an electron. The proposal is that Space (St) consists of 1 trillion transistors per electron and is ubiquitous. This gives an idea of the difference in scale and its potential processing speeds. By understanding what the uniform particle in the cloud is and how the cloud works, it may be possible to create microchips with the same processing power found in nature. For this to happen science must recognize that the BE condensate is not a wave but a cloud of fine particulate already acting like "electrons" in the act of being processed, as in a conventional silicon-based microchip. If there can be as many as a trillion transistors in a single electron, what impact does this have on current parameters for transistor density and computing power, especially with the knowledge that there are billions of electrons already deployed in silicon microchips? Without this becoming a scientific fact, the manufacturers of microchips will assume that the smallest element in physics they have to work with, when it comes to matter, is a particle in the form of an electron, when in fact this is far from the truth. This in turn affects the limit manufacturers believe is possible for the density of transistors on integrated circuits, thus limiting processing speed and power. In the same way that a modern smartphone can have more transistors than the Apollo guidance computer, there could be more natural transistors in a single green leaf using sunlight for photosynthesis than there are in all computing devices on earth. The BE condensate appears to function as a natural processor that designs and generates electrons, controls entanglement, and predetermines what they become, how they function and the physical matter they produce, therefore offering a means to developing computing power able to operate at this sub-quantum or super-quantum level, which could have a revolutionary impact on industry and manufacturing processes.
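The scale gap this paragraph proposes can be put into numbers. The 1-trillion-transistors-per-electron figure is the article's own proposal, not a measured value, and the chip transistor count below is just a round contemporary figure for a large chip.

```python
# Back-of-envelope arithmetic for the density claim above.
TRANSISTORS_PER_ELECTRON = 1_000_000_000_000   # the article's estimate (1e12)
CHIP_TRANSISTORS = 50_000_000_000              # ~5e10, a modern large chip

# If each chip transistor switched even one electron, the "natural"
# transistor count implied by the article's construct would be:
natural = CHIP_TRANSISTORS * TRANSISTORS_PER_ELECTRON
print(f"implied natural transistors per chip: {natural:.1e}")    # 5.0e+22
print(f"scale factor over the chip itself:   {natural / CHIP_TRANSISTORS:.0e}")
```

The point of the arithmetic is only to show the twelve-orders-of-magnitude gap the proposal implies between engineered and "natural" transistor density.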


Atoms viewed normally in Se-Te (Top View) are seen as physical objects. However, when the temperature is sufficiently lowered toward absolute zero using lasers, the view changes to St-Tt (Side View) and the particles that make up the atoms become visible. They are uniform. Each particle has mass and is entangled with other particles in the cloud. Though the cloud appears to be a single indivisible substance, it should not be mistaken for one: it consists of fine particles. Entanglement furnishes each particle with information on how it should behave or move, and this makes energy and matter programmable. The entangled particles moving in harmony create the particle cloud or Bose-Einstein condensate. The code, algorithms and other information fed to the particles through entanglement, which determine how they behave, are what determine the substance they become. The manner in which the particles move in unison makes them appear as though they are a wave or are under the influence of a magnetic field. Any fields should be considered a by-product of quantum entanglement. Quantum mechanics programs the particles and is responsible for communication and for harmonizing the fine particle cloud or Bose-Einstein condensate. Fields are not the origin or source of the harmonics.
The fundamental state of matter is an individual particle, not a wave. Although waves and fields are useful for explaining what is being observed, it may be more accurate to state that the waves are not created by fields, but by quantum mechanics. Each uniform particle, at this stage of analysis, can be considered part of the process by which a bit of information is created and processed.


 

This depiction and structure of neutrons, protons and electrons, the orbits depicted in the atom and how it works, are not accurate. Let me explain. Technically the electron, neutron and proton do not exist as separate entities. There is only what can be described as a "Tron". The stage where the Tron is in its orbit determines its polarity. Therefore, the singular Tron generates protons, electrons and neutrons. The diagram depicting these and their orbits is useful but wholly inaccurate. The more accurate diagram of an atom is shown below. There is no static or stationary central collection of protons and neutrons being orbited by electrons as depicted in the classical explanation of an atom.






The image above is the more accurate emulation of an atom. The element can be referred to as a "Tron". Only one element is shown in orbit, for simplicity, to make it easier to understand. Though there is only one Tron in the diagram, it remains plausible that there can be a number of Trons and orbits in an atom, each dipping into and out of the nucleus at the same T3 neutron nexus. There can be Trons in the same and different orbits moving as electrons while others are moving as protons in the nucleus, and they exchange places as they travel into and out of the nucleus, switching polarity as they do so. When the Tron is at T1 it is described as having a negative charge and is therefore called an "elec-tron". When its orbit reaches T3 it reverses polarity. During reversal, or the process of switching polarity, it has no charge and may be referred to as a "neu-tron". When it is at T2 it has reversed polarity and is described as being a "posi-tron". It then cycles back into orbit to T1, repeating the process. It moves under its own internal power, directed by quantum mechanics.
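The cycle described here can be written down directly. This is a sketch of the text's own T1-T3-T2 model, with an arbitrary step granularity; the phase labels follow the diagram referenced above.

```python
# One "Tron" whose observed identity depends on where it is in its orbit.
CYCLE = [
    ("T1", "electron", -1),   # outer orbit: negative charge
    ("T3", "neutron",   0),   # nexus: polarity reversal, momentarily neutral
    ("T2", "positron", +1),   # inside the nucleus: positive charge
    ("T3", "neutron",   0),   # nexus again on the way back out
]

def tron_orbit(revolutions):
    """Yield (phase, identity, charge) as the Tron repeats its cycle."""
    for _ in range(revolutions):
        yield from CYCLE

for phase, identity, charge in tron_orbit(2):
    print(f"{phase}: observed as {identity:8s} charge {charge:+d}")
# In this model there are no permanent electrons, protons or neutrons,
# only one Tron sampled at different stages of its orbit.
```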






MRI scan of an atom. The dimple in the atom, which can vary in depth
and width, corresponds with "Trons" dipping into and out of T1-T3-T2 orbits.
The scan is 3-dimensional and fits what is predicted by T1-T3-T2.
See Magnetic resonance imaging of single atoms on a surface.

   
  


What is the dimple? In my assessment, the Tron is the electron, proton and neutron. Even though the nucleus of the atom is not visible in the MRI scan, we can peer inside it using T2 and T3. This allows us to label the dimple "T4". T4, confirmed by the MRI, is nothing less than the elusive "Higgs Boson". The Tron, in addition to being the electron, proton and neutron, is by subtraction also the Higgs Boson or gravi-tron. The Tron uses variances in acceleration between the nucleus and the shell to generate a gravitational force or mass: the dimple, the Higgs Boson, Gravitron or T4 of the atom visible in the MRI. In essence, at T4 the atom acts as an amplifier. It can take the infinitesimal mass of Trons and the immense velocities in atomic orbits and amplify them millions of times over to facilitate movement and generate mass, in what appears to observers as gravity. This of course means that the force, mass or exertion referred to as the Higgs Boson can be measured; however, a physical Higgs Boson, shown in grey as T4, will never be found because it does not exist.

This allows us to determine that whereas electricity can be referred to as the individual or collective movement or flow of electrons, gravity is the individual or collective movement or flow of atoms. When electrons flow, the current observed is called electricity; however, atoms can do the same, and this movement, a current consisting of flowing atoms, is when observed referred to as gravity. Both forces are created by the behaviour of Trons. When atoms flow in a direction, the result is the movement of the body or object they create. For example, when the earth circles the sun it is the equivalent of a gravitational current, in the same way that when electrons flow in a wire it produces an electric current.
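The analogy can be made numerical. The electric side uses the standard electron charge; the "gravitational current" function is a hypothetical counterpart invented for this sketch, defined by direct analogy as mass flow rather than charge flow.

```python
# Electric current as flow of electrons versus the article's analogue:
# "gravitational current" as flow of atoms.
E_CHARGE = 1.602e-19  # coulombs per electron (standard value)

def electric_current(electrons_per_second):
    """Conventional current: charge passing a point per second (amperes)."""
    return electrons_per_second * E_CHARGE

def gravitational_current(atoms_per_second, atom_mass_kg):
    """The article's analogue: mass flowing past a point per second (kg/s)."""
    return atoms_per_second * atom_mass_kg

print(electric_current(1e19), "A")                   # ~1.6 A
print(gravitational_current(1e19, 3.8e-26), "kg/s")  # same count of sodium atoms
```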
 




When it comes to quantum mechanics, the purpose of energy bands becomes clearer using the new construct. The diagram above shows that the orbital energy bands are designed to repel any Trons in orbit. Electrons, being negatively charged, will be compelled to exit a negatively charged energy band in external orbit at C. When in external orbit they will therefore be repelled until they find the nearest exit, which is the positively charged energy band or orbit in the nucleus. However, when Trons enter the nucleus they become positively charged. A consequence of this is that the positively charged energy band in the nucleus pushes them away, and they once again have to look for the nearest exit, which is back into external orbit at C. However, when they enter external orbit they become negatively charged electrons and are once again pushed along until they find an exit. This continuous cycle inevitably creates a "motor", or the energy observed and referred to as atomic energy. The fact that this energy can be harvested in a reactor entails that this design not only creates a "motor" but in the process also creates a "dynamo" or "generator", basically a reactor. The energy bands act as power lines feeding Trons with momentum that pushes them along and keeps them circling. Trons dipping into and out of the nucleus do not give off radiation or photons because they are riding energy bands into and out of the nucleus at constant velocity. When the atom is observed, the process of dipping into the nucleus is not obvious and it appears as though the electrons simply maintain a round or circular orbit, which is the classical manner in which they are depicted. Understandably, the Trons are moving at such high velocity that when observed it will appear as though they are standing still and have a permanent residence in the nucleus and in orbit, when in fact they are constantly on the move throughout these locations, making them fundamentally impermanent. The Tron likely has a standard mass. The only reason the proton is heavier than the electron is that when the Tron circles into the nucleus its rate of acceleration increases due to travelling a shorter circumference, and this makes its mass greater than when it is in outer orbit as an electron.
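That last premise has a checkable consequence. If the Tron's speed is constant and apparent mass scales with centripetal acceleration a = v²/r, then the measured proton-to-electron mass ratio fixes the ratio of the two orbit radii. The sketch below simply takes the paragraph's premise at face value; nothing in it is established physics beyond the measured mass ratio.

```python
# A worked consequence of the paragraph's assumption: m proportional to
# v**2 / r with v constant implies m_nucleus / m_outer = r_outer / r_nucleus.
MASS_RATIO = 1836.15   # m_proton / m_electron (measured)

radius_ratio = MASS_RATIO
print(f"outer orbit would need to be {radius_ratio:.0f}x the nuclear radius")
# For a ~1 femtometre nuclear orbit this puts the outer orbit near 1836 fm,
# far smaller than the ~52,900 fm Bohr radius, so the premise is testable.
```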

Importantly, this explains why Trons appear never to run out of energy and collapse into the nucleus, as would be expected for a hydrogen atom, for example. Rather than collapse, they intentionally dip or dive into the nucleus, and this design entails that the energy bands act as accelerators.
It also explains why neutrons are difficult to find: Trons are neutral only for the briefest duration.
They are riding energy bands as they circle into and out of the nucleus. These energy bands (from which, in this construct, quantum mechanics gets its name), by repelling Trons both in the nucleus and in external orbit, keep feeding them with energy or momentum, with which they amplify mass at T4 and sustain their orbital movement. The energy bands in quantum mechanics are simply "fields". This complex explanation is necessary when the atom is viewed as having electrical charges and magnetic fields.

However, all this complexity can be dropped if the Trons, which at this stage can be compared to graphical "sprites", are simply programmed to move in the manner observed, and therefore orbit under their own power, which may very well be the case. In addition, the path they take may be determined by underlying code, in which case the energy bands or "quantums" of quantum mechanics, like "fields", do not exist, as they are nothing more than by-products of processing taking place hidden below the size and scale of electrons, where the BE condensate or fine particulate constructs an atom - see the animation below.


The Collision Drive: harnessing gravitational force 

By advancing the understanding of how gravity works, from Newton to Einstein to autonomous matter, using the "fields do not exist" method applied through brute-force analysis, it is possible to engineer an apparatus and method that emulates how the Gravitron in atoms, shown in the diagrams above as T4, generates gravitational force. This apparatus and method is called the Collision Drive. The Gravitron is the Graviton or "Higgs Boson", and the mechanism replicates and generates a gravitational force at T4 that can be pointed in any direction to create propulsive force.


The Collision Drive uses Mechanical Engineering to emulate the 
force at T4 creating what can be referred to as entry level 
or tier 1 gravitational force



Both the magnet on the left and the earth and moon on the
right are not floating on any medium or levitating as a result of "fields",
but as a result of independent mechanical processes in their atomic
structures that are attributed to quantum mechanics and function
on basic tenets of mechanical engineering. Understanding
these mechanisms and how they work led to the successful
replication and design of the Collision Drive.

                         
When analysing the mechanics of an atom it is important not to be misled by terminology. A certain degree of flexibility is required. For instance, descriptions such as negative and positive charges, energy bands or a neutron can be useful when trying to explain how particles behave. However, a flow of electrons may be, in terms of mechanics, no more complicated than the flow of water in a downward-sloping river. There are implications when it is believed that negative and positive charges actually exist when in fact these are just different directions, types or stages of acceleration; or that there are "permanent" electrons, neutrons, protons and gravitons in an atom when in fact these are just Trons momentarily in different parts of an atom; or that orbits are circular when in fact they curve into and out of the nucleus. To use analytical brute force to unlock the secrets behind these problems we can say "electrons, protons, neutrons and gravitons" do not exist in order to force a different explanation, approach or dynamic onto the analysis, when what we are in fact saying is that all of these components of an atom are created by Trons. Just like the "fields do not exist" method of analysis, it can be said that electricity and electrical charges "do not exist", because they are just aspects of the mechanical, rather than the "electrical", manner in which an atom is thought to operate. Terminologies therefore have to be handled dexterously to avoid interpretations being limited by how they are used, framed, defined and applied, as the very process of defining can hinder the capacity to make deeper and more accurate inferences. We need to be careful not to lose the capacity to think and solve problems outside the definition when scientific terminologies are created.



Whether it is a planet and comet, Trons in orbit in an atom, or particles
in a Bose-Einstein particle cloud, communication between elements, shown
by the red-line vector depicted in the animation, is achieved using quantum entanglement. This allows
each body, regardless of whether it is large or tiny, to adjust its movement using quantum
mechanics; the movement, when observed, is described as mass or gravity.
The two bodies internally adjust their movement in relation to one another
using acceleration generated within the atom [tier 1 gravity].
There is no quantum energy band, gravitational or magnetic field: the blue line
in the animation is merely drawn, depicted, created and generalized
based on the behaviour of bodies and elements;
technically these fields or lanes do not exist. Einstein and other physicists to this day
believe in the geodesic or geometry of space as well as magnetic fields and quantum energy bands, which is fine and appropriate for teaching ideas as long as they are used in the correct spatial context of Se-Te. Though still useful for accuracy, simplicity, equations and drawing conclusions, they can also lead to common mistakes, misinterpretations and misconceptions in physics if their limitations
for explaining phenomena are not observed or understood when applied or prescribed.
Hence, the "fields do not exist" approach (St-Tt) is useful for refining how these elements
and forces are understood to exist and work, especially while
very little is known in physics about how entanglement works
to process matter, reality and apply physical laws.

Communication between the two elements or bodies, as they negotiate
their navigation past one another, is depicted by the red vector
and can take place 10,000x faster than the speed of light
(as has been reported in observations of entangled particles). Mobility or mass is achieved
by adjusting the size and repositioning of the graviton, the T4 on the atom's surface shown
earlier, at the atomic scale, by which a body can move in 3 dimensions
with great precision and do so without the need for a spatial medium to ride on or through.
The atom, its internal and external parts, is in turn controlled by deeper communication
taking place between particles in the particle cloud of the Bose-Einstein
condensate from which it is designed and created. This scenario leads to refined
movement and exceptionally high levels of control and accuracy in the
construction of matter, from the infinitesimally small particle cloud to the
exceptionally large astronomical bodies they build. It governs how matter
is created and how mass is attached or applied to it, in what is observed as the
laws of physics, or what can be described as or compared
to level 5 autonomous quantum mechanics.

And yet, even at these dizzying depths of exploration,
humanity has only just begun to scratch
the surface of this science.

If all matter is in fact created from light as a fundamental particle, infinitesimally smaller than an electron, that can be observed in the Bose-Einstein condensate, then it may be necessary to revisit the structure and nature of light itself. If magnetic fields are dismissed using a "fields do not exist" approach, it may be necessary to consider that light is made up of a uniform particulate that propagates through transistors (Space or St), not fields (Se or geometric Space), as is observed by the naked eye and experienced. If light is controlled by transistors, not electromagnetic fields, it means that light is inherently non-contiguous and does not move from one place to another, but uses a stimulation process that only makes it appear to travel. Light can behave as both a wave and a particle. However, we have seen that light waves can exist without the need for magnetic fields by being non-contiguous, controlled instead using entanglement and made to produce "light waves" moving at 299,792,458 m/s when in fact light does not travel this distance or move at all (see the animation below). By generating light waves these transistors are not only able to make light appear to travel, but also to generate depth, width and motion (where motion is mistaken for Time in Se-Te). For this process to be understood it must be accepted that light in general consists of a fine uniform particulate, as observed in the BE condensate. This particulate is turned into waves using a process similar to the way electrons are used in semiconductors to process information.



The animation above demonstrates how a smeared-electron BEC interface,
where each square is a single fine particle controlled by transistors, may generate
an electron. The waves are interfaced with code such that they are indistinguishable.
The electron only becomes visible as an electron when observed in Se-Te,
or at room temperature. However, when observed close to absolute zero,
the code rather than the electron becomes visible; proof that code and
the electron are generated through a time variance. In this case the variance
is cancelled by subjecting the electron to very low temperature, the
hypothesis being that this extreme drop in temperature then reveals
the code from which it is designed. The smear is currently mistaken for waves
by physicists, when it is in fact a cloud consisting of fine particulate.

In this great video, Dianna Leilani Cowern (Physics Girl: follow the link to her
awesome YouTube channel) uses an experiment
with a tone generator, mechanical vibrator, plate and sand to
create patterns from sand. The sand can be compared to
the smeared atom or fine particulate in the BEC. The black plate
represents the medium for entanglement connecting all the particles together.
The pattern created by the plate is code that designs the electron.
This is the view of the atom in 3 dimensions, that is, the atom
at close to absolute zero, or in the absence of time (without the curvature of Space-Time). When the same pattern is viewed in 4 dimensions, at room temperature, what is seen
is the atom instead of the smear. Her experiment is identical to how the atom is predicted
to be created from the Bose-Einstein condensate in the diagrams above.



Familiar structures: How the BEC designs orbits and atomic structures 


If you can understand how this system works, then it should be clear why nature's processing speeds are faster than any microprocessor in the world today. It is clear nature uses 1s and 0s to read and store information, but not to process it. To process information and find the solutions to equations and calculations, the 1-and-0 system used by computers is far too slow. Even if the fastest microprocessors in the world were clocking 10 GHz and above, this would be vastly slower than the method nature uses to process information. Earlier it was noted that nature seems capable of communicating 10,000x faster than the speed of light. Processing at these speeds and more provides a clue as to just how much more efficient this method is. The fastest computer in the world today could not keep up with it. In the same way that digital is better and faster than analogue, this system is faster and better than digital computing methods. This method is not based on absolute answers, but on the highest probability of the solution being correct. Computing at these high speeds explains how different kinds of entanglement are possible. It is very interesting. Can you see how it works?
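One way to read the "highest probability" claim is as a search strategy: rather than enumerating every bit pattern, converge on the answer probabilistically. The toy below contrasts the two under that reading; it is an illustration of the distinction the paragraph draws, not the mechanism nature actually uses.

```python
# Exhaustive enumeration versus a probabilistic hill-climb on one target.
import random

random.seed(1)
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0]

def score(bits):
    """Count positions that already match the target pattern."""
    return sum(b == t for b, t in zip(bits, TARGET))

def probabilistic_solve(max_steps=10_000):
    """Random bit flips, keeping only flips that do not reduce the match."""
    bits = [random.randint(0, 1) for _ in TARGET]
    best = score(bits)
    for step in range(max_steps):
        if best == len(TARGET):
            return step
        i = random.randrange(len(bits))
        bits[i] ^= 1
        new = score(bits)
        if new >= best:
            best = new        # keep the improvement (or sideways move)
        else:
            bits[i] ^= 1      # revert an unhelpful flip
    return max_steps

print(f"probabilistic search converged in ~{probabilistic_solve()} flips")
print(f"exhaustive search could scan up to {2 ** len(TARGET)} states")
```

For 12 bits the exhaustive space is 4,096 states, while the probabilistic search typically converges in a few dozen flips; the gap widens exponentially as the problem grows.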


Light waves and the fine particulate in the BEC (estimated at 1 trillion
transistors per electron), controlled through entanglement by transistors to
create electrons and atoms, are non-contiguous. This
means the waves are created by fine particles remaining in place
and transferring energy, using the Mexican Wave method.
When this is observed it appears as though light is traveling
from one location to another, when in fact it is being transferred, not propagated.
The fine particles in the animation are not flowing from one location
to another; by staying in place, rising and falling in synchrony, transistors
can create the illusion of distance and movement from location to location.
Light does not travel from the sun to earth. It is simply stimulated.
This implies that light itself, like any other substance, is in this sense created by natural transistors.
All 4 Space-Time (Se-Te) dimensions, that is, 3-directional movement plus Time,
are created by these transistors (St-Tt). The fine particulate moves and forms waves as though
in a field, when in fact this is not the case: their movement is being
controlled by entanglement, shown by the red-line vector in
the animation above.
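The "Mexican Wave method" is easy to simulate: no particle moves horizontally, yet the crest travels. In the sketch below the phase rule stands in for the entanglement-driven coordination the text proposes; the particle count and wavelength are arbitrary.

```python
# Particles rise and fall in place with position-dependent phase;
# the crest appears to travel although nothing changes position.
import math

N_PARTICLES = 60
WAVELENGTH = 20.0   # particles per apparent wavelength

def height(i, t):
    """Vertical displacement of particle i at time t; i itself never changes."""
    return math.cos(2 * math.pi * (t - i / WAVELENGTH))

for t in (0.0, 0.25, 0.5):
    crest = max(range(N_PARTICLES), key=lambda i: height(i, t))
    print(f"t={t:4.2f}: crest appears at particle {crest}")
# Output: crest at particle 0, then 5, then 10. The crest index advances
# with t even though every particle keeps its place, which is how the text
# says light "appears to travel" without propagating.
```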

If the basic method for differentiating waves is wavelength or size, then we must conclude that the fundamental particles in the BE condensate behave like electromagnetic wavelengths (see the wave motion in the animation above) only because they are being controlled by natural transistors when they propagate, using entanglement to apply the 1s and 0s of code, not electromagnetic fields, which, like the blue line in the animation above, are just a by-product of the process. The animation demonstrates that "fields" and waves are a by-product of the manner in which entanglement orchestrates the movement of fine particles. The wave form is non-contiguous. It is created by transistors orchestrating the movement of fine particulate, not by magnetic fields, although the movement when observed will be attributed to the presence of a field or to waves propagating from one location to another, just like the blue field line in the animation above. There is no field line; this is just a misleading observation. The two bodies negotiate movement around one another using the red-line vector, or entanglement, seen in the animation.

Entanglement is linked to the presence of transistors. Earlier we proposed that a single electron can be made up of a density of 1 trillion transistors, with true Space (St) seen in the manner of the BEC acting as a semiconductor mechanism that processes and controls light, or the fine particulate, to produce atoms. This process governs the uniform particulate that produces atoms of different size, and therefore of different mass, that propagate on a measurable scale. In other words, in the same way that there is visible and invisible light on the electromagnetic spectrum, there must also be visible and invisible atoms or matter on what can be described as an electroparticle or matter spectrum currently unaccounted for in physics. This fine light particulate, when manipulated by natural transistors, forms complex matter and organisms and endows them with mass at T4 to give them the physical properties observed in material objects. T4 is simply tier 1 gravity, or 1st-generation gravitational force, the production of which has already been made possible and worked out using a collision drive. Shortly, how to emulate the St transistors will be similarly figured out; like tier 1 gravity, this need not initially be a complex process. They are part of how the BEC manipulates uniform particulate to create atoms that in turn create the physical universe, except that in the BEC the natural transistors processing uniform particles, which appear as light or photons but are actually made up of fine particulate infinitesimally smaller than electrons (est. 1 trillionth the size of an electron), are currently not accounted for in physics. (See the diagrams above that describe how a Tron creates atoms and amplifies mass at T4.)


Wikipedia describes the above chart as "Wavefunctions of the electron in a hydrogen atom at different energy levels." As mentioned earlier, this description is of course another commonly accepted and propagated mistake in physics. These images represent homogeneous particle clouds or "plasma", not waves. The patterns are created by electromagnetic fields which define the characteristics of the substance they create. This plasma consists of fundamental particles at the quantum level. Each of these particles, moving as though they are a single entity, has mass. This mass comes from true Space. (We know that when Einstein refers to space he actually means geometric space, and when he refers to time it is not genuine time, it is motion.) Since the mass in the particles comes from true Space and the patterned fields are electromagnetic, this is the only level at which gravity and electromagnetism intersect. This intersection is important because it allows electromagnetism to control gravity through the mass of fundamental particles. This is a derivative of quantum mechanics, because controlling gravity in this way involves the acceleration of the mass inherent in these plasma clouds and is mechanical in nature. It is not a direct control of gravity itself, which requires operation through true Space rather than the particle.

[When observing phenomena, even at the quantum level, physicists need to retrain themselves to understand what they are seeing. They have to observe from the 5th dimension, not Einstein's 4th dimension. To understand the universe from the 5th dimension it becomes necessary to subtract the lower dimensions (1st to 4th). This entails removing distance, motion, time and so on. Through the subtraction of these, the observation takes place from the 5th dimension (side view). Here mass, volume, weight and energy do not exist, or are not the same as they are seen and experienced from the top view (4th dimension), in the way software differs from hardware: an image on the screen is different from the underlying code from which it is written. This way what is observed can be correctly interpreted. For instance: instead of structure, lines; instead of patterns, code; instead of energy levels, markers.]

Furthermore, this plasma state, and the ability to influence it using magnetic fields to create a particle beam, possibly offers one of the few plausible methods for using electromagnetism to indirectly control gravity by manipulating the plasma. This works because suspending motion in this manner is an artificial way of slowing down time [motion] (another, safer route to the Hutchison effect), which gives physicists access to the manipulation of the mass of the plasma through electromagnetic fields.


Professor Jim Al-Khalili explains how everything
can be described using information

The reason this aspect of physics has not been pursued, and the technology has made no progress thus far, is because, once again, the observation has been misinterpreted, in this case by calling what is observed in the behaviour of the cloud or plasma waves instead of fundamental particles. In other words, the method and technology for controlling gravity, and potentially building any substance using fundamental particles as building blocks in manipulated fields, has already been developed. The challenge with this type of technology, of course, will always be the method used to suspend motion, which requires extremely low temperatures.


Mistaking particles moving in formation under the influence
of a magnetic field for waves. The atoms are made up of fine particulate clouds.

Firstly, what is being observed are particles from which atoms are constructed (forming a fine particulate cloud or plasma), not waves. The problem is that atoms are currently thought to be particles, when in fact the atom is not the final particle: it is in turn constructed from the particle cloud. These particles appear to move in unison because they are uniform and under the influence of a magnetic field, not because they have literally become waves. This is a misinterpretation of what is being observed, and an easy mistake to make if an observer believes the atom is the [smallest] particle, when in fact the atom itself is constructed from the particulate cloud or Bose-Einstein condensate. If this is true, then technically, by advancing this very same laser-based technology, not only can any form of matter be designed and built from the ground up by changing its blueprint, but that very same plasma, under the influence of electromagnetic fields, should be able to indirectly control gravity. This could be described as the equivalent of 3D printing at the subatomic level or on the quantum scale. The only problem with this method is that it cannot be done at room temperature; achieving that should simply require a little innovation. Each atom is constructed from a cloud of fine uniform particulate. Each particle in the cloud acts in unison with the other particles by communicating directly with them through quantum mechanics to perform a dance, so to speak, that appears as the wave form being observed, which is in turn thought to be under the influence of a field, when in fact the movement of each particle and of the cloud occurs through quantum mechanics, most likely by the particles being entangled.


Electromagnetic Fields Do Not Exist


If the fundamental state of matter is a particle and not a wave, it requires us to reflect on the ultimate validity of electromagnetic fields. It means that the entire superstructure of physics, where electromagnetism is concerned, sits on a fallacy or misinterpretation of observations concerning electromagnetic fields. I have for a very long time viewed the interpretation of what electromagnetic fields are with some scepticism. The greatest weakness of modern-day physics is misconception, misperception or the incorrect interpretation of observed phenomena, a problem that affects the works of even the most renowned physicists in history. We see this with Einstein's Space-Time, when fundamentally Space exists outside of Time, making this basic descriptive of his own theory an oxymoron. The same kind of mistake may apply to our understanding of electromagnetic fields, which affects a host of research outcomes from renowned figures such as Faraday, Maxwell, Schrödinger, Dirac and so on.

Magnets of opposite poles attract; magnets with similar poles oppose one another; magnetic fields can be drawn as lines of force using iron shavings. Try to force two similar magnetic poles together and the resistance is potent; the magnetic field feels almost fluid-like in your hands as the two magnets oppose one another. However, the reality is more likely that there is no "field" whatsoever between the two opposing magnets. The "force" that pushes the magnets apart is not a field at all. It is merely acceleration taking place within the magnetic material. In other words, the two magnets (in and of themselves) are accelerating against one another, in opposite directions. There is thus no fundamental difference between how magnetism and gravity work; both are merely forms of acceleration.

Let's say that at every conference a person hosts, as soon as he arrives, he asks all the people to stand up and all those sitting in the front row to form a circle around him, facing him, so they can introduce themselves; then everyone sits down. Doing this then becomes a general rule at all his conferences. Would you then say that as soon as this person enters a conference he exerts a magnetic field? That those sitting in the front row are pulled or drawn to him by this field, while the rest only stand because they are further away? No. Yet this is exactly how modern physics describes the action of "force" or "fields". There is no field. Everyone in the room is following a rule or script (equations and laws in physics; code or algorithms in software). They are moving into position not as a result of an external motive force, but by their own internal energy and movement. In other words, something is taking place inside the magnetic material, not around it. This points to the fact that all matter contains this electromotive ability or internal capacity for motion observed in magnets. Even when iron shavings are used to draw magnetic "lines of force" or flux lines, this is an illusion or delusion, because each individual iron shaving is moving itself into position; it is not being moved into position by the force exerted by a field. The entire process consists simply of internal acceleration. Essentially, most of what you've been taught in physics about magnetism, as early as secondary school or high school, all the way into university, to this day, is misinformed.
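The conference analogy can be restated as a simulation. In the sketch below each filing follows its own internal rule once it receives the magnet's position as information, and no force field appears anywhere in the program; the rule itself is an illustrative stand-in for the quantum-level "script" the article proposes, not a claim about real magnet physics.

```python
# Rule-following agents: no field object exists, yet the trajectories
# an observer sees would look exactly like filings aligning to flux lines.
import math

magnet = (0.0, 0.0)
filings = [(3.0, 4.0), (-2.0, 1.0), (5.0, -1.0)]

def internal_rule(pos, source, step=0.5):
    """Each filing moves itself one step toward the information source."""
    dx, dy = source[0] - pos[0], source[1] - pos[1]
    d = math.hypot(dx, dy) or 1.0
    return (pos[0] + step * dx / d, pos[1] + step * dy / d)

for _ in range(3):
    filings = [internal_rule(p, magnet) for p in filings]
print(filings)
# An observer watching these paths could draw tidy "flux lines" through
# them, yet the motion here is caused by each agent's own rule, which is
# the article's claim about what a field drawing really records.
```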

One way of looking at this: when a child takes a magnet and brings it close to another magnet on a smooth table and that magnet is propelled away, it can be interpreted that the magnets exchanged information and the magnet on the table propelled itself away from the approaching magnet. The entire process can be viewed superficially from the position of cause and effect. However, for this to be true, how did the magnet on the table know it was required to move? The current response is that it was pushed by a magnetic field or force. This is inaccurate, because we can now deduce there is no magnetic field or force. The two magnets are communicating with one another at the quantum level and using a quantum-level process to create internal, self-energized motion. If this is true, then what implications does this transfer of information or communication between magnets have for quantum computing? If magnets are already using some form of quantum computing, using entanglement to control their movement, behaviour or reaction through communication with other magnets or magnetic materials based on an internal algorithm, then why is quantum computing so difficult when simple objects and materials like magnets and metals are already using it in place of the physical contact that is required to validate cause and effect?



This video makes an observation about magnetic force
that is ingenious


This then requires physicists to look deeper into how these magnets are interacting. Each opposing magnet is not being pushed away by the other magnet. It is in and of itself moving, in such a manner that it only appears to be under the influence of a "field". There is no field. When a pole is reversed and a magnet jumps up, dashes across a table and attaches itself to the other magnet, it is not being attracted by a "force", but is in and of itself providing the energy, momentum and movement or acceleration with which to jump and attach itself to the other magnet. There is no field. It is difficult to shake the notion that permanent magnets somehow behave like powerful transistors and emitters capable of naturally storing, exchanging and acting on information in the form of algorithms to create magnetic effects and reactions. The perceived "magnetic field", though technically not a force, seems to present evidence of this being the case.

To postulate that electromagnetic fields do not exist also requires us to revisit our understanding of electricity, how it works, as well as the function of positive and negative charges and what they really are. When a magnet moves, this seems to be a response to, or execution of, code (in the form of an algorithm). The evidence that the magnet is moving as a result of an algorithm or code being executed is the appearance of what are observed as fields or field effects. Fields appear because when nature initiates an action through code, matter will move as fast as it can to obey the instruction, and the "field" appears because processing speeds exceed the physical frame rate of matter, or what can be technically described as the "reality" experienced by human beings. For instance, if the attribute that governs the location of an object in 3-dimensional space is changed using code, the relocation from one place to another will occur instantaneously through Space (real Space, not distance), side-stepping the slower speeds of cause and effect; the resulting disruption to reality will be observed and described as a field. This implies that quantum computing may be the means that offers access to Space and may provide the technology with which it would seem possible to hack reality itself.

It makes sense that Nature would create reality from code, as this represents the most resource-efficient way of generating a universe. This direction is not in conflict with religious beliefs [though this approach may bring certain values into question]. In fact, religion seems to be ahead of science: as referred to earlier, long before science developed "code", religion had already identified it as "spirit", alluding to the fact that it was and is already aware of this being the nature of reality. For instance, religion is already aware that spirit, reality or matter (code) must obey instructions (faith); for example, telling a mountain to "be removed and be cast into the sea" perfectly demonstrates how instructions given at the sub-quantum level (code) will generate results that appear to take place outside cause and effect. The ability to speak directly to spirit and instruct it is therefore no different from a programmer accessing and instructing data, or basically coding.
However, religion, which is yet again way ahead of modern science, appreciates that this access requires knowledge and a unique kind of effort, the nature of which modern science does not as yet understand and is still too backward to grasp: "However, this kind does not go out except by prayer and fasting." Nature appears to use quantum tunneling in both biological and plant-based processes, for instance in photosynthesis. Quantum tunneling can itself be easily explained using code, with the detection of "waves" or "fields" being not the cause but evidence of changes in code that affect attributes of matter or particles related to their location.



The circular magnet hovering above the pad is not
suspended in mid air by a magnetic force field; that
force does not exist. It is suspended in mid air by itself,
that is, by changes taking place at the quantum level,
in exactly the same way that gravity only appears to suspend the earth in Space.
Gravity and magnetism are therefore both simply forms of acceleration.

When the circle or disc in the video is forced down, it appears as though there is a magnetic field (cushion) resisting this downward push by the hand. Modern-day physics, as high up as colleges, universities and engineering or science and technology institutes, teaches and interprets what is observed as such. This is actually wrong. The circle or mass, within its own matter, independently resists the downward push of the hand. There is no field or field "force" acting on the circle; this is a modern misdirect or misinterpretation of what is being observed. Physicists are misdirected by the belief that objects cannot independently suspend themselves in mid air or space. Appreciating this requires a counter-intuitive approach to understanding the phenomenon. A bird flapping its wings requires air to fly or glide; a bird cannot fly in a vacuum. However, any material, be it magnetic or non-magnetic, can fly or float, even in a vacuum, on nothing more than the interaction of the subatomic particles within its own mass, without the need for a "field" or "force" to act as a means of cause and effect, e.g. like an [air] cushion or field. Levitation is a fundamental property of all masses. Physicists can work out the mathematics and algorithms required to do this. The only reason it has not been done is simply that self-levitation, or internal propulsion stimulated or occurring independently within any given mass, appears to defy logic, when in fact it does not. When the "fields do not exist" method of analysis is applied, how planets orbit and remain suspended in Space is fundamentally no different from how magnets behave. This property makes magnetism and gravity simply aspects of acceleration at the quantum level. There is no mystery to make this problem unsolvable. [As you've probably guessed by now, the algorithm is solved, done and has been applied.]

What is one of the major implications of the "fields do not exist" approach? If magnetic fields do not exist, then one of the major inferences is that energy itself, not just information, can be transferred wirelessly through quantum entanglement. Since entanglement ignores distance and is instant, this widens the scope of wireless charging beyond anything currently considered. It means a power source and the machine or device being charged can be in different places anywhere in the world, or across vast astronomical distances, without proximity being a requirement for any device or machine's access to power. This implies that through entanglement a power station on earth could supply power in real time to anywhere on earth, but it could also do so just as competently to a colony on the moon or on Mars. Supplying electricity using entanglement would be precise, as it could be directed to a specific entangled device, machine or vehicle, within a specific range, metered and precisely controlled, which makes it more attractive than Nikola Tesla's attempt to broadcast power. In essence it represents a new kind of smart, programmable energy that has diverse applications and can be supplied irrespective of the distance between the power station supplying the electricity and the client. These are some of the broad-ranging benefits of switching from trying to understand and control energy through magnetic fields to doing so through quantum mechanics.



When a superconductor is cooled such that it is able to levitate,
quantum lock or travel along a non-existent
"magnetic field", the reality is that this levitation, movement and locking effect take place
within the superconductor itself. Cooling the superconductor may
simply alter the acceleration of particles
within the superconductor itself.

An explanation for this behaviour without fields is quantum entanglement. When two magnets or magnetic materials resonate, they become entangled. Without magnetic fields, how electricity is viewed may need to be reviewed: for example, by viewing positive and negative charges as bits of information (ones and zeros) rather than charges, and by viewing electrical power, the "flow of electrons", and electrical effects such as heat, light and so on as by-products of exchanges of information between materials rather than of electrical power. That is, electrical energy, gravity and magnetism are all merely by-products of materials exchanging information through processes related to quantum entanglement.

In other words, if fields do not exist, then technically neither does electricity, as it is just a by-product of exchanges of information between materials at the level of quantum mechanics. Since the focus is on the utility value of energy, emphasis on the unseen exchange of information, the science of which is more important to quantum computing, is overlooked. The saying "information is power" may be closer to the truth than is thought. The only attribute that separates gravity from magnetism is the quality of resonance between different materials. Magnets resonate when in proximity and consequently begin to exhibit the push and pull effect mistakenly attributed to fields, an effect that is really an exchange of information, or is mistaken for electrical energy. Resonance is therefore what predetermines entanglement and whether there will be a magnetic or gravitational response from within an object's mass. This implies that there are different levels or types of quantum entanglement, possibly created from exotic combinations of elements and minerals.

Human beings are acquainted with resonance. It is the powerful feedback experienced whenever two frequencies overlap and consequently, involuntarily, create a loop. This loop is a basic algorithm. For example, resonance can be caused by the acoustic setup of a microphone, amplifier, speaker and guitar, which creates positive loop gain that causes powerful feedback. Some materials can use resonance to become entangled regardless of distance; some can become entangled regardless of substance to take on gravitational properties; while other materials resonate when in close proximity and take on the attributes of magnetism. Resonance, which is observed in the alignment of domains, creates or is affected by entanglement, which alters the behaviour of atoms to generate an internal motive or electromotive reaction that is observed as magnetism or electricity. The same applies to gravity. No fields are involved. Fields are only descriptive of how gravitational effects and magnetic materials will behave. This means this behaviour is not based on a "force", but on an algorithm, much in the same way an engineer designs software to encode how a program behaves. Equations to do with fields or magnetic force are no different from loops or algorithms. Fields become apparent or observable when changes in code take place and are executed, e.g. a magnet moves, pulls or pushes, and cutting magnetic flux lines generates a charge. This is how electricity and movement take place [magically] without there appearing to be an external cause (action at a distance). In other words, this tends to imply that gravity and magnetism are actually one and the same. Both are caused by acceleration, with the motive force operating and originating internally within masses, not by fields. Neither magnetism nor gravity is created by "fields"; therefore both mass and motion are not created directly by fields (see the example later where blue spandex, when stretched, is used to explain to science students how a field creates gravity, and how these students are actually being misled by the teacher, as this is a misinterpretation of what is being observed). When Special Relativity is used to explain away the fact that "magnetic fields" or "magnetic force" do not exist, this becomes a significant misdirect and the truth is lost.
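The microphone-and-speaker analogy is ordinary positive feedback, and the loop-as-algorithm point can be shown in a few lines. The gain values below are illustrative only; the sketch demonstrates the feedback loop the text compares resonance to, not any property of magnets.

```python
# A signal recirculates through a loop with gain g; accumulated output
# grows without bound as g approaches 1, i.e. runaway audible feedback.
def loop_output(signal, gain, passes=50):
    """Sum the signal after repeated trips around a feedback loop."""
    total, current = 0.0, signal
    for _ in range(passes):
        total += current
        current *= gain     # each pass is re-amplified by the loop gain
    return total

for g in (0.5, 0.9, 0.99):
    print(f"gain {g}: accumulated output {loop_output(1.0, g):.1f}")
# gain 0.5 -> ~2.0, gain 0.9 -> ~10.0, gain 0.99 -> ~39.5 after 50 passes;
# at gain >= 1 the loop never settles, the "powerful feedback" described.
```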

Profound levels of ignorance

Profound levels of ignorance persist despite great strides in the sciences.


Simple but fundamental mistakes like this, which prevail in physics to this day, have dire consequences for the capacity of scientists to conclusively understand gravity or how it relates to electromagnetism. We see similar intellectual and cognitive impasses in economics, where technocrats running and managing important organisations are unable to see or understand losses to subtraction that erode as much as 100% of GDP per annum, and where they tend to focus on, or become consumed by, non-essential or less critical economic problems which cause far less damage to modern economies and their inhabitants.

What are electromagnetic fields then? If it is true that all mass is inherently capable of movement, then it is quite easy to explain what fields are. Fields can be observed everywhere. For instance, when you are watching a game of football on television, if the rate of motion exceeds the frame rate [comparable to the phenomenon of frame dragging], a blur forms on the screen, for instance when a footballer is running or kicks the ball. To deduce that the blur (e.g. the magnetic field) on the screen propels the athlete's movement, as is currently the case in physics where magnetism is concerned, is a misrepresentation or misinterpretation of what is being observed. A ball can rise and remain suspended in mid air or space. It is not held in that position by a field. It is kept in that position by properties within its mass, generated by acceleration using quantum mechanics. Raise a hand in front of your face and move it rapidly from side to side: it creates a blur, i.e. a field, which is not causing the hand to move, but is merely a byproduct of the hand's movement. Fields simply trace or record movement; they are not a force that causes movement or a source of movement itself. Similarly, magnets, magnetic filings, auroras, even a compass needle aligning itself to true North with the earth's electromagnetic field, are not being moved by the field (the blur); they are being moved by activity at the subatomic level of their own mass or matter, ascribed to quantum mechanics. It is important to note that though this explanation may reveal that both gravity and magnetic fields are simply created by acceleration, it should not be forgotten that mass itself is merely an attribute of code or true Space (St). This generally means that mass and acceleration will do whatever St codes them to do.
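The television analogy reduces to one condition: a blur appears only when motion per frame outruns the frame rate. The frame rate and threshold below are arbitrary illustrative numbers; the point of the sketch is that the blur records motion rather than causing it.

```python
# A "field" (blur) shows up only when displacement within one frame
# exceeds a visibility threshold; it is a byproduct, never a cause.
FRAME_RATE = 30.0          # frames per second
BLUR_THRESHOLD = 0.5       # metres of travel per frame before a blur appears

def appears_blurred(speed_m_per_s):
    """True when displacement within one frame exceeds the blur threshold."""
    return speed_m_per_s / FRAME_RATE > BLUR_THRESHOLD

for speed in (5.0, 12.0, 40.0):   # walking, sprinting, a kicked ball
    print(f"{speed:5.1f} m/s -> blur (field) visible: {appears_blurred(speed)}")
# 5 m/s gives 0.17 m per frame (no blur); 40 m/s gives 1.3 m per frame,
# so a blur appears, yet nothing about the blur propels the motion.
```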



Of course I repeatedly explain or bring up "quantum entanglement" by stressing the need to correct Einstein's view or explanation of "Space-Time". The attributes of Space are that there is no distance [all objects and matter are located in one point or frame]; therefore, there is no conventional time.

Let's explain this more succinctly.


There is no Distance

Though the car in the video below is moving at 100km/h, from the time it started off to the time it reached its destination it was visible in the same frame, and a frame is no different from an entire universe. Since the starting point and the destination are in the same frame, in reality there was no distance between them; neither is there at the smallest point, the quantum level, or the greatest, the astronomical level. If there is distance, there is matter to quantify distance, much like a ruler is scaled to measure, and if there is distance and matter, then time must exist to quantify rates of motion; if motion exists then it represents time and Relativity Theory applies. All this leads to a Distance-Time or Matter-Time [Time being Motion] or top view of the universe, not a Space-Time view of the universe, which is where Einstein errs. If there is no distance, then there is no motion and there is no durational time, and matter exists, but not as it is understood in the top view; since all matter exists in a single point when observed from the side view, that single point is a single frame or the entire universe. At this stage physics begins to move outside observable phenomena, from existing as matter to existing as information or a type of code.

None of the images on the pages in this video are moving. They 
are all static. By flipping pages and refreshing these static
stills the images come to life. The Universe follows the same principle
bringing Matter to life.

When this underlying code and how it works is understood, humanity will cross a new threshold in physics and be able to manipulate gravity with the greatest of ease. The image on each page standing still entails that it has no velocity, and yet the object, when observed on refreshed pages, appears to move. This condition creates what scientists call the "measurement problem". To date physics has failed to explain the measurement problem. As I have suggested here, if matter indeed refreshes then this very elegantly and comprehensively explains how it can appear to be a particle and a wave at the same time, essentially providing a pragmatic end to this debate in physics. This very simple explanation does not need the concept of a "superposition" of two states when matter is not being measured; there is no need to attribute to the particle an active observation or measurement, or a state of non-observation that affects it, which is quite weird, or to suppose that only conscious beings affect or are affected by this process. If matter refreshes, this problem is solved. If you don't know what the measurement problem is, you can watch this video.

The measurement problem has confounded the best minds in physics for many
years; yet in this paper I quite easily, pragmatically and elegantly explain it as a refresh rate
or property in matter presently unaccounted for.

When the frames used to animate an object are examined more closely, it will inevitably be found that the object on each frame is standing still, yet when the flipped pages are observed less closely, or at a distance, the object is animated. This phenomenon also represents the problem experienced in quantum physics where matter appears able to exist as a particle or a wave. Since we know that fundamentally the image on each page or frame is standing still, we are able to infer that the movement we observe in science as a "wave form" or velocity, or on the screen as motion, is in fact an illusion, a form of paramnesia or a kind of "trick" of nature. It is a scientific phenomenon which creates empirically verified wave properties, yet without this "illusion" [Einstein's interpretation of a "top view" of the Universe] matter as human beings understand it, and reality as they experience it, would not exist. When the "pages are being flipped" and motion appears to take place through an animated object, matter seems to behave as a wave; however, when "moving" matter is examined more closely, on each page it has no velocity, is in fact standing still and therefore appears as a particle, explaining the inevitable dilemma of how an object can be moving and standing still at the same time.

Wave-particle duality in physics may in fact be a flawed concept, since waves as they are conventionally understood do not exist, but are more likely a top-view illusion, earlier described as a kind of paramnesia attributed to observation at the top level, induced by motion facilitated by the refresh rate of the universe and therefore the refresh rate of matter. This illusion is what is referred to as the experiential Universe, or that aspect of the Universe people inhabit on a daily basis. It is brought to life by matter being refreshed, thereby endowing it with mobility and free will: since motion is time in Einstein's model, the refresh rate is the origin of both motion and top-view time. The experiential Universe is not an illusion per se; it is a real, flesh and blood world or existence, since it has origins in a particle form. However, the fundamental properties upon which it is created rely on the wave form of matter, which is technically produced by ephemeral or impermanent processes. In other words, even though reality is rooted in the particle form of matter, a particle itself is impermanent. This impermanence can be better understood by appreciating that matter refreshes constantly; in other words, particles must persistently appear and disappear in order for matter to appear capable of animation or motion. Consequently, it can be deduced that to have the ability to move, matter, or the particle from which matter is constructed, must have a third property that is currently unaccounted for in modern physics, and this is the ability to "refresh". To "refresh" refers to the ability to disappear and reappear, a property of matter currently unaccounted for in modern science, but one that inferences show may occur. At this stage we have leaped past Einstein's model of the universe. We are able to begin to understand how we live in a universe without conventional distance, and therefore without conventional Time. This makes it easy to understand how and why the phenomena of both quantum tunneling and quantum entanglement occur.

When we see an object moving, no matter how fast, it is in fact never in motion. If the image is closely observed, like a single frame on a movie projector's reel, it is in fact completely still or frozen in place. In order to appear to move, each frame must be successively removed and replaced by another. The refresh rate of matter entails that matter exists and ceases to exist so rapidly that it is difficult to say which state it occupies, leading to a paradox such as that observed in Schrödinger's Cat Experiment; it is dead and alive, standing still and moving, existing and ceasing to exist, all of which appear to defy conventional thinking in physics. However, using the flow of logic thus far, we can dismiss the top-view concept that a particle is a wave, since we know distance is a construction designed specifically for perception or, put simply, is experiential, and conclude that matter always is and fundamentally remains a particle. This is why, when observed more directly and scrutinized closely, matter will tend to appear as a particle, while its wave properties are induced by the process of refreshing these static particles or "stills". Wiki (2010) explains that "Wave–particle duality postulates that all matter exhibits both wave and particle properties. A central concept of quantum mechanics, this duality addresses the inability of classical concepts like "particle" and "wave" to fully describe the behaviour of quantum-scale objects. Standard interpretations of quantum mechanics explain this ostensible paradox as a fundamental property of the Universe, while alternative interpretations explain the duality as an emergent, second-order consequence of various limitations of the observer. This treatment focuses on explaining the behaviour from the perspective of the widely used Copenhagen interpretation, in which wave–particle duality is one aspect of the concept of complementarity, that a phenomenon can be viewed in one way or in another, but not both simultaneously."[7] Clearly, as we may note here, there is no duality; a particle is never really a wave, in the same way the stills on the frames of a projector are never actually moving. They always remain static on each frame and consequently remain in "particle" form; therefore the Copenhagen interpretation may be a little misleading.
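The flip-book logic can be written out as a toy sketch. The following Python fragment is purely an analogy (the refresh rate of 24 pages per second is an arbitrary assumption): each page carries zero velocity, yet playback across pages yields a well-defined apparent velocity.

```python
# Flip-book sketch of the refresh-rate idea (an analogy, not a physics model):
# each "page" holds a static position; playback at a refresh rate yields motion.

pages = [{"position": p, "velocity": 0.0} for p in range(8)]  # stills: zero velocity

# Examined closely, any page is frozen (the "particle" aspect):
print(pages[4])                      # {'position': 4, 'velocity': 0.0}

# Flipped at a refresh rate, the sequence shows apparent motion (the "wave" aspect):
refresh_rate = 24                    # pages per second (an assumed value)
apparent_velocity = (pages[-1]["position"] - pages[0]["position"]) * refresh_rate / (len(pages) - 1)
print(apparent_velocity)             # 24.0 positions per second, though no page ever moves
```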


Schrödinger's thought experiment is a classic example
of how trying to adhere to Einstein's flawed model leads theoretical physics on comedic wild goose chases 
and compromises the ability of even reasonably accomplished people to think outside the box in order
to solve simple problems.

If the cat in the box is considered alive when it is moving (a wave) and dead when it is static (a particle), then the idea is that before we look at the cat we do not know if it is dead or alive, a static still on a single frame or flipped frames showing it moving around. However, if we do not look into the box, the cat is in a "Superposition", that is, a precursory state the actuality of which will only be revealed when the observer actually looks. There is actually little or no use for probability in this thought experiment. The reason for this conundrum is that physicists taught to evaluate this experiment using Einstein's approach cannot theorize a universe without Time. Since motion is Time in Einstein's model, the idea is that, in the same way that lightspeed is an unsurpassable limit, Time can never stop. To bend over backward and accommodate Einstein's flawed model, the poor cat must potentially be alive 50% of the time and dead 50% of the time. The fact remains that the simplest way of getting the correct answer to this thought experiment is to correct Einstein's flawed model. When this is done, it is understood that when the box is opened, the state the cat will be in will be predetermined by which method the observer uses to view the cat. If the observer suspends Time (sic Motion), the cat inside will be static, like the still on a single frame; in this inanimate state it will be considered dead, and the poison or grenade will consequently be considered to have been activated. Should the observer allow Time (Einstein's Motion) to exist in the box, when he or she opens it the cat will be moving about normally and therefore be considered alive; the poison or grenade will not have been triggered. To believe there is a Superposition governed by the rules of probability, where the state of the cat cannot be determined, is just another example of how flaws in Einstein's model force physicists to make mistakes that are then justified by a twisted kind of logic that defies pragmatism. In the case of Schrödinger's cat there is no extensive probability that determines the state of the cat; this unknown is merely a symptom of Einstein's model clouding the ability of even highly intelligent people to think objectively about an idea or system of thought.


I must say, finding the underlying flaws in economic theory, the correction of which would end poverty, unemployment and the whole host of problems experienced by modern economies, was quite challenging. Finding a solution was a personal life goal set from the time I was first introduced to economics at university. Poverty and its negative effects on humanity moved me very deeply, and I was determined to get answers as to why economics has not resolved this issue despite so much research, literature and knowledge gained. I am glad to say I did find the solution. I found that the problem is in fact the economic model itself, which is basically built on a Total Revenue (TR) - Total Cost (TC) = Profit model. This model is a virtual Pandora's box. It cannot be fixed. To attempt to do so, as is the common practice in economics today, is an exercise in futility that will continue to lead straight into dead ends. It is a problematic, scarcity-generating model that maintains a state of equilibrium businesses cannot survive in unless they retaliate with disequilibrium; it is in fact operating, at a fundamental level, in direct conflict with the interests of the businesses and financial systems it claims to serve. Economic problems therefore cannot be comprehensively dealt with without changing the model itself; to attempt to do so without changing the model is a complete waste of time and resources. This model has to be transformed into a Total Revenue (TR) = Total Cost (TC) = Profit model, which is fundamentally a new growth and resource generating model. End of story. Scarcity ceases. Poverty ends. Financial institutions, businesses and industry thrive like never before because they now operate in the model they are supposed to be in, one that supports rather than smothers growth and development. Finite resources against infinite demand is thenceforth no longer the basic economic problem. By no means should you be beguiled by the simplicity used here to describe these economic models into thinking they are easy to understand, arrive at, interchange and implement. Swap out these models, however, and the economic woes of any nation, any government you see today should come to an abrupt end.
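To illustrate only the conventional side of this argument, here is a minimal Python sketch of the TR - TC = Profit identity under competitive entry; the demand and cost figures are invented purely for illustration. It shows how the standard model squeezes profit toward zero at equilibrium, the state argued above to be unsurvivable for business. The proposed TR = TC = Profit model is not attempted here, as it cannot be reduced to a few lines.

```python
# Sketch of the conventional TR - TC = Profit model under competitive entry.
# Demand and cost numbers are invented for illustration only.

unit_cost = 10.0          # assumed average cost per unit
firms, price = 1, 20.0    # start with one firm charging above cost

while price > unit_cost:  # entry continues while economic profit exists
    firms += 1
    price = max(unit_cost, 20.0 - firms)   # entry bids the price down
    profit_per_unit = price - unit_cost    # TR - TC on a per-unit basis
    print(f"firms={firms:2d} price={price:5.2f} profit/unit={profit_per_unit:5.2f}")

# Equilibrium: price = unit cost, profit = 0 - the squeeze described above.
```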

With physics, on the other hand, the mistakes in fundamental theory feed on one another and therefore emerge one after the other like whack-a-mole: no sooner is a solution found for one than another raises its head.

Einstein and Misconceptions about the Speed of Light

It is my belief that we are entering an era of science in which the public will no longer be able to take seriously a highly qualified physicist from a reputable institution who believes the speed of light is both an invariable constant and an unsurpassable limit. Such physicists must now be viewed as relics being eclipsed by the evolution of the paradigm that defines the very foundations of knowledge in a subject area in which they were once experts, and they must begin to evolve and advance to remain of any relevance to mankind's future.

Dualities in physics are often symptoms of models that are flawed and therefore do not concisely explain phenomena; they allow contradictory statements or theories ascribed to "relativity" to co-exist, and equations are tailored to suit these weird juxtapositions. The measurement problem is one of these, but we have used a refresh rate to concisely explain how a wave and a particle co-exist. A refresh rate can also be used to explain the relationship between matter (-) and anti-matter (+) and how the two states co-exist by refreshing from one state to the other (i.e. the same matter alternating between electron and positron states). Matter and anti-matter are very likely the same matter alternating between positive and negative charges; it is either in one state or the other, which explains why it does not explode spontaneously. If charges alternate back and forth between negative and positive in this way during each refresh, then this explains why anti-matter can be present but unseen or intangible, making matter appear dominant. This switching back and forth may be fundamental to how matter exists.

Another such fiasco in physics has to do with light. For instance, light can transfer momentum, something only bodies with mass should be able to do; evidence of this is light pushing a solar sail. When light passes a gravitational field it bends, providing further evidence that light has mass.


Physicists often talk about mass becoming infinite as an object approaches the speed of light. However, there is another intriguing problem that physicists need to address. Let's treat mass, or an object's weight, as an attribute. It is accepted that when an object has its velocity increased, its inertial mass increases. This is fine, but what is the rate at which the attribute becomes true? In other words, if your weight increases from 80kg to 95kg due to acceleration, what is the rate at which this change in weight takes place? At low speeds, such as a car accelerating from 0 to 60 mph, this is straightforward. However, at very high or supernormal speeds we cannot assume that the rate at which weight transfers or changes is the same as the rate of acceleration. What if the rate at which weight changes (e.g. 80kg to 95kg) is slower than the rate of acceleration? Well, this would mean a vehicle travelling fast enough can achieve certain speeds before the weight created by inertia has a chance to catch up. This is a strange observation, but one worth noting. For instance, why is it that a person can run their fingers through the hot flame of a candle and not get burnt? It is because the rate at which the temperature increases to the point at which the skin burns is slower than the velocity at which the fingers move into and out of the flame. Also, it is a known fact that 9g is enough to kill a person, but why is it that a person can survive 9g for a second? Can it be said they are in and out of the 9g before it could crush them? What if the same principle applies to g-forces at supernormal speeds?

Let's assume it takes 3.14s for inertial mass to catch up with supernormal velocity. This might mean that if a vehicle accelerated fast enough from 0 to 10% of the speed of light, it would take 3.14s before objects in the vehicle experienced the crushing load of the g-force from this increase in velocity. Within the lag of this time frame, occupants of the vessel would not even be aware the acceleration had taken place, even though they were already moving at thousands of kilometres per second (the way you might yank a tablecloth [the vessel and occupants] while the contents on the table [their mass] are still standing, i.e. a slip-stream); they have left their own mass behind. How? When you read on you will see the example of gravity where blue spandex is used, where I note that the stretched spandex does not endow the round objects (marbles) moving on it with mass. If mass does indeed come from Space (St) and not directly from Geodesics, then my inference that acceleration can take place faster than Space attributes mass, creating a "slip-stream", is absolutely possible, even if Geodesics appear to bend light. Let's say that this threshold at 50% of the speed of light is why the atoms of an object begin to behave more like energy than matter; if acceleration continues to light speed, then inertial mass coming from Space does not have a chance to catch up, and the occupants of a vehicle escape into supernormal velocity without g-forces being able to catch up. They don't have mass while moving at supernormal velocity and behave like energy for a simple and outrageous reason: they've outrun their own mass. Here the table represents Geodesics or the Geometry of Space (Se), while Mass is represented by the objects on the table. Mass comes from Space (St), and the tablecloth represents the vessel and its occupants. Geodesics are left holding the mass while the vessel and occupants slip-stream through them. This would be the equivalent of a supersonic jet breaking the sound barrier and leaving a sonic boom behind because it is travelling faster than the speed of sound, or of seeing a lightning strike and only hearing the thunder moments after the lightning is long gone. The sound (mass) and the lightning (supernormal velocity) are not one and the same, which is why lightning outruns thunder. Could the same principle apply to g-forces? Does light itself shed its mass in a similar way at its point of emission to make the jump to light speed?

If an object is travelling with sufficient acceleration that g-force cannot keep up with it, then the occupants in the vehicle will not feel g-forces, because the tensor they travel through is not aligning quickly enough with the vessel. The gravitational "sonic boom" [Geodesic Boom] may not produce any sound at all, but will generate a burst of radiation, electromagnetic or gravitational energy of some kind, e.g. a momentary distortion of Einstein's Space-Time, Euclidean Geometry or Geodesics as mass is dumped on them and they vibrate in place without their prey, i.e. beyond the reach of the accelerating craft or vehicle. In other words, at supernormal speeds or rates of acceleration a vehicle may naturally and inadvertently create a "warp-bubble", or warp-slip-stream so to speak, in which g-forces are trapped in a time delay; if they can't catch up, the Geodesics straighten back up (vibrating, absorbing and dispersing the mass), creating a gravitational boom (a release of energy) where the inertial forces that would have crushed the occupants of the vehicle take effect but are spent or dispersed outside the vessel, as its "wake", since they cannot keep up with its rate of acceleration. Once the slip-stream is created, the vessel is travelling without resistance, which opens the door to faster-than-light travel. A warp slip-stream would be more efficient than a warp bubble in that it only creates a warp system equivalent to the mass that is travelling at supernormal velocity. This would mean that vehicles travelling at supernormal speeds can possibly create a natural slip-stream through which they can travel without any kind of resistance from inertia, be it from g-forces, Geodesics or the surrounding atmosphere and air pressure, as they are travelling or accelerating at such high velocity that they slip-stream right through them, the assumption being that at supernormal velocity a vehicle can travel faster than the rate at which aerodynamics, air pressure and friction can attach or apply themselves to the craft; it would move through the air in such a manner that it appears to defy the laws of physics. This would mean that supernormal acceleration, jumping close to light-speed within a given time frame or rate of acceleration, can take place without inertial mass affecting the occupants of a vehicle, simply because it can't keep up with them, i.e. there is a lag between cause and effect that the craft slips through. Physicists often refer to the need for a vehicle being launched into space to reach escape velocity; in other words, the vessel has to escape from earth's gravity. Could it therefore be plausible that any object has an internal escape velocity, that is, a velocity at which true Space-Time (St-Tt) cannot "tag" the vessel, fails to keep up with it, and it escapes its own mass, i.e. travels unaffected by g-forces? And could the critical point at which this takes place be at a supernormal velocity well below the speed of light, as a result of the differentiation between acceleration and speed? If the pilot knows the exact parameters of this lag or time difference between cause and effect, how, when and where it takes place, the vessel could be piloted expertly in such a way that the occupants are not affected by g-forces, simply by following a specific parabolic curve for supernormal acceleration. It may be a mistake to assume that g-forces or inertial mass and the speed of light are one and the same, in the same way that we understand that air, sound and light each have individual properties that unfold with different attributes, e.g. light and sound/thunder both travel through air during a lightning strike but have different outcomes separated by Time; the same may apply to a craft travelling through Geodesics. It would also mean that even if a vehicle accelerated to the speed of light, the accumulation of Geodesic pressure [Einstein-ian Space-Time] or inertial mass toward infinity would not catch it. Infinity in this context may simply mean up to the gravitational limit of a given mass' rate of acceleration, which causes a boom or vibration as Geodesics are restored or disperse a dumped mass. This mass would not be able to catch up with the vessel and would therefore create and leave behind a momentary vibration or disruption in Einstein's Space-Time. It might appear as a momentary gravitational boom or distortion equivalent to the accumulation of mass left held by Geodesics while the vessel and occupants depart. This would take place within the conservation of energy laws found in physics. Cause and effect still take place, and the laws of physics are obeyed and are measurable, but there is a Time lag between them, creating what can be described as a gateway which the vessel slips through unscathed. The slip-stream becomes a gateway which in turn opens the way to accelerating to superluminal speeds, since once it is crossed there is no so-called "Geodesic" (Geometry of Space) resistance to travel. The possibility of this increases if it is verified that mass is not created by Geodesics but by quantum mechanics, where it is endowed (tagged or attached to matter) by Space (St) as an attribute. Once mass is tagged on matter by St, the route it takes becomes the Geodesic (using the "fields do not exist" approach in this analysis). The ability to naturally shed mass at supernormal velocity and slip-stream in this way would be useful to space travel.

Shedding mass in this manner may only become available to light in the moments that it is created or emitted, when it emerges from a light source as a photon. However, this would imply that the emergent photon instantaneously accelerates to a supernormal velocity, up to light-speed, before its speed becomes constant due to light having shed its mass. This acceleration may take place with such rapidity that it is undetectable; however, if light has to spontaneously work its way up to light speed when it emerges as a photon, this entails that it is not initially a constant; it only becomes a constant after ditching its mass at the filament or source of emission during acceleration. If mass can only be shed in this way by and during supernormal acceleration (rather than by a dodgy, unexplained, spontaneously acquired constant velocity with no acceleration), then a case can be made for a photon shedding its mass during acceleration from 0 m/s to 3.0×10^8 m/s. Another aspect to consider is that Einstein caps velocity at the speed of light; however, supernormal acceleration can entail accelerating at a rate that would surpass light-speed and cutting back on the throttle before the limit, that is, without exceeding the speed of light itself, which is not impossible to do. This is all hypothetical, but it is intriguing to play around with these ideas nonetheless. To know exactly what happens to a vessel travelling at supernormal acceleration will inevitably need to be tested empirically.
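Two numbers in this thought experiment can be pinned down. The relativistic factor γ = 1/√(1 − v²/c²) is standard physics; the 3.14 s lag is purely the hypothetical assumed above. A minimal Python sketch:

```python
import math

C = 3.0e8  # speed of light, m/s

def gamma(v):
    """Standard relativistic factor by which inertial mass-energy grows."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

print(gamma(0.1 * C))   # ~1.005 at 10% of c
print(gamma(0.5 * C))   # ~1.155 at the 50% threshold mentioned above

# The hypothetical "lag" before inertial mass catches up (assumed in the text):
lag = 3.14                         # seconds, purely an assumption
distance_in_lag = 0.1 * C * lag    # ground covered at 10% of c before g-forces bite
print(f"{distance_in_lag/1000:.0f} km")  # ~94,200 km inside the hypothesised lag
```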




The elasticity of Space-Time may change depending on two factors: acceleration and constant velocity. This elasticity will affect Einstein's predictions about what will happen to an object approaching the speed of light. As an object increases speed, its inertial mass will increase. However, how it increases speed and the rate at which it increases speed (acceleration) may alter the behaviour of Space-Time, on the premise that Space-Time (Se-Te), though consisting of Geodesics, does not endow an object with mass as Einstein thought. The Geometry of Space and Geodesics are fields, and if we assess them in combination with the "fields do not exist" approach, this allows a more accurate assessment of what an object travelling through Space-Time may experience at high rates of acceleration and high velocities. The diagram shows the slip-stream process of a spacecraft or object. The craft is accelerating at different rates. At 3, 2 and 1 the object or craft is steadily accelerating fast enough to bend Space-Time behind it. Its speed can possibly be determined by the degree of gravitational lensing. If the Geodesic or Geometry of Space is bending like spandex, then this "field" is merely directional, that is, showing the direction that mass being endowed from true Space-Time (St-Tt) will follow. Since "fields do not exist", the path that mass takes, which becomes the Geodesic, is in fact dictated by quantum mechanics taking place within matter and not outside it. After all, these lines are in fact imaginary; therefore it is easy for physicists to generally misinterpret what they are, what they do and where they come from. The mass and the direction it will follow (the Geodesic) need to be treated as two separate attributes. Though the object or craft may be travelling quickly, at 3, 2 and 1 it is not yet travelling at a supernormal rate of acceleration, because Space-Time (Se-Te) has the luxury of pace and is able to keep up with the object or craft, curving Se-Te behind it (keeping pace with it) and allowing true Space-Time (St-Tt) to endow it with the attribute of mass. Space at this acceleration and velocity is elastic to perfectly elastic. In this case Einstein's prediction holds: an object travelling at very high velocity begins to increase in mass, consequently creating what may appear as a black hole as its wake. The object or craft would not be visible, because it is pulling or curving Space-Time towards itself as its wake, and it would experience crushing levels of gravity. As mentioned, this would be no different from a black hole, albeit a fragile one. When the object or craft accelerates at supernormal velocity, shown at 0, it is travelling too fast for true Space-Time (St-Tt) to endow it with mass. It is puncturing, or simply slicing right through, the very fabric of Geometric Space-Time (Se-Te). This allows it to slip-stream right through, such that when Se-Te closes behind it, it has effectively ditched its mass there, because it is travelling at supernormal acceleration or speed, that is, too quickly for the mass and the object or vessel to be in the same place at the same velocity or time (like lightning and thunder). The Gravitational or Geodesic Boom will be left at 0 while the object, photons or vehicle escape with no mass, no resistance of any kind and therefore without g-forces. The slip-stream created at 0 is like the tablecloth (the craft or object) accelerating quickly enough to leave the mass (the objects) on the table (the Geodesics). What this implies is that flying through Space-Time, like flying through air, requires substantial knowledge of the peculiarities of supernormal acceleration. For instance, the viscosity or elasticity of Space-Time will be predetermined by the craft's rate of acceleration. A skilled pilot could make a right-angled turn at very high velocity by deploying sufficient acceleration to make Space-Time perfectly inelastic, such that the g-force and the craft are out of sync, consequently leaving the crushing force of gravity behind; this procedure could also be deployed momentarily while travelling at low speed, simply to execute a manoeuvre that avoids g-force and spares the craft and its passengers discomfort. At certain rates of acceleration it will appear as though Space-Time (Se-Te) opens up and closes like a curtain to let a craft through without it being affected by g-forces, that is, the vehicle slip-streams. This may happen at a low or high velocity executed with momentary supernormal acceleration. This means a craft of this kind could be moving at the speed of a normal aircraft but make right angles, even reverse its direction instantly from moving forward to moving backward at what would appear to be the same velocity, without g-force affecting the craft, as long as the pilot ensured it remained momentarily within the envelope of the slip-stream during execution of the turn. This would make the craft more manoeuvrable than any vehicle ever seen before.

This points to the fact that there are other ways a black hole can be created than by a collapsing or imploding star. It can also be created by an object travelling consistently at very high velocity, such that it becomes trapped as its velocity moves against the equal and opposite resistance created by the Geometry of Space-Time. If a black hole is created by an imploding star, it should have a somewhat round appearance regardless of what direction it is viewed from. However, if it is created by high velocity, the shape of the black hole will vary depending on the direction from which the object creating it is observed. Black holes formed as the wake of objects moving at high velocity are likely to be predominantly temporary and fleeting, existing and disappearing almost as quickly as they were formed.

1st Generation Gravity

This is where gravity is produced purely through motion, based on the physical mechanics of how quantum gravity works, that is, by generating as well as alternating between negative and positive mass. Motion involves the control of gravity through mechanical engineering and basic fundamentals, namely acceleration and mass (matter). [Note that Motion is Time in Einstein's approach to gravity. However, we identify that True Time (Tt) is in fact absolute zero, which is the absence of Einsteinian Time (Te); Te is in fact motion, not True Time.] Einstein's Model needs to be corrected to understand this problem; however, for the most part science today does not make this distinction. Knowledge of the algorithm for gravity is required for creating this form of gravity in mechanical engineering. This is the simplest and most accessible form of gravity and can be referred to as entry level or tier 1 gravity.

2nd Generation Gravity

Next is controlling and manipulating gravity through electromagnetism and so-called "electromagnetic fields", although it should be understood that gravity and magnetism are fundamentally the same force. Fields as an approach to controlling gravity are something of a misnomer, since "fields", be they electromagnetic or gravitational in nature, are forms of residue rather than forces in and of themselves. This allows the postulate that "gravitational fields and electromagnetic fields do not exist" to be accurate in the sense that they are a consequence, and not necessarily a cause, of Force.

3rd Generation Gravity

The other method is through Geodesics. (This is the most difficult and almost impossible method, requiring tremendous amounts of energy; it involves Euclidean Geometry, Geodesics, so-called "gravitational fields" and the Higgs Boson, and is being pursued by projects such as LIGO.) [Note this is Einstein's "Space" (Se); it is in fact not True Space (St) but Distance, Geodesics, Euclidean Geometry etc.] Once again, Einstein's Model needs to be corrected to understand this problem. It should also be noted that mass is not endowed by the Geometry of Space as theorised by Einstein, but is endowed as an attribute by true Space (St).

4th Generation Gravity

There is the control and manipulation of gravity through True Time (Tt), that is, absolute zero, where there is no chronological time. [Note: Te, Time according to Einstein, is in fact just motion interacting with Geodesics, described to date as Einstein's Space-Time (which is in fact Motion-Distance).] Lasers can be used to access absolute zero for this approach.

5th Generation Gravity

This is the control of gravity through True Space (St), where space is created purely through code or information. [Note: not the understanding of Space attributed to Einstein.] Controlling gravity through True Space (St) is currently the most scientifically advanced method for controlling gravity. It is achieved through the code that applies to quantum mechanics and has attributes related to the field of quantum computing.






The limitation of a jet engine is that it requires an atmosphere to operate; the advantage is the raw power it delivers as a power-plant. Depending on what it will be used for, tier 1 gravity should be configurable to amplify thrust many times beyond that from the exhaust gases of a jet engine. This is why jet engines are more useful as a power-plant harnessed to tier 1 gravity than for providing direct thrust as they are currently used. What this means is that a sufficiently powerful jet engine harnessed to tier 1 gravity of appropriate design, size and build quality should be able to generate sufficient thrust to enable supernormal acceleration and supernormal speeds. This makes tier 1 gravity very cost effective for companies looking for a huge jump in performance at a reasonable cost and without having to invest in the development of a completely new engine design that may only yield a 5% or 10% increase in thrust.

Gravity is a fundamental force of nature; like heat, it should not be too complicated to access. Heat to cook a meal can be easily accessed by throwing a few dry logs together and setting them alight, or by placing food in a microwave and heating it. One method is simple and primordial; the other is complicated and requires in-depth knowledge of electricity. Tier 1 gravity [1st generation gravity] is primordial and accessible. What has delayed the scientific development of harnessing this kind of gravity is the scientific community's desire to focus on Einstein's Geometry of Space and the interpretation of Space-Time as Se-Te. Trying to harness or control gravity using the Geometry of Space and Einstein's approach [3rd generation gravity] to Space-Time is like trying to build a microwave oven to cook a meal before the discovery of electricity. Tier 1 gravity corrects Einstein's approach and uses St-Tt to understand how gravity works, making it possible to emulate and harness it using mechanical engineering.

Tier 1 gravity will offer the first genuine opportunity to begin to explore the possibility of reaching luminal speeds and superluminal rates of acceleration, which ideally should be accessible to power-plants of approximately 30,000 horsepower and above. These may currently be considered impossible, but achieving them should not be a big deal for tier 1 gravity. Most manufacturers of jet engines would never have imagined a state-of-the-art jet engine could generate thrust at these levels; however, tier 1 gravity is expected to widen the performance envelope of what jet engines can achieve. Whether this is at all possible will be verified from the technical designs for tier 1 gravity and what it brings to the table. Tier 1 gravity is designed using mechanical engineering, which makes how it works relatively straightforward to understand.

This is a significant advantage, as it allows existing engine and electric motor technologies to be adapted to a new propulsion system in order to improve performance above and beyond expectations. They can be used to reach speeds and deploy rates of acceleration that are, for the most part, currently considered improbable (as long as the power-plant harnessed to tier 1 gravity is powerful enough). The fact that above Mach 3 air resistance and heat severely hamper performance makes it necessary to bypass operating at speeds just above Mach 3 by generating sufficient thrust to make the jump to supernormal acceleration, with the hope and expectation that a slip-stream process will enable a jet harnessed to tier 1 gravity to travel at supernormal speeds, where it is anticipated it will be less affected by air resistance and heat. The ability of tier 1 gravity to amplify thrust means that electric engines should be able to generate the thrust required for supernormal acceleration. The advantage of battery-powered electric engines is that they can operate in an atmosphere and in space (combining jets and rockets into a hybrid atmosphere- and vacuum-operable turbo-jet will mean they too can operate in space and in an atmosphere). The ability to achieve these performance levels has to be tested.



The knowledge in physics required, in theory, to travel faster than the speed of light is currently available. The problem remains that these approaches are based on Einstein's Space-Time (Se-Te), which requires that the problem of faster-than-light travel be approached from the Geometry of Space. For instance, there is the Alcubierre Warp Drive (AWD). It achieves this by compressing Space in front of a vessel while expanding it behind the vessel, creating a warp bubble it can travel through.

Alcubierre Warp Drive (AWD).

Although the warp bubble
is shown surrounding the vessel, the "fields do not exist" 
approach implies that the warp bubble can be generated
at the subatomic level within the volume of matter occupied
by the vessel, which requires much less power to achieve the 
same result. 



In this interesting video Arvin Ash explains the 
Alcubierre Warp Drive. (Arvin has a great channel
with amazing explanations about otherwise complex
theories in physics - click the link) 

The major challenge, immediately obvious, is that it has to bend the Geometry of Space to accomplish this form of propulsion. Space is built to prevent this kind of manipulation. Why is Space (Se) so rigid? Earlier we saw that going back in Time is not possible; instead, a vessel that attempts to do this will find itself relocated to another location in the same universe, with a record of Space-Time observed during the process. Similarly, the speed of light limit most likely exists as a buffer between universes. Distance in true Space (St) technically does not exist; to exceed the speed of light will cause a vessel to move into the next universe in line. To travel at twice the speed of light will cause a jump into the 2nd in-line universe. To travel at 3x the speed of light will cause a jump into the 3rd in-line universe, and so on. However, as the vessel decelerates it will jump back to the universe it originates from, and the "distance" it has covered can be gauged in terms of universes crossed. For instance, if the vessel travelled at 9 million times the speed of light, it will appear to cover this distance in its universe of origin, when in fact it has crossed 9 million separate and individual universes in the multiverse. Even though it has travelled this "distance", the reality is that in true Space (St) it remained in the same location, while this "hardware" was instead used to create the multiverse. To cover the distance of travelling at a billion times the speed of light, a vessel will cross a billion universes. This is hypothetical, but the analysis needs to be made to test the concept.
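Taken literally, the mapping proposed here is simple arithmetic: each whole multiple of the speed of light corresponds to the next in-line universe. A minimal Python encoding of this hypothesis (nothing more than the text's own rule):

```python
# Literal encoding of the hypothesis above: each whole multiple of the
# speed of light crosses into the next in-line universe. Purely illustrative.

def destination_universe(speed_multiple_of_c):
    """Return which in-line universe a vessel jumps to at k times lightspeed."""
    if speed_multiple_of_c <= 1:
        return 0                     # below the buffer: the home universe
    return int(speed_multiple_of_c)  # k times c -> the k-th in-line universe

print(destination_universe(2))          # 2nd in-line universe
print(destination_universe(3))          # 3rd in-line universe
print(destination_universe(9_000_000))  # 9 million universes crossed
```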

The number of universes that exist can be counted, calibrated and lined up in this way to predict how many there are. They may be infinite, in the sense that there is unlikely to be just one multiverse but many multiverses, as this reduces redundancy. Each Multiverse is a Host to countless universes separated by calibrated or sequential light-speed boundaries that create Einstein's Geometry of Space-Time (SeTe). A collection of Hosts can be referred to as an Omniverse. The universe earth resides in is, on this scale, infinitesimally tiny, seemingly insignificant. This gives some idea of just how expansive a multiverse is. Each multiverse is sustained in a unique band and each universe at a unique frequency, for instance, in the CMB. Keeping individual universes in the multiverse apart or separate within individual singularities lowers existential risk. The destruction of a universe will be contained by light speed barriers and will not affect other universes. It allows the laws of physics within each universe to act and function independently. Although universes can be described as being calibrated, stacked or layered in this way, they occupy the same Space. This is what makes Space (Se) so rigid and inflexible. True Space (St) does not need distance, in the same way that true Time (Tt) does not need motion (Einstein's "Time" in Se) to generate what human beings regard as reality. This may take some time to understand, but it is not complex.






A hypothetical representation of Multiverses shows that navigating space at superluminal velocities may be more complicated than is currently thought. At these incredible speeds the relationship between geographical locations and destinations in Space requires such a high degree of refinement that frequencies become the geography that governs the dynamics of navigation. Here locations are tuned into and out of, rather than travelled to. Therefore, the velocity, trajectory and other factors that affect the orientation of the vessel in relation to Space are the tuning dial that directs the spacecraft toward the location it seeks. At these speeds the normal geographic distances used on a map will appear to overlap or occupy the same location. For instance, Los Angeles and Lusaka, when observed on a normal map, will appear to be in exactly the same place. So how does a pilot fly to one destination rather than the other? The only feasible means to reach a location is its Space-Time frequency. This is an area of physics in which there is little or no research. This essentially means that humanity does not currently have the knowledge required to navigate a vessel travelling at superluminal velocity. The relationship between frequencies and Space-Time mapping of the universe is a critical area of research that requires significant and appropriate attention.



It may not be possible to create Einstein's Space-Time
without multiple universes since they form its very Geometry
and fabric.


A jump to superluminal velocity at any location in space
is likely to reveal the layers or calibration of separate universes.


From which ring should the pilot drop out of
superluminal speed? At which section of the ring?
In which direction?
At which trajectory? At what rate of deceleration?
How a vessel enters and exits superluminal velocity
will be critical to its ability to reach its
destination.

Entanglement may be one of the few means of
communication between universes and the singularities
in which they are contained.

As shown above, travelling at superluminal velocities in a spacecraft is likely to feel like tunneling, because of breaches, and will have its own peculiar navigation problems. Pilots steering vessels will require heads-up displays and screens that show detailed information about rings being breached and geographical locations being covered, with notices of where to exit on a breach digitally overlaid with data to aid pilots navigating descent from such high velocities. How a vessel navigates will determine whether it emerges at a desired location in its own universe or at a location elsewhere in the multiverse. The compression rate applied is 31.54 million universes per light-year. It consists of the compression of data or code (as we saw earlier with the Bose-Einstein Condensate), and therefore should not be confused with the compression of water or gases: no physical pressure is being applied, so the coexisting universes occupying the same Space are not under any stress of that kind; the stress is rather understood to be exerted in terms of processing speeds. However, if this rate is much higher, then there will be an increase in universes and therefore in the degree of navigational difficulty, due to the need for entry and re-entry to be more refined. The geography of Space-Time and how to navigate to specific locations may not be as straightforward as moving from A to B using a map on land or a 3D rendering of space. It may not be as easy as it seems at such high velocities, and getting lost must be a very real complication for travel of this kind.

For instance, to hypothetically travel to a destination in earth's universe that is 100 million light years away, at a speed 100 million times the speed of light, implies that a spacecraft will breach 100 million universes of compressed Space or Multiverse to arrive at a destination in its own universe by the shortest possible route and time. Astronomers, astrophysicists and cosmologists estimate the size of earth's universe at 93 billion light years. This allows an estimate of the number of universes in earth's local multiverse.
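Using only the figures quoted in this section, the estimate is straightforward arithmetic, sketched below in Python. The result, roughly 2.9 × 10^18 universes, is of the same order as the ~2.96E18 figure cited later in this paper.

```python
# Estimate from this section's own figures: a compression rate of 31.54 million
# universes per light-year across a ~93 billion light-year universe.

universes_per_light_year = 31.54e6   # the compression rate quoted above
universe_span_ly = 93e9              # quoted size of earth's universe, light-years

local_multiverse = universes_per_light_year * universe_span_ly
print(f"{local_multiverse:.3e}")     # ~2.933e+18 universes in the local multiverse
```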







The funnel shape of the black hole provides some evidence of the existence of other universes. In fact, the very fabric of Space that is referred to as the "Geometry of Space" (SeTe) may not be able to exist without a matrix of light speed barriers that separate and ring-fence countless universes. The abundance of Dark Matter and Dark Energy may merely be evidence of the construct of other universes hidden or cloaked in light speed barriers. It only appears as "Dark Matter" because only the tail end is being viewed, and it is being observed from within the individual singularity that is "earth-universe".






Infinity Sphere

If the singularity within a black hole is a sphere that creates an internal double-sided funnel structure, then a black hole will behave like an independent, sequestered universe with an internal Doppler Effect. This can mean that once a black hole is entered, exiting it will only be possible by overcoming the Space-Time curvature and the gravitational pull being created at its centre. In this scenario, anything that enters a black hole from any direction can become trapped within it. A vessel travelling in a straight line within the black-hole sphere will follow its geometry and travel in an infinite loop. Matter will tend to accumulate or clump at its centre, since Space-Time Geometry is likely to be most dense there compared to the outer edges of the sphere, and will remain trapped there, unable to escape. In this scenario the centre of a black hole may be volatile and energetic, behaving and looking like a sun. If its bubble or sphere of gravitational influence is vast, there may be planets, planetary systems and other objects circling the centre of the black hole fast enough to delay falling into its volatile sun-like centre, creating a miniature universe very little can escape from. (For instance, a black hole in galaxy cluster Abell 85 is roughly the size of our solar system.)

There may be a need to avoid overthinking what black holes are and what they do. By contracting Space they induce acceleration in any matter that comes near them. However, any object of significant mass does this. For instance, the earth does the same. If the mass of the earth were replaced by a point exerting the same pull, any matter would mysteriously fall towards this empty space at 9.8 newtons per kilogram of mass. A black hole is first and foremost an accelerator. The next important issue is the distance between the singularity at the black hole's centre and its circumference, or what may be referred to as its outer rim, where its influence begins. Knowing this distance is important, because the crushing pressure caused by the gravitational force it exerts will only begin to be felt when contact with its singularity is made. Contact with the black hole's singularity can be compared to an object falling towards the earth making contact with the earth's surface: when this contact is made, the acceleration created by free fall is converted into pressure. It is also likely to be true that if an object enters a black hole at a high enough velocity, it will be able to resist the gravitational pull of the singularity and will begin to orbit or cycle around the black hole, the same way objects enter orbit around a planet. What makes black holes mysterious is that any light they accelerate will naturally exceed the speed of light. If the speed of light is a limit, then any light that exceeds this limit will become invisible, because it will either go backwards or forwards in time according to Einstein, or it will breach a barrier and cross from one universe into another according to Multiverse theory (StTt). This explains why black holes cannot be seen. On the one hand, it is said that light cannot escape the black hole's pull and is redirected away from the observer, which is why the black hole cannot be seen directly. However, it should also be considered that any light around a black hole that is already travelling at the speed of light, and which is then accelerated to superluminal velocity, is leaving the universe altogether, which is why the black hole cannot be observed directly. It is either moving into the Multiverse (StTt) or it is going into the past or future (SeTe). Any matter that moves beyond the black hole's event horizon that is not accelerated faster than the speed of light will not breach a light speed barrier and will instead go into orbit around the black hole. The velocity of its orbit will determine how fast it spirals until it makes contact with the singularity, where its acceleration is converted into the crushing force of its impact. This allows us to deduce what is going on beyond a black hole's event horizon. The centre or singularity of a black hole is likely to be extremely bright and appear like a sun, depending on how much matter is striking its singularity.
Any matter that is travelling below the speed of light inside the black hole will be processed like matter in a galaxy; therefore, the subluminal interior of a black hole, depending on its size, is likely to appear like just another galaxy that is part of earth-universe, albeit one contained in an infinity sphere or in sequestered space. However, any matter that has been accelerated past the speed of light will exit earth-universe and will not be found or be detectable inside the sequestered universe created by the black hole. According to StTt, a person who enters the rim or event horizon of a black hole at subluminal velocity will enter sequestered space or an infinity bubble, but technically they are still in earth-universe. A person who enters a black hole's event horizon and is consequently accelerated to superluminal velocity will not be found in the sequestered earth-universe space created by the black hole; they will have breached the barrier and moved into an adjacent universe, depending on the factor by which their velocity has been increased. When they cross from one universe to the next they will not go backwards or forwards in time as proposed by Einstein, but will feel as though this is precisely what is happening and that they are "in a time machine". The reality is that what they experience and see is a record of the past, which they emerge from to find themselves in another part of the Multiverse (StTt). If this analysis is accurate, this would make a black hole a place where a number of different eventualities await matter, and where matter is screened in the sense that what happens to it depends on whether its velocity is greater or slower than the speed of light as it crosses the black hole's event horizon. This goes back to what was discussed earlier about the need to know precisely how entry into and exit from a black hole will take place prior to reaching its event horizon.
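The two standard formulas invoked by this comparison can be checked directly: the Newtonian field strength g = GM/r², which reproduces the 9.8 N/kg figure quoted above, and the Schwarzschild radius r_s = 2GM/c², which gives the scale of a solar-mass event horizon. A short Python sketch:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 3.0e8            # speed of light, m/s
M_EARTH = 5.972e24   # mass of the earth, kg
R_EARTH = 6.371e6    # radius of the earth, m
M_SUN = 1.989e30     # mass of the sun, kg

# Newtonian field strength g = GM/r^2: the 9.8 N/kg quoted above.
g_earth = G * M_EARTH / R_EARTH**2
print(f"{g_earth:.2f} N/kg")        # ~9.82 N/kg at the earth's surface

# Schwarzschild radius r_s = 2GM/c^2: the scale of a solar-mass event horizon.
r_s = 2 * G * M_SUN / C**2
print(f"{r_s/1000:.1f} km")         # ~3.0 km for one solar mass
```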

What is interesting in all this is the fact that the energetic singularity at the centre of a black hole may be indistinguishable from a star or sun. When approaching this kind of star or sun inside its black hole exterior, there would be no way of knowing it was in fact being created or fuelled by a black hole, because it would burn like a sun and exert a gravitational pull like a sun, but there would be no way of seeing beyond the event horizon by which it is surrounded. The Multiverse, in this situation, would basically be recycling matter through stars across universes. What may give it away as a black hole is if it is burning fuel or matter and giving off more heat than can be accounted for, especially if the singularity or sun it creates penetrates several universes. If this is the case, it becomes plausible that in some universes a sun may be visible while the event horizon funneling energy and fuelling it is in another universe, because its light and heat breach into several universes. Whereas it would appear as a simple sun in some universes, as earth's sun does, in some other part of the Multiverse it is in fact identified as a black hole that transferred, or is transferring, energy from one universe to another. If this were true, it implies that a single sun, which is by default also a dense singularity, could have multiple solar systems or many planets orbiting around it but in different universes; otherwise it could not generate the continuous heat and light observed and taken for granted. This inter-connectivity between universes may explain why both black holes and suns tend to be found at the centre of a solar system, galaxy or universe. They need this nexus either to generate tremendous amounts of heat, as suns do, or to absorb tremendous amounts of energy to fuel stars or singularities, as black holes are observed to do. A solar nexus allowing a sun to provide heat and light in more than one universe simultaneously sounds like the kind of efficient use of resources common in how the universe functions.


Suppose, for the sake of argument, that the singularity created by the sun's black hole has a density high enough to affect or influence 1 million universes simultaneously. The sun then appears and shines in 1 million universes, illustrated by the segments in the diagram above, and its energy is diffused 1 million times over. Each universe in this Multiverse sees a full, round, burning sun in the shape of a ball. However, the particles that phase in and out are quantum tunneling through a spread 1 million times greater than the segment of the sun observed in earth's solar system, shown by the wedge-shaped segment in the diagram above with a blue outline. If this were true, then for the sun to keep burning continuously using fusion, scientists would currently be underestimating the pressure or density required for this to happen continuously and spontaneously by a density factor of 1 million. In other words, to correctly achieve "fusion" physicists have to account for energy escaping across universes through quantum tunneling or "diffusion". The consequence of this is that fusion experiments or projects will fail to work properly because they are only measuring the sun's density in one universe, which is only a tiny segment of the full intensity of the whole singularity creating it, and are not accounting for diffusion or its spread across universes. Diffusion may offer some proof of a Multiverse where particles squeeze through light speed barriers that separate them using quantum tunneling, thereby constantly re-distributing energy, a process which is not being accounted for in fusion. More importantly, this would mean that the source of the sun's energy should be identified more accurately as a [black hole] or singularity that is causing fusion rather than by fusion measurable in one universe alone, which is simply the proverbial spark from the flint in a lighter, whereas the singularity itself is the propane.
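
For illustration, the arithmetic of this diffusion conjecture can be sketched in a few lines of Python; the factor of 1 million and the normalised density figures are assumptions of the thought experiment above, not measured values:

```python
# Illustrative arithmetic for the diffusion conjecture above.
# Assumption: the singularity feeds N universes equally, so any one
# universe observes only 1/N of the true density of the singularity.
N_UNIVERSES = 1_000_000        # hypothetical number of universes the sun spans

observed_density = 1.0         # density inferred from one universe (normalised)
true_density = observed_density * N_UNIVERSES

# Under this conjecture a fusion experiment calibrated against the observed
# density underestimates the required confinement by the factor N.
print(f"Underestimation factor: {true_density / observed_density:.0e}")
```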

At the quantum level particles are known to appear and disappear
and it is believed they move in and out by phasing from one universe
into another. This movement may be how the sun evenly distributes its
energy across universes. If every sun is in fact the singularity of a black
hole then this would mean black holes also cut across universes
drawing energy to feed the sun they will become. In some universes there may be 
little or no material entering a black hole to feed its sun, but in others
there may be material close enough to pull in. For instance, any light a
black hole captures that is accelerated by its pull will exceed the
speed of light and should naturally quantum tunnel or phase into another
universe causing a drop in energy and density in earth's universe.

Now, the amount of light drawn by the rim of a black hole may seem insignificant, almost negligible; however, if it is pulling this light from not just one universe but from millions of universes and focusing it like a magnifying glass into its singularity, then this would explain how what appears as a small amount of light being "hoovered" into a black hole at its rim can become a hot smoldering sun. For this to happen successfully the density or number of universes would need to be extremely high. The estimate given later on is that a Multiverse Array hosts approximately 2.96476E18 Multiverses and each Multiverse hosts 3x10^8 Universes. So many universes may seem outrageous; however, it is plausible that the universe could not exist without a high level of density. The advantage of so many universes packed into Space (StTt) is that even if a black hole's pull on light were mild, and even if its gravity allowed it to cut across a tiny, insignificant portion of the Multiverse, the sun and stars it could create by focusing light from so many sources would be very powerful. But the Multiverse creating this intensity is not currently being accounted for in physics. Technically, this means the sun you see every day when you look up at the sky could not possibly work or exist without a Multiverse.
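
Taking the text's own speculative estimates at face value, the totals work out as follows; the per-universe light capture used here is an arbitrary placeholder to illustrate the magnifying glass effect:

```python
# Totals implied by the hypothetical Multiverse Array figures quoted above.
multiverses_per_array = 2.96476e18    # Multiverses per Array (text's estimate)
universes_per_multiverse = 3e8        # Universes per Multiverse (text's estimate)

total_universes = multiverses_per_array * universes_per_multiverse
print(f"Universes per array: {total_universes:.3e}")   # ~8.894e+26

# Magnifying-glass conjecture: a mild light capture per universe, summed
# over one Multiverse, still concentrates an enormous total at the singularity.
capture_per_universe = 1e-12          # arbitrary placeholder, normalised units
focused_total = capture_per_universe * universes_per_multiverse
print(f"Concentration factor: {focused_total / capture_per_universe:.1e}")  # 3.0e+08
```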

Spreading an energy network across universes in this way would increase efficiency, in the sense that it serves multiple universes simultaneously rather than just one. However, a consequence of this is diffusion, or energy loss due to the energy being shared. It is also plausible that a black hole's singularity can keep going while the force and magnitude of the gravitational pull at its outer rim fades away, until the rim is eventually nothing more than electromagnetic radiation detectable some distance from the sun at the outer edges of the solar system. This would mean that fundamentally the sun is still a black hole, in the sense that what appears as a "sun" is in fact its ongoing singularity.

The size of the singularity of a black hole should ideally be determined by the number of universes it breaches, since by breaching them it unifies or connects them within it. If singularities within black holes are forced to share their light, heat and gravitational force across the universes within their influence, this may mean that though their outer rim exerts, or once exerted, a massive gravitational pull, contrary to belief the gravitational force at the center of a black hole becomes weaker, not stronger, towards its singularity as it redistributes energy and matter across universes. It may become so weak that it is no different from that exerted by a sun. This would allow light and any debris falling from the rim into the black hole to be distributed to other universes through the singularity, where it forms into planets and eventually into solar systems, not only in the black hole but also in the adjacent universes within its influence. Eventually the re-distribution of energy and matter would cause the gravitational force in its outer rim to fade away, leaving a newly formed sun with solar systems circling it and little or no evidence that it was once, or basically still is, a black hole. This same procedure of forming solar systems could be taking place around the same shared sun simultaneously but in multiple universes, each developing its own planets and characteristics, as illustrated in the diagram above. These would be referred to as "green-belt" universes in the CMB, which, created under similar conditions to earth's, may have habitable planets or may be teeming with life very close by, albeit outside earth-universe.


If a sun spreads the energy and gravitational pull of the internal singularity that creates it and shines in more than one universe simultaneously, there could be two or three other earth-like planets orbiting earth's sun, created by similarly conducive conditions. However, there would be no way of knowing they are there because, though they share a sun at a nexus, they are in different universes and in independent solar systems that occupy the same Space. The only evidence of this may be quantum tunneling. In one or some of these spaces, earth's sun is or once was a black hole. In essence they would be separated by light speed barriers and therefore intangible to one another, whereas suns breach these barriers and connect the universes within their influence. This is why crossing light speed barriers to test the Multiverse hypothesis may be as important as scouring the universe for habitable, earth-like planets. These habitable planets may be close by, within arm's reach but beyond grasp, because they are across a light speed barrier in an adjacent universe and therefore intangible.

This further implies that cosmologists, astrophysicists and astronomers are not paying close enough attention to suns, or are glossing over them. Suns, like earth's, need to be revisited. They may contain as much evidence of a Multiverse and may help explain Dark Matter and Dark Energy. For instance, is how suns work fully appreciated and scrutinized, or has the conventional explanation led to complacency and myopia? Where does earth's sun get all its energy from, really? Is how it keeps burning continuously for billions of years being correctly understood, or simply misunderstood, especially given the possibility that the energy given off by suns may be impossible without their being part of an energy network shared between universes that feeds suns to keep them going? What is the link between Dark Energy and the sun's source of fuel? Why do dying suns become black holes? This in itself is a red flag. If black holes are at the center of a nexus between sequestered universes (sub-universes), the Multiverse and earth-universe, these linkages may exist because these three conditions are the mechanism through which suns gain and distribute their energy, which is currently being misinterpreted as being provided by the conditions for nuclear fusion alone. Nuclear fusion may simply be like the flint that lights the propane in a lighter, yet physicists make themselves believe sparks from the flint are the sun itself. Is where this energy comes from and how it is distributed fully appreciated? How can it be linked to the Multiverse, other dimensions or universes? Can all this fuel and energy really be explained by fusion alone, or is this an oversight? What is the explanation for the sun's perpetual density that causes persistently high pressure and high temperatures? Is the sun's mass alone truly capable of generating the pressure and consistently providing the fuel required for fusion without being part of, or having once been part of, a network pulling energy in [black holes and dark spaces] and pushing it out [suns and stars]? Is there a hidden singularity at work behind suns, linked to black holes and a Multiverse, applying the pressure for this fusion to take place consistently? Where is the continuous pressure that causes nuclear fusion within stars coming from? Is this fuel and energy being taken for granted?
Where does all the material the sun burns come from? When solar flares take place, what causes them and where does the additional fuel burned suddenly come from? If the hypothesis that every sun is in fact the singularity of a black hole, pulling energy from one universe or multiple universes and feeding it into other universes, is followed, then where is the event horizon created by earth-sun's black hole? Where is the inferred black hole that at one time funneled, or is continuing to funnel, energy to it, and why can it not be seen or found in this universe? The answer may be quite simple: its event horizon is a black hole in other universes with which the same Space is shared but in an alternate Multiverse setting, or it was once such but has now faded into obscurity, leaving behind a singularity that continues to power the sun. This continuous flow and exchange of energy then becomes the premise for proving the existence of a Multiverse. Is all the sun's fuel, and how it works to generate light and heat, accounted for? If black holes and suns are indeed linked through a Multiverse "energy" network and scientists are willing to consider looking for Dark Matter and Dark Energy in black holes, then maybe they should consider looking for the same in suns, the way they would look for propane in a lighter instead of believing the spark from the flint, the equivalent of nuclear fusion, is responsible for all the energy put out by the sun. There is a need to find deeper explanations of how suns work and how they are connected to black holes, empty space and an energy network. How this network works may provide a scientific basis for a Multiverse and may answer questions about the universe that are as yet still elusive. What is implied is that many of the unanswered mysteries that defy explanation in astrophysics, cosmology and astronomy may be a result of not thoroughly scrutinizing earth's own sun and of too arbitrarily explaining away what it does as accounted for by nuclear fusion alone, without understanding the role a Multiverse may play in generating and distributing this energy in a continuous cycle. If suns can produce energy endlessly from nuclear fusion so easily, why are current attempts to reproduce it artificially failing? Why is the longest fusion run recorded less than 7 minutes (the equivalent of the duration of a spark from a lighter's flint)? Why doesn't it just keep burning like a sun when triggered in intense magnetic fields, especially when temperatures greater than those found in the sun can be produced in a lab? How does the sun keep burning for billions of years on nuclear fusion when the same process cannot seem to last more than 7 minutes in a lab? The arguments seem to point to inadequate knowledge about suns, or to what is known about how they work being far too shallow.


Suns and "Negative Universe Theory"


Humanity, being a race and civilization still in its infancy, has a tendency to observe the universe from the inside out. What this means is that it tends to take its own experiences and overlay these on observations as being the fundamental nature of reality, when in fact this human filter can lead to inaccurate interpretations of the universe, of fundamental laws in physics and of what is understood about the universe in cosmology. One of the views I have held for a very long time now is that the very manner in which human beings observe the universe, even in cosmological terms, is possibly inherently flawed. I refer to this as "Negative Universe Theory". A species that emerges in any environment becomes a product of its ecosystem and is shaped by the most prevalent forces to which it is exposed. On earth this prevalent force is the sun. The sun floods the earth constantly and daily with light and heat. Everything on earth is dependent on and affected by the energy emitted by the sun at the centre of the solar system. All life on earth, human beings included, is therefore composed of sun creatures, so to speak, which have developed everything they do around the sun's energy. However, if it is true that suns are in fact simply a minuscule aspect of the universe and represent nexus points created in conjunction with black holes, appearing where the Multiverse by chance happens to converge, as described earlier using the magnifying glass, then it should be considered that Dark Energy, which makes up close to 70% of the universe, and not sunlight or the universe as it is observed, is the fundamental nature or state of the universe. In this sense creatures that emerge in sunlight, as human beings do, use sunlight to literally be able to see and observe the nature of the universe. Just because human beings literally use light to "see", or "see" in sunlight, does not mean that the sun "emits" visibility. This is a flawed assumption. This view may persist simply for lack of intellectual development, in the sense that what the sun emits is in fact a thick form of opacity and darkness, and human beings, like other creatures, have developed the highly specialized sensory ability to convert this darkness into a visual method for seeing colours and detecting objects and obstacles in the surrounding environment. The ability to "see" in darkness does not make darkness the emission of light; it makes it "light" to human beings. This is what I mean by human beings having a tendency to view the universe inside out. To a race or civilization that alternatively emerges under the powerful influence of Dark Energy instead of the strong influence of a sun, let's refer to them as DEs, the universe may be a very different place, in the sense that they would emerge with the ability to naturally observe and detect the Multiverse. When human beings look into the night sky they see Space as a dark blanket and stars as bright pin-pricks of light spread across it; their experience and view of the universe is trapped or confined to observing it using electromagnetic energy. When DEs look at the night sky, Space would not appear dark as human beings see it but would consist of an expanse of boundless "daylight", and the stars would appear as black pin-pricks against a bright and clear universe, hence "Negative Universe Theory".
Black holes would be brighter than the rest of Space, appearing as bright or "sunnier" domains attractive to venture into, while suns would be dark, secluded, uninviting places casting thick shadows over planets in dank solar systems overrun with creepy-crawlies, the way human beings regard subterranean caves hidden in the earth where creatures that emerge in zero light have evolved. To the creatures in the cave that have emerged in the complete absence of light and developed the ability to see without the need for light, the cave is a wonderful and colourful place to live and feed, because they have developed different methods of "seeing" that do not require sunlight; whereas to human beings observing the same ecosystem, the cave is lifeless and creepy, and its surroundings and creatures appear colourless due to the nature of pigmentation in this environment. The fact that human beings can pick up a large stone with a single bare hand, hurl it hundreds of meters, swim through acid (water), breathe fire (oxygen), build tools and weapons, strategize and assemble would make humans the stuff of nightmares. Human beings living on secluded, shadowy planets, with the ability to see in absolute darkness, pupilled eyes capable of rapidly tracking multiple moving objects, the ability to smell, lightning-fast reflexes, a mouth full of hardened ivory teeth, a mane of hair on the head, nails that grow rapidly on hands and feet, the ability to learn and act individually or in groups with high levels of intelligence, incredible agility and strength enhanced by gravity, high bone density, the ability to rapidly develop muscles with moderate effort, imperviousness to high levels of gravity, resistance to disease and a tendency for mood swings, would not only appear as frightening "super-predators" but hideous, much as humans react to Anglerfish, deep sea creatures human beings may see as frightening but which most likely view themselves as cute and attractive (a little tongue-in-cheek introspection there). DEs would look to "bright" black holes for signs of life, whilst being less interested in planets for the reasons explained, as eagerly as humanity searches stars, solar systems and planets in "habitable zones" for signs of life whilst ignoring black holes in such searches for similar reasons. To DEs, Space would appear as a bright place with an infinite number of universes to explore, whereas planets in the shadow of the electromagnetic array of suns would appear as strange, colourless, secluded and unattractive places with strange, ugly or grotesque-looking creatures regarded as bottom feeders, potentially dangerous to interact with and best observed from a safe distance for the time being. Human beings experience the same when they view places and creatures without light at the bottom of oceans so deep that sunlight cannot reach them. Now imagine that in those ocean depths there were curious, somewhat hostile creatures with an intelligence equal to that of humans. How much more foreboding and dangerous would those depths be? The isolation human beings experience in a single universe may be a product of emerging under the influence of a powerful electromagnetic array created by the sun. It should not be assumed that all creatures and beings emerge under these same conditions or that the way human beings see Space and the Universe is fundamental; Negative Universe Theory can be taken into account to broaden this analysis.
As the adage goes, there is light in the darkness and darkness in the light.

A simple explanation of how Negative Universe Theory comes about is that, as the sun rises and sets, the fundamental physical state of the universe is that there are cycles of boundless "daylight" during the night and complete darkness during the day when the sun rises. The radiation of opaque-light from the sun creates an absolute, impenetrable and cavernous darkness in an inhospitable, bone-crushing 1G of gravity that almost nothing sane should be able to survive in. However, it also generates solar heat. Animals and plants respond to this heat and become active during the opaque-day. Human beings and other creatures developed the ability to see in the sun's complete darkness using its radiation by a process of inversion, the consequence of which is that they began to increasingly see black objects as white and white objects as black until they gained the ability to see in complete darkness. As inversion evolves it also begins to intensify its ability to discern small variations or hues of grey and begins to exaggerate these, consequently creating an array between black and white that becomes the colour palette observed in the electromagnetic spectrum (VIBGYOR). It is possible that human beings have never actually seen any of these colours, only a sensory rendition or weak recreation of them, as they are genuinely only visible in the absence of opaque-sunlight, that is, in genuine cosmic light, which occurs at night and permeates Space. Though the environment is potentially bland and consists of shades of grey, these are accentuated into a beautiful array of colours because they cannot be seen in opaque light. Negative Universe Theory proposes that the inversion of sight that shows Space as dark and opaque, and that shows black objects as white and white objects as black in human sight and that of most animals in general, is completely artificial and isolated to creatures using negative sight. It is not representative of how these environments actually look or exist. In other words, the brilliant colours and hues human beings observe during the day are renditions or recreations of an otherwise exceptionally bland planet enveloped in darkness created by the sun, as might be expected of an environment in the deep oceans or in caverns deep in the earth where there is no light whatsoever and creatures have adopted specialized sight to see better in poor conditions and learnt to use this inversion process to survive. For instance, a pitch black polar bear will appear as a bright white polar bear, and snow, which is pitch black, will appear as white to improve visibility. If this inversion did not take place visibility would remain poor, making it difficult for human beings and other creatures to fend for themselves and survive. However, when these views are artificially inverted by the retina and brain, what is poor visibility in an opaque environment becomes "daylight" where objects near and far are easily discernible. This process is similar to night vision technology, where infra-red goggles are deployed in complete darkness and objects become visible, except that it has evolved to become much more sophisticated. The consequence of human beings processing "sunlight" is that they cannot process cosmic light; therefore, by a process of inversion, what the sun emits as "sunlight" obstructs cosmic light and its radiation is darkness.
In order to be able to see in sunlight, when it is warm and there is increased activity, the retina and brain drop or cancel out the visual information which enables them to see natural cosmic light. This is why Space looks dark and is the reason why, instead of visibility improving at night when the sun sets, it in fact worsens and becomes dark. Sight is inverted. Cosmic light, which shows images as they actually are, is very likely to be very powerful, far more refined in terms of pixels, extensively more colourful, to contain much more visual information and to be more visually stimulating than sunlight; but for basic survival this visual information is dropped to accommodate the ability to invert sight and see in the dark, that is, in sunlight when the sun is up and temperatures rise. This would mean the everyday experience of human beings and what they observe about the earth is not fundamental but highly specialized. Observing the earth in its natural state in "daylight" would show it as a bizarre, dank, unattractive, colourless and cavernously dark planet teeming with creepy-crawlies in the soil, on land and in the air, from bacteria and viruses, to insects and animals in jungles, to birds and human beings; a place to be avoided. However, to human beings during day-time this same environment will appear as a paradise: an inviting, luxurious, nature-filled garden teeming with boundless colours, when it is in fact quite the opposite, as it is enveloped in complete opacity. Where light is in fact a shadow or disruptive form of radiation, as is proposed in Negative Universe Theory, this would require measuring not just the speed of light but the speed of true light, i.e. the cosmic radiation in the background which light from the sun disrupts to create or cast opaque darkness over the earth.


Subterranean filter: The sun radiates an intense pitch black darkness that also generates heat. To be active in this heat and warmth, human beings and other creatures have developed vision that processes the sun's radiation and inverts colours, effectively creating a specialized form of night vision that allows human beings to see in complete darkness. This filter sees the popular image of the earth in relation to the sun shown above. Subterranean refers to planets and other bodies under or beneath the radiation of a sun, which effectively fully or partially cloaks them in darkness. It must therefore be considered by cosmologists that, as vast and open an expanse as Space may appear to humanity, it may not be much more than a boundary between Subterranean Space and Higher Space (Negative Universe Theory). Humanity searches for intelligence and life amongst other galaxies and planets with a mindset that when it is found human beings will somehow be at the apex of beauty and prowess, like a juvenile who thinks he or she has already won a popularity contest and just needs to be discovered for everyone to realize how great they are. This skewed mindset is a "subterranean filter".


Celestial filter: The image above shows how the earth in the presence of the sun, and the earth in relation to cosmic light, actually looks without the night vision filter. The sun radiates darkness through the cosmic radiation background, casting a warm but intensely opaque shadow over the earth. This reduces visual information from 100% visibility in cosmic light, confining it to 1% or less consisting of sunlight, which is in fact opaque darkness. It must be considered that the impact on biology of the sun displacing or muting cosmic background radiation may have affected other areas of the nervous system and brain. This isolation, loss or muting of information may go much further than just sight. It may affect the brain and nervous system, muting their sensitivity to cosmic radiation and thus muting other fundamental forms of sensory ability and perception, isolating human beings from the rest of the cosmos and effectively sealing off humanity in its own subterranean world. This 'subterranean' isolation enhances and sharpens the more predatory and primary forms of perception seen in the 5 senses driven by solar energy, which are required for foraging and survival, while temporarily muting higher order methods of sensory perception (6th sense, telepathy, hive-mind and intuition) that are supported by being predominantly exposed to cosmic radiation (Negative Universe Theory). Foraging and the need to constantly feed would also dramatically shorten the lifespan of human beings and other creatures born in opaque light, making them mortal.

A pitch black polar bear in snow as it actually looks in reality. Visibility is extremely poor (Negative Universe Theory). Images in negative should be as rich, toned and detailed as inverted images, but they appear blurry and are not as clear as inverted images because they are missing essential information on the tones, variations and nuances of black hues.


Inversion of sight allows human beings and other creatures to see the polar bear and its surroundings more clearly in pitch black surroundings (Negative Universe Theory). Although this improves visibility, it is essentially a filter that does not fundamentally represent how what is being observed actually exists and looks.

What human beings see when they look at Space or the night 
sky

How the same night sky actually looks (Negative Universe Theory)

Space and the Universe as human beings see and experience it



Space and the Universe as it actually appears (Negative Universe Theory). The areas in darkness would be viewed as "subterranean" because they are submerged in shadow.

This is the same way human beings view the bottom of the deepest oceans, where there is no light, as dark, oppressive and inhospitable. The very high pressures experienced at those depths would be analogous to the very high levels of gravity experienced on earth, which would make it intolerable to land on earth and stay for long durations. The levels of gravity human beings have developed the musculature and bone density to withstand, and take for granted in everyday life, may make humanity an intelligent but highly specialized form of life with a unique physiology designed to withstand intolerable conditions. Gravity and the ability to breathe oxygen may be no more profound and specialized than the impossible bars of pressure experienced by creatures that inhabit the deepest oceans and their ability to breathe in water at those depths.

The Negative Universe Theory view that life on earth is very highly specialized to live and survive in an exceptionally extreme and inhospitable ecosystem is more likely true (just as the earth is not the centre of the universe) than the belief held by many today that earth is some kind of attractive paradise. Human beings tend to think, "why on earth is there no life anywhere else in the universe?", whereas the rest of the universe looks at earth and thinks, "how in the universe did life and intelligence ever manage to emerge in these terrifying, awful, horrible, hostile, inhospitable and bone-crushing conditions?"



Negative Universe Theory may offer an alternative explanation for the Fermi Paradox. Earth may not be as hospitable and paradise-like as human beings experience it; in fact it may be quite the opposite, and neither may human beings be as beautiful as they look to themselves. For creatures that depend on cosmic light, entering a planet covered in the sun's rays, which block cosmic light, would be like a descent into a bland, colourless and unpleasant abyss.

We think of water as a marvelous, life-giving liquid and, being the naive species we are, even look for it on other planets as one of the "conditions for life"; yet to creatures that did not emerge on earth it may be a toxic, hazardous, virus- and bacteria-filled substance with the attributes of sulfuric acid that nothing should be able to survive in. Negative Universe Theory on Anglerfish: why do humans assume their features and species-specific standard of beauty are universal? As scary as you look, do you think you are really prepared to interact with some DEs, and them with you? It is not unreasonable to consider that human beings may look like the Anglerfish shown above yet look beautiful to themselves, and that earth is a paradise to the creatures that have evolved to survive on it: a place enveloped in opaque darkness, bland, colourless, hostile, dangerous, disease-infested and inhospitable to "aliens", hence the Fermi Paradox.

Can we ever be objective about ourselves? Has the analysis really been exhausted, or do human beings simply choose what is convenient to believe, especially when it is self-gratifying?

Black holes and how humanity sees them in nuances of light. Through one filter they appear to be a mystery leading light itself past infinity, across and beyond the boundary of the unknown.


How black holes look in Negative Universe Theory. Through another filter they are a mystery leading darkness past the unknown, across and beyond the event horizon. Ideally, for accuracy, the negative image of the black hole should be brighter (sun-like) than the rest of normal Space, due to increased sensitivity to variances of darkness or nuances of black. To be accurate the image would need to include missing information on variations and nuances of cosmic light. In the same way that stars and suns disrupt cosmic light, casting darkness over solar systems, it should be considered that black holes do the exact opposite: they concentrate cosmic light. This is why black holes will appear to shine like suns against the rest of Celestial Space in Negative Universe Theory. However, since the electromagnetic spectrum does not effectively extend to encompass cosmic light and disrupts it, black holes will appear no different from the rest of space to the naked human eye, which uses a process of inversion to see. Finding a filter sensitive enough to reveal nuances of black should make it possible to demonstrate that cosmic light is far more intense in black holes than it is in the rest of Celestial Space. For DEs that function on a sensory system tuned to cosmic light, the way human beings function on a sensory system highly tuned to sunlight, black holes will be the equivalent of basking in the sun and will most certainly be far more attractive and stimulating abodes than normal/Celestial Space and the lesser Subterranean Space. This basically divides Space into 3 realms of existence, namely: the lowest, which is in heat and darkness, i.e. Subterranean Space; a middle boundary, namely Celestial Space; and the realms with the highest concentration of cosmic light, namely black holes or Higher Space.

 
In order to see in sunlight (darkness) the filter has to drop or cancel out interference from cosmic light, forcing it to invert from true light into a black "microwave" background. This should be similar to forcing this light to fade by muting it, for example by the visual cortex or visual system rendering it transparent. Hence, even though cosmic light is present, its interaction with the colour spectrum is dropped in favour of using sunlight to see, which is how human beings would be able to see in absolute darkness (i.e. in the absence of cosmic light) by using a process of inversion, as was demonstrated with the polar bear. The challenge of finding or detecting cosmic light is that it may not be electromagnetic or, like anti-matter is to matter, it may be a form of inverted electromagnetism consisting of properties of light currently unaccounted for in physics: light that can be used to "see" even when the eyes are closed, which some people claim to be able to sense and refer to as an "aura", but that has been dismissed by mainstream science. Basically, conventional electromagnetic forms of light such as sunlight may drop tremendous amounts of visual information, e.g. revealing as little as 1% of the visual information available in cosmic light, by forcing the visual cortex or system not to see objects and their properties even though they are in plain sight. Muting cosmic light in this way makes sense, as it most likely allowed creatures moving about and foraging for food using sunlight to see only what was immediately relevant to their survival. This entails that to be able to see using cosmic light, as shown in the images above, aspects of electromagnetic light or sunlight would in turn have to be muted to transparency, allowing cosmic light to emerge and thereby inverting images back to their actual form, as seen in the negatives, as an initial method for beginning to restore lost visual information (Negative Universe Theory). Research on negatives and negative images is possibly a good place to start in this area of the sciences.
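
As a starting point, the tonal inversion described here is easy to reproduce: a photographic negative maps every tone to its opposite, which is how a "pitch black" polar bear would render as white. A minimal sketch using the Pillow imaging library, with placeholder filenames:

```python
# A sketch of the inversion described by Negative Universe Theory:
# every pixel value v maps to 255 - v, so black renders as white.
# Requires the Pillow library (pip install Pillow); filenames are placeholders.
from PIL import Image, ImageOps

img = Image.open("polar_bear.jpg").convert("RGB")  # hypothetical input image
negative = ImageOps.invert(img)                    # per-channel 255 - value
negative.save("polar_bear_negative.jpg")
```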



black is white and white is black in 
Negative Universe Theory

Although Negative Universe Theory may rub some people the wrong way, that is well and good; if it makes you feel uncomfortable or you are offended, then maybe it was meant for you to see. If anything, Negative Universe Theory allows us all to appreciate how silly and fickle racial discrimination really is, if in order to be black or dark skinned the filter requires you to be white or light skinned, and in order to be white or light skinned the filter requires you to be black or dark skinned. It shows that what we should embrace is our humanity: focus less on differences in a manner that is divisive, and instead focus more on our capacity to show acceptance, empathy, care, love and affection regardless of a person's skin, how they look or where they were born.


Dark Energy, side note...

On a side note concerning Dark Energy, there is an interesting presentation by Dr. Josh Frieman on the subject. Follow the link to watch it.





The assumption on black holes implies that entering a universe will be much easier than leaving it. 
From within the bubble that forms the black hole there will be no evidence another universe exists beyond the borders or event horizon of the bubble in which it is contained, much in the same way earth-universe would likewise not know of other universes if it were also in a similar bubble. If even a tiny black hole continues to indefinitely consume anything that comes near it, the inference is that it will inevitably grow from a baby universe into something like an earth-universe, while its host likewise continues to expand into an older, more grandiose universe. If this is the case, earth-universe is a universe that is still in its infancy or early stages of development. The possibility of this creates a potential multiverse scenario of endless universes within universes, in plain sight of cosmologists. This further reinforces the inference that green-belt universes in the CMB that are the same age as earth-universe may have habitable planets closer to earth. The assumption would be that in the multiverse array each universe is essentially a multiverse disc, while the universes it contains, seen as black holes at diverse stages of development, are on a path that may lead to them becoming universes with evolving black holes themselves. Even in these conditions the black hole can still be used to accelerate a vessel; however, the vessel must have planned how to generate its trajectory on exit such that it is able to escape the ever-present gravitational pull towards the centre of the black hole.

Matter accelerates from a region where Space-Time Geometry is expanding to where it is contracting. Where matter is concerned, for a double-sided funnel matter will therefore not enter through one funnel and exit through the other, as is inferred with a wormhole being a shortcut between locations in Space; it will simply become clumped or trapped at the centre of the wormhole where the two funnels meet. The double-sided funnel is only useful if the direction of acceleration is what is required to enter and exit locations in opposite directions, allowing for return journeys. This means that, generally, where Space-Time Geometry is contracting this should be indicated by matter being observed to accelerate through Space. An example of this is meteors and other objects falling towards the earth. Where Space-Time Geometry is expanding the opposite should occur, in that matter should be seen to decelerate or to experience a resistance to acceleration. For instance, when a rocket takes off from earth it must accelerate against the resistance of earth's gravity until it reaches escape velocity.
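
For reference, the escape velocity in the rocket example is a standard calculation:

```python
# Standard escape-velocity calculation for the rocket example above.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the earth, kg
R_EARTH = 6.371e6    # mean radius of the earth, m

v_escape = math.sqrt(2 * G * M_EARTH / R_EARTH)
print(f"Escape velocity: {v_escape / 1000:.1f} km/s")   # ~11.2 km/s
```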

When it is said that stars and galaxies are accelerating due to the expansion of Space or Space-Time Geometry (STG), this seems to be a contradiction, in that if STG is expanding then resistance is increasing and the consequence is therefore deceleration. Acceleration can only take place if the furthest reaches of Space are contracting. If matter such as galaxies and stars is observed to be moving away faster at "72 kilometers per second per megaparsec", then technically this implies Space in those regions should not be expanding but contracting, that is, moving from a region of lower to higher geometric density, or from a region that is expanding to one that is contracting; for example, towards the centre of a black hole.
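
For context, the quoted Hubble figure translates into recession velocities as follows; the distance at which the nominal velocity reaches the speed of light is included for scale:

```python
# Recession velocity under the quoted Hubble constant, v = H0 * d.
H0 = 72.0             # km/s per megaparsec, the figure quoted above
C_KM_S = 299_792.458  # speed of light, km/s

def recession_velocity(distance_mpc: float) -> float:
    """Velocity in km/s at a given distance in megaparsecs."""
    return H0 * distance_mpc

print(recession_velocity(100.0))   # 7200 km/s at 100 Mpc
print(C_KM_S / H0)                 # ~4164 Mpc, where v nominally reaches c
```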

There is a caveat on black holes. Although black holes fit comfortably into Multiverse theory, as depicted in the diagram above, it cannot be ruled out that black holes simply have a Multiverse-like structure. If this is the case black holes would more ideally be identified as "sub-universes" existing within a single universe that is part of a Multiverse, rather than being viewed as the Multiverse itself. For instance, if earth-universe is contained in an infinity sphere (black hole) that resides in a much larger and older universe, then this sequestered space does not represent multiple universes; it is simply one universe being divided or "sequestered" into sub-space regions. This means that black holes create divisions or singularities, but these remain within a single universe, with sections sequestered by the singularity created within the bubble closed off by black holes. This sequestration takes place within a universe rather than across universes and is therefore not, in this sense, spread across the Multiverse; in fact other universes in the Multiverse are likely to have similar sequestrations. This only adds to the complexity of Space-Time. One means of determining this is by identifying what percentage of Dark Matter black holes account for. Nevertheless, caution is required, since whether or not black holes account for Dark Matter and Dark Energy does not inevitably affect the existence of a Multiverse, which most likely can only be verified fundamentally by faster than light acceleration. Clarity on this would require deeper enquiry. Ultimately the Multiverse hypothesis may only be truly verifiable by achieving superluminal or faster than light acceleration, to see how the framework between universes exists and to test the elasticity of Space-Time. Fortunately, a resolute scientific answer to this question may arrive in the next few years with the debut of the Collision Drive.

One major advantage of black holes may be the ability of an advanced civilization to relocate its infrastructure into them. For instance, when humanity gains the technology to enter and leave black holes at will, it may make sense in future to move its civilization into sequestered space, which would protect it by making its presence and continued growth undetectable to any other civilizations in open Space, some of which may be hostile. Infinity spheres may be useful in the sense that they will prevent any kind of radio signals or activity that may give away the location and presence of a civilization, such that it could exist in Space without detection, thereby increasing its security.
 
A wormhole or black hole with funnels on both ends has the added advantage of possibly offering a return route between universes; otherwise a single-funnel black hole remains a one way trip, unless there is another black hole nearby that offers a reverse trip. As long as the pressure or contraction of Space at the mouth of the black hole is greater than the pressure or contraction at the singularity, a black hole should in essence not be completely different from Alcubierre's Warp Drive. Note that when words like pressure and compression are used, this is in reference to the elasticity of Space and how it affects acceleration. When an object enters a black hole it will be accelerated in a similar way that a warp bubble will move an object through space faster than the speed of light without violating Einstein's light-speed limit. However, despite Einstein's light-speed limit not being violated, light-speed barriers (the divisions that separate universes and create the Geometry of Space or Geodesics) will still be breached. This means that the last singularity (U10x) is merely the final speed at which the object will exit the black hole, where resistance from light speed barriers or boundaries has managed to force the breach to close, consequently halting further propagation of the black hole. A vessel should be able to exit through any section of the black hole and the singularities it crosses; it does not have to emerge at the tip of the black hole, which is hazardous because that is where matter is potentially collecting and being ejected. As hypothesized above, travelling at the speed of light creates breaches across calibrated light-speed barriers that separate universes, in this case creating 10 singularities or 10 exits (U1x-U10x) dependent on velocity. Each barrier forces the black hole to contract until it is stopped at U10x. If the final exit speed at the final singularity is 10x the speed of light, the object can exit in any area of the multiverse along the length of the black hole (U1x-U10x), which represents a geographical area of Space 10x bigger than earth-universe. Which universe it exits into from the singularity may depend on many factors, such as exit location, velocity and trajectory. This will be a one way trip, since the vessel can only travel from regions of expansion to regions of contraction, which create the direction of acceleration or gravity. If the vessel does not have the technology to match the acceleration it gained from the black hole it cannot return to the universe it originated from. However, it can avoid this by exiting in a new location back into the universe it came from, depending on how the pilot navigates the exit from the black hole. A vessel may also be able to use a black hole like a slingshot by combining acceleration from its own engines with the additional speed provided by the black hole. Any time travel distortions experienced by occupants of the vessel will not be real; they will simply be holographic.
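
A toy sketch of the U1x-U10x exit structure conjectured above; the terminal multiple of 10 and the one-exit-per-barrier rule are assumptions taken from this paragraph, not established physics:

```python
# Toy model of the conjectured U1x-U10x exit structure. Assumptions taken
# from the paragraph above: one exit singularity opens per light-speed
# barrier crossed, and propagation halts at a terminal multiple of 10.
C = 3e8                    # speed of light, m/s
TERMINAL_MULTIPLE = 10     # black hole propagation stops at U10x (assumed)

def available_exits(velocity_m_s: float) -> list[str]:
    """Exits reachable by a vessel at the given velocity, per the conjecture."""
    multiple = min(int(velocity_m_s // C), TERMINAL_MULTIPLE)
    return [f"U{n}x" for n in range(1, multiple + 1)]

print(available_exits(4.5 * C))   # ['U1x', 'U2x', 'U3x', 'U4x']
```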



Pilots navigating through a black hole 
would need to pay close attention to its Geometry.

By comparing black holes to Alcubierre's Warp Drive
it is possible to conclude black holes fundamentally
accelerate any matter that passes through them and
are capable of doing so at speeds faster than the
velocity of light.



Even if a black hole weighs "6.6 billion" suns, hypothetically this gravitational force can be converted into slingshot acceleration as long as knowledge of how to slipstream, navigate the Geometry of Space, and manage and control g-forces and resistance at superluminal rates of acceleration is available. It should not be forgotten that gravity is not just about mass; it is also a form of fuel for acceleration in and of itself. Therefore, it makes sense to harness it. In StTt the Geometry of Space does not exist outside a vessel, neither does it create Space; these are all internal changes taking place in the atomic structure of matter. However, the Geometry of Space visualized by Einstein in SeTe is likely to be as useful to knowing how to pilot through a black hole as a pilot's knowledge of air currents is in aerodynamics. If the expansion of Space nearby and at the funnel or opening of black holes, being greater than the contraction of space at the singularity, can be deduced as what causes the gravitational acceleration of matter from open Space toward the black hole's opening and from the opening down its funnel, then technically the principle is not very different from the Alcubierre Warp Drive (AWD). The wave created by the AWD that propels the spacecraft appears to be almost identical to the wave that accelerates mass toward and through the black hole. At superluminal velocities the internal frequencies of atoms, viewed as this Geometry and generated by the vessel's trajectory and velocity, are likely to be critical to its ability to enter, move through and exit a black hole safely. For instance, note that in the AWD the spaceship is kept safe by riding a section of the wave (Geometry of Space) parallel to the spaceship, while the causes of propulsion, namely expansion and contraction or compression, move the spaceship along. The Geometry of Space remaining parallel to the velocity and trajectory of the spaceship should probably not be trivialized, as this harmony may be what keeps the occupants of the spaceship safe from dangerous time distortions associated with the Geometry and the light-speed barriers that separate universes by creating singularities between them. Since the Geometry of Space in black holes is vast, and concave or angular from the funnel to the tip, to maintain the same harmony or "bubble" it is likely the pilot must ensure that the spaceship's superluminal velocity is parallel to the Geometry of the black hole. A spaceship travelling at superluminal velocity against the Geometry is likely to experience something similar to an airplane hitting turbulence, except that with this Geometry at superluminal velocity the turbulence will emerge as loss of trajectory toward the intended destination, erratic geographic or Spatial displacement, and distortions of Time that can have serious consequences for the capacity of the spaceship and its occupants to arrive at their destination, in the right universe, at the projected time, in the right frame of mind. If frame dragging occurs in the Geometry of the black hole, pilots are likely to have to follow it carefully, keeping the frequency of the spaceship in harmony with the frequency of the Geometry it travels through and consequently keeping the "bubble" safe and intact. It is possible that physicists have not as yet considered these attributes or the importance of beginning to document such frequencies. These frequencies will consist of the X, Y, Z of 3-directional (dimensional) Space with T for Time.
The Geometry consists of specific frequency combinations of these 4 factors, which may mean very little when travelling at low speeds; however, navigation at superluminal velocity may simply be impractical without knowledge of how a vessel emits them and how these emissions affect navigation. Even though for now how they work and what they are is not fully understood, developing a catalog remains vital, since the very procedure of figuring out how to collect this data and keeping records of what is found builds humanity's ability to understand how navigation through frequencies works.
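
As a first step towards such a catalog, a minimal sketch of what one record might hold; every field name and unit here is a hypothetical assumption for illustration:

```python
# Hypothetical catalog entry for the navigational frequencies described
# above. Every field name and unit here is an assumption for illustration.
from dataclasses import dataclass

@dataclass
class GeometrySignature:
    x_hz: float          # frequency component along X
    y_hz: float          # frequency component along Y
    z_hz: float          # frequency component along Z
    t_hz: float          # the Time component, T
    velocity_c: float    # vessel velocity as a multiple of light speed
    note: str = ""       # any observed navigational effect

catalog: list[GeometrySignature] = []
catalog.append(GeometrySignature(1.2e3, 0.9e3, 1.1e3, 60.0, 0.0, "baseline"))
```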

One place to start in this science is to begin to understand and map the frequencies emitted by microchips when they process diverse types of information. This refers to the actual frequencies emerging from microchips on a motherboard, not the familiar sounds of a modem transmitting code or information. Understanding the relationship between the frequencies emitted by microchips and the information they process is easily accessible and therefore an interesting place to begin, as it may reveal some useful insights.
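
A minimal sketch of that starting point, assuming a digitised emission trace is available; the trace below is synthetic, standing in for a real capture from a probe or software-defined radio:

```python
# Sketch of the proposed starting point: capture a chip's emission while it
# processes a known workload, then inspect the spectrum. The trace below is
# synthetic, standing in for a real probe or SDR capture.
import numpy as np

fs = 1_000_000                             # sample rate, Hz (assumed)
t = np.arange(0, 0.01, 1 / fs)             # 10 ms capture window
trace = np.sin(2 * np.pi * 150_000 * t)    # placeholder for a measured emission

spectrum = np.abs(np.fft.rfft(trace))
freqs = np.fft.rfftfreq(len(trace), 1 / fs)
print(f"Dominant frequency: {freqs[spectrum.argmax()]:.0f} Hz")   # 150000 Hz
```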


In the diagram above, to travel the distance from A to B in earth's universe there are two options: to travel at subluminal velocity or at superluminal velocity. If the distance from A to B is 100 million light years, it becomes impractical to try to get there at subluminal velocity. However, if physicists in future develop the technology to enable travel at 100 million times the speed of light, this jump in speed is not a straight-line journey from A to B through earth's universe. By travelling at superluminal velocity the vessel will have to breach 100 million universes that occupy the same "Space" to get from A to B in a shorter time.
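
The arithmetic of this example is straightforward; under the layering conjecture the journey takes about a year while crossing one universe per unit multiple of light speed:

```python
# Travel-time arithmetic for the A-to-B example above.
distance_ly = 100e6       # 100 million light years
speed_multiple = 100e6    # 100 million times the speed of light

travel_time_years = distance_ly / speed_multiple
print(f"Travel time: {travel_time_years:.1f} year(s)")   # 1.0 year

# Layering conjecture: one universe is breached per unit multiple of
# light speed, so this journey crosses 100 million universes.
universes_breached = int(speed_multiple)
print(f"Universes breached: {universes_breached:,}")
```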

It may get even more complicated. Some universes in earth's local multiverse are out of reach. If the furthest universe from earth-universe, marked as C on the multiverse diagram, were 48 billion light years away from earth, there would be no way earth's civilization could reach it unless it developed the technology to travel at 48 billion times the speed of light, in the same way that earth could not reach the nearest universes, marked as D and E, which are only a 1x light speed jump away, if its civilization did not develop the technology to travel at least at 1x the speed of light.

Nevertheless, it cannot be ruled out that the prevalence of black holes at the center of galaxies and universes may allude to the possibility of the core of a multiverse consisting of a black hole. If this is the case then they form a nexus or "inter-connector" where shortcuts to countless other universes, and to locations within universes, will be found. It may be possible from this location to access all other locations in a multiverse and universe without being prevented from doing so by having to cross countless barriers. If multiverses, universes and galaxies are commonly formed around black holes, these may offer easier routes around distances and speeds that may otherwise be impossible to achieve.

The layering or calibration of universes in this way may offer physicists, cosmologists and astronomers the option to consider that black holes that end in a singularity can in fact act as bridges between universes that could otherwise not be breached without the ability to travel at superluminal velocity. A Collision Drive propulsion system is likely to offer scientists in these fields the first opportunity to approach and study diverse features of Space first hand, in person. If black holes do offer links like this they will be one-way streets unless their architecture consists of a double-sided funnel. Light-speed barriers between universes effectively isolate universes from one another; therefore a black hole ending in a singularity makes perfect sense, because exiting it leads to an alternate adjacent universe that is completely cut off from other universes.

In this presentation internationally acclaimed physicist Dr Neil deGrasse Tyson 
explains why the concept of a Multiverse needs to be considered a very real construct.




The Multiverse array may account for currently unexplained Dark Matter and Dark Energy. Since these parts of Space are being detected through the gradient of the array from within an earth-universe singularity, only a small section of the array is detectable. This means that the Dark Energy and Dark Matter detected in physics may represent a minuscule part of the Earth-Multiverse. This is a possible explanation of why Dark Energy does not become diluted as the universe expands: there is a lot more of it than is detectable. Even if Dark Energy is explained by Modified Gravity this does not affect the Multiverse hypothesis. This is especially true if the hypothesis for gravity is based on the "fields do not exist" method of analysis. If gravity is purely based on the mechanics of atomic and subatomic particles, as the "fields do not exist" approach implies, then what Modified Newtonian Dynamics finds by offering a modification of Newtonian equations (e.g. F=ma) is a measurement of the effect without necessarily dismissing its cause. In other words MOND can still be regarded as a measurement of gravity using Newtonian laws for which the cause is still Dark Energy. The two approaches are not necessarily mutually exclusive. For instance, suppose Serena Williams smacks a tennis ball at you on a court and then leaves, and you catch the ball. Someone hears the ball being whacked and walks onto the court to find you holding it. The person knows the ball was whacked because they heard it, and they know you caught it because they found you holding it. They may be able to figure out and fill in all the dynamics about how it was whacked and how you caught it (MOND), but this does not mean that using all this physics they can figure out exactly who whacked the ball or why they did (Dark Energy). Just because you may have a theory for how something works does not by default mean you have an explanation for why it exists; this requires further and deeper research. MOND does not rule out or demystify Dark Energy. Gaining the ability to test superluminal acceleration using tier 1 gravity [e.g. using a Collision Drive] is likely to unravel the mystery behind Dark Energy and Dark Matter.
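
For context on the MOND side of this comparison, a minimal sketch using one commonly quoted interpolating function, mu(x) = x/(1+x); this illustrates the point that MOND rescales the measured dynamics without naming their cause:

```python
# Newtonian vs MOND acceleration, using one commonly quoted interpolating
# function mu(x) = x / (1 + x). Solving a * mu(a/A0) = a_N for a gives a
# quadratic with the positive root below. A0 is the usual MOND scale.
import math

A0 = 1.2e-10   # MOND acceleration scale, m/s^2

def newtonian_acceleration(force: float, mass: float) -> float:
    return force / mass   # F = ma rearranged

def mond_acceleration(a_newton: float) -> float:
    """Solve a^2 / (a + A0) = a_N, the mu(x) = x/(1+x) form of MOND."""
    return (a_newton + math.sqrt(a_newton**2 + 4 * a_newton * A0)) / 2

# Deep-MOND regime (a_N << A0): a approaches sqrt(a_N * A0).
print(mond_acceleration(1e-12))   # ~1.1e-11 m/s^2, above the Newtonian 1e-12
```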


It will take a mind-boggling amount of energy to flex Space in order to achieve the technology to warp, even though the mathematics shows that it is possible. This is 3rd Generation Gravitational force. Humanity will inevitably be able to create energy technologies that can do this easily in future, but right now bending Space does not seem an accessible method. The other problem is that physicists do not know how to create negative mass, although this has more to do with weaknesses in theory than with actually being unable to achieve it. For instance, mass created by Trons at T4 is omnidirectional. This means the gravity they create will act in whichever direction the dimple is pointed when generated. So if positive mass pushes the vessel toward an object, then to gain negative energy, gravity or mass that will push or repel it from the same object, the dimple, T4, simply has to face in the opposite direction. Therefore, there is no mystery, difficulty or impasse when it comes to creating negative energy, mass and gravity. It is something achievable that tier 1 gravity can easily demonstrate today.




Space is rigid because each universe and the multiverse occupy the same "Space" at a hypothetical compression rate of 1/2.96476×10^18. The hypothetical number of universes earth shares with its local multiverse is staggering, in fact beyond staggering, if we assume universes are separated on a per-second basis; even then the estimate may be understated. To delve deeper, we need to look at an individual second (out of the 2.96476×10^18 seconds), how it relates to the speed of light, and how this in turn creates the fabric referred to as the "Geometry of Space-Time". The baseline speed of light used in physics is c = 3×10^8 m/s. The standard measurement is the distance light travels in one second, calibrated in metres: in every second there are 3×10^8 metres.
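
The 2.96476×10^18 figure can be sanity-checked with a few lines of arithmetic. The text does not derive it, but it is numerically consistent with the roughly 94-billion-light-year comoving diameter usually quoted for the observable universe, expressed as a light-travel time in seconds; treat that reading as an editorial assumption, not a claim made here.

SECONDS_PER_YEAR = 365.25 * 24 * 3600   # Julian year, s

target = 2.96476e18                     # seconds, the figure used above
years = target / SECONDS_PER_YEAR
print(f"{target:.5e} s of light travel = {years / 1e9:.1f} billion years")
# -> ~93.9 billion years, close to the ~94-billion-light-year comoving
#    diameter commonly quoted for the observable universe.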




The diagram above shows how physicists conventionally understand the relationship between the speed of light and the Geometry of Space-Time based on Einstein's approach (SeTe). This view does not include the Multiverse. However, if the analysis is shifted to StTt the same diagram no longer makes sense, because StTt does not recognize "distance". Where SeTe must create the entire universe as human beings see and experience it, in StTt the entire universe only requires the wavelength and frequency ABC to be created.



If this is true then the earth-universe occupied by human beings only uses up 1/(3×10^8)th of a second to be created or exist; the rest [of 0–1 seconds] is just curvature or "Time" shown in the diagrams above, more accurately Motion in SeTe. This is the time it takes light to cover 1 metre. Technically the universe only needs "1 metre" of SeTe (the universe as it is observed) to create the entire universe. It can do this at a compression rate of 1/(3×10^8) simply by re-purposing the hardware used to create a "physical universe" from "distance" (which would be a waste of resources) to instead create a Multiverse. It is still able to project "distance", and achieves this by applying curvature to the segment ABC, which enlarges, inflates or projects it into earth-universe SeTe. This observation allows us to begin to understand how Einstein's Geometry of Space-Time is created and how its fabric consists of a material Multiverse. It also shows that other universes can have the same laws of physics and the same light-speed caveat as those observed in earth-universe.
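
The 1/(3×10^8)th-of-a-second figure is just the time light needs to cover one metre, which a one-line calculation confirms:

C = 3e8                     # speed of light as used in the text, m/s

t_one_metre = 1 / C         # ~3.33e-9 s, i.e. ~3.3 nanoseconds
print(f"Light covers 1 metre in {t_one_metre:.3e} s")
# On the text's hypothesis this sliver of each second is all that is
# "spent" on the observable universe; the rest of the second is curvature.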






The diagram above shows how each segment of the frequency is re-purposed to create a separate universe ABC by being cocooned or contained in a singularity using light-speed barriers. Each of these segments is then subjected to curvature and exists as a separate universe (frequency) independent from all other universes (frequencies in the CMB). This is done by suspending Time within each singularity (Time = 0) and replacing it with something that mimics or emulates Time, namely Motion or Spin, commonly referred to as "curvature". As long as the speed of light within each singularity remains constant and conforms to the light-speed limit, the wavelength λ can be of any magnitude on the spectrum and will still be contained within its singularity.
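
The closing claim, that λ can take any magnitude while the light-speed limit holds inside each singularity, is simply the relation c = λf; a minimal check across the spectrum:

C = 3e8   # m/s, held constant inside every singularity per the text

for wavelength_m in (1e-12, 5e-7, 1.0, 1e6):   # gamma-ray to radio scales
    frequency_hz = C / wavelength_m
    print(f"lambda = {wavelength_m:.0e} m -> f = {frequency_hz:.2e} Hz, "
          f"lambda*f = {wavelength_m * frequency_hz:.1e} m/s")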



There was once a time when scientists of a bygone era believed that
a journey that persisted towards the horizon would eventually fall off the
edge of the earth into an abyss. This kind of primitive or limited thought persists today
among scientists who do not believe in a Multiverse or that other universes can
begin where the singularity of one universe ends.



Multiverse Hypothesis: It was once believed that as a traveler continued toward the horizon they would cross the barrier and fall off the edge of the earth. Today the misconception is that if a traveler crosses the barrier they will go back in Time and change history. By understanding the difference between SeTe and StTt, it should today be understood that crossing the barrier simply leads into an adjacent universe.

Space-Time Curvature is similar to the Doppler Effect for sound and light. As long as sub-light velocity is maintained, matter of a specific frequency will remain within a singularity, represented by the strolling silhouette C-D. Experiencing a Doppler Effect is consequently first-hand evidence of being confined in a singularity, in a similar way that an echo is evidence of the deflection of sound by a wall, i.e. a barrier. It is evidence of being trapped in or confined to a bubble, in that whichever direction an observer moves, the objects being heard or observed circle back; the singularity exhibits the Doppler Effect as a loop experienced from within as the perspective C-D-C-D. Though the silhouette appears to shrink in size as it walks away, this is just the Doppler Effect: it remains the same size and is trapped in the singularity, and as a result it cannot cross the event horizon or light-speed barrier. Therefore, if an object continues in one direction it will eventually arrive at the same place. When a spaceship begins to travel at superluminal velocity the inference is that it escapes the Doppler Effect and therefore also escapes Space-Time curvature A-B. The consequence is that it does not find itself where it started, but instead crosses the light-speed barrier and exits at X. To the passengers on the spacecraft the universe they are leaving will appear to shrink out of existence C-D while the universe they enter will appear and grow larger F-E. The passengers themselves will have a normal experience, i.e. they remain the same size A-B as they cross the horizon at X. For those trapped in a singularity moving at sub-light velocity the horizon can never be reached and is technically a closed loop: continue in one direction and you wind up where you started, C-D-C, due to the barrier. For the vessel traveling at superluminal velocity A-B the inference is that the horizon yields and literally opens, like a gate, revealing the next or adjacent universe as it is crossed. Doing this successively, by repeatedly crossing singularities, creates the "tunneling effect" experienced when traveling at superluminal velocity mentioned earlier. A Collision Drive offers the first feasible propulsion technology powerful enough to find out what happens when moving through A-B past the singularity X, and to gain answers to many of these questions. The Doppler Effect implies that superluminal acceleration, rather than light speed itself, is likely to be key to exploiting the elasticity of Space-Time.
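
For reference, the standard relativistic Doppler relation the analogy leans on is reproduced below. It is the textbook formula, valid only below light speed, and it diverges as v approaches c, which is at least consistent with the picture of a light-speed barrier painted above; nothing in the sketch is specific to the Multiverse hypothesis.

import math

C = 3e8  # m/s

def doppler_ratio(v, receding=True):
    # Observed/source frequency for motion along the line of sight.
    beta = v / C
    if abs(beta) >= 1:
        raise ValueError("formula is only defined below light speed")
    if receding:
        return math.sqrt((1 - beta) / (1 + beta))
    return math.sqrt((1 + beta) / (1 - beta))

for v in (0.1 * C, 0.5 * C, 0.9 * C, 0.99 * C):
    print(f"v = {v / C:.2f}c -> observed/source frequency = "
          f"{doppler_ratio(v):.3f} (receding)")
# The ratio falls toward zero as v -> c: a receding source red-shifts away
# without ever crossing the barrier, like the shrinking silhouette C-D.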


This implies that a Multiverse Array hosts approximately 2.96476×10^18 Multiverses and each Multiverse hosts 3×10^8 Universes (i.e. in 1 second of Einstein's Space-Time there can be as many as 3×10^8 universes, while the earth-universe comprises 2.96476×10^18 seconds, or Multiverses). There is also the possibility that the number of Multiverses may exceed the number of universes per Multiverse. If this is the case, then it will have to be considered that each Multiverse is not spherical, but spiral, winding or helical in structure. This causes the Multiverse structure to overlap beneath each spiral, with what is being observed simply being the top view or surface of a Multiverse. Since the Multiverses and individual universes occupy the same Space as a result of compression (where distance does not exist), the result is that both the Multiverse Array and an individual universe cocooned in a singularity have the same fundamental shape. This helical, corkscrew shape, which seems to fit some models of how the universe exists, is shown in the diagram below.


A Multiverse Array: If there are more Multiverses than there are universes
per Multiverse then it follows that Multiverses don't just form a top view
but in fact spiral to enable them to take on this greater Multiverse mass.
Strangely enough this hypothesis seems closer to the structure
of the universe proposed by physicists, complete with an
originating tail end attributed to a big bang. The Multiverse
is identical in structure to a single universe due to
compression being used to multiply the availability
of Space. Each line in the diagram is a light-speed barrier
that holds a universe within a singularity, while each screw-shaped disk is
a Multiverse. These screws must be fine and great in number,
such that the number of Multiverses (screw turns or levels) exceeds
the number of universes in each Multiverse. Being so vast
and having so many disks, the Multiverse Array
forms a bulky and gargantuan structure.
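
Taking the two figures above at face value, the implied census is easy to compute; the totals below follow directly from the text's own hypothetical numbers and carry no independent weight.

multiverses = 2.96476e18          # one Multiverse per second of SeTe
universes_per_multiverse = 3e8    # one universe per metre light travels

total = multiverses * universes_per_multiverse
print(f"Implied universes in the Array: {total:.2e}")
# ~8.89e26, several orders of magnitude more than common estimates of
# the number of stars in the observable universe (~1e22 to 1e24).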


The matrix of potentially independently existent universes is on a scale beyond anything currently imagined. What makes this incredible scale and complexity possible is that it is fundamentally created from code (StTt). Locating where earth and earth-universe exist in this construct goes beyond the analogy of finding a needle in a haystack. Universes may have parallel laws of physics, in the sense that they are governed by more or less uniform sets of scientific rules across singularities. However, each universe will evolve independently and uniquely despite operating on the same laws of physics, due to its protection and isolation in a singularity for eons. This can be observed on earth, where regions isolated and closed off from the rest of the world for long periods develop ecosystems with unique plants, insects and animal species despite facing "parallel" laws of physics (i.e. the same laws of physics as the rest of the world). This view is of course hypothetical, but it may provide a refreshed avenue through which the concept of Multiverses can be revisited, and it may emphasize the need to broaden views on just how expansive the universe actually is, as well as earth's ever-shrinking self-ascribed significance in this greater scheme of things.



The M1–Mx markers on the CMB 3D map demonstrate that the CMB is indeed
responsible for gravity by way of its connection to Universes and
Multiverses. These signals are the superstructure of the M-Array that
creates the very fabric (conceptual lines) which becomes the
Geometry of Space-Time (SeTe). What was a hypothesis
now has a framework and a means for verification. The
assumption that there are inter-connectors at the centre of each
universe provides a means by which spacecraft may gain access
to far-off regions in the M-Array that would otherwise be
difficult to reach.

When viewed in SeTe the big bang will appear to be a one-off event (snapshot)
in history that created earth-universe. However, from StTt, where
Time is viewed differently (across singularities or histories), the snapshot may
in fact be a single frame from a continuous emission or process. As matter is released from
a continuously energetic big bang it is sequestered into universes or singularities [snapshots],
which in essence would mean that the big bang is a continuous energetic state contributing
to the continual expansion of the multiverse to this day. The illustration of the multiverse
above shows it in rotation. Material spewing from the source of a continuous big bang would be
required to rotate from the source and spiral outward, creating the funnel or screw shape
and M-disks being observed, which is why this theory makes sense. This means that
for earth-universe the big bang started and ended within the singularity, as is
evident in the CMB; however, for universes being created, even as
this is being read, the big bang is still in progress. If this
is the case it cannot be detected, because it takes place
outside the earth-universe singularity.

Earth Universe: a single "greenish" dot in this CMB map, 
near the centre of perspective.


CMB Dots: Each dot is an individual universe.
When observed in SeTe the CMB is a cosmological history of
earth-universe. However, when understood from StTt, where there is no
conventional time (Te) and no conventional distance (Se), what is being
observed is very different. These resources (X, Y, Z and EF) are instead used
to create new, valuable Space, by creating new universes hidden in the same
Space and spread across disks referred to as Multiverses. When you roll back
earth-universe's sky, sun, moon and stars like a scroll and view the universe
through the CMB in StTt, other worlds and universes are revealed, rather
than just its history. These layers created by signals in the CMB
also form the Geometry of Space-Time.

Green-belt: If the green sections of the CMB consist of universes created
during the same time period and conditions as earth-universe, these universes may contain earth-like planets that are remarkably close to earth [except in adjacent universes]. "Green-belt universes" may hold more habitable planets in very close proximity to earth than there are potentially habitable planets in the entirety of earth-universe.




The CMB is a useful starting map resource for the multiverse because earth-universe emerged from it, and it offers an outside or "overhead" time-capsule view of earth-universe which can be used to map where other universes are located. The present-day Geometry of Space-Time is the intangible fabric or superstructure of each universe, so it also contains map information on the location of each universe in the Multiverse, but this information is a view from the inside looking outward. A structure mapped this way would be created from extrapolation and may be inaccurate. This is the same problem cosmologists face when trying to determine what the Milky Way galaxy looks like: it has to be extrapolated and rendered, and may therefore contain inaccuracies until the galaxy can actually be viewed from the outside. The CMB, on the other hand, offers this outside view and may be useful in this regard. Since it is unlikely that universes change location, i.e. their frequency and wavelength, where they were at the time of the creation of the CMB is very likely where they are, in terms of frequency, in the Space-Time Geometry of the present day. In the way an accurate 100-year-old map of the earth and its land mass can still provide accurate details about locations on earth, background noise or interference from the CMB that is billions of years old can still provide an accurate map of where universes are located today, even though they cannot be seen or interacted with and may therefore be thought not to exist. Without these map sets, travelling at superluminal velocity will be high risk.
   

Viewing the CMB in SeTe is in essence viewing a time capsule or record of how the universe formed. However, as the diagrams above show, the CMB in StTt (M1–Mx and X, Y, Z) is likely also to be a map of the Multiverses and Universes that can be used for navigation. This can be deduced because the 13.8 billion years described by cosmologists as the age of earth-universe consists of Einstein's Time, which is Motion-Time or Spin (Te); it does not consist of actual or true Time (Tt). Therefore, what cosmologists view as the history of the universe in the CMB does not actually exist, e.g. as a past that can be accessed through time travel. The universe has instead transferred this "time-travel" hardware or physical reality (E–F) to the construction of a Multiverse, which is referred to as True Space (St). St, like Einstein's Space (Se), has 3 dimensions from which the universes are constructed (hypothetically 1 Multiverse every light-second and 3×10^8 universes per Multiverse [disk], giving each individual universe within a singularity the same properties as earth-universe observed in SeTe). The premise on which so many universes co-exist in the same Space (St) is that the "density" of so many signals or "dots" is the framework that creates what is perceived by physicists as the Geometry of Space-Time. Having so many dots, signals or universes in the same Space (St) is why the Geometry of Space-Time (SeTe) is so inflexible when studied from within a singularity, which is what physicists, astronomers and cosmologists find when they study Space. It only appears empty because it is observed from within earth-universe's singularity, when it is in fact swarming with signals (other singularities or dots) that cannot be seen but whose presence is detectable through gravitational lensing, or the gravitational "pressure" exerted by the Geometry of Space-Time which they create. This Geometry is at present thought to be empty Space; it appears empty only because the other signals are outside earth-universe's range, signal or singularity. As great in number as the universes may be, each can be as vast as earth-universe. This is a more prudent and efficient use of resources, because it creates more useful Space through which more universes can be generated.

To begin to understand the universe, the forces within it and how it works, physicists must begin to see that Einstein's SeTe is based on observations made within a singularity, from earth-universe, and that SeTe is simply a tiny part or fragment of StTt. Einstein's 4th dimension in SeTe, which physicists refer to as Time, is not true Time but a Motion dimension, that is, a dimension that allows 3-directional movement. Motion expressed as Time in singularities has limits when applied to extremes of the very small and the very large, which is generally why there are discrepancies between General Relativity and Quantum Mechanics that appear to make them incompatible. This incompatibility can be resolved by understanding that General Relativity uses Motion as a dimension in substitute for true Time (it mimics Time), which consequently also affects how distance is understood at very small and very large scales. There is likewise a difference between Einstein's Space (Se), or 3 directions/dimensions, and true Space (St), which also has 3 dimensions or properties but lies outside the scale of Se. Being able to make this distinction is important for understanding why scale [from the very large to the very small] affects outcomes in physics. The same applies to objects moving very fast, at luminal or superluminal velocity, and objects moving very slowly, at subluminal velocity, for example below Mach 20.

The Multiverse Array (M-Array) consists of all the universes, present in the CMB as isolated signals or singularities referred to as CMB "dots". These signals, discovered by Arno Penzias and Robert Wilson in 1964, were mistaken for interference or "noise". However, as demonstrated in the diagram above, the noise or chaos of these signals, when viewed in StTt, can be organised into the superstructure that creates and provides a map for the distribution of matter (stars, planets etc.) in the Geometry of Space-Time. Signal density can be used to refer to the number of signals in a given Space (St). When scale is sufficiently adjusted, individual signals or dots can be observed as individual universes or singularities in the CMB. Tuning therefore involves changes in scale that allow navigation into and out of singularities or universes. For instance, approaching the speed of light becomes a form of tuning due to extreme changes in scale; the same applies to moving between observations in General Relativity and Quantum Mechanics, where the change in scale is literally a form of tuning. Without tuning, the two approaches appear incongruent or irreconcilable. At superluminal velocities Space remains 3-dimensional or 3-directional, but distance does not exist, because the rate of travel becomes so high that locations are in the same place; X, Y, Z as directions can only be navigated as frequencies.


To travel to the earth-universe signal or dot in the CMB
the spacecraft has to tune out of the Multiverse scale into
a specific universe scale corresponding with the frequency
ABC as shown in the diagram. Tuning can be compared to
searching for a new channel on a television.

While tuning (St), the vessel is technically physically moving or traveling from one geographic location in Space (Se) to another. Unless how to tune is understood, superluminal navigation is likely to be impossible. This means that, in terms of scale, the CMB cartographic structure is 5th dimensional. It is only by increasing scale from the 4th dimension to the 5th dimension that observations outside earth-universe, showing the multiverse (other universes), become possible. When the CMB is viewed from the 4th dimension it is simply a history of earth-universe; when the same CMB is observed from the 5th dimension it is a map showing the cartographic structure of the Multiverse. This is one of the fortunate conditions in which Einstein's 4 dimensions and the 5th dimension of the Multiverse share the same reference data, so it is possible to pivot from one view to the other (from 4th to 5th dimensions) and re-interpret what is being observed. As much as it is referred to as the 5th dimension, like SeTe, which consists of 4 dimensions (Time and 3-directional Space), the 5th dimension or StTt will, in terms of navigation, also consist of 4 dimensions [Time (Tt) and 3-directional Space (St)] separated by significant differences in scale that affect physical and therefore mathematical outcomes. For instance, travelling across the CMB from earth-universe to another universe makes no sense in Einstein's Space-Time (SeTe), because the alternative universes are beyond infinity, outside the scale (signal or singularity) of Einstein's Space-Time, which consequently reinterprets this activity as "Time Travel", a mistake physicists still make to this day. Time travel negates the existence of other universes or a Multiverse, creating a flawed interpretation that persists in this science. This can be corrected by a physicist understanding that "going into a universe but at a different time" should be replaced with "going from earth-universe into another universe", without relocating to a different time (time travelling). It should also be noted that the term "scale" is itself something of a misnomer here, because in the change of view from the 4th to the 5th dimension "distance" is not a recognized term. Locations are in the same place; therefore the concept of "measurement" required for scale is somewhat redundant and requires reinterpretation. For instance, when you sit in your living room and change channels on the television, technically the locations are all in one place and each one comes to you rather than you travelling to them; similarly, from this position all the universes occupy the same Space. From your vantage point, to say you are travelling from one geographic location to another does not make sense when you [the spacecraft] are standing still. The issue with dimensions is a perception-based problem that nevertheless has to be understood when it comes to this kind of navigation.

When observed from a singularity using StTt, the CMB should be able to provide an accurate and current 3-dimensional geographic map of the Multiverse, showing where each universe and the matter associated with it is located in the M-Array; this data also happens to correspond to the history of earth-universe. It is very unlikely that the entire 3-dimensional cartographic structure of the M-Array will be present in the CMB. What is available to be viewed will depend on the age of the singularity or universe from which the CMB is being observed. For example, the M-Array could be 820 billion years old, whereas earth-universe is mounted on a Multiverse disk or spiral (Mx) that is only 13.8 billion years old; in that case the CMB may only reveal the map structure of its own universe and of Multiverses that are either younger or adjacent. In the same way that a Multiverse disk can only hold so many universes before it begins to collapse or spiral, it is possible that a Multiverse Array can also only hold so many disks before it begins to spiral. It is likely this spiraling facilitates continuity, allowing the smallest part of the cosmos to reflect or mimic the largest, as is seen with individual universes and the Multiverse Array being almost identical in shape. A spacecraft traveling to earth from another universe would first have to locate the precise earth-universe in the X, Y, Z coordinates of the CMB, right down to its individual signal or dot. It would then use the StTt colour ledger to determine, from the "chaos" or jungle of signals, which Multiverse earth-universe is found in. With this information it would have the exact frequency to travel at during superluminal flight to tune into earth's location or "channel". A cosmos where individual universes spiral into Multiverses, which in turn spiral into Multiverse Arrays, which in turn spiral into more complex structures that are themselves part of an Omniverse, may imply that this form of expansion is orderly and continuous, even if it at first appears as a chaotic swarm of signals in the CMB comparable to the noise between channels on a television. This emphasizes why scale is critical to understanding how the laws of physics function between singularities, since technically what separates one universe from another is continuous event horizons into and out of singularities, differentiated and governed by scale, as is seen in the maximum and minimum limits of the wavelength and frequency of a signal shown earlier. Cosmologists may need to prepare themselves for a reality far more vast than could possibly have been imagined before.
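
The navigation procedure just described (locate the dot in X, Y, Z, consult the colour ledger for the host Multiverse, then tune to the matching frequency) can be sketched as a simple lookup. Every name, coordinate and value below is invented purely for illustration; no such catalogue or ledger exists.

# Hypothetical navigation lookup; all data here is illustrative only.
CMB_DOTS = {
    # (x, y, z) of a CMB dot -> (colour, tuning frequency in Hz)
    (0.12, -0.44, 0.91): ("green", 1.42e9),   # earth-universe (hypothetical)
    (0.13, -0.44, 0.90): ("green", 1.43e9),   # adjacent green-belt universe
    (0.70,  0.22, -0.10): ("red",  2.80e9),
}

COLOUR_LEDGER = {"green": "M-13.8Gyr-disk", "red": "M-older-disk"}

def plot_course(xyz):
    # Resolve a dot to its host Multiverse disk and tuning frequency.
    colour, freq = CMB_DOTS[xyz]
    return COLOUR_LEDGER[colour], freq

disk, freq = plot_course((0.12, -0.44, 0.91))
print(f"Tune superluminal flight to {freq:.2e} Hz (host disk: {disk})")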


When travelling from earth-universe to n6-universe there
is no change in location, scale or size in terms of StTt. However,
in SeTe, from the spacecraft, earth-universe will appear to
grow ever smaller until it is left behind, and n6-universe will appear
to grow ever larger as it is entered.

Both types of relocation are irreconcilable if the terms used to explain them are not adjusted, and can lead to different outcomes, e.g. time travel versus inter-dimensional/inter-universe travel, where one inference is plausible but inaccurate and the other correct. This same problem affects General Relativity and Quantum Mechanics. For instance, Heisenberg's uncertainty principle may only remain true if a singularity cannot be breached to reveal another universe where certainty is restored. Uncertainty no longer exists outside distance, and therefore outside Space and Time (SeTe). In SeTe Heisenberg's uncertainty principle is true; in StTt it is false. This condition makes Heisenberg's uncertainty principle plausible but inaccurate: it can be applied in physics, for example in quantum mechanics, and results that work can be obtained, but critically it means that what is being observed by physicists is being inaccurately interpreted, in the same way the inference in SeTe is that time travel is possible when in fact only records of the past can be accessed. Depending on how observations are made, the results will change; nevertheless, as with the example of time travel, one inference will be plausible but inaccurate while the other is true. If the observation is adjusted, the exact position and speed of an object can be known at all times, because the fundamental state of any object is that of a particle, not a wave. This means Einstein, who interestingly is quoted as stating "God does not play dice with the universe", was correct in the sense that uncertainty only exists within singularities; it is not the fundamental state of matter. Ironically, for him to be correct the observation has to be made outside his understanding of the universe, SeTe, in StTt, showing that interpretation is key to accurate outcomes.


The ability to detect matter across singularities e.g. between
n3 and n6 means it is possible to develop technological devices
capable of communication between universes.


The diagram above explains why Heisenberg's Uncertainty Principle (HUP) is plausible but inaccurate. Imagine you're trying to find the location of a basketball, but the only way you can locate it is by throwing tennis balls at it and observing how they bounce back. If you manage to hit the basketball you would know where it was, but now it's moved. As long as the basketball and the tennis ball are in the same earth-universe n3, inferences made from HUP are accurate. However, if the tennis ball is in earth-universe n3 and the basketball is in the exact same place or location but in n6-universe, then the exact location of both the tennis ball and the basketball will be known at all times. They will occupy the same Space but need not make contact; the tennis ball can discover the location of the basketball without affecting it, thus maintaining certainty. The belief that position and momentum are conjugate variables, such that information about one can only be gained at the cost of information about the other, is untrue, because n3 and n6 demonstrate that the level of uncertainty can be adjusted between 0 and 1, or that wavelengths can be shortened without increasing energy, if the technology for controlling inter-dimensional Space, or the technology to link and detect n3 and n6, exists. If it is understood that signals or singularities in the CMB collectively create the superstructure that is the Geometry of Space-Time, which in turn is regarded as the "source of gravity", then this demonstrates that gravity cannot exist without accepting the existence of a Multiverse that is currently unaccounted for in physics due to mistakes in its understanding of time travel. It also suggests that Complementarity, attributed to Niels Bohr, is only accurate within the limitations of SeTe, that is, within a singularity or individual universe. From StTt, or across singularities, Complementarity in physics is not absolute, and with the appropriate understanding and technology can be manipulated to change outcomes. In this condition constants become malleable and laws of physics can be manipulated. This means that the accuracy of the constants that currently govern laws in physics is limited to the singularity in which they are observed to function with reasonable accuracy. The currently known constants, e.g. the Fine Structure constant, Planck's constant, Newton's constant and the Cosmological constant, must be regarded as plausible but inaccurate, because their parameters do not include or recognize that they operate in a Multiverse, to which they are inherently subject, and they are therefore being applied in a limited construct, whether in denial, in ignorance or incompetently.
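
For reference, the quantitative statement being disputed here is Δx·Δp ≥ ħ/2. The sketch below evaluates that bound for an electron confined to an atom-sized region; whether the bound can be evaded across singularities, as the text proposes, is not something this arithmetic can address.

HBAR = 1.054571817e-34       # reduced Planck constant, J*s
M_ELECTRON = 9.1093837e-31   # electron mass, kg

delta_x = 1e-10                        # ~1 angstrom confinement, m
delta_p = HBAR / (2 * delta_x)         # minimum momentum uncertainty
delta_v = delta_p / M_ELECTRON         # corresponding velocity uncertainty

print(f"delta_p >= {delta_p:.2e} kg*m/s")
print(f"delta_v >= {delta_v:.2e} m/s")   # ~5.8e5 m/s for an electron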







A new parallel universe is not created every time a decision is made,
that is, every time the box is opened. These multiple universes
should be considered to already exist as the Multiverse. Rather than
the choice creating multiple universes, the pre-existence of multiple
universes makes it possible to choose or have autonomy. In other
words motion, or Space-Time itself measured using Geometry,
could not exist without a Multiverse since, as noted earlier,
the universes collectively form the fabric of Space-Time. In the same way that
twins and doppelgangers exist, universes that are almost identical may occur,
but they are likely to be rare, the exception rather
than the rule. In quantum mechanics the energy
moving between multiple universes will be observed to appear and
disappear from existence. This movement of energy was earlier
also described as being how suns distribute energy across universes,
which is required to gain a more accurate understanding of the
hidden mechanics behind fusion.


Heisenberg's Uncertainty Principle can further be compared to Schrödinger's cat experiment. When you open Schrödinger's box, reality supposedly splits into outcomes with two probabilities that play out in two universes: in one universe the cat lived and in the other it died. So opening the box is the equivalent of creating or crossing from one universe to another, right? Let's imagine the observer of the experiment and the energy used in observation are one and the same. When the box is opened, the energy that observes the experiment splits into parallel universes. When it arrives in the next universe it suddenly does not know what a Schrödinger is, or what a cat is, or what the box is about; it does not have the same descriptors. This energy becomes, and is therefore observing, a boy in a sunny meadow covered in green grass under a blue sky, where for an hour he is not sure whether to pick a yellow flower or a purple one. Just as he reaches for a flower you (the observer or energy) switch back to the universe where Schrödinger's experiment was and find it took place an hour ago. You're shown where the cat is, up and about, chubby and bright-eyed. When you switch back to the other universe you find the boy picked a yellow flower. There is never any uncertainty. Instead of duplicating outcomes, e.g. the same person with a live and a dead cat, or a new universe every time a choice is made, which would be a waste of energy, the same energy is shared between independent universes that already exist. However, "a new universe every time a choice is made" should be a hint to physicists to be prepared for the number of universes in existence to be of a magnitude that defies what would seem possible, for instance more universes than there are stars in the night sky and in all of known Space. Be prepared. Accepting that there are other universes amongst us, in unprecedented magnitude, oblivious to a mutual existence, going about their business as we do, is likely to be one of the most significant advancements in humanity's fundamental understanding of the science that underpins reality. The same cat could be alive and dead in two separate universes, but as we saw with time machines, the universe does not waste resources continuously creating duplicates in this way; it uses a more efficient process that leads to an independent universe that causally need not have anything to do with other universes. The Multiverse makes choice possible because it creates Space-Time (as we saw earlier in the relationship between the Multiverse and the CMB). Choice is being described merely as the ability to move autonomously as a result of Space-Time. The only aspect that splits following a choice, that is parallel or similar between the two universes, is the energy being shared or exchanged between them. When this energy crosses from one universe to another as a result of any kind of activity determined by choice, it changes in that it is stripped of information, and it emerges pure and ready to become whatever it encounters in the next universe. This process creates two ongoing independent universes linked only by energy in its purest form, a process which, when observed moving from one universe to another, is likely to be described as energy quantum tunneling or magically appearing and disappearing, as described in quantum mechanics.
Let's assume that a multiverse or countless universes are not created one at a time whenever a choice is acted on, but rather that these countless universes already formally exist, and that because they exist they enable the very act of choice itself to function by making motion possible, where every action, rather than creating a new universe, causes the exchange of pure energy between universes. Universes will be parallel in the sense that they share the same energy moving in and out, back and forth, constantly appearing and disappearing as shown in the diagram above, but each will be unique because a choice in one parallel universe does not affect the outcomes in another. This is made possible by the hypothesis that matter exists in different universes but is sustained or created from the same shared energy. This same process is why a time machine invented to take a person back in time will instead move them into another independent universe, and the past will be viewed during time travel as a record or recording of history.



Without this knowledge it is very likely that travel at superluminal velocity would be impossible. A vessel could end up anywhere in the multiverse, unable to locate earth, unable to recognize planets and star formations in uncharted Space, without the means of communicating across singularities, and with the potential to be permanently lost. It would be like tossing a penny somewhere in the Atlantic Ocean: not only may it never be found again, it may never find its way back.




The Full Matrix of Space-Time Geometry


Currently there is no proof of the existence of other universes. This is likely due to each universe being cocooned in a singularity. Scientists assume that when a vessel warps through space it simply travels through earth's universe from point A to B. However, what calibration implies is that even if a spacecraft were built to accelerate at 3x the speed of light using tier 1 gravity, the vessel potentially accesses a geographical region of space 3x the size of earth's universe for which there is no cartographic information, because modern science is not aware of the existence of universes other than that occupied by earth, let alone how to communicate between them; how to communicate across a singularity is an unknown, and possibly a science unto itself. Whether quantum entanglement and quantum tunneling can work across the barriers that create singularities is open to discussion.

At present most Space Agencies assume that travelling at superluminal speeds is as simple as flying from earth to Mars, when in fact it introduces new challenges, since travelling at speeds faster than light, even in Alcubierre's theoretical Warp Drive, cannot take place without breaching other universes. In terms of spatial geography and cartography, breaches represent and open up access to vast uncharted areas of Space. If a vessel had a mishap and dropped out of these high velocities it could emerge in any number of universes as vast as earth's but of which potentially little or nothing is known. It would be like Vasco da Gama in a boat searching for a route to the east, travelling through vast oceans and places about which little or nothing is known, unable to send messages home. Technically, it means that even if scientists were to develop faster-than-light travel, jumping to superluminal speeds without understanding how to navigate at such high velocity carries a very real risk that a spacecraft making the jump in ignorance, having crossed a singularity, could disappear and never be heard from or seen again.


There is no mystery concerning how to create negative energy/mass/gravity.
It is easily created at the subatomic level by manipulating T4, or by emulating how this is done using tier 1 gravity. It may also be useful to note that the propulsive or gravitational force (negative or positive energy, E) a single atom is capable of exerting at T4 is equivalent to E = mc².
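
Two of the figures used here and in the caption that follows are easy to reproduce with standard constants: the rest energy E = mc² of a single atom, and the oft-quoted ~10^40 ratio between the electromagnetic and gravitational forces (the textbook proton-electron comparison gives ~2×10^39). The arithmetic below shows only the magnitudes; the claim that this energy is available as propulsive force at T4 is the text's hypothesis.

C = 2.998e8             # speed of light, m/s
G = 6.674e-11           # gravitational constant
K = 8.988e9             # Coulomb constant, N*m^2/C^2
E_CHARGE = 1.602e-19    # elementary charge, C
M_PROTON = 1.673e-27    # kg
M_ELECTRON = 9.109e-31  # kg

# Rest energy locked in a single hydrogen atom (~one proton).
print(f"E = mc^2 for one hydrogen atom: {M_PROTON * C**2:.2e} J")  # ~1.5e-10 J

# Electrostatic vs gravitational attraction between a proton and an
# electron; the separation cancels, so none needs to be chosen.
ratio = (K * E_CHARGE**2) / (G * M_PROTON * M_ELECTRON)
print(f"Electromagnetic/gravitational force ratio: {ratio:.1e}")   # ~2.3e39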

The general assumption is that gravity is a weak force, being ~10^40 times weaker
than the electromagnetic force that holds atoms together.
As shown by Einstein's mass-energy equivalence equation, this assumption is in fact misplaced.
Atoms collectively need only harness tiny proportions of kinetic energy to implement
what is observed as gravity. Most of this huge potential energy remains dormant and unused.
This means its propulsive force can be harnessed directly, for instance to turn a generator
without the need for heat and steam, as is the case
with modern nuclear reactors, which would seem primitive in comparison.
This is in line with the assumption that all matter is self-propelled. It must
therefore contain within it the dormant propulsive force
to act when it receives instructions to do so through entanglement,
depicted by the red-line vector as the two bodies communicate and navigate
around one another. For instance, a teaspoon of neutron star material is estimated
to weigh from millions to billions of tons. This makes 5th generation gravitational
force the most powerful and most advanced method for controlling gravity.
This is what powers or drives the mobility shown
earlier in the animation reproduced below:




In the above animation what is being illustrated is that current theories concerning gravity are wrong. The earliest of these theories came from Isaac Newton, who believed that gravity was created by bodies exerting a force on one another. Albert Einstein showed Newton was not entirely right by proposing that gravity was created by the bending of Space-Time, and consequently introduced the concepts of fields and symmetry seen in General Relativity and Special Relativity. The animation above tries to demonstrate that, like Newton's, Einstein's concept of Space-Time and fields being responsible for gravity is also quite wrong, or not altogether accurate. Gravity is not created by fields or symmetry, any more than the invention of a ruler can be regarded as what creates space, distance and time. A person may argue they can see fields around magnets or magnetized space; however, as mentioned earlier, these are made by particles moving autonomously into position, as is observed with iron filings around a bar magnet. This autonomous movement, and the mechanics by which it operates through exchanges of information between particles, is the next most important layer beneath quantum mechanics that physicists will need to investigate. Fields are very useful and accurate for measurements, but to assume that what is measured is created by what is used to measure it is flawed. Fields remain useful to this day, yet they are extremely misleading if the context in which they are used is not understood to have limitations, especially by scientists. Fields are the imaginary friend of the 21st-century scientist, whom guests visiting a lab must be careful not to forget to greet and hold a polite conversation with before proceeding inside. Economists have a similar disposition: they spend all year trading arguments and counter-arguments, charts, graphs and statistical data about how to get growth going, sharing theories on how to create more wealth and mitigate financially against unemployment and human strife, which at the end of the day is all quite pointless if they cannot recognize that every economy in the world loses the equivalent of GDP every year to flaws and inefficiencies in the circular flow of income. The consequence is that they cannot control inflation and are always on the back foot when it comes to creating and sustaining a successful economy. Economists have failed to create prosperity that satisfies nations, just as physicists currently cannot demonstrate a device for controlling gravity.

The third explanation for gravity, which is possibly the most accurate to date, is that gravity is created by communication between particles, which then negotiate how to position themselves relative to one another, and do so autonomously. This is illustrated in the animation above. Fields, which are imaginary and an invention, like a ruler, cannot fully explain the objects they measure, which is why the physics community, after centuries of gathering knowledge, still cannot demonstrate any device for creating or controlling gravity. This is because fields do not create gravity; in fact they do not exist anywhere except in the imagination of a person trying to make a measurement of a physical force. They are useful in this sense, but if this limitation is not understood they turn from an aid into an impediment. Even in situations where physicists cite gravitational lensing, or light bending around an object due to gravity, the fields-do-not-exist hypothesis would identify that photons or particles of light are not being moved by a gravitational field but are in and of themselves aligning with other objects through red-line communication, as depicted in the animation above. Even if a scientist were to feel the pull of a gravitational force on his or her body, it would be caused by particles in the body positioning themselves in relation to some other object; the impression would be of being under the influence of a field when in fact the physical mass is responding to communication with other masses and repositioning autonomously. Each particle is fundamentally designed to have mobility and to position itself in this way, thereby making it able to naturally assemble, create or replicate any type of force or matter. These same particles, at the atomic level, through this very same autonomous mobility, generate gravity, as we saw earlier with the Tron weaving through T1, T2 and T3 to generate Tier 1 gravitational force at T4 in atoms; this same Tier 1 gravitational force is replicated, or has been reverse engineered using mechanical engineering, in the Collision Drive. In order to understand how to do this, fields first had to be dismissed as the cause of gravitational force, because otherwise they impede the ability, and the need, to look beyond the field for a cause, mechanism and method for what induces levitation in both magnets and gravity. Fields, being a by-product or a generalization of forces in physics, do not have the requisite refinement and dexterity required to create matter, which requires the coordination and control of each individual particle and sub-particle in the BEC or the atom. They are therefore not responsible for creating matter, and become a blunt instrument when trying to explain it, especially at the level of quantum mechanics, which itself becomes too shallow to fully grasp various phenomena; after all, energy bands and "quanta" in quantum mechanics are basically a reference to fields. If matter in general, and particles in particular, are actively autonomous and coordinate their movement through communication, then the fields observed by physicists do not exist; they are just a visual by-product of this communication and autonomous coordination that creates matter, forces and reality as it is perceived and measured in physics.
The fields-do-not-exist hypothesis or method of analysis is an important tool because it demonstrates that the fundamental mechanics on which the universe functions are not understood by contemporary physicists; be it in aeronautics, fluid dynamics and so on, there are substantive misinterpretations permeating modern physics in its entirety concerning how forces deploy to fulfill its laws. For instance, if matter has delicate and intricate mobility and is autonomous, then even an observed phenomenon such as the buoyancy of objects on water is in fact not caused by their displacing water equal to their mass, as is deduced from Archimedes' Principle. The water molecules and the atoms of bodies that rest on water communicate and align themselves in a manner which, when observed by physicists, is thought to be created by Archimedes' Principle, when in fact Archimedes' Principle is as flawed as Einstein's reading of fields, or Space-Time Geometry, as what creates gravity. If a physicist assumes that a boat floats on water and that the water, like a field, is responsible for this buoyancy, then his or her reasoning meets a brick wall, in that the belief that the cause of flotation is the medium or field deters any further investigation. Without further inquiry it is never established that negotiation between particles, and not the field, is what is responsible for buoyancy in any medium, be it water or air, magnetism or gravity. This is like going to a movie theatre and leaving believing that the story, stunts, action and characters were all real, simply based on what was empirically observed to have transpired on the screen, and then basing all the laws of physics on the movie. There may be accuracy and empiricism, but it is all based on a misinterpretation of underlying facts, because a movie is the result of many intricate parts and movements, from directors, writers, actors, producers and so on, functioning behind the scenes. Even a concept as simple as physical contact between matter is very likely negotiated at the particle scale, which means that objects do not pass through one another, and are regarded as "solid", only because particles communicating at a scale smaller than that currently viewed in quantum mechanics negotiate positions and use autonomous mobility to create what is experienced as contact between matter and masses, which in turn makes them appear solid or "real". Particles and sub-particles may be tiny; however, the energy they have readily at their disposal to negotiate movement and positions is governed by the mass-energy equivalence equation. This means the cup you casually place on a table is not supported by the table but by the activity of its own particles at the atomic and subatomic scale. Technically it is suspended in the air, and only appears at rest on the table because this is the position it has negotiated with the particles of which the table is comprised, which in turn are doing the same. The same applies to the table being at rest on the earth. Particles and sub-particles appear to behave like, or can be compared to, complex naturally evolved nano-particles or nano-technology, but on a minuscule subatomic scale currently outside the reach and direct measurement of quantum mechanics. The next stage beyond quantum mechanics may very well be referred to as sub-particle nano-mechanics, with the understanding that particles are created from sub-particles, which are in turn complex entities.
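
For contrast, the textbook formulation disputed in the paragraph above is compact: the buoyant force equals the weight of displaced fluid, F = ρ·V·g. The sketch states only that standard rule; the counter-claim that particle-level negotiation underlies it is not something the formula can confirm or refute.

RHO_WATER = 1000.0   # density of water, kg/m^3
G_ACCEL = 9.81       # gravitational acceleration, m/s^2

def buoyant_force(displaced_volume_m3):
    # Archimedes' principle as taught: weight of the displaced fluid.
    return RHO_WATER * displaced_volume_m3 * G_ACCEL

# A hull displacing 2 m^3 of water supports roughly two tonnes.
print(f"Buoyant force: {buoyant_force(2.0):.0f} N")   # ~19620 N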
Without this red-line communication and the decision by matter or particles to respond to other particles, matter could not interact and might even become invisible or indiscernible to other particles. This process of selective interaction in particles is very likely tied to the frequencies at which particles choose to operate, which in turn explains how particles and matter in different universes can occupy the same Space and yet have no physical contact, thereby setting the stage for the existence of a Multiverse that is also currently unaccounted for and unsubstantiated in modern physics. As is observed with mass-energy equivalence, these sub-particles form particles, which in turn create atoms, meaning they wield or manage tremendous amounts of power; and since each sub-particle in the universe is connected, they also access a tremendous amount of information. Both of these create what become the laws of physics, chemistry and the sciences in general. Earlier an attempt was made to show what the spiral structure of these sub-particles may look like in terms of nano-mechanics. The blue curve in the animation above is the modern-day physicist's imaginary field; it does not exist and is simply an example of how empirical evidence, when misinterpreted, can still mislead the sciences. It is quite sad and unfortunate that physicists are caught in this trap of misinterpretation, as failing to escape from it means they will never grasp or be able to explain how important phenomena in the sciences actually function, and will subsequently fail to produce important new technologies or understand how to control them, as is currently the case with gravity. The fields-do-not-exist hypothesis affects how fundamental forces such as electricity actually work, and implies that even how electric current works is being misunderstood and misapplied. It means that, like fields, electricity is a by-product of continuous entanglement or red-line communication, even between unpaired particles, since even the absence of measurable entanglement needs to be negotiated and communicated between particles. This was deduced earlier with the floating magnets described as levitating as a result of processes in their atomic structure, not fields, which in essence implies that information and power can be instantaneously supplied, transmitted and broadcast across tiny or vast distances without the need for a conduit such as a wire. However, this process cannot be fully understood while misinterpretations of what is observed persist in the sciences. These flaws in physics and economics may seem benign, but the reality is that some of these shortcomings, without intervention, may have consequences. For instance, if humanity must achieve a required rate of economic growth to sustain its civilization, but this evolution is failing, the result can be internal collapse: an advanced civilization subsisting on 1%-3% of its potential for economic growth instead of accessing the full 100%, the majority of which remains idle and goes to waste simply as a result of insufficient knowledge, inefficiencies in the circular flow of income, and failure to understand the means by which to control economic outcomes and create the resources a civilization requires to support itself and its populations.
In the same vein, if knowledge about how to control gravity is required to allow the mass movement of people from earth before man-made or natural cataclysmic events build up, but is stalled due to myopia or low intellectual ability in the sciences, then this flaw becomes quite significant, as it means an advanced civilization such as that developed by humanity today is likely to face certain demise from conditions it is intellectually powerless to address. In both cases, economics and physics, the inability of a civilization's learning curve to exceed critical primary challenges creates latent and inadvertent extinction-level impasses that over time will inevitably reach critical mass, and therefore need to be taken seriously before it is too late. In humanity's case one of the key causes of extinction is likely to be a prolonged history of racism and discrimination. Without reparation and correction to balance the curve, the consequence is a deformed distribution of resources in which the entire intellectual ability of a civilization is underutilized, leading to gaping holes in theory and practice in the sciences, specifically as observed in economics and physics. The ingredients required for the survival of a civilization are scattered across diverse tribes and races. In humanity's case its civilization is not accessing, and has not accessed, its full intellectual ability, because discrimination has biased participation in education, access to which is decisive in whether a civilization will survive or face extinction by its own incompetence. This is why equality should be taken seriously, and why an effort should be made to ensure the highest quality of education is available to everyone, regardless of what they look like or which part of the world they come from or reside in, as a minimum standard. Discrimination practiced by a small group threatens the entire collective, often in ways society cannot see or is too self-involved to understand the long-term implications of; it is therefore the one vice a civilization should not accommodate, for purposes of its own long-term self-preservation. The collective growth of a civilization is governed by its collective intellectual development, which is in turn nourished by equality and by the diversity of participation that maximizes its capacity for problem solving, creating a test, balance or process of natural selection that establishes longevity for a civilization or effectively shortens its lifespan. For this reason every civilization, including humanity's, is always moving toward critical mass on the path to survival or mass extinction, where both outcomes are determined by its own hand.


The graph above shows that the extinction of human
civilization, based on the inability of its learning curve or intellectual development
to rise above or match critical challenges, is very real, but not taken seriously. As it is said,
humanity will be eating and drinking until the last day.
Ignorance is bliss.

The required learning curve shows a civilization maximizing the full intellectual
ability of all its people across the world without discrimination, and therefore able to counter
adversity and escape the extinction curve, whereas nature has three basic options for culling a civilization
that fails this basic test: through inadequacy in the sciences, inadequacy in socio-economic
development, or through natural disasters, which can include pandemics, floods, earthquakes, asteroids and other cataclysms. In any of these cases the only means of survival is the required
learning curve.

From the "fields do not exist" position the AWD theory is trying to bend, compress and decompress something that in reality does not exist. The forces that create gravity, mass and how they interact, that are being referred to, are created by interactions taking place at the subatomic level of matter, as has been pointed out at T4 in the proposed anatomy of an atom. If this is accurate, then changes to Space itself to create the warp bubble are wholly unnecessary as the same results can be gained simply by influencing the mechanics of the vessel alone at the atomic level, i.e. "compressing and decompressing" the atomic structure of the vessel and not the Space around it or simply manipulating the direction and rate of amplification of force in the atom at T4, because technically the Space around it does not exist. This is like light passing through a very thick pane of bullet-proof glass, nothing has to be done to the glass, the vessel and the volume it occupies simply has to behave like light by ditching its mass to get passed the glass. Therefore, how Space creates gravity though useful in explaining concepts, fundamentally is not being correctly interpreted. However, bending the Geometry of Space to travel faster than light, for now, may not be necessary. To overcome this problem we simply need the ability to penetrate the Geometry of Space, rather than try to bend it or travel directly against it. This can be done by building a propulsion system that can accelerate a vessel, for the sake of example, momentarily at 3x the speed of light without actually travelling at the speed of light. This can be achieved today using tier 1 gravity. Tier 1 gravity also  solves the problem of how to create negative mass. It achieves this by reversing polarity [see Collision Drive]. Once faster than light acceleration is achieved it can be used to unlock all 4 remaining generations or forms of gravity manipulation. What is the hypothetical result of stationary matter interacting with matter or vessel travelling at 1x-1,000x the speed of light? This knowledge is outside the framework of what is known in physics. Matter being accelerated at such high velocity initially may not need a warp bubble due to the fact that it is travelling faster than the natural rate of cause and effect or causality in physics. As a result it begins to slip-stream and travel without resistance from the Geometry of Space. In addition to this, should inferences concerning the elasticity of the Geometry Space prove true, then it implies that as a vessel accelerates faster than the speed of light whilst being momentarily at a sub-luminal velocity Space itself becomes more elastic, then it also alludes to the possibility that during this elastic phase very little power is required to compress and decompress space occupied by the vessel to generate the warp bubble required for the AWD. In this scenario there is no speed limit to how fast a vessel can travel and the technology with which to achieve this is available in this age. All this can now be tested experimentally using tier 1 gravity or a Collision Drive [Patent Pending].

Physicists have not addressed how the elasticity of the Geometry of Space changes at exceptionally high rates of acceleration; instead the tendency is to focus on the resistance of Space, regard it as absolute, and ask how to counter this hurdle. Technologies that need to compress, decompress and bend Space-Time can come later, when advancements in energy and science make them more practical. What can be done right now, today, is to use the same approach taken in air travel, which is to make vessels more aerodynamic, consequently reducing wind resistance. Similarly, exceptionally high speed (supernormal velocity) is anticipated to have the same effect on the resistance created by the Geometry of Space. Supernormal velocity can be used to make vessels more "gravi-dynamic", to coin a phrase, which can be compared to matter transforming toward behaving more like energy, allowing vessels to slipstream and travel through Space at high velocity without resistance (the way light moves through an inflexible medium such as glass). More on this is explained below:







Piloting through Geodesics will be different from aeronautics. Pilots will need to understand how Geodesics perform and react to an object traveling through the Geometry of Space, in much the same way the performance of a craft in a wind tunnel is assessed; the diagram above, in this sense, is like a Geodesic tunnel. Once again, viewing Space-Time as having Geometry (Se-Te) is useful for depicting or explaining gravity, but it should be applied with caution in the knowledge that, with analysis from St-Tt, "there are no fields" outside an object or vessel; all forces act within their mass. If this distinction is not understood it becomes an obstacle to understanding gravity. There are speeds and rates of acceleration for which Geodesic elasticity declines (moves from being elastic to inelastic, X-Y) and speeds and rates of acceleration for which Geodesic elasticity increases (moves from being more inelastic to more elastic, Y-Z). For vehicles or objects traveling at normal velocity in the super-slow range of Mach 20 or so toward 0x, the biggest problem is air resistance. For objects approaching the speed of light Y, the biggest problem is the resistance of Space-Time (Se-Te), which is expected to become infinite at Y. However, it is important to note that there is a difference between speed or velocity and acceleration. A pilot can navigate a vessel in such a way that there is control over the g-force the passengers experience. For instance, to perform supernormal acceleration, it's possible a pilot can accelerate from close to 0 at 3 times the speed of light for 1/10th of a second in order to take off or execute a turn. He or she can use a rate of acceleration that exceeds the speed of cause and effect, thereby slipping through the light barrier without ever having to actually reach the speed-of-light limit. Technically, this means that for objects below supernormal rates of acceleration the speed of light can become a fixed barrier at early sub-light speeds, because they cannot escape their own mass, which would agree with Einstein's view. However, objects accelerating at supernormal rates may alter the elasticity of the Geometry of Space, allowing them to maneuver around this barrier by manipulating their own mass or g-force. Se-Te Geodesics are likely to respond relative to the rate of acceleration, not just a constant velocity. This means a pilot can possibly bypass the speed-of-light limit at Y by accelerating from 0x to 0z at a supernormal rate of acceleration long before reaching the speed of light, such that mass cannot keep up with the vehicle, consequently making it impossible for the barrier to prevent it from crossing over. Whether this kind of slip-stream is possible or not will have to be verified through experimentation. In this example, unlike in the first, there is no Geodesic boom, as the pilot skillfully accelerates through Geodesics at a predetermined pace, ditching mass and moving directly into the slip-stream where the craft experiences no resistance, since it is moving outside the normal timing of the physical laws of cause and effect that restrain objects or craft moving at lower speeds and lower rates of acceleration.
Even conservation of energy laws are governed by functional rates of cause and effect; there are natural rates at which objects and chemicals react, move, fall, rub, turn and burn, and without circumventing any of these laws it must be accepted that when an object begins to move at supernormal speeds that exceed the pace and timing of these natural physical laws, the consequences of the increasing lag between cause and effect have to be taken into account by physicists. For a vessel travelling at Mach 4 at 0x, wind resistance, friction and heat are obstacles. For an object travelling at supernormal speed, however, air may make contact with the vessel's surface while the vessel is moving faster than it takes for natural physical laws to convert that contact into pressure; if the contact does not have time to build up into pressure before the vessel has moved on, then there will be no resistance, without resistance there is no friction, and without friction there is no heat. Furthermore, though traveling at the super-slow speeds at 0x, the pilot can still make turns with no g-force simply by momentarily accelerating into the slip-stream while executing the turn, then returning to normal 0x flight speeds, with the objective of increasing maneuverability without gaining excessive speed. Though the Geometry of Space and Geodesics are a Se-Te reconstruction used to make explanations seem simpler, when analysis switches to St-Tt the "fields do not exist" approach applies.
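A minimal sketch of this cause-and-effect lag, assuming a single hypothetical reaction_time for the medium (the value is an illustrative assumption, not a measured constant):

    # Toy model: if the hull passes an air parcel faster than the (hypothetical)
    # time the medium needs to convert contact into pressure, report "submission".
    def interaction(surface_length_m, velocity_ms, reaction_time_s=1e-6):
        contact_time = surface_length_m / velocity_ms  # how long a parcel "sees" the hull
        if contact_time < reaction_time_s:
            return "submission: pressure never builds, the vessel slip-streams"
        if contact_time < 10 * reaction_time_s:
            return "reluctance: the cause-effect bond is weakening"
        return "resistance: normal aerodynamics, friction and heat apply"

    print(interaction(10.0, 343 * 4))   # Mach 4 -> resistance
    print(interaction(10.0, 9e8))       # supernormal velocity -> submission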







The diagram above shows that physical laws are governed by cause and effect and are tightly chained together, such that as a vessel begins to travel at Mach 3 and above, friction and heat are at high levels due to air resistance. The way around this is for engineers to design aerodynamic vessels, apply special surfaces and so on, which improve performance by reducing friction, pressure and heat. This can be described as reducing outright resistance to reluctance. Every force in physics, be it mechanical or chemical, is applied through cause and effect that is calibrated by time. When a vessel begins to accelerate so rapidly that it moves faster than the time frame for cause and effect, the links in the chain start to come apart and reluctance is likely to give way to submission, allowing the vessel to move in peace, unhindered by air resistance. Technically, none of the laws in physics, be it pressure, friction, heat or chemical reactions, will work effectively at this point, because it will be as though cause and effect are held at bay or frozen in time. This allows a vessel accelerating fast enough to slip-stream with no resistance, or with full submission. It's also important to note that the slip-stream for Space-Time (Se-Te) at Y and the slip-stream for air resistance are likely to give way at different velocities and rates of acceleration. Every medium, be it air, water, solids or Se-Te, will have a specific slip-stream threshold for resistance, reluctance and submission that can be discovered through analysis, research and experimentation. The slip-stream for air, for instance, may occur at velocities much earlier than scientists expect. It's also important to note the difference between reluctance and submission. At the reluctance stage the bond between cause and effect is strong, reinforced by time being largely unaffected; the vessel itself is what creates the reluctance. However, as velocity increases, the bond between cause and effect will begin to weaken until a speed is reached where cause is moving toward effect at a pace slower than the vessel's velocity. When this happens it is not the same as aerodynamics. The process will appear to neutralize physical reactions at the molecular and subatomic level, allowing the vessel to move unhindered by resistance from air, and will have to be studied in greater detail, especially through experimentation. It's also important to note, once again, that causality is not the same for all substances. Causality for electricity, for instance, can be as low as 1/100th the speed of light. Signals in the nervous system travel at 70-120 meters per second. Some chemical reactions are very slow while others can be extremely rapid; for instance, scientists have observed hydrogen atoms bind onto and then leave a sheet of graphene, all within ten quadrillionths (10^-14) of a second. Within its own framework causality will often appear instantaneous to individual mediums. Therefore, resistance, reluctance and submission will have to be studied on a case-by-case basis.

Tier 1 gravity, though entry level, may prove important for testing faster-than-light acceleration through experimentation. The rate of acceleration, rather than just velocity, may hold the key to understanding gravity to the same degree electricity is understood. Accessing the speed of light for geographical distances on earth may be impractical, since at this speed a vessel could circumnavigate the earth in a fraction of a second; by using the slip-stream it could do so without causing adverse effects to the vessel or the earth. However, when it comes to astronomical distances the speed of light can be considered exceptionally slow. The technology to generate thrust many times greater than the speed of light is currently possible through tier 1 gravity, and velocities greater than the speed of light may be possible once a vessel is in the slip-stream, where there is no resistance from Se-Te. Whether it can travel at several times the speed of light in the slip-stream would have to be tested through experimentation. Should the slip-stream be possible but prove to impose limits on velocity, the other option is to further enhance the vessel by including a warp bubble. When the vessel accelerates at supernormal velocity, much lower amounts of energy may be required to then generate a warp bubble to further increase its velocity; this would involve a jump to light speeds, then a jump to warp speeds that can be limitless in terms of velocity. The other option is to develop the technology to make jumps from one location to another over distances so large they are impractical even for vessels traveling at light speed. As we saw earlier, this technology that requires teleportation to cover extraordinary distances is very likely to be gained from technologies thought by physicists to allow time travel, based on Einstein's accurate but limited experiential side view of Space-Time (Se-Te). As shown by the diagram, the speed of jets and rockets, even if they were to reach Mach 100, would barely register on this scale. They would be like an ant crawling along the ground to a far-off destination and would remain somewhere around 0x. Even if a vessel were traveling at Mach 20, a vessel built on advanced entry-level (tier 1) or first-generation gravity could circle the earth fast enough to find the Mach 20 vessel in relatively the same place. This is why even 6th-generation jets depending on air (be it jet, rocket, turbojet, ramjet, hyperjet or scramjet), which can only operate at super-slow speeds at 0x, would be no comparison for vessels able to operate the full range from 0x to 0z. This type of technology will continue to be useful for super-slow speeds, shorter distances and for acting as a power plant rather than for propulsion.
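The scale claim is easy to check with plain arithmetic (Python; sea-level Mach 1 is taken as roughly 343 m/s):

    # How jet and rocket speeds register on a 0x (light speed) scale.
    C = 299_792_458.0      # m/s
    MACH = 343.0           # m/s, approximate sea-level speed of sound
    for label, v in (("Mach 20", 20 * MACH), ("Mach 100", 100 * MACH)):
        print(f"{label}: {v / C:.6f} of light speed")
    # Mach 20  -> ~0.000023 c; Mach 100 -> ~0.000114 c.
    # Even Mach 100 sits at roughly one ten-thousandth of 1x, i.e. around 0x.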




If it were easy it would have been achieved long ago.

Consider that LIGO, funded by the National Science Foundation and operated by Caltech and MIT, succeeded in detecting gravitational waves and yet there are still no practical, working gravitation-based devices; this shows that this is not an easy field. Despite Paul Dirac unifying Quantum Mechanics and Special Relativity way back in 1928, and despite advanced current approaches such as String Theory, there are no working devices and no applications to show that harness gravity, which should give you an idea of how peculiarly hard this field is to grasp. This is like a Jedi Master on the Jedi Council who can't build a light-saber, despite years of effort and years of toil. The fact that some of the best universities, companies, institutions and minds in the world have failed to develop the physics and engineering required to design tier 1 gravity is testimony to the degree of difficulty involved in deciphering how this device will work. Though it may appear deceptively easy to understand once accomplished and the designs are seen, the level of technical difficulty is quite high. As mentioned earlier, the most significant hurdle that prevented this technology from being developed before now is the misinterpretations or misdirects found in Einstein's descriptions of Space-Time (Se-Te) and the misleading analysis of magnetic and gravitational fields, which easily led anyone in this field in the wrong direction. Basically, it's difficult to find something if you keep looking in the wrong place.

The mock-up of twin jet engines powering a tier one gravity drive in the diagram on the left shows that the exhaust from the jet engines is not necessary for propulsion. However, placing the jet engines perpendicular to the direction of travel is more stable in inclement weather and makes for a highly maneuverable vessel: by vectoring the twin exhausts in the same or alternate directions the vessel becomes omni-directional while still retaining supersonic velocity. Rather than wasting the exhaust, the engines reduce redundancy by remaining a useful source of thrust applied to enhance mobility. The jet engines can be swapped out for a powerplant consisting of electric motors and batteries to create a zero-emission vessel, as shown in the diagram on the right. In future they can also eventually be swapped out for the fission and fusion energy sources currently in development. Each powerplant design and configuration, be it jet or electrical, will have its inherent advantages and disadvantages depending on what it will be used for. Tier 1 gravity harnessed to a powerplant able to put out 30,000 horsepower and above should be capable of supernormal acceleration and speed for a small vessel as shown. However, without it, the thrust from a jet engine directly applied may only generate 40,000 lbf. The advantages of using the propulsion device become clear.
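For scale, the two figures quoted above convert as follows (Python; note that power and thrust are different physical quantities, so this is a unit check rather than a performance comparison):

    # Unit conversions for the powerplant figures in the paragraph above.
    HP_TO_W = 745.7                             # mechanical horsepower in watts
    LBF_TO_N = 4.448                            # pound-force in newtons
    print(f"{30_000 * HP_TO_W / 1e6:.1f} MW")   # 30,000 hp -> ~22.4 MW of power
    print(f"{40_000 * LBF_TO_N / 1e3:.0f} kN")  # 40,000 lbf -> ~178 kN of direct thrust
    # The thrust a 22.4 MW tier 1 drive would deliver depends on its (unspecified)
    # mechanism, so the comparison here is about scale only.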





VTOL can be standard for all types of vehicles

The explanation is that light bends because it follows geodesics, yet later, in the blue spandex experiment with the students, we identified that an object's mass is not induced or caused by geodesics but comes directly from Space (St), or the programming executed by quantum mechanics. This is based on a line of thought further reinforced by postulating that both gravitational and electromagnetic fields do not exist, in the sense that they do not exert a force. What is observed as a geodesic is in fact the path a host of given masses choose to take, directed by quantum mechanics as entangled masses communicate with one another, much in the same way level 5 self-driving software and technology today is expected to safely guide vehicles on busy highways, except of course these processes become viewed as natural. The vehicles are not being guided by fields; they are communicating with one another, or with what they "see", and making internal changes in their mass that move them into different positions. When this movement is observed, such as the movement of the earth around the sun, it is interpreted as gravity. This is in fact an example of tier 1 or 1st-generation gravity at work. The difference, where quantum mechanics is concerned, is that this self-driving is what becomes the natural laws in physics that govern the behaviour and interaction of matter and mass. For instance, this entails that even the magnetic fields in the Hadron Collider and in fusion energy do not exist, and the elements moving through them are not being guided by magnetic fields but by this level 5 "self-driving" process created by quantum mechanics. Subatomic particles, magnets, planets, moons and asteroids are all generally moved in this way, by "level 5 self-driving quantum mechanics" so to speak. The path they take is determined by quantum mechanics, the result being that the behaviour of matter and its observed path are misconstrued as a magnetic or gravitational field in Einstein's Space-Time (Se-Te). To consistently and unwaveringly try to explain gravity through the Geometry of Space (basically fields, or Einstein's Space-Time) and compare it to magnetism can be useful, but it is ultimately misleading if technically these fields do not exist. It poses a significant problem: however accurate the results may be, basing them on an inaccurate assumption comes with consequences that significantly hinder progress in physics. For instance, it encourages physicists to believe that since Geodesics determine mass, the way to control gravity is by bending Space-Time. This is most likely why important undertakings in science like LIGO attempted, and succeeded in, detecting gravitational waves or fields. It should be noted that matter appearing to react to a field or detectors is not evidence that a field exists. This was alluded to earlier with metal shavings tracing lines around a bar magnet and these "lines of force" being used as evidence of a field. This evidence does not hold if the shavings are moving themselves into a formation on signals or instructions from quantum mechanics. The behaviour of the shavings implies fields exist, when in fact they may not.
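A minimal sketch of this "self-driving" picture, assuming for illustration that Newtonian attraction stands in for the instructions the bodies follow. Note that no field object appears anywhere in the code; each body only reads information about the others and changes its own state:

    # Field-free N-body sketch: each body "sees" the others and steers itself.
    G = 6.674e-11  # gravitational constant, used only as the stand-in instruction set

    def step(bodies, dt):
        for b in bodies:                       # each body computes its own adjustment...
            ax = ay = 0.0
            for other in bodies:               # ...from information about the others
                if other is b:
                    continue
                dx, dy = other["x"] - b["x"], other["y"] - b["y"]
                r3 = (dx * dx + dy * dy) ** 1.5
                ax += G * other["m"] * dx / r3
                ay += G * other["m"] * dy / r3
            b["vx"] += ax * dt
            b["vy"] += ay * dt
        for b in bodies:                       # internal change of state -> new position
            b["x"] += b["vx"] * dt
            b["y"] += b["vy"] * dt

    earth = {"m": 5.97e24, "x": 0.0, "y": 0.0, "vx": 0.0, "vy": 0.0}
    moon = {"m": 7.35e22, "x": 3.84e8, "y": 0.0, "vx": 0.0, "vy": 1022.0}
    for _ in range(10):
        step([earth, moon], 60.0)              # ten one-minute steps; each body steers itself

An observer watching the resulting orbits could describe them with a field, but in the program, as in the paragraph's claim, only the exchange of information exists.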

As mentioned earlier, technically, it means there is really no difference between gravity and magnetism. This would require a re-examination of these processes and devices. Since entanglement ignores distance, there is a capacity for simultaneity for distant events regardless of how far apart they are, and it is therefore able to operate in true Time (Tt), or where Time = 0, where zero represents entanglement, or a processing speed for information applied to matter by true Space (St). Simultaneity for distant events becomes necessary for maintaining accuracy when processing information (the side view), but is not necessary for the experiential universe (top view), which is time-shifted and where Einstein's theories on relativity appear to work accurately. I can understand why it is easy to believe that entanglement, or simultaneity for distant events, means free will does not exist. This is actually incorrect; it is the other way around. Free will can only exist with this simultaneity. When there is no simultaneity, there is no free will. This was explained and alluded to earlier by showing that a lack of simultaneity reduces events to what can be described as a "record" of events, in which case once an event has taken place it cannot be changed; therefore there is no free will in this condition. These records or recordings can be reviewed in real time to form new opinions or views of these events.

Imagine that the vehicles moving in the Coruscant Supercut from 
Star Wars were doing so under Level 5 autonomy and each 
had a beacon that allows each vehicle to know where other
vehicles are in relation to itself in order to enhance navigation. 
A field is not ascribed to their movement;
however, the manner in which planets, moons, asteroids, stars, magnets,
subatomic particles and particles in a Bose-Einstein condensate move can be
described using a similar process driven by
quantum mechanics. Tier 1 or 1st Generation gravitational 
force does not need a geodesic or geometry of space to ride 
on. An untrained eye may say the vehicles are moving along a 
geodesic, floating on a field or riding on a medium the way planes ride air 
or boats ride oceans when in fact there is no medium and no field
because each is being steered autonomously
by making changes within its own mass 
using what can be compared to "Level 5 autonomous driving" which 
is really just a way of referring to the code that manages laws of physics.
This is how tier 1 gravity works and can be emulated using
mechanical engineering.


In terms of quantum mechanics there is little difference between
how ships in the Coruscant Supercut are moving and how the planets
in the above clip are moving. They do not need Geometric Space, Geodesics,
or a magnetic or gravitational field to do this; however, these can be considered
a useful method for visualization. The drawback with physicists using
Geometry of Space to try to understand gravity is that it encourages the belief 
that mass is created by this geometry when in fact this is not true or accurate.
The technical design for the first device to use tier 1 gravity will make this 
evident.
 

However, according to Einstein any object travelling at the speed of light will gain infinite mass. This should make light speed impossible, even for light itself. And yet there is clear evidence light travels at the speed of light. According to Einstein, as an object approaches the speed of light it experiences time dilation; in other words, time begins to slow down, and at the speed of light time dilates to zero. Yet the object is also said to gain infinite mass, making the speed-of-light limit appear to be simply another blatant and possibly uninformed contradiction. If time dilates to zero at speeds approaching or equal to light, then mass in relation to time should cease to exist, causing the properties of an object reaching such high velocity to begin to convert from matter into energy, or from a particle into a wave, in order to accommodate supernormal speeds. This creates the impression that an object's velocity in relation to its mass does not easily follow conventional physics at supernormal velocities, just as we see with the infinitely small in quantum mechanics. How can these contradictory theories coexist, and the ideas remain meaningful, without becoming just another misdirect that compromises the ability of physicists to realistically draw accurate assumptions in theoretical physics? It is simply not good enough to inappropriately explain them away using "relativity"; clever, believable and provable but ill-explained physics leads to faulty misconceptions and has a negative knock-on effect that hinders future developments in the sciences. The inability to manipulate gravity despite the strides made in physics is a red flag that points to fundamental mistakes in theory being swept under the carpet in order to worship dated and inconsistent ideas, to the extent that it is no longer science being followed. The answer to why light can travel at light speed, according to Einstein, is that it has no mass. So… light exists in a state in which it has mass and is mass-less?
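For reference, these are the standard textbook relations the paragraph is questioning, computed numerically (a sketch that restates the received formulas rather than endorsing them):

    # Standard special-relativity quantities, to make the claimed tension concrete.
    import math

    C = 299_792_458.0

    def gamma(v):
        return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

    for frac in (0.5, 0.9, 0.999):
        g = gamma(frac * C)
        print(f"v = {frac}c: onboard time runs {g:.1f}x slower; relativistic mass x{g:.1f}")
    # As v -> c, gamma -> infinity: onboard time per external second tends to zero
    # while relativistic mass diverges for any nonzero rest mass. A photon avoids
    # the divergence, in the textbook account, only because its rest mass is zero.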

This video illustrates the unacceptable contradictions, conundrums and patchwork in theoretical physics
that arise out of a faulty model from Einstein that needs to be corrected for the facts
to be interpreted correctly. 

This is downright contradictory. However, instead of correcting Einstein's model, the solution in physics is often to explain away improper theory and back this with "relativity", jargon and the relevant maths. I am more inclined to believe that light does not travel from point A to point B. Rather, it moves like a Mexican wave, transferring light. When this transfer is observed it appears as though light is moving. When a stone is dropped into a pond, where light is concerned, the ripple effect takes place such that the water remains in place and the ripple radiates like a Mexican wave; the water itself does not move from the centre to the periphery. Photons, like the lake itself, are ubiquitous and remain in place. The photons contain their own energy. When there is no light the photons, like dormant spinning tops, remain unmoving and opaque, creating what appears to us as darkness. Despite being opaque and creating darkness they still contain energy, but it is in a dormant state. When light is triggered from a source, the photons remain in place and, like falling dominoes, begin to stimulate, or appear to transfer, an energy trigger to one another, causing them to spin and emit light, becoming transparent. Since the photons stay in place, the speed of light we observe is in fact the rate at which they trigger or transfer the emission of light to one another. The cloaked or opaque photon that was in a dark state of potential energy is triggered and uncloaks, releasing light proportional to the source of the emission or stimulation. If this is true, then the less the photons interact with any other sources of energy, the greater their potential or dark energy becomes in the Distance as they lie dormant. With this model all the contradictions concerning light are easier to resolve. It would explain how light can travel at light speed and have mass and yet not have that mass approach infinity. The answer is that light is not traveling at all; photons remain in place and instead change state, thereby transferring light. It explains why light is attracted or bent by gravity. It also explains why, despite appearing to travel so fast, the push effect of light is very weak.
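A minimal simulation of this stimulation picture (Python; the photon count and trigger delay are illustrative assumptions). The "wave front" advances while every photon's position stays fixed:

    # A row of dormant photons, each triggering its neighbour after a fixed delay.
    n, delay = 10, 1.0                              # photon count; trigger delay (arbitrary units)
    trigger_time = [i * delay for i in range(n)]    # when each stationary photon lights up

    def front_index(t):
        lit = [i for i in range(n) if trigger_time[i] <= t]
        return max(lit) if lit else None            # furthest photon lit by time t

    for t in (0, 3, 7):
        print(f"t={t}: wave front at photon {front_index(t)}")
    # The front advances one photon per time unit (the transfer rate an instrument
    # would record as "the speed of light"), yet no photon has moved anywhere.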


This Mexican wave demonstrates how light propagates.
The people in the stadium are like photons. When the wave begins,
the light is transferred through the crowd. When we observe this "moving" wave
it is recorded by instruments as the speed of light. It travels very quickly, but because it moves through transfer the impact of light is very minimal: the mass remains very low and consists only of the final photons in the wave that make contact with a surface. However, Einstein's model states that the photons themselves move,
that is, everyone in the stadium stands up and starts running around the stadium. This mistake creates the problems and conundrums we see when physicists try to understand and explain light. If this happened, the mass of the running people would turn light into incredible thrust, or a weapon like a laser with deadly impact. According to Einstein this does not happen because the people running like this suddenly and miraculously have no mass, because they are no longer matter but energy. It is quite unacceptable that the physics fraternity accepts Einstein's misinterpretation of how light works, as it compromises meaningful future development of new theories. These corrections need to be made.

A test is required to determine if light waves are inherently contiguous or non-contiguous...

We have thus far explained why and how light continually has mass and can travel at light speed, settling this particular problem using a stimulation theory. But this is not where it ends. Light has other properties that are confusing. It was initially thought that light is a photon, or stream of fundamental particles, a theory from Isaac Newton. However, a scientist named Thomas Young later discovered through experimentation that light diffracts after interference, creating patterns on a screen indicating light was in fact a wave. This problem became referred to in physics as wave-particle duality. This was yet another conundrum: is light a particle or a wave? Einstein is thought to have settled this argument by declaring that light is both a particle and a wave. The explanation he gave for this conclusion was based purely on observing this duality as fact through experimentation, without further interrogation of the underlying circumstances of these results. We know this because this is where the query ended. Yet we all know that what is observed and recorded is not always necessarily what is taking place. At this stage we can make certain deductions, beginning with Isaac Newton. Is light a stream of particles? No, it is not a conventional stream of particles. Why? If it were a conventional stream of particles this would once again be the same as if all the people in the stadium stood up and began to run around the stadium, creating a flowing stream of particles. We can deduce that this is most likely inaccurate because the intensity of a natural beam of light is far too weak to consist of a stream of flowing particles. This moves us to Thomas Young. Is light a wave? Through experimentation it was discovered that light scatters. Can we therefore conclude that it is a wave? No, we cannot. Why? Because to do this we would need to know if it is a contiguous wave or a non-contiguous wave. This difference is absolutely important to physics. Both wave forms will create a scattering effect when tested. How then is it possible to distinguish between the two types of waves?

A contiguous wave is a waveform that can be explained descriptively as follows. Two kids take out a garden hose. They stretch the hose out and each holds an end. One child proceeds to wave the hose vigorously up and down. The other child's hand receives the wave and is sharply jolted. This is a contiguous wave: the waveform passes through the hose continuously, with little loss of energy, creating a proportional impact at the other end. A contiguous wave will pack a serious punch, since the area it strikes will experience the intensity of the full length of the beam from source to target rather than just the photons that make final contact, much like the movement of electrons, or electrical current, through a conductive medium like wire. On the other hand, an example of a non-contiguous wave has already been alluded to: the Mexican wave. It is most likely that these two types of waves are almost indistinguishable on observation and must be tested to tell them apart. For instance, when distorted through Thomas Young's experiment they will both yield a distortion pattern. So how can we possibly tell them apart?
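The intensity difference between the two waveforms can be put in toy numbers (Python; the photon count and unit energy are arbitrary illustrations, not measurements):

    # Energy delivered to a target under the two pictures described above.
    n_photons, unit_energy = 1_000_000, 1.0

    contiguous_impact = n_photons * unit_energy   # hose-style: the whole beam couples to the target
    non_contiguous_impact = 1 * unit_energy       # Mexican-wave: only the final photon makes contact

    print(contiguous_impact / non_contiguous_impact)   # -> 1,000,000x intensity difference
    # This is the asymmetry the proposed diffraction test would look for: beam-scale
    # intensity (contiguous) versus final-contact intensity (non-contiguous).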


Helium neon (He-Ne) beam diffraction pattern through a single narrow slit.
There are probably many ways to do this, but the only means I can think of on the fly is possibly through the intensity and amplitude of the scattered light and the spread of the distortion pattern before and after distortion. A contiguous wave will tend to pack more energy and will therefore retain more of the intensity of the stream despite being diffused. It will therefore resist diffusion, creating a pattern with a much shorter spread. However, if light is non-contiguous (is like the Mexican wave) it will be less resistant to being scattered and may therefore produce a more widely and more easily dispersed pattern. These are of course merely assumptions and could be wrong. If it were possible to measure the intensity of the pattern, it may also be discovered that the energy of the light beam after distortion is weaker, as a result of photons using a non-contiguous transfer method having a greater propensity to drop away, causing energy losses. Non-contiguous waves are light on their feet and can move very quickly, but they do not pack much intensity, since their mass is limited to the last particles to make contact with a surface. The final intensity will tend to be proportional only to the surface area of the photons that actually make contact with a final surface. Since what is proposed here is that all light traverses rather than travels (is non-contiguous), it may not be possible, or may be quite difficult, to re-create or mimic light that is traveling (moving contiguously) from a source of emission (point A to B), except maybe by employing concentrated light, or a laser, to mimic the intensity of a contiguous wave. A contiguous wave will strike a target with the concerted mass of photons that comprise the full beam. The fact that natural light is so weak that we must use laser technology to mimic a contiguous wave is already a strong indication that light is most likely inherently non-contiguous. This is such a simple experiment that it could probably be carried out in a high school lab. It is also plausible that non-contiguous waves contain their own energy and will tend to continue to propagate at a fixed rate, or speed, for a while if not in a vacuum, and endlessly if in one, even after the source of emission has been turned off, whereas contiguous waves remain tethered to the source of emission and, under the same conditions, will die out more quickly when the source stops providing energy. If the sun's rays were contiguous I doubt it would be possible to step out into the open without being incinerated or electrocuted. However, if whoever conducts this simple experiment discovered, and was able with a decent test to prove comprehensively and with replicability, that light waves are in fact naturally non-contiguous, this discovery would demonstrate that Einstein's space is not a vacuum. This would potentially overturn the entire foundation of physics and be one of the most significant corrections or discoveries of this age.


Physicists have yet to determine whether light waves
are contiguous or non-contiguous. This creates many incongruities
that fail to reconcile important areas of physics. What experiments show is that
particles tend to behave as waves, most likely because when they are streamed through double
slits, for instance, they flow into other pre-existing particles, re-creating a wave pattern before they
strike the surface of the test area. This would indicate space is not a vacuum. Furthermore, there is
no discrepancy when they are observed close to the slits they emerge from, as at that point they will
still exhibit the properties of a particle.

1. The other explanation for the scatter pattern from a dual slit may simply be that when one slit
is used the scatter pattern can be described as 1-dimensional. However, when two slits are used
the interference pattern is 2-dimensional, yet it continues to be viewed and read as a 1-dimensional result,
which makes it appear irreconcilable with the result from the single slit experiment.
The dual slit is present in the scatter pattern, except this pattern or information must be viewed,
read or assessed as a 2-dimensional object. When the ideal or correct manner for doing this is
applied, the scattered pattern should appear as two single slits, showing that
the method of detection or observation being applied is the correct one.

2. Another explanation may be that when light strikes the slits it is breaking up and
consequently rarefying. The slits are therefore acting like an "atomizer". The light emerging
from the slits is therefore more refined, much like that observed in Bose-Einstein Condensate.
If physicists are unaware that the BEC consists of light particulate, not waves, then it is also unlikely
that they are aware that light can revert to a fine particle cloud-state in this way, where it
behaves like waves. In this state the particles are finer, such that they scatter more widely
and appear to become, or behave like, waves when in fact they consistently
remain particles, only more refined. This would be unusual, because the fine
particulate cloud observed from slits occurs at room temperature,
when normally it is only observed, or believed to occur, at very
low temperatures in the BEC. Nevertheless, this possible mistake in
interpreting what happens to light as it emerges from the slits
makes perfect sense if the BEC being described as "waves",
when it is in fact clouds of fine particulate, is also a substantive
misinterpretation of what is being observed. For what is being
observed to be understood, it must be considered that an electron
is just a construct of the finer particles
observed in the double slit experiment and in the BEC. This
also explains why, even when one electron is fired at a time,
the pattern remains consistent. The pattern that is rendered is the code
that describes an electron. It implies that even in this
refined state the particulate exhibits quantum properties associated
with different types of light.

Thus far it is known that the BEC can be created through extremes
such as extremely low temperature and extremely high pressure;
however, extreme rarefaction (caused by forcing electrons through slits)
seems the most logical means of gaining the effect at room temperature,
if indeed the BEC consists of fine particulate rather than waves. If
the diagram below is understood, what the extremes are doing is merely
forcing a change in observation and not a change in the electron itself.
If this is the case, then rarefaction seems the most practical method
for achieving the same result.





If the slits are designed to only allow one electron through at a time, extreme rarefaction will still
take place. Several factors will affect the intensity of rarefaction; for example, the smaller the slits in proportion to the electron, the more extreme the rarefaction
will be. Smaller particles acting in a coordinated manner appear as diffracted waves.

The diagram above attempts to explain the single and double slit experiment. When the electrons pass through the slits they are forced by extreme rarefaction to switch from the experiential view (SeTe) to the side view (StTt), where what is observed as a new, more complex wave interference pattern is in fact the electron breaking up into smaller parts, or into cloud particulate. The fine particulate rarefying creates the interference wave pattern, not the other way around. However, technically there is no matter (electron) and no observer; all of this takes place within StTt, which is why in one view, which is like a hologram, the electron appears as a point when in fact it always remains a particle cloud. The cloud is the code, pattern or information (described as uncertainty) as opposed to what the pattern or information creates (the point, electron or atom) seen in both the double slit experiment and the BEC experiment. This process may illustrate how and where virtual reality and physical reality are integrated. There is a need to de-clutter the science behind particle physics. For instance,

1. There may be a need to reduce the entire standard model that defines atomic structures and accept that there is, firstly, only a Tron, playing out electrons, neutrons, positrons, gravitons, up quarks, down quarks, Bosons and so on: each and every element and its diverse characteristics, found and yet to be found, in the standard model. By having a refresh rate it may even be what creates antimatter.

2. Matter in general, as human beings interact with and see it, as much as it may appear inorganic or organic, is "smart" in that it consists of Trons. The Tron is "smart" in the sense that it becomes and forms any kind of force or matter it receives instructions to become from the BEC through entanglement, as we saw with the red line of communication or entanglement in the animation of the two bodies passing each other.

3. The Tron "does not exist" in the sense that it is a virtual particle, as is illustrated by the diagram above. It does not exist except as a hologram or projection (SeTe) of the mechanics and computations in the BEC, which is the only part of the universe that is substantive (StTt); nothing else apart from this exists.

4. Matter and energy in the shape of a Tron remains what it is, but it is fundamentally malleable: it remains what it is, and does what it does, on instruction from the realm beneath quantum mechanics that is yet to be uncovered and fully understood by physicists. All the matter around human beings appears "dumb", when it is in fact "smart" in the technical sense that it is constantly in a receptive state in which it will remain as it is, or change and become anything it is instructed or commanded to become from the BEC, using the red line vector or "entanglement".

5. Matter in the form of Trons appears to mimic some of the properties of fine particles in the BEC (StTt) and is therefore more complex than it appears at face value, in the sense that it is formed from a myriad of layers of separate universes, which collectively form the very fabric of Space-Time, as we saw earlier with how the Geometry of Space-Time consists of a Multiverse Array. As we see with Heisenberg's Uncertainty Principle (HUP), motion, distance, space and time, basically the science or mechanics of existence itself as it is experienced, are built from the layered universes that collectively create Space-Time.

Each fine particle in the cloud is linked. This link (B&C) tells each particle what to do or how to behave in real time, presumably using entanglement. Each particle moves itself into position and implements instructions (A) from the link, creating the particle cloud and its underlying pattern, which in turn pre-defines what the electron does, how it behaves and how it should look when it is observed. Since both the particle and its observation are processed in the same place, there is a discrepancy between electrons and atoms when they are viewed: one state is code (fuzziness and uncertainty), the other is the execution of code, i.e. the point or "physical" matter and its observation. The fine particulate appears to move as one, observed in the propagation of waves in the interference pattern, and consequently behaves as though it is in a magnetic force field when in fact there is no force field; there is an information link, presumably using entanglement, that instructs and coordinates each of the particles. The reason why it is assumed that each fine particle must operate independently within the cloud is the complexity of the code, patterns or information upon which the complexity of electrons, atoms and matter in general is created. It is unlikely that fields can have the dexterity with which to control what each individual fine particle does; the inference is therefore that the coordinated behaviour of fine particles creates the appearance of a field effect, rather than fields controlling the fine particles, which they may not be able to do with the necessary dexterity and complexity. The 2D wave generation, receptor and pattern shown in the diagram are simply a recreation of what is happening using an apparatus such as the sound wave generator discussed earlier. This further emphasizes that the double slit experiment is a Bose-Einstein Condensate (BEC) experiment taking place at room temperature, even though experimenters may not be aware of this.
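A sketch of coordination-without-a-field, in the spirit of the paragraph above: each particle computes its own position from a shared instruction (the link), and the ensemble traces what an observer might call "lines of force". The pattern function is an arbitrary stand-in for the quantum-mechanical "code":

    # Particles forming an ordered pattern from shared information, with no field object.
    import math

    def instruction(k, n):
        # each particle derives its own target position from the shared "code"
        angle = 2 * math.pi * k / n
        radius = 1.0 + 0.3 * math.sin(3 * angle)   # arbitrary stand-in pattern
        return (radius * math.cos(angle), radius * math.sin(angle))

    particles = [instruction(k, 60) for k in range(60)]
    # Plotting `particles` would show ordered "lines"; an observer might infer a
    # field, but in this model only the instruction link exists.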

The BEC, the resultant patterns, codes or the Cloud, is simply what comes into view when matter and the observer (experiential reality, or the hologram) are disrupted by extreme temperature, extreme pressure or extreme rarefaction. The BEC can be described as an interface where reality is processed; in fact nothing exists outside the Cloud. The inference of this is that the observer (consciousness itself) takes place and is processed within the Cloud. It may be important to note that consciousness itself is a construct of the Cloud, especially as this may offer a pathway for how virtual reality technologies that are indistinguishable from reality could be designed to integrate with human thought and consciousness.

Interestingly, anyone dedicated and sufficiently advanced
in meditation will observe all visual and sensory information withdraw
and be replaced by a surrounding of clouds very much like
those seen in the BEC before ascending to a higher state
of consciousness. This internal change in "perception" seems
identical to the change in view from electron to the [smeared out] clouds
shown in the BEC experiment.

What Next?

If this analysis is accurate, what comes after quantum mechanics? Quantum mechanics leans on energy bands for its theory, and energy bands are just another fancy name for fields. This means that quantum mechanics faces limitations similar to those of Einstein's approach to understanding the universe, which is also based on fields or Space-Time Geometry (SeTe). There is a need for physicists to see the limitation of fields and refocus on the particle (StTt) and the red vector, or red line communication between particles, often described as entanglement. There is also a need for physicists to understand the interaction of particles in the Multiverse Array, due to the fact that they play a critical role in the structure of matter and energy beyond quantum mechanics. Beyond quantum mechanics an attempt can be made to theorize what matter actually consists of.


The Anatomy of a Particle beyond
quantum mechanics
[sub-particle nano mechanics]

Matter is made up of particles, and particles consist of sub-particles woven finely through many layers in different universes. Matter cannot exist without a Multiverse from which it is constructed. The electron does not exist, in the sense that it is simply a construct of the BEC fine particulate. However, it is designed to have all the properties of real matter, except that it is experiential and confined to a single universe. The nexus at the centre of the particle is where "reality" or perception takes place within a universe, while the code or pattern that manifests it is formed from the surrounding sub-particles functioning in, and woven through, separate universes, or a Multiverse Array. Energy moves freely between universes through energy receptacles that are like pathways between universes. Therefore the particle, which is itself built from sub-particles, cannot exist without a Multiverse. For example, when the electron is observed, all that is seen is the nexus or singularity at the centre of the sub-particle activity at the periphery, or "the cloud"; this ties in with Heisenberg's Uncertainty Principle (HUP), where to look away from the singularity or nexus naturally causes it to collapse into a view of the cloud or sub-particles at the periphery. The hypothetical anatomy of a particle strangely resembles the Multiverse view of a sun mentioned earlier, almost as though the sun itself were just the nucleus of a large electron orbited by planets or sub-particles circling it in multiple universes, the same way an electron in turn orbits the nucleus of an atom. The tiniest parts of the universe would then be observed in how the whole functions at different scales, the outcome of which is a series, spiral formation or pattern that is simply repeated at different scales. This implies that dark matter and dark energy are simply aspects of fundamental matter operating in other universes, within sub-atomic structures, at the sub-particle level. It becomes dark because a universe cannot see the parts of itself functioning outside its own reality or singularity; these would basically be beyond what are referred to as light-speed barriers that keep universes separate. As much as scientists can use quantum mechanics (QM) to dissect matter at the subatomic level, this becomes inadequate because QM is not refined enough to see matter at the sub-particle level, which requires the ability to see across universes, or across light-speed barriers. However, if there is an impasse where physicists cannot accept the Multiverse hypothesis, especially its role in the construction of matter at the subatomic and sub-particle level, then the impasse in physics is a mental one, in the same way that "fields" are the imaginary friend that is an obstacle to fully understanding gravity, how it works and how it may be controlled. The next significant leap in the sciences is to develop the technology with which to penetrate or breach light-speed barriers and gain access to the Multiverse Array and the sub-atomic, sub-particle structures that create it, which are on a scale smaller than that currently observed in quantum mechanics. These sub-particle structures control the movement of energy and information between universes and offer the next most strategic area for advancement in physics.



The hypothetical anatomy of a 
particle strangely resembles the hypothetical
anatomy of a sun, almost as though the sun were
the giant nucleus of an atom. 


The Crisis in Cosmology seems to come about as a result of not accounting for a Multiverse Array and its potential relationship to dark energy and dark matter, and of not being able to put all the pieces of this puzzle together. When it comes to the Hubble Constant there may be some simple problems with theory that need to be addressed. For instance, if Cosmologists are right in determining from red-shift that the outer reaches of the universe are accelerating faster and faster, then technically the Cosmologists cannot then state that the universe is expanding. This is an oxymoron, to put it politely. It's like saying an object speeding away from you is visually getting bigger and bigger. Acceleration is caused by the contraction of Space-Time "Geometry". If this is the case, then how can models of the universe accurately describe it? If the universe is getting bigger and accelerating, it probably means that the rate of acceleration of matter is much greater than the rate of contraction of Space-Time Geometry, to the extent that it is generating a cosmic form of escape velocity, and this difference is possibly not being accounted for. In other words, the rate at which matter in the universe is expanding will cause it to overshoot or escape whatever object is contracting Space to cause the acceleration, or red-shift, in the first place. If these parameters are not viewed correctly, then it makes sense why it is difficult to reconcile models of the universe (Lambda-CDM) with the universe itself. It means that the contraction of Space-Time is being added to, rather than subtracted from, the expansion of the universe, because acceleration is thought to mean Space-Time Geometry is expanding when in fact it is doing the opposite.
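The bookkeeping point at the end of that paragraph can be stated in toy numbers (Python; the rates are arbitrary illustrative values, not cosmological data):

    # If acceleration signals *contracting* Space-Time Geometry, its rate should be
    # subtracted from matter's expansion rather than added to it.
    matter_expansion = 70.0         # stand-in expansion rate for matter (arbitrary units)
    geometry_contraction = 10.0     # stand-in contraction rate inferred from red-shift

    net_as_proposed = matter_expansion - geometry_contraction   # this article's reading: 60.0
    net_as_modelled = matter_expansion + geometry_contraction   # the claimed current reading: 80.0
    print(net_as_proposed, net_as_modelled)
    # The two readings differ by twice the contraction term, which is the kind of
    # systematic offset the paragraph suggests is unsettling the Lambda-CDM fits.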




In the 1st diagram the imaginary Space-Time Geometry surrounding matter is negligible. The matter in the universe is more reactive to the interaction of its own local masses. In diagram II a body outside the universe, C, is shown to affect Space-Time Geometry by contracting it. As a result the universe, A&B, begins to accelerate towards C. Cosmologists need to be clear about what they are describing. In diagram II Space-Time Geometry is contracting (shrinking), which is what is inducing acceleration. But if this is the case, then the matter in the universe should be coming together, not expanding uniformly, unless the expansion is linear, that is, toward C, but while A&B (matter in the universe) is clumping or drawing closer together. For matter in diagram II to be accelerating and uniformly expanding, that is, with A&B also moving further apart at 3, the rate at which matter is moving apart may need to be so great that despite being accelerated towards C, at 3 it is spreading away from C and may consequently escape contact with it. If by saying "the universe is expanding" Cosmologists mean that Space-Time itself is expanding, as shown in diagram III, then this is flawed, because if Space-Time is expanding then this is like going uphill, and matter in the furthest parts of the universe will meet resistance to its expansion, the consequence of which is deceleration. It's difficult to visualize matter in the universe moving faster and faster toward a force or body that induces deceleration. If by saying "the universe is expanding" Cosmologists mean that it is moving away from the original force that pushed it out, that is, the big bang, then technically it should be expanding toward C while matter is shrinking, in that A&B are moving closer together as they accelerate outwards toward C. If that is the case, Cosmologists are not explaining the nature of this expansion succinctly.



The light astronomers see arriving on earth from billions of light years away, such that even the star that exploded to create this light no longer exists, may not actually travel this distance; it traverses the Distance. Like the water in the lake, the light astronomers observe as it reaches earth will most likely consist of a "sea" or body of photons already here that have simply been stimulated to communicate the light from the exploding star. If this is correct, then it must also be accepted that the entire distance of a billion light years is like an ocean filled with photons by which this light becomes transferred. Basically, this would mean that the entire universe is filled with these primary photons, which would in fact be what physicists refer to as the "vacuum of space", except that it is not a vacuum at all. Though they are referred to as "photons" here, they may not be photons at all. Remember, they put out whatever energy is put in. The light we see is therefore just an input-output process being propagated by something we do not have the means to detect except through this process. Therefore light may simply be a small attribute of something more complex than is understood today. All the energy from the sun that reaches the earth may arrive using this process of stimulation. This stimulation theory may require us to then accept that even the heat we experience with the light does not travel from the sun to the earth; it is simply triggered here on earth in proportion to the source of the emission. If this is true, then it implies that empty space, or "the vacuum of space", contains more energy than matter. It has to be able to emit as much energy as any source that triggers it, to sustain an equivalence, or to maintain a universal balance that has little compromise. To do this the vacuum would need to inherently contain what would seem an almost limitless amount of dormant energy; but because it is a responsive energy it must be very difficult to access, since it only lets you get out as much energy as you put in. Nevertheless, it may be possible that catastrophic astronomical events test the resilience of the vacuum, being so violent as to attempt to exceed the potential of Space to equal the energy put in, causing a tear in the very fabric of the MDT universe, in which case the response from authentic Space is an almighty push-back to contain this, which appears as a black hole or singularity. This would prevent any further damage to the fabric of the MDT. We also cannot rule out that these "photons" that transfer light and energy propagate throughout the universe and exist in various states, from dormant to energized, and that like any substance they may exhibit different properties depending on what state they are in. For instance, water can be frozen to ice, thaw to a flowing watery consistency, and even be converted into steam, with each state having its own inherent properties, yet it remains essentially the same substance. If this is correct, then the universe would be filled with various consistencies of photons (for instance, the way ice floats on water) which on observation exhibit different properties so as to make them appear as different materials when in fact they are one substance.

This explanation of light also helps with a significant problem in astronomy. Some cosmologists have proposed that light travelled faster when the universe was expanding at its very beginning than it does now. The problem is that according to Einstein's model the speed of light is a constant that has always been the same and will never change. Physicists are then forced to develop very complicated ways of reconciling theories that seem irreconcilable in order to keep their theoretical physics within Einstein's flawed model, or risk ridicule. The brilliant physicist Stephen Hawking wrote his PhD thesis on the properties of the expanding universe. However, if Einstein's model is flawed and light does not in fact travel but is transferred, then it follows that the speed of light is not a speed at all, but a transfer rate. If the rate at which light is being transferred between static or cloaked photons is increased, then light itself will appear to speed up beyond the speed-of-light constant. It can do this without gaining infinite mass or violating the rules of light speed because it is not actually moving. If the rate at which light is being transferred between static photons slows down, then the speed of light will also appear to slow down. It then becomes quite easy to explain why there are variations in the speed of light during different stages of the development of the universe. Everything related to theories on light falls into place logically and elegantly.
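To make the transfer-rate idea concrete, here is a minimal Python sketch of the essay's proposal (an illustration of the idea, not established physics): a chain of static "primary photons" hands an excitation to its neighbour at a configurable rate, so the apparent propagation speed is simply spacing multiplied by hand-off rate, and changing the rate changes the apparent "speed of light" without anything moving. The spacing and rates are invented values for illustration.

```python
# Illustrative sketch of the transfer-rate idea in this essay (not established
# physics): light as an excitation handed between static "primary photons".

def apparent_speed(spacing_m: float, transfers_per_s: float) -> float:
    """Apparent propagation speed of a relayed excitation.

    Nothing in the chain moves; the excitation is handed from one static
    photon to the next, so apparent speed = spacing * transfer rate.
    """
    return spacing_m * transfers_per_s

def relay(chain_length: int, steps: int) -> list[int]:
    """Return the index of the excited photon after each transfer step."""
    excited = 0                      # the source stimulates photon 0
    history = [excited]
    for _ in range(steps):
        excited = min(excited + 1, chain_length - 1)  # hand-off to neighbour
        history.append(excited)
    return history

if __name__ == "__main__":
    spacing = 1.0e-9                 # assumed spacing between photons (m)
    normal_rate = 3.0e17             # transfers/s giving ~3e8 m/s
    early_rate = 3.0e19              # a faster hand-off rate, as in the early universe
    print(apparent_speed(spacing, normal_rate))   # ~3.0e8 m/s
    print(apparent_speed(spacing, early_rate))    # appears "faster than light"
    print(relay(chain_length=5, steps=6))         # [0, 1, 2, 3, 4, 4, 4]
```

On this reading, a "variable speed of light" is just a variable hand-off rate between photons that never move.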

We know that matter and energy are basically the same. Variations in the speed of light may provide insights into the universe as it exists today. During the creation of the universe, evolving conditions, including the Distance itself, were most likely in various states of turmoil. In some of these states light slowed down sufficiently to break apart and begin to recombine into different substances. Slowing down the speed of light may be similar to melting metals so they can be fashioned into different objects, with the exception that primary photons, as fundamental building blocks, created a virtual soup out of which any imaginable substance could emerge depending on how the primary photons came together as they coalesced. Breaking subatomic particles down to their fundamental building blocks will most likely eventually reveal that they consist purely of primary photons in various states of decay, this decay merely being the various patterns and ways in which light that slowed down coalesced to form a soup of particles from which all the subatomic particles that form matter itself originate.

It is most likely that quests such as the Large Hadron Collider (LHC) will discover that breaking down subatomic particles to their fundamental parts will reveal they consist of primary photons, or light that has slowed down and coalesced into the various forms of matter we see today. But there are significant limitations to this approach. It may therefore be just as sensible to build a light decelerator as it is to build a particle accelerator. A light decelerator would be able to address the questions the LHC seeks answers to in physics and may have fewer limitations. However, to do this scientists would have to accept that the speed of light is not a constant, and that fundamentally all matter consists of light. Being unable to think outside the box like this may simply be another example of the obstacles Einstein's flawed model has created that hinder the advancement of physics.

The LHC has limitations in that it is unlikely matter can be broken down below the subatomic state even with an experiment of this scale. The LHC and its magnets could be reconfigured to decelerate light instead of accelerating particles as a way of more directly achieving its objective. Since light has electromagnetic properties, and if it does not travel but traverses distances through a transfer rate as a non-contiguous wave, it may make sense to use magnetic fields to slow down the transfer rate and observe how very slow light breaks down and combines to form new matter. Experiments like this can be conducted on matter itself, armed with the knowledge that at its core matter is just light, since it is constructed from primary photons. All matter is therefore simply a form of non-contiguous light. The LHC attempts to understand gravity through geodesics or the Geometry of Space (Se).

 
Non-contiguous light might be decelerated using disruptive, ultra-high or ultra-low frequency, powerful electromagnetic fields. This may allow scientists to slow light down until it begins to break apart, which may in turn reveal the true nature of primary photons. Being non-contiguous suggests that they transfer energy equivalent to any source of emission. Should all forms of sub-atomic matter be made from light, then matter itself is inherently non-contiguous, and evidence of this is that it should begin to break apart, or start to liquefy, as it reverts to its individual atomic and subatomic parts when exposed to these types of fields. If this is true it may be possible to observe the behaviour of matter and gravity when affected below the sub-atomic level, which would allow scientists to investigate deeper than the LHC can reach. A decelerator need not only be a detector; it can also be designed to manipulate decelerated light or matter and any or all of its properties, including how Space interacts with it, which may provide avenues for observing gravitational effects and eventually finding ways to manipulate gravity itself. The observed effects of these kinds most likely could not be explained by conventional physics based on Einstein's model, because in his model light is a constant and cannot be slowed down; in addition, non-contiguous waves are not thoroughly accounted for in modern physics' investigation of waves, and no emphasis is placed on the theory that all matter may simply be made up of primary photons. The only experiments that seem to yield results predicted by the theory here, though at times dubious, fall under the "Hutchison Effect"; nevertheless, the authenticity of these has never been independently and openly verified.


This video shows what has come to be known as the "Hutchison Effect" on substances said to be placed in ultra-high frequency electromagnetic fields. Though these experiments are not verified, the breaking up of particles at the subatomic level causing liquefaction, together with gravitational effects, are some of the results that experiments designed to slow light waves down should expect to observe. Imagine being able to liquefy metals using magnetic fields without applying any heat, and what new advantages physics like this could bring to the mining industry. The Hutchison Effect and the Bose-Einstein Condensate appear to be fundamentally the same phenomenon achieved using different apparatus.

Slowing down light is likely to have the same effect as loosening the "glue" that holds subatomic particles together. So it is no mystery should substances affected in this way begin to break apart and therefore liquefy. Since it is known that the recently detected Higgs Boson plays a role in holding matter together and is linked to gravity, it should be no surprise that slowing down light creates gravitational effects. In addition, once light slows down it may be able to coalesce into natural physical materials, which may account for how the 4.9% of the matter in the universe was created, while the remaining 95.1%, consisting of dark matter and dark energy, is light or "primary photons" that coalesced at different speeds in disparate conditions within the same environment, so as to produce matter created from the same substance but with dissimilar attributes and properties. It is interesting to note that though space is black, the darkness may simply be a form of light.

Decelerating matter, for instance using the Bose-Einstein Condensate (BEC) experiment with lasers at close to absolute zero, may simply yield dark energy and dark matter; but unlike the dark matter and dark energy found in space it may be of a type that can only exist at low temperatures. Decelerating light itself is unlikely to yield much because, technically, light is matter that has already been decelerated but that remains in a stable state at diverse temperatures, and in its natural state travels at the speed of light. However, decelerating it should convert light into a plasma while it is suspended, and if the appropriate electromagnetic blueprint were applied to decelerated light it should transmute that light into a different substance; decelerating or capturing light is only the first step in an experiment to transmute it into a different substance. In its decelerated state, while in a plasma form, scientists should be able to use electromagnetic fields to harness the plasma as a route to creating gravity.

What this implies is that there must be an infinite range of attributes of matter that can be created in the lab in this way, i.e. any characteristic can be crafted into it to create designer matter. This further implies that there are some types of matter which, like sodium when decelerated using BEC, are able to remain in the plasma state at room temperature or at any temperature the scientist creating it desires. This type of matter would be ideal as a base substance for building exotic matter with any kind of attribute or property desired, and for generating gravity using electromagnetic fields. Finding matter that behaves in this way at present may be a little like Thomas Edison testing different filaments for the desired light bulb. To build matter from light, as strange as it may seem, may involve decelerating matter (which is inherently light), as seen in the BEC, and transmuting it using fields into some other substance. However, to decelerate light itself to speeds where it loses a straight-line trajectory (rather than breaking down, since it is already in its fundamental particle form), deceleration itself should be the first step to creating new matter. While decelerated it would have to be exposed to an appropriate electromagnetic field or blueprint. What is exciting is that scientists are already able to physically slow light down by trapping it.
It should nevertheless be noted that the electromotive characteristics of light, which can propagate at diverse temperatures, may mean that if transmutation does take place it may only hold briefly. Nevertheless, once light is re-engineered in this way it should be possible for scientists to create light that stays in place and basically becomes plasma. Finding a BEC substance that will remain a plasma at room temperature is ideal for controlling gravity and designing specialized or exotic matter, and it could be designed from processes like the BEC itself. This is the opposite of the LHC, which breaks up matter to study what it fundamentally consists of.


Tibetan prayer wheels are a good example for explaining how light works. The wheels represent photons. When they are standing still they are dormant and opaque; they do not emit light, creating darkness. The monk represents the rate at which light is triggered by a source. As he walks he spins each wheel, which represents a photon, and its state changes: it begins to spin, becomes transparent and emits light. The speed of light is the rate at which the wheels are stimulated. The wheels themselves always retain their individual mass and remain in place. The energy they release is proportional to the source of the trigger or emission.

Dominoes can also be used to illustrate how light travels. If each domino is a photon, then when it is standing still it creates darkness or opacity. When light is emitted from a source the dominoes become excited and begin to fall. The falling dominoes demonstrate that the photons stay in place while they transfer light. If this is true then it means that light should be highly manipulable and does not have to travel in straight lines.
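If the domino picture holds, the path light appears to take is just whichever chain of hand-offs happens to be laid out, so it need not be a straight line. A toy Python sketch of that point, with a made-up hand-off map:

```python
# Toy illustration of the domino analogy: the excitation follows whatever
# hand-off pattern the chain is laid out in, so the apparent light path is
# programmable rather than necessarily straight. The layout is made up.

# Each "photon" names the neighbour it hands the excitation to next.
handoff_map = {
    (0, 0): (1, 0),   # step right
    (1, 0): (1, 1),   # turn upward
    (1, 1): (2, 1),   # right again
    (2, 1): (2, 2),   # upward: a staircase, not a straight line
}

def trace_path(start):
    """Follow hand-offs until the chain ends, recording the apparent path."""
    path = [start]
    while path[-1] in handoff_map:
        path.append(handoff_map[path[-1]])
    return path

print(trace_path((0, 0)))   # [(0, 0), (1, 0), (1, 1), (2, 1), (2, 2)]
```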


If light traverses instead of travels, then it is possible to slow it down immensely. If it can be slowed down, then it should be possible to control the direction in which it appears to move. This means new avenues in physics can be opened, since light can be manipulated to create all manner of exotic combinations of matter, as is shown by the creative patterns of the different types of dominoes used in this video. However, all this incredible potential to advance physics will be blocked by people, science journals, the media and institutions policing Einstein's flawed model, who ridicule physicists who think outside of it and refuse to fund their research, thereby effectively stonewalling advances in both theoretical and applied physics.


Faster than light travel

There are many unanswered questions about how to harness gravity. Finally, the first critical answers will be unveiled.


Einstein postulated that nothing can travel faster than the speed of light. This inference, made from the top view, creates a constant upon which he builds much of his theories. Einstein's understanding of the top-view universe was revolutionary in his time. However, when analysed from the side view, the limitations he associated with the speed of light belong to a more primitive understanding of the universe applied in physics today. For instance, the nearest major galaxy is Andromeda. Travelling to Andromeda at the blistering speed of light, it would take 2.5 million years to get there. For the purposes of astronomy this makes the speed of light exceptionally slow.

However, side-view analysis would force scientists to dismiss distance as a formal barrier to space travel. It would demonstrate that the idea that the Andromeda galaxy is an unreachable distance away is a primitive one, because this galaxy and earth occupy the same Space (as does the furthest known galaxy in the universe), which means the science of an advanced civilization would know that technically there is no substantive distance between them. If there is no technical, substantive distance between them, these locations, perceived as unreachably far by primitive modern-day science, are in fact very easily reachable. They can be reached at an interval of time determined by side-view based technologies: in, for instance, half an hour, or one second. Theoretically this means straightforward faster-than-light travel from earth to Andromeda in one second, without weird repercussions or having to devise wormholes, warp time or invoke other exotic theories. It works very much in the same way that the BBC or CNN switch from a journalist in Perth, Australia to a journalist in Chicago, USA within the same space (or frame, i.e. the area of the television screen): the traveller or "spaceship" simply switches from earth as a location to Andromeda, limited only by the duration it takes to turn the dial. This is because, from side-view analysis, a scientist is not crossing places separated by "distance", but rather "tuning" from one place into another irrespective of distance, as they are located in the same Space, much like tuning from one radio or television station to another where all the waves or signals inter-exist (as do earth and Andromeda). For instance, in Zambia when audiences listen to radio they don't say they travelled to Hot FM, then travelled to Radio Phoenix, then travelled to Komboni Radio; they say they "tuned" into these stations, because they know that while they listen to one station all the other stations are still present but simply not tuned into. Similarly, the side view postulates that earth being the location "tuned" into does not mean Andromeda is not present in the same Space.

Achieving this journey in one second would entail travelling at roughly 8 × 10^13 times the speed of light (2.5 million times the speed of light, the figure one might naively reach for, would only complete the trip in a year), something technically possible from the side view but technically impossible according to Einstein and the exceptional yet more primitive understanding of modern-day physics based on a top-view analysis and its relevant or irrelevant constraints. What this means is that any location in the universe can be accessed. Our universe is simply a tiny part of a multiverse, and any location in the multiverse can similarly be accessed through the same process.
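For scale, here is the unit conversion behind those figures; the travel times are this essay's hypothetical scenarios, and the arithmetic is nothing more than converting light-years per second into multiples of c:

```python
# Unit-conversion check for the Andromeda figures quoted above.

SECONDS_PER_YEAR = 365.25 * 24 * 3600        # ~3.156e7 s
DISTANCE_LY = 2.5e6                          # Earth-Andromeda distance in light-years

# Covering d light-years in t seconds requires a speed of
# (d * SECONDS_PER_YEAR / t) times the speed of light.
def speed_multiple_of_c(distance_ly: float, travel_time_s: float) -> float:
    return distance_ly * SECONDS_PER_YEAR / travel_time_s

print(speed_multiple_of_c(DISTANCE_LY, travel_time_s=1.0))   # ~7.9e13 c (one-second trip)
print(speed_multiple_of_c(DISTANCE_LY, SECONDS_PER_YEAR))    # 2.5e6 c (one-year trip)
print(speed_multiple_of_c(DISTANCE_LY, 30 * 60))             # ~4.4e10 c (half-hour trip)
```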
Hopping from location to location through Space entails that there must exist a map or geography of the universe and multiverse to allow the precise selection of coordinates for a location or "channel" to jump to. How this map of the multiverse might work was, in effect, theorized by Gaston Julia (1893-1978), whose Julia set can serve as a map of the location-based geography of a universe, and by Benoit Mandelbrot (1924-2010), whose Mandelbrot set can serve as a map of the location-based geography of the multiverse. These can be used hypothetically to know in advance the exact location where a jump through Space will take a spacecraft. If you want to understand these sets, the video below offers a succinct explanation. These sets demonstrate how potentially vast the geography of Space is.

Ben Sparks' excellent explanation of Julia sets and Mandelbrot sets
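For readers who prefer code to video: both sets come from the same iteration z → z² + c. Hold c fixed and vary the starting point z for a Julia set; start z at 0 and vary c for the Mandelbrot set. A minimal escape-time sketch:

```python
# Minimal escape-time iteration behind both sets: z -> z**2 + c.
# Julia set: fix c, vary the starting point z0.
# Mandelbrot set: fix z0 = 0, vary c.

def escape_time(z: complex, c: complex, max_iter: int = 100) -> int:
    """Iterations before |z| exceeds 2 (bounded points return max_iter)."""
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return max_iter

def in_mandelbrot(c: complex) -> bool:
    return escape_time(0j, c) == 100

def in_julia(z0: complex, c: complex) -> bool:
    return escape_time(z0, c) == 100

print(in_mandelbrot(0j))              # True: 0 stays bounded under iteration
print(in_mandelbrot(1 + 0j))          # False: 0 -> 1 -> 2 -> 5 ... escapes
print(in_julia(0j, c=-0.8 + 0.156j))  # membership test for one point of a Julia set
```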








Mandelbrot sets may be subject to misinterpretation because depth is not taken into account correctly. There is only one type of circle in a plane or layer, namely A. The remaining circles B, C, D and so on appear smaller because they are not on the same perspective or layer/plane as A. Each circle of a different size occupies a different lattice. What you then get is a matrix for matter where the tiniest part of the set is equal to the largest part. When you go out of one circle another circle is entered, creating a complete map. In this case movement, or the map, is 3-dimensional or 3-directional. Notice that when Ben goes outside some parts of the circle the figures explode. This is likely because Mandelbrot's equation does not account for layers, and therefore for the fact that exiting one circle or bubble in a certain manner may take you out of that plane into another one, as shown, i.e. from A to B to C and so on. If a circle or bubble is not mapped, then this is likely a design quality of a specific type of matter or subject being observed, but not of Space itself. The map of the smallest part is the mapping of the largest part.




Talk of folding Space-Time, building wormholes, warping time, engaging warp drives and hyperdrives for jumping through space, and figuring out how to get there and back without arriving before you were born, all to create shortcuts to far-off places in other galaxies, are antiquated ideas from a less informed era of physics, like fossilized dinosaur bones, because our universe observed in a single frame or continuum is already infinitely compressed, since technically distance does not exist from the side view. The theory of a hyperdrive technologically trumps that of a warp drive, since a hyperdrive is theorized to move outside of time by jumping into hyperspace, although it still faces interference from mass or "mass-shadows". The fictional concept of a spore drive recently introduced in Star Trek: Discovery, which allows almost instantaneous travel, is, though exotic, a little closer to Spatial technologies; it trumps the hyperdrive and is closer to the kind of technology we should hope to have in the future. Spatial technologies would at present be the ultimate technology for travel. Using side-view physics a civilization simply identifies where it wants to go or be, anywhere in the universe, and arrives there at a pace, speed and duration of its choosing that suits a need, requirement or preference at any particular juncture.

Meanwhile, a civilization building its technology on Relativity Theory blew themselves up on the way, were crushed like pancakes into their seats by high-speed g-forces, scrambled like eggs in warp drive, or were woken up from hibernation too early mid-journey so that the entire crew is nearly 200 years old when they arrive, whilst on earth no one they know is still living and their space suits look like they were designed in the 18th century because a thousand generations have gone by. No crew member would like to experience the trauma of going insane as a result of being exposed to time distortions that persist after a jump, and other potentially dangerous aspects of this kind of travel. Basically, a civilization whose technology is built from top-view physics will be significantly backward compared to a civilization that has leapt ahead through a side-view understanding of physics.

Furthermore, accelerating from 0 to tens of trillions of times the speed of light in one second does not face interference from any primitive notions of being affected by g-forces, requiring infinite energy to move at near light speed, or gaining infinite mass as a result, as would be inferred by modern top-view physics, because this change of location or "speed" is not applied through the medium of matter-distance-time, but occurs through Space. (Remember, a fundamental weakness in the Theory of Relativity is that Einstein makes no distinction between distance and Space, whereas from the side view distance and Space are two distinct constructs.) Though visually a vehicle travelling below the speed of light through either the Distance or Space would appear to be moving "normally", as we observe in the everyday occurrence of an airplane travelling across the sky, in principle the method of propulsion using side-view Space is completely different from that used conventionally to travel using top-view distance, e.g. through thrust generated by an engine. Technically it is moving through Space, and therefore without the primitive notion that g-forces would make travel at exceptional rates of acceleration impossible. A vessel moving through distance as a medium, such as an airplane, rocket or other similarly propelled vehicle, must experience top-view g-forces. A vessel travelling close to the speed of light would most likely be very difficult to navigate, and fatal matter-on-matter collisions would be almost impossible to avoid. Should it be designed to use wormholes and so on, the extraordinary trauma of moving biological organisms through the effects of Einstein's Space-Time could prove as lethal as exposure to radiation at a nuclear power plant; whereas a vehicle harnessing Space to change its location does so outside mass and distance, without motion or "time", and therefore without any of the weird, excessive and primitive g-force or dangerous "Space-Time" effects on the occupants of the vessel proposed by the mundane limitations in the antiquated top-view theories currently applied in modern physics.

A civilisation functioning on Spatial technologies would view a civilisation functioning on Relativity Theory as intelligent but very backward. Travelling through Space, it has the option of moving a vessel outside our MDT universe, where the vessel simply moves through matter, be it a planet, asteroid belt, sun or debris, safely, as it does not require the vessel to make physical contact. A vessel built on Spatial technology could, while standing still, simply shift into Space and completely disappear from physical visibility, because light could pass straight through it at will (no need to try to bend light around it). It could become invisible to radar at will. Objects and people outside the vehicle could pass through it, but it would still be right there observing them. It could do this on the ground or in the air, giving it unparalleled levels of stealth. It could land on the lawn in front of your house, or hover just above it, and by today's level of physics, science and technology there would be no way of knowing or detecting it was right there. The advantages of Spatial technology over the matter-based technologies found in the MDT universe are innumerable.

Being able to travel to any part of our universe instantly may allow us to explore it more effectively. Nevertheless, if there are a vast number of places to explore and an infinite number of universes similar to our own, we may soon discover it could take millions of years to investigate and catalogue all of what is out there. To do this comprehensively would take more than just a newfound incredible speed.


Laser Interferometer Gravitational-Wave Observatory (LIGO)

LIGO has recently been in the news for having detected gravitational waves. Since this is such an important development in physics and our technical understanding of the universe, I cannot help but comment on it.

To begin with, what is gravity? According to Einstein, gravity can be observed in the pressure large masses exert on Space-Time. Gravity can therefore be detected by distorting, bending, compressing and stretching Space-Time. In the deliberations we have had thus far it has been noted that Einstein erred when he labelled distance, and/or the matter it separates, as "Space". I prefer to call what Einstein actually describes the Distance, or matter, not Space. Relativity Theory is formulated on a Matter-Distance-Time (MDT) universe. By the way, when time is referred to here, it refers to motion: Time in Einstein's model is nothing more than a construct of motion in which matter moves relative to other matter, hence the use of the term "Relativity Theory". His theory is not formulated on a Space-Time universe, as Einstein postulated and as is believed and maintained in applied physics to this day. If distance is observed as being of a symmetrical nature it can appear to be empty, behave like or mimic Space, and be used to draw the force lines popular in drawings used to depict gravity.

Einstein mistakenly labels this Euclidean geometry as Space or Space-Time. What this image demonstrates is not a distortion of Space-Time, as Einstein suggested; it shows a distortion of symmetrical distances denoted by imaginary lines relevant to matter. This distortion causes any object caught inside it to accelerate toward the centre of the object. If LIGO measured this distortion then, strictly speaking, it did and yet did not measure gravity. I will explain. If gravity is caused by some aspect of Space, then it is possible gravity is merely a form of acceleration that is also being misinterpreted through the Einstein error. This simple original error on Einstein's part, the result of a slight misunderstanding, has misled physicists for decades. It is simply a perception-based problem of an age in physics that needs to be corrected to allow physics to progress into a new era. The Laser Interferometer Gravitational-Wave Observatory (LIGO) is on track but is no exception. What has been measured by LIGO is a form of distortion in the top-view matter-distance-time universe. This is correct. But it has not been caused by a distortion of Space; to say this is incorrect. Gravity is a reaction to the distortion; the distortion is not gravity itself. Gravity is nothing more than a form of acceleration, a phenomenon no different from the force that pushes you and your passengers into the seat when the driver pushes down on the gas pedal.

Here is a simpler recreation of the earlier diagram
with the earth in the middle and a single imaginary line of "force"



Standing on the shoulders of a giant. Albert Einstein
giving a lecture on relativity at Lincoln University, Pennsylvania 1946.

As much as we all love Einstein and what he accomplished in theoretical physics, there comes a time when we need to take what he left humanity with and begin the next journey. This is what he would have wanted. We have stood on the shoulders of a giant, and it is time to use his brilliance to take the next step, the next leap, the next bound.

This video is a contemporary example of how Einstein's error continues to mislead physicists to this very day. The blue spandex represents the Distance, or Euclidean geometry; it does not represent Space. Though the balls are being moved by the depressions created in the spandex, the spandex itself does not endow the balls with mass. You can see this here with your own eyes. This mass comes from Space, that is, from outside of Euclidean geometry and therefore outside of Einstein's Space-Time model. Without a pre-existing mass or push effect, even if the experimenter pulled the spandex down with his bare hands there would be no gravitational force, no movement. Movement or "motion" being Time itself in Relativity Theory further demonstrates just how mistaken Einstein's understanding of the universe was at this stage. What more evidence do you need? To say that Euclidean geometry creates gravity, as Einstein does in Relativity Theory and his model of Space-Time, is therefore seriously erroneous, and is caused by Einstein mistaking distance for Space. This error is echoed by the teacher in this video and by physicists in general across the world. For how long will students, science and the public at large be misinformed and misled by this misdirect? The use of spandex to explain gravity is a useful tool for visualizing gravity, but it is not accurate. It is useful only if, when it is used to demonstrate gravity, its limitations are understood by physicists and this flaw or limitation is explained to students.

If the video above and the explanation below it make sense, then you may begin to see how significant the misdirect in physics is. For instance, according to Einstein, warping Space-Time creates gravity and is capable of bending light. The bending of light as it passes large masses was predicted by Einstein. It has been proven true, and similarly many of Einstein's theories continue to be validated. However, closer examination shows that mass is not created within his Space-Time model. If gravity does not come from his Space-Time model, but from outside of it, then this is just another seemingly true prediction that is in fact faulty, deeply flawed or catastrophically misleading.

Einstein's predictions have been proven true by LIGO, and will continue to prove true as long as they are based on the front or top view of the universe. However, we must begin to look at the theories he left us with fresh eyes. The fact that he confuses distance for Space will continue to have serious repercussions on the viability of modern-day physics and its capacity to add value to the advancement of technology in this area. Is this concern really just semantics, for example, you say to-may-to and I say to-mah-to, but we're talking about the same thing? No, it's not.

We are talking about completely different forces. This is like seeing a VTOL craft taking off and saying it is being lifted by gravity, when in fact it is being lifted by air. The difference is that blatant. Similarly, gravity and distortions in matter-distance-time (MDT) are not the same thing. This difference should be able to demonstrate to physicists why Einstein mislabelling distance by calling it Space should be of tremendous concern. Distance and matter are not Space; the geometry of the observable universe is not Space. Failing to distinguish the Distance from Space creates many hidden pitfalls. Space-Time infers that distortions of the geodesic nature of the universe create gravity, when they in fact do not. As long as this view remains uncorrected, efforts to control and manipulate gravity will prove tremendously costly, weak, elusive or inconclusive.

If what has been described here is true, then where is Space in this theory or explanation? If what Einstein thought was gravity in his age is simply a geometrical effect which is not responsible for gravity, then where is gravity in our era? Since Space is ubiquitous, gravity is the resistance from Space by and through which the movement observed as acceleration takes place. I will explain this shortly with just one diagram. When Einstein's view of the universe is corrected, the interpretation also begins to make sense.

The diagram below shows an object that, from the top view or Einstein's view, is being pulled toward the earth. The earth's mass causes the distortion. However, the distortion in and of itself is not gravity as proposed by Einstein, and it is not Space as observed in Einstein's model. The force that causes acceleration is the resistance of Space to the distortion, which pushes the object downward. It is not being pulled. Gravity is coming from Space, not the distortion. Therefore, technically, what the physicists at LIGO are measuring is not gravity itself but a distortion of matter-distance-time (MDT). It also means that objects are not pulled towards the earth, and electrons are not attracted by the nucleus, as is commonly believed or as described by Einstein; they are in fact pushed, or kept in place, just as the seat in the accelerating car described earlier pushes the passengers. Objects on earth are not being pulled by gravity, they are being pushed; skydivers jumping out of planes are not falling towards the earth, they are being pushed toward the earth; atoms are not being held together by a pull effect from the nucleus, they are being held together by a force response from Space. Nevertheless, this description should not cause further confusion. The geometric distortion is resisted by Space, which pushes and resists on and on, forcing the universe to continually expand. What we consequently view from the top view is acceleration. We refer to this acceleration as gravity.

The diagram shows gravity coming from Space not the distortion because
the distortion is not Space. It shows gravity being caused by a push effect
from the resistance of Space rather than a pull-effect.

Fundamentally, whenever physicists refer to gravity the general term used is "pull"; for example, "the sun's gravity pulls on the planets...". If it is indeed true that objects entering the earth's gravitational field are being pushed by the resistance from Space in response to distortion geometry, and not pulled directly by distortion geometry itself as Einstein theorised, then this is a game changer. For instance, the claim that the universe is expanding is based on the assumption that matter is being pushed further out. Since distortion geometry itself cannot directly create a gravitational effect, and the gravitational effect acts in the opposite direction to distortion geometry, then rather than matter being pushed away from a theoretical centre, astronomers may have to consider that the universe is expanding because it is being attracted by a larger mass. The distortion geometry of this mass may be acting on our universe, causing an outward gravitational push on matter that is observed as an expanding universe. If this is true then our universe is continuously expanding due to the resistance of Space (not Einstein's flawed "Space-Time" but Space as it is understood in the corrected model) in response to a distortion geometry coming from outside our known universe.

This approach may also solve a long-standing problem in physics: how something infinitesimally small can be astronomically heavy, such as a black hole. Our universe is neither infinitely large nor infinitely small, and a singularity is merely the point beyond which the MDT universe, and any top-view substance within it, reduces until it arrives at the minimum scale of existence, hence infinity or division by 0; whereas Space, being ubiquitous, exists well beyond this minimum and maximum scale, being capable of existing outside the concept of distance itself. If cause and effect are separated, as has been shown here, then gravity can in fact exist outside the singularity: the singularity [the infinitesimally small source of distortion] receives a push effect or response from Space [the astronomically heavy effect], and the two are able to co-exist. Since Space keeps universes apart, any cataclysmic event that threatens to tear the Distance will be met with a gravitational effect that resists the distortion. The more violent the distortion, the more aggressive the gravitational response from Space that contains it. Space effectively constrains any mild or excessive physical disruption of the Distance (Euclidean geometry, or Einstein's lines of force), possibly sealing off and preventing any direct connection between separate universes, effectively responding with gravity to force a submission viewed in our universe as a singularity, which seems like nothing more than a stopper plugged into a hole and held in place by gravity. The singularity not being the actual source of gravity itself resolves the conflict. Here is a brief presentation of this problem:

Correcting Einstein's model helps resolve quantum gravity

Correctly labelling Space and correcting Einstein's error may comprehensively answer questions that are presently unanswerable, such as the ongoing problem of understanding quantum gravity and why the universe is expanding. The answer becomes quite simple: the universe is expanding because it is responding to a gravitational force, and infinitely small phenomena such as black holes or the tiny nucleus of an atom do not pull, and are not the direct source of gravity, but rather interact with a push effect from Space. The answer is simple, logical and elegant. However, where the universe is concerned, if this makes sense and is true, it raises another question. What is it, beyond our universe, that could exert such a powerful distortion geometry that it is able to indirectly generate a gravitational push or force able to "pull" (sic, push) toward it all the matter in our universe? Is the matter in our universe on a collision course toward it? Or is our entire universe merely orbiting, or tethered to, some other object or universes of incredible mass, equal to or greater than it? Does this mean our universe is part of a local group of universes? If galaxies can create local groups or clusters, it may be possible that universes can do the same. If our universe is just one amongst many, then how many are really out there? If there is merit to this, new theories in astrophysics may be required to understand our universe, and we may have to revise how expansive this realm is and our universe's place in it. Furthermore, if Space will go to the extent of creating a singularity to prevent physical matter from escaping from one universe into the next, it explains why objects become infinitely heavier as they approach light speed: an object may become heavier due to the resistance from Space, namely gravity, effectively containing unintelligent matter and keeping or trapping it in a continuum. It also explains why faster-than-light travel is only possible through Space, but not through the conventional universe.

Correcting Einstein's model improves our understanding of gravity and may require astrophysicists to consider that our universe may be under its own gravitational influence or that of another body, may simply be paired with other universes, or may be just one amongst many tethered together in a local group of universes by gravity, perhaps separated by cosmic background radiation. [A logarithmic illustration of the entire universe, starting with the solar system and ending with the cosmic background radiation of the big bang. (Pablo Carlos Budassi/Wikipedia (CC BY-SA 3.0))]
The resistance from Space is not sufficient to comprehend Space itself. Space operates from the side view, where distance and time are meaningless to it, to the extent that they do not exist. Since distance is no obstacle, to access Spatial technologies is to gain the ability to go far beyond the size of an electron or proton and interact or work with microscopy at an advanced sub-nano scale, where matter in its most fundamental state can be examined and manipulated. Space does not function on the same principles as the MDT universe proposed by Einstein. Distance, mass, weight and durational time are insignificant to it. It functions more like code or informatics: a method, operating system or type of physics completely different from anything currently applied.

For instance, to be able to do this, the structure of an atom consisting of protons, neutrons and electrons should not be seen as "Matter", a substance, but rather as bits of information coded by Space to behave in certain ways, which we view at the top level as quantum mechanics, or which, when coded together in a certain way, produce the Periodic Table. These particles, which are in fact just bits of information in different patterns, are refreshed to give them the appearance of mobility we refer to as a wave, from which Schrödinger gets his famous equation. Being bits of information they are in fact always static, and refreshing these stills frees them from immobility to create the hypothetical structure of the atom. This means that in reality protons, neutrons and electrons have no inherent mass or tangible form; all these are just attributes that must be provided or naturally programmed by Space. Like the earth, any mass they exhibit, any energy they have or can produce, comes from Space, as does their ability to move (wave properties), their appearance and any other attributes observed from the top view.

Space itself cannot be thought of in terms of "size", since size is just an attribute of Space. To understand it we may have to borrow ideas from computer science relevant to hardware and software, approaching it through the concept of how transistors can hold, process and manipulate information. These tools can provide some insights into how Space works. To control Space is to gain the technology with which to manipulate matter, and possibly [top-view] reality itself, since all the constructs created from its attributes, such as mass, matter, distance, energy, time and so on, are merely a form of code whose value is determined by the underlying coding or informatics that dictate how these attributes should appear and behave. For instance, when it is said that "Astronomers have discovered what may be the most massive black hole ever known in a small galaxy about 250 million light-years from Earth... The supermassive black hole has a mass equivalent to 17 billion suns and is located inside the galaxy NGC 1277 in the constellation Perseus" (Wiki 2017), we should not be overwhelmed by the more primitive top-view values spoken of. Instead we should take the more advanced view that 250 million light years, and a mass equivalent to 17 billion suns, are merely attributes of Spatial code that cannot be greater than the information from which they are being created or generated. The fact that even a black hole with the mass of 17 billion suns is too weak to penetrate or break through the resistance of Space illustrates the power inherent in this technology once we understand how to harness it.
In other words, we need to stop the primitive thinking in science that continues to believe that when a bigger truck appears on a laptop screen, the laptop gets heavier, when the laptop can hold and contain the mass and breadth of an entire galaxy or universe on its screen and not flinch. Mass, matter, size, energy, distance and so on are only truly quantified within the top-view universe they exist in; outside it, in Space, these are merely attributes that an advanced civilisation is able to use its technology to manipulate for its own needs. Consequently, even if the mass of our entire universe were to distort cataclysmically, it would be repelled by Space as if it were nothing, and our universe would continue to expand indefinitely against this resistance in accordance with the behaviour determined by the underlying code. To understand, control and manipulate this code is simply another step toward understanding a realm and the physics behind it that may be much greater than Space itself, which is merely the next rung on the ladder available for us to understand.

To speak of the size of Space is an oxymoron, because the concept of "size" cannot realistically be applied to Space; it cannot be understood in terms of distance as we know it, or time as we like to use it in physics. Fundamentally, Space most likely contains an infinite number of universes or separated continuums, but it does not function on time, distance or matter. How it resists distortions and causes the push effect we observe as gravity, be it around the earth or around the nucleus of an atom, is unlike anything we have dared explore. It is therefore an exciting, albeit new, era and field of physics, open to exploration and definitely manipulable; I have already tried earlier, to some extent, to explain this using the concept of a refresh rate. It offers science new vistas, such as the ability to travel to any part of our universe in seconds, and other feats of technology previously thought impossible. The limitations once thought very real by Relativity Theory seem like nothing more than an idle dream of a bygone age. The potential of Space makes a nuclear reaction, nuclear fission or the energy given off by the sun seem less significant than the energy given off by a lit matchstick. Energy, time and distance are just ideas or tenets in a Spatial construct, nothing more, and therefore almost absolutely manipulable within the limits of Spatial physics. Even if a nuclear weapon were set off in a small cube designed from Spatial physics or technology, the cube would shield or contain the blast and remain completely unaffected, because Spatial potentialities operate on different principles and are by far greater than either atomic or quantum level forces, so as to render them irrelevant, backward or obsolete in comparison. Comparing our universe, just one continuum, to Space, which probably contains an infinite number of continuums, is probably like comparing a speck of dust to a seemingly immeasurable force.
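The refresh idea in the last two paragraphs can be put in screen terms: nothing on a display moves; successive frames simply re-assign which static pixels are lit, and the sequence reads as motion. A toy Python sketch of that reading (illustration only):

```python
# Toy illustration of the "refresh" reading above: nothing moves. Each frame
# simply re-assigns which static cell is "on", and the sequence of frames is
# read from the top view as a particle in motion.

WIDTH = 10

def frame(on_cell: int) -> str:
    """Render one still frame; every cell is static, one is lit."""
    return "".join("#" if i == on_cell else "." for i in range(WIDTH))

def refresh(frames: int) -> None:
    for t in range(frames):          # t is the refresh counter, not "time"
        print(frame(on_cell=t % WIDTH))

refresh(5)
# #.........
# .#........
# ..#.......
# ...#......
# ....#.....
```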

Correcting Einstein's view improves accuracy and provides a more definitive understanding of how gravity works. It also allows astronomers to answer, once and for all, a question that has confounded humanity's understanding of the universe: it very simply provides the answer for how and why the universe is continually expanding. Einstein would not have wanted physics to remain stagnant or indefinitely trapped in the greatness of his theories. He would have wanted to see progress, to see his unique ability to inspire new ideas that move humanity onward with as much impact as his own.

Cosmic Microwave Background (CMB)

If it is true that Einstein incorrectly labels the Distance as Space in his model and understanding of the universe, then all the pieces of the puzzle begin to come together. Technically, when we launch satellites and rockets we are not launching them into Space. As a civilisation we are launching them into the Distance.

Having corrected his model it is possible to see that gravity does not emanate directly from distortion geometry (a distortion of the Distance) as Einstein proposed. It in fact radiates from true Space as a resistance to distortion geometry. From this we are able to conclude that objects are pushed by this resistance, they are not pulled by distortion geometry as Einstein believed.

If objects are not pulled directly by distortion geometry, then this requires us to revisit the underlying cause of our universe's expansion. There is a possibility that matter in our universe is being pushed outward by gravity. However, for this to be true it must take place as a result of Space resisting a distortion geometry created by a mass outside our universe, or by our universe itself pushing and pulling against the confines of Space, like waves against the shore. This then requires us to entertain the idea that our universe is not the only one here; there may be more, separated by Space, the composition of which requires further study.

If our universe is not alone, and any attempts to see beyond it are obstructed by the Cosmic Microwave Background (CMB), then it would not be ill advised to assume that, though Space is ubiquitous, universes exist such that they are separated, contained and constrained by the CMB. If this in turn is true, then it may require us to accept that the so-called 5th dimension, or Space, that seems so elusive and impossible to identify, is in fact an aspect of the CMB itself. This would mean it has to be considered that the distortion geometry interacts with the CMB (Space) to create gravity. It would also require us to entertain the idea that the CMB is more than just a remnant of the big bang. It may in fact be the elusive side view, or 5th dimension itself, right under our noses.

If there are many, or an infinite number of, universes, and each one of these universes is represented by just one signal, which when tuned into becomes a "channel", dimension, continuum or our MDT universe as we observe it from the top view, then all of these continuums or "signals", of which our own universe is merely the one we are tuned into, when combined form the noise observed in the CMB. It looks and sounds like swarming bees because what is being observed is all the continuums, universes or signals in one Space. Consequently, the CMB is our first introduction to Space itself. "With a traditional optical telescope the space between stars and galaxies (the background) is completely dark. However, a sufficiently sensitive radio telescope shows a faint background noise, or glow, almost isotropic, that is not associated with any star, galaxy or other object" (Wikipedia 2017). What this would mean is that what astronomers are observing with a radio telescope is in fact a semblance of Space itself, or the 5th dimension. The CMB may be key to unlocking much of what science does not know and understand about gravity, and may be required to build devices that can control gravity itself. What appears as visual noise in the CMB is probably not noise at all. It only appears as random noise to us because we have not yet designed a receiver that can interpret what the CMB is broadcasting, which is most likely highly evolved, intelligible and organised, and may include inter-dimensional locations that act as beacons for use in tuning into and out of sectors of the universe.

Any attempt to understand the CMB, or explain it using the four known dimensions, is probably a waste of time and will yield a faulty model with misrepresentations and misinterpretations that only further mislead the scientific fraternity. Einstein's Space-Time consists of 3 directions and the 4th dimension, Time (where time is nothing more than moving matter or "Motion"), which together form Space-Time. We have shown that this is not Space-Time and corrected it as Distance-Time, or a Matter-Distance-Time (MDT), or better still a Matter-Distance-Motion universe. If the CMB is indeed Space, it is of a 5th-dimensional construct. Most people try to add a 5th dimension to the 3rd and 4th to arrive at a 5th. Interestingly enough, we do not add an additional dimension to the 4th. Instead we should subtract Time and subtract Distance. Why? Because to tune into a new continuum, dimension, signal or universe, we do so by tuning out of the one we are already situated in. Having removed these, we enter the 5th dimension and tune from here into the next location of our choosing.
I have already elaborated extensively on how the process of tuning is a function of removing Distance and Time from the physics we use to understand our universe. This 5th dimension is Space, which can be identified as, or through, the CMB. If this is true the CMB cannot be understood using conventional physics. Trying to analyse a 5th-dimensional universe using 4 dimensions will yield many false positives, and it is very likely that almost everything physics and astronomy thinks it knows about the CMB today is flawed, built from faulty top-view observations, and therefore a half-truth. It can only be fully understood when studied outside of Time (Motion) and Distance. Since Time (Motion) and Distance are the foundation upon which the entirety of physics is built today, we do not yet have the formal reasoning, math, approach or model by which to begin to understand the CMB, or how to build a 5th-dimensional tuner that will make sense of it, although I have tried to do this from the beginning of this write-up, where I have also tried to point out that motion in a linear direction may not exist. However, movement with no vector, such as spin, may be accommodated in attempts to understand Space. For instance, the difference between a distance of a kilometre and 5 million light years should not be seen in terms of how far off they are, but rather in terms of the frequency they spin at, their radius from the centre of spin and the direction they occupy in that radius, which when tuned into is gained.

Every location in the MDT universe, from earth to the furthest galaxy, will have a specific frequency in Space on a scale infinitely tinier than the quantum level, allowing matter to be manipulated below the nuclear level and distances to be covered across the universe. If the Spatial frequency of any location in the Distance is known, it does not matter how far away it is in our universe, it can be tuned into; and it does not matter how tiny it is, it can be super-manipulated, for instance allowing bespoke materials to be constructed from the electrons, protons, nucleus and below. Spin alludes to a form of physics centred primarily on frequencies based on spin, which become the only basic means of rationally linking Space, where there is no distance, to our MDT universe, distance and spin having some shared properties that can be used to find workable mathematical linkages between states of existence that function on different properties. Staying in this line of thought, if a complete spin cycle is equivalent to a refresh rate and is the only logical means of linking true Space and the MDT universe, then this tiny aperture may yield more about technologies capable of directly manipulating Space, and therefore a plethora of other phenomena including gravity. Thus far we generally study 2-dimensional waves using amplitude (y-axis) and time (x-axis) to understand magnetism; but to understand gravity we would have to consider a third and fourth property of electromagnetic waves, namely a rate of spin around the x-axis just as fast as the wave moves along the x-axis, which loops or corkscrews both the wave's amplitude and time from a 2-dimensional electromagnetic construct into a 3-dimensional wave.
This wave is then pulsed on and off just as rapidly, for instance, to create a 4th-dimensional wave property, consequently allowing spin and a refresh process to create specialized frequencies that open a path to harnessing gravity by linking it to a 4th-dimensional type of electromagnetism to which Space is able to respond with a push effect. Interestingly enough, in quantum mechanics it was discovered quite late that electrons do carry spin. Yet again, we find that this specific spin property was strangely missing from Schrödinger's understanding of waves, and it was consequently not included in his famous wave equation. How is this oversight even possible? It is incredible how great minds in physics such as Einstein and Schrödinger could make such immense strides and insights and yet produce ideas that seem to have very obvious flaws, flaws that appear to act as misdirects preventing a clean or clear understanding of gravity and Space. However, it is also possible that Schrödinger noticed spin but, because his physics was based on Einstein's erroneous Space-Time model, could not account for it in his equations and decided to ignore it altogether. This possibility simply emphasises the potential Einstein's flawed model has had to weaken the analysis and research of past and modern-day physicists. What other small lab research projects and billion-dollar experiments working in earnest are likely being led in the wrong direction by this misdirect? The potential pitfalls the misdirect can cause in physics are real.
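Purely as an illustrative rendering of the corkscrew-plus-pulse wave proposed above (the spin rate, pulse rate and function names are assumptions of this sketch, not standard physics):

```python
# Illustrative sketch of the wave proposed above: a conventional transverse
# wave given a spin about the propagation (x) axis, making a corkscrew, then
# gated on and off to add the pulsed "refresh" property. Parameter values
# are arbitrary assumptions for illustration.
import math

def pulsed_helix(t: float, amp: float = 1.0, spin_hz: float = 5.0,
                 pulse_hz: float = 1.0) -> tuple[float, float]:
    """(y, z) components of the field: the amplitude corkscrews around the
    x-axis, switched on and off by a square pulse."""
    gate = 1.0 if math.sin(2 * math.pi * pulse_hz * t) >= 0 else 0.0
    phase = 2 * math.pi * spin_hz * t
    return (gate * amp * math.cos(phase), gate * amp * math.sin(phase))

for step in range(8):
    t = step * 0.05
    y, z = pulsed_helix(t)
    print(f"t={t:.2f}  y={y:+.3f}  z={z:+.3f}")
```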

Nevertheless, I am of the opinion that the complexity of the technology used in the creation and design of this receiver or tuner will determine the myriad ways in which gravity and related phenomena can be manipulated to obtain desired results, much the same way electricity is used by different technologies to produce many devices with innumerable uses. The reason we fail to identify the CMB for what it really is, is the misdirect in Einstein's model that mistakenly labels Euclidean space as Space itself, when it is in fact just Distance linked to Time, Time itself being nothing more than Movement or Motion.

One of the consequences of Einstein mislabelling distance by calling it Space has been the inevitable confusion it has created within the scientific fraternity. It misdirects astronomers into believing that when they look up at the stars in the night sky they are looking at "Space", and misdirects physicists into believing that the vast emptiness between a nucleus and its electrons is "Space", or that when distortion geometry is observed they are looking at Space-Time. The error is so pervasive it almost seems clandestine; it is easy to conclude it might be deliberate. It leaves humanity only one route to controlling gravity, through Einstein's erroneous Space-Time, making it practically impossible to do so. We keep walking past the elephant in the room, even though we are desperately looking for an elephant. One consequence for science is that the understanding, manipulation and control of gravity stands today roughly where electricity stood in the Stone Age: electricity was always here; what was lacking was the means to see, understand, control and harness it. This is the very same problem with gravity today. It is right in front of us, but because of Einstein's error we simply cannot see it, understand it, harness it or control it. Gravity, once seen and understood, should be just as easy to control as electricity, and that is unlikely to happen without correcting Einstein's model.

In fact, seeing Space, and where gravity comes from, may not be as impossible as you think. It will not cost you an arm and a leg either. If you know what you are doing, you may not need a billion-dollar, multi-kilometre-long array to understand gravity. If you have one of those old analogue television sets, go and switch it on. When you are between channels, the "snow" you see is your first introduction to Space and to where gravity is coming from: about 1% or less of the noise you see on the screen is the CMB.
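The "about 1%" figure can be sanity-checked with a back-of-envelope noise-temperature comparison: noise power scales with temperature, so the CMB's share of the snow is roughly its 2.7 K set against everything else the receiver picks up. The 300 K figure for "everything else" below is my own rough assumption for an old analogue set, not a measured value.

```python
# Back-of-envelope check on the "about 1%" figure (assumptions noted):
# noise power P = k * T * B, so fractions of power are fractions of
# temperature for a fixed bandwidth.
T_cmb = 2.725        # K, CMB temperature (measured; COBE/FIRAS)
T_other = 300.0      # K, assumed combined receiver + terrestrial noise
                     # for an old VHF/UHF set (assumption mine)

fraction = T_cmb / (T_cmb + T_other)
print(f"CMB share of the 'snow' power: {fraction:.1%}")   # ~0.9%
```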

The CMB: Welcome to Space.
Tune in to go anywhere you so desire.



Back to LIGO


Why are we being pushed toward the earth at 9.8 m/s², a force of 9.8 N on every kilogram, rather than pulled by the earth? The distortion may be caused by the earth, but that gravitational force is not coming from the earth; it is coming from Space.
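A quick note on units here, since the 9.8 figure is easily misread: 9.8 is an acceleration (metres per second squared); the force it corresponds to depends on the mass being pushed. A two-line check in standard Newtonian bookkeeping:

```python
# Units check: 9.8 is an acceleration (m/s^2); force scales with mass.
g = 9.8                      # m/s^2, acceleration at the earth's surface
for m in (1.0, 70.0):        # kg: unit mass, and a typical person
    print(f"mass {m:5.1f} kg -> force {m * g:6.1f} N")
```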

A billiard ball can knock another billiard ball: matter acting on matter. Distance is just an aspect of matter, as matter is just an aspect of distance, within the same continuum. For a large mass to act on the geometry of distance, bending, pushing, squashing or stretching it, is nothing more than matter acting on matter. What LIGO has done is very important. It has not so much proven Einstein right as proven he was close to the mark but, if the labels are placed correctly, partly wrong. With the labels in the right place, it can be seen that the distortion detected at LIGO is not a gravity wave; it is the detection of a distortion in the matter-distance-time (MDT) universe we occupy. Any gravitational effect is "push-back" or "resistance" from ubiquitous Space: a by-product of the distortion, a response from Space. The reason this cannot be seen is that Einstein mistakenly labels distance as Space in his concept of Space-Time, or Relativity Theory.
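For a sense of how small the distortion LIGO measured actually is, and hence how stiff the medium it measured behaves (a point I return to below), here is the standard arithmetic on LIGO's published figures for its first detection:

```python
# Scale of the distortion LIGO measured (public figures for GW150914;
# the comparison against a proton is my own, for scale).
strain = 1.0e-21            # peak dimensionless strain, h = dL / L
arm_length = 4.0e3          # m, length of each LIGO arm
proton_diameter = 1.7e-15   # m, for scale

dL = strain * arm_length
print(f"arm length change: {dL:.1e} m")
print(f"that is {dL / proton_diameter:.1e} proton diameters")
# ~4e-18 m, roughly a thousandth of a proton's width: whatever is
# being distorted, it distorts by almost nothing.
```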

Matter-Distance-Time (MDT)
The diagram shows that mass, gravity and acceleration are one and the same and do not emanate from our dimension. According to Einstein's distortion geometry, gravity, or the bending of Space-Time, is what pulls the blue object down. This is incorrect, because Einstein mistakenly refers to the distortion geometry itself as "Space" or "Space-Time", a critical perception-based misdirect in physics. Shown in red, gravity emanates from ubiquitous Space as a form of resistance to Einstein's distortion, creating a push, or acceleration, demonstrating that the two are not one and the same. This distinction is critical and can only be made by correcting Einstein's model.

The diagram above shows that mass, gravity and acceleration do not emanate from our dimension. Correcting Einstein's model shows that they are all created by the same force and, since that force emanates from Space, it is outside our dimension. Consequently, as I mentioned earlier, objects have no actual or substantive volume or mass, and therefore no genuine weight. Mass and time are useful to the experiential Universe, but they are not practical or efficient to the mechanics by which the Universe is created (that is, the operational Universe). It is not scientifically practical for matter or objects to be of excessive volume or weight, for primitive, top-view "Einsteinian Space" to be of great "distance", or for time to be of burdensome duration; these will all inevitably come to be seen as very crude ways of understanding the Universe and the physics that applies to it.

Why do I keep belabouring this point? Thus far, very little is known about Space. We tend to think we know a great deal about it because Einstein took distance and called it Space. This is forgivable: it is a perception-based error anyone can make, one belonging to an age in science. It has had immense repercussions in physics but, as with any subject of importance, changes in perspective bring new ways of approaching the same ideas.

If distance and Space are completely different things, then the push effect depicted in red in this diagram does not have to be caused by distortion geometry in our matter-distance-time (MDT) universe. LIGO has proven that our MDT universe is very stiff, or inflexible. Geometric distortion, as a technology or means of creating gravity, would require tremendous amounts of power to generate the resistance that induces the acceleration we observe as gravity; the sketch after this paragraph gives a rough sense of the energies involved. That gravity does not emerge directly from distortion geometry but from Space, which is outside our MDT universe, together with the stiffness of that universe, provides a very simple explanation for why gravity is experienced as such a weak force. But if gravity obtained through geometric distortion is merely a by-product, why use this very difficult, extremely weak, nearly impossible route to manipulate it? Why not boldly go where no one has gone before, straight to the source of gravity, namely Space? Tiny manipulations of Space could induce a much larger push effect on matter-distance-time.

Finding Space, and understanding what it is and how it works outside of matter, distance and time, is the frontier physics needs to delve into. But you cannot look for something you believe you have already found; you simply stop looking, which is the tragedy. Today we point to distortion geometry and call it "Space" or "Space-Time" when in fact we have mislabelled, and therefore have not yet found, the thing we speak of, inevitably misleading ourselves. When this happens physics stops moving forward in leaps and bounds, because it is trapped in a theory loop caused by a misdirect. This is why I keep belabouring and stressing the need for science to correct Einstein's model.
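To put numbers on "tremendous amounts of power": GW150914 is publicly reported to have radiated roughly three solar masses of energy as gravitational waves, yet produced only the sub-proton arm displacement computed earlier. A minimal sketch of that disproportion:

```python
# Energy radiated by GW150914 versus the strain it produced at earth
# (published figures; the juxtaposition is the point being made above).
M_sun = 1.989e30      # kg, solar mass
c = 2.998e8           # m/s, speed of light
E_radiated = 3.0 * M_sun * c**2   # ~3 solar masses converted to waves

print(f"energy radiated: {E_radiated:.1e} J")    # ~5.4e47 J
print("strain at earth: ~1e-21 (see the LIGO sketch above)")
# An almost unimaginable energy budget for an almost unmeasurable
# distortion: distortion geometry is an extravagant route to a push.
```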


Recent Media to Watch:

This interesting documentary, released recently (January 2019), concurs with my analysis. It is called Einstein's Quantum Riddle. The Institute for Advanced Study is, in the documentary, getting closer to the truth (at 46:34 in the video). When Robbert Dijkgraaf (Director of the Institute for Advanced Study and Leon Levy Professor) talks about correcting Einstein's model or understanding of the universe, he implies that Space-Time is actually incorrectly interpreted by Einstein. The Institute is absolutely right in the sense that when Robbert Dijkgraaf talks about removing "Space and Time" altogether, what he actually alludes to is removing the concept of "Distance, and therefore Motion" (as they are conventionally understood) from Einstein's interpretation. Space and Distance are separate and distinct, as are Time and Motion, and the general mistake Robbert Dijkgraaf remarkably corrects by removing Space-Time (sic Distance-Motion) is Einstein's assumption that Space and Distance are one and the same, the very assumption that made it impossible for Einstein to complete Unified Field Theory, which, with this problem resolved, it should now be possible to do. It is certainly interesting to see that Space can exist independently of, and irrespective of, Distance, creating a "Holographic Universe" in which spooky action at a "distance" becomes somewhat redundant once distance is removed, hypothetically leaving a universe consisting purely of quantum entanglement. This is a great documentary, and it is gratifying to see analysis I made many years ago proving correct today.

Notes:

Space and Time

According to Einstein's model, when you get up in the morning and get dressed, then go to the kitchen for breakfast, then stop in the living room to catch the news on TV, you have been to three different rooms at three different times: the bedroom at 7am, the kitchen at 8am and the living room at 9am. According to my theory, however, you woke up and got dressed, had breakfast and watched TV in one location, one frame, one dimension. Imagine you were watching these events on your TV: they would all take place in one location, namely the TV screen in front of you. The bedroom, kitchen and living room are in fact in the same location, frame or dimension. If a physicist calculated what you did based on the distance between each room and the time it took to move from one room to the next, all those calculations would, in a sense, be baloney, because you never actually moved to get from one room to the next. You were in the same place the whole time, and therefore time itself, as you may have been taught to understand it, did not elapse.


A Final Conclusion: Economics and Theoretical Physics [July 2020]

I can conclusively say I have broken the seals, so to speak, on two important areas where modern science has to date failed to deliver conclusive results. The first is the inability of economics, business, accounting and finance to intrinsically identify the cause of poverty and provide a solution to ending it. The other is the inability of physics, and the sciences in general, to explain and provide a working model, or mechanics, of a system able to deploy and harness gravity. Even though I may try to play it down, I am glad to say that in this month of July I have, successfully and beyond reasonable doubt, accomplished both these tasks. The arguments in the writing above demonstrate that I have spent many years trying to get to the root of these problems and of the knowledge paradigms in which they were embedded; this month therefore represents a personal triumph, and I feel at peace. Gravity is the most powerful force in the universe where humanity's physical existence is concerned, but scarcity is the most powerful where the resources humanity needs for its well-being are concerned. That the sciences were unable to provide conclusive answers to these problems was a troubling issue for me, raising many questions about inconsistencies and inadequacies in knowledge and about ascribed intellectual limitations. These were perception-based problems, the kind, it seems, that are most difficult even for the most astute minds, because they require counter-intuitive processes to unravel the mysteries that cloud the path to accurately determining their truths.


[1] Punabantu, Siize (2004) "Time", revision of Punabantu, S. (Nov 2003) "African Time", Post Newspaper.
[2] Einstein, Albert (1920) "Ether and the Theory of Relativity", address delivered on 5 May 1920 at the University of Leyden.
[3] Ibid.
[4] Op. cit.
[5] Chapman, Jeremy (2010) "Relativity and Black Holes: The Beginning Becomes the End Becomes the Beginning: A Study of Cosmological Birth and Death".
[6] Folger, Tim (2007) "Newsflash: Time May Not Exist".
[7] Wikipedia (2010) "Wave–particle duality".