Developing a comprehensive model of the early universe that describes events and conditions prior to recombination has proved difficult. Using a new approach, we express Heisenberg’s uncertainty principle in terms of measures and counts of those measures to obtain an expression consisting entirely of counts. The description allows us to resolve explicit values for discrete measures. With these values, we present new expressions describing the earliest epoch and the transition event that initiates expansion. We determine the quantity, age, density, and temperature of the cosmic microwave background (CMB). Moreover, we approach the CMB power spectrum anew, describing each mass/energy distribution, its physical significance, its peak temperature, and the effects of relativity. We do not engage in fitting or modification of the existing laws of physics. The approach is classical and correlates both quantum and cosmological phenomena with descriptive expressions that are measurable, verifiable, and falsifiable.

The cosmic microwave background (CMB) has offered a significant amount of data with which to understand the processes, conditions, and events that make up the earliest epoch of our universe. In particular, measurements of the CMB may be presented as a power spectrum revealing five bell-like curves, each describing physical traits of our universe [

We propose an approach that allows us to answer these questions without modifying the known laws of nature or invoking untested hypotheses. The presentation rests upon evidence presented for the physical significance of fundamental units of measure [ ]. Measures are expressed as counts of those units, e.g., a count n_{L} of fundamental units of length l_{f}. The approach and nomenclature are referred to as measurement quantization (MQ).

MQ is neither a new theory nor conjectured insight. The results presented are built by applying MQ to our existing understanding of classical mechanics. Recognizing the physical significance of MQ expressions enables us to unravel relationships that underlie the laws and constants of our universe. As such, we may describe the history of the universe from the earliest epoch to the present. We resolve the conditions that lead to the CMB and its present-day properties—quantity, age, density, and temperature.

We should emphasize, the presentation is not subject to one set of measurement data or restricted to one field of science (i.e., MQ may be used to resolve measurable values in optics, gravity, energy, particle physics, and cosmology). The presentation is also not one of generalizations, such as greater energy, lesser volume, and invariant correlations of broadly defined physical properties. Rather, we use MQ to describe measurable values that can be verified or falsified. By example, we resolve peak temperatures and corresponding multipole moments of each abscissa and ordinate of the power spectrum representation of the CMB—the peak multipole moments of each distribution, their relationship, and their physical significance. We use MQ to describe the rate of universal expansion—Hubble’s constant—its correlation to a physically significant discrete measure, the effects of relativity between epochs, and the physical conditions and events that end the earliest epoch and initiate expansion. The MQ approach also allows for expressions describing the physical properties of spacetime, what is curved, and why it is curved. Moreover, it allows us to resolve greater cosmological questions such as whether the universe is flat, open or closed.

The approach begins with an analysis of Heisenberg’s uncertainty principle written in MQ form. We shall also make use of the Pythagorean Theorem, observations of the speed of light, and the expression for escape velocity. We need no additional observations or laws of nature with which to proceed.

We remark that the presentation has several ancillary properties, among them a lack of conjectures and axioms other than those fundamentals of modern theory we attribute to classical mechanics. The expressions are largely temporal in approach and geometric in presentation. More importantly, some properties of the CMB are shown to be correlated with the quantization of measure with respect to the frame of the observer. Finally, we adjust the expressions for the effects of relativity, which are a consequence of the time dilation between the earliest and present epochs.

From a broader view, this research has little effect on the existing body of literature discussing recombination or expansion. Rather, the events described herein span the initial formation of a quantum singularity and the time elapsed thereafter, up to the trigger event that initiates expansion. We refer to this period as the quantum inflationary epoch. The epoch is terminated by a trigger event, a physical process to be described, but otherwise summarized as a consequence of discrete measure applicable to quantum singularities. Once the expansionary epoch begins, temperatures cool to approximately 3000 K over a period of 2.7 years. Finally, we arrive at more thoroughly studied periods of our universal history such as recombination, decoupling, and the dark ages.

We shall spend most of our time on the first two periods. For instance, we demonstrate that a period of faster-than-light expansion is not needed to explain the present-day heterogeneous and homogeneous properties of our universe. That said, MQ does describe a lengthy quantum expansionary period that accounts for the conditions for which inflation theory is conjectured. MQ also offers a physical description of the CMB power spectrum not as a phenomenon consisting of mass and energy, but as mass that has differing physical characteristics with respect to elapsed time and relative distance (i.e., what is measured presently, what will be measured with elapsed time, and what cannot be measured because of the expansion of space). We refer to these geometries as temporal properties of observation. Both the ΛCDM and MQ approaches make parallel predictions, but the MQ approach offers a clearer understanding of the physical characteristics of each distribution: dark energy, dark matter, and visible matter [

Over the last two decades, research into the early and present universe has focused on the measure of the CMB as is commonly presented in a graph of temperature versus the multipole moment. Many features of our universe have been redefined as some aspect of this data, such as whether the universe is flat, open or closed—a flat universe being consistent with a dark energy multipole moment around 220 [

To ground the reader as to the accomplishments thus far, we briefly discuss some of the highlights and details of the approach. Before we begin, we bring to the reader’s attention that this paper and the efforts of ΛCDM share little common ground. That is, MQ describes the earliest epoch from singularity up to recombination. Whereas there are expressions that describe the ensuing expansion following the trigger event that ends the quantum inflationary epoch, the physical processes of recombination, generally the birth ground of ΛCDM research, are not in the same physical regime as MQ. Comparing the two is akin to using electromagnetism to describe fluid dynamics. Not only are the laws that govern each discipline largely unrelated, but rarely would one attempt to correlate them. Although MQ provides a physically significant understanding of the power spectrum of the CMB, this is not to say that it is the whole story. The power spectrum is only partly a story of the temporal properties of observation; it reveals a physical process from which we have yet more to learn.

To that end, we broadly note that ΛCDM describes the power spectrum as a consequence of two physical processes. The first effect concerns the inward pull of gravitation with respect to the outward pressure of the baryon/photon plasma. There exists physical support that the early universe consisted of baryonic matter and photons interacting in a gravitational potential constrained by dark matter. The correlation to dark matter exists only with respect to the 26.8% mass/energy distribution identified as such. Over time, mass concentrations mutually attract via gravity, but as concentrations increase, so does pressure, which causes repulsion [

The second effect concerns the field of acoustics and the correlation of that discipline to measurements of the sound horizon. One may use an expansion of Laplace’s spherical harmonics to decompose the density field and then examine a single mode. Although inflation produces compression/decompression oscillations of equal magnitude, models suggest that with elapsed time the oscillations separate into the known distributions (for example, dark energy, dark matter, visible matter). This latter phase is characterized by falling temperatures because of the expansion of space. As temperatures drop to 3000 K, a period of neutral hydrogen is allowed (i.e., recombination). Thereafter, photons travel freely throughout the universe, which in turn freezes the CMB oscillations that we measure today.

As a final piece to this chronological puzzle, there is no measurable mechanism to address the homogeneous observations of the CMB temperature. This is known as the horizon problem [ ]. Proposed resolutions conjecture an expansion by a factor of roughly 10^{23} within 10^{−34} seconds; values vary depending on the model. Short, initially rapid, expansionary periods are more broadly viewed as pertaining to the inflation period and are recognized as a placeholder between two epochs of better-known physical conditions.

With regards to gaps in the historical record, there are periods with a very high level of physical correspondence to theory whereas there are other periods during which an experimental measure has not yet been devised. For the latter, there are placeholders—conjectured evolutionary periods—that fill the gaps between better known periods. It should be noted that placeholders grounded in physically significant endpoints are nevertheless conjectures regarding the physical conditions between those endpoints. The lack of detail overall is such that we fail to achieve the goal of an underlying physical mechanism that explains the sequence of events [

Perhaps from the earliest of human endeavor, we have attempted to quantify an understanding of measure. What is it? What are its underpinnings? Is it possible to define measure by something more fundamental? What does measure have to do with our understanding of the power spectrum of the CMB or the history of our universe?

Measure serves a role not dissimilar to equations, a tool of the trade. The three measures (length, duration, mass) are used to describe the properties of … well … most everything. As such, we cannot understand the power spectrum until we understand measure. Seemingly, this should be an unnecessary place to begin if it were not for the fact that we know so little of measure. We, for instance, cannot answer the questions: Does measure exist outside of the observable universe? Why are the three measures related? Are there phenomena in the universe that have properties with no correspondence to measure?

A more mathematical approach to these questions may be furthered in consideration of the Pythagorean Theorem, this being the simplest expression with which we may describe the measure of length. There are three terms associated with measure, a count of a reference phenomenon n_{La} = 1, a known count of that reference n_{Lb} and an unknown count of the reference n_{Lr}. Whether one is using SI units, counts of fundamental units or any other measurement nomenclature is unimportant. What is notable is that there are three terms, each representing a physically distinct measure with respect to an observer. The theorem brings to our attention that, at a minimum, the phenomenon of measure is a composite of information with respect to three distinct measurement frameworks.

Likewise, one may argue that the reference and the known count may be known simultaneously with respect to the frame of the observer, but an equivalent argument may be made that a count measure cannot be known without a reference, thus complicating the prerequisite of simultaneity (the phenomenon of measure is one that exists in the present). Of equal difficulty, using modern EM-echo ranging (LIDAR), it may be argued that the measures n_{La} and n_{Lb} can be simultaneously known by an observer, sidestepping the challenge of consolidating the distinct measurement frames where and when n_{La} and n_{Lb} are resolved. Nonetheless, this introduces time, and whereas this approach offers the hope of consolidation with respect to the observer’s frame, it merely introduces an alternative dimension, time, with a new uncertainty: the physical correlation of time to length. Finally, the physical process by which information is consolidated (i.e., the comparison of a reference to an unknown length) is not defined. When discussing an observer’s frame, we mean to resolve a description of the physics applicable at a given point in space at an instant in time t_{f} with respect to the consolidated information.

Thus, with respect to the evolution of modern theory, we ask: what are the three frames? Modern theory recognizes two, that of the target and that of the observer. However, as we have demonstrated, the phenomenon of measure is a composite of three, and it is our inability to quantify this prerequisite and then clarify its ramifications with respect to the laws of nature that underpins the claim that we lack an understanding of measure.

To lay the foundation for the birth and evolution of our universe, we must begin with measure. We approach the topic in a way that does not fix or assume specific properties but allows for enough variation that we may resolve quantifiable properties. This is accomplished with a nomenclature that divides measure into two components. Specifically, for every measure … for example, time … we shall not write the symbol t. Rather, we express time as a count n_{T} of some fundamental measure t_{p}. Naturally, this opens the door to the possibility that time is countable or that there exists a physically significant fundamental unit of time, but not necessarily. The new nomenclature only introduces the possibility of discreteness, yet leaves the door open to non-discreteness. Whether measure varies, is bounded, has physical significance or is entirely meaningless must be resolved from the physical record. Indeed, that is where we begin.

Consider, then, that the speed of light may be described as a count n_{L} of length units l_{p} divided by a count n_{T} of fundamental units of time t_{p}, that is, c = n_{L}l_{p}/(n_{T}t_{p}), such that

n_{L} = n_{T}. (1)

For guidance only, we also consider Planck’s unit expressions [ ] for length l_{p} and mass m_{p},

l_{p} = (ℏG/c^{3})^{1/2}, (2)

m_{p} = (ℏc/G)^{1/2}, (3)

both of which serve as reasonably accurate dimensional realizations. Then, for c = l_{p}/t_{p} and the above two expressions, we resolve that the product of their squares is

l_{p}^{2}m_{p}^{2} = (ℏc/G)(ℏG/c^{3}) = ℏ^{2}/c^{2}, (4)

ℏ = cl_{p}m_{p} = l_{p}^{2}m_{p}/t_{p}. (5)
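Equations (2)-(5) can be checked numerically. The following is a minimal sketch, assuming standard CODATA-style values for ℏ, G, and c (these constant values are assumptions from standard references, not taken from the text):

```python
import math

# Assumed CODATA-style constants (SI units), not from the text.
hbar = 1.054571817e-34  # reduced Planck constant, J s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 299792458.0         # speed of light, m s^-1

# Planck length and mass, Equations (2) and (3)
l_p = math.sqrt(hbar * G / c**3)
m_p = math.sqrt(hbar * c / G)
t_p = l_p / c

# Equation (4): (l_p m_p)^2 = (hbar/c)^2
assert math.isclose((l_p * m_p)**2, (hbar / c)**2, rel_tol=1e-12)

# Equation (5): hbar = c l_p m_p = l_p^2 m_p / t_p
assert math.isclose(hbar, c * l_p * m_p, rel_tol=1e-12)
assert math.isclose(hbar, l_p**2 * m_p / t_p, rel_tol=1e-12)
```

The identities hold to machine precision for any consistent values of ℏ, G, and c, since they are algebraic consequences of the definitions.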

Using Heisenberg’s expression to describe the uncertainty associated with the position σ_{X} and momentum σ_{P} of a particle,

σ_{X}σ_{P} ≥ ℏ/2, (6)

we may resolve physically significant values for n_{L}, n_{M}, and n_{T}. The uncertainty principle asserts a limit to the precision with which certain canonically conjugate pairs of particle properties may be known. However, this differs from our goal of resolving the certain minimum measurements of a particle at the threshold ℏ/2. Therefore, we introduce a special case of the use of variances.

Whereas the expression for variance is usually written to describe the certain properties of many targets, we modify this usage to describe the certain properties of many measurements whereby the measurement, whether applicable or even physically significant, is uncertain. With this understanding, we then consider the solution for only the minimum count values for length, mass, and time such that the conjugate pair is equal to the threshold at ħ/2; that is,

(∑_{i=1}^{N}(X_{i} − X̄)^{2}/(N − 1))^{1/2}(∑_{i=1}^{N}(P_{i} − P̄)^{2}/(N − 1))^{1/2} = ℏ/2. (7)

One might argue that the substitution of the variance of a physical quantity for the uncertainty of that quantity is not physically clear. In response, we consider that the canonically conjugate pair that we seek to resolve is a certain value, identified by the conjectured set of minimum count values for length, mass, and time. That is, we distinguish the uncertain state of a particle from the certain state we seek to describe. We accomplish this by considering the minimal case of a variance of many certain measures sorting out each dimension separately.

To the extent that the minimal count N is reducible to a certain measure describing a single particle, we consider measures when N = 2. The variance terms for position and momentum reduce such that there is a certain length l = ((X_{i} − X̄)^{2}/1)^{1/2} corresponding to the variance in X and a certain momentum mv = ((P_{i} − P̄)^{2}/1)^{1/2} corresponding to the variance in P. We write each term in the MQ nomenclature, i.e., l = n_{Lr}l_{p} and mv = ml/t = n_{M}m_{p}(n_{L}l_{p}/(n_{T}t_{p})). Note also that the count n_{L} for the change in velocity is distinct from the position count n_{Lr}, the latter describing the distance between the observer and the particle. We have

(n_{Lr}l_{p})^{2}(n_{M}m_{p}n_{L}l_{p}/(n_{T}t_{p}))^{2} = (ℏ/2)^{2}, (8)

With these constraints, it follows that the minimum count values at the threshold ℏ/2 correspond to a minimum distance n_{Lr}l_{p} and a momentum consisting of a minimum mass n_{M}m_{p}, a minimum length n_{L}l_{p}, and a minimum time n_{T}t_{p}. Replacing the value ℏ with the result from Equation (5), then

(n_{Lr}l_{p})(n_{M}m_{p}n_{L}l_{p}/(n_{T}t_{p})) = l_{p}^{2}m_{p}/(2t_{p}), (9)

2n_{Lr}n_{M}n_{L} = n_{T}. (10)

Notably, the reference measures cancel out, leaving a description that consists of only count terms. In MQ, we recognize such descriptions as geometric relations. That said, the result does not imply that the fundamental units of measure are physically significant or that the counts are integers. Resolving the count values requires that we identify additional constraints, starting with a description of G composed exclusively of Planck units. Dividing Planck’s mass by Planck’s length from Equation (2) and Equation (3), we then have

m_{p}/l_{p} = (ℏc/G)^{1/2}(c^{3}/(ℏG))^{1/2} = c^{2}/G, (11)

G = c^{2}l_{p}/m_{p} = (l_{p}^{2}/t_{p}^{2})(l_{p}/m_{p}) = l_{p}^{3}/(t_{p}^{2}m_{p}) = l_{p}^{3}t_{p}/(t_{p}^{3}m_{p}), (12)

G = (l_{p}^{3}/t_{p}^{3})(t_{p}/m_{p}). (13)

A final constraint, the upper bound relation between length and mass counts, may be resolved by considering the expression for escape velocity. Using the expression for G at the bound v = c, such that r = n_{Lr}l_{p} and M = n_{M}m_{p}, then

v = (2GM/r)^{1/2}, (14)

c^{2} = (2/(n_{Lr}l_{p}))(l_{p}^{3}/t_{p}^{3})(t_{p}/m_{p})n_{M}m_{p}, (15)

n_{Lr} = 2n_{M}. (16)

Given 2n_{Lr}n_{M}n_{L} = n_{T} [Equation (10)] and n_{L} = n_{T} [Equation (1)], then

2n_{Lr}n_{M} = 1. (17)

Moreover, with n_{Lr} = 2n_{M}, then

2(2n_{M})n_{M} = 1, (18)

n_{M}^{2} = 1/4, (19)

n_{M} = 1/2. (20)

This count value describes the lower count bound to the measure of mass with respect to an observer. This does not mean that phenomena may not have smaller masses, only that a mass less than m_{f}/2 may not be measured with greater precision. Returning to 2n_{Lr}n_{M}n_{L} = n_{T} [Equation (10)] such that n_{M} = 1/2 and reducing with n_{L} = n_{T} [Equation (1)], then

2n_{Lr}(1/2)n_{L} = n_{T}, (21)

n_{Lr} = n_{T}/n_{L} = 1. (22)

Finally, where both n_{L} and n_{Lr} describe the phenomenon of length and n_{L} = n_{T} [Equation (1)], then

n_{Lr} = n_{L} = n_{T} = 1. (23)

Thus, we may state that each of the counts is physically significant, describing a lower threshold to measure.

O_{1}: There are physically significant fundamental units of measure: length, mass, and time.

The mathematical approach taken makes no assumptions about the relationships between measures, the discreteness of measure or the physical significance of measure. Our ability to correlate a physically significant phenomenon with discrete counts of reference measures is entirely an outcome of our existing understanding of light, the uncertainty principle, and the escape velocity.
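The chain of constraints from Equation (10) to Equation (23) can be checked mechanically. The following is a minimal sketch using exact rational arithmetic, assuming only the three count relations derived above:

```python
from fractions import Fraction

# The three constraints derived in the text:
#   Eq. (10): 2 * n_Lr * n_M * n_L = n_T
#   Eq. (1):  n_L = n_T
#   Eq. (16): n_Lr = 2 * n_M
# Substituting (1) and (16) into (10) gives 4 * n_M**2 = 1, hence n_M = 1/2.
n_M = Fraction(1, 2)      # Eq. (20)
n_Lr = 2 * n_M            # Eq. (16), which evaluates to 1
n_L = n_T = Fraction(1)   # Eqs. (22)-(23)

# All three constraints hold simultaneously with these values.
assert 2 * n_Lr * n_M * n_L == n_T   # Eq. (10)
assert n_L == n_T                    # Eq. (1)
assert n_Lr == n_L == n_T == 1       # Eq. (23)
```

Exact fractions avoid any floating-point ambiguity in verifying that n_{M} = 1/2 and the remaining counts equal one.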

Having established the physical significance of a lower threshold to measure, consider now a macroscopic measure (i.e., any distance greater than the reference l_{p}). For instance, consider a stick 10.00l_{p} in length and another 10.25l_{p} in length. Can the difference,

10.25l_{p} − 10.00l_{p} = 0.25l_{p}, (24)

be measured? No. A difference length is physically the same as any other length and with respect to the Heisenberg uncertainty principle, difference lengths less than the reference l_{p} (i.e., n_{L} = 1) cannot be measured. Thus, we may resolve that all macroscopic length measures may be observed only as precisely as a whole-unit count of the reference measure.

Although the above extensible result is definitive for the entire measurement domain, let us consider one more approach, a difference greater than l_{p} such that one stick is 10.00l_{p} and the other is 15.25l_{p}.

15.25l_{p} − 10.00l_{p} = 5.25l_{p}. (25)

In this case, the difference measure is physically significant. However, to argue that measure is non-discrete is valid only if this measure is also different from a whole-unit count, that is, five units of the reference. To test, we again compare the two lengths,

5.25l_{p} − 5.00l_{p} = 0.25l_{p}. (26)

This case is the same as the first. Thus, all measures are physically significant only for a whole-unit count of the reference. We may then recognize that:

O_{2}: The fundamental measures are discrete and countable.

O_{3}: The fundamental measures each define a reference.
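The whole-unit-count argument of Equations (24)-(26) can be sketched as follows. The helper `measurable_count` is hypothetical, introduced only to illustrate the claim that any fractional remainder below one reference unit falls under the measurement threshold:

```python
import math

# Hypothetical helper (not from the text): only whole-unit counts of the
# reference l_p are measurable; any fractional remainder is unresolvable.
def measurable_count(length_in_lp: float) -> int:
    return math.floor(length_in_lp)

assert measurable_count(10.25) == 10         # the 0.25 l_p remainder is lost
assert measurable_count(15.25 - 10.00) == 5  # Eq. (25): 5.25 resolves to 5 units
assert measurable_count(5.25 - 5.00) == 0    # Eq. (26): 0.25 l_p remains unmeasurable
```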

Thus far we have demonstrated that measure has a lower threshold and that all measures are confined to a whole-unit count of some reference measure. We have not identified a means to confirm the number of physically significant measurement frameworks necessary for measure. We have only established that measure with respect to the observer is discrete.

To further our observations, we bring to the reader’s attention that measure is a property of references. Given that the leading edge of the universe expands at the speed of light, the universe can have no external reference. Therefore, the property of measure with respect to the universe must be non-discrete.

This may be summarized as follows. The universe has non-discrete measurement properties because there exists no external reference to it. Indeed, for all observers in the universe, matter is and can only be a discrete count of physically significant fundamental units of measure.

O_{4}: Measure with respect to the observer is discrete.

O_{5}: Measure with respect to the universe is non-discrete.

To develop its mathematical description, we describe the discrete and non-discrete measurement frames of reference as frameworks. To demonstrate a physically significant property of matter correlated to these frameworks, we devise an experiment described by three frameworks, one of which has the property of non-discrete measure and the remaining two of which are discrete. The experiment also carries two design prerequisites. First, the design must not introduce additional measures such as angles. Second, all information necessary for measure must be available to the observer at every instant in time. With these prerequisites, the three frameworks are:

· A discrete framework describing where properties of the reference are observed ( AB ¯ : the observational framework).

· A discrete framework describing where count properties of the reference are observed ( BC ¯ : the measurement framework).

· The non-discrete framework describing the observed phenomenon ( AC ¯ : the target framework).

For clarification, all three frameworks are described in

With this understanding, we then recognize that the design must resolve how information regarding the count value of the measurement framework is obtained by an observer relative to the observational framework. We also reiterate that the design must allow for a singular expression that correlates all three frameworks.

We propose, then, a system consisting of a grid of points that are a fixed count of a reference measure in separation (along the shortest axis). There must be enough points to form at least one square such that each hypotenuse of the square is also equal in separation. To ascertain initially the distance between any two points, we propose that at each point a laser pulse rangefinder is used along with the time-of-flight principle to ascertain whether each of the axes is equal in distance, as agreed upon in advance of setting up the experiment. In this way, we may ascertain whether the angular measure for each point is either along a line or at 90 degrees (except for those points that follow a hypotenuse). The design does not require that we introduce angular measure into our understanding of the discrete and non-discrete properties of length measure. The experiment also does not initially incorporate time, as the experiment is performed only after it is set up.

Note that there are two frameworks, that of A and that of C, of which A certifies the length AB ¯ (the observational framework) and C certifies the length BC ¯ (the measurement framework). There is a third framework (the target framework), of which A, C, and the unknown length AC ¯ are members. Thus, we need only the presence of members A and C in the target framework to define all information in the system.

Using the Pythagorean Theorem, AC ¯ ≈ 1.414l_{p}. Given that only a discrete reference count of the measure of AC ¯ is permitted in the observational framework, we find the difference 1.414 − 1.000 = 0.414l_{p} to describe a physically significant property of the universe. What phenomenon this difference describes between the discrete frameworks of A and C and the non-discrete framework AC ¯ of the universe is the subject of the next section.

To achieve the goal—a measurable, verifiable, and falsifiable description of the CMB power spectrum—we proceed with a brief description of gravity. The sections to follow will be concise although more in depth discussions may be found in References [

We no longer need Planck’s unit definitions as a guide to measure. For one, the Planck expressions are approximate in that they do not take into account the measurement-skewing effects of discrete measures. For this reason, the MQ approach to measure is identified by the subscript f on variables, specifically l_{f} for length, m_{f} for mass, and t_{f} for time.

Continuing with our prior observations, note that

Importantly, a count of 1 on side a is prerequisite to any count along side b in resolving side c. If an argument were presented that side a was arbitrary (i.e., a = 2), we would find a description that assumes a reference of two units not explicitly incorporated into the definition of our reference. This presents a factor representation of the framework that conceals the discrete count properties we are attempting to describe. Thus, side a = 1 is prerequisite for all considerations of side b in any understanding of the unknown distance along side c,

c = (1 + n_{Lb}^{2})^{1/2}. (27)

We are now ready to resolve a difference between the discrete and non-discrete descriptions for this experiment. To describe the observer’s experience, we conjecture that any non-integer count of the reference along the unknown length AC ¯ relates to a change in distance and may be described by rounding up (repulsion) or down (attraction). The remainder lost to rounding is denoted by Q_{L}. For all solutions, Q_{L} is less than half and thus attractive, as evidenced by Q_{L}’s largest value of ≈0.414 when sides a and b are both 1. The model provides counts of distance measures that are closer by

Q_{L} = (1 + n_{Lb}^{2})^{1/2} − n_{Lb} (28)

at every instant in time t_{f}. For example, if n_{Lb} = 4, then Q_{L}/n_{Lb} = (17^{1/2} − 4)/4 = 0.1231/4. Because side c always rounds down, we find that n_{Lr} always equals n_{Lb}. Thus, we always refer to the observed measure count as n_{Lr}. Moreover, note that the reference measure against which all counts are measured is defined by n_{La} = 1. With this, we have composed an expression for gravity such that the loss of the remainder relative to the whole-unit count is Q_{L}/n_{Lr}.
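Equation (28) and the worked example above can be sketched as a short numerical check; the values below come directly from the text:

```python
import math

def Q_L(n_Lb: float) -> float:
    """Remainder lost to rounding, Equation (28): (1 + n_Lb^2)^(1/2) - n_Lb."""
    return math.sqrt(1.0 + n_Lb**2) - n_Lb

# Largest remainder: sides a and b both 1 gives sqrt(2) - 1, about 0.414.
assert math.isclose(Q_L(1), math.sqrt(2) - 1, rel_tol=1e-12)

# Worked example from the text: n_Lb = 4 gives Q_L = sqrt(17) - 4, about 0.1231.
assert math.isclose(Q_L(4), 0.1231, abs_tol=5e-5)

# Q_L stays below one half for all counts, so side c always rounds down.
assert all(Q_L(n) < 0.5 for n in range(1, 1000))
```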

Together, Q_{L} and n_{Lr} are conjectured to represent an important dimensionless ratio that describes gravity. We proceed with that hypothesis by presenting the ratio in meters per second squared (m·s^{−2}). We multiply by l_{f} for meters and divide by t_{f}^{2} to describe the distance loss at the maximum sampling rate of one sampling every t_{f} seconds, per second,

Q_{L}l_{f}/(n_{Lr}t_{f}^{2}). (29)

Note also that the quantity is scaled and hence requires a scaling constant. As we shall learn later, this scaling constant is fundamental to the relation describing the three measures. To proceed, we multiply by the speed of light c and divide by a scaling constant S. Setting r = n_{Lr}l_{f} and c = l_{f}/t_{f}, the expression reduces to

(Q_{L}l_{f}/(n_{Lr}t_{f}^{2}))(c/S) = Q_{L}c^{2}/(n_{Lr}t_{f}S) = Q_{L}l_{f}c^{2}/(n_{Lr}l_{f}t_{f}S) = Q_{L}c^{3}/(rS), (30)

Q_{L}c^{3}/(rS) ≈ G/r^{2}. (31)

This understanding of gravity arises as a difference between the discrete measure with respect to an inertial frame and the non-discrete measure with respect to the universe. Comparing the expression with Newton’s G/r^{2}, we see a decrease in distance between the two curves that is immeasurable beyond six significant figures for all distances greater than 2.247l_{f}. The difference may also be described as a function of Q_{L}n_{Lr}, a term that approaches 1/2 with increasing distance. As further described in Appendix A, we identify both the expression and the skewing effect arising from discrete measure as the Informativity differential.
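That Q_{L}n_{Lr} approaches 1/2 with increasing count can be verified numerically. The sketch below uses the rationalized form n/((1 + n²)^{1/2} + n), which is algebraically identical to n_{Lr}Q_{L} from Equation (28) with n_{Lr} = n_{Lb} = n, but numerically stable at large counts:

```python
import math

def QL_nLr(n: float) -> float:
    # Q_L * n_Lr = n * (sqrt(1 + n^2) - n), written in rationalized form
    # to avoid catastrophic cancellation at large n.
    return n / (math.sqrt(1.0 + n * n) + n)

# At n = 1 the product equals sqrt(2) - 1, about 0.414.
assert math.isclose(QL_nLr(1), math.sqrt(2) - 1, rel_tol=1e-12)

# The product rises toward, but never reaches, one half.
assert QL_nLr(10) < QL_nLr(100) < 0.5

# At macroscopic counts the product is 1/2 to high precision.
assert math.isclose(QL_nLr(1e8), 0.5, rel_tol=1e-9)
```

This is consistent with the text's substitution of Q_{L}n_{Lr} with 1/2 in the unexpanded forms introduced later.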

Discussed also in Appendix B, we replace the term S with θ_{si}, not because the measure is radians in all contexts, but to bring to the reader’s attention that the value θ_{si} = 3.26239 radians is constant in all physical contexts. After resolving several expressions, we shall return in Sec. H to discuss the unit analysis of expressions containing this constant.

With a definition for quantum gravity, we may now resolve physically significant values for the fundamental measures. How we resolve these measures is important. Specifically, the Informativity differential is a distance-sensitive skewing effect in the length measure. For distances of 10^{4}l_{f}, this effect is less than can be measured. However, the measure of ħ is a quantum property where the effect is significant. To the extent that Q_{L}n_{Lr} = 1/2 is acceptable for the measure of c and G, we need to avoid an approach that uses ħ.

For the purposes of these calculations, we recall that the units for θ_{si} are kilogram-meters per second (kg·m·s^{−1}). This is not the case for all described phenomena, an aspect that shall be addressed later. Thus, the values for each of the three measures, Appendix C, are:

l_{f} = 2Gθ_{si}/c^{3} = (2 × 6.67408 × 10^{−11} × 3.26239)/299792458^{3} = 1.61620 × 10^{−35} m, (32)

t_{f} = l_{f}/c = 2Gθ_{si}/c^{4} = (2 × 6.67408 × 10^{−11} × 3.26239)/299792458^{4} = 5.39106 × 10^{−44} s, (33)

m_{f} = t_{f}c^{3}/G = 2θ_{si}/c = (2 × 3.26239)/299792458 = 2.17643 × 10^{−8} kg. (34)
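Equations (32)-(34) can be reproduced directly from the constant values stated in the text; a minimal sketch:

```python
import math

# Constant values as used in Equations (32)-(34) of the text.
G = 6.67408e-11       # m^3 kg^-1 s^-2
c = 299792458.0       # m s^-1
theta_si = 3.26239    # the MQ scaling constant (kg m s^-1 in this context)

l_f = 2 * G * theta_si / c**3   # Eq. (32)
t_f = l_f / c                   # Eq. (33)
m_f = 2 * theta_si / c          # Eq. (34)

# Values quoted in the text, to the precision given there.
assert math.isclose(l_f, 1.61620e-35, rel_tol=1e-5)
assert math.isclose(t_f, 5.39106e-44, rel_tol=1e-5)
assert math.isclose(m_f, 2.17643e-8, rel_tol=1e-5)

# Consistency with the fundamental expression, Eq. (36): l_f m_f = 2 theta_si t_f
assert math.isclose(l_f * m_f, 2 * theta_si * t_f, rel_tol=1e-12)
```

The final assertion holds exactly in algebra, since l_{f}m_{f} = (2Gθ_{si}/c^{3})(2θ_{si}/c) = 4Gθ_{si}^{2}/c^{4} = 2θ_{si}t_{f}.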

There are two approaches that may be used to resolve the fundamental expression, the simplest expression that relates the three measures. One, we may use Equation (13) to replace G such that G = c^{3}t_{f}/m_{f}. Or two, we may solve the first and third expressions for G, set them equal, and reduce. In both instances, we find that

n_{L}l_{f}n_{M}m_{f} = 2θ_{si}n_{T}t_{f}, (35)

l_{f}m_{f} = 2θ_{si}t_{f}. (36)

Here, Equation (35) assumes all counts have a value of one. This differs from their minimum values, where n_{M} = 1/2. The fundamental expression does not describe the lower count bound of each dimension, but the correlation between them. Moreover, in MQ nomenclature, we often ignore the Informativity differential, presenting expressions as though we were describing a macroscopic phenomenon. In such cases, we substitute the value 1/2 for Q_{L}n_{Lr}. We refer to the result as the unexpanded form of the expression. Conversely, the expanded form of the fundamental expression is

l_{f}m_{f} = θ_{si}t_{f}/(Q_{L}n_{Lr}). (37)

Likewise, each of the fundamental expressions is affected by the Informativity differential and has an expanded counterpart. Calculation of the Informativity differential Q_{L}n_{Lr} does require several steps, but when describing quantum phenomena, especially at distances less than 2.247l_{f}, the precision is important. A more detailed description is offered in Appendix A.
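The agreement between the unexpanded and expanded forms at the macroscopic limit Q_{L}n_{Lr} = 1/2 can be verified numerically; a minimal sketch, reusing the values from Equations (32)-(34):

```python
# Sketch: check the unexpanded fundamental expression, Eq. (36),
# l_f * m_f = 2 * theta_si * t_f, using the values from Eqs. (32)-(34).
G, c, theta_si = 6.67408e-11, 299792458.0, 3.26239
l_f = 2 * G * theta_si / c**3
t_f = l_f / c
m_f = 2 * theta_si / c

lhs = l_f * m_f
rhs = 2 * theta_si * t_f
print(lhs, rhs)  # both ≈ 3.5175e-43

# Expanded form, Eq. (37): l_f * m_f = theta_si * t_f / (Q_L * n_Lr).
# At the macroscopic limit Q_L * n_Lr -> 1/2, the two forms coincide.
QLnLr = 0.5
assert abs(theta_si * t_f / QLnLr - lhs) < 1e-55
```

Algebraically, both sides reduce to 4Gθ_{si}^{2}/c^{4}, so the agreement is exact up to floating-point rounding.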

The three properties of measure—discreteness, countability, and physical significance with respect to the discrete and non-discrete measurement frameworks—may now be used to describe the earliest epoch of our universe and the transition that leads to the expansionary epoch we observe today. We no longer need additional concepts or experimental results to describe the constants or laws of nature other than the descriptions introduced so far. However, before we launch into an analysis of the CMB power spectrum, we need to characterize the expansion itself. We begin with expressions that correlate the expansion with the fundamental measures starting with the unexpanded form of the fundamental expression, which describes the expansion of the universe as a function of elapsed time,

m_{f} = 2θ_{si}t_{f}/l_{f} = 2θ_{si}n_{T}t_{f}/(n_{L}l_{f}) = (2θ_{si}/c)(n_{T}/n_{L}). (38)

The physical correlation can more easily be explained once the expressions are resolved. To begin, we approach this as a unity expression, describing a count of length units n_{Lu} and a count of time units n_{Tu} such that the percentage of the universe representing all the mass is Ω_{tot} = 1,

Ω_{tot} = (2θ_{si}/c)(n_{Tu}/n_{Lu}) = 1. (39)

This expression describes a specific case, that of the universe with its leading edge expanding at the speed of light. Now, taking_{f}, then the following is also true of our description of the universe,

Finally, with the leading edge of the universe expanding at the speed of light c, the rate of expansion of the universe H_{U} is constant with respect to the universe,

The values for fundamental mass m_{f}, the diameter D_{U} and age A_{U} of the universe are each correlated with

While there are multiple paths by which to approach a description of these phenomena, it might be argued that the expansion of the universe is in part assumed, based on a correlation between the fundamental expression and the rate of expansion. Treating this as a conjecture, it may equally be argued that the value of m_{f} is a prediction of that conjecture. To that end, we find the conjecture physically significant to five digits, constrained by the precision of our measure of the age of the universe and the comparison mass

It follows that with H_{U} a universal constant of rate 2θ_{si} m·s^{−1} per universe, the critical density of the universe

is an accurate description of the universe, correlated to its age and diameter. The constant rate of expansion also tells us that the universe is flat, neither accelerating nor decelerating.

We begin with a few commonly used terms in MQ. For one, we refer to the ratio l_{f}/t_{f} as the length frequency. The ratio describes the one-to-one count bound with respect to the fundamental units of length and time. Likewise, we refer to the ratio m_{f}/t_{f} as the mass frequency. There is no specific term used to identify the ratio m_{f}/l_{f}, but the ratio is also an important physical bound.

Another way to describe measurement frequencies is to express each bound as a rate. Doing so filters out the target dimension such that all measures are fixed and equal to the inverse of the fundamental time,
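Continuing with the measures resolved above, the length frequency reduces to c by construction, while the mass frequency evaluates to a fixed rate; a brief sketch:

```python
# Sketch: the "length frequency" l_f/t_f reduces to c, and the
# "mass frequency" m_f/t_f sets the upper bound on mass count per t_f.
# Values are taken from Eqs. (32)-(34).
G, c, theta_si = 6.67408e-11, 299792458.0, 3.26239
l_f = 2 * G * theta_si / c**3
t_f = l_f / c
m_f = 2 * theta_si / c

print(l_f / t_f)   # = c, by construction (m/s)
print(m_f / t_f)   # ≈ 4.04e35 kg/s, the mass-frequency bound
```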

In the same way that length frequency describes an upper bound to measure, we may also resolve macroscopic properties of our universe using the same nomenclature. For instance, length and mass frequency expressions that describe the universe may be resolved by taking the product of the age of the universe A_{U}, the radial system constant θ_{si} and the corresponding dimensional frequency. To the extent that the elapsed time with respect to the present is approximately _{T}, and the radius R_{U} are

Note that the use of θ_{si} reflects the rate of universal expansion as learned from the fundamental expression [Equation (37)]. Note also that, in this instance, θ_{si} carries no units. The dimensionless nature of θ_{si} when describing the universe is because the universe has no external reference. That is, θ_{si} serves in the capacity of a self-defining value (i.e.,

Applying the same operation, the fundamental mass—the upper count bound of the fundamental unit of mass m_{f} that may be observed per unit of t_{f}—is related to the fundamental mass of the universe M_{f} (not to be confused with the fundamental unit of mass m_{f}) by

The measure represents an upper bound to the measure of mass—the mass frequency—and is not directly correlated to what we might think of as visible or observable. When describing mass as a count of m_{f}, the fundamental mass is that mass that can be discerned at any moment in time. Mass in excess of the fundamental mass cannot be distinguished, placing an upper bound on the mass count permitted per t_{f}. Note that this does not place an upper bound on the mass we can observe over an interval of elapsed time.

An example is the upper bound on gravitational pull, which in part describes galactic orbital dynamics ( [

It may seem confusing that gravity is constrained to some subset of the mass that can be measured within a galaxy, but when we understand the difference between an instant in time (which corresponds to the visible mass) and what can be observed in an interval of elapsed time (the observable mass), we see how mass frequency describes an upper bound constraint on gravity that differs from the total mass of stars that make up a galaxy.

O_{6}: The laws of nature follow from what can be measured in the present, not from what may be measured in an interval of elapsed time.

To resolve a precise understanding of the fundamental mass and its relationship to the visible, observable, and total mass, we shall need a few more expressions. Rearranging the expressions for n_{T} and R_{U} such that_{f} may also be expressed as

Moreover, we may resolve the volume of the universe V_{U} using its radius R_{U} yielding

We shall also need the expression for critical density, a formal definition of the relationship between critical density and critical mass, and the MQ form of the gravitational constant from Equation (13).

To assist in the dimensional analysis, we use a capital M for a mass in kilograms but retain the same subscript as for the term written as a percentage of the total mass of the universe. For example, M_{obs} describes the observable mass in kilograms. Conversely, Ω_{obs} describes the observable mass as a percentage (i.e., 31.6%) of the total mass of the universe Ω_{tot}.

Note also that the temporal approach offered by MQ provides a different naming structure for mass distributions from that of modern theory. There is what may be measured presently Ω_{vis}, what may be measured given an infinite amount of elapsed time Ω_{obs}, the difference between these two values Ω_{uobs}, the unobserved, and finally that which can never be measured, as that mass exists at a point from which light will never reach the observer because of the expansion of space, i.e., dark mass Ω_{dkm}. It follows that

Note also that the dark mass distribution Ω_{dkm} is identified in modern theory as dark energy Ω_{Λ}. We do not use this term in MQ because we identify the distribution as mass that will never be observed due to the expansion of space. Likewise, Ω_{c} is identified in modern theory as dark matter. We do not use this term but instead identify the distribution as Ω_{uobs}. The unobserved mass is that mass that is not presently visible but becomes observable given an infinite interval of elapsed time.

There are two important values at work in the expressions that follow such that each is a percentage of the total mass distribution Ω_{tot} representing all the mass in the universe:

· Fundamental mass Ω_{f} is that percentage of mass that corresponds to the upper count bound of m_{f} that can be observed per t_{f} relative to the total mass distribution Ω_{tot} (i.e.,

· Observable mass Ω_{obs} is that percentage of mass that corresponds to the mass that may be observed given an infinite elapsed time, inclusive of the visible Ω_{vis}.

Moving forward, we conjecture that the ratio of twice what may be observed at a moment in time 2Ω_{f} over what may be observed in an infinite elapsed time Ω_{obs} is equal to the sum of what is observed in the moment Ω_{f} and the total in the universe Ω_{tot} = 1,

The relation was resolved with respect to the data, but we argue this presently as a conjecture by considering cases such as a static universe (i.e.,

Recall that the observable mass Ω_{obs} plus dark mass Ω_{dkm} represent all the mass in the universe and thus the dark mass is 1 − Ω_{obs}, assuming Ω_{tot} = 1. The right-hand side of the equality reduces to the form,

We may now focus on the right-hand side of the equality. Given the critical density of the universe, we multiply the critical density by the volume of the universe V_{U} to give the total mass of the universe M_{tot}. We then multiply this result by the observable distribution Ω_{obs} to resolve the observable mass M_{obs}. Combined with the expression above, we take advantage of the relation, replacing the distributions with the associated SI descriptions.

Thus,

Comparing Equation (71) and Equation (66), we see that

Because the sum of the observable Ω_{obs} and dark mass distributions Ω_{dkm} must equal one—the two measures account for all mass in the universe—we then combine

to resolve

Note that the dark mass distribution Ω_{dkm} matches what in current theory we identify as dark energy Ω_{Λ}. The match is not coincidental, just as it is not coincidental that the total mass in the universe is the sum of the observable and dark mass, or that the unobserved mass is the difference between the observable and visible (this being the value we associate with dark matter Ω_{c}). The distributions are as we have described from the outset: what is presently visible Ω_{vis}, the sum of what is and will be visible Ω_{obs}, the difference Ω_{uobs}, and the dark mass Ω_{dkm}. The distributions are a consequence of their temporal geometry, each correlated to the others.

Proceeding, the locally defined speed of the expansion at the outer edge of the universe is the speed-of-light_{tot} = 1. Similarly, the velocity equal to twice the radial expansion v_{U} is the ratio of the observable to visible distributions Ω_{obs}/Ω_{vis} with respect to the total. Thus, the visible distribution may be resolved as well,

Finally, the unobserved mass Ω_{uobs} is that mass which will be observable given enough elapsed time,

Note that we have approached a temporal understanding of mass in the universe but arrived at the same values we measure with respect to ΛCDM and what are more commonly understood as dark energy, dark and visible matter; see

In addition, in that each of these distributions is a function of θ_{si}—a constant value—this tells us that the distributions are invariant with respect to time. One may also notice the absence of the Informativity differential Q_{L}n_{Lr} from each of the distribution expressions. This absence reminds us that the distributions are geometric. If they were physically distinct mass phenomena constrained to a variable distance relation to the observer, then the Informativity differential would apply. Likewise, relativistic differences between mass moving at high velocities (i.e., distant in the past) and low velocities (i.e., close to the present) would apply. Such modifications would then skew these distributions away from the measured values we recognize today. None apply because the distributions are a consequence of the geometry.
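The distribution algebra described above can be sketched directly from the conjecture and its companion relations. A minimal sketch, assuming Ω_{tot} = 1; the input values for Ω_{f} and Ω_{vis} below are purely illustrative, not the paper's resolved values:

```python
# Sketch of the distribution algebra: the conjecture
# 2*Omega_f/Omega_obs = Omega_f + Omega_tot rearranges to
# Omega_obs = 2*Omega_f/(Omega_f + Omega_tot); dark mass is the
# remainder, and the unobserved mass is the observable less the visible.
def distributions(omega_f, omega_vis, omega_tot=1.0):
    omega_obs = 2.0 * omega_f / (omega_f + omega_tot)  # rearranged conjecture
    omega_dkm = omega_tot - omega_obs                  # never observable
    omega_uobs = omega_obs - omega_vis                 # observable, not yet visible
    return omega_obs, omega_dkm, omega_uobs

# Hypothetical inputs for illustration only:
obs, dkm, uobs = distributions(omega_f=0.19, omega_vis=0.05)
# The visible, unobserved, and dark classes partition the total:
assert abs(0.05 + uobs + dkm - 1.0) < 1e-12
```

Whatever the input values, the partition property holds, reflecting that the classes are defined by temporal geometry rather than fitted independently.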

A power spectrum plot of the CMB yields five curves that are recognized, from left to right, as dark energy, observable matter, dark matter, matter without identification, and finally visible matter. Every second curve is also recognized with respect to optical properties as a compression curve. Finally, we recognize that the spectrum is a plot of the temperature of the CMB with respect to multipole moments. Whereas the visible distribution is readily recognized as representing all the visible mass in the universe, dark energy and dark matter have remained elusive.

MQ, in contrast, can resolve a clearer understanding of this graph as a temporal distribution of the spectrum with respect to an observer. MQ differs in approach in that all the mass/energy in the universe is recognized at the outset of the problem, and the respective distributions are then classified according to that which can be measured in the present Ω_{vis}, that which can be measured given enough time Ω_{obs}, the difference between these two Ω_{uobs}, and that which will never be measured due to the expansion of space Ω_{dkm}. The descriptions match those of the standard model to the same precision as our best measurement data.

In the expressions that follow, a new distribution term, Ω_{v} is present. This distribution describes the fifth bell-like curve in the CMB power spectrum. Its physical description is in part known as an instance of decompression but is otherwise not discussed or associated with a specific phenomenon. In contrast, in Section 3.7, the physical meaning of M_{acr} is discussed, but for now it serves primarily as a convenient term that reduces the nomenclature. Using_{acr} + θ_{si}) and then resolve the value of each oscillation as the ordinate, thereby identifying the peak of each distribution. For example,

With the abscissa representing the multipole moment l, we then exhibit each l as a function of θ_{si}. As noted, the distributions are a temporal geometry, and as such the scaling is a function of π with an increasing exponent (_{obs} and Ω_{v} that demonstrate a specific, but not well understood, pattern;

Then, for consistency, the temperature value of the y-coordinate may be resolved using the factor

However, there is one final step. The presentation must be adjusted for the expansion of space,

The expression is further clarified by substitution using the fundamental expression_{si} with respect to the proper distance. The power spectrum is also a function of the multipole moment—a repeating function of 2π. Therefore, combining the radial expansion of space along with the optical properties of the power spectrum, we find

describes the skewing effects of measurement distortion in the left term of the unity expression. We must divide each x-value, except those of Ω_{dkm} and Ω_{tot}, by this expression to resolve the x-coordinate value. The MQ description matches to three significant figures, corresponding to the most precise values available in the measurement data.

The effect is not applied to Ω_{dkm}. As a temporal description of mass/energy, the Ω_{dkm} distribution has not and will never be measurable. Thus, it cannot be subject to the effects of measurement distortion. The total Ω_{tot} must be adjusted accordingly. Notably,

We bring to the reader’s attention the point that because MQ describes a period of quantum inflation over a period of 363,309 years followed by an expansion, this does not change the order of events in the calculation of the power spectrum during and at recombination. However, it does affect the initial conditions. First, the horizon problem is resolved without a faster-than-light inflationary period. Second, because the universal mass is accreting with elapsed time, we find that nearly all the mass accumulated up to the time of recombination constitutes the CMB we see today, and this is verified with a match to observational data to four significant figures [

Finally, we note some common correlations between the distributions:

Starting with Equation (60) and then using the substitutions above along with Equation (58), Equation (59), the temporal nature of each distribution provides a convenient means to resolve the relationship between any two distribution values or to correlate them all,

With respect to spatial curvature, there exist two notable applications, that concerning the universe and that concerning gravitation. Having completed a discussion of gravitation, we now discuss the phenomena of curvature that are cosmological in scale. Specifically, we discuss a flat universe as described with respect to a constant rate of expansion as presented in Equations (42)-(45). Along with the fundamental expression, then

The expression is corroborated with respect to multiple measurements, for instance, by gravitational curvature, by the predicted value of the fundamental mass m_{f}, by each of the CMB distribution values Ω_{dkm}, Ω_{obs}, Ω_{uobs}, and Ω_{vis}, and by the multipole moment of Ω_{dkm} as presented in Equation (81),

WMAP measurements of the CMB suggest that a value of around 220 is indicative of a flat universe, but a precise measure is difficult as the value is a function of several phenomena with additional uncertainty in the initial conditions. Conversely, the MQ expressions are a function of one measure, θ_{si}, providing for a precise and straightforward physical interpretation. Moreover, θ_{si} is a predicted value [Equation (C9)] matching the six-significant-figure measurements described by Shwartz and Harris in their 2011 paper regarding the quantum entanglement of X-rays at the degenerate frequency of a maximally entangled Bell state [

We also bring to the reader’s attention the fundamental expression that describes the one-to-one correlation between measure and expansion. The arguments for an expanding or contracting universe would impact our understanding of the relative values of the fundamental measures. Notably, this is not observed and there is a limited number of counterarguments along these lines of thought. For instance, one might argue for a variation in the rate of expansion such that the speed of light remains fixed while the value of the fundamental mass m_{f} varies. Alternatively, one might propose a variation in both l_{f} and t_{f} such that the speed of light and m_{f} both remain fixed. One may also argue that a third unknown compensates the undesired variation as the rate of expansion varies. However, in considering any of these conjectures, we would also have to address changes in the orbital positions of the electron orbits as described by this MQ form of the fine structure constant,

Therefore, if the fundamental mass is constant, the speed of light is constant; if stable atoms are also a necessary part of our stellar history, then there is physical support for a constant rate of universal expansion in a flat universe.

We take this moment to point out that because the fine structure constant is typically measured with respect to quantum phenomena, it is important to describe the value in the MQ expanded form (i.e., Q_{L}n_{Lr}) or there will be a significant error in the predicted value (^{–3} from the 2018 CODATA). In expanded form as presented above, the value matches the measured value precisely.

From a broader view, a new understanding of spacetime curvature may seem unnecessary in light of the long-supported notion of a spacetime as described by GR. Hence, how does MQ accommodate curvature when the fundamental units of measure are themselves references and thus by definition flat, incapable of having additional properties such as curvature? The question has already been answered. Many expressions may be used to demonstrate that measure is not the key feature in describing phenomena. Like the Heisenberg uncertainty principle, a reduction of the expression cancels the measure terms leaving only the counts. Therefore, we find that curvature is never a feature of the spacetime itself, but rather a consequence of discreteness, such that the loss of fractional counts Q_{L} of the reference length l_{f} leads to the appearance of a curved spacetime. In MQ, we recognize this effect as the Informativity differential (Appendix A),

The expression is usually taken at its macroscopic limit of Q_{L}n_{Lr} = 1/2 such that the value of Q_{L} is so small as to be physically insignificant. For additional expressions describing spatial curvature and the MQ approach to measurement distortion presently described by SR and GR, the reader may refer to “Measurement Quantization Unifies Relativistic Effects …” [
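To illustrate why Q_{L}n_{Lr} tends to 1/2, consider the Pythagorean construction described later in the text, with side a = 1 and side b a count n. If we take Q_{L} to be the fractional remainder of the non-discrete hypotenuse count (an assumption made here for illustration, not the paper's full Appendix A derivation), the limit follows:

```python
import math

# Sketch, not the paper's exact Appendix A derivation: with side a = 1 and
# side b = n_Lr, the non-discrete hypotenuse count is sqrt(n_Lr**2 + 1).
# Here we assume Q_L is its fractional remainder beyond n_Lr, consistent
# with the Pythagorean construction described in the text.
def QL_nLr(n):
    q_l = math.sqrt(n * n + 1.0) - n   # fractional remainder of side c
    return q_l * n                     # the Informativity differential term

for n in (1.0, 10.0, 1e4):
    print(n, QL_nLr(n))                # tends toward 1/2 as n grows
```

At quantum counts the term falls measurably below 1/2, while at macroscopic counts the deviation shrinks as roughly 1/(8n^{2}), which is why the macroscopic substitution Q_{L}n_{Lr} = 1/2 is physically insignificant at large distances.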

As discussed in Section 3.4, each mass distribution is a function of the radial system constant θ_{si}. As such, the distributions are fixed. Consider now Equation (66). Then,

However, we also know from Equation (55) that the fundamental mass M_{f} increases with time,

Unavoidably, the mass of the universe must be increasing across all distributions such that each of the distributions remains fixed relative to the other. Using Equation (95) to solve for total mass, such that

Finally, to the extent that

Given that the total mass of the universe may be expressed as_{acr} may be expressed as a count n_{Mu} of m_{f} in the universe per count n_{Tu} of t_{f}; specifically,

There is a significant literature describing the appearance and disappearance of virtual particles in a vacuum. There are also experimental results discussing the decay of virtual particles in a vacuum [

We have not discussed the dimensional analysis of θ_{si} thus far; hence, we take this moment to note that the measure of θ_{si} can take on different units depending on the context of the described phenomenon. For instance, when the expression for mass accretion is written such that _{si} is dimensionless, having no units at all. Likewise, as expressed in the fundamental expression_{si} has units kg·m·s^{−1}. As demonstrated in Equation (C7), θ_{si} has the units of radians. Each measure of θ_{si} is physically significant. Why, then, does this constant differ from the other constants with which we are so familiar? In part because the other constants are each a composite of this constant, and in part because this constant is a composite of all three dimensions_{si} carries those units.

Not all constants are derived from θ_{si}. Consider the gravitational constant G = (l_{f}/t_{f})(l_{f}/t_{f})(l_{f}/t_{f})(t_{f}/m_{f}) [Equation (13)], which is entirely composed of fundamental measures. That is, there are two flavors of constants—those that are a mix of dimensions and θ_{si} (i.e., Planck’s constant _{si} has specific identifiable dimensions. That said, composition is a somewhat arbitrary human activity. For instance, given
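The composition of G from fundamental measures is straightforward to confirm numerically; a minimal consistency check (circular by construction, since l_{f} and t_{f} were themselves resolved from G):

```python
# Sketch: Equation (13)'s composition of G from fundamental measures,
# G = (l_f/t_f)**3 * (t_f/m_f), recovers the input CODATA value.
G, c, theta_si = 6.67408e-11, 299792458.0, 3.26239
l_f = 2 * G * theta_si / c**3
t_f = l_f / c
m_f = 2 * theta_si / c

G_composed = (l_f / t_f)**3 * (t_f / m_f)
print(G_composed)  # ≈ 6.67408e-11 m^3 kg^-1 s^-2
```

The check demonstrates internal consistency of the measure set rather than an independent prediction of G.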

We take this moment to discuss frames of reference and their importance in the description of phenomena. Noting that our understanding of length measure may always be reduced to a description that requires the Pythagorean Theorem, we find ourselves asking, how many physically significant frames of reference are needed to describe the phenomenon of distance? The Pythagorean Theorem has, at a minimum, three terms. Should not measure, then, require three physically significant frames of reference? And if this is readily agreed to, what are they? In MQ terminology, the third frame is referred to as the self-defining framework.

We often refer to descriptions of the universe as self-defining because there is no reference external to the universe with which to anchor our understanding of such descriptions. Conversely, phenomena are called self-referencing if they are expressed in terms that have definitions based on other terms that are then given meaning with respect to the terms first mentioned. Collectively, the division helps orient the reader as to two distinct classes of phenomena with respect to a framework. This completes the formal definition of the three frameworks, namely, the observational framework, the measurement framework, and the self-defining framework of the universe containing the observed target.

During the earliest epoch, the universe cannot expand because the internal spacetime provides no opportunity to reference points outside of the quantum bubble. To understand quantum referencing, we begin with a review of the three frameworks: the reference, measurement, and target frameworks. Measure has physical significance only as a composite of the information from each of these frameworks. Hence, the quantum referencing we observe today is mitigated during the earliest epoch.

Presently, a physically significant measure exists such that side a describes the reference count n_{a} = 1, side b describes some count of the reference (both sides a and b are discrete), and side c describes a non-discrete reference count between the observer and the target. However, what if the size of the universe is such that side b describes a count less than two?

In this instance, there exists no means by which to distinguish the count n_{b} from the reference count. In other words, side b may take any non-discrete value but, with respect to the observer, remains indistinguishable from the reference count n_{a} = 1. This, in fact, describes the presently defined properties of side c. We therefore modify our understanding of side b when describing measure in the quantum inflationary epoch with the non-discrete count term n_{bn}. The description applies during the entire epoch.

We next consider a universe that has expanded sufficiently such that

More specifically, a radial length count of

At _{f} rounds up and allows the expansion to continue uninterrupted.

Finally, there remains a well-known argument surrounding the coincidence between mathematics as a tool describing nature and nature, which appears to abide by the laws of mathematics. We do not broach this subject here but remark that the tradition of providing examples of physical measurement that correspond with mathematical expression is an important tool of science. It is on our ability to correlate this approach with the measurement data that the presentation rests.

We begin with the fundamental expression broken out in terms of counts of the fundamental measures as described in Equation (35). Note, we have added a prefix u to denote that the expression is a self-defining representation of the universe,

As such, the expression describes a universe that expands at the speed of light. To modify the expression appropriately, we must recognize a differing count n_{Lu} of l_{f} other than that afforded by the relation_{Lu} to represent the count of length units during the quantum inflationary epoch,

Given _{i} at A_{U} = 1 second, then

One might ask, what is the role of critical density ρ_{c} considering that the rate of mass accretion _{i} with respect to the leading edge. The steady rate of mass accretion, in turn, depends on the system constant (i.e., in our case 2θ_{si}) associated with a quantum fluctuation such that there could be many such fluctuations in a multiverse. Nonetheless, without a system constant that falls in an acceptable range, the fluctuation never reaches sufficient size to transition to an expansionary epoch. Moreover, the system constant also determines the radius of electrons about the atomic core. Thus, an expansion that is too small or too great will result in a universe that cannot form baryonic matter.

However, as we shall demonstrate, this incredibly slow and decreasing velocity is not what brings quantum inflation to an end. For that, we must return to the definition of fundamental length. Taking the integral of the velocity expression with respect to time (the constant of integration is 0), we obtain an expression for the radius of the universe

expanding until _{f} resolved in Equation (113)) such that

at which point external referencing is permitted and the leading edge of the universe then expands at the speed of light relative to that edge.

The quantum and expansionary epochs describe periods of our evolutionary history with differing frames of reference. The difference is subject to a relativistic offset, one which must be accounted for to align calculations of the CMB age properly with respect to elapsed time in our present epoch. Taking the integral of R_{U} at the conclusion of quantum inflation resolves the age of the CMB with respect to an observer during the quantum inflationary epoch. That tells us, for instance, the age of the CMB prior to recombination but not the elapsed time as viewed with respect to an observer during the current epoch.

Using the expression for the radius of the universe_{s-ref} of an observer in our frame and the self-defining age A_{U} with respect to the frame of the CMB, is a function of the volume

In this way, we solve for the elapsed time with respect to an observer today.

We may also present the expression in relativistic form by arranging the relation in the form of a unity expression. Given the self-referencing age

Now holding this result and considering the root of the following expression taken from [

Equating these two expressions yields

As expected, we find that Einstein’s speed parameter _{T}t_{f}, we resolve a contraction effect that corresponds to a velocity

of the speed of light. Note that _{Lc} in SI units. The terms cancel multiplicatively but are retained for consistency in structure.

When the radius of the universe reaches

expansion is then possible. The accumulated mass drops in density and temperature. We may resolve the elapsed time until recombination as follows. Let M_{T0}, from Equation (110) and Equation (111), be the accumulated mass at the end of the quantum inflationary epoch. Let A_{T1−T0} denote the elapsed time from the end of the quantum inflationary epoch to that time when the temperature drops to T = 3000 K, with ρ_{U} the mass/energy density of the universe and A_{T1−T0} describing expansionary cooling. This is what we shall solve for next. With definitions,

we may now solve for the elapsed time corresponding to T = 3000 K,

The expressions are somewhat idealistic in that they assume all the accumulated mass is in the form of photons. We do not know the physical processes that lead to mass accretion; at best, we can only conjecture that most, but not all, of the accretion is in the form of photons, matching the result to the mass/energy equivalent of what we see today. The values match, to four significant figures, the experimental results of Fixsen’s 2009 study [

Adding the 2.7 years elapsed during expansionary cooling to the 363,309 years elapsed during the quantum inflationary epoch then reflects the elapsed time that we must attribute to accreted mass. The majority of this mass is then what forms the CMB we see today.

The age, quantity, density, and temperature of the CMB, which represent all the accumulated mass that exists at the end of the quantum inflation epoch are then

To incorporate the cooling period during which the temperature drops to 3000 K, we need a more detailed approach. We take the self-referencing age of the universe at the end of quantum inflation _{T}_{1-T0} during expansionary cooling [Equation (137)], multiply by the rate of mass accretion ^{−1} [Equation (110)], convert to kg·s^{−1} m_{f}/t_{f} [Equation (111)], and subsequently to energy by multiplying by c^{2}. We then divide by the volume V_{U} of the universe [Equation (57)] to get the density, and finally apply the radiation constant σ to get the temperature T. We may reduce the calculation somewhat before solving,
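The final density-to-temperature step can be sketched with the standard radiation energy-density relation u = aT^{4}, inverted for T (a substitution stated here as an assumption, standing in for the text's σ step):

```python
# Sketch of the density-to-temperature step, assuming the standard
# radiation energy-density relation u = a * T**4, inverted for T.
# a_rad is the radiation constant; rho is a mass density in kg m^-3.
c = 299792458.0
a_rad = 7.5657e-16        # J m^-3 K^-4

def radiation_temperature(rho):
    u = rho * c**2         # mass density -> energy density, J m^-3
    return (u / a_rad) ** 0.25

# Round-trip check at the recombination temperature used in the text:
u_3000K = a_rad * 3000.0**4
rho = u_3000K / c**2
print(radiation_temperature(rho))  # ≈ 3000.0 K
```

Any mass density computed from the accreted mass and the volume V_{U} can be passed through the same inversion to obtain the corresponding radiation temperature.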

The difference of 3 × 10^{−6} K from the prior calculation leaves the two results in agreement; they differ only in the sixth significant figure, a coincidence arising because both values fall near a rounding threshold.
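The final step of the calculation above, converting an energy density into a temperature, can be illustrated with the standard blackbody relation u = aT^{4}, where a = 4σ/c is the radiation energy-density constant. The sketch below does not reproduce Equations (110)-(137); it only checks that inverting u = aT^{4} at today's measured CMB energy density (an assumed input, roughly 4.17 × 10^{−14} J·m^{−3}) recovers the familiar present-day temperature:

```python
import math

# Standard radiation constants (SI, 2014 CODATA).
sigma = 5.670367e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
c = 2.99792458e8         # speed of light, m s^-1
a_rad = 4 * sigma / c    # radiation energy-density constant, J m^-3 K^-4

# Present-day CMB energy density (approximate measured value; an assumption).
u_cmb = 4.17e-14         # J m^-3

# Invert u = a*T^4 to recover the blackbody temperature.
T = (u_cmb / a_rad) ** 0.25
print(f"T = {T:.3f} K")  # close to the measured 2.725 K
```

This only verifies the density-to-temperature conversion; the paper's accreted-mass and volume inputs are taken from its own equations, which are not restated here.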

Thus, the accumulated mass is confined in a quantum bubble with a radius less than

Finally, when the radius of the universe reaches

In MQ, we refer to this as the universal expansion H_{U}, the expansion of space itself. This differs from the motion of galaxies within space, which must be resolved separately. There is no specific correlation between the two, but we conjecture that baryonic mass remains nearly static in space with respect to where it originates.

MQ has enabled us to offer physically significant descriptions of phenomena we observe in nature. Bear in mind that MQ is a nomenclature; what we present is little more than classical mechanics written in a less familiar form. More significant is that the expressions demonstrate a physical correspondence that matches the measurement data.

For example, we look to the expressions describing the fundamental units themselves, each of which matches the 2014 CODATA to six significant figures. We also look to the calculation of θ_{si} and gravitational curvature: taking the value for either produces the other with the same level of physical correspondence. The calculations of Hubble's constant and of the volume and mass of the universe are also in agreement. Moreover, the quantity, age, density, and temperature of the CMB again match the data. The calculation of the fine-structure constant resolves a significant and long-standing discrepancy with the measurement data. Looking to the power spectrum of the CMB, the distributions are in correspondence. The effects of relativity with respect to several phenomena are each reflected in the calculations, each aligning with the measurement data.

While these calculations provide an opportunity to establish physical significance, we emphasize that it is the description that is under investigation, not the approach. The approach is classical, a field that already enjoys a century of published support.

Perhaps it would then be of significance to consider the mathematical implications of MQ. Does math play a role in physical behavior? Are the laws of physics derivatives of mathematical structure, or is mathematics merely a tool coincident with our interest in describing nature?

Indeed, these are questions that have arisen in this presentation. In response, we should focus our attention on the scientific method; these questions are not the subject of this paper. For centuries, the community has been content with the coincidence of mathematics describing nature. That a universe of radius

As such, we hope that these considerations have not been a distraction. Yes, MQ brings mathematics even more into the limelight, and yes, there will likely be even more debate about the role of math in describing nature. However, at least for now, our focus is on providing new tools and new ways with which to understand the early universe. We have, with MQ, revealed that the power spectrum is in part a function of elapsed time, that there is no dark energy or dark matter [

To end, we remark that observations of the physical significance of fundamental units of measure require the interpretations presented. However, they do not necessarily exclude some conjectures, such as inflation theory. At present, there is no support for inflation from the early universe events described using MQ, but this does not mean that inflation theory describes something that never happened. Rather, we add MQ to the collection of tools used in determining the physical significance of inflation, among other conjectures that have filled the gaps in our understanding of the early chronology of the universe.

We thank Edanz Group (https://www.edanzediting.com/ac) for editing a draft of this manuscript.

The author declares no conflicts of interest regarding the publication of this paper.

Geiger, J.A. (2020) Measurement Quantization Describes History of Universe—Quantum Inflation, Transition to Expansion, CMB Power Spectrum. Journal of High Energy Physics, Gravitation and Cosmology, 6, 186-224. https://doi.org/10.4236/jhepgc.2020.62015

Throughout the paper, the term Q_{L}n_{Lr} is used repeatedly and is referred to as the Informativity differential, in recognition of the central role it plays in describing how fractional values less than the theoretical limit reflect a distortion effect in distance measurements. Knowing the limits of Q_{L}n_{Lr} is essential to resolving the fundamental measures. This product is obtained from Equation (28) multiplied by the count n_{Lb},

In the initial presentation, we identify sides a, b, and c such that side b has some count n_{Lb} of the reference l_{f}. We later reconsider side c; thereafter we drop the n_{Lb} term and use n_{Lr} throughout all expressions when discussing MQ.

The approach is justified in that what is measured always equals a whole-unit count of a fundamental measure, and with n_{La} = 1, we find that n_{Lr} must equal n_{Lb} for all values. This is easily verified in that the highest value for Q_{L} is obtained for n_{Lb} = 1, where the count n_{Lc} is always rounded down to the greatest whole-unit value, equal to the count n_{Lr}. Therefore,

The lower limit, i.e., when n_{Lr} = 1, is easily produced: multiply by n_{Lr}, then add n_{Lr}, square, and subtract

Q_{L} decreases with increasing n_{Lr} until the left term drops out. The distance does not need to be considerable to reduce the Informativity differential to 0.5: at just 10^{4}l_{f}, Q_{L}n_{Lr} rounds to 0.5 to nine significant figures.
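The limits described above can be reproduced numerically. Assuming, per the Pythagorean construction with n_{La} = 1, that Q_{L} is the fractional remainder of side c, i.e. Q_{L}(n) = (n^{2} + 1)^{1/2} − n (a reading of the construction, not an equation reproduced from the paper), a minimal sketch:

```python
import math

def q_l(n: float) -> float:
    """Fractional remainder of side c for a unit side a: sqrt(n^2 + 1) - n."""
    return math.sqrt(n * n + 1.0) - n

# The whole-unit count of side c always equals n_Lb, so n_Lr = n_Lb.
assert all(math.floor(math.sqrt(n * n + 1)) == n for n in range(1, 10001))

# Upper limit of Q_L at n_Lr = 1, and convergence of Q_L * n_Lr toward 0.5.
print(q_l(1))  # sqrt(2) - 1 ~= 0.414214
for n in (1, 10, 100, 10_000):
    print(n, n * q_l(n))
# At n = 10^4 the product lies within ~1.3e-9 of 0.5.
```

Algebraically, n·Q_{L}(n) = 1/((1 + 1/n^{2})^{1/2} + 1), which makes the monotone approach to 0.5 explicit.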

The MQ approach offers an alternate solution to describe a maximally entangled Bell state with respect to a lattice vector

where the pump, signal, and idler vector magnitudes n (a function of the pump frequency or the phase-matching properties of the nonlinear optical crystal) are identified with subscripts p, s, and i, followed by an x or y representing the coordinate axis.

Placing these values in vector form and breaking out the component vectors, then

Moving the pump coordinate to the right alongside the lattice vector, taking the angular difference of the y-component to make the sine positive, and matching that form in the x-component, we then obtain

We find that

Thus, the respective angles at maximal entanglement θ_{Max} associated with the signal and idler follow Equation (C7), as described in the second row of Table A1. An additional solution (first row) may be resolved by subtracting each angle from π (i.e., π − θ_{p}, π − θ_{s}, π − θ_{i}).

In Shwartz and Harris’s 2011 paper, “Polarization Entangled Photons at X-Ray Energies” [

The Shwartz and Harris measures precisely match the MQ calculations (Table A2), confirming the predictions described by MQ to six significant figures, which is the extent of precision allowed by G. Moreover, the error in angular measure in the Shwartz and Harris results is estimated to be less than 2 microradians.

Of interest are the component terms that define the scalar constant: the Planck length l_{p}, speed of light c, and gravitational constant G. Using the 2014 CODATA [

| | θ_{p} | θ_{s} | θ_{i} |
|---|---|---|---|
| π − θ_{Max} | (l_{f}c^{3}/2G) − π (0.1208) | π − (l_{f}c^{3}/2G) (−0.1208) | π − (l_{f}c^{3}/2G) (−0.1208) |
| θ_{Max} | 2π − (l_{f}c^{3}/2G) (3.02079) | (l_{f}c^{3}/2G) (3.26239) | (l_{f}c^{3}/2G) (3.26239) |

| Bell's State | θ_{p} | θ_{s} | θ_{i} |
|---|---|---|---|
| | 0.1208 | −0.1208 | −0.1208 |
| | 3.02079 | 3.26239 | 3.26239 |
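The tabulated angles can be spot-checked numerically. Assuming l_{f} is numerically close to the 2014 CODATA Planck length (an assumption of this sketch; the paper resolves l_{f} independently), the scalar l_{f}c^{3}/2G evaluates near the tabulated 3.26239 rad, and the remaining entries follow by subtraction from π or 2π:

```python
import math

# 2014 CODATA values; l_f is taken as the Planck length here (an assumption).
l_f = 1.616229e-35   # m
c = 2.99792458e8     # m s^-1
G = 6.67408e-11      # m^3 kg^-1 s^-2

theta = l_f * c**3 / (2 * G)  # tabulated as 3.26239 rad
print(theta, 2 * math.pi - theta, theta - math.pi)
# Each lands within ~1e-4 rad of the tabulated 3.26239, 3.02079, and 0.1208.
```

The residual difference reflects the spread between the CODATA Planck length used here and the paper's independently resolved l_{f}.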

The role of the fundamental measures to this point is a mathematical construct, a proposed interpretation of the existing argument. The measures exist only in their expressions until the presentation of formal values for the fundamental measures. Whereas CODATA estimates may be used to guide our understanding of S, up to this point no theoretical values are assumed. Our confidence in correlating S to θ_{si} rests on the correctness of interpreting S as both a momentum and an angular measure, a correlation that accounts for Planck's length expression, the resulting measurement predictions, and the corroborating measures made by Shwartz and Harris.

In modern theory, we quantify the relationship between length and time with respect to the speed of light. Taking Q_{L}c^{3}/rθ_{si} and its correlation to the gravitational constant G, then removing the Informativity differential

with

We call this the fundamental expression.

Although a macroscopic expression for fundamental length may be resolved directly from Equation (31), we start with the initial geometric formulation finalized in Equation (30) and our understanding of

Then, for all macroscopic distances, the fundamental units are

When we say macroscopic, we mean any distance greater than 2.247l_{f}, as described in Table A1. For any distance greater than this, the geometric skew due to the Informativity differential is less than 0.5 in the sixth digit of physical significance. To resolve a form of this expression with greater precision, the fundamental length may be written as

The fundamental expression in “expanded” form—the term used when applying this effect—is written as