Getting a Handle on the Conceptual
Measure of Dimensions
Getting the Point of Mathematics (or, Sometimes circular reasonings are best!)
Biologically speaking, diversity is a strength and a positive attribute when seen in terms of survival. To compare it with the more current definitions of Entropy, "diversity" fills in the econiche-spaces of extensive environments. Organisms find (via techniques of "adaptation") locales of stable security, vis-à-vis all the organic and inorganic components they might encounter. Therefore, "adaptation" is a principal example of Perpetual Interaction Capability. "Adaptation", as a dynamic ffunction-event, cannot take place without the pre-existing qualities that permit and enable non-disruptive "encounter-responses" for the participants. Adaptation occurs only in the presence of mutuality. This can be on the chemical level, on the molecular level, on the cellular level, on the whole-body level, et al. Diversity is the result of the broad spectrum of adaptation-events. Diversity via adaptation is an entropic ffunction of the general Biota or Gaia. The biometabolism of an entire Biota embraces a broader range of interrelatedness, and is therefore less susceptible to general disruption should localized imbalances or extreme events occur. I say this with qualification.
It is eminently possible for a relatively insignificant energetic variant to wreak havoc with an entire organism. A small hole in the structure of a circulatory system can mortally disrupt the whole organism. Vis-à-vis "energy", this is out of proportion to a system's entire energy-processing activities, yet it is all the more reason why we should focus our attention on the interrelatedness of the information feedback processes of systems (the way Fuzzy Logic indicates). Integrity-maintaining responses such as tourniquets and sutures can repair "minor" physical damage that would be life-threatening if left unchecked. For human beings, this is part of the developed Integrity-maintaining abilities we have - on the level of gross motor skills and awareness loops. Cells per se have a limited level of self-reparation. As an alert-responsive-total-organism, we integrally have a spectrum of other abilities (interconnected systems) which we ffunctionally employ to self-sustain ourselves.
Ben Franklin (in Poor Richard's Almanack) related it nicely: "...for want of a nail the shoe was lost, for want of a shoe the horse was lost, for want of a horse the rider was lost." For want of a rider the kingdom was lost. For Franklin, a whole society could be lost for want of participation by a single citizen; his thoughts hoped to encourage the establishment of an organized nation.
Diversity and elaboration - even when we regard the process-tool called Linguistics - is no less a real entropic process playing an active role in the survival of life forms and peoples. Entropy is the eminent dynamic found on all levels of ffunctioning and in all behaviors in the Universe.
So it is, when we gather together all we have learned of the world, we can see it, and ourselves, as an elaborate "oneness". Originally perceived "distinctions" lead to "comparisons", which lead to cognitions of similarities ... both in structure and ffunction. Information (energy, space, time) received over time, evaluated over time, understood over time. And all, as a process that, in itself, is dependent on time to have even the slightest first-quality of "existence". What continues to become apparent is that there are kinetic patternings of ffunctions that display fundamental similarities...even though on different scales of those activities and with their own distinct domains of impact and importance. I like to think of the cosmological concept of "relativity" not just in the scientific sense that the universe has no "solid ground", if you will, and that all things, even the fabric of space and time, are subject to mutability and variation (as topological domains with a degree of elasticity, but without losing the characteristic "Integrity" of specifically maintained qualities and aspects)...but also in the more extensive sense...that everything is truly "related". The Universe, on all levels, is self-coherent and consistent. And any specific event-ffunctions, barriers or localizations are just regional phenomena where forces exist that determine the allowable space and time of activity. But there is no point in space that can't be gotten to, or connected with in some way, from any other.
To accommodate this totality of perception in an era when existing science has rigidly proclaimed that four fundamental forces exist, and our mathematics (which I currently see as missing some crucial parts...and basic fundamental ones at that) can only describe them by saying that space becomes malleable and "bends" in the presence of "matter", is the true challenge of our age. Currently, space-time, as a "four"(?!)-dimensional continuum, is only discussable when matter (née, "energy") exists. The unvoiced component is the one that needs illumination. To examine Time as an extant, and to ascribe to it coherent ffunctional structure, and make it consistent with everything else we understand about the "structure" of the Universe. To establish a uniform way of interrelatedly discussing all these aspects of existence.
One of the upshots of all my long-windedness is to convey the essence of the language we call "Mathematics". Mathematics is born of the normal and pre-extant relationships found in physical perceptions. Its inner mysteries are not beyond the capacity of the least of us to understand. Especially since those extreme levels of codified manipulation of information transcription were developed based on simple kinetics, they can be described and re-transcribed into more everyday language!
Moreover, by shifting criteria to an emphasis on greater dynamic consistency and coherence...we are led to making some fundamental changes in the hallowed and time-honored structure of Mathematics as it exists up to this point. To my way of thinking, "mathematics" is just another language. It has been aggressively formalized and developed. It has elucidated some incredible truths about the nature of the universe we live in. But it can be perceived to be as rigidified and subtly self-restrictive, in its own way, as is any other language which has a random, holistically organized set of meanings and applications. I believe it can be improved, by the inclusion of new operators which ffunction in regard to broader fluid-dimensional continuums and a deeper understanding of "information" and all the forms to which it can be applied (i.e., time). A crux will be redefining "point" as fully-dimensional. And then, building "nested infinities" by using standard ffunctions in non-standard ways. (hoping to make them "standard" from this time forth (!)). In ways that are more profound than is currently appreciated, my proposal is linked to Newtonian principles of Inertia and Heisenberg's Uncertainty Principle.
There is no transfer of information without an interaction, and nothing in the universe "spontaneously" changes without some cause being present to account for the alteration which is resultantly observed. These are "gauge" and "field" theories. But I propose something more fundamental. Something that will elucidate exactly how information is transcribed through normal relationships of severally-nested infinite continuums. Infinite continuums "in which" quantum states exist, but in a naturally integral and fundamentally fluid way.
The concept of "dimension" has, for the predominant passage of cognizant human history, been firmly fixed in the physical recognition of "extent", the "distance" between distinguishable locations. Mathematical notations and usages of it, however, since the 17th century, have consistently shown us that this is only one small aspect of "dimension". Unfortunately, the general mentality (via "experience") of our peoples has become so habituated to that one concept that it will take an exposition such as this to reevaluate that myopic view, which has linguistically inbred a shortsightedness of understanding. So. Now is the moment when we turn our attention to an analysis of the language of science: Mathematics.
We may not have been there at the dawn of cognitive thought for our species, but we can reasonably surmise that enumeration was one of the earliest ceptual skills. Recorded instances from cultures worldwide - knotted ropes, referential counting marks in bone, stone, or clay - point to this fact. Over and above the physical-survival artifacts we find - such as arrowheads, adzes or shale and obsidian knife edges - the first examples that serve strictly a "referring" function toward other event-phenomena are "enumerative" ones. Numerically recorded quantities of animals or possessions; markers and monuments measuring the passage of days, times and seasons; consistent distances marked out to build a structure; the simplest markings having one-to-one correspondences. Eventually, collective quantities were assigned convenient singular marks, such as the Roman "V" for five, etc. Varieties of information-enumeration referencing techniques were tried, with base-10 being the most pervasive, especially with the inclusion of the Arabic notation of "nought" or "zero" as a necessary "place-holder" in the system and as a reference for "potential" (non-current presence).
(In fact, "10" as notation - integrally using the "zero" - occurs in every whole-number base system. We happen to think of it as a specific quantity: 9 plus 1. It could just as easily mean the quantity we call "eight"; in a base-8 system, "10" equals 7+1. If we had individual symbols for all quantities up to "49", then our "50" could be written "10", and "10" squared (10x10; "100") would be the symbolism for the quantity we now write "2500".)
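The base-notation point above can be checked directly. A minimal sketch (the base-50 case is left as a comment, since it exceeds the usual digit symbols):

```python
# In any positional base b, the numeral "10" names the quantity b itself,
# and "10" squared, written "100", names b*b. Python's int() parses
# numerals in bases up to 36, which lets us verify the pattern directly.
for b in (8, 10, 16):
    assert int("10", b) == b          # "10" always names the base
    assert int("100", b) == b * b     # "100" always names the base squared
    print(f'base {b}: "10" = {b}, "100" = {b * b}')

# A base-50 system lies beyond int()'s digit symbols, but the same rule
# holds: "10" would name fifty, and "100" would name 50 * 50 = 2500.
```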
Pre-eminently though, quantified enumeration was the foundation of "mathematics". Experiential spatial-linear "distancing" perceptions were blended with numerical symbolizing to create what we have today...the "number-line". Strongly linear, strongly associative of Euclidean-Cartesian percepts. Even and especially after the inclusion of negative numbers, fractions, transcendentals and imaginaries.
"Distances" were expanded to "areas" and "volumes"....and more. "Infinity" became an early name-tag for the condition of "countability" beyond the ability to count "everything". Geometric designs and notations were given numerical-symbolic correspondences. Yet, all of this was firmly correlated to presence/non-presence, to extent of Measure. Exemplified by this quotation from Dr. Michio Kaku's 1987 book "Beyond Einstein":
"Space, as everyone knows, has three dimensions: length, depth, and breadth. The size of any object in our universe - anything from an ant to the sun - can be described in terms of these three quantities."
His comments at that point in his book are somewhat facetious, he being a fervent believer in the 26 and 10 "dimensions" referenced by current Super-String and Supersymmetry Theory. His net inference is that "dimensions" are (but not exactly) what everyone thinks they are.
We established symbolic notations to handle and manipulate these measurements in convenient ways. However. We have yet to grasp the far-reaching meanings of the symbols and ideas we so casually use. The point is that somewhere along the way we "separated out" the perception-concept "extent" from the perception-concept "dimension". But we did not do it consciously. It happened by default per the language being developed. Convenient notations such as "kxⁿ" did that for us. We began to speak of flat planar surfaces being the collective expression of myriad individual "lines", and in an attempt to grasp correlations about countable-ness, we distinguished the countable-ness of one line from that of another. We labelled them variably with letters such as X or Y, where sometimes the measure-number was a specific value for the general-letter reference, and sometimes it was an extent-value in combination for reference with separate domains being considered. In the first case (replacement), with a value such as 4 for "X" in finding the "area" of a rectangle whose right-angle extents are both "4", the notation became "four times four"..."4x4"..."four" multiplied by itself..."4²"..."four squared(!)". Two "dimensional" domains...written in more general form as "X²".
This is where the unvoiced conceptual shift occurred. It began with "dimension" being the k portion of "kxⁿ". Linguistic notation then shifted "dimension" to the n portion of "kxⁿ". Current Mathematics has not yet differentiated these, and we use both symbolic meanings interchangeably, when in fact there are significant differences and ranges of function for each.
"Dimensions" were drifting into the meaning: "aspects" or "factors". Obviously the commonplace "3" different distinguishable directions of space are "factors", but mathematics and the information it handles is not limited to those "directions" only. "Dimensions" are becoming "relationships".
This muddled situation was compounded by the Pi-Theorem, expounded early in the 20th century (though its roots reach back to Fourier). I quote here two passages from the Encyclopedia Britannica (1961): {please note: all emphasis is mine}
ENCYCLOPEDIA BRITANNICA, 1961 ed., "Mechanics, Fluid", vol. 15, pp. 159-160.
Dimensional Analysis:
"All terms of a dimensionally homogeneous equation have the same dimensions. It is most convenient to have a physical equation arranged with the variables in dimensionless groups or combinations. Buckingham's π Theorem can be used to organize variables in the smallest number of significant groups. This theorem is particularly useful when the number of variables is large.
Let A1, A2, A3,....An be n physical quantities which are involved in some physical phenomenon. ....... Let m represent the number of all the primary or fundamental units (such as force, length and time) involved in this group of physical quantities. The functional relation between these quantities or the physical equation can be written as
f(A1, A2, A3,...An) = 0
which means that some function involving all the variables equals zero. Buckingham's π theorem states that the foregoing relation can be written alternately as

f(π1, π2, .... πn-m) = 0

where each π is an independent "dimensionless" product of some of the A's. The number of the terms in the physical equation has been reduced from n to n-m. For example, if there are 5 variables in a case (n is 5), and there are three fundamental units (m is 3), the physical equation will have 2 dimensionless ratios. ...
....A consideration of the three primary dimensions Mass, Length, and Time, yields three separate equations. A simultaneous solution of these three equations gives numerical values for the three exponents. For example, in the π1 product, the values x1, y1, and z1 can be determined from the three conditions that all the length dimensions must cancel out, all the mass dimensions must cancel out and all the time dimensions must cancel out."
Dimensional Analysis: ENCYCLOPEDIA BRITANNICA, 1961 ed., vol. 7, pp. 387D-387E.
subheading: Pi Theorem and Dimensional Homogeneity
"There is a fundamental theorem here, the so-called pi-theorem, apparently first explicitly enunciated by E. Buckingham, although used implicitly ever since the time of Baron Jean Baptiste Fourier, who was the first to apply dimensional considerations. All the parameters which enter into the functional relationship are given. These will include both physical parameters and dimensional constants, and both physical parameters and dimensional constants are similar in that they have dimensional formulas expressible as products of powers of the fundamental units.
...
The pi theorem states that, subject to an important restriction, the fundamental relationship must be expressible in such a form that it contains as arguments only such products of powers of the physical parameters and dimensional constants as have zero dimensions in all the fundamental quantities. The restriction is that there not be more than one independent functional relationship between the quantities. The pi-theorem may be re-phrased to express the principle of dimensional homogeneity, which is often taken as the fundamental theorem of dimensional analysis.(!) Let us express the function which satisfies the pi theorem as
f(π1, π2 ...) = 0
where the π's are the dimensionless (!) products formed from all the parameters. We may solve the equation for one of the arguments, say π1, express π1 in terms of its component parameters as p1^α p2^β p3^γ ...., and then solve for the first, writing the equation as

p1^α = p2^-β p3^-γ .... f(π2, π3 ....)

The function f, having only dimensionless arguments, contributes only dimensionless terms.
...
The restriction that there is only one functional relationship is essential to the pi theorem and to the principle of dimensional analysis.
...
.....In practice the requirement of only one functional relationship is no essential restriction because we are always interested in reducing the relationships until only one remains. If there should be two relationships, one of the arguments may be eliminated between the two, leaving a single relation between a smaller number of arguments.
...
.....This has particular application to the so-called logarithmic constants which often present themselves in thermodynamic analysis.
...
.....Since products of powers of dimensionless products are themselves dimensionless products, there is no unique way of writing the n-m dimensionless products - it is only the number which is determined. The precise form in which these products are to be written must be chosen with discretion, to suit the purposes of the application."
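The counting rule in the passages above (n physical quantities, m fundamental units, n - m dimensionless groups) can be sketched numerically. The pendulum variables below are my own illustration, not drawn from the Britannica text, and m is computed as the rank of the dimensional matrix, which is how the theorem is applied in practice:

```python
# A minimal sketch of the counting rule in Buckingham's pi theorem:
# the number of independent dimensionless groups is n - m, where m is
# the number of independent primary dimensions (the rank of the
# dimensional matrix). Example: a simple pendulum.
import numpy as np

# Rows: variables; columns: exponents of the primary dimensions (M, L, T).
dim_matrix = np.array([
    [0, 0, 1],   # period         T^1
    [0, 1, 0],   # length         L^1
    [0, 1, -2],  # gravity g      L^1 T^-2
    [1, 0, 0],   # mass           M^1
])

n = dim_matrix.shape[0]                  # number of physical quantities
m = np.linalg.matrix_rank(dim_matrix)    # independent primary dimensions
print(f"n = {n}, m = {m}, dimensionless groups = {n - m}")
# The single group here is the familiar pi = period * sqrt(g / length).
```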
One instance of the attempt to deal with the various meanings associated with these notations was the assignment of meaning to "kx⁰". It was determined that a "zero" exponent would necessarily have to reduce the domain "X" to "one" in order to remain coherent and consistent within the schema of mathematical relationships. It was then pronounced that, since this and similar relational notations remain constant throughout all the various mathematical manipulations, they could summarily be ignored, since they would not in any way affect the variables that did change throughout any calculations. They did not have to be physically written. "Dimension" was inadvertently kept associatively linked with "extent" (the "k" factor), rather than allowed to free-float as meaning "relationship" (the "n" exponent factor).... the aspect which has become the predominant sense in the 20th Century.
Unfortunately, this vectored mathematics into two anomalous cubbyholes.
The first is that dimensions came to be considered as strictly positive-whole-number domains....distinctively identifiable factors. It was considered meaningless to call something a "dimension" if it couldn't be specified and measured somehow. The fact that the exponential expressions happened to overlap was some sort of interesting coincidence, nothing more. Especially if the exponent net-value became "zero". A zero-value exponent was spoken of as "dimensionless", since "common sense" co-linked enumerative counting with spatial loci called "points".... infinitely small locations which couldn't possibly have "dimension".
The second is an explicit application of this concept in mathematics to date. As far as I am concerned, it is the principal functional anomaly that remains tucked away at the core of all our arithmetics, and makes the rest of the structure suspect. It is the Primary Logical Disjunction of the current language of Mathematics of planet Earth, even in these advanced days of the 20th century. The entire formidable edifice of Mathematics harbors at its core this fundamental anomaly ... which must be resolved if advances in human thought are to be achieved, and if we are to make any evolutionary strides towards a viable future.
The Disjunction exposes a sloppy application of the Aristotelian principle of Identity, "A = A", and therefore implicates the entire edifice of Symbolic Logic.
There is, eventually, a way to cope with and resolve this disjunction but, at this moment, I present you with just the anomaly to think about:
In current mathematical grammar we express and then write :
"constant multiplied times a (variable raised to an exponential power)"
written: k(x)ⁿ ;
If we define the above matrix as "A" and by substitution replace A=A with
k(x)ⁿ = k(x)ⁿ ,
we can expect there must be total equality of the several components of the matrix expression if the reduced form "A=A" is to hold.
That is: k = k , x = x , n = n.
Except: current math grammar permits this anomaly:
k(1)¹ = k(1)⁰ .
If n = n, then in some meaningful way "one" must equal "zero".
1 = 0.
This is the linguistic disjunction. It indicates that two concepts might be overlapping. Now, it is the accepted situation in current mathematics to permit this dichotomy of usage "when convenient". The anomalous overlap is ignored. We can't do that anymore. It is important to explore the dichotomy to find a resolution. At the very least, to clarify the distinctions.
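The anomaly can be restated in one line of arithmetic, a minimal sketch of the point that the exponent "dimension" becomes invisible at x = 1:

```python
# For x = 1 the notation collapses: k * 1**1 and k * 1**0 are numerically
# identical, even though the exponents 1 and 0 remain distinct values.
k = 7
assert k * 1**1 == k * 1**0      # both reduce to k: the overlap
assert 1 != 0                    # yet the exponents themselves differ
print(k * 1**1, k * 1**0)
```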
In whatever context or situation we use numbers, we must be able to apply the same relationships at all times. If there is numerical differentiation between zero and one along a simple number-line domain, then the same must hold true if we use a number-line domain in an "exponent" relationship. If we use the concept of "dimension" as designatable by some exponential expressions, then all exponents should be considered "dimensional", even if it is a new way of experiencing it - especially in the situation when an exponent-value is "zero". An exponential expression is a domain continuum in its own right. A "point-locus" is descriptively discussable as a fully functional "dimension-value". It is "dimensional".
Chaos formulae already broach this, but fall short of considering dimensions as explicitly any exponentially expressible value, not just positive integer values. At this juncture, it is formatively crucial to acknowledge exponent-zero as a dimensional state. In the same way as it was crucial to accept the Arabic use of "zero" in general arithmetic.
Mathematicians and statisticians are slowly beginning to accept this. When formulas are used to consider a multitude of Variables or factors, each variable is spoken of as being a "dimension" ...."in a mathematical sense, if not in a physically actual one". This broader interpretation is just beginning to find its way into the perceptual paradigms of the general population.
A corollary perception needs to be addressed in Geometry. Cartesian linearity has dominated our model analogues. The base of this percept has been the Pythagorean Theorem, A² + B² = C². Even though there has been a contemporary system of Radians, the Pythagorean Theorem (representing the quantitative information-relation system of Trigonometry) has been at the core of all mathematics in the physical sciences. The only problem is that the Universe is not ffunctionally "linear". The Universe ffunctions in circles and conic sections. But we only obliquely reference it that way! We may consciously note and be aware of it that way, but when we come down to discussing it "mathematically", we reduce forces and relationships to the ratios of Cartesian-Euclidean "straight" lines! We persist in adjusting non-linear events into linear components, linear continuum domains. I suggest that we re-examine the Universe in its own way. We must see if there is some way to correlate Information, Space, Time, Conic Behavior, Mathematics, Finity, Infinity, Entropy, Complexity, et al., all in one coherent, consistent schema.
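The Cartesian/circular contrast above can be made concrete: the same location described by straight-line components and by circular measure, with the Pythagorean relation mediating between them. The specific point (3, 4) is just an illustration:

```python
# One point, two descriptions: Cartesian straight-line components (x, y)
# versus circular measure (r, theta). math.hypot applies the Pythagorean
# relation a**2 + b**2 = c**2 to recover the radial extent.
import math

x, y = 3.0, 4.0
r = math.hypot(x, y)          # sqrt(x**2 + y**2), the Pythagorean relation
theta = math.atan2(y, x)      # angle in radians

# The circular description carries the same information as the linear one:
assert math.isclose(x, r * math.cos(theta))
assert math.isclose(y, r * math.sin(theta))
print(f"r = {r}, theta = {theta}")
```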
A window for this is bringing together Einsteinian Relativity with Big Bang cosmology ... in the simplest and most essential terms. First, Einstein stated that existent Energy can neither be created nor destroyed. Ever. Second, Big Bang cosmology posits that the fundamental forces, especially Gravity, can condense all matter - i.e., Energy - of the Universe to a unique dimension-"less" locus... a Singularity... a "point"!
Holding these images as valid, it's only reasonable to say that all the information and knowledge and such that exists throughout the universe now also "existed" ... in some possible "form" ... within the Singularity. And will exist there again, should the Universe re-collapse. If energy is indestructible, then information, in a sense, should be also. This means we have to find out how to code all the Information of the Universe into this "point". We should begin to reason through what such a compression or transfer or translation of information-energy might be like, and what relational mechanisms could account for this possibility.
This means having to examine how the information inherent in "Dimensions" corresponds and translates, each per the other ... in the same way that the terrain features of a so-called 3-dimensional globe "translate" to a 2-dimensional flat map. (I say "so-called", because in our new understanding, the number of dimensions begins with zero, not one. An X-cubed volume is 4-dimensional, not three. But I'll speak in conventional terms for the time being).
Beginning with X⁰, exampled by its simplest representation, the point, as the principal "dimension" ... X¹ is represented by a line. X², however, is not represented in its simplest geometric form by a plane comprised of "straight" lines. Rather, the universe expresses the plane as an infinity of conic sections, more particularly, the circle. X³ finds expression as spherical laminae, not as cubics. So. It is these representative universal forms that we need to analyze: point, line, circle, sphere, etc. Finding topologically how the inherent constructive "information" of each of those forms relates to any other will eventually be our key to elucidating everything else.
Theoretically and conceptually the universe has always been deemed to be "continuous". However, empirical experiments and investigations, modeled by yet another mathematical methodology, have shown incredible "quantization" of the "real" universe. The conundrum...epitomized by "wave vs particle" form-expressions of energy...begs a resolution. There is validity in each, with some question as to their compatibility. Co-extancy indicates a necessary mutuality. But just what is it? Could it be connected with everything else I've discussed here? Yes.
Rather than continue a protracted discussion (and taking pity on you since you have already trudged your way through the first milieu-arranging part of this tract), I'm going to rather quickly set forth a succession of ceptual ideations, hoping that you follow (and concur) with the perspectives.
Quantum mechanics is a description of the nodal regularity - discussable in complex mathematical statements that arise from basic arithmetic observations - of the plateau states of energy. This came as a surprise. Not because there was a way to design a mathematics that would describe the events, but because there had been over two thousand years' worth of effort to develop quite the opposite....a smooth and unbroken numerical continuum. First came integers, then fractions, then rationals, then irrationals, then transcendental and imaginary "numbers". All, to "fill in" the length of the continuum....from negative infinity to positive infinity. But as I see it, first and foremost came the perceptual-conceptual awareness of integers....."quantified integers".
In the primal forms, mathematics began as "quantized/quantified states". And the ffunctions and relational manipulations we have designed ... ever since we started with "one" "two" "three" etc. ... have always reflected that quality. Our mathematics began as a description of the quantized qualities of the universe. It should come as no surprise that this language is an extension of, besides being a "reflection" of, these real qualities.
Conceding this fact, we skip forward in conceptual time. Pausing momentarily to recall that ancient Greek, Chinese, Egyptian and other human observers of the world, were aware of the "finite" and the "infinite", and endeavored to make both extremes of existence discussable and meaningful.
In the long haul, it fell to Newton, but especially to Leibnitz, to create a technique that made "infinity" real and handleable. There had never been, nor will there ever be, a way to describe the massively infinite (whether large or small) by using a quantum "alphabet". Neither would it be satisfactory to ignore real and ffunctional "infinities". So they used a progressive approach in order to describe actual observations. They conceived of giving a pragmatic definition and meaning to "the essence of the effort to name the Infinite". Naming "infinity" became a process that only required the conceptual acceptance that such an effort would be possible...and did not need to be actually carried out! This is the foundation of the Calculus.
The Calculus produces a very definite set of relationships and valuations ... even going so far as to specify Quantum Mechanical numbers...very distinct and discrete "states". However. The foundational principles of Calculus are not stated in the same clear-cut and exacting ways. Instead, within bounded limits chosen arbitrarily along the open "number line", the principle is proposed that for any length or quantity or measure or value or extent (etc.), it is possible to imagine and recognize the existence of a value "epsilon" which is always smaller or lesser than an arbitrarily chosen value "delta"...no matter how small that extent "delta" is chosen to be. All, heading down toward the "infinitely infinitesimal"! The crucial crux is that neither "delta" nor "epsilon" NEED EVER BE SPECIFIED!!! The only thing required is the "PROCESS" of infinitizing (to coin a rather cumbersome phrase (g)). Heading toward the "infinitely small" vis-à-vis "extent", and simultaneously toward the "infinitely large" vis-à-vis the number of intervals placeable between the limit-bounds.
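The "infinitizing process" described above can be sketched numerically: a difference quotient approaches its limit as the interval shrinks, and no particular "smallest" interval ever needs to be specified. The choice of f(x) = x² at x = 3 is my illustration:

```python
# The difference quotient of f(x) = x*x at x = 3 heads toward the limit 6
# as h shrinks; the Calculus names that limit directly, without any
# specific "delta" or "epsilon" ever being written down.
def difference_quotient(f, x, h):
    """Slope of the secant through (x, f(x)) and (x + h, f(x + h))."""
    return (f(x + h) - f(x)) / h

f = lambda x: x * x
for h in (1.0, 0.1, 0.01, 0.001):
    print(f"h = {h}: quotient = {difference_quotient(f, 3.0, h)}")
# Each smaller h brings the quotient closer to 6, the derivative of x**2 at 3.
```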
Let's discuss this in another way, appropriate to "information" and the ability of a detector to be sensitive enough to detect the "presence" of something. Let's call this the "sensitivity ffunction". To detect "information" about a thing, the detecting mechanism/environment must have a sensitivity that is "finer" (can process degrees of magnitude smaller) than the thing under consideration. If there can be such a thing as "information size", a sensitivity ffunction is fundamental to holistic system operation because it will determine the minimum "information size" which such a system has the ability to recognize. The "sensitivity ffunction" determines the smallest unit "bit".
Suppose, however, that we devise an information processing mechanism where the "sensitivity ffunction" is progressively variable instead of constant and fixedly restricted by mechanical limitations, and the "sensitivity ffunction" capability is actively determined by situational requirements - continuing to work successfully regardless of the smallness of the unit "bit" - then we have in effect devised the perfect general information processing system. If we assign corresponding symbols for each component of such a "mechanically based" system, and assign relationships between the mechanical components, the net-net result is a totally symbolic "information processing system". This was the achievement of Leibnitz and Newton: the Calculus.
It seems that 20th century mathematical methodology defines "information" (à la Shannon, after Boltzmann) in regard to what is already an "information system", so that if mathematicians use an implicit definition of "information" to determine an explicit definition of "information", the situation is a self-reflexive set/process. A tautology dependent on "process", rather than "identity". Another affirmed representation of Objectivity resident in process, while Subjectivity resides in membership. A confederation of predicates and nouns; of actions and extants; of ffunction and form; of process and participants.
If we can accept and conceptually visualize this premise ... realizing all the resultant valid relationships, conditions, and states of expression for phenomena in the universe... that is, if we can rely on this ultimate form of "vague generality" in order to accomplish a high degree of ordered specificity of analysis and modeling correspondence, then we should have no trouble understanding and accepting the subsequent progression to Cantorian Orders of Infinity.
We are discussing qualitative "degrees of differences", not exact quantitative correspondences. I would go so far as to say that it would be improper to accept Leibnitz and Newton and not accept Cantor, instead of recognizing that Cantor's work is a very natural extension of theirs. In fact, Cantor's work can guide us to a very important extension of what truly happens when energy or information gets transferred and transcribed through varying densities of infinities. As I will discuss further along, my ceptualizations propose that in an ultimate mathematical (as well as physical) sense, any specified "point" or "instant" is truly and completely "dimensional"...not dimension-"less"...because I want to separate the old-fashioned concept of "measurable extent" from the more effective ways in which "dimensions" can be, have been, and actually are discussed, used, and relativistically exist. We already use and process information by techniques of coding, compression, expansion (et al.), which are all normative ways (almost in a topological sense) of coordinating different forms of same-order infinities.
I will propose a technique for coordinating the geometrics of the ancient Greek mathematician Apollonius, with the explorations of Euler, and thereby with Cantor (all under the aegis of Leibnitzian Calculus). I will show that in the truest sense, the Calculus establishes the total environment called "continuums" in which quantums....whether individual integers or complex ffunction plateaus called "quantum states"...exist.
Suffice it for the moment to say that an exponential state of zero represents a boundary between Nested Cantorian Infinities; a Hilbert bound. Mathematical functions can find application within boundary domains and across them. Integration limits may be confined only by local restrictions and still be useful when criteria are shifted or expanded. Integration-related functions may also be viewed in several different ways depending on the reference set used. What in one perspective may be informationally "incomplete" and therefore confined to generalities such as quantum mechanical formulations, may in some other regard be more informationally complete and therefore compatible with thermodynamic equations. Relationally, we can construct a progression of exponent states (sometimes referred to as superpowers). In this development, however, we establish new definitions for the scope and behavior of existing mathematical operators.
[end Part 08] 2025 Copyrights ceptualinstitute.com