Author Topic: Fuzzyness in maths  (Read 2706 times)

Groupoid

  • Safe-Zone Citizen
  • **
  • Posts: 118
Fuzzyness in maths
« on: February 16, 2021, 07:15:28 PM »
In response to a post in the SSSS Scriptorium.
Delicious discussion. I like it.

Gödel's work pinpoints that a) there is such an undecidable statement in every moderately advanced mathematical model, hence b) an infinite number of (truly different) models to choose from, and thus c) need for an infinite number of experiments (read: cannot be done) to pinpoint the proper model for an application case. If that doesn't make the relation between reality and math(ematical modeling) "fuzzy", I don't know what would.
Emphasis mine. I hadn't thought of the incompleteness theorem as creating new models to go with the unprovable statements. I'm not well versed in the details, but the proof on Wikipedia seems to give non-constructive existence proofs of these models. Which is a bit sad.
Anyway. I meant "doing math" and/or "the internal stuff going on in math" is not so fuzzy. Of course the applicability of maths to the real world is a whole mess. How do we even know that abstract logical sentences have real-world applications? etc. I’m not so interested in the philosophy of that.
I have to agree with the final line of argument. But strangely, I consider these examples (going by feeling) relatively clear, in the sense that we know and can tell what's going on. Another example from algebra: "5=0" is undecidable in the theory of fields/rings.

I'm sorry for introducing the term "fuzzy". If I had to define it now, I'd say "something is fuzzy" iff "the thing is a grey area" as mentioned by thegreyarea... But that doesn't help much. I feel a little embarrassed to note that I was (and still am) writing & thinking much quicker than I can check what I write. But that's probably natural for philosophy.

I think either all of the above examples have to be called "fuzzy" or all have to be called "non-fuzzy", since your argument extends fuzziness.
For the act of doing math (proving a statement, giving an example, calculating, ...), I think none of these examples is "fuzzy"/unclear if one is able to get the definitions straight etc.
The fuzziness comes in when we have to choose what objects we try to model. I.e. do we only want to study ZFC-models, or also ZF-models? (Similarly for geometry, algebra, ...) If one isn't sure which (logical) models are (informal) models of something real, then this is the fuzziness you claimed (I think). Especially in the sentence with "infinite experiments".
And with set theory the choice of axioms is especially difficult, because in practice we very rarely concern ourselves with the intricacies of large infinite sets. There are probably few experiments with physical stuff we can do to decide the existence of large cardinals etc.
(That seems stupid: what physical experiment could decide the continuum hypothesis for "our universe"? Nope, not thinking about it.)

Conclusion (maybe): The definition of grey areas in maths is itself a grey area.

Hair-splitting remark:
I'm fairly sure that the proponents of the latter still wouldn't have considered "just pick one already!" as the revelation of hidden truths that they were hoping for ... ::)
Well... as the examples of the different geometries or of rings vs. fields show, sometimes it’s useful not to pick either. This is probably exactly what abstraction is about. To give general statements that hold for as many (logical) models as possible/feasible/useful/necessary/adequate.

And Argh! The editor has no undo-button. I lost a few lines and this made everything longer.
Native: :de: (:ch:), Ok: :gb:, Rusty: :fr:, A bit: :ru:

JoB

  • Mage of the Great Restructuring
  • Admiral of a Sunken Ship
  • ******
  • Posts: 4117
Re: Fuzzyness in maths
« Reply #1 on: February 17, 2021, 04:22:20 AM »
I hope it's all in the past.
(I think it is, but I'm not sufficiently in the math research loop anymore to confirm. When you're sorta hoping for a divine "truth behind truth", with statements whose property of "being true" math and logic just cannot nail down, the (mathematical) proof that you can forever extend your initial model by accepting either a Gödel statement or its negation as a new axiom is quite a final blow.)

I think a big problem with these flaming philosophical discussions about logic is that, while discussing, we think we're very sure we know what we mean, but actually we don't express what we mean exactly. Or we don't understand exactly what the other person meant. [...] No wonder mathematicians developed such a precise language to communicate their thoughts.
Story from my university days, which saw the department of mathematics getting its own new building after being housed in the offices and lecture halls of other departments: The non-mathematicians overseeing the construction of the new building had the idea to save some pennies by installing smaller blackboards in the lecture halls, "because mathematical formulae are much more compact than the sentences of natural(*) language other departments have to write down". The math profs replied that, for the sake of precision, they're required to write everything down, rather than just jotting down some key points like most other profs did. The dispute was settled when the janitor was asked to have a look at the blackboards, and found those which had been used for math lectures for a couple of years to be significantly more worn down than others. ;D

(*) Fightin' words for every true mathematician, that term is. ;)

I’m not well versed in the details, but the proof on Wikipedia seems to give non-constructive existence proofs of these models. Which is a bit sad.
Well, I don't think that a constructive proof is anywhere near the horizon for that kind of question; at least not in the sense that you can construct a model's Gödel statement. If you could, you'd have an algorithm to churn out a binary tree of models and their Gödel statements rooted in your initial model mechanistically, and we'd be working on a math of entire trees of models, instead of investigating a single model or a series (indexed by natural numbers) of models ...

Anyway. I meant "doing math" and/or "the internal stuff going on in math" is not so fuzzy.
... it usually isn't. As much as mathematics is trying to get its terms nailed down, the philosophers can still raise some brain-wrecking questions about the concepts it works with ...

Of course the applicability of maths to the real world is a whole mess. How do we even know that abstract logical sentences have real-world applications? etc. I’m not so interested in the philosophy of that.
Agreed. These days, we leave most of the number crunching to computers and their implementation of floating point numbers. Due to the finite, often fixed, number of bits used to represent those numbers, they not only have a limited range of values, but also varying precision; i.e., things happen like (simplified) computing "one million plus one" and getting "one million" as a result. In other words, computers' floating point numbers are not rings, groups, a mod-n arithmetic, etc. etc., and thus not subject to any classical number theory in the first place. A computation that actually isn't math, and not just in the sense of "there's no actual value of \infty involved" ...
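In Java (the language that shows up later in this thread), the absorption looks like this -- a little sketch of my own, using 10^8 instead of one million because float's 24-bit significand only starts losing whole numbers above 2^24:

```java
public class FloatAbsorption {
    // Past 2^24, float's 24-bit significand can no longer represent
    // every integer, so adding 1 can simply be rounded away.
    static boolean absorbs(float big) {
        return big + 1.0f == big;
    }

    public static void main(String[] args) {
        System.out.println(absorbs(1_000_000f));   // false: 10^6 + 1 still fits exactly
        System.out.println(absorbs(100_000_000f)); // true: "10^8 plus one" comes out as "10^8"
    }
}
```

At 10^8 the gap between adjacent floats is already 8, so the "+1" is rounded back down to where it started.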

Another example from algebra: "5=0" is undecidable in the theory of fields/rings.
Of course, and mod-5 arithmetic (whether with integers or reals) is the specific and relevant case where it holds true.

I think either all of the above examples have to be called "fuzzy" or all have to be called "non-fuzzy", since your argument extends fuzziness.
For the act of doing math (proving a statement, giving an example, calculating, ...), I think none of these examples is "fuzzy"/unclear if one is able to get the definitions straight etc.
The fuzziness comes in when we have to choose what objects we try to model. I.e. do we only want to study ZFC-models, or also ZF-models? (Similarly for geometry, algebra, ...) If one isn't sure which (logical) models are (informal) models of something real, then this is the fuzziness you claimed (I think). Especially in the sentence with "infinite experiments".
Ummmmhh that's what I pointed out as a specific example, yes. Nonetheless, the fact that models can be extended endlessly and in contradictory ways IMHO did make mathematics "fuzzier" in a sense. Before Gödel - see the discussion of "unreachable truths/falsehoods" -, mathematicians had a tendency to imagine that their models would change incessantly, but in the sense of evolution, yielding ever-"better" theories that will replace the older ones except for fringe cases. Gödel sunk that concept of "nearer, my God, to thee", if I may be so blasphemous. :)

And with set theory the choice of axioms is especially difficult, because in practice we very rarely concern ourselves with the intricacies of large infinite sets. There are probably few experiments with physical stuff we can do to decide the existence of large cardinals etc.
(That seems stupid: what physical experiment could decide the continuum hypothesis for "our universe"? Nope, not thinking about it.)
Who is "we" here? "The science of infinity" is one rather widely accepted definition of mathematics, even though it suggests that "computing" (with actual numbers) fails to qualify as "math".

But yes, our currently "established" understanding of the "real" (physical) world does not allow for actual infinite (finite universe wedged between Big Bang and Big Crunch) or infinitesimal (Heisenberg Uncertainty Principle) aspects. You have to go "(infinite) multiverse" (or "truth behind the truth" in a "God hides behind quantum uncertainty" sense) or somesuch to get reality up to "infinities only, please"-style math's muster. >:D

And Argh! The editor has no undo-button. I lost a few lines and this made everything longer.
The forum/website's builtin editor may not have one, but there's an Edit -> Undo menu item in my browser (Firefox) that seems to work satisfactorily as such ...
native: :de: secondary: :us: :fr:
:artd: :book1+: :book2: :book3: :book4: etc.
PGP Key 0xBEF02A15, Fingerprint C12C 53DC BB92 2FE5 9725  C1AE 5E0F F1AF BEF0 2A15

Groupoid

Re: Fuzzyness in maths
« Reply #2 on: February 17, 2021, 04:02:18 PM »
The dispute was settled when the janitor was asked to have a look at the blackboards, and found those which had been used for math lectures for a couple of years to be significantly more worn down than others. ;D
:))

Agreed. These days, we leave most of the number crunching to computers and their implementation of floating point numbers. Due to the finite, often fixed, number of bits used to represent those numbers, they not only have a limited range of values, but also varying precision; i.e., things happen like (simplified) computing "one million plus one" and getting "one million" as a result. In other words, computers' floating point numbers are not rings, groups, a mod-n arithmetic, etc. etc., and thus not subject to any classical number theory in the first place. A computation that actually isn't math, and not just in the sense of "there's no actual value of \infty involved" ...
Hrm, the example fits well, but I wouldn't call floating point arithmetic "not math" just because the most basic assumptions of algebra & arithmetic (e.g. associativity) are not satisfied. The example I thought about is the general assumption of most engineers that we live in R^3. At first sight I find it weird that it is such a great approximation. Sometimes I'm even astonished by the fact that the abstract "You have 10 sheep and I give you 5 more, then you have 15 sheep afterwards" actually applies to reality.
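To make the associativity remark concrete, a tiny Java snippet of my own (the values 10^8 and 1 are just chosen so the rounding becomes visible):

```java
public class FloatAssoc {
    static float leftAssoc(float a, float b, float c)  { return (a + b) + c; }
    static float rightAssoc(float a, float b, float c) { return a + (b + c); }

    public static void main(String[] args) {
        float a = 1e8f, b = -1e8f, c = 1.0f;
        // (a + b) + c: a + b is exactly 0, so the sum is 1.0f.
        // a + (b + c): c is rounded away next to the huge b, so the sum is 0.0f.
        System.out.println(leftAssoc(a, b, c));  // 1.0
        System.out.println(rightAssoc(a, b, c)); // 0.0
    }
}
```

Same three numbers, two different results -- so float addition is not associative, and the usual algebra doesn't apply.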

Of course, and mod-5 arithmetic (whether with integers or reals) is the specific and relevant case where it holds true.
And soo~oo many others where it is false. ;)

Ummmmhh that's what I pointed out as a specific example, yes. Nonetheless, the fact that models can be extended endlessly and in contradictory ways IMHO did make mathematics "fuzzier" in a sense. Before Gödel - see the discussion of "unreachable truths/falsehoods" -, mathematicians had a tendency to imagine that their models would change incessantly, but in the sense of evolution, yielding ever-"better" theories that will replace the older ones except for fringe cases. Gödel sunk that concept of "nearer, my God, to thee", if I may be so blasphemous. :)
Ah, I forgot the historical context. Yes, I think the whole "relativism" I’m used to (i.e. all theorems are "just" implications) must have developed greatly from Gödel’s work. Hilbert’s program...

Who is "we" here? "The science of infinity" is one rather widely accepted definition of mathematics, even though it suggests that "computing" (with actual numbers) fails to qualify as "math".
The use of "we" looks very wrong to me now. I meant non-set-theorists, or when-not-doing-set-theory, based on some essay I read. But I'll have to find it again. (Appeal to authority, nice :() I think an example was "all Hilbert spaces of cardinality P(P(P(N))) lack some property" or "some property doesn't hold for all Hilbert spaces of cardinality greater than or equal to P(P(P(N)))", but I'm not sure. I got the example wrong. There's a correction in a later post.
It wasn't just about there being physically infinite things; the author posited that most mathematics could be formalized "predicatively", without an unconstrained power-set axiom [1]. And this somehow led them to conclude "most mathematics (that doesn't directly involve set theory) can be done with cardinalities smaller than some cardinality".
I’ll look it up.

But yes, our currently "established" understanding of the "real" (physical) world does not allow for actual infinite (finite universe wedged between Big Bang and Big Crunch) or infinitesimal (Heisenberg Uncertainty Principle) aspects. You have to go "(infinite) multiverse" (or "truth behind the truth" in a "God hides behind quantum uncertainty" sense) or somesuch to get reality up to "infinities only, please"-style math's muster. >:D
I hadn't thought of Heisenberg as a lower bound on our "approach to the infinitesimal by measuring". That's a nice thought, I think.
I have a hard time digesting the second sentence. When I think I grasp your meaning, I agree. The many quoted things (thanks for quoting) are too undefined for me, so I’m not sure whether I understand what you mean.

The forum/website's builtin editor may not have one, but there's an Edit -> Undo menu item in my browser (Firefox) that seems to work satisfactorily as such ...
That’s a helpful tip, thanks. It’s always nice to know a little more about the tools/software I use. Browsers (I also use Firefox) have so many small & useful tools that are easily overlooked.

P.S.
[1] : https://math.stanford.edu/~feferman/papers/predicativity.pdf by Feferman says on p. 12 “In axiomatic Zermelo-Fraenkel set-theory, the fundamental source of impredicativity is the Separation Axiom scheme, ...” and goes on to describe how the axiom of infinity is also important. So as usual with axiom systems, there is not always a single logical culprit, but at least one that we might "feel" to be the reason.
I don't think that this was the essay I talked about, but I can't find anything that fits better. The example with the Hilbert spaces of some size is not in it... So maybe I mixed some papers up.

However that may be: I believe that a lot of mathematics can be "done" (i.e. a lot of proofs can be carried out) with very weak set-theoretic/logical assumptions or concerns about the continuum hypothesis. This belief is founded (at least) on the above paper's reference to/discussion of Weyl, which I haven't read myself. And I incorporated it implicitly & rather clumsily in the sentence with the "we".
« Last Edit: February 18, 2021, 10:14:07 AM by Groupoid »

moredhel

  • Ranger
  • ****
    • DeviantArt
  • Preferred pronouns: happy with any of them except it
  • Posts: 619
Re: Fuzzyness in maths
« Reply #3 on: February 17, 2021, 04:20:45 PM »
Hrm, the example fits well, but I wouldn't call floating point arithmetic "not math" just because the most basic assumptions of algebra & arithmetic (e.g. associativity) are not satisfied.
It is math, because it is made of exactly defined operations on a set of numbers. But it does not work as most people would expect. E.g. the result of this little code:
float number = 1.0f;
number = number / 3.0f;
number = number + 1.0f;
number = number - 1.0f;
number = number * 3.0f;
is that number equals 1.0000001.
Whenever rational numbers can be used to do whatever you want, you should avoid floats.

Groupoid

Re: Fuzzyness in maths
« Reply #4 on: February 17, 2021, 05:10:11 PM »
JoB, I found it! "Is set theory indispensable?" by Nik Weaver. On arXiv and on his uni-webpage.

moredhel: That's what I had in mind. Luckily we can choose to use rationals and reals in maths. But sometimes the intricacies of number representations in computers can't be avoided in practice. For the usual, seemingly boring reasons of performance and memory use.

Edit: Fixed a typo.
« Last Edit: February 17, 2021, 05:50:27 PM by Groupoid »

moredhel

Re: Fuzzyness in maths
« Reply #5 on: February 17, 2021, 05:47:55 PM »
These reasons are boring, but good ones. More precision can mean exponential growth of memory usage (which is really bad for performance too).
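A Java sketch of my own to illustrate that exponential growth (the starting fraction 1/3 and the map x -> x^2 + 1/4 are arbitrary picks; the point is that squaring exact fractions roughly doubles their size every step):

```java
import java.math.BigInteger;

public class RationalBlowup {
    // Denominator of x_n where x_0 = 1/3 and x_{k+1} = x_k^2 + 1/4,
    // computed with exact, reduced fractions num/den.
    static BigInteger denominatorAfter(int steps) {
        BigInteger num = BigInteger.ONE, den = BigInteger.valueOf(3);
        for (int i = 0; i < steps; i++) {
            // x^2 + 1/4 = (4*num^2 + den^2) / (4*den^2), then reduce by the gcd
            BigInteger n2 = num.multiply(num).shiftLeft(2).add(den.multiply(den));
            BigInteger d2 = den.multiply(den).shiftLeft(2);
            BigInteger g = n2.gcd(d2);
            num = n2.divide(g);
            den = d2.divide(g);
        }
        return den;
    }

    public static void main(String[] args) {
        for (int i = 1; i <= 8; i++)
            System.out.println("step " + i + ": "
                + denominatorAfter(i).bitLength() + " bits in the denominator");
    }
}
```

Even with gcd reduction, the denominator's bit count roughly doubles per step -- after a few dozen iterations you'd be out of memory, which is exactly why people reach for floats.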

JoB

Re: Fuzzyness in maths
« Reply #6 on: February 17, 2021, 05:54:16 PM »
Hrm, the example fits well, but I wouldn't call floating point arithmetic "not math" just because the most basic assumptions of algebra & arithmetic (e.g. associativity) are not satisfied.
As far as I know, there still is no analog of number theory for those floating points. How can it be math when math doesn't know it ... ?

The example I thought about is the general assumption of most engineers that we live in R^3. At first sight I find it weird that it is such a great approximation. Sometimes I'm even astonished by the fact that the abstract "You have 10 sheep and I give you 5 more, then you have 15 sheep afterwards" actually applies to reality.
As a dipl. rer. math. oec., I know that the (say, monthly) output of a manufacturing plant that can make a dozen different products (which compete for available resources) is a vector in N^12, if not R+^12 (e.g. when the products are liquids). Eat that, 3D engineers! :P

... which falsification of "spacetime ~ R^4" are you thinking of? Finite universe? Then R^4 is a proper local approximation, like topology etc. do it as well. Local relativistic distortion of spacetime? GPS wouldn't work if engineers weren't on top of that when needed. Higher dimensions? IIUC the only observable effect currently suggested for them is to explain why gravity is so much weaker than the other fundamental forces, which math would happily explain away with a dampening factor ex machina. (Let's call it "Dark Mather", OK? >:D )

If it is the latter, note that math does predict that in space of 4+ dimensions, there are no knots that you can tie into a rope(*) and a left and a right glove are topologically congruent, i.e., you can "convert" one into the other. Be ready for people asking you to perform those parlor tricks when you claim that we're actually in 4+D. ;)

(*) Where IIRC the key property of a "knot" is that you cannot (un)tie it without pulling an end through loops.

I have a hard time digesting the second sentence. When I think I grasp your meaning, I agree. The many quoted things (thanks for quoting) are too undefined for me, so I’m not sure whether I understand what you mean.
What I meant to say is that there's a number of unproven, but in some cases quite popular, "extensions" or "discoveries waiting to happen" that would put some form of infinity back into physics-slash-reality. Of course, until such discoveries actually happen, that's not a iota more relevant than a sci-fi author writing about people colonizing a planet and naming it Aleph-one because all other names were already taken ... >:D

E.g. the result of this little code:
Code:
float number = 1.0f;
number = number / 3.0f;
number = number + 1.0f;
number = number - 1.0f;
number = number * 3.0f;
is that number equals 1.0000001.
... careful there not to accidentally link your code against the GMP, or the output might surprise you ... 8)

JoB, I found it! "Is set theory indispensable?" by Nik Weaver. On arXiv and on his uni-webpage.
... ooookay, reading 21 pages of exegesis on what the true foundations of math are promises not to be in the set one of our professors called "SNKÜ" (for the German of "Sunday afternoon coffee exercise") ... :3

More precision can mean exponential growth of memory usage (really bad for performance too).
Let me tell you about how formal verification of CPU chip designs happens to be intractable even if you use advanced data models ...

moredhel

Re: Fuzzyness in maths
« Reply #7 on: February 17, 2021, 07:29:06 PM »
... careful there not to accidentally link your code against the GMP, or the output might surprise you ... 8)
For this I used Java, so I didn't link to anything myself. I used it for better readability; for programming tasks I prefer Common Lisp, where the standard says rational numbers have to be of unlimited precision. In practice you run into storage problems when you work with really big or small numbers or tiny fractions. A simple task like calculating whether a given coordinate is inside the Mandelbrot set is a good lesson about using floats to gain speed at the cost of precision.
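For illustration, a rough Java version of that Mandelbrot test (the escape radius 2 and an iteration cap are the usual conventions; the class and method names are mine):

```java
public class Mandelbrot {
    // Escape-time test: iterate z <- z^2 + c starting from z = 0.
    // If |z| stays <= 2 for maxIter steps, treat c as (probably) inside the set;
    // a point that never escapes within the cap can only ever be "probably" inside.
    static boolean probablyInside(double re, double im, int maxIter) {
        double zr = 0.0, zi = 0.0;
        for (int i = 0; i < maxIter; i++) {
            double zr2 = zr * zr - zi * zi + re; // real part of z^2 + c
            zi = 2.0 * zr * zi + im;            // imaginary part of z^2 + c
            zr = zr2;
            if (zr * zr + zi * zi > 4.0) return false; // |z| > 2: escaped for sure
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(probablyInside(0.0, 0.0, 1000)); // true: c = 0 is in the set
        System.out.println(probablyInside(1.0, 0.0, 1000)); // false: c = 1 escapes quickly
    }
}
```

With doubles each iteration is a handful of multiplications; with exact rationals the fractions would explode after a few steps, which is moredhel's point about trading precision for speed.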

JoB

Re: Fuzzyness in maths
« Reply #8 on: February 18, 2021, 02:02:07 AM »
for programming tasks I prefer Common Lisp, where the standard says rational numbers have to be of unlimited precision. In practice you run into storage problems when you work with really big or small numbers or tiny fractions.
... in order to have unlimited precision, you need to store certain numbers (like 1/3) in a format not based on their (infinite) sequence of decimals (but, for example, as the fraction = rational number given). On the other hand, numbers specified by a (limited) sequence of decimals will very likely be (best) stored as that. IMHO you should not only run into memory problems, but also have serious trouble to decide "is x=y?" whenever it involves dissimilar storage formats ... ?

Code:
x=1/3
y=1/30
z=x-y
if !(z==0.3) goBonkers()

Groupoid

Re: Fuzzyness in maths
« Reply #9 on: February 18, 2021, 10:12:43 AM »
Please don't feel compelled to read the paper. I skimmed over it again & looked around a little for citations & mentions by other people, but couldn't come up with anything relevant. The author seems to have published multiple more or less ranty philosophical papers about set theory and logic. Exactly the philosophical-mathematical-ontological fuzziness that easily confuses me. (mathe-philosopho-ontologicality?) Often written in relatively approachable language, making very large emotion-laden statements.
At the very least, he gives food for thought if one is so inclined, and an overview & bibliography of other people's works (e.g. Feferman, Weyl, Quine), which are probably written in a less emotional tone.

I trusted my memory a little too much. I mixed some of Weaver’s claims up. Relevant would be p.12f. He claims that for "mainstream mathematics" "virtually nothing" beyond P(ℕ) or P(P(ℕ)) is needed, and further down on the page he talks about separable Banach spaces, which become non-separable (and thus "pathological" according to him) if you construct the dual spaces often enough.
In section 3 (p. 8ff) he writes about how (according to Feferman & Weyl) a lot of mathematics can be done in ACA_0, so the whole power of ZFC is not needed in applications of mathematics.
I don’t want to take a position on his many other claims & arguments.
Ugh, science. Papers. Ugh. Well, one learns by making errors.

As far as I know, there still is no analog of number theory for those floating points. How can it be math when math doesn't know it ... ?
Hm. Do you consider "math" an entity that can know something? If I had to formalise floating point arithmetic (let's say IEEE 754), then at least we have a set of values (possibly encoded as bit-strings) and some operations on them, like in universal algebra. We could also have a partial map (partial because of NaNs) from the set of values to the rationals. Maybe some "approximation theorems" of the form "a calculation with fewer than n operations has at most an error of f(n)" could be proven. etc. etc.
Using such a formalism one could probably examine how some numerical algorithms behave (e.g. integrating, finding zeros). Thus I think floating point arithmetic can have a corner in mathematics. Another, quite weak, argument is: "it has to do with numbers, so it's math". I don't want to invoke the super-weak sledgehammer argument of "everything is formalisable given a sufficiently complicated logical system, thus everything is math", because that's impractical.
As I'm not well versed in numerics, I don't know what tools people have developed there to deal with the complications & restrictions of practical computations. But I expect that they have developed at least something to deal with floats.
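To make that partial map to the rationals concrete, a hypothetical Java sketch of my own that decodes a float into an exact fraction (normal, positive floats only; sign, subnormals, NaN and infinity are deliberately left out):

```java
public class FloatToRational {
    // Decode a normal, positive float into an exact value mantissa * 2^exp.
    // NaN and infinity have no rational image, hence "partial map".
    static long[] toFraction(float x) {
        int bits = Float.floatToIntBits(x);
        long mantissa = (bits & 0x7FFFFF) | 0x800000; // 23 stored bits + implicit leading 1
        int exp = ((bits >> 23) & 0xFF) - 127 - 23;   // unbias, then account for the 2^-23 scale
        // Reduce the power of two so the fraction is in lowest terms.
        while ((mantissa & 1) == 0 && exp < 0) { mantissa >>= 1; exp++; }
        return new long[] { mantissa, exp };          // x == mantissa * 2^exp exactly
    }

    public static void main(String[] args) {
        long[] f = toFraction(0.1f);
        // 0.1f is really 13421773 * 2^-27 = 13421773/134217728, i.e. not 1/10 at all.
        System.out.println(f[0] + " * 2^" + f[1]);
    }
}
```

So every (finite) float is an exact rational with a power-of-two denominator; the approximation error comes entirely from squeezing inputs and results into that set.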

... which falsification of "spacetime ~ R^4" are you thinking of?
Yeah, I left out the time direction. Concretely, the Lagrange & Hamilton formalisms are very often applied to 3-(space-)dimensional space. That's why I said "most engineers": you need the theory of relativity neither for planning bridges nor for designing engines.

About 4D and knots. The usual knots are (sufficiently nice... technical details) embeddings of S^1 in R^3. Restated: to get a mathematical knot, you tie your rope and then glue the ends of the rope together. This makes precise the notion of "not being able to pull the ends through loops". So, yep, you recall correctly.
How often have I wished to have some epsilon of wiggle-room in a fourth dimension to untie some hemp rope after it rained. ;)

I heard that there even are higher-dimensional analogues of knot theory, if you consider embeddings of S^n in R^(n+2) or somesuch. A quick googling led me to a book (~700 p.) with this nice title: "High-dimensional knot theory, Algebraic surgery in codimension 2". Doesn't that sound like the title of an exciting novel? No idea about the content though. It defines an n-knot as an embedding of S^n into S^(n+2). No idea yet why S instead of R.

Actually, there’s a lecture about knot theory next semester at my uni. But I didn’t plan to attend it.

moredhel

Re: Fuzzyness in maths
« Reply #10 on: February 19, 2021, 02:11:51 PM »
... in order to have unlimited precision, you need to store certain numbers (like 1/3) in a format not based on their (infinite) sequence of decimals (but, for example, as the fraction = rational number given). On the other hand, numbers specified by a (limited) sequence of decimals will very likely be (best) stored as that. IMHO you should not only run into memory problems, but also have serious trouble to decide "is x=y?" whenever it involves dissimilar storage formats ... ?

The Common Lisp approach is that its standard numbers are stored as a pair of numbers. That means (conceptually) 5 is stored as (5 . 1) and 1/3 as (1 . 3); if you type in 2/6 or get it as a result, it is reduced and stored as (1 . 3) (that is why Common Lisp has a builtin function to get the greatest common divisor). When the numbers in storage are kept as small as possible, checking whether x equals y is not that hard. The comparisons for >= and <= are a little harder because they involve multiplications.

With pairs of numbers nice things can be done. In Common Lisp there are builtin complex numbers which are just pairs of the standard numbers. So ((1 . 2) . (1 . 1)) is the internal representation of 1/2 + i.
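A back-of-the-envelope Java version of those normalized pairs (my own sketch; a real implementation like Lisp's would also promote (n . 1) back to a plain integer and use bignums instead of long):

```java
public class Rational {
    final long num, den; // always reduced, den > 0 -- like the (num . den) pairs above

    Rational(long n, long d) {
        if (d == 0) throw new ArithmeticException("division by zero");
        if (d < 0) { n = -n; d = -d; }  // keep the sign in the numerator
        long g = gcd(Math.abs(n), d);   // gcd(0, d) == d, so 0/d normalizes to 0/1
        num = n / g;
        den = d / g;
    }

    static long gcd(long a, long b) { return b == 0 ? a : gcd(b, a % b); }

    Rational minus(Rational o) {
        return new Rational(num * o.den - o.num * den, den * o.den);
    }

    // Cheap equality, exactly because both sides are kept normalized.
    boolean eq(Rational o) { return num == o.num && den == o.den; }

    public static void main(String[] args) {
        System.out.println(new Rational(2, 6).eq(new Rational(1, 3))); // true: 2/6 is stored as 1/3
        // JoB's example: 1/3 - 1/30 is exactly 3/10, no rounding anywhere
        Rational z = new Rational(1, 3).minus(new Rational(1, 30));
        System.out.println(z.eq(new Rational(3, 10)));                 // true
    }
}
```

Because every value is reduced on construction, equality is a plain component-wise comparison -- which is exactly the point of Lisp's gcd normalization.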