It has long been said "to multiply a by positive integral b is to add a to itself b times".
www.collinsdictionary.com/dictionary/english/multiplication
Such an algorithmic definition of multiplication has been around for centuries, and yet it is wrong! Read on...
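To see the error, just follow the dictionary's recipe literally. Here is a minimal Python sketch (the function name is mine, for illustration): starting with a and then adding a to it b times produces a(b + 1), not ab.

def add_a_to_itself_b_times(a, b):
    # The dictionary's recipe taken literally: start with a "itself"...
    total = a
    # ...then add a to it b times.
    for _ in range(b):
        total += a
    return total

print(add_a_to_itself_b_times(2, 3))  # prints 8, which is 2 x 4, not 2 x 3 = 6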
A Proof that ab = a added to itself b – 1 times, via Mathematical Induction.
The ‘Principle of Mathematical Induction’ can prove the proposition P that, for all natural numbers n, an algorithm for multiplication is: a(n) = a(1) + (n – 1)a.
P(n) is a(n) = a(1) + (n – 1)a for all natural numbers n.
Yet our proposition concerns ab, not an, so we substitute b for n and restate the proposition:
P(b) is a(b) = a(1) + (b – 1)a for all natural numbers b.
IF…
P(1) is true and, whenever P(k) is true, P(k + 1) is also true for every positive integer k
THEN…
P(b) is true for all natural numbers b and we will have proven the proposition.
So for b = 1 (the base step) we get a(1) = a + (1 – 1)a. The left-hand side, a(1), equals a, and the right-hand side, a + (1 – 1)a, also equals a, so we have demonstrated P(1) is true.
We now assume P(k) is true (the inductive hypothesis), that is, a(k) = a + (k – 1)a, and we must show P(k + 1) follows, using the fact that a(k + 1) = a(k) + a.
We find a(k + 1) = a(k) + a = [a + (k – 1)a] + a = a + ka = a + [(k + 1) – 1]a. Therefore P(k + 1) is true and we have proven the proposition: ab = a added to itself b – 1 times, for all positive integers b.
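The closed form is also easy to check numerically. A quick Python sanity test over a small sample (my code, for illustration only):

# Check a(b) = a + (b - 1)a against Python's built-in multiplication.
for a in range(-5, 6):
    for b in range(1, 10):
        assert a * b == a + (b - 1) * a
print('a(b) = a + (b - 1)a held for every sampled pair')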
So please ignore any mathematics dictionary or mathematics professor who says or endorses such a silly concept as ab = a added to itself b times! Such a claim displays gullibility, not common sense!
The correct statement a(+b) = a added to itself b – 1 times has the positive integral multiplier b.
Yet what happens if we extend this correct (yet sub-optimal) definition to negative integral multipliers? What definition would be given to a(–b)?
The answer is an example of why the evolution of arithmetic went into reverse from the 16th century.
If we avoid using India's zero in the definition (as we have always done), then the definition becomes:
a(–b) = a subtracted from itself b + 1 times
The sign of the integral multiplier has ALWAYS meant addition or subtraction; we just haven't been taught that nugget. So the integral multiplier is modular, or 'signless' (only its size counts), and we simply add or subtract as many times as the adjusted multiplier states. And we must adjust the multiplier because we are starting from the number 'itself' and not from zero!
Let's use the example two multiplied by 'zero minus three', written 2 × (0 – 3) or, without the zero, 2 × –3. We know the answer to be negative six, yet the incorrect definition ab = a added to itself b times would NEVER have led to the following pedagogy.
Because a(–b) = a subtracted from itself b + 1 times, with 2 × –3 we subtract 2 from itself 3 + 1 = 4 times to get the answer. So let's do it!
Two minus two one time = 2 – 2 = 0
Two minus two two times = 2 – 2 – 2 = –2
Two minus two three times = 2 – 2 – 2 – 2 = –4
Two minus two four times = 2 – 2 – 2 – 2 – 2 = –6
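Both 'from itself' recipes mechanize directly. A minimal Python sketch (the function name is mine, covering positive and negative integral multipliers):

def multiply_from_itself(a, b):
    # Start from the multiplicand 'itself'...
    total = a
    if b > 0:
        # a(+b) = a added to itself b - 1 times.
        for _ in range(b - 1):
            total += a
    else:
        # a(-b) = a subtracted from itself b + 1 times
        # (here the parameter b is negative, so we subtract |b| + 1 times).
        for _ in range(-b + 1):
            total -= a
    return total

print(multiply_from_itself(2, -3))  # 2 - 2 - 2 - 2 - 2 = -6
print(multiply_from_itself(2, 3))   # 2 + 2 + 2 = 6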
The algorithmic definitions or recipes for 'additive' and 'subtractive' multipliers work, yet they need the minus 1 and plus 1 workarounds because the calculation commences from the multiplicand and not from the origin of our number line, called zero.
That's what happens when modern mathematics pedagogy remains stuck in Greek mode, where zero and negative numbers did not exist! Let's update our arithmetic for China's use of opposing numbers that cancelled each other out, around 2300 years ago. Let's update our arithmetic for India's use of zero as a number with which binary operations may be performed, some 1400 years ago.
a(+b) = a added to zero b times (in succession)
a(–b) = a subtracted from zero b times (in succession)
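Starting from zero, both recipes collapse into one rule with no plus or minus 1 adjustment, as this Python sketch shows (my code, for illustration):

def multiply_from_zero(a, b):
    # Start from the origin, zero...
    total = 0
    # ...then add a (if b is positive) or subtract a (if b is negative),
    # exactly |b| times in succession. No workaround needed, and b = 0 just works.
    for _ in range(abs(b)):
        total = total + a if b > 0 else total - a
    return total

print(multiply_from_zero(2, 3))   # 0 + 2 + 2 + 2 = 6
print(multiply_from_zero(2, -3))  # 0 - 2 - 2 - 2 = -6
print(multiply_from_zero(2, 0))   # 0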
Should a math professor be reading this post (unlikely), he or she might be interested to know that the great mathematicians Grassmann, Dedekind, Peano and Landau (among others) appear to have missed an idea because it was too simple. These men never defined a multiplied by b directly. Instead they defined a multiplied by the successor of b, where the successor of b is b + 1.
Thus professors turn a blind eye to the nonsense that is "to multiply a by positive integral b is to add a to itself b times" and instead define multiplication via a(b + 1) = ab + a.
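That successor-based rule turns straight into a recursion. Here is a minimal Python rendering of the Peano-style definition on the positive naturals (my sketch, taking a(1) = a as the base case):

def peano_multiply(a, b):
    # Base case: a(1) = a.
    if b == 1:
        return a
    # Successor rule a(b + 1) = ab + a, read backwards: a(b) = a(b - 1) + a.
    return peano_multiply(a, b - 1) + a

print(peano_multiply(2, 3))  # (2 + 2) + 2 = 6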
So what is the super simple idea that appears lost in the axiomatic theory of the positive naturals?
For that, you will need to stay tuned!
Jonathan Crabtree
click to connect at LinkedIn
http://www.linkedin.com/in/jonathancrabtree