Hyperoperation
In mathematics, the hyperoperation sequence is an infinite sequence of arithmetic operations (called hyperoperations)[1][2][3] that starts with the unary operation of successor, then continues with the binary operations of addition, multiplication and exponentiation, after which the sequence proceeds with further binary operations extending beyond exponentiation, using right-associativity. For the operations beyond exponentiation, the nth member of this sequence is named by Reuben Goodstein after the Greek prefix of n suffixed with -ation (such as tetration, pentation)[4] and can be written using n-2 arrows in Knuth's up-arrow notation (if the latter is properly extended to negative arrow-indices for the first three hyperoperations). Each hyperoperation may be understood recursively in terms of the previous one by:

a \uparrow^n b = a \uparrow^{n-1} \left(a \uparrow^{n-1} \left(\cdots \left(a \uparrow^{n-1} a\right) \cdots \right)\right) with b occurrences of a on the right-hand side of the equation

It may also be defined according to the recursion-rule part of the definition, as in Knuth's up-arrow version of the Ackermann function:

a \uparrow^n b = a \uparrow^{n-1} \left(a \uparrow^n (b-1)\right)

This recursion rule is common to many variants of hyperoperations (see below).
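The recursion rule can be sketched directly in code. The following Python function (the language and the name `up_arrow` are choices made here, not part of the article) computes a ↑^n b for small arguments; values explode quickly, so only tiny inputs terminate in practice.

```python
def up_arrow(a, n, b):
    """Compute a ↑^n b by the recursion a ↑^n b = a ↑^(n-1) (a ↑^n (b-1)),
    with one arrow (n = 1) meaning ordinary exponentiation and an
    empty tower (b = 0) evaluating to 1."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(2, 2, 4))  # 2 ↑↑ 4 = 2^(2^(2^2)) = 65536
```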



Definition

The hyperoperation sequence is the sequence of binary operations H_n: \mathbb{N} \times \mathbb{N} \rightarrow \mathbb{N}\,\! indexed by n \in \mathbb{N}, defined recursively as follows:

H_n(a, b) = \begin{cases} b + 1 & \text{if } n = 0 \\ a &\text{if } n = 1, b = 0 \\ 0 &\text{if } n = 2, b = 0 \\ 1 &\text{if } n \ge 3, b = 0 \\ H_{n-1}(a, H_n(a, b - 1)) & \text{otherwise} \end{cases}\,\!

(Note that for n = 0, the binary operation essentially reduces to a unary operation by ignoring the first argument.)

For n = 0, 1, 2, 3, this definition reproduces the basic arithmetic operations of successor (which is a unary operation), addition, multiplication, and exponentiation, respectively, as

H_0(a, b) = b + 1\,\!,
H_1(a, b) = a + b\,\!,
H_2(a, b) = a \cdot b\,\!,
H_3(a, b) = a^{b}\,\!,
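The piecewise definition above translates almost verbatim into code. This Python sketch (the language and the name `H` are choices made here) checks that ranks 0 through 3 reproduce the familiar operations:

```python
def H(n, a, b):
    """Hyperoperation H_n(a, b), directly from the piecewise recursive
    definition: successor at n = 0, base cases a, 0, 1 at b = 0 for
    n = 1, 2, >= 3 respectively, and the shared recursion rule otherwise."""
    if n == 0:
        return b + 1
    if b == 0:
        return a if n == 1 else (0 if n == 2 else 1)
    return H(n - 1, a, H(n, a, b - 1))

# Ranks 0-3 reduce to successor, addition, multiplication, exponentiation:
assert H(0, 7, 5) == 6
assert H(1, 7, 5) == 12
assert H(2, 7, 5) == 35
assert H(3, 2, 5) == 32
```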

and for n \ge 4 it extends these basic operations beyond exponentiation to what can be written in Knuth's up-arrow notation as

H_4(a, b) = a\uparrow\uparrow{b}\,\!,
H_5(a, b) = a\uparrow\uparrow\uparrow{b}\,\!,
H_n(a, b) = a\uparrow^{n-2}b \text{ for } n \ge 3\,\!,

Knuth's notation could be extended to negative indices \ge -2 in such a way as to agree with the entire hyperoperation sequence, except for the lag in the indexing:

H_n(a, b) = a \uparrow^{n-2}b\text{ for } n \ge 0.\,\!

The hyperoperations can thus be seen as an answer to the question "what's next" in the sequence: successor, addition, multiplication, exponentiation, and so on. Noting that

  • a + b = 1 + (a + (b - 1)),\,\!
  • a \cdot b = a + (a \cdot (b - 1)),\,\!
  • a ^ b = a \cdot (a ^ {(b - 1)}),\,\!

the relationship between the basic arithmetic operations is illustrated, allowing the higher operations to be defined naturally as above. The parameters of the hyperoperation hierarchy are sometimes referred to by their analogous exponentiation terms;[5] a is the base, b is the exponent (or hyperexponent),[6] and n is the rank (or grade).[7]
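Each identity above says that the next operation is a fold of the previous one over b copies of its argument. A minimal Python illustration of that pattern (names `succ`, `add`, `mul`, `power` are choices made here; left folds are used, which is harmless at these ranks):

```python
from functools import reduce

succ = lambda x: x + 1
# a + b: apply successor to a, b times
add = lambda a, b: reduce(lambda acc, _: succ(acc), range(b), a)
# a * b: add a into 0, b times
mul = lambda a, b: reduce(lambda acc, _: add(acc, a), range(b), 0)
# a ** b: multiply a into 1, b times
power = lambda a, b: reduce(lambda acc, _: mul(acc, a), range(b), 1)

assert add(4, 3) == 7 and mul(4, 3) == 12 and power(4, 3) == 64
```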

In common terms, the hyperoperations are ways of compounding numbers, each growing faster than the last, obtained by iterating the previous hyperoperation. The concepts of successor, addition, multiplication and exponentiation are all hyperoperations: the successor operation (producing x + 1 from x) is the most primitive; addition specifies how many times 1 is to be added to a number; multiplication specifies how many times a number is to be added to itself; and exponentiation specifies how many times a number is to be multiplied by itself.


Examples

This is a list of the first seven hyperoperations.

n | Operation | Definition | Names | Domain
0 | b + 1 | 1 + \underbrace{1 + 1 + \cdots + 1}_{b} | hyper0, increment, successor, zeration | b arbitrary
1 | a + b | a + \underbrace{1 + 1 + \cdots + 1}_{b} | hyper1, addition | arbitrary
2 | a \cdot b | \underbrace{a + a + \cdots + a}_{b} | hyper2, multiplication | arbitrary
3 | a \uparrow b = a^b | \underbrace{a \cdot a \cdot \ldots \cdot a}_{b} | hyper3, exponentiation | a > 0 with b real, or a non-zero with b an integer (with some multivalued extensions to complex numbers)
4 | a \uparrow\uparrow b | \underbrace{a \uparrow (a \uparrow (\cdots \uparrow a))}_{b} | hyper4, tetration | a and b integers > 0 (with some proposed extensions)
5 | a \uparrow\uparrow\uparrow b = a \uparrow^3 b | \underbrace{a \uparrow\uparrow (a \uparrow\uparrow (\cdots \uparrow\uparrow a))}_{b} | hyper5, pentation | a and b integers > 0
6 | a \uparrow^4 b | \underbrace{a \uparrow^3 (a \uparrow^3 (\cdots \uparrow^3 a))}_{b} | hyper6, hexation | a and b integers > 0

See also Tables of values.


History

One of the earliest discussions of hyperoperations was that of Albert Bennett[7] in 1914, who developed some of the theory of commutative hyperoperations (see below). About 12 years later, Wilhelm Ackermann defined the function \phi(a, b, n)\,\![8] which somewhat resembles the hyperoperation sequence.

In his 1947 paper,[4] R. L. Goodstein introduced the specific sequence of operations that are now called hyperoperations, and also suggested the Greek names tetration, pentation, hexation, etc., for the extended operations beyond exponentiation (because they correspond to the indices 4, 5, 6, etc.). As a three-argument function, e.g., G(n,a,b) = H_n(a,b)\,\!, the hyperoperation sequence as a whole is seen to be a version of the original Ackermann function \phi(a,b,n)\,\! (recursive, but not primitive recursive), as modified by Goodstein to incorporate the primitive successor function together with the other three basic operations of arithmetic (addition, multiplication, exponentiation), and to make a more seamless extension of these beyond exponentiation.

The original three-argument Ackermann function \phi\,\! uses the same recursion rule as does Goodstein's version of it (i.e., the hyperoperation sequence), but differs from it in two ways. First, \phi(a,b,n)\,\! defines a sequence of operations starting from addition (n = 0) rather than the successor function, then multiplication (n = 1), exponentiation (n = 2), etc. Secondly, the initial conditions for \phi\,\! result in \phi(a, b, 3) = a \uparrow\uparrow (b + 1)\,\!, thus differing from the hyperoperations beyond exponentiation.[9][10][11] The significance of the b + 1 in the previous expression is that \phi(a,b,3)\,\! = a^{a^{\cdot^{\cdot^{\cdot^a}}}}\,\!, where b counts the number of operators (exponentiations), rather than counting the number of operands ("a"s) as does the b in a\uparrow\uparrow b\,\!, and so on for the higher-level operations. (See the Ackermann function article for details.)
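The offset between \phi and the hyperoperations can be checked computationally. Below is a sketch of the original three-argument \phi with the initial conditions described above (Python and the names `phi`/`tetration` are choices made here), compared against a right-associated power tower:

```python
def phi(a, b, n):
    """Ackermann's original phi: phi(a, b, 0) = a + b, and at b = 0
    the values 0, 1, a for n = 1, n = 2, n >= 3 respectively."""
    if n == 0:
        return a + b
    if b == 0:
        return 0 if n == 1 else (1 if n == 2 else a)
    return phi(a, phi(a, b - 1, n), n - 1)

def tetration(a, b):
    """a ↑↑ b as a right-associated tower of b copies of a."""
    x = 1
    for _ in range(b):
        x = a ** x
    return x

# phi(a, b, 3) = a ↑↑ (b + 1): b counts operators, not operands
assert phi(2, 2, 3) == tetration(2, 3) == 16
```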


Notations

This is a list of notations that have been used for hyperoperations.

Name | Notation equivalent to H_n(a, b) | Comment
Knuth's up-arrow notation | a \uparrow^{n-2} b | Used by Knuth[12] (for n \ge 2), and found in several reference books.[13][14]
Goodstein's notation | G(n, a, b) | Used by Reuben Goodstein.[4]
Original Ackermann function | \phi(a, b, n-1) for 1 \le n \le 3; \phi(a, b-1, n-1) for n > 3 | Used by Wilhelm Ackermann.[8]
Ackermann–Péter function | A(n, b - 3) + 3 for a = 2 | This corresponds to hyperoperations for base 2.
Nambiar's notation | a \otimes^n b | Used by Nambiar.[15]
Box notation | a\,\boxed{n}\,b | Used by Rubtsov and Romerio.[3][5]
Superscript notation | a {}^{(n)} b | Used by Robert Munafo.[16]
Subscript notation | a {}_{(n)} b | Used for lower hyperoperations by Robert Munafo.[16]
Square bracket notation | a[n]b | Used in many online forums; convenient for ASCII.


Variations

For different initial conditions or different recursion rules, very different operations can occur. Some mathematicians refer to all variants as examples of hyperoperations.

In the general sense, a hyperoperation hierarchy (S,\,I,\,F) is a family (F_n)_{n \in I} of binary operations on S, indexed by a set I, such that there exist i, j, k \in I where F_i(a, b) = a + b (addition), F_j(a, b) = a \cdot b (multiplication), and F_k(a, b) = a^b (exponentiation).

Also, if the last condition is relaxed (i.e. exponentiation is not required), then we may also include the commutative hyperoperations described below. Although one could list each hyperoperation explicitly, variants are generally not defined that way: most include only the successor function (or addition) in their definition, and redefine multiplication (and the operations beyond it) by a single recursion rule that applies to all ranks. Since this rule is part of the definition of the hierarchy, and not a property of the hierarchy itself, it is difficult to define formally.

There are many possibilities for hyperoperations that are different from Goodstein's version. By using different initial conditions for F_n(a, 0) or F_n(a, 1), the iterations of these conditions may produce different hyperoperations above exponentiation, while still corresponding to addition and multiplication. The modern definition of hyperoperations includes F_n(a, 0) = 1 for all n \ge 3, whereas the variants below include F_n(a, 0) = a, and F_n(a, 0) = 0.

An open problem in hyperoperation research is whether the hyperoperation hierarchy (\mathbb{N}, \mathbb{N}, F) can be generalized to (\mathbb{C}, \mathbb{C}, F), and whether (\mathbb{C}, F_n) forms a quasigroup (with restricted domains).

Variant starting from a

In 1928, Wilhelm Ackermann defined a 3-argument function \phi(a, b, n) which gradually evolved into a 2-argument function known as the Ackermann function. The original Ackermann function \phi was less similar to modern hyperoperations, because his initial conditions start with \phi(a, 0, n) = a for all n > 2. He also assigned addition to n = 0, multiplication to n = 1 and exponentiation to n = 2, so the initial conditions produce very different operations for tetration and beyond.

n | Operation | Comment
0 | F_0(a, b) = a + b |
1 | F_1(a, b) = a \cdot b |
2 | F_2(a, b) = a^b |
3 | F_3(a, b) = a \uparrow\uparrow (b + 1) | An offset form of tetration. The iteration of this operation differs from the iteration of tetration.
4 | F_4(a, b) = (x \to a \uparrow\uparrow (x + 1))^b(a) | Not to be confused with pentation.

Another initial condition that has been used is A(0, b) = 2b + 1 (where the base is constant a = 2), due to Rózsa Péter, which does not form a hyperoperation hierarchy.

Variant starting from 0

In 1984, C. W. Clenshaw and F. W. J. Olver began the discussion of using hyperoperations to prevent computer floating-point overflows.[17] Since then, many other authors[18][19][20] have renewed interest in the application of hyperoperations to floating-point representation. While discussing tetration, Clenshaw et al. assumed the initial condition F_n(a, 0) = 0, which makes yet another hyperoperation hierarchy. Just like in the previous variant, the fourth operation is very similar to tetration, but offset by one.

n | Operation | Comment
1 | F_1(a, b) = a + b |
2 | F_2(a, b) = a \cdot b |
3 | F_3(a, b) = a^b |
4 | F_4(a, b) = a \uparrow\uparrow (b - 1) | An offset form of tetration. The iteration of this operation differs from the iteration of tetration.
5 | F_5(a, b) = (x \to a \uparrow\uparrow (x - 1))^b(0) | Not to be confused with pentation.
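The offset at rank 4 can be verified directly: iterating x \to a^x starting from the initial value 0 yields a tower one level shorter than tetration's. A small Python sketch (names `F4` and `tetration` are choices made here):

```python
def F4(a, b):
    """Rank-4 operation of this variant: b iterations of x -> a**x
    starting from F_4(a, 0) = 0."""
    x = 0
    for _ in range(b):
        x = a ** x
    return x

def tetration(a, b):
    """a ↑↑ b: b iterations of x -> a**x starting from 1."""
    x = 1
    for _ in range(b):
        x = a ** x
    return x

# F_4(a, b) = a ↑↑ (b - 1) for b >= 1
assert all(F4(2, b) == tetration(2, b - 1) for b in range(1, 6))
```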

Commutative hyperoperations

Commutative hyperoperations were considered by Albert Bennett as early as 1914,[7] which is possibly the earliest remark about any hyperoperation sequence. Commutative hyperoperations are defined by the recursion rule

F_{n+1}(a, b) = \exp(F_n(\ln(a), \ln(b)))

which is symmetric in a and b, meaning all hyperoperations are commutative. This sequence does not contain exponentiation, and so does not form a hyperoperation hierarchy.

n | Operation | Comment
0 | F_0(a, b) = \ln(e^{a} + e^{b}) |
1 | F_1(a, b) = a + b |
2 | F_2(a, b) = a \cdot b = e^{\ln(a) + \ln(b)} | This is due to the properties of the logarithm.
3 | F_3(a, b) = e^{\ln(a)\ln(b)} | A commutative form of exponentiation.
4 | F_4(a, b) = e^{e^{\ln(\ln(a))\ln(\ln(b))}} | Not to be confused with tetration.
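The recursion rule is easy to explore numerically. A floating-point Python sketch (the name `commutative_hyperop` is a choice made here; it is valid only where the nested logarithms are defined, e.g. a, b > 1 for rank 3):

```python
from math import exp, log

def commutative_hyperop(n, a, b):
    """Bennett's commutative hierarchy: F_1 is addition and
    F_{n+1}(a, b) = exp(F_n(log a, log b))."""
    if n == 1:
        return a + b
    return exp(commutative_hyperop(n - 1, log(a), log(b)))

# F_2 recovers multiplication; F_3 = e^(ln a * ln b) is symmetric in a, b
assert abs(commutative_hyperop(2, 3.0, 5.0) - 15.0) < 1e-9
assert abs(commutative_hyperop(3, 3.0, 5.0)
           - commutative_hyperop(3, 5.0, 3.0)) < 1e-9
```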

Balanced hyperoperations

Balanced hyperoperations, first considered by Clément Frappier in 1991,[21] are based on the iteration of the function x^x, and are thus related to Steinhaus–Moser notation. The recursion rule used in balanced hyperoperations is

F_{n+1}(a, b) = (x \to F_n(x, x))^{\log_2(b)}(a)

which requires continuous iteration, even for integer b.

n | Operation | Comment
0 | | Rank 0 does not exist.[22]
1 | F_1(a, b) = a + b |
2 | F_2(a, b) = a \cdot b = a \cdot 2^{\log_2(b)} |
3 | F_3(a, b) = a^b = a^{2^{\log_2(b)}} | This is exponentiation.
4 | F_4(a, b) = (x \to x^x)^{\log_2(b)}(a) | Not to be confused with tetration.
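For b an exact power of 2 the iteration count \log_2(b) is an integer, so rank 4 can be computed exactly; any other b would need a continuous (fractional) iteration of x \to x^x. A Python sketch under that restriction (the name `balanced_F4` is a choice made here):

```python
from math import log2

def balanced_F4(a, b):
    """Rank-4 balanced hyperoperation: iterate x -> x**x, log2(b) times.
    Restricted here to b a power of 2 so the count is an integer."""
    k = log2(b)
    assert k.is_integer(), "non-integer iteration count not handled"
    x = a
    for _ in range(int(k)):
        x = x ** x
    return x

# log2(4) = 2 iterations: 2 -> 2**2 = 4 -> 4**4 = 256
assert balanced_F4(2, 4) == 256
```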

Lower hyperoperations

An alternative for these hyperoperations is obtained by evaluation from left to right. Since

  • a+b = (a+(b-1))+1
  • a\cdot b = (a\cdot (b-1))+a
  • a^b = (a^{(b-1)})\cdot a

define (using subscript notation) a_{(n+1)}b = (a_{(n+1)}(b-1))_{(n)}\,a with a_{(1)}b = a + b, a_{(2)}0 = 0, and a_{(n)}0 = 1 for n > 2.

But this suffers a kind of collapse, failing to form the "power tower" traditionally expected of hyper4: a_{(4)}b = a^{(a^{(b-1)})}

How can a^{(n)}b be so different from a_{(n)}b for n > 3? The reason is a symmetry called associativity, which is built into + and \cdot (see field) but which ^ lacks. This lack of associativity in exponentiation is what separates the higher and lower hyperoperations. Take for example the product 2 \cdot 3 \cdot 4, which unambiguously evaluates to 24. If we replace the multiplication symbols with those of exponentiation, however, the expression becomes ambiguous: do we mean (2^3)^4 or 2^{(3^4)}? There is a big difference, since the former can be rewritten as 2^{12} while the latter is 2^{81}. In other words, left-associative folds of the exponentiation operator over a sequence do not coincide with right-associative folds, the latter usually producing larger numbers. It is more apt to say that the higher and lower operations were simply decreed to be the same for n < 4. (On the other hand, one can object that the field operations were defined to mimic what had been "observed in nature", and ask why "nature" suddenly objects to that symmetry.)
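The ambiguity is quick to confirm in code. A short Python check folding ** over [2, 3, 4] from the left and from the right:

```python
from functools import reduce

xs = [2, 3, 4]
left = reduce(lambda acc, x: acc ** x, xs)             # (2**3)**4 = 2**12
right = reduce(lambda acc, x: x ** acc, reversed(xs))  # 2**(3**4) = 2**81

assert left == 2 ** 12
assert right == 2 ** 81
assert left != right
```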

The other degrees do not collapse in this way, and so this family has some interest of its own as lower (perhaps lesser or inferior) hyperoperations. For ranks greater than 3, it is also lower in the sense that the values it produces are often much smaller than those produced by the standard hyperoperations.

n | Operation | Comment
0 | F_0(a, b) = b + 1 | increment, successor, zeration
1 | F_1(a, b) = a + b |
2 | F_2(a, b) = a \cdot b |
3 | F_3(a, b) = a^b | This is exponentiation.
4 | F_4(a, b) = a^{(a^{(b-1)})} | Not to be confused with tetration.
5 | F_5(a, b) = (x \to x^{x^{(a-1)}})^{b-1}(a) | Not to be confused with pentation.

Coincidence of Hyperoperations

Hyperoperations H_i and H_j are said to coincide on (a, b) when H_i(a, b) = H_j(a, b). For example, H_i(a, 1) = H_j(a, 1) = a for all i, j > 1, i.e. for all hyperoperations above addition. Similarly, H_i(1, a) = H_j(1, a) = 1, but in this case both addition and multiplication must be excluded. A point at which all hyperoperations coincide (excluding the unary successor function, which does not really belong as a binary operation) is (2, 2): for all i = 1, 2, ... we have H_i(2, 2) = 4. There is a connection between the arity of these functions (namely, two) and this point of coincidence: since the second argument of a hyperoperation is the length of the list over which to fold the previous operation, and this is 2, the previous operation is folded over a list of length two, which amounts to applying it to the pair represented by that list. Since the first argument is itself 2, and this is duplicated in the recursion, we arrive again at the pair (2, 2) with each recursion, until we reach 2 + 2 = 4.

To be more precise, 2 \uparrow^n 2 = \text{fold}(\uparrow^{n-1}, [2, 2]) = 2 \uparrow^{n-1} 2. (The unit of \uparrow^{n-1} need not be supplied to fold when the list has length greater than 1.) To demonstrate this recursion with an example, take 2^2: this is two multiplied by itself, i.e. 2 \cdot 2, which in turn is two added to itself, i.e. 2 + 2. At +, the recursion terminates and we are left with four.
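This chain of coincidences can be checked mechanically. The snippet below restates the hyperoperation definition from earlier in the article (in Python, so the example is self-contained) and verifies H_n(2, 2) = 4 for the first several ranks:

```python
def H(n, a, b):
    """Goodstein hyperoperation, restated from the article's definition."""
    if n == 0:
        return b + 1
    if b == 0:
        return a if n == 1 else (0 if n == 2 else 1)
    return H(n - 1, a, H(n, a, b - 1))

# Every rank from addition upward sends (2, 2) to 4:
assert all(H(n, 2, 2) == 4 for n in range(1, 8))
```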



Notes

1. If there were a rank 0 balanced hyperoperation f(a, b), then addition would satisfy a + b = (x \to f(x, x))^{\log_2(b)}(a). Substituting b = 1 in this equation gives a + 1 = (x \to f(x, x))^{0}(a) = a, which is a contradiction.



Source: Wikipedia | The above article is available under the GNU FDL.
