Algorithms based on number base systems? [closed]
I've noticed recently that there are a great many algorithms out there based in part or in whole on clever uses of numbers in creative bases. For example:
- Binomial heaps are based on binary numbers, and the more complex skew binomial heaps are based on skew binary numbers.
- Some algorithms for generating lexicographically ordered permutations are based on the factoradic number system.
- Tries can be thought of as trees that look at one digit of the string at a time, for an appropriate base.
- Huffman encoding trees are designed to have each edge in the tree encode a zero or one in some binary representation.
- Fibonacci coding is used in Fibonacci search and to invert certain types of logarithms.
My question is: what other algorithms are out there that use a clever number system as a key step of their intuition or proof? I'm thinking about putting together a talk on the subject, so the more examples I have to draw from, the better.
Chris Okasaki has a very good chapter in his book Purely Functional Data Structures that discusses "Numerical Representations": essentially, take some representation of a number and convert it into a data structure. To give a flavor, here are the sections of that chapter:
- Positional Number Systems
- Binary Numbers (Binary Random-Access Lists, Zeroless Representations, Lazy Representations, Segmented Representations)
- Skew Binary Numbers (Skew Binary Random Access Lists, Skew Binomial Heaps)
- Trinary and Quaternary Numbers
Some of the best tricks, distilled:
- Distinguish between dense and sparse representations of numbers (usually you see this in matrices or graphs, but it's applicable to numbers too!)
- Redundant number systems (systems that have more than one representation of a number) are useful.
- If you arrange the first digit to be non-zero or use a zeroless representation, retrieving the head of the data structure can be efficient.
- Avoid cascading borrows (from taking the tail of the list) and carries (from consing onto the list) by segmenting the data structure (the plain, unsegmented binary version is sketched below).
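To make the binary random-access list (and the carry analogy) concrete, here is a minimal Python sketch. It is my own simplification of the idea, not Okasaki's code, and it is the plain unsegmented version, so cons really does cascade carries exactly like a binary increment.

```python
class BinaryRAList:
    """A list of n elements stored as complete binary trees whose sizes are exactly
    the powers of two in the binary representation of n (a sketch, not Okasaki's code)."""

    def __init__(self):
        self.trees = []                    # list of (size, tree), sizes strictly increasing

    def cons(self, x):
        size, tree = 1, x                  # a new singleton tree (a leaf)
        # Merging equal-sized trees is exactly carry propagation in a binary increment.
        while self.trees and self.trees[0][0] == size:
            _, other = self.trees.pop(0)
            tree, size = (tree, other), size * 2
        self.trees.insert(0, (size, tree))

    def lookup(self, i):
        for size, tree in self.trees:      # skip whole trees, like skipping digits
            if i < size:
                while size > 1:            # descend, halving the index range each step
                    size //= 2
                    if i < size:
                        tree = tree[0]
                    else:
                        tree, i = tree[1], i - size
                return tree
            i -= size
        raise IndexError(i)
```

Both cons and lookup take O(log n) time here; the zeroless, lazy, and segmented variants in the chapter exist precisely to tame the carry cascades and to make retrieving the head cheap.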
Here is also the reference list for that chapter:
- Guibas, McCreight, Plass and Roberts: A new representation for linear lists.
- Myers: An applicative random-access stack.
- Carlsson, Munro, Poblete: An implicit binomial queue with constant insertion time.
- Kaplan, Tarjan: Purely functional lists with catenation via recursive slow-down.
"Ternary numbers can be used to convey self-similar structures like a Sierpinski Triangle or a Cantor set conveniently." source
"Quaternary numbers are used in the representation of 2D Hilbert curves." source
"The quater-imaginary numeral system was first proposed by Donald Knuth in 1955, in a submission to a high-school science talent search. It is a non-standard positional numeral system which uses the imaginary number 2i as its base. It is able to represent every complex number using only the digits 0, 1, 2, and 3." source
"Roman numerals are a biquinary system." source
"Senary may be considered useful in the study of prime numbers since all primes, when expressed in base-six, other than 2 and 3 have 1 or 5 as the final digit." source
"Sexagesimal (base 60) is a numeral system with sixty as its base. It originated with the ancient Sumerians in the 3rd millennium BC, it was passed down to the ancient Babylonians, and it is still used — in a modified form — for measuring time, angles, and the geographic coordinates that are angles." source
etc...
This list is a good starting point.
I read your question the other day, and today I was faced with a problem: how do I generate all partitionings of a set? The solution that occurred to me, and that I used (maybe due to reading your question), was this:
For a set with n elements, where I need p partitions, count through all n-digit numbers in base p.
Each number corresponds to a partitioning. Each digit corresponds to an element in the set, and the value of the digit tells you which partition to put the element in.
It's not amazing, but it's neat. It's complete, causes no redundancy, and uses arbitrary bases. The base you use depends on the specific partitioning problem.
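Here is a rough Python sketch of that counting trick (the function name is mine); each base-p digit of the counter picks the partition for one element:

```python
def partitionings(elements, p):
    """Enumerate all assignments of the elements to p labeled partitions by counting
    through every len(elements)-digit number in base p."""
    for code in range(p ** len(elements)):
        parts = [[] for _ in range(p)]
        for element in elements:
            code, digit = divmod(code, p)   # peel off one base-p digit
            parts[digit].append(element)    # the digit says which partition to use
        yield parts

# Example: the 2^3 = 8 ways to split ['a', 'b', 'c'] across two labeled parts.
for parts in partitionings(['a', 'b', 'c'], 2):
    print(parts)
```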
I recently came across a cool algorithm for generating subsets in lexicographical order based on the binary representations of the numbers between 0 and 2^n - 1. It uses the numbers' bits both to determine what elements should be chosen for the set and to locally reorder the generated sets to get them into lexicographical order. If you're curious, I have a writeup posted here.
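I don't know the exact reordering step from that writeup, but the underlying trick, reading each subset straight off the bits of a counter running from 0 to 2^n - 1, looks roughly like this (a plain sketch, not the lexicographic variant described above):

```python
def subsets_by_bits(items):
    """Yield every subset of items: bit i of the counter decides whether items[i] is included."""
    n = len(items)
    for mask in range(1 << n):                            # counts from 0 to 2^n - 1
        yield [items[i] for i in range(n) if mask & (1 << i)]

# The 8 subsets of ['a', 'b', 'c'], in counter order (not lexicographic order).
for s in subsets_by_bits(['a', 'b', 'c']):
    print(s)
```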
Also, many algorithms are based on scaling (such as a weakly-polynomial version of the Ford-Fulkerson max-flow algorithm), which uses the binary representation of the numbers in the input problem to progressively refine a rough approximation into a complete solution.
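For a flavor of how scaling leans on the binary representation, here is a rough capacity-scaling max-flow sketch (my own simplified toy version, not a reference implementation): the threshold delta sweeps down through the powers of two, so the algorithm effectively processes the capacities one bit at a time.

```python
from collections import deque

def max_flow_capacity_scaling(n, edges, s, t):
    """Toy capacity-scaling max flow on a dense residual matrix (edges: (u, v, cap))."""
    cap = [[0] * n for _ in range(n)]
    for u, v, c in edges:
        cap[u][v] += c

    def find_path(delta):
        # BFS for an s-t path using only residual edges with capacity >= delta.
        parent = [-1] * n
        parent[s] = s
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] >= delta:
                    parent[v] = u
                    queue.append(v)
        return parent if parent[t] != -1 else None

    flow = 0
    max_cap = max((c for _, _, c in edges), default=0)
    delta = 1 << max_cap.bit_length()          # start above the largest capacity bit
    while delta >= 1:
        parent = find_path(delta)
        while parent is not None:
            # Find the bottleneck along the path, then push that much flow.
            bottleneck, v = float('inf'), t
            while v != s:
                bottleneck = min(bottleneck, cap[parent[v]][v])
                v = parent[v]
            v = t
            while v != s:
                u = parent[v]
                cap[u][v] -= bottleneck
                cap[v][u] += bottleneck
                v = u
            flow += bottleneck
            parent = find_path(delta)
        delta //= 2                            # move on to the next lower bit
    return flow
```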
Not exactly a clever base system but a clever use of the base system: Van der Corput sequences are low-discrepancy sequences formed by reversing the base-n representation of numbers. They're used to construct the 2-D Halton sequences.
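A small sketch of the digit-reversal construction (function name mine):

```python
def van_der_corput(n, base=2):
    """n-th element of the base-`base` van der Corput sequence in [0, 1):
    the digits of n are mirrored across the radix point."""
    x, denom = 0.0, 1
    while n:
        n, digit = divmod(n, base)
        denom *= base
        x += digit / denom
    return x

# A 2-D Halton sequence pairs van der Corput sequences in coprime bases:
halton_2d = [(van_der_corput(i, 2), van_der_corput(i, 3)) for i in range(100)]
```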
I vaguely remember something about double base systems for speeding up some matrix multiplication.
A double-base system is a redundant system that uses two bases for one number:
n = sum_{i=1}^{l} c_i * 2^{a_i} * 3^{b_i}, where c_i is in {-1, 1}
Redundant means that one number can be specified in many ways.
You can look up the article "Hybrid Algorithm for the Computation of the Matrix Polynomial" by Vassil Dimitrov and Todor Cooklev.
Trying to give the best short overview I can.
They were trying to compute the matrix polynomial G(N,A) = I + A + ... + A^{N-1}.
Supposing N is composite with N = J*K, we have G(N,A) = G(J,A) * G(K, A^J). Applying this with J = 2, we get:
G(N,A) = (I + A) * G(K, A^2),             if N = 2K
G(N,A) = I + (A + A^2) * G(K, A^2),       if N = 2K + 1
Also, in base 3:
G(N,A) = (I + A + A^2) * G(K, A^3),               if N = 3K
G(N,A) = I + (A + A^2 + A^3) * G(K, A^3),         if N = 3K + 1
G(N,A) = I + A + A * (A + A^2 + A^3) * G(K, A^3), if N = 3K + 2
As it's "obvious" (jokingly) that some of these equations are fast in the first system and some better in the second - so it is a good idea to choose the best of those depending on N
. But this would require fast modulo operation for both 2 and 3. Here's why the double base comes in - you can basically do the modulo operation fast for both of them giving you a combined system:
G(N,A) = (I + A + A^2) * G(K, A^3),           if N = 0 or 3 mod 6
G(N,A) = I + (A + A^2 + A^3) * G(K, A^3),     if N = 1 or 4 mod 6
G(N,A) = (I + A) * G(3K + 1, A^2),            if N = 2 mod 6
G(N,A) = I + (A + A^2) * G(3K + 2, A^2),      if N = 5 mod 6
Look at the article for a better explanation, as I'm not an expert in this area.
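To make the recursion concrete, here is a small NumPy sketch of just the binary (J = 2) case; it shows how each step consumes one binary digit of N, and it is not the full double-base algorithm from the paper.

```python
import numpy as np

def geom_sum(N, A):
    """G(N, A) = I + A + ... + A^(N-1), via the binary (J = 2) recursion."""
    I = np.eye(A.shape[0])
    if N == 0:
        return np.zeros_like(I)
    if N == 1:
        return I
    K, r = divmod(N, 2)
    A2 = A @ A
    if r == 0:                                   # N = 2K
        return (I + A) @ geom_sum(K, A2)
    return I + (A + A2) @ geom_sum(K, A2)        # N = 2K + 1
```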
Radix sort can use various number bases (http://en.wikipedia.org/wiki/Radix_sort). It's a pretty interesting variation on bucket sort.
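A minimal LSD radix sort sketch for non-negative integers, parameterized by the base (names are mine):

```python
def radix_sort(nums, base=10):
    """Sort non-negative integers by bucketing on one base-`base` digit at a time,
    from the least significant digit upward (LSD radix sort)."""
    if not nums:
        return nums
    place, largest = 1, max(nums)
    while place <= largest:
        buckets = [[] for _ in range(base)]
        for x in nums:
            buckets[(x // place) % base].append(x)   # digit at the current place
        nums = [x for bucket in buckets for x in bucket]
        place *= base
    return nums

# radix_sort([170, 45, 75, 90, 802, 24, 2, 66]) -> [2, 24, 45, 66, 75, 90, 170, 802]
```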
Here is a good post on using ternary numbers to solve the "counterfeit coin" problem (where you have to detect a single counterfeit coin in a bag of regular ones, using a balance as few times as possible).
Hashing strings (e.g. in the Rabin-Karp algorithm) often evaluates the string as a base-b number consisting of n digits (where n is the length of the string, and b is some chosen base that is large enough). For example, the string "ABCD" can be hashed as:
'A'*b^3+'B'*b^2+'C'*b^1+'D'*b^0
Substituting ASCII values for the characters and taking b to be 256, this becomes:
65*256^3+66*256^2+67*256^1+68*256^0
Though, in most practical applications, the resulting value is taken modulo some reasonably sized number to keep the result sufficiently small.
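A hedged sketch of both the base-b hash and the rolling update that makes Rabin-Karp efficient (the particular base and modulus here are arbitrary choices, not anything canonical):

```python
def polynomial_hash(s, base=256, mod=(1 << 61) - 1):
    """Evaluate the string as a base-`base` number, reduced modulo `mod`."""
    h = 0
    for ch in s:
        h = (h * base + ord(ch)) % mod
    return h

def rabin_karp(text, pattern, base=256, mod=(1 << 61) - 1):
    """Find occurrences of `pattern` in `text` by comparing rolling hashes."""
    m = len(pattern)
    if m == 0 or m > len(text):
        return []
    target = polynomial_hash(pattern, base, mod)
    h = polynomial_hash(text[:m], base, mod)
    top = pow(base, m - 1, mod)                  # weight of the digit that slides out
    hits = []
    for i in range(len(text) - m + 1):
        if h == target and text[i:i + m] == pattern:
            hits.append(i)
        if i + m < len(text):
            # Slide the window: drop the leading digit, shift, append the new one.
            h = ((h - ord(text[i]) * top) * base + ord(text[i + m])) % mod
    return hits

# rabin_karp("ABCABCD", "ABCD") -> [3]
```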
Exponentiation by squaring is based on the binary representation of the exponent.
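A quick sketch: the loop walks the exponent's bits from least significant to most significant, squaring as it goes.

```python
def power(x, n):
    """Compute x**n for a non-negative integer n using its binary representation."""
    result = 1
    while n > 0:
        if n & 1:          # this bit of the exponent is set
            result *= x
        x *= x             # square for the next bit position
        n >>= 1
    return result

# power(3, 13) == 1594323
```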
In Hacker's Delight (a book every programmer should know, in my eyes) there is a complete chapter about unusual bases, such as -2 as a base (yes, negative bases) or -1+i (with i the imaginary unit, sqrt(-1)) as a base.
There is also a nice calculation of what the best base is in terms of hardware design. For those who don't want to read it: the solution of the equation is e, so you could go with 2 or 3; 3 would be a little bit better (by a factor of about 1.056 over 2), but 2 is technically more practical.
Other things that come to mind are Gray code counters (when you count in this system, only one bit changes at a time, a property often used in hardware design to reduce metastability issues) and the generalization of the already mentioned Huffman encoding: arithmetic encoding.
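For the base -2 example, conversion is just repeated division with a remainder fix-up; here is a small sketch (my own, not code from the book):

```python
def to_negabinary(n):
    """Digits of n in base -2, most significant first; works for any integer n,
    negative numbers included, with no sign needed."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        n, r = divmod(n, -2)
        if r < 0:          # Python's remainder takes the divisor's sign; fix it up
            r += 2
            n += 1
        digits.append(r)
    return digits[::-1]

# to_negabinary(6) -> [1, 1, 0, 1, 0], since 16 - 8 - 2 = 6
```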
Cryptography makes extensive use of integer rings (modular arithmetic) and also finite fields, whose operations are intuitively based on the way polynomials with integer coefficients behave.
I really like this one for converting binary numbers into Gray codes: http://www.matrixlab-examples.com/gray-code.html
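Binary-to-Gray conversion is the standard one-line XOR trick; here is that plus the (slightly less obvious) inverse as a sketch (I haven't checked whether the linked page does it exactly this way):

```python
def to_gray(n):
    """Gray code of n: adjacent integers map to codewords differing in one bit."""
    return n ^ (n >> 1)

def from_gray(g):
    """Invert the Gray code: each binary bit is the XOR of all higher Gray bits."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# [to_gray(i) for i in range(8)] -> [0, 1, 3, 2, 6, 7, 5, 4]
```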
Great question. The list is long indeed. Telling time is a simple instance of mixed bases (days | hours | minutes | seconds | am/pm).
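Converting a raw count of seconds into that mixed-radix form is just a chain of divmods, for example:

```python
def split_duration(total_seconds):
    """Decompose seconds into the mixed-radix (days, hours, minutes, seconds) form,
    with bases 24, 60, 60 below the days place."""
    minutes, seconds = divmod(total_seconds, 60)
    hours, minutes = divmod(minutes, 60)
    days, hours = divmod(hours, 24)
    return days, hours, minutes, seconds

# split_duration(100000) -> (1, 3, 46, 40)
```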
I've created a meta-base enumeration n-tuple framework if you're interested in hearing about it. It's some very sweet syntactic sugar for base numbering systems. It's not released yet. Email my username (at gmail).
One of my favourites using base 2 is Arithmetic Encoding. It's unusual because the heart of the algorithm uses representations of numbers between 0 and 1 in binary.
Maybe AKS (the AKS primality test) is such a case.