Turning an array of integers into an array of nonnegative integers
Start with an array of integers whose values sum to some positive integer S. The following routine always terminates in the same number of steps with the same results. Why is this?
Start with an array x = [x_0, x_1, ..., x_(N-1)] such that all x_i's are integers. While there is a negative entry, do the following:

1. Choose any index i such that x_i < 0.
2. Add x_i (a negative number) to x_((i-1) % N).
3. Add x_i (a negative number) to x_((i+1) % N).
4. Replace x_i with -x_i (a positive number).
This process maintains the property that x_0 + x_1 + ... + x_(N-1) = S. For any given starting array x, no matter which index is chosen at any step, the number of times one goes through these steps is the same, as is the resulting vector. It is not even obvious (to me, at least) that this process terminates in finite time, let alone has this nice invariant property.
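For concreteness, here is a minimal Python sketch of the routine (the "first negative entry" choice rule is just one option; by the claim, any rule gives the same answer):

```python
def flip_until_nonnegative(x):
    """Run the flipping routine on a list of integers with positive sum.

    Returns (final array, number of steps). Here we always flip the
    first negative entry; the claim is that any choice rule yields the
    same final array and the same step count.
    """
    x = list(x)
    n = len(x)
    steps = 0
    while any(v < 0 for v in x):
        i = next(j for j, v in enumerate(x) if v < 0)
        x[(i - 1) % n] += x[i]  # add the negative value to the left neighbor
        x[(i + 1) % n] += x[i]  # ...and to the right neighbor
        x[i] = -x[i]            # flip the sign of the chosen entry
        steps += 1
    return x, steps
```

For example, `flip_until_nonnegative([4, -1, -2])` returns `([0, 1, 0], 6)`, matching the worked example.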
EXAMPLE:

Take x = [4, -1, -2] and flip x_1 to start; the result is:

[4, -1, -2]
[3, 1, -3]
[0, -2, 3]
[-2, 2, 1]
[2, 0, -1]
[1, -1, 1]
[0, 1, 0]
On the other hand, flipping x_2 to start gives:

[4, -1, -2]
[2, -3, 2]
[-1, 3, -1]
[1, 2, -2]
[-1, 0, 2]
[1, -1, 1]
[0, 1, 0]
and a third way gives this solution with the arrays reversed from the third one on down, if you choose x_2 instead of x_0 to flip at the third array. In all cases, 6 steps lead to [0, 1, 0].
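The claimed invariance is easy to probe empirically: a quick experiment that picks a random negative index at every step always lands on the same result in the same number of steps (the function name and the number of trials here are arbitrary):

```python
import random

def run_with_random_choices(x, rng):
    """Run the routine, flipping a randomly chosen negative entry each step."""
    x = list(x)
    n = len(x)
    steps = 0
    while any(v < 0 for v in x):
        i = rng.choice([j for j, v in enumerate(x) if v < 0])
        x[(i - 1) % n] += x[i]
        x[(i + 1) % n] += x[i]
        x[i] = -x[i]
        steps += 1
    return tuple(x), steps

rng = random.Random(0)
outcomes = {run_with_random_choices([4, -1, -2], rng) for _ in range(200)}
print(outcomes)  # a single outcome: {((0, 1, 0), 6)}
```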
I have an argument for why this is true, but it seems to me to be overly complicated (it has to do with Coxeter groups). Does anyone have a more direct way to think about why this happens? Even finding a reason why this should terminate would be great.
Bonus points to anyone who finds a way to determine the number of steps for a given array (without going through the process).
I think the easiest way to see why the output vector and the number of steps are the same no matter what index you choose at each step is to look at the problem as a bunch of matrix and vector multiplications.
For the case where x has 3 components, think of x as a 3x1 vector: x = [x_0 x_1 x_2]' (where ' is the transpose operation). Each iteration of the loop will choose to flip one of x_0, x_1, x_2, and the operation it performs on x is identical to multiplication by one of the following matrices:
      -1  0  0         1  1  0         1  0  1
s_0 =  1  1  0   s_1 = 0 -1  0   s_2 = 0  1  1
       1  0  1         0  1  1         0  0 -1
where multiplication by s_0 is the operation performed if the index i = 0, s_1 corresponds to i = 1, and s_2 corresponds to i = 2. With this view, you can interpret the algorithm as multiplying the corresponding s_i matrix by x at each iteration. So in the first example, where x_1 is flipped at the start, the algorithm computes:

s_1*s_2*s_0*s_1*s_2*s_1*[4 -1 -2]' = [0 1 0]'
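The product above can be checked directly in plain Python (the helper names `matmul` and `s` are introduced here; the rightmost factor acts first, so the flips are applied in the order performed):

```python
from functools import reduce

def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[r][k] * B[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

def s(i, n):
    """The n x n flip matrix s_i: row i is negated, and rows (i-1) % n
    and (i+1) % n pick up a copy of entry i."""
    m = [[int(r == c) for c in range(n)] for r in range(n)]
    m[i][i] = -1
    m[(i - 1) % n][i] = 1
    m[(i + 1) % n][i] = 1
    return m

# P = s_1*s_2*s_0*s_1*s_2*s_1, written left to right
P = reduce(matmul, [s(1, 3), s(2, 3), s(0, 3), s(1, 3), s(2, 3), s(1, 3)])
x0 = [4, -1, -2]
v = [sum(P[r][c] * x0[c] for c in range(3)) for r in range(3)]
print(v)  # [0, 1, 0]
```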
The fact that the index you choose doesn't affect the final output vector arises from two interesting properties of the s matrices. First, s_i*s_(i-1)*s_i = s_(i-1)*s_i*s_(i-1), where i-1 is computed modulo n, the number of matrices. This property is the only one needed to see why you get the same result in the examples with 3 elements:

s_1*s_2*s_0*s_1*s_2*s_1 = s_1*s_2*s_0*(s_1*s_2*s_1) = s_1*s_2*s_0*(s_2*s_1*s_2),

which corresponds to choosing x_2 at the start, and lastly:
s_1*s_2*s_0*s_2*s_1*s_2 = s_1*(s_2*s_0*s_2)*s_1*s_2 = s_1*(s_0*s_2*s_0)*s_1*s_2,

which corresponds to choosing to flip x_2 at the start, but then choosing to flip x_0 in the third iteration.
The second property only applies when x has 4 or more elements. It is s_i*s_k = s_k*s_i whenever k <= i-2, where i-2 is again computed modulo n. This property is apparent when you consider the form of the matrices when x has 4 elements:
      -1  0  0  0         1  1  0  0         1  0  0  0         1  0  0  1
s_0 =  1  1  0  0   s_1 = 0 -1  0  0   s_2 = 0  1  1  0   s_3 = 0  1  0  0
       0  0  1  0         0  1  1  0         0  0 -1  0         0  0  1  1
       1  0  0  1         0  0  0  1         0  0  1  1         0  0  0 -1
The second property essentially says that you can exchange the order in which non-conflicting flips occur. For example, in a 4-element vector, if you first flipped x_1 and then flipped x_3, this has the same effect as first flipping x_3 and then flipping x_1.
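Both properties can be verified mechanically for the 4-element case; here is a small self-contained check (the helper names `matmul` and `s` are introduced here, and Python's negative indexing handles the cyclic i-1 case):

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[r][k] * B[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

def s(i, n):
    """The n x n flip matrix s_i: row i is negated, and rows (i-1) % n
    and (i+1) % n pick up a copy of entry i."""
    m = [[int(r == c) for c in range(n)] for r in range(n)]
    m[i][i] = -1
    m[(i - 1) % n][i] = 1
    m[(i + 1) % n][i] = 1
    return m

n = 4
S = [s(i, n) for i in range(n)]

# First property: s_i*s_(i-1)*s_i == s_(i-1)*s_i*s_(i-1), indices mod n.
braid = all(
    matmul(matmul(S[i], S[i - 1]), S[i]) == matmul(matmul(S[i - 1], S[i]), S[i - 1])
    for i in range(n)
)

# Second property: non-adjacent flips commute (s_0 with s_2, s_1 with s_3).
commute = (matmul(S[0], S[2]) == matmul(S[2], S[0])
           and matmul(S[1], S[3]) == matmul(S[3], S[1]))

print(braid, commute)  # True True
```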
I picture pushing the negative value(s) out in two directions until they dampen. Since addition is commutative, it doesn't matter what order you process the elements.
Here is an observation for when N is divisible by 3... Probably not useful, but I feel like writing it down.
Let w (complex) be a primitive cube root of 1; that is, w^3 = 1 and 1 + w + w^2 = 0. For example, w = cos(2*pi/3) + i*sin(2*pi/3).
Consider the sum x_0 + x_1*w + x_2*w^2 + x_3 + x_4*w + x_5*w^2 + ... That is, multiply each element of the sequence by consecutive powers of w and add them all up.
Something moderately interesting happens to this sum on each step.
Consider three consecutive numbers [a, -b, c] from the sequence, with b positive. Suppose these elements line up with the powers of w such that these three numbers contribute a - b*w + c*w^2 to the sum.
Now perform the step on the middle element.
After the step, these numbers contribute (a-b) + b*w + (c-b)*w^2 to the sum. But since 1 + w + w^2 = 0, b + b*w + b*w^2 = 0 too. So we can add this to the previous expression to get a + 2*b*w + c*w^2, which is very similar to what we had before the step.

In other words, the step merely added 3*b*w to the sum.
If the three consecutive numbers had lined up with the powers of w to contribute (say) a*w - b*w^2 + c, it turns out that the step will add 3*b*w^2.
In other words, no matter how the powers of w line up with the three numbers, the step increases the sum by 3*b, 3*b*w, or 3*b*w^2.
Unfortunately, since w^2 = -(w+1), this does not actually yield a steadily increasing function. So, as I said, probably not useful. But it still seems like a reasonable strategy to seek a "signature" for each position that changes monotonically with each step...
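A quick numeric check of the observation, using Python's complex type (the names here are ad hoc):

```python
import cmath

w = cmath.exp(2j * cmath.pi / 3)  # a primitive cube root of 1

def signature(x):
    """Sum of x_k * w^(k mod 3) over the sequence."""
    return sum(v * w ** (k % 3) for k, v in enumerate(x))

# Three consecutive numbers [a, -b, c] contribute a - b*w + c*w^2;
# performing the step on the middle element yields [a-b, b, c-b].
a, b, c = 4, 1, -2   # i.e. the triple [4, -1, -2], flipping x_1
before = signature([a, -b, c])
after = signature([a - b, b, c - b])

# The step added exactly 3*b*w to the sum.
assert abs((after - before) - 3 * b * w) < 1e-9
```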