
Convert Array of Decimal Digits to an Array of Binary Digits

This is probably a quite exotic question.

My Problem is as follows:

The TI 83+ graphing calculator allows you to program on it using either assembly (with a link cable to a computer) or its built-in TI-BASIC programming language.

According to what I've found, it supports only 16-bit integers and some emulated floats.

I want to work with a bit larger numbers however (around 64 bit), so for that I use an array with the single digits:

{1, 2, 3, 4, 5}

would be the Decimal 12345.

In binary, that's 11000000111001, or as a binary digit array:

{1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1}

which would be how the calculator displays it.

How would I go about converting this array of decimal digits (representing a number too large for the calculator to handle as a native type) into an array of binary digits?

Efficiency is not an issue. This is NOT homework.

This would leave me free to implement Addition for such arrays and such.

thanks!


I thought about it, and I think I would do it with the following algorithm:

  • Check the last digit (5 in the example case).
  • If it is odd, store a 1 in the binary array; if it is even, store a 0 (filling the array from right to left).

  • Now divide the number by 2 using the following method:

  • Begin with the first (most significant) digit and clear the 'carry' variable.
  • Check whether the digit is odd (with an and 1) before dividing; then replace it by digit/2 plus the 'carry'. If the digit was odd, put 5 in the carry for the next digit (a dropped remainder is worth 5 in the next lower place).
  • Repeat until all digits have been done.

Repeat both steps until the whole number is reduced to 0s.

When done, the binary array holds the binary representation.

your example: 1,2,3,4,5

  • the 5 is odd so we store 1 in the binary array: 1
  • we divide the array by 2 using the algorithm:
  • 0,2,3,4,5 => 0,1+5,3,4,5 => 0,6,1,4,5 => 0,6,1,2+5,5 => 0,6,1,7,2

and repeat:

The last digit of 0,6,1,7,2 is even, so we store a 0: the binary array is now 0,1 (notice we fill the binary array from right to left)

etc

You end up with the binary representation.

EDIT: Just to clarify the above: all I'm doing is the age-old algorithm:

 int value=12345;
 while(value>0)
 {
      binaryArray.push(value&1);  //least significant bit first
      value>>=1;                  //divide by 2
 }

except in your example we don't have an int but an array which represents a (base-10) int ;^)
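That array version of the loop can be sketched as follows (a Python sketch for readability; the function name is made up for this illustration, and on the calculator you would use a TI-BASIC list instead):

```python
def decimal_digits_to_binary(digits):
    """Convert a most-significant-first decimal digit list into a
    most-significant-first binary digit list by repeated halving."""
    digits = list(digits)               # work on a copy
    bits = []
    while any(digits):
        # store the lowest bit: odd last digit -> 1, even -> 0
        bits.append(digits[-1] & 1)
        # halve the decimal array in place, most significant digit first
        carry = 0
        for i, d in enumerate(digits):
            digits[i] = d // 2 + carry
            carry = 5 if d & 1 else 0   # dropped remainder is worth 5 one place down
    bits.reverse()                      # we collected least significant bit first
    return bits

print(decimal_digits_to_binary([1, 2, 3, 4, 5]))
# [1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1]
```

After the first halving pass the working array is 0,6,1,7,2, exactly as in the trace above.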


One way would be to convert each digit in the decimal representation (scaled by its power of ten) to its binary representation, and then add the binary representations of all the digits:

5 = 101
40 = 101000
300 = 100101100
2000 = 11111010000
10000 = 10011100010000

             101
          101000
       100101100
     11111010000
+ 10011100010000
----------------
  11000000111001
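The column sum above can be sanity-checked with native integers (a small Python sketch, just to verify the arithmetic):

```python
# Each decimal digit contributes digit * 10**place; summing those
# contributions in binary reproduces the full number.
parts = [5, 4 * 10, 3 * 100, 2 * 1000, 1 * 10000]
for p in parts:
    print(f"{p:>5} = {p:b}")
total = sum(parts)
print(f"sum = {total} = {total:b}")
```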

Proof of concept in C#:

Methods for converting to an array of binary digits, adding arrays and multiplying an array by ten:

private static byte[] GetBinary(int value) {
  int bit = 1, len = 1;
  while (bit * 2 <= value) { // note <=, otherwise exact powers of two overrun the array
    bit <<= 1;
    len++;
  }
  byte[] result = new byte[len];
  for (int i = 0; value > 0;i++ ) {
    if (value >= bit) {
      value -= bit;
      result[i] = 1;
    }
    bit >>= 1;
  }
  return result;
}

private static byte[] Add(byte[] a, byte[] b) {
  byte[] result = new byte[Math.Max(a.Length, b.Length) + 1];
  int carry = 0;
  for (int i = 1; i <= result.Length; i++) {
    if (i <= a.Length) carry += a[a.Length - i];
    if (i <= b.Length) carry += b[b.Length - i];
    result[result.Length - i] = (byte)(carry & 1);
    carry >>= 1;
  }
  if (result[0] == 0) {
    byte[] shorter = new byte[result.Length - 1];
    Array.Copy(result, 1, shorter, 0, shorter.Length);
    result = shorter;
  }
  return result;
}

private static byte[] Mul2(byte[] a, int exp) {
  // multiply by 2^exp: append exp zero bits
  byte[] result = new byte[a.Length + exp];
  Array.Copy(a, result, a.Length);
  return result;
}

private static byte[] Mul10(byte[] a, int exp) {
  for (int i = 0; i < exp; i++) {
    a = Add(Mul2(a, 3), Mul2(a, 1)); // a*10 = a*8 + a*2
  }
  return a;
}

Converting an array:

byte[] digits = { 1, 2, 3, 4, 5 };

byte[][] bin = new byte[digits.Length][];
int exp = 0;
for (int i = digits.Length - 1; i >= 0; i--) {
  bin[i] = Mul10(GetBinary(digits[i]), exp);
  exp++;
}
byte[] result = null;
foreach (byte[] digit in bin) {
  result = result == null ? digit: Add(result, digit);
}

// output array
Console.WriteLine(
  result.Aggregate(
    new StringBuilder(),
    (s, n) => s.Append(s.Length == 0 ? "" : ",").Append(n)
  ).ToString()
);

Output:

1,1,0,0,0,0,0,0,1,1,1,0,0,1

Edit:
Added methods for multiplying an array by powers of ten. Instead of multiplying the digit before converting it to a binary array, the multiplication has to be done on the binary array itself, since the scaled values would not fit in a native integer.


The main issue here is that you're going between bases which aren't multiples of one another, and thus there isn't a direct isolated mapping between input digits and output digits. You're probably going to have to start with your least significant digit, output as many least significant digits of the output as you can before you need to consult the next digit, and so on. That way you only need to have at most 2 of your input digits being examined at any given point in time.

You might find it advantageous in terms of processing order to store your numbers in reversed form (such that the least significant digits come first in the array).
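For example, addition becomes straightforward with least-significant-digit-first storage, because the carry travels in the same direction as the loop (a hypothetical Python sketch; `add_digits` is an invented name):

```python
def add_digits(a, b):
    """Add two decimal digit lists stored least significant digit first.
    Growing the number by one digit is a simple append at the end."""
    out = []
    carry = 0
    for i in range(max(len(a), len(b))):
        s = carry
        if i < len(a):
            s += a[i]
        if i < len(b):
            s += b[i]
        out.append(s % 10)   # this place's digit
        carry = s // 10      # carry flows toward higher places
    if carry:
        out.append(carry)
    return out

# 12345 + 999 = 13344, with digits stored in reverse
print(add_digits([5, 4, 3, 2, 1], [9, 9, 9]))   # [4, 4, 3, 3, 1]
```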
