
Octal conversion using loops in C++

I am currently working on a basic program which converts a binary number to octal. Its task is to print a table of all the numbers between 0 and 256, with their binary, octal and hexadecimal equivalents. The task requires me to use only my own code (i.e. loops etc., not built-in functions). The code I have written (it is quite messy at the moment) is as follows (this is only a snippet):

        int counter = ceil(log10(fabs(binaryValue)+1));
        int iter;
        if (counter%3 == 0)
        {
            iter = counter/3;
        }
        else if (counter%3 != 0)
        {
            iter = ceil((counter/3)); 
        }
        c = binaryValue;
        for (int h = 0; h < iter; h++)
        {
            tempOctal = c%1000;
            c /= 1000;
            int count = ceil(log10(fabs(tempOctal)+1));
            for (int counter = 0; counter < count; counter++)
            {
                if (tempOctal%10 != 0)
                {
                   e = pow(2.0, counter);
                   tempDecimal += e;
                }
                tempOctal /= 10;
            }
            octalValue += (tempDecimal * pow(10.0, h));
        }

The output is completely wrong. When, for example, the binary value is 1111 (decimal value 15), it outputs 7. I can understand why this happens (the last three digits of the binary number, 111, are 7 in decimal), but I haven't been able to identify the problem in the code. Any ideas?

Edit: After some debugging and testing I figured out the answer.

#include <iostream>
#include <cmath>
using namespace std;

int main()
{
    while (true)
    {
        int binaryValue, c, tempOctal, tempDecimal, octalValue = 0, e;
        cout << "Enter a binary number to convert to octal: ";
        cin >> binaryValue;
        int counter = ceil(log10(binaryValue+1));
        cout << "Counter " << counter << endl;
        int iter;
        if (counter%3 == 0)
        {
            iter = counter/3;
        }
        else if (counter%3 != 0)
        {
            iter = (counter/3)+1;
        }
        cout << "Iterations " << iter << endl;
        c = binaryValue;
        cout << "C " << c << endl;
        for (int h = 0; h < iter; h++)
        {
            tempOctal = c%1000;
            cout << "3 digit binary part " << tempOctal << endl;
            int count = ceil(log10(tempOctal+1));
            cout << "Digits " << count << endl;
            tempDecimal = 0;
            for (int counterr = 0; counterr < count; counterr++)
            {
                if (tempOctal%10 != 0)
                {
                    e = pow(2.0, counterr);
                    tempDecimal += e;
                    cout << "Temp Decimal value 0-7 " << tempDecimal << endl;
                }
                tempOctal /= 10;
            }
            octalValue += (tempDecimal * pow(10.0, h));
            cout << "Octal Value " << octalValue << endl;
            c /= 1000;
        }
        cout << "Final Octal Value: " << octalValue << endl;
    }
    system("pause");
    return 0;
}


This looks overly complex. There's no need to involve floating-point math, and it can easily introduce problems of its own.

Of course, the obvious solution is to use a pre-existing function to do this (like { char buf[32]; snprintf(buf, sizeof buf, "%o", binaryValue); }) and be done with it, but if you really want to do it "by hand", you should look into using bit operations:

  • Use binaryValue & 7 to mask out the three lowest bits. These will be your next octal digit (three bits cover 0..7, which is one octal digit).
  • Use binaryValue >>= 3 to shift the number so that three new bits land in the lowest position.
  • Reverse the number afterwards, or (if possible) start from the end of the string buffer and emit the digits backwards. (A short sketch follows this list.)
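A minimal sketch of that approach might look like the following (the function name toOctal is mine, and it assumes the value is already held in an unsigned int as an ordinary binary integer):

#include <algorithm>
#include <string>

// Build the octal digits by masking and shifting, as described above.
std::string toOctal(unsigned value)
{
    std::string result;
    do
    {
        result.push_back(static_cast<char>('0' + (value & 7))); // lowest three bits == one octal digit
        value >>= 3;                                             // bring the next three bits down
    } while (value != 0);
    std::reverse(result.begin(), result.end());                  // digits were produced backwards
    return result;
}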


I don't understand your code; it seems far too complicated. But one thing is sure: if you are converting an internal representation into octal, you're going to have to divide by 8 somewhere, and do a % 8 somewhere. And I don't see them. On the other hand, I see operations with both 10 and 1000, neither of which should be present.

For starters, you might want to write a simple function which converts a value (preferably an unsigned of some type—get unsigned right before worrying about the sign) to a string using any base, e.g.:

//! \pre
//!     base >= 2 && base <= 36
//!
//! Digits are 0-9, then A-Z.
std::string convert(unsigned value, unsigned base);

This shouldn't take more than about 5 or 6 lines of code. But be careful: the normal algorithm generates the digits in reverse order. If you're using std::string, the simplest solution is to push_back each digit, then call std::reverse at the end, before returning it. Otherwise, a C style char[] works well, provided that you make it large enough. (sizeof(unsigned) * CHAR_BIT + 2 is more than enough, even for signed, and even with a '\0' at the end, which you won't need if you return a string.) Just initialize the pointer to buffer + sizeof(buffer), and pre-decrement it each time you insert a digit. To construct the string you return, std::string( pointer, buffer + sizeof(buffer) ) should do the trick.

As for the loop, the end condition could simply be value == 0. (You'll be dividing value by base each time through, so you're guaranteed to reach this condition.) If you use a do ... while, rather than just a while, you're also guaranteed at least one digit in the output.

(It would have been a lot easier for me to just post the code, but since this is obviously homework, I think it better to just give indications concerning what needs to be done.)

Edit: I've added my implementation, and some comments on your new code:

First, the comments: the prompt is very misleading. "Enter a binary number" sounds like the user should enter binary; if you're reading into an int, the value input should be decimal. And there are still the % 1000 and / 1000, and the % 10 and / 10, that I don't understand. Whatever you're doing, it can't be right if there's no % 8 and / 8. Try it: input "128", for example, and see what you get.

If you're trying to input binary, then you really have to input a string, and parse it yourself.

My code for the conversion itself would be:

#include <cassert>
#include <climits>
#include <string>

//! \pre
//!     base >= 2 && base <= 36
//!
//! Digits are 0-9, then A-Z.
std::string toString( unsigned value, unsigned base )
{
    assert( base >= 2 && base <= 36 );
    static char const digits[] = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
    char buffer[sizeof(unsigned) * CHAR_BIT];   // large enough even for base 2
    char* dst = buffer + sizeof(buffer);
    do
    {
        *--dst = digits[value % base];  // emit digits from the end, backwards
        value /= base;
    } while (value != 0);
    return std::string(dst, buffer + sizeof(buffer));
}
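As a rough illustration of how such a function could drive the table from the original question (the loop bounds and column layout here are my own assumptions, not part of the answer):

#include <iostream>

int main()
{
    // Print each value from 0 to 256 with its binary, octal and
    // hexadecimal forms, using toString from above.
    for (unsigned i = 0; i <= 256; ++i)
    {
        std::cout << i << '\t'
                  << toString(i, 2) << '\t'
                  << toString(i, 8) << '\t'
                  << toString(i, 16) << '\n';
    }
    return 0;
}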

If you want to parse input (e.g. for binary), then something like the following should do the trick:

#include <algorithm>
#include <cassert>
#include <cctype>
#include <climits>
#include <stdexcept>
#include <string>

unsigned fromString( std::string const& value, unsigned base )
{
    assert( base >= 2 && base <= 36 );
    static char const digits[] = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
    unsigned results = 0;
    for (std::string::const_iterator iter = value.begin();
            iter != value.end();
            ++ iter)
    {
        unsigned digit = std::find
            ( digits, digits + sizeof(digits) - 1,
              toupper(static_cast<unsigned char>( *iter ) ) ) - digits;
        if ( digit >= base )
            throw std::runtime_error( "Illegal character" );
        if ( results >= UINT_MAX / base
             && (results > UINT_MAX / base || digit > UINT_MAX % base) )
            throw std::runtime_error( "Overflow" );
        results = base * results + digit;
    }
    return results;
}

It's more complicated than toString because it has to handle all sorts of possible error conditions. It's also still probably simpler than you need; you probably want to trim blanks, etc., as well (or even ignore them: entering 01000000 is more error prone than 0100 0000).

(Also, the end iterator for find has a - 1 because of the trailing '\0' the compiler inserts into digits.)
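Putting the two functions together for the original binary-to-octal task might look roughly like this (a sketch only; the prompt text and variable names are my own assumptions):

#include <iostream>
#include <string>

int main()
{
    std::string input;
    std::cout << "Enter a binary number: ";
    std::cin >> input;

    // Parse the text as base 2, then format the resulting value as base 8.
    unsigned value = fromString(input, 2);
    std::cout << "Octal: " << toString(value, 8) << '\n';
    return 0;
}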


Actually, I don't understand why you need such complex code to accomplish what you need.

First of all, there is no such thing as conversion from binary to octal (the same is true for converting to/from decimal, etc.). The machine always works in binary; there's nothing you can (or should) do about this.

This is actually a question of formatting. That is, how do you print a number as octal, and how do you parse the textual representation of the octal number.

Edit:

You may use the following code for printing a number in any base:

const int PRINT_NUM_TXT_MAX = 33; // worst-case for binary

void PrintNumberInBase(unsigned int val, int base, char* szBuf)
{
    // calculate the number of digits
    int digits = 0;
    for (unsigned int x = val; x; digits++)
        x /= base;

    if (digits < 1)
        digits = 1; // will emit zero

    // Print the value from right to left

    szBuf[digits] = 0; // zero-term

    while (digits--)
    {
        int dig = val % base;
        val /= base;

        char ch = (dig <= 9) ?
            ('0' + dig) :
            ('a' + dig - 0xa);

        szBuf[digits] = ch;
    }
}

Example:

char sz[PRINT_NUM_TXT_MAX];
PrintNumberInBase(19, 8, sz);
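// sz now holds "23", since 19 decimal is 2*8 + 3, i.e. 23 in octal.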


The code the OP is asking how to produce is what your scientific calculator does when you display a number in a different base.

I think your algorithm is wrong. Just looking over it, I see a power function being used towards the end. Why? There is a simple mathematical way to do what you are talking about. Once you get the math part, you can convert it to code.

If you had pencil and paper, and no calculator (similar to not using pre-built functions), the method is to take the base you are in, change it to base 10, then change to the base you require. In your case that would be base 2, to base 10, to base 8.

This should get you started. All you really need are if/else statements with modulus to get the remainders. http://www.purplemath.com/modules/numbbase3.htm

Then you have to figure out how to get your desired output. Maybe store the remainders in an array or output to a txt file.
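For instance, here is a rough sketch of the decimal-to-octal half of that process, with the remainders collected in an array as suggested above (the variable names are mine, purely for illustration):

#include <iostream>

int main()
{
    int value = 15;   // decimal value to convert (binary 1111 from the question)
    int digits[32];   // remainders, least significant first
    int count = 0;

    // Repeatedly take the remainder mod 8 and divide by 8,
    // just as you would on paper.
    do
    {
        digits[count++] = value % 8;
        value /= 8;
    } while (value != 0);

    // The remainders come out backwards, so print them in reverse.
    for (int i = count - 1; i >= 0; --i)
        std::cout << digits[i];
    std::cout << '\n';   // prints 17, the octal form of 15

    return 0;
}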

(Problems like this are the reason I want to double major in applied math.)

Since you want to convert the decimal values 0-256, it would be easiest to write functions, say int binary(), char hex(), and int octal(). Do binary and octal first, as those are the easiest since they can be represented with plain integers.


#include <cmath>
#include <iostream>
#include <cstdlib>

using namespace std;

// Despite its name, this function decomposes the decimal value in the
// string into a sum of powers of 8 and returns it as text,
// e.g. "(1*8^6+4*8^5+...+7*8^0)".
char* toBinary(char* doubleDigit)
{
    int digit = atoi(doubleDigit);
    char* binary = new char[64]();  // room for ten 6-character terms plus the parentheses

    int x = 0;
    binary[x] = '(';

    for (int i = 9; digit != 0; i--)
    {
        if (digit - pow(8, i) >= 0)
        {
            // Find the largest k such that k * 8^i still fits in the remaining value.
            int k = 1;
            while (k * pow(8, i) <= digit)
            {
                k++;
            }
            k = k - 1;

            digit = digit - k * pow(8, i);

            binary[x+1] = k + '0';
            binary[x+2] = '*';
            binary[x+3] = '8';
            binary[x+4] = '^';
            binary[x+5] = i + '0';
            binary[x+6] = '+';

            x += 6;
        }
    }
    binary[x] = ')';     // overwrite the trailing '+'
    binary[x+1] = '\0';
    return binary;
}

int main()
{
    char value[] = "409879";   // null-terminated so atoi can parse it

    cout << toBinary(value);

    return 0;
}