notyabussines Posted November 30, 2012 Convert -128 (decimal) to binary, representing the number in two's complement with 8 bits. When I convert it to binary I get 10000000, so it is already 8 bits; if I then add the sign bit it becomes 9 bits: 110000000. I also have to convert 115.375 (octal) to binary, representing the number with 16 bits, but when I convert it I get 001001101.011111101, which is more than 16 bits. Where am I wrong?
lesolee Posted November 30, 2012 What has gone wrong in your calculation is an overflow. You can't represent +128 in a signed 8-bit value. The number you wrote (10000000) is in fact -128, not +128, so it is already the complete answer: in two's complement there is no separate sign bit to prepend, because the most significant bit already carries the sign. All the rest should make sense once you see this.
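A quick sketch to check both conversions (this is my own illustration, not from the thread; the helper names `twos_complement` and `octal_to_binary` are made up for the example). It also shows the fix for the second question: after expanding each octal digit to 3 bits you get 18 bits, but stripping the two leading zero bits of the integer part leaves exactly 16.

```python
def twos_complement(value, bits):
    """Return the two's-complement bit pattern of `value` in `bits` bits."""
    if not -(1 << (bits - 1)) <= value < (1 << (bits - 1)):
        raise OverflowError(f"{value} does not fit in {bits} signed bits")
    # Masking with 2**bits - 1 yields the wrapped (two's-complement) pattern.
    return format(value & ((1 << bits) - 1), f"0{bits}b")

def octal_to_binary(octal_str):
    """Expand each octal digit to 3 bits, then drop leading zero bits."""
    int_part, _, frac_part = octal_str.partition(".")
    bits_int = "".join(format(int(d, 8), "03b") for d in int_part)
    bits_frac = "".join(format(int(d, 8), "03b") for d in frac_part)
    bits_int = bits_int.lstrip("0") or "0"   # leading zeros carry no information
    return bits_int + ("." + bits_frac if bits_frac else "")

print(twos_complement(-128, 8))    # 10000000 -- the MSB is the sign, no 9th bit needed
print(octal_to_binary("115.375"))  # 1001101.011111101 -- 7 + 9 = 16 bits
```

Note that `twos_complement(128, 8)` raises an overflow error, which is exactly the point of the answer above: +128 has no 8-bit signed representation.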