
Against the advice of online resources, I'm using Python to do some simple bit shifting in one of my scripts. The bit shifting is fine; it's representing the result that's difficult. Let me explain.

I have a 64-bit binary representation:

1000010101010111010101010101010101010101010101010111010101010101

which, when interpreted as a signed (two's-complement) integer, should be negative. However, Python promotes it to type long (arbitrary precision), so the first bit is never treated as a sign bit, yielding 9608242155010487637.

How can I get Python to recognize the sign bit in 64 bits?

To clarify, I'm using Python 2.7.


3 Answers


Try the bitstring module:

>>> from bitstring import BitArray
>>> s = '1000010101010111010101010101010101010101010101010111010101010101'
>>> BitArray(bin=s).int
-8838501918699063979
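Note that bitstring is a third-party package, not part of the standard library, so you may need to install it first (typically with pip install bitstring).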


You can use struct, like this:

>>> import struct
>>> s = '1000010101010111010101010101010101010101010101010111010101010101'
>>> struct.unpack('>q', struct.pack('>Q', int(s, 2)))
(-8838501918699063979,)

The result is a tuple; take its first element to get the integer:

>>> struct.unpack('>q', struct.pack('>Q', int(s, 2)))[0]
-8838501918699063979
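
If you need the conversion in more than one place, the pack/unpack pair can be wrapped in a small helper (a sketch; the name to_signed64 is my own, not from the original answer):

>>> import struct
>>> def to_signed64(bits):
...     # Pack as an unsigned 64-bit big-endian integer, then reinterpret
...     # the same 8 bytes as a signed 64-bit integer.
...     return struct.unpack('>q', struct.pack('>Q', int(bits, 2)))[0]
...
>>> to_signed64('1000010101010111010101010101010101010101010101010111010101010101')
-8838501918699063979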

Comments

I'm getting a struct error: struct.error: unpack requires a string argument of length 4
I changed the l format argument to q in order to represent the 64-bit signed long, which fixed this error and gave me the correct result.
@DonutGaz I used a platform-dependent format string, which I should not have done. Edited to a platform-independent version.
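
For context on that fix, struct.calcsize shows the standard sizes of the two format codes, which explains the length-4 error:

>>> import struct
>>> struct.calcsize('>l'), struct.calcsize('>q')
(4, 8)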

It's been a long time, but here is a plain-Python solution (standard library only, no imports) to round it all off.
First, negative integers are just positive integers with an offset; that is what two's complement means. Take a char, i.e., 8 bits, written big-endian: most significant bit (which acts as the sign bit!) first, least significant bit last. It runs from:

00000000 == 0
00000001 == 1
..
01111110 == 126
01111111 == 127

and then it wraps around, all the way to

10000000 == -128
10000001 == -127
..
11111110 == -2
11111111 == -1
00000000 == 0
..

So, if you just subtract 256 == 2**8 from any number that has the most significant bit set, you are done. This is a one-liner.

Code example:

binary = '10000011'
int(binary, 2) - int(binary[0]) * 256   # 131 - 256 == -125

Of course, this can be generalized to n bits:

binary = '101...01'  # some n-bit string; with n == 64, if you like
n = len(binary)
int(binary, 2) - int(binary[0]) * 2**n
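
Applied to the 64-bit string from the question, this gives the expected result (print is used here so Python 2.7 omits the trailing L on the long):

>>> binary = '1000010101010111010101010101010101010101010101010111010101010101'
>>> n = len(binary)  # 64
>>> print int(binary, 2) - int(binary[0]) * 2**n
-8838501918699063979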

