This is the UTF-16-BE encoding of the string engineer.¹
UTF-16 uses two bytes for every BMP character (which includes all ASCII characters). So, for example, the character e, which is Unicode (and ASCII) character number 101 (hex 0x65), shows up as the 16-bit code unit 0x0065. In big-endian order (that's what the -BE part means), the first byte is 0 and the second byte is 101. So, if your text is pure ASCII, your UTF-16 ends up looking like ASCII with an extra \0 byte before each character.
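You can see that pattern directly in the interpreter (a quick Python 2 demonstration, using engineer from your data):

>>> u'engineer'.encode('utf-16-be')
'\x00e\x00n\x00g\x00i\x00n\x00e\x00e\x00r'

Every other byte is \0, which is exactly what you're seeing in your file.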
The cleanest way to solve this is to open the file as a Unicode file. As a general rule: decode everything to unicode as part of reading it, encode back to bytes only at the very end as part of writing it, and do all the work in the middle on unicode; everything is simpler that way.
In Python 2.7, there are two ways to do this: codecs.open or io.open. Using codecs makes your code a bit easier to port back to Python 2.5; using io makes it a bit easier to port forward to 3.x; otherwise, in simple cases like this, it makes no difference.
Notice that your line strings will now be unicode instead of str, so you'll want your search strings to be unicode values as well.
import io

d = {u'engineer': 0, u'conductor': 0, u'transit cop': 0}
# io.open decodes each line to unicode for us
with io.open(path, encoding='utf-16-be') as f:
    for line in f:
        try:
            d[line.strip()] += 1
        except KeyError:
            pass
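For reference, the codecs version is nearly identical; only the open call changes (a sketch, assuming the same path variable and d dict as above):

import codecs

# same loop as above, but using codecs.open instead of io.open
with codecs.open(path, encoding='utf-16-be') as f:
    for line in f:
        try:
            d[line.strip()] += 1
        except KeyError:
            pass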
Another alternative is to read the file as binary UTF-16-BE, and make your search strings UTF-16-BE-encoded str values:
d = {u'engineer': 0, u'conductor': 0, u'transit cop': 0}
# re-encode the keys so they match the raw bytes read from the file
d = {key.encode('utf-16-be'): count for key, count in d.items()}
with open(path, 'rb') as f:  # binary mode, so the bytes come through untouched
    for line in f:
        try:
            d[line.rstrip('\n\0')] += 1
        except KeyError:
            pass
Notice that I had to be careful with the stripping: I have to remove the whole two-byte \0\n at the end, not just the \n byte, and I must not strip off the \0 byte at the start (which is why it's rstrip rather than strip). This is just one of many ways that dealing with encoded bytes is more of a pain than dealing with Unicode. And if your final output involves, say, printing these strings to your console or writing them out to a UTF-8 file, it will get even more painful. If the final output is going to be another UTF-16-BE file, and if saving a bit of CPU really matters, it might be worth doing it this way. But otherwise, I'd go with the first approach.
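To make the stripping issue concrete, here's what a single line looks like as raw bytes, and what the different strip calls do to it (a quick interpreter check, again using engineer as the example):

>>> line = u'engineer\n'.encode('utf-16-be')
>>> line
'\x00e\x00n\x00g\x00i\x00n\x00e\x00e\x00r\x00\n'
>>> line.rstrip('\n')    # leaves the \0 half of the newline behind
'\x00e\x00n\x00g\x00i\x00n\x00e\x00e\x00r\x00'
>>> line.rstrip('\n\0')  # removes the whole \0\n, keeps the final r
'\x00e\x00n\x00g\x00i\x00n\x00e\x00e\x00r'
>>> line.strip('\n\0')   # too aggressive: also eats the leading \0
'e\x00n\x00g\x00i\x00n\x00e\x00e\x00r'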
¹ Strictly speaking, you've got an extra \0 at the end. But presumably, in your real data, that's the first byte of the next character, maybe a \n, which in UTF-16-BE of course looks like \0\n.