I recently rediscovered this strange behaviour in Python’s Unicode handling.—Evan

Ok, but I don’t really follow you here: you are suggesting that we relax the current UTF-16 behavior and start defaulting to UTF-16-BE if no BOM is present. That’s most likely going to cause more problems than it solves: namely, complete garbage if the data turns out to be UTF-16-LE encoded and, what’s worse, garbage that enters the application undetected.

If you do have UTF-16 without a BOM, it’s much better to let a short function analyze the text by reading the first few bytes of the file and then make an educated guess based on the findings. You can then process the file using one of the explicit codecs, UTF-16-LE or UTF-16-BE.—M.-A.
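That educated guess could be sketched like this (an illustration, not code from the thread; it assumes the text is mostly ASCII-range, so the zero byte of each UTF-16 code unit reveals the byte order):

```python
def guess_utf16_codec(data: bytes) -> str:
    """Guess which UTF-16 codec fits possibly BOM-less data.

    Heuristic: for mostly-ASCII text, UTF-16-BE puts the zero byte of
    each code unit at even offsets, UTF-16-LE at odd offsets.
    """
    sample = data[:1024]
    if sample[:2] == b"\xfe\xff":
        return "utf-16-be"
    if sample[:2] == b"\xff\xfe":
        return "utf-16-le"
    zeros_even = sample[0::2].count(0)
    zeros_odd = sample[1::2].count(0)
    return "utf-16-be" if zeros_even >= zeros_odd else "utf-16-le"
```

If a BOM is present it simply reads it; otherwise the zero-byte count decides, which works well for Latin-heavy text but is only a guess for scripts with no ASCII-range characters.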

The crux of my argument is that the spec declares that UTF-16 without a BOM is BE. If the file is encoded in UTF-16LE and it doesn’t have a BOM, it doesn’t deserve to be processed correctly. That said, treating LE data as UTF-16BE will produce a lot of invalid code points, so it should at least be obvious that something has gone wrong.—Nicholas

This is roughly what we do now: we catch UnicodeError, add a BOM to the file, and read it again. We know our files are UTF-16BE if they don’t have a BOM, as the files are written by code which observes the spec. We can’t use UTF-16BE all the time, because sometimes they’re UTF-16LE, and in those cases the BOM is set.

It would be nice if you could optionally specify that the codec should assume UTF-16BE if no BOM is present, and not raise UnicodeError in that case. That would preserve the current behaviour as well as allow users to ask for behaviour which conforms to the standard.

I’m not saying that you can’t work around the issue now; what I’m saying is that you shouldn’t have to. There is a reasonable expectation that the UTF-16 codec conforms to the spec, and it is the users who want it to do something else who should be forced to come up with a workaround.—Nicholas

It should be feasible to implement your own codec for that based on Lib/encodings/utf_16.py. Simply replace the line in StreamReader.decode():

    raise UnicodeError,"UTF-16 stream does not start with BOM"

with:

    self.decode = codecs.utf_16_be_decode

and you should be done.

Bye,—Walter

Oops, this only works if you have a big-endian system. Otherwise you have to redecode the input with:

    codecs.utf_16_ex_decode(input, errors, 1, False)

Bye,—Walter

Alternatively, the UTF-16BE codec could support the BOM, and do UTF-16LE if the "other" BOM is found.

This would also support your use case, and in a better way. The Unicode assertion that UTF-16 is BE by default is void these days: there is always a higher-layer protocol, and it more often than not specifies (perhaps not in English words, but only in the source code of the generator) that the default should be LE.

Regards,—Martin
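Martin's suggestion, a BE codec that also honours either BOM, can be prototyped today with a custom codec registration. The codec name "utf-16-be-bom" below is made up for this sketch; nothing like it exists in the stdlib:

```python
import codecs

def _utf16_be_bom_decode(data, errors="strict"):
    # Big-endian by default, but honour a BOM, including the "other"
    # (little-endian) one, as suggested above.
    data = bytes(data)
    if data[:2] == codecs.BOM_UTF16_LE:
        return data[2:].decode("utf-16-le", errors), len(data)
    if data[:2] == codecs.BOM_UTF16_BE:
        return data[2:].decode("utf-16-be", errors), len(data)
    return data.decode("utf-16-be", errors), len(data)

def _search(name):
    # codecs.lookup() normalizes hyphens in codec names to underscores.
    if name != "utf_16_be_bom":
        return None
    return codecs.CodecInfo(
        name="utf-16-be-bom",
        encode=codecs.getencoder("utf-16-be"),
        decode=_utf16_be_bom_decode,
    )

codecs.register(_search)
```

After registration, data.decode("utf-16-be-bom") works anywhere in the process, and BOM-less input decodes as big-endian instead of raising.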