I recently rediscovered this strange behaviour in Python’s Unicode handling.—Evan

The problem is the phrase "in the absence of a higher-level protocol": the codec doesn't know anything about a protocol; it's the application using the codec that knows which protocol gets used. It's a lot safer to require the BOM for UTF-16 streams and raise an exception when it's missing, letting the application decide whether to use UTF-16-BE or the far more common UTF-16-LE.
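A minimal sketch of that application-level decision (the helper name and the fallback policy are mine, not part of any codec API): the application inspects the first two bytes itself, picks an explicit-endian codec when a BOM is present, and raises when it is absent, which is exactly the strictness being defended here.

```python
def detect_utf16_codec(data: bytes) -> str:
    """Hypothetical helper: choose a UTF-16 codec from the BOM.

    The codec itself cannot know the higher-level protocol, so the
    application makes the call; with no BOM we raise rather than guess.
    """
    if data[:2] == b"\xff\xfe":
        return "utf-16-le"
    if data[:2] == b"\xfe\xff":
        return "utf-16-be"
    raise UnicodeError("UTF-16 stream does not start with BOM")

# The BOM bytes themselves are consumed by the generic 'utf-16' codec,
# but with an explicit-endian codec the caller strips them:
raw = b"\xff\xfeA\x00"
codec = detect_utf16_codec(raw)
text = raw[2:].decode(codec)  # skip the 2-byte BOM
```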

Unlike with the UTF-8 codec, the BOM for UTF-16 is a configuration parameter (it selects the byte order), not merely a signature.
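The distinction can be seen directly in Python: the generic `utf-16` codec writes a BOM and uses it to configure the byte order on decoding, while the explicit `utf-16-le`/`utf-16-be` codecs emit no BOM and treat a leading U+FEFF as an ordinary character.

```python
text = "A"

# Explicit-endian codecs fix the byte order and write no BOM.
assert text.encode("utf-16-le") == b"A\x00"
assert text.encode("utf-16-be") == b"\x00A"

# The generic codec prepends a BOM matching the platform byte order.
assert text.encode("utf-16")[:2] in (b"\xff\xfe", b"\xfe\xff")

# On decoding, the generic codec consumes the BOM as configuration...
assert b"\xff\xfeA\x00".decode("utf-16") == "A"

# ...while an explicit-endian codec keeps it as a character
# (U+FEFF, ZERO WIDTH NO-BREAK SPACE).
assert b"\xff\xfeA\x00".decode("utf-16-le") == "\ufeffA"
```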

In terms of history, I don’t recall whether your quote was already in the standard at the time I wrote the PEP. You are the first to have reported a problem with the current implementation (which has been around since 2000), so I believe that application writers are more comfortable with the way the UTF-16 codec is currently implemented. Explicit is better than implicit :-)—M.-A.