I've found at least one recent Preset where the current method of reading a UTF-16 String in PREPEND_LEN mode fails. I have a fix and could open a PR, but I'm not sure I fully understand the situation, so I'm starting with an issue.
In this case, the UTF-16 String only decodes correctly as Big-Endian, yet for some reason the default is Little-Endian. I don't know how the defaults are chosen, or how Bitwig decides the byte order when saving.
```python
# char_len is the bytes-per-character for the encoding (2 for UTF-16);
# char_enc is the default (Little-Endian) encoding.
str_bytes = self.read(str_len * char_len)
try:
    return str_bytes.decode(char_enc)
except UnicodeDecodeError:
    # Fall back to Big-Endian for Presets that only decode that way.
    return str_bytes.decode('utf-16-be')
```
It's not the most robust fail-safe, but it works for both the existing Presets and the problem ones.
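To illustrate why the fallback fires at all, here's a minimal, self-contained sketch (the `decode_utf16` helper name is mine, not from the codebase). Some Big-Endian byte sequences are strictly invalid when read as Little-Endian, because byte-swapping the code units produces a lone surrogate, which is what raises the `UnicodeDecodeError` that triggers the fallback:

```python
def decode_utf16(str_bytes: bytes) -> str:
    """Hypothetical helper mirroring the proposed fix: try the
    default Little-Endian decode, fall back to Big-Endian."""
    try:
        return str_bytes.decode('utf-16-le')
    except UnicodeDecodeError:
        return str_bytes.decode('utf-16-be')

# 'Ø' (U+00D8) encoded Big-Endian is b'\x00\xd8'; read as Little-Endian
# that is the code unit 0xD800, a lone surrogate, so the LE decode fails
# and the BE fallback recovers the correct character.
print(decode_utf16('Ø'.encode('utf-16-be')))  # Ø
print(decode_utf16('hi'.encode('utf-16-le')))  # hi
```

The caveat is that plenty of Big-Endian data decodes as Little-Endian without an error, just as mojibake, so the `except` path only catches the strictly invalid cases. A BOM check (or a heuristic like looking at which byte positions hold the NULs for mostly ASCII text) would be a more reliable way to pick the byte order, if the format allows it.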