Python: Converting from ISO-8859-1/latin1 to UTF-8.

Time: 2022-11-13 14:33:45

I have this string that has been decoded from Quoted-printable to ISO-8859-1 with the email module. This gives me strings like "\xC4pple" which would correspond to "Äpple" (Apple in Swedish). However, I can't convert those strings to UTF-8.

>>> apple = "\xC4pple"
>>> apple
'\xc4pple'
>>> apple.encode("UTF-8")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc4 in position 0: ordinal not in range(128)

What should I do?

5 Answers

#1


90  

Try decoding it first, then encoding:

apple.decode('iso-8859-1').encode('utf8')
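
The snippet above is Python 2. As a minimal sketch, the same decode-then-encode round trip in Python 3 terms, assuming the data arrives as bytes (for example from the email module's get_payload(decode=True)); the variable names here are only illustrative:

raw = b"\xC4pple"                  # latin-1 bytes for "Äpple"
text = raw.decode("iso-8859-1")    # bytes -> str ('Äpple')
utf8_bytes = text.encode("utf-8")  # str -> UTF-8 bytes (b'\xc3\x84pple')
print(text, utf8_bytes)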

#2


125  

This is a common problem, so here's a relatively thorough illustration.

For non-unicode strings (i.e. those without the u prefix, such as plain '\xc4pple' as opposed to u'\xc4pple'), one must decode from the native encoding (iso8859-1/latin1, unless modified with the enigmatic sys.setdefaultencoding function) to unicode, then encode to a character set that can display the characters you wish; in this case I'd recommend UTF-8.

First, here is a handy utility function that'll help illuminate the patterns of Python 2.7 string and unicode:

>>> def tell_me_about(s): return (type(s), s)

A plain string

>>> v = "\xC4pple" # iso-8859-1 aka latin1 encoded string

>>> tell_me_about(v)
(<type 'str'>, '\xc4pple')

>>> v
'\xc4pple'        # representation in memory

>>> print v
?pple             # printed on a UTF-8 terminal: the lone latin-1 byte
                  # '\xc4' is not a valid UTF-8 sequence by itself,
                  # so the terminal shows it as "?".

Decoding an iso8859-1 string - convert plain string to unicode

>>> uv = v.decode("iso-8859-1")
>>> uv
u'\xc4pple'       # decoding iso-8859-1 becomes unicode, in memory

>>> tell_me_about(uv)
(<type 'unicode'>, u'\xc4pple')

>>> print v.decode("iso-8859-1")
Äpple             # convert unicode to the default character set
                  # (utf-8, based on sys.stdout.encoding)

>>> v.decode('iso-8859-1') == u'\xc4pple'
True              # one could have just used a unicode representation 
                  # from the start

A little more illustration — with “Ä”

>>> u"Ä" == u"\xc4"
True              # the native unicode char and escaped versions are the same

>>> "Ä" == u"\xc4"  
False             # the native unicode char is '\xc3\x84' in latin1

>>> "Ä".decode('utf8') == u"\xc4"
True              # one can decode the string to get unicode

>>> "Ä" == "\xc4"
False             # the native character and the escaped string are
                  # of course not equal ('\xc3\x84' != '\xc4').

Encoding to UTF

>>> u8 = v.decode("iso-8859-1").encode("utf-8")
>>> u8
'\xc3\x84pple'    # convert iso-8859-1 to unicode to utf-8

>>> tell_me_about(u8)
(<type 'str'>, '\xc3\x84pple')

>>> u16 = v.decode('iso-8859-1').encode('utf-16')
>>> tell_me_about(u16)
(<type 'str'>, '\xff\xfe\xc4\x00p\x00p\x00l\x00e\x00')

>>> tell_me_about(u8.decode('utf8'))
(<type 'unicode'>, u'\xc4pple')

>>> tell_me_about(u16.decode('utf16'))
(<type 'unicode'>, u'\xc4pple')

Relationship between unicode and UTF and latin1

>>> print u8
Äpple             # printing utf-8 - because of the encoding we now know
                  # how to print the characters

>>> print u8.decode('utf-8') # printing unicode
Äpple

>>> print u16     # printing 'bytes' of u16
���pple

>>> print u16.decode('utf16')
Äpple             # printing unicode

>>> v == u8
False             # v is an iso8859-1 string; u8 is a utf-8 string

>>> v.decode('iso8859-1') == u8
False             # v.decode(...) returns unicode

>>> u8.decode('utf-8') == v.decode('latin1') == u16.decode('utf-16')
True              # all decode to the same unicode memory representation
                  # (latin1 is iso-8859-1)

Unicode Exceptions

>>> u8.encode('iso8859-1')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 0:
  ordinal not in range(128)

>>> u16.encode('iso8859-1')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xff in position 0:
  ordinal not in range(128)

>>> v.encode('iso8859-1')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc4 in position 0:
  ordinal not in range(128)

One would get around these by converting from the specific encoding (latin-1, utf8, utf16) to unicode, e.g. u8.decode('utf8').encode('latin1').
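
Even after decoding correctly, the final encode can still fail if the target charset cannot hold every character; a small sketch (Python 3 syntax, illustrative values) of the usual errors= escape hatches:

text = "Äpple"                                       # fits in latin-1
print(text.encode("latin-1"))                        # b'\xc4pple'
print("€pple".encode("latin-1", "replace"))          # b'?pple' - no euro sign in latin-1
print("€pple".encode("ascii", "xmlcharrefreplace"))  # b'&#8364;pple'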

So perhaps one could draw the following principles and generalizations:

  • a type str is a set of bytes, which may have one of a number of encodings such as Latin-1, UTF-8, and UTF-16
  • a type unicode is a sequence of characters (code points) that can be encoded to any number of encodings, most commonly UTF-8 and latin-1 (iso8859-1)
  • the print command has its own logic for encoding, set to sys.stdout.encoding and defaulting to UTF-8
  • One must decode a str to unicode before converting to another encoding.

Of course, all of this changes in Python 3.x.
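
As a rough sketch of what that looks like: in Python 3, str is always unicode and the old str role is played by bytes, so the same round trip reads roughly as follows (assuming a UTF-8 terminal):

v = b"\xC4pple"               # latin-1 encoded bytes
uv = v.decode("iso-8859-1")   # bytes -> str, 'Äpple'
u8 = uv.encode("utf-8")       # str -> bytes, b'\xc3\x84pple'
u16 = uv.encode("utf-16")     # str -> bytes, with a BOM prefix
assert u8.decode("utf-8") == u16.decode("utf-16") == uv
print(uv)                     # print() encodes to sys.stdout.encoding

Mixing bytes and str in an operation such as concatenation now raises a TypeError instead of triggering an implicit ascii decode.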

Hope that is illuminating.

Further reading

And the very illustrative rants by Armin Ronacher:

#3


9  

Decode to Unicode, encode the results to UTF8.

apple.decode('latin1').encode('utf8')

#4


7  

For Python 3:

bytes(apple,'iso-8859-1').decode('utf-8')

I used this for text that had been incorrectly decoded as iso-8859-1 instead of utf-8 (showing words like VeÅ\x99ejné). This code produces the correct version, Veřejné.
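
A self-contained sketch of that repair (the sample word comes from this answer; the intermediate variable names are only illustrative):

mangled = "Veřejné".encode("utf-8").decode("iso-8859-1")  # 'VeÅ\x99ejné'
fixed = bytes(mangled, "iso-8859-1").decode("utf-8")      # 'Veřejné'
print(mangled, "->", fixed)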

#5


0  

concept = concept.encode('ascii', 'ignore')
concept = MySQLdb.escape_string(concept.decode('latin1').encode('utf8').rstrip())

I do this; I am not sure if it is a good approach, but it works every time!
