UnicodeDecodeError when reading a CSV file with pandas in Python

Time: 2023-01-04 23:41:06

I'm running a program which is processing 30,000 similar files. A random number of them are stopping and producing this error...

   File "C:\Importer\src\dfman\importer.py", line 26, in import_chr
     data = pd.read_csv(filepath, names=fields)
   File "C:\Python33\lib\site-packages\pandas\io\parsers.py", line 400, in parser_f
     return _read(filepath_or_buffer, kwds)
   File "C:\Python33\lib\site-packages\pandas\io\parsers.py", line 205, in _read
     return parser.read()
   File "C:\Python33\lib\site-packages\pandas\io\parsers.py", line 608, in read
     ret = self._engine.read(nrows)
   File "C:\Python33\lib\site-packages\pandas\io\parsers.py", line 1028, in read
     data = self._reader.read(nrows)
   File "parser.pyx", line 706, in pandas.parser.TextReader.read (pandas\parser.c:6745)
   File "parser.pyx", line 728, in pandas.parser.TextReader._read_low_memory (pandas\parser.c:6964)
   File "parser.pyx", line 804, in pandas.parser.TextReader._read_rows (pandas\parser.c:7780)
   File "parser.pyx", line 890, in pandas.parser.TextReader._convert_column_data (pandas\parser.c:8793)
   File "parser.pyx", line 950, in pandas.parser.TextReader._convert_tokens (pandas\parser.c:9484)
   File "parser.pyx", line 1026, in pandas.parser.TextReader._convert_with_dtype (pandas\parser.c:10642)
   File "parser.pyx", line 1046, in pandas.parser.TextReader._string_convert (pandas\parser.c:10853)
   File "parser.pyx", line 1278, in pandas.parser._string_box_utf8 (pandas\parser.c:15657)
 UnicodeDecodeError: 'utf-8' codec can't decode byte 0xda in position 6: invalid continuation byte

The source/creation of these files all comes from the same place. What's the best way to correct this to proceed with the import?

3 Solutions

#1 (326 votes)

read_csv takes an encoding option to deal with files in different formats. I mostly use read_csv('file', encoding = "ISO-8859-1"), or alternatively encoding = "utf-8" for reading, and generally utf-8 for to_csv.

You can also use the alias 'latin1' instead of 'ISO-8859-1'.

See relevant Pandas documentation, python docs examples on csv files, and plenty of related questions here on SO.

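For example (a minimal sketch of this approach; the file names are hypothetical):

import pandas as pd

# Read a file that is not UTF-8 (here assumed to be ISO-8859-1 / latin1),
# then write it back out as UTF-8.
df = pd.read_csv('file.csv', encoding='ISO-8859-1')
df.to_csv('file_utf8.csv', encoding='utf-8', index=False)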

#2 (12 votes)

Simplest of all Solutions:

  • Open the CSV file in the Sublime Text editor.
  • Save the file in UTF-8 format.

In Sublime Text, click File -> Save with Encoding -> UTF-8.

Then, you can read your file as usual:

import pandas as pd
data = pd.read_csv('file_name.csv', encoding='utf-8')

EDIT 1:

If there are many files, then you can skip the Sublime Text step.

Just read the file using

data = pd.read_csv('file_name.csv', encoding='utf-8')

Other encodings to try are:

encoding = "cp1252"
encoding = "ISO-8859-1"

#3 (1 vote)

Struggled with this for a while and thought I'd post on this question as it's the first search result. Adding encoding='iso-8859-1' to pandas read_csv didn't work, nor did any other encoding; it kept giving a UnicodeDecodeError.

If you're passing a file handle to pd.read_csv(), you need to put the encoding= argument on the open() call, not on read_csv. Obvious in hindsight, but a subtle error to track down.

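A minimal sketch of that point (the file name is hypothetical):

import pandas as pd

# The encoding goes on open(); read_csv then receives already-decoded text.
with open('file.csv', encoding='ISO-8859-1') as handle:
    df = pd.read_csv(handle)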
