Python json.loads raises "ValueError: Extra data"

Date: 2022-03-13 07:38:27

I am getting some data from a JSON file "new.json", and I want to filter some data and store it into a new JSON file. Here is my code:


import json

with open('new.json') as infile:
    data = json.load(infile)

for item in data:
    iden = item.get("id")   # .get is a method: item.get["id"] would raise a TypeError
    a = item.get("a")
    b = item.get("b")
    c = item.get("c")
    if c == 'XYZ' or "XYZ" in item["text"]:
        filename = 'abc.json'
    try:
        outfile = open(filename, 'ab')
    except IOError:
        outfile = open(filename, 'wb')
    obj_json = {}
    obj_json["ID"] = iden
    obj_json["VAL_A"] = a
    obj_json["VAL_B"] = b

and I am getting an error, the traceback is:


  File "rtfav.py", line 3, in <module>
    data = json.load(infile)
  File "/usr/lib64/python2.7/json/__init__.py", line 278, in load
    **kw)
  File "/usr/lib64/python2.7/json/__init__.py", line 326, in loads
    return _default_decoder.decode(s)
  File "/usr/lib64/python2.7/json/decoder.py", line 369, in decode
    raise ValueError(errmsg("Extra data", s, end, len(s)))
ValueError: Extra data: line 88 column 2 - line 50607 column 2 (char 3077 - 1868399)

Can someone help me?


Here is a sample of the data in new.json; there are about 1500 more such dictionaries in the file:


{
    "contributors": null,
    "truncated": false,
    "text": "@HomeShop18 #DreamJob to professional rafter",
    "in_reply_to_status_id": null,
    "id": 421584490452893696,
    "favorite_count": 0,
    "source": "<a href=\"https://mobile.twitter.com\" rel=\"nofollow\">Mobile Web (M2)</a>",
    "retweeted": false,
    "coordinates": null,
    "entities": {
        "symbols": [],
        "user_mentions": [
            {
                "id": 183093247,
                "indices": [0, 11],
                "id_str": "183093247",
                "screen_name": "HomeShop18",
                "name": "HomeShop18"
            }
        ],
        "hashtags": [
            {
                "indices": [12, 21],
                "text": "DreamJob"
            }
        ],
        "urls": []
    },
    "in_reply_to_screen_name": "HomeShop18",
    "id_str": "421584490452893696",
    "retweet_count": 0,
    "in_reply_to_user_id": 183093247,
    "favorited": false,
    "user": {
        "follow_request_sent": null,
        "profile_use_background_image": true,
        "default_profile_image": false,
        "id": 2254546045,
        "verified": false,
        "profile_image_url_https": "https://pbs.twimg.com/profile_images/413952088880594944/rcdr59OY_normal.jpeg",
        "profile_sidebar_fill_color": "171106",
        "profile_text_color": "8A7302",
        "followers_count": 87,
        "profile_sidebar_border_color": "BCB302",
        "id_str": "2254546045",
        "profile_background_color": "0F0A02",
        "listed_count": 1,
        "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme1/bg.png",
        "utc_offset": null,
        "statuses_count": 9793,
        "description": "Rafter. Rafting is what I do. Me aur mera Tablet. Technocrat of Future",
        "friends_count": 231,
        "location": "",
        "profile_link_color": "473623",
        "profile_image_url": "http://pbs.twimg.com/profile_images/413952088880594944/rcdr59OY_normal.jpeg",
        "following": null,
        "geo_enabled": false,
        "profile_banner_url": "https://pbs.twimg.com/profile_banners/2254546045/1388065343",
        "profile_background_image_url": "http://abs.twimg.com/images/themes/theme1/bg.png",
        "name": "Jayy",
        "lang": "en",
        "profile_background_tile": false,
        "favourites_count": 41,
        "screen_name": "JzayyPsingh",
        "notifications": null,
        "url": null,
        "created_at": "Fri Dec 20 05:46:00 +0000 2013",
        "contributors_enabled": false,
        "time_zone": null,
        "protected": false,
        "default_profile": false,
        "is_translator": false
    },
    "geo": null,
    "in_reply_to_user_id_str": "183093247",
    "lang": "en",
    "created_at": "Fri Jan 10 10:09:09 +0000 2014",
    "filter_level": "medium",
    "in_reply_to_status_id_str": null,
    "place": null
}

3 Answers

#1


88  

As you can see in the following example, json.loads (and json.load) does not decode multiple JSON objects:


>>> json.loads('{}')
{}
>>> json.loads('{}{}') # == json.loads(json.dumps({}) + json.dumps({}))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python27\lib\json\__init__.py", line 338, in loads
    return _default_decoder.decode(s)
  File "C:\Python27\lib\json\decoder.py", line 368, in decode
    raise ValueError(errmsg("Extra data", s, end, len(s)))
ValueError: Extra data: line 1 column 3 - line 1 column 5 (char 2 - 4)

If you want to dump multiple dictionaries, wrap them in a list and dump the list (instead of dumping each dictionary separately):


>>> dict1 = {}
>>> dict2 = {}
>>> json.dumps([dict1, dict2])
'[{}, {}]'
>>> json.loads(json.dumps([dict1, dict2]))
[{}, {}]
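If you cannot change how the file is produced and it really does contain several JSON objects back to back, one workaround (a sketch not taken from this answer; `parse_concatenated` is a made-up helper name) is to walk the string with `json.JSONDecoder().raw_decode`, which returns each value together with the index where parsing stopped:

```python
import json

def parse_concatenated(text):
    """Yield each top-level JSON value from a string like '{}{}[1, 2]'."""
    decoder = json.JSONDecoder()
    idx = 0
    while idx < len(text):
        # raw_decode does not tolerate leading whitespace, so skip it first.
        while idx < len(text) and text[idx].isspace():
            idx += 1
        if idx >= len(text):
            break
        obj, end = decoder.raw_decode(text, idx)
        yield obj
        idx = end

print(list(parse_concatenated('{}{"a": 1}\n[1, 2]')))  # [{}, {'a': 1}, [1, 2]]
```

This avoids having to rewrite the file as a single list first, at the cost of reading the whole text into memory.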

#2


41  

May I just suggest that you don't have to package all of the tweets into a list and then call json.dumps? You can write one JSON object per line as you go, and then load them back in with:


tweets = []
for line in open('test.txt', 'r'):
    tweets.append(json.loads(line))

That way you don't have to store intermediate Python objects. As long as you write one full tweet per write call, this should work.
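For completeness, the write side of this approach might look like the sketch below (the tweet dicts and file name here are made-up stand-ins): dump one complete object per line, the "JSON Lines" convention, then read them back exactly as the answer shows.

```python
import json

tweets = [{"id": 1, "text": "hello"}, {"id": 2, "text": "world"}]

# Write one complete JSON object per line ("JSON Lines").
with open('test.txt', 'w') as outfile:
    for tweet in tweets:
        outfile.write(json.dumps(tweet) + '\n')

# Read them back, one json.loads call per line.
loaded = []
for line in open('test.txt', 'r'):
    loaded.append(json.loads(line))

assert loaded == tweets
```

Because each line is decoded independently, the file never has to be valid JSON as a whole, which is what makes appending cheap.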


#3


8  

This may also happen if your JSON file is not just one JSON record. A JSON record looks like this:


[{"some data": "value", "next key": "another value"}]

It opens and closes with square brackets [ ]; within the brackets are braces { }. There can be many pairs of braces, but the whole thing ends with a single closing bracket ]. If your JSON file contains more than one of those:


[{"some data": "value", "next key": "another value"}]
[{"2nd record data": "value", "2nd record key": "another value"}]

then loads() will fail.


I verified this with my own file that was failing.


import json
guestFile = open("1_guests.json", 'r')
guestData = guestFile.read()
guestFile.close()
gdfJson = json.loads(guestData)

This works because 1_guests.json has one record [...]. The original file I was using, all_guests.json, had 6 records separated by newlines. I deleted 5 records (which I had already checked were bookended by brackets) and saved the file under a new name. Then the loads statement worked.


The error was:


    raise ValueError(errmsg("Extra data", s, end, len(s)))
ValueError: Extra data: line 2 column 1 - line 10 column 1 (char 261900 - 6964758)

PS. I use the word record, but that's not the official name. Also, if your file has newline characters between records like mine did, you can loop through it and call loads() on one record at a time into a JSON variable.
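That per-record loop might look like the following sketch (the sample file name and its two records are made up to mimic the newline-separated all_guests.json described above):

```python
import json

# Build a small newline-separated file shaped like the all_guests.json
# described above (these records are made-up stand-ins).
with open('all_guests_sample.json', 'w') as f:
    f.write('[{"guest": "A"}]\n[{"guest": "B"}]\n')

# Call loads() on one record at a time instead of on the whole file.
records = []
with open('all_guests_sample.json') as f:
    for line in f:
        line = line.strip()
        if line:  # skip blank lines
            records.append(json.loads(line))

print(records)  # [[{'guest': 'A'}], [{'guest': 'B'}]]
```

This only works when each record sits on its own line; records that span multiple lines would need the raw_decode approach instead.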

