django-autocomplete-light: how to cache choices?

Posted: 2022-06-01 18:31:04

I have my own City model (not django-cities-light) with more than 2M records in a MySQL table. Each time I start typing in the autocomplete field, the CPU load of the mysqld process in htop jumps over 200%, so it looks like the script queries the table on every autocomplete request.

I'd like to put the table into memcache to avoid this, and here is what I have so far:

autocomplete_light_registry.py

import autocomplete_light
from django.core.cache import cache, InvalidCacheBackendError
from cities.models import City

def prepare_choices(model):
    key = "%s_autocomplete" % model.__name__.lower()
    try:
        qs = cache.get(key)
        if qs is not None: # return if not expired, avoiding a second cache lookup
            return qs
    except InvalidCacheBackendError:
        pass
    qs = model.objects.all()     # populate cache
    cache.set(key, qs, 60*60*24) # if expired or not set
    return qs

class CityAutocomplete(autocomplete_light.AutocompleteModelBase):
    search_fields = ['city_name']
    choices = prepare_choices(City)
autocomplete_light.register(City, CityAutocomplete)

But it still keeps querying MySQL.

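One likely culprit (my reading, not confirmed by the post): Django querysets are lazy, so the cached value is a query description rather than its rows, and every evaluation re-runs the SQL. Caching the *evaluated* result (e.g. `list(qs)`) is what avoids repeated queries. A minimal pure-Python stand-in, with a hypothetical `run_query` simulating the database hit:

```python
# Pure-Python stand-in: a lazy query object re-runs the query on every
# evaluation, while a cached *result* does not.
db_hits = 0

def run_query():
    """Simulated database hit returning city names."""
    global db_hits
    db_hits += 1
    return ["London", "Paris", "Tokyo"]

# Caching the lazy object (here, the callable itself) still hits the
# "database" every time it is evaluated:
lazy = run_query
lazy()
lazy()
assert db_hits == 2

# Caching the evaluated result (the analogue of list(qs)) hits it once:
cache = {}
if "cities" not in cache:
    cache["cities"] = run_query()
first = cache["cities"]
second = cache["cities"]
assert db_hits == 3
assert first is second
```

The same idea applied to the code above would be caching `list(model.objects.all())` instead of the queryset itself.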

Any suggestions?

UPDATE

I tried to cache the cities table from the Django shell, but the process crashes with a Segmentation fault message.

>>> from django.core.cache import cache
>>> qs = City.objects.all()
>>> qs.count()
2246813
>>> key = 'city_autocomplete'
>>> cache.set(key, qs, 60*60*24)
Segmentation fault

But I was able to put smaller tables into the cache, so I hope this problem can be overcome; an answer is still needed.

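One plausible factor (an assumption, not verified against the crash): memcached rejects items above 1 MB by default, and pickling 2.2M full model instances produces a vastly larger payload than pickling just the field the autocomplete needs. A self-contained sketch with a hypothetical `FakeCity` class standing in for the Django model:

```python
import pickle

class FakeCity:
    """Stand-in for a Django model instance with several fields."""
    def __init__(self, name):
        self.name = name
        self.country = "GB"
        self.population = 100000
        self.extra = "x" * 200  # simulate the weight of other columns

cities = [FakeCity("City%d" % i) for i in range(1000)]

# Pickling full objects vs. pickling only the field the widget displays:
full_payload = len(pickle.dumps(cities))
names_payload = len(pickle.dumps([c.name for c in cities]))

assert names_payload < full_payload
```

In Django terms that would mean caching something like `list(City.objects.values_list('city_name', flat=True))` rather than whole instances, which keeps each cache entry far smaller.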

1 Answer

#1


cache.set(key, qs, 60*60*24) Segmentation fault

This happened because the query result is too big. You will need to cache it AFTER it is filtered.

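Caching only the filtered results, as suggested above, can be sketched with a pure-Python stand-in (the cache dict, `filtered_cities`, and the simulated query counter are hypothetical illustrations, not django-autocomplete-light API):

```python
# Per-term result cache: each search term gets its own small cache entry,
# so no single entry ever holds the full 2M-row table.
db_queries = 0
cache = {}

def filtered_cities(term, rows):
    global db_queries
    key = "city_ac_%s" % term.lower()
    if key not in cache:
        db_queries += 1  # simulate one database hit per cache miss
        cache[key] = [r for r in rows if term.lower() in r.lower()]
    return cache[key]

rows = ["Paris", "Parma", "Porto"]
filtered_cities("par", rows)
filtered_cities("par", rows)  # served from cache, no second query
assert db_queries == 1
```

Each entry is tiny, so it fits comfortably under memcached's item-size limit, unlike the whole table.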

This is how I did it. Not perfect, but it worked nicely with 500 elements.

import json

from django.http import HttpResponse

def get_autocomplete(request):
    if request.is_ajax():
        q = request.GET.get('term', '')
        results_list = MY_model.objects.filter(title__contains=q)
        # Model instances are not JSON serializable, so collect their titles.
        results = [result.title for result in results_list]
        data = json.dumps(results)
    else:
        data = 'Nothing to see here!'
    return HttpResponse(data, content_type='application/json')

I found this on the net somewhere.

At best you will only need the first 10 elements, since the rest will scroll off the screen anyway.

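That cap can be applied by slicing before serializing. A pure-Python sketch (the `top_matches` helper and the sample list are hypothetical; in Django the same effect comes from slicing the queryset, e.g. `qs[:10]`):

```python
# Cap autocomplete results at 10, since only the first few entries
# fit on screen anyway.
def top_matches(term, choices, limit=10):
    matches = [c for c in choices if term.lower() in c.lower()]
    return matches[:limit]

cities = ["London", "Londonderry", "Longford", "Leeds"]
assert top_matches("lon", cities) == ["London", "Londonderry", "Longford"]
```

Slicing a queryset adds a LIMIT clause to the SQL, so the database never materializes more rows than the dropdown can show.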
