Extract words surrounding a search term

Date: 2022-06-01 12:46:13

I have this script that does a word search in text. The search works pretty well and the results are as expected. What I'm trying to achieve is to extract the n words on either side of the match. For example:


The world is a small place, we should try to take care of it.


Suppose I'm looking for "place" and I need to extract the 3 words to its right and the 3 words to its left. In this case they would be:


left -> [is, a, small]
right -> [we, should, try]

What is the best approach to do this?


Thanks!


4 Answers

#1


13  

import re

def search(text, n):
    '''Searches for 'place' in text and retrieves the n words on either side,
    which are returned separately as (left, right).'''
    word = r"\W*([\w]+)"
    groups = re.search(r'{}\W*{}{}'.format(word * n, 'place', word * n), text).groups()
    return groups[:n], groups[n:]

This allows you to specify how many words on either side you want to capture. It works by constructing the regular expression dynamically. With


t = "The world is a small place, we should try to take care of it."
search(t,3)
(('is', 'a', 'small'), ('we', 'should', 'try'))
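If you want to look for an arbitrary word rather than the hardcoded 'place', a minimal variation along these lines should work (the extra target parameter, the re.escape call and the not-found guard are my additions, not part of the answer above):

import re

def search_around(text, target, n):
    '''Retrieves the n words on either side of the first match of target.'''
    word = r"\W*([\w]+)"
    pattern = r'{}\W*{}{}'.format(word * n, re.escape(target), word * n)
    m = re.search(pattern, text)
    if m is None:
        # no match, or fewer than n words on one of the sides
        return (), ()
    groups = m.groups()
    return groups[:n], groups[n:]

t = "The world is a small place, we should try to take care of it."
print(search_around(t, 'place', 3))
# (('is', 'a', 'small'), ('we', 'should', 'try'))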

#2


3  

import re

s = 'The world is a small place, we should try to take care of it.'
m = re.search(r'((?:\w+\W+){,3})(place)\W+((?:\w+\W+){,3})', s)
if m:
    l = [x.strip().split() for x in m.groups()]
    left, right = l[0], l[2]
    print(left, right)

Output


['is', 'a', 'small'] ['we', 'should', 'try']

If you search for The, it yields:


[] ['world', 'is', 'a']
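The same pattern can also be built dynamically for any search word and window size; a rough sketch (the context wrapper below is my own addition, not from the answer above):

import re

def context(sentence, target, n=3):
    # {,n} allows zero to n repetitions, so a target near either end still matches
    pattern = r'((?:\w+\W+){{,{n}}})({target})\W+((?:\w+\W+){{,{n}}})'.format(
        n=n, target=re.escape(target))
    m = re.search(pattern, sentence)
    if m is None:
        return [], []
    left, _, right = (g.strip().split() for g in m.groups())
    return left, right

s = 'The world is a small place, we should try to take care of it.'
print(context(s, 'place'))  # (['is', 'a', 'small'], ['we', 'should', 'try'])
print(context(s, 'The'))    # ([], ['world', 'is', 'a'])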

#3


2  

Find all of the words:


import re

sentence = 'The world is a small place, we should try to take care of it.'
words = re.findall(r'\w+', sentence)

Get the index of the word that you're looking for:


index = words.index('place')  # position of the first occurrence

And then use slicing to find the other ones:


left = words[index - 3:index]
right = words[index + 1:index + 4]
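Put together, a small helper along these lines could do it (the function name and the max() guard are my additions; without the guard, a negative slice start makes the left list come out empty when the match sits among the first three words):

import re

def neighbors(sentence, target, n=3):
    words = re.findall(r'\w+', sentence)
    index = words.index(target)             # raises ValueError if target is absent
    left = words[max(index - n, 0):index]   # max() keeps the slice start non-negative
    right = words[index + 1:index + 1 + n]
    return left, right

sentence = 'The world is a small place, we should try to take care of it.'
print(neighbors(sentence, 'place'))
# (['is', 'a', 'small'], ['we', 'should', 'try'])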

#4


2  

While regex would work, I think it's overkill for this problem. You're better off with a generator expression and plain list slicing:


sentence = 'The world is a small place, we should try to take care of it.'.split()
indices = (i for i, word in enumerate(sentence) if word == "place")
neighbors = []
for ind in indices:
    # 3 words before the match plus 3 words after it
    neighbors.append(sentence[ind - 3:ind] + sentence[ind + 1:ind + 4])

Note that if the word that you're looking for appears multiple times consecutively in the sentence, then this algorithm will include the consecutive occurrences as neighbors.
For example:


In [29]: neighbors = []


In [30]: sentence = 'The world is a small place place place, we should try to take care of it.'.split()


In [31]: sentence
Out[31]: ['The', 'world', 'is', 'a', 'small', 'place', 'place', 'place,', 'we', 'should', 'try', 'to', 'take', 'care', 'of', 'it.']


In [32]: indices = [i for i,word in enumerate(sentence) if word == 'place']

In [33]: for ind in indices:
   ....:     neighbors.append(sentence[ind-3:ind]+sentence[ind+1:ind+4])


In [34]: neighbors
Out[34]: 
[['is', 'a', 'small', 'place', 'place,', 'we'],
 ['a', 'small', 'place', 'place,', 'we', 'should']]
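If that's unwanted, one possible tweak (my own variation on the code above, not part of the original answer) is to filter the search word itself out of each window:

sentence = 'The world is a small place place place, we should try to take care of it.'.split()
target = 'place'
neighbors = []
for ind in (i for i, word in enumerate(sentence) if word == target):
    window = sentence[max(ind - 3, 0):ind] + sentence[ind + 1:ind + 4]
    # drop exact repetitions of the target; note that 'place,' survives
    # because split() keeps the punctuation attached
    neighbors.append([w for w in window if w != target])

print(neighbors)
# [['is', 'a', 'small', 'place,', 'we'],
#  ['a', 'small', 'place,', 'we', 'should']]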
