Finding local maxima/minima with NumPy in a 1D NumPy array.

Time: 2022-08-22 13:08:26

Can you suggest a module function from numpy/scipy that can find local maxima/minima in a 1D numpy array? Obviously the simplest approach ever is to have a look at the nearest neighbours, but I would like to have an accepted solution that is part of the numpy distro.


9 Answers

#1


52  

If you are looking for all entries in the 1d array a that are smaller than their neighbors, you can try

numpy.r_[True, a[1:] < a[:-1]] & numpy.r_[a[:-1] < a[1:], True]

You could also smooth your array before this step using numpy.convolve().


I don't think there is a dedicated function for this.

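A minimal sketch of how this might be used end to end (the sample array and the smoothing step are illustrative, not from the original answer):

import numpy as np

a = np.array([3.0, 2.0, 4.0, 1.0, 5.0, 2.0])

# True where a[i] is strictly smaller than both neighbours
# (the True padding means the endpoints also count as minima)
is_min = np.r_[True, a[1:] < a[:-1]] & np.r_[a[:-1] < a[1:], True]
min_indices = np.where(is_min)[0]   # array([1, 3, 5])

# optional: smooth first with a short moving-average kernel
smoothed = np.convolve(a, np.ones(3) / 3, mode='same')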

#2


162  

In SciPy >= 0.11


import numpy as np
from scipy.signal import argrelextrema

x = np.random.random(12)

# for local maxima
argrelextrema(x, np.greater)

# for local minima
argrelextrema(x, np.less)

Produces


>>> x
array([ 0.56660112,  0.76309473,  0.69597908,  0.38260156,  0.24346445,
    0.56021785,  0.24109326,  0.41884061,  0.35461957,  0.54398472,
    0.59572658,  0.92377974])
>>> argrelextrema(x, np.greater)
(array([1, 5, 7]),)
>>> argrelextrema(x, np.less)
(array([4, 6, 8]),)

Note, these are the indices of x that are local max/min. To get the values, try:


>>> x[argrelextrema(x, np.greater)[0]]

scipy.signal also provides argrelmax and argrelmin for finding maxima and minima respectively.

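For completeness, a short sketch of the argrelmax/argrelmin variants on the same x (they default to the np.greater/np.less comparators used above):

from scipy.signal import argrelmax, argrelmin

argrelmax(x)          # equivalent to argrelextrema(x, np.greater) -> (array([1, 5, 7]),)
argrelmin(x)          # equivalent to argrelextrema(x, np.less)    -> (array([4, 6, 8]),)
x[argrelmax(x)[0]]    # the local-maximum values themselves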

#3


29  

For curves with not too much noise, I recommend the following small code snippet:


from numpy import *

# example data with some peaks:
x = linspace(0, 4, 1000)
data = .2*sin(10*x)+ exp(-abs(2-x)**2)

# that's the line, you need:
a = diff(sign(diff(data))).nonzero()[0] + 1 # local min+max
b = (diff(sign(diff(data))) > 0).nonzero()[0] + 1 # local min
c = (diff(sign(diff(data))) < 0).nonzero()[0] + 1 # local max


# graphical output...
from pylab import *
plot(x,data)
plot(x[b], data[b], "o", label="min")
plot(x[c], data[c], "o", label="max")
legend()
show()

The +1 is important, because each diff shortens the array by one, so the indices are shifted relative to the original data.
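A quick check of that index shift, using the data defined above:

len(data)              # 1000
len(diff(data))        # 999
len(diff(diff(data)))  # 998 -> index k here corresponds to index k + 1 in data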

#4


14  

Another approach (more words, less code) that may help:


The locations of local maxima and minima are also the locations of the zero crossings of the first derivative. It is generally much easier to find zero crossings than it is to directly find local maxima and minima.


Unfortunately, the first derivative tends to "amplify" noise, so when significant noise is present in the original data, the first derivative is best used only after the original data has had some degree of smoothing applied.


Since smoothing is, in the simplest sense, a low pass filter, the smoothing is often best (well, most easily) done by using a convolution kernel, and "shaping" that kernel can provide a surprising amount of feature-preserving/enhancing capability. The process of finding an optimal kernel can be automated using a variety of means, but the best may be simple brute force (plenty fast for finding small kernels). A good kernel will (as intended) massively distort the original data, but it will NOT affect the location of the peaks/valleys of interest.


Fortunately, quite often a suitable kernel can be created via a simple SWAG ("educated guess"). The width of the smoothing kernel should be a little wider than the widest expected "interesting" peak in the original data, and its shape will resemble that peak (a single-scaled wavelet). For mean-preserving kernels (what any good smoothing filter should be), the sum of the kernel elements should be precisely equal to 1.00, and the kernel should be symmetric about its center (meaning it will have an odd number of elements).

Given an optimal smoothing kernel (or a small number of kernels optimized for different data content), the degree of smoothing becomes a scaling factor for (the "gain" of) the convolution kernel.


Determining the "correct" (optimal) degree of smoothing (convolution kernel gain) can even be automated: Compare the standard deviation of the first derivative data with the standard deviation of the smoothed data. How the ratio of the two standard deviations changes with the degree of smoothing can be used to predict effective smoothing values. A few manual data runs (that are truly representative) should be all that's needed.

All the prior solutions posted above compute the first derivative, but they don't treat it as a statistical measure, nor do they attempt to perform feature-preserving/enhancing smoothing (to help subtle peaks "leap above" the noise).
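A minimal sketch of the workflow described above, assuming an illustrative triangular kernel (the kernel shape and width are placeholders, not a recommendation from the original answer): normalize a symmetric kernel so its elements sum to 1, smooth by convolution, then take the sign changes of the first derivative of the smoothed data as the peak/valley locations.

import numpy as np

def smoothed_extrema(data, kernel_width=9):
    # symmetric, mean-preserving kernel: odd length, elements sum to 1.0
    half = kernel_width // 2
    kernel = np.concatenate([np.arange(1, half + 2), np.arange(half, 0, -1)]).astype(float)
    kernel /= kernel.sum()

    smoothed = np.convolve(data, kernel, mode='same')

    # zero crossings of the first derivative of the smoothed data
    d = np.diff(smoothed)
    maxima = np.where((d[:-1] > 0) & (d[1:] <= 0))[0] + 1
    minima = np.where((d[:-1] < 0) & (d[1:] >= 0))[0] + 1
    return maxima, minima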

Finally, the bad news: Finding "real" peaks becomes a royal pain when the noise also has features that look like real peaks (overlapping bandwidth). The next more-complex solution is generally to use a longer convolution kernel (a "wider kernel aperture") that takes into account the relationship between adjacent "real" peaks (such as minimum or maximum rates for peak occurrence), or to use multiple convolution passes using kernels having different widths (but only if it is faster: it is a fundamental mathematical truth that linear convolutions performed in sequence can always be convolved together into a single convolution). But it is often far easier to first find a sequence of useful kernels (of varying widths) and convolve them together than it is to directly find the final kernel in a single step.


Hopefully this provides enough info to let Google (and perhaps a good stats text) fill in the gaps. I really wish I had the time to provide a worked example, or a link to one. If anyone comes across one online, please post it here!


#5


5  

Why not use SciPy's built-in function signal.find_peaks_cwt to do the job?

from scipy import signal
import numpy as np

#generate junk data (numpy 1D arr)
xs = np.arange(0, np.pi, 0.05)
data = np.sin(xs)

# maxima : use builtin function to find (max) peaks
max_peakind = signal.find_peaks_cwt(data, np.arange(1,10))

#generate an inverse numpy 1D arr (in order to find minima)
inv_data = 1./data
# minima : use builtin function to find (min) peaks (use inverted data)
min_peakind = signal.find_peaks_cwt(inv_data, np.arange(1,10))

#show results
print "maxima",  data[max_peakind]
print "minima",  data[min_peakind]

results:


maxima [ 0.9995736]
minima [ 0.09146464]
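
One caveat (not from the original answer): since data contains sin(0) == 0, the reciprocal 1./data divides by zero; negating the data instead turns minima into maxima without that problem:

min_peakind = signal.find_peaks_cwt(-data, np.arange(1, 10))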

Regards


#6


4  

Update: I wasn't happy with numpy.gradient, so I found it more reliable to use numpy.diff. Please let me know if it does what you want.

Regarding the issue of noise: the mathematical problem here is to locate the maxima/minima; if we also want to handle noise, we can use something like convolve, which was mentioned earlier.

import numpy as np
from matplotlib import pyplot

a = np.array([10.3, 2, 0.9, 4, 5, 6, 7, 34, 2, 5, 25, 3, -26, -20, -29], dtype=float)

gradients = np.diff(a)
print(gradients)

maxima_num = 0
minima_num = 0
max_locations = []
min_locations = []
count = 0
for i in gradients[:-1]:
    count += 1

    # sign change from positive to negative -> local maximum
    if i > 0 and gradients[count] < 0 and i != gradients[count]:
        maxima_num += 1
        max_locations.append(count)

    # sign change from negative to positive -> local minimum
    if i < 0 and gradients[count] > 0 and i != gradients[count]:
        minima_num += 1
        min_locations.append(count)

turning_points = {'maxima_number': maxima_num, 'minima_number': minima_num,
                  'maxima_locations': max_locations, 'minima_locations': min_locations}

print(turning_points)

pyplot.plot(a)
pyplot.show()

#7


2  

None of these solutions worked for me, since I wanted to find peaks in the center of repeating values as well. For example, in

ar = np.array([0,1,2,2,2,1,3,3,3,2,5,0])


the answer should be


array([ 3,  7, 10], dtype=int64)

I did this using a loop. I know it's not super clean, but it gets the job done.


import numpy as np

def findLocalMaxima(ar):
    # find local maxima of an array, including centers of runs of repeating elements
    maxInd = np.zeros_like(ar)
    peakVar = -np.inf
    i = -1
    while i < len(ar) - 1:
        i += 1
        if peakVar < ar[i]:
            peakVar = ar[i]
            for j in range(i, len(ar)):
                if peakVar < ar[j]:
                    break
                elif peakVar == ar[j]:
                    continue
                elif peakVar > ar[j]:
                    # place the peak index at the center of the plateau
                    peakInd = i + np.floor(abs(i - j) / 2)
                    maxInd[peakInd.astype(int)] = 1
                    i = j
                    break
        peakVar = ar[i]
    maxInd = np.where(maxInd)[0]
    return maxInd
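
Calling it on the example array from above reproduces the expected indices:

ar = np.array([0, 1, 2, 2, 2, 1, 3, 3, 3, 2, 5, 0])
findLocalMaxima(ar)   # -> array([ 3,  7, 10])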

#8


1  

import numpy as np
x=np.array([6,3,5,2,1,4,9,7,8])
y=np.array([2,1,3,5,3,9,8,10,7])
sortId=np.argsort(x)
x=x[sortId]
y=y[sortId]
length = len(y)
minm = np.array([])
maxm = np.array([])
i = 0
while i < length-1:
    if i < length - 1:
        while i < length-1 and y[i+1] >= y[i]:
            i+=1

        if i != 0 and i < length-1:
            maxm = np.append(maxm,i)

        i+=1

    if i < length - 1:
        while i < length-1 and y[i+1] <= y[i]:
            i+=1

        if i < length-1:
            minm = np.append(minm,i)
        i+=1


print(minm)
print(maxm)

minm and maxm contain the indices of the minima and maxima, respectively. For a huge data set it will give lots of maxima/minima, so in that case smooth the curve first and then apply this algorithm.

#9


0  

While this question is really old, I believe there is a much simpler approach in numpy (a one-liner).

import numpy as np

lst = [1, 3, 9, 5, 2, 5, 6, 9, 7]

np.diff(np.sign(np.diff(lst)))  # the one-liner

# output
array([ 0, -2,  0,  2,  0,  0, -2])

To find a local max or min we essentially want to find when the difference between the values in the list (3-1, 9-3...) changes from positive to negative (max) or negative to positive (min). Therefore, first we find the difference. Then we find the sign, and then we find the changes in sign by taking the difference again. (Sort of like a first and second derivative in calculus, only we have discrete data and don't have a continuous function.)


The output in my example does not contain the extrema (the first and last values in the list). Also, just like in calculus, if the second derivative is negative you have a max, and if it is positive you have a min.

Thus we have the following matchup:


[1,  3,  9,  5,  2,  5,  6,  9,  7]
    [0, -2,  0,  2,  0,  0, -2]
        Max     Min         Max
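
If you want the actual indices rather than the sign-change pattern, a minimal follow-up (same off-by-one adjustment as in answer #3) is:

import numpy as np

a = np.array([1, 3, 9, 5, 2, 5, 6, 9, 7])
d2 = np.diff(np.sign(np.diff(a)))

maxima = np.where(d2 < 0)[0] + 1   # array([2, 7]) -> values 9 and 9
minima = np.where(d2 > 0)[0] + 1   # array([4])    -> value 2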
