Summing rows with equal values in the first column

Time: 2023-01-19 13:02:43

How can I sum across rows that have equal values in the first column of a numpy array? For example:

In: np.array([[1,2,3],
             [1,4,6], 
             [2,3,5],
             [2,6,2],
             [3,4,8]])

Out: [[1,6,9], [2,9,7], [3,4,8]]

Any help would be greatly appreciated.

4 Answers

#1


Pandas has a very powerful groupby function which makes this simple.

import numpy as np
import pandas as pd

n = np.array([[1,2,3],
             [1,4,6], 
             [2,3,5],
             [2,6,2],
             [3,4,8]])

df = pd.DataFrame(n, columns = ["First Col", "Second Col", "Third Col"])

df.groupby("First Col").sum()

#2


Approach #1

Here's a vectorized NumPythonic approach based on np.bincount -

# Initial setup
N = A.shape[1]-1
unqA1, row_id = np.unique(A[:, 0], return_inverse=True)

# Create subscripts and accumulate with bincount for tagged summations
subs = np.arange(N)*(row_id.max()+1) + row_id[:,None]
sums = np.bincount(subs.ravel(), weights=A[:,1:].ravel())

# Append the unique elements from first column to get final output
out = np.append(unqA1[:,None], sums.reshape(N,-1).T, 1)

Sample input, output -

In [66]: A
Out[66]: 
array([[1, 2, 3],
       [1, 4, 6],
       [2, 3, 5],
       [2, 6, 2],
       [7, 2, 1],
       [2, 0, 3]])

In [67]: out
Out[67]: 
array([[  1.,   6.,   9.],
       [  2.,   9.,  10.],
       [  7.,   2.,   1.]])
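
The trick here is that every (value-column, group) pair gets its own bin: the bin index is column index * number_of_groups + group label, and np.bincount with the values as weights then produces one per-group sum per value column. A quick check with the sample array above (the names below are just for illustration):

grp = np.unique(A[:, 0], return_inverse=True)[1]     # group label per row: [0 0 1 1 2 1]
n_grps = grp.max() + 1                               # 3 groups
print(np.arange(A.shape[1] - 1) * n_grps + grp[:, None])
# [[0 3]
#  [0 3]
#  [1 4]
#  [1 4]
#  [2 5]
#  [1 4]]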

Approach #2

Here's another approach, based on np.cumsum and np.diff -

# Sort A based on first column
sA = A[np.argsort(A[:,0]),:]

# Row mask of where each group ends
row_mask = np.append(np.diff(sA[:,0],axis=0)!=0,[True])

# Get cumulative summations and then DIFF to get summations for each group
cumsum_grps = sA.cumsum(0)[row_mask,1:]
sum_grps = np.diff(cumsum_grps,axis=0)

# Prepend the first group's sums (its cumsum row) to the per-group diffs
counts = np.concatenate((cumsum_grps[0,:][None],sum_grps),axis=0)

# Concatenate the first column of the input array for final output
out = np.concatenate((sA[row_mask,0][:,None],counts),axis=1)
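
As a quick illustration with the array from the question (already sorted on the first column, so sA equals A here), the row mask marks the last row of each group:

A = np.array([[1,2,3],
              [1,4,6],
              [2,3,5],
              [2,6,2],
              [3,4,8]])
print(np.append(np.diff(A[:, 0], axis=0) != 0, [True]))
# [False  True False  True  True]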

Benchmarking

Here are some runtime tests for the NumPy-based approaches presented so far for this question.
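
The timed functions cumsum_diff, bincount and add_at correspond to Approach #2, Approach #1 and the np.unique + np.add.at solution further down; their exact definitions aren't shown in the post, but wrapping the snippets as functions would look roughly like this sketch:

def cumsum_diff(A):                       # Approach #2
    sA = A[np.argsort(A[:,0]),:]
    row_mask = np.append(np.diff(sA[:,0],axis=0)!=0,[True])
    cumsum_grps = sA.cumsum(0)[row_mask,1:]
    sum_grps = np.diff(cumsum_grps,axis=0)
    counts = np.concatenate((cumsum_grps[0,:][None],sum_grps),axis=0)
    return np.concatenate((sA[row_mask,0][:,None],counts),axis=1)

def bincount(A):                          # Approach #1
    N = A.shape[1]-1
    unqA1, row_id = np.unique(A[:, 0], return_inverse=True)
    subs = np.arange(N)*(row_id.max()+1) + row_id[:,None]
    sums = np.bincount(subs.ravel(), weights=A[:,1:].ravel())
    return np.append(unqA1[:,None], sums.reshape(N,-1).T, 1)

def add_at(A):                            # np.unique + np.add.at (answer #4)
    unq, unq_inv = np.unique(A[:, 0], return_inverse=True)
    out = np.zeros((len(unq), A.shape[1]), dtype=A.dtype)
    out[:, 0] = unq
    np.add.at(out[:, 1:], unq_inv, A[:, 1:])
    return out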

In [319]: A = np.random.randint(0,1000,(100000,10))

In [320]: %timeit cumsum_diff(A)
100 loops, best of 3: 12.1 ms per loop

In [321]: %timeit bincount(A)
10 loops, best of 3: 21.4 ms per loop

In [322]: %timeit add_at(A)
10 loops, best of 3: 60.4 ms per loop

In [323]: A = np.random.randint(0,1000,(100000,20))

In [324]: %timeit cumsum_diff(A)
10 loops, best of 3: 32.1 ms per loop

In [325]: %timeit bincount(A)
10 loops, best of 3: 32.3 ms per loop

In [326]: %timeit add_at(A)
10 loops, best of 3: 113 ms per loop

Approach #2 (cumsum + diff) seems to perform quite well.

#3


Try using pandas. Group by the first column and then sum the rows in each group. Something like

df.groupby(df.columns[0]).sum()

#4


With a little help from your friends np.unique and np.add.at:

>>> unq, unq_inv = np.unique(A[:, 0], return_inverse=True)
>>> out = np.zeros((len(unq), A.shape[1]), dtype=A.dtype)
>>> out[:, 0] = unq
>>> np.add.at(out[:, 1:], unq_inv, A[:, 1:])

>>> out  # A was the OP's array
array([[1, 6, 9],
       [2, 9, 7],
       [3, 4, 8]])
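
np.add.at is used instead of a plain fancy-indexed += because the latter is buffered: repeated indices in unq_inv would only be counted once. A small illustration:

>>> a = np.zeros(3)
>>> a[[0, 0, 1]] += 1           # buffered: index 0 is incremented only once
>>> a
array([ 1.,  1.,  0.])
>>> np.add.at(a, [0, 0, 1], 1)  # unbuffered: repeated indices accumulate
>>> a
array([ 3.,  2.,  0.])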
