
I have the following equation:

(v - mu)^T Sigma^{-1} (v - mu)

where v, mu \in |R^3, Sigma \in |R^{3x3}, and the result is a scalar value. Implementing this in numpy for a single v is no problem:

result = np.transpose(v - mu) @ Sigma_inv @ (v - mu)

Now I have a bunch of v-vectors (let's call them V \in |R^{3xn}) and I would like to evaluate the above equation in a vectorized manner so that, as a result, I get a new vector Result \in |R^{1xn}.

# loop version of what I want (one quadratic form per column of V)
Result = np.zeros((V.shape[1], 1))
for i in range(V.shape[1]):
    v = V[:, i]
    Result[i, :] = np.transpose(v - mu) @ Sigma_inv @ (v - mu)

I looked at np.vectorize, but the documentation suggests that it's just the same as looping over all entries, which I would prefer to avoid. What would be an elegant vectorized solution?

As a side note: n might be quite large, and an |R^{nxn} matrix will certainly not fit into my memory!

edit: working code sample

import numpy as np

S = np.array([[1, 2], [3, 4]])
V = np.array([[10, 11, 12, 13, 14, 15], [20, 21, 22, 23, 24, 25]])

# quadratic form v^T S v for every column of V, one column at a time
Res = np.zeros((V.shape[1], 1))
for i in range(V.shape[1]):
    v = np.transpose(np.atleast_2d(V[:, i]))   # column vector of shape (2, 1)
    Res[i, :] = (np.transpose(v) @ S @ v)[0][0]

print(Res)
  • It would be easier if you provided a minimal working example of what you would like to achieve, even if it uses loops (e.g. including import statements and definitions of all the symbols you use). Commented Sep 21, 2017 at 12:41
  • True: I added a minimal example Commented Sep 21, 2017 at 12:47
  • I would look into np.einsum; although I believe it is one of the most cryptic functions in numpy, it may be exactly what you want. Commented Sep 21, 2017 at 12:51

4 Answers


Using a combination of matrix multiplication and np.einsum -

np.einsum('ij,ij->j',V,S.dot(V))
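For the original problem with the mean subtracted, the same pattern applies to the centered matrix. A minimal sketch (the random setup for V, mu and Sigma_inv is illustrative, not from the question):

import numpy as np

rng = np.random.default_rng(0)
V = rng.standard_normal((3, 1000))        # n = 1000 sample vectors, one per column
mu = rng.standard_normal(3)
Sigma_inv = np.linalg.inv(np.cov(V))      # any invertible 3x3 matrix will do here

D = V - mu[:, None]                       # subtract mu from every column
res = np.einsum('ij,ij->j', D, Sigma_inv @ D)   # one quadratic form per column

# spot-check against the scalar formula for the first column
assert np.isclose(res[0], (V[:, 0] - mu) @ Sigma_inv @ (V[:, 0] - mu))

The 'ij,ij->j' spec multiplies the two arrays elementwise and sums over the rows, so only 3 x n arrays are ever materialized.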

1 Comment

You probably need a reshape at the end to match the MWE, but other than that, this is excellent!

Does this work for you?

res = np.diag(V.T @ S @ V).reshape(-1, 1)

It seems to produce the same result as your loop.

import numpy as np

S = np.array([[1, 2], [3, 4]])
V = np.array([[10, 11, 12, 13, 14, 15], [20, 21, 22, 23, 24, 25]])

Res = np.zeros((V.shape[1], 1))
for i in range(V.shape[1]):
    v = np.transpose(np.atleast_2d(V[:,i]))
    Res[i,:] = (np.transpose(v) @ S @ v)[0][0]

res = np.diag(V.T @ S @ V).reshape(-1, 1)

print(np.all(np.isclose(Res, res)))
# output: True

There is probably a more memory-efficient solution using np.einsum, though.
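For instance, something along the lines of the einsum answer above might work (a sketch; the reshape is only there to match the (n, 1) shape of Res):

res = np.einsum('ij,ij->j', V, S @ V).reshape(-1, 1)   # never materializes the n x n product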

2 Comments

thanks for the hints! The problem with your solution is that "n" in my application is quite large, so an |R^{nxn} matrix will definitely not fit into memory.
The solution by @Divakar is superior to this.

Here is a simple solution:

import numpy as np

S = np.array([[1, 2], [3, 4]])
V = np.array([[10, 11, 12, 13, 14, 15], [20, 21, 22, 23, 24, 25]])

Res = np.sum((V.T @ S) * V.T, axis=1)   # row-wise dot products; only (n, 2) intermediates
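Note that Res here has shape (n,) rather than (n, 1); if the column-vector shape from the question's MWE is needed, a reshape does it (a small addition, not part of the answer as posted):

Res = Res.reshape(-1, 1)   # match the (n, 1) shape of the loop version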

3 Comments

My problem was that V.T @ S @ V will be too large to fit into memory as it will be |R^{n x n}
@juqa Yes, I've changed the answer to use only n x 2 arrays. It's essentially the same as Divakar's answer, although einsum might be faster than multiplication and sum.
thanks, I benchmarked both solutions for a vector with 5,000,000 entries and your hunch was correct: einsum took ~100ms while your solution took ~150ms (benchmarked over multiple iterations).
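For reference, a benchmark along those lines might look like this (a sketch; sizes and repetition counts are illustrative, and timings are machine-dependent):

import numpy as np
from timeit import timeit

S = np.random.rand(2, 2)
V = np.random.rand(2, 5_000_000)

# average runtime per call over 10 repetitions
t_einsum = timeit(lambda: np.einsum('ij,ij->j', V, S @ V), number=10) / 10
t_sum = timeit(lambda: np.sum((V.T @ S) * V.T, axis=1), number=10) / 10
print(t_einsum, t_sum)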

These are multiplications of stacks of matrices/vectors. numpy.matmul can do that after bringing S and V into the correct shapes:

S = S[np.newaxis, :, :]      # shape (1, 2, 2): one matrix, broadcast over the stack
VT = V.T[:, np.newaxis, :]   # shape (n, 1, 2): stack of row vectors
V = VT.transpose(0, 2, 1)    # shape (n, 2, 1): stack of column vectors

tmp = np.matmul(S, V)        # shape (n, 2, 1)
Res = np.matmul(VT, tmp)     # shape (n, 1, 1)

print(Res)
#[[[2700]]
# [[3040]]
# [[3400]]
# [[3780]]
# [[4180]]
# [[4600]]]
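As a usage note, Res ends up with shape (n, 1, 1) here; dropping the trailing axis recovers the (n, 1) column vector from the question (a small post-processing step, not part of the answer as posted):

Res = Res[:, :, 0]   # shape (n, 1), matching the question's Res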
