Given arrays a, b, and c:
import numpy as np
a = np.array([100, 200, 300])
b = np.array([[1, 0, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
c = np.array([150, 300, 500, 650])
I'd like to optimize a so that it minimizes the sum of the absolute differences defined in c_prime:
c_prime = c - np.sum(a*b, axis=1)
print(c_prime)
print(np.abs(c_prime).sum())
[ 50 -100 0 50]
200
Manually, by changing the first element of a from 100 to 150, c_prime moves toward the desired result:
a = np.array([150, 200, 300])
c_prime = c - np.sum(a*b, axis=1)
print(c_prime)
print(np.abs(c_prime).sum())
[ 0 -150 0 0]
150
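For reference, both experiments above can be reproduced with a small helper function (the name f is just illustrative):

```python
import numpy as np

b = np.array([[1, 0, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
c = np.array([150, 300, 500, 650])

def f(a, b, c):
    # total absolute deviation of c from the row sums of a * b
    return np.abs(c - np.sum(a * b, axis=1)).sum()

print(f(np.array([100, 200, 300]), b, c))  # 200
print(f(np.array([150, 200, 300]), b, c))  # 150
```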
Now, my question, embarrassingly, is: how can I achieve the desired result? I've tried scipy.optimize.minimize, but it's obvious this code misses the mark, and the function may be conceptually incorrect altogether.
from scipy.optimize import minimize

def f(x, b, c):
    return np.abs(c - np.sum(x * b, axis=1)).sum()

x0 = a
minimize(f, x0, args=(b, c))
      fun: 200.0
 hess_inv: array([[1, 0, 0],
       [0, 1, 0],
       [0, 0, 1]])
      jac: array([-1.,  0.,  1.])
  message: 'Desired error not necessarily achieved due to precision loss.'
     nfev: 327
      nit: 0
     njev: 63
   status: 2
  success: False
        x: array([100., 200., 300.])
Given the improved results from manually setting a[0] to 150 above, why do these results return a non-optimal x?
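(A likely explanation, for what it's worth: with no method and no constraints, minimize defaults to BFGS, which assumes a smooth objective. The absolute value has kinks wherever a residual crosses zero, so the gradient estimate is unreliable and the run stalls, consistent with nit: 0 above. A derivative-free method sidesteps this; a minimal sketch with Nelder-Mead:)

```python
import numpy as np
from scipy.optimize import minimize

b = np.array([[1, 0, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
c = np.array([150, 300, 500, 650])
x0 = np.array([100, 200, 300], dtype=float)

def f(x, b, c):
    return np.abs(c - np.sum(x * b, axis=1)).sum()

# Nelder-Mead uses only function values, so the kinks in |.|
# don't break it the way they break a gradient-based method.
res = minimize(f, x0, args=(b, c), method='Nelder-Mead')
print(res.x, res.fun)
```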
Comments:

You want a such that sum(abs(b*a - c)) is minimized (here both a and c are 1-column vectors). That just seems strictly solvable to me. Maybe also ask the math stack exchange.

a = np.linalg.lstsq(b, c)[0][None, :] solves the matrix equation b*a - c = 0 (by least squares), but it may be close enough to the abs objective.
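The lstsq suggestion can be checked directly: for this particular b and c the system b @ a = c happens to be consistent, so the least-squares solution drives the absolute sum to (essentially) zero too. If an exact least-absolute-deviations answer is wanted in general, the same objective can be posed as a linear program; a sketch, where the auxiliary variables t (one per residual, bounding its magnitude) are my own construction:

```python
import numpy as np
from scipy.optimize import linprog

b = np.array([[1, 0, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
c = np.array([150, 300, 500, 650])

# Least squares: this system is consistent, so the residual
# (and hence the absolute sum) comes out essentially zero.
a_ls, *_ = np.linalg.lstsq(b, c, rcond=None)
print(a_ls, np.abs(c - b @ a_ls).sum())

# General L1 minimization as an LP: minimize sum(t) subject to
# -t <= c - b @ a <= t, with decision vector [a, t].
n, m = b.shape
obj = np.concatenate([np.zeros(m), np.ones(n)])
A_ub = np.block([[ b, -np.eye(n)],
                 [-b, -np.eye(n)]])
b_ub = np.concatenate([c, -c])
bounds = [(None, None)] * m + [(0, None)] * n
res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x[:m], res.fun)
```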