
I have two functions below, each with a for loop. In the first one I need the index [i] on both inputs, a[i] * b[i], while in the second I need it next to output as well as next to matrix[i].

Why? What is the logic behind these index references [i]?

If I do not index matrix with [i] I get this error:

TypeError: can't multiply sequence by non-int of type 'float'

def w_sum(a, b):
    output = 0
    assert(len(a) == len(b))

    for i in range(len(a)):
        output += (a[i] * b[i])

    return output


def vec_mat_mul(vector, matrix):
    output = [0, 0, 0]
    assert(len(vector) == len(matrix))

    for i in range(len(vector)):
        output[i] = w_sum(vector, matrix[i])

    return output

Here are the input variables (the dependent function w_sum is shown above):

#dataset at the beginning of a game
toes = [8.5, 9.5, 9.9, 9.0]
wlrec = [0.65, 0.8, 0.8, 0.9]
nfans = [1.2, 1.3, 0.5, 1.0]

#inserting one input datapoint of each variable
input = [toes[0], wlrec[0], nfans[0]]

#defining weights
weights = [[0.1, 0.2, -0.1],
           [-0.1, 0.1, 0.9],
           [0.1, .04, 0.1]]

Might be a very mundane question but I need to get the logic to move on.

Thanks!

  • Where are you calling vec_mat_mul? Commented Apr 9, 2021 at 13:32
  • If you want to multiply matrices, I suggest you use numpy arrays rather than Python lists. Commented Apr 9, 2021 at 13:33
  • Are you asking what indexing a list does? Commented Apr 9, 2021 at 13:56
  • @OneCricketeer I want to create it without numpy first to understand the logic. I am calling vec_mat_mul here: def nn_mul_in_out(input, weights): pred = vec_mat_mul(input, weights); return pred; neural_output = nn_mul_in_out(input, weights); print(neural_output) Commented Apr 9, 2021 at 14:05
  • @interjay No, I am asking why I need to refer to [i] in the for loop of the vec_mat_mul function, here: matrix[i] Commented Apr 9, 2021 at 14:08

1 Answer


TL;DR: removing [i] from matrix means you are multiplying a list (sequence) of lists by a list of floats, which is what gives the error message.

I am assuming you are calling vec_mat_mul(input, weights). You are multiplying a 1d vector (list of floats) by a 2d matrix (list of lists).
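For reference, this is the call path you gave in the comments, reformatted for readability (it is the assumption the rest of this answer is based on):

# call path from the question's comments, reformatted
def nn_mul_in_out(input, weights):
    pred = vec_mat_mul(input, weights)
    return pred

neural_output = nn_mul_in_out(input, weights)
print(neural_output)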

Now for the line output[i] = w_sum(vector, matrix[i]): if you remove the index [i] from matrix, you are passing the whole matrix, a list of lists, as the second argument of w_sum instead of a single row.

For the line output += (a[i] * b[i]) inside def w_sum(a, b): b is now the whole matrix, so b[i] is an entire row (a list of floats), and multiplying a float by a list is not defined in Python. Therefore you get the error TypeError: can't multiply sequence by non-int of type 'float'. The sequence here is the row b[i], which is a list of floats.
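A minimal way to reproduce this, using the values from the question:

row = [0.1, 0.2, -0.1]          # weights[0] is itself a list, not a float
try:
    8.5 * row                   # this is what a[i] * b[i] becomes when b is the whole matrix
except TypeError as e:
    print(e)                    # can't multiply sequence by non-int of type 'float'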

Now for the correct behavior: when you pass matrix[i] in output[i] = w_sum(vector, matrix[i]), you are passing only one element (row) of the matrix, which is a list of floats. w_sum then does an element-wise multiplication of a list of floats with another list of floats, which is what we expect.
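To make the indexing concrete, here is the loop unrolled for the input and weights from the question (a sketch, assuming those definitions are in scope; the numbers follow directly from the values you posted):

# vec_mat_mul(input, weights) performs one w_sum per row of the matrix:
# i = 0: output[0] = w_sum(input, weights[0]) = 8.5*0.1    + 0.65*0.2  + 1.2*(-0.1) = 0.86
# i = 1: output[1] = w_sum(input, weights[1]) = 8.5*(-0.1) + 0.65*0.1  + 1.2*0.9    = 0.295
# i = 2: output[2] = w_sum(input, weights[2]) = 8.5*0.1    + 0.65*0.04 + 1.2*0.1    = 0.996
print(vec_mat_mul(input, weights))   # roughly [0.86, 0.295, 0.996], up to float rounding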


1 Comment

"it means you are passing only an element of the matrix which is a list of floats, therefore you are doing an element-wise multiplication of a list of floats with a list of floats which is what expected." thank you so much, this clears is up perfectly in my head!
