
This would save me a lot of code, but I'm not sure how to implement it. I would like to set my variable "totalfactors" to the result of iterating through a dictionary and performing a product operation (capital Pi notation) over its values. I would think I could write this like:

totalfactors = for x in dictionary: dictionary[x]*totalfactors

I know I could write this out in a couple lines like:

totalfactors = 1
for pf in apfactors:
    totalfactors *= (apfactors[pf]+1)

Any help would be quite useful! Thanks

3 Answers


You could use the functional built-in reduce. It will repeatedly (or recursively) apply a function - here an anonymous lambda - to a list of values, building up an aggregate:

>>> reduce(lambda x, y: x * (y + 1), [1, 2, 3])
12

which would be equivalent to:

>>> (1 * (2 + 1)) * (3 + 1)
12

If you need another initial value, you can pass it as the last argument to reduce:

>>> reduce(lambda x, y: x * (y + 1), [1, 2, 3], 10)
240

>>> (((10 * (1 + 1)) * (2 + 1)) * (3 + 1))
240

As @DSM points out in the comments, you probably want:

>>> reduce(lambda x, y: x * (y + 1), [1, 2, 3], 1) # initializer is 1

which can be written more succinctly with the operator module and a generator expression as:

>>> from operator import mul
>>> reduce(mul, (v + 1 for v in d.values()))
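For completeness, here is a self-contained sketch of that variant, using a made-up dictionary named apfactors (borrowing the name from the question). On Python 2 reduce is a built-in, but importing it from functools works on both Python 2.6+ and Python 3:

from operator import mul
from functools import reduce  # built-in on Python 2, required import on Python 3

# Made-up example dictionary; its values are what get multiplied together.
apfactors = {'a': 1, 'b': 2, 'c': 3}

# Multiply (value + 1) over all values, starting from 1 so an empty dict gives 1.
totalfactors = reduce(mul, (v + 1 for v in apfactors.values()), 1)
print(totalfactors)  # 2 * 3 * 4 = 24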

I would have guessed that the generator variant is faster, but on Python 2.7 it seems it is not (at least for very small dictionaries):

In [10]: from operator import mul

In [11]: d = {'a' : 1, 'b' : 2, 'c' : 3}

In [12]: %timeit reduce(lambda x, y: x * (y + 1), d.values(), 1)
1000000 loops, best of 3: 1 us per loop

In [13]: %timeit reduce(mul, (v + 1 for v in d.values()))
1000000 loops, best of 3: 1.23 us per loop
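
If you want to reproduce the comparison outside IPython, a plain timeit harness along these lines should work (the dictionary contents and size here are arbitrary; absolute timings will vary by machine and Python version):

# Stand-alone timing sketch with the timeit module, comparing the two variants.
import timeit

setup = """
from operator import mul
from functools import reduce
d = {chr(97 + i): i for i in range(6)}  # vary the size to look for the crossover
"""

print(timeit.timeit("reduce(lambda x, y: x * (y + 1), d.values(), 1)",
                    setup=setup, number=100000))
print(timeit.timeit("reduce(mul, (v + 1 for v in d.values()))",
                    setup=setup, number=100000))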

5 Comments

Is it better to use a lambda or operator.mul?
Note that by my reading of the OP's second example, he'd actually prefer 2*3*4=24 for the answer. I'd probably do something like reduce(mul, (v+1 for v in d.values())).
With a small dictionary, the generator overhead swamps the speed improvement. Add a few more items and the generator variant catches up and becomes faster. For me the crossover was at 6 key/value pairs in the dictionary.
@torek, well, my machine is still running the benchmark with 1M entries. At 100000 both variants were on par (4.41 s vs 4.4 s).
Interesting. Perhaps by the time you reach 1M the arithmetic takes over. I only did 3, 4, 5, and 6 under timeit. In any case, speed is implementation-dependent, as usual. :-)

Sounds like you may want to look into doing a reduce(). For example:

>>> d={'a':1,'b':2,'c':3,'d':4}
>>> reduce(lambda x,y: x*y, d.values())
24
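
If you also need the (value + 1) adjustment from the question's explicit loop, the same idea applies; here is a quick, hypothetical sanity check that the reduce version matches the loop (apfactors is just an example dictionary):

# Check that reduce matches the question's explicit loop (example data only).
apfactors = {'a': 1, 'b': 2, 'c': 3, 'd': 4}

via_reduce = reduce(lambda x, y: x * (y + 1), apfactors.values(), 1)

totalfactors = 1
for pf in apfactors:
    totalfactors *= (apfactors[pf] + 1)

assert via_reduce == totalfactors  # both give 2 * 3 * 4 * 5 = 120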



I tried to think of a way to do this with a generator, but all I could come up with was:

import operator
total_product = reduce(operator.mul, dictionary.values(), 1)

I tested it with:

factorial = reduce(operator.mul, xrange(1,6), 1)

which gave 120 as a result.

Edit:

You probably already know this, but I thought about it later: if you have any non-numeric data in dictionary.values(), you'll get a TypeError, provided that you have at least one float. You're probably handling that when you insert into the dictionary, though.

I messed around a bit and came up with:

import numbers
import operator

foo = [1, 2.1, None, 4.5, 7, 'm']
print reduce(operator.mul, [num for num in foo if isinstance(num, numbers.Number)], 1)

Which gave me 66.15 and no exceptions. It's probably less efficient, but it is more efficient than an unhandled exception.
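
Applied back to the dictionary case from the question, the same filtering idea might look like this (apfactors is a made-up example that contains one non-numeric value):

# Hypothetical example: skip non-numeric dictionary values before multiplying.
import numbers
import operator

apfactors = {'a': 2, 'b': 3.5, 'c': None}
total_product = reduce(operator.mul,
                       [v for v in apfactors.values()
                        if isinstance(v, numbers.Number)], 1)
print total_product  # prints 7.0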

