Let's suppose I have a lot of (hundreds of) big Python dictionaries; each one is about 2 MB when pickled. I want to draw a chart using data from these dictionaries, so I have to load all of them. What is the most efficient (first in speed, second in memory) way to store my data? Maybe I should use another caching tool? This is how I am solving the task now:
- Pickle each of my dictionaries: just `pickle.dumps(d)`.
- Store the pickled string in Redis: `redis.set(key, pickled_dict)` (see the sketch after this list).
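A minimal sketch of that write path, assuming redis-py with a local server and a hypothetical `chart:<i>` key scheme (`pickle.HIGHEST_PROTOCOL` produces a binary pickle that is smaller and faster to load than the default protocol):

    import pickle
    import redis

    r = redis.Redis()  # assumed local Redis on the default port

    chart_data = {"x": [1, 2, 3], "y": [10, 20, 30]}  # stand-in for one big dict

    # Binary protocol pickles are smaller and faster to load than protocol 0.
    blob = pickle.dumps(chart_data, protocol=pickle.HIGHEST_PROTOCOL)
    r.set("chart:1", blob)  # hypothetical key name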
When a user needs a chart, I create an array and fill it with the unpickled data from Redis, like this:
    array = []
    for i in range(iteration_count):
        # one GET and one unpickle per dictionary
        array.append(pickle.loads(redis.get(keys[i])))
Now I have two problems. The memory one, because the array is very big, is not that important and is easy to solve. The main problem is speed: many objects take more than 0.3 seconds to unpickle, and some take more than 1 second. Fetching each string from Redis is also rather expensive (more than 0.01 s per GET). When there are lots of objects, my user has to wait several seconds.
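For reference, here is a minimal sketch of batching the reads, assuming redis-py and the same hypothetical `chart:<i>` key scheme as above: a single `MGET` round trip replaces one `GET` per key, though the unpickling cost itself remains.

    import pickle
    import redis

    r = redis.Redis()  # assumed local Redis on the default port

    iteration_count = 100  # placeholder for the real number of dictionaries
    keys = ["chart:%d" % i for i in range(iteration_count)]  # hypothetical keys

    # One MGET round trip instead of iteration_count separate GETs.
    blobs = r.mget(keys)
    array = [pickle.loads(blob) for blob in blobs if blob is not None]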