Add fancy caching for large blocks of data

Review Request #591 — Created Oct. 14, 2008 and submitted

Information

Repository: Navi (deprecated)
Branch: trunk

memcached will only cache values smaller than 1MB. This is kind of crappy, since some of our files can be larger than 1MB, and those are the ones that take the longest to fetch, patch, diff, and render.

This change adds a large_data keyword to cache_memoize, which does some fancy pickling, compression, and splitting to stuff these large blocks into the cache.
Testing done: Looked at my giant diff and saw that things were getting cached correctly.
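
For context, here's a minimal sketch of what the storage half of this approach could look like, assuming Django's cache API. The function name cache_large_data, the "<key>-<i>" chunk-key scheme, and the exact chunk size are illustrative assumptions, not the actual djblets code:

    import pickle
    import zlib

    from django.core.cache import cache

    # Assumed chunk size: memcached's 1MB item limit, minus some slack for
    # pickling and list overhead (the "-1024" discussed in the review below).
    CACHE_CHUNK_SIZE = 1024 * 1024 - 1024


    def cache_large_data(key, data):
        """Pickle, compress, and split a large value into cache-sized chunks.

        Each chunk is stored under "<key>-<i>", and the chunk count is
        stored under the main key so a reader knows how many pieces to fetch.
        """
        blob = zlib.compress(pickle.dumps(data))
        chunks = [blob[i:i + CACHE_CHUNK_SIZE]
                  for i in range(0, len(blob), CACHE_CHUNK_SIZE)]

        cache.set(key, len(chunks))

        for i, chunk in enumerate(chunks):
            cache.set('%s-%d' % (key, i), chunk)
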
chipx86
  1. trunk/djblets/djblets/util/misc.py (Diff revision 1)

    Can you add a small comment documenting the resulting size?

    1. Yup. I realized that with the overhead of the list and pickling, we need to chunk smaller, so I added -1024.
  2. trunk/djblets/djblets/util/misc.py (Diff revision 1)

    Can you space this out a little and put blank lines before/after each block?

    Also, we may want to use the multi-key get capabilities to get all the data in one go, since we know all the keys up-front (see the sketch after this review).
  3. trunk/djblets/djblets/util/misc.py (Diff revision 1)

    Can combine these.
  4. trunk/djblets/djblets/util/misc.py (Diff revision 1)

    Can you add a comment describing the format of the data we're storing and how we piece it all together?
  5. trunk/djblets/djblets/util/misc.py (Diff revision 1)

    [data, ] should just be [data].
  6. trunk/djblets/djblets/util/misc.py (Diff revision 1)

    No need for pass.
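
Following up on the multi-key suggestion in comment 2, here's a hedged sketch of the retrieval half using Django's cache.get_many(), which fetches all the chunks in a single round trip since the keys are known up-front. The names and key scheme match the assumptions in the sketch above:

    import pickle
    import zlib

    from django.core.cache import cache


    def fetch_large_data(key):
        """Reassemble a value stored by cache_large_data(), or None on a miss.

        The main key holds the chunk count; each chunk lives at "<key>-<i>".
        get_many() retrieves every chunk with one cache request.
        """
        chunk_count = cache.get(key)

        if chunk_count is None:
            return None

        keys = ['%s-%d' % (key, i) for i in range(chunk_count)]
        chunks = cache.get_many(keys)

        # If any chunk was evicted, treat the whole entry as a cache miss.
        if len(chunks) != chunk_count:
            return None

        blob = b''.join(chunks[k] for k in keys)

        return pickle.loads(zlib.decompress(blob))
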
chipx86
  1. Looks good! Awesome change.