Posted by: Arthur Blake | 2008-03-02

CompressorRater 0.9.9 Updates

I’ve finally managed to find some time to work on the CompressorRater. Updates online today:

  • The Packer compressor is now enabled. It’s still quite slow (because the Rhino JavaScript regular expression engine is slow) but I have a plan for how to make it really fast. Hopefully that will get done soon!
  • Updated Packer to 3.1 alpha 3 version.
  • Updated YUI compressor to version 2.3.5.
  • Slightly better look and feel.

Hope you enjoy! Feedback is welcome and appreciated.


Responses

  1. Great work Arthur.

    I was thinking of making something similar — but there’s no point anymore!

  2. Hey, thanks! What would you have done differently, or what ideas do you have to make it better?

  3. Hello & congratz Mr. Blake, this service is great.

    I was wondering what does “Run time” stands for in the results section. Does I mean “time to compress the code” or “time to run the new code in opposition to the original code” ?

    Thank you 🙂

  4. Thank you. In this case, “Run Time” means: “how long it took to compress the code”.

  5. That’s what I thought, thank you for your quick answer 🙂

  6. Thanks Arthur, CompressorRater really _is_ great!

  7. I wouldn’t have done much differently (although I must say I prefer Tahoma over Arial :-))

    BTW, which port of Packer do you use?

    P.S. You could hash the submitted code and compare it to a list of hashes of commonly compressed files/scripts, so that they don’t have to be compressed again.

  8. I use Dean Edwards’ Packer from his SVN site on Google Code (it’s an alpha pre-release). There is a slightly newer release on there, but I haven’t had time to update the CompressorRater to it yet.

    I’ve thought about doing something like the hash idea, but the versions of the compressors are updated just often enough to make that a big headache…

  9. You could delete the cached results when adding new compressor versions. Or fold the compressor version into the cache key, as in the sketch below, so that stale entries simply stop matching after an upgrade.
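
    A rough illustration of that idea (all names here are hypothetical, not CompressorRater’s actual code): key the cache on the compressor name, its version, and a hash of the submitted script.

        import java.security.MessageDigest;
        import java.util.HashMap;
        import java.util.Map;

        public class ResultCache {
            private final Map<String, String> cache = new HashMap<String, String>();

            // Fold the compressor name and version into the key, so bumping a
            // compressor version automatically misses all of its old entries.
            static String key(String compressor, String version, String source) throws Exception {
                MessageDigest md = MessageDigest.getInstance("SHA-1");
                byte[] digest = md.digest(source.getBytes("UTF-8"));
                StringBuilder key = new StringBuilder(compressor).append('/').append(version).append('/');
                for (byte b : digest) key.append(String.format("%02x", b));
                return key.toString();
            }

            public String get(String compressor, String version, String source) throws Exception {
                return cache.get(key(compressor, version, source));
            }

            public void put(String compressor, String version, String source, String result) throws Exception {
                cache.put(key(compressor, version, source), result);
            }
        }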

  10. Stumbled upon this a couple of weeks ago. I think it’s great. Thanks.

  11. I’ve got a suggestion: you could also add the JS compressor at jsutility.pjoneil.net/

    The page claims that compressor has a higher compression rate than other JS code shrinkers. I checked, and it seems to be true.

  12. Perfect tool! It at least brought Packer to my attention 🙂

    Do you store the overall statistics somewhere so that you can see a top 3 or something?

    Or is the outcome per file already obvious at all times?
    gzipped:
    1 Packer
    2 YUI
    3 JSMin

  13. I don’t keep the stats around. But yes, in general that’s what I’m seeing:

    gzip of course makes the largest difference.

    I have noticed that Packer does give a tiny bit better compression in general, but YUI seems to produce cleaner code that has fewer issues in more situations. YUI also compresses a lot faster.

    One caveat is that I’m using an alpha version of Packer, so it may be producing cleaner, better code by the time it gets out of alpha.

  14. Hi,
    I would like to know if you plan to update this tool (YUI Compressor 2.4.1)?
    Where does Packer 3.1 come from?
    And which JSMin version do you use (the PHP one?)?
    (And do you plan to update the compression results with new versions of the JS libraries?)

  15. Yes, I plan to when I get time.

    Packer came from the base2 SVN repository. It’s a bit out of date too.

    JSMin is the Java version.

    And yes– I always update the compression results when I update the compressors.

  16. I understand that “Run Time” in the results shows the length of time to COMPRESS the input JavaScript. But it would seem (especially with Packer) that DECOMPRESSION time (that is, the real ‘eval’ run time) would also be an incredibly useful comparison statistic. For instance, Packer gets much smaller results, but the eval it must use is costly at actual run time, whereas YUI seems to come fairly close to it without any run-time eval penalty.

    I was thinking perhaps you could have your tool include each version of the compressed code (maybe each in a separate hidden iframe) and calculate the time it takes each to run/evaluate. This would be a HUGELY useful addition to this tool, I think.

    P.S. All along, I thought the “Run Time” column was exactly that. It was not until just the other day that my misconception was corrected by a coworker!

  17. Your coworker is correct: the Run Time column is how long the compression itself takes. It may not be too accurate, though, and must be taken with a big grain of salt, because it’s only the run time on this one particular server (which can vary enormously depending on how busy the server is with other requests).

    I don’t think your suggested change would be very useful, because the number you are talking about is specific to Packer, and only when using base62 encoding. Otherwise this number will always be 0, because compressed or minified code from all the other compressors (including Packer without the base62 option) runs in its compressed state as is.

    I think that using base62 encoding with Packer is in general not a good idea. It doesn’t really add any additional compression once you are using gzip (sometimes it actually makes things worse; see the size check sketched below), and there is a decompression time penalty, as you mentioned.

    The only other potential benefit of Packer with base62 (that I can think of) is that it greatly increases the obfuscation of your code to casual observers. But that’s pretty trivial to get around for the determined, and it’s not worth the runtime penalty of decompression (the other compressors obfuscate a good bit too by crunching variable names, removing white space, etc.)
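
    (A quick way to check the gzip claim yourself; this is only a sketch, not part of CompressorRater. Pass it the differently compressed versions of the same file:)

        import java.io.ByteArrayOutputStream;
        import java.io.FileInputStream;
        import java.io.IOException;
        import java.util.zip.GZIPOutputStream;

        public class GzipSize {
            // Compress a byte array in memory and report the gzipped length.
            static int gzippedSize(byte[] data) throws IOException {
                ByteArrayOutputStream bos = new ByteArrayOutputStream();
                GZIPOutputStream gz = new GZIPOutputStream(bos);
                gz.write(data);
                gz.close();
                return bos.size();
            }

            public static void main(String[] args) throws IOException {
                // e.g. java GzipSize yui-min.js packer-base62.js
                for (String path : args) {
                    FileInputStream in = new FileInputStream(path);
                    ByteArrayOutputStream raw = new ByteArrayOutputStream();
                    byte[] buf = new byte[4096];
                    int n;
                    while ((n = in.read(buf)) != -1) raw.write(buf, 0, n);
                    in.close();
                    byte[] data = raw.toByteArray();
                    System.out.println(path + ": raw " + data.length
                            + " bytes, gzipped " + gzippedSize(data) + " bytes");
                }
            }
        }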

  18. Wouldn’t they all be some non-trivial (>0) number for actual processing/run time? It takes some physical slice of time to evaluate a set of code into the global DOM and make it ready to execute, even a single line of script. The bigger the actual browser-side processed text, the longer, I bet, it generally takes the browser to process it and make it ready in the DOM of the page.

    I haven’t tested it myself, but it seems like different code (compressed, minified, Packed, etc.) might have different results for that (mostly based on size, but possibly also on complexity or other factors like nested declarations?), and certainly there would be cross-browser differences too. So maybe even cross-browser testing of this metric would be helpful.

    For instance, if my code takes considerably more time to “process” in IE with one compression method versus another, then I might choose a slightly larger source (less compression) as the tradeoff for having better run times in all the major browsers.

    And then of course these timings could factor in against Packer’s results, since the browser-side code IS much smaller for the page to deal with (and so presumably quicker to initially process), but then the tradeoff is, as mentioned, the additional eval time. I imagine the smaller the “Packed” code, the bigger the tradeoff here!

    It just seemed to me that it might be useful for authors to compare how quickly the page can process their compressed code and be ready to execute, against Packer’s smaller browser-side text with the additional eval time factored in. There’s a chance these timing tradeoffs could be helpful, non-trivial pieces of information for making informed decisions when tuning production code delivery.

  19. Yes, of course it takes some time to load and eval the initial JavaScript, and yes, that time will vary slightly depending on how you’ve compressed the code (though I would argue the difference probably IS a trivial amount of time for most compressors). But the primary function of CompressorRater is to rate the size compression of the different compressors; I threw in the compressor run time because it was easy. If you want to measure load-and-eval cost yourself, a rough harness is sketched below.
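
    (A server-side approximation using Rhino, the same engine CompressorRater runs on; this is only a sketch, the file names are placeholders, and real browsers will give different numbers:)

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;

        import org.mozilla.javascript.Context;
        import org.mozilla.javascript.Scriptable;

        public class EvalTimer {
            static String read(String path) throws IOException {
                StringBuilder sb = new StringBuilder();
                BufferedReader r = new BufferedReader(new FileReader(path));
                for (String line; (line = r.readLine()) != null; ) sb.append(line).append('\n');
                r.close();
                return sb.toString();
            }

            public static void main(String[] args) throws IOException {
                Context cx = Context.enter();
                try {
                    // e.g. java EvalTimer packed.js yui-min.js jsmin.js
                    // Note: plain scripts only; browser libraries that touch
                    // window/document would need DOM stubs to evaluate here.
                    for (String path : args) {
                        String src = read(path);
                        Scriptable scope = cx.initStandardObjects(); // fresh scope per file
                        long start = System.currentTimeMillis();
                        cx.evaluateString(scope, src, path, 1, null);
                        System.out.println(path + ": " + (System.currentTimeMillis() - start) + " ms");
                    }
                } finally {
                    Context.exit();
                }
            }
        }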

  20. A simple online JavaScript minifier can be found at
    http://netspurt.com
    It is based on Dojo’s ShrinkSafe.

  21. Hello!

    First of all, I’m very impressed by your MASTERPIECE of a CompressorRater!

    But when I use the YUI Compressor, the running time is about 0.6 sec. That’s a very long time compared with your site’s 0.032 sec.

    So, what is your WAY of estimating the running time?

    I simply put System.currentTimeMillis() calls in the constructor of JavaScriptCompressor.

  22. The compression run time can vary by quite a bit because of many factors: the type of server you run it on, the JVM version, how much load the machine is under, etc.

    The run time is shown really only to give an estimate of the order-of-magnitude run-time differences between the different compressors. Comparing that run time to the run time on a different computer is not very useful (unless you were benchmarking the speed of the machines themselves! And even then, you would need to control the environment better…)

    Also note that because CompressorRater is a web application, all the classes that make it up will be pre-loaded and well optimized by the JIT, which is another reason it may appear faster in the web app. A standalone run in a fresh JVM pays for class loading too; the timing sketch below shows what that kind of measurement looks like.
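
    (A minimal way to time a single compression pass, assuming the YUI Compressor 2.x API; the input file name is a placeholder. The first run in a fresh JVM includes class-loading and JIT warm-up:)

        import java.io.FileReader;
        import java.io.StringWriter;

        import com.yahoo.platform.yui.compressor.JavaScriptCompressor;
        import org.mozilla.javascript.ErrorReporter;
        import org.mozilla.javascript.EvaluatorException;

        public class TimeYui {
            public static void main(String[] args) throws Exception {
                long start = System.currentTimeMillis();

                // Parsing happens in the constructor, so include it in the timing.
                JavaScriptCompressor compressor = new JavaScriptCompressor(
                        new FileReader("input.js"), new ErrorReporter() {
                    public void warning(String msg, String src, int line, String lineSrc, int col) { }
                    public void error(String msg, String src, int line, String lineSrc, int col) { }
                    public EvaluatorException runtimeError(String msg, String src, int line, String lineSrc, int col) {
                        return new EvaluatorException(msg);
                    }
                });

                StringWriter out = new StringWriter();
                // -1 = no forced line breaks; munge variable names, not verbose
                compressor.compress(out, -1, true, false, false, false);

                System.out.println("compressed to " + out.toString().length()
                        + " chars in " + (System.currentTimeMillis() - start) + " ms");
            }
        }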

  23. Thanks for the well-done CompressorRater!

    I would love to see jQuery 1.3.x integrated in the test.

  24. You should include the obfuscation (shortening variable names), compaction (removing white space), and compression from jsutility.pjoneil.net in your analysis.

    It provides better results than any of the ones listed on your site, and it will work with any of the libraries in your test suite.

    Pat

