After some testing I found out what you meant: the algorithm works just fine for simple images, but in more complex images some pixels don't get converted correctly (or at all), which makes the result look very bad. I'm going to dig into it further to see if I can fix it; otherwise I'll try the other method to see if it works better.
Edit: I just finished fixing it. The problem was float accuracy plus a few values that weren't being passed correctly; the new algorithm uses double and a lower EPSILON for maximum accuracy on all pixels. I'll upload it around midnight if I'm still awake, or tomorrow otherwise. I still have to try the other algorithm and compare against Photoshop's hue shifting, but at least it can now process big images without producing annoying pixel artifacts.
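For anyone curious, here's a minimal sketch (in C++, not the actual uploaded code; the names shiftHue, RGB, and the EPSILON value are just illustrative) of the kind of per-pixel hue shift where this matters: the grey-pixel and saturation tests compare against a small tolerance, and with float math the rounding error near those thresholds is exactly what leaves stray pixels unconverted.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Tighter tolerance than a typical float epsilon (~1e-6); all math in double.
static const double EPSILON = 1e-9;

struct RGB { uint8_t r, g, b; };

// Shift the hue of one pixel by `degrees`, converting RGB -> HSV -> RGB.
RGB shiftHue(RGB in, double degrees)
{
    double r = in.r / 255.0, g = in.g / 255.0, b = in.b / 255.0;

    // RGB -> HSV
    double maxC  = std::max({r, g, b});
    double minC  = std::min({r, g, b});
    double delta = maxC - minC;

    double h = 0.0;
    if (delta > EPSILON) {                       // grey pixels have no hue
        if (maxC == r)      h = std::fmod((g - b) / delta, 6.0);
        else if (maxC == g) h = (b - r) / delta + 2.0;
        else                h = (r - g) / delta + 4.0;
        h *= 60.0;
        if (h < 0.0) h += 360.0;
    }
    double s = (maxC > EPSILON) ? delta / maxC : 0.0;
    double v = maxC;

    // Apply the shift and wrap into [0, 360)
    h = std::fmod(h + degrees, 360.0);
    if (h < 0.0) h += 360.0;

    // HSV -> RGB
    double c = v * s;
    double x = c * (1.0 - std::fabs(std::fmod(h / 60.0, 2.0) - 1.0));
    double m = v - c;
    double rp, gp, bp;
    if      (h <  60.0) { rp = c; gp = x; bp = 0; }
    else if (h < 120.0) { rp = x; gp = c; bp = 0; }
    else if (h < 180.0) { rp = 0; gp = c; bp = x; }
    else if (h < 240.0) { rp = 0; gp = x; bp = c; }
    else if (h < 300.0) { rp = x; gp = 0; bp = c; }
    else                { rp = c; gp = 0; bp = x; }

    auto to8 = [&](double u) {
        return static_cast<uint8_t>(
            std::lround(std::clamp((u + m) * 255.0, 0.0, 255.0)));
    };
    return { to8(rp), to8(gp), to8(bp) };
}
```

Running something like this over every pixel is where the single-precision version fell apart on large images, so the actual fix was along these lines rather than a change to the hue math itself.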
2nd Edit: It's in the wiki now.