Hacker News

Gradient descent has ushered out many challengers who were sure they could do things gradient descent never could, even with bigger compute. I'm not worried. (My own belief is that gradient descent is so useful that any better optimization approach will simply evolve gradient descent as an intermediate phase for bootstrapping problem-specific learning. It's a tower of optimizers all the way down/up.)
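The optimizer under discussion fits in a few lines; here's a toy sketch (not from the comment) using a made-up quadratic objective, just to fix terms:

```python
# Plain gradient descent minimizing f(x) = (x - 3)^2.
# The gradient is f'(x) = 2 * (x - 3); the objective and step size
# are invented for illustration.
def gradient_descent(grad, x0, lr=0.1, steps=200):
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)  # step downhill along the gradient
    return x

x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
# x_min converges toward the minimizer x = 3
```

Everything else (Adam, momentum, second-order tricks) is a variation on this loop, which is part of why it has been so hard to displace.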

You can't call that a 'CLIP bug', because using CLIP for gradient ascent on a diffusion model is not remotely what it was trained or intended to do, and it's not much like your use case of detecting real-world objects. It's basically adversarial pixel-wise hacking, which is not what real-world pink trucks look like. Also, that post was 7 months ago, and the AI art community advances really fast. (However bad you think those samples are, I assure you the first BigSleep samples, back in February 2021 when CLIP had just come out, would have been far worse.) 'Unicorn cake' may not have worked 7 months ago, but maybe it does now... Check out the Midjourney samples all over AI art Twitter from the past month.
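Why pixel-space gradient ascent produces adversarial noise rather than real-looking objects is easy to see in a minimal numpy sketch. Here `w` and `score` are invented stand-ins for CLIP's text embedding and image-text similarity, not CLIP itself; the point is only that ascending the score in raw pixel space has no constraint pushing the result toward natural images:

```python
import numpy as np

# Toy stand-in for CLIP-guided ascent: a fixed random direction `w`
# plays the role of a text embedding, and `score` is a made-up
# dot-product "similarity" (not CLIP's actual objective).
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))

def score(img):
    return float(np.sum(w * img))   # linear similarity surrogate

def grad(img):
    return w                        # d(score)/d(img) for a linear scorer

img = np.zeros((8, 8))              # start from a blank "image"
for _ in range(100):
    img += 0.1 * grad(img)          # ascend the score in pixel space

final = score(img)                  # score climbs without bound;
                                    # img is just scaled noise, not a truck
```

The optimizer happily maxes out the scorer with high-frequency noise patterns, which is exactly the adversarial-hacking regime, and why such failures say little about CLIP's ability to recognize actual pink trucks in photos.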


