
Anthropic is the new king. This isn't even Claude 3.5 Opus and it's already super impressive. The speed is insane.

I asked it "Write an in-depth tutorial on async programming in Go" and it filled out 8 sections of a tutorial, with multiple examples per section, before GPT-4o even got to its second section — and GPT-4o never managed to finish the tutorial at all.
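For context, a minimal sketch of the kind of thing such a tutorial opens with — a goroutine feeding a channel, with the channel closed to signal completion (the names here are illustrative, not from Claude's actual output):

```go
package main

import "fmt"

// doubler reads ints from in, doubles them, and sends the results on
// the returned channel, closing it once in is drained.
func doubler(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for n := range in {
			out <- 2 * n
		}
	}()
	return out
}

func main() {
	in := make(chan int)
	go func() {
		for _, n := range []int{1, 2, 3} {
			in <- n
		}
		close(in) // closing the input lets the doubler goroutine exit
	}()
	for n := range doubler(in) {
		fmt.Println(n) // 2, 4, 6
	}
}
```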

I've been a fan of Anthropic models since Claude 3. Despite the benchmarks people always post showing GPT-4 in the lead, I consistently got better results from Claude 3 than GPT-4, especially in response quality and with larger contexts. GPT responses always feel computer-generated, while Claude 3 felt more humanlike.



One thing Anthropic did that I loved and think was very smart was building a prompt generator[1] into the developer console. The generator is tuned to produce prompts the way Claude prompts are supposed to be written, which improves responses. And you can use it to improve your user prompt as well, not just your system prompt, which makes responses even better.

You can see examples of the prompts it generates here[2]. It significantly improved my experience with LLMs; I haven't touched GPT4 in quite a while, and GPT4o didn't change that.

[1]: https://docs.anthropic.com/en/docs/build-with-claude/prompt-...

[2]: https://sr.ht/~jamesponddotco/llm-prompts/


Agree. They're like the quiet achievers. The new experimental sidebar 'artifacts' feature is super cool (it also keeps a convenient version history). I just fed it a JSON object and asked for a collapsible table app using Next and shadcn. The first code it produced worked perfectly, and the code doesn't get lost in the chat history like it does in ChatGPT. The response was super fast, too.

And the training data cutoff for 3.5 is April 2024.


Our internal blinded human evals for summarization/creative work have always preferred Claude 3.0 Opus by a huge margin, so we've been using it for months - GPT-4o didn't unseat it either.

GPT-4o was IMO better for coding (I'm still using the original GPT-4 with Cursor, but for long-form stuff GPT-4o seemed better), but with this new launch I'll definitely have to retest.

Pretty big news.


I agree. I've been really impressed with Anthropic. The issue for me comes when I want to take arbitrary user input and ask Claude questions about the user provided input. Claude is very, very, very ethical. Which is great, but it won't provide a response if the user tends to use a lot of curse words.


Could you do some masking of curse words — replacing them with sht, ?!, verybad, or similar? Something that Claude will accept. It might work if users are just generally foul-mouthed and not actively trying to jailbreak the model/system.
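A minimal sketch of that kind of pre-processing step — the blocklist here is a tiny illustrative placeholder, and in practice you'd want a real word list and smarter matching:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// curseWords is a placeholder blocklist for illustration only.
var curseWords = []string{"damn", "hell"}

// maskCurses replaces each blocklisted word (case-insensitively, on
// word boundaries) with a run of '?' of the same length, before the
// text is sent to the model.
func maskCurses(s string) string {
	for _, w := range curseWords {
		re := regexp.MustCompile(`(?i)\b` + regexp.QuoteMeta(w) + `\b`)
		s = re.ReplaceAllString(s, strings.Repeat("?", len(w)))
	}
	return s
}

func main() {
	fmt.Println(maskCurses("What the hell is this damn thing?"))
	// → What the ???? is this ???? thing?
}
```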


Can you let us know about the quality of the tutorial?


Anthropic is the king, but Jensen Huang is the emperor... :-)


I think Anthropic also uses Google TPUs.


I don't think that's the case. AWS is a very significant investor, and if you meet with their business development team they'll recommend deploying on Bedrock (which runs on Nvidia). There are also press releases like this[1] stating they use Nvidia.

[1] https://nvidianews.nvidia.com/news/aws-and-nvidia-collaborat... and https://press.aboutamazon.com/2023/3/aws-and-nvidia-collabor... search for "anthropic"


Anthropic uses both.

From Claude 3's technical report:

Like its predecessors, Claude 3 models employ various training methods, such as unsupervised learning and Constitutional AI [6]. These models were trained using hardware from Amazon Web Services (AWS) and Google Cloud Platform (GCP), with core frameworks including PyTorch [7], JAX [8], and Triton [9].

JAX's GPU support is practically non-existent; in practice it's only used on TPUs.



Never tried anything other than OpenAI's GPT family models and some toy LLMs, but GPT-4o sucks compared to GPT-4 (IMHO). I'll try Claude and compare.



