rashidae's comments

Interesting. Have you tested other LLMs or CLIs as a comparison? Curious which one you’re finding more reliable than Opus 4.5 through Claude Code.

Codex is quite a bit better in terms of code quality and usability. My only frustration is that it's a lot less interactive than Claude. On the plus side, I can also trust it to go off and implement a deep complicated feature without a lot of input from me.

> As a mirror to real-world agent design: the limiting factor for general-purpose agents is the legibility of their environments, and the strength of their interfaces. For this reason, we prefer to think of agents as automating diligence, rather than intelligence, for operational challenges.

My site! rashidazarang.com

Trust yourself to be able to handle agents. Stop trying to be too safe, you’re paying the price with ignorance. Just use Claude Code with Opus 4.5.


Last week, I ran an experiment. Instead of building incrementally, I described exactly what I wanted to ChatGPT 5.1 Pro, had it draft a prompt, and then asked Claude Opus 4.5 to deliver it in one shot.

The result changed how I think about speed when working with AI.




People are starting to use AI not just to think, but to feel. When something hurts or feels uncertain, it is easy to offload that discomfort into a system that instantly turns it into clarity: an explanation, a plan, a message, a neatly packaged insight. It works. It feels good. But there is a hidden cost.

If we outsource emotional uncertainty too quickly, we skip the part where we actually feel it. The system digests the discomfort before we do. Over time this can make us excellent at understanding our lives but worse at sitting with the parts of experience that have no immediate answers. We get analysis instead of depth, interpretation instead of emotional endurance.

AI is powerful as a thinking partner, but it becomes risky when it becomes an emotional bypass. Some forms of growth only happen in the silence before clarity. If we replace those moments with instant interpretation, we trade long term resilience for short term relief.

This is not an argument against using AI. It is simply a reminder that some of the most important human capacities develop in the space where no external system can feel on our behalf.


When OpenAI launched Canvas yesterday, many called it a step backwards.

But conversation and spatial interfaces aren’t competing; they’re complementary.

Chat captures intent. Canvas structures complexity.

The real question isn’t which one wins, it’s whether we know when to use each.


Location: Mexico

Remote: Yes

Willing to relocate: Yes

Technologies: Python, TypeScript, React, Supabase, Postgres, Docker, Cloudflare Workers, MCP, WebRTC, agentic AI orchestration

Résumé/CV: https://rashidazarang.com

GitHub: https://github.com/rashidazarang

---

I am Rashid Azarang, a systems architect and builder focused on making intelligence usable. I design and implement cognitive systems that enable human-like interaction through AI agents.

Some recent work:

- Supply Chain Risk Management Platform — turned fragmented data into operational clarity.

- From Sync Bridge to Data Warehouse — re-architected brittle integrations into a coherent warehouse.

- Open Source Twilio SMS Dashboard — practical tooling others have since adopted.

- AWS CloudWatch Interface — lightweight logs explorer with MCP adapter.

I also maintain ChatGPT Exporter (80+ stars) and multiple MCP servers/agents. Previously, I helped scale a COVID-19 testing platform from ~1k to 100k+ tests per month, serving over a million people.

I’m looking for early-stage engineering roles where frontend speed meets backend reliability, especially in real-time systems or AI.


Agents act. Infrastructure enables. A Navigation Layer enforces constitutional invariants and improves how we improve... Doug Engelbart’s bootstrapping in code.

