
Why? I've built some massive analytic data flows in Python with turbodbc + pandas which are basically C++ fast. It uses more memory, which supports your point, but on the flip side we're talking $5-10 in extra cost a year. It could frankly be $20k a year and still be cheaper than staffing more people like me to maintain these things, rather than having a couple of us and then letting the BI people use the tools we provide for them. Similarly, when we do embedded work, MicroPython is just so much easier for our engineering staff to deal with.

The interoperability between C and Python makes it great, and you need to know these numbers on Python to know when to actually build something in C. With Zig getting really great interoperability, things are looking better than ever.

Not that you're wrong as such. I wouldn't use Python to run an airplane, but I really don't see why you wouldn't care about the resources just because you're working with an interpreted or GC language.



> you need to know these numbers on Python to know when to actually build something in C

People usually approach this the other way: use something like pandas or numpy from the beginning if it solves your problem. Do not write matrix multiplications or joins in pure Python at all.
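A toy illustration of why: the pure-Python version below runs interpreted bytecode for every element, while numpy dispatches the same operation to compiled BLAS. The helper name `py_matmul` is just for this sketch.

    import numpy as np

    # Pure-Python matrix multiply: interpreter overhead on every
    # multiply-add, so it is orders of magnitude slower at scale.
    def py_matmul(a, b):
        n, k, m = len(a), len(b), len(b[0])
        return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(m)]
                for i in range(n)]

    a = [[1.0, 2.0], [3.0, 4.0]]
    b = [[5.0, 6.0], [7.0, 8.0]]

    # Same result from numpy's compiled path:
    assert np.allclose(py_matmul(a, b), np.array(a) @ np.array(b))
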

If there is no library that solves your problem, that's a strong indication you should avoid Python. Unless you are willing to spend 5 man-years writing a C or C++ library with good Python interop.


> People usually approach this the other way, use something like pandas or numpy from the beginning if it solves your problem.

That is exactly how we approach it, though. We didn't start out with turbodbc + pandas; it started as an SQLAlchemy and pandas service. Then, when it was too slow, I got involved and found and dealt with the bottlenecks. I'm not sure how you would find and fix such things without knowing the efficiency, or lack thereof, of different parts of Python. Also, as you'll notice, we didn't write our own stuff; we simply used more efficient Python libraries.


People generally aren’t rolling their own matmuls or joins or whatever in production code. There are tons of tools like Numba, Jax, Triton, etc that you can use to write very fast code for new, novel, and unsolved problems. The idea that “if you need fast code, don’t write Python” has been totally obsolete for over a decade.


Yes, that's what I said.

If you are writing performance sensitive code that is not covered by a popular Python library, don't do it unless you are a megacorp that can put a team to write and maintain a library.


It isn’t what you said. If you want, you can write your own matmul in Numba and it will be roughly as fast as similar C code. You shouldn’t, of course, for the same reason handrolling your own matmuls in C is stupid.

Many problems can be solved performantly in pure Python, especially via the growing set of tools like the JIT libraries I cited. Even more will be solvable when things like free-threaded Python land. It will be a minority of problems that can't be, if it isn't already.
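For what it's worth, a minimal sketch of the Numba approach mentioned above: `@njit` compiles the plain-Python triple loop to machine code on first call, so it runs at roughly C-loop speed rather than interpreter speed. (You'd still use `np.dot` in practice; this only shows that the handrolled version is possible.)

    import numpy as np
    from numba import njit

    @njit
    def matmul(a, b):
        # Plain nested loops, but compiled by LLVM via Numba
        n, k = a.shape
        _, m = b.shape
        out = np.zeros((n, m))
        for i in range(n):
            for j in range(m):
                s = 0.0
                for p in range(k):
                    s += a[i, p] * b[p, j]
                out[i, j] = s
        return out

    a = np.random.rand(64, 64)
    b = np.random.rand(64, 64)
    assert np.allclose(matmul(a, b), a @ b)
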


From the complete opposite side, I've built some tiny bits of near-irrelevant code where Python has been unacceptable, e.g. in shell startup / in bash's PROMPT_COMMAND, etc. It ends up having a very painfully obvious startup time, even when the code is close to the equivalent of Hello World:

    time python -I -c 'print("Hello World")'
    real    0m0.014s
    time bash --noprofile -c 'echo "Hello World"'
    real    0m0.001s


Why exactly do you need 1ms instead of 14ms startup time in a shell startup? The difference is barely perceptible.

Most of the startup time is spent searching the filesystem for thousands of packages.
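You can see roughly how much of the startup is the `site` module scanning package paths by comparing against `-S`, which skips that step (numbers are machine-dependent; this is just illustrative):

    # Full startup, including `import site` and path scanning:
    time python3 -I -c 'pass'
    # Skip the site module entirely:
    time python3 -I -S -c 'pass'
    # -X importtime prints a per-import timing breakdown:
    python3 -X importtime -c 'pass'
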


> What exactly do you need 1ms instead of 14ms startup time in a shell startup?

I think it's as they said: when dynamically building a shell input prompt, it starts to become very noticeable if you have three or more of these and you use the terminal a lot.


Ah, I only noticed the "shell startup" bit.

Yes, after 2-3 I agree you'd start to notice if you were really fast. I suppose at that point I'd just have Gemini rewrite the prompt-building commands in Rust (it's quite good at that) or merge all the prompt-building commands into a single one (to amortize the startup cost).


https://starship.rs/ perhaps? I should probably start using it again honestly.


It feels good to have all that information at your fingertips, but most of the time the default config is way too noisy.



