WasmGC is not really what you think it is. Think of it as a full-on replacement for the linear memory model, not as a way to add GC to it.
It's exceptionally difficult to port a language runtime over to WasmGC -- it doesn't even offer finalizers, so Python's __del__ can't be implemented correctly (even ignoring the compatibility issues that existing __del__ semantics in Python already cause). It doesn't support interior pointers, so you can't have a pointer into the middle of an array.
> Recently, the most interesting rift in the Postgres vs OLAP space is [Hydra](https://www.hydra.so), an open-source, column-oriented distribution of Postgres that was very recently launched (after our migration to ClickHouse). Had Hydra been available during our decision-making time period, we might’ve made a different choice.
There will likely be a good OLAP solution (possibly implemented as an extension) in Postgres in the next year or so. A few companies are working on it (Hydra, Parade[0], Tembo, etc.).
The biggest performance issue Clojure has, which isn't mentioned in the article and is fundamentally unsolvable, is that it misses the CPU cache - a lot.
The data structure that drives the immutable variables, the Hash-Array Mapped Trie, is efficient in terms of time and space complexity, but it's incredibly CPU cache-unfriendly because by nature it's fragmented -- it's not contiguous in RAM. Any operation on a HAMT will be steeped in cache misses, which slows you down by a factor of hundreds to thousands. The longer the trie has been around, the more fragmented it is likely to become. Looping over a HAMT is slower than looping over an array by two or three orders of magnitude.
I don't think there is really a solution to that. It's inherent. Clojure will always be slow, because it's not cache friendly. The architecture of your computer means you physically can't speed it up without sacrificing immutability.
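To make the pointer chasing concrete, here's a toy sketch of a 32-way trie lookup (a deliberate simplification of Clojure's real HAMT; the names and structure are mine, for illustration only). Each level of the walk dereferences another heap object, and those objects can live anywhere in RAM -- that's where the cache misses come from. An array lookup, by contrast, is a single base-plus-offset read.

```python
BITS = 5                     # 32-way branching, as in Clojure's tries
MASK = (1 << BITS) - 1

def trie_lookup(root, index, shift):
    """Walk one node per level; each hop is a dependent pointer load."""
    node = root
    while shift > 0:
        node = node[(index >> shift) & MASK]   # likely cache miss per level
        shift -= BITS
    return node[index & MASK]

# Two-level trie holding 1024 values: 32 child arrays of 32 values each.
trie = [[i * 32 + j for j in range(32)] for i in range(32)]
print(trie_lookup(trie, 100, 5))   # -> 100
```

In a real HAMT the child arrays are allocated at different times and scattered across the heap, so every hop in that loop is a load the prefetcher can't predict.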
Tenet Bank Ltd., an engineering-first bank licensed by the Cayman Islands Monetary Authority, features proprietary state-of-the-art software, technology-sector expertise, white-glove service, and a premier banking location.
We are seeking to grow our dev team and have four (4) open positions: Software Developer, Back-End (2 mid-level and 2 experienced). .NET and C# experience required; Golang a plus.
People STILL confuse the construction of software with the construction of buildings. We can estimate fairly accurately how long it will take to build a building once we have reasonable plans for it. I can pretty accurately say that it will take about 4 minutes to build the software once I have the plans to build it. The compiler pretty much automates the whole job.
Writing software is NOT construction. Much of it isn't even design. Most of it is gathering detailed requirements and writing them down in unambiguous form (code).
Asking how long it's going to take to write software is like asking a building contractor how long it will take to design every single detail of a city block, including gathering all the requirements.
Also, the requirements for software are much more detailed than for a building. 100,000 lines of code represent 100,000 decisions. I bet not many buildings involve 100,000 decisions. And 10,000 is tiny for a software project.
Location: Dallas, TX
Remote: Yes
Willing to relocate: No
Technologies: F#, C#, Javascript, Python, AWS, PostgreSQL, open to more
LinkedIn: https://www.linkedin.com/in/harvey-frye-codes/
Email: harvey@frye.codes
I'm a passionate developer and team leader. While I'm open to working with many technologies, I'm primarily interested in an F#-focused role or one using another functional programming language.
I'm changing careers from being a music teacher to becoming a Backend Developer, and I'm hoping to find a full-time remote job. I've spent most of this year heads down in a book and my Macbook. I've learned dozens of technologies, and it's been quite the whirlwind of knowledge gaining. I've worked through 7 courses, 3 books, and numerous articles and tutorials.
I won't go into detail about everything on my [resume](https://ashley.boehs.com), but suffice it to say it was a bit of frontend, DevOps, and a LOT of backend. What I've learned could be used well, especially around Rails and Stimulus.
Being poor in general, during your childhood, scars you for life.
It makes you penny-wise and pound-foolish and averse to taking risks in your career and investments, and it can destroy your confidence in yourself, while you're always having to look over your shoulder, monetarily speaking.
These habits stick: even after you remove poverty from your life, you still feel the anxiety and the triggers in your head.
> I do wonder why the author is still on Upwork, their rating & exposure on there must be worth more to them than working several weeks for free
Why should they have to walk away? Why even suggest such a strategy?
UpWork already has a 10% market share, so if there were 1,000 employers, 100 of them would be the same company.
I'm not sure if you're aware of all the gigification and precaritization of digital labor going on? It's emerging as a frontal assault, a full-force attack on labor rights.
"For all of its forward-looking ‘innovation’, there’s something suspiciously feudal about Silicon Valley. Tech royalty compete for dominance in platform wars [...] They hoard resources while showering key personnel with lavish gifts to ensure loyalty and peddling a compelling story about their right to rule. Meanwhile, the remaining workers, dependent on ‘gigs’ for their livelihood, are made to battle with each other for scraps."
This post is completely and totally wrong. At least you got to ruin my day, I hope that's a consolation prize for you.
There is NO meaningful connection between the completion vs polling futures model and the epoll vs io-uring IO models. comex's comments regarding this fact are mostly accurate. The polling model that Rust chose is the only approach that has been able to achieve single allocation state machines in Rust. It was 100% the right choice.
After designing async/await, I went on to investigate io-uring and how it would be integrated into Rust's system. I have a whole blog series about it on my website: https://without.boats/tags/io-uring/. I assure you, the problems it presents are not related to Rust's polling model AT ALL. They arise from the limits of Rust's borrow system to describe dynamic loans across the syscall boundary (i.e. that it cannot describe this). A completion model would not have made it possible to pass a lifetime-bound reference into the kernel and guarantee no aliasing. But all of them have fine solutions building on work that already exists.
Pin is not a hack any more than Box is. It is the only way to fit the desired ownership expression into the language that already exists, squaring these requirements with other desirable primitives we had already committed to: shared-ownership pointers, mem::swap, etc. It is simply FUD - frankly, a lie - to say that it will block "noalias"; following that link shows Niko and Ralf having a fruitful discussion about how to incorporate self-referential types into our aliasing model. We were aware of this wrinkle before we stabilized Pin, I had conversations with Ralf about it; it's just that now that we want to support self-referential types in some cases, we need to do more work to incorporate it into our memory model. None of this is unusual.
And none of this was rushed. Ignoring the long prehistory, a period of 3 and a half years stands between the development of futures 0.1 and the async/await release. The feature went through a grueling public design process that burned out everyone involved, including me. It's not finished yet, but we have an MVP that, contrary to this blog post, does work just fine, in production, at a great many companies you care about. Moreover, getting a usable async/await MVP was absolutely essential to getting Rust the escape velocity to survive the ejection from Mozilla - every other funder of the Rust Foundation finds async/await core to their adoption of Rust, as does every company that is now employing teams to work on Rust.
Async/await was, both technically and strategically, as well executed as possible under the circumstances of Rust when I took on the project in December 2017. I have no regrets about how it turned out.
Everyone who reads Hacker News should understand that the content you're consuming is usually from one of these kinds of people: a) dilettantes, who don't have a deep understanding of the technology; b) cranks, who have some axe to grind regarding the technology; c) evangelists, who are here to promote some other technology. The people who actually drive the technologies that shape our industry don't usually have the time and energy to post on these kinds of things, unless they get so angry about how their work is being discussed, as I am here.
Has anyone ever measured the latency of sending a message to SQS? I was using it with ELB on t2.medium instances, and my API (handle => send message to queue => return {status: true}) response times were around 150-300 ms. I replaced SQS with RabbitMQ, and they went down to around 75-100 ms.
Does anyone else think that sending a message to SQS is slow?
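If anyone wants to reproduce this, a minimal timing sketch (the callable you pass in is a stand-in for whatever client call you use, e.g. a boto3 send_message closure -- that part is an assumption, not shown here):

```python
import time
import statistics

def measure_latency_ms(call, n=50):
    """Time `call()` n times and return (p50, approx p95) in milliseconds."""
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        call()                      # e.g. lambda: sqs.send_message(...)
        samples.append((time.perf_counter() - start) * 1000)
    samples.sort()
    return statistics.median(samples), samples[int(n * 0.95) - 1]
```

Running it against both SQS and RabbitMQ from the same instance keeps network conditions comparable; a single average hides tail latency, which is what ends up in your API response times.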
Edit:
With this update, I was able to process almost 3x the requests with the same resources, and it lowered my bills quite a lot.
For example, my SQS bill for last month:
Amazon Simple Queue Service EUC1-Requests-Tier1
$0.40 per 1,000,000 Amazon SQS Requests per month thereafter 290,659,096 Requests $116.26
It went to $0, and the EC2 cost went down as well, because ELB spun up fewer instances -- I could handle more requests with the same resources.
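The quoted bill line checks out arithmetically -- the per-request math is just the metered rate times the request count:

```python
# Sanity-checking the invoice line above, using the rate it quotes.
RATE_PER_MILLION = 0.40          # $ per 1,000,000 SQS requests (Tier1)
requests = 290_659_096

cost = requests / 1_000_000 * RATE_PER_MILLION
print(f"${cost:.2f}")            # -> $116.26, matching the invoice
```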
This was my experience with SQS. I just wanted to share it.
I sold to cities, counties and regional agencies in the U.S. for 30 years. Random suggestions:
* Do not wait for them to decide they need a solution and then engage; your business will die. Success comes from educating prospects, helping them understand the benefits -- hell, even helping them write the RFP, if they'll let you -- and being the preferred solution before the bidding process formally begins.
* Related: All of your meaningful sales will come from formal proposals submitted as part of an RFP process. You need to get good at being the insider who helps them write the RFP or you need to get good at writing better, more compelling (not cheaper, not fancier -- more compelling) proposals. Even better? Get good at both.
* Bob correctly referenced the fact that you'll face high insurance requirements. Don't let them faze you too much -- commercial liability is cheap. Errors and omissions coverage, on the other hand, can be pricey, and you want to avoid needing it if possible.
* Get very good at finding local partners, even if you don't need them. Big projects that leave some of the money in the community are more compelling.
* If you are not a woman or a minority, get good at finding local partners who are certified as women-owned or minority-owned businesses. Some public agencies set up their RFPs with an automatic point deduction from your score if you can't tick this box.
I'd say the elephant in the room is graduating beyond plaintext (projectional editor, model-based editor).
If you think about it so many of our problems are a direct result of representing software as a bunch of files and folders with plaintext.
Our "fancy" editors and "intellisense" only go so far.
Language evolution is slowed down because syntax is fragile and parsing is hard.
A "software as data model" approach takes a lot of that away.
You can cut down so much boilerplate and noise because you can have certain behaviours and attributes of the software be hidden from immediate view or condensed down into a colour or an icon.
Plaintext forces you to have a visually distracting element in front of you for every little thing. So as a result you end up with obscure characters and generally noisy code.
If your software is always in a rich data model format your editor can show you different views of it depending on the context.
So how you view your software when you are in "debug mode" could be wildly different from how you view it in "documentation mode" or "development mode".
You can also pull things from arbitrary places into a single view at will.
Thinking of software as a "bunch of files stored in folders" comes with a lot of baggage and a lot of assumptions. It inherently biases how you organise things. And it forces you to do things that are not always in your interest. For example, you may be "forced" to break things into smaller pieces more than you would like because things get visually too distracting or the file gets too big.
All of that is an arbitrary side effect of this ancient view of software, and it will immediately go away as soon as you treat AND ALWAYS KEEP your software as a rich data model.
Hell, all of the problems with parsing text, ambiguity in syntax, and so on will also disappear.
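As a toy illustration of the "software as data model" idea (entirely my own sketch, with a made-up schema -- not how any existing projectional editor works): store a function as structured data, then render different views of the same model for different contexts.

```python
# A function stored as data rather than text (hypothetical schema).
fn = {
    "kind": "function",
    "name": "area",
    "params": ["w", "h"],
    "doc": "Area of a rectangle.",
    "body": {"kind": "return", "expr": ["*", "w", "h"]},
}

def view_signature(node):
    """'Documentation mode': just the signature and docstring."""
    return f"{node['name']}({', '.join(node['params'])})  # {node['doc']}"

def view_source(node):
    """'Development mode': render the same model as plain source text."""
    op, a, b = node["body"]["expr"]
    return (f"def {node['name']}({', '.join(node['params'])}):\n"
            f"    return {a} {op} {b}")

print(view_signature(fn))   # area(w, h)  # Area of a rectangle.
print(view_source(fn))
```

The point is that the source text becomes just one view among many, generated on demand -- the model itself never needs to be parsed, so syntax ambiguity never enters the picture.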
The thing to realise about Clojure is that it isn't an open source language like Python. It is a language controlled very tightly by Rich Hickey and Cognitect. Major work is done in secret (transducers, reducers, spec), then announced to the world as a "Here it is!" and then suggestions are taken. This goes well mostly, though it took a lot of outside persuasion that Feature Expressions weren't the best idea, and to come up with Reader Conditionals instead.
The underlying problem Clojure/Core has is their communication. If they would come out and explain their philosophy then people wouldn't get frustrated and confused when they expect the language development to work like other open source projects. Clojure's development is functioning exactly as planned. It's not a mistake.
A better way to treat Clojure is closer to a project at Apple (except for Swift). You can submit bugs and make suggestions for future improvements, and, if you really want to, provide a patch. But go into it realising that it's not a community project, and you're much less likely to get frustrated.
With all that said, the proof of the pudding is in the eating, and for the most part Clojure has developed pretty well from Rich's tight control. I'd love it if there was a Snow Leopard year for Clojure where the focus was fixing longstanding bugs and feature requests, and more focus on those in general, but the language is developing well.
There's no easy way to port Python to WasmGC.