Hacker News | Ecco's comments

I think the whole article is super confusing.

A Symbol is really just a string!

Well, it's a string that will guarantee unique allocations (two identical strings are guaranteed to be allocated at the same address), which makes equality checks super fast (compare pointers directly). But pretty much just a string nonetheless...
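In Ruby terms, a quick sketch of that distinction (using `String.new` to force distinct allocations, since literal behavior depends on frozen-string settings):

```ruby
# Two uses of the same symbol yield the very same object,
# so equality is effectively a pointer comparison.
raise unless :payload.equal?(:payload)

# Two string objects with equal contents are, by default,
# distinct allocations, so equality must compare contents.
s1 = String.new("payload")
s2 = String.new("payload")
raise if s1.equal?(s2)   # different objects
raise unless s1 == s2    # same contents
```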


A symbol really is just an integer which happens to have a fancy name when looked at from a user's point of view.


Lisp symbols have various properties; the exact set depends on the dialect. The properties may be mutable (e.g. property list, value cell).

You can certainly associate such things with an integer: e.g. value_cell[42], plist[42].

But those associations are part of the "symbolness", not just the 42.

Integers are not suitable symbols in some ways: they have a security problem.

Real symbols can be used for sandboxing: if you don't already hold the symbol, you have no way to get it.

This won't happen for integers. Especially not reasonably small integers.

What do I mean that if you don't already have a symbol, you have no way to get it? If the symbol is interned, you can intern it, right?

Not if there is a symbol package system, and you don't have access to the package in which the symbol is interned.

Now you might say, since objects are typically represented as pointers, aren't those a kind of integer anyway? Yes they are, but they are different type which doesn't support arithmetic; we can restrict programs from calculating arbitrary pointers, but we can't restrict programs from calculating arbitrary integers.

Even if we have escape hatches for converting an integer to a pointer, or other security/safety bypassing mechanisms, those escape hatches have an API which is identified by symbols, which we can exclude from the sandbox.


I (and GP given their use of capitalised Symbol) was talking about Ruby symbols, which for the most part are more equivalent to their object_id than to their textual representation.

What I mean by "integer" is not "pointer", it's "natural number", in the sense that there exists only one `1` vs you can have multiple textual "foo" strings.

So it's more useful to think of symbols as natural numbers whose actual value you don't know or care about, because as a human you only care about the label you have attached to it, but you do care about its number-like property of conceptually existing only once.


Frozen string literals also have unique allocations now, so symbols are kind of redundant.


No they’re not.

Strings can be interned in Ruby, but it's always possible for an equal string that is not interned to also exist. Hence interned strings can't benefit from the same optimization as symbols.

You can first compare them by pointer, but on a mismatch you have to fall back to comparing content. Same for the hash code: you have to hash the content.
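A minimal Ruby sketch of that two-step comparison (the `fast_eq?` helper name is made up for illustration):

```ruby
# Hypothetical helper illustrating the comparison strategy described above.
def fast_eq?(a, b)
  return true if a.equal?(b)  # identity fast path (pointer comparison)
  a == b                      # fall back to content comparison
end

interned = -"route_key"             # String#-@ returns a frozen, deduplicated string
copy     = String.new("route_key")  # equal contents, but a separate allocation

raise unless fast_eq?(interned, copy)    # true, but only via the content fallback
raise if interned.equal?(copy)           # identity check alone would say "not equal"
raise unless interned.hash == copy.hash  # String#hash is computed from contents
```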


I'm talking about as a user of the language, not as a language designer. I have an unpopular opinion about this, but symbols are error prone and these optimizations are ultimately not worth it.


Feels like a disassembly of a boilerplate app, as opposed to handcrafted, minimal assembly code.

For instance I’m pretty sure the autorelease pool is unnecessary as long as you don’t use the autorelease mechanism of Objective-C, which you’re most likely not going to do if you’re writing assembly in the first place.


How would that impact the App Store approval? AFAIK they review binaries anyway…


They do, but ASM doesn't have the guardrails that the compiled languages have, so it's almost certain that private APIs would get accessed.


> so it's almost certain that private APIs would get accessed

No it's not. Just like with ObjC or Swift, in ASM you have to be explicit about the APIs you want to call. I don't see how you would accidentally call a private API in ASM.

IMO the bigger risk is attempting to call a method that does not actually exist. ObjC or Swift would protect you from that, but ASM would not and may just crash at runtime.


What? That doesn’t make any sense. The only guard rails normal Obj-C has against calling private APIs is that they aren’t listed in the public headers; otherwise you can still easily call them. If you don’t explicitly make calls to private APIs from ASM, they won’t be called. I have no idea why you think “it’s almost certain that private APIs would get accessed.”


Tell you guys what.

We’ll just leave things as they are.

I’ll forfeit the game.

The field is yours.

Have a great day!


How is this any better than a Blender file with a rigged human model?


Literally just ease of use. Blender you have to learn to use, whereas JSM is built to be pick-up-and-go for even the least tech-savvy users.


And feature-wise, that's all it needs.


Try getting a non-technical artist to pick up Blender and work out a rigged human model.


First question: where do you get rigged human models with the same level of detail, for free?


I wanted to use blender for quick pose reference yesterday and I was able to find several good ones and even one great one within 5min. They're around.


haha this was mine as well, would love to see some recommendations where people are getting some nice models to draw from.


Ease of use.


Because of a lack of features? It doesn't even have IK so I would argue Blender is in fact easier to use (as in, will get you to the result faster even if you need a bit more time to learn the interface).


This has pre-built poses, props and a bunch of other easy to use features. This is classic hacker-news ‘Less intuitive and more complicated X is better because I already know how to use X’


I must have spent a hundred hours in Blender over the years, and I'm still not competent enough to do even some of the most basic things on my own when I pick it up.

Blender is an amazing project, but I would not suggest it as an alternative to OP's tool if asked by someone who practices drawing and has zero Blender experience or interest.


Great content, terrible form.


I looked it up rapidly and couldn't figure out the difference with the original OrangePi 5.

By the way, the OrangePi 5 is a pretty good SBC. Much better bang for the buck than RPi, and the mainline kernel support is pretty good and getting better with every release thanks to the folks at Collabora.

https://gitlab.collabora.com/hardware-enablement/rockchip-35...


I have a cluster of 3 of the Orange Pi 5 Pros. They're extremely capable machines if you don't need the GPU or NPU (which I don't). That being said, they're more expensive, louder, and less energy efficient than like an Intel N100 mini PC.


This.

If power draw isn't critical, an N95/N100/N150 x86 wins out every time on OS support and price point. Especially when you factor in an SSD, thermal handling, case, power supply...


I give OrangePi a lot of points for putting an M.2 slot on the bottom of the PCB. Not only does Raspberry Pi charge extra for their M.2 board, it sits in an obnoxious location above the board where it interferes with many other things one might want to put on top, e.g. any sort of passive cooling.


> I looked it up rapidly and couldn't figure out the difference with the original OrangePi 5.

RK3588S -> RK3588, LPDDR4 -> LPDDR5


There is also one in between, the Plus: RK3588 but DDR4.


Thanks. Does it make a big difference in practice?


A non-Broadcom SoC and actually fast network? Absolutely better than any Raspberry Pi


The built-in ethernet controller would be fine, not sure why it needs to use up some PCIe lanes on an external one.


Assuming you can get software support for more than one version of 'blessed' distro.


Well, that’s my point exactly: mainline kernel is what all distros eventually use.

As a matter of fact I’m currently running an OrangePi 5 as a server using an unmodified Debian Trixie and hardware support is nearly perfect.


Could you be more specific about what's not perfect yet with the hardware support?


I feel like we’ve gone full circle. For decades Apple hardware sucked and was badly overpriced, but you paid the price to enjoy running Mac OS X. Now Apple makes amazing hardware (especially laptops) but the drawback is that you have to run macOS on them.

I really wish Asahi Linux had more support, I would have bought a couple M4 Minis.


If you don't need the battery life of a MacBook and you're happy getting a desktop device, there's plenty of machines running new AMD chips that are just as fast as an M series mac, if not faster. And they'll run Linux with no compromises. Check out Bee-Link (https://www.bee-link.com/) for some mac-inspired hardware.


Without knowing your specific workloads, I'd imagine an M2 Pro Mac mini (which is supported by Asahi) is still plenty fast.


That's going the opposite direction not full circle. Hardware bad, OS good is now hardware good, OS bad.


Actually, save for power efficiency, Apple is still behind the curve and has been for decades. Nvidia has reigned uninterrupted for longer than I can remember and regularly beats Apple in raw performance on available hardware, and AMD has regularly topped benchmarks for years.

In fact, AMD and Nvidia have been the de facto high-performance combination for so long, that I can't remember when it was any different. But prior to that, it was Intel and Nvidia. Apple was never a real high quality hardware competitor. The only thing they ever had to offer were products produced by a production process almost no one replicates.

Razer started using CNC unibodies for their laptops 14 years ago, but they're maybe the only company I can think of that does so other than Apple.

And MSI has shipped high performance laptops for so long that even Apple used their laptops for comparison during the M-series chip releases in the MacBook Pro.


That's not a full circle; a full circle would be if Apple later returned to badly overpriced yet enjoyable macOS again.


If Asahi Linux had support for Thunderbolt and DP alt-mode I would be running it today, but those are dealbreakers for me unfortunately.

I'm donating to them and hoping they eventually get those implemented.


>I really wish Asahi Linux had more support, I would have bought a couple M4 Minis.

Me too. I just wonder: instead of reverse engineering the SoC to get Linux running, why can't we just use the Darwin kernel (which is supposed to be open source, right?) and build something like a FreeBSD desktop for the M3/M4? Would that be more viable long-term than reverse engineering the SoC? Are there any projects in that direction?


The background on that page is so distracting…


> Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.


On desktop you can delete the div with id="particle-bg"; it's what I did to read the article before checking the comments.


Was going to say the same thing. While neat it made it nearly impossible for me to focus on the content.


Does reader mode help? No background visible on iOS Safari, even with content blockers disabled.


That does! Thank you.


I feel like it's really not that bad.


I'm a huge fan of vintage Apple devices, and just got my hands on an iBook G3. It didn't come with a power adapter, but it has what really looks like a docking connector on the back. However I surprisingly haven't been able to find anything online about that connector.

Does anyone here have any info about that docking connector? Any help would be greatly appreciated!


Not a docking connector: charging contacts.

See page 29 of the service manual here: http://www.applerepairmanuals.com/the_manuals_are_in_here/iB...

(No third-party charger that used them was ever released AFAICT.)


Amazing, thanks for that link! Do you have any idea how I could figure out what voltage to provide?


I don't see that anyone's ever done the work. Disassemble it, trace the contacts, apply increasing amounts of current?

In the underside photo here, it's CP1 and CP2: https://davidigreen.com/blog/ibook-g3-clamshell-logic-board-...


With a limit of 10 million different serial numbers, I wonder how China does it. I can't come up with a decent estimate, and maybe I'm way off. But with the growth of sellers like Shein or Temu, I wouldn't be surprised if they shipped that many parcels in like a single day? Or at least in a timeframe short enough that they would have over 10 million shipped but yet-to-be-delivered parcels, effectively running out of tracking numbers.


What helps is that they don’t ship direct from China by mail much. They often send in bulk to the destination country and then mail locally, and local post systems can have their own domestic format.

Or they have their own private courier do the last mile delivery too so it never touches any postal operator.


Do they? In Australia we usually get them direct from HK or China, because it is cheaper to do that than even to post them within Australia!


In Southern Ontario Canada, yes, even in the suburbs, most stuff is dropped off by some rando courier for a few years now.

Somehow cheaper than paying bulk international airmail rates.


> With a limit of 10 million different serial numbers, I wonder how China does it.

The author has issued a correction: it's 100 million numbers per service indicator. But even then, it's probably not enough.

The boring answer is that your shipping options are either untracked postal service (to which the S10 standard does not apply) or a private courier (which also does not use the S10 standard).

If you insist, you have two options for UPU-based postal tracking: normal e-commerce parcels aka H-codes, practically 2,300,000,000 trackable packages per year [1]. EMS is the other route, with another 2,300,000,000 trackable packages per year [2]. However, in my experience tracked postal delivery is only used in certain countries where it is more advantageous than private delivery (like until very recently in the US, for complicated reasons [3]), while other destinations have a more-than-willing private delivery partner (that is not the Big Three [4]) or even set up the delivery systems themselves.

1: 23 service indicators: HA-HW, HX-HZ are reserved for multilateral/bilateral use only

2: another 23 service indicators: EA-EW, EX-EZ are reserved for multilateral/bilateral use only

3: https://www.thewirechina.com/2020/11/22/delivering-chinas-ma... https://www.ft.com/content/a1233f3e-d21a-11e8-a9f2-7574db66b...

4: DHL, FedEx, and UPS


Yeah. Apparently last year they shipped over two million small parcels to Finland (pop. 5.6M) alone, which is completely bollocks.


> completely bollocks

Do you mean “bonkers”? Because “bollocks” in this case would mean “made up”.


Oops, yes!


Service type and serial need to be unique. Countries control what that two-letter field means. There is no rule against multiple codes indicating the same service. So AA through AZ would give you 260,000,000 unique combinations that you shouldn't reuse for 1 year. Rinse, lather, and repeat if you need more.


Is the serial number even in base 10? The other parts of the number allow letters. The article does not say, but it could easily be base 36, which would be close to 3 trillion serials.

Plus a bonus rant: this is one of those things that looks like a number, and as such you are tempted to use a number to store it, but it's not, it's a string: you will never do math on it, so it is not a number. See also: phone numbers, social security numbers, serial numbers.

And a sheepish bonus update: there is a checksum, so math is done on it. I wonder if the checksum makes more or less sense in base 36? Probably less; the checksum almost looks base-12-ish with the mod(11), but there are special cases for two-digit values, so it is probably base 10.
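For the curious, the check digit scheme being alluded to works roughly like this (a sketch based on the commonly documented UPU S10 algorithm; treat the weights and the two special cases as my recollection of the spec):

```ruby
# Sketch of the S10 check digit over the 8-digit serial,
# e.g. "RR123456785GB" carries serial 12345678 and check digit 5.
S10_WEIGHTS = [8, 6, 4, 2, 3, 5, 9, 7].freeze

def s10_check_digit(serial)  # serial: 8-digit string
  sum = serial.chars.map(&:to_i).zip(S10_WEIGHTS).sum { |d, w| d * w }
  c = 11 - (sum % 11)
  c = 0 if c == 10   # the two-digit special cases mentioned above:
  c = 5 if c == 11   # 10 maps to 0, 11 maps to 5
  c
end

raise unless s10_check_digit("12345678") == 5
```

So the mod(11) plus the 10-to-0 and 11-to-5 remappings keep the result a single base-10 digit, which supports the "probably base 10" guess.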


Eh, your comment here was checksummed several times as well crossing the network. Doesn't make it "a number".


You could sort it by an internal metric, like date received.


And I wonder what the constraint was against making it longer when they developed the standard. Making it a few digits longer seems like it wouldn't have cost much.


The cost will be in updating every legacy postal system that currently has fixed column lengths/input field length limits.


Yes, now, but the person you're replying to was asking about at inception.


Why was IPv4 so pitifully small? I guess most people thought 100 million parcels a year was a ridiculously generous limit that we'd never reach.


Even the US must easily run into this constraint.


The short answer is probably: in-house consolidation.

