nonplus's comments | Hacker News

We seem to agree that what replaced religion (for-profit social media) is not a basis for a strong society or fulfilling lives. Religion seems like a close second-worst option, though.

What PCIe version are you running? Normally I would not mention one of these, but you have already invested in all the cards, and it could free up some space if any of the lanes you're using now are 3.0.

If you can afford the 16 (PCIe 3) lanes, you could get a PLX switch ("PCIe Gen3 PLX Packet switch X16 - x8x8x8x8" on eBay for around $300) and get 4 of your cards up to x8.


All are PCIe 3.0. I wasn't aware of those switches at all, in spite of buying my risers and cables from that source! Unfortunately all of the slots on the board are x8; there are no x16 slots at all.

So that switch would probably work but I wonder how big the benefit would be: you will probably see effectively an x4 -> (x4 / x8) -> (x8 / x8) -> (x8 / x8) -> (x8 / x4) -> x4 pipeline, and then on to the next set of four boards.

It might run faster on account of the three passes that are double the speed they are right now, as long as the CPU does not need to talk to those cards and all transfers are between layers on adjacent cards (very likely). With even more luck (due to timing and lack of overlap) it might run the two x4 passes at approaching x8 speeds as well. And then of course you need to do this a couple of times because four cards isn't enough, so you'd need four of those switches.
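For a rough sense of what the extra link width buys you, here's a back-of-the-envelope sketch in Python of per-hop transfer time at PCIe 3.0; the hidden size, dtype, and token counts are made-up examples, not numbers from this rig:

    # Back-of-the-envelope: time to push one activation tensor between
    # adjacent cards over PCIe 3.0 links of different widths.
    LANE_GBPS = 0.985  # approx. usable GB/s per PCIe 3.0 lane (128b/130b encoding)

    def transfer_ms(payload_bytes, lanes):
        """Milliseconds to move one layer-to-layer activation over the link."""
        return payload_bytes / (lanes * LANE_GBPS * 1e9) * 1e3

    # Hypothetical 8192-wide model in fp16: single-token decode vs. a
    # 2048-token prompt-processing batch.
    for tokens in (1, 2048):
        payload = tokens * 8192 * 2  # tokens * hidden_size * bytes per fp16 value
        for lanes in (4, 8, 16):
            print(f"{tokens:>5} tok over x{lanes:<2}: {transfer_ms(payload, lanes):8.3f} ms")

Under those assumptions the single-token decode transfers are tiny, and the difference between x4 and x8 shows up mainly in prompt processing or larger batches.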

I have not tried having a single card with fewer lanes in the pipeline but that should be an easy test to see what the effect on throughput of such a constriction would be.
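If you want to quantify it, something as simple as this would do (a sketch; generate() is a placeholder for however you drive the pipeline, be it a llama.cpp server, vLLM, or a custom loop):

    # Minimal throughput comparison: run the same generation with and without
    # the lane-constricted card in the pipeline and compare tokens/second.
    import time

    def tokens_per_second(generate, prompt, max_new_tokens, runs=3):
        """generate() should block until done and return the tokens produced."""
        rates = []
        for _ in range(runs):
            start = time.perf_counter()
            n_tokens = generate(prompt, max_new_tokens)
            rates.append(n_tokens / (time.perf_counter() - start))
        return sum(rates) / len(rates)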

But now you have me wondering to what extent I could bundle 2 x8 into an x16 slot and then use four of these cards inserted into a fifth! That would be an absolutely unholy assembly, but it has the advantage that you would need far fewer risers, just one x16 to x8/x8 run in reverse (I have no idea whether that's even possible, but I see no reason right away why it would not work, unless there are more driver chips in between the slots and the CPUs, which may be the case for some of the farthest slots).

PCIe is quite amazing in terms of the topology tricks that you can pull off with it, and c-payne's stuff is extremely high quality.


If you end up trying it please share your findings!

I've basically been putting this kind of gear in my cart and then deciding I don't want to manage more than the two 3090s, the 4090, and the A5000 I have now, and then I take the PLX out of my cart.

Seeing as you already have the cards, it could be a good fit!


Yes, it could be. Unfortunately I'm a bit distracted by both paid work and some more urgent stuff, but eventually I will get back to it. By then this whole rig might be hopelessly outdated, but we've done some fun experiments with it and have kept our confidential data in-house, which was the thing that mattered to me.


Yes, the privacy is amazing, and there's no rate limiting, so you can be as productive as you want. There are also tons of learnings in this exercise. I have just 2x 3090s, and I've learnt so much about PCIe and hardware that it makes the creative process that much more fun.

The next iteration of these tools will likely be more efficient, so we should be able to run larger models at a lower cost. For now, though, we'll run nvidia-smi and keep an eye on those power figures :)


You can tune that power down to whatever gives you the best token count per joule, which I think is a very important metric by which to optimize these systems, and by which you can compare them as well.
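If it helps, here's a rough sketch of measuring it: sample nvidia-smi's reported power draw while a generation runs, integrate to joules, and divide the token count by the energy. It assumes nvidia-smi is on the PATH and a run_generation() callable (a placeholder for whatever your stack exposes) that blocks until done and returns the number of tokens produced; the power limit itself can be lowered with nvidia-smi -pl <watts>, given the right permissions.

    import subprocess
    import threading
    import time

    def read_power_watts():
        """Sum of power.draw across all GPUs, in watts."""
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=power.draw", "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout
        return sum(float(line) for line in out.splitlines() if line.strip())

    def tokens_per_joule(run_generation, interval=0.5):
        """Sample power in a background thread while run_generation() executes.
        Assumes the run lasts at least a few sampling intervals."""
        joules = 0.0
        done = threading.Event()

        def sampler():
            nonlocal joules
            last = time.perf_counter()
            while not done.is_set():
                time.sleep(interval)
                now = time.perf_counter()
                joules += read_power_watts() * (now - last)
                last = now

        t = threading.Thread(target=sampler)
        t.start()
        tokens = run_generation()
        done.set()
        t.join()
        return tokens / joules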

I have a hard time understanding all of these companies that toss their NDAs and client confidentiality to the wind and feed newfangled AI companies their corporate secrets with abandon. You'd think there would be a more prudent approach to this.


"there are more than a dozen different types of diabetes" this feels like an ai synopsis that any human would have critiqued, but since ai said it, it made it in.

I'm not a doctor, but I've never heard of more than 5 "types" from endocrinologists in the past 20+ years.


I think that was always Alan Moore's intent. The feelings you just listed resonated with the comic's audience when it came out decades ago.

(Which is not an attempt to discount that it resonates with you now; just pointing out that the subject matter seems timeless at this point.)


Thoroughly agree.

And, it seems, my agreement just gets more thorough as more time passes.


That value (of one company) is from speculative investment. I don't think it negates that the field has a perception problem.

After seeing something like blockchain go completely awry, get used for the wrong things, and be embraced by the public for it, I at least agree that AI has a value-perception problem.


How does speculative investment get 500m DAU?


I dunno, how did Juicero get $120m?


Exactly. It is possible for a metric like DAUs to come almost entirely from marketing saturation, heavy promotion, and hype, and not from actual utility to the user. I'm not sure that's the case in particular for ChatGPT but I wouldn't be surprised.


I do think Facebook and Instagram are forced on the public if people want to fully interact with their peers.

I just don't participate in discussions about Facebook marketplace links friends share, or Instagram reels my D&D groups post.

So in a sense I agree with you: forcing AI into products is similar to forcing advertising into products.


I believe Berkshire bought (or agreed to buy) part of Dominion (a public utility delivering power) on the East Coast; I don't know any particulars of how it was run or whether the deal even closed. That's the only related example I know of (non-exhaustive).


This is a really cool project. Your post is the first time I am seeing the figures on how much others also hate CGM notifications.

I have been working on software for the G1 headset to display glucose data without any audible alarms, just a visual notification.

The issues I run into are:

* Dexcom sensors have limited Bluetooth connections (with pump support), so I need to pull data from a phone.

* Battery life (I can get maybe 9 hours with a not-always-on display, a far cry from 6 days).

* xDrip delivers readings more slowly than reading directly from Dexcom.

* General UI edge cases for missed readings.

Thank you for sharing your work; it's validation that others are experiencing this problem, and maybe my visual solution will be useful to others. (A watch/audible/haptic device might also be a good complement to what I have now; my solution is not supposed to be the only way to get CGM data.) https://github.com/ltomes/rel-a/tree/feature/xDrip
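For the missed-readings case, the check I have in mind is roughly this (a sketch; the endpoint, port, and JSON shape are assumptions standing in for however your phone-side bridge serves recent readings, so adjust to your own setup):

    import json
    import time
    import urllib.request

    ENDPOINT = "http://127.0.0.1:17580/sgv.json"  # placeholder local Nightscout-style endpoint
    STALE_AFTER_S = 10 * 60  # readings normally arrive about every 5 minutes

    def latest_reading():
        """Return the newest glucose value, or flag it as stale/missing."""
        with urllib.request.urlopen(ENDPOINT, timeout=5) as resp:
            entries = json.load(resp)
        if not entries:
            return {"stale": True, "age_s": None}
        latest = entries[0]  # assumes newest-first ordering
        age_s = time.time() - latest["date"] / 1000  # Nightscout-style ms epoch
        if age_s > STALE_AFTER_S:
            return {"stale": True, "age_s": age_s}
        return {"stale": False, "mgdl": latest["sgv"], "age_s": age_s}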


It seems like a valid metric to pick on. Premium devices are refreshed early / have short lifespans because they are purchased by customers with disposable income. Budget devices should be built to last, as that's what matters to the customers buying them.

As a counter, though, I would say that with 2 GB of RAM this device just won't be fast enough for most of its users in 3 years anyway; so although I find this a valid argument to make, a new issue pops up immediately (for me at least).


I hope you come out of this in good shape. I try to get all my (digital) TTRPGs and indie games through your platform.

