I think it's reasonable to take out a long-term loan given current rates.
Surely there is some rate at which it makes sense. If the loan rate were -1%, it would obviously be a good idea, right?
Assuming 2% inflation, sitting in cash is like taking out a loan you don't need at ~2% and stuffing the money in a mattress. That's clearly a bad idea, right?
Current rates are 3%, and the market returns about 7% in the long term. (4-5% inflation-adjusted).
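To put rough numbers on it (purely illustrative): borrow $100k at 3% and the interest is about $3k/year, while a 7% expected nominal return on the invested $100k is about $7k/year in expectation, for an expected spread of roughly $4k/year before taxes.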
Historically, the only way to get burned reinvesting a loan at these rates was buying at the peak just before the Great Depression. The market will go up or down over the next year, so averaging the purchase over that time reduces the timing risk.
I can’t come up with a more risk averse strategy than this that also has positive real returns in expectation. Any suggestions?
I used to get this, along with some other anxiety stuff. A low dose of Prozac taken 2x per week totally eliminated it (anecdata, I know, but I thought it was interesting).
What would you like to see? It's because I've been posting content myself as I work on it, and I like reading about space/SpaceX etc. I'll post some new stuff.
I've been told by some well-placed people (pooh-pooh me if you want, I don't care) that a number of PE shops and hedge funds were able to get some pretty big loan applications to the top of the pile. Many, many, many businesses are in your situation. A very large percentage of the money went places you and I probably wouldn't think it "should" go. The program isn't working.
1) a private university near me with several thousand students and nearly 1,000 employees (faculty and the like) "tweaking" its payroll numbers to exclude adjuncts, TAs, student employees, and such, so it could claim to be a "small business"
2) a startup that closed a $25M round of funding in March, has no product or revenue as yet, works 100% remote, and talks of having a 14-month runway, got a fairly large check "because they're offering money, we should take it" (regardless of need)
Seems like maybe the company that leased you the car might be willing to do a deal to get you to keep it a bit longer. It's not like they'll be able to sell it quickly.
Not sure I agree with the bit about Linux + Raspberry Pi not being able to handle precision timing down to the microsecond level. I'd be interested if anyone has thoughts.
I've done a fair amount of timing-critical design, including control of stepping motors and encoders for specialized uses. My experience has been that "real time" programming on a mainstream computer with an operating system is a headache, if it's possible at all. It's just a lot easier to do that kind of stuff on a mid-range microcontroller, where the processor has exactly one task and predictable sequencing of operations.
Granted, it was a long time ago, but I remember trying to work with a CNC milling machine that generated all of the timing on a Windows computer. Everything worked fine unless you touched the mouse, which would cause motion to stutter. So, there can be a lot of gotchas that have to be accounted for when using a "big" computer for "little" things.
Yes. This is just a "keep motor A in sync with encoder B" problem. That should be on something at the Arduino level, although you might need a faster CPU version. Something with no caches, no superscalar, just dumb static memory and fixed-time instruction execution. Linux would just get in the way. The loop that keeps both sides in sync should probably be running around 1000Hz or so. In one of the OP's rather long and numerous videos, he mentions 10Hz not being enough. Right.
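For flavor, here's a minimal sketch of that sync loop in Arduino-style C++. The pin numbers, the ratio, and the encoder ISR are all invented for illustration; a real electronic leadscrew would also need quadrature decoding and acceleration limits.

    const int STEP_PIN = 8;
    const int DIR_PIN  = 9;

    volatile long encoderCount = 0;  // updated by an encoder ISR (not shown)
    long motorSteps = 0;             // step pulses issued so far
    const float RATIO = 0.5f;        // desired motor steps per encoder count

    void setup() {
      pinMode(STEP_PIN, OUTPUT);
      pinMode(DIR_PIN, OUTPUT);
    }

    void loop() {
      static unsigned long lastRun = 0;
      unsigned long now = micros();
      if (now - lastRun < 1000) return;   // run the sync loop at ~1 kHz
      lastRun = now;

      noInterrupts();
      long enc = encoderCount;            // consistent snapshot of ISR counter
      interrupts();

      long target = (long)(enc * RATIO);  // where the motor should be by now
      digitalWrite(DIR_PIN, target >= motorSteps ? HIGH : LOW);
      while (motorSteps != target) {      // issue catch-up step pulses
        digitalWrite(STEP_PIN, HIGH);
        delayMicroseconds(3);
        digitalWrite(STEP_PIN, LOW);
        delayMicroseconds(3);
        motorSteps += (target > motorSteps) ? 1 : -1;
      }
    }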
If it needs a user interface, that might be on a separate computer, with an interconnect to set the ratio in the motor controller over I2C or something equally dumb.
Also, you can get stepper motors with encoders, so you can tell if you missed a step and correct for it. You don't have to use a servomotor, which seems to have complicated this project. ShopBots and Tormach mills use stepper motors with encoders.
Many people have converted existing milling machines to CNC, and there are kits for that. This is a much simpler project.
Starting at about 7:09, he talks about why the Arduino wasn't a good fit and explains his choice of the TI board (including speed, single-cycle floating-point hardware, responsive interrupts, and dedicated peripherals that latch consistent, jitter-free reads of the quadrature encoder counts).
Note it's a 4096-step encoder, mounted on a different shaft from the one he's driving, that can run upwards of 2500 RPM. His motor control loop (not the complete end-to-end path, but the bit that holds the output position at the desired value) takes under a microsecond. His code works with either a NEMA 24 stepper or a NEMA 23 servo, and he demonstrated both.
In the second video he maths out some projections for error/drift (factoring in floating point representation error) and its ultimate impact on thread accuracy (which provides some motivation for what you might initially consider "overkill" hardware).
10 Hz is just the refresh rate for the tachometer calculation.
Indeed, a bit of confusion about "Arduino" is that it has come to mean two things. One is the original Arduino board and its "branded" successors. The other is the development stack. In the latter case, it's GCC with some libraries and a beginner-friendly editor. It's bare-iron programming.
Thinking about dealing with an encoder: let's say it has 8k counts per revolution, and the motor can do 10 revolutions per second, which is pretty fast for a stepping motor. Then the encoder steps are coming in at an 80 kHz rate, but a 180 MHz MCU can execute a couple thousand instructions between steps, so it's not even breathing hard.
I've been quite happy with the Teensy boards. The Teensy 3.6 has a 32-bit MCU with an FPU, runs at 180 MHz, and executes code in order, AFAIK. You program it in the Arduino IDE, but Arduino is now just a thin layer on top of GCC, and you have full access to the MCU registers and functionality. There are a number of other attractive 32-bit processors, such as the STM32 family.
That's the sort of thing I had in mind. Low-level Arduino-type microcontroller programming on a bigger engine.
The Arduino development environment is just an IDE and a library for board-level systems. It's C++ underneath, and all of the C++ language, although not the libraries, is available to you. You can do the same thing without the Arduino system, but setting up the build environment tends to be complicated.
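For example, on a classic AVR-based board, nothing stops you from skipping the libraries and hitting the port registers directly (AVR-specific register names; other boards differ):

    void setup() {
      DDRB |= (1 << DDB5);      // pin 13 (PB5 on an Uno) as output,
                                // same effect as pinMode(13, OUTPUT)
    }

    void loop() {
      PORTB |= (1 << PORTB5);   // pin high, like digitalWrite(13, HIGH)
      PORTB &= ~(1 << PORTB5);  // pin low, a couple of cycles later
    }

The direct writes compile down to a cycle or two each, versus the dozens of cycles of overhead in digitalWrite(), which matters when you're bit-banging.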
If you need precision timing, you either need to program bare-metal or use a hard real-time OS like VxWorks or QNX. Here, where it's all stepping and reading encoders, bare metal is the way to go. If you were coordinating a multi-axis machine, a real-time OS might be better.
>it was a long time ago, but I remember trying to work with a CNC milling machine that generated all of the timing on a Windows computer. Everything worked fine unless you touched the mouse, which would cause motion to stutter.
Windows used to (maybe still does, idk) use an interrupt for mouse movement. That was likely the cause of your issue.
This was a problem when recordable CDs were first available. The retail CD writers had really small buffers, and a contemporary PC could barely supply data fast enough. If you touched the mouse while recording, the CD instantly turned into a coaster.
Measuring time (down to nanoseconds) is easy. Hard realtime scheduling with microsecond precision... just forget about it, unless you're building a custom distro & custom kernel and have in-depth knowledge about all the drivers that are going to be running on the system (you almost definitely do not).
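The measuring half really is that easy; a sketch using standard POSIX calls (should work on any recent Linux, Raspberry Pi included):

    #include <stdio.h>
    #include <time.h>

    int main(void) {
      struct timespec t0, t1;
      clock_gettime(CLOCK_MONOTONIC, &t0);  // nanosecond-resolution timestamp
      /* ... the thing being measured ... */
      clock_gettime(CLOCK_MONOTONIC, &t1);
      long long ns = (t1.tv_sec - t0.tv_sec) * 1000000000LL
                   + (t1.tv_nsec - t0.tv_nsec);
      printf("elapsed: %lld ns\n", ns);
      return 0;
    }

Resolution is not the same as a scheduling guarantee, of course; reading a nanosecond clock tells you how late you were, not how late you'll be.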
You can't really bang bits like you can with carefully cycle-counted code on something like an Arduino, but you can put all your bits into a good-sized buffer and use DMA to push them out at a stable, hardware-clocked rate. You still have to be able to ping-pong your buffers fast enough, and that isn't really guaranteed, but with large enough buffers it can be made to work, mostly, probably, well… except when…
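To make the ping-pong part concrete, a sketch in C++ (dma_start(), dma_busy(), and fill_steps() are hypothetical stand-ins for whatever your vendor HAL actually provides):

    #include <cstddef>
    #include <cstdint>

    constexpr std::size_t BUF_WORDS = 4096;
    static std::uint32_t buf[2][BUF_WORDS];  // the two ping-pong halves

    // Hypothetical HAL stubs; replace with the real DMA driver calls.
    void dma_start(const std::uint32_t *data, std::size_t n) { (void)data; (void)n; }
    bool dma_busy() { return false; }

    // Compute the next n words of hardware-clocked GPIO output.
    void fill_steps(std::uint32_t *dst, std::size_t n) {
      for (std::size_t i = 0; i < n; i++) dst[i] = 0;  // placeholder pattern
    }

    void run() {
      int active = 0;
      fill_steps(buf[active], BUF_WORDS);
      dma_start(buf[active], BUF_WORDS);    // hardware clocks this out steadily
      for (;;) {
        int next = active ^ 1;
        fill_steps(buf[next], BUF_WORDS);   // prepare the idle half early
        while (dma_busy()) { }              // wait for the active half to drain
        dma_start(buf[next], BUF_WORDS);    // hand over before output starves
        active = next;
        // If fill_steps() ever takes longer than one buffer's playout time,
        // the output glitches: that's the "mostly, probably" part. Real
        // setups chain descriptors or use circular DMA to avoid any gap.
      }
    }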
This technique doesn't work when you need to respond to inputs rapidly, but is fine for driving steppers. I see he appears to have gone with servos and encoders, so that would be a bit of a problem since there are inputs to be read.
> put all your bits into a good sized buffer and use DMA to push them out
Sure, time-honored technique for audio. Very on the mark.
But it doesn't really fix the "latency is forever" problem. Buffer too much and your control will lag. Too little and buffers can bottom-out. Best is to over-spec the metal and then be the only code on it so you don't need to buffer at all.
(Worst is, you didn't get to spec the metal, and there's a whole OS full of someone else's Bright Ideas between you and that 200us-or-the-reactor-goes-blooie control loop. Good luck, you're gonna need it).
> you can put all your bits into a good-sized buffer and use DMA to push them out at a stable, hardware-clocked rate.
Obviously; without buffering, glitch-free audio playback would be impossible. I made the assumption that when we're talking about timing on Linux, it's not about throwing buffers at a hardware-clocked device.
Does libgpio or some other interface on Linux give you access to clocked DMA out of the box? Do the drivers on RPi support it?
All the gpio drivers I've worked with are just directly writing state to registers.
In Linux you can isolate a CPU from the others, so you can get closer to a real-time solution (though you still get contention on cache lines, the memory bus, etc.). I haven't tried it, though.
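The usual recipe (untested by me as well) is isolcpus=3 on the kernel command line, plus something like this in the control process:

    #define _GNU_SOURCE  // for CPU_ZERO/CPU_SET with glibc
    #include <sched.h>
    #include <stdio.h>

    int main(void) {
      // Pin this process to core 3, assumed reserved via isolcpus=3.
      cpu_set_t set;
      CPU_ZERO(&set);
      CPU_SET(3, &set);
      if (sched_setaffinity(0, sizeof(set), &set) != 0)
        perror("sched_setaffinity");

      // Request a real-time FIFO priority (needs root or CAP_SYS_NICE).
      struct sched_param sp;
      sp.sched_priority = 80;
      if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        perror("sched_setscheduler");

      /* ... timing-critical loop here ... */
      return 0;
    }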
His loop needs to deal with an encoder that generates 1-4K ticks per rotation, hooked to a spindle running at, IIRC, 2K RPM. I think his concern was that that's 8M events per minute (over 130K per second), all while also generating stepper outputs.
He mentioned not only microsecond but nanosecond.
It reminds me of Linux routers. They worked great until the packets-per-second (PPS) rate suddenly got high, as in a network loop or DDoS situation. The kernel would dynamically switch between interrupts and interrupt mitigation/polling, and I couldn't find any way to force it to always poll.
If the system was in interrupt mode, and the PPS rate suddenly went way up, the system was so busy handling the interrupts that it couldn't find the time to switch to interrupt mitigation and would livelock.
The solution was fairly easy: Just have more CPU cores than physical network interfaces. Disclaimer: This was ~10 years ago.
He never says that. You probably could do the same with multiple 8-bit Arduinos as well; I have done something similar myself with Klipper. But your life while doing that will be miserable.
The first thing you have to add for anything serious is a real-time clock, which the Raspberry Pi DOES NOT HAVE by default.
What he is saying is that Linux is not intended or designed for that, because it is not an RTOS, and it is huge.
You can fill the void, but you will be reinventing the wheel. Low-level hardware design is painful.
Linux is extremely complex, too complex for one person to understand. It would take man-years of work to handle all the details; that is, hundreds of thousands of dollars in salaries alone.
It is way easier to do what this man has done. And then, if you want, you can add an additional Raspberry Pi running Linux as the controlling UI.
A CPU with a cache cannot do it, because you can't be sure whether the data is in cache and will arrive on time. If there is something else on the bus (USB, video, network...), you need to design the whole system to ensure that the CPU gets the bus when it needs it regardless of other users; most computers are not designed this way. A multitasking CPU generally cannot do this, because you cannot be sure some other process isn't using the CPU when you need something.
That isn't to say it can't be done. Audio is very time-sensitive, and Linux does it well with the right kernel options. (Video is easier: though it uses far more CPU and bus bandwidth, it is less sensitive to delay.)
Depending on how bad the delay is, it might or might not work. As a rule, though, most real-time control is run by low-speed 8-bit CPUs that have exact timing. You have a high-powered CPU doing the calculations, and the microcontroller just runs the results, checking in for new orders as needed.
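On the microcontroller side, that split can be as dumb as this Arduino-style sketch (the three-byte command format is invented purely for illustration):

    const int STEP_PIN = 8;

    void setup() {
      pinMode(STEP_PIN, OUTPUT);
      Serial.begin(115200);  // the big CPU sends precomputed moves over serial
    }

    void loop() {
      // The host did the hard math; we just execute steps with exact timing.
      if (Serial.available() >= 3) {
        int count = Serial.read() | (Serial.read() << 8);  // steps to issue
        int periodUs = Serial.read() * 10;                 // step period
        for (int i = 0; i < count; i++) {
          digitalWrite(STEP_PIN, HIGH);
          delayMicroseconds(2);
          digitalWrite(STEP_PIN, LOW);
          delayMicroseconds(periodUs);
        }
      }
    }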
Not Raspberry Pi exactly, but the SoC used in the BeagleBone has a couple of "PRU" co-processors which run at 200 MHz, share RAM with the main CPU, and have access to peripherals including some GPIO. It's not too hard to use those for precision timing while the main CPU does the typical Linux stuff.
I've heard that said, but I've never actually seen a real project that uses those coprocessors. I've always thought that odd, as it seems like the perfect setup for real-time control. (I have seen tutorials on how to use them, but no non-educational projects.)
Yes, I think they're more of a curiosity for many hacker types, but they definitely do get used.
My Siglent SDG-2042X signal generator uses the same SoC; I'd guess it's using the PRUs to drive its DACs.
A member of our local makerspace used a PRU to drive a big display made with shift-register RGB LEDs (something like a WS2812 - can't recall details).
I have made a proof-of-concept interface to a fairly high end ADC that did a little bit of filtering in the PRU before sending the results to the main CPU.
It is a great setup, but most of the projects you'll hear of are in the hobbyist world where cost reigns supreme.
Raspberry Pi is much more popular and costs far less. The PRUs can be replaced by a couple of Arduino Nanos at a buck or two each. They won't run at 200 MHz, but there are very few control tasks that need to run that fast.
It depends on if you need closed-loop control (feedback-driven systems like DC motors with encoders) or open-loop control (where you just output a signal and you can trust that the system moves to a specific state without checking up on it).
A few years ago I made a 3D printer that used an open-loop stepper system in userland on the original Raspberry Pi. I got stepper precision down to 2 microseconds [1], which IIRC is better than what the Arduino-based systems out there were using. Latency was a different matter: the steppers would still overstep a few times after triggering the endstops.
It takes some hacking (I used DMA and the audio peripherals to offload timing-critical parts from the CPU), but open-loop control is totally doable at 2 microseconds if you're dedicated to getting it working. Maybe you can get down to 1 us on the newer Raspberry Pis. If you repurpose the VideoCore, maybe you can find a way to do real-time closed-loop control, but that sounds even more involved than what in the end is a relatively simple DMA hack.
With proper care it's doable, but that stack is bringing a whole lot of baggage you'll need to either strip out or isolate your control process from.
Having a bunch of cores is advantageous though. There are a variety of options for dedicating one core to the special purpose while the rest serve the general-purpose wild west.
But I'd be surprised if you didn't have headaches just using some python code with cpu affinity on an otherwise unchanged raspbian install.
Linux for sure doesn't, for the reasons stated here by others.
I would love to see a real RTOS on an RPi, though. Something based on seL4 would be very nice to see, for instance. You won't get nanosecond precision, due to the cache/system effects people here are describing, but microsecond precision should be more than doable.
It's quite doable, but you have to do some low-level plumbing to make it work: simply disable interrupts, pre-read your data at least once to warm up the cache, and you're good to go. That's how I controlled my plasma cutter.
Linux is a GPOS (general-purpose operating system), whereas the domain of servo controller programming (as shown in the videos) falls under real-time operating systems (RTOS). Even if there is no RTOS installed and the author is programming on bare metal with interrupt service routines (ISRs), the result essentially behaves like an RTOS. The author could also have used FreeRTOS. The primary difference between a GPOS such as Linux and an RTOS (bare-metal round-robin, or something like FreeRTOS, Micrium, etc.) is that a GPOS focuses on "throughput" while an RTOS specializes in "priority". An RTOS works by priority scheduling or round-robin.
An example would be better. Say there is an external input to the system (a sensor that detects roller coaster position), and you want deterministic scheduling of the task without any concern about the rest of the state of the operating system. It doesn't matter what the OS is doing: when the roller coaster's position is sensed and an interrupt is generated to call a subroutine (the ISR), an RTOS will not block, and it will guarantee execution. Meanwhile, Linux will schedule the task hoping it will get executed, but there is no guarantee... I am sure you can hack into kernel space and write a driver, but that becomes a much more laborious task; it is just better to use the right OS from the get-go.
Hope that makes sense.
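For what it's worth, that pattern looks roughly like this under FreeRTOS (a sketch only; the sensor ISR registration is board-specific and omitted):

    #include "FreeRTOS.h"
    #include "task.h"
    #include "semphr.h"

    static SemaphoreHandle_t positionEvent;

    // ISR fired when the roller coaster position sensor triggers.
    void positionSensorISR(void) {
      BaseType_t woken = pdFALSE;
      xSemaphoreGiveFromISR(positionEvent, &woken);
      portYIELD_FROM_ISR(woken);  // switch to the handler task immediately
    }

    // Top-priority task: preempts everything else the moment the ISR
    // gives the semaphore, which is what provides the guarantee.
    static void positionTask(void *arg) {
      (void)arg;
      for (;;) {
        xSemaphoreTake(positionEvent, portMAX_DELAY);
        /* ... handle the position event deterministically ... */
      }
    }

    void startControl(void) {
      positionEvent = xSemaphoreCreateBinary();
      xTaskCreate(positionTask, "pos", 256, NULL,
                  configMAX_PRIORITIES - 1, NULL);
    }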
A professional motion control system such as [1] comes with two components: an RTOS element, which is a proprietary controller, and a GPOS component, which is used to communicate, monitor, interface with the user (GUI), and do high-level functions. Usually the GPOS side is a piece of software installed on Windows, with dedicated PCI cards to communicate with the RTOS modules.
Also, check out LynxOS. This is the top end of RTOSs, with some POSIX (UNIX-like) features, and it can also run on powerful processors such as Intel and AMD chips. It is used in airplanes (avionics), trains, public works, and other super-critical operations: https://www.lynx.com/products/lynxos-178-do-178c-certified-p...
I think a better way to describe an RTOS than as specializing in "priority" is that an RTOS has bounded latency. If it's hard real-time that means deterministically bounded, while soft real-time means probabilistically bounded. What that bound is (or the distribution for soft real-time systems) is up to the system designer. It could be a microsecond, could be a day. But it has to exist, whereas in a GPOS there's no bound at all.
Telling people to increase their indebtedness to buy index funds right now is pretty dubious advice.