People also said similar things in the past about synths and drum machines (I'm a musician as well: guitar, piano and bass), and about things like GarageBand, which can now do backing drums and even basslines semi-automatically.
Some of those things enabled others to create new types of music or express themselves in different ways.
People have said that about robots and computers in the workplace, and indeed, since the 60s, more and more jobs have been automated. And there are fewer musicians now. Film scores and albums are produced with samples instead of bands or orchestras, reducing demand for session players, leading to less income for musicians and, in turn, fewer musicians.
And while automating dangerous jobs is a good thing, generating AI music isn't. It's not as unethical as generating deepfakes, but it's useless, and bad for society.
Not if you want better fidelity: starting around 2012/2013, the VFX industry for film moved from rasterisation to raytracing / pathtracing because of the higher fidelity it makes possible (on CPU initially, and a lot of final-frame rendering is still done on CPU today due to memory requirements, although lookdev is often done on GPU when the shaders / light transport algorithms can be matched between the GPU and CPU codepaths).
It required discarding a lot of "tricks" that had been learnt over the years to speed up rasterisation, and it made things slower in some cases, but it meant everything could use raytracing to compute visibility / occlusion rather than relying on shadow maps, irradiance caches and pointcloud SSS caches. That simplified workflows greatly and allowed high-fidelity light transport simulations of things like volume scattering in difficult media such as water/glass and hair (i.e. TRRT lobes), where it's very hard to get the medium transitions and light transport correct with rasterisation.
He covers this in the video: both engines have the same LRD (load reduction device), but it's more about how the bleed system behaves depending on whether there's an impact or not, and he doesn't know whether the other planes have the same flaw.
> or intentionally don't write const because writing it everywhere clutters up the code
I don't often like being judgemental (at least publicly!), but I'd argue that's just people being very bad developers...
You could argue that having to add '&mut' at call sites everywhere (i.e. the opposite of the way C++ does const, which is marked at the declaration rather than at the call site) also clutters up the code with verbosity, but it's still largely a good thing.
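To illustrate what I mean by call site vs target site (just a toy sketch with made-up function names): in C++ the const lives on the declaration, so the call site gives no hint about whether the argument can be mutated, whereas Rust makes you spell out '&mut' at the call itself.

    #include <cstdio>
    #include <vector>

    // Hypothetical helpers, purely for illustration: the const lives on
    // the declaration (the "target site"), not at the call site.
    void printTotal(const std::vector<double>& values)   // cannot modify 'values'
    {
        double total = 0.0;
        for (double v : values)
            total += v;
        std::printf("total: %f\n", total);
    }

    void scaleInPlace(std::vector<double>& values)        // may modify 'values'
    {
        for (double& v : values)
            v *= 2.0;
    }

    int main()
    {
        std::vector<double> samples = { 1.0, 2.0, 3.0 };

        // Both call sites look identical, so the reader can't tell from here
        // which call might mutate 'samples'. (In Rust the second one would
        // have to be written with an explicit '&mut samples'.)
        printTotal(samples);
        scaleInPlace(samples);
        printTotal(samples);
        return 0;
    }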
As someone who's used Lua a lot as an embedded language in the VFX industry (The Games industry wasn't the only one that used it for that!), and had to deal with wrapping C++ and Python APIs with Lua (and vice-versa at times!), this is indeed very annoying, especially when tracing through callstacks to work out what's going on.
Eventually you end up in a place where it's beneficial to have converter functions that show up in the call stack frames so that you can keep track of whether the index is in the right "coordinate index system" (for lack of a better term) for the right language.
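To give a rough idea (a sketch with made-up names, assuming the usual Lua 1-based vs C++ 0-based convention), keeping the +1/-1 in small named functions means the conversion is an explicit, searchable step rather than arithmetic scattered through the bindings:

    #include <cassert>
    #include <cstddef>

    // Hypothetical converters: named functions make the index "coordinate
    // system" explicit, and show up as frames in the call stack when not inlined.
    inline std::size_t luaIndexToCpp(int luaIndex)
    {
        assert(luaIndex >= 1);                          // Lua sequences are 1-based
        return static_cast<std::size_t>(luaIndex - 1);  // C++ containers are 0-based
    }

    inline int cppIndexToLua(std::size_t cppIndex)
    {
        return static_cast<int>(cppIndex) + 1;          // back to 1-based for Lua
    }

    int main()
    {
        // Round-trip sanity checks: Lua's first element is C++'s element 0.
        assert(luaIndexToCpp(1) == 0);
        assert(cppIndexToLua(0) == 1);
        assert(cppIndexToLua(luaIndexToCpp(42)) == 42);
        return 0;
    }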
Oh, that's super interesting - where in the VFX industry is Lua common? I typically deal with Python and maybe Tcl (I do mostly Nuke and pipeline integrations), and I can't think of a tool that is scripted in Lua. But I've never worked with Vega or Shake or whatever it is/was called.
Katana uses LuaJIT quite extensively for user-side OpScripts, and previously both DNeg and MPC (they've largely moved on to newer tech now) had quite a lot of Lua code...
It used to in older (pre-2.0) versions, but due to Python's GIL (and the fact Python's quite a bit slower than Lua anyway), using Python with AttributeScripts was pretty slow and inefficient, so 2.0 moved to Lua with OpScripts...
A 256-item float32 LUT for 8-bit sRGB -> linear conversion is definitely still faster than doing the division live (I re-benchmarked it on Zen4 and Apple M3 last month). However, floating-point division on newer microarchs is not as slow as it was on processors 10 years or so ago, so I can imagine a much larger LUT cache isn't worth it.
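For context, the kind of thing I mean is roughly this (a minimal sketch, not the benchmark code itself; it assumes the standard sRGB transfer function): the 256-entry table is built once from the full formula, and the per-pixel conversion then becomes a single lookup instead of a divide + pow per channel.

    #include <array>
    #include <cmath>
    #include <cstdint>
    #include <cstdio>

    // Full sRGB -> linear transfer function (what the LUT replaces per-pixel).
    static float srgbToLinear(std::uint8_t v)
    {
        const float c = v / 255.0f;
        return (c <= 0.04045f) ? c / 12.92f
                               : std::pow((c + 0.055f) / 1.055f, 2.4f);
    }

    // 256-entry float32 LUT: one entry per possible 8-bit code value.
    static std::array<float, 256> buildSrgbToLinearLUT()
    {
        std::array<float, 256> lut{};
        for (unsigned i = 0; i < 256; ++i)
            lut[i] = srgbToLinear(static_cast<std::uint8_t>(i));
        return lut;
    }

    int main()
    {
        static const std::array<float, 256> kLUT = buildSrgbToLinearLUT();

        // Per-pixel conversion is now just an indexed load.
        const std::uint8_t pixel[3] = { 128, 200, 16 };
        for (std::uint8_t channel : pixel)
            std::printf("%u -> %f\n", static_cast<unsigned>(channel), kLUT[channel]);
        return 0;
    }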
Does this include vectorized code? I stopped using LUTs for anything “trivial” probably 20 years ago because I rarely saw any improvement (in particular where it would benefit the overall runtime noticeably).
I've tried a lot in the past as well. After getting annoyed with proprietary OS X software (iBank in particular) back in 2009 or so, and not really liking GnuCash and KMyMoney (at least back in 2009), I ended up writing my own simple open-source app (native Cocoa, with a more recent Qt port for Linux) that I've been using ever since on a daily basis.
In terms of detail, I used to do very fine-grained breakdowns of categories, but now I don't really see the point: my app supports 'split transactions' (one of the reasons I actually made it, as existing solutions had poor support for them back in 2009), and I generally just use things like 'Food', 'Drinks' and 'Essentials' as categories, as it never really made sense (at least for me) to break them down with more accuracy.
But for things like coffee I do use 'Drinks:Coffee', so I can see how much I'm spending on fairly specific things; I guess it's a balance between whether it's worth the effort of recording them so accurately and how much use you actually make of the detail.
Similarly, things like 'Car:Fuel', 'Car:Service', etc...
At some point I really should do a first-principles analysis of why I track money... As far as I know, it mostly comes down to: 1. Is fraud happening? 2. Am I saving enough for retirement? Oh, and I guess 3. Taxes.
For fraud, I think it's basically a matter of whether we can recognize each transaction. You don't actually need to download transactions for that; you can just skim your monthly statements.
For saving, that's tricky because you need to recognize which categories are likely to increase during retirement and which will decrease. I gave that a single pass a while back, and now I have a count each month of the expense categories that will continue into retirement, along with a 12-month average, so I can get a sense of what my portfolio needs to be able to fund after I retire. For that, even though I have Banktivity, I also have to use a spreadsheet.
For taxes, I don't know if anything really makes that easy. It's hard to know what category breakdown you really need in order to tell whether you're capturing all your tax benefits, and my financial software doesn't tell me "oh, by the way, you'll want to split that transaction since some of it has a tax benefit."
In the early period after moving to the USA, I tracked money in and out in great detail for a few years, including splitting checks from stores. And while I did not set an explicit budget, I believe it allowed me to keep our finances healthy. It certainly decreased money-related anxieties and gave me a sense of control.
I stopped doing it after a few years, once I felt pretty secure financially. And that certainly coincided with more spending on things I would otherwise not have spent on...
Your grandparents tracked money because they were also verifying the math, which could have been done by hand. Now we assume the math is right, and we're checking for fraud.
Oh come on... there are lots of reasons. Understanding where the money goes. How much are you spending on dining out each month? How much does your car cost when you add it all up at the end of the year? It's easy to fool ourselves when it goes out $10 - $20 at a time.
This is true, but unless you have a motivation, i.e. somewhere more immediate that money could be going instead, you're kind of wasting your time (IMO).
If you want a vacation and can't afford it, or you want some cool home gadget and can't afford it, then sure, delve into your finances. But if that money you're saving is just going to sit around, then what's the point? If you already have a rainy-day fund and a 401K or equivalent, then you're good. Ultimately, money is worthless if you don't use it.
The reason I say this is because tracking money is not free. It's a mental burden. Do you really want that to be your business? How much mental energy are you willing to give it?
Because it sounds simple until you really want a coffee after work, but it turns out you don't have the budget and then you sit and cry in your car because that hypothetical coffee was the one thing tying you to reality.
That's been within the last year, I think - I too was using Linux and Firefox for Teams calls from 2020 until last year, when I had to move to the Teams Linux client (which they're deprecating, so I'm having to move to Chrome or Edge).
Everyone I work with is constantly badmouthing Teams. It's buggy and flaky, and they killed Linux support, which my company actually made use of. Either way, it doesn't matter since it's bundled. That has literally killed any chance of the competition getting a fair shake at our usage.
Teams doesn't have to be better, they're just bundled.
The company I work for used Slack and we were happy, but the higher-ups were looking to cut costs and noticed they had Teams for free, so guess what... bye bye Slack.
The customers I'm thinking of are in the public sector and quite non-technical; from us they learn that there are better options and realise that the tooling they have is causing them pain. Together with GDPR cases tightening up what software you're allowed to use, I expect this to make a difference.