There are two layers to its utility. For overall circuit design (not IC design specifically), it's a tool for simulating the analog behaviour of a circuit. Two frequently used modes are "transient response" and "AC steady-state response". The first shows you the voltages and currents in the circuit in response to a changing input (e.g. the input goes from 0V to 3.3V). The second sweeps across a user-specified range of frequencies and shows you the voltages and currents to expect across that range. Linear devices (resistors, capacitors, inductors) are generally easy to work out by hand, but when you start adding non-linear devices (diodes, transistors, etc.) it quickly becomes impossible to hand-calculate with any real degree of accuracy. Many vendors provide SPICE models for their parts, so you can often get decently accurate simulation outputs. To your question "when is it not useful?": you have to be very careful to look at the simulation output and use your brain to determine whether the results actually make sense; it's quite possible to mess up your netlist or device models and get output that is completely wrong but still simulates fine.
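To make the transient-response idea concrete, here's a toy sketch (plain Python rather than actual SPICE, with made-up component values) of an RC low-pass filter responding to a 0V to 3.3V step; a real simulator does far more sophisticated numerical integration, but the basic idea is the same:

    # Toy transient analysis: RC low-pass filter driven by a 0 V -> 3.3 V step.
    # Forward-Euler integration of dVout/dt = (Vin - Vout) / (R * C).
    # Values are illustrative, not from any real design.
    R = 10e3        # 10 kOhm
    C = 100e-9      # 100 nF -> time constant RC = 1 ms
    dt = 1e-6       # 1 us time step
    v_out = 0.0
    for step in range(5000):            # simulate 5 ms
        t = step * dt
        v_in = 3.3 if t >= 0.0 else 0.0
        v_out += dt * (v_in - v_out) / (R * C)
        if step % 1000 == 0:
            print(f"t={t*1e3:.1f} ms  Vout={v_out:.3f} V")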
For IC design specifically, all of the above is still true, but with a little extra. One of the challenging parts of IC design is the fact that every fab/process behaves differently, and at smaller and smaller feature sizes those behaviours differ more and more from the basic first-order models. To make this kind of design even possible, the fabs do their best to characterize and model what transistors made on the process will do as a function of their geometry. This is generally considered highly proprietary data, and most of the commercial simulators provide a mechanism for the fab to release these model files (sometimes called a PDK: process design kit) in an encrypted format that the simulator can decrypt but the user can't see. The designer can parameterize the transistors in their design by manufacturing parameters (e.g. gate width, well depth, etc.) and get reasonably accurate simulations... with the same caveat as above: you always have to be vigilant to ensure that the output values actually make sense; the simulator can produce absolutely nonsensical results if the input is bad.
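For a rough sense of what "first-order model parameterized by geometry" means, here's the textbook square-law expression for saturation drain current as a function of gate width and length, sketched in Python with made-up numbers; real PDK models (BSIM and friends) layer hundreds of parameters on top of this:

    # First-order (square-law) MOSFET saturation current as a function of geometry:
    #   Id = 0.5 * mu * Cox * (W / L) * (Vgs - Vth)^2
    # All values are illustrative; real process models are far more complex.
    def drain_current_sat(w, l, vgs, mu_cox=200e-6, vth=0.5):
        """w, l in metres; vgs, vth in volts; mu_cox in A/V^2."""
        if vgs <= vth:
            return 0.0          # device is off (ignoring subthreshold conduction)
        return 0.5 * mu_cox * (w / l) * (vgs - vth) ** 2

    # In this first-order view, doubling the width roughly doubles the current.
    print(drain_current_sat(w=1e-6, l=0.18e-6, vgs=1.2))   # ~2.7e-4 A
    print(drain_current_sat(w=2e-6, l=0.18e-6, vgs=1.2))   # ~5.4e-4 A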
The other "not useful" part is that the simulation runtime can get very long as the complexity of the design grows. I haven't been in that industry for almost 20 years now but back then you were generally limited to simulating small subcircuits (say 100s of components) if you wanted anything close to quick results. Simulating larger designs would take days or weeks.
That is 100% the issue. This is really low quality heat. Making it better would require even more energy input (e.g. a heat pump) because we can’t safely run electronics hot enough to generate high quality process heat.
The times I've used TLA+ in anger have been generally related to lower-level "communicating state machines". One was a LoRaWAN handshake with the possibility of lost radio packets, another was a slightly-higher-level Bluetooth message exchange between a phone and a device.
In the first situation I wanted to ensure that the state machine I'd put together couldn't ever deadlock. There's a fairness constraint you can turn on in TLA+ that basically says "at some point in every timeline it will choose every path instead of just looping on a specific failure path". The model I put together ensured that, assuming at some point there was a successful transmission and reception of packets, the state machines on both sides could handle arbitrary packet loss without getting into a bad state indefinitely.
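For flavour, here's a much-simplified Python analogue of what TLC does with a spec like that: enumerate every reachable (sender, receiver) state of a lossy stop-and-wait exchange and assert that no reachable state is a deadlock. The protocol below is a toy invented for illustration, not the actual LoRaWAN handshake:

    # Exhaustively explore the joint state space of a toy lossy handshake and
    # check that every reachable state has at least one enabled action.
    from collections import deque

    def next_states(state):
        sender, receiver = state
        succ = set()
        if sender == "IDLE":
            # Send a message; the radio may drop it.
            succ.add(("WAIT_ACK", "GOT_MSG"))   # delivered
            succ.add(("WAIT_ACK", receiver))    # lost
        if sender == "WAIT_ACK" and receiver == "LISTEN":
            # Timeout: retransmit; again it may be dropped.
            succ.add(("WAIT_ACK", "GOT_MSG"))   # delivered
            succ.add(("WAIT_ACK", "LISTEN"))    # lost (self-loop)
        if receiver == "GOT_MSG":
            # Receiver acks; the ack may be dropped.
            succ.add(("IDLE", "LISTEN"))        # ack delivered
            succ.add((sender, "LISTEN"))        # ack lost
        return succ

    init = ("IDLE", "LISTEN")
    seen, queue = {init}, deque([init])
    while queue:
        s = queue.popleft()
        succ = next_states(s)
        assert succ, f"deadlock: no enabled action in {s}"
        for n in succ - seen:
            seen.add(n)
            queue.append(n)
    print(f"{len(seen)} reachable states, no deadlock")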
On the Bluetooth side it was the converse: a vendor had built the protocol and I believed that it was possible to undetectably lose messages based on their protocol. The model I put together was quite simple and I used it to give me the sequence of events that would cause a message to be lost; I then wrapped that sequence of events up in an email to the vendor to explain a minimal test case to prove that the protocol was broken.
For your specific example, I'd suggest you think about invariants instead of actions. I'd probably split your example into two distinct models though: one for the connection retry behaviour and one for the queued edits.
The invariant for the first model would be pretty straightforward: starting from the disconnected state, ensure that all (fair) paths eventually lead to the connected state. The fairness part is necessary because without it you can end up in an obvious loop of AttemptConnect -> Fail -> AttemptConnect -> Fail... that never ends up connected. A riff on that that gets a little more interesting was a model I put together for an iOS app that was talking to an Elixir Phoenix backend over Phoenix Channels. Not only did it need to end up with a connected websocket but it also needed to successfully join a channel. There were a few more steps to it and the model ended up being somewhat interesting.
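A hedged sketch of that first model in plain Python (state names and transitions invented for the example): without fairness machinery, all you can easily check by hand is the weaker property "Connected stays reachable from every reachable state"; it's the fairness assumption in TLC that upgrades this to "every fair behaviour actually gets there":

    # Toy connection-retry model. TRANSITIONS maps each state to its possible successors.
    TRANSITIONS = {
        "Disconnected": {"Connecting"},                 # AttemptConnect
        "Connecting":   {"Connected", "Disconnected"},  # success or failure
        "Connected":    {"Disconnected"},               # the link can drop again
    }

    def reachable(start):
        seen, stack = {start}, [start]
        while stack:
            for nxt in TRANSITIONS[stack.pop()]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

    # Fairness-free check: from every reachable state, "Connected" is still reachable.
    # The obvious AttemptConnect -> Fail -> AttemptConnect loop doesn't violate this;
    # it's the fairness constraint that rules that loop out as a counterexample.
    for state in reachable("Disconnected"):
        assert "Connected" in reachable(state), f"stuck: cannot reach Connected from {state}"
    print("Connected remains reachable from every state")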
The second model in your example is the edits model. What are the invariants here? We completely ignore the websocket part of it and just model it as two (or three!) state machines with a queue in between them. The websocket model handles the reconnect logic. The key invariant here is that edits aren't lost. In the two-state-machine model you're modelling a client that's providing edits and a server that's maintaining the overall state. An obvious invariant is that the server's state eventually reflects the client's queued edits.
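As a rough illustration (a randomized sanity check in Python rather than exhaustive model checking, and the retransmit-until-acked rule is my own assumption): a client that keeps resending an edit until it's acknowledged never loses one, even over a very lossy link:

    # Toy two-machine model: a client queues edits, the link may drop messages,
    # the client retransmits anything unacknowledged, and the server applies edits
    # in order (skipping duplicates). Invariant: nothing the client queued is lost.
    import random

    def run(num_edits=5, drop_rate=0.5, seed=0):
        rng = random.Random(seed)
        pending = list(range(num_edits))   # edits not yet acknowledged
        server = []                        # edits the server has applied, in order
        while pending:
            edit = pending[0]
            if rng.random() > drop_rate:           # the edit survives the flaky link
                if not server or server[-1] != edit:
                    server.append(edit)            # apply, ignoring duplicates
                if rng.random() > drop_rate:       # the ack survives the link too
                    pending.pop(0)                 # client can stop retransmitting
            # else: dropped somewhere; the client retransmits on the next loop
        assert server == list(range(num_edits)), "an edit was lost or reordered"

    for seed in range(1000):
        run(seed=seed)
    print("no lost edits across 1000 randomized runs")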
The three-state-machine model is where you're going to get more value out of it. Now you've got TWO clients making edits, potentially on stale copies of the document. How do you handle conflicts? There's going to be a whole sequence of events like "client 1 receives document v123", "client 2 receives document v123", "client 2 makes an edit to v123 and sends to server", "server accepts edit and returns v124", "client 1 makes an edit to v123", "server does...?!"
I haven't reviewed this model at all but a quick-ish google search found it. This is likely somewhat close to how Google Docs works under the hood: https://github.com/JYwellin/CRDT-TLA
Heh, it’s -11C here right now and my thermostat is set to drop to 65F overnight.
Also, with respect to the metric/imperial systems of measurement… officially the government is all metric, but due to the history of it all there will be a bunch of regulations that say things like “the toilet must be at least 228.6mm away from the wall” because the pre-metric standard was 9 inches.
And a final one for the prairies: in the 1800s there was the Dominion Land Survey, which carved us up into 1 mile x 1 mile squares. They did a truly impressive job of it. However, the edges of these squares are where the road allowances are, which means that despite the speed limit being in km/h, you are almost certainly going to be travelling N miles down the highway to get to your destination.
Have a look at Fosi Audio. I'm currently using a BT30D to drive the passive speakers from an old Samsung integrated amplifier+receiver+2014-era "Smart TV" type system that died. It only has 1 analog input and Bluetooth, but it looks like they have other products in a similar form factor that can take multiple inputs (e.g. the P4 Mini). I was skeptical but needed something cheap to drive those speakers and am quite impressed.
Boids, Game of Life, Genetic Algorithms, Pixel Shaders...
All so satisfying to play with.
One of my favorites was when I was sure I was right about the Monty Hall problem, so I decided to write a simulator, and my fingers typed the code... and then my brain had to read it, and realize I was wrong. It was hilarious. I knew how to code the solution better than I could reason about it. I didn't even need to run the program.
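For anyone who wants to repeat the exercise, here's a minimal sketch of that kind of simulator (not the program described above, just the standard approach):

    # Monty Hall simulation: compare "always stay" vs "always switch".
    import random

    def play(switch, rng):
        doors = [0, 1, 2]
        car = rng.choice(doors)
        pick = rng.choice(doors)
        # The host opens a door that is neither the pick nor the car.
        opened = rng.choice([d for d in doors if d != pick and d != car])
        if switch:
            pick = next(d for d in doors if d != pick and d != opened)
        return pick == car

    rng = random.Random(42)
    trials = 100_000
    stay = sum(play(False, rng) for _ in range(trials)) / trials
    swch = sum(play(True, rng) for _ in range(trials)) / trials
    print(f"stay wins ~{stay:.3f}, switch wins ~{swch:.3f}")   # ~0.333 vs ~0.667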
Counterpoint: if a small part of the process is getting tweaked, how responsive can the team responsible for these apps be? That’s the killer feature of spreadsheets for business processes: the accountants can change the accounting spreadsheets, the shipping and receiving people can change theirs, and there’s no team in the way to act as a bottleneck.
That’s also the reason that so-called “Shadow IT” exists. Teams will do whatever they need to do to get their jobs done, whether or not IT is going to be helpful in that effort.
I've seen many attempts to turn a widely used spreadsheet into a webapp. Eventually, it becomes an attempt to re-implement spreadsheets. The first time something changes and the user says "well, in Excel I would just do this...", the dev team is off chasing existing features of Excel for eternity, the users are pissed because it takes so long and is buggy, and meanwhile Excel is right there, ready and waiting.
I always see this point mentioned in "app vs. spreadsheet" discussions, but no one gives a concrete example. The whole point of using a purpose-built app is to give some structure and consistency to the problem. If people are replicating spreadsheet features, then they needed Excel to begin with, since that is a purpose-built tool for generalizing a lot of problems. It's like saying my notebook and pen are already in front of me, so why would I ever bother opening an app? Well, because the app provides some additional value.
Also, any software implemented to do a task which knowledge workers previously used a spreadsheet for, will inevitably get this feature request from users sooner or later:
It's when the users start taking care of IT issues themselves. Maybe the name comes from the Shadow Cabinet in England?
Where it might not be obvious is that IT in this context is not just pulling wires and approving tickets, but is "information technology" in the broader sense of using computers to solve problems. This could mean creating custom apps, databases, etc. A huge amount of this goes on in most businesses. Solutions can range from trivial to massive and mission-critical.
I think the term is mainly because it tends not to be very visible or legible to the organization as a whole, and that's probably the main risk of it: either someone leaves and a whole section of the IT infrastructure collapses, or someone sets up something horrifically insecure and the company gets pwned. It stays invisible especially because most IT departments hate it, so there's a strong incentive to keep it quiet. (I personally think IT organizations should consider shadow IT a failing on their part, and should either seek out ways to collaborate with those setting it up or figure out what's lacking in the service they provide to the rest of the company that causes them to get passed over.)
That's quite possible. I've done a certain amount of it myself. A couple of programs that I wrote for the factory 15+ years ago are being used continually for critical adjustment and testing of subassemblies. All told it's a few thousand lines of Visual Basic. Not "clean code" but carefully documented with a complete theory of operation that could be used as a spec for a professionally written version.
My view is that it's not a failing, any more than "software development can't be estimated" is, but a fact of life. Every complex organization faces the dilemma of big versus little projects, and ends up having to draw the line somewhere. It makes the most sense for the business, and for developer careers, to focus on the biggest, most visible projects.
The little projects get conducted in shadow mode. Perhaps a benefit of Excel is a kind of social compromise, where it signals that you're not trying to do IT work, and IT accepts that it's not threatening.
There's a risk, but I think it's minimal. Risk is probability times impact, measured in dollars. The biggest risks come from the biggest projects, just because the potential impact is big. Virtually all of the project failures that threaten businesses come from big projects that are carried out by good engineers using all of the proper methods.
It's where you have processes, etc., set up to manage your IT infrastructure, but those very processes often make it impossible, or too time-consuming, to actually use anything.
The team that needs it ends up managing things itself without central IT support (or visibility, or security, etc.).
Think being given a locked-down laptop and no admin access. Either get IT to give you admin access, or buy another laptop that isn't visible to IT and lets you install whatever you need to get your job done.
I'm laughing because I used that exact same phrase: "shit umbrella". Like some of the other replies mentioned, telling your team it's raining is great. The balance I found was to let them know what's coming and why, but to let leadership's "pivots" stabilize for a few days before sharing the unfiltered shit stream with my reports. This meant that the team still knew what was going on early but didn't panic as much when there was a sudden crazy random request from leadership that would be highly disruptive.
I've thoroughly enjoyed using ImGui for tooling around image processing, computational geometry, a bunch of 3D projection stuff. The fact that it's based on OpenGL or Vulkan or whatever backend you want is a big win for this kind of work. I can just take a bunch of pixels, throw them into a texture, and render some quads with those textures painted on them after going through some 2D transformations or 3D projection/transformation. It's quite beautiful for all of this. ImPlot for doing basic data plotting and the built-in ImGui widgets for controlling the whole thing.