I work at an ISP that offers fixed wireless Internet.
For marketing purposes I generate viewsheds around each of our ~500 towers so we can get an idea of which suburbs to market to.
At the time of sale, my system calculates the line of sight from the access point on the tower to the customer's rooftop to determine the height of the pole (if any) needed to get service.
Like the OP, we re-sampled (gdalwarp, raster2pgsql) some of the 15cm lidar data to ~1m to get it down to a manageable size (7TB) and run it on a single bare-metal PostGIS instance (500GB RAM, 64 cores).
Radio waves at 5GHz are quite 'fat', so we need to allow for that in LOS calculations as per [0].
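For those playing along at home, the "fatness" is the first Fresnel zone, and its radius is easy to compute. A minimal Rust sketch (the function name and the example link geometry are mine, not our production code):

    // First Fresnel zone radius in metres at a point along the link.
    // d1/d2 are the distances (m) from each end to the sample point.
    fn fresnel_radius_m(d1: f64, d2: f64, freq_hz: f64) -> f64 {
        const C: f64 = 299_792_458.0; // speed of light, m/s
        let wavelength_m = C / freq_hz;
        (wavelength_m * d1 * d2 / (d1 + d2)).sqrt()
    }

    fn main() {
        // Midpoint of a 5 km link at 5 GHz: ~8.7 m of clearance needed,
        // so a pencil-thin centre-line LOS check is nowhere near enough.
        // A common rule of thumb is to keep ~60% of this zone clear.
        let r = fresnel_radius_m(2_500.0, 2_500.0, 5.0e9);
        println!("first Fresnel zone radius: {r:.1} m");
    }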
The GIS magic mostly sits in PostGIS, and we use a number of datasets to solve problems (a sketch of a typical query follows the list):
* Shuttle Radar Topography Mission - Digital Elevation Model and Digital Surface Model, 30m grid [1]
* Building footprints for all of Australia [2]
* National Roads [3]
* Property boundaries (cadastre) [4]
* All Australian addresses [5]
* Australian suburbs [6]
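To give a flavour of the PostGIS side, here's a hedged sketch of sampling the surface model along a tower-to-rooftop path from Rust via sqlx. The table and column names (`dsm`, `rast`) just follow the raster2pgsql defaults, and the 500-point sampling density is an assumption for illustration, not our actual schema or code:

    // Sketch only: sample a ~1 m DSM at 500 points along a LOS path.
    // Assumes sqlx with the `postgres` feature and a raster table `dsm`
    // loaded with raster2pgsql (default raster column name `rast`).
    use sqlx::PgPool;

    #[derive(sqlx::FromRow)]
    struct Sample {
        frac: f64,             // 0.0 at the tower, 1.0 at the rooftop
        height_m: Option<f64>, // NULL where the raster has no data
    }

    async fn terrain_profile(pool: &PgPool, line_wkt: &str) -> sqlx::Result<Vec<Sample>> {
        sqlx::query_as::<_, Sample>(
            r#"
            WITH path AS (SELECT ST_GeomFromText($1, 4326) AS geom),
            pts AS (
                SELECT i::float8 / 500 AS frac,
                       ST_LineInterpolatePoint(path.geom, i::float8 / 500) AS pt
                FROM path, generate_series(0, 500) AS i
            )
            SELECT pts.frac, ST_Value(dsm.rast, pts.pt) AS height_m
            FROM pts
            JOIN dsm ON ST_Intersects(dsm.rast, pts.pt)
            ORDER BY pts.frac
            "#,
        )
        .bind(line_wkt)
        .fetch_all(pool)
        .await
    }

From a profile like that you can check every sample against the Fresnel-clearance line between the two endpoints.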
For the front end we use a VueJS app (quasar.dev) with DeckGL on top of Google Maps to visualise the LOS path. The back end is Rust (axum/sqlx).
GIS is a very interesting area to work in - if I had any more fun they might start charging me admission to come to work!
I have recently introduced Rust into my workplace. We basically build Go microservices supporting VueJS front ends.
In my opinion Rust is generally superior to Go for basic JSON web services communicating with the usual suspects: Postgres, RabbitMQ, etc. I'd say it is far superior to the JVM approach for our problem space.
I found the JetBrains tooling for Rust to be equivalent to what is available for Go. Compile times are on the order of 10 seconds for a 'medium-sized' web service, but of course YMMV. This is slightly annoying, but comparable with large Java/Spring projects I've worked on.
I presented an example web service (Axum) that was very similar in architecture to the Go approach we had been using: router, handlers, middleware, etc. Migrating from Go to Rust is not that bad at all. Rust has a great error-handling story and a better type system, so the code is much cleaner and easier to understand.
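As a rough illustration, the shape of such a service looks like this (axum 0.7-era API; the route, handler, and middleware names are illustrative, not the actual service I presented):

    use axum::{
        extract::Request,
        middleware::{self, Next},
        response::Response,
        routing::get,
        Json, Router,
    };

    // Handler: a plain async fn returning JSON, much like a Go
    // http.HandlerFunc writing to the ResponseWriter.
    async fn health() -> Json<serde_json::Value> {
        Json(serde_json::json!({ "status": "ok" }))
    }

    // Middleware: wraps every request, analogous to a Go middleware chain.
    async fn log_requests(req: Request, next: Next) -> Response {
        let path = req.uri().path().to_owned();
        let res = next.run(req).await;
        println!("{} -> {}", path, res.status());
        res
    }

    #[tokio::main]
    async fn main() {
        let app = Router::new()
            .route("/health", get(health))
            .layer(middleware::from_fn(log_requests));

        let listener = tokio::net::TcpListener::bind("0.0.0.0:8080").await.unwrap();
        axum::serve(listener, app).await.unwrap();
    }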
Using SQLx means you get compile-time checking of your queries against the database schema, including knowing whether a returned field can be NULL and requiring it to be Option<> if so.
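For instance (a sketch only; the `towers` table and its `decommissioned_at` column are hypothetical), `sqlx::query!` checks the SQL at build time against the database pointed to by `DATABASE_URL`:

    // Sketch: `sqlx::query!` verifies this SQL at compile time against a
    // dev database. Because the hypothetical `decommissioned_at` column
    // is nullable, the generated field is an Option<_>, and ignoring the
    // NULL case simply won't compile.
    use sqlx::PgPool;

    async fn tower_status(pool: &PgPool, id: i64) -> sqlx::Result<()> {
        let row = sqlx::query!(
            "SELECT name, decommissioned_at FROM towers WHERE id = $1",
            id
        )
        .fetch_one(pool)
        .await?;

        // row.name: String (NOT NULL column);
        // row.decommissioned_at: Option<_> (nullable column; the inner
        // type depends on the column type, e.g. chrono::NaiveDateTime).
        match row.decommissioned_at {
            Some(when) => println!("{} decommissioned at {when}", row.name),
            None => println!("{} is active", row.name),
        }
        Ok(())
    }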
The compiler is picky, but I quickly came to appreciate that 'once it compiles, it will very likely just work'. It is like having a pair programmer helping out.
It took me about two months to learn which crates to put together, but now that I have paid that price, I am more productive in Rust than in Go.
Rust uses less memory and is faster/safer than Go, so we can deploy more services on the same hardware, with less downtime, which saves $.
You won't find an army of Rust developers to hire, but you will find an army of people who want to learn/use Rust at $dayjob, and these people will tend to be motivated self-learners that you want to have on your team. Advertising a Rust role will make your business more attractive to better candidates in my experience.
I have been a developer for 20 years; mostly Java/Scala/Kotlin, then Go. Now that I have learned to use Rust, there is no way I am going back to those languages!
I think they make a category error by putting ChatGPT etc. in the General column. As far as I can tell we only have narrow definitions of 'intelligence', and ChatGPT falls into one of those. I don't know of any general agreement on what 'general intelligence' is in people, so how can we categorise anything as AGI? Knowing a bit about how ChatGPT works, I feel it is a lot more like a chess program than a human.
ChatGPT is the most general system ever created and packaged. You can throw arbitrary problems at it and get half-decent solutions for most of them. It can summarize and expand text, translate both explicitly and internally[0], play games, plan, code, transcode, draw, cook, rhyme, solve riddles and challenges, do basic reasoning, and many, many other things. Whether one is leaning more towards "stochastic parrot" or more towards "sparks of AGI" - it's undeniable that it's a general system.
--
[0] - The whole "fine-tune an LLM on a super-specific task in only one language X (which is not English), and its performance on that task improves in languages other than X" phenomenon, indicating it's not just learning tokens, but the meanings behind them too.
It can't cook; it can talk about cooking. It wouldn't be able to get a pan out of a drawer. I know all we do these days is produce text tokens on the internet, but producing text is in fact itself a domain-specific task. If you can talk about opening a can of beans, you're an LLM. If you can do that and actually open the physical can, we may be a little bit further towards general intelligence.
We don't even have a full self-driving system, the limited systems we have are not LLMs, and there isn't even a system on the horizon that can drive, talk to you about the news, and cook you dinner.
If that were a valid criticism of its intelligence, Stephen Hawking would have spent most of his life categorised as a vegetable.
Also:
> We don't even have a full self driving system,
debatable given the accident rate of the systems we do have
> the limited systems we have are not LLMs,
they tautologically are LLMs
> and there isn't even a system on the Horizon that can drive and talk to you about the news and cook you a dinner
There are at least four cooking robots in use, and those are just narrow AI, used to show off. Here's one from 14 years back: https://youtu.be/nv7VUqPE8AE
Stephen Hawking lost the capacity to move because his ALS paralyzed him, not because his brain lacked the capacity. Come on, this has to be the worst analogy of the year. Also, no: driving systems are not LLMs. LLMs are large language models, and no existing self-driving system runs on a language model. And that's not what the word "tautology" means; "all bachelors are unmarried" is a tautology.
Ah, you wrote that unclearly; it sounded like you were asserting that no system was an LLM, rather than that no driving system was.
So while your claim is still false, I will accept that it isn't tautologically so.
Likewise, I am demonstrating that the definition you're using here is a poor one because it rules out Stephen Hawking, and for that purpose the reason why he couldn't do things is unimportant: your standard still ruled him out.
Transformer models are surprisingly capable in multiple domains, so even though ChatGPT has seen relatively few examples of labeled motor-control input/output sequences and corresponding feedback values, this was my first search result for "llm robot control": https://github.com/GT-RIPL/Awesome-LLM-Robotics (note that several entries specifically mention ChatGPT).
That's not a category error. GPT is general: it is able to perform many tasks (creative writing; playing chess, poker, and other games; language translation; code; robot piloting; etc.).
Arguments like yours are why I regard "generality" (certainly in the context of AGI) as a continuum rather than a boolean. AlphaZero is more general than AlphaGo Zero, as the former can play three games and the latter only one. All LLMs are much more general than those game-playing models, even if they aren't so wildly superhuman on any specific skill, and `gpt-4-vision-preview` is more general than any 3.5 model, as it can take image inputs while the 3.5 models can't.
Yes. If you read "Computing Machinery and Intelligence", this idea of generality as a continuum is actually a point Turing makes (albeit in different words). What constitutes the generality of an AI is going to be very sensitive to your metric, and the assessment will vary a lot from observer to observer.
It seems that making a 'better GPT-3' or similar model is like climbing higher up a tall tree and claiming you are closer to the moon... technically true, but...