I'll be curious what the collector's market looks like for the Apple Vision Pro a decade from now. I imagine there are far fewer of these things out in the world than Apple hoped there would be, and I wonder how that will impact long-term scarcity.
I'd be interested in a podcast app that lets me subscribe to written RSS content and uses AI to (transparently) convert it into audio episodes.
The time/attention it takes to engage with written content can be a barrier for me. The hands-free experience of listening to something is so convenient that I find I engage with audio far more often than written text.
The only major downside of audio is the challenge of note-taking and capturing the snippets of info I want to hold onto.
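The core pipeline seems pretty tractable, for what it's worth. Here's a minimal sketch in Go, assuming a placeholder feed URL and a hypothetical synthesizeSpeech function standing in for whatever TTS backend you'd actually use:

```go
package main

import (
    "encoding/xml"
    "fmt"
    "io"
    "net/http"
    "os"
)

// rssFeed maps just enough of an RSS document to pull out article text.
type rssFeed struct {
    Channel struct {
        Items []struct {
            Title       string `xml:"title"`
            Description string `xml:"description"`
        } `xml:"item"`
    } `xml:"channel"`
}

func main() {
    // Placeholder URL; any written-content RSS feed would do.
    resp, err := http.Get("https://example.com/blog/feed.xml")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    raw, err := io.ReadAll(resp.Body)
    if err != nil {
        panic(err)
    }

    var feed rssFeed
    if err := xml.Unmarshal(raw, &feed); err != nil {
        panic(err)
    }

    // Turn each article into an "episode" via some TTS backend.
    for i, item := range feed.Channel.Items {
        audio := synthesizeSpeech(item.Description) // hypothetical TTS call
        name := fmt.Sprintf("episode-%d.mp3", i+1)
        if err := os.WriteFile(name, audio, 0o644); err != nil {
            panic(err)
        }
        fmt.Println("wrote", name, "for", item.Title)
    }
}

// synthesizeSpeech is a placeholder for a real text-to-speech service
// (cloud API, local model, etc.); it does not produce real audio here.
func synthesizeSpeech(text string) []byte {
    return []byte(text)
}
```

The interesting product work would be everything around this: keeping the generated feed in sync with the source, and being transparent that the episodes are synthesized.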
One thing confuses me... Why invest so much into autonomy on streets/roads when most (all?) US trains/trams/buses still have operators?
Elon said himself that buses have 5x the operating cost of robo-taxis ($1 vs. 20¢ per hour), but failed to mention that a typical bus carries 25x the people. Presumably the cost delta is due to payroll for human operators & janitors, so if buses already have 5x the max throughput per dollar in a worst-case comparison, why not put more energy into driving down costs on the more efficient modality?
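The back-of-the-envelope math behind that "throughput per dollar" claim, using the figures quoted above (which I haven't verified), as a quick sketch:

```go
package main

import "fmt"

func main() {
    // Figures as quoted above (unverified): bus ~$1/hr, robo-taxi ~$0.20/hr,
    // and a typical bus carrying ~25x the passengers of a single robo-taxi.
    busCostPerHour, taxiCostPerHour := 1.00, 0.20
    busPassengers, taxiPassengers := 25.0, 1.0

    busPerPassenger := busCostPerHour / busPassengers     // $0.04 per passenger-hour
    taxiPerPassenger := taxiCostPerHour / taxiPassengers  // $0.20 per passenger-hour

    fmt.Printf("bus: $%.2f/passenger-hour, robo-taxi: $%.2f/passenger-hour (~%.0fx cheaper)\n",
        busPerPassenger, taxiPerPassenger, taxiPerPassenger/busPerPassenger)
}
```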
I imagine it would be an easier problem to solve too, as you could define constraints on your operating environment. Dedicated travel lanes, rail, and preprogrammed routes seem like they'd massively reduce the complexity of the problem.
I could also envision a decent financial argument for moving from a customer base of general consumers to one of cities & local governments. If a 5x operating-cost reduction is feasible, I could see "public transit automation" making for a pretty compelling capital project.
I would really love to see a "maps" app that focuses specifically on local discovery for businesses and other points of interest. Or at least one that makes a real attempt to delineate between getting you to a known place and finding you new places to go.
Most mapping apps seem to blend navigation & discovery into a single experience that winds up being worse at both.
Location: Chicago, IL
Remote: Full/Hybrid
Willing to relocate: No
Technologies: Ruby/Rails, Go, JS/React, AWS, Terraform (all used professionally) w/ hobby experience in Elixir & other BEAM languages
Résumé/CV: https://bit.ly/3L7xdq7
Email: contact <at> henrynelsonfirth <dot> com
Full-stack engineer with nearly a decade of total professional experience, and 2+ years of experience working in each of the following domains:
- Front-end React projects w/ an internal npm package repo
- Back-end Ruby/Rails monoliths
- AWS/Golang serverless microservices
- Engineering Management
I also have a strong interest in Elixir and other BEAM languages, and have been staying active as a full-stack Elixir hobbyist for 5+ years while bouncing between different languages/domains professionally.
While open to all opportunities, I am particularly interested in using Elixir professionally full-time :)
Not quite. pkl is a language mostly designed for parsing and serializing data, while TypeSpec is designed to describe APIs and the structure of the data those APIs accept. You can actually combine the two technologies as follows:
1. Read a .pkl file from disk and generate (for example) a Person struct with first and last name fields and an age value.
2. Let's say that, according to some TypeSpec, the HTTP endpoint /greet accepts a POST request with a JSON payload containing a first and a last name. You convert your Person struct into a JSON literal (dropping the age field in the process) and send it to the HTTP endpoint.
3. You should receive a greeting from the HTTP endpoint as a response. The TypeSpec may define a greeting as a structure that contains the fields "message" and "language".
4. You can then use pkl to write that greeting structure to disk.
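Here's a minimal sketch of steps 1-3 in Go, with hypothetical names throughout: loadPersonFromPkl is a stand-in for whatever pkl evaluator binding you'd use (e.g. pkl-go), and the /greet URL and field names just mirror the example shapes above:

```go
package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "net/http"
)

// Person mirrors the fields defined in the example .pkl file.
type Person struct {
    First string
    Last  string
    Age   int
}

// GreetRequest is the JSON shape the example TypeSpec says /greet accepts
// (note: no Age field; it gets dropped).
type GreetRequest struct {
    First string `json:"first"`
    Last  string `json:"last"`
}

// Greeting is the response shape the example TypeSpec defines.
type Greeting struct {
    Message  string `json:"message"`
    Language string `json:"language"`
}

func main() {
    // Step 1: evaluate the .pkl file into a struct (hypothetical helper).
    p := loadPersonFromPkl("person.pkl")

    // Step 2: reshape Person into the request body the TypeSpec describes.
    body, _ := json.Marshal(GreetRequest{First: p.First, Last: p.Last})
    resp, err := http.Post("https://example.com/greet", "application/json", bytes.NewReader(body))
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    // Step 3: decode the Greeting the endpoint returns.
    var g Greeting
    if err := json.NewDecoder(resp.Body).Decode(&g); err != nil {
        panic(err)
    }
    fmt.Printf("%s (%s)\n", g.Message, g.Language)
}

// loadPersonFromPkl is a placeholder; in practice you'd use a pkl binding
// to evaluate person.pkl into a Person value.
func loadPersonFromPkl(path string) Person {
    _ = path
    return Person{First: "Ada", Last: "Lovelace", Age: 36}
}
```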
Sidenote: pkl also supports methods[1], but in almost all use cases you only use pkl to define which fields your data has. TypeSpec cares most about the methods/endpoints. Of course, you still need to define the shape of the arguments to those methods, and that is where you have a bit of overlap between the two technologies. I can imagine generating pkl definitions from TypeSpec definitions.
The backend has definitely suffered from "crazy hype-driven development" for the last 30 years. Perl, Python, Ruby, Java, PHP, Scala, Clojure, Go, and Rust have all had their brief moment as the silver bullet. Not to mention ops tooling: Vagrant, Docker, Kubernetes, Puppet, Chef, Ansible.
I wasn't alive in the 1970s, but I'm guessing that those who were would say it was just as faddish then as well.
The only time I've had success using AI to drive development work is in "writer's block" situations, where I'm staring at an empty file or using a language/tool I'm out of practice with or simply don't have enough experience in.
In these situations, getting something that doesn't work (even if I wind up being forced to rewrite it) is actually kinda helpful. The faster I get my hands dirty and start actually trying to build the thing, the faster I usually get it done.
Historically, the alternative has been trying to read the docs or man pages and getting overwhelmed and discouraged when they wind up being hard to grok.
As with many things in America, the government gets lobbied by private companies to create an unnecessary problem that those same companies then aim to profit from solving.
The nice thing is that there are plenty of socially acceptable, and every bit as objectively valid, reasons to advocate for policies aimed at reducing the use of cars in cities.