The first time I dabbled with SQL was a 3rd-year Software Engineering course built around a single group project: we managed it among ourselves by splitting tasks, conducting code reviews, and handling the build and release in teams.
I recall one group doing the project's login, which went much along the lines of what the OP's article touched on. Their code was essentially:
var success = false;
// Fetches the entire users table and scans it row by row in application code.
using var command = new SqlCommand("SELECT * FROM users", connection);
using var reader = command.ExecuteReader();
while (reader.Read())
{
    if ((string)reader["user"] == input_user && (string)reader["password"] == input_pass)
    {
        success = true; // and keeps looping anyway
    }
}
Yes. They selected the entire user table.
Yes. They iterated over the entire result set (even if the first row returned was a match).
Yes. That was "shipped" for the project.
No. My complaints that they should be leveraging the database for all the things they were doing wrong were ignored. "It's performant! Look! It logs in instantly!" Yeah, because there are 8 users in the database for this project; what about when it "ships" and there are 100,000? More?
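For contrast, a minimal sketch of the query they could have used instead, pushing the filtering to the database (the parameter syntax is illustrative, and in real code you'd compare a salted password hash rather than the raw password):

-- Let the database find the matching row; bound parameters, not string concatenation.
SELECT 1
FROM users
WHERE "user" = @input_user
  AND password = @input_pass;

With an index on the user column, that lookup stays instant at 100,000 rows, not just 8.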
---
My first real job dealing with a database wasn't much better. We were using an MS Access database with no normalized data. Our client's primary transaction data was spread across a single table of some 70 columns, many of which held values duplicated in some form or reflected very bad practices. Since joining this company I've sped up queries enormously and done things my older coworkers initially derided because they couldn't understand the syntax.
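To illustrate the kind of fix involved (hypothetical table and column names, written in generic SQL rather than Access's dialect), pulling repeated values out into their own table is what kills most of those duplicated columns:

-- Before (hypothetical): customer name and address repeated on every transaction row.
-- After: transactions reference a customers table instead.
CREATE TABLE customers (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    address     TEXT NOT NULL
);

CREATE TABLE transactions (
    transaction_id INTEGER PRIMARY KEY,
    customer_id    INTEGER NOT NULL REFERENCES customers (customer_id),
    amount         NUMERIC(12,2) NOT NULL
);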
TL;DR SQL, for some stupid reason, is still treated as second class to core languages and it is a goddamn shame.
Agree so much. And if you've ever seen a real SQL wizard in action, you realise how much can be done with it. Like most of the business logic of a system can be in the database, with an interface that's a set of stored procs/functions. And fast.
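A rough sketch of what that looks like in practice (hypothetical names, Postgres dialect): the application calls one function, and both the validation and the write live in the database.

CREATE FUNCTION place_order(p_customer_id integer, p_amount numeric)
RETURNS integer
LANGUAGE plpgsql
AS $$
DECLARE
    v_order_id integer;
BEGIN
    -- Business rule enforced next to the data, not in the application.
    IF p_amount <= 0 THEN
        RAISE EXCEPTION 'order amount must be positive, got %', p_amount;
    END IF;

    INSERT INTO orders (customer_id, amount)
    VALUES (p_customer_id, p_amount)
    RETURNING order_id INTO v_order_id;

    RETURN v_order_id;
END;
$$;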
> most of the business logic of a system can be in the database
The problem with this approach is the tooling and the lock-in.
If databases had first-class versioning support for their code objects (which could easily interoperate with git), testing automation, and a semblance of standardization across the industry, then a lot of people would be very happy to work with that model.
> The problem with this approach is the tooling and the lock-in.
I've seen a program rewritten or heavily refactored on top of an existing database more times than I've seen the database swapped on an app that had reached production (which I've seen zero times).
Consequently, I regard remaining "database agnostic" as having very little worth. If you pick a DB with a bunch of great features that can save you time, improve performance, and improve data integrity, use those features!
Plus, if you find yourself in that rewriting-or-heavily-refactoring job that I've seen a few times, your favorite person in the whole world will be whoever put all those annoying constraints and triggers and such in the DB itself. It'll make the operation far easier and safer.
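For a sense of what those in-database guardrails look like (hypothetical table and rules, Postgres syntax), a CHECK constraint and a trigger travel with the data no matter which application gets rewritten on top:

ALTER TABLE accounts
    ADD CONSTRAINT balance_non_negative CHECK (balance >= 0);

CREATE FUNCTION touch_updated_at() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
    -- Stamp every modified row, regardless of which app performed the UPDATE.
    NEW.updated_at := now();
    RETURN NEW;
END;
$$;

CREATE TRIGGER accounts_touch_updated_at
BEFORE UPDATE ON accounts
FOR EACH ROW EXECUTE FUNCTION touch_updated_at();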
While the tooling could definitely be better, a lot of those issues aren't so problematic if you just use Postgres: use a migration tool and store the migrations in git; use a tool like Zapatos to provide typing for your queries at the application layer; support multiple versions of stored procedures using schemas with a defined search order (sketched below); and test your procedures using pgTap.
Postgres-as-a-platform is definitely a new architectural trend, but because of companies like Supabase it's maturing quickly, and there are so many benefits to it when executed properly.
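As a rough sketch of that schema-versioning idea (hypothetical function and table names): each API version lives in its own schema, and a client pins itself to one via search_path.

CREATE SCHEMA api_v1;
CREATE SCHEMA api_v2;

CREATE FUNCTION api_v1.get_user_name(p_id integer) RETURNS text
LANGUAGE sql AS $$ SELECT name FROM users WHERE id = p_id $$;

-- v2 changes behaviour without breaking v1 callers.
CREATE FUNCTION api_v2.get_user_name(p_id integer) RETURNS text
LANGUAGE sql AS $$ SELECT display_name FROM users WHERE id = p_id $$;

-- A client picks a version by setting the resolution order.
SET search_path TO api_v2, public;
SELECT get_user_name(42);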
Came in to say pretty much the same thing. Discoverability is a huge issue... and even if you do things in a way that lends itself to that, it gets really clunky really quickly.
I may be misunderstanding here, because I'm not a dev and have only cursory experience with basic programming and SQL, but isn't this what dbt allows?
I'm all for straight-up queries and understanding them... even some more complex sprocs... but I'm not a fan of too much logic in the DBMS, since it's pretty much lock-in to a single vendor, limits breaking pieces out for scale, and makes things generally much harder to find/understand in practice. I'm a proponent of what I like to call discoverable code structures; sprocs/functions don't lend themselves to that.
yeah, I should have added "can be, not should be" ;)
Though I have met DB admins who insist that the only way of stopping bad data from getting into the DB is to have the DB do all the data manipulation, including a lot of what we would now consider business logic.