
Accounting, specifically book-keeping, really plays to the strengths of LLMs - pattern matching within a bounded context.

The primary task in book-keeping is to classify transactions (from expense vouchers, bank transactions, sales and purchase invoices and so on) and slot them into the Chart of Accounts of the business.

LLMs can already do this well without any domain- or business-specific context. For example, a fuel entry is so obvious that they can match it to a similarly named account in the CoA.

And for the cases where human discretion is required, we can add a line of instruction to the prompt, and that classification is permanently encoded. A large chunk of these kinds of entries are repetitive in nature, so each such custom instruction is a long-term automation.
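
To make this concrete, here's a minimal sketch of such a classifier (the account names, the custom rule, and the OpenAI-style client call are my own illustrative assumptions, not a reference to any particular product):

    # Hypothetical sketch: classify a transaction into a Chart of Accounts.
    # Account names, the custom rule, and the model choice are assumptions.
    from openai import OpenAI

    client = OpenAI()

    CHART_OF_ACCOUNTS = [
        "Fuel Expense", "Freight Outward", "Office Supplies", "Bank Charges",
    ]

    # Each human-discretion decision becomes a permanent instruction.
    CUSTOM_RULES = [
        "Payments to 'Acme Logistics' are always 'Freight Outward', never 'Fuel Expense'.",
    ]

    def classify_transaction(description: str, amount: float) -> str:
        prompt = (
            "Classify this transaction into exactly one account from the list.\n"
            f"Accounts: {', '.join(CHART_OF_ACCOUNTS)}\n"
            f"Rules: {' '.join(CUSTOM_RULES)}\n"
            f"Transaction: '{description}', amount {amount:.2f}\n"
            "Answer with the account name only."
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content.strip()

    print(classify_transaction("HPCL PETROL PUMP UPI", 1450.00))  # e.g. "Fuel Expense"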

You might not have been speaking about simple book-keeping. If so, I'm curious to learn.



Audit is at the heart of accounting, and LLMs are the antithesis of an audit trail.


I'm sorry, I don't follow. The fact that you use an LLM to classify a transaction does not mean there is no audit trail for that classification. There should also be a manual verifier who's ultimately responsible for the entries, so that we do not abdicate responsibility to black boxes.
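
For instance, nothing stops the ledger from recording both the model's suggestion and the human sign-off on every posting. A minimal sketch, with field names that are my own assumptions rather than any standard schema:

    # Hypothetical audit-trail record for an LLM-assisted posting.
    # Field names are illustrative assumptions, not a standard schema.
    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass(frozen=True)
    class PostingAuditRecord:
        transaction_id: str
        suggested_account: str   # what the LLM proposed
        model_version: str       # which model/prompt produced the suggestion
        final_account: str       # what was actually posted
        verified_by: str         # the human ultimately responsible
        verified_at: datetime

    record = PostingAuditRecord(
        transaction_id="TXN-1042",
        suggested_account="Fuel Expense",
        model_version="gpt-4o-mini / prompt v3",
        final_account="Fuel Expense",
        verified_by="a.sharma",
        verified_at=datetime.now(timezone.utc),
    )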


If you mark data as "Processed by LLM", that in turn taints all inference from it.

Requirements for a human in the loop devolve into a box ticked by someone who doesn't realise the responsibility they have been burdened with.

Mark my words, some unfortunate soul will then be thrown under the bus once a major scandal arises from such use of LLMs.

As an example, companies aren't supposed to use AI for hiring; they are supposed to have all decisions made by a human in the loop. Inevitably this just means presenting a massive grid of outcomes to someone who never actually goes against the choices of the machine.

The more junior the employee, the "better". They won't challenge the system, they won't realise the liability they're setting themselves up with, and the company will more easily shove them under the proverbial bus if there is ever an issue.

Hiring is too nebulous: it's too hard to get concrete data for, and its outcomes are too hard to inspect to check properly.

Financial auditing, however, is the opposite of that. It's hard numbers. Inevitably, when discrepancies arise, people run around chasing other people to get all their numbers close enough to something that makes sense. There's enough human wiggle-room to get away with chaotic processes that still demand accountability.

This is possibly the worst place you could put LLMs, if you care about actual outcomes:

1. Mistakes aren't going to get noticed.

2. If they are noticed, people aren't going to be empowered to actually challenge them, especially once they're used to the LLM doing the work.

3. People will be held responsible for the LLM's mistakes, despite pressure to sign off (and the general sense of time pressure in audit is already immense).

4. It's a black box, so faults cannot be easily diagnosed; the best you can do is re-prompt and hope the same mistake doesn't happen again.


Well put. It should always be "Created by <person>" rather than "Processed by LLM". We can already see it with Claude Code - its commit messages contain a "Generated by Claude Code" line, and it guarantees a pandemic of diffused responsibility in software engineering. But I think there is no point in railing against it - market forces, corporate incentives, and the tragedy of the commons together make it an inevitability.


Now, instead of having accountants audit transactions, you will have accountants audit LLM output for possible hallucinations. Seems counterproductive.



