Which are pretty awful btw - every project at my job that started with LangChain openly regrets it - the abstractions, instead of making hard things easy, tend to get in the way and make everything hard (and hard to debug and maintain).
We use langchain and don't regret it at all. As a matter of fact, it is likely that without lc we would've failed to deliver our product.
The main reason is langsmith (but there are other reasons too). Because of langchain we got "free" (as in no development necessary) langsmith integration, and now I can actually debug my llm.
Before that, debugging meant trying to make sense of what's happening inside my app from hundreds and hundreds of lines of text, which was extremely painful and time consuming.
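For anyone wondering what "free" integration means in practice, here's a rough sketch - not our exact setup, and the env var names are assumed from the LANGCHAIN_TRACING_V2-era docs (newer releases use LANGSMITH_* equivalents):

    # Rough sketch: once the tracing env vars are set, LangChain sends every
    # chain/LLM call to LangSmith automatically - no extra code in the app.
    #
    #   export LANGCHAIN_TRACING_V2=true
    #   export LANGCHAIN_API_KEY=<your langsmith key>
    #   export LANGCHAIN_PROJECT=my-app
    from langchain_openai import ChatOpenAI  # assumes the langchain-openai package

    llm = ChatOpenAI(model="gpt-4o-mini")
    # This invoke() shows up in the LangSmith UI as a trace, with the prompt,
    # response, latency and token counts attached.
    print(llm.invoke("one-line summary of why tracing beats log spelunking").content)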
Also, the lc people are extremely nice and very open/quick to respond to feedback.
The abstractions are too verbose and make some things harder than they need to be, but the value we've been getting from lc as a whole cannot be overstated.
other benefits:
* easy integrations with vector stores (we tried several before landing on one, but switching was easy - see the sketch after this list)
* easily adopting features like chat history, which would've taken us ages to get right on our own
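Roughly what the vector store point looks like in code (a sketch, assuming the langchain-community-era import paths - they've moved around between versions):

    from langchain_openai import OpenAIEmbeddings
    from langchain_community.vectorstores import FAISS  # swap for Chroma, Pinecone, etc.

    texts = ["langchain wraps many vector stores", "switching stores is roughly one line"]
    store = FAISS.from_texts(texts, OpenAIEmbeddings())
    # Because the stores share one interface, moving to another one is mostly
    # changing the import and the from_texts() call, e.g. Chroma.from_texts(...).
    print(store.similarity_search("how hard is switching?", k=1)[0].page_content)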
People who complain and say "just call your llm directly": if your use case is that simple, of course - but using lc for that use case is almost equally simple.
But for more complex use cases, yes, lc's abstractions are verbose - and it's very likely you would've ended up building something similar yourself.
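A rough side-by-side of the simple case (package/class names assumed: openai >= 1.x and langchain-openai):

    from openai import OpenAI
    from langchain_openai import ChatOpenAI

    # calling the llm directly
    direct = OpenAI().chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello"}],
    ).choices[0].message.content

    # the langchain equivalent - about the same amount of code
    via_lc = ChatOpenAI(model="gpt-4o-mini").invoke("Hello").content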
https://www.llamaindex.ai/ is much better IMO, but it's definitely a case of boilerplate-y, well-supported incumbent vs smaller, better, less supported (e.g. Java vs Python in the 00s or something like that). Depends on your team and your needs.
Have a fairly thin layer that wraps the underlying LLM behind a common API (e.g., Ollama as being discussed here, Oobabooga, etc.) and leaves the application-level stuff to the application rather than to a framework like LangChain.
(Better for certain use cases, that is, I’m not saying LangChain doesn't have uses.)
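Something like this minimal sketch - the Ollama endpoint/payload comes from its documented /api/generate API, everything else (the names, the answer_question helper) is just illustrative:

    from typing import Protocol
    import requests

    class LLM(Protocol):
        def complete(self, prompt: str) -> str: ...

    class OllamaLLM:
        """Thin adapter: one of these per backend (Ollama, Oobabooga, ...), all behind LLM."""
        def __init__(self, model: str = "llama3", host: str = "http://localhost:11434"):
            self.model, self.host = model, host

        def complete(self, prompt: str) -> str:
            r = requests.post(f"{self.host}/api/generate",
                              json={"model": self.model, "prompt": prompt, "stream": False})
            r.raise_for_status()
            return r.json()["response"]

    def answer_question(llm: LLM, question: str) -> str:
        # application-level prompting stays in the application, not in a framework
        return llm.complete(f"Answer concisely: {question}")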