CedarDB focuses on being a high-performance HTAP database, whereas Spice was built from day one to enable high-performance data and search for data-intensive applications and AI.
So Spice natively has data acceleration, federation, hybrid search (vector + BM25 full-text search), and LLM inference in the core runtime, so data can be zero-copied across them, which you would not normally see in a database like CedarDB.
Like most things, all measures fail at a point in time and on short time horizons, but the law of large numbers and trends can be useful. I.e., compare yourself (on almost any measure) to yourself in your most productive period.

We reviewed our metrics for period-on-period comparisons at 1, 2, 3, and 4 years, and the numbers are surprisingly consistent for each person and across similarly productive engineers.

As in the article, if you can apply a semantic score across years of data, it gives you a pretty good idea.

We combine periodic objectives with customer results, then have GPT-5 score pull requests, etc., against them, roughly aligned to that period. I.e., a PR scores higher if the code is used by and produces value for customers.
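A minimal sketch of what that scoring could look like. Everything here is illustrative, not the actual system: the objectives, the function names, and the weighting are assumptions, and the LLM call is stubbed out as a prompt builder.

```python
# Hypothetical sketch of objective-weighted PR scoring.
# OBJECTIVES, build_prompt, and period_score are illustrative names,
# not the real pipeline described above.

OBJECTIVES = [
    "Reduce p99 latency for federated queries",
    "Ship hybrid search to design partners",
]

def build_prompt(pr_title: str, pr_summary: str) -> str:
    """Assemble the prompt an LLM judge would score (0-10) against objectives."""
    goals = "\n".join(f"- {o}" for o in OBJECTIVES)
    return (
        "Score this pull request from 0-10 for how directly it advances "
        f"these objectives:\n{goals}\n\nPR: {pr_title}\n{pr_summary}"
    )

def period_score(llm_score: float, customer_usage: float) -> float:
    """Weight the semantic score by customer usage (0.0-1.0), so code that
    customers actually use scores higher than code that ships and sits idle."""
    return llm_score * (0.5 + 0.5 * customer_usage)
```

The weighting (a 50% floor scaled by usage) is one arbitrary choice; the point is that the semantic score alone isn't the signal, the product of score and customer value is.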
Hi HN, we’re Luke and Philip, founders of Spice AI. Today, we’re announcing Spice.ai OSS 1.0-Stable, a portable, single-node data query and LLM-inference engine built in Rust.
We introduced the very first POC of Spice.ai OSS on Show HN in Sep 2021 as a runtime for building AI-driven applications using time-series data! [Insert "it's been 87 years" meme.]
One of the hard lessons we learned was that before organizations could use AI effectively, they needed a much higher level of data readiness. Our customers told us they wanted to adopt AI and technologies like Arrow, Iceberg, Delta Lake, and DuckDB, but simply didn't have the time or resources (and were struggling to keep up), so we focused on making those technologies easy and simple to use. We rebuilt Spice from the ground up in Rust on Apache DataFusion and launched on Show HN in Mar 2024 as a unified SQL query interface to locally materialize, accelerate, and query datasets sourced from any data source.
It’s designed for developers who want to build fast, reliable data-intensive and AI apps without getting stuck managing ETL pipelines or complex infrastructure.
That release was just the data foundation. Today, we're announcing Spice.ai OSS 1.0-Stable, which combines federated data query, acceleration, retrieval, and AI inference in a single engine, now ready for production deployments across cloud, BYOC, edge, or on-prem.
Spice supports accelerating federated queries across databases (MySQL, PostgreSQL, etc.), data warehouses (Snowflake, Databricks, etc.), and data lakes (S3, MinIO, etc.). It materializes datasets locally using Arrow, DuckDB, or SQLite for sub-second query times, and it integrates LLM inference, memory, and a purpose-built data-grounding toolset that includes vector and hybrid search, text-to-SQL (NSQL), and evals to ensure accurate outputs.
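As a rough sketch of what this looks like in practice, a dataset from a federated source can be declared and accelerated locally in a Spicepod config. The dataset name and connection shorthand below are illustrative; check the Spice.ai docs for the exact schema and connector parameters:

```yaml
version: v1beta1
kind: Spicepod
name: example
datasets:
  - from: postgres:public.orders   # federated source (illustrative)
    name: orders
    acceleration:
      enabled: true
      engine: duckdb               # materialize locally in DuckDB
      refresh_check_interval: 10s
```

Queries against `orders` then hit the local DuckDB materialization instead of round-tripping to Postgres on every request.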
The Model Context server is similar to what we've built at Spice, but we've focused on databases and data systems. Overall, standards are good. Perhaps we can implement MCP as a data connector and tool.
Being such a core component, we want developers to be completely comfortable integrating and deploying the Spice runtime in their applications and services, as well as running Spice in their own infrastructure.
In addition, Spice OSS is built on other great open-source projects like DataFusion and Arrow, both Apache 2.0, and DuckDB (MIT), so being permissively licensed aligns with the fundamental technologies and communities it's built upon.
We expect to release specific enterprise control-plane services, such as our Kubernetes Operator, under a license such as BSL.
Spice AI | SWE & DevRel | FT | ONSITE (Seattle, Seoul), REMOTE (Australia)
Spice AI is the creator of the Spice.ai open-source project, a query-engine and ML inferencing runtime built in Rust on DataFusion.
Hiring experienced Rust, distributed systems, data systems, and database engineers and DevRel.
ShowHN: https://news.ycombinator.com/item?id=39854584
Details: https://spice.ai/careers