
· 9 min read
Matthaus Krzykowski

Summary

The excitement around DuckDB has steadily increased over the last year. The project has consistently crossed the 1M downloads per month mark over the last three months (Dec ‘22, Jan ‘23, Feb ‘23), a large increase from the ~250k downloads per month in February ‘22.

Like so many others, we are excited about the project, too. Recently, we attended the DuckDB conference and spoke with many members of the community to learn why people are excited about it. We examined issues on GitHub, interviewed some of the top contributors to the project, and even experimented with DuckDB ourselves to determine how we could contribute.

With our research, we aimed to identify the most common reasons why people try out DuckDB. We found five perspectives that people commonly have when trying it out.


dltHub co-founder Marcin watching a MotherDuck presentation at DuckCon in Brussels in February

1) "Normie" users love to drop-In DuckDB in all sorts of places

Last December, folks from the data + ML community organized a great conference, Normconf. Presenters and attendees were invited to share stories about everyday data problems and celebrate solutions. Many of the attendees referred to themselves as "normies" during the conference. We have found many of these folks overlap with those trying out DuckDB.

Normies have a specific way of solving problems that breaks some behavior patterns and expectations of previous generations of software engineers. As Ben Labaschin explains in his presentation Building an HTTPS Model API for Cheap: AWS, Docker, and the Normconf API, “normie software” has the following criteria:

  • It does not require domain knowledge to use. You should need almost no documentation, because time is the most important resource and all learning is an investment; before you learn something, you need to know whether it will pay off.
  • The best tools can be reused in many contexts. You learn them once, and you can apply them everywhere.
  • Tools should work together. You should pick the tool that works with your other tools.

Many at Normconf agreed that DuckDB is also a “normie tool”. It has a Python wrapper and can be dropped into any Python script, notebook, or Streamlit app. It has helpers and integrates with other libraries that are part of typical workflows (e.g. Pandas, Parquet, Arrow, etc). It is a powerful analytical database and brings local SQL execution without credentials and other hassles. You can scan data from and export data to Parquet, CSV, or JSON and query an S3 bucket directly.
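
As a rough illustration of that drop-in quality, a script along these lines queries a Pandas DataFrame and a local file with plain SQL, with no server or credentials involved (the Parquet path is a placeholder for this sketch):

```python
import duckdb
import pandas as pd

# An in-memory DataFrame from an existing workflow
orders = pd.DataFrame({
    "customer_id": [1, 1, 2],
    "amount": [10.5, 20.0, 7.25],
})

# DuckDB can query the DataFrame directly by its variable name
top_customers = duckdb.sql(
    "SELECT customer_id, sum(amount) AS total "
    "FROM orders GROUP BY customer_id ORDER BY total DESC"
).df()

# It can also scan local files without a separate loading step
# ('events.parquet' is a placeholder path)
daily_events = duckdb.sql(
    "SELECT date_trunc('day', event_time) AS day, count(*) AS events "
    "FROM 'events.parquet' GROUP BY 1 ORDER BY 1"
).df()
```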


2) Local Data Workflows Are Going Mainstream, and DuckDB Is at the Center

Many people struggle to access cloud data warehouses within their organizations. Some of the problems that these users encounter include:

  1. Painful processes to obtain credentials and permissions
  2. A setup that is perceived as "difficult"
  3. Deployment of local composer files
  4. The often unpleasant experience of working on remote machines

Instead they often use DuckDB to load data locally. DuckDB enables people to start using data by:

  1. Allowing them to learn SQL and try examples without any setup
  2. Querying GCP or S3 buckets from a local machine (see the sketch after this list)
  3. Creating notebooks or data apps with embedded DuckDB that showcase their work
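
For instance, querying Parquet files that sit in an S3 bucket from a laptop takes little more than the httpfs extension; a minimal sketch, with the region, credentials, and bucket name as placeholders:

```python
import duckdb

con = duckdb.connect()  # in-memory database, nothing to set up

# httpfs adds the ability to read from S3 over HTTP(S)
con.execute("INSTALL httpfs; LOAD httpfs;")
con.execute("SET s3_region='eu-central-1'")        # placeholder region
con.execute("SET s3_access_key_id='...'")          # supply real credentials here
con.execute("SET s3_secret_access_key='...'")

# Query the files in place; nothing is copied into a warehouse first
row_count = con.execute(
    "SELECT count(*) FROM read_parquet('s3://my-bucket/events/*.parquet')"
).fetchone()
print(row_count)
```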

Prototyping and experimenting with production data locally in DuckDB is a popular practice. From what we learned, deploying DuckDB to production is still quite rare: companies seldom rely on local workflows, since these depend on someone's laptop being turned on. However, many non-engineering personnel use DuckDB to access production data.

3) The community is exploring various ways to use DuckDB's columnar query engine (e.g. analytics)

As many people in the community are exploring how DuckDB could be used, the fact that the DuckDB engine provides a fast way to query data in columnar formats seems to be central to multiple use cases.

In data processing and usage, there are two types of transformations:

  • Non-time-critical, usually nightly, "transformation" jobs. These are run programmatically, the output is saved somewhere, and a business person consumes this output on demand a few hours or days later
  • Time-critical "dashboard user access" jobs. A report is created as output from the former job. Now, a user wants to gain insights from it "on demand", so they aggregate this table and wait. This computation is now time-critical, because the user is actively waiting for it

Row-based engines like Postgres are great at the first type of job, which usually involves many joins and row-based operations. However, they are not fast at aggregating data sets, as this requires them to iterate over rows by index (access the row, find its value, sum it).

Column-based engines, such as DuckDB, the Postgres AlloyDB engine, MySQL Percona, Redshift, etc., excel at aggregation jobs, which, for example, count or sum elements in a column.

Here, we have found evidence of two use cases for which DuckDB is particularly well suited:

  1. Aggregation of event data (e.g. product analytics). A dump of events could easily be scanned and aggregated into "users", "sessions", counted, etc. (a rough sketch follows this list). By using a database architecture like an "activity schema" and a way to tag events, this would make a great "product analytics" embedded warehouse. The MotherDuck team explains some of the reasons to use DuckDB for analytics here.
  2. Aggregation of "one big table" architectures that use "one big table" instead of a dimensional model. This is a common design pattern for organizations that do not tackle data architecture, such as small analyst teams.

The amount of data processed for analytics workloads is often smaller than people think, with dashboards typically built from aggregated data. The co-founder of MotherDuck suggests that a past analysis of his of BigQuery queries found that 90% of queries in organizations processed less than 100 MB of data (while the co-founder of Ponder somewhat disagrees). Many people load CSV, Parquet, and JSON files ranging from 50 to 200 million rows into DuckDB. This includes not only one-off data loading but also ongoing demo projects. The advantages of DuckDB we heard about most frequently were speed, cost, and the usability advantages mentioned above.

4) With DuckDB, users bring the database engine to their data instead of the other way around

For most of us, this behavioral pattern should ring true:

  1. “I have my data where I see fit. For example, in an AWS S3 bucket or on my laptop. My workflows deal with this well. And the workflow is fairly simple and cheap.”
  2. “Now I want to process my data.”
  3. “So I move my data to where a database engine is. I load it somewhere: into Snowflake, BigQuery, or Redshift. This is more complicated and costs significantly more.”

We encountered a different related pattern with DuckDB users. DuckDB users often bring the engine to their data:

  1. People drop it into AWS Lambda or Google Cloud Functions to process data where it lives (a minimal sketch follows this list). The few DuckDB production deployments that we have seen were all AWS Lambda centric. You can read a general example here.
  2. The product that makes this pattern super easy on AWS is Boiling Data. It also scales fairly well.
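
A minimal sketch of that Lambda pattern, assuming the httpfs extension is available to the function and using placeholder bucket names and prefixes:

```python
import os
import duckdb

def handler(event, context):
    # An in-memory DuckDB instance lives only for the duration of the invocation
    con = duckdb.connect()
    con.execute("INSTALL httpfs; LOAD httpfs;")

    # Lambda exposes the execution role's credentials as environment variables;
    # hand them to DuckDB so it can read from and write to S3
    con.execute(f"SET s3_region='{os.environ.get('AWS_REGION', 'us-east-1')}'")
    con.execute(f"SET s3_access_key_id='{os.environ['AWS_ACCESS_KEY_ID']}'")
    con.execute(f"SET s3_secret_access_key='{os.environ['AWS_SECRET_ACCESS_KEY']}'")
    con.execute(f"SET s3_session_token='{os.environ.get('AWS_SESSION_TOKEN', '')}'")

    # Aggregate the raw Parquet files in place and write the result back to the bucket
    # ('my-bucket' and the prefixes are placeholders)
    con.execute("""
        COPY (
            SELECT date_trunc('day', event_time) AS day, count(*) AS events
            FROM read_parquet('s3://my-bucket/raw/*.parquet')
            GROUP BY 1
        ) TO 's3://my-bucket/aggregated/daily_events.parquet' (FORMAT PARQUET)
    """)
    return {"statusCode": 200}
```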

The community frequently experiments with making such a setup work. In a recent post, frequent DuckDB issue contributor Mimoune Djouallah describes how he used Azure Storage, DuckDB, and an Azure ML Notebook to build a cost-effective data engineering pipeline for smaller data workloads. The pipeline reads data from Azure Storage, runs complex queries, and saves the results in a bucket, which can then be consumed from Synapse Serverless, Power BI, a notebook, etc. We wonder, just like Mimoune and others do, whether vendors will build more support for smaller data workloads going forward.

It is probably worth mentioning at this stage that DuckDB and even PostgreSQL are not databases in the traditional sense. Rather, they are relational "database" management systems (RDBMS) that manage structured collections of data. While both can be used for querying internally stored data and reading external data like files or federated databases, DuckDB focuses primarily on the latter.

This means that it functions more as an RDBMS without an attached database. Therefore, it is inaccurate to refer to DuckDB as a database. Instead, we should consider the parquet file or data lake as the actual database, with DuckDB serving as a tool for managing and querying the data.

5) Some people are dropping DuckDB into their BI tooling

A few people we met have chosen to have always-on file storage and on-demand DuckDB embedded in a BI tool. A BI tool that supports this use case is Metabase. It’s an interesting concept that might one day lead to a stable open source SQL cache for BI tools.

This is different from the past. Classic business intelligence tools use the OLAP concept: the business user creates a pivot table, and the tool composes a SQL query automatically and issues it to the database. This slows things down and adds cost. Think of Data Studio issuing queries to BigQuery just because you change the granularity.

If part of the data is cached (in memory or as a temporary Parquet file) and the BI tool has an embedded query engine, it can do the following tricks, for free and almost instantly (a sketch follows this list):

  • change the granularity of a time dimension between different units (hour, day, week, calendar week, US calendar week, etc.)
  • drill down, drill through, and filter
  • leverage joins through predefined paths or UI query builders
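
A rough sketch of the granularity trick with an embedded engine: aggregate once into a local cache, then re-slice at whatever unit the user clicks on without going back to the warehouse (the file and column names are invented):

```python
import duckdb

con = duckdb.connect()

# One upfront query materializes a small cache as a temporary Parquet file
# ('warehouse_export.parquet' stands in for the report exported from the warehouse)
con.execute("""
    COPY (
        SELECT event_time, country, revenue
        FROM 'warehouse_export.parquet'
    ) TO 'cache.parquet' (FORMAT PARQUET)
""")

def aggregate(granularity, country=None):
    """Re-aggregate the cached data at a user-chosen granularity, locally and instantly."""
    where = "WHERE country = ?" if country else ""
    params = [country] if country else []
    return con.execute(f"""
        SELECT date_trunc('{granularity}', event_time) AS bucket,
               sum(revenue) AS revenue
        FROM 'cache.parquet'
        {where}
        GROUP BY 1 ORDER BY 1
    """, params).df()

# Switching granularity or drilling down is just another cheap local query
daily = aggregate("day")
weekly_us = aggregate("week", country="US")
```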

· 3 min read
Matthaus Krzykowski

The number of Python developers increased from 7 million in 2017 to 15.7 million in Q1 2021 and grew by 3 million (20%) between Q4 2021 and Q1 2022 alone, making Python the most popular programming language in Q3 2022. A large percentage of this new group are what we call Python practitioners: data folks and scripters. This group uses Python to do tasks in their jobs, but they do not consider themselves to be software engineers.

They are entering modern organizations en masse. Organizations often employ them for data-related jobs, especially in data engineering, data science / ML, and analytics. They must work with established data sources, data stores, and data pipelines that are essential to the business of these organizations. These companies, though, are not providing them with the type of tooling they have learned to expect. There’s no “Jupyter Notebook, pandas, NumPy, etc. for data loading” for them to use.

At this stage of dlt, we are focused on serving the needs of organizations with 150 employees or less. Companies of this size typically begin making their first data hires. They want data to be at their core: their CEOs may want to make their companies more “data driven” and “user feedback centric”, and their CTOs may want to “build a data warehouse for automation and self service”. They are frequently eager to take advantage of the skills of the Python practitioners they have hired.

To achieve our mission of making this next generation of Python users autonomous in these organizations, we believe we need to build dlt in a “Pythonic” way. Anyone who can write a loop in a Python script should be able to write a source and load it. There should be a minimal learning curve. Anyone in these organizations who knows basic Python should be able to use dlt right away.

However, we also recognize that, to fulfill our mission, dlt needs to be loved not only by Python users but also by data engineers. This is crucial because eventually these folks will be brought in to help with data loading in an organization. We need data engineers to evolve dlt pipelines rather than rip them out and replace them, as they almost always do today with scripts written by Python practitioners.

To develop with dlt, anyone can install it like any other Python library with pip install dlt. They can then run dlt init and be ready to go. Already today data engineers love the automatic schema inference and evolution as well as the customizability of dlt.
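
For reference, a first load can look roughly like this; the resource, pipeline, and dataset names below are just for illustration:

```python
# after `pip install dlt`
import dlt

# A throwaway example source: any iterable of dicts will do
@dlt.resource(table_name="users")
def users():
    yield from [{"id": 1, "name": "Anna"}, {"id": 2, "name": "Bob"}]

# Pipeline, destination, and dataset names here are illustrative
pipeline = dlt.pipeline(
    pipeline_name="quickstart",
    destination="duckdb",
    dataset_name="raw_data",
)

# dlt infers the schema from the yielded data and evolves it on later runs
load_info = pipeline.run(users())
print(load_info)
```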

· 3 min read
Matthaus Krzykowski

dltHub Mission

Since 2017, the number of Python users has been increasing by millions annually. The vast majority of these people leverage Python as a tool to solve problems at work. Our mission is to make this next generation of Python users autonomous when they create and use data in their organizations. To this end, we are building an open source Python library called data load tool (dlt).

These Python practitioners, as we call them, use dlt in their scripts to turn messy, unstructured data into regularly updated datasets. dlt empowers them to create highly scalable, easy to maintain, straightforward to deploy data pipelines without having to wait for help from a data engineer. When organizations eventually bring in data engineers to help with data loading, these engineers build on their work and evolve dlt pipelines.

We are dedicated to keeping dlt an open source project surrounded by a vibrant, engaged community. To make this sustainable, dltHub stewards dlt while also offering additional software and services that generate revenue (similar to what GitHub does with Git).

Why does dltHub exist?

We believe in a world where data loading becomes a commodity. A world where hundreds of thousands of pipelines will be created, shared, and deployed. A world where data sets, reports, and analytics will be written and shared publicly and privately.

To achieve our mission to make this next generation of Python users autonomous when they create and use data in their organizations, we need to address the requirements of both the Python practitioner and the data engineer with a minimal Python library. We also need dltHub to become the GitHub for data pipelines, facilitating and supporting the ecosystem of pipeline creators and maintainers as well as the other data folks who consume and analyze the data loaded.

There are lots of ETL/ELT tools available (300+!). Yet, as we engaged with Python practitioners over the last one and a half years, we found few who use traditional data ingestion tools. Only a handful have even heard of them. Very simplified, there are two approaches in traditional data ingestion tools, and neither works for this new generation: 1) SaaS solutions that handle the entire data loading process and 2) object-oriented frameworks for software engineers.

SaaS solutions do not give Python practitioners enough credit, while frameworks expect too much of them. In other words, there's no “Jupyter Notebook, pandas, NumPy, etc. for data loading” that meets these users' needs. As millions of Python practitioners are now entering organizations every year, we think this should exist.

This demo works on Codespaces. Codespaces is a development environment available for free to anyone with a GitHub account. You'll be asked to fork the demo repository, and from there the README guides you through further steps.
The demo uses the Continue VSCode extension.

Off to codespaces!
