docs

Custom SQL query returning 101 rows


rowid | title | content | sections_fts | rank
1 Datasette An open source multi-tool for exploring and publishing data Datasette is a tool for exploring and publishing data. It helps people take data of any shape or size and publish that as an interactive, explorable website and accompanying API. Datasette is aimed at data journalists, museum curators, archivists, local governments and anyone else who has data that they wish to share with the world. It is part of a wider ecosystem of tools and plugins dedicated to making working with structured data as productive as possible. Explore a demo , watch a presentation about the project or Try Datasette without installing anything using Glitch . Interested in learning Datasette? Start with the official tutorials . Support questions, feedback? Join our GitHub Discussions forum . 7  
2 Contents Getting started Play with a live demo Follow a tutorial Datasette in your browser with Datasette Lite Try Datasette without installing anything using Glitch Using Datasette on your own computer Installation Basic installation Datasette Desktop for Mac Using Homebrew Using pip Advanced installation options Using pipx Using Docker A note about extensions The Datasette Ecosystem sqlite-utils Dogsheep CLI reference datasette --help datasette serve datasette --get datasette serve --help-settings datasette plugins datasette install datasette uninstall datasette publish datasette publish cloudrun datasette publish heroku datasette package datasette inspect Pages and API endpoints Top-level index Database Table Row Publishing data datasette publish Publishing to Google Cloud Run Publishing to Heroku Publishing to Vercel Publishing to Fly Custom metadata and plugins datasette package Deploying Datasette Deployment fundamentals Running Datasette using systemd Running Datasette using OpenRC Deploying using buildpacks Running Datasette behind a proxy Nginx proxy configuration Apache proxy configuration JSON API Different shapes Pagination Special JSON arguments Table arguments Column filter arguments Special table arguments Expanding foreign key references Discovering the JSON for a page Running SQL queries Named parameters Views Canned queries Canned query parameters Additional canned query options Writable canned queries Magic parameters JSON API for writable canned queries Pagination Cross-database queries Authentication and permissions Actors Using the "root" actor Permissions Defining permissions with "allow" blocks The /-/allow-debug tool Configuring permissions in metadata.json Controlling access to an instance Controlling access to specific databases Controlling access to specific tables and views Controlling access to specific canned queries Controlling the ability to execute arbitrary SQL Checking permissions in plugins actor_matches_allow() The permissions debug tool The ds_actor cookie Including an expiry time Th… 7  
3 Running SQL queries Datasette treats SQLite database files as read-only and immutable. This means it is not possible to execute INSERT or UPDATE statements using Datasette, which allows us to expose SELECT statements to the outside world without needing to worry about SQL injection attacks. The easiest way to execute custom SQL against Datasette is through the web UI. The database index page includes a SQL editor that lets you run any SELECT query you like. You can also construct queries using the filter interface on the tables page, then click "View and edit SQL" to open that query in the custom SQL editor. Note that this interface is only available if the execute-sql permission is allowed. Any Datasette SQL query is reflected in the URL of the page, allowing you to bookmark it, share it with others and navigate through previous queries using your browser back button. You can also retrieve the results of any query as JSON by adding .json to the base URL. 7
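Here's a minimal sketch of fetching those JSON results from Python, assuming a local Datasette instance at http://127.0.0.1:8001 serving a hypothetical mydatabase database:

import json
import urllib.parse
import urllib.request

# Hypothetical local instance and database name - adjust for your own deployment
base_url = "http://127.0.0.1:8001/mydatabase.json"
query_string = urllib.parse.urlencode({"sql": "select sqlite_version()"})

with urllib.request.urlopen(base_url + "?" + query_string) as response:
    data = json.load(response)

# The default shape includes "columns" and "rows" keys (see Different shapes below)
print(data["columns"])
print(data["rows"])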
4 Named parameters Datasette has special support for SQLite named parameters. Consider a SQL query like this: select * from Street_Tree_List where "PermitNotes" like :notes and "qSpecies" = :species If you execute this query using the custom query editor, Datasette will extract the two named parameters and use them to construct form fields for you to provide values. You can also provide values for these fields by constructing a URL: /mydatabase?sql=select...&species=44 SQLite string escaping rules will be applied to values passed using named parameters - they will be wrapped in quotes and their content will be correctly escaped. Values from named parameters are treated as SQLite strings. If you need to perform numeric comparisons on them you should cast them to an integer or float first using cast(:name as integer) or cast(:name as real) , for example: select * from Street_Tree_List where latitude > cast(:min_latitude as real) and latitude < cast(:max_latitude as real) Datasette disallows custom SQL queries containing the string PRAGMA (with a small number of exceptions ) as SQLite pragma statements can be used to change database settings at runtime. If you need to include the string "pragma" in a query you can do so safely using a named parameter. 7  
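To pass named parameter values from code, include them alongside the sql argument in the query string. Here's a sketch using the Python standard library, reusing the Street_Tree_List example above (the mydatabase name is a placeholder):

import urllib.parse

sql = 'select * from Street_Tree_List where "qSpecies" = :species'
query_string = urllib.parse.urlencode({"sql": sql, "species": "44"})

# Add .json to the database URL to get the results back as JSON
url = "/mydatabase.json?" + query_string
print(url)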
5 Views If you want to bundle some pre-written SQL queries with your Datasette-hosted database you can do so in two ways. The first is to include SQL views in your database - Datasette will then list those views on your database index page. The quickest way to create views is with the SQLite command-line interface: $ sqlite3 sf-trees.db SQLite version 3.19.3 2017-06-27 16:48:08 Enter ".help" for usage hints. sqlite> CREATE VIEW demo_view AS select qSpecies from Street_Tree_List; <CTRL+D> 7  
6 Canned queries As an alternative to adding views to your database, you can define canned queries inside your metadata.json file. Here's an example: { "databases": { "sf-trees": { "queries": { "just_species": { "sql": "select qSpecies from Street_Tree_List" } } } } } Then run Datasette like this: datasette sf-trees.db -m metadata.json Each canned query will be listed on the database index page, and will also get its own URL at: /database-name/canned-query-name For the above example, that URL would be: /sf-trees/just_species You can optionally include "title" and "description" keys to show a title and description on the canned query page. As with regular table metadata you can alternatively specify "description_html" to have your description rendered as HTML (rather than having HTML special characters escaped). 7  
7 Canned query parameters Canned queries support named parameters, so if you include those in the SQL you will then be able to enter them using the form fields on the canned query page or by adding them to the URL. This means canned queries can be used to create custom JSON APIs based on a carefully designed SQL statement. Here's an example of a canned query with a named parameter: select neighborhood, facet_cities.name, state from facetable join facet_cities on facetable.city_id = facet_cities.id where neighborhood like '%' || :text || '%' order by neighborhood; In the canned query metadata (here Using YAML for metadata as metadata.yaml ) it looks like this: databases: fixtures: queries: neighborhood_search: sql: |- select neighborhood, facet_cities.name, state from facetable join facet_cities on facetable.city_id = facet_cities.id where neighborhood like '%' || :text || '%' order by neighborhood title: Search neighborhoods Here's the equivalent using JSON (as metadata.json ): { "databases": { "fixtures": { "queries": { "neighborhood_search": { "sql": "select neighborhood, facet_cities.name, state\nfrom facetable\n join facet_cities on facetable.city_id = facet_cities.id\nwhere neighborhood like '%' || :text || '%'\norder by neighborhood", "title": "Search neighborhoods" } } } } } Note that we are using SQLite string concatenation here - the || operator - to add wildcard % characters to the string provided by the user. You can try this canned query out here: https://latest.datasette.io/fixtures/neighborhood_search?text=town In this example the :text named parameter is automatically extracted from the query using a regular expression. … 7  
8 Additional canned query options Additional options can be specified for canned queries in the YAML or JSON configuration. 7  
9 hide_sql Canned queries default to displaying their SQL query at the top of the page. If the query is extremely long you may want to hide it by default, with a "show" link that can be used to make it visible. Add the "hide_sql": true option to hide the SQL query by default. 7  
10 fragment Some plugins, such as datasette-vega , can be configured by including additional data in the fragment hash of the URL - the bit that comes after a # symbol. You can set a default fragment hash that will be included in the link to the canned query from the database index page using the "fragment" key. This example demonstrates both fragment and hide_sql : { "databases": { "fixtures": { "queries": { "neighborhood_search": { "sql": "select neighborhood, facet_cities.name, state\nfrom facetable join facet_cities on facetable.city_id = facet_cities.id\nwhere neighborhood like '%' || :text || '%' order by neighborhood;", "fragment": "fragment-goes-here", "hide_sql": true } } } } } See here for a demo of this in action. 7  
11 Writable canned queries Canned queries by default are read-only. You can use the "write": true key to indicate that a canned query can write to the database. See Controlling access to specific canned queries for details on how to add permission checks to canned queries, using the "allow" key. { "databases": { "mydatabase": { "queries": { "add_name": { "sql": "INSERT INTO names (name) VALUES (:name)", "write": true } } } } } This configuration will create a page at /mydatabase/add_name displaying a form with a name field. Submitting that form will execute the configured INSERT query. You can customize how Datasette represents success and errors using the following optional properties: on_success_message - the message shown when a query is successful on_success_redirect - the path or URL the user is redirected to on success on_error_message - the message shown when a query throws an error on_error_redirect - the path or URL the user is redirected to on error For example: { "databases": { "mydatabase": { "queries": { "add_name": { "sql": "INSERT INTO names (name) VALUES (:name)", "write": true, "on_success_message": "Name inserted", "on_success_redirect": "/mydatabase/names", "on_error_message": "Name insert failed", "on_error_redirect": "/mydatabase" } } } } } You can use "p… 7  
12 Magic parameters Named parameters that start with an underscore are special: they can be used to automatically add values created by Datasette that are not contained in the incoming form fields or query string. These magic parameters are only supported for canned queries: to avoid security issues (such as queries that extract the user's private cookies) they are not available to SQL that is executed by the user as a custom SQL query. Available magic parameters are: _actor_* - e.g. _actor_id , _actor_name Fields from the currently authenticated Actors . _header_* - e.g. _header_user_agent Header from the incoming HTTP request. The key should be in lower case and with hyphens converted to underscores e.g. _header_user_agent or _header_accept_language . _cookie_* - e.g. _cookie_lang The value of the incoming cookie of that name. _now_epoch The number of seconds since the Unix epoch. _now_date_utc The date in UTC, e.g. 2020-06-01 _now_datetime_utc The ISO 8601 datetime in UTC, e.g. 2020-06-24T18:01:07Z _random_chars_* - e.g. … 7  
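As a sketch of how a magic parameter might be used, here is a hypothetical metadata.json (generated from Python for illustration) defining a writable canned query that records the authenticated actor's id via :_actor_id - the messages table is an assumption, not part of the fixtures used elsewhere in these docs:

import json

metadata = {
    "databases": {
        "mydatabase": {
            "queries": {
                "add_message": {
                    # :_actor_id is filled in automatically; :message comes from the form or query string
                    "sql": "INSERT INTO messages (user_id, message) VALUES (:_actor_id, :message)",
                    "write": True,
                }
            }
        }
    }
}

with open("metadata.json", "w") as fp:
    json.dump(metadata, fp, indent=4)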
13 JSON API for writable canned queries Writable canned queries can also be accessed using a JSON API. You can POST data to them using JSON, and you can request that their response is returned to you as JSON. To submit JSON to a writable canned query, encode key/value parameters as a JSON document: POST /mydatabase/add_message {"message": "Message goes here"} You can also continue to submit data using regular form encoding, like so: POST /mydatabase/add_message message=Message+goes+here There are three options for specifying that you would like the response to your request to return JSON data, as opposed to an HTTP redirect to another page. Set an Accept: application/json header on your request Include ?_json=1 in the URL that you POST to Include "_json": 1 in your JSON body, or &_json=1 in your form encoded body The JSON response will look like this: { "ok": true, "message": "Query executed, 1 row affected", "redirect": "/data/add_name" } The "message" and "redirect" values here will take into account on_success_message , on_success_redirect , on_error_message and on_error_redirect , if they have been set. 7  
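Here's a sketch of calling that API from Python using only the standard library, assuming an add_message query like the one above is running on a local instance (the host and port are placeholders). Sending a Content-Type of application/json along with an Accept header of application/json requests a JSON response rather than a redirect:

import json
import urllib.request

url = "http://127.0.0.1:8001/mydatabase/add_message"
request = urllib.request.Request(
    url,
    data=json.dumps({"message": "Message goes here"}).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Accept": "application/json",
    },
    method="POST",
)

with urllib.request.urlopen(request) as response:
    print(json.load(response))  # e.g. {"ok": True, "message": "...", "redirect": "..."}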
14 Pagination Datasette's default table pagination is designed to be extremely efficient. SQL OFFSET/LIMIT pagination can have a significant performance penalty once you get into multiple thousands of rows, as each page still requires the database to scan through every preceding row to find the correct offset. When paginating through tables, Datasette instead orders the rows in the table by their primary key and performs a WHERE clause against the last seen primary key for the previous page. For example: select rowid, * from Street_Tree_List where rowid > 200 order by rowid limit 101 This represents page three for this particular table, with a page size of 100. Note that we request 101 items in the limit clause rather than 100. This allows us to detect if we are on the last page of the results: if the query returns fewer than 101 rows we know we have reached the end of the pagination set. Datasette will only return the first 100 rows - the 101st is used purely to detect if there should be another page. Since the where clause acts against the index on the primary key, the query is extremely fast even for records that are a long way into the overall pagination set. 7
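The "request one extra row" trick can be sketched in a few lines of Python against a plain sqlite3 connection - this is an illustration of the technique, not Datasette's internal implementation:

import sqlite3

PAGE_SIZE = 100

def fetch_page(conn: sqlite3.Connection, last_rowid: int):
    # Ask for one more row than the page size so we can tell whether another page exists
    rows = conn.execute(
        "select rowid, * from Street_Tree_List where rowid > ? order by rowid limit ?",
        (last_rowid, PAGE_SIZE + 1),
    ).fetchall()
    has_next_page = len(rows) > PAGE_SIZE
    return rows[:PAGE_SIZE], has_next_page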
15 Cross-database queries SQLite has the ability to run queries that join across multiple databases. Up to ten databases can be attached to a single SQLite connection and queried together. Datasette can execute joins across multiple databases if it is started with the --crossdb option: datasette fixtures.db extra_database.db --crossdb If it is started in this way, the /_memory page can be used to execute queries that join across multiple databases. References to tables in attached databases should be preceded by the database name and a period. For example, this query will show a list of tables across both of the above databases: select 'fixtures' as database, * from [fixtures].sqlite_master union select 'extra_database' as database, * from [extra_database].sqlite_master Try that out here . 7  
16 Deploying Datasette The quickest way to deploy a Datasette instance on the internet is to use the datasette publish command, described in Publishing data . This can be used to quickly deploy Datasette to a number of hosting providers including Heroku, Google Cloud Run and Vercel. You can deploy Datasette to other hosting providers using the instructions on this page. 7  
17 Deployment fundamentals Datasette can be deployed as a single datasette process that listens on a port. Datasette is not designed to be run as root, so that process should listen on a higher port such as port 8000. If you want to serve Datasette on port 80 (the HTTP default port) or port 443 (for HTTPS) you should run it behind a proxy server, such as nginx, Apache or HAProxy. The proxy server can listen on port 80/443 and forward traffic on to Datasette. 7  
18 Running Datasette using systemd You can run Datasette on Ubuntu or Debian systems using systemd . First, ensure you have Python 3 and pip installed. On Ubuntu you can use sudo apt-get install python3 python3-pip . You can install Datasette into a virtual environment, or you can install it system-wide. To install system-wide, use sudo pip3 install datasette . Now create a folder for your Datasette databases, for example using mkdir /home/ubuntu/datasette-root . You can copy a test database into that folder like so: cd /home/ubuntu/datasette-root curl -O https://latest.datasette.io/fixtures.db Create a file at /etc/systemd/system/datasette.service with the following contents: [Unit] Description=Datasette After=network.target [Service] Type=simple User=ubuntu Environment=DATASETTE_SECRET= WorkingDirectory=/home/ubuntu/datasette-root ExecStart=datasette serve . -h 127.0.0.1 -p 8000 Restart=on-failure [Install] WantedBy=multi-user.target Add a random value for the DATASETTE_SECRET - this will be used to sign Datasette cookies such as the CSRF token cookie. You can generate a suitable value like so: $ python3 -c 'import secrets; print(secrets.token_hex(32))' This configuration will run Datasette against all database files contained in the /home/ubuntu/datasette-root directory. If that directory contains a metadata.yml (or .json ) file or a templates/ or plugins/ sub-directory those will automatically be loaded by Datasette - see Configuration directory mode for details. You can start the Datasette process running using the following: sudo systemctl daemon-reload sudo systemctl start datasette.service You will need to restart the Datasette service after making changes to its metadata.json configuration or adding a new database file to that directory. You can do that using: sudo systemctl restart datasette.service Once th… 7  
19 Running Datasette using OpenRC OpenRC is the service manager on non-systemd Linux distributions like Alpine Linux and Gentoo . Create an init script at /etc/init.d/datasette with the following contents: #!/sbin/openrc-run name="datasette" command="datasette" command_args="serve -h 0.0.0.0 /path/to/db.db" command_background=true pidfile="/run/${RC_SVCNAME}.pid" You then need to configure the service to run at boot and start it: rc-update add datasette rc-service datasette start 7  
20 Deploying using buildpacks Some hosting providers such as Heroku , DigitalOcean App Platform and Scalingo support the Buildpacks standard for deploying Python web applications. Deploying Datasette on these platforms requires two files: requirements.txt and Procfile . The requirements.txt file lets the platform know which Python packages should be installed. It should contain datasette at a minimum, but can also list any Datasette plugins you wish to install - for example: datasette datasette-vega The Procfile lets the hosting platform know how to run the command that serves web traffic. It should look like this: web: datasette . -h 0.0.0.0 -p $PORT --cors The $PORT environment variable is provided by the hosting platform. --cors enables CORS requests from JavaScript running on other websites to your domain - omit this if you don't want to allow CORS. You can add additional Datasette Settings options here too. These two files should be enough to deploy Datasette on any host that supports buildpacks. Datasette will serve any SQLite files that are included in the root directory of the application. If you want to build SQLite files or download them as part of the deployment process you can do so using a bin/post_compile file. For example, the following bin/post_compile will download an example database that will then be served by Datasette: wget https://fivethirtyeight.datasettes.com/fivethirtyeight.db simonw/buildpack-datasette-demo is an example GitHub repository showing a Datasette configuration that can be deployed to a buildpack-supporting host. 7  
21 Running Datasette behind a proxy You may wish to run Datasette behind an Apache or nginx proxy, using a path within your existing site. You can use the base_url configuration setting to tell Datasette to serve traffic with a specific URL prefix. For example, you could run Datasette like this: datasette my-database.db --setting base_url /my-datasette/ -p 8009 This will run Datasette with the following URLs: http://127.0.0.1:8009/my-datasette/ - the Datasette homepage http://127.0.0.1:8009/my-datasette/my-database - the page for the my-database.db database http://127.0.0.1:8009/my-datasette/my-database/some_table - the page for the some_table table You can now set your nginx or Apache server to proxy the /my-datasette/ path to this Datasette instance. 7  
22 Nginx proxy configuration Here is an example of an nginx configuration file that will proxy traffic to Datasette: daemon off; events { worker_connections 1024; } http { server { listen 80; location /my-datasette { proxy_pass http://127.0.0.1:8009/my-datasette; proxy_set_header Host $host; } } } You can also use the --uds option to Datasette to listen on a Unix domain socket instead of a port, configuring the nginx upstream proxy like this: daemon off; events { worker_connections 1024; } http { server { listen 80; location /my-datasette { proxy_pass http://datasette/my-datasette; proxy_set_header Host $host; } } upstream datasette { server unix:/tmp/datasette.sock; } } Then run Datasette with datasette --uds /tmp/datasette.sock path/to/database.db --setting base_url /my-datasette/ . 7  
23 Apache proxy configuration For Apache , you can use the ProxyPass directive. First make sure the following lines are uncommented: LoadModule proxy_module lib/httpd/modules/mod_proxy.so LoadModule proxy_http_module lib/httpd/modules/mod_proxy_http.so Then add these directives to proxy traffic: ProxyPass /my-datasette/ http://127.0.0.1:8009/my-datasette/ ProxyPreserveHost On A live demo of Datasette running behind Apache using this proxy setup can be seen at datasette-apache-proxy-demo.datasette.io/prefix/ . The code for that demo can be found in the demos/apache-proxy directory. Using --uds you can use Unix domain sockets similar to the nginx example: ProxyPass /my-datasette/ unix:/tmp/datasette.sock|http://localhost/my-datasette/ The ProxyPreserveHost On directive ensures that the original Host: header from the incoming request is passed through to Datasette. Datasette needs this to correctly assemble links to other pages using the .absolute_url(request, path) method. 7  
24 The Datasette Ecosystem Datasette sits at the center of a growing ecosystem of open source tools aimed at making it as easy as possible to gather, analyze and publish interesting data. These tools are divided into two main groups: tools for building SQLite databases (for use with Datasette) and plugins that extend Datasette's functionality. The Datasette project website includes a directory of plugins and a directory of tools: Plugins directory on datasette.io Tools directory on datasette.io 7  
25 sqlite-utils sqlite-utils is a key building block for the wider Datasette ecosystem. It provides a collection of utilities for manipulating SQLite databases, both as a Python library and a command-line utility. Features include: Insert data into a SQLite database from JSON, CSV or TSV, automatically creating tables with the correct schema or altering existing tables to add missing columns. Configure tables for use with SQLite full-text search, including creating triggers needed to keep the search index up-to-date. Modify tables in ways that are not supported by SQLite's default ALTER TABLE syntax - for example changing the types of columns or selecting a new primary key for a table. Add foreign keys to existing database tables. Extract columns of data into a separate lookup table. 7
26 Dogsheep Dogsheep is a collection of tools for personal analytics using SQLite and Datasette. The project provides tools like github-to-sqlite and twitter-to-sqlite that can import data from different sources in order to create a personal data warehouse. Personal Data Warehouses: Reclaiming Your Data is a talk that explains Dogsheep and demonstrates it in action. 7  
27 Authentication and permissions Datasette does not require authentication by default. Any visitor to a Datasette instance can explore the full data and execute read-only SQL queries. Datasette's plugin system can be used to add many different styles of authentication, such as user accounts, single sign-on or API keys. 7  
28 Actors Through plugins, Datasette can support both authenticated users (with cookies) and authenticated API agents (via authentication tokens). The word "actor" is used to cover both of these cases. Every request to Datasette has an associated actor value, available in the code as request.actor . This can be None for unauthenticated requests, or a JSON compatible Python dictionary for authenticated users or API agents. The actor dictionary can be any shape - the design of that data structure is left up to the plugins. A useful convention is to include an "id" string, as demonstrated by the "root" actor below. Plugins can use the actor_from_request(datasette, request) hook to implement custom logic for authenticating an actor based on the incoming HTTP request. 7  
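Here is a minimal sketch of a plugin implementing that hook - the bearer-token lookup is a hypothetical example, not a recommended authentication scheme:

from datasette import hookimpl

# Hypothetical mapping of API tokens to actor dictionaries
API_TOKENS = {"secret-token-1": {"id": "api-user"}}

@hookimpl
def actor_from_request(datasette, request):
    authorization = request.headers.get("authorization") or ""
    if authorization.startswith("Bearer "):
        # Returning a dictionary sets request.actor for this request
        return API_TOKENS.get(authorization[len("Bearer "):])
    # Returning None leaves the request unauthenticated
    return None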
29 Using the "root" actor Datasette currently leaves almost all forms of authentication to plugins - datasette-auth-github for example. The one exception is the "root" account, which you can sign into while using Datasette on your local machine. This provides access to a small number of debugging features. To sign in as root, start Datasette using the --root command-line option, like this: $ datasette --root http://127.0.0.1:8001/-/auth-token?token=786fc524e0199d70dc9a581d851f466244e114ca92f33aa3b42a139e9388daa7 INFO: Started server process [25801] INFO: Waiting for application startup. INFO: Application startup complete. INFO: Uvicorn running on http://127.0.0.1:8001 (Press CTRL+C to quit) The URL on the first line includes a one-use token which can be used to sign in as the "root" actor in your browser. Click on that link and then visit http://127.0.0.1:8001/-/actor to confirm that you are authenticated as an actor that looks like this: { "id": "root" } 7  
30 Permissions Datasette has an extensive permissions system built-in, which can be further extended and customized by plugins. The key question the permissions system answers is this: Is this actor allowed to perform this action , optionally against this particular resource ? Actors are described above . An action is a string describing the action the actor would like to perform. A full list is provided below - examples include view-table and execute-sql . A resource is the item the actor wishes to interact with - for example a specific database or table. Some actions, such as permissions-debug , are not associated with a particular resource. Datasette's built-in view permissions ( view-database , view-table etc) default to allow - unless you configure additional permission rules unauthenticated users will be allowed to access content. Permissions with potentially harmful effects should default to deny . Plugin authors should account for this when designing new plugins - for example, the datasette-upload-csvs plugin defaults to deny so that installations don't accidentally allow unauthenticated users to create new tables by uploading a CSV file. 7  
31 Defining permissions with "allow" blocks The standard way to define permissions in Datasette is to use an "allow" block. This is a JSON document describing which actors are allowed to perform a permission. The most basic form of allow block is this ( allow demo , deny demo ): { "allow": { "id": "root" } } This will match any actors with an "id" property of "root" - for example, an actor that looks like this: { "id": "root", "name": "Root User" } An allow block can specify "deny all" using false ( demo ): { "allow": false } An "allow" of true allows all access ( demo ): { "allow": true } Allow keys can provide a list of values. These will match any actor that has any of those values ( allow demo , deny demo ): { "allow": { "id": ["simon", "cleopaws"] } } This will match any actor with an "id" of either "simon" or "cleopaws" . Actors can have properties that feature a list of values. These will be matched against the list of values in an allow block. Consider the following actor: { "id": "simon", "roles": ["staff", "developer"] } This allow block will provide access to any actor that has "developer" as one of their roles ( allow demo , deny demo ): { "allow": { "roles": ["developer"] } } Note that "roles" is not a concept that is baked into Datasette - it's a convention that plugins can choose to implement and act on. If you want to provide access to any actor with a value for a specific key, use "*" . For example, to match any logged-in user specify the following ( allow demo , deny demo ): { "allow": { "id": "*" } } You can specify that only unauthenticated actors (from anonymous HTTP requests) should be all… 7  
32 The /-/allow-debug tool The /-/allow-debug tool lets you try out different "action" blocks against different "actor" JSON objects. You can try that out here: https://latest.datasette.io/-/allow-debug 7  
33 Configuring permissions in metadata.json You can limit who is allowed to view different parts of your Datasette instance using "allow" keys in your Metadata configuration. You can control the following: Access to the entire Datasette instance Access to specific databases Access to specific tables and views Access to specific Canned queries If a user cannot access a specific database, they will not be able to access tables, views or queries within that database. If a user cannot access the instance they will not be able to access any of the databases, tables, views or queries. 7  
34 Controlling access to an instance Here's how to restrict access to your entire Datasette instance to just the "id": "root" user: { "title": "My private Datasette instance", "allow": { "id": "root" } } To deny access to all users, you can use "allow": false : { "title": "My entirely inaccessible instance", "allow": false } One reason to do this is if you are using a Datasette plugin - such as datasette-permissions-sql - to control permissions instead. 7  
35 Controlling access to specific databases To limit access to a specific private.db database to just authenticated users, use the "allow" block like this: { "databases": { "private": { "allow": { "id": "*" } } } } 7  
36 Controlling access to specific tables and views To limit access to the users table in your bakery.db database: { "databases": { "bakery": { "tables": { "users": { "allow": { "id": "*" } } } } } } This works for SQL views as well - you can list their names in the "tables" block above in the same way as regular tables. Restricting access to tables and views in this way will NOT prevent users from querying them using arbitrary SQL queries, like this for example. If you are restricting access to specific tables you should also use the "allow_sql" block to prevent users from bypassing the limit with their own SQL queries - see Controlling the ability to execute arbitrary SQL . 7  
37 Controlling access to specific canned queries Canned queries allow you to configure named SQL queries in your metadata.json that can be executed by users. These queries can be set up to both read and write to the database, so controlling who can execute them can be important. To limit access to the add_name canned query in your dogs.db database to just the root user : { "databases": { "dogs": { "queries": { "add_name": { "sql": "INSERT INTO names (name) VALUES (:name)", "write": true, "allow": { "id": ["root"] } } } } } } 7  
38 Controlling the ability to execute arbitrary SQL Datasette defaults to allowing any site visitor to execute their own custom SQL queries, for example using the form on the database page or by appending a ?_where= parameter to the table page like this . Access to this ability is controlled by the execute-sql permission. The easiest way to disable arbitrary SQL queries is using the default_allow_sql setting when you first start Datasette running. You can alternatively use an "allow_sql" block to control who is allowed to execute arbitrary SQL queries. To prevent any user from executing arbitrary SQL queries, use this: { "allow_sql": false } To enable just the root user to execute SQL for all databases in your instance, use the following: { "allow_sql": { "id": "root" } } To limit this ability for just one specific database, use this: { "databases": { "mydatabase": { "allow_sql": { "id": "root" } } } } 7  
39 Checking permissions in plugins Datasette plugins can check if an actor has permission to perform an action using the datasette.permission_allowed(...) method. Datasette core performs a number of permission checks, documented below . Plugins can implement the permission_allowed(datasette, actor, action, resource) plugin hook to participate in decisions about whether an actor should be able to perform a specified action. 7  
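A sketch of that hook: this hypothetical plugin denies execute-sql to everyone except the "root" actor and returns None (no opinion) for every other check, leaving the decision to other plugins and Datasette's defaults:

from datasette import hookimpl

@hookimpl
def permission_allowed(actor, action, resource):
    if action == "execute-sql":
        # Only the root actor may run arbitrary SQL against any database
        return actor is not None and actor.get("id") == "root"
    # None means "no opinion" for all other actions
    return None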
40 actor_matches_allow() Plugins that wish to implement this same "allow" block permissions scheme can take advantage of the datasette.utils.actor_matches_allow(actor, allow) function: from datasette.utils import actor_matches_allow actor_matches_allow({"id": "root"}, {"id": "*"}) # returns True The currently authenticated actor is made available to plugins as request.actor . 7  
41 The permissions debug tool The debug tool at /-/permissions is only available to the authenticated root user (or any actor granted the permissions-debug action according to a plugin). It shows the thirty most recent permission checks that have been carried out by the Datasette instance. This is designed to help administrators and plugin authors understand exactly how permission checks are being carried out, in order to effectively configure Datasette's permission system. 7  
42 The ds_actor cookie Datasette includes a default authentication plugin which looks for a signed ds_actor cookie containing a JSON actor dictionary. This is how the root actor mechanism works. Authentication plugins can set signed ds_actor cookies themselves like so: response = Response.redirect("/") response.set_cookie( "ds_actor", datasette.sign({"a": {"id": "cleopaws"}}, "actor"), ) Note that you need to pass "actor" as the namespace to .sign(value, namespace="default") . The shape of data encoded in the cookie is as follows: { "a": {... actor ...} } 7  
43 Including an expiry time ds_actor cookies can optionally include a signed expiry timestamp, after which the cookies will no longer be valid. Authentication plugins may choose to use this mechanism to limit the lifetime of the cookie. For example, if a plugin implements single-sign-on against another source it may decide to set short-lived cookies so that if the user is removed from the SSO system their existing Datasette cookies will stop working shortly afterwards. To include an expiry, add an "e" key to the cookie value containing a base62-encoded integer representing the timestamp when the cookie should expire. For example, here's how to set a cookie that expires after 24 hours: import time from datasette.utils import baseconv expires_at = int(time.time()) + (24 * 60 * 60) response = Response.redirect("/") response.set_cookie( "ds_actor", datasette.sign( { "a": {"id": "cleopaws"}, "e": baseconv.base62.encode(expires_at), }, "actor", ), ) The resulting cookie will encode data that looks something like this: { "a": { "id": "cleopaws" }, "e": "1jjSji" } 7
44 The /-/logout page The page at /-/logout provides the ability to log out of a ds_actor cookie authentication session. 7  
45 Built-in permissions This section lists all of the permission checks that are carried out by Datasette core, along with the resource if it was passed. 7  
46 view-instance Top level permission - Actor is allowed to view any pages within this instance, starting at https://latest.datasette.io/ Default allow . 7  
47 view-database Actor is allowed to view a database page, e.g. https://latest.datasette.io/fixtures resource - string The name of the database Default allow . 7  
48 view-database-download Actor is allowed to download a database, e.g. https://latest.datasette.io/fixtures.db resource - string The name of the database Default allow . 7  
49 view-table Actor is allowed to view a table (or view) page, e.g. https://latest.datasette.io/fixtures/complex_foreign_keys resource - tuple: (string, string) The name of the database, then the name of the table Default allow . 7  
50 view-query Actor is allowed to view (and execute) a canned query page, e.g. https://latest.datasette.io/fixtures/pragma_cache_size - this includes executing Writable canned queries . resource - tuple: (string, string) The name of the database, then the name of the canned query Default allow . 7  
51 execute-sql Actor is allowed to run arbitrary SQL queries against a specific database, e.g. https://latest.datasette.io/fixtures?sql=select+100 resource - string The name of the database Default allow . See also the default_allow_sql setting . 7  
52 permissions-debug Actor is allowed to view the /-/permissions debug page. Default deny . 7  
53 debug-menu Controls if the various debug pages are displayed in the navigation menu. Default deny . 7  
54 JSON API Datasette provides a JSON API for your SQLite databases. Anything you can do through the Datasette user interface can also be accessed as JSON via the API. To access the API for a page, either click on the .json link on that page or edit the URL and add a .json extension to it. If you started Datasette with the --cors option, each JSON endpoint will be served with the following additional HTTP headers: Access-Control-Allow-Origin: * Access-Control-Allow-Headers: Authorization Access-Control-Expose-Headers: Link This means JavaScript running on any domain will be able to make cross-origin requests to fetch the data. If you start Datasette without the --cors option only JavaScript running on the same domain as Datasette will be able to access the API. 7  
55 Different shapes The default JSON representation of data from a SQLite table or custom query looks like this: { "database": "sf-trees", "table": "qSpecies", "columns": [ "id", "value" ], "rows": [ [ 1, "Myoporum laetum :: Myoporum" ], [ 2, "Metrosideros excelsa :: New Zealand Xmas Tree" ], [ 3, "Pinus radiata :: Monterey Pine" ] ], "truncated": false, "next": "100", "next_url": "http://127.0.0.1:8001/sf-trees-02c8ef1/qSpecies.json?_next=100", "query_ms": 1.9571781158447266 } The columns key lists the columns that are being returned, and the rows key then returns a list of lists, each one representing a row. The order of the values in each row corresponds to the columns. The _shape parameter can be used to access alternative formats for the rows key which may be more convenient for your application. There are several options: ?_shape=arrays - "rows" is the default option, shown above ?_shape=objects - "rows" is a list of JSON key/value objects ?_shape=array - a JSON array of objects ?_shape=array&_nl=on - a newline-separated list of JSON objects ?_shape=arrayfirst - a flat JSON array containing just the first value from each row ?_shape=object - a JSON object keyed using the primary keys of the rows _shape=objects looks like this: { "database": "sf-trees", ... "rows": [ { "id": 1, … 7
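Here's a quick sketch of requesting ?_shape=array from Python, assuming a local instance serving the sf-trees database used in the example above (the host and port are placeholders):

import json
import urllib.request

url = "http://127.0.0.1:8001/sf-trees/Street_Tree_List.json?_shape=array&_size=2"

with urllib.request.urlopen(url) as response:
    rows = json.load(response)

# With _shape=array each row is a JSON object keyed by column name
for row in rows:
    print(row["qSpecies"])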
56 Pagination The default JSON representation includes a "next_url" key which can be used to access the next page of results. If that key is null or missing then it means you have reached the final page of results. Other representations include pagination information in the link HTTP header. That header will look something like this: link: <https://latest.datasette.io/fixtures/sortable.json?_next=d%2Cv>; rel="next" Here is an example Python function built using requests that returns a list of all of the paginated items from one of these API endpoints: import requests def paginate(url): items = [] while url: response = requests.get(url) try: url = response.links.get("next").get("url") except AttributeError: url = None items.extend(response.json()) return items 7
57 Special JSON arguments Every Datasette endpoint that can return JSON also accepts the following query string arguments: ?_shape=SHAPE The shape of the JSON to return, documented above. ?_nl=on When used with ?_shape=array produces newline-delimited JSON objects. ?_json=COLUMN1&_json=COLUMN2 If any of your SQLite columns contain JSON values, you can use one or more _json= parameters to request that those columns be returned as regular JSON. Without this argument those columns will be returned as JSON objects that have been double-encoded into a JSON string value. Compare this query without the argument to this query using the argument ?_json_infinity=on If your data contains infinity or -infinity values, Datasette will replace them with None when returning them as JSON. If you pass _json_infinity=1 Datasette will instead return them as Infinity or -Infinity which is invalid JSON but can be processed by some custom JSON parsers. ?_timelimit=MS Sets a custom time limit for the query in ms. You can use this for optimistic queries where you would like Datasette to give up if the query takes too long, for example if you want to implement autocomplete search but only… 7  
58 Table arguments The Datasette table view takes a number of special query string arguments. 7  
59 Column filter arguments You can filter the data returned by the table based on column values using a query string argument. ?column__exact=value or ?_column=value Returns rows where the specified column exactly matches the value. ?column__not=value Returns rows where the column does not match the value. ?column__contains=value Rows where the string column contains the specified value ( column like "%value%" in SQL). ?column__endswith=value Rows where the string column ends with the specified value ( column like "%value" in SQL). ?column__startswith=value Rows where the string column starts with the specified value ( column like "value%" in SQL). ?column__gt=value Rows which are greater than the specified value. ?column__gte=value Rows which are greater than or equal to the specified value. ?column__lt=value Rows which are less than the specified value. … 7  
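These arguments compose into an ordinary query string. Here's a small Python sketch building a filter URL against the Street_Tree_List table used elsewhere in these docs (the specific filter values are made up for illustration):

import urllib.parse

filters = {
    "qSpecies__contains": "Pine",   # column like "%Pine%"
    "SiteOrder__gte": "2",          # SiteOrder >= 2
}
url = "/sf-trees/Street_Tree_List.json?" + urllib.parse.urlencode(filters)
print(url)  # /sf-trees/Street_Tree_List.json?qSpecies__contains=Pine&SiteOrder__gte=2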
60 Special table arguments ?_col=COLUMN1&_col=COLUMN2 List specific columns to display. These will be shown along with any primary keys. ?_nocol=COLUMN1&_nocol=COLUMN2 List specific columns to hide - any column not listed will be displayed. Primary keys cannot be hidden. ?_labels=on/off Expand foreign key references for every possible column. See below. ?_label=COLUMN1&_label=COLUMN2 Expand foreign key references for one or more specified columns. ?_size=1000 or ?_size=max Sets a custom page size. This cannot exceed the max_returned_rows limit passed to datasette serve . Use max to get max_returned_rows . ?_sort=COLUMN Sorts the results by the specified column. ?_sort_desc=COLUMN Sorts the results by the specified column in descending order. ?_search=keywords For SQLite tables that have been configured for full-text search executes a search … 7  
61 Expanding foreign key references Datasette can detect foreign key relationships and resolve those references into labels. The HTML interface does this by default for every detected foreign key column - you can turn that off using ?_labels=off . You can request foreign keys be expanded in JSON using the _labels=on or _label=COLUMN special query string parameters. Here's what an expanded row looks like: [ { "rowid": 1, "TreeID": 141565, "qLegalStatus": { "value": 1, "label": "Permitted Site" }, "qSpecies": { "value": 1, "label": "Myoporum laetum :: Myoporum" }, "qAddress": "501X Baker St", "SiteOrder": 1 } ] The column in the foreign key table that is used for the label can be specified in metadata.json - see Specifying the label column for a table . 7  
62 Discovering the JSON for a page Most of the HTML pages served by Datasette provide a mechanism for discovering their JSON equivalents using the HTML link mechanism. You can find this near the top of the source code of those pages, looking like this: <link rel="alternate" type="application/json+datasette" href="https://latest.datasette.io/fixtures/sortable.json"> The JSON URL is also made available in a Link HTTP header for the page: Link: https://latest.datasette.io/fixtures/sortable.json; rel="alternate"; type="application/json+datasette" 7  
63 Changelog   7  
64 0.65.2 (2025-11-05) Fixes an open redirect security issue: Datasette instances would redirect to example.com/foo/bar if you accessed the path //example.com/foo/bar . Thanks to James Jefferies for the fix. ( #2429 ) Upgraded for compatibility with Python 3.14. Fixed datasette publish cloudrun to work with changes to the underlying Cloud Run architecture. ( #2511 ) Minor upgrades to fix warnings, including pkg_resources deprecation. 7  
65 0.65.1 (2024-12-28) Fixed bug with upgraded HTTPX 0.28.0 dependency. ( #2443 ) 7  
66 0.65 (2024-10-07) Upgrade for compatibility with Python 3.13 (by vendoring Pint dependency). ( #2434 ) Dropped support for Python 3.8. 7  
67 0.64.8 (2024-06-21) Security improvement: 404 pages used to reflect content from the URL path, which could be used to display misleading information to Datasette users. 404 errors no longer display additional information from the URL. ( #2359 ) Backported a better fix for correctly extracting named parameters from canned query SQL against SQLite 3.46.0. ( #2353 ) 7  
68 0.64.7 (2024-06-12) Fixed a bug where canned queries with named parameters threw an error when run against SQLite 3.46.0. ( #2353 ) 7  
69 0.64.6 (2023-12-22) Fixed a bug where CSV export with expanded labels could fail if a foreign key reference did not correctly resolve. ( #2214 ) 7  
70 0.64.5 (2023-10-08) Dropped dependency on click-default-group-wheel , which could cause a dependency conflict. ( #2197 ) 7  
71 0.64.4 (2023-09-21) Fix for a crashing bug caused by viewing the table page for a named in-memory database. ( #2189 ) 7  
72 0.64.3 (2023-04-27) Added pip and setuptools as explicit dependencies. This fixes a bug where Datasette could not be installed using Rye . ( #2065 ) 7  
73 0.64.2 (2023-03-08) Fixed a bug with datasette publish cloudrun where deploys all used the same Docker image tag. This was mostly inconsequential as the service is deployed as soon as the image has been pushed to the registry, but could result in the incorrect image being deployed if two different deploys for two separate services ran at exactly the same time. ( #2036 ) 7  
74 0.64.1 (2023-01-11) Documentation now links to a current source of information for installing Python 3. ( #1987 ) Incorrectly calling the Datasette constructor using Datasette("path/to/data.db") instead of Datasette(["path/to/data.db"]) now returns a useful error message. ( #1985 ) 7  
75 0.64 (2023-01-09) Datasette now strongly recommends against allowing arbitrary SQL queries if you are using SpatiaLite . SpatiaLite includes SQL functions that could cause the Datasette server to crash. See SpatiaLite for more details. New default_allow_sql setting, providing an easier way to disable all arbitrary SQL execution by end users: datasette --setting default_allow_sql off . See also Controlling the ability to execute arbitrary SQL . ( #1409 ) Building a location to time zone API with SpatiaLite is a new Datasette tutorial showing how to safely use SpatiaLite to create a location to time zone API. New documentation about how to debug problems loading SQLite extensions . The error message shown when an extension cannot be loaded has also been improved. ( #1979 ) Fixed an accessibility issue: the <select> elements in the table filter form now show an outline when they are currently focused. ( #1771 ) 7  
76 0.63.3 (2022-12-17) Fixed a bug where datasette --root , when running in Docker, would only output the URL to sign in as root when the server shut down, not when it started up. ( #1958 ) You no longer need to ensure await datasette.invoke_startup() has been called in order for Datasette to start correctly serving requests - this is now handled automatically the first time the server receives a request. This fixes a bug experienced when Datasette is served directly by an ASGI application server such as Uvicorn or Gunicorn. It also fixes a bug with the datasette-gunicorn plugin. ( #1955 ) 7  
77 0.63.2 (2022-11-18) Fixed a bug in datasette publish heroku where deployments failed due to an older version of Python being requested. ( #1905 ) New datasette publish heroku --generate-dir <dir> option for generating a Heroku deployment directory without deploying it. 7  
78 0.63.1 (2022-11-10) Fixed a bug where Datasette's table filter form would not redirect correctly when run behind a proxy using the base_url setting. ( #1883 ) SQL query is now shown wrapped in a <textarea> if a query exceeds a time limit. ( #1876 ) Fixed an intermittent "Too many open files" error while running the test suite. ( #1843 ) New db.close() internal method. 7  
79 0.63 (2022-10-27) See Datasette 0.63: The annotated release notes for more background on the changes in this release. 7  
80 Features Now tested against Python 3.11. Docker containers used by datasette publish and datasette package both now use that version of Python. ( #1853 ) --load-extension option now supports entrypoints. Thanks, Alex Garcia. ( #1789 ) Facet size can now be set per-table with the new facet_size table metadata option. ( #1804 ) The truncate_cells_html setting now also affects long URLs in columns. ( #1805 ) The non-JavaScript SQL editor textarea now increases height to fit the SQL query. ( #1786 ) Facets are now displayed with better line-breaks in long values. Thanks, Daniel Rech. ( #1794 ) The settings.json file used in Configuration directory mode is now validated on startup. ( #1816 ) SQL queries can now include leading SQL comments, using /* ... */ or -- ... syntax. Thanks, Charles Nepote. ( #1860 ) SQL query is now re-displayed when terminated with a time limit error. ( #1819 ) The inspect data mechanism is now used to speed up server startup - thanks, Forest Gregg. ( #1834 ) In Configuration directory mode databases with filenames ending in .sqlite or .sqlite3 are now automatically added to the Datasette instance. ( #1646 ) Breadcrumb navigation display now respects the current user's permissions. ( #1831 ) 7  
81 Plugin hooks and internals The prepare_jinja2_environment(env, datasette) plugin hook now accepts an optional datasette argument. Hook implementations can also now return an async function which will be awaited automatically. ( #1809 ) Database(is_mutable=) now defaults to True . ( #1808 ) The datasette.check_visibility() method now accepts an optional permissions= list, allowing it to take multiple permissions into account at once when deciding if something should be shown as public or private. This has been used to correctly display padlock icons in more places in the Datasette interface. ( #1829 ) Datasette no longer enforces upper bounds on its dependencies. ( #1800 ) 7  
82 Documentation New tutorial: Cleaning data with sqlite-utils and Datasette . Screenshots in the documentation are now maintained using shot-scraper , as described in Automating screenshots for the Datasette documentation using shot-scraper . ( #1844 ) More detailed command descriptions on the CLI reference page. ( #1787 ) New documentation on Running Datasette using OpenRC - thanks, Adam Simpson. ( #1825 ) 7  
83 0.62 (2022-08-14) Datasette can now run entirely in your browser using WebAssembly. Try out Datasette Lite , take a look at the code or read more about it in Datasette Lite: a server-side Python web application running in a browser . Datasette now has a Discord community for questions and discussions about Datasette and its ecosystem of projects. 7  
84 Features Datasette is now compatible with Pyodide . This is the enabling technology behind Datasette Lite . ( #1733 ) Database file downloads now implement conditional GET using ETags. ( #1739 ) HTML for facet results and suggested results has been extracted out into new templates _facet_results.html and _suggested_facets.html . Thanks, M. Nasimul Haque. ( #1759 ) Datasette now runs some SQL queries in parallel. This has limited impact on performance; see this research issue for details. New --nolock option for ignoring file locks when opening read-only databases. ( #1744 ) Spaces in the database names in URLs are now encoded as + rather than ~20 . ( #1701 ) <Binary: 2427344 bytes> is now displayed as <Binary: 2,427,344 bytes> and is accompanied by a tooltip showing "2.3MB". ( #1712 ) The base Docker image used by datasette publish cloudrun , datasette package and the official Datasette image has been upgraded to 3.10.6-slim-bullseye . ( #1768 ) Canned writable queries against immutable databases now show a warning message. ( #1728 ) datasette publish cloudrun has a new --timeout option which can be used to increase the time limit applied by the Google Cloud build environment. Thanks, Tim Sherratt. ( #1717 ) datasette publish cloudrun has new --min-instances and --max-instances options. ( #1779 ) 7
85 Plugin hooks New plugin hook: handle_exception() , for custom handling of exceptions caught by Datasette. ( #1770 ) The render_cell() plugin hook is now also passed a row argument, representing the sqlite3.Row object that is being rendered. ( #1300 ) The configuration directory is now stored in datasette.config_dir , making it available to plugins. Thanks, Chris Amico. ( #1766 ) 7  
86 Bug fixes Don't show the facet option in the cog menu if faceting is not allowed. ( #1683 ) ?_sort and ?_sort_desc now work if the column that is being sorted has been excluded from the query using ?_col= or ?_nocol= . ( #1773 ) Fixed bug where ?_sort_desc was duplicated in the URL every time the Apply button was clicked. ( #1738 ) 7  
87 Documentation Examples in the documentation now include a copy-to-clipboard button. ( #1748 ) Documentation now uses the Furo Sphinx theme. ( #1746 ) Code examples in the documentation are now all formatted using Black. ( #1718 ) Request.fake() method is now documented, see Request object . New documentation for plugin authors: Registering a plugin for the duration of a test . ( #903 ) 7  
88 0.61.1 (2022-03-23) Fixed a bug where databases with a different route from their name (as used by the datasette-hashed-urls plugin ) returned errors when executing custom SQL queries. ( #1682 ) 7  
89 0.61 (2022-03-23) In preparation for Datasette 1.0, this release includes two potentially backwards-incompatible changes. Hashed URL mode has been moved to a separate plugin, and the way Datasette generates URLs to databases and tables with special characters in their name such as / and . has changed. Datasette also now requires Python 3.7 or higher. URLs within Datasette now use a different encoding scheme for tables or databases that include "special" characters outside of the range of a-zA-Z0-9_- . This scheme is explained here: Tilde encoding . ( #1657 ) Removed hashed URL mode from Datasette. The new datasette-hashed-urls plugin can be used to achieve the same result, see datasette-hashed-urls for details. ( #1661 ) Databases can now have a custom path within the Datasette instance that is independent of the database name, using the db.route property. ( #1668 ) Datasette is now covered by a Code of Conduct . ( #1654 ) Python 3.6 is no longer supported. ( #1577 ) Tests now run against Python 3.11-dev. ( #1621 ) New datasette.ensure_permissions(actor, permissions) internal method for checking multiple permissions at once. ( #1675 ) New datasette.check_visibility(actor, action, resource=None) internal method for checking if a user can see a resource that would otherwise be invisible to unauthenticated users. ( #1678 ) Table and row HTML pages now include a <link rel="alternate" type="application/json+datasette" href="..."> element and return a Link: URL; rel="alternate"; type="applicatio… 7  
90 0.60.2 (2022-02-07) Fixed a bug where Datasette would open the same file twice with two different database names if you ran datasette file.db file.db . ( #1632 ) 7  
91 0.60.1 (2022-01-20) Fixed a bug where installation on Python 3.6 stopped working due to a change to an underlying dependency. This release can now be installed on Python 3.6, but is the last release of Datasette that will support anything less than Python 3.7. ( #1609 ) 7  
92 0.60 (2022-01-13)   7  
93 Plugins and internals New plugin hook: filters_from_request(request, database, table, datasette) , which runs on the table page and can be used to support new custom query string parameters that modify the SQL query. ( #473 ) Added two additional methods for writing to the database: await db.execute_write_script(sql, block=True) and await db.execute_write_many(sql, params_seq, block=True) . ( #1570 ) The db.execute_write() internal method now defaults to blocking until the write operation has completed. Previously it defaulted to queuing the write and then continuing to run code while the write was in the queue. ( #1579 ) Database write connections now execute the prepare_connection(conn, database, datasette) plugin hook. ( #1564 ) The Datasette() constructor no longer requires the files= argument, and is now documented at Datasette class . ( #1563 ) The tracing feature now traces write queries, not just read queries. ( #1568 ) The query string variables exposed by request.args will now include blank strings for arguments such as foo in ?foo=&bar=1 rather than ignoring those parameters entirely. ( #1551 ) 7  
94 Faceting The number of unique values in a facet is now always displayed. Previously it was only displayed if the user specified ?_facet_size=max . ( #1556 ) Facets of type date or array can now be configured in metadata.json , see Facets in metadata.json . Thanks, David Larlet. ( #1552 ) New ?_nosuggest=1 parameter for table views, which disables facet suggestion. ( #1557 ) Fixed bug where ?_facet_array=tags&_facet=tags would only display one of the two selected facets. ( #625 ) 7  
95 Other small fixes Made several performance improvements to the database schema introspection code that runs when Datasette first starts up. ( #1555 ) Label columns detected for foreign keys are now case-insensitive, so Name or TITLE will be detected in the same way as name or title . ( #1544 ) Upgraded Pluggy dependency to 1.0. ( #1575 ) Now using Plausible analytics for the Datasette documentation. explain query plan is now allowed with varying amounts of whitespace in the query. ( #1588 ) New CLI reference page showing the output of --help for each of the datasette sub-commands. This led to several small improvements to the help copy. ( #1594 ) Fixed bug where writable canned queries could not be used with custom templates. ( #1547 ) Improved fix for a bug where columns with an underscore prefix could result in unnecessary hidden form fields. ( #1527 ) 7
96 0.59.4 (2021-11-29) Fixed bug where columns with a leading underscore could not be removed from the interactive filters list. ( #1527 ) Fixed bug where columns with a leading underscore were not correctly linked to by the "Links from other tables" interface on the row page. ( #1525 ) Upgraded dependencies aiofiles , black and janus . 7  
97 0.59.3 (2021-11-20) Fixed numerous bugs when running Datasette behind a proxy with a prefix URL path using the base_url setting. A live demo of this mode is now available at datasette-apache-proxy-demo.datasette.io/prefix/ . ( #1519 , #838 ) ?column__arraycontains= and ?column__arraynotcontains= table parameters now also work against SQL views. ( #448 ) ?_facet_array=column no longer returns incorrect counts if columns contain the same value more than once. 7  
98 0.59.2 (2021-11-13) Column names with a leading underscore now work correctly when used as a facet. ( #1506 ) Applying ?_nocol= to a column no longer removes that column from the filtering interface. ( #1503 ) Official Datasette Docker container now uses Debian Bullseye as the base image. ( #1497 ) Datasette is four years old today! Here's the original release announcement from 2017. 7  
99 0.59.1 (2021-10-24) Fix compatibility with Python 3.10. ( #1482 ) Documentation on how to use Named parameters with integer and floating point values. ( #1496 ) 7  
100 0.59 (2021-10-14) Columns can now have associated metadata descriptions in metadata.json , see Column descriptions . ( #942 ) New register_commands() plugin hook allows plugins to register additional Datasette CLI commands, e.g. datasette mycommand file.db . ( #1449 ) Adding ?_facet_size=max to a table page now shows the number of unique values in each facet. ( #1423 ) Upgraded dependency httpx 0.20 - the undocumented allow_redirects= parameter to datasette.client is now follow_redirects= , and defaults to False where it previously defaulted to True . ( #1488 ) The --cors option now causes Datasette to return the Access-Control-Allow-Headers: Authorization header, in addition to Access-Control-Allow-Origin: * . ( #1467 ) Code that figures out which named parameters a SQL query takes in order to display form fields for them is no longer confused by strings that contain colon characters. ( #1421 ) Renamed --help-config option to --help-settings . ( #1431 ) datasette.databases property is now a documented API. ( #1443 ) The base.html template now wraps everything other than the <footer> in a <div class="not-footer"> element, to help with advanced CSS customization. ( #1446 ) The render_cell() plugin hook can now return an awaitable function. This means the hook can execute SQL queries. ( #1425 ) register_routes(datasette) plugin hook now accepts an optional datasette argument. ( #1404 ) … 7  
101 0.58.1 (2021-07-16) Fix for an intermittent race condition caused by the refresh_schemas() internal function. ( #1231 ) 7  