sections_fts
478 rows sorted by title
Link | rowid | title | content | sections_fts | rank |
---|---|---|---|---|---|
131 | 131 | Discovering the JSON for a page | Most of the HTML pages served by Datasette provide a mechanism for discovering their JSON equivalents using the HTML link mechanism. You can find this near the top of the source code of those pages, looking like this: <link rel="alternate" type="application/json+datasette" href="https://latest.datasette.io/fixtures/sortable.json"> The JSON URL is also made available in a Link HTTP header for the page: Link: https://latest.datasette.io/fixtures/sortable.json; rel="alternate"; type="application/json+datasette" | 68 | |
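A quick way to follow this discovery mechanism from code is to read that Link header; here's a minimal sketch using the requests library against the demo instance mentioned above:

```python
import requests

resp = requests.get("https://latest.datasette.io/fixtures/sortable")
# requests parses the Link header into resp.links, keyed by rel
alternate = resp.links.get("alternate", {})
print(alternate.get("url"))  # the .json equivalent of the page
```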
258 | 258 | Documentation | New tutorial: Cleaning data with sqlite-utils and Datasette . Screenshots in the documentation are now maintained using shot-scraper , as described in Automating screenshots for the Datasette documentation using shot-scraper . ( #1844 ) More detailed command descriptions on the CLI reference page. ( #1787 ) New documentation on Running Datasette using OpenRC - thanks, Adam Simpson. ( #1825 ) | 68 | |
263 | 263 | Documentation | Examples in the documentation now include a copy-to-clipboard button. ( #1748 ) Documentation now uses the Furo Sphinx theme. ( #1746 ) Code examples in the documentation are now all formatted using Black. ( #1718 ) Request.fake() method is now documented, see Request object . New documentation for plugin authors: Registering a plugin for the duration of a test . ( #903 ) | 68 | |
72 | 72 | Dogsheep | Dogsheep is a collection of tools for personal analytics using SQLite and Datasette. The project provides tools like github-to-sqlite and twitter-to-sqlite that can import data from different sources in order to create a personal data warehouse. Personal Data Warehouses: Reclaiming Your Data is a talk that explains Dogsheep and demonstrates it in action. | 68 | |
364 | 364 | Easier custom templates for table rows | If you want to customize the display of individual table rows, you can do so using a _table.html template include that looks something like this: {% for row in display_rows %} <div> <h2>{{ row["title"] }}</h2> <p>{{ row["description"] }}</p> <p>Category: {{ row.display("category_id") }}</p> </div> {% endfor %} This is a backwards incompatible change . If you previously had a custom template called _rows_and_columns.html you need to rename it to _table.html . See Custom templates for full details. | 68 | |
193 | 193 | Editing and building the documentation | Datasette's documentation lives in the docs/ directory and is deployed automatically using Read The Docs . The documentation is written using reStructuredText. You may find this article on The subset of reStructuredText worth committing to memory useful. You can build it locally by installing sphinx and sphinx_rtd_theme in your Datasette development environment and then running make html directly in the docs/ directory: # You may first need to activate your virtual environment: source venv/bin/activate # Install the dependencies needed to build the docs pip install -e .[docs] # Now build the docs cd docs/ make html This will create the HTML version of the documentation in docs/_build/html . You can open it in your browser like so: open _build/html/index.html Any time you make changes to a .rst file you can re-run make html to update the built documents, then refresh them in your browser. For added productivity, you can use sphinx-autobuild to run Sphinx in auto-build mode. This will run a local webserver serving the docs that automatically rebuilds them and refreshes the page any time you hit save in your editor. sphinx-autobuild will have been installed when you ran pip install -e .[docs] . In your docs/ directory you can start the server by running the following: make livehtml Now browse to http://localhost:8000/ to view the documentation. Any edits you make should be instantly reflected in your browser. | 68 | |
65 | 65 | Enabling full-text search for a SQLite table | Datasette takes advantage of the external content mechanism in SQLite, which allows a full-text search virtual table to be associated with the contents of another SQLite table. To set up full-text search for a table, you need to do two things: Create a new FTS virtual table associated with your table Populate that FTS table with the data that you would like to be able to run searches against | 68 | |
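As a rough sketch of those two steps (assuming FTS5 is available, and using a hypothetical items table with name and description columns):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (name TEXT, description TEXT)")
conn.execute("INSERT INTO items VALUES ('Example', 'Some searchable text')")

# Step 1: create an FTS5 virtual table that uses the external content
# mechanism, so the text itself stays in the original items table
conn.execute(
    "CREATE VIRTUAL TABLE items_fts USING fts5("
    "name, description, content='items', content_rowid='rowid')"
)
# Step 2: populate the index from the content table
conn.execute(
    "INSERT INTO items_fts (rowid, name, description) "
    "SELECT rowid, name, description FROM items"
)
print(conn.execute(
    "SELECT name FROM items_fts WHERE items_fts MATCH 'searchable'"
).fetchall())  # [('Example',)]
```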
130 | 130 | Expanding foreign key references | Datasette can detect foreign key relationships and resolve those references into labels. The HTML interface does this by default for every detected foreign key column - you can turn that off using ?_labels=off . You can request foreign keys be expanded in JSON using the _labels=on or _label=COLUMN special query string parameters. Here's what an expanded row looks like: [ { "rowid": 1, "TreeID": 141565, "qLegalStatus": { "value": 1, "label": "Permitted Site" }, "qSpecies": { "value": 1, "label": "Myoporum laetum :: Myoporum" }, "qAddress": "501X Baker St", "SiteOrder": 1 } ] The column in the foreign key table that is used for the label can be specified in metadata.json - see Specifying the label column for a table . | 68 | |
69 | 69 | FTS versions | There are three different versions of the SQLite FTS module: FTS3, FTS4 and FTS5. You can tell which versions are supported by your instance of Datasette by checking the /-/versions page. FTS5 is the most advanced module but may not be available in the SQLite version that is bundled with your Python installation. Most importantly, FTS5 is the only version that has the ability to order by search relevance without needing extra code. If you can't be sure that FTS5 will be available, you should use FTS4. | 68 | |
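If you want to check FTS support in your own Python environment rather than via /-/versions , one approach is to try creating a throwaway virtual table with each module - a sketch:

```python
import sqlite3

def supported_fts_versions():
    conn = sqlite3.connect(":memory:")
    supported = []
    for fts in ("FTS5", "FTS4", "FTS3"):
        try:
            # creating (then dropping) a virtual table proves the module exists
            conn.execute(f"CREATE VIRTUAL TABLE probe_{fts} USING {fts}(x)")
            conn.execute(f"DROP TABLE probe_{fts}")
            supported.append(fts)
        except sqlite3.OperationalError:
            pass
    return supported

print(supported_fts_versions())  # e.g. ['FTS5', 'FTS4', 'FTS3']
```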
15 | 15 | Facet by JSON array | If your SQLite installation provides the json1 extension (you can check using /-/versions ) Datasette will automatically detect columns that contain JSON arrays of values and offer a faceting interface against those columns. This is useful for modelling things like tags without needing to break them out into a new table. Example here: latest.datasette.io/fixtures/facetable?_facet_array=tags | 68 | |
16 | 16 | Facet by date | If Datasette finds any columns that contain dates in the first 100 values, it will offer a faceting interface against the dates of those values. This works especially well against timestamp values such as 2019-03-01 12:44:00 . Example here: latest.datasette.io/fixtures/facetable?_facet_date=created | 68 | |
363 | 363 | Facet by date | If a column contains datetime values, Datasette can now facet that column by date. ( #481 ) | 68 | |
270 | 270 | Faceting | The number of unique values in a facet is now always displayed. Previously it was only displayed if the user specified ?_facet_size=max . ( #1556 ) Facets of type date or array can now be configured in metadata.json , see Facets in metadata.json . Thanks, David Larlet. ( #1552 ) New ?_nosuggest=1 parameter for table views, which disables facet suggestion. ( #1557 ) Fixed bug where ?_facet_array=tags&_facet=tags would only display one of the two selected facets. ( #625 ) | 68 | |
369 | 369 | Faceting improvements, and faceting plugins | Datasette Facets provide an intuitive way to quickly summarize and interact with data. Previously the only supported faceting technique was column faceting, but 0.28 introduces two powerful new capabilities: facet-by-JSON-array and the ability to define further facet types using plugins. Facet by array ( #359 ) is only available if your SQLite installation provides the json1 extension. Datasette will automatically detect columns that contain JSON arrays of values and offer a faceting interface against those columns - useful for modelling things like tags without needing to break them out into a new table. See Facet by JSON array for more. The new register_facet_classes() plugin hook ( #445 ) can be used to register additional custom facet classes. Each facet class should provide two methods: suggest() which suggests facet selections that might be appropriate for a provided SQL query, and facet_results() which executes a facet operation and returns results. Datasette's own faceting implementations have been refactored to use the same API as these plugins. | 68 | |
10 | 10 | Facets | Datasette facets can be used to add a faceted browse interface to any database table. With facets, tables are displayed along with a summary showing the most common values in specified columns. These values can be selected to further filter the table. Here's an example : Facets can be specified in two ways: using query string parameters, or in metadata.json configuration for the table. | 68 | |
12 | 12 | Facets in metadata.json | You can turn facets on by default for specific tables by adding them to a "facets" key in a Datasette Metadata file. Here's an example that turns on faceting by default for the qLegalStatus column in the Street_Tree_List table in the sf-trees database: { "databases": { "sf-trees": { "tables": { "Street_Tree_List": { "facets": ["qLegalStatus"] } } } } } Facets defined in this way will always be shown in the interface and returned in the API, regardless of the _facet arguments passed to the view. You can specify array or date facets in metadata using JSON objects with a single key of array or date and a value specifying the column, like this: { "facets": [ {"array": "tags"}, {"date": "created"} ] } You can change the default facet size (the number of results shown for each facet) for a table using facet_size : { "databases": { "sf-trees": { "tables": { "Street_Tree_List": { "facets": ["qLegalStatus"], "facet_size": 10 } } } } } | 68 | |
11 | 11 | Facets in query strings | To turn on faceting for specific columns on a Datasette table view, add one or more _facet=COLUMN parameters to the URL. For example, if you want to turn on facets for the city_id and state columns, construct a URL that looks like this: /dbname/tablename?_facet=state&_facet=city_id This works for both the HTML interface and the .json view. When enabled, facets will cause a facet_results block to be added to the JSON output, looking something like this: { "state": { "name": "state", "results": [ { "value": "CA", "label": "CA", "count": 10, "toggle_url": "http://...?_facet=city_id&_facet=state&state=CA", "selected": false }, { "value": "MI", "label": "MI", "count": 4, "toggle_url": "http://...?_facet=city_id&_facet=state&state=MI", "selected": false }, { "value": "MC", "label": "MC", "count": 1, "toggle_url": "http://...?_facet=city_id&_facet=state&state=MC", "selected": false } ], "truncated": false }, "city_id": { "name": "city_id", "results": [ { "value": 1, "label": "San Francisco", "count": 6, "toggle_url": "http://...?_facet=city_id&_facet=state&city_id=1", "selected": false }, { "value": 2, "label": "Los Angeles", "count": 4, "toggle_url": "http://...?_facet=city_id&_facet=state&city_id=2", "selected": false }, { "value": 3, "label": "Detroit", "count": 4, "toggle_url": "http://...?_facet=city_id&_facet=state&city_id=3", "selected": false }, { "value": 4, "label": "Memnonia", "count": 1, "toggle_url": "http://...?_facet=city_id&_facet=state&city_id=4", "selected": false } ], "truncated": false } } If Datasette detect… | 68 | |
256 | 256 | Features | Now tested against Python 3.11. Docker containers used by datasette publish and datasette package both now use that version of Python. ( #1853 ) --load-extension option now supports entrypoints. Thanks, Alex Garcia. ( #1789 ) Facet size can now be set per-table with the new facet_size table metadata option. ( #1804 ) The truncate_cells_html setting now also affects long URLs in columns. ( #1805 ) The non-JavaScript SQL editor textarea now increases height to fit the SQL query. ( #1786 ) Facets are now displayed with better line-breaks in long values. Thanks, Daniel Rech. ( #1794 ) The settings.json file used in Configuration directory mode is now validated on startup. ( #1816 ) SQL queries can now include leading SQL comments, using /* ... */ or -- ... syntax. Thanks, Charles Nepote. ( #1860 ) SQL query is now re-displayed when terminated with a time limit error. ( #1819 ) The inspect data mechanism is now used to speed up server startup - thanks, Forest Gregg. ( #1834 ) In Configuration directory mode databases with filenames ending in .sqlite or .sqlite3 are now automatically added to the Datasette instance. ( #1646 ) Breadcrumb navigation display now respects the current user's permissions. ( #1831 ) | 68 | |
260 | 260 | Features | Datasette is now compatible with Pyodide . This is the enabling technology behind Datasette Lite . ( #1733 ) Database file downloads now implement conditional GET using ETags. ( #1739 ) HTML for facet results and suggested results has been extracted out into new templates _facet_results.html and _suggested_facets.html . Thanks, M. Nasimul Haque. ( #1759 ) Datasette now runs some SQL queries in parallel. This has limited impact on performance, see this research issue for details. New --nolock option for ignoring file locks when opening read-only databases. ( #1744 ) Spaces in the database names in URLs are now encoded as + rather than ~20 . ( #1701 ) <Binary: 2427344 bytes> is now displayed as <Binary: 2,427,344 bytes> and is accompanied by a tooltip showing "2.3MB". ( #1712 ) The base Docker image used by datasette publish cloudrun , datasette package and the official Datasette image has been upgraded to 3.10.6-slim-bullseye . ( #1768 ) Canned writable queries against immutable databases now show a warning message. ( #1728 ) datasette publish cloudrun has a new --timeout option which can be used to increase the time limit applied by the Google Cloud build environment. Thanks, Tim Sherratt. ( #1717 ) datasette publish cloudrun has new --min-instances and --max-instances options. ( #1779 ) | 68 | |
329 | 329 | Flash messages | Writable canned queries needed a mechanism to let the user know that the query has been successfully executed. The new flash messaging system ( #790 ) allows messages to persist in signed cookies which are then displayed to the user on the next page that they visit. Plugins can use this mechanism to display their own messages, see .add_message(request, message, type=datasette.INFO) for details. You can try out the new messages using the /-/messages debug tool, for example at https://latest.datasette.io/-/messages | 68 | |
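A sketch of a plugin using this API - a hypothetical route (path and message text invented here) that queues a message and redirects:

```python
from datasette import hookimpl, Response

async def say_hello(datasette, request):
    # the message is stored in a signed cookie and shown on the next page
    datasette.add_message(request, "Hello from a plugin!", datasette.INFO)
    return Response.redirect("/")

@hookimpl
def register_routes():
    return [(r"^/-/say-hello$", say_hello)]
```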
134 | 134 | Follow a tutorial | Datasette has several tutorials to help you get started with the tool. Try one of the following: Exploring a database with Datasette shows how to use the Datasette web interface to explore a new database. Learn SQL with Datasette introduces SQL, and shows how to use that query language to ask questions of your data. Cleaning data with sqlite-utils and Datasette guides you through using sqlite-utils to turn a CSV file into a database that you can explore using Datasette. | 68 | |
386 | 386 | Foreign key expansions | When Datasette detects a foreign key reference it attempts to resolve a label for that reference (automatically or using the Specifying the label column for a table metadata option) so it can display a link to the associated row. This expansion is now also available for JSON and CSV representations of the table, using the new _labels=on query string option. See Expanding foreign key references for more details. | 68 | |
60 | 60 | Full-text search | SQLite includes a powerful mechanism for enabling full-text search against SQLite records. Datasette can detect if a table has had full-text search configured for it in the underlying database and display a search interface for filtering that table. Here's an example search : Datasette automatically detects which tables have been configured for full-text search. | 68 | |
184 | 184 | General guidelines | main should always be releasable . Incomplete features should live in branches. This ensures that any small bug fixes can be quickly released. The ideal commit should bundle together the implementation, unit tests and associated documentation updates. The commit message should link to an associated issue. New plugin hooks should only be shipped if accompanied by a separate release of a non-demo plugin that uses them. | 68 | |
132 | 132 | Getting started | 68 | ||
223 | 223 | HTTP caching | If your database is immutable and guaranteed not to change, you can gain major performance improvements from Datasette by enabling HTTP caching. This can work at two different levels. First, it can tell browsers to cache the results of queries and serve future requests from the browser cache. More significantly, it allows you to run Datasette behind a caching proxy such as Varnish or use a cache provided by a hosted service such as Fastly or Cloudflare . This can provide incredible speed-ups since a query only needs to be executed by Datasette the first time it is accessed - all subsequent hits can then be served by the cache. Using a caching proxy in this way could enable a Datasette-backed visualization to serve thousands of hits a second while running Datasette itself on extremely inexpensive hosting. Datasette's integration with HTTP caches can be enabled using a combination of configuration options and query string arguments. The default_cache_ttl setting sets the default HTTP cache TTL for all Datasette pages. This is 5 seconds unless you change it - you can set it to 0 if you wish to disable HTTP caching entirely. You can also change the cache timeout on a per-request basis using the ?_ttl=10 query string parameter. This can be useful when you are working with the Datasette JSON API - you may decide that a specific query can be cached for a longer time, or maybe you need to set ?_ttl=0 for some requests for example if you are running a SQL order by random() query. | 68 | |
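For example, requesting a page with a longer TTL and inspecting the cache header it produces - a sketch against the public demo instance (the exact header value depends on that instance's settings):

```python
import requests

# ?_ttl= overrides the default cache TTL for this request only
resp = requests.get(
    "https://latest.datasette.io/fixtures/sortable.json",
    params={"_ttl": 3600},
)
print(resp.headers.get("Cache-Control"))  # expected: max-age=3600
```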
234 | 234 | Hiding tables | You can hide tables from the database listing view (in the same way that FTS and SpatiaLite tables are automatically hidden) using "hidden": true : { "databases": { "database1": { "tables": { "example_table": { "hidden": true } } } } } | 68 | |
221 | 221 | Immutable mode | If you can be certain that a SQLite database file will not be changed by another process you can tell Datasette to open that file in immutable mode . Doing so will disable all locking and change detection, which can result in improved query performance. This also enables further optimizations relating to HTTP caching, described below. To open a file in immutable mode pass it to the datasette command using the -i option: datasette -i data.db When you open a file in immutable mode like this Datasette will also calculate and cache the row counts for each table in that database when it first starts up, further improving performance. | 68 | |
182 | 182 | Import shortcuts | The following commonly used symbols can be imported directly from the datasette module: from datasette import Response from datasette import Forbidden from datasette import NotFound from datasette import hookimpl from datasette import actor_matches_allow | 68 | |
25 | 25 | Importing GeoJSON polygons using Shapely | Another common form of polygon data is the GeoJSON format. This can be imported into SpatiaLite directly, or by using the Shapely Python library. Who's On First is an excellent source of openly licensed GeoJSON polygons. Let's import the geographical polygon for Wales. First, we can use the Who's On First Spelunker tool to find the record for Wales: spelunker.whosonfirst.org/id/404227475 That page includes a link to the GeoJSON record, which can be accessed here: data.whosonfirst.org/404/227/475/404227475.geojson Here's Python code to create a SQLite database, enable SpatiaLite, create a places table and then add a record for Wales: import sqlite3 conn = sqlite3.connect("places.db") # Enable SpatiaLite extension conn.enable_load_extension(True) conn.load_extension("/usr/local/lib/mod_spatialite.dylib") # Initialize SpatiaLite metadata, then create the places table conn.execute("select InitSpatialMetadata(1)") conn.execute( "create table places (id integer primary key, name text);" ) # Add a MULTIPOLYGON Geometry column conn.execute( "SELECT AddGeometryColumn('places', 'geom', 4326, 'MULTIPOLYGON', 2);" ) # Add a spatial index against the new column conn.execute("SELECT CreateSpatialIndex('places', 'geom');") # Now populate the table from shapely.geometry.multipolygon import MultiPolygon from shapely.geometry import shape import requests geojson = requests.get( "https://data.whosonfirst.org/404/227/475/404227475.geojson" ).json() # Convert to "Well Known Text" format wkt = shape(geojson["geometry"]).wkt # Insert and commit the record conn.execute( "INSERT INTO places (id, name, geom) VALUES(null, ?, GeomFromText(?, 4326))", ("Wales", wkt), ) conn.commit() | 68 | |
24 | 24 | Importing shapefiles into SpatiaLite | The shapefile format is a common format for distributing geospatial data. You can use the spatialite command-line tool to create a new database table from a shapefile. Try it now with the North America shapefile available from the University of North Carolina Global River Database project. Download the file and unzip it (this will create files called narivs.dbf , narivs.prj , narivs.shp and narivs.shx in the current directory), then run the following: $ spatialite rivers-database.db SpatiaLite version ..: 4.3.0a Supported Extensions: ... spatialite> .loadshp narivs rivers CP1252 23032 ======== Loading shapefile at 'narivs' into SQLite table 'rivers' ... Inserted 467973 rows into 'rivers' from SHAPEFILE This will load the data from the narivs shapefile into a new database table called rivers . Exit out of spatialite (using Ctrl+D ) and run Datasette against your new database like this: datasette rivers-database.db \ --load-extension=/usr/local/lib/mod_spatialite.dylib If you browse to http://localhost:8001/rivers-database/rivers you will see the new table... but the Geometry column will contain unreadable binary data (SpatiaLite uses a custom format based on WKB ). The easiest way to turn this into semi-readable data is to use the SpatiaLite AsGeoJSON function. Try the following using the SQL query interface at http://localhost:8001/rivers-database : select *, AsGeoJSON(Geometry) from rivers limit 10; This will give you back an additional column of GeoJSON. You can copy and paste GeoJSON from this column into the debugging tool at geojson.io to visualize it on a map. To see a more interesting example, try ordering the records with the longest geometry first. Since there are 467,000 rows in the table you will first need to increase the SQL time limit imposed by Datasette: datasette rivers-database.db \ --load-extension=/us… | 68 | |
389 | 389 | Improved support for SpatiaLite | The SpatiaLite module for SQLite adds robust geospatial features to the database. Getting SpatiaLite working can be tricky, especially if you want to use the most recent alpha version (with support for K-nearest neighbor). Datasette now includes extensive documentation on SpatiaLite , and thanks to Ravi Kotecha our GitHub repo includes a Dockerfile that can build the latest SpatiaLite and configure it for use with Datasette. The datasette publish and datasette package commands now accept a new --spatialite argument which causes them to install and configure SpatiaLite as part of the container they deploy. | 68 | |
439 | 439 | Including an expiry time | ds_actor cookies can optionally include a signed expiry timestamp, after which the cookies will no longer be valid. Authentication plugins may choose to use this mechanism to limit the lifetime of the cookie. For example, if a plugin implements single-sign-on against another source it may decide to set short-lived cookies so that if the user is removed from the SSO system their existing Datasette cookies will stop working shortly afterwards. To include an expiry, add an "e" key to the cookie value containing a base62-encoded integer representing the timestamp when the cookie should expire. For example, here's how to set a cookie that expires after 24 hours: import time from datasette.utils import baseconv expires_at = int(time.time()) + (24 * 60 * 60) response = Response.redirect("/") response.set_cookie( "ds_actor", datasette.sign( { "a": {"id": "cleopaws"}, "e": baseconv.base62.encode(expires_at), }, "actor", ), ) The resulting cookie will encode data that looks something like this: { "a": { "id": "cleopaws" }, "e": "1jjSji" } | 68 | |
19 | 19 | Installation | 68 | ||
466 | 466 | Installation | If you just want to try Datasette out you don't need to install anything: see Try Datasette without installing anything using Glitch There are two main options for installing Datasette. You can install it directly on to your machine, or you can install it using Docker. If you want to start making contributions to the Datasette project by installing a copy that lets you directly modify the code, take a look at our guide to Setting up a development environment . Basic installation Datasette Desktop for Mac Using Homebrew Using pip Advanced installation options Using pipx Installing plugins using pipx Upgrading packages using pipx Using Docker Loading SpatiaLite Installing plugins A note about extensions | 68 | |
21 | 21 | Installing SpatiaLite on Linux | SpatiaLite is packaged for most Linux distributions. apt install spatialite-bin libsqlite3-mod-spatialite Depending on your distribution, you should be able to run Datasette something like this: datasette --load-extension=/usr/lib/x86_64-linux-gnu/mod_spatialite.so If you are unsure of the location of the module, try running locate mod_spatialite and see what comes back. | 68 | |
20 | 20 | Installing SpatiaLite on OS X | The easiest way to install SpatiaLite on OS X is to use Homebrew . brew update brew install spatialite-tools This will install the spatialite command-line tool and the mod_spatialite dynamic library. You can now run Datasette like so: datasette --load-extension=spatialite | 68 | |
82 | 82 | Installing plugins | If a plugin has been packaged for distribution using setuptools you can use the plugin by installing it alongside Datasette in the same virtual environment or Docker container. You can install plugins using the datasette install command: datasette install datasette-vega You can uninstall plugins with datasette uninstall : datasette uninstall datasette-vega You can upgrade plugins with datasette install --upgrade or datasette install -U : datasette install -U datasette-vega This command can also be used to upgrade Datasette itself to the latest released version: datasette install -U datasette These commands are thin wrappers around pip install and pip uninstall , which ensure they run pip in the same virtual environment as Datasette itself. | 68 | |
477 | 477 | Installing plugins | If you want to install plugins into your local Datasette Docker image you can do so using the following recipe. This will install the plugins and then save a brand new local image called datasette-with-plugins : docker run datasetteproject/datasette \ pip install datasette-vega docker commit $(docker ps -lq) datasette-with-plugins You can now run the new custom image like so: docker run -p 8001:8001 -v `pwd`:/mnt \ datasette-with-plugins \ datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db You can confirm that the plugins are installed by visiting http://127.0.0.1:8001/-/plugins Some plugins such as datasette-ripgrep may need additional system packages. You can install these by running apt-get install inside the container: docker run datasette-057a0 bash -c ' apt-get update && apt-get install ripgrep && pip install datasette-ripgrep' docker commit $(docker ps -lq) datasette-with-ripgrep | 68 | |
473 | 473 | Installing plugins using pipx | You can install additional datasette plugins with pipx inject like so: $ pipx inject datasette datasette-json-html injected package datasette-json-html into venv datasette done! ✨ 🌟 ✨ $ datasette plugins [ { "name": "datasette-json-html", "static": false, "templates": false, "version": "0.6" } ] | 68 | |
138 | 138 | Internals for plugins | Many Plugin hooks are passed objects that provide access to internal Datasette functionality. The interface to these objects should not be considered stable with the exception of methods that are documented here. | 68 | |
114 | 114 | Introspection | Datasette includes some pages and JSON API endpoints for introspecting the current instance. These can be used to understand some of the internals of Datasette and to see how a particular instance has been configured. Each of these pages can be viewed in your browser. Add .json to the URL to get back the contents as JSON. | 68 | |
123 | 123 | JSON API | Datasette provides a JSON API for your SQLite databases. Anything you can do through the Datasette user interface can also be accessed as JSON via the API. To access the API for a page, either click on the .json link on that page or edit the URL and add a .json extension to it. If you started Datasette with the --cors option, each JSON endpoint will be served with the following additional HTTP headers: Access-Control-Allow-Origin: * Access-Control-Allow-Headers: Authorization Access-Control-Expose-Headers: Link This means JavaScript running on any domain will be able to make cross-origin requests to fetch the data. If you start Datasette without the --cors option only JavaScript running on the same domain as Datasette will be able to access the API. | 68 | |
463 | 463 | JSON API for writable canned queries | Writable canned queries can also be accessed using a JSON API. You can POST data to them using JSON, and you can request that their response is returned to you as JSON. To submit JSON to a writable canned query, encode key/value parameters as a JSON document: POST /mydatabase/add_message {"message": "Message goes here"} You can also continue to submit data using regular form encoding, like so: POST /mydatabase/add_message message=Message+goes+here There are three options for specifying that you would like the response to your request to return JSON data, as opposed to an HTTP redirect to another page. Set an Accept: application/json header on your request Include ?_json=1 in the URL that you POST to Include "_json": 1 in your JSON body, or &_json=1 in your form encoded body The JSON response will look like this: { "ok": true, "message": "Query executed, 1 row affected", "redirect": "/data/add_name" } The "message" and "redirect" values here will take into account on_success_message , on_success_redirect , on_error_message and on_error_redirect , if they have been set. | 68 | |
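Submitting JSON to a writable canned query from Python might look like this (reusing the hypothetical add_message query from above, with a made-up localhost URL):

```python
import requests

resp = requests.post(
    "http://localhost:8001/mydatabase/add_message",
    json={"message": "Message goes here", "_json": 1},
)
# expected shape: {"ok": True, "message": "...", "redirect": "..."}
print(resp.json())
```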
290 | 290 | JavaScript modules | JavaScript modules were introduced in ECMAScript 2015 and provide native browser support for the import and export keywords. To use modules, JavaScript needs to be included in <script> tags with a type="module" attribute. Datasette now has the ability to output <script type="module"> in places where you may wish to take advantage of modules. The extra_js_urls option described in Custom CSS and JavaScript can now be used with modules, and module support is also available for the extra_body_script() plugin hook. ( #1186 , #1187 ) datasette-leaflet-freedraw is the first example of a Datasette plugin that takes advantage of the new support for JavaScript modules. See Drawing shapes on a map to query a SpatiaLite database for more on this plugin. | 68 | |
451 | 451 | Linking to binary downloads | The .blob output format is used to return binary data. It requires a _blob_column= query string argument specifying which BLOB column should be downloaded, for example: https://latest.datasette.io/fixtures/binary_data/1.blob?_blob_column=data This output format can also be used to return binary data from an arbitrary SQL query. Since such queries do not specify an exact row, an additional ?_blob_hash= parameter can be used to specify the SHA-256 hash of the value that is being linked to. Consider the query select data from binary_data - demonstrated here . That page links to the binary value downloads. Those links look like this: https://latest.datasette.io/fixtures.blob?sql=select+data+from+binary_data&_blob_column=data&_blob_hash=f3088978da8f9aea479ffc7f631370b968d2e855eeb172bea7f6c7a04262bb6d These .blob links are also returned in the .csv exports Datasette provides for binary tables and queries, since the CSV format does not have a mechanism for representing binary data. | 68 | |
476 | 476 | Loading SpatiaLite | The datasetteproject/datasette image includes a recent version of the SpatiaLite extension for SQLite. To load and enable that module, use the following command: docker run -p 8001:8001 -v `pwd`:/mnt \ datasetteproject/datasette \ datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db \ --load-extension=spatialite You can confirm that SpatiaLite is successfully loaded by visiting http://127.0.0.1:8001/-/versions | 68 | |
321 | 321 | Log out | The ds_actor cookie can be used by plugins (or by Datasette's --root mechanism ) to authenticate users. The new /-/logout page provides a way to clear that cookie. A "Log out" button now shows in the global navigation provided the user is authenticated using the ds_actor cookie. ( #840 ) | 68 | |
462 | 462 | Magic parameters | Named parameters that start with an underscore are special: they can be used to automatically add values created by Datasette that are not contained in the incoming form fields or query string. These magic parameters are only supported for canned queries: to avoid security issues (such as queries that extract the user's private cookies) they are not available to SQL that is executed by the user as a custom SQL query. Available magic parameters are: _actor_* - e.g. _actor_id , _actor_name Fields from the currently authenticated Actors . _header_* - e.g. _header_user_agent Header from the incoming HTTP request. The key should be in lower case and with hyphens converted to underscores e.g. _header_user_agent or _header_accept_language . _cookie_* - e.g. _cookie_lang The value of the incoming cookie of that name. _now_epoch The number of seconds since the Unix epoch. _now_date_utc The date in UTC, e.g. 2020-06-01 _now_datetime_utc The ISO 8601 datetime in UTC, e.g. 2020-06-24T18:01:07Z _random_chars_* - e.g. … | 68 | |
320 | 320 | Magic parameters for canned queries | Canned queries now support Magic parameters , which can be used to insert or select automatically generated values. For example: insert into logs (user_id, timestamp) values (:_actor_id, :_now_datetime_utc) This inserts the currently authenticated actor ID and the current datetime. ( #842 ) | 68 | |
23 | 23 | Making use of a spatial index | SpatiaLite spatial indexes are R*Trees. They allow you to run efficient bounding box queries using a sub-select, with a similar pattern to that used for Searches using custom SQL . In the above example, the resulting index will be called idx_museums_point_geom . This takes the form of a SQLite virtual table. You can inspect its contents using the following query: select * from idx_museums_point_geom limit 10; Here's a live example: timezones-api.datasette.io/timezones/idx_timezones_Geometry pkid xmin xmax ymin ymax 1 -8.601725578308105 -2.4930307865142822 4.162120819091797 10.74019718170166 2 … | 68 | |
372 | 372 | Medium changes | Datasette now conforms to the Black coding style ( #449 ) - and has a unit test to enforce this in the future New Special table arguments : ?columnname__in=value1,value2,value3 filter for executing SQL IN queries against a table, see Table arguments ( #433 ) ?columnname__date=yyyy-mm-dd filter which returns rows where the specified datetime column falls on the specified date ( 583b22a ) ?tags__arraycontains=tag filter which acts against a JSON array contained in a column ( 78e45ea ) ?_where=sql-fragment filter for the table view ( #429 ) ?_fts_table=mytable and ?_fts_pk=mycolumn query string options can be used to specify which FTS table to use for a search query - see Configuring full-text search for a table or view ( #428 ) You can now pass the same table filter multiple times - for example, ?content__not=world&content__not=hello will return all rows where the content column is neither hello nor world ( #288 ) … | 68 | |
225 | 225 | Metadata | Data loves metadata. Any time you run Datasette you can optionally include a JSON file with metadata about your databases and tables. Datasette will then display that information in the web UI. Run Datasette like this: datasette database1.db database2.db --metadata metadata.json Your metadata.json file can look something like this: { "title": "Custom title for your index page", "description": "Some description text can go here", "license": "ODbL", "license_url": "https://opendatacommons.org/licenses/odbl/", "source": "Original Data Source", "source_url": "http://example.com/" } You can optionally use YAML instead of JSON, see Using YAML for metadata . The above metadata will be displayed on the index page of your Datasette-powered site. The source and license information will also be included in the footer of every page served by Datasette. Any special HTML characters in description will be escaped. If you want to include HTML in your description, you can use a description_html property instead. | 68 | |
391 | 391 | Miscellaneous | Got JSON data in one of your columns? Use the new ?_json=COLNAME argument to tell Datasette to return that JSON value directly rather than encoding it as a string. If you just want an array of the first value of each row, use the new ?_shape=arrayfirst option - example . | 68 | |
289 | 289 | Named in-memory database support | As part of the work building the _internal database, Datasette now supports named in-memory databases that can be shared across multiple connections. This allows plugins to create in-memory databases which will persist data for the lifetime of the Datasette server process. ( #1151 ) The new memory_name= parameter to the Database class can be used to create named, shared in-memory databases. | 68 | |
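A sketch of how a plugin might use this - the database name, table and surrounding async context are hypothetical:

```python
from datasette.database import Database

async def create_stats_database(datasette):
    # memory_name= gives a shared, named in-memory database: every
    # connection using the same name sees the same data for the
    # lifetime of the Datasette server process
    db = Database(datasette, memory_name="statistics")
    datasette.add_database(db)
    await db.execute_write(
        "CREATE TABLE IF NOT EXISTS counts (key TEXT PRIMARY KEY, n INTEGER)",
        block=True,
    )
    return db
```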
454 | 454 | Named parameters | Datasette has special support for SQLite named parameters. Consider a SQL query like this: select * from Street_Tree_List where "PermitNotes" like :notes and "qSpecies" = :species If you execute this query using the custom query editor, Datasette will extract the two named parameters and use them to construct form fields for you to provide values. You can also provide values for these fields by constructing a URL: /mydatabase?sql=select...&species=44 SQLite string escaping rules will be applied to values passed using named parameters - they will be wrapped in quotes and their content will be correctly escaped. Values from named parameters are treated as SQLite strings. If you need to perform numeric comparisons on them you should cast them to an integer or float first using cast(:name as integer) or cast(:name as real) , for example: select * from Street_Tree_List where latitude > cast(:min_latitude as real) and latitude < cast(:max_latitude as real) Datasette disallows custom SQL queries containing the string PRAGMA (with a small number of exceptions ) as SQLite pragma statements can be used to change database settings at runtime. If you need to include the string "pragma" in a query you can do so safely using a named parameter. | 68 | |
387 | 387 | New configuration settings | Datasette's Settings now also supports boolean settings. A number of new configuration options have been added: num_sql_threads - the number of threads used to execute SQLite queries. Defaults to 3. allow_facet - enable or disable custom Facets using the _facet= parameter. Defaults to on. suggest_facets - should Datasette suggest facets? Defaults to on. allow_download - should users be allowed to download the entire SQLite database? Defaults to on. allow_sql - should users be allowed to execute custom SQL queries? Defaults to on. default_cache_ttl - Default HTTP caching max-age header in seconds. Defaults to 365 days - caching can be disabled entirely by setting this to 0. cache_size_kb - Set the amount of memory SQLite uses for its per-connection cache , in KB. allow_csv_stream - allow users to stream entire result sets as a single CSV file. Defaults to on. max_csv_mb - maximum size of a returned CSV file in MB. Defaults to 100MB, set to 0 to disable this limit. | 68 | |
281 | 281 | New features | If an error occurs while executing a user-provided SQL query, that query is now re-displayed in an editable form along with the error message. ( #619 ) New ?_col= and ?_nocol= parameters to show and hide columns in a table, plus an interface for hiding and showing columns in the column cog menu. ( #615 ) A new ?_facet_size= parameter for customizing the number of facet results returned on a table or view page. ( #1332 ) ?_facet_size=max sets that to the maximum, which defaults to 1,000 and is controlled by the max_returned_rows setting. If facet results are truncated the … at the bottom of the facet list now links to this parameter. ( #1337 ) ?_nofacet=1 option to disable all facet calculations on a page, used as a performance optimization for CSV exports and ?_shape=array/object . ( #1349 , #263 ) ?_nocount=1 option to disable full query result counts. ( #1353 ) ?_trace=1 debugging option is now controlled by the new trace_debug setting, which is turned off by default. ( #1359 ) | 68 | |
360 | 360 | New plugin hook: asgi_wrapper | The asgi_wrapper(datasette) plugin hook allows plugins to entirely wrap the Datasette ASGI application in their own ASGI middleware. ( #520 ) Two new plugins take advantage of this hook: datasette-auth-github adds an authentication layer: users will have to sign in using their GitHub account before they can view data or interact with Datasette. You can also use it to restrict access to specific GitHub users, or to members of specified GitHub organizations or teams . datasette-cors allows you to configure CORS headers for your Datasette instance. You can use this to enable JavaScript running on a whitelisted set of domains to make fetch() calls to the JSON API provided by your Datasette instance. | 68 | |
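A minimal sketch of the hook itself - middleware that logs each request path (the logging behaviour is invented for illustration):

```python
from datasette import hookimpl

@hookimpl
def asgi_wrapper(datasette):
    def wrap(app):
        async def logged_app(scope, receive, send):
            # peek at each incoming HTTP request before passing it through
            if scope["type"] == "http":
                print("Request path:", scope.get("path"))
            await app(scope, receive, send)
        return logged_app
    return wrap
```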
361 | 361 | New plugin hook: extra_template_vars | The extra_template_vars(template, database, table, columns, view_name, request, datasette) plugin hook allows plugins to inject their own additional variables into the Datasette template context. This can be used in conjunction with custom templates to customize the Datasette interface. datasette-auth-github uses this hook to add custom HTML to the new top navigation bar (which is designed to be modified by plugins, see #540 ). | 68 | |
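A sketch of the hook injecting a single extra variable (a server-side timestamp, invented for illustration) that templates could reference as {{ now }}:

```python
import datetime

from datasette import hookimpl

@hookimpl
def extra_template_vars(request):
    # implementations can accept any subset of the documented parameters
    return {"now": datetime.datetime.utcnow().isoformat()}
```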
323 | 323 | New plugin hooks | register_magic_parameters(datasette) can be used to define new types of magic canned query parameters. startup(datasette) can run custom code when Datasette first starts up. datasette-init is a new plugin that uses this hook to create database tables and views on startup if they have not yet been created. ( #834 ) canned_queries(datasette, database, actor) lets plugins provide additional canned queries beyond those defined in Datasette's metadata. See datasette-saved-queries for an example of this hook in action. ( #852 ) forbidden(datasette, request, message) is a hook for customizing how Datasette responds to 403 forbidden errors. ( #812 ) | 68 | |
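As an illustration of the startup(datasette) hook, here's a sketch in the spirit of datasette-init that creates a table on first launch (the table itself is hypothetical):

```python
from datasette import hookimpl

@hookimpl
def startup(datasette):
    # returning an async inner function lets the hook await database calls
    async def inner():
        db = datasette.get_database()
        await db.execute_write(
            "CREATE TABLE IF NOT EXISTS saved_notes ("
            "id INTEGER PRIMARY KEY, note TEXT)",
            block=True,
        )
    return inner
```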
302 | 302 | New visual design | Datasette is no longer white and grey with blue and purple links! Natalie Downe has been working on a visual refresh, the first iteration of which is included in this release. ( #1056 ) | 68 | |
79 | 79 | Nginx proxy configuration | Here is an example of an nginx configuration file that will proxy traffic to Datasette: daemon off; events { worker_connections 1024; } http { server { listen 80; location /my-datasette { proxy_pass http://127.0.0.1:8009/my-datasette; proxy_set_header Host $host; } } } You can also use the --uds option to Datasette to listen on a Unix domain socket instead of a port, configuring the nginx upstream proxy like this: daemon off; events { worker_connections 1024; } http { server { listen 80; location /my-datasette { proxy_pass http://datasette/my-datasette; proxy_set_header Host $host; } } upstream datasette { server unix:/tmp/datasette.sock; } } Then run Datasette with datasette --uds /tmp/datasette.sock path/to/database.db --setting base_url /my-datasette/ . | 68 | |
83 | 83 | One-off plugins using --plugins-dir | You can also define one-off per-project plugins by saving them as plugin_name.py functions in a plugins/ folder and then passing that folder to datasette using the --plugins-dir option: datasette mydb.db --plugins-dir=plugins/ | 68 | |
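For example, a one-off plugins/random_integer.py that registers a custom SQL function, mirroring the packaged plugin example elsewhere in these docs:

```python
# plugins/random_integer.py
import random

from datasette import hookimpl

@hookimpl
def prepare_connection(conn):
    # adds a random_integer(low, high) SQL function to every connection
    conn.create_function("random_integer", 2, random.randint)
```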
292 | 292 | Other changes | Datasette can now open multiple database files with the same name, e.g. if you run datasette path/to/one.db path/to/other/one.db . ( #509 ) datasette publish cloudrun now sets force_https_urls for every deployment, fixing some incorrect http:// links. ( #1178 ) Fixed a bug in the example nginx configuration in Running Datasette behind a proxy . ( #1091 ) The Datasette Ecosystem documentation page has been reduced in size in favour of the datasette.io tools and plugins directories. ( #1182 ) The request object now provides a request.full_path property, which returns the path including any query string. ( #1184 ) Better error message for disallowed PRAGMA clauses in SQL queries. ( #1185 ) datasette publish heroku now deploys using python-3.8.7 . New plugin testing documentation on Testing outbound HTTP calls with pytest-httpx . ( #1198 ) All ?_* query string parameters passed to the table page are now persisted in hidden form fields, so parameters such as ?_size=10 will be correctly passed to the next page when query filters are changed. ( #1194 ) Fixed a bug loading a database file called test-database (1).sqlite . ( #1181 ) | 68 | |
271 | 271 | Other small fixes | Made several performance improvements to the database schema introspection code that runs when Datasette first starts up. ( #1555 ) Label columns detected for foreign keys are now case-insensitive, so Name or TITLE will be detected in the same way as name or title . ( #1544 ) Upgraded Pluggy dependency to 1.0. ( #1575 ) Now using Plausible analytics for the Datasette documentation. explain query plan is now allowed with varying amounts of whitespace in the query. ( #1588 ) New CLI reference page showing the output of --help for each of the datasette sub-commands. This led to several small improvements to the help copy. ( #1594 ) Fixed bug where writable canned queries could not be used with custom templates. ( #1547 ) Improved fix for a bug where columns with an underscore prefix could result in unnecessary hidden form fields. ( #1527 ) | 68 | |
4 | 4 | Packaging a plugin | Plugins can be packaged using Python setuptools. You can see an example of a packaged plugin at https://github.com/simonw/datasette-plugin-demos The example consists of two files: a setup.py file that defines the plugin: from setuptools import setup VERSION = "0.1" setup( name="datasette-plugin-demos", description="Examples of plugins for Datasette", author="Simon Willison", url="https://github.com/simonw/datasette-plugin-demos", license="Apache License, Version 2.0", version=VERSION, py_modules=["datasette_plugin_demos"], entry_points={ "datasette": [ "plugin_demos = datasette_plugin_demos" ] }, install_requires=["datasette"], ) And a Python module file, datasette_plugin_demos.py , that implements the plugin: from datasette import hookimpl import random @hookimpl def prepare_jinja2_environment(env): env.filters["uppercase"] = lambda u: u.upper() @hookimpl def prepare_connection(conn): conn.create_function( "random_integer", 2, random.randint ) Having built a plugin in this way you can turn it into an installable package using the following command: python3 setup.py sdist This will create a .tar.gz file in the dist/ directory. You can then install your new plugin into a Datasette virtual environment or Docker container using pip : pip install datasette-plugin-demos-0.1.tar.gz To learn how to upload your plugin to PyPI for use by other people, read the PyPA guide to Packaging and distributing projects . | 68 | |
212 | 212 | Pages and API endpoints | The Datasette web application offers a number of different pages that can be accessed to explore the data in question, each of which is accompanied by an equivalent JSON API. | 68 | |
125 | 125 | Pagination | The default JSON representation includes a "next_url" key which can be used to access the next page of results. If that key is null or missing then it means you have reached the final page of results. Other representations include pagination information in the link HTTP header. That header will look something like this: link: <https://latest.datasette.io/fixtures/sortable.json?_next=d%2Cv>; rel="next" Here is an example Python function built using requests that returns a list of all of the paginated items from one of these API endpoints: import requests def paginate(url): items = [] while url: response = requests.get(url) try: url = response.links.get("next").get("url") except AttributeError: url = None items.extend(response.json()) return items | 68 | |
464 | 464 | Pagination | Datasette's default table pagination is designed to be extremely efficient. SQL OFFSET/LIMIT pagination can have a significant performance penalty once you get into multiple thousands of rows, as each page still requires the database to scan through every preceding row to find the correct offset. When paginating through tables, Datasette instead orders the rows in the table by their primary key and performs a WHERE clause against the last seen primary key for the previous page. For example: select rowid, * from Tree_List where rowid > 200 order by rowid limit 101 This represents page three for this particular table, with a page size of 100. Note that we request 101 items in the limit clause rather than 100. This allows us to detect if we are on the last page of the results: if the query returns less than 101 rows we know we have reached the end of the pagination set. Datasette will only return the first 100 rows - the 101st is used purely to detect if there should be another page. Since the where clause acts against the index on the primary key, the query is extremely fast even for records that are a long way into the overall pagination set. | 68 | |
207 | 207 | Path parameters for pages | You can define custom pages that match multiple paths by creating files with {variable} definitions in their filenames. For example, to capture any request to a URL matching /about/* , you would create a template in the following location: templates/pages/about/{slug}.html A hit to /about/news would render that template and pass in a variable called slug with a value of "news" . If you use this mechanism don't forget to return a 404 if the referenced content could not be found. You can do this using {{ raise_404() }} described below. Templates defined using custom page routes work particularly well with the sql() template function from datasette-template-sql or the graphql() template function from datasette-graphql . | 68 | |
226 | 226 | Per-database and per-table metadata | Metadata at the top level of the JSON will be shown on the index page and in the footer on every page of the site. The license and source are expected to apply to all of your data. You can also provide metadata at the per-database or per-table level, like this: { "databases": { "database1": { "source": "Alternative source", "source_url": "http://example.com/", "tables": { "example_table": { "description_html": "Custom <em>table</em> description", "license": "CC BY 3.0 US", "license_url": "https://creativecommons.org/licenses/by/3.0/us/" } } } } } Each of the top-level metadata fields can be used at the database and table level. | 68 | |
220 | 220 | Performance and caching | Datasette runs on top of SQLite, and SQLite has excellent performance. For small databases almost any query should return in just a few milliseconds, and larger databases (100s of MBs or even GBs of data) should perform extremely well provided your queries make sensible use of database indexes. That said, there are a number of tricks you can use to improve Datasette's performance. | 68 | |
327 | 327 | Permissions | Datasette also now has a built-in concept of Permissions . The permissions system answers the following question: Is this actor allowed to perform this action , optionally against this particular resource ? You can use the new "allow" block syntax in metadata.json (or metadata.yaml ) to set required permissions at the instance, database, table or canned query level. For example, to restrict access to the fixtures.db database to the "root" user: { "databases": { "fixtures": { "allow": { "id": "root" } } } } See Defining permissions with "allow" blocks for more details. Plugins can implement their own custom permission checks using the new permission_allowed(datasette, actor, action, resource) hook. A new debug page at /-/permissions shows recent permission checks, to help administrators and plugin authors understand exactly what checks are being performed. This tool defaults to only being available to the root user, but can be exposed to other users by plugins that respond to the permissions-debug permission. ( #788 ) | 68 | |
426 | 426 | Permissions | Datasette has an extensive permissions system built-in, which can be further extended and customized by plugins. The key question the permissions system answers is this: Is this actor allowed to perform this action , optionally against this particular resource ? Actors are described above . An action is a string describing the action the actor would like to perform. A full list is provided below - examples include view-table and execute-sql . A resource is the item the actor wishes to interact with - for example a specific database or table. Some actions, such as permissions-debug , are not associated with a particular resource. Datasette's built-in view permissions ( view-database , view-table etc) default to allow - unless you configure additional permission rules unauthenticated users will be allowed to access content. Permissions with potentially harmful effects should default to deny . Plugin authors should account for this when designing new plugins - for example, the datasette-upload-csvs plugin defaults to deny so that installations don't accidentally allow unauthenticated users to create new tables by uploading a CSV file. | 68 | |
133 | 133 | Play with a live demo | The best way to experience Datasette for the first time is with a demo: global-power-plants.datasettes.com provides a searchable database of power plants around the world, using data from the World Resources Institute rendered using the datasette-cluster-map plugin. fivethirtyeight.datasettes.com shows Datasette running against over 400 datasets imported from the FiveThirtyEight GitHub repository . | 68 | |
86 | 86 | Plugin configuration | Plugins can have their own configuration, embedded in a Metadata file. Configuration options for plugins live within a "plugins" key in that file, which can be included at the root, database or table level. Here is an example of some plugin configuration for a specific table: { "databases": { "sf-trees": { "tables": { "Street_Tree_List": { "plugins": { "datasette-cluster-map": { "latitude_column": "lat", "longitude_column": "lng" } } } } } } } This tells the datasette-cluster-map plugin which latitude and longitude columns should be used for a table called Street_Tree_List inside a database file called sf-trees.db . | 68 | |
27 | 27 | Plugin hooks | Datasette plugins use plugin hooks to customize Datasette's behavior. These hooks are powered by the pluggy plugin system. Each plugin can implement one or more hooks using the @hookimpl decorator against a function named that matches one of the hooks documented on this page. When you implement a plugin hook you can accept any or all of the parameters that are documented as being passed to that hook. For example, you can implement the render_cell plugin hook like this even though the full documented hook signature is render_cell(row, value, column, table, database, datasette) : @hookimpl def render_cell(value, column): if column == "stars": return "*" * int(value) List of plugin hooks prepare_connection(conn, database, datasette) prepare_jinja2_environment(env, datasette) extra_template_vars(template, database, table, columns, view_name, request, datasette) extra_css_urls(template, database, table, columns, view_name, request, datasette) extra_js_urls(template, database, table, columns, view_name, request, datasette) extra_body_script(template, database, table, columns, view_name, request, datasette) publish_subcommand(publish) render_cell(row, value, column, table, database, datasette) register_output_renderer(datasette) register_routes(datasette) register_commands(cli) … | 68 | |
261 | 261 | Plugin hooks | New plugin hook: handle_exception() , for custom handling of exceptions caught by Datasette. ( #1770 ) The render_cell() plugin hook is now also passed a row argument, representing the sqlite3.Row object that is being rendered. ( #1300 ) The configuration directory is now stored in datasette.config_dir , making it available to plugins. Thanks, Chris Amico. ( #1766 ) | 68 | |
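A minimal sketch of a handle_exception() implementation - assuming the hook receives datasette, request and exception arguments and may return a Response to replace the default error page:

    from datasette import hookimpl
    from datasette.utils.asgi import Response

    @hookimpl
    def handle_exception(datasette, request, exception):
        # Returning None falls through to Datasette's default handling
        if isinstance(exception, ZeroDivisionError):
            return Response.html(
                "<h1>A division by zero occurred</h1>", status=500
            )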
257 | 257 | Plugin hooks and internals | The prepare_jinja2_environment(env, datasette) plugin hook now accepts an optional datasette argument. Hook implementations can also now return an async function which will be awaited automatically. ( #1809 ) Database(is_mutable=) now defaults to True . ( #1808 ) The datasette.check_visibility() method now accepts an optional permissions= list, allowing it to take multiple permissions into account at once when deciding if something should be shown as public or private. This has been used to correctly display padlock icons in more places in the Datasette interface. ( #1829 ) Datasette no longer enforces upper bounds on its dependencies. ( #1800 ) | 68 | |
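A sketch of the async form of that hook - assuming get_database() and single_value() behave as documented elsewhere on this page:

    from datasette import hookimpl

    @hookimpl
    def prepare_jinja2_environment(env, datasette):
        # The returned async function is awaited by Datasette, so it can
        # run database queries while configuring the environment
        async def inner():
            db = datasette.get_database()
            count = (
                await db.execute("select count(*) from sqlite_master")
            ).single_value()
            env.globals["schema_object_count"] = count

        return inner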
81 | 81 | Plugins | Datasette's plugin system allows additional features to be implemented as Python code (or front-end JavaScript) which can be wrapped up in a separate Python package. The underlying mechanism uses pluggy . See the Datasette plugins directory for a list of existing plugins, or take a look at the datasette-plugin topic on GitHub. Things you can do with plugins include: Add visualizations to Datasette, for example datasette-cluster-map and datasette-vega . Make new custom SQL functions available for use within Datasette, for example datasette-haversine and datasette-jellyfish . Define custom output formats with custom extensions, for example datasette-atom and datasette-ics . Add template functions that can be called within your Jinja custom templates, for example datasette-render-markdown . Customize how database values are rendered in the Datasette interface, for example datasette-render-binary and datasette-pretty-json . Customize how Datasette's authentication and permissions systems work, for example datasette-auth-tokens and datasette-permissions-sql . | 68 | |
269 | 269 | Plugins and internals | New plugin hook: filters_from_request(request, database, table, datasette) , which runs on the table page and can be used to support new custom query string parameters that modify the SQL query. ( #473 ) Added two additional methods for writing to the database: await db.execute_write_script(sql, block=True) and await db.execute_write_many(sql, params_seq, block=True) . ( #1570 ) The db.execute_write() internal method now defaults to blocking until the write operation has completed. Previously it defaulted to queuing the write and then continuing to run code while the write was in the queue. ( #1579 ) Database write connections now execute the prepare_connection(conn, database, datasette) plugin hook. ( #1564 ) The Datasette() constructor no longer requires the files= argument, and is now documented at Datasette class . ( #1563 ) The tracing feature now traces write queries, not just read queries. ( #1568 ) The query string variables exposed by request.args will now include blank strings for arguments such as foo in ?foo=&bar=1 rather than ignoring those parameters entirely. ( #1551 ) | 68 | |
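A minimal sketch of the two new write methods - the table and values are invented for illustration, and db is assumed to be a Database object obtained via datasette.get_database():

    # execute_write_script() runs multiple statements in a single call
    await db.execute_write_script(
        """
        create table if not exists notes (id integer primary key, note text);
        create index if not exists idx_notes_note on notes(note);
        """
    )
    # execute_write_many() applies one statement to a sequence of parameters
    await db.execute_write_many(
        "insert into notes (note) values (?)",
        [["first"], ["second"], ["third"]],
    )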
303 | 303 | Plugins can now add links within Datasette | A number of existing Datasette plugins add new pages to the Datasette interface, providing tools for things like uploading CSVs , editing table schemas or configuring full-text search . Plugins like this can now link to themselves from other parts of the Datasette interface. The menu_links(datasette, actor, request) hook ( #1064 ) lets plugins add links to Datasette's new top-right application menu, and the table_actions(datasette, actor, database, table, request) hook ( #1066 ) adds links to a new "table actions" menu on the table page. The demo at latest.datasette.io now includes some example plugins. To see the new table actions menu first sign into that demo as root and then visit the facetable table to see the new cog icon menu at the top of the page. | 68 | |
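A sketch of a menu_links() implementation - the link target is invented for illustration:

    from datasette import hookimpl

    @hookimpl
    def menu_links(datasette, actor, request):
        # Only show the link to the root actor
        if actor and actor.get("id") == "root":
            return [
                {
                    "href": datasette.urls.path("/-/upload-csvs"),
                    "label": "Upload CSVs",
                }
            ]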
192 | 192 | Prettier | To install Prettier, install Node.js and then run the following in the root of your datasette repository checkout: $ npm install This will install Prettier in a node_modules directory. You can then check that your code matches the coding style like so: $ npm run prettier -- --check > prettier > prettier 'datasette/static/*[!.min].js' "--check" Checking formatting... [warn] datasette/static/plugins.js [warn] Code style issues found in the above file(s). Forgot to run Prettier? You can fix any problems by running: $ npm run fix | 68 | |
236 | 236 | Publishing data | Datasette includes tools for publishing and deploying your data to the internet. The datasette publish command will deploy a new Datasette instance containing your databases directly to a Heroku or Google Cloud hosting account. You can also use datasette package to create a Docker image that bundles your databases together with the datasette application that is used to serve them. | 68 | |
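For example - both commands are sketches, with invented file, service and tag names:

    # Deploy straight to a hosting provider:
    datasette publish cloudrun mydatabase.db --service=my-database
    # Or bundle the database and application into a Docker image:
    datasette package mydatabase.db --tag myorg/my-database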
204 | 204 | Publishing static assets | The datasette publish command can be used to publish your static assets, using the same syntax as above: $ datasette publish cloudrun mydb.db --static assets:static-files/ This will upload the contents of the static-files/ directory as part of the deployment, and configure Datasette to correctly serve the assets from /assets/ . | 68 | |
241 | 241 | Publishing to Fly | Fly is a competitively priced Docker-compatible hosting platform that supports running applications in globally distributed data centers close to your end users. You can deploy Datasette instances to Fly using the datasette-publish-fly plugin. pip install datasette-publish-fly datasette publish fly mydatabase.db --app="my-app" Consult the datasette-publish-fly README for more details. | 68 | |
238 | 238 | Publishing to Google Cloud Run | Google Cloud Run allows you to publish data in a scale-to-zero environment, so your application will start running when the first request is received and will shut down again when traffic ceases. This means you only pay for time spent serving traffic. Cloud Run is a great option for inexpensively hosting small, low-traffic projects - but costs can add up for projects that serve a lot of requests. Be particularly careful if your project has tables with large numbers of rows. Search engine crawlers that index a page for every row could result in a high bill. The datasette-block-robots plugin can be used to request search engine crawlers omit crawling your site, which can help avoid this issue. You will first need to install and configure the Google Cloud CLI tools by following these instructions . You can then publish one or more SQLite database files to Google Cloud Run using the following command: datasette publish cloudrun mydatabase.db --service=my-database A Cloud Run service is a single hosted application. The service name you specify will be used as part of the Cloud Run URL. If you deploy to a service name that you have used in the past, your new deployment will replace the previous one. If you omit the --service option you will be asked to pick a service name interactively during the deploy. You may need to interact with prompts from the tool. Many of the prompts ask for values that can be set as properties for the Google Cloud SDK if you want to avoid the prompts. For example, the default region for the deployed instance can be set using the command: gcloud config set run/region us-central1 You should replace us-central1 with your desired region . Alternatively, you can specify the region by setting the CLOUDSDK_RUN_REGION environment… | 68 | |
239 | 239 | Publishing to Heroku | To publish your data using Heroku , first create an account there and install and configure the Heroku CLI tool . You can publish one or more databases to Heroku using the following command: datasette publish heroku mydatabase.db This will output some details about the new deployment, including a URL like this one: https://limitless-reef-88278.herokuapp.com/ deployed to Heroku You can specify a custom app name by passing -n my-app-name to the publish command. This will also allow you to overwrite an existing app. Rather than deploying directly you can use the --generate-dir option to output the files that would be deployed to a directory: datasette publish heroku mydatabase.db --generate-dir=/tmp/deploy-this-to-heroku See datasette publish heroku for the full list of options for this command. | 68 | |
240 | 240 | Publishing to Vercel | Vercel - previously known as Zeit Now - provides a layer over AWS Lambda to allow for quick, scale-to-zero deployment. You can deploy Datasette instances to Vercel using the datasette-publish-vercel plugin. pip install datasette-publish-vercel datasette publish vercel mydatabase.db --project my-database-project Not every feature is supported: consult the datasette-publish-vercel README for more details. | 68 | |
26 | 26 | Querying polygons using within() | The within() SQL function can be used to check if a point is within a geometry: select name from places where within(GeomFromText('POINT(-3.1724366 51.4704448)'), places.geom); The GeomFromText() function takes a string of well-known text. Note that the order used here is longitude then latitude . To run that same within() query in a way that benefits from the spatial index, use the following: select name from places where within(GeomFromText('POINT(-3.1724366 51.4704448)'), places.geom) and rowid in ( SELECT pkid FROM idx_places_geom where xmin < -3.1724366 and xmax > -3.1724366 and ymin < 51.4704448 and ymax > 51.4704448 ); | 68 | |
59 | 59 | Registering a plugin for the duration of a test | When writing tests for plugins you may find it useful to register a test plugin just for the duration of a single test. You can do this using pm.register() and pm.unregister() like this: from datasette import hookimpl from datasette.app import Datasette from datasette.plugins import pm import pytest @pytest.mark.asyncio async def test_using_test_plugin(): class TestPlugin: __name__ = "TestPlugin" # Use hookimpl and method names to register hooks @hookimpl def register_routes(self): return [ (r"^/error$", lambda: 1 / 0), ] pm.register(TestPlugin(), name="undo") try: # The test implementation goes here datasette = Datasette() response = await datasette.client.get("/error") assert response.status_code == 500 finally: pm.unregister(name="undo") | 68 | |
196 | 196 | Release process | Datasette releases are performed using tags. When a new release is published on GitHub, a GitHub Action workflow will perform the following: Run the unit tests against all supported Python versions. If the tests pass... Build a Docker image of the release and push a tag to https://hub.docker.com/r/datasetteproject/datasette Re-point the "latest" tag on Docker Hub to the new image Build a wheel bundle of the underlying Python source code Push that new wheel up to PyPI: https://pypi.org/project/datasette/ To deploy new releases you will need to have push access to the main Datasette GitHub repository. Datasette follows Semantic Versioning : major.minor.patch We increment major for backwards-incompatible releases. Datasette is currently pre-1.0 so the major version is always 0 . We increment minor for new features. We increment patch for bugfix releases. Alpha and beta releases may have an additional a0 or b0 suffix - the integer component will be incremented with each subsequent alpha or beta. To release a new version, first create a commit that updates the version number in datasette/version.py and the changelog with highlights of the new version. An example commit can be seen here : # Update changelog git commit -m " Release 0.51a1 Refs #1056, #1039, #998, #1045, #1033, #1036, #1034, #976, #1057, #1058, #1053, #1064, #1066" -a git push Referencing the issues that are part of the release in the commit message ensures the name of the release shows up on those issue pages, e.g. here . You can generate the list of issue referen… | 68 | |
198 | 198 | Releasing bug fixes from a branch | If it's necessary to publish a bug fix release without shipping new features that have landed on main, a release branch can be used. Create it from the relevant last tagged release like so: git branch 0.52.x 0.52.4 git checkout 0.52.x Next cherry-pick the commits containing the bug fixes: git cherry-pick COMMIT Write the release notes in the branch, and update the version number in version.py . Then push the branch: git push -u origin 0.52.x Once the tests have completed, publish the release from that branch, using it as the target in the GitHub Draft a new release form. Finally, cherry-pick the commit with the release notes and version number bump across to main : git checkout main git cherry-pick COMMIT git push | 68 | |
139 | 139 | Request object | The request object is passed to various plugin hooks. It represents an incoming HTTP request. It has the following properties: .scope - dictionary The ASGI scope that was used to construct this request, described in the ASGI HTTP connection scope specification. .method - string The HTTP method for this request, usually GET or POST . .url - string The full URL for this request, e.g. https://latest.datasette.io/fixtures . .scheme - string The request scheme - usually https or http . .headers - dictionary (str -> str) A dictionary of incoming HTTP request headers. Header names have been converted to lowercase. .cookies - dictionary (str -> str) A dictionary of incoming cookies .host - string The host header from the incoming request, e.g. latest.datasette.io or localhost . .path - string The path of the request excluding the query string, e.g. /fixtures . .full_path - string The path of the… | 68 | |
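As a sketch of reading these properties, here is a hypothetical route registered with register_routes() (the path is invented) that echoes some of them back:

    from datasette import hookimpl
    from datasette.utils.asgi import Response

    @hookimpl
    def register_routes():
        return [(r"^/-/request-info$", request_info)]

    async def request_info(request):
        # Each property listed above is available on the request object
        return Response.json(
            {
                "method": request.method,
                "path": request.path,
                "host": request.host,
                "headers": dict(request.headers),
            }
        )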
141 | 141 | Response class | The Response class can be returned from view functions that have been registered using the register_routes(datasette) hook. The Response() constructor takes the following arguments: body - string The body of the response. status - integer (optional) The HTTP status - defaults to 200. headers - dictionary (optional) A dictionary of extra HTTP headers, e.g. {"x-hello": "world"} . content_type - string (optional) The content-type for the response. Defaults to text/plain . For example: from datasette.utils.asgi import Response response = Response( "<xml>This is XML</xml>", content_type="application/xml; charset=utf-8", ) The quickest way to create responses is using the Response.text(...) , Response.html(...) , Response.json(...) or Response.redirect(...) helper methods: from datasette.utils.asgi import Response html_response = Response.html("This is HTML") json_response = Response.json({"this_is": "json"}) text_response = Response.text( "This will become utf-8 encoded text" ) # Redirects are served as 302, unless you pass status=301: redirect_response = Response.redirect( "https://latest.datasette.io/" ) Each of these responses will use the correct corresponding content-type - text/html; charset=utf-8 , application/json; charset=utf-8 or text/plain; charset=utf-8 respectively. Each of the helper methods takes optional status= and headers= argument… | 68 | |
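A short sketch of passing those optional arguments to the helpers:

    from datasette.utils.asgi import Response

    # status= and headers= work on the helpers as well as the constructor
    not_found = Response.html("<h1>Not found</h1>", status=404)
    with_header = Response.text("hello", headers={"x-hello": "world"})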
166 | 166 | Results | The db.execute() method returns a single Results object. This can be used to access the rows returned by the query. Iterating over a Results object will yield SQLite Row objects . Each of these can be treated as a tuple or can be accessed using row["column"] syntax: info = [] results = await db.execute("select name from sqlite_master") for row in results: info.append(row["name"]) The Results object also has the following properties and methods: .truncated - boolean Indicates if this query was truncated - if it returned more results than the specified page_size . If this is true then the results object will only provide access to the first page_size rows in the query result. You can disable truncation by passing truncate=False to the db.execute() method. .columns - list of strings A list of column names returned by the query. .rows - list of sqlite3.Row This property provides direct access to the list of rows returned by the database. You can access specific rows by index using results.rows[0] . .first() - row or None Returns the first row in the results, or None if no rows were returned. .single_value() Returns the value of the first column of the first row of results - but only if the query returned a single row with… | 68 | |
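A short sketch combining those methods - assuming db is a Database object, as in the example above:

    results = await db.execute("select name from sqlite_master")
    first_row = results.first()  # None if the query returned no rows
    table_count = (
        await db.execute("select count(*) from sqlite_master")
    ).single_value()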
209 | 209 | Returning 404s | To indicate that content could not be found and display the default 404 page you can use the raise_404(message) function: {% if not rows %} {{ raise_404("Content not found") }} {% endif %} If you call raise_404() the other content in your template will be ignored. | 68 | |
142 | 142 | Returning a response with .asgi_send(send) | In most cases you will return Response objects from your own view functions. You can also use a Response instance to respond at a lower level via ASGI, for example if you are writing code that uses the asgi_wrapper(datasette) hook. Create a Response object and then use await response.asgi_send(send) , passing the ASGI send function. For example: from datasette.utils.asgi import Response async def require_authorization(scope, receive, send): response = Response.text( "401 Authorization Required", headers={ "www-authenticate": 'Basic realm="Datasette", charset="UTF-8"' }, status=401, ) await response.asgi_send(send) | 68 | |
216 | 216 | Row | Every row in every Datasette table has its own URL. This means individual records can be linked to directly. Table cells with extremely long text contents are truncated on the table view according to the truncate_cells_html setting. If a cell has been truncated the full length version of that cell will be available on the row page. Rows which are the targets of foreign key references from other tables will show a link to a filtered search for all records that reference that row. Here's an example from the Registers of Members Interests database: ../people/uk.org.publicwhip%2Fperson%2F10001 Note that this URL includes the encoded primary key of the record. Here's that same page as JSON: ../people/uk.org.publicwhip%2Fperson%2F10001.json | 68 |