{"rowid": 478, "title": "A note about extensions", "content": "SQLite supports extensions, such as SpatiaLite for geospatial operations. \n These can be loaded using the --load-extension argument, like so: \n datasette --load-extension=/usr/local/lib/mod_spatialite.dylib \n Some Python installations do not include support for SQLite extensions. If this is the case you will see the following error when you attempt to load an extension: \n \n Your Python installation does not have the ability to load SQLite extensions. \n \n In some cases you may see the following error message instead: \n AttributeError: 'sqlite3.Connection' object has no attribute 'enable_load_extension' \n On macOS the easiest fix for this is to install Datasette using Homebrew: \n brew install datasette \n Use which datasette to confirm that datasette will run that version. The output should look something like this: \n /usr/local/opt/datasette/bin/datasette \n If you get a different location here such as /Library/Frameworks/Python.framework/Versions/3.10/bin/datasette you can run the following command to cause datasette to execute the Homebrew version instead: \n alias datasette=$(echo $(brew --prefix datasette)/bin/datasette) \n You can undo this operation using: \n unalias datasette \n If you need to run SQLite with extension support for other Python code, you can do so by install Python itself using Homebrew: \n brew install python \n Then executing Python using: \n /usr/local/opt/python@3/libexec/bin/python \n A more convenient way to work with this version of Python may be to use it to create a virtual environment: \n /usr/local/opt/python@3/libexec/bin/python -m venv datasette-venv \n Then activate it like this: \n source datasette-venv/bin/activate \n Now running python and pip will work against a version of Python 3 that includes support for SQLite extensions: \n pip install datasette\nwhich datasette\ndatasette --version", "sections_fts": 70, "rank": null} {"rowid": 477, "title": "Installing plugins", "content": "If you want to install plugins into your local Datasette Docker image you can do\n so using the following recipe. This will install the plugins and then save a\n brand new local image called datasette-with-plugins : \n docker run datasetteproject/datasette \\\n pip install datasette-vega\n\ndocker commit $(docker ps -lq) datasette-with-plugins \n You can now run the new custom image like so: \n docker run -p 8001:8001 -v `pwd`:/mnt \\\n datasette-with-plugins \\\n datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db \n You can confirm that the plugins are installed by visiting\n http://127.0.0.1:8001/-/plugins \n Some plugins such as datasette-ripgrep may need additional system packages. You can install these by running apt-get install inside the container: \n docker run datasette-057a0 bash -c '\n apt-get update &&\n apt-get install ripgrep &&\n pip install datasette-ripgrep'\n\ndocker commit $(docker ps -lq) datasette-with-ripgrep", "sections_fts": 70, "rank": null} {"rowid": 476, "title": "Loading SpatiaLite", "content": "The datasetteproject/datasette image includes a recent version of the\n SpatiaLite extension for SQLite. 
To load and enable that\n module, use the following command: \n docker run -p 8001:8001 -v `pwd`:/mnt \\\n datasetteproject/datasette \\\n datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db \\\n --load-extension=spatialite \n You can confirm that SpatiaLite is successfully loaded by visiting\n http://127.0.0.1:8001/-/versions", "sections_fts": 70, "rank": null} {"rowid": 475, "title": "Using Docker", "content": "A Docker image containing the latest release of Datasette is published to Docker\n Hub here: https://hub.docker.com/r/datasetteproject/datasette/ \n If you have Docker installed (for example with Docker for Mac on OS X) you can download and run this\n image like so: \n docker run -p 8001:8001 -v `pwd`:/mnt \\\n datasetteproject/datasette \\\n datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db \n This will start an instance of Datasette running on your machine's port 8001,\n serving the fixtures.db file in your current directory. \n Now visit http://127.0.0.1:8001/ to access Datasette. \n (You can download a copy of fixtures.db from\n https://latest.datasette.io/fixtures.db ) \n To upgrade to the most recent release of Datasette, run the following: \n docker pull datasetteproject/datasette", "sections_fts": 70, "rank": null} {"rowid": 474, "title": "Upgrading packages using pipx", "content": "You can upgrade your pipx installation to the latest release of Datasette using pipx upgrade datasette : \n $ pipx upgrade datasette\nupgraded package datasette from 0.39 to 0.40 (location: /Users/simon/.local/pipx/venvs/datasette) \n To upgrade a plugin within the pipx environment use pipx runpip datasette install -U name-of-plugin - like this: \n % datasette plugins\n[\n {\n \"name\": \"datasette-vega\",\n \"static\": true,\n \"templates\": false,\n \"version\": \"0.6\"\n }\n]\n\n$ pipx runpip datasette install -U datasette-vega\nCollecting datasette-vega\nDownloading datasette_vega-0.6.2-py3-none-any.whl (1.8 MB)\n |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1.8 MB 2.0 MB/s\n...\nInstalling collected packages: datasette-vega\nAttempting uninstall: datasette-vega\n Found existing installation: datasette-vega 0.6\n Uninstalling datasette-vega-0.6:\n Successfully uninstalled datasette-vega-0.6\nSuccessfully installed datasette-vega-0.6.2\n\n$ datasette plugins\n[\n {\n \"name\": \"datasette-vega\",\n \"static\": true,\n \"templates\": false,\n \"version\": \"0.6.2\"\n }\n]", "sections_fts": 70, "rank": null} {"rowid": 473, "title": "Installing plugins using pipx", "content": "You can install additional datasette plugins with pipx inject like so: \n $ pipx inject datasette datasette-json-html\ninjected package datasette-json-html into venv datasette\ndone! \u2728 \ud83c\udf1f \u2728\n\n$ datasette plugins\n[\n {\n \"name\": \"datasette-json-html\",\n \"static\": false,\n \"templates\": false,\n \"version\": \"0.6\"\n }\n]", "sections_fts": 70, "rank": null} {"rowid": 472, "title": "Using pipx", "content": "pipx is a tool for installing Python software with all of its dependencies in an isolated environment, to ensure that they will not conflict with any other installed Python software. 
\n If you use Homebrew on macOS you can install pipx like this: \n brew install pipx\npipx ensurepath \n Without Homebrew you can install it like so: \n python3 -m pip install --user pipx\npython3 -m pipx ensurepath \n The pipx ensurepath command configures your shell to ensure it can find commands that have been installed by pipx - generally by making sure ~/.local/bin has been added to your PATH . \n Once pipx is installed you can use it to install Datasette like this: \n pipx install datasette \n Then run datasette --version to confirm that it has been successfully installed.", "sections_fts": 70, "rank": null} {"rowid": 471, "title": "Advanced installation options", "content": "", "sections_fts": 70, "rank": null} {"rowid": 470, "title": "Using pip", "content": "Datasette requires Python 3.7 or higher. The Python.org Python For Beginners page has instructions for getting started. \n You can install Datasette and its dependencies using pip : \n pip install datasette \n You can now run Datasette like so: \n datasette", "sections_fts": 70, "rank": null} {"rowid": 469, "title": "Using Homebrew", "content": "If you have a Mac and use Homebrew , you can install Datasette by running this command in your terminal: \n brew install datasette \n This should install the latest version. You can confirm by running: \n datasette --version \n You can upgrade to the latest Homebrew packaged version using: \n brew upgrade datasette \n Once you have installed Datasette you can install plugins using the following: \n datasette install datasette-vega \n If the latest packaged release of Datasette has not yet been made available through Homebrew, you can upgrade your Homebrew installation in-place using: \n datasette install -U datasette", "sections_fts": 70, "rank": null} {"rowid": 468, "title": "Datasette Desktop for Mac", "content": "Datasette Desktop is a packaged Mac application which bundles Datasette together with Python and allows you to install and run Datasette directly on your laptop. This is the best option for local installation if you are not comfortable using the command line.", "sections_fts": 70, "rank": null} {"rowid": 467, "title": "Basic installation", "content": "", "sections_fts": 70, "rank": null} {"rowid": 466, "title": "Installation", "content": "If you just want to try Datasette out you don't need to install anything: see Try Datasette without installing anything using Glitch \n \n There are two main options for installing Datasette. You can install it directly on to your machine, or you can install it using Docker. \n If you want to start making contributions to the Datasette project by installing a copy that lets you directly modify the code, take a look at our guide to Setting up a development environment . \n \n \n \n Basic installation \n \n \n Datasette Desktop for Mac \n \n \n Using Homebrew \n \n \n Using pip \n \n \n \n \n Advanced installation options \n \n \n Using pipx \n \n \n Installing plugins using pipx \n \n \n Upgrading packages using pipx \n \n \n \n \n Using Docker \n \n \n Loading SpatiaLite \n \n \n Installing plugins \n \n \n \n \n \n \n A note about extensions", "sections_fts": 70, "rank": null} {"rowid": 465, "title": "Cross-database queries", "content": "SQLite has the ability to run queries that join across multiple databases. Up to ten databases can be attached to a single SQLite connection and queried together. 
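Under the hood this relies on SQLite's ATTACH DATABASE mechanism. Here is a minimal sketch of the same idea using Python's sqlite3 module directly (the file names are illustrative) - this is also roughly what Datasette's --crossdb option, described next, does with its in-memory /_memory database:

    import sqlite3

    # Open an in-memory database, then attach the files to query across
    conn = sqlite3.connect(":memory:")
    conn.execute("ATTACH DATABASE 'fixtures.db' AS fixtures")
    conn.execute("ATTACH DATABASE 'extra_database.db' AS extra_database")

    # Tables in attached databases are referenced as database-name.table-name
    sql = (
        "select 'fixtures' as database, name from fixtures.sqlite_master "
        "union "
        "select 'extra_database' as database, name from extra_database.sqlite_master"
    )
    for row in conn.execute(sql):
        print(row)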
\n Datasette can execute joins across multiple databases if it is started with the --crossdb option: \n datasette fixtures.db extra_database.db --crossdb \n If it is started in this way, the /_memory page can be used to execute queries that join across multiple databases. \n References to tables in attached databases should be preceded by the database name and a period. \n For example, this query will show a list of tables across both of the above databases: \n select\n 'fixtures' as database, *\nfrom\n [fixtures].sqlite_master\nunion\nselect\n 'extra_database' as database, *\nfrom\n [extra_database].sqlite_master \n Try that out here .", "sections_fts": 70, "rank": null} {"rowid": 464, "title": "Pagination", "content": "Datasette's default table pagination is designed to be extremely efficient. SQL OFFSET/LIMIT pagination can have a significant performance penalty once you get into multiple thousands of rows, as each page still requires the database to scan through every preceding row to find the correct offset. \n When paginating through tables, Datasette instead orders the rows in the table by their primary key and performs a WHERE clause against the last seen primary key for the previous page. For example: \n select rowid, * from Tree_List where rowid > 200 order by rowid limit 101 \n This represents page three for this particular table, with a page size of 100. \n Note that we request 101 items in the limit clause rather than 100. This allows us to detect if we are on the last page of the results: if the query returns less than 101 rows we know we have reached the end of the pagination set. Datasette will only return the first 100 rows - the 101st is used purely to detect if there should be another page. \n Since the where clause acts against the index on the primary key, the query is extremely fast even for records that are a long way into the overall pagination set.", "sections_fts": 70, "rank": null} {"rowid": 463, "title": "JSON API for writable canned queries", "content": "Writable canned queries can also be accessed using a JSON API. You can POST data to them using JSON, and you can request that their response is returned to you as JSON. \n To submit JSON to a writable canned query, encode key/value parameters as a JSON document: \n POST /mydatabase/add_message\n\n{\"message\": \"Message goes here\"} \n You can also continue to submit data using regular form encoding, like so: \n POST /mydatabase/add_message\n\nmessage=Message+goes+here \n There are three options for specifying that you would like the response to your request to return JSON data, as opposed to an HTTP redirect to another page. \n \n \n Set an Accept: application/json header on your request \n \n \n Include ?_json=1 in the URL that you POST to \n \n \n Include \"_json\": 1 in your JSON body, or &_json=1 in your form encoded body \n \n \n The JSON response will look like this: \n {\n \"ok\": true,\n \"message\": \"Query executed, 1 row affected\",\n \"redirect\": \"/data/add_name\"\n} \n The \"message\" and \"redirect\" values here will take into account on_success_message , on_success_redirect , on_error_message and on_error_redirect , if they have been set.", "sections_fts": 70, "rank": null} {"rowid": 462, "title": "Magic parameters", "content": "Named parameters that start with an underscore are special: they can be used to automatically add values created by Datasette that are not contained in the incoming form fields or query string. 
\n These magic parameters are only supported for canned queries: to avoid security issues (such as queries that extract the user's private cookies) they are not available to SQL that is executed by the user as a custom SQL query. \n Available magic parameters are: \n \n \n _actor_* - e.g. _actor_id , _actor_name \n \n Fields from the currently authenticated actor - see Actors . \n \n \n \n _header_* - e.g. _header_user_agent \n \n Header from the incoming HTTP request. The key should be in lower case and with hyphens converted to underscores e.g. _header_user_agent or _header_accept_language . \n \n \n \n _cookie_* - e.g. _cookie_lang \n \n The value of the incoming cookie of that name. \n \n \n \n _now_epoch \n \n The number of seconds since the Unix epoch. \n \n \n \n _now_date_utc \n \n The date in UTC, e.g. 2020-06-01 \n \n \n \n _now_datetime_utc \n \n The ISO 8601 datetime in UTC, e.g. 2020-06-24T18:01:07Z \n \n \n \n _random_chars_* - e.g. _random_chars_128 \n \n A random string of characters of the specified length. \n \n \n \n Here's an example configuration (this time using metadata.yaml since that provides better support for multi-line SQL queries) that adds a message from the authenticated user, storing various pieces of additional metadata using magic parameters: \n databases:\n mydatabase:\n queries:\n add_message:\n allow:\n id: \"*\"\n sql: |-\n INSERT INTO messages (\n user_id, message, datetime\n ) VALUES (\n :_actor_id, :message, :_now_datetime_utc\n )\n write: true \n The form presented at /mydatabase/add_message will have just a field for message - the other parameters will be populated by the magic parameter mechanism. \n Additional custom magic parameters can be added by plugins using the register_magic_parameters(datasette) hook.", "sections_fts": 70, "rank": null} {"rowid": 461, "title": "Writable canned queries", "content": "Canned queries by default are read-only. You can use the \"write\": true key to indicate that a canned query can write to the database. \n See Controlling access to specific canned queries for details on how to add permission checks to canned queries, using the \"allow\" key. \n {\n \"databases\": {\n \"mydatabase\": {\n \"queries\": {\n \"add_name\": {\n \"sql\": \"INSERT INTO names (name) VALUES (:name)\",\n \"write\": true\n }\n }\n }\n }\n} \n This configuration will create a page at /mydatabase/add_name displaying a form with a name field. Submitting that form will execute the configured INSERT query. \n You can customize how Datasette represents success and errors using the following optional properties: \n \n \n on_success_message - the message shown when a query is successful \n \n \n on_success_redirect - the path or URL the user is redirected to on success \n \n \n on_error_message - the message shown when a query throws an error \n \n \n on_error_redirect - the path or URL the user is redirected to on error \n \n \n For example: \n {\n \"databases\": {\n \"mydatabase\": {\n \"queries\": {\n \"add_name\": {\n \"sql\": \"INSERT INTO names (name) VALUES (:name)\",\n \"write\": true,\n \"on_success_message\": \"Name inserted\",\n \"on_success_redirect\": \"/mydatabase/names\",\n \"on_error_message\": \"Name insert failed\",\n \"on_error_redirect\": \"/mydatabase\"\n }\n }\n }\n }\n} \n You can use \"params\" to explicitly list the named parameters that should be displayed as form fields - otherwise they will be automatically detected. \n You can pre-populate form fields when the page first loads using a query string, e.g.
/mydatabase/add_name?name=Prepopulated . The user will have to submit the form to execute the query.", "sections_fts": 70, "rank": null} {"rowid": 460, "title": "fragment", "content": "Some plugins, such as datasette-vega , can be configured by including additional data in the fragment hash of the URL - the bit that comes after a # symbol. \n You can set a default fragment hash that will be included in the link to the canned query from the database index page using the \"fragment\" key. \n This example demonstrates both fragment and hide_sql : \n {\n \"databases\": {\n \"fixtures\": {\n \"queries\": {\n \"neighborhood_search\": {\n \"sql\": \"select neighborhood, facet_cities.name, state\\nfrom facetable join facet_cities on facetable.city_id = facet_cities.id\\nwhere neighborhood like '%' || :text || '%' order by neighborhood;\",\n \"fragment\": \"fragment-goes-here\",\n \"hide_sql\": true\n }\n }\n }\n }\n} \n See here for a demo of this in action.", "sections_fts": 70, "rank": null} {"rowid": 459, "title": "hide_sql", "content": "Canned queries default to displaying their SQL query at the top of the page. If the query is extremely long you may want to hide it by default, with a \"show\" link that can be used to make it visible. \n Add the \"hide_sql\": true option to hide the SQL query by default.", "sections_fts": 70, "rank": null} {"rowid": 458, "title": "Additional canned query options", "content": "Additional options can be specified for canned queries in the YAML or JSON configuration.", "sections_fts": 70, "rank": null} {"rowid": 457, "title": "Canned query parameters", "content": "Canned queries support named parameters, so if you include those in the SQL you will then be able to enter them using the form fields on the canned query page or by adding them to the URL. This means canned queries can be used to create custom JSON APIs based on a carefully designed SQL statement. \n Here's an example of a canned query with a named parameter: \n select neighborhood, facet_cities.name, state\nfrom facetable\n join facet_cities on facetable.city_id = facet_cities.id\nwhere neighborhood like '%' || :text || '%'\norder by neighborhood; \n In the canned query metadata (here Using YAML for metadata as metadata.yaml ) it looks like this: \n databases:\n fixtures:\n queries:\n neighborhood_search:\n sql: |-\n select neighborhood, facet_cities.name, state\n from facetable\n join facet_cities on facetable.city_id = facet_cities.id\n where neighborhood like '%' || :text || '%'\n order by neighborhood\n title: Search neighborhoods \n Here's the equivalent using JSON (as metadata.json ): \n {\n \"databases\": {\n \"fixtures\": {\n \"queries\": {\n \"neighborhood_search\": {\n \"sql\": \"select neighborhood, facet_cities.name, state\\nfrom facetable\\n join facet_cities on facetable.city_id = facet_cities.id\\nwhere neighborhood like '%' || :text || '%'\\norder by neighborhood\",\n \"title\": \"Search neighborhoods\"\n }\n }\n }\n }\n} \n Note that we are using SQLite string concatenation here - the || operator - to add wildcard % characters to the string provided by the user. \n You can try this canned query out here:\n https://latest.datasette.io/fixtures/neighborhood_search?text=town \n In this example the :text named parameter is automatically extracted from the query using a regular expression. 
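A minimal sketch of that extraction idea (a simplified approximation, not Datasette's exact implementation):

    import re

    sql = (
        "select neighborhood, facet_cities.name, state "
        "from facetable join facet_cities on facetable.city_id = facet_cities.id "
        "where neighborhood like '%' || :text || '%' order by neighborhood"
    )

    # Collect tokens of the form :parameter_name - for this query that finds {"text"}
    named_parameters = set(re.findall(r":([a-zA-Z0-9_]+)", sql))
    print(named_parameters)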
\n You can alternatively provide an explicit list of named parameters using the \"params\" key, like this: \n databases:\n fixtures:\n queries:\n neighborhood_search:\n params:\n - text\n sql: |-\n select neighborhood, facet_cities.name, state\n from facetable\n join facet_cities on facetable.city_id = facet_cities.id\n where neighborhood like '%' || :text || '%'\n order by neighborhood\n title: Search neighborhoods", "sections_fts": 70, "rank": null} {"rowid": 456, "title": "Canned queries", "content": "As an alternative to adding views to your database, you can define canned queries inside your metadata.json file. Here's an example: \n {\n \"databases\": {\n \"sf-trees\": {\n \"queries\": {\n \"just_species\": {\n \"sql\": \"select qSpecies from Street_Tree_List\"\n }\n }\n }\n }\n} \n Then run Datasette like this: \n datasette sf-trees.db -m metadata.json \n Each canned query will be listed on the database index page, and will also get its own URL at: \n /database-name/canned-query-name \n For the above example, that URL would be: \n /sf-trees/just_species \n You can optionally include \"title\" and \"description\" keys to show a title and description on the canned query page. As with regular table metadata you can alternatively specify \"description_html\" to have your description rendered as HTML (rather than having HTML special characters escaped).", "sections_fts": 70, "rank": null} {"rowid": 455, "title": "Views", "content": "If you want to bundle some pre-written SQL queries with your Datasette-hosted database you can do so in two ways. The first is to include SQL views in your database - Datasette will then list those views on your database index page. \n The quickest way to create views is with the SQLite command-line interface: \n $ sqlite3 sf-trees.db\nSQLite version 3.19.3 2017-06-27 16:48:08\nEnter \".help\" for usage hints.\nsqlite> CREATE VIEW demo_view AS select qSpecies from Street_Tree_List;\n", "sections_fts": 70, "rank": null} {"rowid": 454, "title": "Named parameters", "content": "Datasette has special support for SQLite named parameters. Consider a SQL query like this: \n select * from Street_Tree_List\nwhere \"PermitNotes\" like :notes\nand \"qSpecies\" = :species \n If you execute this query using the custom query editor, Datasette will extract the two named parameters and use them to construct form fields for you to provide values. \n You can also provide values for these fields by constructing a URL: \n /mydatabase?sql=select...&species=44 \n SQLite string escaping rules will be applied to values passed using named parameters - they will be wrapped in quotes and their content will be correctly escaped. \n Values from named parameters are treated as SQLite strings. If you need to perform numeric comparisons on them you should cast them to an integer or float first using cast(:name as integer) or cast(:name as real) , for example: \n select * from Street_Tree_List\nwhere latitude > cast(:min_latitude as real)\nand latitude < cast(:max_latitude as real) \n Datasette disallows custom SQL queries containing the string PRAGMA (with a small number of exceptions ) as SQLite pragma statements can be used to change database settings at runtime. If you need to include the string \"pragma\" in a query you can do so safely using a named parameter.", "sections_fts": 70, "rank": null} {"rowid": 453, "title": "Running SQL queries", "content": "Datasette treats SQLite database files as read-only and immutable. 
This means it is not possible to execute INSERT or UPDATE statements using Datasette, which allows us to expose SELECT statements to the outside world without needing to worry about SQL injection attacks. \n The easiest way to execute custom SQL against Datasette is through the web UI. The database index page includes a SQL editor that lets you run any SELECT query you like. You can also construct queries using the filter interface on the tables page, then click \"View and edit SQL\" to open that query in the custom SQL editor. \n Note that this interface is only available if the execute-sql permission is allowed. \n Any Datasette SQL query is reflected in the URL of the page, allowing you to bookmark them, share them with others and navigate through previous queries using your browser back button. \n You can also retrieve the results of any query as JSON by adding .json to the base URL.", "sections_fts": 70, "rank": null} {"rowid": 452, "title": "Binary plugins", "content": "Several Datasette plugins are available that change the way Datasette treats binary data. \n \n \n datasette-render-binary modifies Datasette's default interface to show an automatic guess at what type of binary data is being stored, along with a visual representation of the binary value that displays ASCII strings directly in the interface. \n \n \n datasette-render-images detects common image formats and renders them as images directly in the Datasette interface. \n \n \n datasette-media allows Datasette interfaces to be configured to serve binary files from configured SQL queries, and includes the ability to resize images directly before serving them.", "sections_fts": 70, "rank": null} {"rowid": 451, "title": "Linking to binary downloads", "content": "The .blob output format is used to return binary data. It requires a _blob_column= query string argument specifying which BLOB column should be downloaded, for example: \n https://latest.datasette.io/fixtures/binary_data/1.blob?_blob_column=data \n This output format can also be used to return binary data from an arbitrary SQL query. Since such queries do not specify an exact row, an additional ?_blob_hash= parameter can be used to specify the SHA-256 hash of the value that is being linked to. \n Consider the query select data from binary_data - demonstrated here . \n That page links to the binary value downloads. Those links look like this: \n https://latest.datasette.io/fixtures.blob?sql=select+data+from+binary_data&_blob_column=data&_blob_hash=f3088978da8f9aea479ffc7f631370b968d2e855eeb172bea7f6c7a04262bb6d \n These .blob links are also returned in the .csv exports Datasette provides for binary tables and queries, since the CSV format does not have a mechanism for representing binary data.", "sections_fts": 70, "rank": null} {"rowid": 450, "title": "Binary data", "content": "SQLite tables can contain binary data in BLOB columns. \n Datasette includes special handling for these binary values. The Datasette interface detects binary values and provides a link to download their content, for example on https://latest.datasette.io/fixtures/binary_data \n \n Binary data is represented in .json exports using Base64 encoding. 
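A client consuming those exports needs to reverse the encoding. Here's a minimal sketch, assuming the {"$base64": true, "encoded": ...} shape shown in the example below:

    import base64

    def decode_value(value):
        # BLOB columns arrive as {"$base64": true, "encoded": "..."} objects;
        # everything else (including null) passes through unchanged
        if isinstance(value, dict) and value.get("$base64"):
            return base64.b64decode(value["encoded"])
        return value

    print(decode_value({"$base64": True, "encoded": "FRwCx60F/g=="}))
    print(decode_value(None))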
\n https://latest.datasette.io/fixtures/binary_data.json?_shape=array \n [\n {\n \"rowid\": 1,\n \"data\": {\n \"$base64\": true,\n \"encoded\": \"FRwCx60F/g==\"\n }\n },\n {\n \"rowid\": 2,\n \"data\": {\n \"$base64\": true,\n \"encoded\": \"FRwDx60F/g==\"\n }\n },\n {\n \"rowid\": 3,\n \"data\": null\n }\n]", "sections_fts": 70, "rank": null} {"rowid": 449, "title": "debug-menu", "content": "Controls if the various debug pages are displayed in the navigation menu. \n Default deny .", "sections_fts": 70, "rank": null} {"rowid": 448, "title": "permissions-debug", "content": "Actor is allowed to view the /-/permissions debug page. \n Default deny .", "sections_fts": 70, "rank": null} {"rowid": 447, "title": "execute-sql", "content": "Actor is allowed to run arbitrary SQL queries against a specific database, e.g. https://latest.datasette.io/fixtures?sql=select+100 \n \n \n resource - string \n \n The name of the database \n \n \n \n Default allow . See also the default_allow_sql setting .", "sections_fts": 70, "rank": null} {"rowid": 446, "title": "view-query", "content": "Actor is allowed to view (and execute) a canned query page, e.g. https://latest.datasette.io/fixtures/pragma_cache_size - this includes executing Writable canned queries . \n \n \n resource - tuple: (string, string) \n \n The name of the database, then the name of the canned query \n \n \n \n Default allow .", "sections_fts": 70, "rank": null} {"rowid": 445, "title": "view-table", "content": "Actor is allowed to view a table (or view) page, e.g. https://latest.datasette.io/fixtures/complex_foreign_keys \n \n \n resource - tuple: (string, string) \n \n The name of the database, then the name of the table \n \n \n \n Default allow .", "sections_fts": 70, "rank": null} {"rowid": 444, "title": "view-database-download", "content": "Actor is allowed to download a database, e.g. https://latest.datasette.io/fixtures.db \n \n \n resource - string \n \n The name of the database \n \n \n \n Default allow .", "sections_fts": 70, "rank": null} {"rowid": 443, "title": "view-database", "content": "Actor is allowed to view a database page, e.g. https://latest.datasette.io/fixtures \n \n \n resource - string \n \n The name of the database \n \n \n \n Default allow .", "sections_fts": 70, "rank": null} {"rowid": 442, "title": "view-instance", "content": "Top level permission - Actor is allowed to view any pages within this instance, starting at https://latest.datasette.io/ \n Default allow .", "sections_fts": 70, "rank": null} {"rowid": 441, "title": "Built-in permissions", "content": "This section lists all of the permission checks that are carried out by Datasette core, along with the resource if it was passed.", "sections_fts": 70, "rank": null} {"rowid": 440, "title": "The /-/logout page", "content": "The page at /-/logout provides the ability to log out of a ds_actor cookie authentication session.", "sections_fts": 70, "rank": null} {"rowid": 439, "title": "Including an expiry time", "content": "ds_actor cookies can optionally include a signed expiry timestamp, after which the cookies will no longer be valid. Authentication plugins may choose to use this mechanism to limit the lifetime of the cookie. For example, if a plugin implements single-sign-on against another source it may decide to set short-lived cookies so that if the user is removed from the SSO system their existing Datasette cookies will stop working shortly afterwards.
\n To include an expiry, add an \"e\" key to the cookie value containing a base62-encoded integer representing the timestamp when the cookie should expire. For example, here's how to set a cookie that expires after 24 hours: \n import time\nfrom datasette.utils import baseconv\n\nexpires_at = int(time.time()) + (24 * 60 * 60)\n\nresponse = Response.redirect(\"/\")\nresponse.set_cookie(\n \"ds_actor\",\n datasette.sign(\n {\n \"a\": {\"id\": \"cleopaws\"},\n \"e\": baseconv.base62.encode(expires_at),\n },\n \"actor\",\n ),\n) \n The resulting cookie will encode data that looks something like this: \n {\n \"a\": {\n \"id\": \"cleopaws\"\n },\n \"e\": \"1jjSji\"\n}", "sections_fts": 70, "rank": null} {"rowid": 438, "title": "The ds_actor cookie", "content": "Datasette includes a default authentication plugin which looks for a signed ds_actor cookie containing a JSON actor dictionary. This is how the root actor mechanism works. \n Authentication plugins can set signed ds_actor cookies themselves like so: \n response = Response.redirect(\"/\")\nresponse.set_cookie(\n \"ds_actor\",\n datasette.sign({\"a\": {\"id\": \"cleopaws\"}}, \"actor\"),\n) \n Note that you need to pass \"actor\" as the namespace to .sign(value, namespace=\"default\") . \n The shape of data encoded in the cookie is as follows: \n {\n \"a\": {... actor ...}\n}", "sections_fts": 70, "rank": null} {"rowid": 437, "title": "The permissions debug tool", "content": "The debug tool at /-/permissions is only available to the authenticated root user (or any actor granted the permissions-debug action according to a plugin). \n It shows the thirty most recent permission checks that have been carried out by the Datasette instance. \n This is designed to help administrators and plugin authors understand exactly how permission checks are being carried out, in order to effectively configure Datasette's permission system.", "sections_fts": 70, "rank": null} {"rowid": 436, "title": "actor_matches_allow()", "content": "Plugins that wish to implement this same \"allow\" block permissions scheme can take advantage of the datasette.utils.actor_matches_allow(actor, allow) function: \n from datasette.utils import actor_matches_allow\n\nactor_matches_allow({\"id\": \"root\"}, {\"id\": \"*\"})\n# returns True \n The currently authenticated actor is made available to plugins as request.actor .", "sections_fts": 70, "rank": null} {"rowid": 435, "title": "Checking permissions in plugins", "content": "Datasette plugins can check if an actor has permission to perform an action using the datasette.permission_allowed(...) method. \n Datasette core performs a number of permission checks, documented below . Plugins can implement the permission_allowed(datasette, actor, action, resource) plugin hook to participate in decisions about whether an actor should be able to perform a specified action.", "sections_fts": 70, "rank": null} {"rowid": 434, "title": "Controlling the ability to execute arbitrary SQL", "content": "Datasette defaults to allowing any site visitor to execute their own custom SQL queries, for example using the form on the database page or by appending a ?_where= parameter to the table page like this . \n Access to this ability is controlled by the execute-sql permission. \n The easiest way to disable arbitrary SQL queries is using the default_allow_sql setting when you first start Datasette running. \n You can alternatively use an \"allow_sql\" block to control who is allowed to execute arbitrary SQL queries.
\n To prevent any user from executing arbitrary SQL queries, use this: \n {\n \"allow_sql\": false\n} \n To enable just the root user to execute SQL for all databases in your instance, use the following: \n {\n \"allow_sql\": {\n \"id\": \"root\"\n }\n} \n To limit this ability for just one specific database, use this: \n {\n \"databases\": {\n \"mydatabase\": {\n \"allow_sql\": {\n \"id\": \"root\"\n }\n }\n }\n}", "sections_fts": 70, "rank": null} {"rowid": 433, "title": "Controlling access to specific canned queries", "content": "Canned queries allow you to configure named SQL queries in your metadata.json that can be executed by users. These queries can be set up to both read and write to the database, so controlling who can execute them can be important. \n To limit access to the add_name canned query in your dogs.db database to just the root user : \n {\n \"databases\": {\n \"dogs\": {\n \"queries\": {\n \"add_name\": {\n \"sql\": \"INSERT INTO names (name) VALUES (:name)\",\n \"write\": true,\n \"allow\": {\n \"id\": [\"root\"]\n }\n }\n }\n }\n }\n}", "sections_fts": 70, "rank": null} {"rowid": 432, "title": "Controlling access to specific tables and views", "content": "To limit access to the users table in your bakery.db database: \n {\n \"databases\": {\n \"bakery\": {\n \"tables\": {\n \"users\": {\n \"allow\": {\n \"id\": \"*\"\n }\n }\n }\n }\n }\n} \n This works for SQL views as well - you can list their names in the \"tables\" block above in the same way as regular tables. \n \n Restricting access to tables and views in this way will NOT prevent users from querying them using arbitrary SQL queries, like this for example. \n If you are restricting access to specific tables you should also use the \"allow_sql\" block to prevent users from bypassing the limit with their own SQL queries - see Controlling the ability to execute arbitrary SQL .", "sections_fts": 70, "rank": null} {"rowid": 431, "title": "Controlling access to specific databases", "content": "To limit access to a specific private.db database to just authenticated users, use the \"allow\" block like this: \n {\n \"databases\": {\n \"private\": {\n \"allow\": {\n \"id\": \"*\"\n }\n }\n }\n}", "sections_fts": 70, "rank": null} {"rowid": 430, "title": "Controlling access to an instance", "content": "Here's how to restrict access to your entire Datasette instance to just the \"id\": \"root\" user: \n {\n \"title\": \"My private Datasette instance\",\n \"allow\": {\n \"id\": \"root\"\n }\n} \n To deny access to all users, you can use \"allow\": false : \n {\n \"title\": \"My entirely inaccessible instance\",\n \"allow\": false\n} \n One reason to do this is if you are using a Datasette plugin - such as datasette-permissions-sql - to control permissions instead.", "sections_fts": 70, "rank": null} {"rowid": 429, "title": "Configuring permissions in metadata.json", "content": "You can limit who is allowed to view different parts of your Datasette instance using \"allow\" keys in your Metadata configuration. \n You can control the following: \n \n \n Access to the entire Datasette instance \n \n \n Access to specific databases \n \n \n Access to specific tables and views \n \n \n Access to specific Canned queries \n \n \n If a user cannot access a specific database, they will not be able to access tables, views or queries within that database. 
If a user cannot access the instance they will not be able to access any of the databases, tables, views or queries.", "sections_fts": 70, "rank": null} {"rowid": 428, "title": "The /-/allow-debug tool", "content": "The /-/allow-debug tool lets you try out different \"allow\" blocks against different \"actor\" JSON objects. You can try that out here: https://latest.datasette.io/-/allow-debug", "sections_fts": 70, "rank": null} {"rowid": 427, "title": "Defining permissions with \"allow\" blocks", "content": "The standard way to define permissions in Datasette is to use an \"allow\" block. This is a JSON document describing which actors are allowed to perform a permission. \n The most basic form of allow block is this ( allow demo , deny demo ): \n {\n \"allow\": {\n \"id\": \"root\"\n }\n} \n This will match any actors with an \"id\" property of \"root\" - for example, an actor that looks like this: \n {\n \"id\": \"root\",\n \"name\": \"Root User\"\n} \n An allow block can specify \"deny all\" using false ( demo ): \n {\n \"allow\": false\n} \n An \"allow\" of true allows all access ( demo ): \n {\n \"allow\": true\n} \n Allow keys can provide a list of values. These will match any actor that has any of those values ( allow demo , deny demo ): \n {\n \"allow\": {\n \"id\": [\"simon\", \"cleopaws\"]\n }\n} \n This will match any actor with an \"id\" of either \"simon\" or \"cleopaws\" . \n Actors can have properties that feature a list of values. These will be matched against the list of values in an allow block. Consider the following actor: \n {\n \"id\": \"simon\",\n \"roles\": [\"staff\", \"developer\"]\n} \n This allow block will provide access to any actor that has \"developer\" as one of their roles ( allow demo , deny demo ): \n {\n \"allow\": {\n \"roles\": [\"developer\"]\n }\n} \n Note that \"roles\" is not a concept that is baked into Datasette - it's a convention that plugins can choose to implement and act on. \n If you want to provide access to any actor with a value for a specific key, use \"*\" . For example, to match any logged-in user specify the following ( allow demo , deny demo ): \n {\n \"allow\": {\n \"id\": \"*\"\n }\n} \n You can specify that only unauthenticated actors (from anonymous HTTP requests) should be allowed access using the special \"unauthenticated\": true key in an allow block ( allow demo , deny demo ): \n {\n \"allow\": {\n \"unauthenticated\": true\n }\n} \n Allow keys act as an \"or\" mechanism. An actor will be able to execute the query if any of their JSON properties match any of the values in the corresponding lists in the allow block. The following block will allow users with either a role of \"ops\" OR users who have an id of \"simon\" or \"cleopaws\" : \n {\n \"allow\": {\n \"id\": [\"simon\", \"cleopaws\"],\n \"role\": \"ops\"\n }\n} \n Demo for cleopaws , demo for ops role , demo for an actor matching neither rule .", "sections_fts": 70, "rank": null} {"rowid": 426, "title": "Permissions", "content": "Datasette has an extensive permissions system built-in, which can be further extended and customized by plugins. \n The key question the permissions system answers is this: \n \n Is this actor allowed to perform this action , optionally against this particular resource ? \n \n Actors are described above . \n An action is a string describing the action the actor would like to perform. A full list is provided below - examples include view-table and execute-sql .
\n A resource is the item the actor wishes to interact with - for example a specific database or table. Some actions, such as permissions-debug , are not associated with a particular resource. \n Datasette's built-in view permissions ( view-database , view-table etc) default to allow - unless you configure additional permission rules unauthenticated users will be allowed to access content. \n Permissions with potentially harmful effects should default to deny . Plugin authors should account for this when designing new plugins - for example, the datasette-upload-csvs plugin defaults to deny so that installations don't accidentally allow unauthenticated users to create new tables by uploading a CSV file.", "sections_fts": 70, "rank": null} {"rowid": 425, "title": "Using the \"root\" actor", "content": "Datasette currently leaves almost all forms of authentication to plugins - datasette-auth-github for example. \n The one exception is the \"root\" account, which you can sign into while using Datasette on your local machine. This provides access to a small number of debugging features. \n To sign in as root, start Datasette using the --root command-line option, like this: \n $ datasette --root\nhttp://127.0.0.1:8001/-/auth-token?token=786fc524e0199d70dc9a581d851f466244e114ca92f33aa3b42a139e9388daa7\nINFO: Started server process [25801]\nINFO: Waiting for application startup.\nINFO: Application startup complete.\nINFO: Uvicorn running on http://127.0.0.1:8001 (Press CTRL+C to quit) \n The URL on the first line includes a one-use token which can be used to sign in as the \"root\" actor in your browser. Click on that link and then visit http://127.0.0.1:8001/-/actor to confirm that you are authenticated as an actor that looks like this: \n {\n \"id\": \"root\"\n}", "sections_fts": 70, "rank": null} {"rowid": 424, "title": "Actors", "content": "Through plugins, Datasette can support both authenticated users (with cookies) and authenticated API agents (via authentication tokens). The word \"actor\" is used to cover both of these cases. \n Every request to Datasette has an associated actor value, available in the code as request.actor . This can be None for unauthenticated requests, or a JSON compatible Python dictionary for authenticated users or API agents. \n The actor dictionary can be any shape - the design of that data structure is left up to the plugins. A useful convention is to include an \"id\" string, as demonstrated by the \"root\" actor below. \n Plugins can use the actor_from_request(datasette, request) hook to implement custom logic for authenticating an actor based on the incoming HTTP request.", "sections_fts": 70, "rank": null} {"rowid": 423, "title": "Authentication and permissions", "content": "Datasette does not require authentication by default. Any visitor to a Datasette instance can explore the full data and execute read-only SQL queries. \n Datasette's plugin system can be used to add many different styles of authentication, such as user accounts, single sign-on or API keys.", "sections_fts": 70, "rank": null} {"rowid": 422, "title": "datasette inspect", "content": "Outputs JSON representing introspected data about one or more SQLite database files. 
\n If you are opening an immutable database, you can pass this file to the --inspect-file option to improve Datasette's performance by allowing it to skip running row counts against the database when it first starts running: \n datasette inspect mydatabase.db > inspect-data.json\ndatasette serve -i mydatabase.db --inspect-file inspect-data.json \n This performance optimization is used automatically by some of the datasette publish commands. You are unlikely to need to apply this optimization manually. \n [[[cog\nhelp([\"inspect\", \"--help\"]) \n ]]] \n Usage: datasette inspect [OPTIONS] [FILES]...\n\n Generate JSON summary of provided database files\n\n This can then be passed to \"datasette --inspect-file\" to speed up count\n operations against immutable database files.\n\nOptions:\n --inspect-file TEXT\n --load-extension PATH:ENTRYPOINT?\n Path to a SQLite extension to load, and\n optional entrypoint\n --help Show this message and exit. \n [[[end]]]", "sections_fts": 70, "rank": null} {"rowid": 421, "title": "datasette package", "content": "Package SQLite files into a Datasette Docker container, see datasette package . \n [[[cog\nhelp([\"package\", \"--help\"]) \n ]]] \n Usage: datasette package [OPTIONS] FILES...\n\n Package SQLite files into a Datasette Docker container\n\nOptions:\n -t, --tag TEXT Name for the resulting Docker container, can\n optionally use name:tag format\n -m, --metadata FILENAME Path to JSON/YAML file containing metadata to\n publish\n --extra-options TEXT Extra options to pass to datasette serve\n --branch TEXT Install datasette from a GitHub branch e.g. main\n --template-dir DIRECTORY Path to directory containing custom templates\n --plugins-dir DIRECTORY Path to directory containing custom plugins\n --static MOUNT:DIRECTORY Serve static files from this directory at /MOUNT/...\n --install TEXT Additional packages (e.g. plugins) to install\n --spatialite Enable SpatialLite extension\n --version-note TEXT Additional note to show on /-/versions\n --secret TEXT Secret used for signing secure values, such as\n signed cookies\n -p, --port INTEGER RANGE Port to run the server on, defaults to 8001\n [1<=x<=65535]\n --title TEXT Title for metadata\n --license TEXT License label for metadata\n --license_url TEXT License URL for metadata\n --source TEXT Source label for metadata\n --source_url TEXT Source URL for metadata\n --about TEXT About label for metadata\n --about_url TEXT About URL for metadata\n --help Show this message and exit. \n [[[end]]]", "sections_fts": 70, "rank": null} {"rowid": 420, "title": "datasette publish heroku", "content": "See Publishing to Heroku . \n [[[cog\nhelp([\"publish\", \"heroku\", \"--help\"]) \n ]]] \n Usage: datasette publish heroku [OPTIONS] [FILES]...\n\n Publish databases to Datasette running on Heroku\n\nOptions:\n -m, --metadata FILENAME Path to JSON/YAML file containing metadata to\n publish\n --extra-options TEXT Extra options to pass to datasette serve\n --branch TEXT Install datasette from a GitHub branch e.g.\n main\n --template-dir DIRECTORY Path to directory containing custom templates\n --plugins-dir DIRECTORY Path to directory containing custom plugins\n --static MOUNT:DIRECTORY Serve static files from this directory at\n /MOUNT/...\n --install TEXT Additional packages (e.g. plugins) to install\n --plugin-secret ...\n Secrets to pass to plugins, e.g.
--plugin-\n secret datasette-auth-github client_id xxx\n --version-note TEXT Additional note to show on /-/versions\n --secret TEXT Secret used for signing secure values, such as\n signed cookies\n --title TEXT Title for metadata\n --license TEXT License label for metadata\n --license_url TEXT License URL for metadata\n --source TEXT Source label for metadata\n --source_url TEXT Source URL for metadata\n --about TEXT About label for metadata\n --about_url TEXT About URL for metadata\n -n, --name TEXT Application name to use when deploying\n --tar TEXT --tar option to pass to Heroku, e.g.\n --tar=/usr/local/bin/gtar\n --generate-dir DIRECTORY Output generated application files and stop\n without deploying\n --help Show this message and exit. \n [[[end]]]", "sections_fts": 70, "rank": null} {"rowid": 419, "title": "datasette publish cloudrun", "content": "See Publishing to Google Cloud Run . \n [[[cog\nhelp([\"publish\", \"cloudrun\", \"--help\"]) \n ]]] \n Usage: datasette publish cloudrun [OPTIONS] [FILES]...\n\n Publish databases to Datasette running on Cloud Run\n\nOptions:\n -m, --metadata FILENAME Path to JSON/YAML file containing metadata to\n publish\n --extra-options TEXT Extra options to pass to datasette serve\n --branch TEXT Install datasette from a GitHub branch e.g.\n main\n --template-dir DIRECTORY Path to directory containing custom templates\n --plugins-dir DIRECTORY Path to directory containing custom plugins\n --static MOUNT:DIRECTORY Serve static files from this directory at\n /MOUNT/...\n --install TEXT Additional packages (e.g. plugins) to install\n --plugin-secret ...\n Secrets to pass to plugins, e.g. --plugin-\n secret datasette-auth-github client_id xxx\n --version-note TEXT Additional note to show on /-/versions\n --secret TEXT Secret used for signing secure values, such as\n signed cookies\n --title TEXT Title for metadata\n --license TEXT License label for metadata\n --license_url TEXT License URL for metadata\n --source TEXT Source label for metadata\n --source_url TEXT Source URL for metadata\n --about TEXT About label for metadata\n --about_url TEXT About URL for metadata\n -n, --name TEXT Application name to use when building\n --service TEXT Cloud Run service to deploy (or over-write)\n --spatialite Enable SpatialLite extension\n --show-files Output the generated Dockerfile and\n metadata.json\n --memory TEXT Memory to allocate in Cloud Run, e.g. 1Gi\n --cpu [1|2|4] Number of vCPUs to allocate in Cloud Run\n --timeout INTEGER Build timeout in seconds\n --apt-get-install TEXT Additional packages to apt-get install\n --max-instances INTEGER Maximum Cloud Run instances\n --min-instances INTEGER Minimum Cloud Run instances\n --help Show this message and exit. \n [[[end]]]", "sections_fts": 70, "rank": null} {"rowid": 418, "title": "datasette publish", "content": "Shows a list of available deployment targets for publishing data with Datasette. \n Additional deployment targets can be added by plugins that use the publish_subcommand(publish) hook. 
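A minimal sketch of such a plugin, registering a hypothetical mytarget deployment command - the publish argument is the Click command group that the cloudrun and heroku commands are also attached to:

    import click
    from datasette import hookimpl

    @hookimpl
    def publish_subcommand(publish):
        # Adds "datasette publish mytarget ..." as a new deployment target
        @publish.command()
        @click.argument("files", type=click.Path(exists=True), nargs=-1)
        def mytarget(files):
            click.echo("Would deploy {} to mytarget".format(files))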
\n [[[cog\nhelp([\"publish\", \"--help\"]) \n ]]] \n Usage: datasette publish [OPTIONS] COMMAND [ARGS]...\n\n Publish specified SQLite database files to the internet along with a\n Datasette-powered interface and API\n\nOptions:\n --help Show this message and exit.\n\nCommands:\n cloudrun Publish databases to Datasette running on Cloud Run\n heroku Publish databases to Datasette running on Heroku \n [[[end]]]", "sections_fts": 70, "rank": null} {"rowid": 417, "title": "datasette uninstall", "content": "Uninstall one or more plugins. \n [[[cog\nhelp([\"uninstall\", \"--help\"]) \n ]]] \n Usage: datasette uninstall [OPTIONS] PACKAGES...\n\n Uninstall plugins and Python packages from the Datasette environment\n\nOptions:\n -y, --yes Don't ask for confirmation\n --help Show this message and exit. \n [[[end]]]", "sections_fts": 70, "rank": null} {"rowid": 416, "title": "datasette install", "content": "Install new Datasette plugins. This command works like pip install but ensures that your plugins will be installed into the same environment as Datasette. \n This command: \n datasette install datasette-cluster-map \n Would install the datasette-cluster-map plugin. \n [[[cog\nhelp([\"install\", \"--help\"]) \n ]]] \n Usage: datasette install [OPTIONS] PACKAGES...\n\n Install plugins and packages from PyPI into the same environment as Datasette\n\nOptions:\n -U, --upgrade Upgrade packages to latest version\n --help Show this message and exit. \n [[[end]]]", "sections_fts": 70, "rank": null} {"rowid": 415, "title": "datasette plugins", "content": "Output JSON showing all currently installed plugins, their versions, whether they include static files or templates and which Plugin hooks they use. \n [[[cog\nhelp([\"plugins\", \"--help\"]) \n ]]] \n Usage: datasette plugins [OPTIONS]\n\n List currently installed plugins\n\nOptions:\n --all Include built-in default plugins\n --plugins-dir DIRECTORY Path to directory containing custom plugins\n --help Show this message and exit. \n [[[end]]] \n Example output: \n [\n {\n \"name\": \"datasette-geojson\",\n \"static\": false,\n \"templates\": false,\n \"version\": \"0.3.1\",\n \"hooks\": [\n \"register_output_renderer\"\n ]\n },\n {\n \"name\": \"datasette-geojson-map\",\n \"static\": true,\n \"templates\": false,\n \"version\": \"0.4.0\",\n \"hooks\": [\n \"extra_body_script\",\n \"extra_css_urls\",\n \"extra_js_urls\"\n ]\n },\n {\n \"name\": \"datasette-leaflet\",\n \"static\": true,\n \"templates\": false,\n \"version\": \"0.2.2\",\n \"hooks\": [\n \"extra_body_script\",\n \"extra_template_vars\"\n ]\n }\n]", "sections_fts": 70, "rank": null} {"rowid": 414, "title": "datasette serve --help-settings", "content": "This command outputs all of the available Datasette settings . \n These can be passed to datasette serve using datasette serve --setting name value . 
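For example, to raise the SQL time limit and turn off suggested facets (both settings appear in the list below):

    datasette mydatabase.db \
        --setting sql_time_limit_ms 3500 \
        --setting suggest_facets off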
\n [[[cog\nhelp([\"--help-settings\"]) \n ]]] \n Settings:\n default_page_size Default page size for the table view\n (default=100)\n max_returned_rows Maximum rows that can be returned from a table or\n custom query (default=1000)\n num_sql_threads Number of threads in the thread pool for\n executing SQLite queries (default=3)\n sql_time_limit_ms Time limit for a SQL query in milliseconds\n (default=1000)\n default_facet_size Number of values to return for requested facets\n (default=30)\n facet_time_limit_ms Time limit for calculating a requested facet\n (default=200)\n facet_suggest_time_limit_ms Time limit for calculating a suggested facet\n (default=50)\n allow_facet Allow users to specify columns to facet using\n ?_facet= parameter (default=True)\n default_allow_sql Allow anyone to run arbitrary SQL queries\n (default=True)\n allow_download Allow users to download the original SQLite\n database files (default=True)\n suggest_facets Calculate and display suggested facets\n (default=True)\n default_cache_ttl Default HTTP cache TTL (used in Cache-Control:\n max-age= header) (default=5)\n cache_size_kb SQLite cache size in KB (0 == use SQLite default)\n (default=0)\n allow_csv_stream Allow .csv?_stream=1 to download all rows\n (ignoring max_returned_rows) (default=True)\n max_csv_mb Maximum size allowed for CSV export in MB - set 0\n to disable this limit (default=100)\n truncate_cells_html Truncate cells longer than this in HTML table\n view - set 0 to disable (default=2048)\n force_https_urls Force URLs in API output to always use https://\n protocol (default=False)\n template_debug Allow display of template debug information with\n ?_context=1 (default=False)\n trace_debug Allow display of SQL trace debug information with\n ?_trace=1 (default=False)\n base_url Datasette URLs should use this base path\n (default=/) \n [[[end]]]", "sections_fts": 70, "rank": null} {"rowid": 413, "title": "datasette --get", "content": "The --get option to datasette serve (or just datasette ) specifies the path to a page within Datasette and causes Datasette to output the content from that path without starting the web server. \n This means that all of Datasette's functionality can be accessed directly from the command-line. \n For example: \n $ datasette --get '/-/versions.json' | jq .\n{\n \"python\": {\n \"version\": \"3.8.5\",\n \"full\": \"3.8.5 (default, Jul 21 2020, 10:48:26) \\n[Clang 11.0.3 (clang-1103.0.32.62)]\"\n },\n \"datasette\": {\n \"version\": \"0.46+15.g222a84a.dirty\"\n },\n \"asgi\": \"3.0\",\n \"uvicorn\": \"0.11.8\",\n \"sqlite\": {\n \"version\": \"3.32.3\",\n \"fts_versions\": [\n \"FTS5\",\n \"FTS4\",\n \"FTS3\"\n ],\n \"extensions\": {\n \"json1\": null\n },\n \"compile_options\": [\n \"COMPILER=clang-11.0.3\",\n \"ENABLE_COLUMN_METADATA\",\n \"ENABLE_FTS3\",\n \"ENABLE_FTS3_PARENTHESIS\",\n \"ENABLE_FTS4\",\n \"ENABLE_FTS5\",\n \"ENABLE_GEOPOLY\",\n \"ENABLE_JSON1\",\n \"ENABLE_PREUPDATE_HOOK\",\n \"ENABLE_RTREE\",\n \"ENABLE_SESSION\",\n \"MAX_VARIABLE_NUMBER=250000\",\n \"THREADSAFE=1\"\n ]\n }\n} \n The exit code will be 0 if the request succeeds and 1 if the request produced an HTTP status code other than 200 - e.g. a 404 or 500 error. 
\n This lets you use datasette --get / to run tests against a Datasette application in a continuous integration environment such as GitHub Actions.", "sections_fts": 70, "rank": null} {"rowid": 412, "title": "datasette serve", "content": "This command starts the Datasette web application running on your machine: \n datasette serve mydatabase.db \n Or since this is the default command you can run this instead: \n datasette mydatabase.db \n Once started you can access it at http://localhost:8001 \n [[[cog\nhelp([\"serve\", \"--help\"]) \n ]]] \n Usage: datasette serve [OPTIONS] [FILES]...\n\n Serve up specified SQLite database files with a web UI\n\nOptions:\n -i, --immutable PATH Database files to open in immutable mode\n -h, --host TEXT Host for server. Defaults to 127.0.0.1 which\n means only connections from the local machine\n will be allowed. Use 0.0.0.0 to listen to all\n IPs and allow access from other machines.\n -p, --port INTEGER RANGE Port for server, defaults to 8001. Use -p 0 to\n automatically assign an available port.\n [0<=x<=65535]\n --uds TEXT Bind to a Unix domain socket\n --reload Automatically reload if code or metadata\n change detected - useful for development\n --cors Enable CORS by serving Access-Control-Allow-\n Origin: *\n --load-extension PATH:ENTRYPOINT?\n Path to a SQLite extension to load, and\n optional entrypoint\n --inspect-file TEXT Path to JSON file created using \"datasette\n inspect\"\n -m, --metadata FILENAME Path to JSON/YAML file containing\n license/source metadata\n --template-dir DIRECTORY Path to directory containing custom templates\n --plugins-dir DIRECTORY Path to directory containing custom plugins\n --static MOUNT:DIRECTORY Serve static files from this directory at\n /MOUNT/...\n --memory Make /_memory database available\n --config CONFIG Deprecated: set config option using\n configname:value. Use --setting instead.\n --setting SETTING... Setting, see\n docs.datasette.io/en/stable/settings.html\n --secret TEXT Secret used for signing secure values, such as\n signed cookies\n --root Output URL that sets a cookie authenticating\n the root user\n --get TEXT Run an HTTP GET request against this path,\n print results and exit\n --version-note TEXT Additional note to show on /-/versions\n --help-settings Show available settings\n --pdb Launch debugger on any errors\n -o, --open Open Datasette in your web browser\n --create Create database files if they do not exist\n --crossdb Enable cross-database joins using the /_memory\n database\n --nolock Ignore locking, open locked files in read-only\n mode\n --ssl-keyfile TEXT SSL key file\n --ssl-certfile TEXT SSL certificate file\n --help Show this message and exit. \n [[[end]]]", "sections_fts": 70, "rank": null} {"rowid": 411, "title": "datasette --help", "content": "Running datasette --help shows a list of all of the available commands. 
\n [[[cog\nhelp([\"--help\"]) \n ]]] \n Usage: datasette [OPTIONS] COMMAND [ARGS]...\n\n Datasette is an open source multi-tool for exploring and publishing data\n\n About Datasette: https://datasette.io/\n Full documentation: https://docs.datasette.io/\n\nOptions:\n --version Show the version and exit.\n --help Show this message and exit.\n\nCommands:\n serve* Serve up specified SQLite database files with a web UI\n inspect Generate JSON summary of provided database files\n install Install plugins and packages from PyPI into the same...\n package Package SQLite files into a Datasette Docker container\n plugins List currently installed plugins\n publish Publish specified SQLite database files to the internet along...\n uninstall Uninstall plugins and Python packages from the Datasette... \n [[[end]]] \n Additional commands added by plugins that use the register_commands(cli) hook will be listed here as well.", "sections_fts": 70, "rank": null} {"rowid": 410, "title": "CLI reference", "content": "The datasette CLI tool provides a number of commands. \n Running datasette without specifying a command runs the default command, datasette serve . See datasette serve for the full list of options for that command. \n [[[cog\nfrom datasette import cli\nfrom click.testing import CliRunner\nimport textwrap\ndef help(args):\n title = \"datasette \" + \" \".join(args)\n cog.out(\"\\n::\\n\\n\")\n result = CliRunner().invoke(cli.cli, args)\n output = result.output.replace(\"Usage: cli \", \"Usage: datasette \")\n cog.out(textwrap.indent(output, ' '))\n cog.out(\"\\n\\n\") \n ]]] \n [[[end]]]", "sections_fts": 70, "rank": null} {"rowid": 409, "title": "Contents", "content": "Getting started Play with a live demo Follow a tutorial Datasette in your browser with Datasette Lite Try Datasette without installing anything using Glitch Using Datasette on your own computer Installation Basic installation Datasette Desktop for Mac Using Homebrew Using pip Advanced installation options Using pipx Using Docker A note about extensions The Datasette Ecosystem sqlite-utils Dogsheep CLI reference datasette --help datasette serve datasette --get datasette serve --help-settings datasette plugins datasette install datasette uninstall datasette publish datasette publish cloudrun datasette publish heroku datasette package datasette inspect Pages and API endpoints Top-level index Database Table Row Publishing data datasette publish Publishing to Google Cloud Run Publishing to Heroku Publishing to Vercel Publishing to Fly Custom metadata and plugins datasette package Deploying Datasette Deployment fundamentals Running Datasette using systemd Running Datasette using OpenRC Deploying using buildpacks Running Datasette behind a proxy Nginx proxy configuration Apache proxy configuration JSON API Different shapes Pagination Special JSON arguments Table arguments Column filter arguments Special table arguments Expanding foreign key references Discovering the JSON for a page Running SQL queries Named parameters Views Canned queries Canned query parameters Additional canned query options Writable canned queries Magic parameters JSON API for writable canned queries Pagination Cross-database queries Authentication and permissions Actors Using the \"root\" actor Permissions Defining permissions with \"allow\" blocks The /-/allow-debug tool Configuring permissions in metadata.json Controlling access to an instance Controlling access to specific databases Controlling access to specific tables and views Controlling access to specific 
canned queries Controlling the ability to execute arbitrary SQL Checking permissions in plugins actor_matches_allow() The permissions debug tool The ds_actor cookie Including an expiry time The /-/logout page Built-in permissions view-instance view-database view-database-download view-table view-query execute-sql permissions-debug debug-menu Performance and caching Immutable mode Using \"datasette inspect\" HTTP caching datasette-hashed-urls CSV export URL parameters Streaming all records Binary data Linking to binary downloads Binary plugins Facets Facets in query strings Facets in metadata.json Suggested facets Speeding up facets with indexes Facet by JSON array Facet by date Full-text search The table page and table view API Advanced SQLite search queries Configuring full-text search for a table or view Searches using custom SQL Enabling full-text search for a SQLite table Configuring FTS using sqlite-utils Configuring FTS using csvs-to-sqlite Configuring FTS by hand FTS versions SpatiaLite Warning Installation Installing SpatiaLite on OS X Installing SpatiaLite on Linux Spatial indexing latitude/longitude columns Making use of a spatial index Importing shapefiles into SpatiaLite Importing GeoJSON polygons using Shapely Querying polygons using within() Metadata Per-database and per-table metadata Source, license and about Column descriptions Specifying units for a column Setting a default sort order Setting a custom page size Setting which columns can be used for sorting Specifying the label column for a table Hiding tables Using YAML for metadata Settings Using --setting Configuration directory mode Settings default_allow_sql default_page_size sql_time_limit_ms max_returned_rows num_sql_threads allow_facet default_facet_size facet_time_limit_ms facet_suggest_time_limit_ms suggest_facets allow_download default_cache_ttl cache_size_kb allow_csv_stream max_csv_mb truncate_cells_html force_https_urls template_debug trace_debug base_url Configuring the secret Using secrets with datasette publish Introspection /-/metadata /-/versions /-/plugins /-/settings /-/databases /-/threads /-/actor /-/messages Custom pages and templates Custom CSS and JavaScript CSS classes on the Serving static files Publishing static assets Custom templates Custom pages Path parameters for pages Custom headers and status codes Returning 404s Custom redirects Custom error pages Plugins Installing plugins One-off plugins using --plugins-dir Deploying plugins using datasette publish Seeing what plugins are installed Plugin configuration Secret configuration values Writing plugins Writing one-off plugins Starting an installable plugin using cookiecutter Packaging a plugin Static assets Custom templates Writing plugins that accept configuration Designing URLs for your plugin Building URLs within plugins Plugin hooks prepare_connection(conn, database, datasette) prepare_jinja2_environment(env, datasette) extra_template_vars(template, database, table, columns, view_name, request, datasette) extra_css_urls(template, database, table, columns, view_name, request, datasette) extra_js_urls(template, database, table, columns, view_name, request, datasette) extra_body_script(template, database, table, columns, view_name, request, datasette) publish_subcommand(publish) render_cell(row, value, column, table, database, datasette) register_output_renderer(datasette) register_routes(datasette) register_commands(cli) register_facet_classes() asgi_wrapper(datasette) startup(datasette) canned_queries(datasette, database, actor) 
actor_from_request(datasette, request) filters_from_request(request, database, table, datasette) permission_allowed(datasette, actor, action, resource) register_magic_parameters(datasette) forbidden(datasette, request, message) handle_exception(datasette, request, exception) menu_links(datasette, actor, request) table_actions(datasette, actor, database, table, request) database_actions(datasette, actor, database, request) skip_csrf(datasette, scope) get_metadata(datasette, key, database, table) Testing plugins Setting up a Datasette test instance Using pdb for errors thrown inside Datasette Using pytest fixtures Testing outbound HTTP calls with pytest-httpx Registering a plugin for the duration of a test Internals for plugins Request object The MultiParams class Response class Returning a response with .asgi_send(send) Setting cookies with response.set_cookie() Datasette class .databases .plugin_config(plugin_name, database=None, table=None) await .render_template(template, context=None, request=None) await .permission_allowed(actor, action, resource=None, default=False) await .ensure_permissions(actor, permissions) await .check_visibility(actor, action=None, resource=None, permissions=None) .get_database(name) .add_database(db, name=None, route=None) .add_memory_database(name) .remove_database(name) .sign(value, namespace=\"default\") .unsign(value, namespace=\"default\") .add_message(request, message, type=datasette.INFO) .absolute_url(request, path) .setting(key) datasette.client datasette.urls Database class Database(ds, path=None, is_mutable=True, is_memory=False, memory_name=None) db.hash await db.execute(sql, ...) Results await db.execute_fn(fn) await db.execute_write(sql, params=None, block=True) await db.execute_write_script(sql, block=True) await db.execute_write_many(sql, params_seq, block=True) await db.execute_write_fn(fn, block=True) db.close() Database introspection CSRF protection The _internal database The datasette.utils module parse_metadata(content) await_me_maybe(value) Tilde encoding datasette.tracer Tracing child tasks Import shortcuts Contributing General guidelines Setting up a development environment Running the tests Using fixtures Debugging Code formatting Running Black blacken-docs Prettier Editing and building the documentation Running Cog Continuously deployed demo instances Release process Alpha and beta releases Releasing bug fixes from a branch Upgrading CodeMirror Changelog 0.64.6 (2023-12-22) 0.64.5 (2023-10-08) 0.64.4 (2023-09-21) 0.64.3 (2023-04-27) 0.64.2 (2023-03-08) 0.64.1 (2023-01-11) 0.64 (2023-01-09) 0.63.3 (2022-12-17) 0.63.2 (2022-11-18) 0.63.1 (2022-11-10) 0.63 (2022-10-27) Features Plugin hooks and internals Documentation 0.62 (2022-08-14) Features Plugin hooks Bug fixes Documentation 0.61.1 (2022-03-23) 0.61 (2022-03-23) 0.60.2 (2022-02-07) 0.60.1 (2022-01-20) 0.60 (2022-01-13) Plugins and internals Faceting Other small fixes 0.59.4 (2021-11-29) 0.59.3 (2021-11-20) 0.59.2 (2021-11-13) 0.59.1 (2021-10-24) 0.59 (2021-10-14) 0.58.1 (2021-07-16) 0.58 (2021-07-14) 0.57.1 (2021-06-08) 0.57 (2021-06-05) New features Bug fixes and other improvements 0.56.1 (2021-06-05) 0.56 (2021-03-28) 0.55 (2021-02-18) 0.54.1 (2021-02-02) 0.54 (2021-01-25) The _internal database Named in-memory database support JavaScript modules Code formatting with Black and Prettier Other changes 0.53 (2020-12-10) 0.52.5 (2020-12-09) 0.52.4 (2020-12-05) 0.52.3 (2020-12-03) 0.52.2 (2020-12-02) 0.52.1 (2020-11-29) 0.52 (2020-11-28) 0.51.1 (2020-10-31) 0.51 (2020-10-31) New visual 
design Plugins can now add links within Datasette Binary data URL building Running Datasette behind a proxy Smaller changes 0.50.2 (2020-10-09) 0.50.1 (2020-10-09) 0.50 (2020-10-09) 0.49.1 (2020-09-15) 0.49 (2020-09-14) 0.48 (2020-08-16) 0.47.3 (2020-08-15) 0.47.2 (2020-08-12) 0.47.1 (2020-08-11) 0.47 (2020-08-11) 0.46 (2020-08-09) 0.45 (2020-07-01) Magic parameters for canned queries Log out Better plugin documentation New plugin hooks Smaller changes 0.44 (2020-06-11) Authentication Permissions Writable canned queries Flash messages Signed values and secrets CSRF protection Cookie methods register_routes() plugin hooks Smaller changes The road to Datasette 1.0 0.43 (2020-05-28) 0.42 (2020-05-08) 0.41 (2020-05-06) 0.40 (2020-04-21) 0.39 (2020-03-24) 0.38 (2020-03-08) 0.37.1 (2020-03-02) 0.37 (2020-02-25) 0.36 (2020-02-21) 0.35 (2020-02-04) 0.34 (2020-01-29) 0.33 (2019-12-22) 0.32 (2019-11-14) 0.31.2 (2019-11-13) 0.31.1 (2019-11-12) 0.31 (2019-11-11) 0.30.2 (2019-11-02) 0.30.1 (2019-10-30) 0.30 (2019-10-18) 0.29.3 (2019-09-02) 0.29.2 (2019-07-13) 0.29.1 (2019-07-11) 0.29 (2019-07-07) ASGI New plugin hook: asgi_wrapper New plugin hook: extra_template_vars Secret plugin configuration options Facet by date Easier custom templates for table rows ?_through= for joins through many-to-many tables Small changes 0.28 (2019-05-19) Supporting databases that change Faceting improvements, and faceting plugins datasette publish cloudrun register_output_renderer plugins Medium changes Small changes 0.27.1 (2019-05-09) 0.27 (2019-01-31) 0.26.1 (2019-01-10) 0.26 (2019-01-02) 0.25.2 (2018-12-16) 0.25.1 (2018-11-04) 0.25 (2018-09-19) 0.24 (2018-07-23) 0.23.2 (2018-07-07) 0.23.1 (2018-06-21) 0.23 (2018-06-18) CSV export Foreign key expansions New configuration settings Control HTTP caching with ?_ttl= Improved support for SpatiaLite latest.datasette.io Miscellaneous 0.22.1 (2018-05-23) 0.22 (2018-05-20) 0.21 (2018-05-05) 0.20 (2018-04-20) 0.19 (2018-04-16) 0.18 (2018-04-14) 0.17 (2018-04-13) 0.16 (2018-04-13) 0.15 (2018-04-09) 0.14 (2017-12-09) 0.13 (2017-11-24) 0.12 (2017-11-16) 0.11 (2017-11-14) 0.10 (2017-11-14) 0.9 (2017-11-13) 0.8 (2017-11-13)", "sections_fts": 70, "rank": null} {"rowid": 408, "title": "Datasette", "content": "An open source multi-tool for exploring and publishing data \n Datasette is a tool for exploring and publishing data. It helps people take data of any shape or size and publish that as an interactive, explorable website and accompanying API. \n Datasette is aimed at data journalists, museum curators, archivists, local governments and anyone else who has data that they wish to share with the world. It is part of a wider ecosystem of tools and plugins dedicated to making working with structured data as productive as possible. \n Explore a demo , watch a presentation about the project or Try Datasette without installing anything using Glitch . \n Interested in learning Datasette? Start with the official tutorials . \n Support questions, feedback? Join our GitHub Discussions forum .", "sections_fts": 70, "rank": null} {"rowid": 407, "title": "0.8 (2017-11-13)", "content": "V0.8 - added PyPI metadata, ready to ship. \n \n \n Implemented offset/limit pagination for views ( #70 ). \n \n \n Improved pagination. ( #78 ) \n \n \n Limit on max rows returned, controlled by --max_returned_rows option. 
( #69 ) \n If someone executes 'select * from table' against a table with a million rows\n in it, we could run into problems: just serializing that much data as JSON is\n likely to lock up the server. \n Solution: we now have a hard limit on the maximum number of rows that can be\n returned by a query. If that limit is exceeded, the server will return a\n \"truncated\": true field in the JSON. \n This limit can be optionally controlled by the new --max_returned_rows \n option. Setting that option to 0 disables the limit entirely.", "sections_fts": 70, "rank": null} {"rowid": 406, "title": "0.9 (2017-11-13)", "content": "Added --sql_time_limit_ms and --extra-options . \n The serve command now accepts --sql_time_limit_ms for customizing the SQL time\n limit. \n The publish and package commands now accept --extra-options which can be used\n to specify additional options to be passed to the datasette serve command when\n it executes inside the resulting Docker containers.", "sections_fts": 70, "rank": null} {"rowid": 405, "title": "0.10 (2017-11-14)", "content": "Fixed #83 - 500 error on individual row pages. \n \n \n Stop using sqlite WITH RECURSIVE in our tests. \n The version of Python 3 running in Travis CI doesn't support this.", "sections_fts": 70, "rank": null} {"rowid": 404, "title": "0.11 (2017-11-14)", "content": "Added datasette publish now --force option. \n This calls now with --force - useful as it means you get a fresh copy of datasette even if Now has already cached that docker layer. \n \n \n Enable --cors by default when running in a container.", "sections_fts": 70, "rank": null} {"rowid": 403, "title": "0.12 (2017-11-16)", "content": "Added __version__ , now displayed as tooltip in page footer ( #108 ). \n \n \n Added initial docs, including a changelog ( #99 ). \n \n \n Turned on auto-escaping in Jinja. \n \n \n Added a UI for editing named parameters ( #96 ). \n You can now construct a custom SQL statement using SQLite named\n parameters (e.g. :name ) and datasette will display form fields for\n editing those parameters. Here\u2019s an example which lets you see the\n most popular names for dogs of different species registered through\n various dog registration schemes in Australia. \n \n \n \n \n \n Pin to specific Jinja version. ( #100 ). \n \n \n Default to 127.0.0.1 not 0.0.0.0. ( #98 ). \n \n \n Added extra metadata options to publish and package commands. ( #92 ). \n You can now run these commands like so: \n datasette now publish mydb.db \\\n --title=\"My Title\" \\\n --source=\"Source\" \\\n --source_url=\"http://www.example.com/\" \\\n --license=\"CC0\" \\\n --license_url=\"https://creativecommons.org/publicdomain/zero/1.0/\" \n This will write those values into the metadata.json that is packaged with the\n app. If you also pass --metadata=metadata.json that file will be updated with the extra\n values before being written into the Docker image. \n \n \n Added production-ready Dockerfile ( #94 ) [Andrew\n Cutler] \n \n \n New ?_sql_time_limit_ms=10 argument to database and table page ( #95 ) \n \n \n SQL syntax highlighting with Codemirror ( #89 ) [Tom Dyson]", "sections_fts": 70, "rank": null} {"rowid": 402, "title": "0.13 (2017-11-24)", "content": "Search now applies to current filters. \n Combined search into the same form as filters. \n Closes #133 \n \n \n Much tidier design for table view header. \n Closes #147 \n \n \n Added ?column__not=blah filter. \n Closes #148 \n \n \n Row page now resolves foreign keys. 
\n Closes #132 \n \n \n Further tweaks to select/input filter styling. \n Refs #86 - thanks for the help, @natbat! \n \n \n Show linked foreign key in table cells. \n \n \n Added UI for editing table filters. \n Refs #86 \n \n \n Hide FTS-created tables on index pages. \n Closes #129 \n \n \n Add publish to heroku support [Jacob Kaplan-Moss] \n datasette publish heroku mydb.db \n Pull request #104 \n \n \n Initial implementation of ?_group_count=column . \n URL shortcut for counting rows grouped by one or more columns. \n ?_group_count=column1&_group_count=column2 works as well. \n SQL generated looks like this: \n select \"qSpecies\", count(*) as \"count\"\nfrom Street_Tree_List\ngroup by \"qSpecies\"\norder by \"count\" desc limit 100 \n Or for two columns like this: \n select \"qSpecies\", \"qSiteInfo\", count(*) as \"count\"\nfrom Street_Tree_List\ngroup by \"qSpecies\", \"qSiteInfo\"\norder by \"count\" desc limit 100 \n Refs #44 \n \n \n Added --build=master option to datasette publish and package. \n The datasette publish and datasette package commands both now accept an\n optional --build argument. If provided, this can be used to specify a branch\n published to GitHub that should be built into the container. \n This makes it easier to test code that has not yet been officially released to\n PyPI, e.g.: \n datasette publish now mydb.db --branch=master \n \n \n Implemented ?_search=XXX + UI if a FTS table is detected. \n Closes #131 \n \n \n Added datasette --version support. \n \n \n Table views now show expanded foreign key references, if possible. \n If a table has foreign key columns, and those foreign key tables have\n label_columns , the TableView will now query those other tables for the\n corresponding values and display those values as links in the corresponding\n table cells. \n label_columns are currently detected by the inspect() function, which looks\n for any table that has just two columns - an ID column and one other - and\n sets the label_column to be that second non-ID column. \n \n \n Don't prevent tabbing to \"Run SQL\" button ( #117 ) [Robert Gieseke] \n See comment in #115 \n \n \n Add keyboard shortcut to execute SQL query ( #115 ) [Robert Gieseke] \n \n \n Allow --load-extension to be set via environment variable. \n \n \n Add support for ?field__isnull=1 ( #107 ) [Ray N] \n \n \n Add spatialite, switch to debian and local build ( #114 ) [Ariel N\u00fa\u00f1ez] \n \n \n Added --load-extension argument to datasette serve. \n Allows loading of SQLite extensions. Refs #110 .", "sections_fts": 70, "rank": null} {"rowid": 401, "title": "0.14 (2017-12-09)", "content": "The theme of this release is customization: Datasette now allows every aspect\n of its presentation to be customized \n either using additional CSS or by providing entirely new templates. \n Datasette's metadata.json format \n has also been expanded, to allow per-database and per-table metadata. A new\n datasette skeleton command can be used to generate a skeleton JSON file\n ready to be filled in with per-database and per-table details. \n The metadata.json file can also be used to define\n canned queries ,\n as a more powerful alternative to SQL views. \n \n \n extra_css_urls / extra_js_urls in metadata \n A mechanism in the metadata.json format for adding custom CSS and JS urls. 
\n Create a metadata.json file that looks like this: \n {\n \"extra_css_urls\": [\n \"https://simonwillison.net/static/css/all.bf8cd891642c.css\"\n ],\n \"extra_js_urls\": [\n \"https://code.jquery.com/jquery-3.2.1.slim.min.js\"\n ]\n} \n Then start datasette like this: \n datasette mydb.db --metadata=metadata.json \n The CSS and JavaScript files will be linked in the <head> of every page. \n You can also specify a SRI (subresource integrity hash) for these assets: \n {\n \"extra_css_urls\": [\n {\n \"url\": \"https://simonwillison.net/static/css/all.bf8cd891642c.css\",\n \"sri\": \"sha384-9qIZekWUyjCyDIf2YK1FRoKiPJq4PHt6tp/ulnuuyRBvazd0hG7pWbE99zvwSznI\"\n }\n ],\n \"extra_js_urls\": [\n {\n \"url\": \"https://code.jquery.com/jquery-3.2.1.slim.min.js\",\n \"sri\": \"sha256-k2WSCIexGzOj3Euiig+TlR8gA0EmPjuc79OEeY5L45g=\"\n }\n ]\n} \n Modern browsers will only execute the stylesheet or JavaScript if the SRI hash\n matches the content served. You can generate hashes using https://www.srihash.org/ \n \n \n Auto-link column values that look like URLs ( #153 ) \n \n \n CSS styling hooks as classes on the body ( #153 ) \n Every template now gets CSS classes in the body designed to support custom\n styling. \n The index template (the top level page at / ) gets this: \n <body class=\"index\"> \n The database template ( /dbname/ ) gets this: \n <body class=\"db db-dbname\"> \n The table template ( /dbname/tablename ) gets: \n <body class=\"table db-dbname table-tablename\"> \n The row template ( /dbname/tablename/rowid ) gets: \n <body class=\"row db-dbname table-tablename\"> \n The db-x and table-x classes use the database or table names themselves IF\n they are valid CSS identifiers. If they aren't, we strip any invalid\n characters out and append a 6 character md5 digest of the original name, in\n order to ensure that multiple tables which resolve to the same stripped\n character version still have different CSS classes. \n Some examples (extracted from the unit tests): \n \"simple\" => \"simple\"\n\"MixedCase\" => \"MixedCase\"\n\"-no-leading-hyphens\" => \"no-leading-hyphens-65bea6\"\n\"_no-leading-underscores\" => \"no-leading-underscores-b921bc\"\n\"no spaces\" => \"no-spaces-7088d7\"\n\"-\" => \"336d5e\"\n\"no $ characters\" => \"no--characters-59e024\" \n \n \n datasette --template-dir=mytemplates/ argument \n You can now pass an additional argument specifying a directory to look for\n custom templates in. \n Datasette will fall back on the default templates if a template is not\n found in that directory. \n \n \n Ability to over-ride templates for individual tables/databases. \n It is now possible to over-ride templates on a per-database / per-row or per-\n table basis. \n When you access e.g. /mydatabase/mytable Datasette will look for the following: \n - table-mydatabase-mytable.html\n- table.html \n If you provided a --template-dir argument to datasette serve it will look in\n that directory first. \n The lookup rules are as follows: \n Index page (/):\n index.html\n\nDatabase page (/mydatabase):\n database-mydatabase.html\n database.html\n\nTable page (/mydatabase/mytable):\n table-mydatabase-mytable.html\n table.html\n\nRow page (/mydatabase/mytable/id):\n row-mydatabase-mytable.html\n row.html \n If a table name has spaces or other unexpected characters in it, the template\n filename will follow the same rules as our custom CSS classes\n - for example, a table called \"Food Trucks\"\n will attempt to load the following templates: \n table-mydatabase-Food-Trucks-399138.html\ntable.html \n It is possible to extend the default templates using Jinja template\n inheritance. 
If you want to customize EVERY row template with some additional\n content you can do so by creating a row.html template like this: \n {% extends \"default:row.html\" %}\n\n{% block content %}\n<h1>EXTRA HTML AT THE TOP OF THE CONTENT BLOCK</h1>\n<p>This line renders the original block:</p>
\n{{ super() }}\n{% endblock %} \n \n \n --static option for datasette serve ( #160 ) \n You can now tell Datasette to serve static files from a specific location at a\n specific mountpoint. \n For example: \n datasette serve mydb.db --static extra-css:/tmp/static/css \n Now if you visit this URL: \n http://localhost:8001/extra-css/blah.css \n The following file will be served: \n /tmp/static/css/blah.css \n \n \n Canned query support. \n Named canned queries can now be defined in metadata.json like this: \n {\n \"databases\": {\n \"timezones\": {\n \"queries\": {\n \"timezone_for_point\": \"select tzid from timezones ...\"\n }\n }\n }\n} \n These will be shown in a new \"Queries\" section beneath \"Views\" on the database page. \n \n \n New datasette skeleton command for generating metadata.json ( #164 ) \n \n \n metadata.json support for per-table/per-database metadata ( #165 ) \n Also added support for descriptions and HTML descriptions. \n Here's an example metadata.json file illustrating custom per-database and per-\n table metadata: \n {\n \"title\": \"Overall datasette title\",\n \"description_html\": \"This is a description with HTML.\",\n \"databases\": {\n \"db1\": {\n \"title\": \"First database\",\n \"description\": \"This is a string description & has no HTML\",\n \"license_url\": \"http://example.com/\",\n \"license\": \"The example license\",\n \"queries\": {\n \"canned_query\": \"select * from table1 limit 3;\"\n },\n \"tables\": {\n \"table1\": {\n \"title\": \"Custom title for table1\",\n \"description\": \"Tables can have descriptions too\",\n \"source\": \"This has a custom source\",\n \"source_url\": \"http://example.com/\"\n }\n }\n }\n }\n} \n \n \n Renamed datasette build command to datasette inspect ( #130 ) \n \n \n Upgrade to Sanic 0.7.0 ( #168 ) \n https://github.com/channelcat/sanic/releases/tag/0.7.0 \n \n \n Package and publish commands now accept --static and --template-dir \n Example usage: \n datasette package --static css:extra-css/ --static js:extra-js/ \\\n sf-trees.db --template-dir templates/ --tag sf-trees --branch master \n This creates a local Docker image that includes copies of the templates/,\n extra-css/ and extra-js/ directories. You can then run it like this: \n docker run -p 8001:8001 sf-trees \n For publishing to Zeit now: \n datasette publish now --static css:extra-css/ --static js:extra-js/ \\\n sf-trees.db --template-dir templates/ --name sf-trees --branch master \n \n \n HTML comment showing which templates were considered for a page ( #171 )", "sections_fts": 70, "rank": null} {"rowid": 400, "title": "0.15 (2018-04-09)", "content": "The biggest new feature in this release is the ability to sort by column. On the\n table page the column headers can now be clicked to apply sort (or descending\n sort), or you can specify ?_sort=column or ?_sort_desc=column directly\n in the URL. \n \n \n table_rows => table_rows_count , filtered_table_rows =>\n filtered_table_rows_count \n Renamed properties. Closes #194 \n \n \n New sortable_columns option in metadata.json to control sort options. 
\n You can now explicitly set which columns in a table can be used for sorting\n using the _sort and _sort_desc arguments using metadata.json : \n {\n \"databases\": {\n \"database1\": {\n \"tables\": {\n \"example_table\": {\n \"sortable_columns\": [\n \"height\",\n \"weight\"\n ]\n }\n }\n }\n }\n} \n Refs #189 \n \n \n Column headers now link to sort/desc sort - refs #189 \n \n \n _sort and _sort_desc parameters for table views \n Allows for paginated sorted results based on a specified column. \n Refs #189 \n \n \n Total row count now correct even if _next applied \n \n \n Use .custom_sql() for _group_count implementation (refs #150 ) \n \n \n Make HTML title more readable in query template ( #180 ) [Ryan Pitts] \n \n \n New ?_shape=objects/object/lists param for JSON API ( #192 ) \n New _shape= parameter replacing old .jsono extension \n Now instead of this: \n /database/table.jsono \n We use the _shape parameter like this: \n /database/table.json?_shape=objects \n Also introduced a new _shape called object which looks like this: \n /database/table.json?_shape=object \n Returning an object for the rows key: \n ...\n\"rows\": {\n \"pk1\": {\n ...\n },\n \"pk2\": {\n ...\n }\n} \n Refs #122 \n \n \n Utility for writing test database fixtures to a .db file \n python tests/fixtures.py /tmp/hello.db \n This is useful for making a SQLite database of the test fixtures for\n interactive exploration. \n \n \n Compound primary key _next= now plays well with extra filters \n Closes #190 \n \n \n Fixed bug with keyset pagination over compound primary keys \n Refs #190 \n \n \n Database/Table views inherit source/license/source_url/license_url \n metadata \n If you set the source_url/license_url/source/license fields in your root\n metadata those values will now be inherited all the way down to the database\n and table templates. \n The title/description are NOT inherited. \n Also added unit tests for the HTML generated by the metadata. \n Refs #185 \n \n \n Add metadata, if it exists, to heroku temp dir ( #178 ) [Tony Hirst] \n \n \n Initial documentation for pagination \n \n \n Broke up test_app into test_api and test_html \n \n \n Fixed bug with .json path regular expression \n I had a table called geojson and it caused an exception because the regex\n was matching .json and not \\.json \n \n \n Deploy to Heroku with Python 3.6.3", "sections_fts": 70, "rank": null} {"rowid": 399, "title": "0.16 (2018-04-13)", "content": "Better mechanism for handling errors; 404s for missing table/database \n New error mechanism closes #193 \n 404s for missing tables/databases closes #184 \n \n \n long_description in markdown for the new PyPI \n \n \n Hide SpatiaLite system tables. 
[Russ Garrett] \n \n \n Allow explain select / explain query plan select #201 \n \n \n Datasette inspect now finds primary_keys #195 \n \n \n Ability to sort using form fields (for mobile portrait mode) #199 \n We now display sort options as a select box plus a descending checkbox, which\n means you can apply sort orders even in portrait mode on a mobile phone where\n the column headers are hidden.", "sections_fts": 70, "rank": null} {"rowid": 398, "title": "0.17 (2018-04-13)", "content": "Release 0.17 to fix issues with PyPI", "sections_fts": 70, "rank": null} {"rowid": 397, "title": "0.18 (2018-04-14)", "content": "This release introduces support for units ,\n contributed by Russ Garrett ( #203 ).\n You can now optionally specify the units for specific columns using metadata.json .\n Once specified, units will be displayed in the HTML view of your table. They also become\n available for use in filters - if a column is configured with a unit of distance, you can\n request all rows where that column is less than 50 meters or more than 20 feet for example. \n \n \n Link foreign keys which don't have labels. [Russ Garrett] \n This renders unlabeled FKs as simple links. \n Also includes bonus fixes for two minor issues: \n \n \n In foreign key link hrefs the primary key was escaped using HTML\n escaping rather than URL escaping. This broke some non-integer PKs. \n \n \n Print tracebacks to console when handling 500 errors. \n \n \n \n \n Fix SQLite error when loading rows with no incoming FKs. [Russ\n Garrett] \n This fixes an error caused by an invalid query when loading incoming FKs. \n The error was ignored due to async but it still got printed to the\n console. \n \n \n Allow custom units to be registered with Pint. [Russ Garrett] \n \n \n Support units in filters. [Russ Garrett] \n \n \n Tidy up units support. [Russ Garrett] \n \n \n Add units to exported JSON \n \n \n Units key in metadata skeleton \n \n \n Docs \n \n \n \n \n Initial units support. [Russ Garrett] \n Add support for specifying units for a column in metadata.json and\n rendering them on display using\n pint", "sections_fts": 70, "rank": null} {"rowid": 396, "title": "0.19 (2018-04-16)", "content": "This is the first preview of the new Datasette plugins mechanism. Only two\n plugin hooks are available so far - for custom SQL functions and custom template\n filters. There's plenty more to come - read the documentation and get involved in\n the tracking ticket if you\n have feedback on the direction so far. \n \n \n Fix for _sort_desc=sortable_with_nulls test, refs #216 \n \n \n Fixed #216 - paginate correctly when sorting by nullable column \n \n \n Initial documentation for plugins, closes #213 \n https://docs.datasette.io/en/stable/plugins.html \n \n \n New --plugins-dir=plugins/ option ( #212 ) \n New option causing Datasette to load and evaluate all of the Python files in\n the specified directory and register any plugins that are defined in those\n files. \n This new option is available for the following commands: \n datasette serve mydb.db --plugins-dir=plugins/\ndatasette publish now/heroku mydb.db --plugins-dir=plugins/\ndatasette package mydb.db --plugins-dir=plugins/ \n \n \n Start of the plugin system, based on pluggy ( #210 ) \n Uses https://pluggy.readthedocs.io/ originally created for the py.test project \n We're starting with two plugin hooks: \n prepare_connection(conn) \n This is called when a new SQLite connection is created. It can be used to register custom SQL functions. 
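\n For example, a minimal one-off plugin sketch using this hook - the reverse_string function is a hypothetical illustration, not something shipped with Datasette: \n from datasette import hookimpl\n\n@hookimpl\ndef prepare_connection(conn):\n    # conn is a standard sqlite3 connection; register a custom SQL function\n    conn.create_function(\n        \"reverse_string\", 1, lambda s: s[::-1] if isinstance(s, str) else s\n    )\n Save this in a plugins/ directory and start Datasette with the --plugins-dir=plugins/ option described above, after which select reverse_string('hello') becomes available in custom SQL queries.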
\n prepare_jinja2_environment(env) \n This is called with the Jinja2 environment. It can be used to register custom template tags and filters. \n An example plugin which uses these two hooks can be found at https://github.com/simonw/datasette-plugin-demos or installed using pip install datasette-plugin-demos \n Refs #14 \n \n \n Return HTTP 405 on InvalidUsage rather than 500. [Russ Garrett] \n This also stops it filling up the logs. This happens for HEAD requests\n at the moment - which perhaps should be handled better, but that's a\n different issue.", "sections_fts": 70, "rank": null} {"rowid": 395, "title": "0.20 (2018-04-20)", "content": "Mostly new work on the Plugins mechanism: plugins can now bundle static assets and custom templates, and datasette publish has a new --install=name-of-plugin option. \n \n \n Add col-X classes to HTML table on custom query page \n \n \n Fixed out-dated template in documentation \n \n \n Plugins can now bundle custom templates, #224 \n \n \n Added /-/metadata /-/plugins /-/inspect, #225 \n \n \n Documentation for --install option, refs #223 \n \n \n Datasette publish/package --install option, #223 \n \n \n Fix for plugins in Python 3.5, #222 \n \n \n New plugin hooks: extra_css_urls() and extra_js_urls(), #214 \n \n \n /-/static-plugins/PLUGIN_NAME/ now serves static/ from plugins \n \n \n <td> now gets class=\"col-X\" - plus added col-X documentation \n \n \n Use to_css_class for table cell column classes \n This ensures that columns with spaces in the name will still\n generate usable CSS class names. Refs #209 \n \n \n Add column name classes to <td>s, make PK bold [Russ Garrett] \n \n \n Don't duplicate simple primary keys in the link column [Russ Garrett] \n When there's a simple (single-column) primary key, it looks weird to\n duplicate it in the link column. \n This change removes the second PK column and treats the link column as\n if it were the PK column from a header/sorting perspective. \n \n \n Correct escaping for HTML display of row links [Russ Garrett] \n \n \n Longer time limit for test_paginate_compound_keys \n It was failing intermittently in Travis - see #209 \n \n \n Use application/octet-stream for downloadable databases \n \n \n Updated PyPI classifiers \n \n \n Updated PyPI link to pypi.org", "sections_fts": 70, "rank": null} {"rowid": 394, "title": "0.21 (2018-05-05)", "content": "New JSON _shape= options, the ability to set table _size= and a mechanism for searching within specific columns. \n \n \n Default tests to using a longer timelimit \n Every now and then a test will fail in Travis CI on Python 3.5 because it hit\n the default 20ms SQL time limit. \n Test fixtures now default to a 200ms time limit, and we only use the 20ms time\n limit for the specific test that tests query interruption. This should make\n our tests on Python 3.5 in Travis much more stable. 
\n \n \n Support _search_COLUMN=text searches, closes #237 \n \n \n Show version on /-/plugins page, closes #248 \n \n \n ?_size=max option, closes #249 \n \n \n Added /-/versions and /-/versions.json , closes #244 \n Sample output: \n {\n \"python\": {\n \"version\": \"3.6.3\",\n \"full\": \"3.6.3 (default, Oct 4 2017, 06:09:38) \\n[GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.37)]\"\n },\n \"datasette\": {\n \"version\": \"0.20\"\n },\n \"sqlite\": {\n \"version\": \"3.23.1\",\n \"extensions\": {\n \"json1\": null,\n \"spatialite\": \"4.3.0a\"\n }\n }\n} \n \n \n Renamed ?_sql_time_limit_ms= to ?_timelimit , closes #242 \n \n \n New ?_shape=array option + tweaks to _shape , closes #245 \n \n \n Default is now ?_shape=arrays (renamed from lists ) \n \n \n New ?_shape=array returns an array of objects as the root object \n \n \n Changed ?_shape=object to return the object as the root \n \n \n Updated docs \n \n \n \n \n FTS tables now detected by inspect() , closes #240 \n \n \n New ?_size=XXX query string parameter for table view, closes #229 \n Also added documentation for all of the _special arguments. \n Plus deleted some duplicate logic implementing _group_count . \n \n \n If max_returned_rows==page_size , increment max_returned_rows - fixes #230 \n \n \n New hidden: True option for table metadata, closes #239 \n \n \n Hide idx_* tables if spatialite detected, closes #228 \n \n \n Added class=rows-and-columns to custom query results table \n \n \n Added CSS class rows-and-columns to main table \n \n \n label_column option in metadata.json - closes #234", "sections_fts": 70, "rank": null} {"rowid": 393, "title": "0.22 (2018-05-20)", "content": "The big new feature in this release is Facets . Datasette can now apply faceted browse to any column in any table. It will also suggest possible facets. See the Datasette Facets announcement post for more details. \n In addition to the work on facets: \n \n \n Added docs for introspection endpoints \n \n \n New --config option, added --help-config , closes #274 \n Removed the --page_size= argument to datasette serve in favour of: \n datasette serve --config default_page_size:50 mydb.db \n Added new help section: \n $ datasette --help-config\nConfig options:\n default_page_size Default page size for the table view\n (default=100)\n max_returned_rows Maximum rows that can be returned from a table\n or custom query (default=1000)\n sql_time_limit_ms Time limit for a SQL query in milliseconds\n (default=1000)\n default_facet_size Number of values to return for requested facets\n (default=30)\n facet_time_limit_ms Time limit for calculating a requested facet\n (default=200)\n facet_suggest_time_limit_ms Time limit for calculating a suggested facet\n (default=50) \n \n \n Only apply responsive table styles to .rows-and-column \n Otherwise they interfere with tables in the description, e.g. on\n https://fivethirtyeight.datasettes.com/fivethirtyeight/nba-elo%2Fnbaallelo \n \n \n Refactored views into new views/ modules, refs #256 \n \n \n Documentation for SQLite full-text search support, closes #253 \n \n \n /-/versions now includes SQLite fts_versions , closes #252", "sections_fts": 70, "rank": null} {"rowid": 392, "title": "0.22.1 (2018-05-23)", "content": "Bugfix release, plus we now use versioneer for our version numbers. 
\n \n \n Faceting no longer breaks pagination, fixes #282 \n \n \n Add __version_info__ derived from __version__ [Robert Gieseke] \n This might be a tuple of more than two values (major and minor\n version) if commits have been made after a release. \n \n \n Add version number support with Versioneer. [Robert Gieseke] \n Versioneer Licence:\n Public Domain (CC0-1.0) \n Closes #273 \n \n \n Refactor inspect logic [Russ Garrett]", "sections_fts": 70, "rank": null} {"rowid": 391, "title": "Miscellaneous", "content": "Got JSON data in one of your columns? Use the new ?_json=COLNAME argument\n to tell Datasette to return that JSON value directly rather than encoding it\n as a string. \n \n \n If you just want an array of the first value of each row, use the new\n ?_shape=arrayfirst option - example .", "sections_fts": 70, "rank": null} {"rowid": 390, "title": "latest.datasette.io", "content": "Every commit to Datasette master is now automatically deployed by Travis CI to\n https://latest.datasette.io/ - ensuring there is always a live demo of the\n latest version of the software. \n The demo uses the fixtures from our\n unit tests, ensuring it demonstrates the same range of functionality that is\n covered by the tests. \n You can see how the deployment mechanism works in our .travis.yml file.", "sections_fts": 70, "rank": null} {"rowid": 389, "title": "Improved support for SpatiaLite", "content": "The SpatiaLite module \n for SQLite adds robust geospatial features to the database. \n Getting SpatiaLite working can be tricky, especially if you want to use the most\n recent alpha version (with support for K-nearest neighbor). \n Datasette now includes extensive documentation on SpatiaLite , and thanks to Ravi Kotecha our GitHub\n repo includes a Dockerfile that can build\n the latest SpatiaLite and configure it for use with Datasette. \n The datasette publish and datasette package commands now accept a new\n --spatialite argument which causes them to install and configure SpatiaLite\n as part of the container they deploy.", "sections_fts": 70, "rank": null} {"rowid": 388, "title": "Control HTTP caching with ?_ttl=", "content": "You can now customize the HTTP max-age header that is sent on a per-URL basis, using the new ?_ttl= query string parameter. \n You can set this to any value in seconds, or you can set it to 0 to disable HTTP caching entirely. \n Consider for example this query which returns a randomly selected member of the Avengers: \n select * from [avengers/avengers] order by random() limit 1 \n If you hit the following page repeatedly you will get the same result, due to HTTP caching: \n /fivethirtyeight?sql=select+*+from+%5Bavengers%2Favengers%5D+order+by+random%28%29+limit+1 \n By adding ?_ttl=0 to the URL you can ensure the page will not be cached and get back a different super hero every time: \n /fivethirtyeight?sql=select+*+from+%5Bavengers%2Favengers%5D+order+by+random%28%29+limit+1&_ttl=0", "sections_fts": 70, "rank": null} {"rowid": 387, "title": "New configuration settings", "content": "Datasette's Settings now also supports boolean settings. A number of new\n configuration options have been added: \n \n \n num_sql_threads - the number of threads used to execute SQLite queries. Defaults to 3. \n \n \n allow_facet - enable or disable custom Facets using the _facet= parameter. Defaults to on. \n \n \n suggest_facets - should Datasette suggest facets? Defaults to on. \n \n \n allow_download - should users be allowed to download the entire SQLite database? Defaults to on. 
\n \n \n allow_sql - should users be allowed to execute custom SQL queries? Defaults to on. \n \n \n default_cache_ttl - Default HTTP caching max-age header in seconds. Defaults to 365 days - caching can be disabled entirely by setting this to 0. \n \n \n cache_size_kb - Set the amount of memory SQLite uses for its per-connection cache , in KB. \n \n \n allow_csv_stream - allow users to stream entire result sets as a single CSV file. Defaults to on. \n \n \n max_csv_mb - maximum size of a returned CSV file in MB. Defaults to 100MB, set to 0 to disable this limit.", "sections_fts": 70, "rank": null} {"rowid": 386, "title": "Foreign key expansions", "content": "When Datasette detects a foreign key reference it attempts to resolve a label\n for that reference (automatically or using the Specifying the label column for a table metadata\n option) so it can display a link to the associated row. \n This expansion is now also available for JSON and CSV representations of the\n table, using the new _labels=on query string option. See\n Expanding foreign key references for more details.", "sections_fts": 70, "rank": null} {"rowid": 385, "title": "CSV export", "content": "Any Datasette table, view or custom SQL query can now be exported as CSV. \n \n Check out the CSV export documentation for more details, or\n try the feature out on\n https://fivethirtyeight.datasettes.com/fivethirtyeight/bechdel%2Fmovies \n If your table has more than max_returned_rows (default 1,000)\n Datasette provides the option to stream all rows . This option takes advantage\n of async Python and Datasette's efficient pagination to\n iterate through the entire matching result set and stream it back as a\n downloadable CSV file.", "sections_fts": 70, "rank": null} {"rowid": 384, "title": "0.23 (2018-06-18)", "content": "This release features CSV export, improved options for foreign key expansions,\n new configuration settings and improved support for SpatiaLite. \n See datasette/compare/0.22.1...0.23 for a full list of\n commits added since the last release.", "sections_fts": 70, "rank": null} {"rowid": 383, "title": "0.23.1 (2018-06-21)", "content": "Minor bugfix release. \n \n \n Correctly display empty strings in HTML table, closes #314 \n \n \n Allow \".\" in database filenames, closes #302 \n \n \n 404s ending in slash redirect to remove that slash, closes #309 \n \n \n Fixed incorrect display of compound primary keys with foreign key\n references. Closes #319 \n \n \n Docs + example of canned SQL query using || concatenation. Closes #321 \n \n \n Correctly display facets with value of 0 - closes #318 \n \n \n Default 'expand labels' to checked in CSV advanced export", "sections_fts": 70, "rank": null} {"rowid": 382, "title": "0.23.2 (2018-07-07)", "content": "Minor bugfix and documentation release. 
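\n The streaming CSV export described in the CSV export notes above can be consumed with the standard library alone. A hedged sketch - the host, database and table names are hypothetical, and _stream=1 is the option governed by the allow_csv_stream setting: \n import urllib.request\n\n# Stream every matching row, ignoring max_returned_rows\n# (requires allow_csv_stream, which defaults to on)\nurl = \"http://127.0.0.1:8001/fixtures/facetable.csv?_stream=1\"\nwith urllib.request.urlopen(url) as response, open(\"all_rows.csv\", \"wb\") as fh:\n    while True:\n        chunk = response.read(65536)\n        if not chunk:\n            break\n        fh.write(chunk)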
\n \n \n CSV export now respects --cors , fixes #326 \n \n \n Installation instructions , including docker image - closes #328 \n \n \n Fix for row pages for tables with / in, closes #325", "sections_fts": 70, "rank": null} {"rowid": 381, "title": "0.24 (2018-07-23)", "content": "A number of small new features: \n \n \n datasette publish heroku now supports --extra-options , fixes #334 \n \n \n Custom error message if SpatiaLite is needed for specified database, closes #331 \n \n \n New config option: truncate_cells_html for truncating long cell values in HTML view - closes #330 \n \n \n Documentation for datasette publish and datasette package , closes #337 \n \n \n Fixed compatibility with Python 3.7 \n \n \n datasette publish heroku now supports app names via the -n option, which can also be used to overwrite an existing application [Russ Garrett] \n \n \n Title and description metadata can now be set for canned SQL queries , closes #342 \n \n \n New force_https_on config option, fixes https:// API URLs when deploying to Zeit Now - closes #333 \n \n \n ?_json_infinity=1 query string argument for handling Infinity/-Infinity values in JSON, closes #332 \n \n \n URLs displayed in the results of custom SQL queries are now URLified, closes #298", "sections_fts": 70, "rank": null} {"rowid": 380, "title": "0.25 (2018-09-19)", "content": "New plugin hooks, improved database view support and an easier way to use more recent versions of SQLite. \n \n \n New publish_subcommand plugin hook. A plugin can now add additional datasette publish publishers in addition to the default now and heroku , both of which have been refactored into default plugins. publish_subcommand documentation . Closes #349 \n \n \n New render_cell plugin hook. Plugins can now customize how values are displayed in the HTML tables produced by Datasette's browsable interface. datasette-json-html and datasette-render-images are two new plugins that use this hook. render_cell documentation . Closes #352 \n \n \n New extra_body_script plugin hook, enabling plugins to provide additional JavaScript that should be added to the page footer. extra_body_script documentation . \n \n \n extra_css_urls and extra_js_urls hooks now take additional optional parameters, allowing them to be more selective about which pages they apply to. Documentation . \n \n \n You can now use the sortable_columns metadata setting to explicitly enable sort-by-column in the interface for database views, as well as for specific tables. \n \n \n The new fts_table and fts_pk metadata settings can now be used to explicitly configure full-text search for a table or a view , even if that table is not directly coupled to the SQLite FTS feature in the database schema itself. \n \n \n Datasette will now use pysqlite3 in place of the standard library sqlite3 module if it has been installed in the current environment. This makes it much easier to run Datasette against a more recent version of SQLite, including the just-released SQLite 3.25.0 which adds window function support. More details on how to use this in #360 \n \n \n New mechanism that allows plugin configuration options to be set using metadata.json .", "sections_fts": 70, "rank": null} {"rowid": 379, "title": "0.25.1 (2018-11-04)", "content": "Documentation improvements plus a fix for publishing to Zeit Now. \n \n \n datasette publish now now uses Zeit's v1 platform, to work around the new 100MB image limit. Thanks, @slygent - closes #366 .", "sections_fts": 70, "rank": null}
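\n To illustrate the render_cell hook introduced in 0.25: a minimal plugin sketch. Datasette hooks can accept a subset of the documented arguments, and returning None falls back to the default rendering; the status-column uppercasing here is a hypothetical example, not from the release notes: \n from datasette import hookimpl\n\n@hookimpl\ndef render_cell(value, column):\n    # Hypothetical: render values in a \"status\" column in upper case\n    if column == \"status\" and isinstance(value, str):\n        return value.upper()\n    # None tells Datasette to use its default rendering\n    return None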