{"id": "custom_templates:custom-pages-redirects", "page": "custom_templates", "ref": "custom-pages-redirects", "title": "Custom redirects", "content": "You can use the custom_redirect(location) function to redirect users to another page, for example in a file called pages/datasette.html : \n {{ custom_redirect(\"https://github.com/simonw/datasette\") }} \n Now requests to http://localhost:8001/datasette will result in a redirect. \n These redirects are served with a 302 Found status code by default. You can send a 301 Moved Permanently code by passing 301 as the second argument to the function: \n {{ custom_redirect(\"https://github.com/simonw/datasette\", 301) }}", "breadcrumbs": "[\"Custom pages and templates\", \"Custom pages\"]", "references": "[]"} {"id": "custom_templates:customization", "page": "custom_templates", "ref": "customization", "title": "Custom pages and templates", "content": "Datasette provides a number of ways of customizing the way data is displayed.", "breadcrumbs": "[]", "references": "[]"} {"id": "custom_templates:customization-static-files", "page": "custom_templates", "ref": "customization-static-files", "title": "Serving static files", "content": "Datasette can serve static files for you, using the --static option.\n Consider the following directory structure: \n metadata.json\nstatic-files/styles.css\nstatic-files/app.js \n You can start Datasette using --static assets:static-files/ to serve those\n files from the /assets/ mount point: \n $ datasette -m metadata.json --static assets:static-files/ --memory \n The following URLs will now serve the content from those CSS and JS files: \n http://localhost:8001/assets/styles.css\nhttp://localhost:8001/assets/app.js \n You can reference those files from metadata.json like so: \n {\n \"extra_css_urls\": [\n \"/assets/styles.css\"\n ],\n \"extra_js_urls\": [\n \"/assets/app.js\"\n ]\n}", "breadcrumbs": "[\"Custom pages and templates\", \"Custom CSS and JavaScript\"]", "references": "[]"} {"id": "custom_templates:id1", "page": "custom_templates", "ref": "id1", "title": "Custom pages", "content": "You can add templated pages to your Datasette instance by creating HTML files in a pages directory within your templates directory. \n For example, to add a custom page that is served at http://localhost/about you would create a file in templates/pages/about.html , then start Datasette like this: \n $ datasette mydb.db --template-dir=templates/ \n You can nest directories within pages to create a nested structure. To create a http://localhost:8001/about/map page you would create templates/pages/about/map.html .", "breadcrumbs": "[\"Custom pages and templates\"]", "references": "[]"} {"id": "custom_templates:publishing-static-assets", "page": "custom_templates", "ref": "publishing-static-assets", "title": "Publishing static assets", "content": "The datasette publish command can be used to publish your static assets,\n using the same syntax as above: \n $ datasette publish cloudrun mydb.db --static assets:static-files/ \n This will upload the contents of the static-files/ directory as part of the\n deployment, and configure Datasette to correctly serve the assets from /assets/ .", "breadcrumbs": "[\"Custom pages and templates\", \"Custom CSS and JavaScript\"]", "references": "[]"} {"id": "deploying:deploying", "page": "deploying", "ref": "deploying", "title": "Deploying Datasette", "content": "The quickest way to deploy a Datasette instance on the internet is to use the datasette publish command, described in Publishing data . 
This can be used to quickly deploy Datasette to a number of hosting providers including Heroku, Google Cloud Run and Vercel. \n You can deploy Datasette to other hosting providers using the instructions on this page.", "breadcrumbs": "[]", "references": "[]"} {"id": "deploying:deploying-fundamentals", "page": "deploying", "ref": "deploying-fundamentals", "title": "Deployment fundamentals", "content": "Datasette can be deployed as a single datasette process that listens on a port. Datasette is not designed to be run as root, so that process should listen on a higher port such as port 8000. \n If you want to serve Datasette on port 80 (the HTTP default port) or port 443 (for HTTPS) you should run it behind a proxy server, such as nginx, Apache or HAProxy. The proxy server can listen on port 80/443 and forward traffic on to Datasette.", "breadcrumbs": "[\"Deploying Datasette\"]", "references": "[]"} {"id": "deploying:deploying-proxy", "page": "deploying", "ref": "deploying-proxy", "title": "Running Datasette behind a proxy", "content": "You may wish to run Datasette behind an Apache or nginx proxy, using a path within your existing site. \n You can use the base_url configuration setting to tell Datasette to serve traffic with a specific URL prefix. For example, you could run Datasette like this: \n datasette my-database.db --setting base_url /my-datasette/ -p 8009 \n This will run Datasette with the following URLs: \n \n \n http://127.0.0.1:8009/my-datasette/ - the Datasette homepage \n \n \n http://127.0.0.1:8009/my-datasette/my-database - the page for the my-database.db database \n \n \n http://127.0.0.1:8009/my-datasette/my-database/some_table - the page for the some_table table \n \n \n You can now set your nginx or Apache server to proxy the /my-datasette/ path to this Datasette instance.", "breadcrumbs": "[\"Deploying Datasette\"]", "references": "[]"} {"id": "deploying:deploying-systemd", "page": "deploying", "ref": "deploying-systemd", "title": "Running Datasette using systemd", "content": "You can run Datasette on Ubuntu or Debian systems using systemd . \n First, ensure you have Python 3 and pip installed. On Ubuntu you can use sudo apt-get install python3 python3-pip . \n You can install Datasette into a virtual environment, or you can install it system-wide. To install system-wide, use sudo pip3 install datasette . \n Now create a folder for your Datasette databases, for example using mkdir /home/ubuntu/datasette-root . \n You can copy a test database into that folder like so: \n cd /home/ubuntu/datasette-root\ncurl -O https://latest.datasette.io/fixtures.db \n Create a file at /etc/systemd/system/datasette.service with the following contents: \n [Unit]\nDescription=Datasette\nAfter=network.target\n\n[Service]\nType=simple\nUser=ubuntu\nEnvironment=DATASETTE_SECRET=\nWorkingDirectory=/home/ubuntu/datasette-root\nExecStart=datasette serve . -h 127.0.0.1 -p 8000\nRestart=on-failure\n\n[Install]\nWantedBy=multi-user.target \n Add a random value for the DATASETTE_SECRET - this will be used to sign Datasette cookies such as the CSRF token cookie. You can generate a suitable value like so: \n $ python3 -c 'import secrets; print(secrets.token_hex(32))' \n This configuration will run Datasette against all database files contained in the /home/ubuntu/datasette-root directory. If that directory contains a metadata.yml (or .json ) file or a templates/ or plugins/ sub-directory those will automatically be loaded by Datasette - see Configuration directory mode for details. 
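(As a rough sketch of the same configuration directory behaviour driven from Python - for example in a plugin test - the Datasette constructor accepts a config_dir= argument, documented under the Datasette class later in this section; the directory path here is just the example used above:)

from pathlib import Path

from datasette.app import Datasette

# Sketch: serve every database in the directory, picking up any
# metadata.yml (or .json), templates/ or plugins/ found there.
# Assumption: config_dir= accepts a pathlib.Path.
datasette = Datasette(config_dir=Path("/home/ubuntu/datasette-root"))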
\n You can start the Datasette process running using the following: \n sudo systemctl daemon-reload\nsudo systemctl start datasette.service \n You will need to restart the Datasette service after making changes to its metadata.json configuration or adding a new database file to that directory. You can do that using: \n sudo systemctl restart datasette.service \n Once the service has started you can confirm that Datasette is running on port 8000 like so: \n curl 127.0.0.1:8000/-/versions.json\n# Should output JSON showing the installed version \n Datasette will not be accessible from outside the server because it is listening on 127.0.0.1 . You can expose it by instead listening on 0.0.0.0 , but a better way is to set up a proxy such as nginx - see Running Datasette behind a proxy .", "breadcrumbs": "[\"Deploying Datasette\"]", "references": "[]"} {"id": "facets:facets-in-query-strings", "page": "facets", "ref": "facets-in-query-strings", "title": "Facets in query strings", "content": "To turn on faceting for specific columns on a Datasette table view, add one or more _facet=COLUMN parameters to the URL.\n For example, if you want to turn on facets for the city_id and state columns, construct a URL that looks like this: \n /dbname/tablename?_facet=state&_facet=city_id \n This works for both the HTML interface and the .json view.\n When enabled, facets will cause a facet_results block to be added to the JSON output, looking something like this: \n {\n \"state\": {\n \"name\": \"state\",\n \"results\": [\n {\n \"value\": \"CA\",\n \"label\": \"CA\",\n \"count\": 10,\n \"toggle_url\": \"http://...?_facet=city_id&_facet=state&state=CA\",\n \"selected\": false\n },\n {\n \"value\": \"MI\",\n \"label\": \"MI\",\n \"count\": 4,\n \"toggle_url\": \"http://...?_facet=city_id&_facet=state&state=MI\",\n \"selected\": false\n },\n {\n \"value\": \"MC\",\n \"label\": \"MC\",\n \"count\": 1,\n \"toggle_url\": \"http://...?_facet=city_id&_facet=state&state=MC\",\n \"selected\": false\n }\n ],\n \"truncated\": false\n },\n \"city_id\": {\n \"name\": \"city_id\",\n \"results\": [\n {\n \"value\": 1,\n \"label\": \"San Francisco\",\n \"count\": 6,\n \"toggle_url\": \"http://...?_facet=city_id&_facet=state&city_id=1\",\n \"selected\": false\n },\n {\n \"value\": 2,\n \"label\": \"Los Angeles\",\n \"count\": 4,\n \"toggle_url\": \"http://...?_facet=city_id&_facet=state&city_id=2\",\n \"selected\": false\n },\n {\n \"value\": 3,\n \"label\": \"Detroit\",\n \"count\": 4,\n \"toggle_url\": \"http://...?_facet=city_id&_facet=state&city_id=3\",\n \"selected\": false\n },\n {\n \"value\": 4,\n \"label\": \"Memnonia\",\n \"count\": 1,\n \"toggle_url\": \"http://...?_facet=city_id&_facet=state&city_id=4\",\n \"selected\": false\n }\n ],\n \"truncated\": false\n }\n} \n If Datasette detects that a column is a foreign key, the \"label\" property will be automatically derived from the detected label column on the referenced table. \n The default number of facet results returned is 30, controlled by the default_facet_size setting.\n You can increase this on an individual page by adding ?_facet_size=100 to the query string, up to a maximum of max_returned_rows (which defaults to 1000).", "breadcrumbs": "[\"Facets\"]", "references": "[]"} {"id": "facets:facets-metadata", "page": "facets", "ref": "facets-metadata", "title": "Facets in metadata.json", "content": "You can turn facets on by default for specific tables by adding them to a \"facets\" key in a Datasette Metadata file. 
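(As an aside, the facet_results block shown above can also be read programmatically from inside a plugin or test - a hedged sketch using the datasette.client object documented under the Datasette class, with hypothetical dbname and tablename names:)

# Sketch: request the .json view with two _facet parameters and pull
# out the facet_results block described above
response = await datasette.client.get(
    "/dbname/tablename.json?_facet=state&_facet=city_id"
)
facet_results = response.json()["facet_results"]
print(facet_results["state"]["results"][0]["count"])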
\n Here's an example that turns on faceting by default for the qLegalStatus column in the Street_Tree_List table in the sf-trees database: \n {\n \"databases\": {\n \"sf-trees\": {\n \"tables\": {\n \"Street_Tree_List\": {\n \"facets\": [\"qLegalStatus\"]\n }\n }\n }\n }\n} \n Facets defined in this way will always be shown in the interface and returned in the API, regardless of the _facet arguments passed to the view. \n You can specify array or date facets in metadata using JSON objects with a single key of array or date and a value specifying the column, like this: \n {\n \"facets\": [\n {\"array\": \"tags\"},\n {\"date\": \"created\"}\n ]\n} \n You can change the default facet size (the number of results shown for each facet) for a table using facet_size : \n {\n \"databases\": {\n \"sf-trees\": {\n \"tables\": {\n \"Street_Tree_List\": {\n \"facets\": [\"qLegalStatus\"],\n \"facet_size\": 10\n }\n }\n }\n }\n}", "breadcrumbs": "[\"Facets\"]", "references": "[]"} {"id": "facets:suggested-facets", "page": "facets", "ref": "suggested-facets", "title": "Suggested facets", "content": "Datasette's table UI will suggest facets for the user to apply, based on the following criteria: \n For the currently filtered data are there any columns which, if applied as a facet... \n \n \n Will return 30 or fewer unique options \n \n \n Will return more than one unique option \n \n \n Will return fewer unique options than the total number of filtered rows \n \n \n And the query used to evaluate these criteria can be completed in under 50ms \n \n \n That last point is particularly important: Datasette runs a query for every column that is displayed on a page, which could get expensive - so to avoid slow load times it sets a time limit of just 50ms for each of those queries.\n This means suggested facets are unlikely to appear for tables with millions of records in them.", "breadcrumbs": "[\"Facets\"]", "references": "[]"} {"id": "full_text_search:full-text-search-fts-versions", "page": "full_text_search", "ref": "full-text-search-fts-versions", "title": "FTS versions", "content": "There are three different versions of the SQLite FTS module: FTS3, FTS4 and FTS5. You can tell which versions are supported by your instance of Datasette by checking the /-/versions page. \n FTS5 is the most advanced module but may not be available in the SQLite version that is bundled with your Python installation. Most importantly, FTS5 is the only version that has the ability to order by search relevance without needing extra code. 
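(For example, a minimal sketch of FTS5 relevance ordering, run through the await db.execute() method documented under the Database class later in this section - the documents_fts table name and search term are hypothetical:)

# Sketch: FTS5 exposes a rank column; ordering by it returns the most
# relevant matches first, with no extra ranking code required
results = await db.execute(
    "select rowid, rank from documents_fts "
    "where documents_fts match :query order by rank",
    {"query": "datasette"},
)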
\n If you can't be sure that FTS5 will be available, you should use FTS4.", "breadcrumbs": "[\"Full-text search\"]", "references": "[]"} {"id": "getting_started:getting-started", "page": "getting_started", "ref": "getting-started", "title": "Getting started", "content": "", "breadcrumbs": "[]", "references": "[]"} {"id": "index:contents", "page": "index", "ref": "contents", "title": "Contents", "content": "Getting started Play with a live demo Follow a tutorial Datasette in your browser with Datasette Lite Try Datasette without installing anything using Glitch Using Datasette on your own computer Installation Basic installation Datasette Desktop for Mac Using Homebrew Using pip Advanced installation options Using pipx Using Docker A note about extensions The Datasette Ecosystem sqlite-utils Dogsheep CLI reference datasette --help datasette serve datasette --get datasette serve --help-settings datasette plugins datasette install datasette uninstall datasette publish datasette publish cloudrun datasette publish heroku datasette package datasette inspect Pages and API endpoints Top-level index Database Table Row Publishing data datasette publish Publishing to Google Cloud Run Publishing to Heroku Publishing to Vercel Publishing to Fly Custom metadata and plugins datasette package Deploying Datasette Deployment fundamentals Running Datasette using systemd Running Datasette using OpenRC Deploying using buildpacks Running Datasette behind a proxy Nginx proxy configuration Apache proxy configuration JSON API Different shapes Pagination Special JSON arguments Table arguments Column filter arguments Special table arguments Expanding foreign key references Discovering the JSON for a page Running SQL queries Named parameters Views Canned queries Canned query parameters Additional canned query options Writable canned queries Magic parameters JSON API for writable canned queries Pagination Cross-database queries Authentication and permissions Actors Using the \"root\" actor Permissions Defining permissions with \"allow\" blocks The /-/allow-debug tool Configuring permissions in metadata.json Controlling access to an instance Controlling access to specific databases Controlling access to specific tables and views Controlling access to specific canned queries Controlling the ability to execute arbitrary SQL Checking permissions in plugins actor_matches_allow() The permissions debug tool The ds_actor cookie Including an expiry time The /-/logout page Built-in permissions view-instance view-database view-database-download view-table view-query execute-sql permissions-debug debug-menu Performance and caching Immutable mode Using \"datasette inspect\" HTTP caching datasette-hashed-urls CSV export URL parameters Streaming all records Binary data Linking to binary downloads Binary plugins Facets Facets in query strings Facets in metadata.json Suggested facets Speeding up facets with indexes Facet by JSON array Facet by date Full-text search The table page and table view API Advanced SQLite search queries Configuring full-text search for a table or view Searches using custom SQL Enabling full-text search for a SQLite table Configuring FTS using sqlite-utils Configuring FTS using csvs-to-sqlite Configuring FTS by hand FTS versions SpatiaLite Warning Installation Installing SpatiaLite on OS X Installing SpatiaLite on Linux Spatial indexing latitude/longitude columns Making use of a spatial index Importing shapefiles into SpatiaLite Importing GeoJSON polygons using Shapely Querying polygons using within() Metadata 
Per-database and per-table metadata Source, license and about Column descriptions Specifying units for a column Setting a default sort order Setting a custom page size Setting which columns can be used for sorting Specifying the label column for a table Hiding tables Using YAML for metadata Settings Using --setting Configuration directory mode Settings default_allow_sql default_page_size sql_time_limit_ms max_returned_rows num_sql_threads allow_facet default_facet_size facet_time_limit_ms facet_suggest_time_limit_ms suggest_facets allow_download default_cache_ttl cache_size_kb allow_csv_stream max_csv_mb truncate_cells_html force_https_urls template_debug trace_debug base_url Configuring the secret Using secrets with datasette publish Introspection /-/metadata /-/versions /-/plugins /-/settings /-/databases /-/threads /-/actor /-/messages Custom pages and templates Custom CSS and JavaScript CSS classes on the Serving static files Publishing static assets Custom templates Custom pages Path parameters for pages Custom headers and status codes Returning 404s Custom redirects Custom error pages Plugins Installing plugins One-off plugins using --plugins-dir Deploying plugins using datasette publish Seeing what plugins are installed Plugin configuration Secret configuration values Writing plugins Writing one-off plugins Starting an installable plugin using cookiecutter Packaging a plugin Static assets Custom templates Writing plugins that accept configuration Designing URLs for your plugin Building URLs within plugins Plugin hooks prepare_connection(conn, database, datasette) prepare_jinja2_environment(env, datasette) extra_template_vars(template, database, table, columns, view_name, request, datasette) extra_css_urls(template, database, table, columns, view_name, request, datasette) extra_js_urls(template, database, table, columns, view_name, request, datasette) extra_body_script(template, database, table, columns, view_name, request, datasette) publish_subcommand(publish) render_cell(row, value, column, table, database, datasette) register_output_renderer(datasette) register_routes(datasette) register_commands(cli) register_facet_classes() asgi_wrapper(datasette) startup(datasette) canned_queries(datasette, database, actor) actor_from_request(datasette, request) filters_from_request(request, database, table, datasette) permission_allowed(datasette, actor, action, resource) register_magic_parameters(datasette) forbidden(datasette, request, message) handle_exception(datasette, request, exception) menu_links(datasette, actor, request) table_actions(datasette, actor, database, table, request) database_actions(datasette, actor, database, request) skip_csrf(datasette, scope) get_metadata(datasette, key, database, table) Testing plugins Setting up a Datasette test instance Using pdb for errors thrown inside Datasette Using pytest fixtures Testing outbound HTTP calls with pytest-httpx Registering a plugin for the duration of a test Internals for plugins Request object The MultiParams class Response class Returning a response with .asgi_send(send) Setting cookies with response.set_cookie() Datasette class .databases .plugin_config(plugin_name, database=None, table=None) await .render_template(template, context=None, request=None) await .permission_allowed(actor, action, resource=None, default=False) await .ensure_permissions(actor, permissions) await .check_visibility(actor, action=None, resource=None, permissions=None) .get_database(name) .add_database(db, name=None, route=None) 
.add_memory_database(name) .remove_database(name) .sign(value, namespace=\"default\") .unsign(value, namespace=\"default\") .add_message(request, message, type=datasette.INFO) .absolute_url(request, path) .setting(key) datasette.client datasette.urls Database class Database(ds, path=None, is_mutable=True, is_memory=False, memory_name=None) db.hash await db.execute(sql, ...) Results await db.execute_fn(fn) await db.execute_write(sql, params=None, block=True) await db.execute_write_script(sql, block=True) await db.execute_write_many(sql, params_seq, block=True) await db.execute_write_fn(fn, block=True) db.close() Database introspection CSRF protection The _internal database The datasette.utils module parse_metadata(content) await_me_maybe(value) Tilde encoding datasette.tracer Tracing child tasks Import shortcuts Contributing General guidelines Setting up a development environment Running the tests Using fixtures Debugging Code formatting Running Black blacken-docs Prettier Editing and building the documentation Running Cog Continuously deployed demo instances Release process Alpha and beta releases Releasing bug fixes from a branch Upgrading CodeMirror Changelog 0.64.6 (2023-12-22) 0.64.5 (2023-10-08) 0.64.4 (2023-09-21) 0.64.3 (2023-04-27) 0.64.2 (2023-03-08) 0.64.1 (2023-01-11) 0.64 (2023-01-09) 0.63.3 (2022-12-17) 0.63.2 (2022-11-18) 0.63.1 (2022-11-10) 0.63 (2022-10-27) Features Plugin hooks and internals Documentation 0.62 (2022-08-14) Features Plugin hooks Bug fixes Documentation 0.61.1 (2022-03-23) 0.61 (2022-03-23) 0.60.2 (2022-02-07) 0.60.1 (2022-01-20) 0.60 (2022-01-13) Plugins and internals Faceting Other small fixes 0.59.4 (2021-11-29) 0.59.3 (2021-11-20) 0.59.2 (2021-11-13) 0.59.1 (2021-10-24) 0.59 (2021-10-14) 0.58.1 (2021-07-16) 0.58 (2021-07-14) 0.57.1 (2021-06-08) 0.57 (2021-06-05) New features Bug fixes and other improvements 0.56.1 (2021-06-05) 0.56 (2021-03-28) 0.55 (2021-02-18) 0.54.1 (2021-02-02) 0.54 (2021-01-25) The _internal database Named in-memory database support JavaScript modules Code formatting with Black and Prettier Other changes 0.53 (2020-12-10) 0.52.5 (2020-12-09) 0.52.4 (2020-12-05) 0.52.3 (2020-12-03) 0.52.2 (2020-12-02) 0.52.1 (2020-11-29) 0.52 (2020-11-28) 0.51.1 (2020-10-31) 0.51 (2020-10-31) New visual design Plugins can now add links within Datasette Binary data URL building Running Datasette behind a proxy Smaller changes 0.50.2 (2020-10-09) 0.50.1 (2020-10-09) 0.50 (2020-10-09) 0.49.1 (2020-09-15) 0.49 (2020-09-14) 0.48 (2020-08-16) 0.47.3 (2020-08-15) 0.47.2 (2020-08-12) 0.47.1 (2020-08-11) 0.47 (2020-08-11) 0.46 (2020-08-09) 0.45 (2020-07-01) Magic parameters for canned queries Log out Better plugin documentation New plugin hooks Smaller changes 0.44 (2020-06-11) Authentication Permissions Writable canned queries Flash messages Signed values and secrets CSRF protection Cookie methods register_routes() plugin hooks Smaller changes The road to Datasette 1.0 0.43 (2020-05-28) 0.42 (2020-05-08) 0.41 (2020-05-06) 0.40 (2020-04-21) 0.39 (2020-03-24) 0.38 (2020-03-08) 0.37.1 (2020-03-02) 0.37 (2020-02-25) 0.36 (2020-02-21) 0.35 (2020-02-04) 0.34 (2020-01-29) 0.33 (2019-12-22) 0.32 (2019-11-14) 0.31.2 (2019-11-13) 0.31.1 (2019-11-12) 0.31 (2019-11-11) 0.30.2 (2019-11-02) 0.30.1 (2019-10-30) 0.30 (2019-10-18) 0.29.3 (2019-09-02) 0.29.2 (2019-07-13) 0.29.1 (2019-07-11) 0.29 (2019-07-07) ASGI New plugin hook: asgi_wrapper New plugin hook: extra_template_vars Secret plugin configuration options Facet by date Easier custom templates for table rows 
?_through= for joins through many-to-many tables Small changes 0.28 (2019-05-19) Supporting databases that change Faceting improvements, and faceting plugins datasette publish cloudrun register_output_renderer plugins Medium changes Small changes 0.27.1 (2019-05-09) 0.27 (2019-01-31) 0.26.1 (2019-01-10) 0.26 (2019-01-02) 0.25.2 (2018-12-16) 0.25.1 (2018-11-04) 0.25 (2018-09-19) 0.24 (2018-07-23) 0.23.2 (2018-07-07) 0.23.1 (2018-06-21) 0.23 (2018-06-18) CSV export Foreign key expansions New configuration settings Control HTTP caching with ?_ttl= Improved support for SpatiaLite latest.datasette.io Miscellaneous 0.22.1 (2018-05-23) 0.22 (2018-05-20) 0.21 (2018-05-05) 0.20 (2018-04-20) 0.19 (2018-04-16) 0.18 (2018-04-14) 0.17 (2018-04-13) 0.16 (2018-04-13) 0.15 (2018-04-09) 0.14 (2017-12-09) 0.13 (2017-11-24) 0.12 (2017-11-16) 0.11 (2017-11-14) 0.10 (2017-11-14) 0.9 (2017-11-13) 0.8 (2017-11-13)", "breadcrumbs": "[\"Datasette\"]", "references": "[]"} {"id": "installation:id1", "page": "installation", "ref": "id1", "title": "Installation", "content": "If you just want to try Datasette out you don't need to install anything: see Try Datasette without installing anything using Glitch \n \n There are two main options for installing Datasette. You can install it directly on to your machine, or you can install it using Docker. \n If you want to start making contributions to the Datasette project by installing a copy that lets you directly modify the code, take a look at our guide to Setting up a development environment . \n \n \n \n Basic installation \n \n \n Datasette Desktop for Mac \n \n \n Using Homebrew \n \n \n Using pip \n \n \n \n \n Advanced installation options \n \n \n Using pipx \n \n \n Installing plugins using pipx \n \n \n Upgrading packages using pipx \n \n \n \n \n Using Docker \n \n \n Loading SpatiaLite \n \n \n Installing plugins \n \n \n \n \n \n \n A note about extensions", "breadcrumbs": "[]", "references": "[]"} {"id": "installation:installation-advanced", "page": "installation", "ref": "installation-advanced", "title": "Advanced installation options", "content": "", "breadcrumbs": "[\"Installation\"]", "references": "[]"} {"id": "installation:installation-basic", "page": "installation", "ref": "installation-basic", "title": "Basic installation", "content": "", "breadcrumbs": "[\"Installation\"]", "references": "[]"} {"id": "installation:installation-extensions", "page": "installation", "ref": "installation-extensions", "title": "A note about extensions", "content": "SQLite supports extensions, such as SpatiaLite for geospatial operations. \n These can be loaded using the --load-extension argument, like so: \n datasette --load-extension=/usr/local/lib/mod_spatialite.dylib \n Some Python installations do not include support for SQLite extensions. If this is the case you will see the following error when you attempt to load an extension: \n \n Your Python installation does not have the ability to load SQLite extensions. \n \n In some cases you may see the following error message instead: \n AttributeError: 'sqlite3.Connection' object has no attribute 'enable_load_extension' \n On macOS the easiest fix for this is to install Datasette using Homebrew: \n brew install datasette \n Use which datasette to confirm that datasette will run that version. 
The output should look something like this: \n /usr/local/opt/datasette/bin/datasette \n If you get a different location here such as /Library/Frameworks/Python.framework/Versions/3.10/bin/datasette you can run the following command to cause datasette to execute the Homebrew version instead: \n alias datasette=$(echo $(brew --prefix datasette)/bin/datasette) \n You can undo this operation using: \n unalias datasette \n If you need to run SQLite with extension support for other Python code, you can do so by installing Python itself using Homebrew: \n brew install python \n Then execute Python using: \n /usr/local/opt/python@3/libexec/bin/python \n A more convenient way to work with this version of Python may be to use it to create a virtual environment: \n /usr/local/opt/python@3/libexec/bin/python -m venv datasette-venv \n Then activate it like this: \n source datasette-venv/bin/activate \n Now running python and pip will work against a version of Python 3 that includes support for SQLite extensions: \n pip install datasette\nwhich datasette\ndatasette --version", "breadcrumbs": "[\"Installation\"]", "references": "[]"} {"id": "installation:installing-plugins-using-pipx", "page": "installation", "ref": "installing-plugins-using-pipx", "title": "Installing plugins using pipx", "content": "You can install additional datasette plugins with pipx inject like so: \n $ pipx inject datasette datasette-json-html\ninjected package datasette-json-html into venv datasette\ndone! \u2728 \ud83c\udf1f \u2728\n\n$ datasette plugins\n[\n {\n \"name\": \"datasette-json-html\",\n \"static\": false,\n \"templates\": false,\n \"version\": \"0.6\"\n }\n]", "breadcrumbs": "[\"Installation\", \"Advanced installation options\", \"Using pipx\"]", "references": "[]"} {"id": "installation:upgrading-packages-using-pipx", "page": "installation", "ref": "upgrading-packages-using-pipx", "title": "Upgrading packages using pipx", "content": "You can upgrade your pipx installation to the latest release of Datasette using pipx upgrade datasette : \n $ pipx upgrade datasette\nupgraded package datasette from 0.39 to 0.40 (location: /Users/simon/.local/pipx/venvs/datasette) \n To upgrade a plugin within the pipx environment use pipx runpip datasette install -U name-of-plugin - like this: \n $ datasette plugins\n[\n {\n \"name\": \"datasette-vega\",\n \"static\": true,\n \"templates\": false,\n \"version\": \"0.6\"\n }\n]\n\n$ pipx runpip datasette install -U datasette-vega\nCollecting datasette-vega\nDownloading datasette_vega-0.6.2-py3-none-any.whl (1.8 MB)\n |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1.8 MB 2.0 MB/s\n...\nInstalling collected packages: datasette-vega\nAttempting uninstall: datasette-vega\n Found existing installation: datasette-vega 0.6\n Uninstalling datasette-vega-0.6:\n Successfully uninstalled datasette-vega-0.6\nSuccessfully installed datasette-vega-0.6.2\n\n$ datasette plugins\n[\n {\n \"name\": \"datasette-vega\",\n \"static\": true,\n \"templates\": false,\n \"version\": \"0.6.2\"\n }\n]", "breadcrumbs": "[\"Installation\", \"Advanced installation options\", \"Using pipx\"]", "references": "[]"} {"id": "internals:database-close", "page": "internals", "ref": "database-close", "title": "db.close()", "content": "Closes all of the open connections to file-backed databases. 
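(A minimal sketch - the fixtures.db filename is just the example database used earlier on this page:)

from datasette.app import Datasette

# Sketch: open a file-backed database, then release its connections
datasette = Datasette(files=["fixtures.db"])
db = datasette.get_database("fixtures")
db.close()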
This is mainly intended to be used by large test suites, to avoid hitting limits on the number of open files.", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[]"} {"id": "internals:database-constructor", "page": "internals", "ref": "database-constructor", "title": "Database(ds, path=None, is_mutable=True, is_memory=False, memory_name=None)", "content": "The Database() constructor can be used by plugins, in conjunction with .add_database(db, name=None, route=None) , to create and register new databases. \n The arguments are as follows: \n \n \n ds - Datasette class (required) \n \n The Datasette instance you are attaching this database to. \n \n \n \n path - string \n \n Path to a SQLite database file on disk. \n \n \n \n is_mutable - boolean \n \n Set this to False to cause Datasette to open the file in immutable mode. \n \n \n \n is_memory - boolean \n \n Use this to create non-shared memory connections. \n \n \n \n memory_name - string or None \n \n Use this to create a named in-memory database. Unlike regular memory databases these can be accessed by multiple threads and will persist any changes made to them for the lifetime of the Datasette server process. \n \n \n \n The first argument is the datasette instance you are attaching to, the second is a path= , then is_mutable and is_memory are both optional arguments.", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[]"} {"id": "internals:database-execute", "page": "internals", "ref": "database-execute", "title": "await db.execute(sql, ...)", "content": "Executes a SQL query against the database and returns the resulting rows (see Results ). \n \n \n sql - string (required) \n \n The SQL query to execute. This can include ? or :named parameters. \n \n \n \n params - list or dict \n \n A list or dictionary of values to use for the parameters. List for ? , dictionary for :named . \n \n \n \n truncate - boolean \n \n Should the rows returned by the query be truncated at the maximum page size? Defaults to True , set this to False to disable truncation. \n \n \n \n custom_time_limit - integer ms \n \n A custom time limit for this query. This can be set to a lower value than the Datasette configured default. If a query takes longer than this it will be terminated early and raise a datasette.database.QueryInterrupted exception. \n \n \n \n page_size - integer \n \n Set a custom page size for truncation, overriding the configured Datasette default. \n \n \n \n log_sql_errors - boolean \n \n Should any SQL errors be logged to the console in addition to being raised as an error? Defaults to True .", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[]"} {"id": "internals:database-execute-fn", "page": "internals", "ref": "database-execute-fn", "title": "await db.execute_fn(fn)", "content": "Executes a given callback function against a read-only database connection running in a thread. The function will be passed a SQLite connection, and the return value from the function will be returned by the await . 
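(A quick aside: await db.execute(...) above has no example of its own, so here is a hedged sketch with a hypothetical table name - the Example usage block that follows belongs to db.execute_fn():)

# Sketch: a parameterized read-only query using the documented
# db.execute() signature; :min_id is a :named parameter
results = await db.execute(
    "select * from some_table where id > :min_id",
    {"min_id": 5},
)
rows = results.rows  # assumption: the Results object exposes .rows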
\n Example usage: \n def get_version(conn):\n return conn.execute(\n \"select sqlite_version()\"\n ).fetchall()[0][0]\n\n\nversion = await db.execute_fn(get_version)", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[]"} {"id": "internals:database-execute-write", "page": "internals", "ref": "database-execute-write", "title": "await db.execute_write(sql, params=None, block=True)", "content": "SQLite only allows one database connection to write at a time. Datasette handles this for you by maintaining a queue of writes to be executed against a given database. Plugins can submit write operations to this queue and they will be executed in the order in which they are received. \n This method can be used to queue up a non-SELECT SQL query to be executed against a single write connection to the database. \n You can pass additional SQL parameters as a tuple or dictionary. \n The method will block until the operation is completed, and the return value will be the return from calling conn.execute(...) using the underlying sqlite3 Python library. \n If you pass block=False this behaviour changes to \"fire and forget\" - queries will be added to the write queue and executed in a separate thread while your code can continue to do other things. The method will return a UUID representing the queued task.", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[]"} {"id": "internals:database-execute-write-fn", "page": "internals", "ref": "database-execute-write-fn", "title": "await db.execute_write_fn(fn, block=True)", "content": "This method works like .execute_write() , but instead of a SQL statement you give it a callable Python function. Your function will be queued up and then called when the write connection is available, passing that connection as the argument to the function. \n The function can then perform multiple actions, safe in the knowledge that it has exclusive access to the single writable connection for as long as it is executing. \n \n fn needs to be a regular function, not an async def function. \n \n For example: \n def delete_and_return_count(conn):\n conn.execute(\"delete from some_table where id > 5\")\n return conn.execute(\n \"select count(*) from some_table\"\n ).fetchone()[0]\n\n\ntry:\n num_rows_left = await database.execute_write_fn(\n delete_and_return_count\n )\nexcept Exception as e:\n print(\"An error occurred:\", e) \n The value returned from await database.execute_write_fn(...) will be the return value from your function. \n If your function raises an exception that exception will be propagated up to the await line. \n If you specify block=False the method becomes fire-and-forget, queueing your function to be executed and then allowing your code after the call to .execute_write_fn() to continue running while the underlying thread waits for an opportunity to run your function. A UUID representing the queued task will be returned. Any exceptions in your code will be silently swallowed.", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[]"} {"id": "internals:database-hash", "page": "internals", "ref": "database-hash", "title": "db.hash", "content": "If the database was opened in immutable mode, this property returns the 64 character SHA-256 hash of the database contents as a string. 
Otherwise it returns None .", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[]"} {"id": "internals:datasette-absolute-url", "page": "internals", "ref": "datasette-absolute-url", "title": ".absolute_url(request, path)", "content": "request - Request \n \n The current Request object \n \n \n \n path - string \n \n A path, for example /dbname/table.json \n \n \n \n Returns the absolute URL for the given path, including the protocol and host. For example: \n absolute_url = datasette.absolute_url(\n request, \"/dbname/table.json\"\n)\n# Would return \"http://localhost:8001/dbname/table.json\" \n The current request object is used to determine the hostname and protocol that should be used for the returned URL. The force_https_urls configuration setting is taken into account.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-add-database", "page": "internals", "ref": "datasette-add-database", "title": ".add_database(db, name=None, route=None)", "content": "db - datasette.database.Database instance \n \n The database to be attached. \n \n \n \n name - string, optional \n \n The name to be used for this database . If not specified Datasette will pick one based on the filename or memory name. \n \n \n \n route - string, optional \n \n This will be used in the URL path. If not specified, it will default to the same thing as the name . \n \n \n \n The datasette.add_database(db) method lets you add a new database to the current Datasette instance. \n The db parameter should be an instance of the datasette.database.Database class. For example: \n from datasette.database import Database\n\ndatasette.add_database(\n Database(\n datasette,\n path=\"path/to/my-new-database.db\",\n )\n) \n This will add a mutable database and serve it at /my-new-database . \n Use is_mutable=False to add an immutable database. \n .add_database() returns the Database instance, with its name set as the database.name attribute. Any time you are working with a newly added database you should use the return value of .add_database() , for example: \n db = datasette.add_database(\n Database(datasette, memory_name=\"statistics\")\n)\nawait db.execute_write(\n \"CREATE TABLE foo(id integer primary key)\"\n)", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-add-memory-database", "page": "internals", "ref": "datasette-add-memory-database", "title": ".add_memory_database(name)", "content": "Adds a shared in-memory database with the specified name: \n datasette.add_memory_database(\"statistics\") \n This is a shortcut for the following: \n from datasette.database import Database\n\ndatasette.add_database(\n Database(datasette, memory_name=\"statistics\")\n) \n Using either of these patterns will result in the in-memory database being served at /statistics .", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-add-message", "page": "internals", "ref": "datasette-add-message", "title": ".add_message(request, message, type=datasette.INFO)", "content": "request - Request \n \n The current Request object \n \n \n \n message - string \n \n The message string \n \n \n \n type - constant, optional \n \n The message type - datasette.INFO , datasette.WARNING or datasette.ERROR \n \n \n \n Datasette's flash messaging mechanism allows you to add a message that will be displayed to the user on the next page that they visit. 
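(For example, a short sketch from inside a view function that has access to the current request - the message text is arbitrary:)

# Sketch: queue an INFO-level flash message for the current user
datasette.add_message(request, "Upload complete")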
Messages are persisted in a ds_messages cookie. This method adds a message to that cookie. \n You can try out these messages (including the different visual styling of the three message types) using the /-/messages debugging tool.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-check-visibility", "page": "internals", "ref": "datasette-check-visibility", "title": "await .check_visibility(actor, action=None, resource=None, permissions=None)", "content": "actor - dictionary \n \n The authenticated actor. This is usually request.actor . \n \n \n \n action - string, optional \n \n The name of the action that is being permission checked. \n \n \n \n resource - string or tuple, optional \n \n The resource, e.g. the name of the database, or a tuple of two strings containing the name of the database and the name of the table. Only some permissions apply to a resource. \n \n \n \n permissions - list of action strings or (action, resource) tuples, optional \n \n Provide this instead of action and resource to check multiple permissions at once. \n \n \n \n This convenience method can be used to answer the question \"should this item be considered private, in that it is visible to me but it is not visible to anonymous users?\" \n It returns a tuple of two booleans, (visible, private) . visible indicates if the actor can see this resource. private will be True if an anonymous user would not be able to view the resource. \n This example checks if the user can access a specific table, and sets private so that a padlock icon can later be displayed: \n visible, private = await self.ds.check_visibility(\n request.actor,\n action=\"view-table\",\n resource=(database, table),\n) \n The following example runs three checks in a row, similar to await .ensure_permissions(actor, permissions) . If any of the checks are denied before one of them is explicitly granted then visible will be False . private will be True if an anonymous user would not be able to view the resource. \n visible, private = await self.ds.check_visibility(\n request.actor,\n permissions=[\n (\"view-table\", (database, table)),\n (\"view-database\", database),\n \"view-instance\",\n ],\n)", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-databases", "page": "internals", "ref": "datasette-databases", "title": ".databases", "content": "Property exposing a collections.OrderedDict of databases currently connected to Datasette. \n The dictionary keys are the name of the database that is used in the URL - e.g. /fixtures would have a key of \"fixtures\" . The values are Database class instances. \n All databases are listed, irrespective of user permissions. This means that the _internal database will always be listed here.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-ensure-permissions", "page": "internals", "ref": "datasette-ensure-permissions", "title": "await .ensure_permissions(actor, permissions)", "content": "actor - dictionary \n \n The authenticated actor. This is usually request.actor . \n \n \n \n permissions - list \n \n A list of permissions to check. Each permission in that list can be a string action name or a 2-tuple of (action, resource) . \n \n \n \n This method allows multiple permissions to be checked at once. It raises a datasette.Forbidden exception if any of the checks are denied before one of them is explicitly granted. 
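(A plugin can catch that exception to make its own decision - a sketch using the Forbidden import shortcut documented later in this section, with a hypothetical database name:)

from datasette import Forbidden

# Sketch: convert a denied permission check into a boolean
try:
    await datasette.ensure_permissions(
        request.actor, [("view-database", "fixtures")]
    )
    allowed = True
except Forbidden:
    allowed = False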
\n This is useful when you need to check multiple permissions at once. For example, an actor should be able to view a table if at least one of the following checks returns True , or if none of them returns False : \n await self.ds.ensure_permissions(\n request.actor,\n [\n (\"view-table\", (database, table)),\n (\"view-database\", database),\n \"view-instance\",\n ],\n)", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-get-database", "page": "internals", "ref": "datasette-get-database", "title": ".get_database(name)", "content": "name - string, optional \n \n The name of the database. \n \n \n \n Returns the specified database object. Raises a KeyError if the database does not exist. Call this method without an argument to return the first connected database.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-permission-allowed", "page": "internals", "ref": "datasette-permission-allowed", "title": "await .permission_allowed(actor, action, resource=None, default=False)", "content": "actor - dictionary \n \n The authenticated actor. This is usually request.actor . \n \n \n \n action - string \n \n The name of the action that is being permission checked. \n \n \n \n resource - string or tuple, optional \n \n The resource, e.g. the name of the database, or a tuple of two strings containing the name of the database and the name of the table. Only some permissions apply to a resource. \n \n \n \n default - optional, True or False \n \n Should this permission check default to allow or to deny? \n \n \n \n Check if the given actor has permission to perform the given action on the given resource. \n Some permission checks are carried out against rules defined in metadata.json , while other custom permissions may be decided by plugins that implement the permission_allowed(datasette, actor, action, resource) plugin hook. \n If neither metadata.json nor any of the plugins provide an answer to the permission query the default argument will be returned. \n See Built-in permissions for a full list of permission actions included in Datasette core.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-plugin-config", "page": "internals", "ref": "datasette-plugin-config", "title": ".plugin_config(plugin_name, database=None, table=None)", "content": "plugin_name - string \n \n The name of the plugin to look up configuration for. Usually this is something similar to datasette-cluster-map . \n \n \n \n database - None or string \n \n The database the user is interacting with. \n \n \n \n table - None or string \n \n The table the user is interacting with. \n \n \n \n This method lets you read plugin configuration values that were set in metadata.json . See Writing plugins that accept configuration for full details of how this method should be used. \n The return value will be the value from the configuration file - usually a dictionary. \n If the plugin is not configured the return value will be None .", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-remove-database", "page": "internals", "ref": "datasette-remove-database", "title": ".remove_database(name)", "content": "name - string \n \n The name of the database to be removed. \n \n \n \n This removes a database that has been previously added. 
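(For example, a sketch pairing it with .add_database() documented above - the scratch name is hypothetical:)

from datasette.database import Database

# Sketch: attach a named in-memory database, then detach it again
db = datasette.add_database(Database(datasette, memory_name="scratch"))
datasette.remove_database("scratch")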
name= is the unique name of that database.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-setting", "page": "internals", "ref": "datasette-setting", "title": ".setting(key)", "content": "key - string \n \n The name of the setting, e.g. base_url . \n \n \n \n Returns the configured value for the specified setting . This can be a string, boolean or integer depending on the requested setting. \n For example: \n downloads_are_allowed = datasette.setting(\"allow_download\")", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-unsign", "page": "internals", "ref": "datasette-unsign", "title": ".unsign(value, namespace=\"default\")", "content": "value - string \n \n The signed string that was created using .sign(value, namespace=\"default\") . \n \n \n \n namespace - string, optional \n \n The alternative namespace, if one was used. \n \n \n \n Returns the original, decoded object that was passed to .sign(value, namespace=\"default\") . If the signature is not valid this raises an itsdangerous.BadSignature exception.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:internals", "page": "internals", "ref": "internals", "title": "Internals for plugins", "content": "Many Plugin hooks are passed objects that provide access to internal Datasette functionality. The interface to these objects should not be considered stable with the exception of methods that are documented here.", "breadcrumbs": "[]", "references": "[]"} {"id": "internals:internals-database", "page": "internals", "ref": "internals-database", "title": "Database class", "content": "Instances of the Database class can be used to execute queries against attached SQLite databases, and to run introspection against their schemas.", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[]"} {"id": "internals:internals-database-introspection", "page": "internals", "ref": "internals-database-introspection", "title": "Database introspection", "content": "The Database class also provides properties and methods for introspecting the database. \n \n \n db.name - string \n \n The name of the database - usually the filename without the .db extension. \n \n \n \n db.size - integer \n \n The size of the database file in bytes. 0 for :memory: databases. \n \n \n \n db.mtime_ns - integer or None \n \n The last modification time of the database file in nanoseconds since the epoch. None for :memory: databases. \n \n \n \n db.is_mutable - boolean \n \n Is this database mutable, and allowed to accept writes? \n \n \n \n db.is_memory - boolean \n \n Is this database an in-memory database? \n \n \n \n await db.attached_databases() - list of named tuples \n \n Returns a list of additional databases that have been connected to this database using the SQLite ATTACH command. Each named tuple has fields seq , name and file . \n \n \n \n await db.table_exists(table) - boolean \n \n Check if a table called table exists. \n \n \n \n await db.table_names() - list of strings \n \n List of names of tables in the database. \n \n \n \n await db.view_names() - list of strings \n \n List of names of views in the database. \n \n \n \n await db.table_columns(table) - list of strings \n \n Names of columns in a specific table. \n \n \n \n await db.table_column_details(table) - list of named tuples \n \n Full details of the columns in a specific table. 
Each column is represented by a Column named tuple with fields cid (integer representing the column position), name (string), type (string, e.g. REAL or VARCHAR(30) ), notnull (integer 1 or 0), default_value (string or None), is_pk (integer 1 or 0). \n \n \n \n await db.primary_keys(table) - list of strings \n \n Names of the columns that are part of the primary key for this table. \n \n \n \n await db.fts_table(table) - string or None \n \n The name of the FTS table associated with this table, if one exists. \n \n \n \n await db.label_column_for_table(table) - string or None \n \n The label column that is associated with this table - either automatically detected or using the \"label_column\" key from Metadata , see Specifying the label column for a table . \n \n \n \n await db.foreign_keys_for_table(table) - list of dictionaries \n \n Details of columns in this table which are foreign keys to other tables. A list of dictionaries where each dictionary is shaped like this: {\"column\": string, \"other_table\": string, \"other_column\": string} . \n \n \n \n await db.hidden_table_names() - list of strings \n \n List of tables which Datasette \"hides\" by default - usually these are tables associated with SQLite's full-text search feature, the SpatiaLite extension or tables hidden using the Hiding tables feature. \n \n \n \n await db.get_table_definition(table) - string \n \n Returns the SQL definition for the table - the CREATE TABLE statement and any associated CREATE INDEX statements. \n \n \n \n await db.get_view_definition(view) - string \n \n Returns the SQL definition of the named view. \n \n \n \n await db.get_all_foreign_keys() - dictionary \n \n Dictionary representing both incoming and outgoing foreign keys for this table. It has two keys, \"incoming\" and \"outgoing\" , each of which is a list of dictionaries with keys \"column\" , \"other_table\" and \"other_column\" . For example: \n {\n \"incoming\": [],\n \"outgoing\": [\n {\n \"other_table\": \"attraction_characteristic\",\n \"column\": \"characteristic_id\",\n \"other_column\": \"pk\",\n },\n {\n \"other_table\": \"roadside_attractions\",\n \"column\": \"attraction_id\",\n \"other_column\": \"pk\",\n }\n ]\n}", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[]"} {"id": "internals:internals-datasette", "page": "internals", "ref": "internals-datasette", "title": "Datasette class", "content": "This object is an instance of the Datasette class, passed to many plugin hooks as an argument called datasette . \n You can create your own instance of this - for example to help write tests for a plugin - like so: \n from datasette.app import Datasette\n\n# With no arguments a single in-memory database will be attached\ndatasette = Datasette()\n\n# The files= argument can load files from disk\ndatasette = Datasette(files=[\"/path/to/my-database.db\"])\n\n# Pass metadata as a JSON dictionary like this\ndatasette = Datasette(\n files=[\"/path/to/my-database.db\"],\n metadata={\n \"databases\": {\n \"my-database\": {\n \"description\": \"This is my database\"\n }\n }\n },\n) \n Constructor parameters include: \n \n \n files=[...] - a list of database files to open \n \n \n immutables=[...] - a list of database files to open in immutable mode \n \n \n metadata={...} - a dictionary of Metadata \n \n \n config_dir=... 
- the configuration directory to use, stored in datasette.config_dir", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[]"} {"id": "internals:internals-datasette-urls", "page": "internals", "ref": "internals-datasette-urls", "title": "datasette.urls", "content": "The datasette.urls object contains methods for building URLs to pages within Datasette. Plugins should use this to link to pages, since these methods take into account any base_url configuration setting that might be in effect. \n \n \n datasette.urls.instance(format=None) \n \n Returns the URL to the Datasette instance root page. This is usually \"/\" . \n \n \n \n datasette.urls.path(path, format=None) \n \n Takes a path and returns the full path, taking base_url into account. \n For example, datasette.urls.path(\"-/logout\") will return the path to the logout page, which will be \"/-/logout\" by default or /prefix-path/-/logout if base_url is set to /prefix-path/ \n \n \n \n datasette.urls.logout() \n \n Returns the URL to the logout page, usually \"/-/logout\" \n \n \n \n datasette.urls.static(path) \n \n Returns the URL of one of Datasette's default static assets, for example \"/-/static/app.css\" \n \n \n \n datasette.urls.static_plugins(plugin_name, path) \n \n Returns the URL of one of the static assets belonging to a plugin. \n datasette.urls.static_plugins(\"datasette_cluster_map\", \"datasette-cluster-map.js\") would return \"/-/static-plugins/datasette_cluster_map/datasette-cluster-map.js\" \n \n \n \n datasette.urls.database(database_name, format=None) \n \n Returns the URL to a database page, for example \"/fixtures\" \n \n \n \n datasette.urls.table(database_name, table_name, format=None) \n \n Returns the URL to a table page, for example \"/fixtures/facetable\" \n \n \n \n datasette.urls.query(database_name, query_name, format=None) \n \n Returns the URL to a query page, for example \"/fixtures/pragma_cache_size\" \n \n \n \n These functions can be accessed via the {{ urls }} object in Datasette templates, for example: \n <a href=\"{{ urls.instance() }}\">Homepage</a>\n<a href=\"{{ urls.database('fixtures') }}\">Fixtures database</a>\n<a href=\"{{ urls.table('fixtures', 'facetable') }}\">facetable table</a>\n<a href=\"{{ urls.query('fixtures', 'pragma_cache_size') }}\">pragma_cache_size query</a> \n Use the format=\"json\" (or \"csv\" or other formats supported by plugins) arguments to get back URLs to the JSON representation. This is the path with .json added on the end. \n These methods each return a datasette.utils.PrefixedUrlString object, which is a subclass of the Python str type. This allows the logic that considers the base_url setting to detect if that prefix has already been applied to the path.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:internals-multiparams", "page": "internals", "ref": "internals-multiparams", "title": "The MultiParams class", "content": "request.args is a MultiParams object - a dictionary-like object which provides access to query string parameters that may have multiple values. \n Consider the query string ?foo=1&foo=2&bar=3 - with two values for foo and one value for bar . \n \n \n request.args[key] - string \n \n Returns the first value for that key, or raises a KeyError if the key is missing. For the above example request.args[\"foo\"] would return \"1\" . \n \n \n \n request.args.get(key) - string or None \n \n Returns the first value for that key, or None if the key is missing. Pass a second argument to specify a different default, e.g. 
q = request.args.get(\"q\", \"\") . \n \n \n \n request.args.getlist(key) - list of strings \n \n Returns the list of strings for that key. request.args.getlist(\"foo\") would return [\"1\", \"2\"] in the above example. request.args.getlist(\"bar\") would return [\"3\"] . If the key is missing an empty list will be returned. \n \n \n \n request.args.keys() - list of strings \n \n Returns the list of available keys - for the example this would be [\"foo\", \"bar\"] . \n \n \n \n key in request.args - True or False \n \n You can use if key in request.args to check if a key is present. \n \n \n \n for key in request.args - iterator \n \n This lets you loop through every available key. \n \n \n \n len(request.args) - integer \n \n Returns the number of keys.", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[]"} {"id": "internals:internals-response", "page": "internals", "ref": "internals-response", "title": "Response class", "content": "The Response class can be returned from view functions that have been registered using the register_routes(datasette) hook. \n The Response() constructor takes the following arguments: \n \n \n body - string \n \n The body of the response. \n \n \n \n status - integer (optional) \n \n The HTTP status - defaults to 200. \n \n \n \n headers - dictionary (optional) \n \n A dictionary of extra HTTP headers, e.g. {\"x-hello\": \"world\"} . \n \n \n \n content_type - string (optional) \n \n The content-type for the response. Defaults to text/plain . \n \n \n \n For example: \n from datasette.utils.asgi import Response\n\nresponse = Response(\n \"<xml>This is XML</xml>\",\n content_type=\"application/xml; charset=utf-8\",\n) \n The quickest way to create responses is using the Response.text(...) , Response.html(...) , Response.json(...) or Response.redirect(...) helper methods: \n from datasette.utils.asgi import Response\n\nhtml_response = Response.html(\"This is HTML\")\njson_response = Response.json({\"this_is\": \"json\"})\ntext_response = Response.text(\n \"This will become utf-8 encoded text\"\n)\n# Redirects are served as 302, unless you pass status=301:\nredirect_response = Response.redirect(\n \"https://latest.datasette.io/\"\n) \n Each of these responses will use the correct corresponding content-type - text/html; charset=utf-8 , application/json; charset=utf-8 or text/plain; charset=utf-8 respectively. \n Each of the helper methods takes optional status= and headers= arguments, documented above.", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[]"} {"id": "internals:internals-response-asgi-send", "page": "internals", "ref": "internals-response-asgi-send", "title": "Returning a response with .asgi_send(send)", "content": "In most cases you will return Response objects from your own view functions. You can also use a Response instance to respond at a lower level via ASGI, for example if you are writing code that uses the asgi_wrapper(datasette) hook. \n Create a Response object and then use await response.asgi_send(send) , passing the ASGI send function. 
For example: \n from datasette.utils.asgi import Response\n\n\nasync def require_authorization(scope, receive, send):\n response = Response.text(\n \"401 Authorization Required\",\n headers={\n \"www-authenticate\": 'Basic realm=\"Datasette\", charset=\"UTF-8\"'\n },\n status=401,\n )\n await response.asgi_send(send)", "breadcrumbs": "[\"Internals for plugins\", \"Response class\"]", "references": "[]"} {"id": "internals:internals-response-set-cookie", "page": "internals", "ref": "internals-response-set-cookie", "title": "Setting cookies with response.set_cookie()", "content": "To set cookies on the response, use the response.set_cookie(...) method. The method signature looks like this: \n def set_cookie(\n self,\n key,\n value=\"\",\n max_age=None,\n expires=None,\n path=\"/\",\n domain=None,\n secure=False,\n httponly=False,\n samesite=\"lax\",\n):\n ... \n You can use this with datasette.sign() to set signed cookies. Here's how you would set the ds_actor cookie for use with Datasette authentication : \n response = Response.redirect(\"/\")\nresponse.set_cookie(\n \"ds_actor\",\n datasette.sign({\"a\": {\"id\": \"cleopaws\"}}, \"actor\"),\n)\nreturn response", "breadcrumbs": "[\"Internals for plugins\", \"Response class\"]", "references": "[]"} {"id": "internals:internals-shortcuts", "page": "internals", "ref": "internals-shortcuts", "title": "Import shortcuts", "content": "The following commonly used symbols can be imported directly from the datasette module: \n from datasette import Response\nfrom datasette import Forbidden\nfrom datasette import NotFound\nfrom datasette import hookimpl\nfrom datasette import actor_matches_allow", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[]"} {"id": "internals:internals-tracer-trace-child-tasks", "page": "internals", "ref": "internals-tracer-trace-child-tasks", "title": "Tracing child tasks", "content": "If your code uses a mechanism such as asyncio.gather() to execute code in additional tasks you may find that some of the traces are missing from the display. \n You can use the trace_child_tasks() context manager to ensure these child tasks are correctly handled. \n import asyncio\n\nfrom datasette import tracer\n\nwith tracer.trace_child_tasks():\n results = await asyncio.gather(\n # ... async tasks here\n ) \n This example uses the register_routes() plugin hook to add a page at /parallel-queries which executes two SQL queries in parallel using asyncio.gather() and returns their results. \n import asyncio\n\nfrom datasette import hookimpl\nfrom datasette import tracer\nfrom datasette import Response\n\n\n@hookimpl\ndef register_routes():\n async def parallel_queries(datasette):\n db = datasette.get_database()\n with tracer.trace_child_tasks():\n one, two = await asyncio.gather(\n db.execute(\"select 1\"),\n db.execute(\"select 2\"),\n )\n return Response.json(\n {\n \"one\": one.single_value(),\n \"two\": two.single_value(),\n }\n )\n\n return [\n (r\"/parallel-queries$\", parallel_queries),\n ] \n Adding ?_trace=1 will show that the trace covers both of those child tasks.", "breadcrumbs": "[\"Internals for plugins\", \"datasette.tracer\"]", "references": "[]"} {"id": "internals:internals-utils-parse-metadata", "page": "internals", "ref": "internals-utils-parse-metadata", "title": "parse_metadata(content)", "content": "This function accepts a string containing either JSON or YAML, expected to be of the format described in Metadata . It returns a nested Python dictionary representing the parsed data from that string. \n If the metadata cannot be parsed as either JSON or YAML the function will raise a utils.BadMetadataError exception. 
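A minimal usage sketch (the YAML string here is hypothetical, not taken from the reference entry below): \n from datasette.utils import parse_metadata\n\n# Accepts either JSON or YAML and returns a dictionary\nmetadata = parse_metadata(\"title: My title\")\nassert metadata == {\"title\": \"My title\"} 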
\n \n \n datasette.utils.parse_metadata(content: str) -> dict \n \n Detects if content is JSON or YAML and parses it appropriately.", "breadcrumbs": "[\"Internals for plugins\", \"The datasette.utils module\"]", "references": "[]"} {"id": "introspection:id1", "page": "introspection", "ref": "id1", "title": "Introspection", "content": "Datasette includes some pages and JSON API endpoints for introspecting the current instance. These can be used to understand some of the internals of Datasette and to see how a particular instance has been configured. \n Each of these pages can be viewed in your browser. Add .json to the URL to get back the contents as JSON.", "breadcrumbs": "[]", "references": "[]"} {"id": "introspection:jsondataview-actor", "page": "introspection", "ref": "jsondataview-actor", "title": "/-/actor", "content": "Shows the currently authenticated actor. Useful for debugging Datasette authentication plugins. \n {\n \"actor\": {\n \"id\": 1,\n \"username\": \"some-user\"\n }\n}", "breadcrumbs": "[\"Introspection\"]", "references": "[]"} {"id": "introspection:messagesdebugview", "page": "introspection", "ref": "messagesdebugview", "title": "/-/messages", "content": "The debug tool at /-/messages can be used to set flash messages to try out that feature. See .add_message(request, message, type=datasette.INFO) for details of this feature.", "breadcrumbs": "[\"Introspection\"]", "references": "[]"} {"id": "json_api:column-filter-arguments", "page": "json_api", "ref": "column-filter-arguments", "title": "Column filter arguments", "content": "You can filter the data returned by the table based on column values using a query string argument. \n \n \n ?column__exact=value or ?column=value \n \n Returns rows where the specified column exactly matches the value. \n \n \n \n ?column__not=value \n \n Returns rows where the column does not match the value. \n \n \n \n ?column__contains=value \n \n Rows where the string column contains the specified value ( column like \"%value%\" in SQL). \n \n \n \n ?column__endswith=value \n \n Rows where the string column ends with the specified value ( column like \"%value\" in SQL). \n \n \n \n ?column__startswith=value \n \n Rows where the string column starts with the specified value ( column like \"value%\" in SQL). \n \n \n \n ?column__gt=value \n \n Rows which are greater than the specified value. \n \n \n \n ?column__gte=value \n \n Rows which are greater than or equal to the specified value. \n \n \n \n ?column__lt=value \n \n Rows which are less than the specified value. \n \n \n \n ?column__lte=value \n \n Rows which are less than or equal to the specified value. \n \n \n \n ?column__like=value \n \n Match rows with a LIKE clause, case insensitive and with % as the wildcard character. \n \n \n \n ?column__notlike=value \n \n Match rows that do not match the provided LIKE clause. \n \n \n \n ?column__glob=value \n \n Similar to LIKE but uses Unix wildcard syntax and is case sensitive. \n \n \n \n ?column__in=value1,value2,value3 \n \n Rows where column matches any of the provided values. \n You can use a comma separated string, or you can use a JSON array. \n The JSON array option is useful if one of your matching values itself contains a comma: \n ?column__in=[\"value\",\"value,with,commas\"] \n \n \n \n ?column__notin=value1,value2,value3 \n \n Rows where column does not match any of the provided values. The inverse of __in= . Also supports JSON arrays. 
\n \n \n \n ?column__arraycontains=value \n \n Works against columns that contain JSON arrays - matches if any of the values in that array match the provided value. \n This is only available if the json1 SQLite extension is enabled. \n \n \n \n ?column__arraynotcontains=value \n \n Works against columns that contain JSON arrays - matches if none of the values in that array match the provided value. \n This is only available if the json1 SQLite extension is enabled. \n \n \n \n ?column__date=value \n \n Column is a datestamp occurring on the specified YYYY-MM-DD date, e.g. 2018-01-02 . \n \n \n \n ?column__isnull=1 \n \n Matches rows where the column is null. \n \n \n \n ?column__notnull=1 \n \n Matches rows where the column is not null. \n \n \n \n ?column__isblank=1 \n \n Matches rows where the column is blank, meaning null or the empty string. \n \n \n \n ?column__notblank=1 \n \n Matches rows where the column is not blank.", "breadcrumbs": "[\"JSON API\", \"Table arguments\"]", "references": "[]"} {"id": "json_api:expand-foreign-keys", "page": "json_api", "ref": "expand-foreign-keys", "title": "Expanding foreign key references", "content": "Datasette can detect foreign key relationships and resolve those references into\n labels. The HTML interface does this by default for every detected foreign key\n column - you can turn that off using ?_labels=off . \n You can request foreign keys be expanded in JSON using the _labels=on or\n _label=COLUMN special query string parameters. Here's what an expanded row\n looks like: \n [\n {\n \"rowid\": 1,\n \"TreeID\": 141565,\n \"qLegalStatus\": {\n \"value\": 1,\n \"label\": \"Permitted Site\"\n },\n \"qSpecies\": {\n \"value\": 1,\n \"label\": \"Myoporum laetum :: Myoporum\"\n },\n \"qAddress\": \"501X Baker St\",\n \"SiteOrder\": 1\n }\n] \n The column in the foreign key table that is used for the label can be specified\n in metadata.json - see Specifying the label column for a table .", "breadcrumbs": "[\"JSON API\"]", "references": "[]"} {"id": "json_api:id1", "page": "json_api", "ref": "id1", "title": "JSON API", "content": "Datasette provides a JSON API for your SQLite databases. Anything you can do\n through the Datasette user interface can also be accessed as JSON via the API. \n To access the API for a page, either click on the .json link on that page or\n edit the URL and add a .json extension to it. \n If you started Datasette with the --cors option, each JSON endpoint will be\n served with the following additional HTTP headers: \n Access-Control-Allow-Origin: *\nAccess-Control-Allow-Headers: Authorization\nAccess-Control-Expose-Headers: Link \n This means JavaScript running on any domain will be able to make cross-origin\n requests to fetch the data. \n If you start Datasette without the --cors option only JavaScript running on\n the same domain as Datasette will be able to access the API.", "breadcrumbs": "[]", "references": "[]"} {"id": "json_api:id2", "page": "json_api", "ref": "id2", "title": "Table arguments", "content": "The Datasette table view takes a number of special query string arguments.", "breadcrumbs": "[\"JSON API\"]", "references": "[]"} {"id": "json_api:json-api-discover-alternate", "page": "json_api", "ref": "json-api-discover-alternate", "title": "Discovering the JSON for a page", "content": "Most of the HTML pages served by Datasette provide a mechanism for discovering their JSON equivalents using the HTML link mechanism. 
\n You can find this near the top of the source code of those pages, looking like this: \n <link rel=\"alternate\" type=\"application/json+datasette\" href=\"https://latest.datasette.io/fixtures/sortable.json\"> \n The JSON URL is also made available in a Link HTTP header for the page: \n Link: <https://latest.datasette.io/fixtures/sortable.json>; rel=\"alternate\"; type=\"application/json+datasette\"", "breadcrumbs": "[\"JSON API\"]", "references": "[]"} {"id": "json_api:json-api-shapes", "page": "json_api", "ref": "json-api-shapes", "title": "Different shapes", "content": "The default JSON representation of data from a SQLite table or custom query\n looks like this: \n {\n \"database\": \"sf-trees\",\n \"table\": \"qSpecies\",\n \"columns\": [\n \"id\",\n \"value\"\n ],\n \"rows\": [\n [\n 1,\n \"Myoporum laetum :: Myoporum\"\n ],\n [\n 2,\n \"Metrosideros excelsa :: New Zealand Xmas Tree\"\n ],\n [\n 3,\n \"Pinus radiata :: Monterey Pine\"\n ]\n ],\n \"truncated\": false,\n \"next\": \"100\",\n \"next_url\": \"http://127.0.0.1:8001/sf-trees-02c8ef1/qSpecies.json?_next=100\",\n \"query_ms\": 1.9571781158447266\n} \n The columns key lists the columns that are being returned, and the rows \n key then returns a list of lists, each one representing a row. The order of the\n values in each row corresponds to the columns. \n The _shape parameter can be used to access alternative formats for the\n rows key which may be more convenient for your application. There are six\n options: \n \n \n ?_shape=arrays - \"rows\" is the default option, shown above \n \n \n ?_shape=objects - \"rows\" is a list of JSON key/value objects \n \n \n ?_shape=array - a JSON array of objects \n \n \n ?_shape=array&_nl=on - a newline-separated list of JSON objects \n \n \n ?_shape=arrayfirst - a flat JSON array containing just the first value from each row \n \n \n ?_shape=object - a JSON object keyed using the primary keys of the rows \n \n \n _shape=objects looks like this: \n {\n \"database\": \"sf-trees\",\n ...\n \"rows\": [\n {\n \"id\": 1,\n \"value\": \"Myoporum laetum :: Myoporum\"\n },\n {\n \"id\": 2,\n \"value\": \"Metrosideros excelsa :: New Zealand Xmas Tree\"\n },\n {\n \"id\": 3,\n \"value\": \"Pinus radiata :: Monterey Pine\"\n }\n ]\n} \n _shape=array looks like this: \n [\n {\n \"id\": 1,\n \"value\": \"Myoporum laetum :: Myoporum\"\n },\n {\n \"id\": 2,\n \"value\": \"Metrosideros excelsa :: New Zealand Xmas Tree\"\n },\n {\n \"id\": 3,\n \"value\": \"Pinus radiata :: Monterey Pine\"\n }\n] \n _shape=array&_nl=on looks like this: \n {\"id\": 1, \"value\": \"Myoporum laetum :: Myoporum\"}\n{\"id\": 2, \"value\": \"Metrosideros excelsa :: New Zealand Xmas Tree\"}\n{\"id\": 3, \"value\": \"Pinus radiata :: Monterey Pine\"} \n _shape=arrayfirst looks like this: \n [1, 2, 3] \n _shape=object looks like this: \n {\n \"1\": {\n \"id\": 1,\n \"value\": \"Myoporum laetum :: Myoporum\"\n },\n \"2\": {\n \"id\": 2,\n \"value\": \"Metrosideros excelsa :: New Zealand Xmas Tree\"\n },\n \"3\": {\n \"id\": 3,\n \"value\": \"Pinus radiata :: Monterey Pine\"\n }\n} \n The object shape is only available for queries against tables - custom SQL\n queries and views do not have an obvious primary key so cannot be returned using\n this format. \n The object keys are always strings. If your table has a compound primary\n key, the object keys will be a comma-separated string.", "breadcrumbs": "[\"JSON API\"]", "references": "[]"} {"id": "metadata:id1", "page": "metadata", "ref": "id1", "title": "Metadata", "content": "Data loves metadata. 
Any time you run Datasette you can optionally include a\n JSON file with metadata about your databases and tables. Datasette will then\n display that information in the web UI. \n Run Datasette like this: \n datasette database1.db database2.db --metadata metadata.json \n Your metadata.json file can look something like this: \n {\n \"title\": \"Custom title for your index page\",\n \"description\": \"Some description text can go here\",\n \"license\": \"ODbL\",\n \"license_url\": \"https://opendatacommons.org/licenses/odbl/\",\n \"source\": \"Original Data Source\",\n \"source_url\": \"http://example.com/\"\n} \n You can optionally use YAML instead of JSON, see Using YAML for metadata . \n The above metadata will be displayed on the index page of your Datasette-powered\n site. The source and license information will also be included in the footer of\n every page served by Datasette. \n Any special HTML characters in description will be escaped. If you want to\n include HTML in your description, you can use a description_html property\n instead.", "breadcrumbs": "[]", "references": "[]"} {"id": "metadata:label-columns", "page": "metadata", "ref": "label-columns", "title": "Specifying the label column for a table", "content": "Datasette's HTML interface attempts to display foreign key references as\n labelled hyperlinks. By default, it looks for referenced tables that only have\n two columns: a primary key column and one other. It assumes that the second\n column should be used as the link label. \n If your table has more than two columns you can specify which column should be\n used for the link label with the label_column property: \n {\n \"databases\": {\n \"database1\": {\n \"tables\": {\n \"example_table\": {\n \"label_column\": \"title\"\n }\n }\n }\n }\n}", "breadcrumbs": "[\"Metadata\"]", "references": "[]"} {"id": "metadata:metadata-default-sort", "page": "metadata", "ref": "metadata-default-sort", "title": "Setting a default sort order", "content": "By default Datasette tables are sorted by primary key. You can over-ride this default for a specific table using the \"sort\" or \"sort_desc\" metadata properties: \n {\n \"databases\": {\n \"mydatabase\": {\n \"tables\": {\n \"example_table\": {\n \"sort\": \"created\"\n }\n }\n }\n }\n} \n Or use \"sort_desc\" to sort in descending order: \n {\n \"databases\": {\n \"mydatabase\": {\n \"tables\": {\n \"example_table\": {\n \"sort_desc\": \"created\"\n }\n }\n }\n }\n}", "breadcrumbs": "[\"Metadata\"]", "references": "[]"} {"id": "metadata:metadata-hiding-tables", "page": "metadata", "ref": "metadata-hiding-tables", "title": "Hiding tables", "content": "You can hide tables from the database listing view (in the same way that FTS and\n SpatiaLite tables are automatically hidden) using \"hidden\": true : \n {\n \"databases\": {\n \"database1\": {\n \"tables\": {\n \"example_table\": {\n \"hidden\": true\n }\n }\n }\n }\n}", "breadcrumbs": "[\"Metadata\"]", "references": "[]"} {"id": "metadata:metadata-page-size", "page": "metadata", "ref": "metadata-page-size", "title": "Setting a custom page size", "content": "Datasette defaults to displaying 100 rows per page, for both tables and views. You can change this default page size on a per-table or per-view basis using the \"size\" key in metadata.json : \n {\n \"databases\": {\n \"mydatabase\": {\n \"tables\": {\n \"example_table\": {\n \"size\": 10\n }\n }\n }\n }\n} \n This size can still be over-ridden by passing e.g. 
?_size=50 in the query string.", "breadcrumbs": "[\"Metadata\"]", "references": "[]"} {"id": "metadata:metadata-sortable-columns", "page": "metadata", "ref": "metadata-sortable-columns", "title": "Setting which columns can be used for sorting", "content": "Datasette allows any column to be used for sorting by default. If you need to\n control which columns are available for sorting you can do so using the optional\n sortable_columns key: \n {\n \"databases\": {\n \"database1\": {\n \"tables\": {\n \"example_table\": {\n \"sortable_columns\": [\n \"height\",\n \"weight\"\n ]\n }\n }\n }\n }\n} \n This will restrict sorting of example_table to just the height and\n weight columns. \n You can also disable sorting entirely by setting \"sortable_columns\": [] \n You can use sortable_columns to enable specific sort orders for a view called name_of_view in the database my_database like so: \n {\n \"databases\": {\n \"my_database\": {\n \"tables\": {\n \"name_of_view\": {\n \"sortable_columns\": [\n \"clicks\",\n \"impressions\"\n ]\n }\n }\n }\n }\n}", "breadcrumbs": "[\"Metadata\"]", "references": "[]"} {"id": "metadata:metadata-source-license-about", "page": "metadata", "ref": "metadata-source-license-about", "title": "Source, license and about", "content": "The three visible metadata fields you can apply to everything, specific databases or specific tables are source, license and about. All three are optional. \n source and source_url should be used to indicate where the underlying data came from. \n license and license_url should be used to indicate the license under which the data can be used. \n about and about_url can be used to link to further information about the project - an accompanying blog entry for example. \n For each of these you can provide just the *_url field and Datasette will treat that as the default link label text and display the URL directly on the page.", "breadcrumbs": "[\"Metadata\"]", "references": "[]"} {"id": "metadata:metadata-yaml", "page": "metadata", "ref": "metadata-yaml", "title": "Using YAML for metadata", "content": "Datasette accepts YAML as an alternative to JSON for your metadata configuration file. YAML is particularly useful for including multiline HTML and SQL strings. \n Here's an example of a metadata.yml file, re-using an example from Canned queries . \n title: Demonstrating Metadata from YAML\ndescription_html: |-\n
<div>
This description includes a long HTML string
</div>
\n \nlicense: ODbL\nlicense_url: https://opendatacommons.org/licenses/odbl/\ndatabases:\n fixtures:\n tables:\n no_primary_key:\n hidden: true\n queries:\n neighborhood_search:\n sql: |-\n select neighborhood, facet_cities.name, state\n from facetable join facet_cities on facetable.city_id = facet_cities.id\n where neighborhood like '%' || :text || '%' order by neighborhood;\n title: Search neighborhoods\n description_html: |-\n
<p>This demonstrates basic LIKE search</p> \n The metadata.yml file is passed to Datasette using the same --metadata option: \n datasette fixtures.db --metadata metadata.yml", "breadcrumbs": "[\"Metadata\"]", "references": "[]"} {"id": "metadata:per-database-and-per-table-metadata", "page": "metadata", "ref": "per-database-and-per-table-metadata", "title": "Per-database and per-table metadata", "content": "Metadata at the top level of the JSON will be shown on the index page and in the\n footer on every page of the site. The license and source are expected to apply to\n all of your data. \n You can also provide metadata at the per-database or per-table level, like this: \n {\n \"databases\": {\n \"database1\": {\n \"source\": \"Alternative source\",\n \"source_url\": \"http://example.com/\",\n \"tables\": {\n \"example_table\": {\n \"description_html\": \"Custom table description\",\n \"license\": \"CC BY 3.0 US\",\n \"license_url\": \"https://creativecommons.org/licenses/by/3.0/us/\"\n }\n }\n }\n }\n} \n Each of the top-level metadata fields can be used at the database and table level.", "breadcrumbs": "[\"Metadata\"]", "references": "[]"} {"id": "pages:pages", "page": "pages", "ref": "pages", "title": "Pages and API endpoints", "content": "The Datasette web application offers a number of different pages that can be accessed to explore the data in question, each of which is accompanied by an equivalent JSON API.", "breadcrumbs": "[]", "references": "[]"} {"id": "performance:performance", "page": "performance", "ref": "performance", "title": "Performance and caching", "content": "Datasette runs on top of SQLite, and SQLite has excellent performance. For small databases almost any query should return in just a few milliseconds, and larger databases (100s of MBs or even GBs of data) should perform extremely well provided your queries make sensible use of database indexes. \n That said, there are a number of tricks you can use to improve Datasette's performance.", "breadcrumbs": "[]", "references": "[]"} {"id": "performance:performance-immutable-mode", "page": "performance", "ref": "performance-immutable-mode", "title": "Immutable mode", "content": "If you can be certain that a SQLite database file will not be changed by another process you can tell Datasette to open that file in immutable mode . \n Doing so will disable all locking and change detection, which can result in improved query performance. \n This also enables further optimizations relating to HTTP caching, described below. \n To open a file in immutable mode, pass it to the datasette command using the -i option: \n datasette -i data.db \n When you open a file in immutable mode like this, Datasette will also calculate and cache the row counts for each table in that database when it first starts up, further improving performance.", "breadcrumbs": "[\"Performance and caching\"]", "references": "[]"} {"id": "performance:performance-inspect", "page": "performance", "ref": "performance-inspect", "title": "Using \"datasette inspect\"", "content": "Counting the rows in a table can be a very expensive operation on larger databases. In immutable mode Datasette performs this count only once and caches the results, but this can still cause server startup time to increase by several seconds or more. \n If you know that a database is never going to change you can precalculate the table row counts once and store them in a JSON file, then use that file when you later start the server. 
\n To create a JSON file containing the calculated row counts for a database, use the following: \n datasette inspect data.db --inspect-file=counts.json \n Then later you can start Datasette against the counts.json file and use it to skip the row counting step and speed up server startup: \n datasette -i data.db --inspect-file=counts.json \n You need to use the -i immutable mode against the database file here or the counts from the JSON file will be ignored. \n You will rarely need to use this optimization in everyday use, but several of the datasette publish commands described in Publishing data use this optimization for better performance when deploying a database file to a hosting provider.", "breadcrumbs": "[\"Performance and caching\"]", "references": "[]"} {"id": "plugin_hooks:plugin-hook-forbidden", "page": "plugin_hooks", "ref": "plugin-hook-forbidden", "title": "forbidden(datasette, request, message)", "content": "datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to render templates or execute SQL queries. \n \n \n \n request - Request object \n \n The current HTTP request. \n \n \n \n message - string \n \n A message hinting at why the request was forbidden. \n \n \n \n Plugins can use this to customize how Datasette responds when a 403 Forbidden error occurs - usually because a page failed a permission check, see Permissions . \n If a plugin hook wishes to react to the error, it should return a Response object . \n This example returns a redirect to a /-/login page: \n from datasette import hookimpl, Response\nfrom urllib.parse import urlencode\n\n\n@hookimpl\ndef forbidden(request, message):\n return Response.redirect(\n \"/-/login?\" + urlencode({\"message\": message})\n ) \n The function can alternatively return an awaitable function if it needs to make any asynchronous method calls. This example renders a template: \n from datasette import hookimpl, Response\n\n\n@hookimpl\ndef forbidden(datasette, request):\n async def inner():\n return Response.html(\n await datasette.render_template(\n \"render_message.html\", request=request\n )\n )\n\n return inner", "breadcrumbs": "[\"Plugin hooks\"]", "references": "[]"} {"id": "plugin_hooks:plugin-hook-register-magic-parameters", "page": "plugin_hooks", "ref": "plugin-hook-register-magic-parameters", "title": "register_magic_parameters(datasette)", "content": "datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) . \n \n \n \n Magic parameters can be used to add automatic parameters to canned queries . This plugin hook allows additional magic parameters to be defined by plugins. \n Magic parameters all take this format: _prefix_rest_of_parameter . The prefix indicates which magic parameter function should be called - the rest of the parameter is passed as an argument to that function. \n To register a new function, return it as a tuple of (string prefix, function) from this hook. The function you register should take two arguments: key and request , where key is the rest_of_parameter portion of the parameter and request is the current Request object . 
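Once registered, a magic parameter can be referenced from the SQL of a canned query. As a sketch, a canned query using the :_uuid_new parameter registered in the example below might be configured like this (the query name is hypothetical): \n queries:\n new_uuid:\n sql: select :_uuid_new as uuid 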
\n This example registers two new magic parameters: :_request_http_version returning the HTTP version of the current request, and :_uuid_new which returns a new UUID: \n from uuid import uuid4\n\nfrom datasette import hookimpl\n\n\ndef uuid(key, request):\n if key == \"new\":\n return str(uuid4())\n else:\n raise KeyError\n\n\ndef request(key, request):\n if key == \"http_version\":\n return request.scope[\"http_version\"]\n else:\n raise KeyError\n\n\n@hookimpl\ndef register_magic_parameters(datasette):\n return [\n (\"request\", request),\n (\"uuid\", uuid),\n ]", "breadcrumbs": "[\"Plugin hooks\"]", "references": "[]"} {"id": "plugins:deploying-plugins-using-datasette-publish", "page": "plugins", "ref": "deploying-plugins-using-datasette-publish", "title": "Deploying plugins using datasette publish", "content": "The datasette publish and datasette package commands both take an optional --install argument. You can use this one or more times to tell Datasette to pip install specific plugins as part of the process: \n datasette publish cloudrun mydb.db --install=datasette-vega \n You can use the name of a package on PyPI or any of the other valid arguments to pip install such as a URL to a .zip file: \n datasette publish cloudrun mydb.db \\\n --install=https://url-to-my-package.zip", "breadcrumbs": "[\"Plugins\", \"Installing plugins\"]", "references": "[]"} {"id": "plugins:one-off-plugins-using-plugins-dir", "page": "plugins", "ref": "one-off-plugins-using-plugins-dir", "title": "One-off plugins using --plugins-dir", "content": "You can also define one-off per-project plugins by saving them as plugin_name.py files in a plugins/ folder and then passing that folder to datasette using the --plugins-dir option: \n datasette mydb.db --plugins-dir=plugins/", "breadcrumbs": "[\"Plugins\", \"Installing plugins\"]", "references": "[]"} {"id": "plugins:plugins-configuration", "page": "plugins", "ref": "plugins-configuration", "title": "Plugin configuration", "content": "Plugins can have their own configuration, embedded in a Metadata file. Configuration options for plugins live within a \"plugins\" key in that file, which can be included at the root, database or table level. \n Here is an example of some plugin configuration for a specific table: \n {\n \"databases\": {\n \"sf-trees\": {\n \"tables\": {\n \"Street_Tree_List\": {\n \"plugins\": {\n \"datasette-cluster-map\": {\n \"latitude_column\": \"lat\",\n \"longitude_column\": \"lng\"\n }\n }\n }\n }\n }\n }\n} \n This tells the datasette-cluster-map plugin which latitude and longitude columns should be used for a table called Street_Tree_List inside a database file called sf-trees.db .", "breadcrumbs": "[\"Plugins\"]", "references": "[]"} {"id": "plugins:plugins-configuration-secret", "page": "plugins", "ref": "plugins-configuration-secret", "title": "Secret configuration values", "content": "Any values embedded in metadata.json will be visible to anyone who views the /-/metadata page of your Datasette instance. Some plugins may need configuration that should stay secret - API keys for example. There are two ways in which you can store secret configuration values. \n As environment variables . If your secret lives in an environment variable that is available to the Datasette process, you can indicate that the configuration value should be read from that environment variable like so: \n {\n \"plugins\": {\n \"datasette-auth-github\": {\n \"client_secret\": {\n \"$env\": \"GITHUB_CLIENT_SECRET\"\n }\n }\n }\n} \n As values in separate files . 
Your secrets can also live in files on disk. To specify that a secret should be read from a file, provide the full file path like this: \n {\n \"plugins\": {\n \"datasette-auth-github\": {\n \"client_secret\": {\n \"$file\": \"/secrets/client-secret\"\n }\n }\n }\n} \n If you are publishing your data using the datasette publish family of commands, you can use the --plugin-secret option to set these secrets at publish time. For example, using Heroku you might run the following command: \n $ datasette publish heroku my_database.db \\\n --name my-heroku-app-demo \\\n --install=datasette-auth-github \\\n --plugin-secret datasette-auth-github client_id your_client_id \\\n --plugin-secret datasette-auth-github client_secret your_client_secret \n This will set the necessary environment variables and add the following to the deployed metadata.json : \n {\n \"plugins\": {\n \"datasette-auth-github\": {\n \"client_id\": {\n \"$env\": \"DATASETTE_AUTH_GITHUB_CLIENT_ID\"\n },\n \"client_secret\": {\n \"$env\": \"DATASETTE_AUTH_GITHUB_CLIENT_SECRET\"\n }\n }\n }\n}", "breadcrumbs": "[\"Plugins\", \"Plugin configuration\"]", "references": "[]"} {"id": "plugins:plugins-installing", "page": "plugins", "ref": "plugins-installing", "title": "Installing plugins", "content": "If a plugin has been packaged for distribution using setuptools you can use the plugin by installing it alongside Datasette in the same virtual environment or Docker container. \n You can install plugins using the datasette install command: \n datasette install datasette-vega \n You can uninstall plugins with datasette uninstall : \n datasette uninstall datasette-vega \n You can upgrade plugins with datasette install --upgrade or datasette install -U : \n datasette install -U datasette-vega \n This command can also be used to upgrade Datasette itself to the latest released version: \n datasette install -U datasette \n These commands are thin wrappers around pip install and pip uninstall , which ensure they run pip in the same virtual environment as Datasette itself.", "breadcrumbs": "[\"Plugins\"]", "references": "[]"} {"id": "publish:publishing", "page": "publish", "ref": "publishing", "title": "Publishing data", "content": "Datasette includes tools for publishing and deploying your data to the internet. The datasette publish command will deploy a new Datasette instance containing your databases directly to a Heroku or Google Cloud hosting account. You can also use datasette package to create a Docker image that bundles your databases together with the datasette application that is used to serve them.", "breadcrumbs": "[]", "references": "[]"} {"id": "settings:config-dir", "page": "settings", "ref": "config-dir", "title": "Configuration directory mode", "content": "Normally you configure Datasette using command-line options. For a Datasette instance with custom templates, custom plugins, a static directory and several databases this can get quite verbose: \n $ datasette one.db two.db \\\n --metadata=metadata.json \\\n --template-dir=templates/ \\\n --plugins-dir=plugins \\\n --static css:css \n As an alternative to this, you can run Datasette in configuration directory mode. 
Create a directory with the following structure: \n # In a directory called my-app:\nmy-app/one.db\nmy-app/two.db\nmy-app/metadata.json\nmy-app/templates/index.html\nmy-app/plugins/my_plugin.py\nmy-app/static/my.css \n Now start Datasette by providing the path to that directory: \n $ datasette my-app/ \n Datasette will detect the files in that directory and automatically configure itself using them. It will serve all *.db files that it finds, will load metadata.json if it exists, and will load the templates , plugins and static folders if they are present. \n The files that can be included in this directory are as follows. All are optional. \n \n \n *.db (or *.sqlite3 or *.sqlite ) - SQLite database files that will be served by Datasette \n \n \n metadata.json - Metadata for those databases - metadata.yaml or metadata.yml can be used as well \n \n \n inspect-data.json - the result of running datasette inspect *.db --inspect-file=inspect-data.json from the configuration directory - any database files listed here will be treated as immutable, so they should not be changed while Datasette is running \n \n \n settings.json - settings that would normally be passed using --setting - here they should be stored as a JSON object of key/value pairs \n \n \n templates/ - a directory containing Custom templates \n \n \n plugins/ - a directory containing plugins, see Writing one-off plugins \n \n \n static/ - a directory containing static files - these will be served from /static/filename.txt , see Serving static files", "breadcrumbs": "[\"Settings\"]", "references": "[]"} {"id": "settings:id1", "page": "settings", "ref": "id1", "title": "Settings", "content": "", "breadcrumbs": "[]", "references": "[]"} {"id": "settings:id2", "page": "settings", "ref": "id2", "title": "Settings", "content": "The following options can be set using --setting name value , or by storing them in the settings.json file for use with Configuration directory mode .", "breadcrumbs": "[\"Settings\"]", "references": "[]"} {"id": "settings:setting-allow-csv-stream", "page": "settings", "ref": "setting-allow-csv-stream", "title": "allow_csv_stream", "content": "Enables the CSV export feature where an entire table\n (potentially hundreds of thousands of rows) can be exported as a single CSV\n file. This is turned on by default - you can turn it off like this: \n datasette mydatabase.db --setting allow_csv_stream off", "breadcrumbs": "[\"Settings\", \"Settings\"]", "references": "[]"} {"id": "settings:setting-allow-download", "page": "settings", "ref": "setting-allow-download", "title": "allow_download", "content": "Should users be able to download the original SQLite database using a link on the database index page? This is turned on by default. However, databases can only be downloaded if they are served in immutable mode and not in-memory. If downloading is unavailable for either of these reasons, the download link is hidden even if allow_download is on. To disable database downloads, use the following: \n datasette mydatabase.db --setting allow_download off", "breadcrumbs": "[\"Settings\", \"Settings\"]", "references": "[]"} {"id": "settings:setting-allow-facet", "page": "settings", "ref": "setting-allow-facet", "title": "allow_facet", "content": "Allow users to specify columns they would like to facet on using the ?_facet=COLNAME URL parameter to the table view. \n This is enabled by default. If disabled, facets will still be displayed if they have been specifically enabled in metadata.json configuration for the table. 
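For example, facets can be turned on by default for a table by listing columns under the \"facets\" key in metadata.json (the database, table and column names here are hypothetical): \n {\n \"databases\": {\n \"mydatabase\": {\n \"tables\": {\n \"example_table\": {\n \"facets\": [\"category\"]\n }\n }\n }\n }\n} 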
\n Here's how to disable this feature: \n datasette mydatabase.db --setting allow_facet off", "breadcrumbs": "[\"Settings\", \"Settings\"]", "references": "[]"} {"id": "settings:setting-base-url", "page": "settings", "ref": "setting-base-url", "title": "base_url", "content": "If you are running Datasette behind a proxy, it may be useful to change the root path used for the Datasette instance. \n For example, if you are sending traffic from https://www.example.com/tools/datasette/ through to a proxied Datasette instance you may wish Datasette to use /tools/datasette/ as its root URL. \n You can do that like so: \n datasette mydatabase.db --setting base_url /tools/datasette/", "breadcrumbs": "[\"Settings\", \"Settings\"]", "references": "[]"} {"id": "settings:setting-default-allow-sql", "page": "settings", "ref": "setting-default-allow-sql", "title": "default_allow_sql", "content": "Should users be able to execute arbitrary SQL queries by default? \n Setting this to off causes permission checks for execute-sql to fail by default. \n datasette mydatabase.db --setting default_allow_sql off \n This setting is one of two ways to achieve this: the other is to add \"allow_sql\": false to your metadata.json file, as described in Controlling the ability to execute arbitrary SQL . The setting is the more convenient option.", "breadcrumbs": "[\"Settings\", \"Settings\"]", "references": "[]"} {"id": "settings:setting-default-cache-ttl", "page": "settings", "ref": "setting-default-cache-ttl", "title": "default_cache_ttl", "content": "Default HTTP caching max-age header in seconds, used for Cache-Control: max-age=X . Can be over-ridden on a per-request basis using the ?_ttl= query string parameter. Set this to 0 to disable HTTP caching entirely. Defaults to 5 seconds. \n datasette mydatabase.db --setting default_cache_ttl 60", "breadcrumbs": "[\"Settings\", \"Settings\"]", "references": "[]"} {"id": "settings:setting-default-facet-size", "page": "settings", "ref": "setting-default-facet-size", "title": "default_facet_size", "content": "The default number of unique rows returned by Facets is 30. You can customize it like this: \n datasette mydatabase.db --setting default_facet_size 50", "breadcrumbs": "[\"Settings\", \"Settings\"]", "references": "[]"} {"id": "settings:setting-default-page-size", "page": "settings", "ref": "setting-default-page-size", "title": "default_page_size", "content": "The default number of rows returned by the table page. You can over-ride this on a per-page basis using the ?_size=80 query string parameter, provided you do not specify a value higher than the max_returned_rows setting. You can set this default using --setting like so: \n datasette mydatabase.db --setting default_page_size 50", "breadcrumbs": "[\"Settings\", \"Settings\"]", "references": "[]"} {"id": "settings:setting-facet-suggest-time-limit-ms", "page": "settings", "ref": "setting-facet-suggest-time-limit-ms", "title": "facet_suggest_time_limit_ms", "content": "When Datasette calculates suggested facets it needs to run a SQL query for every column in your table. The default for this time limit is 50ms to account for the fact that it needs to run once for every column. If the time limit is exceeded the column will not be suggested as a facet. 
\n You can increase this time limit like so: \n datasette mydatabase.db --setting facet_suggest_time_limit_ms 500", "breadcrumbs": "[\"Settings\", \"Settings\"]", "references": "[]"} {"id": "settings:setting-facet-time-limit-ms", "page": "settings", "ref": "setting-facet-time-limit-ms", "title": "facet_time_limit_ms", "content": "This is the time limit Datasette allows for calculating a facet, which defaults to 200ms: \n datasette mydatabase.db --setting facet_time_limit_ms 1000", "breadcrumbs": "[\"Settings\", \"Settings\"]", "references": "[]"} {"id": "settings:setting-force-https-urls", "page": "settings", "ref": "setting-force-https-urls", "title": "force_https_urls", "content": "Forces self-referential URLs in the JSON output to always use the https:// \n protocol. This is useful for cases where the application itself is hosted using\n HTTP but is served to the outside world via a proxy that enables HTTPS. \n datasette mydatabase.db --setting force_https_urls 1", "breadcrumbs": "[\"Settings\", \"Settings\"]", "references": "[]"} {"id": "settings:setting-max-csv-mb", "page": "settings", "ref": "setting-max-csv-mb", "title": "max_csv_mb", "content": "The maximum size of CSV that can be exported, in megabytes. Defaults to 100MB.\n You can disable the limit entirely by setting this to 0: \n datasette mydatabase.db --setting max_csv_mb 0", "breadcrumbs": "[\"Settings\", \"Settings\"]", "references": "[]"} {"id": "settings:setting-max-returned-rows", "page": "settings", "ref": "setting-max-returned-rows", "title": "max_returned_rows", "content": "Datasette returns a maximum of 1,000 rows of data at a time. If you execute a query that returns more than 1,000 rows, Datasette will return the first 1,000 and include a warning that the result set has been truncated. You can use OFFSET/LIMIT or other methods in your SQL to implement pagination if you need to return more than 1,000 rows. \n You can increase or decrease this limit like so: \n datasette mydatabase.db --setting max_returned_rows 2000", "breadcrumbs": "[\"Settings\", \"Settings\"]", "references": "[]"} {"id": "settings:setting-publish-secrets", "page": "settings", "ref": "setting-publish-secrets", "title": "Using secrets with datasette publish", "content": "The datasette publish and datasette package commands both generate a secret for you automatically when Datasette is deployed. \n This means that every time you deploy a new version of a Datasette project, a new secret will be generated. This will cause signed cookies to become invalid on every fresh deploy. \n You can fix this by creating a secret that will be used for multiple deploys and passing it using the --secret option: \n datasette publish cloudrun mydb.db --service=my-service --secret=cdb19e94283a20f9d42cca5
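 \n Because the secret is used to sign Datasette cookies such as ds_actor , a stable secret means cookies issued by one deploy remain valid for the next. A minimal sketch of that property, assuming the Datasette class constructor's secret= parameter together with the datasette.sign() and datasette.unsign() methods (the secret value is the example one above): \n from datasette.app import Datasette\n\nsecret = \"cdb19e94283a20f9d42cca5\"\n\n# Two instances sharing a secret can verify each other's signatures\nds_one = Datasette(secret=secret)\nds_two = Datasette(secret=secret)\n\ntoken = ds_one.sign({\"id\": \"root\"}, \"actor\")\nassert ds_two.unsign(token, \"actor\") == {\"id\": \"root\"}", "breadcrumbs": "[\"Settings\"]", "references": "[]"}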