{"id": "plugin_hooks:plugin-hook-filters-from-request", "page": "plugin_hooks", "ref": "plugin-hook-filters-from-request", "title": "filters_from_request(request, database, table, datasette)", "content": "request - Request object \n \n The current HTTP request. \n \n \n \n database - string \n \n The name of the database. \n \n \n \n table - string \n \n The name of the table. \n \n \n \n datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries. \n \n \n \n This hook runs on the table page, and can influence the where clause of the SQL query used to populate that page, based on query string arguments on the incoming request. \n The hook should return an instance of datasette.filters.FilterArguments which has one required and three optional arguments: \n return FilterArguments(\n where_clauses=[\"id > :max_id\"],\n params={\"max_id\": 5},\n human_descriptions=[\"max_id is greater than 5\"],\n extra_context={},\n) \n The arguments to the FilterArguments class constructor are as follows: \n \n \n where_clauses - list of strings, required \n \n A list of SQL fragments that will be inserted into the SQL query, joined by the and operator. These can include :named parameters which will be populated using data in params . \n \n \n \n params - dictionary, optional \n \n Additional keyword arguments to be used when the query is executed. These should match any :arguments in the where clauses. \n \n \n \n human_descriptions - list of strings, optional \n \n These strings will be included in the human-readable description at the top of the page and the page . \n \n \n \n extra_context - dictionary, optional \n \n Additional context variables that should be made available to the table.html template when it is rendered. \n \n \n \n This example plugin causes 0 results to be returned if ?_nothing=1 is added to the URL: \n from datasette import hookimpl\nfrom datasette.filters import FilterArguments\n\n\n@hookimpl\ndef filters_from_request(self, request):\n if request.args.get(\"_nothing\"):\n return FilterArguments(\n [\"1 = 0\"], human_descriptions=[\"NOTHING\"]\n ) \n Example: datasette-leaflet-freedraw", "breadcrumbs": "[\"Plugin hooks\"]", "references": "[{\"href\": \"https://datasette.io/plugins/datasette-leaflet-freedraw\", \"label\": \"datasette-leaflet-freedraw\"}]"} {"id": "plugin_hooks:plugin-hook-extra-template-vars", "page": "plugin_hooks", "ref": "plugin-hook-extra-template-vars", "title": "extra_template_vars(template, database, table, columns, view_name, request, datasette)", "content": "Extra template variables that should be made available in the rendered template context. \n \n \n template - string \n \n The template that is being rendered, e.g. database.html \n \n \n \n database - string or None \n \n The name of the database, or None if the page does not correspond to a database (e.g. the root page) \n \n \n \n table - string or None \n \n The name of the table, or None if the page does not correct to a table \n \n \n \n columns - list of strings or None \n \n The names of the database columns that will be displayed on this page. None if the page does not contain a table. \n \n \n \n view_name - string \n \n The name of the view being displayed. ( index , database , table , and row are the most important ones.) \n \n \n \n request - Request object or None \n \n The current HTTP request. This can be None if the request object is not available. 
\n \n \n \n datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) \n \n \n \n This hook can return one of three different types: \n \n \n Dictionary \n \n If you return a dictionary its keys and values will be merged into the template context. \n \n \n \n Function that returns a dictionary \n \n If you return a function it will be executed. If it returns a dictionary those values will be merged into the template context. \n \n \n \n Function that returns an awaitable function that returns a dictionary \n \n You can also return a function which returns an awaitable function which returns a dictionary. \n \n \n \n Datasette runs Jinja2 in async mode , which means you can add awaitable functions to the template scope and they will be automatically awaited when they are rendered by the template. \n Here's an example plugin that adds a \"user_agent\" variable to the template context containing the current request's User-Agent header: \n @hookimpl\ndef extra_template_vars(request):\n return {\"user_agent\": request.headers.get(\"user-agent\")} \n This example returns an awaitable function which adds a list of hidden_table_names to the context: \n @hookimpl\ndef extra_template_vars(datasette, database):\n async def hidden_table_names():\n if database:\n db = datasette.databases[database]\n return {\n \"hidden_table_names\": await db.hidden_table_names()\n }\n else:\n return {}\n\n return hidden_table_names \n And here's an example which adds a sql_first(sql_query) function which executes a SQL statement and returns the first column of the first row of results: \n @hookimpl\ndef extra_template_vars(datasette, database):\n async def sql_first(sql, dbname=None):\n dbname = (\n dbname\n or database\n or next(iter(datasette.databases.keys()))\n )\n result = await datasette.execute(dbname, sql)\n return result.rows[0][0]\n\n return {\"sql_first\": sql_first} \n You can then use the new function in a template like so: \n SQLite version: {{ sql_first(\"select sqlite_version()\") }} \n Examples: datasette-search-all , datasette-template-sql", "breadcrumbs": "[\"Plugin hooks\"]", "references": "[{\"href\": \"https://jinja.palletsprojects.com/en/2.10.x/api/#async-support\", \"label\": \"async mode\"}, {\"href\": \"https://datasette.io/plugins/datasette-search-all\", \"label\": \"datasette-search-all\"}, {\"href\": \"https://datasette.io/plugins/datasette-template-sql\", \"label\": \"datasette-template-sql\"}]"} {"id": "plugin_hooks:plugin-hook-extra-js-urls", "page": "plugin_hooks", "ref": "plugin-hook-extra-js-urls", "title": "extra_js_urls(template, database, table, columns, view_name, request, datasette)", "content": "This takes the same arguments as extra_template_vars(...) \n This works in the same way as extra_css_urls() but for JavaScript. You can\n return a list of URLs, a list of dictionaries or an awaitable function that returns those things: \n from datasette import hookimpl\n\n\n@hookimpl\ndef extra_js_urls():\n return [\n {\n \"url\": \"https://code.jquery.com/jquery-3.3.1.slim.min.js\",\n \"sri\": \"sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo\",\n }\n ] \n You can also return URLs to files from your plugin's static/ directory, if\n you have one: \n @hookimpl\ndef extra_js_urls():\n return [\"/-/static-plugins/your-plugin/app.js\"] \n Note that your-plugin here should be the hyphenated plugin name - the name that is displayed in the list on the /-/plugins debug page.
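\n The awaitable form mentioned above is not otherwise illustrated. Here is a minimal sketch of that pattern - the plugin name and the js_url configuration key are hypothetical: \n from datasette import hookimpl\n\n\n@hookimpl\ndef extra_js_urls(datasette):\n async def inner():\n # Hypothetical: read a URL from this plugin's configuration,\n # falling back to a file in the plugin's static/ directory\n config = datasette.plugin_config(\"your-plugin\") or {}\n return [config.get(\"js_url\", \"/-/static-plugins/your-plugin/app.js\")]\n\n return inner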
\n If your code uses JavaScript modules you should include the \"module\": True key. See Custom CSS and JavaScript for more details. \n @hookimpl\ndef extra_js_urls():\n return [\n {\n \"url\": \"/-/static-plugins/your-plugin/app.js\",\n \"module\": True,\n }\n ] \n Examples: datasette-cluster-map , datasette-vega", "breadcrumbs": "[\"Plugin hooks\"]", "references": "[{\"href\": \"https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules\", \"label\": \"JavaScript modules\"}, {\"href\": \"https://datasette.io/plugins/datasette-cluster-map\", \"label\": \"datasette-cluster-map\"}, {\"href\": \"https://datasette.io/plugins/datasette-vega\", \"label\": \"datasette-vega\"}]"} {"id": "plugin_hooks:plugin-hook-extra-css-urls", "page": "plugin_hooks", "ref": "plugin-hook-extra-css-urls", "title": "extra_css_urls(template, database, table, columns, view_name, request, datasette)", "content": "This takes the same arguments as extra_template_vars(...) \n Return a list of extra CSS URLs that should be included on the page. These can\n take advantage of the CSS class hooks described in Custom pages and templates . \n This can be a list of URLs: \n from datasette import hookimpl\n\n\n@hookimpl\ndef extra_css_urls():\n return [\n \"https://stackpath.bootstrapcdn.com/bootstrap/4.1.0/css/bootstrap.min.css\"\n ] \n Or a list of dictionaries defining both a URL and an\n SRI hash : \n @hookimpl\ndef extra_css_urls():\n return [\n {\n \"url\": \"https://stackpath.bootstrapcdn.com/bootstrap/4.1.0/css/bootstrap.min.css\",\n \"sri\": \"sha384-9gVQ4dYFwwWSjIDZnLEWnxCjeSWFphJiwGPXr1jddIhOegiu1FwO5qRGvFXOdJZ4\",\n }\n ] \n This function can also return an awaitable function, useful if it needs to run any async code: \n @hookimpl\ndef extra_css_urls(datasette):\n async def inner():\n db = datasette.get_database()\n results = await db.execute(\n \"select url from css_files\"\n )\n return [r[0] for r in results]\n\n return inner \n Examples: datasette-cluster-map , datasette-vega", "breadcrumbs": "[\"Plugin hooks\"]", "references": "[{\"href\": \"https://www.srihash.org/\", \"label\": \"SRI hash\"}, {\"href\": \"https://datasette.io/plugins/datasette-cluster-map\", \"label\": \"datasette-cluster-map\"}, {\"href\": \"https://datasette.io/plugins/datasette-vega\", \"label\": \"datasette-vega\"}]"} {"id": "plugin_hooks:plugin-hook-extra-body-script", "page": "plugin_hooks", "ref": "plugin-hook-extra-body-script", "title": "extra_body_script(template, database, table, columns, view_name, request, datasette)", "content": "Extra JavaScript to be added to a <script> block at the end of the <body> element on the page. \n This takes the same arguments as extra_template_vars(...) \n The template , database , table and view_name options can be used to return different code depending on which template is being rendered and which database or table are being processed. \n The datasette instance is provided primarily so that you can consult any plugin configuration options that may have been set, using the datasette.plugin_config(plugin_name) method documented above. \n This function can return a string containing JavaScript, or a dictionary as described below, or a function or awaitable function that returns a string or dictionary. 
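\n The simplest form returns a plain string of JavaScript. A minimal sketch (the logged message is purely illustrative): \n @hookimpl\ndef extra_body_script(view_name):\n return \"console.log('rendering view: {}');\".format(\n view_name\n )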
\n Use a dictionary if you want to specify that the code should be placed in a <script type=\"module\">...</script> element: \n @hookimpl\ndef extra_body_script():\n return {\n \"module\": True,\n \"script\": \"console.log('Your JavaScript goes here...')\",\n } \n This will add the following to the end of your page: \n <script type=\"module\">console.log('Your JavaScript goes here...')</script> \n Example: datasette-cluster-map", "breadcrumbs": "[\"Plugin hooks\"]", "references": "[{\"href\": \"https://datasette.io/plugins/datasette-cluster-map\", \"label\": \"datasette-cluster-map\"}]"} {"id": "plugin_hooks:plugin-hook-database-actions", "page": "plugin_hooks", "ref": "plugin-hook-database-actions", "title": "database_actions(datasette, actor, database, request)", "content": "datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries. \n \n \n \n actor - dictionary or None \n \n The currently authenticated actor . \n \n \n \n database - string \n \n The name of the database. \n \n \n \n request - Request object \n \n The current HTTP request. \n \n \n \n This hook is similar to table_actions(datasette, actor, database, table, request) but populates an actions menu on the database page. \n Example: datasette-graphql", "breadcrumbs": "[\"Plugin hooks\"]", "references": "[{\"href\": \"https://datasette.io/plugins/datasette-graphql\", \"label\": \"datasette-graphql\"}]"} {"id": "plugin_hooks:plugin-hook-canned-queries", "page": "plugin_hooks", "ref": "plugin-hook-canned-queries", "title": "canned_queries(datasette, database, actor)", "content": "datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries. \n \n \n \n database - string \n \n The name of the database. \n \n \n \n actor - dictionary or None \n \n The currently authenticated actor . \n \n \n \n Use this hook to return a dictionary of additional canned query definitions for the specified database. The return value should be the same shape as the JSON described in the canned query documentation. \n from datasette import hookimpl\n\n\n@hookimpl\ndef canned_queries(datasette, database):\n if database == \"mydb\":\n return {\n \"my_query\": {\n \"sql\": \"select * from my_table where id > :min_id\"\n }\n } \n The hook can alternatively return an awaitable function that returns a dictionary. Here's an example that returns queries that have been stored in the saved_queries database table, if one exists: \n from datasette import hookimpl\n\n\n@hookimpl\ndef canned_queries(datasette, database):\n async def inner():\n db = datasette.get_database(database)\n if await db.table_exists(\"saved_queries\"):\n results = await db.execute(\n \"select name, sql from saved_queries\"\n )\n return {\n result[\"name\"]: {\"sql\": result[\"sql\"]}\n for result in results\n }\n\n return inner \n The actor parameter can be used to include the currently authenticated actor in your decision.
Here's an example that returns saved queries that were saved by that actor: \n from datasette import hookimpl\n\n\n@hookimpl\ndef canned_queries(datasette, database, actor):\n async def inner():\n db = datasette.get_database(database)\n if actor is not None and await db.table_exists(\n \"saved_queries\"\n ):\n results = await db.execute(\n \"select name, sql from saved_queries where actor_id = :id\",\n {\"id\": actor[\"id\"]},\n )\n return {\n result[\"name\"]: {\"sql\": result[\"sql\"]}\n for result in results\n }\n\n return inner \n Example: datasette-saved-queries", "breadcrumbs": "[\"Plugin hooks\"]", "references": "[{\"href\": \"https://datasette.io/plugins/datasette-saved-queries\", \"label\": \"datasette-saved-queries\"}]"} {"id": "plugin_hooks:plugin-hook-actor-from-request", "page": "plugin_hooks", "ref": "plugin-hook-actor-from-request", "title": "actor_from_request(datasette, request)", "content": "datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries. \n \n \n \n request - Request object \n \n The current HTTP request. \n \n \n \n This is part of Datasette's authentication and permissions system . The function should attempt to authenticate an actor (either a user or an API actor of some sort) based on information in the request. \n If it cannot authenticate an actor, it should return None . Otherwise it should return a dictionary representing that actor. \n Here's an example that authenticates the actor based on an incoming API key: \n from datasette import hookimpl\nimport secrets\n\nSECRET_KEY = \"this-is-a-secret\"\n\n\n@hookimpl\ndef actor_from_request(datasette, request):\n authorization = (\n request.headers.get(\"authorization\") or \"\"\n )\n expected = \"Bearer {}\".format(SECRET_KEY)\n\n if secrets.compare_digest(authorization, expected):\n return {\"id\": \"bot\"} \n If you install this in your plugins directory you can test it like this: \n $ curl -H 'Authorization: Bearer this-is-a-secret' http://localhost:8003/-/actor.json \n Instead of returning a dictionary, this function can return an awaitable function which itself returns either None or a dictionary. This is useful for authentication functions that need to make a database query - for example: \n from datasette import hookimpl\n\n\n@hookimpl\ndef actor_from_request(datasette, request):\n async def inner():\n token = request.args.get(\"_token\")\n if not token:\n return None\n # Look up ?_token=xxx in sessions table\n result = await datasette.get_database().execute(\n \"select count(*) from sessions where token = ?\",\n [token],\n )\n if result.first()[0]:\n return {\"token\": token}\n else:\n return None\n\n return inner \n Example: datasette-auth-tokens", "breadcrumbs": "[\"Plugin hooks\"]", "references": "[{\"href\": \"https://datasette.io/plugins/datasette-auth-tokens\", \"label\": \"datasette-auth-tokens\"}]"} {"id": "plugin_hooks:plugin-asgi-wrapper", "page": "plugin_hooks", "ref": "plugin-asgi-wrapper", "title": "asgi_wrapper(datasette)", "content": "Return an ASGI middleware wrapper function that will be applied to the Datasette ASGI application. \n This is a very powerful hook. You can use it to manipulate the entire Datasette response, or even to configure new URL routes that will be handled by your own custom code. \n You can write your ASGI code directly against the low-level specification, or you can use the middleware utilities provided by an ASGI framework such as Starlette . 
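\n At its core the hook returns a function that takes the ASGI application and returns a wrapped application. A minimal no-op sketch of that shape, before the more complete example below: \n from datasette import hookimpl\n\n\n@hookimpl\ndef asgi_wrapper(datasette):\n def wrap(app):\n async def wrapped_app(scope, receive, send):\n # Inspect or modify scope, receive and send here as needed\n await app(scope, receive, send)\n\n return wrapped_app\n\n return wrap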
\n This example plugin adds a x-databases HTTP header listing the currently attached databases: \n from datasette import hookimpl\nfrom functools import wraps\n\n\n@hookimpl\ndef asgi_wrapper(datasette):\n def wrap_with_databases_header(app):\n @wraps(app)\n async def add_x_databases_header(\n scope, receive, send\n ):\n async def wrapped_send(event):\n if event[\"type\"] == \"http.response.start\":\n original_headers = (\n event.get(\"headers\") or []\n )\n event = {\n \"type\": event[\"type\"],\n \"status\": event[\"status\"],\n \"headers\": original_headers\n + [\n [\n b\"x-databases\",\n \", \".join(\n datasette.databases.keys()\n ).encode(\"utf-8\"),\n ]\n ],\n }\n await send(event)\n\n await app(scope, receive, wrapped_send)\n\n return add_x_databases_header\n\n return wrap_with_databases_header \n Examples: datasette-cors , datasette-pyinstrument , datasette-total-page-time", "breadcrumbs": "[\"Plugin hooks\"]", "references": "[{\"href\": \"https://asgi.readthedocs.io/\", \"label\": \"ASGI\"}, {\"href\": \"https://www.starlette.io/middleware/\", \"label\": \"Starlette\"}, {\"href\": \"https://datasette.io/plugins/datasette-cors\", \"label\": \"datasette-cors\"}, {\"href\": \"https://datasette.io/plugins/datasette-pyinstrument\", \"label\": \"datasette-pyinstrument\"}, {\"href\": \"https://datasette.io/plugins/datasette-total-page-time\", \"label\": \"datasette-total-page-time\"}]"} {"id": "authentication:permissionsdebugview", "page": "authentication", "ref": "permissionsdebugview", "title": "The permissions debug tool", "content": "The debug tool at /-/permissions is only available to the authenticated root user (or any actor granted the permissions-debug action according to a plugin). \n It shows the thirty most recent permission checks that have been carried out by the Datasette instance. \n This is designed to help administrators and plugin authors understand exactly how permission checks are being carried out, in order to effectively configure Datasette's permission system.", "breadcrumbs": "[\"Authentication and permissions\"]", "references": "[]"} {"id": "authentication:permissions-view-table", "page": "authentication", "ref": "permissions-view-table", "title": "view-table", "content": "Actor is allowed to view a table (or view) page, e.g. https://latest.datasette.io/fixtures/complex_foreign_keys \n \n \n resource - tuple: (string, string) \n \n The name of the database, then the name of the table \n \n \n \n Default allow .", "breadcrumbs": "[\"Authentication and permissions\", \"Built-in permissions\"]", "references": "[{\"href\": \"https://latest.datasette.io/fixtures/complex_foreign_keys\", \"label\": \"https://latest.datasette.io/fixtures/complex_foreign_keys\"}]"} {"id": "authentication:permissions-view-query", "page": "authentication", "ref": "permissions-view-query", "title": "view-query", "content": "Actor is allowed to view (and execute) a canned query page, e.g. https://latest.datasette.io/fixtures/pragma_cache_size - this includes executing Writable canned queries . 
\n \n \n resource - tuple: (string, string) \n \n The name of the database, then the name of the canned query \n \n \n \n Default allow .", "breadcrumbs": "[\"Authentication and permissions\", \"Built-in permissions\"]", "references": "[{\"href\": \"https://latest.datasette.io/fixtures/pragma_cache_size\", \"label\": \"https://latest.datasette.io/fixtures/pragma_cache_size\"}]"} {"id": "authentication:permissions-view-instance", "page": "authentication", "ref": "permissions-view-instance", "title": "view-instance", "content": "Top level permission - Actor is allowed to view any pages within this instance, starting at https://latest.datasette.io/ \n Default allow .", "breadcrumbs": "[\"Authentication and permissions\", \"Built-in permissions\"]", "references": "[{\"href\": \"https://latest.datasette.io/\", \"label\": \"https://latest.datasette.io/\"}]"} {"id": "authentication:permissions-view-database-download", "page": "authentication", "ref": "permissions-view-database-download", "title": "view-database-download", "content": "Actor is allowed to download a database, e.g. https://latest.datasette.io/fixtures.db \n \n \n resource - string \n \n The name of the database \n \n \n \n Default allow .", "breadcrumbs": "[\"Authentication and permissions\", \"Built-in permissions\"]", "references": "[{\"href\": \"https://latest.datasette.io/fixtures.db\", \"label\": \"https://latest.datasette.io/fixtures.db\"}]"} {"id": "authentication:permissions-view-database", "page": "authentication", "ref": "permissions-view-database", "title": "view-database", "content": "Actor is allowed to view a database page, e.g. https://latest.datasette.io/fixtures \n \n \n resource - string \n \n The name of the database \n \n \n \n Default allow .", "breadcrumbs": "[\"Authentication and permissions\", \"Built-in permissions\"]", "references": "[{\"href\": \"https://latest.datasette.io/fixtures\", \"label\": \"https://latest.datasette.io/fixtures\"}]"} {"id": "authentication:permissions-plugins", "page": "authentication", "ref": "permissions-plugins", "title": "Checking permissions in plugins", "content": "Datasette plugins can check if an actor has permission to perform an action using the datasette.permission_allowed(...) method. \n Datasette core performs a number of permission checks, documented below . Plugins can implement the permission_allowed(datasette, actor, action, resource) plugin hook to participate in decisions about whether an actor should be able to perform a specified action.", "breadcrumbs": "[\"Authentication and permissions\"]", "references": "[]"} {"id": "authentication:permissions-permissions-debug", "page": "authentication", "ref": "permissions-permissions-debug", "title": "permissions-debug", "content": "Actor is allowed to view the /-/permissions debug page. \n Default deny .", "breadcrumbs": "[\"Authentication and permissions\", \"Built-in permissions\"]", "references": "[]"} {"id": "authentication:permissions-execute-sql", "page": "authentication", "ref": "permissions-execute-sql", "title": "execute-sql", "content": "Actor is allowed to run arbitrary SQL queries against a specific database, e.g. https://latest.datasette.io/fixtures?sql=select+100 \n \n \n resource - string \n \n The name of the database \n \n \n \n Default allow . 
See also the default_allow_sql setting .", "breadcrumbs": "[\"Authentication and permissions\", \"Built-in permissions\"]", "references": "[{\"href\": \"https://latest.datasette.io/fixtures?sql=select+100\", \"label\": \"https://latest.datasette.io/fixtures?sql=select+100\"}]"} {"id": "authentication:permissions-debug-menu", "page": "authentication", "ref": "permissions-debug-menu", "title": "debug-menu", "content": "Controls if the various debug pages are displayed in the navigation menu. \n Default deny .", "breadcrumbs": "[\"Authentication and permissions\", \"Built-in permissions\"]", "references": "[]"} {"id": "changelog:permissions", "page": "changelog", "ref": "permissions", "title": "Permissions", "content": "Datasette also now has a built-in concept of Permissions . The permissions system answers the following question: \n \n Is this actor allowed to perform this action , optionally against this particular resource ? \n \n You can use the new \"allow\" block syntax in metadata.json (or metadata.yaml ) to set required permissions at the instance, database, table or canned query level. For example, to restrict access to the fixtures.db database to the \"root\" user: \n {\n \"databases\": {\n \"fixtures\": {\n \"allow\": {\n \"id\": \"root\"\n }\n }\n }\n} \n See Defining permissions with \"allow\" blocks for more details. \n Plugins can implement their own custom permission checks using the new permission_allowed(datasette, actor, action, resource) hook. \n A new debug page at /-/permissions shows recent permission checks, to help administrators and plugin authors understand exactly what checks are being performed. This tool defaults to only being available to the root user, but can be exposed to other users by plugins that respond to the permissions-debug permission. ( #788 )", "breadcrumbs": "[\"Changelog\", \"0.44 (2020-06-11)\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/issues/788\", \"label\": \"#788\"}]"} {"id": "performance:performance-inspect", "page": "performance", "ref": "performance-inspect", "title": "Using \"datasette inspect\"", "content": "Counting the rows in a table can be a very expensive operation on larger databases. In immutable mode Datasette performs this count only once and caches the results, but this can still cause server startup time to increase by several seconds or more. \n If you know that a database is never going to change you can precalculate the table row counts once and store them in a JSON file, then use that file when you later start the server. \n To create a JSON file containing the calculated row counts for a database, use the following: \n datasette inspect data.db --inspect-file=counts.json \n Then later you can start Datasette against the counts.json file and use it to skip the row counting step and speed up server startup: \n datasette -i data.db --inspect-file=counts.json \n You need to use the -i immutable mode against the database file here or the counts from the JSON file will be ignored.
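\n As a sketch, the two steps together in a hypothetical deployment script: \n # Build time: calculate and cache the row counts once\ndatasette inspect data.db --inspect-file=counts.json\n# Start time: -i is required, otherwise counts.json is ignored\ndatasette -i data.db --inspect-file=counts.json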
\n You will rarely need to use this optimization in every-day use, but several of the datasette publish commands described in Publishing data use this optimization for better performance when deploying a database file to a hosting provider.", "breadcrumbs": "[\"Performance and caching\"]", "references": "[]"} {"id": "performance:performance-immutable-mode", "page": "performance", "ref": "performance-immutable-mode", "title": "Immutable mode", "content": "If you can be certain that a SQLite database file will not be changed by another process you can tell Datasette to open that file in immutable mode . \n Doing so will disable all locking and change detection, which can result in improved query performance. \n This also enables further optimizations relating to HTTP caching, described below. \n To open a file in immutable mode pass it to the datasette command using the -i option: \n datasette -i data.db \n When you open a file in immutable mode like this Datasette will also calculate and cache the row counts for each table in that database when it first starts up, further improving performance.", "breadcrumbs": "[\"Performance and caching\"]", "references": "[]"} {"id": "performance:performance-hashed-urls", "page": "performance", "ref": "performance-hashed-urls", "title": "datasette-hashed-urls", "content": "If you open a database file in immutable mode using the -i option, you can be assured that the content of that database will not change for the lifetime of the Datasette server. \n The datasette-hashed-urls plugin implements an optimization where your database is served with part of the SHA-256 hash of the database contents baked into the URL. \n A database at /fixtures will instead be served at /fixtures-aa7318b , and a year-long cache expiry header will be returned with those pages. \n This will then be cached by both browsers and caching proxies such as Cloudflare or Fastly, providing a potentially significant performance boost. \n To install the plugin, run the following: \n datasette install datasette-hashed-urls \n \n Prior to Datasette 0.61 hashed URL mode was a core Datasette feature, enabled using the hash_urls setting. This implementation has now been removed in favor of the datasette-hashed-urls plugin. \n Prior to Datasette 0.28 hashed URL mode was the default behaviour for Datasette, since all database files were assumed to be immutable and unchanging. From 0.28 onwards the default has been to treat database files as mutable unless explicitly configured otherwise.", "breadcrumbs": "[\"Performance and caching\"]", "references": "[{\"href\": \"https://datasette.io/plugins/datasette-hashed-urls\", \"label\": \"datasette-hashed-urls plugin\"}]"} {"id": "performance:performance", "page": "performance", "ref": "performance", "title": "Performance and caching", "content": "Datasette runs on top of SQLite, and SQLite has excellent performance. For small databases almost any query should return in just a few milliseconds, and larger databases (100s of MBs or even GBs of data) should perform extremely well provided your queries make sensible use of database indexes. 
\n That said, there are a number of tricks you can use to improve Datasette's performance.", "breadcrumbs": "[]", "references": "[]"} {"id": "metadata:per-database-and-per-table-metadata", "page": "metadata", "ref": "per-database-and-per-table-metadata", "title": "Per-database and per-table metadata", "content": "Metadata at the top level of the JSON will be shown on the index page and in the\n footer on every page of the site. The license and source are expected to apply to\n all of your data. \n You can also provide metadata at the per-database or per-table level, like this: \n {\n \"databases\": {\n \"database1\": {\n \"source\": \"Alternative source\",\n \"source_url\": \"http://example.com/\",\n \"tables\": {\n \"example_table\": {\n \"description_html\": \"Custom <em>table</em> description\",\n \"license\": \"CC BY 3.0 US\",\n \"license_url\": \"https://creativecommons.org/licenses/by/3.0/us/\"\n }\n }\n }\n }\n} \n Each of the top-level metadata fields can be used at the database and table level.", "breadcrumbs": "[\"Metadata\"]", "references": "[]"} {"id": "pages:pages", "page": "pages", "ref": "pages", "title": "Pages and API endpoints", "content": "The Datasette web application offers a number of different pages that can be accessed to explore the data in question, each of which is accompanied by an equivalent JSON API.", "breadcrumbs": "[]", "references": "[]"} {"id": "changelog:other-small-fixes", "page": "changelog", "ref": "other-small-fixes", "title": "Other small fixes", "content": "Made several performance improvements to the database schema introspection code that runs when Datasette first starts up. ( #1555 ) \n \n \n Label columns detected for foreign keys are now case-insensitive, so Name or TITLE will be detected in the same way as name or title . ( #1544 ) \n \n \n Upgraded Pluggy dependency to 1.0. ( #1575 ) \n \n \n Now using Plausible analytics for the Datasette documentation. \n \n \n explain query plan is now allowed with varying amounts of whitespace in the query. ( #1588 ) \n \n \n New CLI reference page showing the output of --help for each of the datasette sub-commands. This led to several small improvements to the help copy. ( #1594 ) \n \n \n Fixed bug where writable canned queries could not be used with custom templates. ( #1547 ) \n \n \n Improved fix for a bug where columns with an underscore prefix could result in unnecessary hidden form fields. ( #1527 )", "breadcrumbs": "[\"Changelog\", \"0.60 (2022-01-13)\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/issues/1555\", \"label\": \"#1555\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1544\", \"label\": \"#1544\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1575\", \"label\": \"#1575\"}, {\"href\": \"https://plausible.io/\", \"label\": \"Plausible analytics\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1588\", \"label\": \"#1588\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1594\", \"label\": \"#1594\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1547\", \"label\": \"#1547\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1527\", \"label\": \"#1527\"}]"} {"id": "changelog:other-changes", "page": "changelog", "ref": "other-changes", "title": "Other changes", "content": "Datasette can now open multiple database files with the same name, e.g. if you run datasette path/to/one.db path/to/other/one.db .
( #509 ) \n \n \n datasette publish cloudrun now sets force_https_urls for every deployment, fixing some incorrect http:// links. ( #1178 ) \n \n \n Fixed a bug in the example nginx configuration in Running Datasette behind a proxy . ( #1091 ) \n \n \n The Datasette Ecosystem documentation page has been reduced in size in favour of the datasette.io tools and plugins directories. ( #1182 ) \n \n \n The request object now provides a request.full_path property, which returns the path including any query string. ( #1184 ) \n \n \n Better error message for disallowed PRAGMA clauses in SQL queries. ( #1185 ) \n \n \n datasette publish heroku now deploys using python-3.8.7 . \n \n \n New plugin testing documentation on Testing outbound HTTP calls with pytest-httpx . ( #1198 ) \n \n \n All ?_* query string parameters passed to the table page are now persisted in hidden form fields, so parameters such as ?_size=10 will be correctly passed to the next page when query filters are changed. ( #1194 ) \n \n \n Fixed a bug loading a database file called test-database (1).sqlite . ( #1181 )", "breadcrumbs": "[\"Changelog\", \"0.54 (2021-01-25)\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/issues/509\", \"label\": \"#509\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1178\", \"label\": \"#1178\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1091\", \"label\": \"#1091\"}, {\"href\": \"https://datasette.io/tools\", \"label\": \"tools\"}, {\"href\": \"https://datasette.io/plugins\", \"label\": \"plugins\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1182\", \"label\": \"#1182\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1184\", \"label\": \"#1184\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1185\", \"label\": \"#1185\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1198\", \"label\": \"#1198\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1194\", \"label\": \"#1194\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1181\", \"label\": \"#1181\"}]"} {"id": "plugins:one-off-plugins-using-plugins-dir", "page": "plugins", "ref": "one-off-plugins-using-plugins-dir", "title": "One-off plugins using --plugins-dir", "content": "You can also define one-off per-project plugins by saving them as plugin_name.py functions in a plugins/ folder and then passing that folder to datasette using the --plugins-dir option: \n datasette mydb.db --plugins-dir=plugins/", "breadcrumbs": "[\"Plugins\", \"Installing plugins\"]", "references": "[]"} {"id": "deploying:nginx-proxy-configuration", "page": "deploying", "ref": "nginx-proxy-configuration", "title": "Nginx proxy configuration", "content": "Here is an example of an nginx configuration file that will proxy traffic to Datasette: \n daemon off;\n\nevents {\n worker_connections 1024;\n}\nhttp {\n server {\n listen 80;\n location /my-datasette {\n proxy_pass http://127.0.0.1:8009/my-datasette;\n proxy_set_header Host $host;\n }\n }\n} \n You can also use the --uds option to Datasette to listen on a Unix domain socket instead of a port, configuring the nginx upstream proxy like this: \n daemon off;\nevents {\n worker_connections 1024;\n}\nhttp {\n server {\n listen 80;\n location /my-datasette {\n proxy_pass http://datasette/my-datasette;\n proxy_set_header Host $host;\n }\n }\n upstream datasette {\n server unix:/tmp/datasette.sock;\n }\n} \n Then run Datasette with datasette --uds /tmp/datasette.sock path/to/database.db --setting base_url /my-datasette/ .", 
"breadcrumbs": "[\"Deploying Datasette\", \"Running Datasette behind a proxy\"]", "references": "[{\"href\": \"https://nginx.org/\", \"label\": \"nginx\"}]"} {"id": "changelog:new-visual-design", "page": "changelog", "ref": "new-visual-design", "title": "New visual design", "content": "Datasette is no longer white and grey with blue and purple links! Natalie Downe has been working on a visual refresh, the first iteration of which is included in this release. ( #1056 )", "breadcrumbs": "[\"Changelog\", \"0.51 (2020-10-31)\"]", "references": "[{\"href\": \"https://twitter.com/natbat\", \"label\": \"Natalie Downe\"}, {\"href\": \"https://github.com/simonw/datasette/pull/1056\", \"label\": \"#1056\"}]"} {"id": "changelog:new-plugin-hooks", "page": "changelog", "ref": "new-plugin-hooks", "title": "New plugin hooks", "content": "register_magic_parameters(datasette) can be used to define new types of magic canned query parameters. \n \n \n startup(datasette) can run custom code when Datasette first starts up. datasette-init is a new plugin that uses this hook to create database tables and views on startup if they have not yet been created. ( #834 ) \n \n \n canned_queries(datasette, database, actor) lets plugins provide additional canned queries beyond those defined in Datasette's metadata. See datasette-saved-queries for an example of this hook in action. ( #852 ) \n \n \n forbidden(datasette, request, message) is a hook for customizing how Datasette responds to 403 forbidden errors. ( #812 )", "breadcrumbs": "[\"Changelog\", \"0.45 (2020-07-01)\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette-init\", \"label\": \"datasette-init\"}, {\"href\": \"https://github.com/simonw/datasette/issues/834\", \"label\": \"#834\"}, {\"href\": \"https://github.com/simonw/datasette-saved-queries\", \"label\": \"datasette-saved-queries\"}, {\"href\": \"https://github.com/simonw/datasette/issues/852\", \"label\": \"#852\"}, {\"href\": \"https://github.com/simonw/datasette/issues/812\", \"label\": \"#812\"}]"} {"id": "changelog:new-plugin-hook-extra-template-vars", "page": "changelog", "ref": "new-plugin-hook-extra-template-vars", "title": "New plugin hook: extra_template_vars", "content": "The extra_template_vars(template, database, table, columns, view_name, request, datasette) plugin hook allows plugins to inject their own additional variables into the Datasette template context. This can be used in conjunction with custom templates to customize the Datasette interface. datasette-auth-github uses this hook to add custom HTML to the new top navigation bar (which is designed to be modified by plugins, see #540 ).", "breadcrumbs": "[\"Changelog\", \"0.29 (2019-07-07)\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette-auth-github\", \"label\": \"datasette-auth-github\"}, {\"href\": \"https://github.com/simonw/datasette/issues/540\", \"label\": \"#540\"}]"} {"id": "changelog:new-plugin-hook-asgi-wrapper", "page": "changelog", "ref": "new-plugin-hook-asgi-wrapper", "title": "New plugin hook: asgi_wrapper", "content": "The asgi_wrapper(datasette) plugin hook allows plugins to entirely wrap the Datasette ASGI application in their own ASGI middleware. ( #520 ) \n Two new plugins take advantage of this hook: \n \n \n datasette-auth-github adds a authentication layer: users will have to sign in using their GitHub account before they can view data or interact with Datasette. 
You can also use it to restrict access to specific GitHub users, or to members of specified GitHub organizations or teams . \n \n \n datasette-cors allows you to configure CORS headers for your Datasette instance. You can use this to enable JavaScript running on a whitelisted set of domains to make fetch() calls to the JSON API provided by your Datasette instance.", "breadcrumbs": "[\"Changelog\", \"0.29 (2019-07-07)\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/issues/520\", \"label\": \"#520\"}, {\"href\": \"https://github.com/simonw/datasette-auth-github\", \"label\": \"datasette-auth-github\"}, {\"href\": \"https://help.github.com/en/articles/about-organizations\", \"label\": \"organizations\"}, {\"href\": \"https://help.github.com/en/articles/organizing-members-into-teams\", \"label\": \"teams\"}, {\"href\": \"https://github.com/simonw/datasette-cors\", \"label\": \"datasette-cors\"}, {\"href\": \"https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS\", \"label\": \"CORS headers\"}]"} {"id": "changelog:new-features", "page": "changelog", "ref": "new-features", "title": "New features", "content": "If an error occurs while executing a user-provided SQL query, that query is now re-displayed in an editable form along with the error message. ( #619 ) \n \n \n New ?_col= and ?_nocol= parameters to show and hide columns in a table, plus an interface for hiding and showing columns in the column cog menu. ( #615 ) \n \n \n A new ?_facet_size= parameter for customizing the number of facet results returned on a table or view page. ( #1332 ) \n \n \n ?_facet_size=max sets that to the maximum, which defaults to 1,000 and is controlled by the max_returned_rows setting. If facet results are truncated the \u2026 at the bottom of the facet list now links to this parameter. ( #1337 ) \n \n \n ?_nofacet=1 option to disable all facet calculations on a page, used as a performance optimization for CSV exports and ?_shape=array/object . ( #1349 , #263 ) \n \n \n ?_nocount=1 option to disable full query result counts. ( #1353 ) \n \n \n ?_trace=1 debugging option is now controlled by the new trace_debug setting, which is turned off by default. ( #1359 )", "breadcrumbs": "[\"Changelog\", \"0.57 (2021-06-05)\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/issues/619\", \"label\": \"#619\"}, {\"href\": \"https://github.com/simonw/datasette/issues/615\", \"label\": \"#615\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1332\", \"label\": \"#1332\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1337\", \"label\": \"#1337\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1349\", \"label\": \"#1349\"}, {\"href\": \"https://github.com/simonw/datasette/issues/263\", \"label\": \"#263\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1353\", \"label\": \"#1353\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1359\", \"label\": \"#1359\"}]"} {"id": "changelog:new-configuration-settings", "page": "changelog", "ref": "new-configuration-settings", "title": "New configuration settings", "content": "Datasette's Settings now also supports boolean settings. A number of new\n configuration options have been added: \n \n \n num_sql_threads - the number of threads used to execute SQLite queries. Defaults to 3. \n \n \n allow_facet - enable or disable custom Facets using the _facet= parameter. Defaults to on. \n \n \n suggest_facets - should Datasette suggest facets? Defaults to on.
\n \n \n allow_download - should users be allowed to download the entire SQLite database? Defaults to on. \n \n \n allow_sql - should users be allowed to execute custom SQL queries? Defaults to on. \n \n \n default_cache_ttl - Default HTTP caching max-age header in seconds. Defaults to 365 days - caching can be disabled entirely by setting this to 0. \n \n \n cache_size_kb - Set the amount of memory SQLite uses for its per-connection cache , in KB. \n \n \n allow_csv_stream - allow users to stream entire result sets as a single CSV file. Defaults to on. \n \n \n max_csv_mb - maximum size of a returned CSV file in MB. Defaults to 100MB, set to 0 to disable this limit.", "breadcrumbs": "[\"Changelog\", \"0.23 (2018-06-18)\"]", "references": "[{\"href\": \"https://www.sqlite.org/pragma.html#pragma_cache_size\", \"label\": \"per-connection cache\"}]"} {"id": "changelog:named-in-memory-database-support", "page": "changelog", "ref": "named-in-memory-database-support", "title": "Named in-memory database support", "content": "As part of the work building the _internal database, Datasette now supports named in-memory databases that can be shared across multiple connections. This allows plugins to create in-memory databases which will persist data for the lifetime of the Datasette server process. ( #1151 ) \n The new memory_name= parameter to the Database class can be used to create named, shared in-memory databases.", "breadcrumbs": "[\"Changelog\", \"0.54 (2021-01-25)\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/issues/1151\", \"label\": \"#1151\"}]"} {"id": "changelog:miscellaneous", "page": "changelog", "ref": "miscellaneous", "title": "Miscellaneous", "content": "Got JSON data in one of your columns? Use the new ?_json=COLNAME argument\n to tell Datasette to return that JSON value directly rather than encoding it\n as a string. \n \n \n If you just want an array of the first value of each row, use the new\n ?_shape=arrayfirst option - example .", "breadcrumbs": "[\"Changelog\", \"0.23 (2018-06-18)\"]", "references": "[{\"href\": \"https://latest.datasette.io/fixtures.json?sql=select+neighborhood+from+facetable+order+by+pk+limit+101&_shape=arrayfirst\", \"label\": \"example\"}]"} {"id": "metadata:metadata-yaml", "page": "metadata", "ref": "metadata-yaml", "title": "Using YAML for metadata", "content": "Datasette accepts YAML as an alternative to JSON for your metadata configuration file. YAML is particularly useful for including multiline HTML and SQL strings. \n Here's an example of a metadata.yml file, re-using an example from Canned queries .
\n title: Demonstrating Metadata from YAML\ndescription_html: |-\n <p>This description includes a long HTML string</p>\n <ul>\n <li>YAML is better for embedding HTML strings than JSON!</li>\n </ul>\nlicense: ODbL\nlicense_url: https://opendatacommons.org/licenses/odbl/\ndatabases:\n fixtures:\n tables:\n no_primary_key:\n hidden: true\n queries:\n neighborhood_search:\n sql: |-\n select neighborhood, facet_cities.name, state\n from facetable join facet_cities on facetable.city_id = facet_cities.id\n where neighborhood like '%' || :text || '%' order by neighborhood;\n title: Search neighborhoods\n description_html: |-\n <p>This demonstrates <em>basic</em> LIKE search \n The metadata.yml file is passed to Datasette using the same --metadata option: \n datasette fixtures.db --metadata metadata.yml", "breadcrumbs": "[\"Metadata\"]", "references": "[]"} {"id": "metadata:metadata-source-license-about", "page": "metadata", "ref": "metadata-source-license-about", "title": "Source, license and about", "content": "The three visible metadata fields you can apply to everything, specific databases or specific tables are source, license and about. All three are optional. \n source and source_url should be used to indicate where the underlying data came from. \n license and license_url should be used to indicate the license under which the data can be used. \n about and about_url can be used to link to further information about the project - an accompanying blog entry for example. \n For each of these you can provide just the *_url field and Datasette will treat that as the default link label text and display the URL directly on the page.", "breadcrumbs": "[\"Metadata\"]", "references": "[]"} {"id": "metadata:metadata-sortable-columns", "page": "metadata", "ref": "metadata-sortable-columns", "title": "Setting which columns can be used for sorting", "content": "Datasette allows any column to be used for sorting by default. If you need to\n control which columns are available for sorting you can do so using the optional\n sortable_columns key: \n {\n \"databases\": {\n \"database1\": {\n \"tables\": {\n \"example_table\": {\n \"sortable_columns\": [\n \"height\",\n \"weight\"\n ]\n }\n }\n }\n }\n} \n This will restrict sorting of example_table to just the height and\n weight columns. \n You can also disable sorting entirely by setting \"sortable_columns\": [] \n You can use sortable_columns to enable specific sort orders for a view called name_of_view in the database my_database like so: \n {\n \"databases\": {\n \"my_database\": {\n \"tables\": {\n \"name_of_view\": {\n \"sortable_columns\": [\n \"clicks\",\n \"impressions\"\n ]\n }\n }\n }\n }\n}", "breadcrumbs": "[\"Metadata\"]", "references": "[]"} {"id": "metadata:metadata-page-size", "page": "metadata", "ref": "metadata-page-size", "title": "Setting a custom page size", "content": "Datasette defaults to displaying 100 rows per page, for both tables and views. You can change this default page size on a per-table or per-view basis using the \"size\" key in metadata.json : \n {\n \"databases\": {\n \"mydatabase\": {\n \"tables\": {\n \"example_table\": {\n \"size\": 10\n }\n }\n }\n }\n} \n This size can still be over-ridden by passing e.g. 
?_size=50 in the query string.", "breadcrumbs": "[\"Metadata\"]", "references": "[]"} {"id": "metadata:metadata-hiding-tables", "page": "metadata", "ref": "metadata-hiding-tables", "title": "Hiding tables", "content": "You can hide tables from the database listing view (in the same way that FTS and\n SpatiaLite tables are automatically hidden) using \"hidden\": true : \n {\n \"databases\": {\n \"database1\": {\n \"tables\": {\n \"example_table\": {\n \"hidden\": true\n }\n }\n }\n }\n}", "breadcrumbs": "[\"Metadata\"]", "references": "[]"} {"id": "metadata:metadata-default-sort", "page": "metadata", "ref": "metadata-default-sort", "title": "Setting a default sort order", "content": "By default Datasette tables are sorted by primary key. You can over-ride this default for a specific table using the \"sort\" or \"sort_desc\" metadata properties: \n {\n \"databases\": {\n \"mydatabase\": {\n \"tables\": {\n \"example_table\": {\n \"sort\": \"created\"\n }\n }\n }\n }\n} \n Or use \"sort_desc\" to sort in descending order: \n {\n \"databases\": {\n \"mydatabase\": {\n \"tables\": {\n \"example_table\": {\n \"sort_desc\": \"created\"\n }\n }\n }\n }\n}", "breadcrumbs": "[\"Metadata\"]", "references": "[]"} {"id": "metadata:metadata-column-descriptions", "page": "metadata", "ref": "metadata-column-descriptions", "title": "Column descriptions", "content": "You can include descriptions for your columns by adding a \"columns\": {\"name-of-column\": \"description-of-column\"} block to your table metadata: \n {\n \"databases\": {\n \"database1\": {\n \"tables\": {\n \"example_table\": {\n \"columns\": {\n \"column1\": \"Description of column 1\",\n \"column2\": \"Description of column 2\"\n }\n }\n }\n }\n }\n} \n These will be displayed at the top of the table page, and will also show in the cog menu for each column. \n You can see an example of how these look at latest.datasette.io/fixtures/roadside_attractions .", "breadcrumbs": "[\"Metadata\"]", "references": "[{\"href\": \"https://latest.datasette.io/fixtures/roadside_attractions\", \"label\": \"latest.datasette.io/fixtures/roadside_attractions\"}]"} {"id": "introspection:messagesdebugview", "page": "introspection", "ref": "messagesdebugview", "title": "/-/messages", "content": "The debug tool at /-/messages can be used to set flash messages to try out that feature. See .add_message(request, message, type=datasette.INFO) for details of this feature.", "breadcrumbs": "[\"Introspection\"]", "references": "[]"} {"id": "spatialite:making-use-of-a-spatial-index", "page": "spatialite", "ref": "making-use-of-a-spatial-index", "title": "Making use of a spatial index", "content": "SpatiaLite spatial indexes are R*Trees. They allow you to run efficient bounding box queries using a sub-select, with a similar pattern to that used for Searches using custom SQL . \n In the above example, the resulting index will be called idx_museums_point_geom . This takes the form of a SQLite virtual table. 
You can inspect its contents using the following query: \n select * from idx_museums_point_geom limit 10; \n Here's a live example: timezones-api.datasette.io/timezones/idx_timezones_Geometry \n pkid | xmin | xmax | ymin | ymax \n 1 | -8.601725578308105 | -2.4930307865142822 | 4.162120819091797 | 10.74019718170166 \n 2 | -3.2607860565185547 | 1.27329421043396 | 4.539252281188965 | 11.174856185913086 \n 3 | 32.997581481933594 | 47.98238754272461 | 3.3974475860595703 | 14.894054412841797 \n 4 | -8.66890811920166 | 11.997337341308594 | 18.9681453704834 | 37.296207427978516 \n 5 | 36.43336486816406 | 43.300174713134766 | 12.354820251464844 | 18.070993423461914 \n You can now construct efficient bounding box queries that will make use of the index like this: \n select * from museums where museums.rowid in (\n SELECT pkid FROM idx_museums_point_geom\n -- left-hand-edge of point > left-hand-edge of bbox (minx)\n where xmin > :bbox_minx\n -- right-hand-edge of point < right-hand-edge of bbox (maxx)\n and xmax < :bbox_maxx\n -- bottom-edge of point > bottom-edge of bbox (miny)\n and ymin > :bbox_miny\n -- top-edge of point < top-edge of bbox (maxy)\n and ymax < :bbox_maxy\n); \n Spatial indexes can be created against polygon columns as well as point columns, in which case they will represent the minimum bounding rectangle of that polygon. This is useful for accelerating within queries, as seen in the Timezones API example.", "breadcrumbs": "[\"SpatiaLite\"]", "references": "[{\"href\": \"https://timezones-api.datasette.io/timezones/idx_timezones_Geometry\", \"label\": \"timezones-api.datasette.io/timezones/idx_timezones_Geometry\"}]"} {"id": "changelog:magic-parameters-for-canned-queries", "page": "changelog", "ref": "magic-parameters-for-canned-queries", "title": "Magic parameters for canned queries", "content": "Canned queries now support Magic parameters , which can be used to insert or select automatically generated values. For example: \n insert into logs\n (user_id, timestamp)\nvalues\n (:_actor_id, :_now_datetime_utc) \n This inserts the currently authenticated actor ID and the current datetime. ( #842 )", "breadcrumbs": "[\"Changelog\", \"0.45 (2020-07-01)\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/issues/842\", \"label\": \"#842\"}]"} {"id": "authentication:logoutview", "page": "authentication", "ref": "logoutview", "title": "The /-/logout page", "content": "The page at /-/logout provides the ability to log out of a ds_actor cookie authentication session.", "breadcrumbs": "[\"Authentication and permissions\", \"The ds_actor cookie\"]", "references": "[]"} {"id": "changelog:log-out", "page": "changelog", "ref": "log-out", "title": "Log out", "content": "The ds_actor cookie can be used by plugins (or by Datasette's --root mechanism ) to authenticate users. The new /-/logout page provides a way to clear that cookie. \n A \"Log out\" button now shows in the global navigation provided the user is authenticated using the ds_actor cookie.
( #840 )", "breadcrumbs": "[\"Changelog\", \"0.45 (2020-07-01)\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/issues/840\", \"label\": \"#840\"}]"} {"id": "installation:loading-spatialite", "page": "installation", "ref": "loading-spatialite", "title": "Loading SpatiaLite", "content": "The datasetteproject/datasette image includes a recent version of the\n SpatiaLite extension for SQLite. To load and enable that\n module, use the following command: \n docker run -p 8001:8001 -v `pwd`:/mnt \\\n datasetteproject/datasette \\\n datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db \\\n --load-extension=spatialite \n You can confirm that SpatiaLite is successfully loaded by visiting\n http://127.0.0.1:8001/-/versions", "breadcrumbs": "[\"Installation\", \"Advanced installation options\", \"Using Docker\"]", "references": "[{\"href\": \"http://127.0.0.1:8001/-/versions\", \"label\": \"http://127.0.0.1:8001/-/versions\"}]"} {"id": "changelog:latest-datasette-io", "page": "changelog", "ref": "latest-datasette-io", "title": "latest.datasette.io", "content": "Every commit to Datasette master is now automatically deployed by Travis CI to\n https://latest.datasette.io/ - ensuring there is always a live demo of the\n latest version of the software. \n The demo uses the fixtures from our\n unit tests, ensuring it demonstrates the same range of functionality that is\n covered by the tests. \n You can see how the deployment mechanism works in our .travis.yml file.", "breadcrumbs": "[\"Changelog\", \"0.23 (2018-06-18)\"]", "references": "[{\"href\": \"https://latest.datasette.io/\", \"label\": \"https://latest.datasette.io/\"}, {\"href\": \"https://github.com/simonw/datasette/blob/master/tests/fixtures.py\", \"label\": \"the fixtures\"}, {\"href\": \"https://github.com/simonw/datasette/blob/master/.travis.yml\", \"label\": \".travis.yml\"}]"} {"id": "metadata:label-columns", "page": "metadata", "ref": "label-columns", "title": "Specifying the label column for a table", "content": "Datasette's HTML interface attempts to display foreign key references as\n labelled hyperlinks. By default, it looks for referenced tables that only have\n two columns: a primary key column and one other. It assumes that the second\n column should be used as the link label. \n If your table has more than two columns you can specify which column should be\n used for the link label with the label_column property: \n {\n \"databases\": {\n \"database1\": {\n \"tables\": {\n \"example_table\": {\n \"label_column\": \"title\"\n }\n }\n }\n }\n}", "breadcrumbs": "[\"Metadata\"]", "references": "[]"} {"id": "introspection:jsondataview-versions", "page": "introspection", "ref": "jsondataview-versions", "title": "/-/versions", "content": "Shows the version of Datasette, Python and SQLite. 
Versions example : \n {\n \"datasette\": {\n \"version\": \"0.60\"\n },\n \"python\": {\n \"full\": \"3.8.12 (default, Dec 21 2021, 10:45:09) \\n[GCC 10.2.1 20210110]\",\n \"version\": \"3.8.12\"\n },\n \"sqlite\": {\n \"extensions\": {\n \"json1\": null\n },\n \"fts_versions\": [\n \"FTS5\",\n \"FTS4\",\n \"FTS3\"\n ],\n \"compile_options\": [\n \"COMPILER=gcc-6.3.0 20170516\",\n \"ENABLE_FTS3\",\n \"ENABLE_FTS4\",\n \"ENABLE_FTS5\",\n \"ENABLE_JSON1\",\n \"ENABLE_RTREE\",\n \"THREADSAFE=1\"\n ],\n \"version\": \"3.37.0\"\n }\n}", "breadcrumbs": "[\"Introspection\"]", "references": "[{\"href\": \"https://latest.datasette.io/-/versions\", \"label\": \"Versions example\"}]"} {"id": "introspection:jsondataview-threads", "page": "introspection", "ref": "jsondataview-threads", "title": "/-/threads", "content": "Shows details of threads and asyncio tasks. Threads example : \n {\n \"num_threads\": 2,\n \"threads\": [\n {\n \"daemon\": false,\n \"ident\": 4759197120,\n \"name\": \"MainThread\"\n },\n {\n \"daemon\": true,\n \"ident\": 123145319682048,\n \"name\": \"Thread-1\"\n }\n ],\n \"num_tasks\": 3,\n \"tasks\": [\n \"<Task pending coro=<RequestResponseCycle.run_asgi() running at uvicorn/protocols/http/httptools_impl.py:385> cb=[set.discard()]>\",\n \"<Task pending coro=<Server.serve() running at uvicorn/main.py:361> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x10365c3d0>()]> cb=[run_until_complete.<locals>.<lambda>()]>\",\n \"<Task pending coro=<LifespanOn.main() running at uvicorn/lifespan/on.py:48> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x10364f050>()]>>\"\n ]\n}", "breadcrumbs": "[\"Introspection\"]", "references": "[{\"href\": \"https://latest.datasette.io/-/threads\", \"label\": \"Threads example\"}]"} {"id": "introspection:jsondataview-plugins", "page": "introspection", "ref": "jsondataview-plugins", "title": "/-/plugins", "content": "Shows a list of currently installed plugins and their versions. Plugins example : \n [\n {\n \"name\": \"datasette_cluster_map\",\n \"static\": true,\n \"templates\": false,\n \"version\": \"0.10\",\n \"hooks\": [\"extra_css_urls\", \"extra_js_urls\", \"extra_body_script\"]\n }\n] \n Add ?all=1 to include details of the default plugins baked into Datasette.", "breadcrumbs": "[\"Introspection\"]", "references": "[{\"href\": \"https://san-francisco.datasettes.com/-/plugins\", \"label\": \"Plugins example\"}]"} {"id": "introspection:jsondataview-metadata", "page": "introspection", "ref": "jsondataview-metadata", "title": "/-/metadata", "content": "Shows the contents of the metadata.json file that was passed to datasette serve , if any. Metadata example : \n {\n \"license\": \"CC Attribution 4.0 License\",\n \"license_url\": \"http://creativecommons.org/licenses/by/4.0/\",\n \"source\": \"fivethirtyeight/data on GitHub\",\n \"source_url\": \"https://github.com/fivethirtyeight/data\",\n \"title\": \"Five Thirty Eight\",\n \"databases\": {\n\n }\n}", "breadcrumbs": "[\"Introspection\"]", "references": "[{\"href\": \"https://fivethirtyeight.datasettes.com/-/metadata\", \"label\": \"Metadata example\"}]"} {"id": "introspection:jsondataview-databases", "page": "introspection", "ref": "jsondataview-databases", "title": "/-/databases", "content": "Shows currently attached databases.
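These introspection endpoints all return plain JSON, so they are straightforward to consume programmatically. A short sketch (assuming the httpx library and the public latest.datasette.io demo instance; the /-/databases example output continues below):

import httpx

base = "https://latest.datasette.io"

versions = httpx.get(base + "/-/versions").json()
print(versions["datasette"]["version"], versions["sqlite"]["version"])

plugins = httpx.get(base + "/-/plugins?all=1").json()
print(sorted(p["name"] for p in plugins))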
Databases example : \n [\n {\n \"hash\": null,\n \"is_memory\": false,\n \"is_mutable\": true,\n \"name\": \"fixtures\",\n \"path\": \"fixtures.db\",\n \"size\": 225280\n }\n]", "breadcrumbs": "[\"Introspection\"]", "references": "[{\"href\": \"https://latest.datasette.io/-/databases\", \"label\": \"Databases example\"}]"} {"id": "introspection:jsondataview-config", "page": "introspection", "ref": "jsondataview-config", "title": "/-/settings", "content": "Shows the Settings for this instance of Datasette. Settings example : \n {\n \"default_facet_size\": 30,\n \"default_page_size\": 100,\n \"facet_suggest_time_limit_ms\": 50,\n \"facet_time_limit_ms\": 1000,\n \"max_returned_rows\": 1000,\n \"sql_time_limit_ms\": 1000\n}", "breadcrumbs": "[\"Introspection\"]", "references": "[{\"href\": \"https://fivethirtyeight.datasettes.com/-/settings\", \"label\": \"Settings example\"}]"} {"id": "introspection:jsondataview-actor", "page": "introspection", "ref": "jsondataview-actor", "title": "/-/actor", "content": "Shows the currently authenticated actor. Useful for debugging Datasette authentication plugins. \n {\n \"actor\": {\n \"id\": 1,\n \"username\": \"some-user\"\n }\n}", "breadcrumbs": "[\"Introspection\"]", "references": "[]"} {"id": "json_api:json-api-table-arguments", "page": "json_api", "ref": "json-api-table-arguments", "title": "Special table arguments", "content": "?_col=COLUMN1&_col=COLUMN2 \n \n List specific columns to display. These will be shown along with any primary keys. \n \n \n \n ?_nocol=COLUMN1&_nocol=COLUMN2 \n \n List specific columns to hide - any column not listed will be displayed. Primary keys cannot be hidden. \n \n \n \n ?_labels=on/off \n \n Expand foreign key references for every possible column. See below. \n \n \n \n ?_label=COLUMN1&_label=COLUMN2 \n \n Expand foreign key references for one or more specified columns. \n \n \n \n ?_size=1000 or ?_size=max \n \n Sets a custom page size. This cannot exceed the max_returned_rows limit\n passed to datasette serve . Use max to get max_returned_rows . \n \n \n \n ?_sort=COLUMN \n \n Sorts the results by the specified column. \n \n \n \n ?_sort_desc=COLUMN \n \n Sorts the results by the specified column in descending order. \n \n \n \n ?_search=keywords \n \n For SQLite tables that have been configured for\n full-text search executes a search\n with the provided keywords. \n \n \n \n ?_search_COLUMN=keywords \n \n Like _search= but allows you to specify the column to be searched, as\n opposed to searching all columns that have been indexed by FTS. \n \n \n \n ?_searchmode=raw \n \n With this option, queries passed to ?_search= or ?_search_COLUMN= will\n not have special characters escaped. This means you can make use of the full\n set of advanced SQLite FTS syntax ,\n though this could potentially result in errors if the wrong syntax is used. \n \n \n \n ?_where=SQL-fragment \n \n If the execute-sql permission is enabled, this parameter\n can be used to pass one or more additional SQL fragments to be used in the\n WHERE clause of the SQL used to query the table. \n This is particularly useful if you are building a JavaScript application\n that needs to do something creative but still wants the other conveniences\n provided by the table view (such as faceting) and hence would like not to\n have to construct a completely custom SQL query. 
\n Some examples: \n \n \n facetable?_where=_neighborhood like \"%c%\"&_where=_city_id=3 \n \n \n facetable?_where=_city_id in (select id from facet_cities where name != \"Detroit\") \n \n \n \n \n \n ?_through={json} \n \n This can be used to filter rows via a join against another table. \n The JSON parameter must include three keys: table , column and value . \n table must be a table that the current table is related to via a foreign key relationship. \n column must be a column in that other table. \n value is the value that you want to match against. \n For example, to filter roadside_attractions to just show the attractions that have a characteristic of \"museum\", you would construct this JSON: \n {\n \"table\": \"roadside_attraction_characteristics\",\n \"column\": \"characteristic_id\",\n \"value\": \"1\"\n} \n As a URL, that looks like this: \n ?_through={%22table%22:%22roadside_attraction_characteristics%22,%22column%22:%22characteristic_id%22,%22value%22:%221%22} \n Here's an example . \n \n \n \n ?_next=TOKEN \n \n Pagination by continuation token - pass the token that was returned in the\n \"next\" property by the previous page. \n \n \n \n ?_facet=column \n \n Facet by column. Can be applied multiple times, see Facets . Only works on the default JSON output, not on any of the custom shapes. \n \n \n \n ?_facet_size=100 \n \n Increase the number of facet results returned for each facet. Use ?_facet_size=max for the maximum available size, determined by max_returned_rows . \n \n \n \n ?_nofacet=1 \n \n Disable all facets and facet suggestions for this page, including any defined by Facets in metadata.json . \n \n \n \n ?_nosuggest=1 \n \n Disable facet suggestions for this page. \n \n \n \n ?_nocount=1 \n \n Disable the select count(*) query used on this page - a count of None will be returned instead.", "breadcrumbs": "[\"JSON API\", \"Table arguments\"]", "references": "[{\"href\": \"https://www.sqlite.org/fts3.html\", \"label\": \"full-text search\"}, {\"href\": \"https://www.sqlite.org/fts5.html#full_text_query_syntax\", \"label\": \"advanced SQLite FTS syntax\"}, {\"href\": \"https://latest.datasette.io/fixtures/facetable?_where=_neighborhood%20like%20%22%c%%22&_where=_city_id=3\", \"label\": \"facetable?_where=_neighborhood like \\\"%c%\\\"&_where=_city_id=3\"}, {\"href\": \"https://latest.datasette.io/fixtures/facetable?_where=_city_id%20in%20(select%20id%20from%20facet_cities%20where%20name%20!=%20%22Detroit%22)\", \"label\": \"facetable?_where=_city_id in (select id from facet_cities where name != \\\"Detroit\\\")\"}, {\"href\": \"https://latest.datasette.io/fixtures/roadside_attractions?_through={%22table%22:%22roadside_attraction_characteristics%22,%22column%22:%22characteristic_id%22,%22value%22:%221%22}\", \"label\": \"an example\"}]"} {"id": "json_api:json-api-special", "page": "json_api", "ref": "json-api-special", "title": "Special JSON arguments", "content": "Every Datasette endpoint that can return JSON also accepts the following\n query string arguments: \n \n \n ?_shape=SHAPE \n \n The shape of the JSON to return, documented above. \n \n \n \n ?_nl=on \n \n When used with ?_shape=array produces newline-delimited JSON objects. \n \n \n \n ?_json=COLUMN1&_json=COLUMN2 \n \n If any of your SQLite columns contain JSON values, you can use one or more\n _json= parameters to request that those columns be returned as regular\n JSON. Without this argument those columns will be returned as JSON objects\n that have been double-encoded into a JSON string value. 
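The ?_through= argument described above takes URL-encoded JSON, which is easier to build in code than by hand. A sketch using only the Python standard library (table and column names are taken from the roadside_attractions example above):

import json
from urllib.parse import urlencode

through = {
    "table": "roadside_attraction_characteristics",
    "column": "characteristic_id",
    "value": "1",
}
# urlencode() percent-encodes the serialized JSON for use in a URL
qs = urlencode({"_through": json.dumps(through)})
url = "https://latest.datasette.io/fixtures/roadside_attractions.json?" + qs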
\n Compare this query without the argument to this query using the argument \n \n \n \n ?_json_infinity=on \n \n If your data contains infinity or -infinity values, Datasette will replace\n them with None when returning them as JSON. If you pass _json_infinity=1 \n Datasette will instead return them as Infinity or -Infinity which is\n invalid JSON but can be processed by some custom JSON parsers. \n \n \n \n ?_timelimit=MS \n \n Sets a custom time limit for the query in ms. You can use this for optimistic\n queries where you would like Datasette to give up if the query takes too\n long, for example if you want to implement autocomplete search but only if\n it can be executed in less than 10ms. \n \n \n \n ?_ttl=SECONDS \n \n For how many seconds should this response be cached by HTTP proxies? Use\n ?_ttl=0 to disable HTTP caching entirely for this request. \n \n \n \n ?_trace=1 \n \n Turns on tracing for this page: SQL queries executed during the request will\n be gathered and included in the response, either in a new \"_traces\" key\n for JSON responses or at the bottom of the page if the response is in HTML. \n The structure of the data returned here should be considered highly unstable\n and very likely to change. \n Only available if the trace_debug setting is enabled.", "breadcrumbs": "[\"JSON API\"]", "references": "[{\"href\": \"https://fivethirtyeight.datasettes.com/fivethirtyeight.json?sql=select+%27{%22this+is%22%3A+%22a+json+object%22}%27+as+d&_shape=array\", \"label\": \"this query without the argument\"}, {\"href\": \"https://fivethirtyeight.datasettes.com/fivethirtyeight.json?sql=select+%27{%22this+is%22%3A+%22a+json+object%22}%27+as+d&_shape=array&_json=d\", \"label\": \"this query using the argument\"}]"} {"id": "json_api:json-api-shapes", "page": "json_api", "ref": "json-api-shapes", "title": "Different shapes", "content": "The default JSON representation of data from a SQLite table or custom query\n looks like this: \n {\n \"database\": \"sf-trees\",\n \"table\": \"qSpecies\",\n \"columns\": [\n \"id\",\n \"value\"\n ],\n \"rows\": [\n [\n 1,\n \"Myoporum laetum :: Myoporum\"\n ],\n [\n 2,\n \"Metrosideros excelsa :: New Zealand Xmas Tree\"\n ],\n [\n 3,\n \"Pinus radiata :: Monterey Pine\"\n ]\n ],\n \"truncated\": false,\n \"next\": \"100\",\n \"next_url\": \"http://127.0.0.1:8001/sf-trees-02c8ef1/qSpecies.json?_next=100\",\n \"query_ms\": 1.9571781158447266\n} \n The columns key lists the columns that are being returned, and the rows \n key then returns a list of lists, each one representing a row. The order of the\n values in each row corresponds to the columns. \n The _shape parameter can be used to access alternative formats for the\n rows key which may be more convenient for your application. 
There are six\n options: \n \n \n ?_shape=arrays - \"rows\" is the default option, shown above \n \n \n ?_shape=objects - \"rows\" is a list of JSON key/value objects \n \n \n ?_shape=array - a JSON array of objects \n \n \n ?_shape=array&_nl=on - a newline-separated list of JSON objects \n \n \n ?_shape=arrayfirst - a flat JSON array containing just the first value from each row \n \n \n ?_shape=object - a JSON object keyed using the primary keys of the rows \n \n \n _shape=objects looks like this: \n {\n \"database\": \"sf-trees\",\n ...\n \"rows\": [\n {\n \"id\": 1,\n \"value\": \"Myoporum laetum :: Myoporum\"\n },\n {\n \"id\": 2,\n \"value\": \"Metrosideros excelsa :: New Zealand Xmas Tree\"\n },\n {\n \"id\": 3,\n \"value\": \"Pinus radiata :: Monterey Pine\"\n }\n ]\n} \n _shape=array looks like this: \n [\n {\n \"id\": 1,\n \"value\": \"Myoporum laetum :: Myoporum\"\n },\n {\n \"id\": 2,\n \"value\": \"Metrosideros excelsa :: New Zealand Xmas Tree\"\n },\n {\n \"id\": 3,\n \"value\": \"Pinus radiata :: Monterey Pine\"\n }\n] \n _shape=array&_nl=on looks like this: \n {\"id\": 1, \"value\": \"Myoporum laetum :: Myoporum\"}\n{\"id\": 2, \"value\": \"Metrosideros excelsa :: New Zealand Xmas Tree\"}\n{\"id\": 3, \"value\": \"Pinus radiata :: Monterey Pine\"} \n _shape=arrayfirst looks like this: \n [1, 2, 3] \n _shape=object looks like this: \n {\n \"1\": {\n \"id\": 1,\n \"value\": \"Myoporum laetum :: Myoporum\"\n },\n \"2\": {\n \"id\": 2,\n \"value\": \"Metrosideros excelsa :: New Zealand Xmas Tree\"\n },\n \"3\": {\n \"id\": 3,\n \"value\": \"Pinus radiata :: Monterey Pine\"\n }\n} \n The object shape is only available for queries against tables - custom SQL\n queries and views do not have an obvious primary key so cannot be returned using\n this format. \n The object keys are always strings. If your table has a compound primary\n key, the object keys will be a comma-separated string.", "breadcrumbs": "[\"JSON API\"]", "references": "[]"} {"id": "json_api:json-api-pagination", "page": "json_api", "ref": "json-api-pagination", "title": "Pagination", "content": "The default JSON representation includes a \"next_url\" key which can be used to access the next page of results. If that key is null or missing then it means you have reached the final page of results. \n Other representations include pagination information in the link HTTP header. That header will look something like this: \n link: <https://latest.datasette.io/fixtures/sortable.json?_next=d%2Cv>; rel=\"next\" \n Here is an example Python function built using requests that returns a list of all of the paginated items from one of these API endpoints: \n def paginate(url):\n items = []\n while url:\n response = requests.get(url)\n try:\n url = response.links.get(\"next\").get(\"url\")\n except AttributeError:\n url = None\n items.extend(response.json())\n return items", "breadcrumbs": "[\"JSON API\"]", "references": "[{\"href\": \"https://requests.readthedocs.io/\", \"label\": \"requests\"}]"} {"id": "json_api:json-api-discover-alternate", "page": "json_api", "ref": "json-api-discover-alternate", "title": "Discovering the JSON for a page", "content": "Most of the HTML pages served by Datasette provide a mechanism for discovering their JSON equivalents using the HTML link mechanism.
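Here's a usage sketch for the paginate() function defined above, combining it with ?_shape=array so that each page deserializes to a plain list of row objects (assumes the requests library is installed; the paginate() definition is repeated to keep the sketch self-contained):

import requests

def paginate(url):
    # Follow the "next" link header until there are no more pages
    items = []
    while url:
        response = requests.get(url)
        try:
            url = response.links.get("next").get("url")
        except AttributeError:
            url = None
        items.extend(response.json())
    return items

rows = paginate(
    "https://latest.datasette.io/fixtures/sortable.json?_shape=array"
)
print(len(rows))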
\n You can find this near the top of the source code of those pages, looking like this: \n <link rel=\"alternate\"\n type=\"application/json+datasette\"\n href=\"https://latest.datasette.io/fixtures/sortable.json\"> \n The JSON URL is also made available in a Link HTTP header for the page: \n Link: https://latest.datasette.io/fixtures/sortable.json; rel=\"alternate\"; type=\"application/json+datasette\"", "breadcrumbs": "[\"JSON API\"]", "references": "[]"} {"id": "changelog:javascript-modules", "page": "changelog", "ref": "javascript-modules", "title": "JavaScript modules", "content": "JavaScript modules were introduced in ECMAScript 2015 and provide native browser support for the import and export keywords. \n To use modules, JavaScript needs to be included in <script> tags with a type=\"module\" attribute. \n Datasette now has the ability to output <script type=\"module\"> in places where you may wish to take advantage of modules. The extra_js_urls option described in Custom CSS and JavaScript can now be used with modules, and module support is also available for the extra_body_script() plugin hook. ( #1186 , #1187 ) \n datasette-leaflet-freedraw is the first example of a Datasette plugin that takes advantage of the new support for JavaScript modules. See Drawing shapes on a map to query a SpatiaLite database for more on this plugin.", "breadcrumbs": "[\"Changelog\", \"0.54 (2021-01-25)\"]", "references": "[{\"href\": \"https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules\", \"label\": \"JavaScript modules\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1186\", \"label\": \"#1186\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1187\", \"label\": \"#1187\"}, {\"href\": \"https://datasette.io/plugins/datasette-leaflet-freedraw\", \"label\": \"datasette-leaflet-freedraw\"}, {\"href\": \"https://simonwillison.net/2021/Jan/24/drawing-shapes-spatialite/\", \"label\": \"Drawing shapes on a map to query a SpatiaLite database\"}]"} {"id": "internals:internals-utils-parse-metadata", "page": "internals", "ref": "internals-utils-parse-metadata", "title": "parse_metadata(content)", "content": "This function accepts a string containing either JSON or YAML, expected to be of the format described in Metadata . It returns a nested Python dictionary representing the parsed data from that string. \n If the metadata cannot be parsed as either JSON or YAML the function will raise a utils.BadMetadataError exception. \n \n \n datasette.utils.parse_metadata(content: str) -> dict \n \n Detects if content is JSON or YAML and parses it appropriately.", "breadcrumbs": "[\"Internals for plugins\", \"The datasette.utils module\"]", "references": "[]"} {"id": "internals:internals-utils-await-me-maybe", "page": "internals", "ref": "internals-utils-await-me-maybe", "title": "await_me_maybe(value)", "content": "Utility function for calling await on a return value if it is awaitable, otherwise returning the value. This is used by Datasette to support plugin hooks that can optionally return awaitable functions. Read more about this function in The \u201cawait me maybe\u201d pattern for Python asyncio . \n \n \n async datasette.utils.await_me_maybe(value: Any) -> Any \n \n If value is callable, call it. If awaitable, await it.
Otherwise return it.", "breadcrumbs": "[\"Internals for plugins\", \"The datasette.utils module\"]", "references": "[{\"href\": \"https://simonwillison.net/2020/Sep/2/await-me-maybe/\", \"label\": \"The \u201cawait me maybe\u201d pattern for Python asyncio\"}]"} {"id": "internals:internals-utils", "page": "internals", "ref": "internals-utils", "title": "The datasette.utils module", "content": "The datasette.utils module contains various utility functions used by Datasette. As a general rule you should consider anything in this module to be unstable - functions and classes here could change without warning or be removed entirely between Datasette releases, without being mentioned in the release notes. \n The exception to this rule is anything that is documented here. If you find a need for an undocumented utility function in your own work, consider opening an issue requesting that the function you are using be upgraded to documented and supported status.", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/issues/new\", \"label\": \"opening an issue\"}]"} {"id": "internals:internals-tracer-trace-child-tasks", "page": "internals", "ref": "internals-tracer-trace-child-tasks", "title": "Tracing child tasks", "content": "If your code uses a mechanism such as asyncio.gather() to execute code in additional tasks you may find that some of the traces are missing from the display. \n You can use the trace_child_tasks() context manager to ensure these child tasks are correctly handled. \n from datasette import tracer\n\nwith tracer.trace_child_tasks():\n results = await asyncio.gather(\n # ... async tasks here\n ) \n This example uses the register_routes() plugin hook to add a page at /parallel-queries which executes two SQL queries in parallel using asyncio.gather() and returns their results. \n import asyncio\n\nfrom datasette import hookimpl\nfrom datasette import tracer\nfrom datasette import Response\n\n\n@hookimpl\ndef register_routes():\n async def parallel_queries(datasette):\n db = datasette.get_database()\n with tracer.trace_child_tasks():\n one, two = await asyncio.gather(\n db.execute(\"select 1\"),\n db.execute(\"select 2\"),\n )\n return Response.json(\n {\n \"one\": one.single_value(),\n \"two\": two.single_value(),\n }\n )\n\n return [\n (r\"/parallel-queries$\", parallel_queries),\n ] \n Adding ?_trace=1 will show that the trace covers both of those child tasks.", "breadcrumbs": "[\"Internals for plugins\", \"datasette.tracer\"]", "references": "[]"} {"id": "internals:internals-tracer", "page": "internals", "ref": "internals-tracer", "title": "datasette.tracer", "content": "Running Datasette with --setting trace_debug 1 enables trace debug output, which can then be viewed by adding ?_trace=1 to the query string for any page. \n You can see an example of this at the bottom of latest.datasette.io/fixtures/facetable?_trace=1 . The JSON output shows full details of every SQL query that was executed to generate the page. \n The datasette-pretty-traces plugin can be installed to provide a more readable display of this information. You can see a demo of that here . \n You can add your own custom traces to the JSON output using the trace() context manager. This takes a string that identifies the type of trace being recorded, and records any keyword arguments as additional JSON keys on the resulting trace object. \n The start and end time, duration and a traceback of where the trace was executed will be automatically attached to the JSON object.
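Stepping back to await_me_maybe() , documented above - a minimal usage sketch covering all three cases (the trace() example continues below):

import asyncio

from datasette.utils import await_me_maybe

async def demo():
    # A plain value is returned unchanged:
    print(await await_me_maybe(42))

    async def answer():
        return 42

    # A callable is called, and an awaitable is awaited:
    print(await await_me_maybe(answer))
    print(await await_me_maybe(answer()))

asyncio.run(demo())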
\n This example uses trace to record the start, end and duration of any HTTP GET requests made using the function: \n from datasette.tracer import trace\nimport httpx\n\n\nasync def fetch_url(url):\n with trace(\"fetch-url\", url=url):\n async with httpx.AsyncClient() as client:\n return await client.get(url)", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[{\"href\": \"https://latest.datasette.io/fixtures/facetable?_trace=1\", \"label\": \"latest.datasette.io/fixtures/facetable?_trace=1\"}, {\"href\": \"https://datasette.io/plugins/datasette-pretty-traces\", \"label\": \"datasette-pretty-traces\"}, {\"href\": \"https://latest-with-plugins.datasette.io/github/commits?_trace=1\", \"label\": \"a demo of that here\"}]"} {"id": "internals:internals-tilde-encoding", "page": "internals", "ref": "internals-tilde-encoding", "title": "Tilde encoding", "content": "Datasette uses a custom encoding scheme in some places, called tilde encoding . This is primarily used for table names and row primary keys, to avoid any confusion between / characters in those values and the Datasette URLs that reference them. \n Tilde encoding uses the same algorithm as URL percent-encoding , but with the ~ tilde character used in place of % . \n Any character other than ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz0123456789_- will be replaced by the numeric equivalent preceded by a tilde. For example: \n \n \n / becomes ~2F \n \n \n . becomes ~2E \n \n \n % becomes ~25 \n \n \n ~ becomes ~7E \n \n \n Space becomes + \n \n \n polls/2022.primary becomes polls~2F2022~2Eprimary \n \n \n Note that the space character is a special case: it will be replaced with a + symbol. \n \n \n \n datasette.utils.tilde_encode(s: str) -> str \n \n Returns tilde-encoded string - for example /foo/bar -> ~2Ffoo~2Fbar \n \n \n \n \n \n datasette.utils.tilde_decode(s: str) -> str \n \n Decodes a tilde-encoded string, so ~2Ffoo~2Fbar -> /foo/bar", "breadcrumbs": "[\"Internals for plugins\", \"The datasette.utils module\"]", "references": "[{\"href\": \"https://developer.mozilla.org/en-US/docs/Glossary/percent-encoding\", \"label\": \"URL percent-encoding\"}]"} {"id": "internals:internals-shortcuts", "page": "internals", "ref": "internals-shortcuts", "title": "Import shortcuts", "content": "The following commonly used symbols can be imported directly from the datasette module: \n from datasette import Response\nfrom datasette import Forbidden\nfrom datasette import NotFound\nfrom datasette import hookimpl\nfrom datasette import actor_matches_allow", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[]"} {"id": "internals:internals-response-set-cookie", "page": "internals", "ref": "internals-response-set-cookie", "title": "Setting cookies with response.set_cookie()", "content": "To set cookies on the response, use the response.set_cookie(...) method. The method signature looks like this: \n def set_cookie(\n self,\n key,\n value=\"\",\n max_age=None,\n expires=None,\n path=\"/\",\n domain=None,\n secure=False,\n httponly=False,\n samesite=\"lax\",\n):\n ... \n You can use this with datasette.sign() to set signed cookies.
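Before the cookie example below, a quick round-trip sketch of the tilde encoding helpers documented above:

from datasette.utils import tilde_encode, tilde_decode

assert tilde_encode("polls/2022.primary") == "polls~2F2022~2Eprimary"
assert tilde_decode("polls~2F2022~2Eprimary") == "polls/2022.primary"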
Here's how you would set the ds_actor cookie for use with Datasette authentication : \n response = Response.redirect(\"/\")\nresponse.set_cookie(\n \"ds_actor\",\n datasette.sign({\"a\": {\"id\": \"cleopaws\"}}, \"actor\"),\n)\nreturn response", "breadcrumbs": "[\"Internals for plugins\", \"Response class\"]", "references": "[]"} {"id": "internals:internals-response-asgi-send", "page": "internals", "ref": "internals-response-asgi-send", "title": "Returning a response with .asgi_send(send)", "content": "In most cases you will return Response objects from your own view functions. You can also use a Response instance to respond at a lower level via ASGI, for example if you are writing code that uses the asgi_wrapper(datasette) hook. \n Create a Response object and then use await response.asgi_send(send) , passing the ASGI send function. For example: \n async def require_authorization(scope, receive, send):\n response = Response.text(\n \"401 Authorization Required\",\n headers={\n \"www-authenticate\": 'Basic realm=\"Datasette\", charset=\"UTF-8\"'\n },\n status=401,\n )\n await response.asgi_send(send)", "breadcrumbs": "[\"Internals for plugins\", \"Response class\"]", "references": "[]"} {"id": "internals:internals-response", "page": "internals", "ref": "internals-response", "title": "Response class", "content": "The Response class can be returned from view functions that have been registered using the register_routes(datasette) hook. \n The Response() constructor takes the following arguments: \n \n \n body - string \n \n The body of the response. \n \n \n \n status - integer (optional) \n \n The HTTP status - defaults to 200. \n \n \n \n headers - dictionary (optional) \n \n A dictionary of extra HTTP headers, e.g. {\"x-hello\": \"world\"} . \n \n \n \n content_type - string (optional) \n \n The content-type for the response. Defaults to text/plain . \n \n \n \n For example: \n from datasette.utils.asgi import Response\n\nresponse = Response(\n \"<xml>This is XML</xml>\",\n content_type=\"application/xml; charset=utf-8\",\n) \n The quickest way to create responses is using the Response.text(...) , Response.html(...) , Response.json(...) or Response.redirect(...) helper methods: \n from datasette.utils.asgi import Response\n\nhtml_response = Response.html(\"This is HTML\")\njson_response = Response.json({\"this_is\": \"json\"})\ntext_response = Response.text(\n \"This will become utf-8 encoded text\"\n)\n# Redirects are served as 302, unless you pass status=301:\nredirect_response = Response.redirect(\n \"https://latest.datasette.io/\"\n) \n Each of these responses will use the correct corresponding content-type - text/html; charset=utf-8 , application/json; charset=utf-8 or text/plain; charset=utf-8 respectively. \n Each of the helper methods takes optional status= and headers= arguments, documented above.", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[]"} {"id": "internals:internals-request", "page": "internals", "ref": "internals-request", "title": "Request object", "content": "The request object is passed to various plugin hooks. It represents an incoming HTTP request. It has the following properties: \n \n \n .scope - dictionary \n \n The ASGI scope that was used to construct this request, described in the ASGI HTTP connection scope specification. \n \n \n \n .method - string \n \n The HTTP method for this request, usually GET or POST . \n \n \n \n .url - string \n \n The full URL for this request, e.g. https://latest.datasette.io/fixtures .
\n \n \n \n .scheme - string \n \n The request scheme - usually https or http . \n \n \n \n .headers - dictionary (str -> str) \n \n A dictionary of incoming HTTP request headers. Header names have been converted to lowercase. \n \n \n \n .cookies - dictionary (str -> str) \n \n A dictionary of incoming cookies \n \n \n \n .host - string \n \n The host header from the incoming request, e.g. latest.datasette.io or localhost . \n \n \n \n .path - string \n \n The path of the request excluding the query string, e.g. /fixtures . \n \n \n \n .full_path - string \n \n The path of the request including the query string if one is present, e.g. /fixtures?sql=select+sqlite_version() . \n \n \n \n .query_string - string \n \n The query string component of the request, without the ? - e.g. name__contains=sam&age__gt=10 . \n \n \n \n .args - MultiParams \n \n An object representing the parsed query string parameters, see below. \n \n \n \n .url_vars - dictionary (str -> str) \n \n Variables extracted from the URL path, if that path was defined using a regular expression. See register_routes(datasette) . \n \n \n \n .actor - dictionary (str -> Any) or None \n \n The currently authenticated actor (see actors ), or None if the request is unauthenticated. \n \n \n \n The object also has two awaitable methods: \n \n \n await request.post_vars() - dictionary \n \n Returns a dictionary of form variables that were submitted in the request body via POST . Don't forget to read about CSRF protection ! \n \n \n \n await request.post_body() - bytes \n \n Returns the un-parsed body of a request submitted by POST - useful for things like incoming JSON data. \n \n \n \n And a class method that can be used to create fake request objects for use in tests: \n \n \n fake(path_with_query_string, method=\"GET\", scheme=\"http\", url_vars=None) \n \n Returns a Request instance for the specified path and method. For example: \n from datasette import Request\nfrom pprint import pprint\n\nrequest = Request.fake(\n \"/fixtures/facetable/\",\n url_vars={\"database\": \"fixtures\", \"table\": \"facetable\"},\n)\npprint(request.scope) \n This outputs: \n {'http_version': '1.1',\n 'method': 'GET',\n 'path': '/fixtures/facetable/',\n 'query_string': b'',\n 'raw_path': b'/fixtures/facetable/',\n 'scheme': 'http',\n 'type': 'http',\n 'url_route': {'kwargs': {'database': 'fixtures', 'table': 'facetable'}}}", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[{\"href\": \"https://asgi.readthedocs.io/en/latest/specs/www.html#connection-scope\", \"label\": \"ASGI HTTP connection scope\"}]"} {"id": "internals:internals-multiparams", "page": "internals", "ref": "internals-multiparams", "title": "The MultiParams class", "content": "request.args is a MultiParams object - a dictionary-like object which provides access to query string parameters that may have multiple values. \n Consider the query string ?foo=1&foo=2&bar=3 - with two values for foo and one value for bar . \n \n \n request.args[key] - string \n \n Returns the first value for that key, or raises a KeyError if the key is missing. For the above example request.args[\"foo\"] would return \"1\" . \n \n \n \n request.args.get(key) - string or None \n \n Returns the first value for that key, or None if the key is missing. Pass a second argument to specify a different default, e.g. q = request.args.get(\"q\", \"\") . \n \n \n \n request.args.getlist(key) - list of strings \n \n Returns the list of strings for that key. 
request.args.getlist(\"foo\") would return [\"1\", \"2\"] in the above example. request.args.getlist(\"bar\") would return [\"3\"] . If the key is missing an empty list will be returned. \n \n \n \n request.args.keys() - list of strings \n \n Returns the list of available keys - for the example this would be [\"foo\", \"bar\"] . \n \n \n \n key in request.args - True or False \n \n You can use if key in request.args to check if a key is present. \n \n \n \n for key in request.args - iterator \n \n This lets you loop through every available key. \n \n \n \n len(request.args) - integer \n \n Returns the number of keys.", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[]"} {"id": "internals:internals-internal", "page": "internals", "ref": "internals-internal", "title": "The _internal database", "content": "This API should be considered unstable - the structure of these tables may change prior to the release of Datasette 1.0. \n \n Datasette maintains an in-memory SQLite database with details of the databases, tables and columns for all of the attached databases. \n By default all actors are denied access to the view-database permission for the _internal database, so the database is not visible to anyone unless they sign in as root . \n Plugins can access this database by calling db = datasette.get_database(\"_internal\") and then executing queries using the Database API . \n You can explore an example of this database by signing in as root to the latest.datasette.io demo instance and then navigating to latest.datasette.io/_internal .", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[{\"href\": \"https://latest.datasette.io/login-as-root\", \"label\": \"signing in as root\"}, {\"href\": \"https://latest.datasette.io/_internal\", \"label\": \"latest.datasette.io/_internal\"}]"} {"id": "internals:internals-datasette-urls", "page": "internals", "ref": "internals-datasette-urls", "title": "datasette.urls", "content": "The datasette.urls object contains methods for building URLs to pages within Datasette. Plugins should use this to link to pages, since these methods take into account any base_url configuration setting that might be in effect. \n \n \n datasette.urls.instance(format=None) \n \n Returns the URL to the Datasette instance root page. This is usually \"/\" . \n \n \n \n datasette.urls.path(path, format=None) \n \n Takes a path and returns the full path, taking base_url into account. \n For example, datasette.urls.path(\"-/logout\") will return the path to the logout page, which will be \"/-/logout\" by default or /prefix-path/-/logout if base_url is set to /prefix-path/ \n \n \n \n datasette.urls.logout() \n \n Returns the URL to the logout page, usually \"/-/logout\" \n \n \n \n datasette.urls.static(path) \n \n Returns the URL of one of Datasette's default static assets, for example \"/-/static/app.css\" \n \n \n \n datasette.urls.static_plugins(plugin_name, path) \n \n Returns the URL of one of the static assets belonging to a plugin.
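A sketch of MultiParams behavior, using the Request.fake() class method documented earlier (the static_plugins() example continues below):

from datasette import Request

request = Request.fake("/?foo=1&foo=2&bar=3")
assert request.args["foo"] == "1"
assert request.args.getlist("foo") == ["1", "2"]
assert request.args.getlist("missing") == []
assert request.args.get("missing", "default") == "default"
assert len(request.args) == 2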
\n datasette.urls.static_plugins(\"datasette_cluster_map\", \"datasette-cluster-map.js\") would return \"/-/static-plugins/datasette_cluster_map/datasette-cluster-map.js\" \n \n \n \n datasette.urls.database(database_name, format=None) \n \n Returns the URL to a database page, for example \"/fixtures\" \n \n \n \n datasette.urls.table(database_name, table_name, format=None) \n \n Returns the URL to a table page, for example \"/fixtures/facetable\" \n \n \n \n datasette.urls.query(database_name, query_name, format=None) \n \n Returns the URL to a query page, for example \"/fixtures/pragma_cache_size\" \n \n \n \n These functions can be accessed via the {{ urls }} object in Datasette templates, for example: \n <a href=\"{{ urls.instance() }}\">Homepage</a>\n<a href=\"{{ urls.database(\"fixtures\") }}\">Fixtures database</a>\n<a href=\"{{ urls.table(\"fixtures\", \"facetable\") }}\">facetable table</a>\n<a href=\"{{ urls.query(\"fixtures\", \"pragma_cache_size\") }}\">pragma_cache_size query</a> \n Use the format=\"json\" (or \"csv\" or other formats supported by plugins) arguments to get back URLs to the JSON representation. This is the path with .json added on the end. \n These methods each return a datasette.utils.PrefixedUrlString object, which is a subclass of the Python str type. This allows the logic that considers the base_url setting to detect if that prefix has already been applied to the path.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:internals-datasette-client", "page": "internals", "ref": "internals-datasette-client", "title": "datasette.client", "content": "Plugins can make internal simulated HTTP requests to the Datasette instance within which they are running. This ensures that all of Datasette's external JSON APIs are also available to plugins, while avoiding the overhead of making an external HTTP call to access those APIs. \n The datasette.client object is a wrapper around the HTTPX Python library , providing an async-friendly API that is similar to the widely used Requests library . \n It offers the following methods: \n \n \n await datasette.client.get(path, **kwargs) - returns HTTPX Response \n \n Execute an internal GET request against that path. \n \n \n \n await datasette.client.post(path, **kwargs) - returns HTTPX Response \n \n Execute an internal POST request. Use data={\"name\": \"value\"} to pass form parameters. \n \n \n \n await datasette.client.options(path, **kwargs) - returns HTTPX Response \n \n Execute an internal OPTIONS request. \n \n \n \n await datasette.client.head(path, **kwargs) - returns HTTPX Response \n \n Execute an internal HEAD request. \n \n \n \n await datasette.client.put(path, **kwargs) - returns HTTPX Response \n \n Execute an internal PUT request. \n \n \n \n await datasette.client.patch(path, **kwargs) - returns HTTPX Response \n \n Execute an internal PATCH request. \n \n \n \n await datasette.client.delete(path, **kwargs) - returns HTTPX Response \n \n Execute an internal DELETE request. \n \n \n \n await datasette.client.request(method, path, **kwargs) - returns HTTPX Response \n \n Execute an internal request with the given HTTP method against that path.
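As a sketch of datasette.urls in plugin code, combined here with the register_routes() hook shown earlier (the /where-am-i route and key names are illustrative, not part of the docs; a combined datasette.client example follows below):

from datasette import hookimpl, Response

@hookimpl
def register_routes():
    async def where_am_i(datasette):
        # URLs built this way respect any base_url prefix setting
        return Response.json(
            {
                "instance": datasette.urls.instance(),
                "logout": datasette.urls.logout(),
                "facetable_json": datasette.urls.table(
                    "fixtures", "facetable", format="json"
                ),
            }
        )

    return [(r"/where-am-i$", where_am_i)]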
\n \n \n \n These methods can be used with datasette.urls - for example: \n table_json = (\n await datasette.client.get(\n datasette.urls.table(\n \"fixtures\", \"facetable\", format=\"json\"\n )\n )\n).json() \n datasette.client methods automatically take the current base_url setting into account, whether or not you use the datasette.urls family of methods to construct the path. \n For documentation on available **kwargs options and the shape of the HTTPX Response object refer to the HTTPX Async documentation .", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[{\"href\": \"https://www.python-httpx.org/\", \"label\": \"HTTPX Python library\"}, {\"href\": \"https://requests.readthedocs.io/\", \"label\": \"Requests library\"}, {\"href\": \"https://www.python-httpx.org/async/\", \"label\": \"HTTPX Async documentation\"}]"} {"id": "internals:internals-datasette", "page": "internals", "ref": "internals-datasette", "title": "Datasette class", "content": "This object is an instance of the Datasette class, passed to many plugin hooks as an argument called datasette . \n You can create your own instance of this - for example to help write tests for a plugin - like so: \n from datasette.app import Datasette\n\n# With no arguments a single in-memory database will be attached\ndatasette = Datasette()\n\n# The files= argument can load files from disk\ndatasette = Datasette(files=[\"/path/to/my-database.db\"])\n\n# Pass metadata as a JSON dictionary like this\ndatasette = Datasette(\n files=[\"/path/to/my-database.db\"],\n metadata={\n \"databases\": {\n \"my-database\": {\n \"description\": \"This is my database\"\n }\n }\n },\n) \n Constructor parameters include: \n \n \n files=[...] - a list of database files to open \n \n \n immutables=[...] - a list of database files to open in immutable mode \n \n \n metadata={...} - a dictionary of Metadata \n \n \n config_dir=... - the configuration directory to use, stored in datasette.config_dir", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[]"} {"id": "internals:internals-database-introspection", "page": "internals", "ref": "internals-database-introspection", "title": "Database introspection", "content": "The Database class also provides properties and methods for introspecting the database. \n \n \n db.name - string \n \n The name of the database - usually the filename without the .db extension. \n \n \n \n db.size - integer \n \n The size of the database file in bytes. 0 for :memory: databases. \n \n \n \n db.mtime_ns - integer or None \n \n The last modification time of the database file in nanoseconds since the epoch. None for :memory: databases. \n \n \n \n db.is_mutable - boolean \n \n Is this database mutable, and allowed to accept writes? \n \n \n \n db.is_memory - boolean \n \n Is this database an in-memory database? \n \n \n \n await db.attached_databases() - list of named tuples \n \n Returns a list of additional databases that have been connected to this database using the SQLite ATTACH command. Each named tuple has fields seq , name and file . \n \n \n \n await db.table_exists(table) - boolean \n \n Check if a table called table exists. \n \n \n \n await db.table_names() - list of strings \n \n List of names of tables in the database. \n \n \n \n await db.view_names() - list of strings \n \n List of names of views in the database. \n \n \n \n await db.table_columns(table) - list of strings \n \n Names of columns in a specific table.
\n \n \n \n await db.table_column_details(table) - list of named tuples \n \n Full details of the columns in a specific table. Each column is represented by a Column named tuple with fields cid (integer representing the column position), name (string), type (string, e.g. REAL or VARCHAR(30) ), notnull (integer 1 or 0), default_value (string or None), is_pk (integer 1 or 0). \n \n \n \n await db.primary_keys(table) - list of strings \n \n Names of the columns that are part of the primary key for this table. \n \n \n \n await db.fts_table(table) - string or None \n \n The name of the FTS table associated with this table, if one exists. \n \n \n \n await db.label_column_for_table(table) - string or None \n \n The label column that is associated with this table - either automatically detected or using the \"label_column\" key from Metadata , see Specifying the label column for a table . \n \n \n \n await db.foreign_keys_for_table(table) - list of dictionaries \n \n Details of columns in this table which are foreign keys to other tables. A list of dictionaries where each dictionary is shaped like this: {\"column\": string, \"other_table\": string, \"other_column\": string} . \n \n \n \n await db.hidden_table_names() - list of strings \n \n List of tables which Datasette \"hides\" by default - usually these are tables associated with SQLite's full-text search feature, the SpatiaLite extension or tables hidden using the Hiding tables feature. \n \n \n \n await db.get_table_definition(table) - string \n \n Returns the SQL definition for the table - the CREATE TABLE statement and any associated CREATE INDEX statements. \n \n \n \n await db.get_view_definition(view) - string \n \n Returns the SQL definition of the named view. \n \n \n \n await db.get_all_foreign_keys() - dictionary \n \n Dictionary representing both incoming and outgoing foreign keys for this table. It has two keys, \"incoming\" and \"outgoing\" , each of which is a list of dictionaries with keys \"column\" , \"other_table\" and \"other_column\" . For example: \n {\n \"incoming\": [],\n \"outgoing\": [\n {\n \"other_table\": \"attraction_characteristic\",\n \"column\": \"characteristic_id\",\n \"other_column\": \"pk\",\n },\n {\n \"other_table\": \"roadside_attractions\",\n \"column\": \"attraction_id\",\n \"other_column\": \"pk\",\n }\n ]\n}", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[]"} {"id": "internals:internals-database", "page": "internals", "ref": "internals-database", "title": "Database class", "content": "Instances of the Database class can be used to execute queries against attached SQLite databases, and to run introspection against their schemas.", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[]"} {"id": "internals:internals-csrf", "page": "internals", "ref": "internals-csrf", "title": "CSRF protection", "content": "Datasette uses asgi-csrf to guard against CSRF attacks on form POST submissions. Users receive a ds_csrftoken cookie which is compared against the csrftoken form field (or x-csrftoken HTTP header) for every incoming request. \n If your plugin implements a <form method=\"POST\"> anywhere you will need to include that token. You can do so with the following template snippet: \n <input type=\"hidden\" name=\"csrftoken\" value=\"{{ csrftoken() }}\"> \n If you are rendering templates using the await .render_template(template, context=None, request=None) method the csrftoken() helper will only work if you provide the request= argument to that method. 
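A sketch tying several of these introspection methods together (assumes a fixtures.db file exists on disk; the CSRF discussion continues below):

import asyncio

from datasette.app import Datasette

async def show_schema():
    datasette = Datasette(files=["fixtures.db"])
    db = datasette.get_database("fixtures")
    for table in await db.table_names():
        print(table, await db.table_columns(table))
        print("  primary keys:", await db.primary_keys(table))

asyncio.run(show_schema())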
If you forget to do this you will see the following error: \n form-urlencoded POST field did not match cookie \n You can selectively disable CSRF protection using the skip_csrf(datasette, scope) hook.", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[{\"href\": \"https://github.com/simonw/asgi-csrf\", \"label\": \"asgi-csrf\"}]"} {"id": "internals:internals", "page": "internals", "ref": "internals", "title": "Internals for plugins", "content": "Many Plugin hooks are passed objects that provide access to internal Datasette functionality. The interface to these objects should not be considered stable with the exception of methods that are documented here.", "breadcrumbs": "[]", "references": "[]"} {"id": "spatialite:installing-spatialite-on-os-x", "page": "spatialite", "ref": "installing-spatialite-on-os-x", "title": "Installing SpatiaLite on OS X", "content": "The easiest way to install SpatiaLite on OS X is to use Homebrew . \n brew update\nbrew install spatialite-tools \n This will install the spatialite command-line tool and the mod_spatialite dynamic library. \n You can now run Datasette like so: \n datasette --load-extension=spatialite", "breadcrumbs": "[\"SpatiaLite\", \"Installation\"]", "references": "[{\"href\": \"https://brew.sh/\", \"label\": \"Homebrew\"}]"} {"id": "spatialite:installing-spatialite-on-linux", "page": "spatialite", "ref": "installing-spatialite-on-linux", "title": "Installing SpatiaLite on Linux", "content": "SpatiaLite is packaged for most Linux distributions. \n apt install spatialite-bin libsqlite3-mod-spatialite \n Depending on your distribution, you should be able to run Datasette something like this: \n datasette --load-extension=/usr/lib/x86_64-linux-gnu/mod_spatialite.so \n If you are unsure of the location of the module, try running locate mod_spatialite and see what comes back.", "breadcrumbs": "[\"SpatiaLite\", \"Installation\"]", "references": "[]"} {"id": "installation:installing-plugins-using-pipx", "page": "installation", "ref": "installing-plugins-using-pipx", "title": "Installing plugins using pipx", "content": "You can install additional datasette plugins with pipx inject like so: \n $ pipx inject datasette datasette-json-html\ninjected package datasette-json-html into venv datasette\ndone! \u2728 \ud83c\udf1f \u2728\n\n$ datasette plugins\n[\n {\n \"name\": \"datasette-json-html\",\n \"static\": false,\n \"templates\": false,\n \"version\": \"0.6\"\n }\n]", "breadcrumbs": "[\"Installation\", \"Advanced installation options\", \"Using pipx\"]", "references": "[]"} {"id": "installation:installing-plugins", "page": "installation", "ref": "installing-plugins", "title": "Installing plugins", "content": "If you want to install plugins into your local Datasette Docker image you can do\n so using the following recipe. This will install the plugins and then save a\n brand new local image called datasette-with-plugins : \n docker run datasetteproject/datasette \\\n pip install datasette-vega\n\ndocker commit $(docker ps -lq) datasette-with-plugins \n You can now run the new custom image like so: \n docker run -p 8001:8001 -v `pwd`:/mnt \\\n datasette-with-plugins \\\n datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db \n You can confirm that the plugins are installed by visiting\n http://127.0.0.1:8001/-/plugins \n Some plugins such as datasette-ripgrep may need additional system packages. 
You can install these by running apt-get install inside the container: \n docker run datasette-057a0 bash -c '\n apt-get update &&\n apt-get install ripgrep &&\n pip install datasette-ripgrep'\n\ndocker commit $(docker ps -lq) datasette-with-ripgrep", "breadcrumbs": "[\"Installation\", \"Advanced installation options\", \"Using Docker\"]", "references": "[{\"href\": \"http://127.0.0.1:8001/-/plugins\", \"label\": \"http://127.0.0.1:8001/-/plugins\"}, {\"href\": \"https://datasette.io/plugins/datasette-ripgrep\", \"label\": \"datasette-ripgrep\"}]"} {"id": "installation:installation-pipx", "page": "installation", "ref": "installation-pipx", "title": "Using pipx", "content": "pipx is a tool for installing Python software with all of its dependencies in an isolated environment, to ensure that they will not conflict with any other installed Python software. \n If you use Homebrew on macOS you can install pipx like this: \n brew install pipx\npipx ensurepath \n Without Homebrew you can install it like so: \n python3 -m pip install --user pipx\npython3 -m pipx ensurepath \n The pipx ensurepath command configures your shell to ensure it can find commands that have been installed by pipx - generally by making sure ~/.local/bin has been added to your PATH . \n Once pipx is installed you can use it to install Datasette like this: \n pipx install datasette \n Then run datasette --version to confirm that it has been successfully installed.", "breadcrumbs": "[\"Installation\", \"Advanced installation options\"]", "references": "[{\"href\": \"https://pipxproject.github.io/pipx/\", \"label\": \"pipx\"}, {\"href\": \"https://brew.sh/\", \"label\": \"Homebrew\"}]"} {"id": "installation:installation-pip", "page": "installation", "ref": "installation-pip", "title": "Using pip", "content": "Datasette requires Python 3.7 or higher. The Python.org Python For Beginners page has instructions for getting started. \n You can install Datasette and its dependencies using pip : \n pip install datasette \n You can now run Datasette like so: \n datasette", "breadcrumbs": "[\"Installation\", \"Basic installation\"]", "references": "[{\"href\": \"https://www.python.org/about/gettingstarted/\", \"label\": \"Python.org Python For Beginners\"}]"} {"id": "installation:installation-homebrew", "page": "installation", "ref": "installation-homebrew", "title": "Using Homebrew", "content": "If you have a Mac and use Homebrew , you can install Datasette by running this command in your terminal: \n brew install datasette \n This should install the latest version. You can confirm by running: \n datasette --version \n You can upgrade to the latest Homebrew packaged version using: \n brew upgrade datasette \n Once you have installed Datasette you can install plugins using the following: \n datasette install datasette-vega \n If the latest packaged release of Datasette has not yet been made available through Homebrew, you can upgrade your Homebrew installation in-place using: \n datasette install -U datasette", "breadcrumbs": "[\"Installation\", \"Basic installation\"]", "references": "[{\"href\": \"https://brew.sh/\", \"label\": \"Homebrew\"}]"} {"id": "installation:installation-extensions", "page": "installation", "ref": "installation-extensions", "title": "A note about extensions", "content": "SQLite supports extensions, such as SpatiaLite for geospatial operations. 
\n These can be loaded using the --load-extension argument, like so: \n datasette --load-extension=/usr/local/lib/mod_spatialite.dylib \n Some Python installations do not include support for SQLite extensions. If this is the case you will see the following error when you attempt to load an extension: \n \n Your Python installation does not have the ability to load SQLite extensions. \n \n In some cases you may see the following error message instead: \n AttributeError: 'sqlite3.Connection' object has no attribute 'enable_load_extension' \n On macOS the easiest fix for this is to install Datasette using Homebrew: \n brew install datasette \n Use which datasette to confirm that datasette will run that version. The output should look something like this: \n /usr/local/opt/datasette/bin/datasette \n If you get a different location here such as /Library/Frameworks/Python.framework/Versions/3.10/bin/datasette you can run the following command to cause datasette to execute the Homebrew version instead: \n alias datasette=$(echo $(brew --prefix datasette)/bin/datasette) \n You can undo this operation using: \n unalias datasette \n If you need to run SQLite with extension support for other Python code, you can do so by installing Python itself using Homebrew: \n brew install python \n Then executing Python using: \n /usr/local/opt/python@3/libexec/bin/python \n A more convenient way to work with this version of Python may be to use it to create a virtual environment: \n /usr/local/opt/python@3/libexec/bin/python -m venv datasette-venv \n Then activate it like this: \n source datasette-venv/bin/activate \n Now running python and pip will work against a version of Python 3 that includes support for SQLite extensions: \n pip install datasette\nwhich datasette\ndatasette --version", "breadcrumbs": "[\"Installation\"]", "references": "[]"} {"id": "installation:installation-docker", "page": "installation", "ref": "installation-docker", "title": "Using Docker", "content": "A Docker image containing the latest release of Datasette is published to Docker\n Hub here: https://hub.docker.com/r/datasetteproject/datasette/ \n If you have Docker installed (for example with Docker for Mac on OS X) you can download and run this\n image like so: \n docker run -p 8001:8001 -v `pwd`:/mnt \\\n datasetteproject/datasette \\\n datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db \n This will start an instance of Datasette running on your machine's port 8001,\n serving the fixtures.db file in your current directory. \n Now visit http://127.0.0.1:8001/ to access Datasette.
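As an aside, a two-line sketch for checking whether your Python's sqlite3 module supports loadable extensions at all - this relates to the AttributeError described above (the Docker instructions continue below):

import sqlite3

conn = sqlite3.connect(":memory:")
# False here means this Python build cannot load SQLite extensions
print(hasattr(conn, "enable_load_extension"))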
\n (You can download a copy of fixtures.db from\n https://latest.datasette.io/fixtures.db ) \n To upgrade to the most recent release of Datasette, run the following: \n docker pull datasetteproject/datasette", "breadcrumbs": "[\"Installation\", \"Advanced installation options\"]", "references": "[{\"href\": \"https://hub.docker.com/r/datasetteproject/datasette/\", \"label\": \"https://hub.docker.com/r/datasetteproject/datasette/\"}, {\"href\": \"https://www.docker.com/docker-mac\", \"label\": \"Docker for Mac\"}, {\"href\": \"http://127.0.0.1:8001/\", \"label\": \"http://127.0.0.1:8001/\"}, {\"href\": \"https://latest.datasette.io/fixtures.db\", \"label\": \"https://latest.datasette.io/fixtures.db\"}]"} {"id": "installation:installation-datasette-desktop", "page": "installation", "ref": "installation-datasette-desktop", "title": "Datasette Desktop for Mac", "content": "Datasette Desktop is a packaged Mac application which bundles Datasette together with Python and allows you to install and run Datasette directly on your laptop. This is the best option for local installation if you are not comfortable using the command line.", "breadcrumbs": "[\"Installation\", \"Basic installation\"]", "references": "[{\"href\": \"https://datasette.io/desktop\", \"label\": \"Datasette Desktop\"}]"} {"id": "installation:installation-basic", "page": "installation", "ref": "installation-basic", "title": "Basic installation", "content": "", "breadcrumbs": "[\"Installation\"]", "references": "[]"} {"id": "installation:installation-advanced", "page": "installation", "ref": "installation-advanced", "title": "Advanced installation options", "content": "", "breadcrumbs": "[\"Installation\"]", "references": "[]"} {"id": "pages:indexview", "page": "pages", "ref": "indexview", "title": "Top-level index", "content": "The root page of any Datasette installation is an index page that lists all of the currently attached databases. Some examples: \n \n \n fivethirtyeight.datasettes.com \n \n \n global-power-plants.datasettes.com \n \n \n register-of-members-interests.datasettes.com \n \n \n Add /.json to the end of the URL for the JSON version of the underlying data: \n \n \n fivethirtyeight.datasettes.com/.json \n \n \n global-power-plants.datasettes.com/.json \n \n \n register-of-members-interests.datasettes.com/.json", "breadcrumbs": "[\"Pages and API endpoints\"]", "references": "[{\"href\": \"https://fivethirtyeight.datasettes.com/\", \"label\": \"fivethirtyeight.datasettes.com\"}, {\"href\": \"https://global-power-plants.datasettes.com/\", \"label\": \"global-power-plants.datasettes.com\"}, {\"href\": \"https://register-of-members-interests.datasettes.com/\", \"label\": \"register-of-members-interests.datasettes.com\"}, {\"href\": \"https://fivethirtyeight.datasettes.com/.json\", \"label\": \"fivethirtyeight.datasettes.com/.json\"}, {\"href\": \"https://global-power-plants.datasettes.com/.json\", \"label\": \"global-power-plants.datasettes.com/.json\"}, {\"href\": \"https://register-of-members-interests.datasettes.com/.json\", \"label\": \"register-of-members-interests.datasettes.com/.json\"}]"} {"id": "changelog:improved-support-for-spatialite", "page": "changelog", "ref": "improved-support-for-spatialite", "title": "Improved support for SpatiaLite", "content": "The SpatiaLite module \n for SQLite adds robust geospatial features to the database. \n Getting SpatiaLite working can be tricky, especially if you want to use the most\n recent alpha version (with support for K-nearest neighbor). 
\n Datasette now includes extensive documentation on SpatiaLite , and thanks to Ravi Kotecha our GitHub\n repo includes a Dockerfile that can build\n the latest SpatiaLite and configure it for use with Datasette. \n The datasette publish and datasette package commands now accept a new\n --spatialite argument which causes them to install and configure SpatiaLite\n as part of the container they deploy.", "breadcrumbs": "[\"Changelog\", \"0.23 (2018-06-18)\"]", "references": "[{\"href\": \"https://www.gaia-gis.it/fossil/libspatialite/index\", \"label\": \"SpatiaLite module\"}, {\"href\": \"https://github.com/r4vi\", \"label\": \"Ravi Kotecha\"}, {\"href\": \"https://github.com/simonw/datasette/blob/master/Dockerfile\", \"label\": \"Dockerfile\"}]"}
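Finally, a sketch for verifying a SpatiaLite installation from Python once the module is available. The extension path/name is an assumption and varies by platform:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.enable_load_extension(True)
# e.g. /usr/lib/x86_64-linux-gnu/mod_spatialite on Debian/Ubuntu
conn.load_extension("mod_spatialite")
print(conn.execute("select spatialite_version()").fetchone()[0])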