{"id": "spatialite:spatialite-warning", "page": "spatialite", "ref": "spatialite-warning", "title": "Warning", "content": "The SpatiaLite extension adds a large number of additional SQL functions , some of which are not be safe for untrusted users to execute: they may cause the Datasette server to crash. \n You should not expose a SpatiaLite-enabled Datasette instance to the public internet without taking extra measures to secure it against potentially harmful SQL queries. \n The following steps are recommended: \n \n \n Disable arbitrary SQL queries by untrusted users. See Controlling the ability to execute arbitrary SQL for ways to do this. The easiest is to start Datasette with the datasette --setting default_allow_sql off option. \n \n \n Define Canned queries with the SQL queries that use SpatiaLite functions that you want people to be able to execute. \n \n \n The Datasette SpatiaLite tutorial includes detailed instructions for running SpatiaLite safely using these techniques", "breadcrumbs": "[\"SpatiaLite\"]", "references": "[{\"href\": \"https://www.gaia-gis.it/gaia-sins/spatialite-sql-5.0.1.html\", \"label\": \"a large number of additional SQL functions\"}, {\"href\": \"https://datasette.io/tutorials/spatialite\", \"label\": \"Datasette SpatiaLite tutorial\"}]"} {"id": "sql_queries:sql-views", "page": "sql_queries", "ref": "sql-views", "title": "Views", "content": "If you want to bundle some pre-written SQL queries with your Datasette-hosted database you can do so in two ways. The first is to include SQL views in your database - Datasette will then list those views on your database index page. \n The quickest way to create views is with the SQLite command-line interface: \n $ sqlite3 sf-trees.db\nSQLite version 3.19.3 2017-06-27 16:48:08\nEnter \".help\" for usage hints.\nsqlite> CREATE VIEW demo_view AS select qSpecies from Street_Tree_List;\n", "breadcrumbs": "[\"Running SQL queries\"]", "references": "[]"} {"id": "authentication:authentication-root", "page": "authentication", "ref": "authentication-root", "title": "Using the \"root\" actor", "content": "Datasette currently leaves almost all forms of authentication to plugins - datasette-auth-github for example. \n The one exception is the \"root\" account, which you can sign into while using Datasette on your local machine. This provides access to a small number of debugging features. \n To sign in as root, start Datasette using the --root command-line option, like this: \n $ datasette --root\nhttp://127.0.0.1:8001/-/auth-token?token=786fc524e0199d70dc9a581d851f466244e114ca92f33aa3b42a139e9388daa7\nINFO: Started server process [25801]\nINFO: Waiting for application startup.\nINFO: Application startup complete.\nINFO: Uvicorn running on http://127.0.0.1:8001 (Press CTRL+C to quit) \n The URL on the first line includes a one-use token which can be used to sign in as the \"root\" actor in your browser. Click on that link and then visit http://127.0.0.1:8001/-/actor to confirm that you are authenticated as an actor that looks like this: \n {\n \"id\": \"root\"\n}", "breadcrumbs": "[\"Authentication and permissions\", \"Actors\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette-auth-github\", \"label\": \"datasette-auth-github\"}]"} {"id": "settings:setting-publish-secrets", "page": "settings", "ref": "setting-publish-secrets", "title": "Using secrets with datasette publish", "content": "The datasette publish and datasette package commands both generate a secret for you automatically when Datasette is deployed. 
\n This means that every time you deploy a new version of a Datasette project, a new secret will be generated. This will cause signed cookies to become invalid on every fresh deploy. \n You can fix this by creating a secret that will be used for multiple deploys and passing it using the --secret option: \n datasette publish cloudrun mydb.db --service=my-service --secret=cdb19e94283a20f9d42cca5", "breadcrumbs": "[\"Settings\"]", "references": "[]"} {"id": "testing_plugins:testing-plugins-fixtures", "page": "testing_plugins", "ref": "testing-plugins-fixtures", "title": "Using pytest fixtures", "content": "Pytest fixtures can be used to create initial testable objects which can then be used by multiple tests. \n A common pattern for Datasette plugins is to create a fixture which sets up a temporary test database and wraps it in a Datasette instance. \n Here's an example that uses the sqlite-utils library to populate a temporary test database. It also sets the title of that table using a simulated metadata.json configuration: \n from datasette.app import Datasette\nimport pytest\nimport sqlite_utils\n\n\n@pytest.fixture(scope=\"session\")\ndef datasette(tmp_path_factory):\n db_directory = tmp_path_factory.mktemp(\"dbs\")\n db_path = db_directory / \"test.db\"\n db = sqlite_utils.Database(db_path)\n db[\"dogs\"].insert_all(\n [\n {\"id\": 1, \"name\": \"Cleo\", \"age\": 5},\n {\"id\": 2, \"name\": \"Pancakes\", \"age\": 4},\n ],\n pk=\"id\",\n )\n datasette = Datasette(\n [db_path],\n metadata={\n \"databases\": {\n \"test\": {\n \"tables\": {\n \"dogs\": {\"title\": \"Some dogs\"}\n }\n }\n }\n },\n )\n return datasette\n\n\n@pytest.mark.asyncio\nasync def test_example_table_json(datasette):\n response = await datasette.client.get(\n \"/test/dogs.json?_shape=array\"\n )\n assert response.status_code == 200\n assert response.json() == [\n {\"id\": 1, \"name\": \"Cleo\", \"age\": 5},\n {\"id\": 2, \"name\": \"Pancakes\", \"age\": 4},\n ]\n\n\n@pytest.mark.asyncio\nasync def test_example_table_html(datasette):\n response = await datasette.client.get(\"/test/dogs\")\n assert \">Some dogs\" in response.text \n Here the datasette() function defines the fixture, which is than automatically passed to the two test functions based on pytest automatically matching their datasette function parameters. \n The @pytest.fixture(scope=\"session\") line here ensures the fixture is reused for the full pytest execution session. This means that the temporary database file will be created once and reused for each test. \n If you want to create that test database repeatedly for every individual test function, write the fixture function like this instead. You may want to do this if your plugin modifies the database contents in some way: \n @pytest.fixture\ndef datasette(tmp_path_factory):\n # This fixture will be executed repeatedly for every test\n ...", "breadcrumbs": "[\"Testing plugins\"]", "references": "[{\"href\": \"https://docs.pytest.org/en/stable/fixture.html\", \"label\": \"Pytest fixtures\"}, {\"href\": \"https://sqlite-utils.datasette.io/en/stable/python-api.html\", \"label\": \"sqlite-utils library\"}]"} {"id": "installation:installation-pipx", "page": "installation", "ref": "installation-pipx", "title": "Using pipx", "content": "pipx is a tool for installing Python software with all of its dependencies in an isolated environment, to ensure that they will not conflict with any other installed Python software. 
\n If you use Homebrew on macOS you can install pipx like this: \n brew install pipx\npipx ensurepath \n Without Homebrew you can install it like so: \n python3 -m pip install --user pipx\npython3 -m pipx ensurepath \n The pipx ensurepath command configures your shell to ensure it can find commands that have been installed by pipx - generally by making sure ~/.local/bin has been added to your PATH . \n Once pipx is installed you can use it to install Datasette like this: \n pipx install datasette \n Then run datasette --version to confirm that it has been successfully installed.", "breadcrumbs": "[\"Installation\", \"Advanced installation options\"]", "references": "[{\"href\": \"https://pipxproject.github.io/pipx/\", \"label\": \"pipx\"}, {\"href\": \"https://brew.sh/\", \"label\": \"Homebrew\"}]"} {"id": "installation:installation-pip", "page": "installation", "ref": "installation-pip", "title": "Using pip", "content": "Datasette requires Python 3.7 or higher. The Python.org Python For Beginners page has instructions for getting started. \n You can install Datasette and its dependencies using pip : \n pip install datasette \n You can now run Datasette like so: \n datasette", "breadcrumbs": "[\"Installation\", \"Basic installation\"]", "references": "[{\"href\": \"https://www.python.org/about/gettingstarted/\", \"label\": \"Python.org Python For Beginners\"}]"} {"id": "testing_plugins:testing-plugins-pdb", "page": "testing_plugins", "ref": "testing-plugins-pdb", "title": "Using pdb for errors thrown inside Datasette", "content": "If an exception occurs within Datasette itself during a test, the response returned to your plugin will have a response.status_code value of 500. \n You can add pdb=True to the Datasette constructor to drop into a Python debugger session inside your test run instead of getting back a 500 response code. This is equivalent to running the datasette command-line tool with the --pdb option. \n Here's what that looks like in a test function: \n def test_that_opens_the_debugger_or_errors():\n ds = Datasette([db_path], pdb=True)\n response = await ds.client.get(\"/\") \n If you use this pattern you will need to run pytest with the -s option to avoid capturing stdin/stdout in order to interact with the debugger prompt.", "breadcrumbs": "[\"Testing plugins\"]", "references": "[]"} {"id": "contributing:contributing-using-fixtures", "page": "contributing", "ref": "contributing-using-fixtures", "title": "Using fixtures", "content": "To run Datasette itself, type datasette . \n You're going to need at least one SQLite database. A quick way to get started is to use the fixtures database that Datasette uses for its own tests. \n You can create a copy of that database by running this command: \n python tests/fixtures.py fixtures.db \n Now you can run Datasette against the new fixtures database like so: \n datasette fixtures.db \n This will start a server at http://127.0.0.1:8001/ . \n Any changes you make in the datasette/templates or datasette/static folder will be picked up immediately (though you may need to do a force-refresh in your browser to see changes to CSS or JavaScript). \n If you want to change Datasette's Python code you can use the --reload option to cause Datasette to automatically reload any time the underlying code changes: \n datasette --reload fixtures.db \n You can also use the fixtures.py script to recreate the testing version of metadata.json used by the unit tests. 
To do that: \n python tests/fixtures.py fixtures.db fixtures-metadata.json \n Or to output the plugins used by the tests, run this: \n python tests/fixtures.py fixtures.db fixtures-metadata.json fixtures-plugins\nTest tables written to fixtures.db\n- metadata written to fixtures-metadata.json\nWrote plugin: fixtures-plugins/register_output_renderer.py\nWrote plugin: fixtures-plugins/view_name.py\nWrote plugin: fixtures-plugins/my_plugin.py\nWrote plugin: fixtures-plugins/messages_output_renderer.py\nWrote plugin: fixtures-plugins/my_plugin_2.py \n Then run Datasette like this: \n datasette fixtures.db -m fixtures-metadata.json --plugins-dir=fixtures-plugins/", "breadcrumbs": "[\"Contributing\"]", "references": "[]"} {"id": "metadata:metadata-yaml", "page": "metadata", "ref": "metadata-yaml", "title": "Using YAML for metadata", "content": "Datasette accepts YAML as an alternative to JSON for your metadata configuration file. YAML is particularly useful for including multiline HTML and SQL strings. \n Here's an example of a metadata.yml file, re-using an example from Canned queries . \n title: Demonstrating Metadata from YAML\ndescription_html: |-\n

This description includes a long HTML string

\n \nlicense: ODbL\nlicense_url: https://opendatacommons.org/licenses/odbl/\ndatabases:\n fixtures:\n tables:\n no_primary_key:\n hidden: true\n queries:\n neighborhood_search:\n sql: |-\n select neighborhood, facet_cities.name, state\n from facetable join facet_cities on facetable.city_id = facet_cities.id\n where neighborhood like '%' || :text || '%' order by neighborhood;\n title: Search neighborhoods\n description_html: |-\n

This demonstrates basic LIKE search \n The metadata.yml file is passed to Datasette using the same --metadata option: \n datasette fixtures.db --metadata metadata.yml", "breadcrumbs": "[\"Metadata\"]", "references": "[]"} {"id": "installation:installation-homebrew", "page": "installation", "ref": "installation-homebrew", "title": "Using Homebrew", "content": "If you have a Mac and use Homebrew , you can install Datasette by running this command in your terminal: \n brew install datasette \n This should install the latest version. You can confirm by running: \n datasette --version \n You can upgrade to the latest Homebrew packaged version using: \n brew upgrade datasette \n Once you have installed Datasette you can install plugins using the following: \n datasette install datasette-vega \n If the latest packaged release of Datasette has not yet been made available through Homebrew, you can upgrade your Homebrew installation in-place using: \n datasette install -U datasette", "breadcrumbs": "[\"Installation\", \"Basic installation\"]", "references": "[{\"href\": \"https://brew.sh/\", \"label\": \"Homebrew\"}]"} {"id": "installation:installation-docker", "page": "installation", "ref": "installation-docker", "title": "Using Docker", "content": "A Docker image containing the latest release of Datasette is published to Docker\n Hub here: https://hub.docker.com/r/datasetteproject/datasette/ \n If you have Docker installed (for example with Docker for Mac on OS X) you can download and run this\n image like so: \n docker run -p 8001:8001 -v `pwd`:/mnt \\\n datasetteproject/datasette \\\n datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db \n This will start an instance of Datasette running on your machine's port 8001,\n serving the fixtures.db file in your current directory. \n Now visit http://127.0.0.1:8001/ to access Datasette. \n (You can download a copy of fixtures.db from\n https://latest.datasette.io/fixtures.db ) \n To upgrade to the most recent release of Datasette, run the following: \n docker pull datasetteproject/datasette", "breadcrumbs": "[\"Installation\", \"Advanced installation options\"]", "references": "[{\"href\": \"https://hub.docker.com/r/datasetteproject/datasette/\", \"label\": \"https://hub.docker.com/r/datasetteproject/datasette/\"}, {\"href\": \"https://www.docker.com/docker-mac\", \"label\": \"Docker for Mac\"}, {\"href\": \"http://127.0.0.1:8001/\", \"label\": \"http://127.0.0.1:8001/\"}, {\"href\": \"https://latest.datasette.io/fixtures.db\", \"label\": \"https://latest.datasette.io/fixtures.db\"}]"} {"id": "getting_started:getting-started-your-computer", "page": "getting_started", "ref": "getting-started-your-computer", "title": "Using Datasette on your own computer", "content": "First, follow the Installation instructions. Now you can run Datasette against a SQLite file on your computer using the following command: \n datasette path/to/database.db \n This will start a web server on port 8001 - visit http://localhost:8001/ \n to access the web interface. \n Add -o to open your browser automatically once Datasette has started: \n datasette path/to/database.db -o \n Use Chrome on OS X? You can run datasette against your browser history\n like so: \n datasette ~/Library/Application\\ Support/Google/Chrome/Default/History --nolock \n The --nolock option ignores any file locks. This is safe as Datasette will open the file in read-only mode. 
\n Now visiting http://localhost:8001/History/downloads will show you a web\n interface to browse your downloads data: \n \n \n \n http://localhost:8001/History/downloads.json will return that data as\n JSON: \n {\n \"database\": \"History\",\n \"columns\": [\n \"id\",\n \"current_path\",\n \"target_path\",\n \"start_time\",\n \"received_bytes\",\n \"total_bytes\",\n ...\n ],\n \"rows\": [\n [\n 1,\n \"/Users/simonw/Downloads/DropboxInstaller.dmg\",\n \"/Users/simonw/Downloads/DropboxInstaller.dmg\",\n 13097290269022132,\n 626688,\n 0,\n ...\n ]\n ]\n} \n http://localhost:8001/History/downloads.json?_shape=objects will return that data as\n JSON in a more convenient format: \n {\n ...\n \"rows\": [\n {\n \"start_time\": 13097290269022132,\n \"interrupt_reason\": 0,\n \"hash\": \"\",\n \"id\": 1,\n \"site_url\": \"\",\n \"referrer\": \"https://www.dropbox.com/downloading?src=index\",\n ...\n }\n ]\n}", "breadcrumbs": "[\"Getting started\"]", "references": "[{\"href\": \"http://localhost:8001/\", \"label\": \"http://localhost:8001/\"}, {\"href\": \"http://localhost:8001/History/downloads\", \"label\": \"http://localhost:8001/History/downloads\"}, {\"href\": \"http://localhost:8001/History/downloads.json\", \"label\": \"http://localhost:8001/History/downloads.json\"}, {\"href\": \"http://localhost:8001/History/downloads.json?_shape=objects\", \"label\": \"http://localhost:8001/History/downloads.json?_shape=objects\"}]"} {"id": "settings:using-setting", "page": "settings", "ref": "using-setting", "title": "Using --setting", "content": "Datasette supports a number of settings. These can be set using the --setting name value option to datasette serve . \n You can set multiple settings at once like this: \n datasette mydatabase.db \\\n --setting default_page_size 50 \\\n --setting sql_time_limit_ms 3500 \\\n --setting max_returned_rows 2000", "breadcrumbs": "[\"Settings\"]", "references": "[]"} {"id": "performance:performance-inspect", "page": "performance", "ref": "performance-inspect", "title": "Using \"datasette inspect\"", "content": "Counting the rows in a table can be a very expensive operation on larger databases. In immutable mode Datasette performs this count only once and caches the results, but this can still cause server startup time to increase by several seconds or more. \n If you know that a database is never going to change you can precalculate the table row counts once and store then in a JSON file, then use that file when you later start the server. \n To create a JSON file containing the calculated row counts for a database, use the following: \n datasette inspect data.db --inspect-file=counts.json \n Then later you can start Datasette against the counts.json file and use it to skip the row counting step and speed up server startup: \n datasette -i data.db --inspect-file=counts.json \n You need to use the -i immutable mode against the database file here or the counts from the JSON file will be ignored. 
\n You will rarely need to use this optimization in every-day use, but several of the datasette publish commands described in Publishing data use this optimization for better performance when deploying a database file to a hosting provider.", "breadcrumbs": "[\"Performance and caching\"]", "references": "[]"} {"id": "installation:upgrading-packages-using-pipx", "page": "installation", "ref": "upgrading-packages-using-pipx", "title": "Upgrading packages using pipx", "content": "You can upgrade your pipx installation to the latest release of Datasette using pipx upgrade datasette : \n $ pipx upgrade datasette\nupgraded package datasette from 0.39 to 0.40 (location: /Users/simon/.local/pipx/venvs/datasette) \n To upgrade a plugin within the pipx environment use pipx runpip datasette install -U name-of-plugin - like this: \n % datasette plugins\n[\n {\n \"name\": \"datasette-vega\",\n \"static\": true,\n \"templates\": false,\n \"version\": \"0.6\"\n }\n]\n\n$ pipx runpip datasette install -U datasette-vega\nCollecting datasette-vega\nDownloading datasette_vega-0.6.2-py3-none-any.whl (1.8 MB)\n |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1.8 MB 2.0 MB/s\n...\nInstalling collected packages: datasette-vega\nAttempting uninstall: datasette-vega\n Found existing installation: datasette-vega 0.6\n Uninstalling datasette-vega-0.6:\n Successfully uninstalled datasette-vega-0.6\nSuccessfully installed datasette-vega-0.6.2\n\n$ datasette plugins\n[\n {\n \"name\": \"datasette-vega\",\n \"static\": true,\n \"templates\": false,\n \"version\": \"0.6.2\"\n }\n]", "breadcrumbs": "[\"Installation\", \"Advanced installation options\", \"Using pipx\"]", "references": "[]"} {"id": "contributing:contributing-upgrading-codemirror", "page": "contributing", "ref": "contributing-upgrading-codemirror", "title": "Upgrading CodeMirror", "content": "Datasette bundles CodeMirror for the SQL editing interface, e.g. on this page . 
Here are the steps for upgrading to a new version of CodeMirror: \n \n \n Download and extract latest CodeMirror zip file from https://codemirror.net/codemirror.zip \n \n \n Rename lib/codemirror.js to codemirror-5.57.0.js (using latest version number) \n \n \n Rename lib/codemirror.css to codemirror-5.57.0.css \n \n \n Rename mode/sql/sql.js to codemirror-5.57.0-sql.js \n \n \n Edit both JavaScript files to make the top license comment a /* */ block instead of multiple // lines \n \n \n Minify the JavaScript files like this: \n npx uglify-js codemirror-5.57.0.js -o codemirror-5.57.0.min.js --comments '/LICENSE/'\nnpx uglify-js codemirror-5.57.0-sql.js -o codemirror-5.57.0-sql.min.js --comments '/LICENSE/' \n \n \n Check that the LICENSE comment did indeed survive minification \n \n \n Minify the CSS file like this: \n npx clean-css-cli codemirror-5.57.0.css -o codemirror-5.57.0.min.css \n \n \n Edit the _codemirror.html template to reference the new files \n \n \n git rm the old files, git add the new files", "breadcrumbs": "[\"Contributing\"]", "references": "[{\"href\": \"https://codemirror.net/\", \"label\": \"CodeMirror\"}, {\"href\": \"https://latest.datasette.io/fixtures\", \"label\": \"this page\"}, {\"href\": \"https://codemirror.net/codemirror.zip\", \"label\": \"https://codemirror.net/codemirror.zip\"}]"} {"id": "csv_export:csv-export-url-parameters", "page": "csv_export", "ref": "csv-export-url-parameters", "title": "URL parameters", "content": "The following options can be used to customize the CSVs returned by Datasette. \n \n \n ?_header=off \n \n This removes the first row of the CSV file specifying the headings - only the row data will be returned. \n \n \n \n ?_stream=on \n \n Stream all matching records, not just the first page of results. See below. \n \n \n \n ?_dl=on \n \n Causes Datasette to return a content-disposition: attachment; filename=\"filename.csv\" header.", "breadcrumbs": "[\"CSV export\"]", "references": "[]"} {"id": "changelog:url-building", "page": "changelog", "ref": "url-building", "title": "URL building", "content": "The new datasette.urls family of methods can be used to generate URLs to key pages within the Datasette interface, both within custom templates and Datasette plugins. See Building URLs within plugins for more details. ( #904 )", "breadcrumbs": "[\"Changelog\", \"0.51 (2020-10-31)\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/issues/904\", \"label\": \"#904\"}]"} {"id": "getting_started:getting-started-glitch", "page": "getting_started", "ref": "getting-started-glitch", "title": "Try Datasette without installing anything using Glitch", "content": "Glitch is a free online tool for building web apps directly from your web browser. You can use Glitch to try out Datasette without needing to install any software on your own computer. \n Here's a demo project on Glitch which you can use as the basis for your own experiments: \n glitch.com/~datasette-csvs \n Glitch allows you to \"remix\" any project to create your own copy and start editing it in your browser. You can remix the datasette-csvs project by clicking this button: \n \n Find a CSV file and drag it onto the Glitch file explorer panel - datasette-csvs will automatically convert it to a SQLite database (using sqlite-utils ) and allow you to start exploring it using Datasette. 
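\n If you want to run the same kind of conversion on your own computer, here is a minimal sketch using the sqlite-utils Python library - the example.csv filename and the example table name are hypothetical placeholders: \n import csv\nimport sqlite_utils\n\n# Roughly the conversion datasette-csvs performs for each CSV file you drop in:\ndb = sqlite_utils.Database(\"data.db\")\nwith open(\"example.csv\", newline=\"\") as f:\n    db[\"example\"].insert_all(csv.DictReader(f))\n# The resulting data.db file can then be explored by running: datasette data.db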
\n If your CSV file has a latitude and longitude column you can visualize it on a map by uncommenting the datasette-cluster-map line in the requirements.txt file using the Glitch file editor. \n Need some data? Try this Public Art Data for the city of Seattle - hit \"Export\" and select \"CSV\" to download it as a CSV file. \n For more on how this works, see Running Datasette on Glitch .", "breadcrumbs": "[\"Getting started\"]", "references": "[{\"href\": \"https://glitch.com/\", \"label\": \"Glitch\"}, {\"href\": \"https://glitch.com/~datasette-csvs\", \"label\": \"glitch.com/~datasette-csvs\"}, {\"href\": \"https://glitch.com/edit/#!/remix/datasette-csvs\", \"label\": null}, {\"href\": \"https://github.com/simonw/sqlite-utils\", \"label\": \"sqlite-utils\"}, {\"href\": \"https://data.seattle.gov/Community/Public-Art-Data/j7sn-tdzk\", \"label\": \"Public Art Data\"}, {\"href\": \"https://simonwillison.net/2019/Apr/23/datasette-glitch/\", \"label\": \"Running Datasette on Glitch\"}]"} {"id": "internals:internals-tracer-trace-child-tasks", "page": "internals", "ref": "internals-tracer-trace-child-tasks", "title": "Tracing child tasks", "content": "If your code uses a mechanism such as asyncio.gather() to execute code in additional tasks you may find that some of the traces are missing from the display. \n You can use the trace_child_tasks() context manager to ensure these child tasks are correctly handled. \n from datasette import tracer\n\nwith tracer.trace_child_tasks():\n results = await asyncio.gather(\n # ... async tasks here\n ) \n This example uses the register_routes() plugin hook to add a page at /parallel-queries which executes two SQL queries in parallel using asyncio.gather() and returns their results. \n from datasette import hookimpl\nfrom datasette import tracer\n\n\n@hookimpl\ndef register_routes():\n async def parallel_queries(datasette):\n db = datasette.get_database()\n with tracer.trace_child_tasks():\n one, two = await asyncio.gather(\n db.execute(\"select 1\"),\n db.execute(\"select 2\"),\n )\n return Response.json(\n {\n \"one\": one.single_value(),\n \"two\": two.single_value(),\n }\n )\n\n return [\n (r\"/parallel-queries$\", parallel_queries),\n ] \n Adding ?_trace=1 will show that the trace covers both of those child tasks.", "breadcrumbs": "[\"Internals for plugins\", \"datasette.tracer\"]", "references": "[]"} {"id": "pages:indexview", "page": "pages", "ref": "indexview", "title": "Top-level index", "content": "The root page of any Datasette installation is an index page that lists all of the currently attached databases. 
Some examples: \n \n \n fivethirtyeight.datasettes.com \n \n \n global-power-plants.datasettes.com \n \n \n register-of-members-interests.datasettes.com \n \n \n Add /.json to the end of the URL for the JSON version of the underlying data: \n \n \n fivethirtyeight.datasettes.com/.json \n \n \n global-power-plants.datasettes.com/.json \n \n \n register-of-members-interests.datasettes.com/.json", "breadcrumbs": "[\"Pages and API endpoints\"]", "references": "[{\"href\": \"https://fivethirtyeight.datasettes.com/\", \"label\": \"fivethirtyeight.datasettes.com\"}, {\"href\": \"https://global-power-plants.datasettes.com/\", \"label\": \"global-power-plants.datasettes.com\"}, {\"href\": \"https://register-of-members-interests.datasettes.com/\", \"label\": \"register-of-members-interests.datasettes.com\"}, {\"href\": \"https://fivethirtyeight.datasettes.com/.json\", \"label\": \"fivethirtyeight.datasettes.com/.json\"}, {\"href\": \"https://global-power-plants.datasettes.com/.json\", \"label\": \"global-power-plants.datasettes.com/.json\"}, {\"href\": \"https://register-of-members-interests.datasettes.com/.json\", \"label\": \"register-of-members-interests.datasettes.com/.json\"}]"} {"id": "internals:internals-tilde-encoding", "page": "internals", "ref": "internals-tilde-encoding", "title": "Tilde encoding", "content": "Datasette uses a custom encoding scheme in some places, called tilde encoding . This is primarily used for table names and row primary keys, to avoid any confusion between / characters in those values and the Datasette URLs that reference them. \n Tilde encoding uses the same algorithm as URL percent-encoding , but with the ~ tilde character used in place of % . \n Any character other than ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz0123456789_- will be replaced by the numeric equivalent preceded by a tilde. For example: \n \n \n / becomes ~2F \n \n \n . becomes ~2E \n \n \n % becomes ~25 \n \n \n ~ becomes ~7E \n \n \n Space becomes + \n \n \n polls/2022.primary becomes polls~2F2022~2Eprimary \n \n \n Note that the space character is a special case: it will be replaced with a + symbol. \n \n \n \n datasette.utils. tilde_encode s : str str \n \n Returns tilde-encoded string - for example /foo/bar -> ~2Ffoo~2Fbar \n \n \n \n \n \n datasette.utils. tilde_decode s : str str \n \n Decodes a tilde-encoded string, so ~2Ffoo~2Fbar -> /foo/bar", "breadcrumbs": "[\"Internals for plugins\", \"The datasette.utils module\"]", "references": "[{\"href\": \"https://developer.mozilla.org/en-US/docs/Glossary/percent-encoding\", \"label\": \"URL percent-encoding\"}]"} {"id": "full_text_search:full-text-search-table-view-api", "page": "full_text_search", "ref": "full-text-search-table-view-api", "title": "The table page and table view API", "content": "Table views that support full-text search can be queried using the ?_search=TERMS query string parameter. This will run the search against content from all of the columns that have been included in the index. \n Try this example: fara.datasettes.com/fara/FARA_All_ShortForms?_search=manafort \n SQLite full-text search supports wildcards. This means you can easily implement prefix auto-complete by including an asterisk at the end of the search term - for example: \n /dbname/tablename/?_search=rob* \n This will return all records containing at least one word that starts with the letters rob . 
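\n Here is a minimal sketch of calling that same search API from Python, re-using the example instance above - it assumes the httpx library is installed: \n import httpx\n\n# Fetch matching rows as a JSON array using the documented\n# ?_search= and ?_shape=array arguments:\nresponse = httpx.get(\n    \"https://fara.datasettes.com/fara/FARA_All_ShortForms.json\",\n    params={\"_search\": \"manafort\", \"_shape\": \"array\"},\n)\nrows = response.json()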
\n You can also run searches against just the content of a specific named column by using _search_COLNAME=TERMS - for example, this would search for just rows where the name column in the FTS index mentions Sarah : \n /dbname/tablename/?_search_name=Sarah", "breadcrumbs": "[\"Full-text search\"]", "references": "[{\"href\": \"https://fara.datasettes.com/fara/FARA_All_ShortForms?_search=manafort\", \"label\": \"fara.datasettes.com/fara/FARA_All_ShortForms?_search=manafort\"}]"} {"id": "changelog:the-road-to-datasette-1-0", "page": "changelog", "ref": "the-road-to-datasette-1-0", "title": "The road to Datasette 1.0", "content": "I've assembled a milestone for Datasette 1.0 . The focus of the 1.0 release will be the following: \n \n \n Signify confidence in the quality/stability of Datasette \n \n \n Give plugin authors confidence that their plugins will work for the whole 1.x release cycle \n \n \n Provide the same confidence to developers building against Datasette JSON APIs \n \n \n If you have thoughts about what you would like to see for Datasette 1.0 you can join the conversation on issue #519 .", "breadcrumbs": "[\"Changelog\", \"0.44 (2020-06-11)\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/milestone/7\", \"label\": \"milestone for Datasette 1.0\"}, {\"href\": \"https://github.com/simonw/datasette/issues/519\", \"label\": \"the conversation on issue #519\"}]"} {"id": "authentication:permissionsdebugview", "page": "authentication", "ref": "permissionsdebugview", "title": "The permissions debug tool", "content": "The debug tool at /-/permissions is only available to the authenticated root user (or any actor granted the permissions-debug action according to a plugin). \n It shows the thirty most recent permission checks that have been carried out by the Datasette instance. \n This is designed to help administrators and plugin authors understand exactly how permission checks are being carried out, in order to effectively configure Datasette's permission system.", "breadcrumbs": "[\"Authentication and permissions\"]", "references": "[]"} {"id": "authentication:authentication-ds-actor", "page": "authentication", "ref": "authentication-ds-actor", "title": "The ds_actor cookie", "content": "Datasette includes a default authentication plugin which looks for a signed ds_actor cookie containing a JSON actor dictionary. This is how the root actor mechanism works. \n Authentication plugins can set signed ds_actor cookies themselves like so: \n response = Response.redirect(\"/\")\nresponse.set_cookie(\n \"ds_actor\",\n datasette.sign({\"a\": {\"id\": \"cleopaws\"}}, \"actor\"),\n) \n Note that you need to pass \"actor\" as the namespace to .sign(value, namespace=\"default\") . \n The shape of data encoded in the cookie is as follows: \n {\n \"a\": {... actor ...}\n}", "breadcrumbs": "[\"Authentication and permissions\"]", "references": "[]"} {"id": "internals:internals-utils", "page": "internals", "ref": "internals-utils", "title": "The datasette.utils module", "content": "The datasette.utils module contains various utility functions used by Datasette. As a general rule you should consider anything in this module to be unstable - functions and classes here could change without warning or be removed entirely between Datasette releases, without being mentioned in the release notes. \n The exception to this rule is anythang that is documented here. 
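\n For example, the tilde encoding helpers documented above are part of that supported surface - a minimal sketch: \n from datasette.utils import tilde_encode, tilde_decode\n\n# Documented, supported helpers from datasette.utils:\nassert tilde_encode(\"/foo/bar\") == \"~2Ffoo~2Fbar\"\nassert tilde_decode(\"~2Ffoo~2Fbar\") == \"/foo/bar\"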
If you find a need for an undocumented utility function in your own work, consider opening an issue requesting that the function you are using be upgraded to documented and supported status.", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/issues/new\", \"label\": \"opening an issue\"}]"} {"id": "changelog:the-internal-database", "page": "changelog", "ref": "the-internal-database", "title": "The _internal database", "content": "As part of ongoing work to help Datasette handle much larger numbers of connected databases and tables (see Datasette Library ) Datasette now maintains an in-memory SQLite database with details of all of the attached databases, tables, columns, indexes and foreign keys. ( #1150 ) \n This will support future improvements such as a searchable, paginated homepage of all available tables. \n You can explore an example of this database by signing in as root to the latest.datasette.io demo instance and then navigating to latest.datasette.io/_internal . \n Plugins can use these tables to introspect attached data in an efficient way. Plugin authors should note that this is not yet considered a stable interface, so any plugins that use this may need to make changes prior to Datasette 1.0 if the _internal table schemas change.", "breadcrumbs": "[\"Changelog\", \"0.54 (2021-01-25)\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/issues/417\", \"label\": \"Datasette Library\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1150\", \"label\": \"#1150\"}, {\"href\": \"https://latest.datasette.io/login-as-root\", \"label\": \"signing in as root\"}, {\"href\": \"https://latest.datasette.io/_internal\", \"label\": \"latest.datasette.io/_internal\"}]"} {"id": "internals:internals-internal", "page": "internals", "ref": "internals-internal", "title": "The _internal database", "content": "This API should be considered unstable - the structure of these tables may change prior to the release of Datasette 1.0. \n \n Datasette maintains an in-memory SQLite database with details of the the databases, tables and columns for all of the attached databases. \n By default all actors are denied access to the view-database permission for the _internal database, so the database is not visible to anyone unless they sign in as root . \n Plugins can access this database by calling db = datasette.get_database(\"_internal\") and then executing queries using the Database API . \n You can explore an example of this database by signing in as root to the latest.datasette.io demo instance and then navigating to latest.datasette.io/_internal .", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[{\"href\": \"https://latest.datasette.io/login-as-root\", \"label\": \"signing in as root\"}, {\"href\": \"https://latest.datasette.io/_internal\", \"label\": \"latest.datasette.io/_internal\"}]"} {"id": "internals:internals-multiparams", "page": "internals", "ref": "internals-multiparams", "title": "The MultiParams class", "content": "request.args is a MultiParams object - a dictionary-like object which provides access to query string parameters that may have multiple values. \n Consider the query string ?foo=1&foo=2&bar=3 - with two values for foo and one value for bar . \n \n \n request.args[key] - string \n \n Returns the first value for that key, or raises a KeyError if the key is missing. For the above example request.args[\"foo\"] would return \"1\" . 
\n \n \n \n request.args.get(key) - string or None \n \n Returns the first value for that key, or None if the key is missing. Pass a second argument to specify a different default, e.g. q = request.args.get(\"q\", \"\") . \n \n \n \n request.args.getlist(key) - list of strings \n \n Returns the list of strings for that key. request.args.getlist(\"foo\") would return [\"1\", \"2\"] in the above example. request.args.getlist(\"bar\") would return [\"3\"] . If the key is missing an empty list will be returned. \n \n \n \n request.args.keys() - list of strings \n \n Returns the list of available keys - for the example this would be [\"foo\", \"bar\"] . \n \n \n \n key in request.args - True or False \n \n You can use if key in request.args to check if a key is present. \n \n \n \n for key in request.args - iterator \n \n This lets you loop through every available key. \n \n \n \n len(request.args) - integer \n \n Returns the number of keys.", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[]"} {"id": "ecosystem:ecosystem", "page": "ecosystem", "ref": "ecosystem", "title": "The Datasette Ecosystem", "content": "Datasette sits at the center of a growing ecosystem of open source tools aimed at making it as easy as possible to gather, analyze and publish interesting data. \n These tools are divided into two main groups: tools for building SQLite databases (for use with Datasette) and plugins that extend Datasette's functionality. \n The Datasette project website includes a directory of plugins and a directory of tools: \n \n \n Plugins directory on datasette.io \n \n \n Tools directory on datasette.io", "breadcrumbs": "[]", "references": "[{\"href\": \"https://datasette.io/\", \"label\": \"Datasette project website\"}, {\"href\": \"https://datasette.io/plugins\", \"label\": \"Plugins directory on datasette.io\"}, {\"href\": \"https://datasette.io/tools\", \"label\": \"Tools directory on datasette.io\"}]"} {"id": "authentication:logoutview", "page": "authentication", "ref": "logoutview", "title": "The /-/logout page", "content": "The page at /-/logout provides the ability to log out of a ds_actor cookie authentication session.", "breadcrumbs": "[\"Authentication and permissions\", \"The ds_actor cookie\"]", "references": "[]"} {"id": "authentication:allowdebugview", "page": "authentication", "ref": "allowdebugview", "title": "The /-/allow-debug tool", "content": "The /-/allow-debug tool lets you try out different \"action\" blocks against different \"actor\" JSON objects. You can try that out here: https://latest.datasette.io/-/allow-debug", "breadcrumbs": "[\"Authentication and permissions\", \"Permissions\"]", "references": "[{\"href\": \"https://latest.datasette.io/-/allow-debug\", \"label\": \"https://latest.datasette.io/-/allow-debug\"}]"} {"id": "testing_plugins:id1", "page": "testing_plugins", "ref": "id1", "title": "Testing plugins", "content": "We recommend using pytest to write automated tests for your plugins. 
\n If you use the template described in Starting an installable plugin using cookiecutter your plugin will start with a single test in your tests/ directory that looks like this: \n from datasette.app import Datasette\nimport pytest\n\n\n@pytest.mark.asyncio\nasync def test_plugin_is_installed():\n datasette = Datasette(memory=True)\n response = await datasette.client.get(\"/-/plugins.json\")\n assert response.status_code == 200\n installed_plugins = {p[\"name\"] for p in response.json()}\n assert (\n \"datasette-plugin-template-demo\"\n in installed_plugins\n ) \n This test uses the datasette.client object to exercise a test instance of Datasette. datasette.client is a wrapper around the HTTPX Python library which can imitate HTTP requests using ASGI. This is the recommended way to write tests against a Datasette instance. \n This test also uses the pytest-asyncio package to add support for async def test functions running under pytest. \n You can install these packages like so: \n pip install pytest pytest-asyncio \n If you are building an installable package you can add them as test dependencies to your setup.py module like this: \n setup(\n name=\"datasette-my-plugin\",\n # ...\n extras_require={\"test\": [\"pytest\", \"pytest-asyncio\"]},\n tests_require=[\"datasette-my-plugin[test]\"],\n) \n You can then install the test dependencies like so: \n pip install -e '.[test]' \n Then run the tests using pytest like so: \n pytest", "breadcrumbs": "[]", "references": "[{\"href\": \"https://docs.pytest.org/\", \"label\": \"pytest\"}, {\"href\": \"https://www.python-httpx.org/\", \"label\": \"HTTPX\"}, {\"href\": \"https://pypi.org/project/pytest-asyncio/\", \"label\": \"pytest-asyncio\"}]"} {"id": "testing_plugins:testing-plugins-pytest-httpx", "page": "testing_plugins", "ref": "testing-plugins-pytest-httpx", "title": "Testing outbound HTTP calls with pytest-httpx", "content": "If your plugin makes outbound HTTP calls - for example datasette-auth-github or datasette-import-table - you may need to mock those HTTP requests in your tests. \n The pytest-httpx package is a useful library for mocking calls. It can be tricky to use with Datasette though since it mocks all HTTPX requests, and Datasette's own testing mechanism uses HTTPX internally. \n To avoid breaking your tests, you can return [\"localhost\"] from the non_mocked_hosts() fixture. \n As an example, here's a very simple plugin which executes an HTTP response and returns the resulting content: \n from datasette import hookimpl\nfrom datasette.utils.asgi import Response\nimport httpx\n\n\n@hookimpl\ndef register_routes():\n return [\n (r\"^/-/fetch-url$\", fetch_url),\n ]\n\n\nasync def fetch_url(datasette, request):\n if request.method == \"GET\":\n return Response.html(\n \"\"\"\n

<form action=\"/-/fetch-url\" method=\"post\">\n <input type=\"hidden\" name=\"csrftoken\" value=\"{}\">\n <input name=\"url\"><input type=\"submit\">\n </form>
\"\"\".format(\n request.scope[\"csrftoken\"]()\n )\n )\n vars = await request.post_vars()\n url = vars[\"url\"]\n return Response.text(httpx.get(url).text) \n Here's a test for that plugin that mocks the HTTPX outbound request: \n from datasette.app import Datasette\nimport pytest\n\n\n@pytest.fixture\ndef non_mocked_hosts():\n # This ensures httpx-mock will not affect Datasette's own\n # httpx calls made in the tests by datasette.client:\n return [\"localhost\"]\n\n\nasync def test_outbound_http_call(httpx_mock):\n httpx_mock.add_response(\n url=\"https://www.example.com/\",\n text=\"Hello world\",\n )\n datasette = Datasette([], memory=True)\n response = await datasette.client.post(\n \"/-/fetch-url\",\n data={\"url\": \"https://www.example.com/\"},\n )\n assert response.text == \"Hello world\"\n\n outbound_request = httpx_mock.get_request()\n assert (\n outbound_request.url == \"https://www.example.com/\"\n )", "breadcrumbs": "[\"Testing plugins\"]", "references": "[{\"href\": \"https://pypi.org/project/pytest-httpx/\", \"label\": \"pytest-httpx\"}]"} {"id": "json_api:id2", "page": "json_api", "ref": "id2", "title": "Table arguments", "content": "The Datasette table view takes a number of special query string arguments.", "breadcrumbs": "[\"JSON API\"]", "references": "[]"} {"id": "pages:tableview", "page": "pages", "ref": "tableview", "title": "Table", "content": "The table page is the heart of Datasette: it allows users to interactively explore the contents of a database table, including sorting, filtering, Full-text search and applying Facets . \n The HTML interface is worth spending some time exploring. As with other pages, you can return the JSON data by appending .json to the URL path, before any ? query string arguments. \n The query string arguments are described in more detail here: Table arguments \n You can also use the table page to interactively construct a SQL query - by applying different filters and a sort order for example - and then click the \"View and edit SQL\" link to see the SQL query that was used for the page and edit and re-submit it. \n Some examples: \n \n \n ../items lists all of the line-items registered by UK MPs as potential conflicts of interest. It demonstrates Datasette's support for Full-text search . \n \n \n ../antiquities-act%2Factions_under_antiquities_act is an interface for exploring the \"actions under the antiquities act\" data table published by FiveThirtyEight. \n \n \n ../global-power-plants?country_long=United+Kingdom&primary_fuel=Gas is a filtered table page showing every Gas power plant in the United Kingdom. 
It includes some default facets (configured using its metadata.json ) and uses the datasette-cluster-map plugin to show a map of the results.", "breadcrumbs": "[\"Pages and API endpoints\"]", "references": "[{\"href\": \"https://register-of-members-interests.datasettes.com/regmem/items\", \"label\": \"../items\"}, {\"href\": \"https://fivethirtyeight.datasettes.com/fivethirtyeight/antiquities-act%2Factions_under_antiquities_act\", \"label\": \"../antiquities-act%2Factions_under_antiquities_act\"}, {\"href\": \"https://global-power-plants.datasettes.com/global-power-plants/global-power-plants?_facet=primary_fuel&_facet=owner&_facet=country_long&country_long__exact=United+Kingdom&primary_fuel=Gas\", \"label\": \"../global-power-plants?country_long=United+Kingdom&primary_fuel=Gas\"}, {\"href\": \"https://global-power-plants.datasettes.com/-/metadata\", \"label\": \"its metadata.json\"}, {\"href\": \"https://github.com/simonw/datasette-cluster-map\", \"label\": \"datasette-cluster-map\"}]"} {"id": "changelog:v0-28-databases-that-change", "page": "changelog", "ref": "v0-28-databases-that-change", "title": "Supporting databases that change", "content": "From the beginning of the project, Datasette has been designed with read-only databases in mind. If a database is guaranteed not to change it opens up all kinds of interesting opportunities - from taking advantage of SQLite immutable mode and HTTP caching to bundling static copies of the database directly in a Docker container. The interesting ideas in Datasette explores this idea in detail. \n As my goals for the project have developed, I realized that read-only databases are no longer the right default. SQLite actually supports concurrent access very well provided only one thread attempts to write to a database at a time, and I keep encountering sensible use-cases for running Datasette on top of a database that is processing inserts and updates. \n So, as-of version 0.28 Datasette no longer assumes that a database file will not change. It is now safe to point Datasette at a SQLite database which is being updated by another process. \n Making this change was a lot of work - see tracking tickets #418 , #419 and #420 . It required new thinking around how Datasette should calculate table counts (an expensive operation against a large, changing database) and also meant reconsidering the \"content hash\" URLs Datasette has used in the past to optimize the performance of HTTP caches. \n Datasette can still run against immutable files and gains numerous performance benefits from doing so, but this is no longer the default behaviour. 
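\n For example, a database file that you know will never change can still be opened in immutable mode like this: \n datasette -i fixtures.db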
Take a look at the new Performance and caching documentation section for details on how to make the most of Datasette against data that you know will be staying read-only and immutable.", "breadcrumbs": "[\"Changelog\", \"0.28 (2019-05-19)\"]", "references": "[{\"href\": \"https://simonwillison.net/2018/Oct/4/datasette-ideas/\", \"label\": \"The interesting ideas in Datasette\"}, {\"href\": \"https://github.com/simonw/datasette/issues/418\", \"label\": \"#418\"}, {\"href\": \"https://github.com/simonw/datasette/issues/419\", \"label\": \"#419\"}, {\"href\": \"https://github.com/simonw/datasette/issues/420\", \"label\": \"#420\"}]"} {"id": "facets:suggested-facets", "page": "facets", "ref": "suggested-facets", "title": "Suggested facets", "content": "Datasette's table UI will suggest facets for the user to apply, based on the following criteria: \n For the currently filtered data are there any columns which, if applied as a facet... \n \n \n Will return 30 or less unique options \n \n \n Will return more than one unique option \n \n \n Will return less unique options than the total number of filtered rows \n \n \n And the query used to evaluate this criteria can be completed in under 50ms \n \n \n That last point is particularly important: Datasette runs a query for every column that is displayed on a page, which could get expensive - so to avoid slow load times it sets a time limit of just 50ms for each of those queries.\n This means suggested facets are unlikely to appear for tables with millions of records in them.", "breadcrumbs": "[\"Facets\"]", "references": "[]"} {"id": "csv_export:streaming-all-records", "page": "csv_export", "ref": "streaming-all-records", "title": "Streaming all records", "content": "The stream all rows option is designed to be as efficient as possible -\n under the hood it takes advantage of Python 3 asyncio capabilities and\n Datasette's efficient pagination to stream back the full\n CSV file. \n Since databases can get pretty large, by default this option is capped at 100MB -\n if a table returns more than 100MB of data the last line of the CSV will be a\n truncation error message. \n You can increase or remove this limit using the max_csv_mb config\n setting. You can also disable the CSV export feature entirely using\n allow_csv_stream .", "breadcrumbs": "[\"CSV export\"]", "references": "[]"} {"id": "writing_plugins:writing-plugins-static-assets", "page": "writing_plugins", "ref": "writing-plugins-static-assets", "title": "Static assets", "content": "If your plugin has a static/ directory, Datasette will automatically configure itself to serve those static assets from the following path: \n /-/static-plugins/NAME_OF_PLUGIN_PACKAGE/yourfile.js \n Use the datasette.urls.static_plugins(plugin_name, path) method to generate URLs to that asset that take the base_url setting into account, see datasette.urls . \n To bundle the static assets for a plugin in the package that you publish to PyPI, add the following to the plugin's setup.py : \n package_data = (\n {\n \"datasette_plugin_name\": [\n \"static/plugin.js\",\n ],\n },\n) \n Where datasette_plugin_name is the name of the plugin package (note that it uses underscores, not hyphens) and static/plugin.js is the path within that package to the static file. 
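\n As a sketch of how a plugin might then reference that bundled asset (re-using the placeholder names above), the extra_js_urls() plugin hook can return a URL generated by datasette.urls.static_plugins() : \n from datasette import hookimpl\n\n\n@hookimpl\ndef extra_js_urls(datasette):\n    # Resolves to /-/static-plugins/datasette_plugin_name/plugin.js,\n    # respecting any base_url prefix that has been configured:\n    return [\n        datasette.urls.static_plugins(\n            \"datasette_plugin_name\", \"plugin.js\"\n        )\n    ]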
\n datasette-cluster-map is a useful example of a plugin that includes packaged static assets in this way.", "breadcrumbs": "[\"Writing plugins\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette-cluster-map\", \"label\": \"datasette-cluster-map\"}]"} {"id": "writing_plugins:writing-plugins-cookiecutter", "page": "writing_plugins", "ref": "writing-plugins-cookiecutter", "title": "Starting an installable plugin using cookiecutter", "content": "Plugins that can be installed should be written as Python packages using a setup.py file. \n The quickest way to start writing one an installable plugin is to use the datasette-plugin cookiecutter template. This creates a new plugin structure for you complete with an example test and GitHub Actions workflows for testing and publishing your plugin. \n Install cookiecutter and then run this command to start building a plugin using the template: \n cookiecutter gh:simonw/datasette-plugin \n Read a cookiecutter template for writing Datasette plugins for more information about this template.", "breadcrumbs": "[\"Writing plugins\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette-plugin\", \"label\": \"datasette-plugin\"}, {\"href\": \"https://cookiecutter.readthedocs.io/en/stable/installation.html\", \"label\": \"Install cookiecutter\"}, {\"href\": \"https://simonwillison.net/2020/Jun/20/cookiecutter-plugins/\", \"label\": \"a cookiecutter template for writing Datasette plugins\"}]"} {"id": "facets:speeding-up-facets-with-indexes", "page": "facets", "ref": "speeding-up-facets-with-indexes", "title": "Speeding up facets with indexes", "content": "The performance of facets can be greatly improved by adding indexes on the columns you wish to facet by.\n Adding indexes can be performed using the sqlite3 command-line utility. Here's how to add an index on the state column in a table called Food_Trucks : \n $ sqlite3 mydatabase.db\nSQLite version 3.19.3 2017-06-27 16:48:08\nEnter \".help\" for usage hints.\nsqlite> CREATE INDEX Food_Trucks_state ON Food_Trucks(\"state\"); \n Or using the sqlite-utils command-line utility: \n $ sqlite-utils create-index mydatabase.db Food_Trucks state", "breadcrumbs": "[\"Facets\"]", "references": "[{\"href\": \"https://sqlite-utils.datasette.io/en/stable/cli.html#creating-indexes\", \"label\": \"sqlite-utils\"}]"} {"id": "metadata:specifying-units-for-a-column", "page": "metadata", "ref": "specifying-units-for-a-column", "title": "Specifying units for a column", "content": "Datasette supports attaching units to a column, which will be used when displaying\n values from that column. SI prefixes will be used where appropriate. \n Column units are configured in the metadata like so: \n {\n \"databases\": {\n \"database1\": {\n \"tables\": {\n \"example_table\": {\n \"units\": {\n \"column1\": \"metres\",\n \"column2\": \"Hz\"\n }\n }\n }\n }\n }\n} \n Units are interpreted using Pint , and you can see the full list of available units in\n Pint's unit registry . 
You can also add custom units to the metadata, which will be\n registered with Pint: \n {\n \"custom_units\": [\n \"decibel = [] = dB\"\n ]\n}", "breadcrumbs": "[\"Metadata\"]", "references": "[{\"href\": \"https://pint.readthedocs.io/\", \"label\": \"Pint\"}, {\"href\": \"https://github.com/hgrecco/pint/blob/master/pint/default_en.txt\", \"label\": \"unit registry\"}, {\"href\": \"http://pint.readthedocs.io/en/latest/defining.html\", \"label\": \"custom units\"}]"} {"id": "metadata:label-columns", "page": "metadata", "ref": "label-columns", "title": "Specifying the label column for a table", "content": "Datasette's HTML interface attempts to display foreign key references as\n labelled hyperlinks. By default, it looks for referenced tables that only have\n two columns: a primary key column and one other. It assumes that the second\n column should be used as the link label. \n If your table has more than two columns you can specify which column should be\n used for the link label with the label_column property: \n {\n \"databases\": {\n \"database1\": {\n \"tables\": {\n \"example_table\": {\n \"label_column\": \"title\"\n }\n }\n }\n }\n}", "breadcrumbs": "[\"Metadata\"]", "references": "[]"} {"id": "json_api:json-api-table-arguments", "page": "json_api", "ref": "json-api-table-arguments", "title": "Special table arguments", "content": "?_col=COLUMN1&_col=COLUMN2 \n \n List specific columns to display. These will be shown along with any primary keys. \n \n \n \n ?_nocol=COLUMN1&_nocol=COLUMN2 \n \n List specific columns to hide - any column not listed will be displayed. Primary keys cannot be hidden. \n \n \n \n ?_labels=on/off \n \n Expand foreign key references for every possible column. See below. \n \n \n \n ?_label=COLUMN1&_label=COLUMN2 \n \n Expand foreign key references for one or more specified columns. \n \n \n \n ?_size=1000 or ?_size=max \n \n Sets a custom page size. This cannot exceed the max_returned_rows limit\n passed to datasette serve . Use max to get max_returned_rows . \n \n \n \n ?_sort=COLUMN \n \n Sorts the results by the specified column. \n \n \n \n ?_sort_desc=COLUMN \n \n Sorts the results by the specified column in descending order. \n \n \n \n ?_search=keywords \n \n For SQLite tables that have been configured for\n full-text search executes a search\n with the provided keywords. \n \n \n \n ?_search_COLUMN=keywords \n \n Like _search= but allows you to specify the column to be searched, as\n opposed to searching all columns that have been indexed by FTS. \n \n \n \n ?_searchmode=raw \n \n With this option, queries passed to ?_search= or ?_search_COLUMN= will\n not have special characters escaped. This means you can make use of the full\n set of advanced SQLite FTS syntax ,\n though this could potentially result in errors if the wrong syntax is used. \n \n \n \n ?_where=SQL-fragment \n \n If the execute-sql permission is enabled, this parameter\n can be used to pass one or more additional SQL fragments to be used in the\n WHERE clause of the SQL used to query the table. \n This is particularly useful if you are building a JavaScript application\n that needs to do something creative but still wants the other conveniences\n provided by the table view (such as faceting) and hence would like not to\n have to construct a completely custom SQL query. 
\n Some examples: \n \n \n facetable?_where=_neighborhood like \"%c%\"&_where=_city_id=3 \n \n \n facetable?_where=_city_id in (select id from facet_cities where name != \"Detroit\") \n \n \n \n \n \n ?_through={json} \n \n This can be used to filter rows via a join against another table. \n The JSON parameter must include three keys: table , column and value . \n table must be a table that the current table is related to via a foreign key relationship. \n column must be a column in that other table. \n value is the value that you want to match against. \n For example, to filter roadside_attractions to just show the attractions that have a characteristic of \"museum\", you would construct this JSON: \n {\n \"table\": \"roadside_attraction_characteristics\",\n \"column\": \"characteristic_id\",\n \"value\": \"1\"\n} \n As a URL, that looks like this: \n ?_through={%22table%22:%22roadside_attraction_characteristics%22,%22column%22:%22characteristic_id%22,%22value%22:%221%22} \n Here's an example . \n \n \n \n ?_next=TOKEN \n \n Pagination by continuation token - pass the token that was returned in the\n \"next\" property by the previous page. \n \n \n \n ?_facet=column \n \n Facet by column. Can be applied multiple times, see Facets . Only works on the default JSON output, not on any of the custom shapes. \n \n \n \n ?_facet_size=100 \n \n Increase the number of facet results returned for each facet. Use ?_facet_size=max for the maximum available size, determined by max_returned_rows . \n \n \n \n ?_nofacet=1 \n \n Disable all facets and facet suggestions for this page, including any defined by Facets in metadata.json . \n \n \n \n ?_nosuggest=1 \n \n Disable facet suggestions for this page. \n \n \n \n ?_nocount=1 \n \n Disable the select count(*) query used on this page - a count of None will be returned instead.", "breadcrumbs": "[\"JSON API\", \"Table arguments\"]", "references": "[{\"href\": \"https://www.sqlite.org/fts3.html\", \"label\": \"full-text search\"}, {\"href\": \"https://www.sqlite.org/fts5.html#full_text_query_syntax\", \"label\": \"advanced SQLite FTS syntax\"}, {\"href\": \"https://latest.datasette.io/fixtures/facetable?_where=_neighborhood%20like%20%22%c%%22&_where=_city_id=3\", \"label\": \"facetable?_where=_neighborhood like \\\"%c%\\\"&_where=_city_id=3\"}, {\"href\": \"https://latest.datasette.io/fixtures/facetable?_where=_city_id%20in%20(select%20id%20from%20facet_cities%20where%20name%20!=%20%22Detroit%22)\", \"label\": \"facetable?_where=_city_id in (select id from facet_cities where name != \\\"Detroit\\\")\"}, {\"href\": \"https://latest.datasette.io/fixtures/roadside_attractions?_through={%22table%22:%22roadside_attraction_characteristics%22,%22column%22:%22characteristic_id%22,%22value%22:%221%22}\", \"label\": \"an example\"}]"} {"id": "json_api:json-api-special", "page": "json_api", "ref": "json-api-special", "title": "Special JSON arguments", "content": "Every Datasette endpoint that can return JSON also accepts the following\n query string arguments: \n \n \n ?_shape=SHAPE \n \n The shape of the JSON to return, documented above. \n \n \n \n ?_nl=on \n \n When used with ?_shape=array produces newline-delimited JSON objects. \n \n \n \n ?_json=COLUMN1&_json=COLUMN2 \n \n If any of your SQLite columns contain JSON values, you can use one or more\n _json= parameters to request that those columns be returned as regular\n JSON. Without this argument those columns will be returned as JSON objects\n that have been double-encoded into a JSON string value. 
\n Compare this query without the argument to this query using the argument \n \n \n \n ?_json_infinity=on \n \n If your data contains infinity or -infinity values, Datasette will replace\n them with None when returning them as JSON. If you pass _json_infinity=1 \n Datasette will instead return them as Infinity or -Infinity which is\n invalid JSON but can be processed by some custom JSON parsers. \n \n \n \n ?_timelimit=MS \n \n Sets a custom time limit for the query in ms. You can use this for optimistic\n queries where you would like Datasette to give up if the query takes too\n long, for example if you want to implement autocomplete search but only if\n it can be executed in less than 10ms. \n \n \n \n ?_ttl=SECONDS \n \n For how many seconds should this response be cached by HTTP proxies? Use\n ?_ttl=0 to disable HTTP caching entirely for this request. \n \n \n \n ?_trace=1 \n \n Turns on tracing for this page: SQL queries executed during the request will\n be gathered and included in the response, either in a new \"_traces\" key\n for JSON responses or at the bottom of the page if the response is in HTML. \n The structure of the data returned here should be considered highly unstable\n and very likely to change. \n Only available if the trace_debug setting is enabled.", "breadcrumbs": "[\"JSON API\"]", "references": "[{\"href\": \"https://fivethirtyeight.datasettes.com/fivethirtyeight.json?sql=select+%27{%22this+is%22%3A+%22a+json+object%22}%27+as+d&_shape=array\", \"label\": \"this query without the argument\"}, {\"href\": \"https://fivethirtyeight.datasettes.com/fivethirtyeight.json?sql=select+%27{%22this+is%22%3A+%22a+json+object%22}%27+as+d&_shape=array&_json=d\", \"label\": \"this query using the argument\"}]"} {"id": "spatialite:spatial-indexing-latitude-longitude-columns", "page": "spatialite", "ref": "spatial-indexing-latitude-longitude-columns", "title": "Spatial indexing latitude/longitude columns", "content": "Here's a recipe for taking a table with existing latitude and longitude columns, adding a SpatiaLite POINT geometry column to that table, populating the new column and then populating a spatial index: \n import sqlite3\n\nconn = sqlite3.connect(\"museums.db\")\n# Load the spatialite extension:\nconn.enable_load_extension(True)\nconn.load_extension(\"/usr/local/lib/mod_spatialite.dylib\")\n# Initialize spatial metadata for this database:\nconn.execute(\"select InitSpatialMetadata(1)\")\n# Add a geometry column called point_geom to our museums table:\nconn.execute(\n \"SELECT AddGeometryColumn('museums', 'point_geom', 4326, 'POINT', 2);\"\n)\n# Now update that geometry column with the lat/lon points\nconn.execute(\n \"\"\"\n UPDATE museums SET\n point_geom = GeomFromText('POINT('||\"longitude\"||' '||\"latitude\"||')',4326);\n\"\"\"\n)\n# Now add a spatial index to that column\nconn.execute(\n 'select CreateSpatialIndex(\"museums\", \"point_geom\");'\n)\n# If you don't commit, your changes will not be persisted:\nconn.commit()\nconn.close()", "breadcrumbs": "[\"SpatiaLite\"]", "references": "[]"} {"id": "spatialite:id1", "page": "spatialite", "ref": "id1", "title": "SpatiaLite", "content": "The SpatiaLite module for SQLite adds features for handling geographic and spatial data. For an example of what you can do with it, see the tutorial Building a location to time zone API with SpatiaLite . \n To use it with Datasette, you need to install the mod_spatialite dynamic library. This can then be loaded into Datasette using the --load-extension command-line option.
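Before pointing Datasette at the extension it can be useful to confirm that mod_spatialite is actually loadable from Python; a minimal sketch (the bare module name may need to be replaced with a full path, which varies by platform):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.enable_load_extension(True)
# e.g. /usr/local/lib/mod_spatialite.dylib on macOS with Homebrew
conn.load_extension("mod_spatialite")
print(conn.execute("select spatialite_version()").fetchone()[0])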
\n Datasette can look for SpatiaLite in common installation locations if you run it like this: \n datasette --load-extension=spatialite --setting default_allow_sql off \n If SpatiaLite is in another location, use the full path to the extension instead: \n datasette --setting default_allow_sql off \\\n --load-extension=/usr/local/lib/mod_spatialite.dylib", "breadcrumbs": "[]", "references": "[{\"href\": \"https://www.gaia-gis.it/fossil/libspatialite/index\", \"label\": \"SpatiaLite module\"}, {\"href\": \"https://datasette.io/tutorials/spatialite\", \"label\": \"Building a location to time zone API with SpatiaLite\"}]"} {"id": "metadata:metadata-source-license-about", "page": "metadata", "ref": "metadata-source-license-about", "title": "Source, license and about", "content": "The three visible metadata fields you can apply to everything, specific databases or specific tables are source, license and about. All three are optional. \n source and source_url should be used to indicate where the underlying data came from. \n license and license_url should be used to indicate the license under which the data can be used. \n about and about_url can be used to link to further information about the project - an accompanying blog entry for example. \n For each of these you can provide just the *_url field and Datasette will treat that as the default link label text and display the URL directly on the page.", "breadcrumbs": "[\"Metadata\"]", "references": "[]"} {"id": "changelog:id56", "page": "changelog", "ref": "id56", "title": "Smaller changes", "content": "Cascading view permissions - so if a user has view-table they can view the table page even if they do not have view-database or view-instance . ( #832 ) \n \n \n CSRF protection no longer applies to Authentication: Bearer token requests or requests without cookies. ( #835 ) \n \n \n datasette.add_message() now works inside plugins. ( #864 ) \n \n \n Workaround for \"Too many open files\" error in test runs. ( #846 ) \n \n \n Respect existing scope[\"actor\"] if already set by ASGI middleware. ( #854 ) \n \n \n New process for shipping Alpha and beta releases . ( #807 ) \n \n \n {{ csrftoken() }} now works when plugins render a template using datasette.render_template(..., request=request) . ( #863 ) \n \n \n Datasette now creates a single Request object and uses it throughout the lifetime of the current HTTP request. ( #870 )", "breadcrumbs": "[\"Changelog\", \"0.45 (2020-07-01)\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/issues/832\", \"label\": \"#832\"}, {\"href\": \"https://github.com/simonw/datasette/issues/835\", \"label\": \"#835\"}, {\"href\": \"https://github.com/simonw/datasette/issues/864\", \"label\": \"#864\"}, {\"href\": \"https://github.com/simonw/datasette/issues/846\", \"label\": \"#846\"}, {\"href\": \"https://github.com/simonw/datasette/issues/854\", \"label\": \"#854\"}, {\"href\": \"https://github.com/simonw/datasette/issues/807\", \"label\": \"#807\"}, {\"href\": \"https://github.com/simonw/datasette/issues/863\", \"label\": \"#863\"}, {\"href\": \"https://github.com/simonw/datasette/issues/870\", \"label\": \"#870\"}]"} {"id": "changelog:id58", "page": "changelog", "ref": "id58", "title": "Smaller changes", "content": "New internals documentation for Request object and Response class . ( #706 ) \n \n \n request.url now respects the force_https_urls config setting. closes ( #781 ) \n \n \n request.args.getlist() returns [] if missing. Removed request.raw_args entirely. 
( #774 ) \n \n \n New datasette.get_database() method. \n \n \n Added _ prefix to many private, undocumented methods of the Datasette class. ( #576 ) \n \n \n Removed the db.get_outbound_foreign_keys() method which duplicated the behaviour of db.foreign_keys_for_table() . \n \n \n New await datasette.permission_allowed() method. \n \n \n /-/actor debugging endpoint for viewing the currently authenticated actor. \n \n \n New request.cookies property. \n \n \n /-/plugins endpoint now shows a list of hooks implemented by each plugin, e.g. https://latest.datasette.io/-/plugins?all=1 \n \n \n request.post_vars() method no longer discards empty values. \n \n \n New \"params\" canned query key for explicitly setting named parameters, see Canned query parameters . ( #797 ) \n \n \n request.args is now a MultiParams object. \n \n \n Fixed a bug with the datasette plugins command. ( #802 ) \n \n \n Nicer pattern for using make_app_client() in tests. ( #395 ) \n \n \n New request.actor property. \n \n \n Fixed broken CSS on nested 404 pages. ( #777 ) \n \n \n New request.url_vars property. ( #822 ) \n \n \n Fixed a bug with the python tests/fixtures.py command for outputting Datasette's testing fixtures database and plugins. ( #804 ) \n \n \n datasette publish heroku now deploys using Python 3.8.3. \n \n \n Added a warning that the register_facet_classes() hook is unstable and may change in the future. ( #830 ) \n \n \n The {\"$env\": \"ENVIRONMENT_VARIBALE\"} mechanism (see Secret configuration values ) now works with variables inside nested lists. ( #837 )", "breadcrumbs": "[\"Changelog\", \"0.44 (2020-06-11)\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/issues/706\", \"label\": \"#706\"}, {\"href\": \"https://github.com/simonw/datasette/issues/781\", \"label\": \"#781\"}, {\"href\": \"https://github.com/simonw/datasette/issues/774\", \"label\": \"#774\"}, {\"href\": \"https://github.com/simonw/datasette/issues/576\", \"label\": \"#576\"}, {\"href\": \"https://latest.datasette.io/-/plugins?all=1\", \"label\": \"https://latest.datasette.io/-/plugins?all=1\"}, {\"href\": \"https://github.com/simonw/datasette/issues/797\", \"label\": \"#797\"}, {\"href\": \"https://github.com/simonw/datasette/issues/802\", \"label\": \"#802\"}, {\"href\": \"https://github.com/simonw/datasette/issues/395\", \"label\": \"#395\"}, {\"href\": \"https://github.com/simonw/datasette/issues/777\", \"label\": \"#777\"}, {\"href\": \"https://github.com/simonw/datasette/issues/822\", \"label\": \"#822\"}, {\"href\": \"https://github.com/simonw/datasette/issues/804\", \"label\": \"#804\"}, {\"href\": \"https://github.com/simonw/datasette/issues/830\", \"label\": \"#830\"}, {\"href\": \"https://github.com/simonw/datasette/issues/837\", \"label\": \"#837\"}]"} {"id": "changelog:smaller-changes", "page": "changelog", "ref": "smaller-changes", "title": "Smaller changes", "content": "Wide tables shown within Datasette now scroll horizontally ( #998 ). This is achieved using a new
element which may impact the implementation of some plugins (for example this change to datasette-cluster-map ). \n \n \n New debug-menu permission. ( #1068 ) \n \n \n Removed --debug option, which didn't do anything. ( #814 ) \n \n \n Link: HTTP header pagination. ( #1014 ) \n \n \n x button for clearing filters. ( #1016 ) \n \n \n Edit SQL button on canned queries, ( #1019 ) \n \n \n --load-extension=spatialite shortcut. ( #1028 ) \n \n \n scale-in animation for column action menu. ( #1039 ) \n \n \n Option to pass a list of templates to .render_template() is now documented. ( #1045 ) \n \n \n New datasette.urls.static_plugins() method. ( #1033 ) \n \n \n datasette -o option now opens the most relevant page. ( #976 ) \n \n \n datasette --cors option now enables access to /database.db downloads. ( #1057 ) \n \n \n Database file downloads now implement cascading permissions, so you can download a database if you have view-database-download permission even if you do not have permission to access the Datasette instance. ( #1058 ) \n \n \n New documentation on Designing URLs for your plugin . ( #1053 )", "breadcrumbs": "[\"Changelog\", \"0.51 (2020-10-31)\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/issues/998\", \"label\": \"#998\"}, {\"href\": \"https://github.com/simonw/datasette-cluster-map/commit/fcb4abbe7df9071c5ab57defd39147de7145b34e\", \"label\": \"this change to datasette-cluster-map\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1068\", \"label\": \"#1068\"}, {\"href\": \"https://github.com/simonw/datasette/issues/814\", \"label\": \"#814\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1014\", \"label\": \"#1014\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1016\", \"label\": \"#1016\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1019\", \"label\": \"#1019\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1028\", \"label\": \"#1028\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1039\", \"label\": \"#1039\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1045\", \"label\": \"#1045\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1033\", \"label\": \"#1033\"}, {\"href\": \"https://github.com/simonw/datasette/issues/976\", \"label\": \"#976\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1057\", \"label\": \"#1057\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1058\", \"label\": \"#1058\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1053\", \"label\": \"#1053\"}]"} {"id": "changelog:id83", "page": "changelog", "ref": "id83", "title": "Small changes", "content": "We now show the size of the database file next to the download link ( #172 ) \n \n \n New /-/databases introspection page shows currently connected databases ( #470 ) \n \n \n Binary data is no longer displayed on the table and row pages ( #442 - thanks, Russ Garrett) \n \n \n New show/hide SQL links on custom query pages ( #415 ) \n \n \n The extra_body_script plugin hook now accepts an optional view_name argument ( #443 - thanks, Russ Garrett) \n \n \n Bumped Jinja2 dependency to 2.10.1 ( #426 ) \n \n \n All table filters are now documented, and documentation is enforced via unit tests ( 2c19a27 ) \n \n \n New project guideline: master should stay shippable at all times! 
( 31f36e1 ) \n \n \n Fixed a bug where sqlite_timelimit() occasionally failed to clean up after itself ( bac4e01 ) \n \n \n We no longer load additional plugins when executing pytest ( #438 ) \n \n \n Homepage now links to database views if there are less than five tables in a database ( #373 ) \n \n \n The --cors option is now respected by error pages ( #453 ) \n \n \n datasette publish heroku now uses the --include-vcs-ignore option, which means it works under Travis CI ( #407 ) \n \n \n datasette publish heroku now publishes using Python 3.6.8 ( 666c374 ) \n \n \n Renamed datasette publish now to datasette publish nowv1 ( #472 ) \n \n \n datasette publish nowv1 now accepts multiple --alias parameters ( 09ef305 ) \n \n \n Removed the datasette skeleton command ( #476 ) \n \n \n The documentation on how to build the documentation now recommends sphinx-autobuild", "breadcrumbs": "[\"Changelog\", \"0.28 (2019-05-19)\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/issues/172\", \"label\": \"#172\"}, {\"href\": \"https://github.com/simonw/datasette/issues/470\", \"label\": \"#470\"}, {\"href\": \"https://github.com/simonw/datasette/pull/442\", \"label\": \"#442\"}, {\"href\": \"https://github.com/simonw/datasette/issues/415\", \"label\": \"#415\"}, {\"href\": \"https://github.com/simonw/datasette/pull/443\", \"label\": \"#443\"}, {\"href\": \"https://github.com/simonw/datasette/pull/426\", \"label\": \"#426\"}, {\"href\": \"https://github.com/simonw/datasette/commit/2c19a27d15a913e5f3dd443f04067169a6f24634\", \"label\": \"2c19a27\"}, {\"href\": \"https://github.com/simonw/datasette/commit/31f36e1b97ccc3f4387c80698d018a69798b6228\", \"label\": \"31f36e1\"}, {\"href\": \"https://github.com/simonw/datasette/commit/bac4e01f40ae7bd19d1eab1fb9349452c18de8f5\", \"label\": \"bac4e01\"}, {\"href\": \"https://github.com/simonw/datasette/issues/438\", \"label\": \"#438\"}, {\"href\": \"https://github.com/simonw/datasette/issues/373\", \"label\": \"#373\"}, {\"href\": \"https://github.com/simonw/datasette/issues/453\", \"label\": \"#453\"}, {\"href\": \"https://github.com/simonw/datasette/pull/407\", \"label\": \"#407\"}, {\"href\": \"https://github.com/simonw/datasette/commit/666c37415a898949fae0437099d62a35b1e9c430\", \"label\": \"666c374\"}, {\"href\": \"https://github.com/simonw/datasette/issues/472\", \"label\": \"#472\"}, {\"href\": \"https://github.com/simonw/datasette/commit/09ef305c687399384fe38487c075e8669682deb4\", \"label\": \"09ef305\"}, {\"href\": \"https://github.com/simonw/datasette/issues/476\", \"label\": \"#476\"}]"} {"id": "changelog:small-changes", "page": "changelog", "ref": "small-changes", "title": "Small changes", "content": "Databases published using datasette publish now open in Immutable mode . ( #469 ) \n \n \n ?col__date= now works for columns containing spaces \n \n \n Automatic label detection (for deciding which column to show when linking to a foreign key) has been improved. ( #485 ) \n \n \n Fixed bug where pagination broke when combined with an expanded foreign key. ( #489 ) \n \n \n Contributors can now run pip install -e .[docs] to get all of the dependencies needed to build the documentation, including cd docs && make livehtml support. \n \n \n Datasette's dependencies are now all specified using the ~= match operator. ( #532 ) \n \n \n white-space: pre-wrap now used for table creation SQL. 
( #505 ) \n \n \n Full list of commits between 0.28 and 0.29.", "breadcrumbs": "[\"Changelog\", \"0.29 (2019-07-07)\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/issues/469\", \"label\": \"#469\"}, {\"href\": \"https://github.com/simonw/datasette/issues/485\", \"label\": \"#485\"}, {\"href\": \"https://github.com/simonw/datasette/issues/489\", \"label\": \"#489\"}, {\"href\": \"https://github.com/simonw/datasette/issues/532\", \"label\": \"#532\"}, {\"href\": \"https://github.com/simonw/datasette/issues/505\", \"label\": \"#505\"}, {\"href\": \"https://github.com/simonw/datasette/compare/0.28...0.29\", \"label\": \"Full list of commits\"}]"} {"id": "changelog:signed-values-and-secrets", "page": "changelog", "ref": "signed-values-and-secrets", "title": "Signed values and secrets", "content": "Both flash messages and user authentication needed a way to sign values and set signed cookies. Two new methods are now available for plugins to take advantage of this mechanism: .sign(value, namespace=\"default\") and .unsign(value, namespace=\"default\") . \n Datasette will generate a secret automatically when it starts up, but to avoid resetting the secret (and hence invalidating any cookies) every time the server restarts you should set your own secret. You can pass a secret to Datasette using the new --secret option or with a DATASETTE_SECRET environment variable. See Configuring the secret for more details. \n You can also set a secret when you deploy Datasette using datasette publish or datasette package - see Using secrets with datasette publish . \n Plugins can now sign values and verify their signatures using the datasette.sign() and datasette.unsign() methods.", "breadcrumbs": "[\"Changelog\", \"0.44 (2020-06-11)\"]", "references": "[]"} {"id": "settings:id1", "page": "settings", "ref": "id1", "title": "Settings", "content": "", "breadcrumbs": "[]", "references": "[]"} {"id": "settings:id2", "page": "settings", "ref": "id2", "title": "Settings", "content": "The following options can be set using --setting name value , or by storing them in the settings.json file for use with Configuration directory mode .", "breadcrumbs": "[\"Settings\"]", "references": "[]"} {"id": "metadata:metadata-sortable-columns", "page": "metadata", "ref": "metadata-sortable-columns", "title": "Setting which columns can be used for sorting", "content": "Datasette allows any column to be used for sorting by default. If you need to\n control which columns are available for sorting you can do so using the optional\n sortable_columns key: \n {\n \"databases\": {\n \"database1\": {\n \"tables\": {\n \"example_table\": {\n \"sortable_columns\": [\n \"height\",\n \"weight\"\n ]\n }\n }\n }\n }\n} \n This will restrict sorting of example_table to just the height and\n weight columns. 
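To see the effect programmatically, here is a rough sketch that passes the metadata above to a Datasette instance and sorts by an allowed column; the database1.db file and example_table are hypothetical, and a ?_sort= request for a column that is not listed is expected to be refused:

import asyncio
from datasette.app import Datasette


async def check_sorting():
    datasette = Datasette(
        ["database1.db"],
        metadata={
            "databases": {
                "database1": {
                    "tables": {
                        "example_table": {
                            "sortable_columns": ["height", "weight"]
                        }
                    }
                }
            }
        },
    )
    # Sorting by a listed column is allowed
    response = await datasette.client.get(
        "/database1/example_table.json?_sort=height"
    )
    assert response.status_code == 200


asyncio.run(check_sorting())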
\n You can also disable sorting entirely by setting \"sortable_columns\": [] \n You can use sortable_columns to enable specific sort orders for a view called name_of_view in the database my_database like so: \n {\n \"databases\": {\n \"my_database\": {\n \"tables\": {\n \"name_of_view\": {\n \"sortable_columns\": [\n \"clicks\",\n \"impressions\"\n ]\n }\n }\n }\n }\n}", "breadcrumbs": "[\"Metadata\"]", "references": "[]"} {"id": "contributing:devenvironment", "page": "contributing", "ref": "devenvironment", "title": "Setting up a development environment", "content": "If you have Python 3.7 or higher installed on your computer (on OS X the quickest way to do this is using homebrew ) you can install an editable copy of Datasette using the following steps. \n If you want to use GitHub to publish your changes, first create a fork of datasette under your own GitHub account. \n Now clone that repository somewhere on your computer: \n git clone git@github.com:YOURNAME/datasette \n If you want to get started without creating your own fork, you can do this instead: \n git clone git@github.com:simonw/datasette \n The next step is to create a virtual environment for your project and use it to install Datasette's dependencies: \n cd datasette\n# Create a virtual environment in ./venv\npython3 -m venv ./venv\n# Now activate the virtual environment, so pip can install into it\nsource venv/bin/activate\n# Install Datasette and its testing dependencies\npython3 -m pip install -e '.[test]' \n That last line does most of the work: pip install -e means \"install this package in a way that allows me to edit the source code in place\". The .[test] option means \"use the setup.py in this directory and install the optional testing dependencies as well\".", "breadcrumbs": "[\"Contributing\"]", "references": "[{\"href\": \"https://docs.python-guide.org/starting/install3/osx/\", \"label\": \"is using homebrew\"}, {\"href\": \"https://github.com/simonw/datasette/fork\", \"label\": \"create a fork of datasette\"}]"} {"id": "testing_plugins:testing-plugins-datasette-test-instance", "page": "testing_plugins", "ref": "testing-plugins-datasette-test-instance", "title": "Setting up a Datasette test instance", "content": "The above example shows the easiest way to start writing tests against a Datasette instance: \n from datasette.app import Datasette\nimport pytest\n\n\n@pytest.mark.asyncio\nasync def test_plugin_is_installed():\n datasette = Datasette(memory=True)\n response = await datasette.client.get(\"/-/plugins.json\")\n assert response.status_code == 200 \n Creating a Datasette() instance like this as useful shortcut in tests, but there is one detail you need to be aware of. It's important to ensure that the async method .invoke_startup() is called on that instance. You can do that like this: \n datasette = Datasette(memory=True)\nawait datasette.invoke_startup() \n This method registers any startup(datasette) or prepare_jinja2_environment(env, datasette) plugins that might themselves need to make async calls. \n If you are using await datasette.client.get() and similar methods then you don't need to worry about this - Datasette automatically calls invoke_startup() the first time it handles a request.", "breadcrumbs": "[\"Testing plugins\"]", "references": "[]"} {"id": "internals:internals-response-set-cookie", "page": "internals", "ref": "internals-response-set-cookie", "title": "Setting cookies with response.set_cookie()", "content": "To set cookies on the response, use the response.set_cookie(...) method. 
The method signature looks like this: \n def set_cookie(\n self,\n key,\n value=\"\",\n max_age=None,\n expires=None,\n path=\"/\",\n domain=None,\n secure=False,\n httponly=False,\n samesite=\"lax\",\n):\n ... \n You can use this with datasette.sign() to set signed cookies. Here's how you would set the ds_actor cookie for use with Datasette authentication : \n response = Response.redirect(\"/\")\nresponse.set_cookie(\n \"ds_actor\",\n datasette.sign({\"a\": {\"id\": \"cleopaws\"}}, \"actor\"),\n)\nreturn response", "breadcrumbs": "[\"Internals for plugins\", \"Response class\"]", "references": "[]"} {"id": "metadata:metadata-default-sort", "page": "metadata", "ref": "metadata-default-sort", "title": "Setting a default sort order", "content": "By default Datasette tables are sorted by primary key. You can over-ride this default for a specific table using the \"sort\" or \"sort_desc\" metadata properties: \n {\n \"databases\": {\n \"mydatabase\": {\n \"tables\": {\n \"example_table\": {\n \"sort\": \"created\"\n }\n }\n }\n }\n} \n Or use \"sort_desc\" to sort in descending order: \n {\n \"databases\": {\n \"mydatabase\": {\n \"tables\": {\n \"example_table\": {\n \"sort_desc\": \"created\"\n }\n }\n }\n }\n}", "breadcrumbs": "[\"Metadata\"]", "references": "[]"} {"id": "metadata:metadata-page-size", "page": "metadata", "ref": "metadata-page-size", "title": "Setting a custom page size", "content": "Datasette defaults to displaying 100 rows per page, for both tables and views. You can change this default page size on a per-table or per-view basis using the \"size\" key in metadata.json : \n {\n \"databases\": {\n \"mydatabase\": {\n \"tables\": {\n \"example_table\": {\n \"size\": 10\n }\n }\n }\n }\n} \n This size can still be over-ridden by passing e.g. 
?_size=50 in the query string.", "breadcrumbs": "[\"Metadata\"]", "references": "[]"} {"id": "custom_templates:customization-static-files", "page": "custom_templates", "ref": "customization-static-files", "title": "Serving static files", "content": "Datasette can serve static files for you, using the --static option.\n Consider the following directory structure: \n metadata.json\nstatic-files/styles.css\nstatic-files/app.js \n You can start Datasette using --static assets:static-files/ to serve those\n files from the /assets/ mount point: \n $ datasette -m metadata.json --static assets:static-files/ --memory \n The following URLs will now serve the content from those CSS and JS files: \n http://localhost:8001/assets/styles.css\nhttp://localhost:8001/assets/app.js \n You can reference those files from metadata.json like so: \n {\n \"extra_css_urls\": [\n \"/assets/styles.css\"\n ],\n \"extra_js_urls\": [\n \"/assets/app.js\"\n ]\n}", "breadcrumbs": "[\"Custom pages and templates\", \"Custom CSS and JavaScript\"]", "references": "[]"} {"id": "plugins:plugins-installed", "page": "plugins", "ref": "plugins-installed", "title": "Seeing what plugins are installed", "content": "You can see a list of installed plugins by navigating to the /-/plugins page of your Datasette instance - for example: https://fivethirtyeight.datasettes.com/-/plugins \n You can also use the datasette plugins command: \n $ datasette plugins\n[\n {\n \"name\": \"datasette_json_html\",\n \"static\": false,\n \"templates\": false,\n \"version\": \"0.4.0\"\n }\n] \n [[[cog\nfrom datasette import cli\nfrom click.testing import CliRunner\nimport textwrap, json\ncog.out(\"\\n\")\nresult = CliRunner().invoke(cli.cli, [\"plugins\", \"--all\"])\n# cog.out() with text containing newlines was unindenting for some reason\ncog.outl(\"If you run ``datasette plugins --all`` it will include default plugins that ship as part of Datasette::\\n\")\nplugins = [p for p in json.loads(result.output) if p[\"name\"].startswith(\"datasette.\")]\nindented = textwrap.indent(json.dumps(plugins, indent=4), \" \")\nfor line in indented.split(\"\\n\"):\n cog.outl(line)\ncog.out(\"\\n\\n\") \n ]]] \n If you run datasette plugins --all it will include default plugins that ship as part of Datasette: \n [\n {\n \"name\": \"datasette.actor_auth_cookie\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n \"actor_from_request\"\n ]\n },\n {\n \"name\": \"datasette.blob_renderer\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n \"register_output_renderer\"\n ]\n },\n {\n \"name\": \"datasette.default_magic_parameters\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n \"register_magic_parameters\"\n ]\n },\n {\n \"name\": \"datasette.default_menu_links\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n \"menu_links\"\n ]\n },\n {\n \"name\": \"datasette.default_permissions\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n \"permission_allowed\"\n ]\n },\n {\n \"name\": \"datasette.facets\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n \"register_facet_classes\"\n ]\n },\n {\n \"name\": \"datasette.filters\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n \"filters_from_request\"\n ]\n },\n {\n \"name\": \"datasette.forbidden\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n 
\"forbidden\"\n ]\n },\n {\n \"name\": \"datasette.handle_exception\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n \"handle_exception\"\n ]\n },\n {\n \"name\": \"datasette.publish.cloudrun\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n \"publish_subcommand\"\n ]\n },\n {\n \"name\": \"datasette.publish.heroku\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n \"publish_subcommand\"\n ]\n },\n {\n \"name\": \"datasette.sql_functions\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n \"prepare_connection\"\n ]\n }\n] \n [[[end]]] \n You can add the --plugins-dir= option to include any plugins found in that directory.", "breadcrumbs": "[\"Plugins\"]", "references": "[{\"href\": \"https://fivethirtyeight.datasettes.com/-/plugins\", \"label\": \"https://fivethirtyeight.datasettes.com/-/plugins\"}]"} {"id": "changelog:secret-plugin-configuration-options", "page": "changelog", "ref": "secret-plugin-configuration-options", "title": "Secret plugin configuration options", "content": "Plugins like datasette-auth-github need a safe way to set secret configuration options. Since the default mechanism for configuring plugins exposes those settings in /-/metadata a new mechanism was needed. Secret configuration values describes how plugins can now specify that their settings should be read from a file or an environment variable: \n {\n \"plugins\": {\n \"datasette-auth-github\": {\n \"client_secret\": {\n \"$env\": \"GITHUB_CLIENT_SECRET\"\n }\n }\n }\n} \n These plugin secrets can be set directly using datasette publish . See Custom metadata and plugins for details. ( #538 and #543 )", "breadcrumbs": "[\"Changelog\", \"0.29 (2019-07-07)\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette-auth-github\", \"label\": \"datasette-auth-github\"}, {\"href\": \"https://github.com/simonw/datasette/issues/538\", \"label\": \"#538\"}, {\"href\": \"https://github.com/simonw/datasette/issues/543\", \"label\": \"#543\"}]"} {"id": "plugins:plugins-configuration-secret", "page": "plugins", "ref": "plugins-configuration-secret", "title": "Secret configuration values", "content": "Any values embedded in metadata.json will be visible to anyone who views the /-/metadata page of your Datasette instance. Some plugins may need configuration that should stay secret - API keys for example. There are two ways in which you can store secret configuration values. \n As environment variables . If your secret lives in an environment variable that is available to the Datasette process, you can indicate that the configuration value should be read from that environment variable like so: \n {\n \"plugins\": {\n \"datasette-auth-github\": {\n \"client_secret\": {\n \"$env\": \"GITHUB_CLIENT_SECRET\"\n }\n }\n }\n} \n As values in separate files . Your secrets can also live in files on disk. To specify a secret should be read from a file, provide the full file path like this: \n {\n \"plugins\": {\n \"datasette-auth-github\": {\n \"client_secret\": {\n \"$file\": \"/secrets/client-secret\"\n }\n }\n }\n} \n If you are publishing your data using the datasette publish family of commands, you can use the --plugin-secret option to set these secrets at publish time. 
For example, using Heroku you might run the following command: \n $ datasette publish heroku my_database.db \\\n --name my-heroku-app-demo \\\n --install=datasette-auth-github \\\n --plugin-secret datasette-auth-github client_id your_client_id \\\n --plugin-secret datasette-auth-github client_secret your_client_secret \n This will set the necessary environment variables and add the following to the deployed metadata.json : \n {\n \"plugins\": {\n \"datasette-auth-github\": {\n \"client_id\": {\n \"$env\": \"DATASETTE_AUTH_GITHUB_CLIENT_ID\"\n },\n \"client_secret\": {\n \"$env\": \"DATASETTE_AUTH_GITHUB_CLIENT_SECRET\"\n }\n }\n }\n}", "breadcrumbs": "[\"Plugins\", \"Plugin configuration\"]", "references": "[]"} {"id": "full_text_search:full-text-search-custom-sql", "page": "full_text_search", "ref": "full-text-search-custom-sql", "title": "Searches using custom SQL", "content": "You can include full-text search results in custom SQL queries. The general pattern with SQLite search is to run the search as a sub-select that returns rowid values, then include those rowids in another part of the query. \n You can see the syntax for a basic search by running that search on a table page and then clicking \"View and edit SQL\" to see the underlying SQL. For example, consider this search for manafort is the US FARA database : \n /fara/FARA_All_ShortForms?_search=manafort \n If you click View and edit SQL you'll see that the underlying SQL looks like this: \n select\n rowid,\n Short_Form_Termination_Date,\n Short_Form_Date,\n Short_Form_Last_Name,\n Short_Form_First_Name,\n Registration_Number,\n Registration_Date,\n Registrant_Name,\n Address_1,\n Address_2,\n City,\n State,\n Zip\nfrom\n FARA_All_ShortForms\nwhere\n rowid in (\n select\n rowid\n from\n FARA_All_ShortForms_fts\n where\n FARA_All_ShortForms_fts match escape_fts(:search)\n )\norder by\n rowid\nlimit\n 101", "breadcrumbs": "[\"Full-text search\"]", "references": "[{\"href\": \"https://fara.datasettes.com/fara/FARA_All_ShortForms?_search=manafort\", \"label\": \"manafort is the US FARA database\"}, {\"href\": \"https://fara.datasettes.com/fara?sql=select%0D%0A++rowid%2C%0D%0A++Short_Form_Termination_Date%2C%0D%0A++Short_Form_Date%2C%0D%0A++Short_Form_Last_Name%2C%0D%0A++Short_Form_First_Name%2C%0D%0A++Registration_Number%2C%0D%0A++Registration_Date%2C%0D%0A++Registrant_Name%2C%0D%0A++Address_1%2C%0D%0A++Address_2%2C%0D%0A++City%2C%0D%0A++State%2C%0D%0A++Zip%0D%0Afrom%0D%0A++FARA_All_ShortForms%0D%0Awhere%0D%0A++rowid+in+%28%0D%0A++++select%0D%0A++++++rowid%0D%0A++++from%0D%0A++++++FARA_All_ShortForms_fts%0D%0A++++where%0D%0A++++++FARA_All_ShortForms_fts+match+escape_fts%28%3Asearch%29%0D%0A++%29%0D%0Aorder+by%0D%0A++rowid%0D%0Alimit%0D%0A++101&search=manafort\", \"label\": \"View and edit SQL\"}]"} {"id": "contributing:contributing-running-tests", "page": "contributing", "ref": "contributing-running-tests", "title": "Running the tests", "content": "Once you have done this, you can run the Datasette unit tests from inside your datasette/ directory using pytest like so: \n pytest \n You can run the tests faster using multiple CPU cores with pytest-xdist like this: \n pytest -n auto -m \"not serial\" \n -n auto detects the number of available cores automatically. The -m \"not serial\" skips tests that don't work well in a parallel test environment. 
You can run those tests separately like so: \n pytest -m \"serial\"", "breadcrumbs": "[\"Contributing\"]", "references": "[{\"href\": \"https://docs.pytest.org/\", \"label\": \"pytest\"}, {\"href\": \"https://pypi.org/project/pytest-xdist/\", \"label\": \"pytest-xdist\"}]"} {"id": "sql_queries:sql", "page": "sql_queries", "ref": "sql", "title": "Running SQL queries", "content": "Datasette treats SQLite database files as read-only and immutable. This means it is not possible to execute INSERT or UPDATE statements using Datasette, which allows us to expose SELECT statements to the outside world without needing to worry about SQL injection attacks. \n The easiest way to execute custom SQL against Datasette is through the web UI. The database index page includes a SQL editor that lets you run any SELECT query you like. You can also construct queries using the filter interface on the tables page, then click \"View and edit SQL\" to open that query in the custom SQL editor. \n Note that this interface is only available if the execute-sql permission is allowed. \n Any Datasette SQL query is reflected in the URL of the page, allowing you to bookmark them, share them with others and navigate through previous queries using your browser back button. \n You can also retrieve the results of any query as JSON by adding .json to the base URL.", "breadcrumbs": "[]", "references": "[]"} {"id": "deploying:deploying-systemd", "page": "deploying", "ref": "deploying-systemd", "title": "Running Datasette using systemd", "content": "You can run Datasette on Ubuntu or Debian systems using systemd . \n First, ensure you have Python 3 and pip installed. On Ubuntu you can use sudo apt-get install python3 python3-pip . \n You can install Datasette into a virtual environment, or you can install it system-wide. To install system-wide, use sudo pip3 install datasette . \n Now create a folder for your Datasette databases, for example using mkdir /home/ubuntu/datasette-root . \n You can copy a test database into that folder like so: \n cd /home/ubuntu/datasette-root\ncurl -O https://latest.datasette.io/fixtures.db \n Create a file at /etc/systemd/system/datasette.service with the following contents: \n [Unit]\nDescription=Datasette\nAfter=network.target\n\n[Service]\nType=simple\nUser=ubuntu\nEnvironment=DATASETTE_SECRET=\nWorkingDirectory=/home/ubuntu/datasette-root\nExecStart=datasette serve . -h 127.0.0.1 -p 8000\nRestart=on-failure\n\n[Install]\nWantedBy=multi-user.target \n Add a random value for the DATASETTE_SECRET - this will be used to sign Datasette cookies such as the CSRF token cookie. You can generate a suitable value like so: \n $ python3 -c 'import secrets; print(secrets.token_hex(32))' \n This configuration will run Datasette against all database files contained in the /home/ubuntu/datasette-root directory. If that directory contains a metadata.yml (or .json ) file or a templates/ or plugins/ sub-directory those will automatically be loaded by Datasette - see Configuration directory mode for details. \n You can start the Datasette process running using the following: \n sudo systemctl daemon-reload\nsudo systemctl start datasette.service \n You will need to restart the Datasette service after making changes to its metadata.json configuration or adding a new database file to that directory. 
You can do that using: \n sudo systemctl restart datasette.service \n Once the service has started you can confirm that Datasette is running on port 8000 like so: \n curl 127.0.0.1:8000/-/versions.json\n# Should output JSON showing the installed version \n Datasette will not be accessible from outside the server because it is listening on 127.0.0.1 . You can expose it by instead listening on 0.0.0.0 , but a better way is to set up a proxy such as nginx - see Running Datasette behind a proxy .", "breadcrumbs": "[\"Deploying Datasette\"]", "references": "[]"} {"id": "deploying:deploying-openrc", "page": "deploying", "ref": "deploying-openrc", "title": "Running Datasette using OpenRC", "content": "OpenRC is the service manager on non-systemd Linux distributions like Alpine Linux and Gentoo . \n Create an init script at /etc/init.d/datasette with the following contents: \n #!/sbin/openrc-run\n\nname=\"datasette\"\ncommand=\"datasette\"\ncommand_args=\"serve -h 0.0.0.0 /path/to/db.db\"\ncommand_background=true\npidfile=\"/run/${RC_SVCNAME}.pid\" \n You then need to configure the service to run at boot and start it: \n rc-update add datasette\nrc-service datasette start", "breadcrumbs": "[\"Deploying Datasette\"]", "references": "[{\"href\": \"https://www.alpinelinux.org/\", \"label\": \"Alpine Linux\"}, {\"href\": \"https://www.gentoo.org/\", \"label\": \"Gentoo\"}]"} {"id": "changelog:running-datasette-behind-a-proxy", "page": "changelog", "ref": "running-datasette-behind-a-proxy", "title": "Running Datasette behind a proxy", "content": "The base_url configuration option is designed to help run Datasette on a specific path behind a proxy - for example if you want to run an instance of Datasette at /my-datasette/ within your existing site's URL hierarchy, proxied behind nginx or Apache. \n Support for this configuration option has been greatly improved ( #1023 ), and guidelines for using it are now available in a new documentation section on Running Datasette behind a proxy . ( #1027 )", "breadcrumbs": "[\"Changelog\", \"0.51 (2020-10-31)\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/issues/1023\", \"label\": \"#1023\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1027\", \"label\": \"#1027\"}]"} {"id": "deploying:deploying-proxy", "page": "deploying", "ref": "deploying-proxy", "title": "Running Datasette behind a proxy", "content": "You may wish to run Datasette behind an Apache or nginx proxy, using a path within your existing site. \n You can use the base_url configuration setting to tell Datasette to serve traffic with a specific URL prefix. For example, you could run Datasette like this: \n datasette my-database.db --setting base_url /my-datasette/ -p 8009 \n This will run Datasette with the following URLs: \n \n \n http://127.0.0.1:8009/my-datasette/ - the Datasette homepage \n \n \n http://127.0.0.1:8009/my-datasette/my-database - the page for the my-database.db database \n \n \n http://127.0.0.1:8009/my-datasette/my-database/some_table - the page for the some_table table \n \n \n You can now set your nginx or Apache server to proxy the /my-datasette/ path to this Datasette instance.", "breadcrumbs": "[\"Deploying Datasette\"]", "references": "[]"} {"id": "contributing:contributing-documentation-cog", "page": "contributing", "ref": "contributing-documentation-cog", "title": "Running Cog", "content": "Some pages of documentation (in particular the CLI reference ) are automatically updated using Cog . 
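Cog runs the Python embedded between [[[cog ... ]]] markers in the reStructuredText source and writes its output into the file up to the matching [[[end]]] marker, as in the plugins listing shown earlier. A minimal, purely illustrative sketch of such a block (the subcommand list here is just an example):

[[[cog
for name in ("serve", "publish", "package"):
    cog.outl("- datasette " + name)
]]]
- datasette serve
- datasette publish
- datasette package
[[[end]]]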
\n To update these pages, run the following command: \n cog -r docs/*.rst", "breadcrumbs": "[\"Contributing\", \"Editing and building the documentation\"]", "references": "[{\"href\": \"https://github.com/nedbat/cog\", \"label\": \"Cog\"}]"} {"id": "contributing:contributing-formatting-black", "page": "contributing", "ref": "contributing-formatting-black", "title": "Running Black", "content": "Black will be installed when you run pip install -e '.[test]' . To test that your code complies with Black, run the following in your root datasette repository checkout: \n $ black . --check\nAll done! \u2728 \ud83c\udf70 \u2728\n95 files would be left unchanged. \n If any of your code does not conform to Black you can run this to automatically fix those problems: \n $ black .\nreformatted ../datasette/setup.py\nAll done! \u2728 \ud83c\udf70 \u2728\n1 file reformatted, 94 files left unchanged.", "breadcrumbs": "[\"Contributing\", \"Code formatting\"]", "references": "[]"} {"id": "pages:rowview", "page": "pages", "ref": "rowview", "title": "Row", "content": "Every row in every Datasette table has its own URL. This means individual records can be linked to directly. \n Table cells with extremely long text contents are truncated on the table view according to the truncate_cells_html setting. If a cell has been truncated the full length version of that cell will be available on the row page. \n Rows which are the targets of foreign key references from other tables will show a link to a filtered search for all records that reference that row. Here's an example from the Registers of Members Interests database: \n ../people/uk.org.publicwhip%2Fperson%2F10001 \n Note that this URL includes the encoded primary key of the record. \n Here's that same page as JSON: \n ../people/uk.org.publicwhip%2Fperson%2F10001.json", "breadcrumbs": "[\"Pages and API endpoints\"]", "references": "[{\"href\": \"https://register-of-members-interests.datasettes.com/regmem/people/uk.org.publicwhip%2Fperson%2F10001\", \"label\": \"../people/uk.org.publicwhip%2Fperson%2F10001\"}, {\"href\": \"https://register-of-members-interests.datasettes.com/regmem/people/uk.org.publicwhip%2Fperson%2F10001.json\", \"label\": \"../people/uk.org.publicwhip%2Fperson%2F10001.json\"}]"} {"id": "internals:internals-response-asgi-send", "page": "internals", "ref": "internals-response-asgi-send", "title": "Returning a response with .asgi_send(send)", "content": "In most cases you will return Response objects from your own view functions. You can also use a Response instance to respond at a lower level via ASGI, for example if you are writing code that uses the asgi_wrapper(datasette) hook. \n Create a Response object and then use await response.asgi_send(send) , passing the ASGI send function. 
For example: \n async def require_authorization(scope, receive, send):\n response = Response.text(\n \"401 Authorization Required\",\n headers={\n \"www-authenticate\": 'Basic realm=\"Datasette\", charset=\"UTF-8\"'\n },\n status=401,\n )\n await response.asgi_send(send)", "breadcrumbs": "[\"Internals for plugins\", \"Response class\"]", "references": "[]"} {"id": "custom_templates:custom-pages-404", "page": "custom_templates", "ref": "custom-pages-404", "title": "Returning 404s", "content": "To indicate that content could not be found and display the default 404 page you can use the raise_404(message) function: \n {% if not rows %}\n {{ raise_404(\"Content not found\") }}\n{% endif %} \n If you call raise_404() the other content in your template will be ignored.", "breadcrumbs": "[\"Custom pages and templates\", \"Custom pages\"]", "references": "[]"} {"id": "internals:database-results", "page": "internals", "ref": "database-results", "title": "Results", "content": "The db.execute() method returns a single Results object. This can be used to access the rows returned by the query. \n Iterating over a Results object will yield SQLite Row objects . Each of these can be treated as a tuple or can be accessed using row[\"column\"] syntax: \n info = []\nresults = await db.execute(\"select name from sqlite_master\")\nfor row in results:\n info.append(row[\"name\"]) \n The Results object also has the following properties and methods: \n \n \n .truncated - boolean \n \n Indicates if this query was truncated - if it returned more results than the specified page_size . If this is true then the results object will only provide access to the first page_size rows in the query result. You can disable truncation by passing truncate=False to the db.query() method. \n \n \n \n .columns - list of strings \n \n A list of column names returned by the query. \n \n \n \n .rows - list of sqlite3.Row \n \n This property provides direct access to the list of rows returned by the database. You can access specific rows by index using results.rows[0] . \n \n \n \n .first() - row or None \n \n Returns the first row in the results, or None if no rows were returned. \n \n \n \n .single_value() \n \n Returns the value of the first column of the first row of results - but only if the query returned a single row with a single column. Raises a datasette.database.MultipleValues exception otherwise. \n \n \n \n .__len__() \n \n Calling len(results) returns the (truncated) number of returned results.", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[{\"href\": \"https://docs.python.org/3/library/sqlite3.html#row-objects\", \"label\": \"Row objects\"}]"} {"id": "internals:internals-response", "page": "internals", "ref": "internals-response", "title": "Response class", "content": "The Response class can be returned from view functions that have been registered using the register_routes(datasette) hook. \n The Response() constructor takes the following arguments: \n \n \n body - string \n \n The body of the response. \n \n \n \n status - integer (optional) \n \n The HTTP status - defaults to 200. \n \n \n \n headers - dictionary (optional) \n \n A dictionary of extra HTTP headers, e.g. {\"x-hello\": \"world\"} . \n \n \n \n content_type - string (optional) \n \n The content-type for the response. Defaults to text/plain . 
\n \n \n \n For example: \n from datasette.utils.asgi import Response\n\nresponse = Response(\n \"This is XML\",\n content_type=\"application/xml; charset=utf-8\",\n) \n The quickest way to create responses is using the Response.text(...) , Response.html(...) , Response.json(...) or Response.redirect(...) helper methods: \n from datasette.utils.asgi import Response\n\nhtml_response = Response.html(\"This is HTML\")\njson_response = Response.json({\"this_is\": \"json\"})\ntext_response = Response.text(\n \"This will become utf-8 encoded text\"\n)\n# Redirects are served as 302, unless you pass status=301:\nredirect_response = Response.redirect(\n \"https://latest.datasette.io/\"\n) \n Each of these responses will use the correct corresponding content-type - text/html; charset=utf-8 , application/json; charset=utf-8 or text/plain; charset=utf-8 respectively. \n Each of the helper methods take optional status= and headers= arguments, documented above.", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[]"} {"id": "internals:internals-request", "page": "internals", "ref": "internals-request", "title": "Request object", "content": "The request object is passed to various plugin hooks. It represents an incoming HTTP request. It has the following properties: \n \n \n .scope - dictionary \n \n The ASGI scope that was used to construct this request, described in the ASGI HTTP connection scope specification. \n \n \n \n .method - string \n \n The HTTP method for this request, usually GET or POST . \n \n \n \n .url - string \n \n The full URL for this request, e.g. https://latest.datasette.io/fixtures . \n \n \n \n .scheme - string \n \n The request scheme - usually https or http . \n \n \n \n .headers - dictionary (str -> str) \n \n A dictionary of incoming HTTP request headers. Header names have been converted to lowercase. \n \n \n \n .cookies - dictionary (str -> str) \n \n A dictionary of incoming cookies \n \n \n \n .host - string \n \n The host header from the incoming request, e.g. latest.datasette.io or localhost . \n \n \n \n .path - string \n \n The path of the request excluding the query string, e.g. /fixtures . \n \n \n \n .full_path - string \n \n The path of the request including the query string if one is present, e.g. /fixtures?sql=select+sqlite_version() . \n \n \n \n .query_string - string \n \n The query string component of the request, without the ? - e.g. name__contains=sam&age__gt=10 . \n \n \n \n .args - MultiParams \n \n An object representing the parsed query string parameters, see below. \n \n \n \n .url_vars - dictionary (str -> str) \n \n Variables extracted from the URL path, if that path was defined using a regular expression. See register_routes(datasette) . \n \n \n \n .actor - dictionary (str -> Any) or None \n \n The currently authenticated actor (see actors ), or None if the request is unauthenticated. \n \n \n \n The object also has two awaitable methods: \n \n \n await request.post_vars() - dictionary \n \n Returns a dictionary of form variables that were submitted in the request body via POST . Don't forget to read about CSRF protection ! \n \n \n \n await request.post_body() - bytes \n \n Returns the un-parsed body of a request submitted by POST - useful for things like incoming JSON data. \n \n \n \n And a class method that can be used to create fake request objects for use in tests: \n \n \n fake(path_with_query_string, method=\"GET\", scheme=\"http\", url_vars=None) \n \n Returns a Request instance for the specified path and method. 
For example: \n from datasette import Request\nfrom pprint import pprint\n\nrequest = Request.fake(\n \"/fixtures/facetable/\",\n url_vars={\"database\": \"fixtures\", \"table\": \"facetable\"},\n)\npprint(request.scope) \n This outputs: \n {'http_version': '1.1',\n 'method': 'GET',\n 'path': '/fixtures/facetable/',\n 'query_string': b'',\n 'raw_path': b'/fixtures/facetable/',\n 'scheme': 'http',\n 'type': 'http',\n 'url_route': {'kwargs': {'database': 'fixtures', 'table': 'facetable'}}}", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[{\"href\": \"https://asgi.readthedocs.io/en/latest/specs/www.html#connection-scope\", \"label\": \"ASGI HTTP connection scope\"}]"} {"id": "contributing:contributing-bug-fix-branch", "page": "contributing", "ref": "contributing-bug-fix-branch", "title": "Releasing bug fixes from a branch", "content": "If it's necessary to publish a bug fix release without shipping new features that have landed on main, a release branch can be used. \n Create it from the relevant last tagged release like so: \n git branch 0.52.x 0.52.4\ngit checkout 0.52.x \n Next cherry-pick the commits containing the bug fixes: \n git cherry-pick COMMIT \n Write the release notes in the branch, and update the version number in version.py . Then push the branch: \n git push -u origin 0.52.x \n Once the tests have completed, publish the release from that branch target using the GitHub Draft a new release form. \n Finally, cherry-pick the commit with the release notes and version number bump across to main : \n git checkout main\ngit cherry-pick COMMIT\ngit push", "breadcrumbs": "[\"Contributing\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/releases/new\", \"label\": \"Draft a new release\"}]"} {"id": "contributing:contributing-release", "page": "contributing", "ref": "contributing-release", "title": "Release process", "content": "Datasette releases are performed using tags. When a new release is published on GitHub, a GitHub Action workflow will perform the following: \n \n \n Run the unit tests against all supported Python versions. If the tests pass... \n \n \n Build a Docker image of the release and push a tag to https://hub.docker.com/r/datasetteproject/datasette \n \n \n Re-point the \"latest\" tag on Docker Hub to the new image \n \n \n Build a wheel bundle of the underlying Python source code \n \n \n Push that new wheel up to PyPI: https://pypi.org/project/datasette/ \n \n \n To deploy new releases you will need to have push access to the main Datasette GitHub repository. \n Datasette follows Semantic Versioning : \n major.minor.patch \n We increment major for backwards-incompatible releases. Datasette is currently pre-1.0 so the major version is always 0 . \n We increment minor for new features. \n We increment patch for bugfix releases. \n Alpha and beta releases may have an additional a0 or b0 prefix - the integer component will be incremented with each subsequent alpha or beta. \n To release a new version, first create a commit that updates the version number in datasette/version.py and the changelog with highlights of the new version. An example commit can be seen here : \n # Update changelog\ngit commit -m \" Release 0.51a1\n\nRefs #1056, #1039, #998, #1045, #1033, #1036, #1034, #976, #1057, #1058, #1053, #1064, #1066\" -a\ngit push \n Referencing the issues that are part of the release in the commit message ensures the name of the release shows up on those issue pages, e.g. here .
\n You can generate the list of issue references for a specific release by copying and pasting text from the release notes or GitHub changes-since-last-release view into this Extract issue numbers from pasted text tool. \n To create the tag for the release, create a new release on GitHub matching the new version number. You can convert the release notes to Markdown by copying and pasting the rendered HTML into this Paste to Markdown tool . \n Finally, post a news item about the release on datasette.io by editing the news.yaml file in that site's repository.", "breadcrumbs": "[\"Contributing\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/blob/main/.github/workflows/deploy-latest.yml\", \"label\": \"GitHub Action workflow\"}, {\"href\": \"https://hub.docker.com/r/datasetteproject/datasette\", \"label\": \"https://hub.docker.com/r/datasetteproject/datasette\"}, {\"href\": \"https://pypi.org/project/datasette/\", \"label\": \"https://pypi.org/project/datasette/\"}, {\"href\": \"https://semver.org/\", \"label\": \"Semantic Versioning\"}, {\"href\": \"https://github.com/simonw/datasette/commit/0e1e89c6ba3d0fbdb0823272952cf356f3016def\", \"label\": \"commit can be seen here\"}, {\"href\": \"https://github.com/simonw/datasette/issues/581#ref-commit-d56f402\", \"label\": \"here\"}, {\"href\": \"https://observablehq.com/@simonw/extract-issue-numbers-from-pasted-text\", \"label\": \"Extract issue numbers from pasted text\"}, {\"href\": \"https://github.com/simonw/datasette/releases/new\", \"label\": \"a new release\"}, {\"href\": \"https://euangoddard.github.io/clipboard2markdown/\", \"label\": \"Paste to Markdown tool\"}, {\"href\": \"https://datasette.io/\", \"label\": \"datasette.io\"}, {\"href\": \"https://github.com/simonw/datasette.io/blob/main/news.yaml\", \"label\": \"news.yaml\"}]"} {"id": "testing_plugins:testing-plugins-register-in-test", "page": "testing_plugins", "ref": "testing-plugins-register-in-test", "title": "Registering a plugin for the duration of a test", "content": "When writing tests for plugins you may find it useful to register a test plugin just for the duration of a single test. You can do this using pm.register() and pm.unregister() like this: \n from datasette import hookimpl\nfrom datasette.app import Datasette\nfrom datasette.plugins import pm\nimport pytest\n\n\n@pytest.mark.asyncio\nasync def test_using_test_plugin():\n class TestPlugin:\n __name__ = \"TestPlugin\"\n\n # Use hookimpl and method names to register hooks\n @hookimpl\n def register_routes(self):\n return [\n (r\"^/error$\", lambda: 1 / 0),\n ]\n\n pm.register(TestPlugin(), name=\"undo\")\n try:\n # The test implementation goes here\n datasette = Datasette()\n response = await datasette.client.get(\"/error\")\n assert response.status_code == 500\n finally:\n pm.unregister(name=\"undo\")", "breadcrumbs": "[\"Testing plugins\"]", "references": "[]"} {"id": "spatialite:querying-polygons-using-within", "page": "spatialite", "ref": "querying-polygons-using-within", "title": "Querying polygons using within()", "content": "The within() SQL function can be used to check if a point is within a geometry: \n select\n name\nfrom\n places\nwhere\n within(GeomFromText('POINT(-3.1724366 51.4704448)'), places.geom); \n The GeomFromText() function takes a string of well-known text. Note that the order used here is longitude then latitude . 
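If you want to try that within() query outside of Datasette, here is a minimal Python sketch. It assumes the mod_spatialite extension is installed on your system and that your Python build allows loading SQLite extensions; the places.db path and the places table are the same hypothetical example used above.

import sqlite3

# Open the example database and load SpatiaLite (assumes mod_spatialite is installed)
conn = sqlite3.connect("places.db")
conn.enable_load_extension(True)
conn.load_extension("mod_spatialite")

sql = """
select name
from places
where within(GeomFromText(?), places.geom)
"""
# Well-known text puts longitude before latitude
point = "POINT(-3.1724366 51.4704448)"
for (name,) in conn.execute(sql, [point]):
    print(name)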
\n To run that same within() query in a way that benefits from the spatial index, use the following: \n select\n name\nfrom\n places\nwhere\n within(GeomFromText('POINT(-3.1724366 51.4704448)'), places.geom)\n and rowid in (\n SELECT pkid FROM idx_places_geom\n where xmin < -3.1724366\n and xmax > -3.1724366\n and ymin < 51.4704448\n and ymax > 51.4704448\n );", "breadcrumbs": "[\"SpatiaLite\"]", "references": "[]"} {"id": "publish:publish-vercel", "page": "publish", "ref": "publish-vercel", "title": "Publishing to Vercel", "content": "Vercel - previously known as Zeit Now - provides a layer over AWS Lambda to allow for quick, scale-to-zero deployment. You can deploy Datasette instances to Vercel using the datasette-publish-vercel plugin. \n pip install datasette-publish-vercel\ndatasette publish vercel mydatabase.db --project my-database-project \n Not every feature is supported: consult the datasette-publish-vercel README for more details.", "breadcrumbs": "[\"Publishing data\", \"datasette publish\"]", "references": "[{\"href\": \"https://vercel.com/\", \"label\": \"Vercel\"}, {\"href\": \"https://github.com/simonw/datasette-publish-vercel\", \"label\": \"datasette-publish-vercel\"}, {\"href\": \"https://github.com/simonw/datasette-publish-vercel/blob/main/README.md\", \"label\": \"datasette-publish-vercel README\"}]"} {"id": "publish:publish-heroku", "page": "publish", "ref": "publish-heroku", "title": "Publishing to Heroku", "content": "To publish your data using Heroku , first create an account there and install and configure the Heroku CLI tool . \n You can publish one or more databases to Heroku using the following command: \n datasette publish heroku mydatabase.db \n This will output some details about the new deployment, including a URL like this one: \n https://limitless-reef-88278.herokuapp.com/ deployed to Heroku \n You can specify a custom app name by passing -n my-app-name to the publish command. This will also allow you to overwrite an existing app. \n Rather than deploying directly you can use the --generate-dir option to output the files that would be deployed to a directory: \n datasette publish heroku mydatabase.db --generate-dir=/tmp/deploy-this-to-heroku \n See datasette publish heroku for the full list of options for this command.", "breadcrumbs": "[\"Publishing data\", \"datasette publish\"]", "references": "[{\"href\": \"https://www.heroku.com/\", \"label\": \"Heroku\"}, {\"href\": \"https://devcenter.heroku.com/articles/heroku-cli\", \"label\": \"Heroku CLI tool\"}]"} {"id": "publish:publish-cloud-run", "page": "publish", "ref": "publish-cloud-run", "title": "Publishing to Google Cloud Run", "content": "Google Cloud Run allows you to publish data in a scale-to-zero environment, so your application will start running when the first request is received and will shut down again when traffic ceases. This means you only pay for time spent serving traffic. \n \n Cloud Run is a great option for inexpensively hosting small, low traffic projects - but costs can add up for projects that serve a lot of requests. \n Be particularly careful if your project has tables with large numbers of rows. Search engine crawlers that index a page for every row could result in a high bill. \n The datasette-block-robots plugin can be used to request search engine crawlers omit crawling your site, which can help avoid this issue. \n \n You will first need to install and configure the Google Cloud CLI tools by following these instructions . 
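The exact setup steps are covered in the instructions linked above, but as a rough sketch the initial configuration usually boils down to authenticating and choosing a project - the project ID below is a placeholder:

gcloud auth login
gcloud config set project my-datasette-project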
\n You can then publish one or more SQLite database files to Google Cloud Run using the following command: \n datasette publish cloudrun mydatabase.db --service=my-database \n A Cloud Run service is a single hosted application. The service name you specify will be used as part of the Cloud Run URL. If you deploy to a service name that you have used in the past your new deployment will replace the previous one. \n If you omit the --service option you will be asked to pick a service name interactively during the deploy. \n You may need to interact with prompts from the tool. Many of the prompts ask for values that can be set as properties for the Google Cloud SDK if you want to avoid the prompts. \n For example, the default region for the deployed instance can be set using the command: \n gcloud config set run/region us-central1 \n You should replace us-central1 with your desired region . Alternately, you can specify the region by setting the CLOUDSDK_RUN_REGION environment variable. \n Once it has finished it will output a URL like this one: \n Service [my-service] revision [my-service-00001] has been deployed\nand is serving traffic at https://my-service-j7hipcg4aq-uc.a.run.app \n Cloud Run provides a URL on the .run.app domain, but you can also point your own domain or subdomain at your Cloud Run service - see mapping custom domains in the Cloud Run documentation for details. \n See datasette publish cloudrun for the full list of options for this command.", "breadcrumbs": "[\"Publishing data\", \"datasette publish\"]", "references": "[{\"href\": \"https://cloud.google.com/run/\", \"label\": \"Google Cloud Run\"}, {\"href\": \"https://datasette.io/plugins/datasette-block-robots\", \"label\": \"datasette-block-robots\"}, {\"href\": \"https://cloud.google.com/sdk/\", \"label\": \"these instructions\"}, {\"href\": \"https://cloud.google.com/sdk/docs/properties\", \"label\": \"set as properties for the Google Cloud SDK\"}, {\"href\": \"https://cloud.google.com/about/locations\", \"label\": \"region\"}, {\"href\": \"https://cloud.google.com/run/docs/mapping-custom-domains\", \"label\": \"mapping custom domains\"}]"} {"id": "publish:publish-fly", "page": "publish", "ref": "publish-fly", "title": "Publishing to Fly", "content": "Fly is a competitively priced Docker-compatible hosting platform that supports running applications in globally distributed data centers close to your end users. You can deploy Datasette instances to Fly using the datasette-publish-fly plugin. 
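(Returning briefly to Cloud Run: the region can also be supplied for a single deploy by setting that environment variable inline in the shell - the region name below is only an example.)

CLOUDSDK_RUN_REGION=europe-west1 datasette publish cloudrun mydatabase.db --service=my-database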
\n pip install datasette-publish-fly\ndatasette publish fly mydatabase.db --app=\"my-app\" \n Consult the datasette-publish-fly README for more details.", "breadcrumbs": "[\"Publishing data\", \"datasette publish\"]", "references": "[{\"href\": \"https://fly.io/\", \"label\": \"Fly\"}, {\"href\": \"https://fly.io/docs/pricing/\", \"label\": \"competitively priced\"}, {\"href\": \"https://github.com/simonw/datasette-publish-fly\", \"label\": \"datasette-publish-fly\"}, {\"href\": \"https://github.com/simonw/datasette-publish-fly/blob/main/README.md\", \"label\": \"datasette-publish-fly README\"}]"} {"id": "custom_templates:publishing-static-assets", "page": "custom_templates", "ref": "publishing-static-assets", "title": "Publishing static assets", "content": "The datasette publish command can be used to publish your static assets,\n using the same syntax as above: \n $ datasette publish cloudrun mydb.db --static assets:static-files/ \n This will upload the contents of the static-files/ directory as part of the\n deployment, and configure Datasette to correctly serve the assets from /assets/ .", "breadcrumbs": "[\"Custom pages and templates\", \"Custom CSS and JavaScript\"]", "references": "[]"} {"id": "publish:publishing", "page": "publish", "ref": "publishing", "title": "Publishing data", "content": "Datasette includes tools for publishing and deploying your data to the internet. The datasette publish command will deploy a new Datasette instance containing your databases directly to a Heroku or Google Cloud hosting account. You can also use datasette package to create a Docker image that bundles your databases together with the datasette application that is used to serve them.", "breadcrumbs": "[]", "references": "[]"} {"id": "contributing:contributing-formatting-prettier", "page": "contributing", "ref": "contributing-formatting-prettier", "title": "Prettier", "content": "To install Prettier, install Node.js and then run the following in the root of your datasette repository checkout: \n $ npm install \n This will install Prettier in a node_modules directory. You can then check that your code matches the coding style like so: \n $ npm run prettier -- --check\n> prettier\n> prettier 'datasette/static/*[!.min].js' \"--check\"\n\nChecking formatting...\n[warn] datasette/static/plugins.js\n[warn] Code style issues found in the above file(s). Forgot to run Prettier? \n You can fix any problems by running: \n $ npm run fix", "breadcrumbs": "[\"Contributing\", \"Code formatting\"]", "references": "[{\"href\": \"https://nodejs.org/en/download/package-manager/\", \"label\": \"install Node.js\"}]"} {"id": "changelog:plugins-can-now-add-links-within-datasette", "page": "changelog", "ref": "plugins-can-now-add-links-within-datasette", "title": "Plugins can now add links within Datasette", "content": "A number of existing Datasette plugins add new pages to the Datasette interface, providing tools for things like uploading CSVs , editing table schemas or configuring full-text search . \n Plugins like this can now link to themselves from other parts of the Datasette interface. The menu_links(datasette, actor, request) hook ( #1064 ) lets plugins add links to Datasette's new top-right application menu, and the table_actions(datasette, actor, database, table, request) hook ( #1066 ) adds links to a new \"table actions\" menu on the table page. \n The demo at latest.datasette.io now includes some example plugins.
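A minimal sketch of what such a plugin might look like, using the menu_links() hook described above - the link target and label are hypothetical, and a hook implementation only needs to declare the arguments it actually uses:

from datasette import hookimpl


@hookimpl
def menu_links(datasette, actor):
    # Only show this example link to the signed-in root user
    if actor and actor.get("id") == "root":
        return [
            {
                "href": datasette.urls.path("/-/upload-csvs"),
                "label": "Upload CSVs",
            }
        ]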
To see the new table actions menu first sign into that demo as root and then visit the facetable table to see the new cog icon menu at the top of the page.", "breadcrumbs": "[\"Changelog\", \"0.51 (2020-10-31)\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette-upload-csvs\", \"label\": \"uploading CSVs\"}, {\"href\": \"https://github.com/simonw/datasette-edit-schema\", \"label\": \"editing table schemas\"}, {\"href\": \"https://github.com/simonw/datasette-configure-fts\", \"label\": \"configuring full-text search\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1064\", \"label\": \"#1064\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1066\", \"label\": \"#1066\"}, {\"href\": \"https://latest.datasette.io/\", \"label\": \"latest.datasette.io\"}, {\"href\": \"https://latest.datasette.io/login-as-root\", \"label\": \"sign into that demo as root\"}, {\"href\": \"https://latest.datasette.io/fixtures/facetable\", \"label\": \"facetable\"}]"} {"id": "changelog:plugins-and-internals", "page": "changelog", "ref": "plugins-and-internals", "title": "Plugins and internals", "content": "New plugin hook: filters_from_request(request, database, table, datasette) , which runs on the table page and can be used to support new custom query string parameters that modify the SQL query. ( #473 ) \n \n \n Added two additional methods for writing to the database: await db.execute_write_script(sql, block=True) and await db.execute_write_many(sql, params_seq, block=True) . ( #1570 ) \n \n \n The db.execute_write() internal method now defaults to blocking until the write operation has completed. Previously it defaulted to queuing the write and then continuing to run code while the write was in the queue. ( #1579 ) \n \n \n Database write connections now execute the prepare_connection(conn, database, datasette) plugin hook. ( #1564 ) \n \n \n The Datasette() constructor no longer requires the files= argument, and is now documented at Datasette class . ( #1563 ) \n \n \n The tracing feature now traces write queries, not just read queries. ( #1568 ) \n \n \n The query string variables exposed by request.args will now include blank strings for arguments such as foo in ?foo=&bar=1 rather than ignoring those parameters entirely. ( #1551 )", "breadcrumbs": "[\"Changelog\", \"0.60 (2022-01-13)\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/issues/473\", \"label\": \"#473\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1570\", \"label\": \"#1570\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1579\", \"label\": \"#1579\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1564\", \"label\": \"#1564\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1563\", \"label\": \"#1563\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1568\", \"label\": \"#1568\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1551\", \"label\": \"#1551\"}]"} {"id": "plugins:id1", "page": "plugins", "ref": "id1", "title": "Plugins", "content": "Datasette's plugin system allows additional features to be implemented as Python\n code (or front-end JavaScript) which can be wrapped up in a separate Python\n package. The underlying mechanism uses pluggy . \n See the Datasette plugins directory for a list of existing plugins, or take a look at the\n datasette-plugin topic on GitHub. \n Things you can do with plugins include: \n \n \n Add visualizations to Datasette, for example\n datasette-cluster-map and\n datasette-vega . 
\n \n \n Make new custom SQL functions available for use within Datasette, for example\n datasette-haversine and\n datasette-jellyfish . \n \n \n Define custom output formats with custom extensions, for example datasette-atom and\n datasette-ics . \n \n \n Add template functions that can be called within your Jinja custom templates,\n for example datasette-render-markdown . \n \n \n Customize how database values are rendered in the Datasette interface, for example\n datasette-render-binary and\n datasette-pretty-json . \n \n \n Customize how Datasette's authentication and permissions systems work, for example datasette-auth-tokens and\n datasette-permissions-sql .", "breadcrumbs": "[]", "references": "[{\"href\": \"https://pluggy.readthedocs.io/\", \"label\": \"pluggy\"}, {\"href\": \"https://datasette.io/plugins\", \"label\": \"Datasette plugins directory\"}, {\"href\": \"https://github.com/topics/datasette-plugin\", \"label\": \"datasette-plugin\"}, {\"href\": \"https://github.com/simonw/datasette-cluster-map\", \"label\": \"datasette-cluster-map\"}, {\"href\": \"https://github.com/simonw/datasette-vega\", \"label\": \"datasette-vega\"}, {\"href\": \"https://github.com/simonw/datasette-haversine\", \"label\": \"datasette-haversine\"}, {\"href\": \"https://github.com/simonw/datasette-jellyfish\", \"label\": \"datasette-jellyfish\"}, {\"href\": \"https://github.com/simonw/datasette-atom\", \"label\": \"datasette-atom\"}, {\"href\": \"https://github.com/simonw/datasette-ics\", \"label\": \"datasette-ics\"}, {\"href\": \"https://github.com/simonw/datasette-render-markdown#markdown-in-templates\", \"label\": \"datasette-render-markdown\"}, {\"href\": \"https://github.com/simonw/datasette-render-binary\", \"label\": \"datasette-render-binary\"}, {\"href\": \"https://github.com/simonw/datasette-pretty-json\", \"label\": \"datasette-pretty-json\"}, {\"href\": \"https://github.com/simonw/datasette-auth-tokens\", \"label\": \"datasette-auth-tokens\"}, {\"href\": \"https://github.com/simonw/datasette-permissions-sql\", \"label\": \"datasette-permissions-sql\"}]"} {"id": "changelog:plugin-hooks-and-internals", "page": "changelog", "ref": "plugin-hooks-and-internals", "title": "Plugin hooks and internals", "content": "The prepare_jinja2_environment(env, datasette) plugin hook now accepts an optional datasette argument. Hook implementations can also now return an async function which will be awaited automatically. ( #1809 ) \n \n \n Database(is_mutable=) now defaults to True . ( #1808 ) \n \n \n The datasette.check_visibility() method now accepts an optional permissions= list, allowing it to take multiple permissions into account at once when deciding if something should be shown as public or private. This has been used to correctly display padlock icons in more places in the Datasette interface. ( #1829 ) \n \n \n Datasette no longer enforces upper bounds on its dependencies. 
( #1800 )", "breadcrumbs": "[\"Changelog\", \"0.63 (2022-10-27)\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/issues/1809\", \"label\": \"#1809\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1808\", \"label\": \"#1808\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1829\", \"label\": \"#1829\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1800\", \"label\": \"#1800\"}]"} {"id": "changelog:plugin-hooks", "page": "changelog", "ref": "plugin-hooks", "title": "Plugin hooks", "content": "New plugin hook: handle_exception() , for custom handling of exceptions caught by Datasette. ( #1770 ) \n \n \n The render_cell() plugin hook is now also passed a row argument, representing the sqlite3.Row object that is being rendered. ( #1300 ) \n \n \n The configuration directory is now stored in datasette.config_dir , making it available to plugins. Thanks, Chris Amico. ( #1766 )", "breadcrumbs": "[\"Changelog\", \"0.62 (2022-08-14)\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/issues/1770\", \"label\": \"#1770\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1300\", \"label\": \"#1300\"}, {\"href\": \"https://github.com/simonw/datasette/pull/1766\", \"label\": \"#1766\"}]"}
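As an illustration of that updated render_cell() hook, here is a rough sketch that uses the new row argument - the column names are hypothetical, and returning None tells Datasette to fall back to its default rendering:

from datasette import hookimpl


@hookimpl
def render_cell(value, column, row):
    # Hypothetical schema: annotate the "name" cell using another value from the same row
    if column == "name" and "is_public" in row.keys() and row["is_public"]:
        return f"{value} (public)"
    return None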