{"rowid": 101, "title": "suggest_facets", "content": "Should Datasette calculate suggested facets? On by default, turn this off like so: \n datasette mydatabase.db --setting suggest_facets off", "sections_fts": 226, "rank": null} {"rowid": 102, "title": "allow_download", "content": "Should users be able to download the original SQLite database using a link on the database index page? This is turned on by default. However, databases can only be downloaded if they are served in immutable mode and not in-memory. If downloading is unavailable for either of these reasons, the download link is hidden even if allow_download is on. To disable database downloads, use the following: \n datasette mydatabase.db --setting allow_download off", "sections_fts": 226, "rank": null} {"rowid": 103, "title": "default_cache_ttl", "content": "Default HTTP caching max-age header in seconds, used for Cache-Control: max-age=X . Can be over-ridden on a per-request basis using the ?_ttl= query string parameter. Set this to 0 to disable HTTP caching entirely. Defaults to 5 seconds. \n datasette mydatabase.db --setting default_cache_ttl 60", "sections_fts": 226, "rank": null} {"rowid": 104, "title": "cache_size_kb", "content": "Sets the amount of memory SQLite uses for its per-connection cache , in KB. \n datasette mydatabase.db --setting cache_size_kb 5000", "sections_fts": 226, "rank": null} {"rowid": 105, "title": "allow_csv_stream", "content": "Enables the CSV export feature where an entire table\n (potentially hundreds of thousands of rows) can be exported as a single CSV\n file. This is turned on by default - you can turn it off like this: \n datasette mydatabase.db --setting allow_csv_stream off", "sections_fts": 226, "rank": null} {"rowid": 106, "title": "max_csv_mb", "content": "The maximum size of CSV that can be exported, in megabytes. Defaults to 100MB.\n You can disable the limit entirely by settings this to 0: \n datasette mydatabase.db --setting max_csv_mb 0", "sections_fts": 226, "rank": null} {"rowid": 107, "title": "truncate_cells_html", "content": "In the HTML table view, truncate any strings that are longer than this value.\n The full value will still be available in CSV, JSON and on the individual row\n HTML page. Set this to 0 to disable truncation. \n datasette mydatabase.db --setting truncate_cells_html 0", "sections_fts": 226, "rank": null} {"rowid": 108, "title": "force_https_urls", "content": "Forces self-referential URLs in the JSON output to always use the https:// \n protocol. This is useful for cases where the application itself is hosted using\n HTTP but is served to the outside world via a proxy that enables HTTPS. \n datasette mydatabase.db --setting force_https_urls 1", "sections_fts": 226, "rank": null} {"rowid": 109, "title": "template_debug", "content": "This setting enables template context debug mode, which is useful to help understand what variables are available to custom templates when you are writing them. \n Enable it like this: \n datasette mydatabase.db --setting template_debug 1 \n Now you can add ?_context=1 or &_context=1 to any Datasette page to see the context that was passed to that template. 
\n Some examples: \n \n \n https://latest.datasette.io/?_context=1 \n \n \n https://latest.datasette.io/fixtures?_context=1 \n \n \n https://latest.datasette.io/fixtures/roadside_attractions?_context=1", "sections_fts": 226, "rank": null} {"rowid": 110, "title": "trace_debug", "content": "This setting enables appending ?_trace=1 to any page in order to see the SQL queries and other trace information that was used to generate that page. \n Enable it like this: \n datasette mydatabase.db --setting trace_debug 1 \n Some examples: \n \n \n https://latest.datasette.io/?_trace=1 \n \n \n https://latest.datasette.io/fixtures/roadside_attractions?_trace=1 \n \n \n See datasette.tracer for details on how to hook into this mechanism as a plugin author.", "sections_fts": 226, "rank": null} {"rowid": 111, "title": "base_url", "content": "If you are running Datasette behind a proxy, it may be useful to change the root path used for the Datasette instance. \n For example, if you are sending traffic from https://www.example.com/tools/datasette/ through to a proxied Datasette instance you may wish Datasette to use /tools/datasette/ as its root URL. \n You can do that like so: \n datasette mydatabase.db --setting base_url /tools/datasette/", "sections_fts": 226, "rank": null} {"rowid": 112, "title": "Configuring the secret", "content": "Datasette uses a secret string to sign secure values such as cookies. \n If you do not provide a secret, Datasette will create one when it starts up. This secret will reset every time the Datasette server restarts though, so things like authentication cookies will not stay valid between restarts. \n You can pass a secret to Datasette in two ways: with the --secret command-line option or by setting a DATASETTE_SECRET environment variable. \n $ datasette mydb.db --secret=SECRET_VALUE_HERE \n Or: \n $ export DATASETTE_SECRET=SECRET_VALUE_HERE\n$ datasette mydb.db \n One way to generate a secure random secret is to use Python like this: \n $ python3 -c 'import secrets; print(secrets.token_hex(32))'\ncdb19e94283a20f9d42cca50c5a4871c0aa07392db308755d60a1a5b9bb0fa52 \n Plugin authors make use of this signing mechanism in their plugins using .sign(value, namespace=\"default\") and .unsign(value, namespace=\"default\") .", "sections_fts": 226, "rank": null} {"rowid": 113, "title": "Using secrets with datasette publish", "content": "The datasette publish and datasette package commands both generate a secret for you automatically when Datasette is deployed. \n This means that every time you deploy a new version of a Datasette project, a new secret will be generated. This will cause signed cookies to become invalid on every fresh deploy. \n You can fix this by creating a secret that will be used for multiple deploys and passing it using the --secret option: \n datasette publish cloudrun mydb.db --service=my-service --secret=cdb19e94283a20f9d42cca5", "sections_fts": 226, "rank": null} {"rowid": 114, "title": "Introspection", "content": "Datasette includes some pages and JSON API endpoints for introspecting the current instance. These can be used to understand some of the internals of Datasette and to see how a particular instance has been configured. \n Each of these pages can be viewed in your browser. Add .json to the URL to get back the contents as JSON.", "sections_fts": 226, "rank": null} {"rowid": 115, "title": "/-/metadata", "content": "Shows the contents of the metadata.json file that was passed to datasette serve , if any. 
Metadata example : \n {\n \"license\": \"CC Attribution 4.0 License\",\n \"license_url\": \"http://creativecommons.org/licenses/by/4.0/\",\n \"source\": \"fivethirtyeight/data on GitHub\",\n \"source_url\": \"https://github.com/fivethirtyeight/data\",\n \"title\": \"Five Thirty Eight\",\n \"databases\": {\n\n }\n}", "sections_fts": 226, "rank": null} {"rowid": 116, "title": "/-/versions", "content": "Shows the version of Datasette, Python and SQLite. Versions example : \n {\n \"datasette\": {\n \"version\": \"0.60\"\n },\n \"python\": {\n \"full\": \"3.8.12 (default, Dec 21 2021, 10:45:09) \\n[GCC 10.2.1 20210110]\",\n \"version\": \"3.8.12\"\n },\n \"sqlite\": {\n \"extensions\": {\n \"json1\": null\n },\n \"fts_versions\": [\n \"FTS5\",\n \"FTS4\",\n \"FTS3\"\n ],\n \"compile_options\": [\n \"COMPILER=gcc-6.3.0 20170516\",\n \"ENABLE_FTS3\",\n \"ENABLE_FTS4\",\n \"ENABLE_FTS5\",\n \"ENABLE_JSON1\",\n \"ENABLE_RTREE\",\n \"THREADSAFE=1\"\n ],\n \"version\": \"3.37.0\"\n }\n}", "sections_fts": 226, "rank": null} {"rowid": 117, "title": "/-/plugins", "content": "Shows a list of currently installed plugins and their versions. Plugins example : \n [\n {\n \"name\": \"datasette_cluster_map\",\n \"static\": true,\n \"templates\": false,\n \"version\": \"0.10\",\n \"hooks\": [\"extra_css_urls\", \"extra_js_urls\", \"extra_body_script\"]\n }\n] \n Add ?all=1 to include details of the default plugins baked into Datasette.", "sections_fts": 226, "rank": null} {"rowid": 118, "title": "/-/settings", "content": "Shows the Settings for this instance of Datasette. Settings example : \n {\n \"default_facet_size\": 30,\n \"default_page_size\": 100,\n \"facet_suggest_time_limit_ms\": 50,\n \"facet_time_limit_ms\": 1000,\n \"max_returned_rows\": 1000,\n \"sql_time_limit_ms\": 1000\n}", "sections_fts": 226, "rank": null} {"rowid": 119, "title": "/-/databases", "content": "Shows currently attached databases. Databases example : \n [\n {\n \"hash\": null,\n \"is_memory\": false,\n \"is_mutable\": true,\n \"name\": \"fixtures\",\n \"path\": \"fixtures.db\",\n \"size\": 225280\n }\n]", "sections_fts": 226, "rank": null} {"rowid": 120, "title": "/-/threads", "content": "Shows details of threads and asyncio tasks. Threads example : \n {\n \"num_threads\": 2,\n \"threads\": [\n {\n \"daemon\": false,\n \"ident\": 4759197120,\n \"name\": \"MainThread\"\n },\n {\n \"daemon\": true,\n \"ident\": 123145319682048,\n \"name\": \"Thread-1\"\n },\n ],\n \"num_tasks\": 3,\n \"tasks\": [\n \" cb=[set.discard()]>\",\n \" wait_for=()]> cb=[run_until_complete..()]>\",\n \" wait_for=()]>>\"\n ]\n}", "sections_fts": 226, "rank": null} {"rowid": 121, "title": "/-/actor", "content": "Shows the currently authenticated actor. Useful for debugging Datasette authentication plugins. \n {\n \"actor\": {\n \"id\": 1,\n \"username\": \"some-user\"\n }\n}", "sections_fts": 226, "rank": null} {"rowid": 122, "title": "/-/messages", "content": "The debug tool at /-/messages can be used to set flash messages to try out that feature. See .add_message(request, message, type=datasette.INFO) for details of this feature.", "sections_fts": 226, "rank": null} {"rowid": 123, "title": "JSON API", "content": "Datasette provides a JSON API for your SQLite databases. Anything you can do\n through the Datasette user interface can also be accessed as JSON via the API. \n To access the API for a page, either click on the .json link on that page or\n edit the URL and add a .json extension to it. 
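As an illustrative aside (not part of the original documentation), one of those .json endpoints can be fetched from Python using only the standard library - the URL below simply reuses the demo instance referenced elsewhere on this page:
import json
from urllib.request import urlopen

# Fetch the JSON version of a table by adding .json to its URL
url = "https://latest.datasette.io/fixtures/roadside_attractions.json"
with urlopen(url) as response:
    data = json.load(response)

# The default shape includes "columns" and "rows" keys
print(data["columns"])
for row in data["rows"]:
    print(row)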
\n If you started Datasette with the --cors option, each JSON endpoint will be\n served with the following additional HTTP headers: \n Access-Control-Allow-Origin: *\nAccess-Control-Allow-Headers: Authorization\nAccess-Control-Expose-Headers: Link \n This means JavaScript running on any domain will be able to make cross-origin\n requests to fetch the data. \n If you start Datasette without the --cors option only JavaScript running on\n the same domain as Datasette will be able to access the API.", "sections_fts": 226, "rank": null} {"rowid": 124, "title": "Different shapes", "content": "The default JSON representation of data from a SQLite table or custom query\n looks like this: \n {\n \"database\": \"sf-trees\",\n \"table\": \"qSpecies\",\n \"columns\": [\n \"id\",\n \"value\"\n ],\n \"rows\": [\n [\n 1,\n \"Myoporum laetum :: Myoporum\"\n ],\n [\n 2,\n \"Metrosideros excelsa :: New Zealand Xmas Tree\"\n ],\n [\n 3,\n \"Pinus radiata :: Monterey Pine\"\n ]\n ],\n \"truncated\": false,\n \"next\": \"100\",\n \"next_url\": \"http://127.0.0.1:8001/sf-trees-02c8ef1/qSpecies.json?_next=100\",\n \"query_ms\": 1.9571781158447266\n} \n The columns key lists the columns that are being returned, and the rows \n key then returns a list of lists, each one representing a row. The order of the\n values in each row corresponds to the columns. \n The _shape parameter can be used to access alternative formats for the\n rows key which may be more convenient for your application. There are three\n options: \n \n \n ?_shape=arrays - \"rows\" is the default option, shown above \n \n \n ?_shape=objects - \"rows\" is a list of JSON key/value objects \n \n \n ?_shape=array - an JSON array of objects \n \n \n ?_shape=array&_nl=on - a newline-separated list of JSON objects \n \n \n ?_shape=arrayfirst - a flat JSON array containing just the first value from each row \n \n \n ?_shape=object - a JSON object keyed using the primary keys of the rows \n \n \n _shape=objects looks like this: \n {\n \"database\": \"sf-trees\",\n ...\n \"rows\": [\n {\n \"id\": 1,\n \"value\": \"Myoporum laetum :: Myoporum\"\n },\n {\n \"id\": 2,\n \"value\": \"Metrosideros excelsa :: New Zealand Xmas Tree\"\n },\n {\n \"id\": 3,\n \"value\": \"Pinus radiata :: Monterey Pine\"\n }\n ]\n} \n _shape=array looks like this: \n [\n {\n \"id\": 1,\n \"value\": \"Myoporum laetum :: Myoporum\"\n },\n {\n \"id\": 2,\n \"value\": \"Metrosideros excelsa :: New Zealand Xmas Tree\"\n },\n {\n \"id\": 3,\n \"value\": \"Pinus radiata :: Monterey Pine\"\n }\n] \n _shape=array&_nl=on looks like this: \n {\"id\": 1, \"value\": \"Myoporum laetum :: Myoporum\"}\n{\"id\": 2, \"value\": \"Metrosideros excelsa :: New Zealand Xmas Tree\"}\n{\"id\": 3, \"value\": \"Pinus radiata :: Monterey Pine\"} \n _shape=arrayfirst looks like this: \n [1, 2, 3] \n _shape=object looks like this: \n {\n \"1\": {\n \"id\": 1,\n \"value\": \"Myoporum laetum :: Myoporum\"\n },\n \"2\": {\n \"id\": 2,\n \"value\": \"Metrosideros excelsa :: New Zealand Xmas Tree\"\n },\n \"3\": {\n \"id\": 3,\n \"value\": \"Pinus radiata :: Monterey Pine\"\n }\n] \n The object shape is only available for queries against tables - custom SQL\n queries and views do not have an obvious primary key so cannot be returned using\n this format. \n The object keys are always strings. 
If your table has a compound primary\n key, the object keys will be a comma-separated string.", "sections_fts": 226, "rank": null} {"rowid": 125, "title": "Pagination", "content": "The default JSON representation includes a \"next_url\" key which can be used to access the next page of results. If that key is null or missing then it means you have reached the final page of results. \n Other representations include pagination information in the link HTTP header. That header will look something like this: \n link: ; rel=\"next\" \n Here is an example Python function built using requests that returns a list of all of the paginated items from one of these API endpoints: \n def paginate(url):\n items = []\n while url:\n response = requests.get(url)\n try:\n url = response.links.get(\"next\").get(\"url\")\n except AttributeError:\n url = None\n items.extend(response.json())\n return items", "sections_fts": 226, "rank": null} {"rowid": 126, "title": "Special JSON arguments", "content": "Every Datasette endpoint that can return JSON also accepts the following\n query string arguments: \n \n \n ?_shape=SHAPE \n \n The shape of the JSON to return, documented above. \n \n \n \n ?_nl=on \n \n When used with ?_shape=array produces newline-delimited JSON objects. \n \n \n \n ?_json=COLUMN1&_json=COLUMN2 \n \n If any of your SQLite columns contain JSON values, you can use one or more\n _json= parameters to request that those columns be returned as regular\n JSON. Without this argument those columns will be returned as JSON objects\n that have been double-encoded into a JSON string value. \n Compare this query without the argument to this query using the argument \n \n \n \n ?_json_infinity=on \n \n If your data contains infinity or -infinity values, Datasette will replace\n them with None when returning them as JSON. If you pass _json_infinity=1 \n Datasette will instead return them as Infinity or -Infinity which is\n invalid JSON but can be processed by some custom JSON parsers. \n \n \n \n ?_timelimit=MS \n \n Sets a custom time limit for the query in ms. You can use this for optimistic\n queries where you would like Datasette to give up if the query takes too\n long, for example if you want to implement autocomplete search but only if\n it can be executed in less than 10ms. \n \n \n \n ?_ttl=SECONDS \n \n For how many seconds should this response be cached by HTTP proxies? Use\n ?_ttl=0 to disable HTTP caching entirely for this request. \n \n \n \n ?_trace=1 \n \n Turns on tracing for this page: SQL queries executed during the request will\n be gathered and included in the response, either in a new \"_traces\" key\n for JSON responses or at the bottom of the page if the response is in HTML. \n The structure of the data returned here should be considered highly unstable\n and very likely to change. \n Only available if the trace_debug setting is enabled.", "sections_fts": 226, "rank": null} {"rowid": 127, "title": "Table arguments", "content": "The Datasette table view takes a number of special query string arguments.", "sections_fts": 226, "rank": null} {"rowid": 128, "title": "Column filter arguments", "content": "You can filter the data returned by the table based on column values using a query string argument. \n \n \n ?column__exact=value or ?_column=value \n \n Returns rows where the specified column exactly matches the value. \n \n \n \n ?column__not=value \n \n Returns rows where the column does not match the value. 
\n \n \n \n ?column__contains=value \n \n Rows where the string column contains the specified value ( column like \"%value%\" in SQL). \n \n \n \n ?column__endswith=value \n \n Rows where the string column ends with the specified value ( column like \"%value\" in SQL). \n \n \n \n ?column__startswith=value \n \n Rows where the string column starts with the specified value ( column like \"value%\" in SQL). \n \n \n \n ?column__gt=value \n \n Rows which are greater than the specified value. \n \n \n \n ?column__gte=value \n \n Rows which are greater than or equal to the specified value. \n \n \n \n ?column__lt=value \n \n Rows which are less than the specified value. \n \n \n \n ?column__lte=value \n \n Rows which are less than or equal to the specified value. \n \n \n \n ?column__like=value \n \n Match rows with a LIKE clause, case insensitive and with % as the wildcard character. \n \n \n \n ?column__notlike=value \n \n Match rows that do not match the provided LIKE clause. \n \n \n \n ?column__glob=value \n \n Similar to LIKE but uses Unix wildcard syntax and is case sensitive. \n \n \n \n ?column__in=value1,value2,value3 \n \n Rows where column matches any of the provided values. \n You can use a comma separated string, or you can use a JSON array. \n The JSON array option is useful if one of your matching values itself contains a comma: \n ?column__in=[\"value\",\"value,with,commas\"] \n \n \n \n ?column__notin=value1,value2,value3 \n \n Rows where column does not match any of the provided values. The inverse of __in= . Also supports JSON arrays. \n \n \n \n ?column__arraycontains=value \n \n Works against columns that contain JSON arrays - matches if any of the values in that array match the provided value. \n This is only available if the json1 SQLite extension is enabled. \n \n \n \n ?column__arraynotcontains=value \n \n Works against columns that contain JSON arrays - matches if none of the values in that array match the provided value. \n This is only available if the json1 SQLite extension is enabled. \n \n \n \n ?column__date=value \n \n Column is a datestamp occurring on the specified YYYY-MM-DD date, e.g. 2018-01-02 . \n \n \n \n ?column__isnull=1 \n \n Matches rows where the column is null. \n \n \n \n ?column__notnull=1 \n \n Matches rows where the column is not null. \n \n \n \n ?column__isblank=1 \n \n Matches rows where the column is blank, meaning null or the empty string. \n \n \n \n ?column__notblank=1 \n \n Matches rows where the column is not blank.", "sections_fts": 226, "rank": null} {"rowid": 129, "title": "Special table arguments", "content": "?_col=COLUMN1&_col=COLUMN2 \n \n List specific columns to display. These will be shown along with any primary keys. \n \n \n \n ?_nocol=COLUMN1&_nocol=COLUMN2 \n \n List specific columns to hide - any column not listed will be displayed. Primary keys cannot be hidden. \n \n \n \n ?_labels=on/off \n \n Expand foreign key references for every possible column. See below. \n \n \n \n ?_label=COLUMN1&_label=COLUMN2 \n \n Expand foreign key references for one or more specified columns. \n \n \n \n ?_size=1000 or ?_size=max \n \n Sets a custom page size. This cannot exceed the max_returned_rows limit\n passed to datasette serve . Use max to get max_returned_rows . \n \n \n \n ?_sort=COLUMN \n \n Sorts the results by the specified column. \n \n \n \n ?_sort_desc=COLUMN \n \n Sorts the results by the specified column in descending order. 
\n \n \n \n ?_search=keywords \n \n For SQLite tables that have been configured for\n full-text search executes a search\n with the provided keywords. \n \n \n \n ?_search_COLUMN=keywords \n \n Like _search= but allows you to specify the column to be searched, as\n opposed to searching all columns that have been indexed by FTS. \n \n \n \n ?_searchmode=raw \n \n With this option, queries passed to ?_search= or ?_search_COLUMN= will\n not have special characters escaped. This means you can make use of the full\n set of advanced SQLite FTS syntax ,\n though this could potentially result in errors if the wrong syntax is used. \n \n \n \n ?_where=SQL-fragment \n \n If the execute-sql permission is enabled, this parameter\n can be used to pass one or more additional SQL fragments to be used in the\n WHERE clause of the SQL used to query the table. \n This is particularly useful if you are building a JavaScript application\n that needs to do something creative but still wants the other conveniences\n provided by the table view (such as faceting) and hence would like not to\n have to construct a completely custom SQL query. \n Some examples: \n \n \n facetable?_where=_neighborhood like \"%c%\"&_where=_city_id=3 \n \n \n facetable?_where=_city_id in (select id from facet_cities where name != \"Detroit\") \n \n \n \n \n \n ?_through={json} \n \n This can be used to filter rows via a join against another table. \n The JSON parameter must include three keys: table , column and value . \n table must be a table that the current table is related to via a foreign key relationship. \n column must be a column in that other table. \n value is the value that you want to match against. \n For example, to filter roadside_attractions to just show the attractions that have a characteristic of \"museum\", you would construct this JSON: \n {\n \"table\": \"roadside_attraction_characteristics\",\n \"column\": \"characteristic_id\",\n \"value\": \"1\"\n} \n As a URL, that looks like this: \n ?_through={%22table%22:%22roadside_attraction_characteristics%22,%22column%22:%22characteristic_id%22,%22value%22:%221%22} \n Here's an example . \n \n \n \n ?_next=TOKEN \n \n Pagination by continuation token - pass the token that was returned in the\n \"next\" property by the previous page. \n \n \n \n ?_facet=column \n \n Facet by column. Can be applied multiple times, see Facets . Only works on the default JSON output, not on any of the custom shapes. \n \n \n \n ?_facet_size=100 \n \n Increase the number of facet results returned for each facet. Use ?_facet_size=max for the maximum available size, determined by max_returned_rows . \n \n \n \n ?_nofacet=1 \n \n Disable all facets and facet suggestions for this page, including any defined by Facets in metadata.json . \n \n \n \n ?_nosuggest=1 \n \n Disable facet suggestions for this page. \n \n \n \n ?_nocount=1 \n \n Disable the select count(*) query used on this page - a count of None will be returned instead.", "sections_fts": 226, "rank": null} {"rowid": 130, "title": "Expanding foreign key references", "content": "Datasette can detect foreign key relationships and resolve those references into\n labels. The HTML interface does this by default for every detected foreign key\n column - you can turn that off using ?_labels=off . \n You can request foreign keys be expanded in JSON using the _labels=on or\n _label=COLUMN special query string parameters. 
Here's what an expanded row\n looks like: \n [\n {\n \"rowid\": 1,\n \"TreeID\": 141565,\n \"qLegalStatus\": {\n \"value\": 1,\n \"label\": \"Permitted Site\"\n },\n \"qSpecies\": {\n \"value\": 1,\n \"label\": \"Myoporum laetum :: Myoporum\"\n },\n \"qAddress\": \"501X Baker St\",\n \"SiteOrder\": 1\n }\n] \n The column in the foreign key table that is used for the label can be specified\n in metadata.json - see Specifying the label column for a table .", "sections_fts": 226, "rank": null} {"rowid": 131, "title": "Discovering the JSON for a page", "content": "Most of the HTML pages served by Datasette provide a mechanism for discovering their JSON equivalents using the HTML link mechanism. \n You can find this near the top of the source code of those pages, looking like this: \n <link rel=\"alternate\" type=\"application/json+datasette\" href=\"https://latest.datasette.io/fixtures/sortable.json\"> \n The JSON URL is also made available in a Link HTTP header for the page: \n Link: <https://latest.datasette.io/fixtures/sortable.json>; rel=\"alternate\"; type=\"application/json+datasette\"", "sections_fts": 226, "rank": null} {"rowid": 132, "title": "Getting started", "content": "", "sections_fts": 226, "rank": null} {"rowid": 133, "title": "Play with a live demo", "content": "The best way to experience Datasette for the first time is with a demo: \n \n \n global-power-plants.datasettes.com provides a searchable database of power plants around the world, using data from the World Resources Institute rendered using the datasette-cluster-map plugin. \n \n \n fivethirtyeight.datasettes.com shows Datasette running against over 400 datasets imported from the FiveThirtyEight GitHub repository .", "sections_fts": 226, "rank": null} {"rowid": 134, "title": "Follow a tutorial", "content": "Datasette has several tutorials to help you get started with the tool. Try one of the following: \n \n \n Exploring a database with Datasette shows how to use the Datasette web interface to explore a new database. \n \n \n Learn SQL with Datasette introduces SQL, and shows how to use that query language to ask questions of your data. \n \n \n Cleaning data with sqlite-utils and Datasette guides you through using sqlite-utils to turn a CSV file into a database that you can explore using Datasette.", "sections_fts": 226, "rank": null} {"rowid": 135, "title": "Datasette in your browser with Datasette Lite", "content": "Datasette Lite is Datasette packaged using WebAssembly so that it runs entirely in your browser, no Python web application server required. \n You can pass a URL to a CSV, SQLite or raw SQL file directly to Datasette Lite to explore that data in your browser. \n This example link opens Datasette Lite and loads the SQL Murder Mystery example database from Northwestern University Knight Lab .", "sections_fts": 226, "rank": null} {"rowid": 136, "title": "Try Datasette without installing anything using Glitch", "content": "Glitch is a free online tool for building web apps directly from your web browser. You can use Glitch to try out Datasette without needing to install any software on your own computer. \n Here's a demo project on Glitch which you can use as the basis for your own experiments: \n glitch.com/~datasette-csvs \n Glitch allows you to \"remix\" any project to create your own copy and start editing it in your browser. You can remix the datasette-csvs project by clicking this button: \n \n Find a CSV file and drag it onto the Glitch file explorer panel - datasette-csvs will automatically convert it to a SQLite database (using sqlite-utils ) and allow you to start exploring it using Datasette. 
\n If your CSV file has a latitude and longitude column you can visualize it on a map by uncommenting the datasette-cluster-map line in the requirements.txt file using the Glitch file editor. \n Need some data? Try this Public Art Data for the city of Seattle - hit \"Export\" and select \"CSV\" to download it as a CSV file. \n For more on how this works, see Running Datasette on Glitch .", "sections_fts": 226, "rank": null} {"rowid": 137, "title": "Using Datasette on your own computer", "content": "First, follow the Installation instructions. Now you can run Datasette against a SQLite file on your computer using the following command: \n datasette path/to/database.db \n This will start a web server on port 8001 - visit http://localhost:8001/ \n to access the web interface. \n Add -o to open your browser automatically once Datasette has started: \n datasette path/to/database.db -o \n Use Chrome on OS X? You can run datasette against your browser history\n like so: \n datasette ~/Library/Application\\ Support/Google/Chrome/Default/History --nolock \n The --nolock option ignores any file locks. This is safe as Datasette will open the file in read-only mode. \n Now visiting http://localhost:8001/History/downloads will show you a web\n interface to browse your downloads data: \n \n \n \n http://localhost:8001/History/downloads.json will return that data as\n JSON: \n {\n \"database\": \"History\",\n \"columns\": [\n \"id\",\n \"current_path\",\n \"target_path\",\n \"start_time\",\n \"received_bytes\",\n \"total_bytes\",\n ...\n ],\n \"rows\": [\n [\n 1,\n \"/Users/simonw/Downloads/DropboxInstaller.dmg\",\n \"/Users/simonw/Downloads/DropboxInstaller.dmg\",\n 13097290269022132,\n 626688,\n 0,\n ...\n ]\n ]\n} \n http://localhost:8001/History/downloads.json?_shape=objects will return that data as\n JSON in a more convenient format: \n {\n ...\n \"rows\": [\n {\n \"start_time\": 13097290269022132,\n \"interrupt_reason\": 0,\n \"hash\": \"\",\n \"id\": 1,\n \"site_url\": \"\",\n \"referrer\": \"https://www.dropbox.com/downloading?src=index\",\n ...\n }\n ]\n}", "sections_fts": 226, "rank": null} {"rowid": 138, "title": "Internals for plugins", "content": "Many Plugin hooks are passed objects that provide access to internal Datasette functionality. The interface to these objects should not be considered stable with the exception of methods that are documented here.", "sections_fts": 226, "rank": null} {"rowid": 139, "title": "Request object", "content": "The request object is passed to various plugin hooks. It represents an incoming HTTP request. It has the following properties: \n \n \n .scope - dictionary \n \n The ASGI scope that was used to construct this request, described in the ASGI HTTP connection scope specification. \n \n \n \n .method - string \n \n The HTTP method for this request, usually GET or POST . \n \n \n \n .url - string \n \n The full URL for this request, e.g. https://latest.datasette.io/fixtures . \n \n \n \n .scheme - string \n \n The request scheme - usually https or http . \n \n \n \n .headers - dictionary (str -> str) \n \n A dictionary of incoming HTTP request headers. Header names have been converted to lowercase. \n \n \n \n .cookies - dictionary (str -> str) \n \n A dictionary of incoming cookies \n \n \n \n .host - string \n \n The host header from the incoming request, e.g. latest.datasette.io or localhost . \n \n \n \n .path - string \n \n The path of the request excluding the query string, e.g. /fixtures . 
\n \n \n \n .full_path - string \n \n The path of the request including the query string if one is present, e.g. /fixtures?sql=select+sqlite_version() . \n \n \n \n .query_string - string \n \n The query string component of the request, without the ? - e.g. name__contains=sam&age__gt=10 . \n \n \n \n .args - MultiParams \n \n An object representing the parsed query string parameters, see below. \n \n \n \n .url_vars - dictionary (str -> str) \n \n Variables extracted from the URL path, if that path was defined using a regular expression. See register_routes(datasette) . \n \n \n \n .actor - dictionary (str -> Any) or None \n \n The currently authenticated actor (see actors ), or None if the request is unauthenticated. \n \n \n \n The object also has two awaitable methods: \n \n \n await request.post_vars() - dictionary \n \n Returns a dictionary of form variables that were submitted in the request body via POST . Don't forget to read about CSRF protection ! \n \n \n \n await request.post_body() - bytes \n \n Returns the un-parsed body of a request submitted by POST - useful for things like incoming JSON data. \n \n \n \n And a class method that can be used to create fake request objects for use in tests: \n \n \n fake(path_with_query_string, method=\"GET\", scheme=\"http\", url_vars=None) \n \n Returns a Request instance for the specified path and method. For example: \n from datasette import Request\nfrom pprint import pprint\n\nrequest = Request.fake(\n \"/fixtures/facetable/\",\n url_vars={\"database\": \"fixtures\", \"table\": \"facetable\"},\n)\npprint(request.scope) \n This outputs: \n {'http_version': '1.1',\n 'method': 'GET',\n 'path': '/fixtures/facetable/',\n 'query_string': b'',\n 'raw_path': b'/fixtures/facetable/',\n 'scheme': 'http',\n 'type': 'http',\n 'url_route': {'kwargs': {'database': 'fixtures', 'table': 'facetable'}}}", "sections_fts": 226, "rank": null} {"rowid": 140, "title": "The MultiParams class", "content": "request.args is a MultiParams object - a dictionary-like object which provides access to query string parameters that may have multiple values. \n Consider the query string ?foo=1&foo=2&bar=3 - with two values for foo and one value for bar . \n \n \n request.args[key] - string \n \n Returns the first value for that key, or raises a KeyError if the key is missing. For the above example request.args[\"foo\"] would return \"1\" . \n \n \n \n request.args.get(key) - string or None \n \n Returns the first value for that key, or None if the key is missing. Pass a second argument to specify a different default, e.g. q = request.args.get(\"q\", \"\") . \n \n \n \n request.args.getlist(key) - list of strings \n \n Returns the list of strings for that key. request.args.getlist(\"foo\") would return [\"1\", \"2\"] in the above example. request.args.getlist(\"bar\") would return [\"3\"] . If the key is missing an empty list will be returned. \n \n \n \n request.args.keys() - list of strings \n \n Returns the list of available keys - for the example this would be [\"foo\", \"bar\"] . \n \n \n \n key in request.args - True or False \n \n You can use if key in request.args to check if a key is present. \n \n \n \n for key in request.args - iterator \n \n This lets you loop through every available key. 
\n \n \n \n len(request.args) - integer \n \n Returns the number of keys.", "sections_fts": 226, "rank": null} {"rowid": 141, "title": "Response class", "content": "The Response class can be returned from view functions that have been registered using the register_routes(datasette) hook. \n The Response() constructor takes the following arguments: \n \n \n body - string \n \n The body of the response. \n \n \n \n status - integer (optional) \n \n The HTTP status - defaults to 200. \n \n \n \n headers - dictionary (optional) \n \n A dictionary of extra HTTP headers, e.g. {\"x-hello\": \"world\"} . \n \n \n \n content_type - string (optional) \n \n The content-type for the response. Defaults to text/plain . \n \n \n \n For example: \n from datasette.utils.asgi import Response\n\nresponse = Response(\n \"This is XML\",\n content_type=\"application/xml; charset=utf-8\",\n) \n The quickest way to create responses is using the Response.text(...) , Response.html(...) , Response.json(...) or Response.redirect(...) helper methods: \n from datasette.utils.asgi import Response\n\nhtml_response = Response.html(\"This is HTML\")\njson_response = Response.json({\"this_is\": \"json\"})\ntext_response = Response.text(\n \"This will become utf-8 encoded text\"\n)\n# Redirects are served as 302, unless you pass status=301:\nredirect_response = Response.redirect(\n \"https://latest.datasette.io/\"\n) \n Each of these responses will use the correct corresponding content-type - text/html; charset=utf-8 , application/json; charset=utf-8 or text/plain; charset=utf-8 respectively. \n Each of the helper methods take optional status= and headers= arguments, documented above.", "sections_fts": 226, "rank": null} {"rowid": 142, "title": "Returning a response with .asgi_send(send)", "content": "In most cases you will return Response objects from your own view functions. You can also use a Response instance to respond at a lower level via ASGI, for example if you are writing code that uses the asgi_wrapper(datasette) hook. \n Create a Response object and then use await response.asgi_send(send) , passing the ASGI send function. For example: \n async def require_authorization(scope, receive, send):\n response = Response.text(\n \"401 Authorization Required\",\n headers={\n \"www-authenticate\": 'Basic realm=\"Datasette\", charset=\"UTF-8\"'\n },\n status=401,\n )\n await response.asgi_send(send)", "sections_fts": 226, "rank": null} {"rowid": 143, "title": "Setting cookies with response.set_cookie()", "content": "To set cookies on the response, use the response.set_cookie(...) method. The method signature looks like this: \n def set_cookie(\n self,\n key,\n value=\"\",\n max_age=None,\n expires=None,\n path=\"/\",\n domain=None,\n secure=False,\n httponly=False,\n samesite=\"lax\",\n):\n ... \n You can use this with datasette.sign() to set signed cookies. Here's how you would set the ds_actor cookie for use with Datasette authentication : \n response = Response.redirect(\"/\")\nresponse.set_cookie(\n \"ds_actor\",\n datasette.sign({\"a\": {\"id\": \"cleopaws\"}}, \"actor\"),\n)\nreturn response", "sections_fts": 226, "rank": null} {"rowid": 144, "title": "Datasette class", "content": "This object is an instance of the Datasette class, passed to many plugin hooks as an argument called datasette . 
\n You can create your own instance of this - for example to help write tests for a plugin - like so: \n from datasette.app import Datasette\n\n# With no arguments a single in-memory database will be attached\ndatasette = Datasette()\n\n# The files= argument can load files from disk\ndatasette = Datasette(files=[\"/path/to/my-database.db\"])\n\n# Pass metadata as a JSON dictionary like this\ndatasette = Datasette(\n files=[\"/path/to/my-database.db\"],\n metadata={\n \"databases\": {\n \"my-database\": {\n \"description\": \"This is my database\"\n }\n }\n },\n) \n Constructor parameters include: \n \n \n files=[...] - a list of database files to open \n \n \n immutables=[...] - a list of database files to open in immutable mode \n \n \n metadata={...} - a dictionary of Metadata \n \n \n config_dir=... - the configuration directory to use, stored in datasette.config_dir", "sections_fts": 226, "rank": null} {"rowid": 145, "title": ".databases", "content": "Property exposing a collections.OrderedDict of databases currently connected to Datasette. \n The dictionary keys are the name of the database that is used in the URL - e.g. /fixtures would have a key of \"fixtures\" . The values are Database class instances. \n All databases are listed, irrespective of user permissions. This means that the _internal database will always be listed here.", "sections_fts": 226, "rank": null} {"rowid": 146, "title": ".plugin_config(plugin_name, database=None, table=None)", "content": "plugin_name - string \n \n The name of the plugin to look up configuration for. Usually this is something similar to datasette-cluster-map . \n \n \n \n database - None or string \n \n The database the user is interacting with. \n \n \n \n table - None or string \n \n The table the user is interacting with. \n \n \n \n This method lets you read plugin configuration values that were set in metadata.json . See Writing plugins that accept configuration for full details of how this method should be used. \n The return value will be the value from the configuration file - usually a dictionary. \n If the plugin is not configured the return value will be None .", "sections_fts": 226, "rank": null} {"rowid": 147, "title": "await .render_template(template, context=None, request=None)", "content": "template - string, list of strings or jinja2.Template \n \n The template file to be rendered, e.g. my_plugin.html . Datasette will search for this file first in the --template-dir= location, if it was specified - then in the plugin's bundled templates and finally in Datasette's set of default templates. \n If this is a list of template file names then the first one that exists will be loaded and rendered. \n If this is a Jinja Template object it will be used directly. \n \n \n \n context - None or a Python dictionary \n \n The context variables to pass to the template. \n \n \n \n request - request object or None \n \n If you pass a Datasette request object here it will be made available to the template. \n \n \n \n Renders a Jinja template using Datasette's preconfigured instance of Jinja and returns the resulting string. The template will have access to Datasette's default template functions and any functions that have been made available by other plugins.", "sections_fts": 226, "rank": null} {"rowid": 148, "title": "await .permission_allowed(actor, action, resource=None, default=False)", "content": "actor - dictionary \n \n The authenticated actor. This is usually request.actor . 
\n \n \n \n action - string \n \n The name of the action that is being permission checked. \n \n \n \n resource - string or tuple, optional \n \n The resource, e.g. the name of the database, or a tuple of two strings containing the name of the database and the name of the table. Only some permissions apply to a resource. \n \n \n \n default - optional, True or False \n \n Should this permission check be default allow or default deny. \n \n \n \n Check if the given actor has permission to perform the given action on the given resource. \n Some permission checks are carried out against rules defined in metadata.json , while other custom permissions may be decided by plugins that implement the permission_allowed(datasette, actor, action, resource) plugin hook. \n If neither metadata.json nor any of the plugins provide an answer to the permission query the default argument will be returned. \n See Built-in permissions for a full list of permission actions included in Datasette core.", "sections_fts": 226, "rank": null} {"rowid": 149, "title": "await .ensure_permissions(actor, permissions)", "content": "actor - dictionary \n \n The authenticated actor. This is usually request.actor . \n \n \n \n permissions - list \n \n A list of permissions to check. Each permission in that list can be a string action name or a 2-tuple of (action, resource) . \n \n \n \n This method allows multiple permissions to be checked at once. It raises a datasette.Forbidden exception if any of the checks are denied before one of them is explicitly granted. \n This is useful when you need to check multiple permissions at once. For example, an actor should be able to view a table if either one of the following checks returns True or not a single one of them returns False : \n await self.ds.ensure_permissions(\n request.actor,\n [\n (\"view-table\", (database, table)),\n (\"view-database\", database),\n \"view-instance\",\n ],\n)", "sections_fts": 226, "rank": null} {"rowid": 150, "title": "await .check_visibility(actor, action=None, resource=None, permissions=None)", "content": "actor - dictionary \n \n The authenticated actor. This is usually request.actor . \n \n \n \n action - string, optional \n \n The name of the action that is being permission checked. \n \n \n \n resource - string or tuple, optional \n \n The resource, e.g. the name of the database, or a tuple of two strings containing the name of the database and the name of the table. Only some permissions apply to a resource. \n \n \n \n permissions - list of action strings or (action, resource) tuples, optional \n \n Provide this instead of action and resource to check multiple permissions at once. \n \n \n \n This convenience method can be used to answer the question \"should this item be considered private, in that it is visible to me but it is not visible to anonymous users?\" \n It returns a tuple of two booleans, (visible, private) . visible indicates if the actor can see this resource. private will be True if an anonymous user would not be able to view the resource. \n This example checks if the user can access a specific table, and sets private so that a padlock icon can later be displayed: \n visible, private = await self.ds.check_visibility(\n request.actor,\n action=\"view-table\",\n resource=(database, table),\n) \n The following example runs three checks in a row, similar to await .ensure_permissions(actor, permissions) . If any of the checks are denied before one of them is explicitly granted then visible will be False . 
private will be True if an anonymous user would not be able to view the resource. \n visible, private = await self.ds.check_visibility(\n request.actor,\n permissions=[\n (\"view-table\", (database, table)),\n (\"view-database\", database),\n \"view-instance\",\n ],\n)", "sections_fts": 226, "rank": null} {"rowid": 151, "title": ".get_database(name)", "content": "name - string, optional \n \n The name of the database - optional. \n \n \n \n Returns the specified database object. Raises a KeyError if the database does not exist. Call this method without an argument to return the first connected database.", "sections_fts": 226, "rank": null} {"rowid": 152, "title": ".add_database(db, name=None, route=None)", "content": "db - datasette.database.Database instance \n \n The database to be attached. \n \n \n \n name - string, optional \n \n The name to be used for this database . If not specified Datasette will pick one based on the filename or memory name. \n \n \n \n route - string, optional \n \n This will be used in the URL path. If not specified, it will default to the same thing as the name . \n \n \n \n The datasette.add_database(db) method lets you add a new database to the current Datasette instance. \n The db parameter should be an instance of the datasette.database.Database class. For example: \n from datasette.database import Database\n\ndatasette.add_database(\n Database(\n datasette,\n path=\"path/to/my-new-database.db\",\n )\n) \n This will add a mutable database and serve it at /my-new-database . \n Use is_mutable=False to add an immutable database. \n .add_database() returns the Database instance, with its name set as the database.name attribute. Any time you are working with a newly added database you should use the return value of .add_database() , for example: \n db = datasette.add_database(\n Database(datasette, memory_name=\"statistics\")\n)\nawait db.execute_write(\n \"CREATE TABLE foo(id integer primary key)\"\n)", "sections_fts": 226, "rank": null} {"rowid": 153, "title": ".add_memory_database(name)", "content": "Adds a shared in-memory database with the specified name: \n datasette.add_memory_database(\"statistics\") \n This is a shortcut for the following: \n from datasette.database import Database\n\ndatasette.add_database(\n Database(datasette, memory_name=\"statistics\")\n) \n Using either of these pattern will result in the in-memory database being served at /statistics .", "sections_fts": 226, "rank": null} {"rowid": 154, "title": ".remove_database(name)", "content": "name - string \n \n The name of the database to be removed. \n \n \n \n This removes a database that has been previously added. name= is the unique name of that database.", "sections_fts": 226, "rank": null} {"rowid": 155, "title": ".sign(value, namespace=\"default\")", "content": "value - any serializable type \n \n The value to be signed. \n \n \n \n namespace - string, optional \n \n An alternative namespace, see the itsdangerous salt documentation . \n \n \n \n Utility method for signing values, such that you can safely pass data to and from an untrusted environment. This is a wrapper around the itsdangerous library. \n This method returns a signed string, which can be decoded and verified using .unsign(value, namespace=\"default\") .", "sections_fts": 226, "rank": null} {"rowid": 156, "title": ".unsign(value, namespace=\"default\")", "content": "signed - any serializable type \n \n The signed string that was created using .sign(value, namespace=\"default\") . 
\n \n \n \n namespace - string, optional \n \n The alternative namespace, if one was used. \n \n \n \n Returns the original, decoded object that was passed to .sign(value, namespace=\"default\") . If the signature is not valid this raises a itsdangerous.BadSignature exception.", "sections_fts": 226, "rank": null} {"rowid": 157, "title": ".add_message(request, message, type=datasette.INFO)", "content": "request - Request \n \n The current Request object \n \n \n \n message - string \n \n The message string \n \n \n \n type - constant, optional \n \n The message type - datasette.INFO , datasette.WARNING or datasette.ERROR \n \n \n \n Datasette's flash messaging mechanism allows you to add a message that will be displayed to the user on the next page that they visit. Messages are persisted in a ds_messages cookie. This method adds a message to that cookie. \n You can try out these messages (including the different visual styling of the three message types) using the /-/messages debugging tool.", "sections_fts": 226, "rank": null} {"rowid": 158, "title": ".absolute_url(request, path)", "content": "request - Request \n \n The current Request object \n \n \n \n path - string \n \n A path, for example /dbname/table.json \n \n \n \n Returns the absolute URL for the given path, including the protocol and host. For example: \n absolute_url = datasette.absolute_url(\n request, \"/dbname/table.json\"\n)\n# Would return \"http://localhost:8001/dbname/table.json\" \n The current request object is used to determine the hostname and protocol that should be used for the returned URL. The force_https_urls configuration setting is taken into account.", "sections_fts": 226, "rank": null} {"rowid": 159, "title": ".setting(key)", "content": "key - string \n \n The name of the setting, e.g. base_url . \n \n \n \n Returns the configured value for the specified setting . This can be a string, boolean or integer depending on the requested setting. \n For example: \n downloads_are_allowed = datasette.setting(\"allow_download\")", "sections_fts": 226, "rank": null} {"rowid": 160, "title": "datasette.client", "content": "Plugins can make internal simulated HTTP requests to the Datasette instance within which they are running. This ensures that all of Datasette's external JSON APIs are also available to plugins, while avoiding the overhead of making an external HTTP call to access those APIs. \n The datasette.client object is a wrapper around the HTTPX Python library , providing an async-friendly API that is similar to the widely used Requests library . \n It offers the following methods: \n \n \n await datasette.client.get(path, **kwargs) - returns HTTPX Response \n \n Execute an internal GET request against that path. \n \n \n \n await datasette.client.post(path, **kwargs) - returns HTTPX Response \n \n Execute an internal POST request. Use data={\"name\": \"value\"} to pass form parameters. \n \n \n \n await datasette.client.options(path, **kwargs) - returns HTTPX Response \n \n Execute an internal OPTIONS request. \n \n \n \n await datasette.client.head(path, **kwargs) - returns HTTPX Response \n \n Execute an internal HEAD request. \n \n \n \n await datasette.client.put(path, **kwargs) - returns HTTPX Response \n \n Execute an internal PUT request. \n \n \n \n await datasette.client.patch(path, **kwargs) - returns HTTPX Response \n \n Execute an internal PATCH request. \n \n \n \n await datasette.client.delete(path, **kwargs) - returns HTTPX Response \n \n Execute an internal DELETE request. 
\n \n \n \n await datasette.client.request(method, path, **kwargs) - returns HTTPX Response \n \n Execute an internal request with the given HTTP method against that path. \n \n \n \n These methods can be used with datasette.urls - for example: \n table_json = (\n await datasette.client.get(\n datasette.urls.table(\n \"fixtures\", \"facetable\", format=\"json\"\n )\n )\n).json() \n datasette.client methods automatically take the current base_url setting into account, whether or not you use the datasette.urls family of methods to construct the path. \n For documentation on available **kwargs options and the shape of the HTTPX Response object refer to the HTTPX Async documentation .", "sections_fts": 226, "rank": null} {"rowid": 161, "title": "datasette.urls", "content": "The datasette.urls object contains methods for building URLs to pages within Datasette. Plugins should use this to link to pages, since these methods take into account any base_url configuration setting that might be in effect. \n \n \n datasette.urls.instance(format=None) \n \n Returns the URL to the Datasette instance root page. This is usually \"/\" . \n \n \n \n datasette.urls.path(path, format=None) \n \n Takes a path and returns the full path, taking base_url into account. \n For example, datasette.urls.path(\"-/logout\") will return the path to the logout page, which will be \"/-/logout\" by default or /prefix-path/-/logout if base_url is set to /prefix-path/ \n \n \n \n datasette.urls.logout() \n \n Returns the URL to the logout page, usually \"/-/logout\" \n \n \n \n datasette.urls.static(path) \n \n Returns the URL of one of Datasette's default static assets, for example \"/-/static/app.css\" \n \n \n \n datasette.urls.static_plugins(plugin_name, path) \n \n Returns the URL of one of the static assets belonging to a plugin. \n datasette.urls.static_plugins(\"datasette_cluster_map\", \"datasette-cluster-map.js\") would return \"/-/static-plugins/datasette_cluster_map/datasette-cluster-map.js\" \n \n \n \n datasette.urls.database(database_name, format=None) \n \n Returns the URL to a database page, for example \"/fixtures\" \n \n \n \n datasette.urls.table(database_name, table_name, format=None) \n \n Returns the URL to a table page, for example \"/fixtures/facetable\" \n \n \n \n datasette.urls.query(database_name, query_name, format=None) \n \n Returns the URL to a query page, for example \"/fixtures/pragma_cache_size\" \n \n \n \n These functions can be accessed via the {{ urls }} object in Datasette templates, for example: \n <a href=\"{{ urls.instance() }}\">Homepage</a>\n<a href=\"{{ urls.database(\"fixtures\") }}\">Fixtures database</a>\n<a href=\"{{ urls.table(\"fixtures\", \"facetable\") }}\">facetable table</a>\n<a href=\"{{ urls.query(\"fixtures\", \"pragma_cache_size\") }}\">pragma_cache_size query</a> \n Use the format=\"json\" (or \"csv\" or other formats supported by plugins) arguments to get back URLs to the JSON representation. This is the path with .json added on the end. \n These methods each return a datasette.utils.PrefixedUrlString object, which is a subclass of the Python str type. This allows the logic that considers the base_url setting to detect if that prefix has already been applied to the path.
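As a brief illustrative sketch (not from the original documentation - it assumes a datasette instance is in scope, for example inside a plugin hook), the same methods can be called from Python to build links that respect any base_url prefix:
# Hypothetical plugin snippet - the database and table names are examples only
homepage = datasette.urls.instance()
db_page = datasette.urls.database("fixtures")
table_json = datasette.urls.table("fixtures", "facetable", format="json")
logout_page = datasette.urls.logout()
# With base_url set to "/prefix-path/" each of these paths would start
# with that prefix instead of "/".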
", "sections_fts": 226, "rank": null} {"rowid": 162, "title": "Database class", "content": "Instances of the Database class can be used to execute queries against attached SQLite databases, and to run introspection against their schemas.", "sections_fts": 226, "rank": null} {"rowid": 163, "title": "Database(ds, path=None, is_mutable=True, is_memory=False, memory_name=None)", "content": "The Database() constructor can be used by plugins, in conjunction with .add_database(db, name=None, route=None) , to create and register new databases. \n The arguments are as follows: \n \n \n ds - Datasette class (required) \n \n The Datasette instance you are attaching this database to. \n \n \n \n path - string \n \n Path to a SQLite database file on disk. \n \n \n \n is_mutable - boolean \n \n Set this to False to cause Datasette to open the file in immutable mode. \n \n \n \n is_memory - boolean \n \n Use this to create non-shared memory connections. \n \n \n \n memory_name - string or None \n \n Use this to create a named in-memory database. Unlike regular memory databases these can be accessed by multiple threads and will persist any changes made to them for the lifetime of the Datasette server process. \n \n \n \n The first argument is the datasette instance you are attaching to, the second is a path= , then is_mutable and is_memory are both optional arguments.", "sections_fts": 226, "rank": null} {"rowid": 164, "title": "db.hash", "content": "If the database was opened in immutable mode, this property returns the 64 character SHA-256 hash of the database contents as a string. Otherwise it returns None .", "sections_fts": 226, "rank": null} {"rowid": 165, "title": "await db.execute(sql, ...)", "content": "Executes a SQL query against the database and returns the resulting rows (see Results ). \n \n \n sql - string (required) \n \n The SQL query to execute. This can include ? or :named parameters. \n \n \n \n params - list or dict \n \n A list or dictionary of values to use for the parameters. List for ? , dictionary for :named . \n \n \n \n truncate - boolean \n \n Should the rows returned by the query be truncated at the maximum page size? Defaults to True , set this to False to disable truncation. \n \n \n \n custom_time_limit - integer ms \n \n A custom time limit for this query. This can be set to a lower value than the Datasette configured default. If a query takes longer than this it will be terminated early and raise a datasette.database.QueryInterrupted exception. \n \n \n \n page_size - integer \n \n Set a custom page size for truncation, overriding the configured Datasette default. \n \n \n \n log_sql_errors - boolean \n \n Should any SQL errors be logged to the console in addition to being raised as an error? Defaults to True .", "sections_fts": 226, "rank": null} {"rowid": 166, "title": "Results", "content": "The db.execute() method returns a single Results object. This can be used to access the rows returned by the query. \n Iterating over a Results object will yield SQLite Row objects . 
Each of these Row objects can be treated as a tuple, or accessed using row[\"column\"] syntax: \n info = []\nresults = await db.execute(\"select name from sqlite_master\")\nfor row in results:\n info.append(row[\"name\"]) \n The Results object also has the following properties and methods: \n \n \n .truncated - boolean \n \n Indicates if this query was truncated - if it returned more results than the specified page_size . If this is true then the results object will only provide access to the first page_size rows in the query result. You can disable truncation by passing truncate=False to the db.execute() method. \n \n \n \n .columns - list of strings \n \n A list of column names returned by the query. \n \n \n \n .rows - list of sqlite3.Row \n \n This property provides direct access to the list of rows returned by the database. You can access specific rows by index using results.rows[0] . \n \n \n \n .first() - row or None \n \n Returns the first row in the results, or None if no rows were returned. \n \n \n \n .single_value() \n \n Returns the value of the first column of the first row of results - but only if the query returned a single row with a single column. Raises a datasette.database.MultipleValues exception otherwise. \n \n \n \n .__len__() \n \n Calling len(results) returns the (truncated) number of returned results.", "sections_fts": 226, "rank": null} {"rowid": 167, "title": "await db.execute_fn(fn)", "content": "Executes a given callback function against a read-only database connection running in a thread. The function will be passed a SQLite connection, and the return value from the function will be returned by the await . \n Example usage: \n def get_version(conn):\n return conn.execute(\n \"select sqlite_version()\"\n ).fetchall()[0][0]\n\n\nversion = await db.execute_fn(get_version)", "sections_fts": 226, "rank": null} {"rowid": 168, "title": "await db.execute_write(sql, params=None, block=True)", "content": "SQLite only allows one database connection to write at a time. Datasette handles this for you by maintaining a queue of writes to be executed against a given database. Plugins can submit write operations to this queue and they will be executed in the order in which they are received. \n This method can be used to queue up a non-SELECT SQL query to be executed against a single write connection to the database. \n You can pass additional SQL parameters as a tuple or dictionary. \n The method will block until the operation is completed, and the return value will be the return from calling conn.execute(...) using the underlying sqlite3 Python library. \n If you pass block=False this behaviour changes to \"fire and forget\" - queries will be added to the write queue and executed in a separate thread while your code can continue to do other things. The method will return a UUID representing the queued task.", "sections_fts": 226, "rank": null} {"rowid": 169, "title": "await db.execute_write_script(sql, block=True)", "content": "Like execute_write() but can be used to send multiple SQL statements in a single string separated by semicolons, using the sqlite3 conn.executescript() method.", "sections_fts": 226, "rank": null} {"rowid": 170, "title": "await db.execute_write_many(sql, params_seq, block=True)", "content": "Like execute_write() but uses the sqlite3 conn.executemany() method.
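A short sketch of execute_write() itself, based on the description above; the logs table is hypothetical:

# Blocks until the write has executed; returns the result of
# the underlying conn.execute(...) call
await db.execute_write(
    "insert into logs (message) values (?)",
    ("backup complete",),
)

# Fire-and-forget: returns a UUID identifying the queued task
task_id = await db.execute_write(
    "delete from logs where id = :id",
    {"id": 1},
    block=False,
)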
execute_write_many() will efficiently execute the same SQL statement against each of the parameters in the params_seq iterator, for example: \n await db.execute_write_many(\n \"insert into characters (id, name) values (?, ?)\",\n [(1, \"Melanie\"), (2, \"Selma\"), (3, \"Viktor\")],\n)", "sections_fts": 226, "rank": null} {"rowid": 171, "title": "await db.execute_write_fn(fn, block=True)", "content": "This method works like .execute_write() , but instead of a SQL statement you give it a callable Python function. Your function will be queued up and then called when the write connection is available, passing that connection as the argument to the function. \n The function can then perform multiple actions, safe in the knowledge that it has exclusive access to the single writable connection for as long as it is executing. \n \n fn needs to be a regular function, not an async def function. \n \n For example: \n def delete_and_return_count(conn):\n conn.execute(\"delete from some_table where id > 5\")\n return conn.execute(\n \"select count(*) from some_table\"\n ).fetchone()[0]\n\n\ntry:\n num_rows_left = await database.execute_write_fn(\n delete_and_return_count\n )\nexcept Exception as e:\n print(\"An error occurred:\", e) \n The value returned from await database.execute_write_fn(...) will be the return value from your function. \n If your function raises an exception that exception will be propagated up to the await line. \n If you specify block=False the method becomes fire-and-forget, queueing your function to be executed and then allowing your code after the call to .execute_write_fn() to continue running while the underlying thread waits for an opportunity to run your function. A UUID representing the queued task will be returned. Any exceptions in your code will be silently swallowed.", "sections_fts": 226, "rank": null} {"rowid": 172, "title": "db.close()", "content": "Closes all of the open connections to file-backed databases. This is mainly intended to be used by large test suites, to avoid hitting limits on the number of open files.", "sections_fts": 226, "rank": null} {"rowid": 173, "title": "Database introspection", "content": "The Database class also provides properties and methods for introspecting the database. \n \n \n db.name - string \n \n The name of the database - usually the filename without the .db extension. \n \n \n \n db.size - integer \n \n The size of the database file in bytes. 0 for :memory: databases. \n \n \n \n db.mtime_ns - integer or None \n \n The last modification time of the database file in nanoseconds since the epoch. None for :memory: databases. \n \n \n \n db.is_mutable - boolean \n \n Is this database mutable, and allowed to accept writes? \n \n \n \n db.is_memory - boolean \n \n Is this database an in-memory database? \n \n \n \n await db.attached_databases() - list of named tuples \n \n Returns a list of additional databases that have been connected to this database using the SQLite ATTACH command. Each named tuple has fields seq , name and file . \n \n \n \n await db.table_exists(table) - boolean \n \n Check if a table called table exists. \n \n \n \n await db.table_names() - list of strings \n \n List of names of tables in the database. \n \n \n \n await db.view_names() - list of strings \n \n List of names of views in the database. \n \n \n \n await db.table_columns(table) - list of strings \n \n Names of columns in a specific table. \n \n \n \n await db.table_column_details(table) - list of named tuples \n \n Full details of the columns in a specific table.
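A brief sketch combining some of these introspection methods; the "content" database and "articles" table are hypothetical names, and datasette is assumed to be a Datasette instance:

db = datasette.get_database("content")
print(db.name, db.size, db.is_mutable)

if await db.table_exists("articles"):
    print(await db.table_columns("articles"))
    for column in await db.table_column_details("articles"):
        print(column.name, column.type, column.is_pk)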
Each column returned by table_column_details() is represented by a Column named tuple with fields cid (integer representing the column position), name (string), type (string, e.g. REAL or VARCHAR(30) ), notnull (integer 1 or 0), default_value (string or None), is_pk (integer 1 or 0). \n \n \n \n await db.primary_keys(table) - list of strings \n \n Names of the columns that are part of the primary key for this table. \n \n \n \n await db.fts_table(table) - string or None \n \n The name of the FTS table associated with this table, if one exists. \n \n \n \n await db.label_column_for_table(table) - string or None \n \n The label column that is associated with this table - either automatically detected or using the \"label_column\" key from Metadata , see Specifying the label column for a table . \n \n \n \n await db.foreign_keys_for_table(table) - list of dictionaries \n \n Details of columns in this table which are foreign keys to other tables. A list of dictionaries where each dictionary is shaped like this: {\"column\": string, \"other_table\": string, \"other_column\": string} . \n \n \n \n await db.hidden_table_names() - list of strings \n \n List of tables which Datasette \"hides\" by default - usually these are tables associated with SQLite's full-text search feature, the SpatiaLite extension or tables hidden using the Hiding tables feature. \n \n \n \n await db.get_table_definition(table) - string \n \n Returns the SQL definition for the table - the CREATE TABLE statement and any associated CREATE INDEX statements. \n \n \n \n await db.get_view_definition(view) - string \n \n Returns the SQL definition of the named view. \n \n \n \n await db.get_all_foreign_keys() - dictionary \n \n Dictionary representing both incoming and outgoing foreign keys for this table. It has two keys, \"incoming\" and \"outgoing\" , each of which is a list of dictionaries with keys \"column\" , \"other_table\" and \"other_column\" . For example: \n {\n \"incoming\": [],\n \"outgoing\": [\n {\n \"other_table\": \"attraction_characteristic\",\n \"column\": \"characteristic_id\",\n \"other_column\": \"pk\",\n },\n {\n \"other_table\": \"roadside_attractions\",\n \"column\": \"attraction_id\",\n \"other_column\": \"pk\",\n }\n ]\n}", "sections_fts": 226, "rank": null} {"rowid": 174, "title": "CSRF protection", "content": "Datasette uses asgi-csrf to guard against CSRF attacks on form POST submissions. Users receive a ds_csrftoken cookie which is compared against the csrftoken form field (or x-csrftoken HTTP header) for every incoming request. \n If your plugin implements a <form method=\"POST\">
anywhere you will need to include that token. You can do so with the following template snippet: \n <input type=\"hidden\" name=\"csrftoken\" value=\"{{ csrftoken() }}\"> \n If you are rendering templates using the await .render_template(template, context=None, request=None) method the csrftoken() helper will only work if you provide the request= argument to that method. If you forget to do this you will see the following error: \n form-urlencoded POST field did not match cookie \n You can selectively disable CSRF protection using the skip_csrf(datasette, scope) hook.", "sections_fts": 226, "rank": null} {"rowid": 175, "title": "The _internal database", "content": "This API should be considered unstable - the structure of these tables may change prior to the release of Datasette 1.0. \n \n Datasette maintains an in-memory SQLite database with details of the databases, tables and columns for all of the attached databases. \n By default all actors are denied access to the view-database permission for the _internal database, so the database is not visible to anyone unless they sign in as root . \n Plugins can access this database by calling db = datasette.get_database(\"_internal\") and then executing queries using the Database API . \n You can explore an example of this database by signing in as root to the latest.datasette.io demo instance and then navigating to latest.datasette.io/_internal .", "sections_fts": 226, "rank": null} {"rowid": 176, "title": "The datasette.utils module", "content": "The datasette.utils module contains various utility functions used by Datasette. As a general rule you should consider anything in this module to be unstable - functions and classes here could change without warning or be removed entirely between Datasette releases, without being mentioned in the release notes. \n The exception to this rule is anything that is documented here. If you find a need for an undocumented utility function in your own work, consider opening an issue requesting that the function you are using be upgraded to documented and supported status.", "sections_fts": 226, "rank": null} {"rowid": 177, "title": "parse_metadata(content)", "content": "This function accepts a string containing either JSON or YAML, expected to be of the format described in Metadata . It returns a nested Python dictionary representing the parsed data from that string. \n If the metadata cannot be parsed as either JSON or YAML the function will raise a utils.BadMetadataError exception. \n \n \n datasette.utils.parse_metadata(content: str) -> dict \n \n Detects if content is JSON or YAML and parses it appropriately.", "sections_fts": 226, "rank": null} {"rowid": 178, "title": "await_me_maybe(value)", "content": "Utility function for calling await on a return value if it is awaitable, otherwise returning the value. This is used by Datasette to support plugin hooks that can optionally return awaitable functions. Read more about this function in The “await me maybe” pattern for Python asyncio . \n \n \n async datasette.utils.await_me_maybe(value: Any) -> Any \n \n If value is callable, call it. If awaitable, await it. Otherwise return it.", "sections_fts": 226, "rank": null} {"rowid": 179, "title": "Tilde encoding", "content": "Datasette uses a custom encoding scheme in some places, called tilde encoding . This is primarily used for table names and row primary keys, to avoid any confusion between / characters in those values and the Datasette URLs that reference them.
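As a concrete illustration, using the tilde_encode() and tilde_decode() helpers documented just below, a primary key containing / and . characters round-trips like this (a minimal sketch):

from datasette.utils import tilde_encode, tilde_decode

pk = "polls/2022.primary"
encoded = tilde_encode(pk)   # "polls~2F2022~2Eprimary"
assert tilde_decode(encoded) == pk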
\n Tilde encoding uses the same algorithm as URL percent-encoding , but with the ~ tilde character used in place of % . \n Any character other than ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz0123456789_- will be replaced by its hexadecimal equivalent preceded by a tilde. For example: \n \n \n / becomes ~2F \n \n \n . becomes ~2E \n \n \n % becomes ~25 \n \n \n ~ becomes ~7E \n \n \n Space becomes + \n \n \n polls/2022.primary becomes polls~2F2022~2Eprimary \n \n \n Note that the space character is a special case: it will be replaced with a + symbol. \n \n \n \n datasette.utils.tilde_encode(s: str) -> str \n \n Returns tilde-encoded string - for example /foo/bar -> ~2Ffoo~2Fbar \n \n \n \n \n \n datasette.utils.tilde_decode(s: str) -> str \n \n Decodes a tilde-encoded string, so ~2Ffoo~2Fbar -> /foo/bar", "sections_fts": 226, "rank": null} {"rowid": 180, "title": "datasette.tracer", "content": "Running Datasette with --setting trace_debug 1 enables trace debug output, which can then be viewed by adding ?_trace=1 to the query string for any page. \n You can see an example of this at the bottom of latest.datasette.io/fixtures/facetable?_trace=1 . The JSON output shows full details of every SQL query that was executed to generate the page. \n The datasette-pretty-traces plugin can be installed to provide a more readable display of this information. You can see a demo of that here . \n You can add your own custom traces to the JSON output using the trace() context manager. This takes a string that identifies the type of trace being recorded, and records any keyword arguments as additional JSON keys on the resulting trace object. \n The start and end time, duration and a traceback of where the trace was executed will be automatically attached to the JSON object. \n This example uses trace to record the start, end and duration of any HTTP GET requests made using the following function: \n from datasette.tracer import trace\nimport httpx\n\n\nasync def fetch_url(url):\n with trace(\"fetch-url\", url=url):\n async with httpx.AsyncClient() as client:\n return await client.get(url)", "sections_fts": 226, "rank": null} {"rowid": 181, "title": "Tracing child tasks", "content": "If your code uses a mechanism such as asyncio.gather() to execute code in additional tasks you may find that some of the traces are missing from the display. \n You can use the trace_child_tasks() context manager to ensure these child tasks are correctly handled. \n from datasette import tracer\n\nwith tracer.trace_child_tasks():\n results = await asyncio.gather(\n # ... async tasks here\n ) \n This example uses the register_routes() plugin hook to add a page at /parallel-queries which executes two SQL queries in parallel using asyncio.gather() and returns their results.
\n import asyncio\n\nfrom datasette import hookimpl\nfrom datasette import Response\nfrom datasette import tracer\n\n\n@hookimpl\ndef register_routes():\n async def parallel_queries(datasette):\n db = datasette.get_database()\n with tracer.trace_child_tasks():\n one, two = await asyncio.gather(\n db.execute(\"select 1\"),\n db.execute(\"select 2\"),\n )\n return Response.json(\n {\n \"one\": one.single_value(),\n \"two\": two.single_value(),\n }\n )\n\n return [\n (r\"/parallel-queries$\", parallel_queries),\n ] \n Adding ?_trace=1 will show that the trace covers both of those child tasks.", "sections_fts": 226, "rank": null} {"rowid": 182, "title": "Import shortcuts", "content": "The following commonly used symbols can be imported directly from the datasette module: \n from datasette import Response\nfrom datasette import Forbidden\nfrom datasette import NotFound\nfrom datasette import hookimpl\nfrom datasette import actor_matches_allow", "sections_fts": 226, "rank": null} {"rowid": 183, "title": "Contributing", "content": "Datasette is an open source project. We welcome contributions! \n This document describes how to contribute to Datasette core. You can also contribute to the wider Datasette ecosystem by creating new Plugins .", "sections_fts": 226, "rank": null} {"rowid": 184, "title": "General guidelines", "content": "main should always be releasable . Incomplete features should live in branches. This ensures that any small bug fixes can be quickly released. \n \n \n The ideal commit should bundle together the implementation, unit tests and associated documentation updates. The commit message should link to an associated issue. \n \n \n New plugin hooks should only be shipped if accompanied by a separate release of a non-demo plugin that uses them.", "sections_fts": 226, "rank": null} {"rowid": 185, "title": "Setting up a development environment", "content": "If you have Python 3.7 or higher installed on your computer (on OS X the quickest way to do this is using homebrew ) you can install an editable copy of Datasette using the following steps. \n If you want to use GitHub to publish your changes, first create a fork of datasette under your own GitHub account. \n Now clone that repository somewhere on your computer: \n git clone git@github.com:YOURNAME/datasette \n If you want to get started without creating your own fork, you can do this instead: \n git clone git@github.com:simonw/datasette \n The next step is to create a virtual environment for your project and use it to install Datasette's dependencies: \n cd datasette\n# Create a virtual environment in ./venv\npython3 -m venv ./venv\n# Now activate the virtual environment, so pip can install into it\nsource venv/bin/activate\n# Install Datasette and its testing dependencies\npython3 -m pip install -e '.[test]' \n That last line does most of the work: pip install -e means \"install this package in a way that allows me to edit the source code in place\". The .[test] option means \"use the setup.py in this directory and install the optional testing dependencies as well\".", "sections_fts": 226, "rank": null} {"rowid": 186, "title": "Running the tests", "content": "Once you have done this, you can run the Datasette unit tests from inside your datasette/ directory using pytest like so: \n pytest \n You can run the tests faster using multiple CPU cores with pytest-xdist like this: \n pytest -n auto -m \"not serial\" \n -n auto detects the number of available cores automatically. The -m \"not serial\" skips tests that don't work well in a parallel test environment.
You can run those tests separately like so: \n pytest -m \"serial\"", "sections_fts": 226, "rank": null} {"rowid": 187, "title": "Using fixtures", "content": "To run Datasette itself, type datasette . \n You're going to need at least one SQLite database. A quick way to get started is to use the fixtures database that Datasette uses for its own tests. \n You can create a copy of that database by running this command: \n python tests/fixtures.py fixtures.db \n Now you can run Datasette against the new fixtures database like so: \n datasette fixtures.db \n This will start a server at http://127.0.0.1:8001/ . \n Any changes you make in the datasette/templates or datasette/static folder will be picked up immediately (though you may need to do a force-refresh in your browser to see changes to CSS or JavaScript). \n If you want to change Datasette's Python code you can use the --reload option to cause Datasette to automatically reload any time the underlying code changes: \n datasette --reload fixtures.db \n You can also use the fixtures.py script to recreate the testing version of metadata.json used by the unit tests. To do that: \n python tests/fixtures.py fixtures.db fixtures-metadata.json \n Or to output the plugins used by the tests, run this: \n python tests/fixtures.py fixtures.db fixtures-metadata.json fixtures-plugins\nTest tables written to fixtures.db\n- metadata written to fixtures-metadata.json\nWrote plugin: fixtures-plugins/register_output_renderer.py\nWrote plugin: fixtures-plugins/view_name.py\nWrote plugin: fixtures-plugins/my_plugin.py\nWrote plugin: fixtures-plugins/messages_output_renderer.py\nWrote plugin: fixtures-plugins/my_plugin_2.py \n Then run Datasette like this: \n datasette fixtures.db -m fixtures-metadata.json --plugins-dir=fixtures-plugins/", "sections_fts": 226, "rank": null} {"rowid": 188, "title": "Debugging", "content": "Any errors that occur while Datasette is running will display a stack trace on the console. \n You can tell Datasette to open an interactive pdb debugger session if an error occurs using the --pdb option: \n datasette --pdb fixtures.db", "sections_fts": 226, "rank": null} {"rowid": 189, "title": "Code formatting", "content": "Datasette uses opinionated code formatters: Black for Python and Prettier for JavaScript. \n These formatters are enforced by Datasette's continuous integration: if a commit includes Python or JavaScript code that does not match the style enforced by those tools, the tests will fail. \n When developing locally, you can verify and correct the formatting of your code using these tools.", "sections_fts": 226, "rank": null} {"rowid": 190, "title": "Running Black", "content": "Black will be installed when you run pip install -e '.[test]' . To test that your code complies with Black, run the following in your root datasette repository checkout: \n $ black . --check\nAll done! \u2728 \ud83c\udf70 \u2728\n95 files would be left unchanged. \n If any of your code does not conform to Black you can run this to automatically fix those problems: \n $ black .\nreformatted ../datasette/setup.py\nAll done! \u2728 \ud83c\udf70 \u2728\n1 file reformatted, 94 files left unchanged.", "sections_fts": 226, "rank": null} {"rowid": 191, "title": "blacken-docs", "content": "The blacken-docs command applies Black formatting rules to code examples in the documentation.
Run it like this: \n blacken-docs -l 60 docs/*.rst", "sections_fts": 226, "rank": null} {"rowid": 192, "title": "Prettier", "content": "To install Prettier, install Node.js and then run the following in the root of your datasette repository checkout: \n $ npm install \n This will install Prettier in a node_modules directory. You can then check that your code matches the coding style like so: \n $ npm run prettier -- --check\n> prettier\n> prettier 'datasette/static/*[!.min].js' \"--check\"\n\nChecking formatting...\n[warn] datasette/static/plugins.js\n[warn] Code style issues found in the above file(s). Forgot to run Prettier? \n You can fix any problems by running: \n $ npm run fix", "sections_fts": 226, "rank": null} {"rowid": 193, "title": "Editing and building the documentation", "content": "Datasette's documentation lives in the docs/ directory and is deployed automatically using Read The Docs . \n The documentation is written using reStructuredText. You may find this article on The subset of reStructuredText worth committing to memory useful. \n You can build it locally by installing sphinx and sphinx_rtd_theme in your Datasette development environment and then running make html directly in the docs/ directory: \n # You may first need to activate your virtual environment:\nsource venv/bin/activate\n\n# Install the dependencies needed to build the docs\npip install -e .[docs]\n\n# Now build the docs\ncd docs/\nmake html \n This will create the HTML version of the documentation in docs/_build/html . You can open it in your browser like so: \n open _build/html/index.html \n Any time you make changes to a .rst file you can re-run make html to update the built documents, then refresh them in your browser. \n For added productivity, you can use sphinx-autobuild to run Sphinx in auto-build mode. This will run a local webserver serving the docs that automatically rebuilds them and refreshes the page any time you hit save in your editor. \n sphinx-autobuild will have been installed when you ran pip install -e .[docs] . In your docs/ directory you can start the server by running the following: \n make livehtml \n Now browse to http://localhost:8000/ to view the documentation. Any edits you make should be instantly reflected in your browser.", "sections_fts": 226, "rank": null} {"rowid": 194, "title": "Running Cog", "content": "Some pages of documentation (in particular the CLI reference ) are automatically updated using Cog . \n To update these pages, run the following command: \n cog -r docs/*.rst", "sections_fts": 226, "rank": null} {"rowid": 195, "title": "Continuously deployed demo instances", "content": "The demo instance at latest.datasette.io is re-deployed automatically to Google Cloud Run for every push to main that passes the test suite. This is implemented by the GitHub Actions workflow at .github/workflows/deploy-latest.yml . \n Specific branches can also be set to automatically deploy by adding them to the on: push: branches block at the top of the workflow YAML file. Branches configured in this way will be deployed to a new Cloud Run service whether or not their tests pass. \n The Cloud Run URL for a branch demo can be found in the GitHub Actions logs.", "sections_fts": 226, "rank": null} {"rowid": 196, "title": "Release process", "content": "Datasette releases are performed using tags. When a new release is published on GitHub, a GitHub Action workflow will perform the following: \n \n \n Run the unit tests against all supported Python versions. If the tests pass...
\n \n \n Build a Docker image of the release and push a tag to https://hub.docker.com/r/datasetteproject/datasette \n \n \n Re-point the \"latest\" tag on Docker Hub to the new image \n \n \n Build a wheel bundle of the underlying Python source code \n \n \n Push that new wheel up to PyPI: https://pypi.org/project/datasette/ \n \n \n To deploy new releases you will need to have push access to the main Datasette GitHub repository. \n Datasette follows Semantic Versioning : \n major.minor.patch \n We increment major for backwards-incompatible releases. Datasette is currently pre-1.0 so the major version is always 0 . \n We increment minor for new features. \n We increment patch for bugfix releases. \n Alpha and beta releases may have an additional a0 or b0 suffix - the integer component will be incremented with each subsequent alpha or beta. \n To release a new version, first create a commit that updates the version number in datasette/version.py and the changelog with highlights of the new version. An example commit can be seen here : \n # Update changelog\ngit commit -m \" Release 0.51a1\n\nRefs #1056, #1039, #998, #1045, #1033, #1036, #1034, #976, #1057, #1058, #1053, #1064, #1066\" -a\ngit push \n Referencing the issues that are part of the release in the commit message ensures the name of the release shows up on those issue pages, e.g. here . \n You can generate the list of issue references for a specific release by copying and pasting text from the release notes or GitHub changes-since-last-release view into this Extract issue numbers from pasted text tool. \n To create the tag for the release, create a new release on GitHub matching the new version number. You can convert the release notes to Markdown by copying and pasting the rendered HTML into this Paste to Markdown tool . \n Finally, post a news item about the release on datasette.io by editing the news.yaml file in that site's repository.", "sections_fts": 226, "rank": null} {"rowid": 197, "title": "Alpha and beta releases", "content": "Alpha and beta releases are published to preview upcoming features that may not yet be stable - in particular to preview new plugin hooks. \n You are welcome to try these out, but please be aware that details may change before the final release. \n Please join discussions on the issue tracker to share your thoughts and experiences with alpha and beta features that you try out.", "sections_fts": 226, "rank": null} {"rowid": 198, "title": "Releasing bug fixes from a branch", "content": "If it's necessary to publish a bug fix release without shipping new features that have landed on main a release branch can be used. \n Create it from the relevant last tagged release like so: \n git branch 0.52.x 0.52.4\ngit checkout 0.52.x \n Next cherry-pick the commits containing the bug fixes: \n git cherry-pick COMMIT \n Write the release notes in the branch, and update the version number in version.py . Then push the branch: \n git push -u origin 0.52.x \n Once the tests have completed, publish the release from that branch target using the GitHub Draft a new release form. \n Finally, cherry-pick the commit with the release notes and version number bump across to main : \n git checkout main\ngit cherry-pick COMMIT\ngit push", "sections_fts": 226, "rank": null} {"rowid": 199, "title": "Upgrading CodeMirror", "content": "Datasette bundles CodeMirror for the SQL editing interface, e.g. on this page .
Here are the steps for upgrading to a new version of CodeMirror: \n \n \n Download and extract latest CodeMirror zip file from https://codemirror.net/codemirror.zip \n \n \n Rename lib/codemirror.js to codemirror-5.57.0.js (using latest version number) \n \n \n Rename lib/codemirror.css to codemirror-5.57.0.css \n \n \n Rename mode/sql/sql.js to codemirror-5.57.0-sql.js \n \n \n Edit both JavaScript files to make the top license comment a /* */ block instead of multiple // lines \n \n \n Minify the JavaScript files like this: \n npx uglify-js codemirror-5.57.0.js -o codemirror-5.57.0.min.js --comments '/LICENSE/'\nnpx uglify-js codemirror-5.57.0-sql.js -o codemirror-5.57.0-sql.min.js --comments '/LICENSE/' \n \n \n Check that the LICENSE comment did indeed survive minification \n \n \n Minify the CSS file like this: \n npx clean-css-cli codemirror-5.57.0.css -o codemirror-5.57.0.min.css \n \n \n Edit the _codemirror.html template to reference the new files \n \n \n git rm the old files, git add the new files", "sections_fts": 226, "rank": null} {"rowid": 200, "title": "Custom pages and templates", "content": "Datasette provides a number of ways of customizing the way data is displayed.", "sections_fts": 226, "rank": null}