I’m planning on making my reading queues publicly available on this site. This is my worklog for that project.
Project Motivation
In the last while I’ve been trying to read more of the literature from my field: papers, posts, books. I’ve succeeded at the “reading more” part, but not in a very structured way: I’ve wound up with a bunch of annotated PDFs floating around various folders, without any useful metadata:
- What is this article? Title, author, venue, URI?
- What is it about?
- What is my relationship to the article? How did I find out about it? Did I like it?
I also have some UX barriers to reading how I want to. I discover a lot of articles while on my phone, but it’s not my preferred reading surface. I have an e-reader that is pretty good for PDFs, but the “send-to” flow isn’t great - too many taps.
Finally, while I’m not very vocal on social media, I do find a lot of links there. If I’m reading something and think it’s good, I want to be able to pass it on. As I’m making this site a more primary presence on the web, it seems like this would be the place to share. (Yes - The Morning Paper did it first, and better!)
So that’s the outline:
- A “Send” target, to easily add items to the queue from ~wherever
- A database (loosely) of interesting things, with:
- Original metadata: Title, author(s), date; URI, DOI, ISBN; etc.
- Personal metadata:
- Discovery / read / reviewed dates
- Discovery credit (how did I find out about it?)
- Summary and/or commentary
- Applied tags
- A way to present that database on cceckman.com
- …with appropriate filtering; e.g. I may not want to publish an incomplete review
Sketching and resources
I’m aware of The Morning Paper, though I didn’t read it while it was running and haven’t gone through the archives…yet.
The Web Share Target API looks like a way to get “share to $site” behavior. I’ve already started using Hugo for the site itself. Hugo allows for custom front matter on each page; that’s a low-effort “database” (shared keys), and it might encourage me to write reviews, since each “entry” would correspond to a file.
If I handle the taxonomy correctly Hugo might even generate index pages. Hm…
So a sketch:
- Offer a web app that acts as a share target
- Authenticated on App Engine or via a VPN
- On the first request, immediately save what metadata is available
- Immediately resolve the link to check it’s valid
- …and to handle indirects: “share” a Twitter post linking a paper by using the Twitter link for “from” and the linked paper as the main article
- Save to a ~persistent buffer, e.g. a file
- After saving, display an “edit” page
- Manually, by submitting the form
- Stretch: “Autofill” button; resolve the target, get metadata, fill it in
- Apparently there are DOI metadata services available?
- On a cron, or after some idle period (e.g. 1h), commit to the website’s Git repository & push
  - No need to spam the history with each update
- `readings/<slug>.md`: metadata in front matter; summary and commentary in body
- Render in Hugo
- Already have automatic builds for the site; pushing to Git will update the website automatically
Front matter:
- Article data:
- Title
- Publication date
- Author(s); author link(s)
- DOI (if present)
- ISBN (if present)
- URL (if present)
- Personal data
- Queue dates:
- Enqueue, Read, Reviewed
- Latest date present signals status
- “Reviewed” date allows post-summary content to render. Alternative to Hugo “draft” bit
- Via
- (Display) name; link
- Tags
After front matter: the body of the page is the review. Set the template to render `.Content` iff the `Reviewed` date is present; otherwise, render only `.Summary`:
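That might look something like this - illustrative field names and a guess at the template logic, not a final schema:

```yaml
# readings/some-article.md front matter (hypothetical entry):
title: "An Example Article"
date: 2021-11-01          # publication date
authors: ["A. Author"]
url: "https://example.com/article"
added: 2021-11-20         # enqueue date
read: 2021-11-25
reviewed: 2021-11-28      # presence gates the review body
via:
  name: "a friend"
  url: "https://example.net"
tags: [systems, networking]
```

```html
<!-- layouts/readings/single.html (sketch): -->
{{ if .Params.reviewed }}
  {{ .Content }}
{{ else }}
  {{ .Summary }}
{{ end }}
```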
Getting started with web share target
Looking through the Web Share Target stuff…it seems like the bar for entry is pretty high. In order to be “installable”, according to this, the site must install a service worker - a client-side component that sits between the in-browser frame and the network. (I guess it’s a “controller” in the MVC sense?)
And a service worker can only be installed via HTTPS. A good restriction! But it will make local development a little harder. Luckily, web.dev has a guide to HTTPS for localhost; let’s start with that.
Dev server setup
The recommended option there, `mkcert`, comes to us from @FiloSottile of the Golang security team; not super likely it’s an exploit. Downloading it and running `sudo apt-get install libnss3-tools && mkcert -install` gets it installed within a container.
(For completeness: because I’m working in a container, I copied the CA certificate - found at `$(mkcert -CAROOT)/rootCA.pem` - out of the container and added it to the relevant browser. If my browser were from the same container as the shell, I think I wouldn’t need that step.)
Let’s hop into a new repository: https://github.com/cceckman/reading-list. I’ll make sure not to accidentally commit my `localhost` certificate by starting with this `.gitignore` file:
**/*.pem
In the shell, we’ll generate certificates for `localhost`:
∵ mkcert localhost
Created a new certificate valid for the following names 📜
- "localhost"
The certificate is at "./localhost.pem" and the key at "./localhost-key.pem" ✅
It will expire on 29 November 2023 🗓
∴ 0 reading-list:main…reading-list
∵
and use Golang to set up a basic HTTPS-capable webserver. For now, let’s just serve from disk, going from the `net/http` examples:
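Roughly this shape - the `net/http` example pointed at the mkcert-generated files, not necessarily the exact listing:

```go
// main.go: serve the current directory over HTTPS, using the
// mkcert-generated certificate and key alongside the binary.
package main

import (
	"log"
	"net/http"
)

func main() {
	http.Handle("/", http.FileServer(http.Dir(".")))
	log.Fatal(http.ListenAndServeTLS(":8080", "localhost.pem", "localhost-key.pem", nil))
}
```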
Let’s give it something to serve and see if it works - a plain `index.html` file:
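Something like:

```html
<!-- index.html: placeholder content; it just needs to load. -->
<!DOCTYPE html>
<html>
  <head><title>Reading List Admin</title></head>
  <body>Hello from the reading-list dev server.</body>
</html>
```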
Back in the shell, start the server:
∵ go mod init github.com/cceckman/reading-list
go: creating new go.mod: module github.com/cceckman/reading-list
go: to add module requirements and sums:
go mod tidy
∴ 0 reading-list:main…reading-list
∵ go run main.go
And we’re off:
Code so far: 2e38df
Manifesting a web app
OK; we have a webserver to hook into for development. Let’s get that web app.
We need a couple icons to meet the installability criteria; we can make them with ImageMagick:
∵ convert -size 192x192 xc:white icon-192.png
∴ 0 reading-list:main…reading-list
∵ convert -size 512x512 xc:white icon-512.png
∴ 0 reading-list:main…reading-list
We’ll keep the manifest minimal for now- just the parts required for install:
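In this vein - a guess at the required members, not the exact file:

```json
{
  "name": "Reading List Admin",
  "display": "standalone",
  "start_url": "",
  "icons": [
    { "src": "icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```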
And we’ll need to update the HTML to point at it:
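Presumably just a manifest link in the head:

```html
<link rel="manifest" href="manifest.json">
```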
To see if it works, I opened up (again) https://localhost:8080/reading/admin in my web browser (Chrome). That doesn’t say much on its own, but Chrome’s developer tools¹ have an “Application” tab that will help with debugging apps. In this case, they give only one warning:
No matching service worker detected. You may need to reload the page, or check that the scope of the service worker for the current page encloses the scope and start URL from the manifest.
That’s what we expect at this point - we haven’t interacted with the service worker API at all.
A misstep elided
I’ll note that I made a few mistakes before getting to the file contents posted above. I had used rooted paths in a number of places: `<link href="/manifest.json">` in the HTML, `"src": "/icon-192.png"` in the manifest. When a path starts with a `/`, the browser interprets it as relative to the host (`localhost:8080`) - so in this case, `localhost:8080/reading/admin/` pointed to `localhost:8080/manifest.json` as the manifest.
The “Application” page in the developer tools told me about these issues; it tried to load the manifest but reported a parse error, since the contents were `404 not found`. Similarly, the icons failed to load.
In both cases, removing the leading `/` from the path fixed the problem; the path is then relative to the resource, e.g. relative to `/reading/admin/`.
I also got the `scope` and `start_url` a little wrong - using `/reading/admin` as the scope and `/` as the start URL. This is actually the same problem; the developer tools indicated that the `start_url` (`/`, which resolves to `localhost:8080/`) is not within the scope (`/reading/admin`, which resolves to `localhost:8080/reading/admin`). Again, changing `start_url` to be relative (an empty path, `""`) fixed the issue.
Code so far: ac63aa
Service Worker
And now I need to understand service workers. Uh oh.
I did a little bit of Android development, Quite A While Ago. The app I worked on had two components: the “application”, with all of the menus, pages, etc; and the “service”, which ran in the background and kept a persistent network connection, even when the UI was closed. The “application” lifecycle was very much tied to user events: pressing the button in the launcher, navigating between pages, etc; the “service” lifecycle was decoupled, based on a different set of events and operations. At least in the app I worked on, the primary way the two halves communicated was by message-passing.
At a glance - the service worker architecture is similar. As normal, a web page can run JavaScript that interacts with the page contents (the DOM). That JavaScript gets on-click events, navigation events, etc. For “web apps”, that operation will likely include sending more requests to the server: “Save this email draft”, “get this folder’s contents”, etc.
Based on this handy MDN article: service workers provide a kind of background context in which to handle those requests-to-server. Once spun up, service workers can intercept requests within their scope - and can serve those from a service-worker-managed cache and/or from the network.
Other observations:
- Service workers are geared towards caching, not processing.
- The primary idea seems to be “make it possible to fulfill requests even when offline” - not “you get another thread to run stuff in”.
- While somewhat decoupled from the “view” lifetime - e.g. service workers can’t access the DOM - service workers are still subject to certain lifecycle constraints. In particular, ~everything in service workers must be asynchronous (Promise-based).
- Service workers get pretty strong guarantees about storage persistence.
- Unlike the HTTP cache - which users can and do clear ~arbitrarily - the service worker explicitly manages its full cache (unless uninstalled).
- And the storage does not necessarily need to be cache - it’s possible for a service worker to construct responses from a data store that is not an HTTP cache. I’m not going to try that, though.
- There’s one service worker instance per scope. If there’s a service worker registered for `cceckman.com/reading/admin`, and I have two tabs open (in the same browser) for `cceckman.com/reading/admin/edit/123` and `cceckman.com/reading/admin/add`, the same service worker will be used for both tabs.
My current working model: the service worker acts as an in-browser HTTP server for its scope. It can get all of the requests, and serve them locally - or proxy them to another server, possibly caching the results.
With all that in mind: This page has a not-really-minimal example of a service worker. I suspect we can do better for a basic example; let’s try.
We’ll need two JavaScript files: one to include² from the main page that registers the service worker, and one for the service worker itself.
We’ll base the “app” on the MDN listing:
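Roughly like the MDN example - registering the worker (I’m calling the file `sw.js` here) and logging the result:

```js
// app.js: register the service worker; the path is relative to this page.
if ("serviceWorker" in navigator) {
  navigator.serviceWorker
    .register("sw.js")
    .then((registration) => {
      console.log("Registration succeeded; got:", registration.scope);
    })
    .catch((err) => {
      console.error("Registration failed:", err);
    });
}
```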
and add it in the HTML:
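```html
<!-- Assuming the registration code above lives in app.js: -->
<script src="app.js"></script>
```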
Then the challenge: what’s the minimal service worker that actually works?
To be installable, the worker must have a `fetch` handler. But we don’t want to cache anything - we just want to pass through to the network - so we don’t need to handle `install` or `activate`.
I think that leaves us with a minimal service worker:
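Perhaps just a pass-through `fetch` handler:

```js
// sw.js: minimal service worker - forward every request to the network,
// no caching.
self.addEventListener("fetch", (event) => {
  event.respondWith(fetch(event.request));
});
```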
That does seem to get the service worker running; it shows up in developer tools. But the Application page still reports the above error.
Let’s try the suggestions from the error message:
You may need to reload the page…
Several times; it doesn’t help.
.. or check that the scope of the service worker for the current page encloses the scope and start URL from the manifest.
Hm; Devtools reports the start URL as “”, which is accurate if not precise. What if we change it?
That shows up in devtools:
Ah, and with an interesting outcome! If I click on `?pwa=true`, the browser sends us to `https://localhost:8080/reading/admin/manifest.json?pwa=true` - not where we want to wind up.
Let’s try a different start:
Still no luck. Checking the console, I see:
Registration succeeded; got: https://localhost:8080/reading/admin/
Huh - that ends with a slash, `admin/`, but our manifest does not. What if…
Success! Note the “install” icon on the right edge of the address bar - and the lack of errors in the devtools pane.
Wow- that was not obvious. Clearly I need to read up on web paths- my intuitions about what things are / aren’t equivalent are off.
Finally - I’d hoped we would be able to take the slash away: but it appears that visiting `/reading/admin` triggers a 301 redirect to `/reading/admin/` - I’m guessing as a property of Go’s handler rather than a specified canonicalization. That prevents us from registering in the “upper” scope, without the trailing slash; we can’t register `/admin` from its suffix `/admin/` without using the `service-worker-allowed` header.
Something for us to keep in mind going forward, but not something I feel the need to fix now.
It’s worth noting that the above display reflects the Chrome browser in Chrome OS - where ~all apps are web apps of one sort or another. I’ll also need to test this on other OSes and browsers to see how it looks.
Code so far: 9ef67a
Share target
Now that our target is “installable”, we can add a `share_target` stanza to the manifest. Given just the fields from that example, the Application devtool pane complains:
Manifest: Enctype should be set to either application/x-www-form-urlencoded or multipart/form-data. It currently defaults to application/x-www-form-urlencoded
So we’ll include that in our manifest:
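Something like this stanza, with the `enctype` made explicit; the parameter names follow the standard title/text/url example, and the `action` value here is a hypothetical endpoint within the app’s scope:

```json
{
  "share_target": {
    "action": "share",
    "method": "GET",
    "enctype": "application/x-www-form-urlencoded",
    "params": {
      "title": "title",
      "text": "text",
      "url": "url"
    }
  }
}
```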
and we’ll recognize that request specifically:
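A sketch of a handler that picks those fields out of the query string - names match the stanza above, not necessarily the repository’s code:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// handleShare recognizes a share request by its query parameters.
func handleShare(w http.ResponseWriter, r *http.Request) {
	q := r.URL.Query()
	log.Printf("share received: title=%q text=%q url=%q",
		q.Get("title"), q.Get("text"), q.Get("url"))
	fmt.Fprintln(w, "Got it!")
}
```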
OK; that works. I can install using the Chrome button; “Reading List Admin” shows up in my (Chrome OS) taskbar.
And when I hit “Share” on the Web Share Target page, I get:
If I click through, that has a bunch of associated data in the logs, as expected. Great!
Code so far: 0c77d8
The serving problem
Now we have to hit some tough questions: how will this actually get served?
- Serve on a local device; use a certificate from the network
  In an ideal world, I’d put it on one of the Raspberry Pis I have lying around, bind it to the Tailscale interface, and be done. The connection would be encrypted and authenticated node-to-node - enough for this use case.
  That kind of encryption isn’t good enough for the “web app” paradigm, though. The Web Share Target API requires that the server is accessed via HTTPS, i.e. that the browser is doing the encryption & authentication. This does make sense for most use cases - but “a private web app” is one that it breaks.
  Unfortunately, TLS certificates for services within Tailscale are still a work-in-progress at this point - so we’ll have to do something else.
- Serve on e.g. App Engine; authenticate using OAuth / OIDC
  At the moment cceckman.com is on App Engine, which can handle efficiently serving the static content (like this page) and dynamic content (like redirects). The Identity-Aware Proxy feature is sufficient to authenticate the expected user (me).
  But it also requires doing the whole OAuth flow and setting up a secondary service, so that cceckman.com itself doesn’t have a login-wall. It also means our options for storage are “something that App Engine can access”, probably GCS, and therefore consuming from there as well. I’d like to keep this project “local / self-service” if possible - or at least have that as a supported mode.
- Serve locally; add the CA to target devices
  This would let us use Something Else for access control - Tailscale, or even x509 client certificates.
  I’m wary of adding a CA to devices, though. For a local device where I’m “just” doing security-insensitive projects, for a little while, I’m OK; but for a device where I do my financials, or where I do work-for-money, I’m more wary. (Especially if I keep the CA installed for longer than “a little while”.)
  It also means that I can *only* access the service from devices within whatever ACL; e.g. if the server is on Tailscale, I need to hook my work device(s) up to it.
With “Flexibility” in mind, we can at least derive some design points from this:
- “Authentication and Authorization” should be substitutable. We’ll want something like the hook sketched after this list - one that we can pass whichever mechanism: “is the source interface `tailscale0`”, “is there a valid OAuth token for an authorized user”, “is the client using an acceptable x509 cert”, whatever.
- Transient storage should also be pluggable.
  I’ve been assuming we want to rate-limit additions to the main website repository to “not every edit to an entry”:
  - Keep the log spam low; I do use “git log” from time to time.
  - Keep the Git overhead low: any change to a file has overhead for the commit object, the new file contents, and every entry of the directory tree down to the changed blob. For small changes that could be a large blow-up factor.
  This is easy enough when there’s a local file system (“temporary directory”), but not in a managed (“cloud”) service. We have other options when on a managed service; we should try to abstract that away from the core logic.
- We may not do “save changes” and “flush persistent” in the same process.
  If we’re running a continuous server, we can kinda think of “transient state” as an extension of the memory of the server - it’s persisted in case of a crash, but not actually shared. (We could make it a container-local directory, for instance - I would guess systemd has some magic for that.)
  If we are potentially running “save changes” from a different process, we need the transient state to live somewhere both processes can reach.
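One possible shape for that substitutable auth check - hypothetical names, not the repository’s:

```go
package main

import "net/http"

// Authorizer reports whether a request is allowed, by whatever mechanism:
// source interface, OAuth token, client certificate, etc.
type Authorizer interface {
	Allow(r *http.Request) bool
}

// Require wraps a handler, rejecting requests the Authorizer denies.
func Require(a Authorizer, h http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !a.Allow(r) {
			http.Error(w, "forbidden", http.StatusForbidden)
			return
		}
		h.ServeHTTP(w, r)
	})
}
```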
git-based persistence
A portable solution might be something like this: use Git for both “transient” and “permanent” state, but squash any “transient” changes into one “permanent” commit.
We might declare two branches of the same repository as “staged” and “live”.
- On add / edit / delete operations, we edit the “staged” branch, commit and push.
- On a “sync” operation, we rebase “staged” over “live”.
- On a “flush” operation, we “sync”; take all the diffs from “staged” and apply them to “live”; reset “staged” to “live”; and push both.
These branches could be on different remotes - e.g. with “staged” in a local filesystem for a self-hosted server, or with both on the same upstream (Github or whatever) for a managed server.
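With plain git, the flow might look like this - a sketch, assuming both branches live on the same remote:

```sh
git rebase live staged          # "sync": replay staged edits over live
git checkout live
git merge --squash staged       # take all the diffs from staged...
git commit -m "flush reading-list entries"   # ...as a single commit
git branch -f staged live       # reset staged to live
git push origin live
git push --force origin staged  # staged was rewritten, so force-push
```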
This example uses go-git to manipulate a repository in-memory. That’s almost what we want… but the key `merge` and `rebase` operations are missing as of this writing. Could we make do without? We’re only expecting “clean” merges - the equivalent of “apply all these” - so we don’t need the full `merge` functionality, just `diff` and `apply`. Alas, the latter is missing as well. With some work we could reconstruct it from `status` (“what files are modified in the work tree”) and the `diff` library underlying go-git’s `diff.Diff` function…
…but at that point, we’re doing a lot of work to implement `sync` and `flush` on the same endpoint as `add`. Really, only the “add” endpoint has to obey HTTPS rules; in principle, we can send the rest of the requests anywhere. We could even make our “share” endpoint “just” a redirect to a server served through some other mechanism.
Now there’s a thought! Provide web-app compatibility by serving an unauthenticated, HTTPS-protected redirect… to our Tailscale-reachable server, with local storage, shell-level `git` access, etc.
Let’s try it out, just redirecting to https://cceckman.com/reading-list with the same query parameters…
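The redirect handler is small - something like this, not the exact listing:

```go
package main

import "net/http"

// redirectShare forwards the share request - query string and all - to the
// persistent server.
func redirectShare(w http.ResponseWriter, r *http.Request) {
	target := "https://cceckman.com/reading-list?" + r.URL.RawQuery
	http.Redirect(w, r, target, http.StatusTemporaryRedirect)
}
```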
Again, use the “share” button here and…

(Screenshot: within the “Reading List” application window, an HTTP 404 error from cceckman.com.)
Success; this may be a viable strategy.
I’ll want to incorporate those handlers / contents / manifest into my main website at cceckman.com, but will leave the web-appy bits in the reading-list repository for reference.
Code so far: 161ff3
App Engine mapping
I tried a couple of ways to perform the mapping / setup. Eventually I settled on adding a new App Engine service, `reading-list-redirect`, to my existing project used for cceckman.com; setting up `reading.cceckman.com` to point to the App Engine app; and creating a `dispatch.yaml` file to route between them by domain name:
In the `reading-list` repository, I added an `app.yaml` file for the web-app portion:
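Shaped something like this - a minimal Go service definition with the redirect target injected via the environment, not the exact file:

```yaml
service: reading-list-redirect
runtime: go116
env_variables:
  # Replaced with the real target for actual delivery; see below.
  READING_LIST_SERVER: "https://example.com"
```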
For actual delivery, I’ll replace `READING_LIST_SERVER` with something else - e.g. a Tailscale IP.
Code so far: b34699
The persistent server
From here, I’m going to diverge a bit. Assume `READING_LIST_SERVER` in `app.yaml` points to some other address that is only accessible behind a login or a firewall; this means the “web app” at `reading.cceckman.com` will redirect to it on receiving a share request. (If you actually go to reading.cceckman.com, you may be able to find that address!)
From here on out, we’ll be describing the server that lives at that (redirect-target) address. We’ve implemented the hook to receive “share” events- now we’ll implement what to actually do with them.
Some thoughts:
- Should actually use POST and immediately add the link to the database, then present the “edit” pane
- “Autofill” button in edit pane; do typical resolutions, e.g. DOI to metadata, Twitter to “from” annotation + link from the post
- Metadata + summary edit from the web UI. But only the summary, not “review” content.
Noting that the “advanced” version of this would set up the web app as a true “app”: use the service worker to mux between the pages (served from some static content service) or a per-client configured backend. Then it’s cheap-to-free to host the web app, and everyone can have their own persistent server, without ever leaking “where” that server is (it’s only stored client-side). That’s more Javascript than I want to do at a first cut, though.
I tested https://reading.cceckman.com on my phone, and was able to get the “share-to” functionality working- just as described! So now we’re shooting for a minimum-viable-product: save shared links to local disk. That will let me close out some of my long-standing open tabs.
Data model
Eventually this is going to be a metadata block - the front matter of a Hugo page. I like using YAML for that, so we’ll be leaning on yaml.v2 for marshalling. We have three kinds of data in there:
- Data from Hugo that we do care about:
- title
- creation date/time (“enqueue date”)
- Data from Hugo that we don’t care about, but need to preserve
- Our own data (many fields, in a separate struct)
We’ll want some structure that extracts these appropriately. I know I had some bit of code that did partial-YAML decoding, preserving unknown fields - I’ll have to find or reconstruct that. (I think it actually required yaml.v3…)
`ParseFrontMatterAndContent` is what reads the front matter back out of storage - what we’ll want when updating it. Unfortunately that comes back as a `map[string]interface{}` rather than e.g. `[]byte` - which suggests the top level is decoded already, and we wouldn’t “just” be able to unmarshal by sending it through the `yaml` package.
It seems this “front matter” format is common to Jekyll and possibly other renderers as well, so, alternatives:
- gernest/front also deserializes to a `map[string]interface{}`, i.e. one layer of decoding rather than truly destructuring.
- ericaro/frontmatter deserializes to a `yaml.v2`-compatible object of your choice - though it takes / puts the body into a tagged `fm:"content"` field, which is a little off from the true data model. (Also it’s archived.)
So: setting that aside, what’s our data model? A first cut:
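A guess at the shape, matching the three kinds of data above - field names are illustrative, not the repository’s exact definitions:

```go
package entry

import "time"

// Entry is one reading-list item: original metadata plus personal metadata.
// Unknown front-matter fields from Hugo would be preserved separately.
type Entry struct {
	// Original metadata.
	Title   string   `yaml:"title"`
	Authors []string `yaml:"authors,omitempty"`
	URL     string   `yaml:"url,omitempty"`
	DOI     string   `yaml:"doi,omitempty"`
	ISBN    string   `yaml:"isbn,omitempty"`

	// Personal metadata.
	Added    time.Time `yaml:"added"`              // enqueue date
	Read     time.Time `yaml:"read,omitempty"`     // latest date signals status
	Reviewed time.Time `yaml:"reviewed,omitempty"` // gates the review body
	Via      string    `yaml:"via,omitempty"`      // discovery credit
	Tags     []string  `yaml:"tags,omitempty"`
}
```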
I’ll include some helpers and tests for these in the repository but will elide the listing here.
To split or not to split?
I looked at the code for ericaro/frontmatter and noticed it uses `strings.Split` to find the front-matter separator (`\n---\n`). Initially, this struck me as incorrect: what if a string within the front matter had that byte sequence? But when I tried to construct a test case, I realized that it should be sufficient to “just” find the first match of that pattern.
A review of the various multiline modes at https://yaml-multiline.info (great resource!) led me to conclude that it isn’t possible for a YAML document to have a string in a field containing the byte sequence `\n---\n`. Consider this document:
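A reconstruction of the kind of document in question - the `example` field uses a literal block scalar whose value is an empty line followed by `---`:

```yaml
example: |

  ---
```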
The field `example` contains the value `\n---\n`, but that delimiter doesn’t actually show up in the source. The string field’s contents must be indented, which means the first byte (`\n`) will not be adjacent to the second (`-`) in the file contents - even if they are adjacent in the value of the string.
Comment-preserving transformation
As I reviewed how to use the YAML library, I recalled why I went with v3 over v2 in the previous project. The question I had was whether a program could safely edit a YAML file that was also human-consumable. (Not a good idea - but I wanted to see if it was possible.)
The v3 version includes a `Node` type that preserves the syntax tree - including comments - so you can do transformations to the text while preserving the comments… if you’re very careful.
In this case, I’m not actually worried about comments - so a simple `map[string]interface{}` field tagged `inline` works.
Oh, CRUD
I spent a while today hacking together the “create” and “read” portions of the server. Some of it is the codec - YAML to `FrontMatter` and back - and shuffling those to disk; I’ve implemented `EntryManager` for this. The other part is the translation layer between HTTP methods (RESTful!) and the `EntryManager`.
I bet there’s a middleware / helper for this that I’m not aware of; something that takes an object storage:
and makes sure only valid HTTP requests reach it:
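Something with this shape, perhaps - an object-store interface plus a method mux (hypothetical names, not the repository’s):

```go
package server

import "net/http"

// Storage is the CRUD surface the HTTP layer talks to.
type Storage interface {
	Create(r *http.Request) error
	Read(w http.ResponseWriter, r *http.Request) error
	Update(r *http.Request) error
	Delete(r *http.Request) error
}

// Restful dispatches by HTTP method, rejecting anything unsupported.
func Restful(s Storage) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		var err error
		switch r.Method {
		case http.MethodPost:
			err = s.Create(r)
		case http.MethodGet:
			err = s.Read(w, r)
		case http.MethodPut:
			err = s.Update(r)
		case http.MethodDelete:
			err = s.Delete(r)
		default:
			http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
			return
		}
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
		}
	})
}
```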
As of this moment, I think I’ve plumbed through all the bits… but I apparently made a mistake in using the `io/fs` package: it’s read-only, so it doesn’t support writes. (I probably would have realized that if I’d gone ahead and written tests as well. See, this is why we write tests!)
This article suggests `afero` instead; I’ll want to go back and revamp with that in mind. But not tonight.
Code so far: fea41ad
Sharing on Windows; storage
I’ve mostly been developing this on Chrome OS, using the web.dev page to trigger the “Share” behavior. (It looks like there’s a flag to enable a desktop “share” button in Chrome, without the page itself triggering the action - I should go around and enable that!)
Today I was using a Windows machine, where (apparently) the share behavior is to open an OS dialog. Unfortunately Windows isn’t aware of my web-share-capable app, and so doesn’t offer an option.
For all the desktop / laptop platforms, it’s ~easy enough to copy-paste a link, as long as there’s a place to do so. So in 754ea5, I added a form to the web app main page, and tweaked it + the manifest so that they accomplish the same thing: a `POST` submission with a couple of fields. Along with the storage changes (using `afero`) - it works! From submitting the form, I got a file:
Packaging
To get this running anywhere other than the dev machine, we’ll want to do a little bit of packaging. I usually use the Debian Linux distribution, which means “a server” is most easily expressed as a systemd unit. We’ll set up the application and a `.service` file to run in that mode.
First things first: we should set up the application so that we can inject the environmental factors, i.e. storage and port. There’s a lot written on flags vs. environment variables vs. configuration files; for this application, having flags only in `main.go` seems the simplest way to go.
I’d initially thought `/var/spool/reading-list` would be the right place, per the Filesystem Hierarchy Standard; temporary entries are “data which is [sic] awaiting some kind of later processing”, namely a commit-and-push. However, the systemd configuration below doesn’t provide a `spool` path; in the interest of avoiding confusion I tweaked this to match.
Unit definition
This server needs a pretty small set of local privileges:
- RW access to a particular local directory
- Listen on a single port
- (Eventually) outbound network access (for `git push`)
systemd has a number of ways to lock down a unit; the man page lists them all. In looking for some of them I found this article on the `DynamicUser` feature. This is a quite nice sandboxing technique - I like it a lot more than setting up users / directories via packaging.
This post has some more guidance, and points to the `systemd-analyze` command as a way to audit once the unit is running. We’ll use that once we get it started.
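A sketch of the unit under those constraints - the binary path and flags are placeholders, not the repository’s exact file:

```ini
[Unit]
Description=reading-list server

[Service]
ExecStart=/usr/bin/reading-list --port=8080 --storage=/var/lib/reading-list
# DynamicUser allocates a transient user; StateDirectory grants the one
# writable directory (/var/lib/reading-list) without packaging-time setup.
DynamicUser=yes
StateDirectory=reading-list
NoNewPrivileges=yes
ProtectSystem=strict
ProtectHome=yes

[Install]
WantedBy=multi-user.target
```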
Code so far: 28db66
Debian package & release flow
I also have some redo rules from another project to wrap a Go binary as a `.deb` package, including building for multiple CPU architectures - useful, as I’m likely to run this on a Raspberry Pi at some point.
After porting them to this project, I realized I’ve been doing enough of this flow recently to want to “stamp it out” - cceckman/dpkg is my new template repository for this ~typical case, “build a binary, add a systemd unit, package it for Debian”. (And with automatic releasing via Github.)
Of course- this is a lot of lines of shell. Ultimately stealing from Tailscale’s example, I see nfpm as a less-shell-heavy way of generating ~the same structure. But the current formula seems to work, so I’ll stick to it for now.
Code so far: 2560ca
Targeting
Above, I mentioned tweaking the web-app component so that it can have a client-side-configurable server. That would make the “app” part generic to any hostname; targeting to a dev server or a “real” server is just a question of client-side configuration.
I spent some time poking at service workers today to make that happen. I got a lot from the MDN articles on service workers, as well as a couple of snippets from this post on communication.
I’m not too handy with Javascript and didn’t want to add the infrastructure for a TypeScript build, so the code is fairly rough. (Almost C-like - many raw functions.) There were a few “gotchas” that some Googling resolved.
The most surprising - which I’ll have to remember for the future - is the `for...in` vs. `for...of` syntax. `for...in` iterates over “properties” - in the case of an `Array`, that means “indices”. `for...of` iterates over “contained objects” - in the case of an `Array`, the actual objects in the array rather than their indices.
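The difference in one example:

```js
const items = ["a", "b"];
for (const x in items) console.log(x); // logs "0", "1" - the indices
for (const x of items) console.log(x); // logs "a", "b" - the values
```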
Trying to route this way also required updating the layout so that “/add” is not a separate endpoint on the server side. I used this MDN guide and associated documentation to wire up the “add” form to JS - so on the back-end we’re still sending `POST` to `/entries`.
Of course, this means we wind up in the difficult situation of trying to violate Cross-Origin Resource Sharing:
Rerouting to configured server localhost:8080 Request {method: "POST", url: "http://localhost:8080/entries", headers: Headers, destination: "", referrer: "about:client", …}
Access to fetch at 'http://localhost:8080/entries' from origin 'http://localhost:8081' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
Our service worker is scoped to `localhost:8081`, but we’re trying to make POST requests to some arbitrary other server (in this case, `localhost:8080`).
It looks like this should fall under the “simple request” CORS policy; that is, the browser will “just” make the `POST` request to `localhost:8080`.
The developer console indicates that the request is made, with `Origin: http://localhost:8081` (just like MDN says). This allows the server - our `reading-list` server, at `:8080` - to restrict access (e.g. deny the request) if the origin doesn’t match supported origins. (This only helps, of course, if the server trusts the browser’s origin reporting - it’s useful for preventing phishing and the like, though it’s not end-to-end credentialing.)
Our server doesn’t pay any attention to `Origin`; more pertinently, it doesn’t provide `Access-Control-Allow-Origin`. While the browser does get a response, it “withholds” it from the web app - the `:8081` scope - because `:8080` didn’t give `:8081` permission to look at the contents.
The nice solution here would be to put them in the same domain - but again, that would hit the HTTPS restriction.
Adding `Access-Control-Allow-Origin` to the `reading-list` server is the option that lets us get around both of these.
After handling that - and various futzing around with `Promise` chaining in JS - it seems to work as of c41e8c.
Local-izing
Sorry, not localizing in the sense of enabling multiple languages, measurement standards, etc.- in the sense of “not requiring a public server”.
As mentioned above, I put the main server on App Engine because that allowed us to get a public TLS certificate - which in turn enables the “install” flow and the “share-to-app” flow on mobile. Unfortunately it also meant splitting up serving between the part serving the “web app” and the bits with access to storage.
In the few weeks I’ve neglected this project (in favor of more immediately satisfying endeavors), Tailscale has put their TLS serving feature in beta. It seems to work for this use case - I’m writing this from a code-server instance on one machine, but “installed” on a Tailscale-connected laptop.
So, that’s the next milestone: combine the two parts into a single self-contained server.
As of 71ec142 (actually further up the chain), I’ve managed to do this. The server grabs Tailscale credentials issued via `tailscale cert` and uses them for HTTPS. That lets us operate as an “installed app”, even when it’s on a different host.
I’d like to try the tsnet version- if I understand correctly, that does all the Tailscale authentication, etc. within the process, rather than doing it externally. (That’s nice in that it means running at a lower privilege level- not having access to the host’s network settings.)
Single-page app
I’ve written a mockup of a single-page app version of the UI. So far, it has a `ListView` of `ListItem`s, where `ListItem`s reflect reading-list entries; mock data only for now, rather than getting it from the server. The other main view will be an edit view - an “add” should generally be the same.
This is entirely unnecessary; we could do all of the rendering, server-side, and present pages for each view. But that would skip my main learning goals for this project: client-side code. To get more experience with client-side stuff, I’m trying to stick to a more-typical frontend pattern (even when using the same server for “app” and “data”): the server “just” serves the app and an API, and the app itself does the wiring to the API.
Things I’ve learned about so far:
- Typescript! I’ve started going through some Execute Program lessons in TS and JS, and it’s all a lot less frightening now.
- Packaging browser code! I’m using esbuild as the maybe-least-frightening way to join files and generate source maps (hat tip to Julia Evans for the pointer).
  - I’m also using a bunch of redo rules to build everything
- Web Components! This is a neat API - genuinely custom elements, with various levels of filling-in. (A minimal example appears after this list.) I’m guessing this is what Vue et al. use under the hood? They just have a bunch of pre-built components - which I’m not ready to use without understanding the substrates.
- I’ve encountered some rough edges: if I use the web components API to instantiate the template for `my-custom-element` in its Shadow DOM, the slot does appear filled - my browser “knows” that it links back to the `<input name="slotted-value">` element. But when I submit the `<form>`, only one value is present - not the one from the `slot`.
- I’ve also developed more sympathy for “same code on client and server”. I’m expecting to use JSON for the `Entry` type, i.e. the key object shared between client and server; I’m not excited about keeping that in sync across languages. (I guess that’s what protocol buffers / friends are for… or this tool?)
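For reference, a custom element at its most minimal - illustrative only, not the reading-list’s actual components:

```js
class ReadingListItem extends HTMLElement {
  constructor() {
    super();
    // Custom elements get their own Shadow DOM; slots let callers fill it in.
    this.attachShadow({ mode: "open" }).innerHTML =
      `<li><slot name="title">(untitled)</slot></li>`;
  }
}
customElements.define("reading-list-item", ReadingListItem);
```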
1. You can open developer tools using `Ctrl+Shift+I` on Windows or Linux, `Cmd+Option+I` on Macs - or using the menu, “More tools > Developer tools”. ↩︎
2. We could have fewer files by putting the registration inline in the HTML file. If I understand correctly, though, we’ll want it installed on every page - so we’ll want to include it on multiple pages. Also, keeping the JavaScript in a JavaScript file just seems cleaner. ↩︎