The goals of this week
If you're an avid reader of these dev logs, you may have noticed this one is a bit later than usual. As you'll soon learn, this week I decided to pivot my focus onto public access control and security. My goal was to create a maintenance mode for my website, where guests can enter a password and view the website in read-only mode. Simple, right?
No. No it was not.
Fortunately, with a little bit of time-traveling, I don't need to tell you about all of the mistakes, debugging headaches, security vulnerabilities, and spaghetti code I had to get through before finalizing my solution. So let's dive into it.
Setting website status
Your website has three different access levels: public, private, and password protected. The mode your website is in dictates whether or not a regular visitor can access it. This value is set in Payload, but there's a problem. We technically need to check whether the website is in maintenance mode every single time a user navigates on our website. That means fetching the value from Payload via the REST API, which can be horribly slow. We could be adding 200-300ms of load time on every navigation, which is just unacceptable.
Caching strategy
Introducing Vercel Edge Config - a super-fast edge cache that returns results in under 15 milliseconds. With this cache, we can set the website access level in Payload and sync that value to the cache. Then, when a user visits your website, some middleware can check the site's access level in the cache without interrupting the user experience. The result of this check determines whether or not the user should be redirected to the Maintenance Mode page.
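To make the idea concrete, here is a minimal sketch of the middleware decision. The names (`resolveAccess`, `accessLevel`, the cookie check) are my own for illustration, not the real implementation; in actual middleware the level would come from `get()` in `@vercel/edge-config` and the redirect from `NextResponse`, so I've factored the decision itself into a pure function and left the wiring as comments.

```typescript
type AccessLevel = "public" | "private" | "password-protected";

// Pure decision: given the cached access level and whether this visitor
// already holds a verified guest session, do we let them through?
function resolveAccess(
  level: AccessLevel,
  hasGuestSession: boolean
): "allow" | "maintenance" {
  if (level === "public") return "allow"; // anyone may browse
  if (level === "password-protected" && hasGuestSession) return "allow"; // password already verified
  return "maintenance"; // private, or no guest session yet
}

// In middleware.ts this would plug in roughly like:
//   const level = await get<AccessLevel>("accessLevel"); // fast edge read
//   if (resolveAccess(level, hasGuestCookie(req)) === "maintenance") {
//     return NextResponse.redirect(new URL("/maintenance", req.url));
//   }
```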
Challenges of accessing private content
Interestingly enough, that was the easy part. Now we needed a way for users with a password to actually get access to the website. This particular problem sent me down a rabbit hole of security concerns, primarily because our application runs in a multi-tenant environment. We need to ensure that resources are accessed appropriately and by genuine users. Furthermore, we want an authentication strategy that restricts users to read-only mode - and only for public-facing documents. And lastly, we need to make sure that guests can only access a specific website.
Redesigning security
As I started working on my first iteration of the guest login, I came to the realization that I really didn't want to leave my API open-ended. I already had access control on every collection to stop potentially malicious users from reaching collections they shouldn't, but if I'm now creating ways for users to access hidden data, I need to be extra careful and secure my backend even more.
From this point on, the API is completely locked down. All requests must go through a proxy, and every request must include the target organization as well as some additional headers for authentication. With this change, I felt a lot more confident tackling guest access.
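As a rough sketch of what that proxy contract might look like: every request carries the target organization plus authentication headers. The header names and the helper below are invented for illustration, not the actual ones in use.

```typescript
// Build the headers every proxied request to the API must carry.
// Header names here are hypothetical.
function buildProxyHeaders(orgId: string, apiKey: string): Record<string, string> {
  return {
    "x-target-organization": orgId, // which tenant's data the request may touch
    authorization: `Bearer ${apiKey}`, // proves the request came through the proxy
    "content-type": "application/json",
  };
}

// A proxied request would then look roughly like:
//   fetch(`${PROXY_URL}/api/pages`, { headers: buildProxyHeaders(org, key) });
```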
Guest authentication
This is a sensitive topic, so I won't go into too much detail, but essentially, every organization has a guest account associated with it. This is not a fully fledged account like a dashboard user; it is more like a read-only access token with greatly restricted rights. Using the Access Control fields from Payload, guests are able to view draft and published content, but they cannot interact with it at all.
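The read-only shape of a guest can be sketched in the style of Payload's collection `access` functions. This is a simplification with assumed role names - real Payload access functions receive `{ req }` and may return a query constraint rather than a boolean - but the idea is the same: guests may read, and every write path is closed.

```typescript
type User = { role: "admin" | "editor" | "guest" } | null;

// Simplified access rules in the spirit of a Payload collection config:
// any authenticated user (including a guest) can read drafts and
// published docs, but only non-guest roles can write.
const access = {
  read: (user: User) => user !== null,
  create: (user: User) => user !== null && user.role !== "guest",
  update: (user: User) => user !== null && user.role !== "guest",
  delete: (user: User) => user !== null && user.role !== "guest",
};
```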
When a guest enters the correct password, they go through a fairly intense process to verify that their intent is genuine. I don't want to go into too much detail - but there's a reason it takes a little while between clicking "submit" and being redirected to the website proper. Because a guest account is associated with an organization, it only has read-only access to website content belonging to that same organization. This prevents the user from accessing content from another organization, whether on purpose or by accident.
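The tenant-isolation part of that rule is simple to express. Again, the names below are hypothetical: the guest's token carries the organization it was issued for, and every read is checked against the organization that owns the requested document.

```typescript
type GuestToken = { organization: string; readOnly: true };

// A guest may only read documents owned by the organization
// their token was issued for; cross-tenant reads are rejected.
function canRead(token: GuestToken, docOrganization: string): boolean {
  return token.organization === docOrganization;
}
```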
There is a penalty to pay for this kind of security, and it comes in the form of longer webpage load times. The user is effectively in draft mode, so the entire website is being generated on the fly. This is a small price to pay for what is otherwise a pretty slick convenience feature.
Wrapping up
So what have I learned this week? Website security is scary. Very scary. To make sure we are operating as safely as possible, we will be hiring a penetration tester to help us plug any holes in our website security before we start opening the system up to clients. Join us next week as we move on to the wonderful world of deployment, setting up our staging environment to test our authentication in the real world!