{"pageProps":{"data":{"posts":[{"__typename":"Post","id":"5eb8e6f55d7843004534cf77","title":"Quality Software","slug":"quality-software","updated_at":"2020-05-18T04:59:59.000-04:00","excerpt":"This post aims to outline the characteristics of software that we believe determine its fundamental quality.","feature_image":null,"html":"
This post is a complement to episode 346 of Design Details, Quality Software. It is also informed by the 2018 WWDC talk, The Qualities of Great Design.
Quality is subjective and hard to define. In software, it can be unclear why certain applications seem better than others. This post aims to outline the characteristics of software that we believe determine its fundamental quality. The following notes are from a conversation with Marshall Bock, so I'll be writing this post as a we/us/our.
It can be helpful to think of the characteristics of high quality software along two axes: users and developers, and attention and intention. These axes overlap, and drawing clear boundaries between them isn't always possible.
Design is in the details, so they say. It takes a lot of energy to pay attention to the details. This is usually because the details are subconsciously understood and are harder to justify in development planning. Shipping pixel-perfect applications takes time, and many of those little details will be absorbed by the user without them even knowing it. Does text have enough contrast to be visible in different lighting? Do animations play at sixty frames per second? Are colors used to communicate similar ideas? Are atomic objects consistent and clearly understood?
In modern software, especially on mobile devices, the details go deeper than visuals. Are interface elements clearly labeled for accessibility on screen readers or in VoiceOver? Are haptics used appropriately to signify events, warnings, or meaningful state changes? Having thoughtfully answered questions like these puts an application into a new tier of quality.
It's easy to build features: slap a button here, tuck a disclaimer there, and pin a tooltip on every new feature. It's not so easy to take those things away. It's a tough job to effectively educate people about new features or changes in the software. And it's even harder to ignore a loudly-requested feature for reasons that might not be easily understood by users.
Quality software knows when to stop adding features. It knows when something works well. And it isn't afraid to take things away, especially if a feature is no longer in service of a broader vision.
Quality software uses motion, iconography, and color with intent. Animation is in service of orienting users, communicating an idea, or guiding them towards an objective. Form is an expression of function; the drop shadow exists to communicate depth, to afford interactivity, to help a user distinguish what can be touched and what cannot, not because it looks nice, or because the screen needed more \"stuff\" on it.
People are busy and they have things to do. Software is most often a means to an end, not an end itself. Quality software understands this and respects a person's time and attention.
Quality software exists in the service of a human outcome, like saving time, earning money, or increasing productivity. These outcomes are the result of designers and developers talking to real humans, watching them use the software, and determining ways to remove complexity.
Quality software is fast. Data is cached. The first paint feels instant. Interactions respond in milliseconds. It makes optimistic guesses about common paths that users follow in order to pre-fetch and pre-render. When the software can't be fast—for example, when operating over a slow network—it communicates the status of ongoing work to a person.
Quality software understands its own role within the operating system and device. It takes advantage of available sensors and heuristics to provide a superior experience to a person, like a phone screen waking up when reached for.
Software can also get better with use. Typeahead, predictive search, and pre-rendering content based on a user's past experiences in the app all make the software feel more personal. For example, FaceID improves over time, accounting for changes in facial orientation, characteristics, and expressions.
In addition to respecting a person's attention, quality software respects their privacy. People understand what data is being collected, how it’s being used, and how to remove it. Quality software uses the minimum amount of personally-identifiable information necessary to complete a job (if any at all) and nothing more.
Quality software considers many contexts a person may be in. Are they walking? Distracted? Hands-free? Using a keyboard or tapping? Do they have a fast internet connection, or is the network dropping?
Quality software accounts for situations like these. It provides prominent, conveniently-placed actions for single-handed users on the go. It uses clear typographical hierarchy and color to make pages scannable at a glance. It adapts itself to any input mechanism: touch, keyboard, voice, pointer, controller, or screen reader. It can queue changes locally in the event of a network loss, and it can calmly communicate the impact of a degraded network connection.
Quality software understands people and how they navigate the world. For example, it knows that some people are colorblind, so it either removes commonly-indistinguishable colors from its palette or provides opt-in overrides to the system.
If it knows that people might be using pre-paid data plans, it considers the very real user cost of sending large payloads over the network. It offers things like \"data-saving modes\" to give people a choice about when to spend their money.
Software exists in the context of the world with changing dynamics, tastes, and crises. Locally, it exists in the context of its brand. A person rarely encounters a piece of software in isolation; there’s priming that’s already occurred through advertisements, landing pages, app store screenshots, and onboarding flows. Because of this, quality software understands the expectations of its users and works to exceed those expectations. A highly polished visual experience is expected to behave in a highly polished way. A highly polished brand is expected to produce high quality software.
Quality software is built for humans. Messy, distractible, impatient, and imperfect humans. Because it understands this, quality software is forgiving. Destructive actions are presented clearly and are easily undoable. If something breaks, the software provides a clear reason that non-technical people can understand. It doesn't patronize its users or make them feel dumb. Errors are never dead ends; invalid user inputs are clearly highlighted, and failed network requests are retried.
Like mis-timed foley or a smudge on the wall, we notice quality more often by the presence of imperfections than by their absence. For this reason, it can be hard to articulate why something is high quality. We can feel it. Like the threshold test for obscenity, \"I know it when I see it.\"
This leaves us with a question: how much should “effort exerted” be considered when we evaluate the quality of software? If you ate the worst slice of pizza you've ever had, would your disgust be lessened by knowing that the chef had worked oh so very hard to make it? Probably not.
Still, it's clear that quality software can’t be created without hard work. Or, in the most optimistic sense, one’s long history of hard work enables high quality to be more easily achieved in later projects. So the two are intertwined, but we don't feel that the work itself is a necessary component in determining the quality of the software itself. An A for Effort doesn’t ensure an A overall.
It can be disenchanting to look at the current software landscape. It feels like we're spinning in circles, solving the same problems that we've been solving for years. Computers get better hardware, so we build more resource-hungry software, and things never actually get faster.
The world has become a tangled mess along the way. There are hundreds of screen sizes, operating systems, browsers, settings, and modes that designers and engineers must account for. It's no wonder that software doesn’t feel much better today than it felt five years ago: we're stuck in a constant game of catch-up.
I predict that more startups will move away from a one-size-fits-all, must-scale-to-seven-billion-users mentality. Instead, we're seeing the proliferation of the independent creator. The person who builds software for a few thousand people, but in a way that is deeply personal and understands what's most important to their audience. More designers and developers are realizing that a healthy income and a happy life can exist with a thousand customers, not a billion.
We approached the conversation of software quality from the perspective of product design and user experience. Of course, there’s a different framing of the topic that focuses on the quality of the written software and not the final experience.
To read more about this, I’d recommend reading this post by Martin Fowler or this post by Jake Voytko. Quite often, quality internal software correlates to high quality user experience. We should strive for both.
"},{"__typename":"Post","id":"5dea5ce1295515003754d9e4","title":"Using Ghost as a Headless CMS with Next.js","slug":"using-ghost-headless-cms-next-js-to-create-a-fast-and-simple-blog","updated_at":"2020-05-13T12:11:07.000-04:00","excerpt":"Rebuilding my self-hosted blog with Next.js and Ghost as a headless CMS.","feature_image":"https://overthought.ghost.io/content/images/2019/12/ghost---next-1--1-.png","html":"I recently rebuilt most of my personal site with the main goal of providing a better surface area for writing. There are dozens of viable technology choices available right now to create a self-hosted blog and I spent way too long, and way too much energy trying to find the perfect one.
Spoiler alert: there is no perfect system. Every solution is a hack. Once I accepted this, I could focus on finding the least-hacky setup that would cover 90% of my needs. Ultimately, I ended up with Next.js on the front end, using Ghost as a headless CMS.
The Ghost API is pretty solid - the documentation is straightforward, and they even have a guide for working with Next.js.
To start, I added a small API file that can fetch data from Ghost:
import GhostContentAPI from \"@tryghost/content-api\";\n\nconst api = new GhostContentAPI({\n url: 'https://overthought.ghost.io',\n key: 'API_KEY',\n version: \"v3\"\n});\n\nexport async function getPosts() {\n return await api.posts\n .browse({\n limit: \"all\"\n })\n .catch(err => {\n console.error(err);\n });\n}\n\nexport async function getPostBySlug(slug) {\n return await api.posts\n .read({\n slug\n })\n .catch(err => {\n console.error(err);\n });\n}
We can then use these API calls to populate data into a page. Here's a simplified version of my src/pages/overthought/index.tsx file:
import * as React from 'react';\nimport Page from '../../components/Page';\nimport OverthoughtGrid from '../../components/OverthoughtGrid'\nimport { getPosts } from '../../data/ghost'\nimport { BlogPost } from '../../types'\n\ninterface Props {\n posts?: Array<BlogPost>\n}\n\nfunction Overthought({ posts }: Props) {\n return (\n <Page>\n <OverthoughtGrid posts={posts} />\n </Page>\n );\n}\n\nOverthought.getInitialProps = async ({ res }) => {\n if (res) {\n const cacheAge = 60 * 60 * 12;\n res.setHeader('Cache-Control', `public,s-maxage=${cacheAge}`);\n }\n const posts = await getPosts();\n return { posts: posts }\n}\n\nexport default Overthought
In the getInitialProps call, which runs on the server for the initial request, I decided that caching the entire page for 12 hours at a time was probably safe: I won't be publishing that frequently. It also keeps a spike in traffic from overloading the Ghost API.
It's a bit overkill for now, but one thing that I've been meaning to try is SWR. This package does some cool client-side work to provide data revalidation on refocus, retries on failure, polling, and client-side caching.
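To make that concrete, here's a rough sketch of a fetch configured with a few of those behaviors. The component and values below are made up for illustration; the option names come from SWR's documentation.
import * as React from 'react';\nimport useSWR from 'swr';\nimport { getPosts } from '../../data/ghost';\n\n// A hypothetical component, only here to illustrate SWR's options\nfunction LatestPosts() {\n  const { data: posts, error } = useSWR('/api/getPosts', getPosts, {\n    revalidateOnFocus: true, // refetch when the window regains focus\n    refreshInterval: 30000, // optional polling, in milliseconds\n    dedupingInterval: 2000, // collapse duplicate requests fired close together\n    shouldRetryOnError: true, // retry failed requests\n  });\n\n  if (error) return <p>Something went wrong.</p>;\n  if (!posts) return <p>Loading...</p>;\n\n  return (\n    <ul>\n      {posts.map((post) => (\n        <li key={post.id}>{post.title}</li>\n      ))}\n    </ul>\n  );\n}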
One key thing that I wanted to solve for was people navigating between the home page of my site and the /overthought route. These pages both fetch the same posts, so it'd be a waste to require the second fetch to resolve before rendering my list of posts.
Before SWR, I might have reached for a tool like React.useContext to provide some kind of global state wrapper that would keep track of any previously-fetched posts. But Context can get messy, and I hate adding hierarchy to my components.
SWR solves the problem by maintaining a client-side cache of data I've fetched, keyed by the route used for the request. When a user navigates from / to /overthought, SWR will serve stale data from the cache first and then initiate a new request to update that cache with the latest data from the API.
At the end of the day, the same number of network requests are being fired. But the user experience is better: the navigation will feel instant because there's no waiting for a new network request to Ghost to resolve. Here's how our page from above looks with SWR:
import * as React from 'react';\nimport useSWR from 'swr';\nimport Page from '../../components/Page';\nimport OverthoughtGrid from '../../components/OverthoughtGrid'\nimport { getPosts } from '../../data/ghost'\n\nfunction Overthought(props) {\n  const initialData = props.posts\n  const { data: posts } = useSWR('/api/getPosts', getPosts, { initialData })\n\n  return (\n    <Page>\n      <OverthoughtGrid posts={posts} />\n    </Page>\n  );\n}\n\nOverthought.getInitialProps = async ({ res }) => {\n  if (res) {\n    const cacheAge = 60 * 60 * 12;\n    res.setHeader('Cache-Control', `public,s-maxage=${cacheAge}`);\n  }\n  const posts = await getPosts();\n  return { posts: posts }\n}\n\nexport default Overthought
With the two added lines at the top of the function, we instantly get data served from the client-side cache. The cool thing about this setup is that if the user loads a page that is server-side rendered, SWR will receive initialData that was already fetched on the server, again creating the feeling of an instantaneous page load.
Again: this is overkill.
My one issue with Ghost is that they don't return a Markdown version of your posts. Instead, they only return a big string containing all of the HTML for your post. Rendering this HTML string can be a pain: Ghost has a lot of custom elements that they use for rich embeds, like videos, that I don't want to be ingesting.
So instead I was able to hack around this by using react-markdown in conjunction with unified, rehype-parse, rehype-remark, and remark-stringify. I found all of this to be a bit of a headache, and it's certainly one of the downsides of using Ghost as a content provider. I've reached out to the team to try and start a discussion about returning a raw Markdown field from the posts API.
Here's how the HTML processing works:
import * as React from 'react';\nimport unified from 'unified'\nimport parse from 'rehype-parse'\nimport rehype2remark from 'rehype-remark'\nimport stringify from 'remark-stringify'\nimport Markdown from 'react-markdown';\n\nfunction PostBody({ post }) {\n  // convert Ghost's HTML string into Markdown, then render it\n  const md = unified()\n    .use(parse)\n    .use(rehype2remark)\n    .use(stringify)\n    .processSync(post.html)\n    .toString()\n\n  return <Markdown>{md}</Markdown>\n}
Unfortunately I have more work to do to dig into the internals of how the HTML is being parsed - I noticed that it strips out things like alt tags on images, and entire iframes if I use video embeds.
My source files are here if you would like to dig around further - if you happen to know of solutions to these parsing woes, please let me know!
"},{"__typename":"Post","id":"5eb846155d7843004534cea4","title":"Just-for-me Authentication","slug":"just-for-me-authentication","updated_at":"2020-05-10T17:55:57.000-04:00","excerpt":"How adding just-for-me authentication cascaded into new ideas and possibilities for play.","feature_image":null,"html":"It's enjoyable to iterate on this site, in the spirit of building incrementally correct personal websites. Last week, when I wanted an easier way to update the metadata for my bookmarks page, I added authentication so that a page knows when I'm looking at it.
Once that worked, gears started turning. If every page on my site knows when I am looking at it, everything becomes mutable. Having a CMS is great, but when I want to fix a typo it can be a drag to spin up another tool, or navigate to another website, log in, make the change, save, redeploy, and onwards. If I spot the typo, I just want to...click it, fix it, and move on.
I can also start to progressively disclose little secret bits of UI. For example, it wouldn't be that hard to build a private subsection of /bookmarks just for myself.
To explore these ideas, I spent yesterday building an AMA page. It exposes a form where anyone can ask a question, but I'm the only one who can view them. I don't have to use a separate CMS or database to answer or edit questions. No, I just dynamically render the list of pending questions whenever I visit the page with lightweight tools for answering, editing, and deleting.
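A minimal sketch of how that page can render (the form markup and /api/ama endpoint here are hypothetical; useAuth is the cookie-based hook described in the authentication post further down this page):
// pages/ama.tsx - illustrative sketch, not the real implementation\n\nimport * as React from 'react'\nimport { useAuth } from '~/hooks/useAuth'\n\nfunction AMA({ questions }) {\n  const { isMe } = useAuth()\n  const pending = questions.filter((q) => !q.answer)\n\n  return (\n    <div>\n      {/* anyone can ask a question */}\n      <form method=\"post\" action=\"/api/ama\">\n        <textarea name=\"question\" placeholder=\"Ask me anything\" />\n        <button type=\"submit\">Ask</button>\n      </form>\n\n      {/* only I see the pending questions */}\n      {isMe && (\n        <ul>\n          {pending.map((q) => (\n            <li key={q.id}>{q.question}</li>\n          ))}\n        </ul>\n      )}\n    </div>\n  )\n}\n\nexport default AMA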
The system is extensible, too. For example, I added a small hook in the backend to send myself an email notification whenever someone asks a new AMA question (what kind of psychopath would check their personal website every day?). I could just as easily make this send me a text message, or maybe even a push notification in the future. It would be cool to be able to reply directly to that notification to answer it in real time.
Ultimately it's this control that frees me from the website itself. It's a playground.
What's next? Well, anything CRUD is a snap to build. It might be fun to work on a personal Instagram feed at /photos, or a personal Twitter at /notes. It might be fun to build a browser extension to bring some of the notification and editing functionality out of the website itself. Because this site exposes an API to the internet, all of this functionality can move out of the browser into a SwiftUI app, or maybe a macOS menu bar app.
It's fun to have a personal sandbox.
"},{"__typename":"Post","id":"5e63e21fb6a909003848a14d","title":"Product Design Portfolios","slug":"product-design-portfolios","updated_at":"2020-05-10T12:33:08.000-04:00","excerpt":"A living list of useful and inspiring product design portfolios.","feature_image":null,"html":"I've been maintaining this list of useful and inspiring product design portfolios for a few years. It's awesome to see people sharing their work experiences openly and putting their own personal touch on their website. As the list grew, people told me that it was a useful reference while creating their own portfolio.
I'll be keeping the list up to date over time, so please let me know if you find a broken link or if someone removed their portfolio!
My criteria are loose, but generally I look for individual product designers who share their work openly and put a personal touch on their site.
Here's the list. If I'm missing anyone, please drop me a note at the bottom of this post!
Did I miss someone? Let me know in the form below and I'll keep this post updated 🙏
Update: @chris_mrtn compiled a bunch of these people into a Twitter list.
"},{"__typename":"Post","id":"5eaf2d1028842300395db2af","title":"Using Cookies to Authenticate Next.js + Apollo GraphQL Requests","slug":"cookies-authenticate-next-js-apollo-graphql-requests","updated_at":"2020-05-03T19:50:20.000-04:00","excerpt":"In the spirit of over-complicating the hell out of my personal website, I spent time this weekend trying to solve one very small and seemingly-simple problem: can I make my website know when I am viewing it?","feature_image":null,"html":"In the spirit of over-complicating the hell out of my personal website, I spent time this weekend trying to solve one very small and seemingly-simple problem: can I make my statically-generated website know when I am viewing it?
I have a bookmarks page where I store helpful links. To add new links, I set up a workflow where I can text myself a url from anywhere. Here's the code to do this. New links get stored in Firebase, which triggers a cloud function to populate metadata for the url by scraping the web page. Here's the cloud function to do this. This flow is really great for saving links while I'm away from my computer.
But, when I'm on my laptop, two problems emerge:
First, I have to open https://brianlovin.com/bookmarks and paste a link.
Second, scraped pages often have <title> tags like {actually useful content about the page} · Site Name, and I don't want the Site Name included in my bookmarks list.
So what I want is: when /bookmarks is viewed, determine if I am the one viewing the page.
The hiccups came when I tried to figure out how this should work with GraphQL (which I use on the backend to stitch together multiple third-party API services - see code) and Next.js's recently-released Static Site Generation feature.
The /bookmarks route is statically generated at build time. This means that every initial page view will assume an unauthenticated render. So I'll need to check for authentication after the JavaScript rehydrates the client.
I also don't want accounts or a users record. This functionality is just for me. Firebase's authentication implementation was a pain, so I abandoned that path in favor of simple cookie authentication.
First, some useful information that I dug up while working on this problem:
Next.js API routes attach a cookies object to the http request.
A small middleware helper attaches a cookie helper function to all response objects in the backend. This will be used to set and nullify cookies.
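That middleware is the cookies helper from the Next.js api-routes-middleware example (the same file linked in the server code further below). Condensed, it wraps an API route handler and attaches a res.cookie function - a sketch:
// utils/cookies.js - condensed from the Next.js api-routes-middleware example\n\nimport { serialize } from 'cookie'\n\n// wrap a handler so every response gets a `cookie(name, value, options)` helper\nconst cookies = (handler) => (req, res) => {\n  res.cookie = (name, value, options = {}) => {\n    res.setHeader('Set-Cookie', serialize(name, String(value), options))\n  }\n\n  return handler(req, res)\n}\n\nexport default cookies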
The client side of this project ended up being quite complex. Remember:
/bookmarks should be statically generated at build time, always rendering a \"logged out\" view.
When /bookmarks is loaded, it needs to mount with a pre-populated ApolloProvider cache to have access to the mutation and query hooks that come with @apollo/client.
Fortunately, I found this comment in the Next.js discussion forum which explains how to implement a withApollo higher-order component that can instantiate itself with props from the static build phase.
I made some small modifications, but you can see the implementation here.
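The gist of that higher-order component, as a minimal sketch rather than my exact implementation (the apolloStaticCache prop matches the getStaticProps code below; the rest is illustrative):
// components/withApollo.tsx - a sketch, not the exact implementation\n\nimport * as React from 'react'\nimport { ApolloClient, ApolloProvider, HttpLink, InMemoryCache } from '@apollo/client'\n\nexport function withApollo(PageComponent) {\n  return function WithApollo({ apolloStaticCache, ...pageProps }) {\n    const client = React.useMemo(() => {\n      // restore whatever the static build extracted, if anything\n      const cache = new InMemoryCache().restore(apolloStaticCache || {})\n      return new ApolloClient({\n        link: new HttpLink({ uri: '/api/graphql', credentials: 'same-origin' }),\n        cache,\n      })\n    }, [apolloStaticCache])\n\n    return (\n      <ApolloProvider client={client}>\n        <PageComponent {...pageProps} />\n      </ApolloProvider>\n    )\n  }\n}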
Next, we need to instantiate an ApolloClient during build time in getStaticProps:
// graphql/api/index.ts\n\nimport { ApolloClient, HttpLink, InMemoryCache } from '@apollo/client'\n\nconst CLIENT_URL =\n  process.env.NODE_ENV === 'production'\n    ? 'https://brianlovin.com'\n    : 'http://localhost:3000'\n\nconst endpoint = `${CLIENT_URL}/api/graphql`\n\nconst link = new HttpLink({ uri: endpoint })\nconst cache = new InMemoryCache()\n\nexport async function getStaticApolloClient() {\n  return new ApolloClient({\n    link,\n    cache,\n  })\n}
Now, in any of our page routes we can use Apollo to fetch data at build time:
// pages/bookmarks.tsx\n\nimport { getStaticApolloClient } from '~/graphql/api'\nimport { gql } from '@apollo/client'\n\n// ... component up here, detailed later\n\nconst GET_BOOKMARKS = gql`\n query GetBookmarks {\n bookmarks {\n id\n title\n url\n }\n }\n`\n\nexport async function getStaticProps() {\n const client = await getStaticApolloClient()\n await client.query({ query: GET_BOOKMARKS })\n return {\n props: {\n // this hydrates the clientside Apollo cache in the `withApollo` HOC\n apolloStaticCache: client.cache.extract(),\n },\n }\n}
Because I'll want to add new links to my bookmarks from many devices, I'll need some way to programmatically set a cookie in the browser by \"logging in.\"
The flow should be:
I visit a login page and submit a password, which fires a login mutation.
The login mutation resolver decides whether or not the password is correct. If it isn't, it rejects the request. If the password is correct, it sets a cookie on the response header and returns true.
Before I can do any of this, I'll need to ensure that my GraphQL mutations have access to cookies and a response object. We can add this information to the GraphQL context object in the server constructor:
// pages/api/graphql/index.ts\n\n// https://github.com/zeit/next.js/blob/master/examples/api-routes-middleware/utils/cookies.js\nimport cookies from './path/to/cookieHelper'\nimport typeDefs from './path/to/typeDefs'\nimport resolvers from './path/to/resolvers'\nimport { ApolloServer } from 'apollo-server-micro'\nimport Cryptr from 'cryptr'\n\nfunction isAuthenticated(req) {\n  // I use a cookie called 'session'\n  const { session } = req?.cookies ?? {}\n\n  // Cryptr requires a minimum length of 32 for any signing\n  if (!session || session.length < 32) {\n    return false\n  }\n\n  const secret = process.env.PASSWORD_TOKEN\n  const validated = process.env.PASSWORD\n  const cryptr = new Cryptr(secret)\n  const decrypted = cryptr.decrypt(session)\n  return decrypted === validated\n}\n\nfunction context(ctx) {\n  return {\n    // expose the cookie helper in the GraphQL context object\n    cookie: ctx.res.cookie,\n    // allow queries and mutations to look for an `isMe` boolean in the context object\n    isMe: isAuthenticated(ctx.req),\n  }\n}\n\nconst apolloServer = new ApolloServer({\n  typeDefs,\n  resolvers,\n  context,\n})\n\nexport const config = {\n  api: {\n    bodyParser: false, // required for Next.js to play nicely with GraphQL request bodies\n  },\n}\n\nconst handler = apolloServer.createHandler({ path: '/api/graphql' })\n\n// attach cookie helpers to all response objects\nexport default cookies(handler)
The mutation:
// graphql/mutations/auth.ts\n\nimport { gql } from '@apollo/client'\n\nexport const LOGIN = gql`\n mutation login($password: String!) {\n login(password: $password)\n }\n`
The resolver:
// graphql/resolvers/mutations/login.ts\n\nimport Cryptr from 'cryptr'\n\nexport function login(_, { password }, ctx) {\n const { cookie } = ctx\n\n const validator = process.env.PASSWORD\n if (password !== validator) return false\n\n const secret = process.env.PASSWORD_TOKEN\n const cryptr = new Cryptr(secret)\n const encrypted = cryptr.encrypt(password)\n\n // the password is correct, set a cookie on the response\n cookie('session', encrypted, {\n // cookie is valid for all subpaths of my domain\n path: '/',\n // this cookie won't be readable by the browser\n httpOnly: true,\n // and won't be usable outside of my domain\n sameSite: 'strict',\n })\n\n // tell the mutation that login was successful\n return true\n }
Next, let's log in from the client:
// pages/login.tsx\n\nimport * as React from 'react'\nimport { useRouter } from 'next/router'\nimport { useMutation } from '@apollo/client'\nimport { LOGIN } from '~/graphql/mutations/auth.ts'\nimport { withApollo } from '~/components/withApollo'\n\nfunction Login() {\n const router = useRouter()\n const [password, setPassword] = React.useState('')\n\n const [handleLogin] = useMutation(LOGIN, {\n variables: { password },\n onCompleted: (data) => data.login && router.push('/'),\n })\n\n function onSubmit(e) {\n e.preventDefault()\n handleLogin()\n }\n\n return (\n <form onSubmit={onSubmit}>\n <input\n type=\"password\"\n placeholder=\"password\"\n onChange={(e) => setPassword(e.target.value)}\n />\n </form>\n )\n}\n\n// remember that withApollo wraps our component in an ApolloProvider, giving us access to use the `useMutation` and `useQuery` hooks in our component.\nexport default withApollo(Login)
So our flow should now work: I visit the login page, enter my password, the login mutation fires, and on success a session cookie is set and I'm redirected back home.
Okay, so now I have a signed cookie on my browser which will be used in all future requests to verify my identity. The next step is to provide the client with some kind of isMe boolean that can be fetched from anywhere. We can write a small GraphQL query to provide this information:
// graphql/queries/isMe.ts\n\nimport { gql } from '@apollo/client'\n\nexport const IS_ME = gql`\n query IsMe {\n isMe\n }\n`
Remember, we've already written an isMe helper into our GraphQL context object, so we can return that value in our resolver:
// graphql/resolvers/isMe.ts\n\nexport function isMe(_, __, { isMe }) {\n return isMe\n}
Next, let's write our GraphQL query on the client to find out if it's me viewing the page:
// src/hooks/useAuth.tsx\n\nimport { IS_ME } from '~/graphql/queries/isMe.ts'\nimport { useQuery } from '@apollo/client'\n\nexport function useAuth() {\n const { data } = useQuery(IS_ME)\n\n return {\n isMe: data && data.isMe,\n }\n}
With this helper hook, we can now start checking for isMe anywhere in the client:
// src/pages/bookmarks.tsx\n\nimport * as React from 'react'\nimport { useQuery } from '@apollo/client'\nimport BookmarksList from '~/components/Bookmarks'\nimport { GET_BOOKMARKS } from '~/graphql/queries'\nimport { useAuth } from '~/hooks/useAuth'\nimport { getStaticApolloClient } from '~/graphql/api'\nimport { withApollo } from '~/components/withApollo'\nimport AddBookmark from '~/components/AddBookmark'\n\nfunction Bookmarks() {\n  // cache-and-network is used because after I add a new bookmark, other people will still be\n  // seeing the statically-served HTML created at build time. The page renders _instantly_ from\n  // the cache, and the client kicks off a network request to ensure it has the latest bookmarks.\n  const { data } = useQuery(GET_BOOKMARKS, { fetchPolicy: 'cache-and-network' })\n  const bookmarks = data?.bookmarks\n  const { isMe } = useAuth()\n\n  return (\n    <div>\n      <h1>Bookmarks</h1>\n      {isMe && <AddBookmark />}\n      {bookmarks && <BookmarksList bookmarks={bookmarks} />}\n    </div>\n  )\n}\n\nexport async function getStaticProps() {\n  const client = await getStaticApolloClient()\n  await client.query({ query: GET_BOOKMARKS })\n  return {\n    props: {\n      apolloStaticCache: client.cache.extract(),\n    },\n  }\n}\n\nexport default withApollo(Bookmarks)
Okay, so now I can progressively disclose UI on the client once the site knows it's me. But because my GraphQL endpoint is exposed to the internet, we'll need to make sure that random people can't write their own POSTs to maliciously save bookmarks.
Here's the mutation resolver on the backend: it checks the isMe flag set in the context object, does some input validation, and then persists the bookmark.
// graphql/resolvers/mutations/bookmarks.ts\n\nimport { URL } from 'url'\nimport { AuthenticationError, UserInputError } from 'apollo-server-micro'\nimport firebase from '~/graphql/api/firebase'\nimport getBookmarkMetaData from './getBookmarkMetaData'\n\nfunction isValidUrl(string) {\n try {\n new URL(string)\n return true\n } catch (err) {\n return false\n }\n}\n\nexport async function addBookmark(_, { url }, { isMe }) {\n if (!isMe) throw new AuthenticationError('You must be logged in')\n if (!isValidUrl(url)) throw new UserInputError('URL was invalid')\n\n const metadata = await getBookmarkMetaData(url)\n\n const id = await firebase\n .collection('bookmarks')\n .add({\n createdAt: new Date(),\n ...metadata,\n })\n .then(({ id }) => id)\n\n return await firebase\n .collection('bookmarks')\n .doc(id)\n .get()\n .then((doc) => doc.data())\n .then((res) => ({ ...res, id }))\n}
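The getBookmarkMetaData helper imported above is part of the scraping code linked earlier; purely as an illustration of the idea (this sketch only grabs the page's <title>, which is an assumption about what the real function does):
// getBookmarkMetaData.ts - illustrative sketch only\n\nimport fetch from 'isomorphic-unfetch'\n\nexport default async function getBookmarkMetaData(url: string) {\n  const res = await fetch(url)\n  const html = await res.text()\n\n  // naive title extraction; the real cloud function does proper scraping\n  const match = html.match(new RegExp('<title[^>]*>([^<]*)</title>', 'i'))\n  const title = match ? match[1].trim() : url\n\n  return { url, title }\n}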
This is all a bit...complicated, to say the least. But when it all works, it actually works quite well! And as I incrementally add more mutation types, it should all Just Work™.
At the end of the day, the site gets all the benefits of super-fast initial page loads thanks to static generation at build time, with all the downstream client side functionality of a regular React application.
I hope the pseudocode above will help unblock anyone who is following a similar path as me, but just in case, here's the full pull request containing all the changes that eventually made this work. You'll notice I spent some time hacking in automatic type generation and hook generation using GraphQL Code Generator, and added some polish to the overall experience (like a /logout page which clears the cookie, in case I'm on a device I don't own).
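Clearing the cookie works the same way in reverse: a resolver can reuse the cookie helper from the GraphQL context to overwrite the session with an expired value. A minimal sketch (the mutation name and details here are assumptions):
// graphql/resolvers/mutations/logout.ts - a sketch\n\nexport function logout(_, __, { cookie }) {\n  // overwrite the session cookie with one that is already expired\n  cookie('session', '', {\n    path: '/',\n    httpOnly: true,\n    sameSite: 'strict',\n    expires: new Date(0),\n  })\n\n  return true\n}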
Please don't hesitate to reach out with questions, I'd love to help! Otherwise, the Next.js discussions have been a fantastic resource for finding solutions to a lot of common problems.
Good luck!
"}],"episodes":[{"__typename":"Episode","id":"461f9847-6231-4d32-b0fd-ba864c6366b0","description":"This week, we share some tips for getting unstuck when working on complicated design problems. We also share our home screen organization philosophies in The Sidebar, catch up on Tweets, and share our cool things as always.","legacy_id":null,"long_description":null,"published_at":"2020-05-26T20:51:00.000000-07:00","status":"published","title":"348: Getting Unstuck","token":"9ae0646c"},{"__typename":"Episode","id":"6dc60eeb-29b5-4375-a349-4126df4484e8","description":"This week, we talk about how to overcome skill gaps. What should you do if you are bad at visual design? What if you can't make icons? Should you play into strengths or develop upon your weaknesses? We explore these topics, and more!","legacy_id":null,"long_description":null,"published_at":"2020-05-20T05:00:00.000000-07:00","status":"published","title":"347: Overcoming Skill Gaps","token":"0a523f70"},{"__typename":"Episode","id":"c6913c00-ec02-4caf-96b0-615782d43ea0","description":"This week, we discuss the characteristics of high quality software. We do our best to organize and outline things to pay attention to that will help you ship better software. This, plus a Sidebar discussing how to handle negative feedback from customers, and cool things as always!","legacy_id":null,"long_description":null,"published_at":"2020-05-13T05:00:00.000000-07:00","status":"published","title":"346: Quality Software","token":"94edbd03"},{"__typename":"Episode","id":"3f3140fc-a02d-4e8e-bfea-1104c7621be1","description":"This week, we discuss what it means to have and develop taste. From software to fashion, styles change, trends come and go, so what does it mean to be tasteful? In The Sidebar, we discuss the hacker mindset and building for an audience of one. This, plus cool things like a handy iPhone app and a killer new VR game!","legacy_id":null,"long_description":null,"published_at":"2020-05-06T05:00:00.000000-07:00","status":"published","title":"345: Developing Taste","token":"3d318a0d"},{"__typename":"Episode","id":"5936a770-16ef-4f51-9c97-5934487f46d3","description":"This week, we try to figure out the right time to give up on a fight when collaborating with stakeholders who have different opinions and priorities. We also cover a lot of feedback this week, discuss new design resources in The Sidebar, and share our cool things of the week.","legacy_id":null,"long_description":null,"published_at":"2020-04-29T05:00:00.000000-07:00","status":"published","title":"344: Knowing When to Give Up a Fight","token":"8dbe2294"}]},"apolloStaticCache":{"Post:5eb8e6f55d7843004534cf77":{"id":"5eb8e6f55d7843004534cf77","__typename":"Post","title":"Quality Software","slug":"quality-software","updated_at":"2020-05-18T04:59:59.000-04:00","excerpt":"This post aims to outline the characteristics of software that we believe determine its fundamental quality.","feature_image":null,"html":"This post is a complement to episode 346 of Design Details, Quality Software. It is also informed by the 2018 WWDC talk, The Qualities of Great Design.
Quality is subjective and hard to define. In software, it can be unclear why certain applications seem better than others. This post aims to outline the characteristics of software that we believe determine its fundamental quality. The following notes are from a conversation with Marshall Bock, so I'll be writing this post as a we/us/our.
It can be helpful to think of the characteristics of high quality software along the axis of users and developers and attention and intention. These axis overlap, and drawing clear boundaries between them isn't always possible.
Design is in the details, so they say. It takes a lot of energy to pay attention to the details. This is usually because the details are subconsciously understood and are harder to justify in development planning. Shipping pixel-perfect applications takes time, and many of those little details will be absorbed by the user without them even knowing it. Does text have enough contrast to be visible in different lighting? Do animations play at sixty frames per second? Are colors used to communicate similar ideas? Are atomic objects consistent and clearly understood?
In modern software, especially on mobile devices, the details go deeper than visuals. Are interface elements clearly labeled for accessibility on screen readers or in VoiceOver? Are haptics used appropriately to signify events, warnings, or meaningful state changes? Having thoughtfully answered questions like these puts an application into a new tier of quality.
It's easy to build features: slap a button here, tuck a disclaimer there, and pin a tooltip on every new feature. It's not so easy to take those things away. It's a tough job to effectively educate people about new features or changes in the software. And it's even harder to ignore a loudly-requested feature for reasons that might not be easily understood by users.
Quality software knows when to stop adding features. It knows when something works well. And it isn't afraid to take things away, especially if a feature is no longer in service of a broader vision.
Quality software uses motion, iconography, and color with intent. Animation is in service of orienting users, communicating an idea, or guiding them towards an objective. Form is an expression of function; the drop shadow exists to communicate depth, to afford interactivity, to help a user distinguish what can be touched and what cannot, not because it looks nice, or because the screen needed more \"stuff\" on it.
People are busy and they have things to do. Software is most often a means to an end, not an end itself. Quality software understands this and respects a person's time and attention.
Quality software exists in the service of a human outcome, like saving time, earning money, or increasing productivity. These outcomes are the result of designers and developers talking to real humans, watching them use the software, and determining ways to remove complexity.
Quality software is fast. Data is cached. The first paint feels instant. Interactions respond in milliseconds. It makes optimistic guesses about common paths that users follow in order to pre-fetch and pre-render. When the software can't be fast—for example, when operating over a slow network—it communicates the status of ongoing work to a person.
Quality software understands its own role within the operating system and device. It takes advantage of available sensors and heuristics to provide a superior experience to a person, like a phone screen waking up when reached for.
Software can also get better with use. Typeahead, predictive search, and pre-rendering content based on a user's past experiences in the app all make the software feel more personal. For example, FaceID improves over time, accounting for changes in facial orientation, characteristics, and expressions.
In addition to respecting a person's attention, quality software respects their privacy. People understand what data is being collected, how it’s being used, and how to remove it. Quality software uses the minimum amount of personally-identifiable information necessary to complete a job (if any at all) and nothing more.
Quality software considers many contexts a person may be in. Are they walking? Distracted? Hands-free? Using a keyboard or tapping? Do they have a fast internet connection, or is the network dropping?
Quality software accounts for situations like these. It provides prominent, conveniently-placed actions for single-handed users on the go. It uses clear typographical hierarchy and color to make pages scannable at a glance. It adapts itself to any input mechanism: touch, keyboard, voice, pointer, controller, or screen reader. It can queue changes locally in the event of a network loss, and it can calmly communicate the impact of a degraded network connection.
Quality software understands people and how they navigate the world. For example, it knows that some people are colorblind, so it either removes commonly-indistinguishable colors from its palette or provides opt-in overrides to the system.
If it knows that people might be using pre-paid data plans, it considers the very real user cost of sending large payloads over the network. It offers things like\"data-saving modes\" to give people a choice about when to spend their money.
Software exists in the context of the world with changing dynamics, tastes, and crises. Locally, it exists in the context of its brand. A person rarely encounters a piece of software in isolation; there’s priming that’s already occurred through advertisements, landing pages, app store screenshots, and onboarding flows. Because of this, quality software understands the expectations of its users and works to exceed those expectations. A highly polished visual experience is expected to behave in a highly polished way. A highly polished brand is expected to produce high quality software.
Quality software is built for humans. Messy, distractible, impatient, and imperfect humans. Because it understands this, quality software is forgiving. Destructive actions are presented clearly and are easily undoable. If something breaks, the software provides a clear reason that non-technical people can understand. It doesn't patronize its users or make them feel dumb. Errors are never dead ends; invalid user inputs are clearly highlighted, and failed network requests are retried.
Like mis-timed foley or a smudge on the wall, we notice quality more often by the presence of imperfections than by their absence. For this reason, it can be hard to articulate why something is high quality. We can feel it. Like the threshold test for obscenity,\"I know it when I see it.\"
This leaves us with a question: how much should “effort exerted” be considered when we evaluate the quality of software? If you ate the worst slice of pizza you've ever had, would your disgust be lessened by knowing that the chef had worked oh so very hard to make it? Probably not.
Still, it's clear that quality software can’t be created without hard work. Or, in the most optimistic sense, one’s long history of hard work enables high quality to be more easily achieved in later projects. So the two are intertwined, but we don't feel that the work itself is a necessary component in determining the quality of the software itself. An A for Effort doesn’t ensure an A overall.
It can be disenchanting to look at the current software landscape. It feels like we're spinning in circles, solving the same problems that we've been solving for years. Computers get better hardware, so we build more resource-hungry software, and things never actually get faster.
The world has become a tangled mess along the way. There are hundreds of screen sizes, operating systems, browsers, settings, and modes that designers and engineers must account for. It's no wonder that software doesn’t feel much better today than it felt five years ago: we're stuck in a constant game of catch-up.
I predict that more startups will move away from a one-size-fits-all, must-scale-to-seven-billion-users mentality. Instead, we're seeing the proliferation of the independent creator. The person who builds software for a few thousand people, but in a way that is deeply personal and understands what's most important to their audience. More designers and developers are realizing that a healthy income and a happy life can exist with a thousand customers, not a billion.
We approached the conversation of software quality from the perspective of product design and user experience. Of course, there’s a different framing of the topic that focuses on the quality of the written software and not the final experience.
To read more about this, I’d recommend reading this post by Martin Fowler or this post by Jake Voytko. Quite often, quality internal software correlates to high quality user experience. We should strive for both.
"},"Post:5dea5ce1295515003754d9e4":{"id":"5dea5ce1295515003754d9e4","__typename":"Post","title":"Using Ghost as a Headless CMS with Next.js","slug":"using-ghost-headless-cms-next-js-to-create-a-fast-and-simple-blog","updated_at":"2020-05-13T12:11:07.000-04:00","excerpt":"Rebuilding my self-hosted blog with Next.js and Ghost as a headless CMS.","feature_image":"https://overthought.ghost.io/content/images/2019/12/ghost---next-1--1-.png","html":"I recently rebuilt most of my personal site with the main goal of providing a better surface area for writing. There are dozens of viable technology choices available right now to create a self-hosted blog and I spent way too long, and way too much energy trying to find the perfect one.
Spoiler alert: there is no perfect system. Every solution is a hack. Once I accepted this, I could focus on finding the least-hacky setup that would cover 90% of my needs. Ultimately, I ended up building the following system:
The Ghost API is pretty solid - the documentation is straightforward, and they even have a guide for working with Next.js.
To start, I added a small API file that can fetch data from Ghost:
import GhostContentAPI from \"@tryghost/content-api\";\n\nconst api = new GhostContentAPI({\n url: 'https://overthought.ghost.io',\n key: 'API_KEY',\n version: \"v3\"\n});\n\nexport async function getPosts() {\n return await api.posts\n .browse({\n limit: \"all\"\n })\n .catch(err => {\n console.error(err);\n });\n}\n\nexport async function getPostBySlug(slug) {\n return await api.posts\n .read({\n slug\n })\n .catch(err => {\n console.error(err);\n });\n}
We can then use these API calls to populate data into a page. Here's a simplified version of my src/pages/overthought/index.tsx
file:
import * as React from 'react';\nimport Page from '../../components/Page';\nimport OverthoughtGrid from '../../components/OverthoughtGrid'\nimport { getPosts } from '../../data/ghost'\nimport { BlogPost } from '../../types'\n\ninterface Props {\n posts?: Array<BlogPost>\n}\n\nfunction Overthought({ posts }: Props) {\n return (\n <Page>\n <OverthoughtGrid posts={posts} />\n </Page>\n );\n}\n\nOverthought.getInitialProps = async ({ res }) => {\n if (res) {\n const cacheAge = 60 * 60 * 12;\n res.setHeader('Cache-Control', `public,s-maxage=${cacheAge}`);\n }\n const posts = await getPosts();\n return { posts: posts }\n}\n\nexport default Overthought
In the getInitialProps
call, which runs server-side, I decided that caching the entire page for 12 hours at a time was probably safe: I won't be publishing that frequently. This will improve performance if there is any spike in traffic that would otherwise overload the Ghost API.
It's a bit overkill for now, but one thing that I've been meaning to try is SWR. This package does some cool client-side work to provide data revalidation on refocus, retries on failure, polling, and client-side caching.
One key thing that I wanted to solve for was people navigating between the home page of my site and the /overthought
route. These pages both fetch the same posts, so it'd be a waste to require the second fetch to resolve before rendering my list of posts.
Before SWR, I might have reached for a tool like React.useContext
to provide some kind of global state wrapper that would keep track of any previously-fetched posts. But Context
can get messy, and I hate adding hierarchy to my components.
SWR solves the problem by maintaining a client-side cache of data I've fetched, keyed by the route used for the request. When a user navigates from /
to /overthought
, SWR will serve stale data from the cache first and then initiate a new request to update that cache with the latest data from the API.
At the end of the day, the same number of network requests are being fired. But the user experience is better: the navigation will feel instant because there's no waiting for a new network request to Ghost to resolve. Here's how our page from above looks with SWR:
import * as React from 'react';\nimport Page from '../../components/Page';\nimport OverthoughtGrid from '../../components/OverthoughtGrid'\nimport { getPosts } from '../../data/ghost'\n\nfunction Overthought({ posts }) {\n const initialData = props.posts\n const { data: posts } = useSWR('/api/getPosts', getPosts, { initialData })\n \n return (\n <Page>\n <OverthoughtGrid posts={posts} />\n </Page>\n );\n}\n\nOverthought.getInitialProps = async ({ res }) => {\n if (res) {\n const cacheAge = 60 * 60 * 12;\n res.setHeader('Cache-Control', `public,s-maxage=${cacheAge}`);\n }\n const posts = await getPosts();\n return { posts: posts }\n}\n\nexport default Overthought
With the two added lines at the top of the function, we instantly get data served from the client-side cache. The cool thing about this setup is that if the user loads a page that is server-side rendered, SWR will receive initialData
that was already fetched on the server, again creating the feeling of an instantaneous page load.
Again: this is overkill.
My one issue with Ghost is that they don't return a Markdown version of your posts. Instead, they only return a big string containing all of the HTML for your post. Rendering this HTML string can be a pain: Ghost has a lot of custom elements that they use for rich embeds, like videos, that I don't want to be ingesting.
So instead I was able to hack around this by using react-markdown in conjunction with unified, rehype-parse, rehype-remark, and remark-stringify. I found all of this to be a bit of a headache, and is certainly one of the downsides of using Ghost as a content provider. I've reached out to the team to try and start a discussion about returning a raw Markdown field from the posts API.
Here's how the HTML processing works:
import unified from 'unified'\nimport parse from 'rehype-parse'\nimport rehype2remark from 'rehype-remark'\nimport stringify from 'remark-stringify'\nimport Markdown from 'react-markdown';\n\nconst PostBody({ post }) {\n const md = unified()\n .use(parse)\n .use(rehype2remark)\n .use(stringify)\n .processSync(post.html)\n .toString()\n\n return <Markdown>{md}</Markdown>\n}
Unfortunately I have more work to do to dig into the internals of how the HTML is being parsed - I noticed that it strips out things like alt
tags on images, and entire iframe
s if I use video embeds.
My source files are here if you would like to dig around further - if you happen to know of solutions to these parsing woes, please let me know!
"},"Post:5eb846155d7843004534cea4":{"id":"5eb846155d7843004534cea4","__typename":"Post","title":"Just-for-me Authentication","slug":"just-for-me-authentication","updated_at":"2020-05-10T17:55:57.000-04:00","excerpt":"How adding just-for-me authentication cascaded into new ideas and possibilities for play.","feature_image":null,"html":"It's enjoyable to iterate on this site, in the spirit of building incrementally correct personal websites. Last week, when I wanted an easier way to update the metadata for my bookmarks page, I added authentication so that a page knows when I'm looking at it.
Once that worked, gears started turning. If every page on my site knows when I am looking at it, everything becomes mutable. Having a CMS is great, but when I want to fix a typo it can be a drag to spin up another tool, or navigate to another website, log in, make the change, save, redeploy, and onwards. If I spot the typo, I just want to...click it, fix it, and move on.
I can also start to progressively disclose little secret bits of UI. For example, it wouldn't be that hard to build a private subsection of /bookmarks
just for myself.
To explore these ideas, I spent yesterday building an AMA page. It exposes a form where anyone can ask a question, but I'm the only one who can view them. I don't have to use a separate CMS or database to answer or edit questions. No, I just dynamically render the list of pending questions whenever I visit the page with lightweight tools for answering, editing, and deleting.
The system is extensible, too. For example, I added a small hook in the backend to send myself an email notification whenever someone asks a new AMA question (what kind of psychopath would check their personal website every day?). I could just as easily make this send me a text message, or maybe even a push notification in the future. It would be cool to be able to reply directly to that notification to answer it in real time.
Ultimately it's this control that frees me from the website itself. It's a playground.
What's next? Well, anything CRUD is a snap to build. It might be fun to work on a personal Instagram feed at /photos
, or a personal Twitter at /notes
. It might be fun to build a browser extensions to bring some of the notification and editing functionality out of the website itself. Because this site exposes an API to the internet, all of this functionality can move out of the browser into a SwiftUI app, or maybe a macOS menu bar app.
It's fun to have a personal sandbox.
"},"Post:5e63e21fb6a909003848a14d":{"id":"5e63e21fb6a909003848a14d","__typename":"Post","title":"Product Design Portfolios","slug":"product-design-portfolios","updated_at":"2020-05-10T12:33:08.000-04:00","excerpt":"A living list of useful and inspiring product design portfolios.","feature_image":null,"html":"I've been maintaining this list of useful and inspiring product design portfolios for a few years. It's awesome to see people sharing their work experiences openly and putting their own personal touch on their website. As the list grew, people told me that it was a useful reference while creating their own portfolio.
I'll be keeping the list up to date over time, so please let me know if you find a broken link or if someone removed their portfolio!
My criteria is loose, but generally:
Here's the list. If I'm missing anyone, please drop me a note at the bottom of this post!
Did I miss someone? Let me know in the form below and I'll keep this post updated 🙏
Update: @chris_mrtn compiled a bunch of these people into a Twitter list.
"},"Post:5eaf2d1028842300395db2af":{"id":"5eaf2d1028842300395db2af","__typename":"Post","title":"Using Cookies to Authenticate Next.js + Apollo GraphQL Requests","slug":"cookies-authenticate-next-js-apollo-graphql-requests","updated_at":"2020-05-03T19:50:20.000-04:00","excerpt":"In the spirit of over-complicating the hell out of my personal website, I spent time this weekend trying to solve one very small and seemingly-simple problem: can I make my website know when I am viewing it?","feature_image":null,"html":"In the spirit of over-complicating the hell out of my personal website, I spent time this weekend trying to solve one very small and seemingly-simple problem: can I make my statically-generated website know when I am viewing it?
I have a bookmarks page where I store helpful links. To add new links, I set up a workflow where I can text myself a url from anywhere. Here's the code to do this. New links get stored in Firebase, which triggers a cloud function to populate metadata for the url by scraping the web page. Here's the cloud function to do this. This flow is really great for saving links while I'm away from my computer.
But, when I'm on my laptop, two problems emerge:
https://brianlovin.com/bookmarks
and paste a link.<title>
tags like {actually useful content about the page} · Site Name
and I don't want the Site Name
included in my bookmarks list.So what I want is:
/bookmarks
, determine if I am the one viewing the page.The hiccups came when I tried to figure out how this should work with GraphQL (which I use on the backend to stitch together multiple third party API services - see code) and Next.js's recently-release Static Site Generation feature.
/bookmarks
route is statically generated at build time. This means that every initial page view will assume an unauthenticated render. So I'll need to check for authentication after the JavaScript rehydrates the client.users
record. This functionality is just for me. Firebase's authentication implementation was a pain, so I abandoned that path in favor of simple cookie authentication.First, some useful information that I dug up through while working on this problem:
cookies
object to the http request
.cookie
helper function to all response
objects in the backend. This will be used to set and nullify cookies.The client side of this project ended up being quite complex. Remember:
/bookmarks
should be statically generated at build time, always rendering a \"logged out\" view./bookmarks
is loaded, it needs to mount with a pre-populated ApolloProvider
cache to have access to the mutation and query hooks that come with @apollo/client
. Fortunately, I found this comment in the Next.js discussion forum which explains how to implement a withApollo
higher-order component that can instantiate itself with props from the static build phase.
I made some small modifications, but you can see the implementation here.
Next, we need to instantiate an ApolloClient
during build time in getStaticProps
:
// graphql/api/index.ts\n\nconst CLIENT_URL =\n process.env.NODE_ENV === 'production'\n ? 'https://brianlovin.com'\n : 'http://localhost:3000'\n\nconst endpoint = `${CLIENT_URL}/api/graphql`\n\nconst link = new HttpLink({ uri: endpoint })\nconst cache = new InMemoryCache()\n\nexport async function getStaticApolloClient() {\n return new ApolloClient({\n link,\n cache,\n })\n}
Now, in any of our page routes we can use Apollo to fetch data at build time:
// pages/bookmarks.tsx

import { getStaticApolloClient } from '~/graphql/api'
import { gql } from '@apollo/client'

// ... component up here, detailed later

const GET_BOOKMARKS = gql`
  query GetBookmarks {
    bookmarks {
      id
      title
      url
    }
  }
`

export async function getStaticProps() {
  const client = await getStaticApolloClient()
  await client.query({ query: GET_BOOKMARKS })
  return {
    props: {
      // this hydrates the clientside Apollo cache in the `withApollo` HOC
      apolloStaticCache: client.cache.extract(),
    },
  }
}
Because I'll want to add new links to my bookmarks from many devices, I'll need some way to programmatically set a cookie in the browser by "logging in."
The flow should be:

1. I enter a password on a login page, which sends it to the server through a login mutation.
2. The login mutation resolver decides whether or not the password is correct. If it isn't, it rejects the request. If the password is correct, it sets a cookie on the response header and returns true.

Before I can do any of this, I'll need to ensure that my GraphQL mutations have access to cookies and a response object. We can add this information to the GraphQL context object in the server constructor:
// pages/api/graphql/index.ts

// https://github.com/zeit/next.js/blob/master/examples/api-routes-middleware/utils/cookies.js
import cookies from './path/to/cookieHelper'
import typeDefs from './path/to/typeDefs'
import resolvers from './path/to/resolvers'
import { ApolloServer } from 'apollo-server-micro'
import Cryptr from 'cryptr'

function isAuthenticated(req) {
  // I use a cookie called 'session'
  const { session } = req?.cookies

  // Cryptr requires a minimum length of 32 for any signing
  if (!session || session.length < 32) {
    return false
  }

  const secret = process.env.PASSWORD_TOKEN
  const validated = process.env.PASSWORD
  const cryptr = new Cryptr(secret)

  // a malformed cookie would make decrypt throw, so treat that as "not me"
  try {
    const decrypted = cryptr.decrypt(session)
    return decrypted === validated
  } catch (err) {
    return false
  }
}

function context(ctx) {
  return {
    // expose the cookie helper in the GraphQL context object
    cookie: ctx.res.cookie,
    // allow queries and mutations to look for an `isMe` boolean in the context object
    isMe: isAuthenticated(ctx.req),
  }
}

const apolloServer = new ApolloServer({
  typeDefs,
  resolvers,
  context,
})

export const config = {
  api: {
    bodyParser: false, // required for Next.js to play nicely with GraphQL request bodies
  },
}

const handler = apolloServer.createHandler({ path: '/api/graphql' })

// attach cookie helpers to all response objects
export default cookies(handler)
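The typeDefs imported above aren't shown in this post. Based on the operations used throughout, the relevant parts of the schema would look roughly like this; the exact field nullability and naming are guesses:

// graphql/typeDefs.ts (hypothetical sketch of the relevant schema)
import { gql } from 'apollo-server-micro'

export default gql`
  type Bookmark {
    id: ID!
    title: String
    url: String!
  }

  type Query {
    bookmarks: [Bookmark]
    isMe: Boolean
  }

  type Mutation {
    login(password: String!): Boolean
    addBookmark(url: String!): Bookmark
  }
`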
The mutation:
// graphql/mutations/auth.ts

import { gql } from '@apollo/client'

export const LOGIN = gql`
  mutation login($password: String!) {
    login(password: $password)
  }
`
The resolver:
// graphql/resolvers/mutations/login.ts

import Cryptr from 'cryptr'

export function login(_, { password }, ctx) {
  const { cookie } = ctx

  const validator = process.env.PASSWORD
  if (password !== validator) return false

  const secret = process.env.PASSWORD_TOKEN
  const cryptr = new Cryptr(secret)
  const encrypted = cryptr.encrypt(password)

  // the password is correct, set a cookie on the response
  cookie('session', encrypted, {
    // cookie is valid for all subpaths of my domain
    path: '/',
    // this cookie won't be readable by the browser
    httpOnly: true,
    // and won't be usable outside of my domain
    sameSite: 'strict',
  })

  // tell the mutation that login was successful
  return true
}
Next, let's log in from the client:
// pages/login.tsx

import * as React from 'react'
import { useRouter } from 'next/router'
import { useMutation } from '@apollo/client'
import { LOGIN } from '~/graphql/mutations/auth.ts'
import { withApollo } from '~/components/withApollo'

function Login() {
  const router = useRouter()
  const [password, setPassword] = React.useState('')

  const [handleLogin] = useMutation(LOGIN, {
    variables: { password },
    onCompleted: (data) => data.login && router.push('/'),
  })

  function onSubmit(e) {
    e.preventDefault()
    handleLogin()
  }

  return (
    <form onSubmit={onSubmit}>
      <input
        type="password"
        placeholder="password"
        onChange={(e) => setPassword(e.target.value)}
      />
    </form>
  )
}

// remember that withApollo wraps our component in an ApolloProvider, giving us access to use the `useMutation` and `useQuery` hooks in our component.
export default withApollo(Login)
So our flow should now work: I enter the password at /login, the login mutation fires, the resolver verifies it and sets the session cookie, and the client redirects home.
Okay, so now I have a signed cookie in my browser which will be used in all future requests to verify my identity. The next step is to provide the client with some kind of isMe boolean that can be fetched from anywhere. We can write a small GraphQL query to provide this information:
// graphql/queries/isMe.ts

import { gql } from '@apollo/client'

export const IS_ME = gql`
  query IsMe {
    isMe
  }
`
Remember, we've already added an isMe boolean to our GraphQL context object, so we can return that value in our resolver:
// graphql/resolvers/isMe.ts

export function isMe(_, __, { isMe }) {
  return isMe
}
Next, let's write our GraphQL query on the client to find out if it's me viewing the page:
// src/hooks/useAuth.tsx

import { IS_ME } from '~/graphql/queries/isMe.ts'
import { useQuery } from '@apollo/client'

export function useAuth() {
  const { data } = useQuery(IS_ME)

  return {
    isMe: data && data.isMe,
  }
}
With this helper hook, we can now start checking for isMe anywhere in the client:
// src/pages/bookmarks.tsx

import * as React from 'react'
import { useQuery } from '@apollo/client'
import BookmarksList from '~/components/Bookmarks'
import { GET_BOOKMARKS } from '~/graphql/queries'
import { useAuth } from '~/hooks/useAuth'
import { getStaticApolloClient } from '~/graphql/api'
import { withApollo } from '~/components/withApollo'
import AddBookmark from '~/components/AddBookmark'

function Bookmarks() {
  // cache-and-network is used because after I add a new bookmark, other people
  // will still be seeing the statically-served HTML created at build time. In this
  // way, the user will see a page rendered _instantly_, and the client will kick
  // off a network request to ensure it has the latest bookmarks data.
  const { data } = useQuery(GET_BOOKMARKS, { fetchPolicy: 'cache-and-network' })
  const { bookmarks } = data
  const { isMe } = useAuth()

  return (
    <div>
      <h1>Bookmarks</h1>
      {isMe && <AddBookmark />}
      {bookmarks && <BookmarksList bookmarks={bookmarks} />}
    </div>
  )
}

export async function getStaticProps() {
  const client = await getStaticApolloClient()
  await client.query({ query: GET_BOOKMARKS })
  return {
    props: {
      apolloStaticCache: client.cache.extract(),
    },
  }
}

export default withApollo(Bookmarks)
Okay, so now I can progressively disclose UI on the client once the site knows it's me. But because my GraphQL endpoint is exposed to the internet, we'll need to make sure that random people can't write their own POSTs to maliciously save bookmarks.
Here's the mutation resolver on the backend, which checks the isMe flag set in the context object, does some input validation, and then persists the bookmark.
// graphql/resolvers/mutations/bookmarks.ts

import { URL } from 'url'
import { AuthenticationError, UserInputError } from 'apollo-server-micro'
import firebase from '~/graphql/api/firebase'
import getBookmarkMetaData from './getBookmarkMetaData'

function isValidUrl(string) {
  try {
    new URL(string)
    return true
  } catch (err) {
    return false
  }
}

export async function addBookmark(_, { url }, { isMe }) {
  if (!isMe) throw new AuthenticationError('You must be logged in')
  if (!isValidUrl(url)) throw new UserInputError('URL was invalid')

  const metadata = await getBookmarkMetaData(url)

  const id = await firebase
    .collection('bookmarks')
    .add({
      createdAt: new Date(),
      ...metadata,
    })
    .then(({ id }) => id)

  return await firebase
    .collection('bookmarks')
    .doc(id)
    .get()
    .then((doc) => doc.data())
    .then((res) => ({ ...res, id }))
}
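The AddBookmark component rendered behind the isMe check isn't shown here; a minimal version might look something like the following. The ADD_BOOKMARK document and the refetch behavior are assumptions for illustration, not the real component:

// components/AddBookmark.tsx (hypothetical sketch)
import * as React from 'react'
import { gql, useMutation } from '@apollo/client'
import { GET_BOOKMARKS } from '~/graphql/queries'

const ADD_BOOKMARK = gql`
  mutation addBookmark($url: String!) {
    addBookmark(url: $url) {
      id
      title
      url
    }
  }
`

function AddBookmark() {
  const [url, setUrl] = React.useState('')

  const [addBookmark] = useMutation(ADD_BOOKMARK, {
    variables: { url },
    // refetch so the freshly-saved bookmark shows up in the list right away
    refetchQueries: [{ query: GET_BOOKMARKS }],
    onCompleted: () => setUrl(''),
  })

  function onSubmit(e: React.FormEvent) {
    e.preventDefault()
    if (url) addBookmark()
  }

  return (
    <form onSubmit={onSubmit}>
      <input
        type="url"
        placeholder="Add a bookmark"
        value={url}
        onChange={(e) => setUrl(e.target.value)}
      />
    </form>
  )
}

export default AddBookmark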
This is all a bit...complicated, to say the least. But when it all works, it actually works quite well! And as I incrementally add more mutation types, it should all Just Work™.
At the end of the day, the site gets all the benefits of super-fast initial page loads thanks to static generation at build time, with all the downstream client side functionality of a regular React application.
I hope the pseudocode above helps unblock anyone following a similar path, but just in case, here's the full pull request containing all the changes that eventually made this work. You'll notice I spent some time hacking in automatic type generation and hook generation using GraphQL Code Generator, and added some polish to the overall experience (like a /logout page which clears the cookie, in case I'm on a device I don't own).
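As a sketch of how that /logout page can clear the cookie on the server, a logout resolver only needs to overwrite the session cookie with one that expires immediately. This assumes the cookie helper passes options straight through to the cookie package's serialize; it isn't the actual implementation from the pull request:

// graphql/resolvers/mutations/logout.ts (hypothetical sketch)
export function logout(_, __, ctx) {
  const { cookie } = ctx

  // replace the session cookie with an empty value that is already expired
  cookie('session', '', {
    path: '/',
    httpOnly: true,
    sameSite: 'strict',
    expires: new Date(0),
  })

  return true
}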
Please don't hesitate to reach out with questions; I'd love to help! Otherwise, the Next.js discussions have been a fantastic resource for finding solutions to a lot of common problems.
Good luck!
"},"Episode:461f9847-6231-4d32-b0fd-ba864c6366b0":{"id":"461f9847-6231-4d32-b0fd-ba864c6366b0","__typename":"Episode","description":"This week, we share some tips for getting unstuck when working on complicated design problems. We also share our home screen organization philosophies in The Sidebar, catch up on Tweets, and share our cool things as always.","legacy_id":null,"long_description":null,"published_at":"2020-05-26T20:51:00.000000-07:00","status":"published","title":"348: Getting Unstuck","token":"9ae0646c"},"Episode:6dc60eeb-29b5-4375-a349-4126df4484e8":{"id":"6dc60eeb-29b5-4375-a349-4126df4484e8","__typename":"Episode","description":"This week, we talk about how to overcome skill gaps. What should you do if you are bad at visual design? What if you can't make icons? Should you play into strengths or develop upon your weaknesses? We explore these topics, and more!","legacy_id":null,"long_description":null,"published_at":"2020-05-20T05:00:00.000000-07:00","status":"published","title":"347: Overcoming Skill Gaps","token":"0a523f70"},"Episode:c6913c00-ec02-4caf-96b0-615782d43ea0":{"id":"c6913c00-ec02-4caf-96b0-615782d43ea0","__typename":"Episode","description":"This week, we discuss the characteristics of high quality software. We do our best to organize and outline things to pay attention to that will help you ship better software. This, plus a Sidebar discussing how to handle negative feedback from customers, and cool things as always!","legacy_id":null,"long_description":null,"published_at":"2020-05-13T05:00:00.000000-07:00","status":"published","title":"346: Quality Software","token":"94edbd03"},"Episode:3f3140fc-a02d-4e8e-bfea-1104c7621be1":{"id":"3f3140fc-a02d-4e8e-bfea-1104c7621be1","__typename":"Episode","description":"This week, we discuss what it means to have and develop taste. From software to fashion, styles change, trends come and go, so what does it mean to be tasteful? In The Sidebar, we discuss the hacker mindset and building for an audience of one. This, plus cool things like a handy iPhone app and a killer new VR game!","legacy_id":null,"long_description":null,"published_at":"2020-05-06T05:00:00.000000-07:00","status":"published","title":"345: Developing Taste","token":"3d318a0d"},"Episode:5936a770-16ef-4f51-9c97-5934487f46d3":{"id":"5936a770-16ef-4f51-9c97-5934487f46d3","__typename":"Episode","description":"This week, we try to figure out the right time to give up on a fight when collaborating with stakeholders who have different opinions and priorities. 
We also cover a lot of feedback this week, discuss new design resources in The Sidebar, and share our cool things of the week.","legacy_id":null,"long_description":null,"published_at":"2020-04-29T05:00:00.000000-07:00","status":"published","title":"344: Knowing When to Give Up a Fight","token":"8dbe2294"},"ROOT_QUERY":{"__typename":"Query","posts({\"first\":5})":[{"__ref":"Post:5eb8e6f55d7843004534cf77"},{"__ref":"Post:5dea5ce1295515003754d9e4"},{"__ref":"Post:5eb846155d7843004534cea4"},{"__ref":"Post:5e63e21fb6a909003848a14d"},{"__ref":"Post:5eaf2d1028842300395db2af"}],"episodes":[{"__ref":"Episode:461f9847-6231-4d32-b0fd-ba864c6366b0"},{"__ref":"Episode:6dc60eeb-29b5-4375-a349-4126df4484e8"},{"__ref":"Episode:c6913c00-ec02-4caf-96b0-615782d43ea0"},{"__ref":"Episode:3f3140fc-a02d-4e8e-bfea-1104c7621be1"},{"__ref":"Episode:5936a770-16ef-4f51-9c97-5934487f46d3"}]}},"summaries":[{"title":"NeuBible","slug":"neubible-ios","tint":"#ee3f49","firstDetail":{"description":"This is one of the better reading experiences I've seen in an app. I'd argue that the left and right margins are a *tad* too large, but otherwise the vertical rhythm of the entire app is seamless. I love the snap-to-chapter interaction whenever you scroll past the current section; it feels great to use and has enough rubber-band tension to make it super obvious that you've transitioned to a new chapter.","title":"Reading","media":["https://player.vimeo.com/external/167672855.hd.mp4?s=6a61e147b1a5f57183bb396c65d0abab81000bf2&profile_id=174"]},"detailsCount":14},{"title":"Shorts","slug":"shorts-ios","tint":"#fda052","firstDetail":{"description":"See, once you get past the notifications request I can finally start to parse out what this app is and what kind of value it provides. \"Follow people's camera rolls\" sounds interesting, if not a bit on the extreme side of messaging. How will it work? Will I be automatically sharing my camera roll if I sign up?\r\n\r\nI'm a bit hesitant, but continue on through signup.\r\n\r\nAs far as visuals go, Shorts has killed it. But that was no surprise once I realized the [Highlight](http://highlig.ht/) team is behind this product. Clean and simple with a bright and inviting color palette - nicely done.","title":"Sign up","media":["https://player.vimeo.com/external/160549086.hd.mp4?s=24feda10316ef4b66ed8c5d8b418e15d55e0903f&profile_id=113"]},"detailsCount":16},{"title":"Stripe","slug":"stripe-dashboard-ios","tint":"#2289ca","firstDetail":{"description":"Microinteractions like these add value to the entire user experience. Rather than being jostled around with static view changes, Stripe helps introduce people to their dashboard with motion. Animation also buys Stripe time to load data in the background while preserving an experience that feels snappy and responsive. ","title":"Login","media":["https://player.vimeo.com/external/157887682.hd.mp4?s=4c3661c90aa158f993985c4c85767b8ca91c755f&profile_id=167"]},"detailsCount":14},{"title":"Quartz","slug":"quartz-ios","tint":"#1d1d1d","firstDetail":{"description":"Nicely done, Quartz: teach users how to use the app by having them actually complete the functions themselves. It’s clever and helps users build momentum in the onboarding experience.\r\n\r\nIt has become standard user experience practice to tell people that you’re going to ask them for notification permission before the popup appears. Quartz handles this nicely by explaining why notifications are useful. 
Opting in or out is totally fair game, but that emoji in the confirmation option is oh-so enticing.","title":"Teaching","media":["https://player.vimeo.com/external/157882203.hd.mp4?s=f481ea76419df041aeeb241fbbaf091596971851&profile_id=113"]},"detailsCount":11}]},"__N_SSG":true}