{"pageProps":{"data":{"posts":[{"__typename":"Post","id":"5eb8e6f55d7843004534cf77","title":"Quality Software","slug":"quality-software","updated_at":"2020-05-18T04:59:59.000-04:00","excerpt":"This post aims to outline the characteristics of software that we believe determine its fundamental quality.","feature_image":null,"html":"
This post is a complement to episode 346 of Design Details, Quality Software. It is also informed by the 2018 WWDC talk, The Qualities of Great Design.

Quality is subjective and hard to define. In software, it can be unclear why certain applications seem better than others. This post aims to outline the characteristics of software that we believe determine its fundamental quality. The following notes are from a conversation with Marshall Bock, so I'll be writing this post as a we/us/our.

It can be helpful to think of the characteristics of high quality software along two axes: users and developers, and attention and intention. These axes overlap, and drawing clear boundaries between them isn't always possible.

Developers paying attention

Design is in the details, so they say. It takes a lot of energy to pay attention to the details. This is usually because the details are subconsciously understood and are harder to justify in development planning. Shipping pixel-perfect applications takes time, and many of those little details will be absorbed by the user without them even knowing it. Does text have enough contrast to be visible in different lighting? Do animations play at sixty frames per second? Are colors used to communicate similar ideas? Are atomic objects consistent and clearly understood?

In modern software, especially on mobile devices, the details go deeper than visuals. Are interface elements clearly labeled for accessibility on screen readers or in VoiceOver? Are haptics used appropriately to signify events, warnings, or meaningful state changes? Having thoughtfully answered questions like these puts an application into a new tier of quality.

Developing with intention

It's easy to build features: slap a button here, tuck a disclaimer there, and pin a tooltip on every new feature. It's not so easy to take those things away. It's a tough job to effectively educate people about new features or changes in the software. And it's even harder to ignore a loudly-requested feature for reasons that might not be easily understood by users.

Quality software knows when to stop adding features. It knows when something works well. And it isn't afraid to take things away, especially if a feature is no longer in service of a broader vision.

Quality software uses motion, iconography, and color with intent. Animation is in service of orienting users, communicating an idea, or guiding them towards an objective. Form is an expression of function; the drop shadow exists to communicate depth, to afford interactivity, to help a user distinguish what can be touched and what cannot, not because it looks nice, or because the screen needed more \"stuff\" on it.

Respecting people's attention

People are busy and they have things to do. Software is most often a means to an end, not an end itself. Quality software understands this and respects a person's time and attention.

Quality software exists in the service of a human outcome, like saving time, earning money, or increasing productivity. These outcomes are the result of designers and developers talking to real humans, watching them use the software, and determining ways to remove complexity.

Quality software is fast. Data is cached. The first paint feels instant. Interactions respond in milliseconds. It makes optimistic guesses about common paths that users follow in order to pre-fetch and pre-render. When the software can't be fast—for example, when operating over a slow network—it communicates the status of ongoing work to a person.

Quality software understands its own role within the operating system and device. It takes advantage of available sensors and heuristics to provide a superior experience to a person, like a phone screen waking up when reached for.

Software can also get better with use. Typeahead, predictive search, and pre-rendering content based on a user's past experiences in the app all make the software feel more personal. For example, FaceID improves over time, accounting for changes in facial orientation, characteristics, and expressions.

In addition to respecting a person's attention, quality software respects their privacy. People understand what data is being collected, how it’s being used, and how to remove it. Quality software uses the minimum amount of personally-identifiable information necessary to complete a job (if any at all) and nothing more.

Understanding people's intentions

Quality software considers many contexts a person may be in. Are they walking? Distracted? Hands-free? Using a keyboard or tapping? Do they have a fast internet connection, or is the network dropping?

Quality software accounts for situations like these. It provides prominent, conveniently-placed actions for single-handed users on the go. It uses clear typographical hierarchy and color to make pages scannable at a glance. It adapts itself to any input mechanism: touch, keyboard, voice, pointer, controller, or screen reader. It can queue changes locally in the event of a network loss, and it can calmly communicate the impact of a degraded network connection.

Quality software understands people and how they navigate the world. For example, it knows that some people are colorblind, so it either removes commonly-indistinguishable colors from its palette or provides opt-in overrides to the system.

If it knows that people might be using pre-paid data plans, it considers the very real user cost of sending large payloads over the network. It offers things like \"data-saving modes\" to give people a choice about when to spend their money.

Software exists in the context of the world with changing dynamics, tastes, and crises. Locally, it exists in the context of its brand. A person rarely encounters a piece of software in isolation; there’s priming that’s already occurred through advertisements, landing pages, app store screenshots, and onboarding flows. Because of this, quality software understands the expectations of its users and works to exceed those expectations. A highly polished visual experience is expected to behave in a highly polished way. A highly polished brand is expected to produce high quality software.

Quality software is built for humans. Messy, distractible, impatient, and imperfect humans. Because it understands this, quality software is forgiving. Destructive actions are presented clearly and are easily undoable. If something breaks, the software provides a clear reason that non-technical people can understand. It doesn't patronize its users or make them feel dumb. Errors are never dead ends; invalid user inputs are clearly highlighted, and failed network requests are retried.

I know it when I see it

Like mis-timed foley or a smudge on the wall, we notice quality more often by the presence of imperfections than by their absence. For this reason, it can be hard to articulate why something is high quality. We can feel it. Like the threshold test for obscenity, \"I know it when I see it.\"

But what about hard work?

This leaves us with a question: how much should “effort exerted” be considered when we evaluate the quality of software? If you ate the worst slice of pizza you've ever had, would your disgust be lessened by knowing that the chef had worked oh so very hard to make it? Probably not.

Still, it's clear that quality software can’t be created without hard work. Or, in the most optimistic sense, one’s long history of hard work enables high quality to be more easily achieved in later projects. So the two are intertwined, but we don't feel that the work itself is a necessary component in determining the quality of the software itself. An A for Effort doesn’t ensure an A overall.


Where is the high quality software?

It can be disenchanting to look at the current software landscape. It feels like we're spinning in circles, solving the same problems that we've been solving for years. Computers get better hardware, so we build more resource-hungry software, and things never actually get faster.

The world has become a tangled mess along the way. There are hundreds of screen sizes, operating systems, browsers, settings, and modes that designers and engineers must account for. It's no wonder that software doesn’t feel much better today than it felt five years ago: we're stuck in a constant game of catch-up.

What's next?

I predict that more startups will move away from a one-size-fits-all, must-scale-to-seven-billion-users mentality. Instead, we're seeing the proliferation of the independent creator. The person who builds software for a few thousand people, but in a way that is deeply personal and understands what's most important to their audience. More designers and developers are realizing that a healthy income and a happy life can exist with a thousand customers, not a billion.

Considerations

We approached the conversation of software quality from the perspective of product design and user experience. Of course, there’s a different framing of the topic that focuses on the quality of the written software and not the final experience.

To read more about this, I’d recommend reading this post by Martin Fowler or this post by Jake Voytko. Quite often, quality internal software correlates with a high quality user experience. We should strive for both.

"},{"__typename":"Post","id":"5dea5ce1295515003754d9e4","title":"Using Ghost as a Headless CMS with Next.js","slug":"using-ghost-headless-cms-next-js-to-create-a-fast-and-simple-blog","updated_at":"2020-05-13T12:11:07.000-04:00","excerpt":"Rebuilding my self-hosted blog with Next.js and Ghost as a headless CMS.","feature_image":"https://overthought.ghost.io/content/images/2019/12/ghost---next-1--1-.png","html":"

I recently rebuilt most of my personal site with the main goal of providing a better surface area for writing. There are dozens of viable technology choices available right now for creating a self-hosted blog, and I spent way too long and way too much energy trying to find the perfect one.

Spoiler alert: there is no perfect system. Every solution is a hack. Once I accepted this, I could focus on finding the least-hacky setup that would cover 90% of my needs. Ultimately, I ended up building the following system:

Getting data from Ghost

The Ghost API is pretty solid - the documentation is straightforward, and they even have a guide for working with Next.js.

To start, I added a small API file that can fetch data from Ghost:

import GhostContentAPI from \"@tryghost/content-api\";\n\nconst api = new GhostContentAPI({\n  url: 'https://overthought.ghost.io',\n  key: 'API_KEY',\n  version: \"v3\"\n});\n\nexport async function getPosts() {\n  return await api.posts\n    .browse({\n      limit: \"all\"\n    })\n    .catch(err => {\n      console.error(err);\n    });\n}\n\nexport async function getPostBySlug(slug) {\n  return await api.posts\n    .read({\n      slug\n    })\n    .catch(err => {\n      console.error(err);\n    });\n}

We can then use these API calls to populate data into a page. Here's a simplified version of my src/pages/overthought/index.tsx file:

import * as React from 'react';\nimport Page from '../../components/Page';\nimport OverthoughtGrid from '../../components/OverthoughtGrid'\nimport { getPosts } from '../../data/ghost'\nimport { BlogPost } from '../../types'\n\ninterface Props {\n  posts?: Array<BlogPost>\n}\n\nfunction Overthought({ posts }: Props) {\n  return (\n    <Page>\n      <OverthoughtGrid posts={posts} />\n    </Page>\n  );\n}\n\nOverthought.getInitialProps = async ({ res }) => {\n  if (res) {\n    const cacheAge = 60 * 60 * 12;\n    res.setHeader('Cache-Control', `public,s-maxage=${cacheAge}`);\n  }\n  const posts = await getPosts();\n  return { posts: posts }\n}\n\nexport default Overthought

In the getInitialProps call, which runs on the server for the initial page load, I decided that caching the entire page for 12 hours at a time was probably safe: I won't be publishing that frequently. This will improve performance if there is any spike in traffic that would otherwise overload the Ghost API.

Client-side caching

It's a bit overkill for now, but one thing that I've been meaning to try is SWR. This package does some cool client-side work to provide data revalidation on refocus, retries on failure, polling, and client-side caching.

One key thing that I wanted to solve for was people navigating between the home page of my site and the /overthought route. These pages both fetch the same posts, so it'd be a waste to require the second fetch to resolve before rendering my list of posts.

Before SWR, I might have reached for a tool like React.useContext to provide some kind of global state wrapper that would keep track of any previously-fetched posts. But Context can get messy, and I hate adding hierarchy to my components.

SWR solves the problem by maintaining a client-side cache of data I've fetched, keyed by the route used for the request. When a user navigates from / to /overthought, SWR will serve stale data from the cache first and then initiate a new request to update that cache with the latest data from the API.

At the end of the day, the same number of network requests are being fired. But the user experience is better: the navigation will feel instant because there's no waiting for a new network request to Ghost to resolve. Here's how our page from above looks with SWR:

import * as React from 'react';\nimport useSWR from 'swr';\nimport Page from '../../components/Page';\nimport OverthoughtGrid from '../../components/OverthoughtGrid'\nimport { getPosts } from '../../data/ghost'\n\nfunction Overthought(props) {\n  const initialData = props.posts\n  const { data: posts } = useSWR('/api/getPosts', getPosts, { initialData })\n  \n  return (\n    <Page>\n      <OverthoughtGrid posts={posts} />\n    </Page>\n  );\n}\n\nOverthought.getInitialProps = async ({ res }) => {\n  if (res) {\n    const cacheAge = 60 * 60 * 12;\n    res.setHeader('Cache-Control', `public,s-maxage=${cacheAge}`);\n  }\n  const posts = await getPosts();\n  return { posts: posts }\n}\n\nexport default Overthought

With the two added lines at the top of the function, we instantly get data served from the client-side cache. The cool thing about this setup is that if the user loads a page that is server-side rendered, SWR will receive initialData that was already fetched on the server, again creating the feeling of an instantaneous page load.

Again: this is overkill.

Rendering post content

My one issue with Ghost is that they don't return a Markdown version of your posts. Instead, they only return a big string containing all of the HTML for your post. Rendering this HTML string can be a pain: Ghost uses a lot of custom elements for rich embeds, like videos, that I don't want to be ingesting.

Instead, I was able to hack around this by using react-markdown in conjunction with unified, rehype-parse, rehype-remark, and remark-stringify. I found all of this to be a bit of a headache, and it's certainly one of the downsides of using Ghost as a content provider. I've reached out to the team to try and start a discussion about returning a raw Markdown field from the posts API.

Here's how the HTML processing works:

import unified from 'unified'\nimport parse from 'rehype-parse'\nimport rehype2remark from 'rehype-remark'\nimport stringify from 'remark-stringify'\nimport Markdown from 'react-markdown';\n\nfunction PostBody({ post }) {\n  const md = unified()\n    .use(parse)\n    .use(rehype2remark)\n    .use(stringify)\n    .processSync(post.html)\n    .toString()\n\n  return <Markdown>{md}</Markdown>\n}

Unfortunately I have more work to do to dig into the internals of how the HTML is being parsed - I noticed that it strips out things like alt tags on images, and entire iframes if I use video embeds.
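In the meantime, one dependency-free workaround would be to pull the embeds out of Ghost's HTML string before handing it to unified, and render them separately. This is only a sketch of the idea (the helper is hypothetical, not something from my actual code):

```javascript
// Hypothetical helper: collect iframe embed URLs from Ghost's HTML string
// before the rehype → remark conversion strips them out.
function extractEmbeds(html) {
  const embeds = []
  // naive match on iframe src attributes; acceptable for trusted CMS output
  const pattern = /<iframe[^>]*\ssrc=["']([^"']+)["'][^>]*>/gi
  let match
  while ((match = pattern.exec(html)) !== null) {
    embeds.push(match[1])
  }
  return embeds
}
```

The extracted URLs could then be rendered as plain links below the post body, or re-injected after the Markdown pass.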

My source files are here if you would like to dig around further - if you happen to know of solutions to these parsing woes, please let me know!

"},{"__typename":"Post","id":"5eb846155d7843004534cea4","title":"Just-for-me Authentication","slug":"just-for-me-authentication","updated_at":"2020-05-10T17:55:57.000-04:00","excerpt":"How adding just-for-me authentication cascaded into new ideas and possibilities for play.","feature_image":null,"html":"

It's enjoyable to iterate on this site, in the spirit of building incrementally correct personal websites. Last week, when I wanted an easier way to update the metadata for my bookmarks page, I added authentication so that a page knows when I'm looking at it.

Once that worked, gears started turning. If every page on my site knows when I am looking at it, everything becomes mutable. Having a CMS is great, but when I want to fix a typo it can be a drag to spin up another tool, or navigate to another website, log in, make the change, save, redeploy, and onwards. If I spot the typo, I just want to...click it, fix it, and move on.

I can also start to progressively disclose little secret bits of UI. For example, it wouldn't be that hard to build a private subsection of /bookmarks just for myself.

To explore these ideas, I spent yesterday building an AMA page. It exposes a form where anyone can ask a question, but I'm the only one who can view them. I don't have to use a separate CMS or database to answer or edit questions. No, I just dynamically render the list of pending questions whenever I visit the page with lightweight tools for answering, editing, and deleting.
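A minimal sketch of how that gating could look in a resolver (the names here are illustrative, not lifted from my real code):

```javascript
// Illustrative sketch: only return pending (unanswered) questions when the
// request context says the viewer is me; everyone else sees answered ones.
function questionsResolver(allQuestions, { isMe }) {
  if (isMe) return allQuestions
  return allQuestions.filter((question) => Boolean(question.answer))
}
```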

The system is extensible, too. For example, I added a small hook in the backend to send myself an email notification whenever someone asks a new AMA question (what kind of psychopath would check their personal website every day?). I could just as easily make this send me a text message, or maybe even a push notification in the future. It would be cool to be able to reply directly to that notification to answer it in real time.

Ultimately it's this control that frees me from the website itself. It's a playground.

What's next? Well, anything CRUD is a snap to build. It might be fun to work on a personal Instagram feed at /photos, or a personal Twitter at /notes. It might be fun to build a browser extension to bring some of the notification and editing functionality out of the website itself. Because this site exposes an API to the internet, all of this functionality can move out of the browser into a SwiftUI app, or maybe a macOS menu bar app.

It's fun to have a personal sandbox.

"},{"__typename":"Post","id":"5e63e21fb6a909003848a14d","title":"Product Design Portfolios","slug":"product-design-portfolios","updated_at":"2020-05-10T12:33:08.000-04:00","excerpt":"A living list of useful and inspiring product design portfolios.","feature_image":null,"html":"

I've been maintaining this list of useful and inspiring product design portfolios for a few years. It's awesome to see people sharing their work experiences openly and putting their own personal touch on their website. As the list grew, people told me that it was a useful reference while creating their own portfolio.

I'll be keeping the list up to date over time, so please let me know if you find a broken link or if someone removed their portfolio!

My criteria are loose, but generally:

Here's the list. If I'm missing anyone, please drop me a note at the bottom of this post!

Did I miss someone? Let me know in the form below and I'll keep this post updated 🙏

Update: @chris_mrtn compiled a bunch of these people into a Twitter list.

"},{"__typename":"Post","id":"5eaf2d1028842300395db2af","title":"Using Cookies to Authenticate Next.js + Apollo GraphQL Requests","slug":"cookies-authenticate-next-js-apollo-graphql-requests","updated_at":"2020-05-03T19:50:20.000-04:00","excerpt":"In the spirit of over-complicating the hell out of my personal website, I spent time this weekend trying to solve one very small and seemingly-simple problem: can I make my website know when I am viewing it?","feature_image":null,"html":"

In the spirit of over-complicating the hell out of my personal website, I spent time this weekend trying to solve one very small and seemingly-simple problem: can I make my statically-generated website know when I am viewing it?

Background and problem

I have a bookmarks page where I store helpful links. To add new links, I set up a workflow where I can text myself a url from anywhere. Here's the code to do this. New links get stored in Firebase, which triggers a cloud function to populate metadata for the url by scraping the web page. Here's the cloud function to do this. This flow is really great for saving links while I'm away from my computer.

But, when I'm on my laptop, two problems emerge:

  1. If I want to add a new bookmark, I can't just go to https://brianlovin.com/bookmarks and paste a link.
  2. If I do save a link by texting myself, it can often scrape incorrect metadata in the cloud function. Usually it's because people set their <title> tags like {actually useful content about the page} · Site Name and I don't want the Site Name included in my bookmarks list.
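For what it's worth, the cleanup I have in mind is simple. A hypothetical helper like this could strip the trailing site name before saving the bookmark (the separator list is a guess based on titles I've seen):

```javascript
// Hypothetical helper: drop a trailing "· Site Name"-style suffix from a
// scraped <title>, keeping only the part before the first separator.
function stripSiteName(title) {
  // common separators sites put between the page title and the site name
  return title.split(/\s[·|–—-]\s/)[0].trim()
}
```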

So what I want is:

  1. When visiting /bookmarks, determine if I am the one viewing the page.
  2. If so, disclose UI controls to add and edit bookmarks.
  3. Protect the adding/deleting mutations from being run by anyone else, since my GraphQL endpoint is exposed to the internet (another problem, another day).
Hiccups

The hiccups came when I tried to figure out how this should work with GraphQL (which I use on the backend to stitch together multiple third party API services - see code) and Next.js's recently-released Static Site Generation feature.

Useful context

First, some useful information that I dug up while working on this problem:

Setting up the frontend

The client side of this project ended up being quite complex. Remember:

Fortunately, I found this comment in the Next.js discussion forum which explains how to implement a withApollo higher-order component that can instantiate itself with props from the static build phase.

I made some small modifications, but you can see the implementation here.

Next, we need to instantiate an ApolloClient during build time in getStaticProps:

// graphql/api/index.ts\n\nimport { ApolloClient, HttpLink, InMemoryCache } from '@apollo/client'\n\nconst CLIENT_URL =\n  process.env.NODE_ENV === 'production'\n    ? 'https://brianlovin.com'\n    : 'http://localhost:3000'\n\nconst endpoint = `${CLIENT_URL}/api/graphql`\n\nconst link = new HttpLink({ uri: endpoint })\nconst cache = new InMemoryCache()\n\nexport async function getStaticApolloClient() {\n  return new ApolloClient({\n    link,\n    cache,\n  })\n}

Now, in any of our page routes we can use Apollo to fetch data at build time:

// pages/bookmarks.tsx\n\nimport { getStaticApolloClient } from '~/graphql/api'\nimport { gql } from '@apollo/client'\n\n// ... component up here, detailed later\n\nconst GET_BOOKMARKS = gql`\n  query GetBookmarks {\n    bookmarks {\n      id\n      title\n      url\n    }\n  }\n`\n\nexport async function getStaticProps() {\n  const client = await getStaticApolloClient()\n  await client.query({ query: GET_BOOKMARKS })\n  return {\n    props: {\n      // this hydrates the clientside Apollo cache in the `withApollo` HOC\n      apolloStaticCache: client.cache.extract(),\n    },\n  }\n}
Logging in

Because I'll want to add new links to my bookmarks from many devices, I'll need some way to programmatically set a cookie in the browser by \"logging in.\"

The flow should be:

Before I can do any of this, I'll need to ensure that my GraphQL mutations have access to cookies and a response object. We can add this information to the GraphQL context object in the server constructor:

// pages/api/graphql/index.ts\n\n// https://github.com/zeit/next.js/blob/master/examples/api-routes-middleware/utils/cookies.js\nimport cookies from './path/to/cookieHelper'\nimport typeDefs from './path/to/typeDefs'\nimport resolvers from './path/to/resolvers'\nimport Cryptr from 'cryptr'\nimport { ApolloServer } from 'apollo-server-micro'\n\nfunction isAuthenticated(req) {\n  // I use a cookie called 'session'\n  const { session } = req?.cookies ?? {}\n  \n  // Cryptr requires a minimum length of 32 for any signing\n  if (!session || session.length < 32) {\n    return false\n  }\n\n  const secret = process.env.PASSWORD_TOKEN\n  const validated = process.env.PASSWORD\n  const cryptr = new Cryptr(secret)\n\n  try {\n    const decrypted = cryptr.decrypt(session)\n    return decrypted === validated\n  } catch (err) {\n    // an invalid or tampered-with cookie fails decryption\n    return false\n  }\n}\n\nfunction context(ctx) {\n  return {\n    // expose the cookie helper in the GraphQL context object\n    cookie: ctx.res.cookie,\n    // allow queries and mutations to look for an `isMe` boolean in the context object\n    isMe: isAuthenticated(ctx.req),\n  }\n}\n\nconst apolloServer = new ApolloServer({\n  typeDefs,\n  resolvers,\n  context,\n})\n\nexport const config = {\n  api: {\n    bodyParser: false, // required for Next.js to play nicely with GraphQL request bodies\n  },\n}\n\nconst handler = apolloServer.createHandler({ path: '/api/graphql' })\n\n// attach cookie helpers to all response objects\nexport default cookies(handler)\n

The mutation:

// graphql/mutations/auth.ts\n\nimport { gql } from '@apollo/client'\n\nexport const LOGIN = gql`\n  mutation login($password: String!) {\n    login(password: $password)\n  }\n`

The resolver:

// graphql/resolvers/mutations/login.ts\n\nimport Cryptr from 'cryptr'\n\nexport function login(_, { password }, ctx) {\n  const { cookie } = ctx\n\n  const validator = process.env.PASSWORD\n  if (password !== validator) return false\n\n  const secret = process.env.PASSWORD_TOKEN\n  const cryptr = new Cryptr(secret)\n  const encrypted = cryptr.encrypt(password)\n\n  // the password is correct, set a cookie on the response\n  cookie('session', encrypted, {\n    // cookie is valid for all subpaths of my domain\n    path: '/',\n    // this cookie won't be readable by the browser\n    httpOnly: true,\n    // and won't be usable outside of my domain\n    sameSite: 'strict',\n  })\n\n  // tell the mutation that login was successful\n  return true\n}

Next, let's log in from the client:

// pages/login.tsx\n\nimport * as React from 'react'\nimport { useRouter } from 'next/router'\nimport { useMutation } from '@apollo/client'\nimport { LOGIN } from '~/graphql/mutations/auth.ts'\nimport { withApollo } from '~/components/withApollo'\n\nfunction Login() {\n  const router = useRouter()\n  const [password, setPassword] = React.useState('')\n\n  const [handleLogin] = useMutation(LOGIN, {\n    variables: { password },\n    onCompleted: (data) => data.login && router.push('/'),\n  })\n\n  function onSubmit(e) {\n    e.preventDefault()\n    handleLogin()\n  }\n\n  return (\n    <form onSubmit={onSubmit}>\n      <input\n        type=\"password\"\n        placeholder=\"password\"\n        onChange={(e) => setPassword(e.target.value)}\n      />\n    </form>\n  )\n}\n\n// remember that withApollo wraps our component in an ApolloProvider, giving us access to use the `useMutation` and `useQuery` hooks in our component.\nexport default withApollo(Login)

So our flow should now work:

  1. I enter a password on the client
  2. The password gets sent as a variable to my mutation
  3. The mutation verifies the password, signs a session cookie, and returns it in the response headers to be saved in the browser
Validating my identity on the client

Okay, so now I have a signed cookie in my browser which will be used in all future requests to verify my identity. The next step is to provide the client with some kind of isMe boolean that can be fetched from anywhere. We can write a small GraphQL query to provide this information:

// graphql/queries/isMe.ts\n\nimport { gql } from '@apollo/client'\n\nexport const IS_ME = gql`\n  query IsMe {\n    isMe\n  }\n`

Remember, we've already written an isMe helper into our GraphQL context object, so we can return that value in our resolver:

// graphql/resolvers/isMe.ts\n\nexport function isMe(_, __, { isMe }) {\n  return isMe\n}

Next, let's write our GraphQL query on the client to find out if it's me viewing the page:

// src/hooks/useAuth.tsx\n\nimport { IS_ME } from '~/graphql/queries/isMe.ts'\nimport { useQuery } from '@apollo/client'\n\nexport function useAuth() {\n  const { data } = useQuery(IS_ME)\n\n  return {\n    isMe: data && data.isMe,\n  }\n}

With this helper hook, we can now start checking for isMe anywhere in the client:

// src/pages/bookmarks.tsx\n\nimport * as React from 'react'\nimport { useQuery } from '@apollo/client'\nimport BookmarksList from '~/components/Bookmarks'\nimport { GET_BOOKMARKS } from '~/graphql/queries'\nimport { useAuth } from '~/hooks/useAuth'\nimport { getStaticApolloClient } from '~/graphql/api'\nimport { withApollo } from '~/components/withApollo'\nimport AddBookmark from '~/components/AddBookmark'\n\nfunction Bookmarks() {\n  // cache-and-network is used because after I add a new bookmark, other people will still be seeing the statically-served HTML created at build time. In this way, the user will see a page rendered _instantly_, and the client will kick off a network request to ensure it has the latest bookmarks data.\n  const { data } = useQuery(GET_BOOKMARKS, { fetchPolicy: 'cache-and-network' })\n  const { bookmarks } = data || {}\n  const { isMe } = useAuth()\n\n  return (\n    <div>\n      <h1>Bookmarks</h1>\n      {isMe && <AddBookmark />}\n      {bookmarks && <BookmarksList bookmarks={bookmarks} />}\n    </div>\n  )\n}\n\nexport async function getStaticProps() {\n  const client = await getStaticApolloClient()\n  await client.query({ query: GET_BOOKMARKS })\n  return {\n    props: {\n      apolloStaticCache: client.cache.extract(),\n    },\n  }\n}\n\nexport default withApollo(Bookmarks)
Adding bookmarks

Okay, so now I can progressively disclose UI on the client once the site knows it's me. But because my GraphQL endpoint is exposed to the internet, we'll need to make sure that random people can't write their own POSTs to maliciously save bookmarks.

Here's the mutation resolver on the backend, which checks the isMe flag set in the context object, performs some input validation, and then persists the bookmark.

// graphql/resolvers/mutations/bookmarks.ts\n\nimport { URL } from 'url'\nimport { AuthenticationError, UserInputError } from 'apollo-server-micro'\nimport firebase from '~/graphql/api/firebase'\nimport getBookmarkMetaData from './getBookmarkMetaData'\n\nfunction isValidUrl(string) {\n  try {\n    new URL(string)\n    return true\n  } catch (err) {\n    return false\n  }\n}\n\nexport async function addBookmark(_, { url }, { isMe }) {\n  if (!isMe) throw new AuthenticationError('You must be logged in')\n  if (!isValidUrl(url)) throw new UserInputError('URL was invalid')\n\n  const metadata = await getBookmarkMetaData(url)\n\n  const id = await firebase\n    .collection('bookmarks')\n    .add({\n      createdAt: new Date(),\n      ...metadata,\n    })\n    .then(({ id }) => id)\n\n  return await firebase\n    .collection('bookmarks')\n    .doc(id)\n    .get()\n    .then((doc) => doc.data())\n    .then((res) => ({ ...res, id }))\n}
Conclusion

This is all a bit...complicated, to say the least. But when it all works, it actually works quite well! And as I incrementally add more mutation types, it should all Just Work™.

At the end of the day, the site gets all the benefits of super-fast initial page loads thanks to static generation at build time, with all the downstream client-side functionality of a regular React application.

I hope the pseudocode above will help unblock anyone who is following a similar path, but just in case, here's the full pull request containing all the changes that eventually made this work. You'll notice I spent some time hacking in automatic type generation and hook generation using GraphQL Code Generator, and added some polish to the overall experience (like a /logout page which clears the cookie, in case I'm on a device I don't own).
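As an aside, the logout flow is just the mirror image of login. A hedged sketch of what a logout resolver might look like (names are mine, not necessarily what ended up in the pull request):

```javascript
// Sketch of a `logout` resolver: expire the session cookie so the browser
// deletes it. `cookie` is the same helper exposed on the GraphQL context.
function logout(_, __, ctx) {
  const { cookie } = ctx
  cookie('session', '', {
    path: '/',
    httpOnly: true,
    sameSite: 'strict',
    expires: new Date(0), // a date in the past tells the browser to clear it
  })
  // tell the mutation that logout succeeded
  return true
}
```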

Please don't hesitate to reach out with questions, I'd love to help! Otherwise, the Next.js discussions have been a fantastic resource for finding solutions to a lot of common problems.

Good luck!

"},{"__typename":"Post","id":"5dea607e295515003754d9f8","title":"Adding Dark Mode with Next.js, styled-components, and useDarkMode","slug":"adding-dark-mode-with-next-js","updated_at":"2020-05-01T14:21:36.000-04:00","excerpt":"How I added automatic dark mode to my personal site using Next.js, styled-components, and useDarkmode. ","feature_image":"https://overthought.ghost.io/content/images/2019/12/dark-mode--1-.png","html":"

I recently added automatic dark mode theming to my personal site using Next.js, styled-components, and useDarkMode. This is a short technical look at how it's built.

Updated

The solution below worked for a while, but unfortunately suffered from the dreaded \"dark mode flicker\" - that flash of a white screen you get when an SSR'd page briefly hits the client and renders light mode styles.

Along the way, Josh Comeau wrote an amazing post about implementing a dark mode that perfectly accounts for server side rendering and static generation. The tl;dr is that you have to move to CSS variable land, rather than relying on a Styled Components theme prop. Styled Components theme switching can only happen on the client at render time, and results in the flash.

For anyone visiting this post from the future, I would highly recommend studying (and re-studying) Josh's post. It is seriously amazing.

If you want to see how his writing translates into a Next.js app, you can look at my pull request to fix the dark mode flicker on this site. Specifically, you'll care about these changes: _document.tsx, colors.ts, DarkMode.tsx and InlineCssVariables.tsx.

The rest of the post below reflects my old approach, and would still be a valid way to think about implementing dark mode if you are not doing any kind of server side rendering or static generation. Although, you probably should be doing those things ☺️

useDarkMode

useDarkMode is a useful React hook designed to help people add dark mode to their site. The feature I like the most about this hook is that it respects people's operating system preferences, using prefers-color-scheme. This means I don't need to build a manual toggle. Instead, I can infer a person's preference from their operating system settings.

Theme Objects

Knowing whether or not a person wants dark mode is just the first part of the problem. Based on this preference, we actually have to dynamically update all the colors in our app.

To do this, I defined light, dark, and default theme objects. The default theme contains non-color related properties like spacing and font sizes. The light and dark objects contain all color related properties that should switch dynamically between modes.

Here's a preview of how my themes are set up:

// Theme.ts
const light = {
  bg: {
    primary: '#eff0f5',
    secondary: '#ffffff',
    inset: '#e2e4e8',
    input: 'rgba(65,67,78,0.12)'
  },
  text: {
    primary: '#050505',
    secondary: '#2f3037',
    tertiary: '#525560',
    quarternary: '#9194a1',
    placeholder: 'rgba(82,85,96,0.5)',
    onPrimary: '#ffffff'
  },
  // ...
}

const dark = {
  bg: {
    primary: '#050505',
    secondary: '#111111',
    inset: '#111111',
    input: 'rgba(191,193,201,0.12)'
  },
  text: {
    primary: '#fbfbfc',
    secondary: '#e3e4e8',
    tertiary: '#a9abb6',
    quarternary: '#6c6f7e',
    placeholder: 'rgba(145,148,161,0.5)',
    onPrimary: '#050505'
  },
  // ...
}

const defaultTheme = {
  fontSizes: [
    '14px', // 0
    '16px', // 1
    '18px', // 2
    '22px', // 3
    '26px', // 4
    '32px', // 5
    '40px'  // 6
  ],
  fontWeights: {
    body: 400,
    subheading: 500,
    link: 600,
    bold: 700,
    heading: 800,
  },
  lineHeights: {
    body: 1.5,
    heading: 1.3,
    code: 1.6,
  },
  // ...
};

export const lightTheme = { ...defaultTheme, ...light }
export const darkTheme = { ...defaultTheme, ...dark }
Using theme objects in styled-components

All of the projects I've built in the last few years have used styled-components to use CSS directly in JavaScript. If you're new to CSS-in-JS, I would recommend this blog post from @mxstbr: Why I Write CSS in JavaScript.

Styled-components uses a ThemeProvider component which accepts a theme object as a prop, and then re-exposes that object dynamically to any styled component deeper in your component tree. I used this provider to insert a different theme object depending on a person's dark mode preferences:

// Providers.tsx
import * as React from 'react'
import { ThemeProvider } from 'styled-components'
import useDarkMode from 'use-dark-mode'
import { lightTheme, darkTheme } from '../Theme'

export default ({ children }) => {
  // Opt out of localStorage and the built-in onChange handler because I want
  // all theming to be based on the person's OS preferences
  const { value } = useDarkMode(false, { storageKey: null, onChange: null })
  const theme = value ? darkTheme : lightTheme

  return (
    <ThemeProvider theme={theme}>
      {children}
    </ThemeProvider>
  )
}

With the ThemeProvider accepting the dynamic theme object, I can then use my color definitions downstream in my components directly:

const Card = styled.div`
  /* ... */
  background-color: ${props => props.theme.bg.primary};
  color: ${props => props.theme.text.primary};
`
Client-server mismatches

One of the best features of Next.js is the ability to render pages on the server. This gives people faster loading times by moving computationally heavy processing off the client and onto a server. Server-side rendering, or SSR, has many other benefits as well, but it comes with a tradeoff: SSR doesn't know about client-specific preferences like prefers-color-scheme.

This means that when the page is generated on the server, it can't dynamically choose the correct theme. When the client receives the page and hydrates the JavaScript, it can be out of sync and cause rendering to break.

The solution to this is hacky, but works: wrap the body in a visibility: hidden div for the server's render cycle. This prevents the flash, but doesn't prevent search engines from accessing meta tags deeper in your tree for SEO. When the client rehydrates, re-render the app with the person's clientside preferences. We can skip this server-side render using React.useEffect to determine when the app has mounted:

const [mounted, setMounted] = React.useState(false)

React.useEffect(() => {
  setMounted(true)
}, [])

const body = (
  <ThemeProvider theme={theme}>
    {children}
  </ThemeProvider>
)

// prevents SSR flash for mismatched dark mode
if (!mounted) {
  return <div style={{ visibility: 'hidden' }}>{body}</div>
}

Putting it all together

These two stripped-down files compose the work outlined above:

// Providers.tsx
import * as React from 'react'
import { ThemeProvider } from 'styled-components'
import useDarkMode from 'use-dark-mode'
import { lightTheme, darkTheme } from '../Theme'

export default ({ children }) => {
  const { value } = useDarkMode(false, { storageKey: null, onChange: null })
  const theme = value ? darkTheme : lightTheme

  const [mounted, setMounted] = React.useState(false)

  React.useEffect(() => {
    setMounted(true)
  }, [])

  const body = (
    <ThemeProvider theme={theme}>
      {children}
    </ThemeProvider>
  )

  // prevents SSR flash for mismatched dark mode
  if (!mounted) {
    return <div style={{ visibility: 'hidden' }}>{body}</div>
  }

  return body
}
// _app.tsx
import * as React from 'react';
import App from 'next/app';
import Providers from '../components/Providers';

class MyApp extends App {
  render() {
    const { Component, pageProps } = this.props;
    return (
      <Providers>
        <Component {...pageProps} />
      </Providers>
    );
  }
}

export default MyApp;
Demo
Conclusion

I'm really pleased with how much easier prefers-color-scheme made this process of enabling dark mode. Additionally, the open source work happening with tools like Next.js and useDarkMode is fantastic - what a time saver!

Tweet @ me if you end up using this process to add dark mode to your own site, I'd love to see 🌗

"},{"__typename":"Post","id":"5e7686a2f1225a0038464648","title":"Incrementally Correct Personal Websites","slug":"incrementally-correct-personal-websites","updated_at":"2020-03-22T16:30:08.000-04:00","excerpt":"It's time to change the way I think about building and maintaining my personal website.","feature_image":null,"html":"

I first heard the phrase \"incremental correctness\" from Guillermo Rauch during our Design Details interview over 2 years ago.† Since then, the concept has been weaving its way through all parts of my life, inside and outside of design and technology. It's become a mantra of sorts at GitHub, thanks to the reliable repetition of the phrase in almost every conversation with Max Schoening.

Incremental correctness is the process of iterating towards something more truthful, accurate, usable, or interesting. The faster we can iterate, the faster we can discover good ideas. Things aren't perfect today, but tomorrow things can be slightly closer to perfect.

Incremental correctness changes everything about the way you work. It's anti-perfectionism. It's pro-generation. It's about discovery and proof, research and prototyping, and having a framework to reliably test your instincts. It discourages major redesigns, preferring isolated improvements to a small subset of nodes in any kind of working tree.

I've always struggled to have this mindset when working on my personal website. I get stuck in these loops where I redesign the thing once every few years, and am left so thoroughly exhausted and frustrated by the process that I don't want to touch the thing ever again. If you've ever dreaded the notion of having to redesign your portfolio, you probably know what I mean.

One of the reasons I get stuck in this trap is because our tools don't afford incrementally-correct processes.

Think about blogging for a second: the fact that a list of posts is ordered chronologically by publication date, by default, is a bug in our incrementally-correct worldview. Blogging tools don't create any incentive to go back and edit previous ideas or posts. Or, at the very least, the default ordering has a de facto side effect of fewer people being aware of revisions or reversals to previously-published ideas.

RSS feeds are organized linearly by publication date, putting pressure on writers to make sure that each post is \"final\" – there's no going back to improve or clarify your thoughts for a feed reader where everything is static and cached for eternity. At the very least, any subsequent edits will only reach a fraction of the initial audience.

To combat this, I'm going to order the posts here by when they were last updated. But of course, this has flaws: not every update is an iteration on the underlying idea. Maybe I'm just fixing a typo, or swapping out a confusing word. The closest comparison I can think of is that blog post edits are treated like a semver minor version by default. I wish blogging tools could distinguish patches from minor versions, and interpret the impact of my change accordingly.
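To make the analogy concrete, here's a hypothetical sketch of what semver-style post versioning could look like. Nothing here exists in current blogging tools; the names are purely illustrative:

```typescript
// A typo fix bumps the patch; a substantive revision bumps the minor
type EditKind = 'patch' | 'minor'

interface PostVersion {
  minor: number
  patch: number
}

function bumpVersion(v: PostVersion, kind: EditKind): PostVersion {
  return kind === 'minor'
    ? { minor: v.minor + 1, patch: 0 }
    : { minor: v.minor, patch: v.patch + 1 }
}
```

A feed could then resurface posts only on minor bumps and quietly ignore patches.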

Regardless of this clear shortcoming, this ordering still feels incrementally more correct than ordering by publication date. Progress!

Let's talk about dependencies, everyone's favorite part of building websites.

If you've ever maintained a website for any length of time, you probably know just how frustrating it can be to let dependencies get out of date. Heaven forbid you return to a year-old project only to discover that half of your libraries are out of date, no longer supported, or require an entirely different set of local tooling to use. These are some of the reasons people hate building personal websites – there is too much effort spent on the meta-maintenance, and most people don't have the patience for it.

In this arena, tools have gotten better. One process that has saved me hours of work and countless headaches is setting up an automatic dependency upgrade pipeline, backed by end-to-end tests, to ensure that my website won't break as underlying dependencies change. Using Dependabot, Cypress, and ZEIT, my website is set up to automatically update dependencies, ensure that pages work correctly with the new code, and deploy directly to production with every merge. Think of this pipeline as a system for automating incremental correctness. Underlying bug fixes flow seamlessly into production without conscious effort. Magic.
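For reference, the Dependabot half of such a pipeline can be a single config file checked into the repo. A minimal sketch, assuming an npm project and weekly update checks (see GitHub's Dependabot docs for the full schema):

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
```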

And then there are deployments.

I used to dread making changes to my personal sites because I'd have to deploy them. Usually this meant running some kind of script locally, or, going old school, FTP-ing files to production and hoping I didn't break the whole thing. Deploying changes is another kind of meta-maintenance that most people hate.

Thankfully, the tools have gotten better. With ZEIT, a production deploy is just a git push away – every single change to the master branch on my repo triggers a production deployment. It's a beautiful abstraction to eliminate the meta-maintenance of deploying code.

So here we are: I'm simultaneously frustrated by and invigorated by the tools available today to build websites. Some of them, like blogging platforms, place too much emphasis on creating static chunks of information. This discourages ongoing iteration, thus discouraging the pursuit of incremental correctness. Conversely, a lot of the meta-maintenance work of testing, deploying, and updating code has become infinitely easier.

If you've been struggling with the looming task of redesigning your personal website from scratch, or rebuilding your portfolio after years of neglect, maybe it's time to approach things from a different angle the next time around.

What if your website was a mini-product, just for you? How would you structure it differently? How would you think about process and accessibility? What systems would you set up to automate the boring parts and prevent regressions? What tools would you want to use that would afford a 5-minute daily check-in to make a small improvement to the content or design?

† The segment starts around minute 55, although I'd recommend the entire episode as Guillermo is incredibly thoughtful and forward thinking about many of these ideas.

"},{"__typename":"Post","id":"5e6e77b2f1225a0038464491","title":"Automating the Boring Parts of Product Design","slug":"automating-the-boring-parts-of-product-design","updated_at":"2020-03-15T16:29:01.000-04:00","excerpt":"Building Figma plugins to automate the boring parts of product design.","feature_image":"https://overthought.ghost.io/content/images/2020/03/hero-2.png","html":"

We started building the GitHub mobile apps last May, around the same time that Figma Plugins were released. I remember feeling so relieved, because I suddenly had access to tools that would speed up my design process, simplify tedious workflows, and automate the boring parts of designing screens.

Over the next few months, I ended up making about ten small utility plugins. Those small utilities were eventually combined into a single \"mono-plugin\" that has become my daily-driver for designing the mobile apps.

Bottom separators

iOS table views use separators between list items to improve the scannability of lists. Separators are inset from the left edge of the screen to align with any text within the table cell. As I was designing many new types of table views, it was tedious to constantly resize these separators as content dimensions changed.

The first plugin was a text input that accepted a string like t, 16 and would apply a top separator and a bottom separator inset 16 points to any table view cells that were selected. As you can see, I really poured a lot of love and care into making this plugin aesthetically pleasing and delightful. Dribbble isn't ready for this much joy.
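Here's a minimal sketch of how a command string like t, 16 might be parsed; the exact flags and format of the real plugin are assumptions on my part:

```typescript
interface SeparatorCommand {
  top: boolean
  bottom: boolean
  inset: number
}

// Parses commands like "t, 16" (top, inset 16) or "tb, 8" (both edges)
function parseSeparatorCommand(input: string): SeparatorCommand | null {
  const match = input.trim().match(/^([tb]+)\s*,\s*(\d+)$/)
  if (!match) return null
  const [, edges, inset] = match
  return {
    top: edges.includes('t'),
    bottom: edges.includes('b'),
    inset: parseInt(inset, 10),
  }
}
```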

Creating color styles

We knew we wanted to ship our mobile apps with dark mode support out of the gate. This meant that we needed to extend the Primer color system to account for dark mode variants. The next utility plugin took the original set of Primer styles and extended them into a mobile-only \"spectrum color set\" that contained dark mode variants.

For each mobile-specific color swatch, the plugin would generate color styles in the Figma document.

Functional color styles

In general, I don't recommend using colors like gray-600 as values in interface designs. Interface colors should live within a second layer of abstraction that we call \"functional colors.\"

For example, we might create a functional color called textTertiary that encapsulates gray-600. A functional color can encapsulate multiple color styles to account for theme variants. In this way, you could have textTertiary / light mode and textTertiary / dark mode.

With a few dozen functional colors, extended from the base set of a few hundred core colors, I was suddenly dealing with a lot of color styles in my document. So the next plugin ran colors through a pipeline that ensured that my mobile colors extended Primer, and that my functional colors were always derived from the spectrum palette.
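A minimal sketch of the two-layer abstraction, with illustrative values rather than the actual Primer palette:

```typescript
// Base palette: raw color values (illustrative, not real Primer values)
const palette = {
  'gray-600': '#586069',
  'gray-300': '#d1d5da',
} as const

type Mode = 'light' | 'dark'
type PaletteKey = keyof typeof palette

// Functional layer: interface-facing tokens resolve to palette keys per theme
const functionalColors: Record<string, Record<Mode, PaletteKey>> = {
  textTertiary: { light: 'gray-600', dark: 'gray-300' },
}

function resolveColor(token: string, mode: Mode): string {
  return palette[functionalColors[token][mode]]
}
```

Designs reference textTertiary; only this mapping knows which raw gray that means in each mode.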

The resulting color pipeline looked like this:

Color export

We needed a way to get all of our color styles out of Figma and converted into xcassets and XML styles for iOS and Android. Copy-pasting hundreds of values seemed tedious, to say the least.

The next plugin grabbed all the color swatches from a Colors page and generated an array of color objects, each with useful metadata like mode (light or dark, high contrast or normal contrast), hex values, and even extended hex values with alpha (for Android).

Eli Perkins and Elise Alix then wrote custom scripts that could ingest this array and spit out platform-specific color asset files.
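A sketch of what that export step could look like, deriving metadata from structured style names like textPrimary / dark / normal contrast. The field names and helper here are assumptions, not the plugin's actual code:

```typescript
interface ExportedColor {
  name: string
  mode: string // 'light' | 'dark'
  contrast: 'normal' | 'high'
  hex: string
  hexWithAlpha: string // ARGB-style value for Android resources
}

function toExportedColor(styleName: string, hex: string, alpha = 'FF'): ExportedColor {
  // Style names follow the pattern "textPrimary / dark / normal contrast"
  const [name, mode, contrast] = styleName.split(' / ').map((s) => s.trim())
  return {
    name,
    mode,
    contrast: contrast.startsWith('high') ? 'high' : 'normal',
    hex,
    // Android color resources expect the alpha channel up front
    hexWithAlpha: `#${alpha}${hex.replace('#', '')}`,
  }
}
```

Platform scripts can then map an array of these objects straight into xcassets or XML.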

Color picker

The built-in Figma style picker is not great. It doesn't support search, it constantly resets scroll position, and long style names get truncated. After programmatically creating a few hundred layer styles, I knew that I'd need a better interface.

The next plugin wrapped the document's layer styles in a modal with search, preview swatches, and utility buttons to set layer properties for a given color. For example, the S and B buttons on each color's list item would set a selection's stroke and background colors, respectively. Clicking anywhere else on the list item would fill by default.

The color picker also accounts for what mode I'm designing in: the dark mode switcher at the top loads only dark mode variants of the document's color styles. This quick preview made it much easier to debug incorrect dark mode colors.

Dark mode switcher

Designers don't need to design every single screen for both light and dark mode. It's designing at the wrong layer of abstraction: dark and light mode are functions of an underlying color system.

Yet, when making mocks, I found myself wanting to gut check that I had correct elevations, type hierarchy, and contrast ratios in dark mode. Because my color styles were all named with a structured hierarchy, like textPrimary / light / normal contrast, I could programmatically swap layer styles by looking for pairs. So given a selection that had a fill color of textPrimary / light / normal contrast, I could check that textPrimary / dark / normal contrast exists in my document. If it exists, swap the style IDs.
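A minimal sketch of that pairing logic; the helper name is an assumption:

```typescript
// Derive a style's dark counterpart from its structured name,
// e.g. "textPrimary / light / normal contrast"
//   -> "textPrimary / dark / normal contrast"
function darkCounterpartName(lightName: string): string | null {
  const parts = lightName.split(' / ')
  const i = parts.indexOf('light')
  if (i === -1) return null
  parts[i] = 'dark'
  return parts.join(' / ')
}

// In a Figma plugin you'd then search figma.getLocalPaintStyles() for a
// style with that name and swap the style IDs on the selection.
```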

Data population

Populating mocks with avatars, names, usernames, and bios is tedious. I found myself constantly switching between my browser and Figma to grab accurate data. Luckily, Figma can talk to the internet, which meant I could write a small utility that would hit the GitHub API and return data about people, organizations, and repos.

By naming layers with a special syntax like __login, I could tell specific layers to consume fields from the API response. I added a text input that would accept custom variables in case I needed to get data for a specific object; otherwise, the plugin fetches randomly from a hard-coded list of objects.
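A sketch of that naming convention in action: a layer named __login consumes the login field of the API response. The helper names are assumptions, not the published plugin's code:

```typescript
// "__login" -> "login"; layers without the prefix are left alone
function fieldForLayerName(layerName: string): string | null {
  return layerName.startsWith('__') ? layerName.slice(2) : null
}

// Write API response fields into any bound text layers
function populate(
  layers: { name: string; characters: string }[],
  data: Record<string, string>
): void {
  for (const layer of layers) {
    const field = fieldForLayerName(layer.name)
    if (field && field in data) layer.characters = data[field]
  }
}
```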

This plugin has been open sourced as  figma-github-data.

Mono-plugin

After building all these utility plugins, it became tedious to remember all the command names and constantly switch between multiple plugins while designing a single screen.

The solution for this was to combine most of the plugins into a single \"mono-plugin\" that wraps different commands as buttons. So rather than typing the command \"Convert to dark mode\" there is now a button that I can click to perform the action. This window is my own personal augmentation of Figma's interface. I can open it once for my entire working session.

Conclusion

These plugins have saved me hours and hours of boring, repetitive work. They're also fun to build!

My recommendation for other designers is to keep an eye out for behaviors that could be automated with plugins. Here are a few behaviors that should raise a red flag:

Designers should feel comfortable building their own \"personal API\" in order to solve their day-to-day design problems. If those problems can be generalized and abstracted, it makes sense to publish and share that work more broadly with the community.

But it's okay to not publish everything. It's okay to just build for yourself. Not everything we build has to be packaged up, marketed, and given a shiny coat of paint.

Demo

I presented this post as a lightning talk at Figma's Config conference in February. My co-presenters Jake Miller and Lily Nguyen also demoed their impressive plugins that are being used at Atlassian and Uber.

Resources

You can grab the figma-github-data plugin on GitHub to see how it works, or fork it for your own team. The Figma Plugins API docs are solid, and should be your go-to resource as you're building your plugins.

I also recommend investigating the following repositories for resources, frameworks, or sample code:

"},{"__typename":"Post","id":"5e652c1ab6a909003848a213","title":"The Meta Skills of Product Design","slug":"the-meta-skills-of-product-design","updated_at":"2020-03-12T13:10:12.000-04:00","excerpt":"Exploring the meta skills that product designers can use to learn faster, work on more impactful products, or collaborate with more interesting people.","feature_image":"https://overthought.ghost.io/content/images/2020/03/meta-2.jpg","html":"
This post is a complement to episode 337 of Design Details, The Metagame of Design.

Last week I was reading To Get Good, Go After the Metagame, a post about metagames in life, and how understanding metas can create competitive advantages.

For starters, you should read that post. If you don't, here's a short primer from Wikipedia:

Metagame, or game about the game, is any approach to a game that transcends or operates outside of the prescribed rules of the game, uses external factors to affect the game, or goes beyond the supposed limits or environment set by the game.

Metas exist outside of games. They influence our jobs, our relationships, our beliefs, and everything in between. With this in mind, I wanted to explore meta skills that product designers can use in order to learn faster, work on more impactful products, or collaborate with more interesting people.

I've organized these from broadest and most globally-relevant, down to the more personal and individual meta skills that designers can spend time learning.

Observing laws

When laws change, designers must adjust their tactics. GDPR, CCPA, and other data privacy laws shift the requirements and expectations of product designers. From cookie notices, to entire account deletion flows, new laws will emerge that require new interfaces and experiences for users. While laws are usually slow to change, they should be considered constraints at the beginning of any design process.

Tracking the industry

Designers who became experts at mobile screen design in 2008 and 2009 are likely at the top of their game today. They are in demand, they are designing solutions for a massive number of people, and their skills are easily transferrable between industries, companies, and products.

Learning how to design for phones in 2008 was also risky. What if the iPhone had flopped? What if we hadn't been ready for 3.5 inch screens to become our portal into the world around us? Those designers may have wasted a lot of time.

We can observe similar opportunities today in AR, VR, and blockchain technologies. Will these fields dominate the world in the same way that mobile did? Maybe. Maybe not. But if one of them does, the people who started learning how to design for those technologies yesterday will have an advantage over anyone who tries to jump on board after the ship has set sail.

Serving the business

Businesses need different things at different stages of their lives. They need different things based on the competitive landscape, changing customer tastes, broader political and economic trends, and more. Being able to observe and understand how the needs of the business evolve over time is crucial to ensuring that you are working on the most important, high-impact problems of the day.

Building momentum

The more time I spend designing, the more I realize how important excitement is in the design process. Not my excitement—although that certainly helps keep me energized—but rather peer and leadership excitement. Excitement is contagious, and an excited team is a motivated team.

Learning how to build and maintain excitement among the people who matter is a skill. It takes effort, persistence, and clear communication.

Finding ways to find problems

Modern technology companies are hungry for data. They consume it by the petabyte, but rarely know how to digest it. This is changing quickly: companies on the edge know how to find signal in the noise. And as designers, it's our job to solve the problems that data helps uncover.

In this way, discovering new methods to find problems worth solving is a meta skill that requires exploration outside of everyday design tools. Digging into the data, talking with customers, and reading financial statements are all ways to expose yourself to potential problems that you can solve for people.

Leveraging platforms

Apple says you can't send ads in push notifications. Apple says if you want to use in-app payments, you have to give the price a certain visual prominence on the screen. Apple says who gets featured in the App Store.

The platforms we build for are incredibly powerful. Their decisions impact what is possible to build, what designs are considered harmful or friendly, and they make the upstream choice for you about what they think is best for your customers.

In many ways, this is a good thing: Apple's decisions have the side effect of allowing designers to avoid making thousands of decisions on their own. Designers should be aware of changes to rules, and be ready to explore the edges of those rules.

Evolving software

Of course, as hardware improves, so too does its backing software. And as software has improved, the foundational tools we have access to have flourished. From OS-level accessibility controls, to dark mode, to multitouch, we continue to invent new and exciting primitives that simultaneously complicate the design process and unlock the ability to solve new kinds of problems.

Understanding hardware

As hardware gets better, software tends to follow close behind. The phones in our pockets are supercomputers with constantly improving cameras, sensors, and computing power. Every year's advances unlock the ability to design new experiences and take advantage of more processing power to do things that would have been prohibitively expensive just a few years ago. Think: augmented reality, virtual reality, rich 3D maps, high-resolution photos, and more powerful video tools are all byproducts of better hardware.

Designers who pay attention to the hardware are able to push the limits of the software.

Collaborating cross-functionally

It's not written in stone that the three pillars of product development in tech companies have to be Product, Engineering, and Design. It wasn’t always this way, and it likely won't stay this way forever. Maybe more pillars will be added. Perhaps the decision-making power shifts within those pillars.

Designers should be aware of changing dynamics of cross-functional relationships within their company and within the industry. Perhaps in five years, the fourth pillar will be Sales, and the most effective designers will be the ones who can speak the language of sales, work effectively with the best salespeople, and understand how to build better products with a different kind of decision maker in the room.

Choosing projects

Being on the right projects at the right time is often a matter of luck. But it doesn't have to be pure luck. Designers can position their skillset, leverage connections, or proactively engage with the right people in order to work on more interesting problems.

Working on more impactful problems tends to spin the flywheel of many other meta skills in this list – audience building, worldview expansion, and having a deeper understanding of design in general.

Shaping world views

The world around us shapes our perception of people, ideas, and technologies. I'm writing this post from my apartment in March of 2020, as the United States locks down and prepares for the impending explosion of coronavirus infections. Knowing about these things – current events, diseases, politics, and so forth – gives designers new types of problems to think about, new information to use in their research, or new ways of thinking about the world.

Building an audience

It's easier than ever to build an audience today: blogging, sharing work in progress, writing case studies, contributing to open source, podcasting, or giving away free design resources are all viable options. Doing these things puts your work and your name into the public conversation. Do these things for enough time, with enough consistency, and people will notice.

Having an audience has its drawbacks – perhaps those are worth exploring in a future post. But in general, having people interested in your work is a competitive advantage. It provides better access to people, conversations, and opportunities. Having an audience that shares feedback with you will tighten your feedback loops, allowing you to learn and iterate on any idea faster.

Inventing patterns

Red dots. Tooltips. Push notifications. These are a few of the tools in a designer's kit that help them to guide attention, remind people of critical information, or nudge them to make a purchase. Red dots on a single interface, in a single app, are incredibly powerful at getting a person's attention. But these days, every surface of every app is littered with red dots, killing the underlying effectiveness of the pattern.

The most innovative designers are already thinking about the next novel way to inform and delight people in ways that won't be lost in a sea of noise.

Mastering tools

It would be painful to design user interfaces in Photoshop today; modern tools are better suited for this job. Modern tools help designers transfer ideas from their brain to a screen faster than ever before. But design tools aren't done – the tides are constantly shifting. Designers with the deepest understanding of their tools will be able to consistently move faster and use the tool best suited for solving a problem.


Developing meta skills is work. If you don't pay attention for long enough, the meta will change behind your back, and it will be hard to catch up. You must make an individual choice about how closely you want to track the meta. It's time spent thinking about design, which means giving up time to think about other things you might care about.

But if you put in the time, you might find yourself inventing new design patterns, meeting more interesting people, or working on higher impact projects.

Good luck.

Thank you to @marshallbock, @effyyzhang, @mxstbr, and @jeffpersonified for reading versions of this post.

"},{"__typename":"Post","id":"5e63cb54b6a909003848a0a7","title":"The Death of Designer Unicorns","slug":"death-of-designer-unicorns","updated_at":"2020-03-07T15:49:07.000-05:00","excerpt":"It's no longer possible to be a \"designer unicorn.\"","feature_image":null,"html":"

It's no longer possible to be a \"designer unicorn.\" It used to be the case that if you were good at visual design, interaction design, and frontend coding, you were elevated to the mythical \"unicorn\" status. But each of these skills – among the other skills required to build successful products – has become too broad and too nuanced for any individual to contribute meaningfully across the entire spectrum.

I suppose there's still value in having a term for a multidisciplinary designer. Maybe it's just that: multidisciplinary – someone who has a deep understanding of their role in building products, but also the ability to collaborate meaningfully with cross-functional peers, with a shared language and a unifying goal.

But if I step back and think about what's really important in building products, design and engineering are just two small pieces of the puzzle. Consider: user research, copywriting, marketing, sales, data science, QA, security, and dev ops.

And within each of these areas, consider the depth. In design alone, you might think about visual, interaction, systems, and product design as all different modes of creating a final artifact. Within engineering, consider the frontend, backend, data model, performance, responsiveness, time to interactive, internationalization, and on and on.

The caricature of a \"designer who codes\" being the final evolution in a designer's career isn't enough anymore. Modern designers should strive to be multidisciplinary, and design teams should seek to build a team of overlapping multidisciplinary designers. Team construction then becomes about balancing skill coverage and skill depth, guided by upstream business needs.

So should designers code? Or should designers write copy, understand the sales pipeline, interview customers, read the data, and think performance-first? Yes. It's just that it's not realistic, nor particularly compelling, to try to be the one person who does all of these things day to day in any meaningful capacity.

Exceptions abound. But in my experience, I've been able to run the furthest, the fastest, when collaborating with people in complementary roles where we each took the time and effort to speak each other's language and build a shared understanding of what we're really working towards.

"},{"__typename":"Post","id":"5de67c541a96810038fb11ff","title":"Migrating from Google Analytics to Fathom","slug":"migrating-from-google-analytics-to-fathom","updated_at":"2020-03-07T12:59:20.000-05:00","excerpt":"Finally: a fast and simple, privacy-first analytics alternative to Google Analytics.","feature_image":"https://overthought.ghost.io/content/images/2019/12/fathom--1-.png","html":"

Google Analytics is overkill for most of my projects. I don't really need the myriad of features and tracking options that Google offers: OS version data, behavior flows, content drill-downs, exit pages, and acquisition devices — just to name a few.

Here are questions that I want my analytics software to answer:

Here's information I'm interested in, but is not critical:

As it happens, you don't need data-hungry, privacy-violating software like Google Analytics to answer these questions.

Enter: Fathom.

Privacy first

Fathom is dedicated to providing great analytics software without collecting personal or private information from people who happen to land on my site. The minimal script doesn't get blocked by ad blockers. It doesn't use cookies, which means 3rd-party cookie blocking won't impact my traffic reports.

Having a minimal tracking footprint means that a GDPR notice isn't needed, either. No personal information is ever being collected, so I don't need to pester people with those annoying banners.

Scales with me

Fathom allows me to connect as many sites as I want and only charges more as my sites grow. This alignment of incentives is a good thing: if my sites are growing, Fathom has to do more work, so I should be paying them more.

Ridiculously fast

Fathom's dashboard is fast. Crazy fast. The team has been writing about their dedication to performance, and that dedication is paying off. It feels great to be able to peek at my site dashboard in just a second or two, rather than having to endure the 6-10 second loading spinners that have plagued Google Analytics for years.

Paying for great independent software

I want a world with more independent software powered by paying customers, not fueled by flawed advertising models. The right thing to do is to put my money where my mouth is and pay for valuable tools and services that help me live a more productive life. $14 per month for Fathom isn't cheap, as far as software subscriptions go, but I'm more than happy to pay it to support a privacy-first company which respects my time and the privacy of the people who visit my pages.

Try it out

If you're interested in trying out Fathom for your own personal site or side project, here's a referral link that will save you $10 on your first bill: https://usefathom.com/ref/KSIDW1

If you'd rather not use a referral link, I'd highly recommend checking out https://usefathom.com directly and giving it a try. They have a 7-day free trial and dead-simple setup instructions.

"},{"__typename":"Post","id":"5df5768a295515003754dcc6","title":"A primer on investing for designers and developers","slug":"investing-for-designers-and-developers","updated_at":"2020-03-07T12:59:04.000-05:00","excerpt":"Advice for designers and developers who are taking their first steps in investing.","feature_image":"https://overthought.ghost.io/content/images/2019/12/investing--2-.png","html":"

Over the last few years I've had many conversations with people who make decent tech salaries, but aren't doing all that much with their money. Their earnings are often piling up in a checking account, like a mountain of gold in a cave.

But there's a way to think about money that some might not even know exists, or perhaps sounds intimidating enough that they don't investigate further.

Money is an asset that can be put to work to make more money. This is investing. Investing comes in many forms, with varying degrees of risk, attention required, and other tradeoffs.

For many people, the most logical and accessible mechanism of investing is the stock market. With a small appetite for risk and some initial starting capital, you can buy shares of companies that have the potential to increase in value as those companies put your money to work inventing new products, increasing efficiency, or creating entirely new industries.

Investing in the market in this way is one of the primary generators of wealth for an average person. A small bit of money, invested over time, with some rate of return on that investment, has the potential to grow into something meaningful.

If you have not started investing, the best thing you can do is to start as soon as possible, and automate your investments. Let's illustrate just how important it is to start early and invest on a recurring basis.

Compounding growth

The rule of 72 is a quick calculation that estimates how long it will take for an investment to double at a fixed annual rate of return: divide 72 by the rate. The historical average return of the stock market is about 10%, so any dollar invested will take about 7 years to double in value (72 / 10 ≈ 7). Ignore, for now, the obvious effect of forces like inflation on the resulting numbers in the rest of the post - we want to stay fuzzy and high level.

Assuming the stock market doesn't implode in the foreseeable future, you can run some quick math based on a starting investment amount and your age at the time of the first investment:

Age    Value
20     $1k (initial investment)
27     $2k
34     $4k
41     $8k
48     $16k
55     $32k
62     $64k

After 42 years of patience, and a bit of starting cash, you can live large on $64k in your old age. Not too bad.

This ever increasing rate of growth is due to compound interest and is the primary reason why it's important to start investing early. If you made your first investment at age 27, instead of at age 20, you'd lose an entire doubling period.
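This lump-sum doubling can be sketched in a few lines of JavaScript. This is a rough model that mirrors the table above: it ignores inflation and counts whole 7-year doubling periods rather than compounding continuously.

```javascript
// Rough rule-of-72 model: at ~10% annual return, one doubling period
// is about 72 / 10 ≈ 7 years. Partial periods are ignored for simplicity.
function doubledValue(initial, years, doublingPeriod = 7) {
  return initial * 2 ** Math.floor(years / doublingPeriod);
}

// Reproduce the table: $1k invested at age 20, checked every 7 years.
for (let age = 20; age <= 62; age += 7) {
  console.log(age, `$${doubledValue(1000, age - 20).toLocaleString()}`);
}
```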

But of course, you'll be earning money throughout your career, hopefully getting a bonus here or there, or maybe getting some stock from an employer. Let's re-run our calculations with a recurring monthly investment of $1,000:

Age    Value
20     $1k (initial investment)
27     $122k
34     $357k
41     $817k
48     $1.7m
55     $3.45m
62     $6.85m

Wait, what? From $1k to nearly $7m by investing $1k per month? How? Compound interest.

These numbers are fuzzy of course (markets fluctuate, you may skip a few months or need to withdraw money for life events, etc), but they represent a broader truth: setting aside a fixed amount of money every single month, slow and steady, has the power to grow into something multiple times larger than the raw amount of cash invested.
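The recurring-contribution version can also be sketched in code. This is a hedged model, not the exact calculation behind the table: it compounds monthly at a 10% annual rate, so its outputs land close to, but not exactly on, the table's figures.

```javascript
// Monthly-compounding sketch: starting balance, plus a fixed monthly
// contribution, growing at annualRate. Assumes contributions are added
// after each month's growth; real-world timing and returns will differ.
function futureValue(initial, monthly, years, annualRate = 0.1) {
  const r = annualRate / 12; // monthly rate
  let balance = initial;
  for (let i = 0; i < years * 12; i++) {
    balance = balance * (1 + r) + monthly;
  }
  return balance;
}

// $1k starting balance, $1,000/month, 7 years: roughly $120k+,
// in the same ballpark as the table's $122k at age 27.
console.log(Math.round(futureValue(1000, 1000, 7)));
```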

Okay, one more illustration to hammer this point home: starting to invest as soon as you can is the single best thing you can be doing right now.

Let's compare Ash and Misty.

Age    Ash                         Misty
20     $0                          $1k (initial investment)
27     $0                          $14k
34     $1k (initial investment)    $39k
41     $14k                        $88k
48     $39k                        $184k
55     $88k                        $371k
62     $184k                       $735k

Ash and Misty both contributed diligently, but Misty started two doubling periods earlier. These extra doubling periods mean that Misty and Ash are making very different late-life decisions as they plan for retirement.

You could also use this framework to think about the lifetime cost of any large purchase. Imagine Ash and Misty are deciding whether or not to buy a car. Ash spends $30k on a new car, Misty forgoes and instead invests that $30k in the market without contributing any additional investment.

After two doubling periods, or loosely 14 years, Ash's car is probably not worth a whole lot: maybe $3-5k on the used market. Misty's investment has grown and compounded, and is now worth nearly $120k. That $30k car turned out to be a lot more expensive than Ash realizes.

It seems unwise to use this framework to evaluate every single purchase in your life, or use it as a justification to avoid making purchases that would create lifelong memories with your friends and family. A cup of coffee that will make you happy today is worth it. Don't overthink the fact that your $5 would be worth $273 when you retire. That's no way to live.

If you want to play around with more compound interest numbers, check out this calculator.

Getting started

This is not investment advice. Take the time to research and find the tools and investments that feel right to you. Don't invest money you can't afford to lose.

I want a simple life where I don't have to pay a lot of attention to market movements day to day. As a result, I choose to invest passively in low-expense ETFs. An ETF, or exchange-traded fund, is a collection of other assets (usually stocks) that can itself be traded as an individual stock. Go read up on ETFs for more information; I won't be able to explain them as well as the pros.

Generally I want my money to be invested in a wide array of industries, with varying risk profiles. I've chosen Vanguard as my preferred platform to trade because of their low management fees on their own ETFs.

Here's the list of funds that I've put money into over the years: VBK, VBR, VIG, VNQ, VOE, VOO, VOT, VTI.

If I had to recommend a starting point, I'd just put money into VOO and call it a day.

I have a recurring automatic investment on the 1st and 15th of every month that goes straight into Vanguard - this aligns with my paycheck schedule. I never want to see this money in my bank account because it will be too tempting to incorporate it into my budget for daily spending. Putting the money straight to work means that I'm also taking advantage of dollar-cost averaging, which is useful for reducing the impact of short-term market volatility on a portfolio.
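The dollar-cost-averaging effect mentioned above can be shown with a toy example (the prices below are made up): a fixed dollar amount buys more shares when prices are low, so the average cost per share ends up at or below the average market price.

```javascript
// Toy dollar-cost-averaging illustration with hypothetical prices.
const prices = [100, 80, 125, 95]; // share price on each investment date
const perPeriod = 500;             // fixed dollars invested each period

// Fixed dollars buy perPeriod / price shares each time.
const shares = prices.reduce((total, p) => total + perPeriod / p, 0);
const avgCost = (perPeriod * prices.length) / shares;
const avgPrice = prices.reduce((a, b) => a + b, 0) / prices.length;

console.log(avgCost < avgPrice); // true: the fixed-dollar buyer's average
                                 // cost sits below the average market price
```

This holds for any sequence of varying prices: the average cost is the harmonic mean of the prices, which is never above their arithmetic mean.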

The amount of your recurring automatic investment is up to you, but generally I recommend the following:

Here's a handy flowchart from /r/financialindependence that illustrates these bullet point priorities in a more visual way.

Other investment entry points

In addition to your monthly recurring investments into the stock market, you should be keeping your eye out for additional investment mechanisms that provide tax incentives or match a certain amount of your contribution.

Measuring and tracking

The constant up and down movement of the markets day to day, watching your money shrink and grow, is immensely distracting and stressful. I don't recommend it. To avoid this stress myself, I set up a recurring monthly appointment where I take stock of my...stocks.

On the first of every month I review credit card statements, re-invest any cash above my \"peace-of-mind baseline\", and get a mental snapshot of where my money is working. It takes about 30 minutes each month, for a grand total of 6 hours every year spent thinking about this stuff.

Every year I create a new set of rows that look something like this:

2019
Checking account
Savings account
Vanguard
401k
Acorns
Etc...
Gross
Debts
Net
Monthly change
Cash on hand

Columns in the spreadsheet represent each month. I use spreadsheet math to add up gross values, subtract debts, and calculate month-to-month fluctuations. Keeping track of cash on hand helps me visualize high-spend months, or signals that I am able to increase my bi-monthly recurring investment.
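That spreadsheet math is simple enough to sketch in a few lines; the account names and dollar amounts below are made up for illustration.

```javascript
// Made-up monthly snapshot: sum account balances, subtract debts,
// and compare against last month's net.
const accounts = { checking: 4000, savings: 10000, vanguard: 25000, "401k": 18000 };
const debts = 2000;
const lastMonthNet = 53000;

const gross = Object.values(accounts).reduce((sum, v) => sum + v, 0);
const net = gross - debts;
const monthlyChange = net - lastMonthNet;

console.log({ gross, net, monthlyChange }); // { gross: 57000, net: 55000, monthlyChange: 2000 }
```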

At the end of the year, I take stock of each of my accounts, decide if I can combine or simplify them, and look at overall yearly growth. My spreadsheet is now 8 years old and is like a master control panel for my financial life. I've found it particularly helpful to drop this data into graphs to help me visualize general trends, identify major life events with spikes and dips, and create trend lines to approximate long-term trajectory.

Budgeting

I'm not a great budgeter, in that I don't keep rigorous categories of my spending and track them over time, with intricate limits and alerts. But knowing how you're spending money day-to-day is important. My basic version of budgeting is to use my monthly \"personal finance day\" to review my credit card statements to track general spending patterns and look for services that I paid for but didn't use.

If you're interested in getting started with budgeting tools, I'd recommend Copilot.

Financial independence

When I was young, the commonly-articulated \"life journey\" of a person was to work from their 20s through their 60s and then...retire. Whatever that means. It turns out, this is just one kind of journey among many possible options. A popular movement called FIRE (Financial Independence/Retire Early) tries to illuminate just how feasible it is for an average person today to retire in their 40s, 30s, or even younger.

I'm not an expert on FIRE, but I'd recommend taking a look at what this movement is about. What has been particularly illuminating to me is putting a dollar number to my retirement goals. FIRE defines this number as 25 times your annual expenses. If you spend $60k per year to live comfortably, you should have $1.5 million saved for retirement.

It's worth taking the time to ball-park your own personal number that represents the amount of money you'd need to live entirely independent of employment by another person.
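The FIRE rule of thumb above is a one-liner: target savings of 25 times annual expenses, which is the inverse of a 4% withdrawal rate.

```javascript
// 25x annual expenses ⇔ a 4% safe withdrawal rate (1 / 0.04 = 25).
const fireNumber = (annualExpenses) => annualExpenses * 25;

console.log(fireNumber(60000)); // 1500000 — the $1.5 million example above
```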

Wrapping up

You don't need to make hundreds of thousands of dollars to start investing. You don't need to pick stocks, research financial statements, and track market movements every hour of your day to be an investor. There is a less stressful way to put your money to work that will put you on a path towards freedom and peace of mind.

Start now. Invest automatically every month. Don't overthink it.

"},{"__typename":"Post","id":"5dec3ebe295515003754dc62","title":"Caching API routes with Next.js + Now","slug":"caching-api-routes-with-next-js","updated_at":"2020-03-07T12:58:59.000-05:00","excerpt":"How I improved the loading time of our podcast network's API by 20x with one small configuration change.","feature_image":"https://overthought.ghost.io/content/images/2019/12/caching-api-routes--1-.png","html":"
Header image from Ranganath Krishnamani

I've been using Next.js's API routing feature to implement a thin wrapper around the Simplecast API. This wrapper powers the queries for spec.fm, designdetails.fm, as well as the queries for the most recent Design Details Podcast episode that appears on the home page of this site.

The API implementation was a bit naive and didn't put any effort into caching. After some digging I realized that with just a few lines I could improve the performance of these requests by about 20x.

Since our API is mainly fetching data about podcasts, and that data changes infrequently, I felt comfortable applying a flat one-hour cache time for every API route. To do this, I added the following lines to my now.json file:

// now.json
{
  // ...
  "routes": [
    {
      "src": "/api/(.*)",
      "headers": {
        "cache-control": "s-maxage=3600"
      },
      "dest": "/api/$1",
      "continue": true
    }
  ]
}

As you'd expect, things are now much faster. Previously the episode player on the home page of this site took between 600ms and 1s to load; now I'm timing it at about 30ms on my home internet. More importantly, spec.fm is now much faster.
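As an aside, the same header can also be set inside an individual Next.js API route handler rather than globally in now.json, which is handy when different routes need different cache times. This is a hedged sketch, not the post's actual code; `fetchLatestEpisode` and the route name are hypothetical stand-ins for the Simplecast call.

```javascript
// pages/api/episode.js (hypothetical route) — in a real Next.js project
// `handler` would be the file's default export.
async function fetchLatestEpisode() {
  // Placeholder for the real Simplecast API request.
  return { title: "placeholder episode" };
}

async function handler(req, res) {
  // Same one-hour CDN cache as the now.json rule, scoped to this route.
  res.setHeader("Cache-Control", "s-maxage=3600");
  res.status(200).json(await fetchLatestEpisode());
}
```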

Small changes, big impact!

View the final now.json file or the source code for the API routes that power spec.fm.

"},{"__typename":"Post","id":"5e04fbe7f1e278003830c2ef","title":"On working nights and weekends","slug":"on-working-nights-and-weekends","updated_at":"2020-03-07T12:58:55.000-05:00","excerpt":"While it might not be necessary to work nights and weekends, it does seem practically useful.","feature_image":"https://overthought.ghost.io/content/images/2019/12/nights-and-weekends-2.jpg","html":"

While it might not be necessary to work nights and weekends, it does seem practically useful. There are only 24 hours in the day, yet there are so many problems to be solved. It seems the question of whether or not you should work those hours is a function of the underlying incentive structure. In other words: who will benefit the most from the extra work?

Here are a few questions I ask myself to decide whether or not it's worth working nights and weekends:

The moments I flirted most with burnout were the times that I was grinding for the sake of the grind: there was no new learning, no meaningful reward — the work itself was not worthy of the sacrifice.

Over the years I've also learned to take better care of myself physically and mentally. This means, above all else, prioritizing mental and physical health. Sleeping, exercising, and eating well create the best foundation for doing deep and meaningful work.

In the software industry, it's really common to hear people say things like \"it's so busy\" or \"things are crazy\" whenever I ask how they're doing. Of course, I'm guilty of saying these things, too. But this year I've started appending a few words: \"...but I'm having so much fun.\"

If I find myself struggling to be honest about the \"fun\" part for too long, I know I'm lying to myself and it's time to recalibrate my priorities.

A few Twitter threads on the topic for further reading:

"},{"__typename":"Post","id":"5dffaee4f1e278003830c29c","title":"2019 in review","slug":"2019-in-review","updated_at":"2020-03-07T12:58:43.000-05:00","excerpt":"Looking back on 2019 and setting goals for the next year.","feature_image":"https://overthought.ghost.io/content/images/2019/12/2019-in-review.jpg","html":"

I've been privately writing year-in-review journal entries for a few years, but since I'm jump-starting this blog again it seems fun to share more openly here. I certainly enjoy reading other people's review posts, so maybe someone out there will find this interesting, too.

Here were the goals I set for myself for 2019:

Visit one new country

Failed. I didn't end up traveling all that much this year. This should have happened though - we moved to New York in February, which is so much closer to Europe. Yet, it seems that between work and exploring all that New York has to offer, we were never able to prioritize traveling abroad.

Learn conversational Chinese

Failed. I made no progress this year learning Chinese. If anything, I regressed — I spent time in China in 2018 and managed to pick up some words and phrases, but without any practice this has slipped away.

Get my back and shoulder fixed

Semi-passed. After years of shoulder injuries, I finally bit the bullet and had labrum repair done in March. After 9 months of physical therapy and a slow ramp back up, I'd say I'm at 95-98% full range of motion and strength. I still have a lot of psychological work to do next year to feel more comfortable with dynamic movement and getting back on a snowboard. My back still remains a problem and will need some follow-up in 2020.

Relaunch my personal blog and start writing again

Passed. I began working on a rewrite of my personal site in November, shipped it in the first week of December, and wrote the first Overthought post that same week. So far I've published five posts (this should be the sixth).

Create a revenue-generating side project

Failed. I wanted to launch one new idea in 2019 that would generate revenue. Even $1 would count. But instead, I doubled-down my energy on Design Details, building Figma Plugins, and working on new websites like Security Checklist. And, of course, I stayed focused on my actual day-to-day work with GitHub.

Read 12 books

Passed. The actual number here is largely irrelevant; I only say 12 because it feels like a good way to frame a pace that feels optimal for me. My favorite book of the year was probably Jitterbug Perfume by Tom Robbins.

Fewer push notifications

Passed. I've been increasingly aggressive about shutting off push notifications and badges on every device. This has paid off: in general I have a fairly calm home screen on my phone, and I rarely feel like I'm playing catch-up with a barrage of notifications. However, Slack has become the ultimate distraction. It’s a persistent noise machine that is constantly interrupting deep work. Next year I will set strict no-Slack hours each day, most likely during the morning, in order to stay focused.

Create a healthier digital diet

Semi-passed. This year I managed to break free from constantly reading Reddit, for the most part. I also had a seriously productive mid-year where I was largely off social media. But: somehow this fall and into the winter, Twitter has snuck back into my life. Next year I want to focus on being action-oriented on Twitter: whenever I check in I want to be contributing to a conversation or sharing something useful. Less mindless scrolling, please.

Ship a product at GitHub

Passed. In November we shared that we're building native mobile apps at GitHub. We launched the iOS beta with Android coming very soon. We're planning to launch both apps to the public in early 2020.

Contribute to the GitHub codebase

Passed. When I joined Facebook in 2015 I set a goal to contribute something to the codebase before I left. Unfortunately, this never happened: as much as I wanted to, it was difficult to find the time to ramp up an environment and commit anything meaningful. When I joined GitHub last November, I set the same goal.

Fortunately, this time things turned out differently. Thanks to the patience and support of the fantastic engineers I work with, I've been able to ramp up a tiny bit on iOS and Android development, making a handful of polish tweaks and bug fix commits to both apps.

Ship one more open source project

Passed. I’ve really enjoyed building in the open this year, even if it's mostly sharing website code and small Figma plugins. Next year I'd like to shift my focus from launching new things to committing fixes and improvements to existing open source libraries.

Notable

Here are some notable things that happened this year that I found to be fulfilling or helped me to stretch in new ways.

Moved to New York, and back again

In February we moved from San Francisco to New York. Last winter I was nervous and anxious about making such a huge move, but in retrospect I'm so glad I did. Moving cities and coasts helped me to develop a thicker skin for change. Also: New York is an incredible city with so much to do and endless nooks to explore.

In November, we found out that we'd be moving back to San Francisco. It's bittersweet, but there is a lot to look forward to by going back. We'll be back on the West Coast in February, 2020.

Hosted friends and family

One of the most fulfilling parts of my year was regularly hosting brunches, lunches, and dinners at our apartment. We barely ever had enough plates and bowls, and never had enough sitting room, but it felt so good to bring friends together over food so many times during the spring and summer.

Built and shipped an internal tool

I've worked remotely for several years, but always at small startups where I was usually the only designer. Joining GitHub this year was a new experience: working remotely with an established and growing team of product designers. One thing I noticed in my first few months was how difficult it was to get a sense for what everyone else was working on. Max Stoiber and I teamed up in the spring to build an internal tool to help people share work in progress across teams.

I'll share a more complete post about this tool in the future. For now: it has seen modest adoption internally, but it's not perfect. There seems to be a clear opportunity here to build a product that will smooth out the painful parts of sharing work within a distributed design team. Stay tuned...

Design Details turns five years old

Design Details has grown and evolved slowly over the past five years. This year, however, Marshall and I worked overtime to switch the show over to a patron-powered model with Patreon. Bluntly: this experiment hasn't gone as well as we'd hoped. We have almost 100 supporters (which is amazing) but we still have to rely heavily on corporate sponsors to fund the show's production costs.

Our Patreon experiment is a work in progress – we've changed our tier rewards twice, adjusted pricing just as many times, tweaked descriptions, and iterated on our social media strategy. We'll continue to tweak things in 2020, specifically: releasing more bonus content and figuring out an easier way to onboard people who want to support the show. This process will probably end up becoming its own blog post in the future. Running a podcast is hard work, but convincing people to financially support a podcast is turning out to be even harder.

2020 Goals

Here are some things I'd like to work on in 2020:

Visit one new country

Since we didn't make it happen in 2019, I'm keeping this one on the list. Traveling is so rewarding, and I've never regretted spending money on a plane ticket to visit someplace new. I find that traveling stretches me in a lot of unexpected ways, either by pushing my comfort zone, or by putting me in situations to connect with people in novel ways where there is a communication barrier. At the top of my must-visit list right now is The Netherlands.

Learn conversational Chinese

A failed effort this year, but an opportunity for 2020. I'll be traveling again in China next January, which feels like an appropriate way to jumpstart my effort here.

Create a revenue-generating side project

Another failure from last year that has continued to tumble around in the back of my mind. Over the years I’ve built so many things that either don’t make money, or make money for a company where there was already an established user base and revenue stream. Creating something from scratch on my own, and having that thing be worthy of its own revenue, feels appropriately challenging. No dollar amount goals here.

Match last year's reading pace

This will be table stakes going forward.

Learn a new programming language

Learning JavaScript has had such a profound impact on my career as a designer. Building side projects, engaging meaningfully with engineers, launching a startup, and creating internal tools, have all been important byproducts of learning React. I wonder what other doors would open if I could expand my technical knowledge? At the top of my list is SwiftUI – the technology is new and evolving, which means it's probably a good time to jump in and learn alongside everyone else.

Write monthly (at least)

There's no point in relaunching a blog if I don't write on it! This year I would like to hit a monthly publishing cadence, weighted more towards tactical topics, like tutorials for designers and developers.

Gain (good) weight

If you've ever met me, you know I'm tall and scrawny, with long dancer's legs that aren't much good for dancing at all. It's been a goal of mine for the past 10 years to gain some weight and fill out. Now that I'm over the hump of my shoulder surgery recovery, it's time to get back on track: by this time next year, I'd like to increase muscle mass by 14lbs.


I really enjoy reading other people's year-in-review posts; if you wrote one of your own, drop a link to it in the form below and I'll check it out!

Happy New Year, everyone.

"}]}},"__N_SSG":true}