How I Would Design… a URL Shortener!

Demonstrating an approach to system design

James Collerton
8 min read · Jun 4, 2021

Audience

This article is aimed at engineers interested in how another developer may approach system design. It is not at all a definitive method, more a way of generating an approximate first-draft architecture.

We will be borrowing the example problem from educative.io, and following a reasonably similar technique to generate our design.

It assumes you have a reasonable grasp of architecture and, in the later stages, of AWS. However, even without those you should be able to get the gist.

Argument

Initially, let’s look at our problem statement.

The System to Design

We own a website, shorturl.com, which provides short versions of long URLs. For example, if someone has the URL

https://www.longurl.com/e2d89a01-ed9f-4796-a80e-3a13db3d0d3e

we want to give them an equivalent, more terse URL:

https://www.shorturl.com/c5b852e6

which redirects to the same page. Similar services already exist, so feel free to have a play with one to get your head round it!

The requirements are as follows.

  1. The shorter URL should be unique and not guessable.
  2. A user should have the option of picking a custom shorter URL.
  3. URLs will expire either after a user-specified timespan or, if one is not given, a default time.
  4. Redirection should happen in real-time with minimal latency.

The Approach

We have a standard approach to system design, with the following steps:

  1. Requirements clarification: Making sure we have all the information before starting. This may include how many requests or users we are expecting.
  2. Back of the envelope estimation: Doing some quick calculations to gauge the necessary system performance. For example, how much storage or bandwidth do we need?
  3. System interface design: What will our system look like from the outside, and how will people interact with it? Generally this is the API contract.
  4. Data model design: What our data will look like when we store it. At this point we could be thinking about relational vs non-relational models.
  5. Logical design: Fitting it together in a rough system! At this point I’m thinking at a level of ‘how would I explain my idea to someone who knows nothing about tech?’
  6. Physical design: Now we start worrying about servers, programming languages and the implementation details. We can superimpose these on top of the logical design.
  7. Identify and resolve bottlenecks: At this stage we will have a working system! We now refine the design.

With the above in mind, let’s get going!

Requirements Gathering

The two questions that immediately jump to mind are:

  1. How many URLs are we expecting to generate?
  2. How often will they be accessed?

Let’s make up some statistics to give ourselves something to work with. Assume we make 10 million URLs a month, and each URL is accessed (on average) 10 times.

Additionally we would like to know the maximum amount of time our URL will be stored for. Let’s say one year.

Other questions we may ask include: what are the traffic patterns like? Do we expect lots of requests at certain times (spikes)? Do we need security or authorisation? For the sake of simplicity, let's assume we don't need to worry about these.

Back of the envelope estimation

We now have our access patterns for the URLs. The next thing we look at is bandwidth and storage requirements.

We have 10 million URLs being created a month, which means around 4 creations a second. If we assume URL access is uniform then we will have 40 accesses a second. The one thing we don’t know yet is the size of a request. Let’s call this x bytes for the time being.

This means our bandwidth requirement for creation would be 4x bytes per second, and 40x bytes per second for reading.

In terms of storage, let’s take a worst case scenario. If we create 10 million URLs a month for a year we will have 120 million URLs. Therefore our storage requirements will be 120,000,000x bytes.
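As a sanity check, here is the arithmetic written out as a throwaway snippet (purely illustrative, it isn't part of the system):

public class BackOfEnvelope {
    public static void main(String[] args) {
        long urlsPerMonth = 10_000_000L;
        long secondsPerMonth = 30L * 24 * 60 * 60; // ~2,592,000

        // ~3.9 creations a second, which we round up to 4.
        double createsPerSecond = (double) urlsPerMonth / secondsPerMonth;

        // Each URL is read around 10 times, so ~40 reads a second.
        double readsPerSecond = createsPerSecond * 10;

        // Worst case: a full year's worth of URLs stored at once.
        long urlsStored = urlsPerMonth * 12; // 120,000,000

        System.out.printf("Creates/s: %.1f, reads/s: %.1f, URLs stored: %d%n",
                createsPerSecond, readsPerSecond, urlsStored);
    }
}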

System interface design

Our system has two main functionalities: creating and retrieving URLs. It’s reasonable to assume we will use a REST format, with a POST request to create a URL, and a GET request to retrieve a URL for redirection.

Some terminology before we dig in. Thinking back to the introduction, we will be providing shortened URLs of the form https://www.shorturl.com/{key}, where the key is a random string. In the rest of our design we will not think about this fully qualified URL, instead worrying only about the key.

We have used Java Spring Boot to define the endpoint for creating a new URL.

@PostMapping("/url")
public ResponseEntity<Url> createUrl(
@RequestBody Url shortUrl
) {
...
}

Where the Url object resembles the below.

{
    "longUrl": "<Value of URL to be shortened>",
    "key": "<Optional, requested path of short URL>",
    "expiryTime": "<Optional, requested expiry time in seconds>"
}
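
For completeness, the Url object might map onto a simple class like the below. This is a sketch: the field names mirror the JSON above, and Jackson binds them by name.

// A minimal DTO matching the JSON body. key and expiryTime may be null,
// as they are optional in the request.
public class Url {

    private String longUrl;
    private String key;
    private Long expiryTime; // requested expiry in seconds

    public String getLongUrl() { return longUrl; }
    public void setLongUrl(String longUrl) { this.longUrl = longUrl; }
    public String getKey() { return key; }
    public void setKey(String key) { this.key = key; }
    public Long getExpiryTime() { return expiryTime; }
    public void setExpiryTime(Long expiryTime) { this.expiryTime = expiryTime; }
}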

Our response codes could then be:

  • 201: The URL has successfully been shortened, and the new key is in the response.
  • 400: There is an issue with the request. The key or expiryTime may not be valid, the longUrl may be missing, etc.
  • 409: The requested key already exists.
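
Putting the response codes together, the body of createUrl might be filled in roughly as follows. This is only a sketch: urlService and its keyExists and create methods are hypothetical collaborators, and the usual Spring imports are assumed.

@PostMapping("/url")
public ResponseEntity<Url> createUrl(@RequestBody Url shortUrl) {
    // 400 if the mandatory long URL is missing.
    if (shortUrl.getLongUrl() == null) {
        return ResponseEntity.badRequest().build();
    }

    // 409 if the caller requested a key that is already taken.
    if (shortUrl.getKey() != null && urlService.keyExists(shortUrl.getKey())) {
        return ResponseEntity.status(HttpStatus.CONFLICT).build();
    }

    // Otherwise persist and return 201, echoing back the assigned key.
    Url created = urlService.create(shortUrl);
    return ResponseEntity.status(HttpStatus.CREATED).body(created);
}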

Retrieving a URL can then be done via:

@GetMapping("/{key}")
public RedirectView getUrl(
@PathVariable("key") String key
) {
...
}

Our response codes could then be:

  • 302: We have found the long URL relating to the input key and will redirect to it. All this does is force the browser to make a second GET to the new URL. Note, we need to include the Location header containing the long URL.
  • 404: This shortened URL does not exist.
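
To make the redirect concrete, here is an equivalent sketch using ResponseEntity rather than RedirectView, assuming a hypothetical urlService lookup:

@GetMapping("/{key}")
public ResponseEntity<Void> getUrl(@PathVariable("key") String key) {
    // Look up the long URL; urlService is a hypothetical collaborator.
    Optional<String> longUrl = urlService.findLongUrl(key);

    // 404 if the key is unknown or has expired.
    if (longUrl.isEmpty()) {
        return ResponseEntity.notFound().build();
    }

    // 302 with the Location header set, prompting the browser to re-GET.
    return ResponseEntity.status(HttpStatus.FOUND)
            .location(URI.create(longUrl.get()))
            .build();
}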

Data Model Design

Now we have our methods of interaction, let’s think about how we might store our data. We only have three fields of interest: the key, the expiry time and the long URL.

Theoretically we could use a key-value store to map the key to its longer URL and expiry time. However, as we may need a way of looking up items by expiry, it would be prudent to use a more flexible model.

There is an argument for a relational database with a single table, as our data is well structured. However, there is equally an argument for NoSQL as we will have a lot of data and not require joins or more relational behaviour.

Regardless, the table columns would be:

  • key VARCHAR PRIMARY KEY
  • long_url VARCHAR
  • expiry_date_time TIMESTAMP
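
If we went down the relational route, the table might map to a JPA entity along these lines. This is a sketch; note that key is a reserved word in some databases, so the column may need quoting or renaming.

@Entity
@Table(name = "urls")
public class UrlRecord {

    // The short key doubles as the primary key.
    @Id
    @Column(name = "key", length = 8)
    private String key;

    @Column(name = "long_url", nullable = false, length = 2048)
    private String longUrl;

    @Column(name = "expiry_date_time", nullable = false)
    private LocalDateTime expiryDateTime;

    // Getters and setters omitted for brevity.
}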

Logical design

Let’s start putting these ideas together. The core parts we will need are somewhere to host our service, and somewhere to store our data.

An example, simplified logical design

Let’s break down what our URL service is doing:

  • Validating incoming create requests.
  • Checking a specified key doesn't already exist, or generating a new key.
  • Checking a specified expiry time is valid, or assigning a default one.
  • Persisting and retrieving URLs from the database.
  • Deleting expired URLs from the database.

This seems like it could be separated out into services depending on their responsibility.

An expanded logical design

Although this looks like a bit of a leap, we can walk through the logic.

Generating a new, short key whilst guaranteeing no collisions is not trivial. We could derive a string from the URL itself, for example by hashing it and truncating the result, but truncation means two URLs may clash. We could make keys sufficiently large to avoid clashes, but this would defeat the point of a URL shortener.

A better option would be to have a database of pre-generated keys, with a flag to say if they were in use or not. A service could then be contacted to ask if a key was used, or to provide a new one.
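
As an illustration, claiming a pre-generated key can be a single conditional update, so two concurrent requests can never take the same key. A sketch, assuming a relational key store with hypothetical key_value and in_use columns, and plain JDBC (java.sql imports omitted):

// Returns true if we claimed the key. The WHERE clause means only one caller
// can ever flip in_use from FALSE to TRUE for a given key.
boolean claimKey(Connection connection, String keyValue) throws SQLException {
    String sql = "UPDATE keys SET in_use = TRUE WHERE key_value = ? AND in_use = FALSE";
    try (PreparedStatement statement = connection.prepareStatement(sql)) {
        statement.setString(1, keyValue);
        return statement.executeUpdate() == 1;
    }
}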

Deleting expired keys can also be separated out into a cleanup service. This cleanup service can run on a schedule, deleting rows from the URL store and freeing up keys in the key service.
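
If the cleanup service were a small Spring application, the schedule might look like the below. This is only a sketch: UrlRepository, KeyService and their methods are assumptions, and @EnableScheduling is needed on the application.

@Component
public class UrlCleanupJob {

    private final UrlRepository urlRepository; // hypothetical Spring Data repository
    private final KeyService keyService;       // hypothetical client for the key service

    public UrlCleanupJob(UrlRepository urlRepository, KeyService keyService) {
        this.urlRepository = urlRepository;
        this.keyService = keyService;
    }

    // Runs at the top of every hour.
    @Scheduled(cron = "0 0 * * * *")
    public void removeExpiredUrls() {
        List<UrlRecord> expired = urlRepository.findByExpiryDateTimeBefore(LocalDateTime.now());
        for (UrlRecord url : expired) {
            urlRepository.delete(url);
            keyService.release(url.getKey()); // free the key for reuse
        }
    }
}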

The persisting of URLs and setting of expiry times is sufficiently lightweight to remain in the original service.

Physical design

Now we have an idea of how we would like our system to fit together on a logical level, let's choose the physical components. As I am most familiar with AWS, I will use that as our basis.

Initially, we need to cement the quantity we defined earlier: x, the size of a request, which we will approximate as the number of bytes in a key. Because of the key generation service, we can now pin this down.

We are expecting to need around 120 million keys in the worst case scenario. By my (probably wrong) calculations, there are 81 characters allowed in a path. Four characters would only give us 81^4=43,046,721 different keys, so we need five, giving us 81^5, around 3.5 billion.

Assuming each character is 8 bits (1 byte) we have:

  • Bandwidth for creation: 4 RPS * 5 bytes = 20 bytes per second
  • Bandwidth for retrieval: 40 RPS * 5 bytes = 200 bytes per second
  • URL storage requirements: 120,000,000 URLs * 5 bytes = 600 megabytes
  • Key storage requirements: 81^5 keys * 5 bytes is roughly 17.4 gigabytes if we pre-generated them all; in practice a pool of around 150 million keys (750 megabytes) would give plenty of headroom

For the URL service and key service we are expecting around 44 RPS and 4 RPS respectively. As these services are always in use, we could write our code in a language of our choosing, then use containers and an orchestration framework to run it. For our purposes we will use AWS ECS. We can run multiple instances of each service over multiple availability zones, ensuring resilience. These will also easily handle our bandwidth requirements.

Our URL cleanup service is a perfect opportunity for a scheduled AWS Lambda, which saves us money, as we are only charged for the time it runs.

Our key store looks like a key-value store, which is a great fit for AWS DynamoDB, a managed, reliable service.

As covered earlier, our URL store could be relational or non-relational. For a relational database we could use AWS Aurora; for non-relational we could use DynamoDB, as with the key store.

A potential, simplified physical design in AWS

Identify and resolve bottlenecks

The final thing we need to do is look for any potential bottlenecks in our system. Our services are already resilient, highly available and scalable, since we run multiple instances across availability zones. The core improvements we can make are around accessing storage.

Initially we can introduce a cache for our URL service. When someone makes a GET request, we first look the key up in the cache before hitting the database. In our particular case it makes sense to use a lazy-loading cache with a least recently used eviction policy (see my earlier article on caching for more info).
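
To make the eviction policy concrete, here is a minimal in-process sketch using an access-ordered LinkedHashMap. In production we would more likely reach for a shared cache such as ElastiCache, but the idea is the same:

// A tiny LRU cache: entries are ordered by access, and the eldest entry is
// evicted once we exceed maxEntries.
public class LruUrlCache extends LinkedHashMap<String, String> {

    private final int maxEntries;

    public LruUrlCache(int maxEntries) {
        super(16, 0.75f, true); // true = order entries by access, giving LRU behaviour
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
        return size() > maxEntries;
    }
}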

Secondly, we can fetch new keys from the key generation service in batches. This prevents us having to make a remote call every time we want a new key.
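
A sketch of the idea, where KeyService.fetchBatch is a hypothetical remote call to the key generation service:

interface KeyService {
    List<String> fetchBatch(int n); // one remote call returning n unused keys
}

// Hands out keys from a local buffer, only going back to the key service
// when the buffer runs dry.
public class KeyBuffer {

    private final Deque<String> buffer = new ArrayDeque<>();
    private final KeyService keyService;
    private final int batchSize;

    public KeyBuffer(KeyService keyService, int batchSize) {
        this.keyService = keyService;
        this.batchSize = batchSize;
    }

    public synchronized String nextKey() {
        if (buffer.isEmpty()) {
            buffer.addAll(keyService.fetchBatch(batchSize));
        }
        return buffer.pop();
    }
}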

We could potentially partition the database, but as we are mainly searching by primary key this should already be performant.

A final, extended architecture

Extras

The very, very last thing we can think about is nice-to-haves. For example, do we want to add any security measures? Do we want to do any request analytics? Although they weren't specified, it's always fun to think about them. We won't cover them here, but as an extension, think about how you might fit them in.

Conclusion

In conclusion, we have demonstrated a method for thinking about system design, working through a complete example from requirements gathering to a refined architecture.
