How I would design… Instagram!

A system design demonstration

James Collerton
11 min read · Aug 13, 2021
A short introduction to designing Instagram

Audience

This article is the next in my series on how I would design popular applications. It is recommended (although not strictly necessary) to read the previous posts here and here. We assume a basic familiarity with architecture principles and AWS, but hopefully this post is approachable for most engineers.

Argument

Initially, let’s look at our problem statement.

The System to Design

We are recreating the popular social media service, Instagram. I’d be very impressed if you’d got this far on the internet without encountering it, but in case you’ve pulled it off, here’s a quick overview.

Instagram is a social media website for sharing photos with other Instagram users. You can upload photos, search for them, and follow other users in order to see their photo feeds. You also have your own news feed, which aggregates the photos of all of the users you follow.

Let’s turn this into a set of requirements for our system.

  1. Users should be able to upload and view photos with a title.
  2. Users can perform searches based on photo titles.
  3. Users can follow each other.
  4. The system should display a user’s News Feed consisting of the most recent photos from all the people the user is following.

Functionally, this just about covers it; however, we also have some SLAs we need to keep to.

  1. We need to guarantee high availability.
  2. Maximum latency for generating the News Feed is 150ms.
  3. If a user doesn’t see a photo for a while, that’s fine (availability over consistency).
  4. Any uploaded photo should never be lost: we should guarantee reliability.

The Approach

We have a standard approach to system design, which is explained more thoroughly in the article here. The steps are summarised below:

  1. Requirements clarification: Making sure we have all the information before starting. This may include how many requests or users we are expecting.
  2. Back of the envelope estimation: Doing some quick calculations to gauge the necessary system performance. For example, how much storage or bandwidth do we need?
  3. System interface design: What will our system look like from the outside, how will people interact with it? Generally this is the API contract.
  4. Data model design: What our data will look like when we store it. At this point we could be thinking about relational vs non-relational models.
  5. Logical design: Fitting it together in a rough system! At this point I’m thinking at a level of ‘how would I explain my idea to someone who knows nothing about tech?’
  6. Physical design: Now we start worrying about servers, programming languages and the implementation details. We can superimpose these on top of the logical design.
  7. Identify and resolve bottlenecks: At this stage we will have a working system! We now refine the design.

Requirements clarification

At this point I would have a few questions about our system.

How many users are we expecting? How many photos do we expect them to upload? What is our maximum photo size? How often will they be retrieving their news feed? How many photos are in a news feed? How many follows are we expecting from a user per day? How many searches?

Let’s say we expect 600 million users, with around 10 million active each day. There are 1 million new photos a day with an average size of 100KB. An active user retrieves their news feed once an hour, and it contains 20 photos. They also average one follow and one search a day.

Let’s use this to make some high level estimates.

Back of the envelope estimation

What are our storage and bandwidth requirements? We can separate these into read and write.

Our read traffic consists of users viewing their news feeds and performing searches. If we have 10 million active users a day, that translates to 10 million news feed views an hour, which is ~2800RPS (requests per second) for the news feed. Similarly, one search per user a day is 10 million searches a day, which is ~115RPS. We could break these down further into roughly how big we think these messages will be, and our exact bandwidth requirements in MB/s, but for brevity we will exclude this.

Our write traffic is slightly different. We expect 1 million new photos a day, of size 100KB. This means ~12RPS * 100KB = 1.2MB/s incoming write traffic. It also means storage requirements are 100GB a day, which amounts to 36.5TB a year!
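For the sake of transparency, these figures can be reproduced with a few lines of arithmetic. This is a rough sketch using the assumptions from the previous section:

public class BackOfEnvelope {

    public static void main(String[] args) {
        long activeUsers = 10_000_000L;     // daily active users
        long newPhotosPerDay = 1_000_000L;  // uploads per day
        long photoSizeKb = 100L;            // average photo size

        double newsFeedRps = activeUsers / 3_600.0;  // one feed view per user per hour -> ~2800 RPS
        double searchRps = activeUsers / 86_400.0;   // one search per user per day     -> ~115 RPS

        double photoWriteRps = newPhotosPerDay / 86_400.0;                     // ~12 RPS
        double writeBandwidthMbPerSec = photoWriteRps * photoSizeKb / 1_000.0; // ~1.2 MB/s

        double storagePerDayGb = newPhotosPerDay * photoSizeKb / 1_000_000.0;  // ~100 GB/day
        double storagePerYearTb = storagePerDayGb * 365 / 1_000.0;             // ~36.5 TB/year

        System.out.printf("Reads: %.0f RPS feed, %.0f RPS search%n", newsFeedRps, searchRps);
        System.out.printf("Writes: %.0f RPS, %.1f MB/s, %.0f GB/day, %.1f TB/year%n",
                photoWriteRps, writeBandwidthMbPerSec, storagePerDayGb, storagePerYearTb);
    }
}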

System interface design

Having looked into the rough traffic requirements, we now need to identify how we might interact with the various components of the system. The main pieces of functionality we need to account for are:

  1. Post a new photo
  2. Search for a photo by title/ Id
  3. Follow user
  4. Ask for news feed

To post a new photo we might use an upload form on the front end. The form would need to send us at least the user’s Id (so we know which user the photo belongs to), a title for the photo, and the file containing the image.

In Spring Boot this may look like:

@RequestMapping(
    path = "/photos",
    method = POST,
    consumes = { MediaType.MULTIPART_FORM_DATA_VALUE }
)
public ResponseEntity createPhoto(
    @ModelAttribute PhotoFormData photoFormData
)

Where the PhotoFormData object looks something like the below

public class PhotoFormData {
    private Long userId;
    private String title;
    private MultipartFile photo;
}

At this point it’s probably worth a little aside on the Content-Type header. In a request, this header tells the server what format of data to expect; in a response, it tells the browser what it is receiving.

For those of you versed in Spring, you can see that the endpoint consumes multipart/form-data. This is because we will be using a web form to submit the data. It allows us to attach a file to the request, which will then be deserialised on the other side.

We could add validation to the photo object, but will consider this out of scope for the exercise. In reality we would most likely take the user’s Id from their session, but for the sake of example we can leave it in here.
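To make the contract a little more concrete, here is a rough sketch of how the endpoint could be exercised with Spring’s MockMvc test support. The wiring of the MockMvc instance is assumed, and the field names match the PhotoFormData object above:

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.multipart;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.springframework.http.MediaType;
import org.springframework.mock.web.MockMultipartFile;
import org.springframework.test.web.servlet.MockMvc;

public class PhotoUploadSketch {

    // Assumes a MockMvc instance already wired against the controller above
    void uploadPhoto(MockMvc mockMvc) throws Exception {
        MockMultipartFile photo = new MockMultipartFile(
                "photo",                     // must match the PhotoFormData field name
                "sunset.jpg",
                MediaType.IMAGE_JPEG_VALUE,
                new byte[0]);                // image bytes omitted for brevity

        mockMvc.perform(multipart("/photos")
                        .file(photo)
                        .param("userId", "42")
                        .param("title", "Sunset over the bay"))
                .andExpect(status().isOk());
    }
}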

Searching for a photo would then use an endpoint similar to the below:

@GetMapping("/photos")
public ResponseEntity<List<Photo>> getPhotosList(
    @RequestParam String photoTitlePrefix
)

Where the Photo and User objects look like this:

public class Photo {
    private Long id;
    private String title;
    private User user;
}

public class User {
    private Long id;
    private String name;
}

The relationship for following a user could resemble:

@PostMapping("/follow")
public ResponseEntity<Follow> createFollow(
    @RequestBody Follow follow
)

Where the follow object looks like:

public class Follow {
    private User follower;
    private User followed;
}

And a news feed request may look like:

@GetMapping("/newsFeed")
public ResponseEntity<List<Photo>> getNewsFeed(
    @RequestParam Long userId
)

You’ll have noticed that all of these endpoints only return photo metadata (including the Id), not the image files themselves. At this juncture there is a choice to make.

We could have provided a URL which links directly to our object storage, allowing the user to load from there. However, streaming through our backend gives us more control over security. This method does sacrifice some of our scalability (as we need to stream all files through our APIs), but we will employ it for the example. A final endpoint is required.

@GetMapping(
    value = "/photos/{id}",
    produces = MediaType.APPLICATION_OCTET_STREAM_VALUE
)
public @ResponseBody byte[] getPhoto(
    @PathVariable("id") Long id
)

Let’s return to our point on the Content-Type header. Here we’re specifying the MIME type as application/octet-stream, which tells the browser it is receiving arbitrary binary data and leaves it to decide how to handle the file.

However, if we know the type of file we’re returning, we can use one of the other MIME types, such as image/jpeg or image/png. Although out of the scope of this exercise, we would normally have done some transcoding/encoding of the images to standardise them, so this may be a better approach. For more information check out the article here.
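To illustrate, here is a minimal sketch of how getPhoto might stream an image back through the backend, assuming the photos live in an S3 bucket, that uploads have been standardised to JPEG, and using the AWS SDK for Java v2. The bucket name and key scheme are assumptions:

import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;

@RestController
public class PhotoDownloadController {

    private static final String BUCKET = "instagram-photos"; // assumed bucket name

    private final S3Client s3Client;

    public PhotoDownloadController(S3Client s3Client) {
        this.s3Client = s3Client;
    }

    @GetMapping(value = "/photos/{id}", produces = MediaType.IMAGE_JPEG_VALUE)
    public ResponseEntity<byte[]> getPhoto(@PathVariable("id") Long id) {
        // Assumed key scheme: photos are stored as photos/<id>.jpg
        GetObjectRequest request = GetObjectRequest.builder()
                .bucket(BUCKET)
                .key("photos/" + id + ".jpg")
                .build();

        byte[] imageBytes = s3Client.getObjectAsBytes(request).asByteArray();
        return ResponseEntity.ok(imageBytes);
    }
}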

Data model design

This then lends itself to our data model design. Most likely we will need three tables: Photo, User and Follow.

The photo table would have the below columns. Note how we need the created date to order the news feed, and a reference to the user who uploaded the photo.

id           BIGINT    PRIMARY KEY
title        VARCHAR
created_date TIMESTAMP
user_id      BIGINT    REFERENCES user(id)

We wouldn’t store the photos themselves in the table, but instead would put them in object storage (such as S3), referencing them through a URL. This URL could be created in a backend service using the Id. The user table may be similar to:

id   BIGINT  PRIMARY KEY
name VARCHAR

Then the Follow table tracks who is following whom.

follower BIGINT REFERENCES user(id)
followed BIGINT REFERENCES user(id)
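Since the photo table only holds the Id, here is a quick sketch of how the write service might push the image bytes into S3 under a key derived from that Id, using the AWS SDK for Java v2. The bucket name and key scheme are assumptions:

import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

public class PhotoStore {

    private static final String BUCKET = "instagram-photos"; // assumed bucket name

    private final S3Client s3Client;

    public PhotoStore(S3Client s3Client) {
        this.s3Client = s3Client;
    }

    // Derives the object key from the photo Id, so the photo table only needs to hold the Id
    public String savePhoto(Long photoId, byte[] imageBytes) {
        String key = "photos/" + photoId + ".jpg"; // assumed key scheme
        s3Client.putObject(
                PutObjectRequest.builder().bucket(BUCKET).key(key).build(),
                RequestBody.fromBytes(imageBytes));
        return key;
    }
}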

We can cover our choice between relational and non-relational databases, as well as the exact implementation, in the following sections. We will see that our choice of engine shapes the data model slightly, but the core concepts remain the same.

Logical design

A first stab at a logical design may look something like the below:

An initial logical design

Our client only ever accesses one service, which reads and writes from the database and object store. However, we can see there might be some issues as we begin to scale. A few things spring to mind that could improve the performance of the application. First, can we separate out the read and write concerns?

Separating out the read and write concerns

In the above we use a CQRS-like architecture to separate out reading and writing. The idea is to have a thin gateway layer which redirects read and write requests to their own services. The write service takes requests to create a new photo or follow relationship and writes them to an event store.

This event store is essentially a list of all of the actions that have ever been taken. Writing to the event store triggers a synchronisation service which takes the event and updates any relevant news feeds/ photo data.
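For illustration, the events in the store could be as simple as the below. The class and field names are hypothetical, a sketch rather than a fixed schema:

import java.time.Instant;

// A minimal event envelope; the type tells the synchronisation service
// which read models (news feeds, photo title index) need updating.
public class Event {

    public enum Type { PHOTO_UPLOADED, USER_FOLLOWED }

    private final Type type;
    private final Long actorUserId;    // who performed the action
    private final Long photoId;        // populated for PHOTO_UPLOADED events
    private final Long followedUserId; // populated for USER_FOLLOWED events
    private final Instant occurredAt;

    public Event(Type type, Long actorUserId, Long photoId,
                 Long followedUserId, Instant occurredAt) {
        this.type = type;
        this.actorUserId = actorUserId;
        this.photoId = photoId;
        this.followedUserId = followedUserId;
        this.occurredAt = occurredAt;
    }
}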

The news feed/photo data store will need to provide two main pieces of functionality: serving news feeds and allowing photos to be searched by title.

Each user has a single news feed, so it makes sense to have their user Id as the key. As we don’t need to search any of this information, this heavily implies the use of a non-relational database: it allows us to scale horizontally more easily, and we aren’t planning on using any relational properties.

Similarly, we will be searching by the title of the photo, so we could have a separate key-value store where the key is the title of the photo and the value is a list of Ids with that title. In reality we may use a more sophisticated search mechanism such as Elasticsearch or CloudSearch, but given the minimal requirements this is a reasonable first go.

Something else to note is how, by defining a request for the news feed, we have assumed we are using a pull model for fanout (the process of publishing a post to users’ feeds). In reality there are three options:

  1. The pull model or fan-out-on-load: This is what we’ve discussed so far. It’s good as we only load new data when we load the page. However, it’s bad as we will make requests on page load that sometimes will give us no new data (assuming we’re caching the feed on a user’s device).
  2. The push model or fan-out-on-write: This works by pushing out messages to client devices every time there is an update. This could be done by long polling, server-sent events or web sockets (covered more thoroughly in my article here). This is good as it means we cut down on read requests, but bad as celebrities with millions of followers will require a lot of pushing.
  3. The hybrid approach: We let users with a few followers use the push model, while celebrities use the pull one (a sketch of this decision follows the list). Alternatively we could just use push for online users.
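Here is a rough sketch of how the hybrid approach might decide between the two models. The follower threshold and the collaborator interfaces are assumptions for illustration:

import java.util.List;

public class FanoutService {

    private static final long CELEBRITY_FOLLOWER_THRESHOLD = 10_000; // assumed cut-off

    private final FollowerStore followerStore;
    private final NewsFeedStore newsFeedStore;

    public FanoutService(FollowerStore followerStore, NewsFeedStore newsFeedStore) {
        this.followerStore = followerStore;
        this.newsFeedStore = newsFeedStore;
    }

    // Fan-out-on-write for ordinary users; celebrities fall back to the pull
    // model, so their photos are merged in when followers load their feeds.
    public void onPhotoUploaded(Long uploaderId, Long photoId) {
        List<Long> followerIds = followerStore.findFollowerIds(uploaderId);
        if (followerIds.size() < CELEBRITY_FOLLOWER_THRESHOLD) {
            for (Long followerId : followerIds) {
                newsFeedStore.prependToFeed(followerId, photoId);
            }
        }
        // Celebrity posts are picked up lazily by the read path instead.
    }

    // Hypothetical collaborator interfaces, included so the sketch compiles.
    interface FollowerStore { List<Long> findFollowerIds(Long userId); }
    interface NewsFeedStore { void prependToFeed(Long userId, Long photoId); }
}

In practice the threshold would be tuned against the real distribution of follower counts.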

Physical design

We should now be ready to sketch out a physical design. For this we will be using AWS components.

A physical design for our Instagram solution

In the above we use API Gateway in order to separate out requests for reading and writing. Our separate services can then use ECS to host containers running whichever application we would like.

Our photo storage can be S3, which soothes any reliability qualms, as it offers extremely high durability guarantees. Similarly, we can use DynamoDB for our non-relational storage needs. As this is managed, we also don’t need to worry about reliability.

Finally, our component for synchronising between the event store and the read data storage can be an AWS Lambda function. This can be triggered by items going into our event store and can do the work to update our news feeds.
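Here is a sketch of what this synchronisation function could look like, assuming the event store is a DynamoDB table with a stream attached and that the fan-out and search-index updates are delegated to hypothetical helper methods:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.DynamodbEvent;
import com.amazonaws.services.lambda.runtime.events.DynamodbEvent.DynamodbStreamRecord;

// Triggered whenever new items land in the event store's DynamoDB stream.
public class SynchronisationHandler implements RequestHandler<DynamodbEvent, Void> {

    @Override
    public Void handleRequest(DynamodbEvent event, Context context) {
        for (DynamodbStreamRecord record : event.getRecords()) {
            if (!"INSERT".equals(record.getEventName())) {
                continue; // the event store is append-only, so only inserts matter
            }
            String eventType = record.getDynamodb().getNewImage().get("type").getS();
            if ("PHOTO_UPLOADED".equals(eventType)) {
                updateNewsFeeds(record);  // push the photo Id into follower feeds
                updateTitleIndex(record); // add the photo Id under its title key
            }
        }
        return null;
    }

    private void updateNewsFeeds(DynamodbStreamRecord record) { /* fan-out logic */ }

    private void updateTitleIndex(DynamodbStreamRecord record) { /* search index update */ }
}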

Identify and resolve bottlenecks

The final thing we need to do is see if we can optimise our service. Two main solutions present themselves.

  1. Sharding the news feed and photo title data.
  2. Caching responses at the read service.

Sharding isn’t something I’ve worked with heavily, so I will offer a brief introduction here. Sharding is a method of distributing data across multiple machines in order to handle high throughput and large data volumes. Each shard in a cluster contains a subset of the original data, meaning work can be spread between shards.

In an engine like MongoDB we would need to think about configuring this ourselves. However, DynamoDB scales a little differently. In this implementation we define something called a ‘Partition Key’, which is similar to a primary key. The engine then uses this to distribute items internally amongst its physical servers, allowing us to leverage the same performance advantages as sharding.

However, we still need to select a partition key. As we will want to select a news feed on a per-user basis, it makes sense to have the user Id as our key. Additionally, as we will be searching by title, we can have the photo title as the key in that table.
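To make this concrete, here is a sketch of how the news feed table could be created with the user Id as its partition key, using the AWS SDK for Java v2. The table name and billing mode are assumptions:

import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeDefinition;
import software.amazon.awssdk.services.dynamodb.model.BillingMode;
import software.amazon.awssdk.services.dynamodb.model.CreateTableRequest;
import software.amazon.awssdk.services.dynamodb.model.KeySchemaElement;
import software.amazon.awssdk.services.dynamodb.model.KeyType;
import software.amazon.awssdk.services.dynamodb.model.ScalarAttributeType;

public class NewsFeedTable {

    // Creates the news feed table keyed on the user Id, leaving DynamoDB
    // to spread items across its internal partitions.
    public static void create(DynamoDbClient dynamoDb) {
        dynamoDb.createTable(CreateTableRequest.builder()
                .tableName("NewsFeed") // assumed table name
                .attributeDefinitions(AttributeDefinition.builder()
                        .attributeName("userId")
                        .attributeType(ScalarAttributeType.N)
                        .build())
                .keySchema(KeySchemaElement.builder()
                        .attributeName("userId")
                        .keyType(KeyType.HASH)
                        .build())
                .billingMode(BillingMode.PAY_PER_REQUEST)
                .build());
    }
}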

A cache layer in the read service would also reduce the load on the DynamoDB storage and improve response times. We would need to set a reasonable TTL and eviction policy, but could use an AWS offering like ElastiCache to implement it. We could have one cache for search results and one for user news feeds.
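A sketch of what this might look like using Spring’s caching abstraction backed by a Redis-compatible ElastiCache cluster; the cache names and the five minute TTL are assumptions:

import java.time.Duration;
import java.util.List;

import org.springframework.cache.annotation.Cacheable;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.cache.RedisCacheConfiguration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.stereotype.Service;

@Configuration
@EnableCaching
class CacheConfig {

    // Redis-backed cache (ElastiCache in AWS) with a short TTL, so stale
    // feeds expire quickly while repeated reads skip DynamoDB entirely.
    @Bean
    RedisCacheManager cacheManager(RedisConnectionFactory connectionFactory) {
        return RedisCacheManager.builder(connectionFactory)
                .cacheDefaults(RedisCacheConfiguration.defaultCacheConfig()
                        .entryTtl(Duration.ofMinutes(5))) // assumed TTL
                .build();
    }
}

@Service
class NewsFeedReadService {

    // Repeat calls for the same user within the TTL are served from the cache
    @Cacheable(value = "newsFeed", key = "#userId")
    public List<Long> getNewsFeedPhotoIds(Long userId) {
        // Fall through to DynamoDB on a cache miss (lookup omitted for brevity)
        return List.of();
    }
}

Cache invalidation on new posts could be handled by the synchronisation Lambda evicting the affected keys.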

Our final design would be as below.

Final physical design

Extras

There’s always more we could do! This includes non-functional requirements like security and more advanced analytics, or it could be functional additions such as likes or comments. It’s often useful to think about these things and how you might adapt your system. This kind of mindset can help ensure your designs remain flexible and extensible.

Conclusion

In conclusion, we have covered the definition of an Instagram problem and a method for addressing it, and explored a potential solution.
