Event infrastructure preparation checklist#

Below are the main aspects to consider adjusting on a hub to prepare it for an event:

1. Quotas#

We must ensure that the quotas from the cloud provider are high enough to handle the expected usage. The number of users attending the event might be very large, their expected resource usage might be big, or both. Either way, we need to check that the existing quotas will accommodate the new numbers (an illustrative CLI check is sketched below the action list).

Action to take

  • follow the AWS quota guide for information about how to check the quotas in an AWS project

  • follow the GCP quota guide for information about how to check the quotas in a GCP project
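
As a quick illustration only (the guides above are the canonical reference), the cloud provider CLIs can report current quota limits and usage; the region, project and service names below are placeholders:

    # GCP: show per-region quotas (usage vs. limit) for a project
    gcloud compute regions describe us-central1 --project <project-id>

    # AWS: list the applied quotas for the EC2 service (e.g. vCPU limits per instance family)
    aws service-quotas list-service-quotas --service-code ec2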

2. Consider dedicated nodepools on shared clusters#

If the hub that’s having an event is running on a shared cluster, then we might want to consider putting it on a dedicated nodepool, as that helps with cost isolation, scaling up/down effectively, and avoiding an impact on the performance of other hubs’ users.

Action to take

Follow the guide at Setup a dedicated nodepool for a hub on a shared cluster in order to set up a dedicated nodepool before an event. A rough sketch of what the resulting hub configuration can look like follows.
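
For orientation only, the hub is then typically pointed at the dedicated nodepool through node selectors and tolerations in its helm chart values (z2jh’s singleuser.nodeSelector and singleuser.extraTolerations); the label and taint key/value below are placeholders, and the real ones come from how the nodepool was created in the linked guide:

    jupyterhub:
      singleuser:
        nodeSelector:
          # hypothetical label applied to the dedicated nodepool
          dedicated-nodepool-for: <hub-name>
        extraTolerations:
          # hypothetical taint keeping other workloads off the dedicated nodepool
          - key: dedicated-nodepool-for
            operator: Equal
            value: <hub-name>
            effect: NoSchedule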

3. Pre-warm the hub to reduce wait times#

There are two mechanisms that we can use to pre-warm a hub before an event:

  • making sure some nodes are ready when users arrive

    This can be done using node sharing via profile lists or by setting a minimum node count.

    Note

    You can read more about what to consider when setting resource allocation options in profile lists in Resource Allocation on Profile Lists.

  • making sure the user image is not huge; if it is, pre-pulling it should be considered

3.1. Node sharing via profile lists#

Important

Currently, this is the recommended way to handle an event on a hub. However, for communities that don’t already use profile lists, setting one up just before an event might be confusing, so in that case we might want to consider setting a minimum node count instead.

During events, we want to tilt the balance towards reducing server startup time. The docs at Resource Allocation on Profile Lists have more information about all the factors that should be considered during resource allocation.

Assuming the hub already has a profile list, you should check the following before an event:

  1. Information is available

    Make sure the information in the event GitHub issue was filled in, especially the number of expected users before an event and their expected resource needs (if that can be known by the community beforehand).

  2. Calculate how many users will fit on a node given the current setup

    Check that the current number of users per node respects the following general event wishlist.

  3. Minimize startup time

  • have at least 3-4 users per node, since very few users per node leads to longer startup times, but no more than ~100

  • don’t have more than 30% of the users waiting for a node to come up (see the worked example below)
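
    For example, assuming ~100 expected users and an instance type that fits ~50 users per node: each time a node fills up, the next ~50 arrivals (50% of users) would be waiting for a new node to come up, breaking the second rule above. An instance type that fits ~25 users per node keeps that worst case at ~25%, at the cost of more (smaller) nodes.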

    Action to take

    If the current number of users per node doesn’t respect the rules above, you should adjust the instance type so that it does. Note that if you are changing the instance type, you should also consider rewriting the allocation options, especially if you are going with a smaller machine than the original one.

    deployer generate resource-allocation choices <instance type>
    
  4. Don’t oversubscribe resources

    The oversubscription factor is how much larger a limit is than the actual request (i.e., the minimum guaranteed amount of a resource that is reserved for a container). A larger factor allows more efficient node packing, because most users don’t use resources up to their limit, so more users can fit on a node.

    However, a bigger oversubscription factor also means that users who go above their guaranteed resources can get their kernels killed or their CPU throttled at unpredictable times, depending on what other users on the same node are doing. This inconsistent behavior is confusing both to end users and to those running the hub, so we should try to avoid it during events.
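
    To make the factor concrete, here is a hypothetical pair of kubespawner overrides: the first has an oversubscription factor of 2 (the limit is twice the guarantee), the second a factor of 1 (limit equals guarantee), which is what we aim for during events:

    # factor 2: guaranteed 4 GB, may burst up to 8 GB if the node has spare memory
    mem_4:
      kubespawner_override:
        mem_guarantee: 4G
        mem_limit: 8G

    # factor 1: guarantee and limit are equal, so behavior is predictable
    mem_8:
      kubespawner_override:
        mem_guarantee: 8G
        mem_limit: 8G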

    Action to take

    For an event, you should consider an oversubscription factor of 1.

    • if the instance type remains unchanged, then just adjust the limit to match the memory guarantee, if that is not already the case

    • if the instance type also changes, then you can use the deployer generate resource-allocation command, passing it the new instance type and optionally the number of choices.

      You can then use its output to:

      • either replace all allocation options with the ones for the new node type

      • or pick the choice(s) that will be used during the event based on expected usage and just don’t show the others

    Example

    For example, if the community expects to use only ~3GB of memory during an event, and no other users are expected to use the hub for its duration, then you can choose to make available only that one option.

    Assuming they had 4 options on an n2-highmem-2 machine and we wish to move them to an n2-highmem-4 for the event, we could run:

    deployer generate resource-allocation choices n2-highmem-4 --num-allocations 4
    

    which will output:

    # pick this option to present the single ~3GB memory option for the event
    mem_3_4:
      display_name: 3.4 GB RAM, upto 3.485 CPUs
      kubespawner_override:
        mem_guarantee: 3662286336
        mem_limit: 3662286336
        cpu_guarantee: 0.435625
        cpu_limit: 3.485
        node_selector:
          node.kubernetes.io/instance-type: n2-highmem-4
      default: true
    mem_6_8:
      display_name: 6.8 GB RAM, upto 3.485 CPUs
      kubespawner_override:
        mem_guarantee: 7324572672
        mem_limit: 7324572672
        cpu_guarantee: 0.87125
        cpu_limit: 3.485
        node_selector:
          node.kubernetes.io/instance-type: n2-highmem-4
    (...2 more options)
    

    And we would have this in the profileList configuration:

    profileList:
      - display_name: Workshop
        description: Workshop environment
        default: true
        kubespawner_override:
          image: python:6ee57a9
        profile_options:
          requests:
            display_name: Resource Allocation
            choices:
              mem_3_4:
                display_name: 3.4 GB RAM, upto 3.485 CPUs
                kubespawner_override:
                  mem_guarantee: 3662286336
                  mem_limit: 3662286336
                  cpu_guarantee: 0.435625
                  cpu_limit: 3.485
                  node_selector:
                    node.kubernetes.io/instance-type: n2-highmem-4
    

    Warning

    The deployer generate resource-allocation command:

    • can only generate options where guarantees (requests) equal limits!

    • supports the instance types listed in the node-capacity-info.json file

3.2. Setting a minimum node count on a specific node pool#

Warning

This section is a Work in Progress!
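
Until this section is written up properly, here is a rough sketch of the kind of commands involved. Note that our clusters are normally managed through configuration in this repository rather than ad-hoc CLI calls, so treat these as illustrative only; the cluster, nodepool and zone names are placeholders:

    # GKE: keep at least 2 nodes in the user nodepool via the autoscaler's minimum
    gcloud container clusters update <cluster-name> \
      --node-pool <user-nodepool> \
      --enable-autoscaling --min-nodes 2 --max-nodes 10 \
      --zone <zone>

    # EKS (eksctl-managed): raise the minimum size of a nodegroup
    eksctl scale nodegroup --cluster <cluster-name> --name <user-nodegroup> \
      --nodes-min 2 --nodes 2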

3.3. Pre-pulling the image#

Warning

This section is a Work in Progress!
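
One mechanism that is likely relevant here is the zero-to-jupyterhub pre-puller, which pulls the user image onto nodes before users request it. A minimal sketch of enabling it in a hub’s helm chart values (whether and when we want this enabled is exactly what this section still needs to document):

    jupyterhub:
      prePuller:
        continuous:
          # pull the user image onto new nodes as soon as they join the cluster
          enabled: true
        hook:
          # pull the user image onto existing nodes as part of a helm upgrade
          enabled: true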

Relevant discussions:

Important

To get a deeper understanding of the resource allocation topic, you can read up on these issues and documentation pieces: