
CS-350 – Fundamentals of Computing Systems::Homework Assignment #3 Problem 1

Problem 1

You are in charge of designing a single-ported network router for an enterprise network. You have been told that on this type of network, packet sizes are exponentially distributed, with the average packet being of size 1350 bytes. Moreover, your router is meant to be deployed on a specific segment of the network, where it has been benchmarked that an average of 6500 packets per second are transmitted.

Assuming that for your router you plan to use standard Ethernet technology, which is able to process packets at 100 Mbps (i.e. 10^8 bits per second), answer the following:

a) What assumptions do you make to reason about the system?

b) How much memory in bytes is required for the router to operate without dropping packets most of the time?

c) By how much will a packet traversing your router be delayed on average?

d) The specs of the chip you plan to use to process packets state that active cooling (e.g. a fan) is required if it is expected that the chip is not idle for 3/5 or more of the time it is powered on. Do you need to incorporate a fan in your design?

Pick up the phone: it’s your boss telling you that he has reviewed your plans. He believes that the router you have designed will be a bottleneck for the whole enterprise. You use your ace up your sleeve: you propose to incorporate Gigabit Ethernet technology. In this case, your router will be processing packets at 1 Gbps (i.e. 10^9 bits per second). Now your boss wants to know:

e) What is going to be the speedup for the processing latency of the average packet compared to your previous proposal (i.e. standard Ethernet)?

f) Since Gigabit Ethernet chips are expensive, can we reduce the amount of memory to hold pending packets in the router? If so, by how much?
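Although the problem does not say so explicitly, questions of this kind are typically answered by modeling the router as an M/M/1 queue (Poisson arrivals, exponential service times, single server, infinite buffer) — the sort of assumption part (a) asks for. A minimal Python sketch of the arithmetic under that assumed model:

```python
# M/M/1 sketch for Problem 1 (assumed model: Poisson arrivals,
# exponential packet sizes, single server, infinite buffer).
AVG_PKT_BITS = 1350 * 8           # average packet size in bits (given)
LAM = 6500.0                      # arrival rate, packets/s (given)

def mm1_metrics(link_bps):
    """Return (service rate, utilization, avg packets in system, avg delay)."""
    mu = link_bps / AVG_PKT_BITS  # packets the link can process per second
    rho = LAM / mu                # utilization = fraction of time busy
    n = rho / (1 - rho)           # average number of packets in the system
    t = 1 / (mu - LAM)            # average time a packet spends in the router
    return mu, rho, n, t

mu, rho, n, t = mm1_metrics(1e8)                  # standard Ethernet, 10^8 bits/s
print(f"utilization ~ {rho:.3f}")                 # busy fraction, cf. part (d)
print(f"avg occupancy ~ {n * 1350:.0f} bytes")    # cf. part (b)
print(f"avg delay ~ {t * 1e3:.3f} ms")            # cf. part (c)
```

Re-running `mm1_metrics(1e9)` yields the Gigabit Ethernet figures needed to compare delays and buffer occupancy for parts (e) and (f).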



Table 1: Utilization and Availability of Servers

Server     Utilization   Availability
Server 1   50%           85%
Server 2   20%           70%
Server 3   89%           95%
Server 4   70%           90%

Problem 2

You are in charge of implementing a latency-aware load balancing system at the entry of a data center. The data center is composed of an array of four servers of different types, albeit each of them can process any incoming request. The job of your load balancer is to determine to which server a request should be sent to minimize its turnaround time. Each server reports its status, but because the reporting system on each server has been implemented by a different team, the information reported back is different. Here is what is reported by each server:

1. Server 1 reports its utilization;

2. Server 2 reports the average service time for requests;

3. Server 3 reports the fraction of time it has been idle over the total uptime;

4. Server 4 reports the average number of requests waiting to be served (excluding those currently being served).

Since your load balancer is in control of the forwarded traffic, it always knows the amount of traffic being forwarded to each server.

a) What assumptions do you make about the system?

b) Define a strategy to perform per-request forwarding decisions. Specifically, explain how your strategy takes into account the info reported by each server.
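One possible strategy (a sketch, not the only valid answer): assuming each server behaves as an M/M/1 queue, translate every report — together with the forwarding rate the balancer already knows — into an estimated service rate mu, then forward the request to the server with the lowest estimated response time 1/(mu - lam). The `kind` labels and helper names below are illustrative, not part of the problem statement.

```python
from math import sqrt

def server_mu(kind, value, lam):
    """Infer service rate mu from a server's report (assumed M/M/1 model)."""
    if kind == "utilization":          # Server 1: rho = lam/mu
        return lam / value
    if kind == "service_time":         # Server 2: mu = 1/S
        return 1.0 / value
    if kind == "idle_fraction":        # Server 3: idle fraction = 1 - rho
        return lam / (1.0 - value)
    if kind == "queue_length":         # Server 4: Nq = rho^2/(1-rho),
        nq = value                     # solve rho^2 + Nq*rho - Nq = 0 for rho
        rho = (-nq + sqrt(nq * nq + 4 * nq)) / 2 if nq > 0 else 0.0
        return lam / rho if rho > 0 else float("inf")
    raise ValueError(kind)

def pick_server(reports, lams):
    """Forward to the server with the lowest estimated response time.

    reports maps server id -> (kind, value); lams maps id -> known rate.
    """
    def resp(sid):
        mu = server_mu(*reports[sid], lams[sid])
        return 1.0 / (mu - lams[sid]) if mu > lams[sid] else float("inf")
    return min(reports, key=resp)
```

An overloaded server (lam >= mu) gets an infinite estimated response time, so it is never chosen while any other server has spare capacity.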

When a server goes down, all the requests in it are lost. Consider the availability and utilization of the servers in Table 1. Then answer the following:

c) What is the probability that any request will be lost?

d) How many requests will be lost on average?
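For parts (c) and (d), one possible (hedged) reading: treat each server as an independent M/M/1 queue whose average occupancy is rho/(1 - rho), and assume that when a server is down (probability 1 - availability) the requests currently in it are lost. Under that interpretation, a sketch of the expected-loss arithmetic:

```python
# Hedged sketch for Problem 2 (c)/(d): assumed M/M/1 occupancy per
# server, weighted by each server's unavailability (Table 1 values).
servers = {                          # server id: (utilization, availability)
    1: (0.50, 0.85),
    2: (0.20, 0.70),
    3: (0.89, 0.95),
    4: (0.70, 0.90),
}

def expected_lost(servers):
    total = 0.0
    for rho, avail in servers.values():
        n_avg = rho / (1 - rho)      # average requests in the server
        total += (1 - avail) * n_avg # lost when the server is down
    return total

print(f"expected lost requests ~ {expected_lost(servers):.3f}")
```

Note how the unavailability weights matter: Server 3 contributes the most despite its high availability, because its 89% utilization makes its average occupancy large.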




Problem 3

You have built a social network for stale memes, namely Repostit. A number of venture capitalists you hooked on Craigslist have poured a good amount of money into this startup idea of yours. With this capital, you bought 20 identical server machines. But now you are broke and want to save on the electricity bill.

For this reason, you design the following server allocation scheme. Initially, when there is little traffic, say x requests per second, you power off all the servers except Server 0, and use it to serve any incoming request. When the volume of requests increases, say to y, so that the average latency of an incoming request starts exceeding 500 ms, you turn on an additional server, say Server 1. Server 1 only receives the minimum amount of traffic that, if forwarded to Server 0, would violate the 500 ms constraint on the latency. If traffic increases further, so that requests on Server 1 violate the latency constraint, you repeat the same operation with Server 2 and so on.

Assume that the average handling time for a request is 15 ms. Answer the following:

a) Under which threshold on the volume of requests can you save the maximum amount of power?

b) What is the maximum volume of requests that your system can handle without violating the latency constraint?

c) Express a formula for the number of servers N that are powered-on when the total number of requests per second is R.

d) If the number of servers currently powered up is 10, what is the probability that Server 5 is sitting idle?


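Assuming each powered-on server behaves as an M/M/1 queue with exponential service of mean 15 ms, the 500 ms bound on average latency translates into a per-server rate threshold, from which parts (a)-(c) follow. A sketch under that assumption (variable names are illustrative):

```python
from math import ceil

# Hedged sketch for Problem 3: per-server M/M/1 with mean service
# time 15 ms and an average-latency constraint of 500 ms.
MU = 1 / 0.015                   # service rate per server, req/s
T_MAX = 0.5                      # latency constraint, seconds

# avg latency 1/(MU - lam) <= T_MAX  =>  lam <= MU - 1/T_MAX
lam_max = MU - 1 / T_MAX         # max per-server rate before a new
                                 # server must be powered on

def servers_needed(R):
    """Number of powered-on servers for total rate R, cf. part (c)."""
    return max(1, ceil(R / lam_max))

print(f"per-server threshold ~ {lam_max:.2f} req/s")
print(f"system capacity ~ {20 * lam_max:.1f} req/s")   # cf. part (b)
```

For part (d), note that under this filling scheme every server below the most recently powered-on one runs at `lam_max`, so its idle probability is simply 1 - lam_max/MU.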
