Session 6 is about green computing and networking. There are three nice paper presentations in this session.
Paper 1: Modelling Performance and Power Consumption of Utilisation-based DVFS Using M/M/1 Queues (http://dl.acm.org/authorize.cfm?key=N15719), presented by Prof. Hermann de Meer from University of Passau.
This work uses M/M/1 queues to model the tradeoff between service performance and the power consumption of utilization-based DVFS. More concretely, under an M/M/1 queuing model for job arrivals and CPU service, the problem is to minimize the mean power consumption of the CPU subject to a mean response-time constraint for jobs; the design variables are the sampling-interval length and the utilization threshold that triggers frequency switching. The authors also use extensive simulations to validate the proposed models.
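To make the tradeoff concrete, here is a minimal sketch (not the authors' exact model, and with made-up parameters): it assumes the service rate scales linearly with CPU frequency and dynamic power scales roughly cubically with frequency (P ∝ C·V²·f with V scaled alongside f), then picks the lowest frequency that still meets a mean response-time budget.

```python
def mm1_response_time(lam, mu):
    """Mean response time of an M/M/1 queue (requires mu > lam for stability)."""
    assert mu > lam, "queue must be stable"
    return 1.0 / (mu - lam)

def min_power_frequency(lam, mu_base, t_max, freqs):
    """Pick the lowest frequency whose mean response time meets t_max.

    Returns (frequency, relative power), or None if infeasible.
    """
    for f in sorted(freqs):
        mu = f * mu_base          # assumption: service rate scales with frequency
        if mu > lam and mm1_response_time(lam, mu) <= t_max:
            return f, f ** 3      # assumption: power scales cubically with frequency
    return None                   # constraint infeasible at every available frequency

# Hypothetical numbers: arrival rate 5 jobs/s, base service rate 10 jobs/s
# at f = 1.0, and a mean response-time budget of 0.5 s.
print(min_power_frequency(5.0, 10.0, 0.5, [0.6, 0.8, 1.0]))
```

At f = 0.6 the mean response time is 1 s, which violates the 0.5 s budget, so the sketch settles on f = 0.8; the actual paper additionally models the utilization-threshold switching dynamics, which this toy version omits.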
Q1: From the physical point of view, how can voltage be incorporated into the power-frequency model?
Q2: What’s the impact of the size of the sampling interval?
A2: A long sampling interval may lead to inaccurate utilization measurements.
Q3: Have you considered the power consumption of non-CPU components, e.g., I/O?
A3: Not yet in this work.
Paper 2: Energy-efficient Disk Caching for Content Delivery (http://dl.acm.org/authorize.cfm?key=N15725), presented by Mr. Aditya Sundarrajan from University of Massachusetts Amherst.
This work focuses on reducing the energy consumption of CDNs by shutting down some of the spinning disks used for content caching. The goal is to cut the energy consumption of CDNs without significantly degrading cache performance in terms of hit rate. The paper considers three design spaces: how to determine the cache size, which disks (or parts of disks) to shut down, and how to do content replacement and eviction. It is shown that the proposed framework can save 30% of disk energy with only a 6.5% decrease in the normalized server hit rate and a 3% reduction in the normalized cluster hit rate.
Q1: What’s the time scale of making decisions in your algorithm?
A1: We adjust the cache size every 6 hours.
Q2 (a follow-up to Q1): Such a long time scale may rely on stale information and thus degrade performance.
Q3: How can smaller content providers and network providers be taken into account, rather than only the energy consumption of CDNs?
A3: It is not considered in the current work.
Q4: Why not use SSDs to cache popular content?
A4: The volume of popular content keeps growing and may no longer fit in SSDs.
Paper 3: Enabling Reliable Data Center Demand Response via Aggregation (http://dl.acm.org/authorize.cfm?key=N15727), presented by Prof. Yuanxiong Guo from Oklahoma State University.
This work proposes the idea of aggregation to provide reliable demand response capacity for data centers. It uses a coalitional game to model the cooperation of multiple data centers. Through elegant analysis, the authors first show that when multiple data centers cooperate with each other, the uncertainty of the demand response capacity can indeed be reduced and the total payoff of all data centers increases. The authors further propose a payoff allocation scheme to split the total payoff among individual data centers, under which no data center has an incentive to deviate from the grand coalition. Trace-driven simulations demonstrate the effectiveness of the proposed approach.
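Why aggregation reduces uncertainty can be sketched with a simple statistical argument (made-up numbers, not the paper's formulation): if each data center's demand-response capacity is modeled as an independent Gaussian, the capacity it can promise with high confidence is its mean minus a multiple of its standard deviation. Pooled standard deviations add in quadrature, so the coalition can promise strictly more than the sum of individual promises.

```python
import math

def reliable_capacity(mean, std, z=1.645):
    """Capacity deliverable with ~95% confidence under a Gaussian model."""
    return mean - z * std

# Hypothetical (mean, std) of DR capacity for three data centers.
centers = [(100.0, 30.0), (80.0, 25.0), (120.0, 40.0)]

individual = sum(reliable_capacity(m, s) for m, s in centers)
pooled_mean = sum(m for m, _ in centers)
pooled_std = math.sqrt(sum(s ** 2 for _, s in centers))  # independence assumed
coalition = reliable_capacity(pooled_mean, pooled_std)

print(f"sum of individual promises: {individual:.1f}")
print(f"coalition promise:          {coalition:.1f}")
```

Since sqrt(sum of variances) is always less than the sum of standard deviations, the coalition's reliable capacity exceeds the individual sum, which is the superadditivity that makes the grand coalition attractive; the paper's payoff allocation then ensures no subset prefers to leave.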
Q1: How do you model the uncertainty of demand response capacity?
A1: We use a probability density function (PDF) to model it.
Q2: It seems that this model does not consider energy reduction?
A2: We discuss this briefly in the paper, and we may investigate it further in future work.
All three presentations in this session consider very important problems and offer insightful ideas in the broad category of green computing, which I think is why the Q&A discussions were quite active. Personally, my favorite was the second talk: I liked its clear logic and well-made slides, and I also enjoyed the Q&A discussion, where the presenter and the audience had a lot of back-and-forth interaction.