JOEP AERTS

Philips Research Laboratories

Prof. Holstlaan 4, 5656 AA Eindhoven

aertsj@natlab.research.philips.com

Tuesday January 11, 15.00-15.30

A multimedia server offers continuous streams of multimedia data to multiple users. In a multimedia server one can generally distinguish three parts: an array of hard disks to store the data, an internal network, and a fast memory from which the users consume, usually implemented in RAM. The multimedia data is stored on the hard disks in blocks, so that a continuous data stream is realized by periodically reading a block from disk and storing it in RAM. The total RAM is split up into a number of buffers, one for each user. A user consumes, possibly at a variable bit-rate, from his/her buffer, and the buffer is refilled with blocks from the hard disks. A buffer generates a request for a new block as soon as the amount of data in the buffer drops below a certain threshold. We assume that the requests are handled periodically in batches, such that the requests that arrive in one period are serviced in the next one. In the server we need a cost-efficient storage and retrieval strategy that guarantees, either deterministically or statistically, that the buffers neither underflow nor overflow.

The load balancing performance of a storage strategy is an important factor in the disk costs per user, as an efficient usage of the available bandwidth of the disk array increases the maximum number of users. In this paper we analyze the load balancing performance of random duplicate storage strategies, in which each data block is stored on two randomly chosen disks. This data redundancy gives the freedom to obtain a balanced load. To exploit this freedom we have to solve a retrieval problem in each period, i.e., decide for each data block which disk(s) to use for its retrieval. The retrieval problem can be seen as a multiprocessor scheduling problem by viewing the disks as machines and the blocks as jobs. We describe two load balancing approaches: block-based load balancing and time-based load balancing.

Block-based load balancing results in the so-called retrieval selection problem in which we have to assign a disk to each block in such a way that the maximum number of blocks assigned to one disk is minimized. In the notation of multiprocessor scheduling this problem is denoted by $P\;|\;M_j, p_j=1\;|\;C_{max}$. The machine eligibility constraints depend on the storage strategy. The problem is solvable in polynomial time with maxflow algorithms.
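As a rough illustration of the maxflow reduction (a sketch under our own naming, not the authors' implementation): connect each block to its two eligible disks, connect each disk to the sink with capacity L, and the smallest L for which a feasible flow routes every block equals the minimum possible maximum load.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow on an adjacency-matrix of capacities."""
    n = len(cap)
    flow = 0
    while True:
        # BFS for a shortest augmenting path
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if cap[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow
        # find the bottleneck and augment along the path
        v, aug = t, float("inf")
        while v != s:
            aug = min(aug, cap[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            cap[parent[v]][v] -= aug
            cap[v][parent[v]] += aug
            v = parent[v]
        flow += aug

def min_max_load(blocks, num_disks):
    """blocks[i] = (d1, d2): the two disks holding copies of block i.
    Returns the minimum achievable maximum number of blocks per disk."""
    B = len(blocks)
    if B == 0:
        return 0
    for L in range(1, B + 1):          # try increasing load bounds
        n = 1 + B + num_disks + 1      # source, block nodes, disk nodes, sink
        cap = [[0] * n for _ in range(n)]
        for i, (d1, d2) in enumerate(blocks):
            cap[0][1 + i] = 1                  # source -> block
            cap[1 + i][1 + B + d1] = 1         # block -> eligible disks
            cap[1 + i][1 + B + d2] = 1
        for d in range(num_disks):
            cap[1 + B + d][n - 1] = L          # disk -> sink, load bound L
        if max_flow(cap, 0, n - 1) == B:       # all blocks routed?
            return L
```

For instance, four blocks stored on disk pairs (0,1), (0,1), (0,1), (0,2) over three disks cannot all be retrieved with load bound 1, but can with bound 2.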

In the time-based load balancing approach we balance the actual working times of the disks. This means that we take transfer time, switch time and partial retrieval into account. This results in a multiprocessor scheduling problem denoted by $R\;|\;M_j, pmtn + set-up\;|\;C_{max}$. This problem is NP-hard, and we present an ILP model and an LP-based heuristic algorithm.


SANDJAI BHULAI

Department of Mathematics and Computer Science

Vrije Universiteit Amsterdam

sbhulai@cs.vu.nl

Thursday January 13, 15.30-16.00

In this talk we investigate the multi-armed bandit problem, where each arm generates an infinite sequence of Bernoulli distributed rewards. The parameters of these Bernoulli distributions are unknown and initially assumed to be Beta-distributed. Every time a bandit is selected, its Beta distribution is updated with the new information in a Bayesian way. The objective is to maximize the long-term discounted reward.
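The Bayesian update in this setting is simple to state: a Beta(a, b) prior on an arm's success probability becomes Beta(a+1, b) after a success and Beta(a, b+1) after a failure. A minimal sketch (the function names are ours, not from the talk):

```python
def update(prior, reward):
    """One Bayesian update of a Beta(a, b) prior after observing a
    Bernoulli reward (1 = success, 0 = failure)."""
    a, b = prior
    return (a + reward, b + 1 - reward)

def posterior_mean(prior):
    """Expected success probability under the current Beta(a, b) belief."""
    a, b = prior
    return a / (a + b)
```

Starting from the uniform prior Beta(1, 1) and observing rewards 1, 1, 0 yields Beta(3, 2), i.e., a posterior mean of 0.6.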

We study the relationship between the necessity of acquiring additional information and the reward. This is done by considering two extreme situations which occur when a bandit has been played N times: the situation where the decision maker stops learning, and the situation where the decision maker acquires full information about that bandit. We show that the difference in reward between the resulting lower and upper bounds goes to zero as N grows large.


KOEN DE BONTRIDDER

Department of Mathematics and Computing Science

Eindhoven University of Technology

P.O. Box 513, 5600 MB Eindhoven

koendb@win.tue.nl

Thursday January 13, 15.00-15.30

We present a tabu search algorithm for a generalization of the classical job shop scheduling problem. In our model there are release times, positive end-start time lags, a general precedence graph, and nonrenewable-resource constraints. As objective we consider the total weighted tardiness.

A resource is called nonrenewable if it is actually consumed by the operations using it. The nonrenewable resources consumed by an operation must have been produced by other operations or delivered by external suppliers before the start of the consuming operation. Finding a feasible schedule is NP-complete in the strong (unary) sense. We use an activity list to represent a schedule. Given an activity list, a schedule can be generated by a list scheduling algorithm. We present some methods to modify the activity list and give extensive computational results.
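A serial schedule generator of the kind described can be sketched as follows (a simplification with release times and precedence constraints only, omitting the resource side; all names are ours, not the authors'):

```python
def list_schedule(activity_list, ops):
    """Serial schedule generation from an activity list.
    ops[j] = (machine, processing_time, release_time, predecessors).
    Assumes the activity list is precedence-feasible (every operation
    appears after all of its predecessors)."""
    machine_free = {}   # earliest time each machine is available
    completion = {}     # completion time of each scheduled operation
    for j in activity_list:
        m, p, r, preds = ops[j]
        # an operation starts no earlier than its release time, its
        # machine's availability, and its predecessors' completions
        start = max([r, machine_free.get(m, 0)] +
                    [completion[k] for k in preds])
        completion[j] = start + p
        machine_free[m] = completion[j]
    return completion
```

For example, with operations a (machine 0, length 3), b (machine 0, length 2, after a), and c (machine 1, length 2, release 1, after a), the list [a, b, c] yields completion times 3, 5 and 5.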


PATRICK MEERSMANS

Econometric Institute

Erasmus University Rotterdam

P.O. Box 1738, 3000 DR Rotterdam

meersmans@few.eur.nl

Thursday January 13, 15.00-15.30

This presentation reports on ongoing research in the area of scheduling handling equipment at container terminals. We show that a straightforward approach, i.e., scheduling each type of equipment separately, may lead to infeasible overall schedules. Therefore, we propose to model the integrated problem exactly, using a Mixed Integer Programming formulation. Straightforward Branch & Bound proves to be very time consuming, even for small instances. Our current research focuses on finding classes of valid inequalities to be used in a Branch & Cut procedure.

Joint work with Albert Wagelmans and Stan van Hoesel.


WILLEM DE PAEPE

Department of Mathematics and Computing Science

Eindhoven University of Technology

P.O. Box 513, 5600 MB Eindhoven

W.E.d.Paepe@tm.tue.nl

Thursday January 13, 15.00-15.30

In the online traveling salesman problem requests for visits to cities (points in a metric space) arrive online while the salesman is traveling. The salesman moves at no more than unit speed and starts and ends his work at a designated origin. The objective is to find a routing for the salesman which finishes as early as possible.

Competitive analysis is used to measure the performance of online algorithms: the makespan of the online server is compared to the makespan of an adversary server that can serve every sequence optimally. We study the effect of restricting the class of online algorithms allowed and of restricting the power of the adversary. We introduce and analyze a new class of online algorithms which we call diligent algorithms. Roughly speaking, a diligent algorithm never sits idle while there is work to do. We show that in general diligent algorithms are strictly weaker than algorithms that allow waiting time. We also present a new adversary model, called the fair adversary, as a more reasonable alternative to the omnipotent adversary that is usually used in competitive analysis. The fair adversary is required to remain inside the convex hull of the requests released so far.

For online TSP on the positive part of the real line, we give an algorithm against a fair adversary that is $(1+\sqrt{17})/4$-competitive and show that this is best possible. We also give a best possible diligent algorithm for this problem, which is $4/3$-competitive.


DANIEL PAULUSMA

Faculty of Mathematical Sciences

University of Twente

P.O. Box 217, 7500 AE Enschede

D.Paulusma@math.utwente.nl

Tuesday January 11, 15.30-16.00

Consider a soccer competition among various teams playing against each other in pairs (matches) according to a previously determined schedule. At some stage of the competition one may ask whether a particular team still has a (theoretical) chance to win the competition. The complexity of this question depends on the way scores are allocated according to the outcome of a match. For example, the problem is polynomially solvable under the old FIFA rules (2 points for a win, 1 for each team in a draw) but becomes $NP$-hard if the new rules are applied (3 points for a win, 1 for each team in a draw). We determine the complexity of the above problem for all possible score allocation rules.
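For tiny instances the question can be answered by brute force over the outcomes of the remaining matches, which also makes the role of the score allocation rule explicit (illustrative only, and exponential; the polynomial algorithm for the 2-point rule is based on a network flow formulation):

```python
from itertools import product

def can_still_win(points, matches, team, win_pts=2, draw_pts=1):
    """Can `team` still finish with (at least a share of) the most points?
    points: current score per team; matches: list of (i, j) still to play.
    The score rule is a parameter: win_pts=2 gives the old FIFA rule,
    win_pts=3 the new one."""
    # outcome code per match: 0 = home win, 1 = draw, 2 = away win
    for outcome in product((0, 1, 2), repeat=len(matches)):
        final = list(points)
        for (i, j), o in zip(matches, outcome):
            if o == 0:
                final[i] += win_pts
            elif o == 2:
                final[j] += win_pts
            else:
                final[i] += draw_pts
                final[j] += draw_pts
        if all(final[team] >= final[k]
               for k in range(len(final)) if k != team):
            return True
    return False
```

With scores [3, 0, 0] and one match left between teams 1 and 2, team 1 cannot catch up under the 2-point rule but can tie for first under the 3-point rule, which hints at how the rule changes the problem.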


LEON PEETERS

Rotterdam School of Management

Erasmus University of Rotterdam

P.O. Box 1738, 3000 DR Rotterdam

l.peeters@fac.fbk.eur.nl

Thursday January 13, 15.30-16.00

We consider the problem of constructing a periodic timetable for the Dutch railway system, which is characterized by a large number of trains running at different speeds, with high operating frequencies and many connections. The input to the problem consists of the railway infrastructure, the train lines (including operating frequencies and running times), and constraints involving network safety and quality requirements. The output should consist of arrival and departure times for all the trains in the system, for one time period (usually one hour). The NP-complete problem of finding a feasible periodic railway timetable was studied by Schrijver and Steenbeek, who developed a constraint propagation algorithm for solving it.

We study an optimization approach to periodic railway timetabling, formulating the problem as a mixed integer program in which the integer variables model the periodicity of the problem. Solving this program by branch and bound proves to be hard and extremely time consuming, even for small instances. First, some preprocessing techniques for reducing the size of real-life instances will be discussed. These methods reduce the number of constraints and integer variables, since in our model each constraint contains a unique integer variable. Next, we introduce reformulations for systems of constraints that correspond to certain practical requirements (e.g., connections between trains). Finally, we will present valid inequalities that substantially decrease the computation time by improving the LP relaxation bound in the branch and bound process.
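Periodic constraints of this kind typically take the form $l \leq (t_j - t_i) \bmod T \leq u$, with one implicit integer ("modulo") variable per constraint, as the text notes. A minimal feasibility check for a candidate timetable, assuming this standard form rather than the authors' exact model:

```python
def satisfies(timetable, constraints, period=60):
    """Check a candidate periodic timetable against constraints of the
    form l <= (t_j - t_i) mod period <= u. Each constraint hides one
    integer variable (the number of whole periods between the events).
    timetable: event name -> minute; constraints: (i, j, l, u) tuples."""
    for i, j, l, u in constraints:
        gap = (timetable[j] - timetable[i]) % period
        if not l <= gap <= u:
            return False
    return True
```

For example, with departures A at minute 10 and B at minute 25 in a 60-minute period, a required gap of 10 to 20 minutes from A to B holds, while the same requirement from B to A does not (the periodic gap is 45).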


KEES JAN ROODBERGEN

Rotterdam School of Management

Erasmus University of Rotterdam

P.O. Box 1738, 3000 DR Rotterdam

K.Roodbergen@fbk.eur.nl

Tuesday January 11, 15.00-15.30

Warehouses form an important part of distribution networks. Products can be stored temporarily, and customer orders are filled by retrieving products from storage. Warehouse processes usually require a large amount of time. Decreasing the throughput times in warehouses is therefore essential to meet customer demand, but may be difficult to achieve.

The order picking process is the process by which products are retrieved from storage on the basis of customer orders. It is often the most laborious activity in a warehouse: on average, order picking accounts for about 55% of the operational costs of a warehouse. The efficiency of the order picking process depends on factors such as the methods for storage and transport (flow racks, shelves, order picking trucks, picking carts, etc.), the layout of the area, and the control mechanisms.

Improving the order picking process by decreasing the time needed to pick an order can lead to a simultaneous decrease in throughput time and operational costs. One way to improve the order picking process is to determine a layout that minimizes the travel time of the order pickers. When determining a layout we have to take into account factors such as aisle length and the number of aisles and cross aisles. Cross aisles are aisles perpendicular to the picking aisles and can be used to change from one picking aisle to another.

We describe a method to determine a good layout for the order picking area. First an estimate is derived for the average travel time. This estimate is based on statistical properties of a frequently used routing method when applying random storage. The resulting estimate for average travel time is a function of the order size, the number of aisles, the number of cross aisles, aisle width and aisle length. By minimizing this function for a given order size, we obtain the desired layout for the warehouse.

As a result of these layout optimizations we found that average travel time is fairly insensitive to small changes in warehouse layout. If a small deviation from the minimal travel time is allowed, then it appears that a wide variety of suitable layouts will be returned by the optimization procedure.

Joint work with Gunter P. Sharp (Georgia Institute of Technology).


ANDREI SLEPTCHENKO

Faculty of Technology and Management

Operation Methods and System Theory

University of Twente

P.O. Box 217, 7500 AE Enschede

A.Sleptchenko@sms.utwente.nl

Tuesday January 11, 15.30-16.00

Models for multi-echelon, multi-indenture supply systems for repairable service parts have a thirty-year history. Such models aim to optimize system availability within a budget restriction by allocating stocks of the various system parts/subassemblies to the various locations in the supply network, given part costs, failure rates and repair throughput times.

Until now, little attention has been paid to capacity issues in such supply networks. We extend the METRIC model of Sherbrooke with finite repair shop capacity, where repair shops are modeled as multi-server queueing systems. In this presentation we give approximation formulas for multi-server and multi-class multi-server queueing systems, which are used to describe the repair shops, together with a short analysis of the errors these approximations may introduce. We also show how ignoring finite capacity may affect system availability and stock allocation decisions. Further, we give criteria for when it is justified to use the traditional infinite-capacity METRIC models.
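A standard building block when a repair shop is modeled as an M/M/c queue is the Erlang C formula for the probability of waiting; the sketch below covers the exact single-class case (the approximations in the talk concern the more general multi-class systems, and the function names are ours):

```python
from math import factorial

def erlang_c(c, load):
    """Erlang C: probability that an arriving job must wait in an M/M/c
    queue with offered load = lambda/mu erlangs; requires load < c."""
    s = sum(load**k / factorial(k) for k in range(c))
    last = load**c / (factorial(c) * (1 - load / c))
    return last / (s + last)

def mean_queue_length(c, lam, mu):
    """Mean number of failed parts waiting (not in service) at a repair
    shop with c parallel servers, arrival rate lam and repair rate mu."""
    load = lam / mu
    return erlang_c(c, load) * load / (c - load)
```

As a sanity check, for a single server at 50% utilization the waiting probability is 0.5 and the mean queue length is 0.5, matching the classical M/M/1 formulas.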

Sherbrooke, C.C. [1992], Optimal inventory modeling of systems: Multi-echelon techniques, Wiley, New York.


JOB SMELTINK

Department of Computer Science

Utrecht University

job@cs.uu.nl

Thursday January 13, 15.30-16.00

Cornuéjols and Dawande proposed a set of 0-1 linear programming instances that proved to be very hard to solve by traditional methods, and in particular by linear programming based branch-and-bound. They offered these market split instances as a challenge to the integer programming community. They generated these instances such that the feasibility version should be infeasible, which implies that the optimization version has an optimal objective value strictly greater than zero. In addition, the LP-relaxation of the optimization version has value zero, even after many variables have been fixed. This implies that one needs to get very deep in the branch-and-bound tree before it is possible to close the duality gap. During our computations, however, we observed that many of the larger instances were in fact feasible, which means that the "hardness" argument given above no longer holds. We therefore performed a probabilistic analysis describing how to compute the probability of generating infeasible market split instances. Our analysis indicates that one should, given the number of equations, generate slightly fewer variables than Cornuéjols and Dawande suggest.
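A generator in the spirit of the Cornuéjols and Dawande construction, as it is commonly described (m equations, n = 10(m-1) binary variables, coefficients uniform below 100, right-hand sides half the row sums; treat these parameter choices as our assumptions, not a transcription of the paper):

```python
import random

def market_split_instance(m, seed=0, coeff_range=100):
    """Sketch of a market split instance: find x in {0,1}^n with
    sum_j a[i][j] * x[j] == d[i] for every row i. Returns (a, d)."""
    rng = random.Random(seed)
    n = 10 * (m - 1)
    a = [[rng.randrange(coeff_range) for _ in range(n)] for _ in range(m)]
    d = [sum(row) // 2 for row in a]   # split each "market" in half
    return a, d
```

The abstract's point is precisely that, for larger m, instances generated this way are often feasible after all, so fewer variables per equation are needed to make infeasibility likely.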

Joint work with K. Aardal, R.E. Bixby, C.A.J. Hurkens, and A.K. Lenstra.


MIRANDA VAN UITERT

CWI

P.O. Box 94079, 1090 GB Amsterdam

miranda@cwi.nl

Thursday January 13, 15.00-15.30

We consider networks where traffic is served according to the Generalised Processor Sharing (GPS) principle. GPS-based scheduling algorithms are considered important for providing differentiated quality of service in future integrated-services networks.

We are interested in the workload of a particular flow $i$ at the $N$th node on its path. Flow $i$ is assumed to have long-tailed traffic characteristics. We distinguish between two traffic scenarios, (i) flow $i$ generates instantaneous traffic bursts and (ii) flow $i$ generates traffic according to an on/off process. In addition, we consider two configurations of feed-forward networks. First we focus on the situation where other flows join the path of flow $i$. Then we extend the model by adding flows which can branch off at any node, with cross traffic as a special case. These networks are an extension of a single-node model in an earlier study. We prove that under certain conditions the tail behaviour of the workload distribution of flow $i$ is equivalent to the tail behaviour of the workload distribution in a


ALJAZ ULE

University of Amsterdam

ule@fee.uva.nl

Tuesday January 11, 15.00-15.30

We investigate time dependent demand for capacity in a mobile communications network with active calls following the movement of the underlying traffic. Some areas of such a network can be temporarily extremely busy, for example during rush hours or in traffic jams. Due to the increased number of call requests, the capacity assigned to busy areas might not be sufficient to satisfy the demand, and a strategy to increase the capacity of these areas is desired. For any traffic behavior we describe the time dependent distribution of calls over the network, assuming there are no capacity constraints. We show that it is insensitive to the call length distribution. These results are used to approximate the distribution of calls and the blocking probabilities in finite capacity networks. A particular case of a single traffic jam moving on a road is considered, and dynamic strategies to minimize blocking probabilities are analyzed. We compare strategies of borrowing capacity from a region ahead of the traffic jam to strategies of borrowing capacity from a region behind it. We show that the performance of these strategies depends heavily on the shape of the traffic jam.
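For a single cell with finite capacity and Poisson call arrivals, the blocking probability is given by the classical Erlang B formula, which is likewise insensitive to the call length distribution (shown here as a generic building block for such approximations, not as the authors' specific method):

```python
def erlang_b(capacity, load):
    """Erlang B blocking probability for a cell with `capacity` channels
    and offered load `load` (erlangs), via the standard stable recursion
    B(c) = load*B(c-1) / (c + load*B(c-1)), B(0) = 1."""
    b = 1.0
    for c in range(1, capacity + 1):
        b = load * b / (c + load * b)
    return b
```

For example, one channel offered one erlang blocks half of the calls, while a second channel brings blocking down to 0.2; capacity borrowing exploits exactly this steep gain from extra channels in overloaded cells.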


IRIS VIS

Erasmus University Rotterdam

I.Vis@fbk.eur.nl

Thursday January 13, 15.30-16.00

One of the activities at a container terminal is to transport containers from the stack (i.e., storage) to the ship and vice versa. This transport is carried out by vehicles such as straddle carriers, container trains or automated guided vehicles. These vehicles have predefined routes to deliver containers at the right place at the right time. To ensure that this operation is carried out fast and efficiently, one of the problems that has to be solved is determining the number of vehicles needed to transport all containers.

We consider the deterministic case in which the transport of containers has to be done within a time window. For every container, a release time and a due time for pick-up are given. The aim of setting pick-up due times is to restrict the waiting time of containers before they are transported. Furthermore, the use of pick-up due times assists in managing the usually small buffer areas between terminal cranes and vehicles. At automated terminals there is sometimes no buffer space at all, which means that a crane has to wait until a vehicle arrives before it can drop off its load.

In this presentation, a model and an algorithm are given to determine the number of vehicles needed to transport all containers on time. After discretization of the time windows, a network can be constructed. Every node represents a time point at which a job i can be executed. If two jobs i and j can be transported consecutively by the same vehicle, there is an arc from job i to job j. The model and algorithm are illustrated with an example.
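When each job is pinned to a single time point (start, end), the minimum fleet size can be computed greedily, which illustrates the compatibility structure behind the network model (a simplification of the discretized time-window model that ignores empty-travel times; the names are ours):

```python
import heapq

def min_vehicles(jobs):
    """Minimum number of vehicles when every transport job has a fixed
    start and end time: process jobs in start order and reuse a vehicle
    whenever one has already finished its previous job."""
    free = []   # min-heap of completion times of vehicles in use
    count = 0
    for start, end in sorted(jobs):
        if free and free[0] <= start:
            heapq.heappop(free)   # reuse the earliest-available vehicle
        else:
            count += 1            # no compatible vehicle: add a new one
        heapq.heappush(free, end)
    return count
```

Jobs (0,5), (1,3) and (4,6), for instance, need two vehicles: the second vehicle finishes job (1,3) in time to take over job (4,6).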

Joint work with M. Savelsbergh (Georgia Institute of Technology).


TJARK VREDEVELD

Department of Mathematics and Computing Science

Eindhoven University of Technology

P.O. Box 513, 5600 MB Eindhoven

tjark@win.tue.nl

Tuesday January 11, 15.00-15.30

The problem of scheduling jobs on unrelated parallel machines so as to minimize total weighted completion time is considered. We compare the solutions obtained by algorithms with a worst-case performance guarantee to solutions obtained by local search heuristics, in terms of quality and computational effort. The algorithms with a worst-case performance guarantee are based on rounding a fractional solution of either an LP relaxation or a convex quadratic programming (CQP) relaxation. Computational tests show that for most instances rounding the fractional solution of the LP relaxation yields better solutions than rounding the fractional solution of the CQP relaxation. When basic iterative improvement is subsequently applied to these solutions, the values of the resulting solutions hardly differ. The LP relaxation plus rounding needs about the same amount of time as the CQP relaxation plus rounding. Local search heuristics based on iterative improvement from multiple random starts need much less time, and the quality of their solutions is at least as good as that of the other methods.
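An iterative-improvement step of this kind can be sketched as follows, using the fact that once the job-to-machine assignment is fixed, each machine optimally sequences its jobs in WSPT order (an illustrative local search with a job-reassignment neighbourhood, not the authors' exact implementation):

```python
def cost(assign, p, w):
    """Total weighted completion time; p[m][j] is the processing time of
    job j on machine m, w[j] its weight. Each machine runs its jobs in
    WSPT (shortest p/w first) order, which is optimal per machine."""
    total = 0.0
    for m in set(assign):
        jobs = sorted((j for j in range(len(assign)) if assign[j] == m),
                      key=lambda j: p[m][j] / w[j])
        t = 0.0
        for j in jobs:
            t += p[m][j]
            total += w[j] * t
    return total

def iterative_improvement(p, w):
    """Start from a greedy assignment, then move single jobs between
    machines as long as the objective decreases (a local optimum of the
    job-reassignment neighbourhood)."""
    n_machines, n_jobs = len(p), len(w)
    assign = [min(range(n_machines), key=lambda m: p[m][j])
              for j in range(n_jobs)]
    best = cost(assign, p, w)
    improved = True
    while improved:
        improved = False
        for j in range(n_jobs):
            for m in range(n_machines):
                if m == assign[j]:
                    continue
                old = assign[j]
                assign[j] = m
                c = cost(assign, p, w)
                if c < best:
                    best, improved = c, True
                else:
                    assign[j] = old   # undo a non-improving move
    return assign, best
```

On two machines with p = [[1, 4], [4, 1]] and unit weights, the greedy start already puts each job on its fast machine and the search stops with objective 2.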


BERT ZWART

Department of Mathematics and Computing Science

Eindhoven University of Technology

P.O. Box 513, 5600 MB Eindhoven

zwart@win.tue.nl

Tuesday January 11, 15.30-16.00

The tail behaviour of the waiting-time distribution in the single server queue is quite well understood. For a heavy-tailed service-time distribution this problem has been analysed by Cohen, Pakes and Veraverbeke in the seventies. The extension of their results to multi-server queues is still open.

In this talk we will provide: (i) Exact asymptotics of the waiting-time distribution in a particular multi-server queue, which is tractable enough for an analytical treatment. (ii) A heuristic, but insightful analysis of the tail of the waiting-time distribution in this multi-server queue. It will be shown that these heuristics give the correct answer and that they can be applied to more general multi-server queues.

Joint work with Onno Boxma and Qing Deng.
