Packet Dispatching Scheme Employing Distributed Arbiters for Modified MSM Clos Switching Fabric

Janusz Kleban

The author is with the Faculty of Electronics and Telecommunications, Poznan University of Technology, Poznan, Poland; e-mail: janusz.kleban@et.put.poznan.pl.



Abstract—The Clos switching fabric was proposed by Charles Clos in 1953 to scale telephone circuit switches. Currently this architecture seems to be a very attractive solution for high-performance switches and routers in terms of large switching capacity. Since some packet dispatching algorithms for Clos-network switches are very complex, we have proposed the modified MSM (Memory-Space-Memory) Clos switching fabric, which can be controlled very easily. In this paper we propose and evaluate a new packet dispatching scheme employing distributed arbitration, called DAUB (Distributed Arbitration and rapid Unload of Buffers), intended for the modified MSM Clos switching fabric. Simulation results show that this modified switching fabric and the described scheduling deliver very good performance in terms of throughput, cell delay and input buffer size under different traffic distribution patterns. The results obtained under distributed and centralized arbitration are also compared.

Index Terms—Clos-network, Dispatching Algorithm, Packet Switching, Packet Scheduling.

I. INTRODUCTION

The expected traffic growth from new IP services like voice, audio, video, TV, and gaming will require service providers to rearchitect their switches/routers even more rapidly than the current interval of 3 to 5 years. Designing cost-effective large switches/routers with a capacity of a few hundred Tbps to a few Pbps is one of the greatest challenges faced by scientists and engineers today.

There are two main approaches to the implementation of high-speed packet switching systems. One approach is the single-stage switch architecture; the other is the multiple-stage switch architecture, such as the Clos-network switch [1]. Most high-speed packet switching systems in the backbone of the Internet are currently built on the basis of a single-stage switching fabric with a centralized scheduler. Multiple-stage switching fabrics are made of smaller-size switching elements, where each such element is usually a crossbar.

The three-stage MSM Clos-network switch employs shared memory modules in the first and third stages. By allocating memory in the first stage it is very easy to implement Virtual Output Queuing (VOQ) and thus avoid the Head-Of-Line (HOL) blocking phenomenon. To solve the internal blocking and output port contention problems in VOQ switches, fast arbitration schemes are needed. The arbitration scheme decides which cell at the input buffers will be transferred to the outputs, and may be implemented in a centralized or distributed manner. In the case of centralized arbitration, the decisions are taken by a central arbiter. In the distributed case, each input and output has its own arbiter operating independently of the others. Well-known dispatching schemes for buffered Clos-network switches were proposed in [2-5]. The basic idea of these algorithms is to use the effect of desynchronization of arbitration pointers and a common request-grant-accept handshaking routine. Most of these schemes can achieve 100% throughput under uniform traffic, but under nonuniform traffic the throughput is usually reduced. A switch can achieve 100% throughput under uniform or nonuniform traffic if the switch is stable, as defined in [6].

In this paper we propose and evaluate a new packet dispatching scheme employing distributed arbitration, called DAUB, dedicated to the modified MSM Clos switching fabric. Two versions of this scheme, called DAUB1 and DAUB2, are presented and investigated. The remainder of this paper is organized as follows. Section II introduces some background knowledge concerning the modified MSM Clos switching fabric that we refer to throughout this paper. Section III describes the DAUB packet dispatching scheme, while Section IV presents simulation results obtained under the DAUB1 and DAUB2 schemes. Selected results are compared to the SDRUB (Static Dispatching with Rapid Unload of Buffers) scheme (see Section III), which employs centralized arbitration. We conclude this paper in Section V.

II. THE MODIFIED MSM CLOS SWITCHING FABRIC

The modified MSM Clos switching fabric, proposed by us in [7], is shown in Fig. 1. To define the modified architecture we use the same terminology as for the MSM Clos switching fabric proposed in [3]; see Table I.
TABLE I
NOTATION FOR THE MSM CLOS SWITCHING FABRIC

Notation     Description
IM           Input module at the first stage
CM           Central module at the second stage
OM           Output module at the third stage
i            IM number, where 0 ≤ i ≤ k-1
j            OM number, where 0 ≤ j ≤ k-1
h            Input/output port number in each IM/OM, where 0 ≤ h ≤ n-1
r            CM number, where 0 ≤ r ≤ m-2
IM(i)        The (i+1)th input module
CM(r)        The (r+1)th central module
OM(j)        The (j+1)th output module
IP(i, h)     The (h+1)th input port at IM(i)
OP(j, h)     The (h+1)th output port at OM(j)
LID(i, j)    Output link at IM(i) that is connected to OM(j)
LI(i, r)     Output link at IM(i) that is connected to CM(r)
LC(r, j)     Output link at CM(r) that is connected to OM(j)
VOMQ(i, j)   Virtual output module queue at IM(i) that stores cells destined to OM(j)

The proposed architecture is derived from the three-stage MSM Clos switching fabric. The main idea of the modification lies in connecting bufferless CMs to the two-stage buffered switching fabric. To eliminate HOL blocking, the input buffer in each IM is divided into k parallel queues, each of them storing cells destined to a different OM (other arrangements of buffers are also possible [7]). These queues are called Virtual Output Module Queues (VOMQs), instead of the VOQs used in other switching networks. Memory speedup is necessary here. Each VOMQ(i, j) stores cells going from IM(i) to OM(j). An OM(j) has n OPs, each of which has an output buffer.

Within the modified architecture it is always possible to send one cell from each IM to any OM over the direct links (without any arbitration) and, additionally, to rapidly unload VOMQs through the CMs within one time slot. Rapid unloading of a particular VOMQ(i, j) means that all CMs are used to send cells from IM(i) to OM(j) within one time slot. This approach simplifies the packet dispatching scheme because all CMs are used to transmit packets between selected IM(i)-OM(j) pairs, e.g. IM(1)-OM(2), IM(2)-OM(0), etc. Technically it is possible to use the CMs to send cells from one IM to many OMs, e.g. IM(1)-OM(0), IM(1)-OM(1), IM(1)-OM(2), but in this case the control algorithm is more complex.

Fig. 1. The modified MSM Clos switching fabric architecture.

Within the modified MSM Clos switching fabric an expansion in the IMs and OMs is used. The maximum number of connected CMs in our proposal is equal to m-1, but it is possible to use fewer CMs. In practice, the number of CMs significantly influences the performance of the switching fabric, and the number of CMs required depends on the traffic distribution pattern to be served. For n=m, using m-1 CMs, it is possible to send up to n cells from an IM to an OM: one cell may be sent through the direct connecting path between the IM and the OM, and n-1 cells may be sent through the CMs.
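To make the paragraph above concrete, the following minimal Python sketch (our own illustration with hypothetical names, not part of [7]) computes how many cells can leave a single VOMQ in one time slot: one cell over the direct IM-OM link plus one cell per installed CM.

```python
def cells_transferable_per_slot(vomq_occupancy: int, num_cms: int) -> int:
    """Cells that IM(i) can send to OM(j) in a single time slot:
    one over the direct IM(i)-OM(j) link, plus one per central module."""
    capacity = 1 + num_cms              # direct path + one path through each CM
    return min(vomq_occupancy, capacity)

# With the full expansion (m-1 = 7 CMs) and 10 queued cells,
# 8 cells can be unloaded in one time slot.
print(cells_transferable_per_slot(10, 7))  # -> 8
```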

To avoid internal collisions when the CMs are used, an arbitration process must be implemented using a central arbiter or request-grant-accept signals. The proposed architecture is very flexible and can efficiently serve both uniform and nonuniform traffic distribution patterns.

III. DAUB PACKET DISPATCHING SCHEME FOR MODIFIED CLOS SWITCHING FABRIC

For the modified MSM Clos switching fabric we have proposed and evaluated in [7] a packet dispatching algorithm using centralized arbitration, called SDRUB. In this paper we propose a new scheme employing distributed arbitration, called DAUB. To explain how the algorithm works, assume that there are y CMs in the modified MSM Clos switching fabric.

The proposed algorithm may be implemented in two ways. The first version of the packet dispatching algorithm, called DAUB1, reserves all CMs for a particular IM(i)-OM(j) pair only if IM(i) has at least y+1 cells to be sent to OM(j). In this case one cell will be sent through the direct connection between IM(i) and OM(j), and y cells will be sent through the CMs. If the number of cells destined to OM(j) and waiting in IM(i) is smaller than y+1, IM(i) does not apply for CMs.

The second version of the packet dispatching algorithm, called DAUB2, is more aggressive than DAUB1 and sends request signals whenever IM(i) has more than one cell to be sent to OM(j). In this case it is not necessary to wait until y+1 cells destined to OM(j) are stored in the memory; therefore the DAUB2 scheme can decrease the cell delay, especially for low input load.

The DAUB1 and DAUB2 algorithms use distributed arbiters to resolve contention among the IMs. Apart from the arbiters, other elements also support the contention resolution process. Each VOMQ has its own counter PV(i, j), which shows the number of cells destined to OM(j). The value of PV(i, j) is increased by 1 when a new cell is written to the memory. Each IM scheduler maintains an OMP pointer pointing at the most recently granted OM. Each OM scheduler maintains an IMP pointer pointing at the IM which has most recently sent cells to the OM.
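The per-module state described above can be summarized in a short sketch. This is our own illustrative Python rendering (class and field names are hypothetical), showing only the PV counters and the OMP/IMP pointers that the DAUB schemes rely on.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class IMScheduler:
    """State kept by the scheduler of one input module IM(i)."""
    k: int                                       # number of output modules
    omp: int = 0                                 # OMP pointer: most recently granted OM
    pv: List[int] = field(default_factory=list)  # PV(i, j): occupancy of VOMQ(i, j)

    def __post_init__(self) -> None:
        if not self.pv:
            self.pv = [0] * self.k

    def cell_arrived(self, j: int) -> None:
        self.pv[j] += 1                          # PV(i, j) grows by 1 on every cell written

@dataclass
class OMScheduler:
    """State kept by the scheduler of one output module OM(j)."""
    k: int                                       # number of input modules
    imp: int = 0                                 # IMP pointer: IM served most recently
```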

In detail, the DAUB1 algorithm works as follows (all steps should be performed simultaneously in each IM; a minimal sketch of one matching iteration is given after the step list):

o Step 1: Match all output links LID(i, j) (direct connecting paths to the OMs) with cells from the VOMQs, according to the two-stage Clos switching fabric connection pattern. In this case, j denotes the number of a particular OM. If there is no cell destined to OM(j), the link LID(i, j) remains unmatched.

o Step 2: For the VOMQs matched in step 1 decrease the value of PV(i, j) by 1.

o Step 3: Request. Each unmatched (in previous iterations) IM sends a request to every OM for which it has at least y queued cells.

o Step 4: Grant. If an unmatched OM receives multiple requests, the scheduler chooses the one that appears next in a round-robin schedule starting from the IM pointed to by the IMP pointer. The OM notifies the IM that its request is granted.

o Step 5: Accept. If an IM receives multiple grants, it accepts the one that appears next in a round-robin schedule starting from the OM pointed to by the OMP pointer. The IM notifies the OM that its grant is accepted.

o Step 6: The OMP and IMP pointers are updated (modulo N) to the positions pointing at the matched OM and IM, respectively. The value of PV(i, j) related to the matched VOMQ is decreased by y.

o Step 7: Next iteration. Each unmatched IM should perform the next iteration (Steps 3-7). Run the steps iteratively until no more matching is possible. A fixed number of iterations can also be implemented.

o Step 8: In the next time slot, send cells from the matched VOMQs to the OMs through the direct connecting paths and the CMs.
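The request-grant-accept part of DAUB1 (Steps 3-6) can be sketched compactly in Python. This is only our reading of the steps above, not reference code: pv, omp and imp stand for the PV(i, j) counters and the OMP/IMP pointers, y is the number of CMs, and the matched flags mark modules already paired for CM transfer in earlier iterations.

```python
def daub1_iteration(pv, y, omp, imp, im_matched, om_matched):
    """One request-grant-accept iteration of DAUB1 over k IMs and k OMs."""
    k = len(pv)

    # Step 3: every unmatched IM requests every OM with at least y queued cells.
    requests = {j: set() for j in range(k)}
    for i in range(k):
        if im_matched[i]:
            continue
        for j in range(k):
            if not om_matched[j] and pv[i][j] >= y:
                requests[j].add(i)

    # Step 4: each unmatched OM grants the requesting IM found first
    # in a round-robin scan starting from its IMP pointer.
    grants = {i: set() for i in range(k)}
    for j in range(k):
        if om_matched[j] or not requests[j]:
            continue
        for offset in range(k):
            i = (imp[j] + offset) % k
            if i in requests[j]:
                grants[i].add(j)
                break

    # Step 5: each IM accepts the grant found first in a round-robin scan
    # starting from its OMP pointer.
    matches = []
    for i in range(k):
        if not grants[i]:
            continue
        for offset in range(k):
            j = (omp[i] + offset) % k
            if j in grants[i]:
                matches.append((i, j))
                break

    # Step 6: mark the pair as matched, point both pointers at the matched
    # modules (following the wording of Step 6), and reserve y cells for CMs.
    for i, j in matches:
        im_matched[i] = om_matched[j] = True
        omp[i], imp[j] = j, i
        pv[i][j] -= y
    return matches
```

Step 7 then simply amounts to calling this routine again for the IMs and OMs that are still unmatched.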

The DAUB2 algorithm works as follows (all steps should be performed simultaneously in each IM):

o Step 1 and Step 2 are the same as in the DAUB1 algorithm.
o Step 3: Request. Each unmatched (in previous iterations) IM sends a request to every OM for which PV(i, j) ≥ 1. The request has to contain information about the number of cells waiting in the buffer to be sent to the destined OM(j).
o Step 4: Grant. If an unmatched OM receives multiple requests, the scheduler chooses the one that reports y waiting cells (which means that all CMs will be used for transferring cells to the OM) and appears next in a round-robin schedule starting from the IM pointed to by the IMP pointer. If there is no request from an IM that would like to reserve y CMs, the scheduler chooses the request with the largest number of cells to be sent to the OM. The OM notifies the IM that its request is granted.

o Step 5 is the same as in the DAUB1 algorithm.

o Step 6: The OMP and IMP pointers are updated (modulo N) to the positions pointing at the matched OM and IM, respectively. The value of PV(i, j) related to the matched VOMQ is decreased by the number of cells that will be sent to the matched OM.

o Step 7 and Step 8 are the same as in the DAUB1 algorithm.

The DAUB2 algorithm is more complex than the DAUB1 algorithm because each OM's arbiter has to choose the request with the largest number of cells destined to the OM. It may happen that the scheduler has to go through the list of requests more than once, especially when the input load is small.
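The only part of DAUB2 that differs structurally from DAUB1 is the grant rule in Step 4. A minimal sketch of that rule for a single OM, under the same naming assumptions as the sketch above, could look as follows.

```python
def daub2_grant(requests, imp_j, y, k):
    """Choose which request OM(j) grants under DAUB2.

    requests -- mapping: requesting IM index -> number of cells it reported
    imp_j    -- the IMP pointer of this OM
    y        -- number of central modules
    k        -- number of input modules
    """
    if not requests:
        return None
    # First look, round-robin from the IMP pointer, for a request that
    # can fill all y CMs.
    for offset in range(k):
        i = (imp_j + offset) % k
        if requests.get(i, 0) >= y:
            return i
    # Otherwise grant the request reporting the largest number of cells.
    return max(requests, key=requests.get)
```

The second scan over the request list is what makes the DAUB2 arbiters more complex, as noted above.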

IV. SIMULATION EXPERIMENTS

Two packet arrival models are considered in the simulation experiments: the Bernoulli arrival model and the bursty traffic model, where the average burst length is set to 16 cells.

We consider several traffic distribution models (the most popular in this research area), which determine the probability p_ij that a cell which arrives at an input will be directed to a certain output. The considered traffic models are:

Uniform traffic: $p_{ij} = p/N$ for all $i, j$.

Trans-diagonal traffic:
$$p_{ij} = \begin{cases} \frac{p}{2} & \text{for } i = j \\ \frac{p}{2(N-1)} & \text{for } i \neq j \end{cases}$$

Bi-diagonal traffic:
$$p_{ij} = \begin{cases} \frac{2p}{3} & \text{for } i = j \\ \frac{p}{3} & \text{for } j = (i+1) \bmod N \\ 0 & \text{otherwise} \end{cases}$$

Chang's traffic:
$$p_{ij} = \begin{cases} 0 & \text{for } i = j \\ \frac{p}{N-1} & \text{otherwise} \end{cases}$$
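For reference, the four patterns can be reproduced with a short sampling routine. The sketch below is our own illustration (the function name and structure are not from the paper): for an input port i of an N-port switch and input load p, it decides whether a cell arrives and, if so, draws its destination output j with the probabilities listed above.

```python
import random

def sample_destination(i: int, N: int, p: float, pattern: str):
    """Return the destination output for one time slot, or None if no cell arrives."""
    if random.random() >= p:
        return None                                   # no arrival in this slot
    if pattern == "uniform":                          # p_ij = p/N
        return random.randrange(N)
    if pattern == "trans-diagonal":                   # p/2 to i, p/(2(N-1)) elsewhere
        if random.random() < 0.5:
            return i
        return random.choice([j for j in range(N) if j != i])
    if pattern == "bi-diagonal":                      # 2p/3 to i, p/3 to (i+1) mod N
        return i if random.random() < 2 / 3 else (i + 1) % N
    if pattern == "changs":                           # 0 to i, p/(N-1) elsewhere
        return random.choice([j for j in range(N) if j != i])
    raise ValueError(f"unknown pattern: {pattern}")
```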

The experiments have been carried out for a modified MSM Clos switching fabric of size 64 × 64 (8 IMs and OMs, and up to 7 CMs). The parameters of the simulation processes are the same as in [7], because we would like to compare the results obtained for the DAUB and SDRUB algorithms. The 95% confidence intervals have been calculated using the Student's t-distribution for five series of 200,000 cycles each; they are at least one order of magnitude lower than the mean values of the simulation results, so they are not shown in the figures. The starting phase comprised 10,000 cycles and enabled the switching fabric to reach a stable state. Since the number of CMs has a considerable influence on the performance of the modified MSM Clos switching fabric, the simulation experiments have been carried out for different numbers of CMs, varying from 0 up to 7. We would like to assess how many CMs have to be used to manage different types of traffic distribution patterns effectively.
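The confidence-interval procedure mentioned above amounts to a standard Student's t calculation over the five series means. The following sketch (our own, with made-up example numbers) shows the computation; the 2.776 quantile corresponds to 4 degrees of freedom at the 95% level.

```python
import math
from statistics import mean, stdev

def ci95_half_width(series_means):
    """95% confidence interval half-width for five independent series."""
    t_quantile = 2.776                      # Student's t, 4 degrees of freedom, 97.5%
    return t_quantile * stdev(series_means) / math.sqrt(len(series_means))

# Hypothetical mean cell delays from five 200,000-cycle series:
delays = [12.1, 12.4, 11.9, 12.2, 12.3]
print(f"{mean(delays):.2f} +/- {ci95_half_width(delays):.3f} time slots")
```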

We have evaluated three main performance measures: throughput, average cell delay in time slots, and the VOMQ size. Due to space limits, only selected results for the average cell delay and throughput are presented in this paper (Fig. 2 – Fig. 10).

The simulation experiments show that under the DAUB1 scheme the modified MSM Clos switching fabric produces very good results for both the uniform and nonuniform traffic distribution patterns. The performance of the network under the DAUB1 scheme is almost the same as under the SDRUB scheme for all kinds of traffic distribution patterns.

Fig. 2 shows selected results obtained for the DAUB1 scheme and uniform traffic. Even a single CM is enough to significantly improve the performance of the switching network. The same conclusion comes from Fig. 3, where the results obtained under Chang's traffic are shown. Chang's traffic is very similar to uniform traffic, so the results are also very similar.

The differences between one and two iterations are more visible in Fig. 4, where results obtained under the centralized SDRUB scheme are also shown. The DAUB1 algorithm with two iterations produces better results than with one iteration only for one and two CMs and heavy input load (p > 0.6 for 1 CM and p > 0.8 for 2 CMs). For three or more CMs, two iterations do not improve the results. Comparison between the DAUB1 and SDRUB algorithms, in terms of cell delay, leads to the conclusion that the DAUB1 algorithm produces exactly the same results as the SDRUB scheme when performing two iterations for one and two CMs, and under a single iteration for more CMs. In the case of 0 CMs the results for both algorithms are exactly the same. Since the differences in average cell delay between one and two iterations are negligible, we may say that the performance of the modified Clos network is the same under the DAUB1 (one iteration) and SDRUB packet dispatching algorithms for the uniform and Chang's traffic distribution patterns.

Fig. 2. Average cell delay under DAUB1 algorithm, uniform traffic.


Fig. 3. Average cell delay under DAUB1 algorithm, Chang’s traffic.


Fig. 4. Average cell delay under DAUB1 and SDRUB, uniform traffic.

Fig. 5 shows the results obtained under the DAUB1 algorithm and the trans-diagonal traffic distribution pattern. In this case 100% throughput of the switching fabric may be achieved only when 4 (n/2) or more CMs are implemented. For 3 CMs the throughput is equal to 85%, for 2 CMs – 60%, and for one CM – 35%. The results are the same for one and two iterations. The SDRUB algorithm produces exactly the same results as the DAUB1 algorithm with one iteration; only a small difference may be seen for 3 CMs, where the throughput of the switching fabric under the SDRUB scheme is equal to 80% and under the DAUB1 scheme – 85%.


Fig. 5. Average cell delay under DAUB1 and SDRUB, trans-diagonal traffic.

The most demanding traffic distribution pattern is the bi-diagonal one, for which the maximum number of CMs – 7 – must be used to achieve 100% throughput; any smaller number of CMs reduces the throughput (Fig. 6). The results obtained under the DAUB1 algorithm for a single iteration are exactly the same as under the SDRUB algorithm.


Fig. 6. Average cell delay under DAUB algorithm, bi-diagonal traffic.

Figs. 7 and 8 show results obtained under the DAUB1 scheme and the bursty traffic with an average burst size equal to 16, for the uniform and trans-diagonal traffic distribution patterns respectively. Under the uniform traffic at least 4 CMs must be used to obtain a small cell delay. For 1 CM, two iterations produce slightly better results than a single iteration, and the results are the same as those obtained for the SDRUB scheme. For more CMs there is no difference in average cell delay between the investigated algorithms. Under the bursty arrival model and the trans-diagonal traffic distribution pattern at least 5 CMs (one more than under the Bernoulli arrival model) are necessary to provide 100% throughput.

A performance comparison of the DAUB1 (it=1), DAUB2 (it=1) and SDRUB schemes is shown in Fig. 9. The DAUB2 scheme can produce slightly better results than the DAUB1 scheme for more than two CMs. The difference in average cell delay obtained under the DAUB1 and DAUB2 algorithms is more visible in Fig. 10, where selected results for the trans-diagonal traffic are presented. The difference is expected because under the DAUB1 algorithm an IM uses the CMs only if the number of cells to be sent to the OM is equal to the number of CMs. If the switching fabric has a small number of CMs, they are used for unloading the input buffers a little more often than when there is a larger number of CMs. More complicated arbiters (which have to choose the request with the largest number of cells) and the large number of requests sent to the OMs make the implementation of the DAUB2 scheme more costly than that of the DAUB1 algorithm. The SDRUB algorithm produces the same results as the DAUB1 scheme with two iterations (1 or 2 CMs), or with a single iteration (more than 2 CMs).


Fig. 7. Average cell delay under DAUB1 algorithm, bursty traffic, average burst size b=16.

Fig. 8. Average cell delay under DAUB1 algorithm, bursty traffic, average burst size b=16, trans-diagonal traffic distribution pattern.


Fig. 9. Performance comparison under DAUB1 (it=1), DAUB2 (it=1) and SDRUB schemes, uniform traffic.


Fig. 10. Performance comparison under DAUB1 (it=1), DAUB2 (it=1) and SDRUB schemes, trans-diagonal traffic.

V. CONCLUSIONS

In this paper, we presented the DAUB1 and DAUB2 packet dispatching schemes intended for the modified MSM Clos switching fabric. The DAUB schemes employ distributed arbitration, and each step of these algorithms can be performed by all IMs/OMs simultaneously. This feature allows the switching fabric to be scaled up to the larger switching capacity required by switches and routers. The main motivation behind this research was the desire to propose a packet dispatching scheme employing distributed arbitration that can provide good performance using a small number of iterations. The DAUB schemes are able to fulfill the above requirements for both uniform and nonuniform traffic distribution patterns. In all investigated cases a single iteration is enough to get acceptable results (up to n/2 iterations are required in the MSM Clos switching network [5]). Two iterations can slightly improve the results, mainly for bursty traffic.

The paper shows that it is possible to use distributed arbitration (the DAUB schemes) instead of centralized arbitration (the SDRUB scheme) and achieve the same performance of the modified MSM Clos network.

REFERENCES

[1] C. Clos, "A Study of Non-Blocking Switching Networks", Bell System Technical Journal, 1953, pp. 406-424.

[2] H. J. Chao, B. Liu, High Performance Switches and Routers, Wiley-Interscience, New Jersey, 2007.

[3] E. Oki, Z. Jing, R. Rojas-Cessa, and H. J. Chao, "Concurrent Round-Robin-Based Dispatching Schemes for Clos-Network Switches", IEEE/ACM Trans. on Networking, vol. 10, no. 6, 2002, pp. 830-844.

[4] K. Pun, M. Hamdi, "Dispatching schemes for Clos-network switches", Computer Networks, no. 44, 2004, pp. 667-679.

[5] J. Kleban, A. Wieczorek, "CRRD-OG: A Packet Dispatching Algorithm with Open Grants for Three-Stage Buffered Clos-Network Switches", Proc. High Performance Switching and Routing 2006, pp. 315-320.

[6] N. McKeown, A. Mekkittikul, V. Anantharam, J. Walrand, "Achieving 100% Throughput in an Input-Queued Switch", IEEE Trans. Commun., pp. 1260-1267, Aug. 1999.

[7] J. Kleban, M. Sobieraj, S. Węclewski, "The Modified MSM Clos Switching Fabric with Efficient Packet Dispatching Scheme", in Proc. IEEE High Performance Switching and Routing 2007 – HPSR 2007, New York, May 30 – June 1, 2007.
