
Enterprise data storage: A review of solutions



Summary

The functional characteristics of a data storage system, its performance and reliability, as well as its ownership costs, are greatly affected by the system's organization. Therefore, the system should be carefully planned and use the organization most appropriate for its usage pattern and the users' requirements.

This paper presents a review of data storage system organization methods, explaining the basic differences between Direct Attached Storage, Network Attached Storage and Storage Area Networks. The main conclusion of the comparison is that although the SAN constitutes the most developed form of data storage system organization, for small and medium systems there is no point in choosing such a complicated and expensive solution, and NAS, or even DAS, may work well enough.

Keywords: enterprise data storage, data storage system organization, storage area networks

1. Introduction

The volume of stored data grows quickly, and even small companies often have to handle databases and document repositories containing gigabytes of data. Numerous problems arise when dealing with large volumes of data, including making sure that the most needed information is quickly accessible, that important information is kept safe, prone neither to accidental damage nor to intended theft or sabotage, and that information retention regimes are met. Solving these problems starts with the optimal design of the data storage system. A good decision made at that moment may save a lot of effort when the system is already running. Notice that the organization of a data storage system affects its capacity, performance and reliability, and makes the entire system easier or more difficult to manage and extend.

In this paper, different solutions to the problem of data storage system organization are described, as the best of them can only be chosen when the characteristics of each available solution are known. The organization of the paper is as follows: first, the general organization of a data storage system is described; then the specific organization methods, direct attached storage, network attached storage, and storage area networks, are discussed. The final section contains comparisons of the different organization methods and the ensuing conclusions.


2. General data storage system organization

The structure of a data storage system depends on the users' needs regarding its capacity, performance, and reliability, as well as technological and economic restrictions and administrators' competencies. It should also be noted that the structure of a data storage system seriously impacts the data storage management model (product-centric, infrastructure-centric, application-centric, or data-centric [16]) that can be applied to it.

The basic component of the data storage system structure is the data storage subsystem; the remaining ones are computers (servers and workstations allowing users to access the stored data) and network devices controlling the data flow within the network that connects all the components.

There are three basic ways of connecting data storage subsystems within a data storage system [2]:

• Direct Attached Storage (DAS), in which storage subsystems are connected directly to a computer (a server or a workstation) (see fig. 1),

• Network Attached Storage (NAS), in which storage subsystems are connected to a network via a simple file-serving appliance (NAS head) (see fig. 2),

• Storage Area Network (SAN), in which data storage subsystems are connected in their own proprietary network that can be accessed from the outside world as if it were a single, though huge, storage subsystem (see fig. 3).

Figure 1. Direct Attached Storage (DAS) scheme. Source: Own work, [14].



Figure 2. Network Attached Storage (NAS) scheme. Source: Own work, [14].

Figure 3. Storage-Area Network (SAN) scheme. Source: Own work, [14].

3. Direct attached storage

With Direct Attached Storage, data storage subsystems are usually connected to a server, through which all data access operations are performed (for this reason, DAS is sometimes called SAS – Server Attached Storage).

The physical transmission medium in DAS is usually copper cable, connecting the computer (via a host bus adapter) to the external port of the data storage subsystem. Until recently, two types of parallel interfaces were used: ATA/ATAPI (Advanced Technology Attachment/ATA with Packet Interface) and SCSI (Small Computer System Interface). The first was meant for low-performance home/small-office applications, the second for medium- and high-performance applications. Both standards describe the form of the ports as well as protocols defining the set of available commands and the rules of safe data transmission. During their existence, multiple versions of the standards were in use, differing in user parameters. The maximum data transmission speed of the first SCSI version was merely 5 megabytes per second; the last one with practical applications (Ultra-320) reached 320 MB/s. In the case of ATA, the speed grew from 8.33 to 133 MB/s.

The maximum distance between a storage device and a computer is about half a meter for ATA and 25/12 meters for the slower/faster variants of SCSI. In ATA, no more than two devices can be connected to a single computer port (master and slave device); in SCSI, up to 16 (denoted by numbers, the SCSI ID). The ATA commands are addressed to physical devices (e.g., disk drives), whereas the SCSI commands are addressed to logical devices (e.g., disk partitions), identified by numbers (logical unit number, LUN). In both cases, the addresses point to logical blocks of the device (logical block addressing, LBA).
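
To make block addressing concrete, here is a minimal Python sketch of reading one logical block by its LBA; the device path, the 512-byte block size and the read_block helper are illustrative assumptions, not anything defined by the ATA or SCSI standards:

```python
import os

BLOCK_SIZE = 512  # bytes per logical block (the classic size; many modern disks use 4096)

def read_block(device_path: str, lba: int) -> bytes:
    """Read a single logical block: an LBA is simply an index that the
    host translates into a byte offset (lba * BLOCK_SIZE) on the device."""
    fd = os.open(device_path, os.O_RDONLY)
    try:
        return os.pread(fd, BLOCK_SIZE, lba * BLOCK_SIZE)
    finally:
        os.close(fd)

# Hypothetical usage (requires an actual block device and sufficient privileges):
# data = read_block("/dev/sda", 1234)
```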

Using a parallel link, 16 bits of data can be transmitted simultaneously (in ATA and SCSI versions denoted as Wide or Ultra). Thanks to that, e.g., in ATA-1 with a transmission frequency of 4.166 MHz, a transmission speed of 8.33 MB/s was achieved. The higher the frequency, the bigger a problem signal skew becomes, due to different propagation delays [15].

This problem does not apply to serial transmission, and for this reason Serial ATA (SATA) and Serial Attached SCSI (SAS) replaced the parallel interfaces. Currently, they work at a maximum frequency of 3 GHz, which amounts to a data transmission rate of 300 MB/s (there are 2 redundant bits for every 8 bits of raw data). The maximum distance between the device and the computer is about one meter for SATA and 8 meters for SAS. In contrast to ATA and SCSI, the serial interfaces allow direct connection of only one device per port. As a result, there is no slowdown due to connection sharing. Additional devices may be added indirectly: in the case of SATA, using a port multiplier (up to 15 devices); in the case of SAS, using expanders (up to 128 devices per expander, more than ten thousand in total).
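
Both quoted figures follow from simple arithmetic; the sketch below reproduces the ATA-1 calculation above and the 8b/10b calculation for a 3 GHz serial link, using only numbers given in the text:

```python
# Parallel ATA-1: 16 bits per transfer cycle at 4.166 MHz
ata1_rate = 16 / 8 * 4.166e6                      # bytes per second
print(f"ATA-1: {ata1_rate / 1e6:.2f} MB/s")       # ~8.33 MB/s

# Serial SATA/SAS at a 3 GHz line rate with 8b/10b encoding:
# every 8 bits of raw data travel as 10 line bits (2 redundant bits)
serial_rate = 3e9 * 8 / 10 / 8                    # bytes per second
print(f"SATA/SAS: {serial_rate / 1e6:.0f} MB/s")  # 300 MB/s
```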

Both SATA and the basic type of SAS use the same connector, different from the parallel interfaces. SATA devices can be controlled using the ATA command set. A new command set, prepared especially for SATA, is available in the AHCI (Advanced Host Controller Interface). SAS uses three transmission protocols: SSP (Serial SCSI Protocol), conforming to SCSI and capable of controlling SAS devices; STP (Serial ATA Tunneling Protocol), capable of controlling SATA devices; and SMP (Serial Management Protocol), capable of controlling expanders.

Direct attachment of storage devices is a popular solution, especially in small data storage systems, yet in medium and large systems its use is hampered by several difficulties, such as [5, 18]:

• limited scalability: the number of storage devices that can be attached to a single server is limited, and the consolidation of storage attached to different servers is often impossible;

• difficult extension and reconfiguration: changing the configuration of storage devices usually requires a restart and reconfiguration of the server they are attached to;

• costly management, as it must be done at the level of a single server;

• expensiveness: a server is an expensive device, and if used to control file access, it cannot be used for other purposes.

These problems can be avoided completely by using network attached storage or storage area networks.



4. Network attached storage

The idea of network attached storage is to add a network appliance (a so-called NAS head), performing simple file server tasks (yet much cheaper than a real file server), to a data storage subsystem. Thanks to that, such a subsystem can be attached directly to a local area network, based, e.g., on Ethernet and the TCP/IP protocol [3], thus becoming available to every network user.

Access to data stored in NAS is usually possible only at the file level, with the help of network file systems. In contrast to local file systems, network file systems are better suited to network-specific issues, such as: identification of file servers and localization of files in the network, enabling access of multiple users to a single file, authorization of users and ensuring their privacy, and minimization of the effects of technical limitations (large transmission delays of variable length). The two most widely used network file systems are CIFS (Common Internet File System), used mostly in Microsoft Windows environments [6], and NFS (Network File System), used mostly under Unix/Linux operating systems [10].

Incoming commands are processed by simple embedded software that hides the internal organization of the NAS (component storage devices, the way they are connected, the internal file system). This simplifies NAS management but limits management freedom.

Network file systems provide support for automatic data migration and replication, thanks to which it is possible to extend and reconfigure the data storage system without the need to stop running applications (the other NAS devices take over the data and operations of the disabled ones). Storage system capacity is extended by attaching additional NAS devices to the network. Network file systems also allow the creation of virtual directory trees containing files and directories actually stored in different network locations (see, e.g., distributed replicated virtual volumes in CIFS [6]).

Networks containing a large number of NAS devices are sometimes called File Area Networks (FAN) [8].

The weak side of NAS is its limited performance and functionality, consequences of sharing local networks, the technological limits of the TCP/IP protocol, and file-level-only data access [18]. An alternative form of network-attached storage is the storage area network.



5. Storage area networks

Storage area networks, in their original form, use their own network and a specialized transmission protocol (Fibre Channel, FC), and allow data access at the block level [7]. Unlike in the case of NAS, the network infrastructure is an integral element of a SAN: a binder connecting data storage subsystems together, which can be seen from the outside as a single data storage subsystem with practically unbounded extension possibilities. In spite of its often huge capacity, a SAN can be managed relatively easily from a central point. Mass storage connected within a SAN is a uniform resource that can be divided in any proportions into virtual disks available to the end users. At the same time, inside the SAN, every element can be the object of individually administered, precise management operations. Together, this gives a management freedom unavailable with other types of storage attachment.
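
As a toy illustration of the "divided in any proportions" property, the sketch below splits a uniform pool into virtual disks; the carve helper and its numbers are made up for this example:

```python
def carve(pool_gb: float, shares: list[float]) -> list[float]:
    """Divide a uniform storage pool into virtual disks
    whose sizes follow the requested proportions."""
    total = sum(shares)
    return [pool_gb * s / total for s in shares]

# A hypothetical 10 TB pool split 1:2:2:5 among four virtual disks:
print(carve(10000, [1, 2, 2, 5]))  # [1000.0, 2000.0, 2000.0, 5000.0]
```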

Although at distances under 30 meters a copper cable is often used, the basic transmission medium of SANs is fibre cable. Depending on its type, the maximum distance between devices may be up to 500 meters (multimode fiber) or 10 kilometers (single-mode fiber) [1]. Typical transmission speeds are 100, 200, 400 and 800 megabytes per second; the fastest available devices transmit one gigabyte per second (the respective codenames are 1GFC, 2GFC, 4GFC, 8GFC and 10GFC).

Like SAS, Fibre Channel uses serial data transmission. By default, again like SAS, FC uses the SCSI command set, via the FCP (Fibre Channel Protocol for SCSI).

Fibre Channel allows three types of network topologies [5]:

• point-to-point, FC-P2P,

• arbitrated loop, FC-AL,

• switched fabric, FC-SW.

In the point-to-point topology, the computer is connected directly to a single storage device. Apart from the technology used, this does not differ from a SATA connection. In an arbitrated loop, a string of connections between storage subsystems is created, beginning and ending at a computer. Up to 126 devices can be connected this way. The connection is shared, which limits its throughput, and a failure of a single device disables the entire loop. In practice, instead of connecting devices in a physical loop, a hub is used, to which all the devices are connected (a ring wired as a star) [15].
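
Since the loop is a shared medium, its bandwidth divides among the devices active at any time; a minimal sketch of that effect, assuming a hypothetical 100 MB/s loop:

```python
def per_device_share(loop_bandwidth_mb_s: float, active_devices: int) -> float:
    """Ideal per-device throughput on a shared arbitrated loop,
    assuming all active devices transmit equally often."""
    return loop_bandwidth_mb_s / active_devices

# A hypothetical 100 MB/s loop shared by 20 active devices:
print(per_device_share(100.0, 20))  # 5.0 MB/s per device, at best
```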

The topology capable of building complex storage area networks is the switched fabric. As its name suggests, it uses switches that create logical links between two of the many devices connected to them. Switched links are not shared, so data can be transmitted over them at the maximum possible speed. Switches can have from several to 256 ports.

The basic types of Fibre Channel ports are:


• N_port (node): used for point-to-point connections; every storage device must have at least one such port; if there are more, they can work independently or together to achieve better transmission speed (so-called striping mode);

• F_port (fabric): used to connect storage devices to switches;

• E_port (expansion): used to connect switches to other switches;

• G_port (generic): can work as either an E_port or an F_port.

In practice, mixed topologies are often used, where arbitrated loops are put within a switched fabric. It is an economical solution, justified especially when the devices connected this way do not use the whole bandwidth offered by Fibre Channel. FL_port and NL_port are, respectively, the F_port and N_port variants used for connecting an arbitrated loop to a switched fabric. During fabric login, a newly connected device notifies the switch of its existence, whereas during port login, logical connections between ports are created. The less frequently used process login allows the processes running on both ends of the link to be configured.

Every Fibre Channel device has a unique 64-bit World Wide Name (WWN). After a device is connected to a switch, the switch automatically assigns it (using the Simple Name Server) a local 24-bit address, specific to the device's current location within the network. From then on, the device may be identified by either number, and the switch can translate between them.

The local address consists of three parts: domain, area and port. Because part of the numeration is reserved for special addresses (login, multicast transmission), only 239 values are available for domains. The three address levels together give a possible maximum of almost 15.7 million devices in a single SAN.
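
The address arithmetic is easy to verify; in the sketch below, split_fc_address (an illustrative helper, not an API from any cited source) decomposes a 24-bit address into its domain, area and port bytes and checks the 15.7 million figure:

```python
def split_fc_address(addr: int) -> tuple[int, int, int]:
    """Decompose a 24-bit Fibre Channel address into its
    (domain, area, port) byte fields."""
    return (addr >> 16) & 0xFF, (addr >> 8) & 0xFF, addr & 0xFF

print(split_fc_address(0x010203))  # (1, 2, 3)

# 239 usable domain values x 256 areas x 256 ports per area:
print(239 * 256 * 256)  # 15663104, i.e. almost 15.7 million devices
```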

The switched fabric may be implemented in several ways. Among the most typical, one can distinguish:

• star fabric,

• cascaded fabric,

• ring fabric,

• mesh fabric,

• tree fabric.

In the star fabric (fig. 4), there are no direct links between any pair of switches. It is the most basic solution, appropriate for small storage area networks, provided that data are not frequently transmitted between remote devices.


Figure 4. Star fabric. Source: Own work, [14].

In the cascaded fabric (fig. 5), the switches are linked linearly, every one with its neighbors. Although it improves data transmission between remote devices, the transmission lag grows linearly with the number of connected devices.

Figure 5. Cascaded fabric. Source: Own work, [14].

An improvement on the cascaded fabric is the ring fabric (fig. 6), where additionally the first and the last switch in the line are linked, thanks to which every transmission route can be drawn in at least two ways. This makes it simpler to modify the structure, as the network does not stop working after a single switch is removed.


Figure 6. Ring fabric. Source: Own work, [14].

In the ring fabric with a larger number of switches, the access delay between remote devices becomes the main problem. A partial solution may be setting up additional direct links between remote switches, or adding an extra switch in the center of the ring, connected to selected switches belonging to the ring.

The aforementioned problem does not apply to the mesh fabric, in which every switch is connected to every other (fig. 7). Thanks to that, the shortest data transmission path between any two network devices never contains more than two switches. Unfortunately, it also means that a significant number of ports in every switch cannot be used to attach storage devices, as they are used to connect other switches. Therefore, this type of fabric cannot be used in large storage area networks.

Figure 7. Mesh fabric. Source: Own work, [14].
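
The port cost of a full mesh is easy to quantify: each of n switches spends n - 1 ports on inter-switch links. The sketch below (the switch and port counts are made-up examples) shows how quickly the device-facing ports vanish as the mesh grows:

```python
def mesh_device_ports(switches: int, ports_per_switch: int) -> int:
    """Ports left for storage devices in a full-mesh fabric:
    each switch uses (switches - 1) ports to reach every other switch."""
    isl_ports = switches - 1  # inter-switch links per switch
    return switches * (ports_per_switch - isl_ports)

# Hypothetical 16-port switches:
print(mesh_device_ports(8, 16))   # 72 device ports out of 128 in total
print(mesh_device_ports(16, 16))  # only 16 device ports out of 256
```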

In large storage area networks, the best solution is the tree fabric (fig. 8). It groups switches into two layers: core and edge. In its pure form, all the storage devices are attached to the edge switches, which in turn are connected to the core switches. In practice, however, some of the core switches' ports are also used to attach storage devices. Thanks to its significant redundancy, the tree fabric attains high levels of security and performance even in networks with a large number of devices.


Figure 8. Tree fabric. Source: Own work, based on [14].

A single data storage system may consist of multiple storage area networks, connected within a Metropolitan or Wide Area Network. In order to make the remote storage area networks cooperate, the FCIP (Fibre Channel over TCP/IP) tunneling protocol [12] should be used to transmit FC commands and data over the IP network.

Apart from all their advantages, Fibre-Channel-based storage area networks also have shortcomings: high ownership costs, due to the necessity of maintaining a separate network built with a different technology than the popular Ethernet and thus requiring specialist service. An attempt to diminish these costs, at the price of lower performance, are IP SANs (Internet Protocol Storage Area Networks), which in contrast to classic FC SANs (Fibre Channel Storage Area Networks) are based on the TCP/IP protocol. They can be built using the iFCP or iSCSI protocols.

The iFCP (Internet Fibre Channel Protocol) allows Fibre Channel connections to be created over the Internet [13]. The devices between which the data interchange takes place do not have to be installed within an FC SAN: iFCP does not merely tunnel the packets as FCIP does, but emulates the FC network over an existing IP network, translating FC addresses to IP addresses and vice versa.

The second protocol, iSCSI (Internet SCSI) [11], can be used to transmit the commands and data of the SCSI protocol using TCP; it is therefore not an extension of FCP, like iFCP, but an alternative to it. IP SANs may use existing network infrastructure and can be maintained by administrators with knowledge of Ethernet technology, which is much more widespread than knowledge of Fibre Channel, yet they do not attain the high performance level of FC SANs.




6. A comparison of different data storage system organizations. Conclusions

In this section, comparisons are made between different storage system organizations. The main comparison, whose results are presented in Table 1, takes into account various technical, functional and economic factors.

Table 1. A comparison of different storage system organizations

|  | DAS | NAS | SAN: point-to-point | SAN: arbitrated loop | SAN: switched fabric |
|---|---|---|---|---|---|
| Relative minimum build costs | Medium | Low | Medium | Medium | High |
| Maximum number of devices | Small | Large | Small | Medium | Large |
| Reconfiguration difficulty | Large | Small | Medium | Medium | Small |
| Extensibility of an existing system | Very limited | Almost unlimited | Very limited | Limited | Almost unlimited |
| Single points of failure | Yes | Could be avoided | Yes | Yes | Could be avoided |
| Main data transmission bottlenecks | Server, medium | Medium | Server | Server, medium | Could be avoided |
| Centralized administration | No | Limited | Limited | Yes | Yes |

Source: Own work.

Due to very limited scalability, Direct Attached Storage solutions are only suitable for small data storage systems. When the number of connected storage subsystems grows, it becomes hard to build and even harder to manage systems based on this kind of organization.

Systems based on Network Attached Storage are much easier to reconfigure and extend. This comes at a higher price, which is not always justified in the case of very small systems. On the other hand, NAS lacks the performance, reliability, and management capabilities of Storage Area Networks.

Although switched fabric Storage Area Networks can be seen as something of an ultimate solution for data storage system organization, they may be far too sophisticated and costly for small data storage systems. However, if they are considered suitable, other important questions arise. One concerns the technology to choose (FC SAN or IP SAN), the other the actual structure of the fabric. As for the former, the relation of the available budget to the required performance may help find the right answer, with the administrators' skills serving as a hint. To help answer the latter, Table 2 compares different fabric structures.


Table 2. A comparison of different switched fabric organizations

| Relative: | star | cascaded | ring | mesh | tree |
|---|---|---|---|---|---|
| Build cost | Very low | Low | Medium | Very high | High |
| Extensibility | Low |  |  |  |  |
| Reliability | Very high |  |  |  |  |
| Performance | Very high |  |  |  |  |

Source: Own work.

As one can observe, the star fabric is only suitable for systems with a very small number of storage subsystems and low data transmission. When the requirements are higher, a cascaded or ring fabric should be used. Mesh and tree fabrics are costly state-of-the-art solutions, and due to the high number of interconnections between switches, the former is feasible only for smaller systems.

Bibliography

[1] Clark, T.: Storage Virtualization: Technologies for Simplifying Data Storage and Management, Addison-Wesley Professional, Boston, MA, USA, 2005.

[2] Farley, M.: Storage Networking Fundamentals: An Introduction to Storage Devices, Subsystems, Applications, Management, and Filing Systems, Cisco Press, Indianapolis, IN, USA, 2004.

[3] Goldman, J. E., Rawles, P. T.: Applied Data Communications: A Business-Oriented Approach, John Wiley & Sons, New York, NY, USA, 2004.

[4] Gupta, M., Sastry, C. A.: Storage Area Network Fundamentals, Cisco Press, Indianapolis, IN, USA, 2002.

[5] Jayaswal, K.: Administering Data Centers: Servers, Storage, and Voice over IP, Wiley, Indianapolis, IN, USA, 2006.

[6] Leach, P. J., Naik, D. C.: "A Common Internet File System (CIFS/1.0) Protocol. Preliminary Draft", Microsoft, Redmond, WA, USA, 19 December 1997, ftp://ftp.microsoft.com/developr/drg/CIFS/draft-leach-cifs-v1-spec-01.txt.

[7] Naik, D.: Inside Windows storage: server storage technologies for Windows Server 2003, Windows 2000, and beyond, Addison-Wesley, Boston, MA, USA, 2003.

[8] O’Connor, M., Judd, J.: Introducing File Area Networks, Infinity Publishing, West Conshohocken, PA, USA, 2007.

[9] Patterson, D. A., Gibson, G. A., Katz, R. H.: “A Case for Redundant Arrays of Inexpensive Disks (RAID)”, [in:] Proceedings of the 1988 ACM SIGMOD International Conference on Management of Data, ACM Press, Chicago, IL, USA, 1988.

[10] RFC 3530, “Network File System (NFS) version 4 Protocol”, Internet Engineering Task Force, 2003, http://tools.ietf.org/html/rfc3530.

[11] RFC 3720, “Internet Small Computer Systems Interface (iSCSI)”, Internet Engineering Task Force, 2004, http://tools.ietf.org/html/rfc3720.

[12] RFC 3821, “Fibre Channel Over TCP/IP (FCIP)”, Internet Engineering Task Force, 2004, http://tools.ietf.org/html/rfc3821.

[13] RFC 4172, “iFCP – A Protocol for Internet Fibre Channel Storage Networking”, Internet Engineering Task Force, 2005, http://tools.ietf.org/html/rfc4172.


[14] Swacha, J.: Zarządzanie przechowywaniem danych – Metodyka oceny efektywności, Wydawnictwo Placet, Warszawa, 2009 (in Polish).

[15] Tate, J., Lucchese, F., Moore, R.: Introduction to Storage Area Networks, IBM, Armonk, NY, USA, 2006.

[16] Toigo, J. W.: The Holy Grail of Network Storage Management, Prentice Hall, Upper Saddle River, NJ, USA, 2003.

[17] Toigo, J. W.: The Holy Grail of Data Storage Management, Prentice Hall, Indianapolis, IN, USA, 1999.

[18] Vengurlekar, N., Vallath, M., Long, R.: Oracle Automatic Storage Management: Under-the-Hood & Practical Deployment Guide, McGraw-Hill, New York, NY, USA, 2008.


DATA STORAGE IN ENTERPRISES: A REVIEW OF SOLUTIONS

Summary

The functional characteristics of a data storage system, its performance and reliability, as well as its ownership costs, are largely the result of the way the system is organized. Hence the need for well-thought-out planning of the system and for choosing the organization method that best matches the system's usage characteristics and its users' requirements.

The article reviews methods of organizing data storage systems, explaining the differences between storage attached directly to a server (Direct Attached Storage), storage attached to a network (Network Attached Storage), and storage area networks (Storage Area Networks). The main conclusion of the comparison is that although storage area networks constitute the most developed form of data storage system organization, for small and medium systems they may be too complicated and expensive a solution, while storage attached directly to a server or to a network can meet all the requirements of systems of this size.

Keywords: enterprise data storage, data storage system organization, storage area networks

Jakub Swacha

Państwowa Wyższa Szkoła Zawodowa w Gorzowie Wlkp., Instytut Techniczny
ul. Myśliborska 34, 66-400 Gorzów Wlkp.

Instytut Informatyki w Zarządzaniu, Uniwersytet Szczeciński
ul. Mickiewicza 64, 71-101 Szczecin

e-mail: jakubs@uoo.univ.szczecin.pl
