
Feature of Users’ Eye Movements during a Distributed and Synchronised VR Meeting using Cloud Computing

Tomohiro Fukuda1, Masaharu Taguchi2

1Division of Sustainable Energy and Environmental Engineering, Graduate School of Engineering, Osaka University, Osaka, Japan, 2Kajita Corporation Co., Ltd., Nagoya, Japan
1http://y-f-lab.jp/e_index.php, 2http://www.kajita.co.jp/

1fukuda@see.eng.osaka-u.ac.jp, 2taguchi2829@gmail.com

Abstract. Owing to cloud computing Virtual Reality (cloud-VR), a notebook PC or tablet with no need for a high-spec GPU can be used to share a 3D virtual space in a synchronous distributed type design meeting. This research investigates users’ eye movements and the optimization of the GUI of cloud-VR during a distributed and synchronized VR meeting. Firstly, a townscape design support system based on cloud-VR was constructed. Then, a 30-minute experiment was executed with eight subjects who wore an eye-tracking system. In conclusion, using the eye-tracking system was effective because meeting participants could discuss while confirming each other’s eye direction in an actual distributed and synchronized VR meeting. In scenes where a reviewer listened to a presenter’s explanation, a tendency to look at VR contents other than operation commands was observed. On the other hand, a tendency to look at operation commands about viewpoints, such as “walk-through” and “jump” to an important viewpoint location, was observed in scenes in which a reviewer argued with a presenter.

Keywords. Spatial design; distributed synchronization; cloud computing; cognitive analysis; eye-tracking.

INTRODUCTION

In spatial design fields such as architectural design, urban design, and industrial design, a consensus-building process among a variety of stakeholders, such as project executors, designers, neighborhood residents, users, and general citizens, is required. Since it is necessary to share three-dimensional images to study a design, 3DCG (3-Dimensional Computer Graphics), VR (Virtual Reality), and BIM (Building Information Modeling) systems have been developed. Design meetings using these systems have traditionally been held in the same room at the same time. In recent years, people’s activities have become more mobile and cloud computing technologies have advanced in the modern age of information and globalization. Therefore, system developments and design trials of an asynchronous distributed type have been conducted, in which stakeholders participate in the design process at different places and at different times (Maher, 1999; Matsumoto, 2006). This expands communication opportunities without participants needing to worry about restrictions of space and time.

In a synchronous distributed type of environment, research exists on design support systems for sharing a three-dimensional virtual space. There is a system which allows designers to be physically immersed in their sketches and physical models, literally inside life-size, real-time representations of these, while sharing them remotely with another system of the same sort (Dorta, 2011). In this research, however, a framework is proposed for two or more stakeholders to participate in a design meeting of a synchronous distributed type using a standard-spec PC. The data volume of the content of a design study is usually large. Therefore, when drawing 3D graphics on a client PC, a client PC with a high-spec GPU (Graphics Processing Unit) is required (Gu, 2009; Shen, 2010), so a standard-spec PC cannot necessarily be used to participate in a design meeting. To solve this problem, Fukuda et al (2012) presented the capability of a synchronous distributed type design meeting using cloud computing type VR (cloud-VR).

In cloud-VR, contents are transmitted using the video compression method of the H.264 standard. Commands for viewpoint changes, plan changes, etc. of the three-dimensional virtual space, issued on a client running Microsoft Windows or Android OS, are computed against the VR contents on a cloud computing type VR server. The computed contents are then displayed on the client in real time as H.264 video (see Figure 1). One user at a time can operate the virtual space of the cloud computing type VR, and the time for which it can be operated is less than 2 minutes. Therefore, this system has the following merits: 1) A high-performance graphics environment is unnecessary on the client, so even at sites or places where it is difficult to use a high-performance PC, the system is still available on mobile devices. 2) Plural participants can share a viewpoint, design alternatives, or the VR setup in synchronization. 3) The VR application version and 3D contents are unified through management on the server side. A minimal sketch of this command-up, video-down exchange is given below.
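To make the division of labour concrete, the following Python sketch illustrates the thin-client exchange described above: small viewpoint or plan-change commands go up to the server, and H.264-encoded frames come back down, so the client never renders 3D locally. The protocol, message framing, class name, and server address are hypothetical illustrations, not the actual cloud-VR product’s API.

```python
# Minimal sketch of a cloud-VR thin client (hypothetical protocol):
# commands travel upstream as tiny JSON messages; rendered frames
# return downstream as length-prefixed H.264 access units.
import json
import socket
import struct

class CloudVRClient:
    def __init__(self, host: str, port: int) -> None:
        self.sock = socket.create_connection((host, port))

    def send_command(self, name: str, **params) -> None:
        # Commands such as "viewpoint" or "walk-through" are a few bytes;
        # all 3D computation happens on the server.
        payload = json.dumps({"command": name, "params": params}).encode()
        self.sock.sendall(struct.pack("!I", len(payload)) + payload)

    def recv_frame(self) -> bytes:
        # Each frame is length-prefixed; the returned bytes would be fed
        # to any standard H.264 decoder for display.
        (length,) = struct.unpack("!I", self._recv_exact(4))
        return self._recv_exact(length)

    def _recv_exact(self, n: int) -> bytes:
        buf = b""
        while len(buf) < n:
            chunk = self.sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("server closed the stream")
            buf += chunk
        return buf

if __name__ == "__main__":
    client = CloudVRClient("vr-server.example", 7000)  # hypothetical address
    client.send_command("viewpoint", preset="main_street_entrance")
    frame = client.recv_frame()  # decode with an H.264 decoder of choice
```

Because each command is a few bytes of JSON while the rendered result is a compressed video frame, the client’s hardware requirement is limited to video decoding, which matches the claim that a high-spec GPU is unnecessary on the client.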

AIM AND METHODOLOGY

This paper presents users’ eye movements and the optimization of the GUI (Graphical User Interface) of cloud-VR during a distributed and synchronized VR meeting. Each subject wears the EMR-9, an eye-tracking system, while operating cloud-VR [1]. The EMR-9 has two kinds of camera: an eyeball camera and a visual field camera. The eyeball camera is attached to the user’s head and detects eye movements. The position of the viewpoint detected by the eyeball camera is then displayed as an eye mark on the image recorded by the visual field camera, which has a horizontal angle of view of 44.0 degrees. The features of users’ eye movements are analyzed based on the displayed eye marks (see Figures 2 and 3).
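As a rough illustration of how a detected gaze direction could map onto the field-camera image, the following sketch assumes an idealized pinhole camera, a 640x480 image, and the 44.0-degree horizontal angle of view noted above. The EMR-9’s actual internal calibration is not documented here, so this is only a geometric approximation.

```python
import math

def gaze_to_pixel(theta_h_deg: float, theta_v_deg: float,
                  width_px: int = 640, height_px: int = 480,
                  hfov_deg: float = 44.0) -> tuple[int, int]:
    # Pinhole model: the focal length in pixels follows from the
    # horizontal angle of view; the same focal length applies vertically.
    f = (width_px / 2) / math.tan(math.radians(hfov_deg) / 2)
    x = width_px / 2 + f * math.tan(math.radians(theta_h_deg))
    y = height_px / 2 + f * math.tan(math.radians(theta_v_deg))
    return round(x), round(y)

# A gaze direction 10 degrees right and 5 degrees below the camera axis:
print(gaze_to_pixel(10.0, 5.0))  # -> (460, 309) on a 640x480 image
```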

In this research, townscape design is targeted as a typical spatial design experiment. Firstly, a townscape design support system based on a cloud computing type VR was constructed (see Figure 4). A 30-minute experiment using a synchronous distributed type meeting on townscape design was executed with 8 subjects who were specialists in the townscape design field. For this, a designer and a reviewer paired up. The target streets in Shimonoseki City, Japan, extend 350 meters and are 15 meters wide. Regarding the content of the experiment, the designer presents four kinds of street design proposals after explaining the current problem. The designs differ in sidewalk width: 3.5 m, 4 m, and 5 m. The way of using the sidewalk and buildings also differs according to the width of the sidewalk.

Figure 1. Configuration of data transmission in cloud-VR.

As the method of presentation, after looking down at the whole site, a real-time walk-through along the sidewalk is carried out. Since traffic changes with the change of lane distribution, a simulation of the traffic stream is also carried out. After listening to the designer’s presentation, a reviewer asks questions and comments while operating the cloud computing type VR. As regards the 8 subjects, three used a video conferencing system (Skype; subject ID 6-8) and five did not (subject ID 1-5). Five subjects use a stand-alone type VR at least once a month (subject ID 4-8) and three subjects do not regularly use one (subject ID 1-3).

The whole experiment flow is shown below (a sketch of the calibration step follows the list):

1. The subject wears the eye-tracking system.
2. A researcher makes the default settings of the system, after checking that the subject is in a relaxed posture:
• The arm of the eyeball camera is taken down.
• The angle and focus of the view camera are adjusted.
• The direction of the eyeball camera is adjusted so that the pupil is located near the center of the monitor image.
• A setup which detects the pupil and the corneal reflex image is performed.
• A calibration is performed in order to acquire eye movement data accurately.
3. The subject is measured by the eye-tracking system during the 30-minute synchronous distributed type meeting.
4. After the experiment, the measurement data is analyzed with the eye-tracking analysis software “EMR-dFactory”.
5. Eye-tracking features are analyzed from the measurement data.

The analysis scenes are shown in Table 1 and Figure 5.
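The calibration step in item 2 can be thought of as fitting a mapping from pupil coordinates (eyeball camera) to positions on the visual field camera image, using fixations on known targets. The sketch below fits a simple affine map by least squares; the nine-target layout and all coordinate values are invented for illustration, and the EMR-9’s real procedure is certainly more elaborate.

```python
import numpy as np

def fit_calibration(pupil_xy: np.ndarray, screen_xy: np.ndarray) -> np.ndarray:
    """Fit an affine map from pupil-camera coordinates to field-camera
    image coordinates, from fixations on known calibration targets."""
    n = len(pupil_xy)
    A = np.hstack([pupil_xy, np.ones((n, 1))])  # rows are [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(A, screen_xy, rcond=None)
    return coeffs  # 3x2 matrix

def apply_calibration(coeffs: np.ndarray, pupil_xy: np.ndarray) -> np.ndarray:
    A = np.hstack([pupil_xy, np.ones((len(pupil_xy), 1))])
    return A @ coeffs

# Nine calibration fixations: measured pupil positions vs. known
# on-image target positions (all values illustrative).
pupil = np.array([[100, 80], [160, 78], [220, 82], [98, 140], [162, 141],
                  [221, 138], [101, 200], [158, 202], [219, 199]], float)
screen = np.array([[80, 60], [320, 60], [560, 60], [80, 240], [320, 240],
                   [560, 240], [80, 420], [320, 420], [560, 420]], float)
coeffs = fit_calibration(pupil, screen)
print(apply_calibration(coeffs, np.array([[160.0, 140.0]])))  # near image centre
```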

RESULT AND DISCUSSION

• Tables 2-5 show the percentage of eye-tracking results itemized by VR display item in scenes 1-4. Table 6 shows the percentage of eye-tracking results itemized by each operational command in scene 4.

Figure 2. Eye-tracking system (left), pupil and corneal reflection (middle) and experimental photo (right).

Figure 3. Screenshot of the analysis software (left) and an example of an eye-tracking result per VR display item (right).

• In scene 1, the telop (on-screen text) shows the highest mean percentage of all the items. The second to fourth ranked items are buildings, parasols, and other items on the VR display, except for subject ID 7, who looked at the Skype display. The reason why the telop has the highest mean percentage is that scene 1 explains the plan outline using a telop in the lower part of the VR display. The reason why the SD in scene 1 is small compared to other scenes is that exactly the same content is shown to every subject using a prepared automatic scenario; that is also why the mean percentage for operational commands is low. There is little time to look at anything except the VR display.

• In scenes 2 and 3, people and parasols on the footway show a higher mean percentage than the other items. This is because the designer presents footway design alternatives that differ in footway width, using operational commands, in these scenes. There is little time to look at the telop or operational commands because every subject is listening to the designer’s presentation. There is also little time to look at anything except the VR display.

• In scene 4, operational commands show a higher mean percentage than the other items. This is because each subject operates the VR by himself while in discussion with the designer. There is little time to look at the telop or anything except the VR display. The higher ranked items, apart from operational commands, differ between subjects because the content of the discussion differs. The use of each operational command in scene 4 is examined in detail below, after a sketch of how such per-item percentages can be computed.
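The percentages reported in Tables 2-6 are, in essence, shares of total viewing time accumulated per display item. A minimal sketch of that aggregation, assuming the gaze data has already been labeled with the item being looked at, is shown below; the sample durations and labels are illustrative, not measured values from the experiment.

```python
from collections import defaultdict

def gaze_percentages(samples: list[tuple[float, str]]) -> dict[str, float]:
    """Turn (duration_sec, item_label) gaze samples into the percentage
    of total viewing time spent on each display item."""
    totals: dict[str, float] = defaultdict(float)
    for duration, label in samples:
        totals[label] += duration
    grand_total = sum(totals.values())
    if grand_total == 0:
        return {}
    return {label: 100.0 * t / grand_total for label, t in sorted(totals.items())}

# Illustrative samples for one subject in one scene:
samples = [
    (3.2, "telop"), (1.1, "building"), (0.8, "operational commands"),
    (2.5, "telop"), (0.6, "parasol"), (1.4, "building"),
]
for item, pct in gaze_percentages(samples).items():
    print(f"{item:22s} {pct:5.1f} %")
```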

Figure 4. Developed townscape design support system based on a cloud computing type VR.

Table 1. Analysis scenes.

| Scene | Contents | Situation of reviewer | Time span (m:ss) |
| 1 | Script explaining a plan outline | Listening to designer's presentation | 1:30 |
| 2 | Design No.2: sidewalk width = 4 m | Listening to designer's presentation | 1:13 - 1:48 |
| 3 | Design No.3: sidewalk width = 5 m | Listening to designer's presentation | 0:50 - 2:27 |
| 4 | For 2 minutes after the reviewer began arguing, following the presenter's explanation | Arguing with presenter | 2:00 |

• Regarding the eye-tracking results itemized by operational command in scene 4, the “viewpoint” command shows the highest mean percentage of all the commands, followed by the “walk-through” command. On the other hand, the “environment” and “tilt” commands show less than 10%, although the percentage differs for each subject. The “viewpoint” command jumps to a representative viewpoint prepared in advance and selected from a pull-down menu for easy operation, allowing the user to review the proposed landscape from important viewpoints. The “walk-through” command allows the user to move forward and back freely and interactively for a simulated experience of walking along a footway or driving along a street. However, subjects ID 1-3, who had no experience of VR, tended to use selecting commands such as “viewpoint”, “driving”, “script” and “environment” rather than interactive commands such as “rotation”, “tilt”, “walk-through” and “translation”. The user-friendliness of operational commands is an important issue in a distributed and synchronized VR meeting environment because the user has to operate the system by himself without professional VR support. Based on the results of the experiment, improvements were made to the GUI, including the operational commands (see Figure 6).

Figure 5. VR screenshots of the analysis scenes: scene 1 (left), scene 2 (middle) and scene 3 (right).

Table 2. Percentage of eye-tracking results itemized by VR display item in scene 1.

| Subject's ID | Telop | Operational commands | Building | People | Parasol | Plant | Others on VR display | Skype display | Except VR display |
| 1 | 38.19 | 0.00 | 13.80 | 5.08 | 18.84 | 1.97 | 21.76 | N/A | 0.37 |
| 2 | 35.29 | 0.90 | 19.14 | 3.64 | 15.58 | 9.57 | 15.87 | N/A | 0.00 |
| 3 | 36.61 | 1.02 | 16.07 | 4.11 | 15.43 | 2.05 | 21.72 | N/A | 2.99 |
| 4 | 36.53 | 1.28 | 22.54 | 1.39 | 16.04 | 5.54 | 16.68 | N/A | 0.00 |
| 5 | 39.15 | 0.00 | 26.93 | 7.04 | 11.08 | 8.00 | 7.80 | N/A | 0.00 |
| 6 | 30.62 | 0.00 | 23.92 | 4.05 | 14.28 | 9.51 | 12.29 | 3.84 | 1.48 |
| 7 | 44.99 | 0.90 | 18.30 | 3.71 | 8.24 | 1.89 | 8.74 | 10.72 | 2.51 |
| 8 | 45.59 | 0.00 | 7.14 | 1.22 | 18.44 | 4.26 | 22.70 | 0.66 | 0.00 |
| Mean | 38.37 | 0.51 | 18.48 | 3.78 | 14.74 | 5.35 | 15.94 | 1.90 | 0.92 |
| SD | 4.64 | 0.52 | 5.86 | 1.76 | 3.34 | 3.12 | 5.53 | 3.55 | 1.16 |

Table 3. Percentage of eye-tracking results itemized by VR display item in scene 2.

| Subject's ID | Telop | Operational commands | Building | People | Parasol | Plant | Others on VR display | Skype display | Except VR display |
| 1 | 0.80 | 0.00 | 3.91 | 29.57 | 35.38 | 10.96 | 17.63 | N/A | 1.75 |
| 2 | 0.00 | 0.00 | 0.77 | 37.66 | 44.66 | 9.17 | 7.74 | N/A | 0.00 |
| 3 | 2.60 | 0.00 | 3.07 | 47.36 | 25.16 | 3.39 | 18.41 | N/A | 0.00 |
| 4 | 0.00 | 0.52 | 10.20 | 42.39 | 28.66 | 14.75 | 3.14 | N/A | 0.35 |
| 5 | 0.20 | 4.50 | 31.03 | 19.52 | 15.46 | 12.54 | 15.63 | N/A | 1.12 |
| 6 | 0.00 | 0.00 | 20.31 | 36.80 | 19.51 | 8.78 | 14.60 | 0.00 | 0.00 |
| 7 | 0.00 | 0.37 | 3.72 | 31.04 | 33.73 | 12.58 | 10.31 | 5.37 | 2.87 |
| 8 | 2.64 | 0.00 | 3.83 | 36.07 | 32.52 | 4.16 | 20.78 | 0.00 | 0.00 |
| Mean | 0.78 | 0.68 | 9.60 | 35.05 | 29.39 | 9.54 | 13.53 | 0.67 | 0.76 |
| SD | 1.09 | 1.46 | 9.97 | 7.93 | 8.72 | 3.79 | 5.60 | 1.78 | 1.00 |

Table 4. Percentage of eye-tracking results itemized by VR display item in scene 3.

| Subject's ID | Telop | Operational commands | Building | People | Parasol | Plant | Others on VR display | Skype display | Except VR display |
| 1 | 0.00 | 0.00 | 5.86 | 32.07 | 37.37 | 12.69 | 10.70 | N/A | 1.30 |
| 2 | 1.19 | 1.73 | 3.77 | 41.19 | 35.57 | 8.50 | 8.04 | N/A | 0.00 |
| 3 | 2.50 | 2.46 | 3.73 | 52.98 | 22.68 | 1.60 | 8.56 | N/A | 5.49 |
| 4 | 0.20 | 0.52 | 11.44 | 36.84 | 31.00 | 11.54 | 8.46 | N/A | 0.00 |
| 5 | 0.00 | 0.77 | 28.55 | 16.63 | 17.95 | 30.94 | 5.17 | N/A | 0.00 |
| 6 | 0.00 | 0.00 | 8.78 | 39.38 | 32.17 | 7.78 | 7.15 | 3.98 | 0.75 |
| 7 | 0.00 | 0.42 | 3.88 | 25.56 | 24.90 | 10.47 | 22.92 | 4.13 | 7.72 |
| 8 | 0.00 | 0.00 | 5.17 | 43.41 | 45.53 | 0.56 | 4.21 | 1.12 | 0.00 |
| Mean | 0.49 | 0.74 | 8.90 | 36.01 | 30.90 | 10.51 | 9.40 | 1.15 | 1.91 |
| SD | 0.85 | 0.85 | 7.86 | 10.50 | 8.30 | 8.75 | 5.45 | 1.71 | 2.80 |

Table 5. Percentage of eye-tracking results itemized by VR display item in scene 4.

| Subject's ID | Telop | Operational commands | Building | People | Parasol | Plant | Others on VR display | Skype display | Except VR display |
| 1 | 0.00 | 19.92 | 4.34 | 14.84 | 9.51 | 46.13 | 4.28 | N/A | 0.97 |
| 2 | 2.18 | 72.32 | 8.28 | 1.31 | 0.26 | 2.71 | 12.93 | N/A | 0.00 |
| 3 | 8.43 | 19.98 | 9.75 | 16.84 | 6.70 | 26.96 | 10.14 | N/A | 1.19 |
| 4 | 0.00 | 39.09 | 7.24 | 10.03 | 20.64 | 2.66 | 19.85 | N/A | 0.50 |
| 5 | 0.00 | 58.52 | 8.84 | 9.37 | 1.03 | 6.48 | 9.51 | N/A | 6.25 |
| 6 | 0.24 | 24.56 | 0.55 | 11.65 | 0.79 | 2.25 | 53.77 | 0.00 | 6.19 |
| 7 | 0.00 | 46.04 | 5.65 | 5.68 | 4.25 | 4.58 | 16.04 | 16.94 | 0.83 |
| 8 | 0.00 | 50.12 | 8.55 | 2.02 | 7.44 | 13.92 | 17.54 | 0.40 | 0.00 |
| Mean | 1.36 | 41.32 | 6.65 | 8.97 | 6.33 | 13.21 | 18.01 | 2.17 | 1.99 |
| SD | 2.77 | 17.87 | 2.84 | 5.28 | 6.29 | 14.73 | 14.29 | 7.89 | 2.47 |

Table 6. Percentage of eye-tracking results itemized by each operational command in scene 4.

| Subject's ID | Viewpoint | Driving | Script | Environment | Rotation | Tilt | Walk-through | Translation |
| 1 | 58.37 | 3.46 | 1.90 | 2.66 | 5.83 | 6.92 | 3.46 | 17.40 |
| 2 | 57.49 | 28.09 | 12.13 | 1.88 | 0.40 | 0.00 | 0.00 | 0.00 |
| 3 | 0.00 | 12.88 | 78.77 | 3.86 | 0.00 | 0.90 | 3.59 | 0.00 |
| 4 | 23.83 | 2.16 | 1.03 | 1.92 | 7.70 | 25.90 | 21.57 | 15.87 |
| 5 | 9.54 | 19.44 | 2.04 | 0.90 | 40.02 | 7.72 | 11.14 | 9.20 |
| 6 | 0.00 | 1.53 | 0.00 | 0.00 | 4.25 | 18.56 | 47.88 | 27.78 |
| 7 | 38.24 | 28.57 | 5.47 | 1.58 | 0.99 | 6.34 | 17.45 | 1.36 |
| 8 | 0.00 | 0.00 | 0.00 | 9.60 | 50.93 | 9.78 | 18.79 | 10.90 |
| Mean | 23.43 | 12.02 | 12.67 | 2.80 | 13.76 | 9.51 | 15.49 | 10.32 |
| SD | 23.56 | 11.25 | 25.27 | 2.78 | 18.68 | 8.18 | 14.34 | 9.22 |

Figure 6. Improvement of the GUI based on the results of the experiment.

CONCLUSION

This research shows the users’ eye movements and the optimization of the GUI of cloud-VR during a distributed and synchronized VR meeting. The contributions of this research are as follows:

• There is little time to look at anything except the VR display. Although it was not possible for the presenter to check whether the reviewer was looking at the VR display, the fact that this happened could be verified from the results of this analysis. Moreover, although it is effective to use an eye-tracking system in an actual distributed and synchronized VR meeting, it is difficult to prepare such a system under the present circumstances. Therefore, in order to understand mutually whether participants are looking at the display, video conferencing systems such as Skype are required.

• A difference was observed in the commands used when scenes 1-4 were compared. In scenes where a reviewer listened to a presenter’s explanation, such as scenes 1-3, a tendency to look at VR contents rather than operation commands was observed. On the other hand, a tendency to look at operation commands was observed in scenes in which a reviewer argued with a presenter, such as scene 4. Furthermore, although there were differences between subjects, it became clear that the rate of use of commands about viewpoints, such as “walk-through” and “jump” to an important viewpoint location, was high.

In this research, the eye movements of VR users were acquired using an eye-tracking system in a distributed and synchronized meeting. When using this system, meeting participants can discuss with each other while confirming each other’s eye direction. On the other hand, it is unusual for a VR user to use an eye-tracking system in a distributed and synchronized meeting at the present time. In the near future, along with the popularization of augmented reality-type wearable computers with head-mounted displays (HMD), eye-tracking might be included in wearable computers as a standard feature.

REFERENCES

Dorta, T., Kalay, Y., Lesage, A. and Perez, E. 2011, ‘Comparing Immersion in Remote and Local Collaborative Ideation through Sketches: A Case Study’, Proceedings of the 14th International Conference on Computer Aided Architectural Design (CAAD Futures 2011), pp. 25-39.

Fukuda, T., et al. 2012, ‘Distributed and Synchronised VR Meeting using Cloud Computing - Availability and application to a spatial design study -’, Proceedings of the 17th International Conference on Computer Aided Architectural Design Research in Asia (CAADRIA 2012), pp. 203-210.

Gu, N., Nakapan, W., Williams, A. and Gul, L. F. 2009, ‘Evaluating the use of 3D virtual worlds in collaborative design learning’, Proceedings of the 13th International Conference on Computer Aided Architectural Design (CAAD Futures 2009), pp. 51-64.

Maher, M. L. and Simoff, S. 1999, ‘Variations on a Virtual Design Studio’, Proceedings of the Fourth International Workshop on CSCW in Design, pp. 159-165.

Matsumoto, Y., Kiriki, M., Naka, R. and Yamaguchi, S. 2006, ‘Supporting Process Guidance for Collaborative Design Learning on the Web: Development of “Plan-Do-See cycle” based Design Pinup Board’, Proceedings of the 11th International Conference on Computer Aided Architectural Design Research in Asia (CAADRIA 2006), pp. 72-80.

Shen, Z. and Kawakami, M. 2010, ‘An online visualization tool for Internet-based local townscape design’, Computers, Environment and Urban Systems, 34(2), pp. 104-116.

[1] http://www.nacinc.com/products/Eye-Tracking-Products/EMR-9/
