LBS in 3D

Chen Ruizhi, Zhang Jixian, Jarmo Takala and Wang Jianyu
Wireless Asia

Most current map applications for smartphones and other devices providing location-based services (LBS) are based on two-dimensional maps. Three-dimensional (3D) city models are widely used in applications such as engineering design, environmental modeling, and urban planning. Adapting such models for use in smartphones would make it possible to render 3D scenes in real time, enriching contents and user experience for personal navigation and LBS. A delimited yet large-scale event such as the 2010 World Exposition in Shanghai provides a promising area for system development and testing.

3D visualization consumes a large amount of computing power, and most of the current successful applications, such as Google Earth, run in a PC environment. Implementing 3D visualization on an embedded system such as a smartphone remains a very challenging task.

This article presents an entire 3D personal navigation system based on the Nokia S60 smartphone platform. The study covers the following aspects: 3D personal navigation and LBS on a smartphone, 3D city modeling, and multi-sensor positioning.

The objectives of the work are twofold: to prototype an entire handset-based 3D personal navigation and LBS system utilizing WLAN/Bluetooth positioning technologies, the handset's built-in GPS/AGPS, and 3D modeling and visualization (the basic demonstration scenario); and to present a multi-sensor positioning (MSP) platform in addition to the handset software (the advanced demonstration scenario).

All in the software

No additional hardware is added to the Nokia Series 60 (S60) smartphone platform: the 3D visualization and all other functions are achieved entirely in software. Figure 1 shows the general architecture of the software.

Most of the challenging tasks lie in developing the elements of the component layer, especially the 3D visualization engine, which is built on the OpenGL ES API available in the S60 platform SDK. The high-level 3D visualization engine architecture comprises three layers: the interface layer, the core engine layer, and the data management layer. The interface layer is responsible for cross-component functional communication, request handling, and data exchange. It exposes the 3D scene visualization functionality of the core engine layer to users through a single class, NaviSceneControl, which covers all 3D visualization operations: scene zooming, view-angle rotation, scene and cursor movement, route-planning selection, and virtual navigation.
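A facade class of this kind can be sketched as follows. The class name NaviSceneControl comes from the article, but the method names and the camera-state representation below are illustrative assumptions, not the actual S60 API:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch of the NaviSceneControl facade: a single entry
// point that bundles the 3D visualization operations (zoom, view-angle
// rotation, scene/cursor movement). State layout is an assumption.
class NaviSceneControl {
public:
    // Zoom the scene by a multiplicative factor (>1 zooms in).
    void Zoom(float factor) { zoom_ *= factor; }

    // Rotate the viewing angle around the vertical axis, in degrees,
    // keeping the heading normalized to [0, 360).
    void RotateView(float degrees) {
        heading_ = std::fmod(heading_ + degrees, 360.0f);
        if (heading_ < 0.0f) heading_ += 360.0f;
    }

    // Pan the scene (and cursor) in map coordinates.
    void MoveScene(float dx, float dy) { x_ += dx; y_ += dy; }

    float ZoomLevel() const { return zoom_; }
    float Heading() const { return heading_; }
    float X() const { return x_; }
    float Y() const { return y_; }

private:
    float zoom_ = 1.0f;
    float heading_ = 0.0f;  // degrees, 0 = north
    float x_ = 0.0f, y_ = 0.0f;
};
```

Concentrating every user-visible operation in one class keeps the cross-component interface small, which matters on a resource-constrained handset.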

The core engine layer handles the 3D scene visualization computation and model-object management. To enable 3D visualization over a large region, the objects in the scene are divided into two categories in this layer: 3D models such as buildings, trees, and poles; and the land-surface texture, which consists of ortho-rectified digital aerial photos. All objects are processed as tiles according to the parameters passed in from the interface layer, so only a small subset of the data is loaded dynamically rather than the whole dataset.
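The tile-selection step behind this dynamic loading can be illustrated with a minimal sketch. The function below is an assumption about how such an engine might work, not the article's implementation; it computes which tile indices fall within a visibility radius of the camera, so only those tiles need to be resident in memory:

```cpp
#include <cmath>
#include <set>
#include <utility>

// A tile is identified by its integer (x, y) grid index.
using TileId = std::pair<int, int>;

// Hypothetical tile-selection sketch: given the camera position and a
// visibility radius (both in map units), return the set of tiles that
// must be loaded. Tile size and the std::set container are illustrative.
std::set<TileId> VisibleTiles(float camX, float camY, float radius,
                              float tileSize) {
    std::set<TileId> tiles;
    int minX = static_cast<int>(std::floor((camX - radius) / tileSize));
    int maxX = static_cast<int>(std::floor((camX + radius) / tileSize));
    int minY = static_cast<int>(std::floor((camY - radius) / tileSize));
    int maxY = static_cast<int>(std::floor((camY + radius) / tileSize));
    for (int tx = minX; tx <= maxX; ++tx)
        for (int ty = minY; ty <= maxY; ++ty)
            tiles.insert({tx, ty});
    return tiles;
}
```

On each camera update, the engine would load tiles newly present in this set and evict tiles that have left it, keeping the working set bounded regardless of the total size of the city model.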
