Magazine for Surveying, Mapping & GIS Professionals
June 2010 Volume 13
GIS and Imagery • Real World Gaming • GeoSAR • NEXTMap USA
You have something new to hide.
Success has a secret ingredient.
Size is everything. Our new OEMV-1DF is the world’s smallest dual-frequency RTK receiver. That’s pretty big news for engineers. Smaller means it uses less power. Smaller means it weighs less. Smaller means it can fit in a lot more places. Smaller means it’s easier to hide your advantage from competitors. And when your advantage is NovAtel, that’s a big deal. To find out more, visit novatel.com or call you-know-who.
GeoInformatics provides coverage, analysis and commentary with respect to the international surveying, mapping and GIS industry.

Publisher: Ruud Groothuis, firstname.lastname@example.org
Editor-in-chief: Eric van Rees, email@example.com
Editors: Frank Artés, firstname.lastname@example.org; Florian Fischer, email@example.com; Job van Haaften, firstname.lastname@example.org; Huibert-Jan Lekkerkerk, email@example.com; Remco Takken, firstname.lastname@example.org; Joc Triglav, email@example.com
Contributing Writers: Angus W. Stocking, Lawrie Jordan, Karel Sukup, Florian Fischer, Ken Goering, Philip Cheng, Chuck Chaapel, Kevin P. Corbley, Matthew DeMeritt
Account Manager: Wilfred Westerhof, firstname.lastname@example.org
Subscriptions: GeoInformatics is available on a yearly subscription (8 issues) of € 89,00. To subscribe, fill in and return the electronic reply card on our website or contact Janneke Bijleveld at email@example.com
Advertising/Reprints: All enquiries should be submitted to Ruud Groothuis, firstname.lastname@example.org
World Wide Web: GeoInformatics can be found at www.geoinformatics.com
Graphic Design: Sander van der Kolk, email@example.com
ISSN: 1387-0858
© Copyright 2010, GeoInformatics: no material may be reproduced without written permission.
GeoInformatics is published by CMedia Productions BV, P.O. Box 231, 8300 AE Emmeloord, The Netherlands. Tel.: +31 (0) 527 619 000, Fax: +31 (0) 527 620 989, E-mail: firstname.lastname@example.org
A Magical Mystery Augmented Reality Tour
Recently I had the opportunity to visit the Location Business Summit in Amsterdam. During this two-day conference there were some interesting reflections on the development of location based services. Of particular interest was a presentation by Gary Gale from Yahoo! Geo Technologies, called ‘Taking the hype out of location based services’. He came up with some interesting thoughts. Not only did he mention that smoke signals could be regarded as location based services avant la lettre, he also showed that because we are gathering information all the time, we lose perspective. In his own words: ‘we lose the when in order to get the now’. The history of maps is lost when we map only the present, which changes all the time. He also used the term ‘Geobabel’ to point out how people think they are talking about the same location when in fact, without realizing it, they are not. The same place may mean something different to everyone. In short, with new technologies such as location based services, the concepts of location and place are being redefined. Context is important here: a point of interest or location could be anything, depending on the context. Mankind’s relation with place is complex, and geographers use psychological theories to understand it. Social media in combination with location will surely pave the way for redefining place, both virtual and physical. A Magical Mystery Augmented Reality Tour, for instance: Layar created one, and I’m excited about it, even though I’m not a Beatles fan myself.
Enjoy your reading!
Eric van Rees email@example.com
Articles
Building a Modern GIS For an Ancient City
GIS and Imagery: How They Became Pals
Moving Forward: Image Data Acquisition and Processing of Clustered Cameras
Real World Gaming with GPS-Mission: Business Perspectives of Location Based Entertainment
NEXTMap USA: A GPS Coordinate for Everything in the United States
Building a Modern GIS
Founded by Romans in 34 BC and with a current population of 92,000, Cáceres is one of Europe’s oldest cities. Recently, a team of three city planners working with a modest budget were able to implement a world-class municipal GIS using existing digital cartography and a variety of existing databases. Many tasks that were slow and tedious are now automated, freeing professionals for more productive activities.
Pan-sharpening and Geometric Correction: WorldView-2 Satellite
A Collaborative Project: The Archaeological Potential for Shipwrecks
Making Mapping the ‘Impossible’ Possible: GeoSAR
Interviews
Spatial Technology For Utilities, Public Safety and Security Solutions
The Data Exchange Company: Snowflake Software
Translate, Transform, Integrate and Deliver Data: Moving Data with FME
President of ERDAS: Joel Campbell
Conferences and Meetings
Thriving on Energy of Shared Innovation: 2010 ESRI Developer Summit
‘Are We There Yet?’: The Location Business Summit
GIS and Imagery: How They Became Pals
Historically, imagery and GIS have occupied two separate worlds. Imagery had its own methodologies, its own language, and its own set of distinct instruments. In the same way, GIS had its own tools, technicians, and “geek speak.” Although ESRI added support for imagery and rasters into its software as early as 1982, everyone on both sides knew technology had to evolve before GIS and imagery could converge in a completely unified environment.
Calendar
Advertisers Index
Business Perspectives of Location Based Entertainment
Location-Based Entertainment seems to be coming of age slowly but surely. Smartphones and reasonable mobile internet rates establish a framework that opens gaming to a broad public. The International Mobile Gaming Award introduced a category for “real world games” just last year, and experts anticipate good business prospects for location-based games in marketing, tourism and education.
On the Cover: GeoSAR P-band DEM and orthorectified radar image highlight intricate geomorphological and textural details on the Galeras volcano (Colombia) and adjacent agricultural features on the fertile slopes of the active volcano surrounding the city of Pasto. See article at page 44.
In less than a decade of commercial operations, Fugro EarthData’s GeoSAR system has earned a reputation for mapping the impossible. GeoSAR is a dual-band airborne interferometric radar system that is capable of rapidly mapping large areas in any weather conditions. In 2009 Fugro EarthData, which integrated and operates the system commercially, used GeoSAR to complete one of the most challenging terrestrial mapping projects the firm had ever attempted.
Latest News? Visit www.geoinformatics.com
Building a Modern GIS
For an Ancient City
Founded by Romans in 34 BC and with a current population of 92,000, Cáceres is one of Europe’s oldest cities. Recently, a team of three city planners working with a modest budget were able to implement a world-class municipal GIS using existing digital cartography and a variety of existing databases. Many tasks that were slow and tedious are now automated, freeing professionals for more productive activities. By Angus W. Stocking, L.S.

Accessible via the Internet
But if the project’s challenges were big, so were its goals. Planners wanted to give all city employees access to the GIS, they wanted it to incorporate all existing databases—along with information from utilities, railways, and highway departments—and they wanted the GIS to be easily accessible to the public via the Internet. To accomplish all this, they broke the project down into phases. The first phase was to design and organize the GIS. One early decision was to build the new system with Bentley software to take advantage of the staff’s familiarity with it. MicroStation, MicroStation GeoGraphics, and Descartes were heavily used to assemble the cartographic layers. “We had a lot of our urban planning information on paper, so we scanned that for a raster layer and then compared it to digital mapping that we were able to import. We adapted and drafted as needed to create base mapping, which gave us a high-quality end product,” explained Faustino Cordero, GIS department assistant. The Cáceres team also turned to dozens of outside sources for cartographic information, including the National Geographic Institute, the Geographic Army Service, historic maps on file at the Cáceres Library, and existing street maps. Most of these were paper-based and required digitizing.
A sampling of infrastructure maps managed by the GIS of Cáceres (Photo credit: Ayuntamiento de Cáceres)
Cáceres, Spain, is a UNESCO World Heritage City renowned for its blend of Roman, Islamic, Jewish, and Christian cultures and medieval architecture, all of which have left their traces on the city. Founded by Romans in 34 BC and with a current population of 92,000, Cáceres is one of Europe’s oldest cities. But Cáceres is a modern city as well, and its city servants—like their counterparts around the world—struggle to serve citizens efficiently. Recently, a team of three city planners working with a modest budget were able to implement a world-class municipal GIS using existing digital cartography and a variety of existing databases. “The GIS was quickly adopted by the public and has become a daily timesaver for city offices,” said GIS Department Director Luis Antonio Álvarez Llorente.
Since there was no budget for outside consultants, the city’s planning staff had to develop the GIS on their own. And the databases and cartography that existed had not been designed with a GIS in mind. Álvarez continued, “Everything we had—mapping and alphanumeric information—was prepared internally. When the project started in 1999, we had some digital cartography that was inconveniently formatted, a lot of paper maps and documentation, and databases in different formats scattered across several city departments. Also, we’re very busy so we couldn’t assign a lot of staff to this—there were only two technical staff assigned to the project permanently, and occasionally we’d form small, temporary teams for particular phases.”
This base mapping was made available to city staff and immediately proved useful. The success of this phase encouraged planners, and work continued on base layers. Urban and rural cadastral mapping was imported to aid assessors, and orthophotos were adapted and tied to the GIS coordinate scheme. The next phase involved consolidating alphanumeric information—on paper and in databases—in the GIS. Bentley tools were able to work with the various data formats, and staff was able to import paper-based information. Once again, work at this phase was made available as it was completed and immediately found eager users. “Thanks to the versatility of the software, the available maps and data were easy to consolidate and we’ve seen a big return on our investment,” noted Álvarez. With the basic format created and most available city information included, the GIS planners turned to outside sources to increase usefulness. Cáceres was able to reach data-sharing agreements with all the utility companies that serve the city, including water, wastewater, gas, and electrical. Cáceres was also able to obtain digital information about the road and rail networks, with a total length of 3,000 kilometers of unpaved roads, and integrated everything into the GIS. Seeking the most complete and useful information possible, planners continued to add to the GIS, finding ways to import and reference historical cartography, livestock paths, public transportation routes, tourist-oriented street maps, and other information resources. All city buildings are identified, with addresses, useful information such as hours of operation, and more than 15,000 photographs in total. Other buildings available for search include pharmacies, health centers, and schools.

Wireframe 3D model of the old Cáceres city (Photo credit: Ayuntamiento de Cáceres)

Main face of the street map printed on paper (Photo credit: Ayuntamiento de Cáceres)
Button bars were also created to make the interface readily usable by the public and city employees. In all, 30 VBA modules with a total of more than 5,000 lines of code were built. Designers have consistently updated, expanded, and improved the Cáceres GIS. Álvarez explained that it is a living thing, currently managing 42,000 files with more than 50 gigabytes of data and 50 workstations for city use distributed throughout the city’s departments. “All the information is centralized and accessible to all departments,” noted Cordero. “That way, the changes, updates, or improvements we make are immediately available, not only for the use of public servants, but for the public as well. The power and versatility of this tool is evident from the large volume of data we’re able to manage and make accessible.” Álvarez is effusive when speaking of the benefits of the GIS. “We have better control of tax collection and much more ability to answer planning questions. Our census information is much more accurate, and we’re able to do more
with it. And we can do a lot more for the citizens of Cáceres—for example, we’ve easily produced more than 50,000 street maps, tourist maps, and public transportation maps,” said Álvarez. He added that many tasks that were slow and tedious are now automated, freeing public servants for more productive activities. The system is also a hit with the public, and more than 150 Cáceres residents use it each day. Cáceres spent 10 years and 1.3 million euros on the GIS project, when all the staff hours, software, workstations, and training hours are taken into account. Several constituencies agree that it was money well spent—the city can accomplish vital tasks more quickly and effectively and take on some chores that were previously impossible, and residents have a resource they can turn to again and again for information.
Angus W. Stocking, L.S. is a licensed land surveyor who writes about infrastructure projects around the world. He can be contacted at firstname.lastname@example.org.
I believe in precision.
The new Leica ScanStation C10: this high-definition 3D laser scanner for civil engineering and plant surveying is a fine example of our uncompromising dedication to your needs. Precision: yet another reason to trust Leica Geosystems.
Precision is more than an asset – when your reputation is at stake, it’s an absolute necessity.
Zero tolerance is the best mindset when others need to rely on your data. That’s why precision comes first at Leica Geosystems. Our comprehensive spectrum of solutions covers all your measurement needs for surveying, engineering and geospatial applications. And they are all backed with world-class service and support that delivers answers to your questions. When it matters most. When you are in the field. When it has to be right. You can count on Leica Geosystems to provide a highly precise solution for every facet of your job.
Leica Geosystems AG Switzerland www.leica-geosystems.com
GIS and Imagery
How They Became Pals
Historically, imagery and GIS have occupied two separate worlds. Imagery had its own methodologies, its own language, and its own set of distinct instruments. In the same way, GIS had its own tools, technicians, and “geek speak.” Although ESRI added support for imagery and rasters into its software as early as 1982, everyone on both sides knew technology had to evolve before GIS and imagery could converge in a completely unified environment. By Lawrie Jordan

Moore’s Law
Today, thanks in large part to enabling technologies and Moore's law, GIS and imagery have combined on the desktop. The result is that the long-imagined symbiosis between imagery and GIS is here. The challenge is to demonstrate that symbiosis to those who can most benefit from it. Thankfully, this job is easy. IT is filled with examples of technological symbiosis. It's not hard, for example, to explain how weather satellite technology informs meteorological science and, conversely, how meteorological science informs weather satellite technology. The imagery from sensors complements atmospheric science because it contains valuable data. That's similar to how GIS and imagery inform each other. Photographs of the earth are inherently spatial. GIS extracts the spatial data inherent in the photographs then processes it, analyzes it, and manages it all on the same platform. That's easy to convey to this audience because it is common sense.
Universally Understood Principle
Users of spatial information all have a common objective: they all want to produce successful projects in increasingly shorter time frames. At some point in the evolution of software, almost everybody in the software business realized that meeting that objective requires the consolidation of tasks in a workflow. Complicated processes could be automated. Moore’s law enabled CPUs to perform a number of concurrent operations without frying circuits. The software suite was born from that novel development. The creation of ArcGIS exemplifies that bundling of functionality. It couldn’t do everything at first, but it did a lot.
GeoEye-1 high-resolution satellite imagery over Queenstown, New Zealand, with local government parcel basemap.
GeoEye-1 image of Queenstown Airport, on-the-fly sharpening applied.
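The “on-the-fly sharpening” shown above is, at heart, a ratio operation. As a rough illustration only (the classic Brovey transform, not ESRI’s actual implementation), each multispectral band can be rescaled by the ratio of the sharp panchromatic value to the band sum:

```python
def brovey_pansharpen(r, g, b, pan):
    # Classic Brovey ratio transform for one co-registered pixel:
    # redistribute the panchromatic intensity across the red, green
    # and blue bands in proportion to their original values.
    total = r + g + b
    if total == 0:
        return (0.0, 0.0, 0.0)  # avoid dividing by zero on black pixels
    scale = pan / total
    return (r * scale, g * scale, b * scale)

# A dull multispectral pixel sharpened by a brighter pan value:
print(brovey_pansharpen(60, 80, 100, 360))  # -> (90.0, 120.0, 150.0)
```

Production pan-sharpeners operate on whole rasters and add radiometric weighting, but the arithmetic is simple enough to explain why it can be applied on the fly rather than baked into intermediate files.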
Killing two birds with one stone is an age-old, universally understood principle. If dinner can be had with the least expenditure of energy, that conserves time and calories for other equally important tasks. Companies and governments operate the same way on a macro scale; in their case, “time and calories” represent the goal of always running at optimal efficiency. So what does this have to do with earth views and cartography? With imagery and geoprocessing tools in a single interface, GIS technicians no longer have to open an image-processing package to modify imagery data, nor do they have to deal with separate licensing. At 9.3.1, ArcGIS combined imagery and GIS analysis in one integrated environment that immediately improved workflow. By availing themselves of that merger, organizations maximized the value of their imagery data.

Many other benefits loom on the horizon with ArcGIS 10. The next release of ArcGIS includes a new Image Analysis window in the user interface, which enables quick access to a range of tools that those who work with imagery typically require. That integration paves a more direct path to results. Users can also now create catalogs of all the rasters in their organization as well as define metadata and processing to be performed. Access has been beefed up as well: image services open the door to huge imagery holdings like ArcGIS Online, Bing, and the forthcoming ArcGIS.com. The surplus of quality imagery data is ever-growing.

Moore’s law told us one day these two disciplines would marry, and indeed they have. That is evident in ESRI’s on-the-fly processing and dynamic mosaicking. These entail going back to the original source pixels to render hundreds of thousands of images that instantly display on the screen. This is tremendously powerful, and ESRI’s use of it is unique in the industry. It means that if an organization wants to host a dataset—say, an image mosaic of the world (or any other dataset)—it could easily accommodate tens or even hundreds of thousands of users who want to view it at the same time. Tiled caches are invaluable for that scale of image delivery. Many common business needs are easily met thanks to this performance gain. Not only can pre-processing and dynamic mosaicking save terabytes of intermediate file storage, the results return accurately and instantly.

Mosaic Datasets and Multiple Sensor Models

At ArcGIS 10, ESRI decided to combine GIS and imagery into a single comprehensive data model stored within the geodatabase, called the Mosaic Dataset. The enhanced scalability enables massive volumes of imagery to be quickly and easily cataloged from within ArcGIS Desktop or automated using the geoprocessing tools. Mosaic datasets not only catalog the data; they enable definition of extensive metadata and of processing to be performed on the imagery. This processing can range from simple aspects, such as clipping and enhancement, to more detailed orthorectification, pan-sharpening, pixel-based classification, and color correction. Additionally, Mosaic Datasets can be deployed as image services, making them quickly accessible to a large number of different users, both over local networks and the Web. The Mosaic Dataset is the implementation of image serving technology directly in the core of GIS. Soon, Mosaic Datasets will become the de facto method of managing and using the large collections of imagery and other raster datasets that our users continue to acquire.

Keynote speaker David Chappell explains why cloud computing is a golden opportunity for developers

GeoEye-1 image of Queenstown Airport, on-the-fly terrain hillshade processing.
Interactive supervised classification of a DigitalGlobe WorldView-2 8-band image.

GIS also handles new higher-resolution, higher-precision data types. In ArcGIS 10, a start has been made on integrating rigorous sensor models into the software. A sensor model is a precise way to get 3D coordinate positions on the ground. Traditionally, simple approaches used low-order mathematical equations merely to approximate where a pixel falls on the ground. Sensor models are more sophisticated: a sensor model implementation knows all about the optics of the system and calculates a precise math model that locates the pixel in three-dimensional coordinate space. ESRI implements several sensor models in ArcGIS in full cooperation with all of our partners.
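The gap between a simple approximation and a rigorous sensor model can be sketched with the most basic case. The toy function below assumes an idealized, perfectly vertical camera over flat terrain (all numbers and names are illustrative, not any vendor’s model); a true sensor model adds optics, attitude angles and terrain, but the underlying idea of projecting a pixel through the camera geometry to a ground position is the same:

```python
def pixel_to_ground(x_img, y_img, focal_m, cam_x, cam_y, cam_z, ground_z):
    # Idealized vertical-camera projection over flat terrain: the photo
    # scale factor is flying height over focal length, and the image
    # offset from the principal point maps linearly onto the ground.
    scale = (cam_z - ground_z) / focal_m
    return (cam_x + x_img * scale, cam_y + y_img * scale)

# Camera 1500 m above flat ground with a 0.15 m focal length
# (scale factor 10,000); image coordinates in metres on the focal plane:
x, y = pixel_to_ground(0.01, -0.02, 0.15, 500000.0, 4300000.0, 1500.0, 0.0)
print(x, y)  # roughly (500100, 4299800) in ground coordinates
```

A rigorous model replaces the single scale factor with the full collinearity equations, which is why it can place a pixel in three dimensions rather than on an assumed flat plane.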
These groundbreaking developments in GIS and imagery are exciting to watch. Granted, Moore’s Law will keep enabling such partnerships, but that doesn’t make them any less gratifying to witness. Anyone interested in these fields is encouraged to investigate the merger of GIS and imagery and see what it can do for their organization.
Lawrie Jordan, Director of Imagery Enterprise Solutions, ESRI.
WELCOME TO THE REVOLUTION
The Next Leap in Lidar Evolution
Image Data Acquisition and Processing of Clustered Cameras
GEODIS is a European company active in the fields of geodesy, photogrammetry and remote sensing. The following article focuses on how the company is involved in image data acquisition and processing with clustered cameras. Topics discussed include the development of digital technology usage and applications of clustered-camera data, as well as image processing using automatic aerotriangulation, among others. The article concludes with a look into the future of digital photogrammetry. By Karel Sukup
Development of the Digital Technology Usage

The versatility and ease of use of digital sensors for photogrammetric purposes caused a wide range of camera systems to appear on the market. The problem of low individual chip resolution led developers, through necessity, to “combine” chips into larger units, resulting in a bigger image size. Today’s digital camera image sizes are therefore close to those of the classic large-format film cameras. Although GEODIS, as a specialized digital photogrammetry processing company, was linked to the technologies of Intergraph, it had to migrate to Vexcel solutions when facing the decision of which digital camera to purchase. Sensors from this company were being developed dynamically, and it is worth noting that efforts at Vexcel have not slackened. UltraCamD, criticized by many professionals for its construction, instability etc., was relatively close to GEODIS because its construction philosophy was similar to the kit used for building the company’s own camera systems.

Fig. 1 Five-camera GbCam

Continuously increasing the resolution of commercially produced large-format digital cameras or standalone medium-format digital camera backs has brought a number of changes in technological methods. One of the application areas of these digital sensors is the field of applied photogrammetry and image interpretation. GEODIS purchased its first digital camera, with a resolution of 6 megapixels, about 10 years ago. The company was excited about its features, image quality and PC connectivity support, offering astounding image processing options compared to classic aerial film cameras. The only flaw in this type of technology was the relatively low resolution of its sensors: compared to an RMK TOP, the camera used by the company at that time, the area captured in a single digital image was negligible. However, the digital camera’s flexibility, its ability to capture quality images even in rather poor lighting conditions, and the versatility of its use were remarkable (the camera could be held in hand, with vertical or horizontal image axis orientation, and could be used in an aircraft or a car). Amazingly, this first digital toy cost the same as a current 39-megapixel digital camera. And that is not the largest resolution available on the market – there are now 50-megapixel and 60-megapixel solutions commercially available as standard.

Although GEODIS bought its first UltraCam back in 2007 and now has three such cameras in total, the company purchased its first 39-megapixel camera in 2005 and started experimenting with it, developing its own solution, the GbCam digital camera. These activities first involved the use of a single camera, but a twin followed in 2006 and a three-camera set in 2007; since 2008 the system has been used as the five-camera GbCam system (Fig. 1) for capturing vertical and oblique images. The system is suitable for both aerial and terrestrial digital image data acquisition applications. Over the years, GEODIS managed to fine-tune the controlling electronics and software of the system.

Fig. 2 Orientation system of a cluster camera with two strips captured with opposite flight heading

Fig. 3 PixoView Application Workspace

However, there was also development in the digital image processing field, with software for “stitching” generally oriented images, calculating interior and exterior orientation parameters, dependent and independent orientation of image pairs, triples and quintuples, right up to bundle adjustment of whole image sets. The solution included development of software for simple viewing and measuring of images in single-image mode and, this past year, the transition to GEODIS’ own stereo-viewing and stereoplotting solution. Several other specialized companies engaged solely in image-capture hardware development followed a similar scenario. Through development of various dual and quarto systems, the technology recently reached the stage where four- or five-camera systems were developed for capturing generally oriented images, with one camera usually
pointed vertically and the remaining four cameras tiltable as needed.
Application of Clustered Cameras Data
Clustered cameras are developed mainly for the purpose of acquiring area survey/reconnaissance images. At the beginning this mainly involved development for military purposes but civil applications have since followed. Images are usually visualized using special software developed specifically for their processing. This software enables basic measuring information within the images such as lengths, widths, heights, surface areas, point coordinates, etc. There is relatively little discussion about options for using these generally oriented images for further photogrammetric applications such as mapping, orthophotomap production, generation of better 3D models based on data texturing and others.
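Once image measurements have been converted to ground coordinates, the measuring functions listed above reduce to elementary geometry. A minimal sketch in plain Python (the coordinates are made up; this is not the specialized viewer software described):

```python
import math

def slope_distance(p1, p2):
    # Straight-line 3D distance between two measured ground points
    # (x, y, z in metres): the core of a length-measurement tool.
    return math.dist(p1, p2)

def height_difference(p1, p2):
    # Height component only, as a height-measurement tool reports it.
    return abs(p2[2] - p1[2])

a = (1000.0, 2000.0, 310.0)  # e.g. base of a building
b = (1003.0, 2004.0, 322.0)  # e.g. a roof corner
print(slope_distance(a, b))     # -> 13.0
print(height_difference(a, b))  # -> 12.0
```

Surface areas and point coordinates follow the same pattern; the hard part, as the article goes on to explain, is getting the image orientation right so that these ground coordinates are trustworthy in the first place.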
Fig. 4 Options for generating DTM and DSM using clustered cameras
Fig. 5 Example of a color orthophotomap generated using images acquired with the GbCam camera
The primary problem in processing generally oriented images from clustered cameras is their correct geo-referencing. Since a high number of image files is generated during a photographic mission, perfect data management is needed. Compared to large-format digital cameras, the commonly used medium-format cameras generate many more images even if only vertical capture is performed. If five such cameras are mounted on the holder, several hours of imaging can result in tens or even hundreds of thousands of images. Proper organization of this data, with simultaneous assignment of appropriate meta-information during the flight, is a relatively difficult task whose successful performance significantly benefits subsequent data post-processing. If every image has at least GPS time and/or basic GPS/INS image orientation information assigned, a considerable amount of effort can be saved later when organizing these data sets for further production. If images are only used for monitoring an area from several different perspectives, the directly registered GPS/INS data is usually sufficient to determine the orientation of the images with sufficient accuracy. In fact, for these tasks it is only necessary to download, after a viewpoint is selected on a vertical image or map, the set of matching generally oriented images that “see” the selected ground objects from various directions. If the directly measured image orientation
elements are merely approximate or determined with lower accuracy, this often poses no problem for this type of application. If more accurate image orientation is required, there are usually two methods available, in addition to using a more accurate GPS/INS system. The first method is to perform a cluster adjustment based on GPS/INS measurements only, without supplying ground control points, which considerably strengthens the relative ties between images. The second option is to perform a full cluster adjustment by means of classic aerotriangulation (AT).
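The image volumes that make this data management so demanding are easy to underestimate. A back-of-the-envelope count, with illustrative capture rates rather than GEODIS’s actual figures:

```python
def mission_image_count(cameras, frames_per_minute_each, hours):
    # Total frames captured in a mission: every camera in the cluster
    # fires independently at the same rate for the whole flight.
    return cameras * frames_per_minute_each * 60 * hours

# Five cameras, one frame every 3 seconds each (20/min), 4-hour mission:
print(mission_image_count(5, 20, 4))  # -> 24000 images
```

At those assumed rates a single afternoon already yields tens of thousands of frames, which is why assigning GPS time and orientation metadata in flight pays off later.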
Image Processing Using Automatic Aerotriangulation
When processing oblique imagery using currently available software solutions, serious functionality issues occur when handling non-standard configurations and orientations. It is therefore usually necessary to process blocks of images in several passes so that the existing software can handle them. At GEODIS BRNO, three types of automatic aerotriangulation processing software are available: solutions from Intergraph (ISAT), Inpho (Match AT) and Vexcel (UltraMap AT with adjustment in Bingo). For processing oblique images, two applications were under test, ISAT and Match AT, and our experiences in 2009 varied. The company was able to use both applications for calculation, with different results relating mainly to the degree of obliqueness of the images used. The problems the company encountered were discussed with both software producers. AT input involved individual images with interior orientation parameters determined by field calibration, while exterior orientation parameter calculations were carried out mostly using the Orient software developed at TU Vienna (adjustment was done at the Brno University of Technology) and later using the Bingo system. The automatic correlation had difficulties tying appropriate images together. The software was more stable if overlapping of vertical strips was ensured. Oblique images correlated only if taken in the same direction. Images from strips captured by cameras oriented in different directions did not produce correlation, and considerable dropouts occurred in the mutual ties of the strips. Later, the triangulation blocks were divided into sub-blocks with the same camera orientation, which substantially increased the stability of the calculations. The correlated sub-blocks were then merged back into a single block and the final adjustment was performed using the least squares method. The complexity of the mutual position of images in strips with opposite orientation is illustrated in Fig. 2.
Figure 2 shows that when capturing images it is better to plan the flight so that mutual overlap of the central vertical images is ensured (preferably a large one). This is dictated by the current level of development of automatic AT processing. Although the overlap between strips can be selected as needed, at least 40% overlap proved useful. In urban areas it is better to ensure at least 50% or 60% overlap because of related technologies, e.g. the possibility of performing higher-quality DSM correlation, while maintaining 60% overlap between the images within a particular strip.
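The overlap figures above translate directly into flight-planning geometry. As a minimal illustrative sketch (not from the article; the footprint values are hypothetical), the spacing between adjacent flight lines and between successive exposures follows from the ground footprint of the camera and the chosen overlap fraction:

```python
def flight_line_spacing(footprint_width_m, side_overlap):
    """Cross-track distance between adjacent flight lines
    for a given side (strip-to-strip) overlap fraction."""
    return footprint_width_m * (1.0 - side_overlap)

def exposure_base(footprint_length_m, forward_overlap):
    """Along-track distance between successive exposures
    for a given forward (in-strip) overlap fraction."""
    return footprint_length_m * (1.0 - forward_overlap)

# A hypothetical 1,000 m image footprint with the 60% overlaps
# recommended above gives 400 m line spacing and a 400 m base.
```

With the minimum 40% side overlap the line spacing widens to 600 m, which is why urban projects needing denser stereo coverage accept the extra flight lines.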
Latest News? Visit www.geoinformatics.com
Examples of issues bound to automatic AT processing using current software systems:
• Serious correlation problem in ISAT: correlation sequences are selected chaotically, especially if multiple overlapping exists; often no connection is achieved.
• ISAT cannot handle correlation of oblique images that are not oriented in the same direction.
• Solution: “per partes” ISAT correlation – standalone correlation for various combinations of strips and cameras, with subsequent merging into a single block and final adjustment. It is not possible to determine in advance which combinations will deliver the best result. However, we know for sure that the following camera combinations are required (see Fig. 2): 1+3, 2+3, 3 and 3+4+5. If problems persist, additional special combinations are needed, such as 3 + “all images facing south (north, west, east)”.
• Computing times needed for the ISAT correlation in the individual combinations are relatively low (20-35 seconds per image). In total the times range from 45 to 60 seconds per image, depending on the number of strip combinations.
• Inpho Match AT correlates “all with all”, which results in longer correlation times (3.5 minutes per image). If the number of observations per point is optimized and the maximum number of points per image is limited, the times are lower, comparable to (or even shorter than) the times in ISAT. In some cases, however, images suffer an unacceptable decrease in the number of automatically generated points and the optimization settings need to be re-adjusted, which often leads to higher correlation times again.
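The “per partes” strategy in the list above – correlate sub-blocks of identically oriented images separately, then merge them and run one final least-squares adjustment – can be sketched as follows. This is an illustrative outline, not GEODIS BRNO's actual pipeline; the `correlate`, `merge` and `adjust` callables stand in for the ISAT/Bingo processing steps:

```python
from collections import defaultdict

def per_partes_at(images, correlate, merge, adjust):
    """Group images by camera orientation, correlate each sub-block
    separately, merge the results and perform one final adjustment."""
    sub_blocks = defaultdict(list)
    for image_id, camera_direction in images:
        sub_blocks[camera_direction].append((image_id, camera_direction))
    # Standalone correlation per identically oriented sub-block
    correlated = [correlate(block) for block in sub_blocks.values()]
    # Merge into a single block, then final least-squares adjustment
    return adjust(merge(correlated))
```

In practice the grouping key would also encode the strip/camera combinations (1+3, 2+3, etc.) named above.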
Options for Using Clustered Cameras for Mapping and 3D Measurements
The use of clustered cameras is most frequently discussed in connection with image acquisition for area documentation purposes, e.g. for construction, traffic, urban planning, the police, integrated rescue systems, etc. However, the oblique images acquired can be used for mapping too. The procedure suitable for this purpose is single-image mapping, which can be applied when capturing location-specific details of public areas or performing simple mapping of buildings and other objects (see Fig. 3). This kind of mapping can be performed using specialized software, such as PixoView, developed for these applications by GEODIS BRNO.

However, using oblique images for stereoscopic measurements can be far more interesting. The well-known problem of handling roof overlaps could also be solved using this image acquisition method. The stereoscopic shadow issue, which occurs commonly with vertical images, could be considerably reduced as well. Although the current AT results are not optimal for accurate mapping, it is merely a matter of better system calibration (field conditions are not perfect for most types of clustered cameras) and proper AT adjustment of the entire set of images to obtain accurately geo-referenced stereo pairs for all directions. For stereo restitution, the company has tested the Intergraph and Inpho systems and its own stereo workstation. All systems delivered good stereoscopic perception with both vertical and oblique images. When using oblique images acquired in multiple directions, however, it will be necessary to develop an image manager to support stereo plotting that enables instant replacement of the oblique stereo pair needed for measuring a situation covered in one direction.
Options for Using Clustered Cameras for DTM and DSM Preparation
Current experience indicates that clustered cameras can also be used for generating DTMs and DSMs. For example, Match-T DSM from Inpho can be used to create a higher-quality DSM, provided there is at least 60% overlap of images and strips. Such an overlap yields a high-quality DSM when digital images are used. Even these calculations, however, have to deal with hidden image areas and with determining the real terrain, especially close to large objects such as buildings. Although these problems have been reduced in recent years, oblique images still offer considerably greater options for obtaining correct image correlation and calculating a DSM in locations that previously proved problematic. For now, existing software cannot be fully used for DSM calculations from oblique images, but it can be assumed that a combination of vertical and oblique images will be beneficial for these calculations. Available information also suggests that Inpho has been working intensively on this issue, also using GbCam data. If one takes into account the option of calculating surface points on building façades, the company could generate a high-accuracy surface model, including various types of façade detail. The usability of these methods for processing oblique images acquired from an aircraft or a mobile mapping system would certainly represent an excellent opportunity to calculate accurate surface models of, for example, all buildings along roads. Fig. 4 provides samples of generated DTM and DSM data.
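One common product of the DSM/DTM pair discussed here is the normalized DSM (nDSM), which subtracts terrain heights from surface heights to isolate objects such as buildings. A minimal sketch over plain nested lists (the grid values are hypothetical; production code would use raster libraries):

```python
def normalized_dsm(dsm, dtm, nodata=-9999.0):
    """Per-cell object height: surface elevation minus terrain elevation.
    Cells lacking data in either grid stay nodata (e.g. hidden areas)."""
    return [
        [s - t if s != nodata and t != nodata else nodata
         for s, t in zip(srow, trow)]
        for srow, trow in zip(dsm, dtm)
    ]
```

A 10 m building on 100 m terrain shows up as 10 in the nDSM, while occluded cells propagate as nodata, mirroring the hidden-area problem described above.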
Fig. 6 Example of a building automatically textured using images acquired with the GbCam camera
Use of Clustered Cameras for Creating Orthophotomaps
Existing digital rectification technologies enable the use of mutually overlapping oblique images for creating orthophotomaps. The modified “true” orthophotomap creation technology allows for efficient “patching” of the shaded areas of vertical images: the missing section is identified by a mathematical search in a suitable oblique image and filled with its image information. A similar method can be applied when performing automatic building texturing. This is likely to open a future path to 3D image databases that contain not only information on terrain features but also the pixel image information for all surfaces of a given 3D object, in database systems such as Oracle. An example of a color orthophotomap produced using the GbCam system is provided in Fig. 5 and an example of an automatically textured building in Fig. 6.
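The “patching” idea can be sketched abstractly: wherever the vertical orthomosaic is occluded, take the first valid value from a co-registered, rectified oblique image. This is only a schematic illustration of the principle, not the modified technology itself, and it assumes all rasters have already been resampled to the same grid:

```python
def patch_true_ortho(vertical, obliques, nodata=-1):
    """Fill occluded (nodata) cells of a vertical orthomosaic using
    the first co-registered oblique ortho that sees the cell."""
    patched = [row[:] for row in vertical]
    for r, row in enumerate(patched):
        for c, value in enumerate(row):
            if value == nodata:
                for oblique in obliques:
                    if oblique[r][c] != nodata:
                        row[c] = oblique[r][c]
                        break
    return patched
```

A real implementation would additionally pick the oblique with the best viewing geometry and blend radiometry at the patch seams.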
The era of digital photogrammetry will bring dynamic changes in the acquisition and processing not only of classic vertical images but also of oblique images. The software interconnection of generally oriented images, captured from an aircraft or a ground-based mobile mapping system, provides opportunities for the gradual development of automated image data processing in the geo-informatics sector, focused on applications related to image measurement and semantic processing. Generally oriented images will be stored in 3D databases, with the option of further use for various types of 3D object measurement and surface texturing. In connection with possible improvements in image correlation or rotating laser scanners, it will be possible to create extensive 3D databases of selected areas, comprising individual pixels with proper geo-spatial and spectral information.

Karel Sukup is Managing Director and CEO of the Geoinformatics Division of GEODIS BRNO; Patrik Meixner is Production Manager of the Geoinformatics Division of GEODIS BRNO. Many thanks to Ing. Eva Paseková, Marketing & Sales Department, Geoinformatics Division, GEODIS BRNO, spol. s r.o. Internet: www.geodis.cz
Real World Gaming with GPS-Mission
Business Perspectives of Location Based Entertainment
Location-based entertainment seems to have slowly but surely come of age. Smartphones and reasonable mobile internet tariffs have established a framework that enables a broad public market for gaming. The International Mobile Gaming Awards introduced the category of “real world games” last year, and experts foresee good business prospects for location-based games in marketing, tourism and education. Florian Fischer talked with Georg Broxtermann of Orbster about the promise and prospects of location-based gaming. Orbster is the location-based entertainment company that developed the highly successful game GPS-Mission. By Florian Fischer
From the Pursuit of Coordinates to Mixed-Reality

May 1st 2000 is a memorable date for many geo-cachers. It was the day the White House announced it would stop degrading Global Positioning System accuracy, and GPS users received an instant upgrade in the accuracy of their devices. This was an enabler for the very popular leisure activity of geo-caching, which today is widespread all over the world. People use GPS devices to search for hidden treasures, often at places of historical or natural interest, that are described only by their coordinates. Geo-caching has become a popular representative of a new paradigm of leisure and entertainment activities, characterised by the convergence of mobile information and communication technology with location services to link up material space and media-space. Such activities connect space and entertainment in a way that makes people discover their environment beyond their ordinary action space, solve problems, compete with others and learn about spatial phenomena or history. The field is often described by terms such as “pervasive”, “mixed-reality” or “augmented-reality” and is mostly dedicated to location-based entertainment such as gaming or storytelling. In 2010 location-based gaming seems to be a rising star in the entertainment market.

Linking Material and Media-Space with Geospatial Technology

Linking physical and virtual space holds the possibility of reclaiming social and physical aspects of space in a playful way, and creates new and revolutionary forms of spatial experience. Location-based games require sensitivity to spatial contexts and interaction during the course of play, which is established by the application of localisation and mapping technologies. A starting point for their success has been the recent development and convergence of mobile internet and geospatial technology. Both the Microsoft and Google geo-browsing platforms ensure free availability of maps, even on mobile phones. Many communication providers offer fair mobile internet rates, and handset manufacturers now commonly integrate GPS chips. Thus the costs of playing and providing location-based games are falling, which helps them gain more and more attention in the entertainment and leisure industries. A great variety of games already exists, as the Location-Based Games Database project of the Chair for Computing in the Cultural Sciences at Bamberg University shows: it contains 135 entries on different games. While most are prototypes from research institutions, some commercial projects are listed as well. GPS-Mission (www.gpsmission.com) is one of these and at the moment one of the most successful in the world.

GPS-Mission – a mixed-reality treasure hunt

GPS-Mission – The World is Your Playground
What the player can do in GPS-Mission

GPS-Mission is a treasure hunt game that offers numerous missions worldwide, with each mission adapted to a specific urban environment. After downloading the GPS-Mission client to a mobile phone, the player can log in and start playing. During the game a mobile internet connection is necessary to reload maps and update the player's position on the server of GPS-Mission. That is to say, the course of the game is recorded and can be reviewed later. In addition, other players in the GPS-Mission community can follow the game in real time. After selecting a mission, the player's mobile phone shows checkpoints he has to reach and challenges he has to fulfil. Checkpoints are points within walking distance. When reaching a checkpoint, sometimes a question related to the place has to be answered. After reaching the last checkpoint the player has solved the mission. There is virtual gold everywhere in the world of GPS-Mission. All the gold a player collects while playing a mission is available on his account. Furthermore, he is awarded gold for completing missions and can earn gold for creating successful missions played by other players. Gold is the in-game currency and can be used to buy trophies for every mission that has been completed. The trophies are virtual collectibles, similar to the popular hiking medals of alpine wanderers. Players can also buy power-ups that improve play.
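Orbster has not published how its client decides that a checkpoint has been reached, but the core test any such game needs – comparing the phone's GPS fix with the checkpoint coordinate – can be sketched with the haversine great-circle distance (the 25 m trigger radius is an assumed value):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    earth_radius_m = 6371000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * earth_radius_m * math.asin(math.sqrt(a))

def checkpoint_reached(player, checkpoint, radius_m=25.0):
    """True once the player's GPS fix falls within the trigger radius."""
    return haversine_m(*player, *checkpoint) <= radius_m
```

A trigger radius of a few tens of metres also absorbs ordinary consumer-GPS positioning error.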
Creating your own Missions

The missions are created by the GPS-Mission community, which is assumed to be the community of players of the game as well. Every player in the GPS-Mission community is thus invited to create their own missions, share their knowledge of interesting places, challenge other players and make them walk. A mission designer is provided as an easy-to-use, web-based tool for creating missions online. After publishing a mission, it is instantly available to all players in the area. The creator of a mission is rewarded with 50 gold for every user who completes the mission successfully. In addition to managing the mission, the mission designer uses a geo-browser – optionally Bing Maps, OSM or Google Maps – to create checkpoints and add local riddles, gold and photo spots. As soon as a newly designed mission is ready to be played, it can be published online and is visible to everyone in the community within a few seconds.

Georg Broxtermann believes “that the quality of GPS-Mission largely belongs to the activities in the community. This is also the reason why we leave the quality management mainly to the players.” However, a tool in the mission designer checks every mission for rough playability, and it is up to the players in the community to review missions with stars and comments. According to Broxtermann these players are mainly aged from 14 to 40, but sporadically up to 65. He must smile as he admits that the best mission on GPS-Mission was created by a 66-year-old teacher from Amsterdam. This might indicate that the most active members, in terms of high-quality contributions, are in the older age range, a trend similarly observed on OpenStreetMap and other popular platforms for Volunteered Geographic Information (VGI). In fact Broxtermann argues that the authors of missions are driven by a motivation “similar to participating on YouTube”. While he focuses on entertainment as the motivation, I rather believe in a whole range of motivations for creating missions, from entertainment and education to earning money and developing a kind of professionalism in location-based entertainment.

A Multi-branched Business Model

Still, the company Orbster wants to earn some money with GPS-Mission, and Georg Broxtermann explained the various branches of its business model. Basically, a premium client can be purchased in Apple's App Store or Nokia's Ovi Store, and advertisements on the GPS-Mission website generate some revenue for Orbster. But Broxtermann emphasizes that their main interest is in partner events and the re-use of the GPS-Mission platform for white-label productions and brand marketing. There are three levels of branding that can be incorporated in GPS-Mission: firstly, the branding of single missions, ranging from a special design for checkpoints and a branded story to checkpoints that guide the player to points-of-interest for that particular brand; secondly, Orbster can build a new game integrated on its platform; and thirdly, it can create a whole new and independent game for its customers.

While location-based entertainment can be part of a branding strategy in the opinion of Orbster, it also has opportunities in the tourism and leisure industries as well as in education. Location-based games are often described as new leisure activities combining outdoor activity with a gaming experience, generating a great post-work reward for the players. As such they have strong connotations with lifestyle trends, self-expression and fashion, and compete with personal fashion items and activities, such as having a coffee with friends rather than watching a movie. Thus location-based gaming might become a valuable component of the tourism and leisure industry in the future, rather than remaining purely in the entertainment domain. Touristic performances are strongly concerned with “play”: they are about taking on new roles and trying different patterns of action. The experience of “difference” away from everyday spaces is considered the main driving force for leisure activities and travelling. Location-based games provide a playful and different experience in everyday spaces, and they help players transcend urban life by inscribing the game and their interactions with it. As the game, rather than his personal everyday habits, directs the player in space, he gains a new perspective on space and a chance to reflect on daily spatial habits and configurations. At the same time, he experiments with new tactics of appropriating space as he moves through it by following the game's rules, interacting with other players and executing strategies to succeed in the game. The change of perspectives is a basic principle for experiencing “difference” and gaining an awareness of other concepts of space. Other-awareness means an imaginative takeover of other points of perception while one's own points-of-view are temporarily suspended. Perspective-taking is an important component of a successful learning environment. Thus, location-based games might be interesting components of education-focussed leisure activities as well as of school excursions and study trips. Affirmatively, Georg Broxtermann explains that “education is a fascinating domain for location-based entertainment. Teachers can easily use the mission designer to create attractive missions for their students. There are already many examples of that.” He also mentions a teacher in Munich, Bavaria, who has even been assigned by the municipal school authority to create missions for learners.
The Future of Location-Based Gaming

Game display in GPS-Mission

It seems that location-based entertainment has some very bright prospects: it can be used for branding products, become a popular leisure and tourist activity, or even be utilized as a learning environment. The fusion of location-based gaming with local search and geo-social networking is expanding. Popular mobile applications like Foursquare and Gowalla unite mobile gaming with local search. They reward their users with virtual commodities when they “check in” at a place. Those commodities can be collected, exchanged and dropped again. Furthermore, players are rewarded with special badges if they create new places. Checking in and the maintenance of virtual places assure the receipt of virtual commodities. In the local search game MyTown, the player – if he owns the virtual place – can even collect rent from fellow visitors to the place. Embedding the community of players seems to be a central topic of future location-based entertainment and its application in the leisure, education and marketing domains. However, we shall keep our eyes peeled to see what fusions emerge with other kinds of mobile services.

Florian Fischer, GIS Editor and Research Assistant at the Austrian Academy of Sciences, Institute for GIScience in Salzburg, Austria. He writes a blog with short essays on the Geographic Information Society, locative media, geo-browsers and the like: www.ThePointOfInterest.net.

Links
Orbster: www.orbster.com
GPS-Mission: www.gps-mission.com
Location-Based Games Database: www.kinf.wiai.uni-bamberg.de/lbgdb/
Gowalla: www.gowalla.com
Foursquare: www.foursquare.com
MyTown: www.booyah.com
For Utilities, Public Safety and Security Solutions
Dr. Horst Harbauer, SG&I Senior Vice President for EMEA at Intergraph, talks about the company's software solutions for the utilities industry and its public safety and security solutions. The distinction between GIS and security is also addressed, as is how Intergraph is in a unique position to deliver critical infrastructure protection to different but related markets. Lastly, Harbauer speaks about integrating real-time sensor feeds with maps and how that experience leads to new innovations. By the editors
How does Intergraph support the ‘Smart Grid’ needs of the utilities industry?
Horst Harbauer: The term “smart grid” refers to the availability of intelligent and flexible grids. More and more power is being generated by decentralized sources (photovoltaics, wind power). This places higher requirements on the grid structure with regard to load distribution and grid stability, which can be secured by intelligent and flexible grids. Contrary to conventional power plants, photovoltaic plants feed directly into medium- and low-voltage networks, creating a significantly higher network-analysis effort. Wide-area power generation equally broadens the demand for network analysis software (e.g. voltage-drop and R&X calculation), not only at the headquarters and the power plant but also in some of the subsidiaries of regional suppliers and municipal utilities.
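The voltage-drop and R&X calculations mentioned here rest on standard feeder formulas. As a hedged illustration – the usual approximate three-phase expression with hypothetical line parameters, not Intergraph's implementation:

```python
import math

def feeder_voltage_drop(current_a, r_ohm_per_km, x_ohm_per_km,
                        length_km, cos_phi):
    """Approximate three-phase line voltage drop in volts:
    dU = sqrt(3) * I * L * (R' * cos(phi) + X' * sin(phi))."""
    sin_phi = math.sqrt(1.0 - cos_phi ** 2)
    return (math.sqrt(3) * current_a * length_km
            * (r_ohm_per_km * cos_phi + x_ohm_per_km * sin_phi))
```

Distributed photovoltaic in-feed changes the magnitude and direction of the current along such feeders, which is why the analysis load on medium- and low-voltage networks keeps growing.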
Dr. Horst Harbauer

G/Technology is Intergraph's focused application for utility and communications customers. It was developed from the foundation of our GeoMedia technology to provide advanced workflows that meet the data capture, maintenance, analysis and reporting requirements of utility and communications companies. To provide maximum openness, flexibility and scalability, both applications support native Oracle Spatial. Previous versions of G/Technology initially remained on Oracle's relational spatial data model when GeoMedia upgraded to the object data model. Today, both G/Technology and GeoMedia utilise Oracle's object data model. For earlier versions, customers made use of Oracle stored procedures to simultaneously populate both geometry types, allowing both applications to access common records.

In Europe, when performing disaster management simulations, the heavy security at government institutions impedes the exchange of (geo)data. The real problem seems to be massive firewalls. In what way can Intergraph help government agencies with this issue?

Horst Harbauer: This is really a matter of approaching the requirement from the correct direction. Major events (whether natural disasters, acts of terrorism or sporting events on the scale of the Olympics) are unparalleled in their operational and organisational complexity. Their safe and effective management requires timely and well-informed decision making, coupled with the ability to communicate and coordinate across geographically dispersed locations and a bewildering range of diverse organisations. These can involve critical responders and resources from the emergency services, national government, municipal and regional government, the private sector (such as utility operators, communications companies, transport operators, etc.), the military, the security services and the voluntary sector, amongst others.
To achieve this requires a significant degree of coordination, control and resilience. In the absence of secure, reliable and predictable process and access control, data sharing invariably becomes reduced to non-sensitive themes that can be exploited by organisations downloading data from portals for use in their local projects. The overheads hinted at in the question, and the lack of real-time interaction, tend to limit the application of GIS to the planning and recovery phases of disaster management. Intergraph has drawn on its experience as the leading provider of map-based public safety and security solutions to develop a robust, collaborative, process-driven emergency planning and response suite that fuses workflow, real-time data integration, secure role-based access and advanced geospatial functionality. The security and coordination provided by this platform enable users from different organisations to use data directly from the source, avoiding the overhead and disconnect caused by downloading datasets. This platform has already helped manage major events successfully, including the recent G8 Summit in L'Aquila, Italy, and is being deployed for regional civil protection centres across Europe.
The placement of security cameras with known positions and pixel-recognition capability is rapidly bringing digital camera technology into the spatial domain. What can be expected from Intergraph in the field of cameras and location, pixel recognition and the real-time monitoring of suspicious movements with multiple cameras?
Horst Harbauer: While this is ‘bleeding edge’ technology for conventional GIS vendors, Intergraph has a long history of working with video, and the company holds a number of patents in this space. We first integrated camera feeds with our emergency management environment over a decade ago and also produce a forensic video enhancement and analysis product. This experience has enabled us to lead innovation in a number of directions.
The security and public safety markets have driven the need to integrate real-time sensor feeds with maps, both to maintain a clear picture of the situation on the ground and as a way to manage and make sense of the ballooning and bewildering range of real-time data feeds such as intelligent CCTV, radar, access control and UAVs. The spatial framework also helps the operator understand situations more quickly by showing the context of an alarm, with clear links to supplementary information that can help determine whether action is required. For example, when an alarm is raised by an access control system or a sensor, the operator is shown its location along with the CCTV that covers the area in question and the location and status of nearby personnel. Video footage from 10 seconds either side of the alarm can be accessed by clicking a camera location. Similarly, a patrol can be dispatched to investigate, and CCTV cameras can be panned and zoomed by simply clicking their icon on the map. Intelligent CCTV enhances this process by continuously monitoring multiple feeds for conditions that fall outside acceptable parameters. When an exception is detected, the operator is shown the video sequence and the location of the event on a map display, with direct access to all of the supplementary information needed to assess the alarm and deploy the most effective response. These capabilities are used extensively in critical infrastructure protection and border security. Intergraph has also just launched GeoMedia Motion Video Analyst to enable wider and more effective exploitation of the terabytes of data produced by the hundreds of thousands of hours of video flown annually by UAVs. Motion video exploitation combines video feeds from aerial platforms directly with mapping, enabling live video to be viewed in its geographic context and in combination with other data for enhanced situational awareness during operations.
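At its simplest, the alarm-to-camera linkage described here amounts to a spatial query around the alarm location. An illustrative sketch in projected (metre) coordinates – not Intergraph's actual API:

```python
import math

def cameras_near(alarm_xy, cameras, radius_m=200.0):
    """Return ids of cameras within radius of an alarm, nearest first.
    Coordinates are (x, y) in a projected metre grid for simplicity."""
    ax, ay = alarm_xy
    hits = [(math.hypot(x - ax, y - ay), cam_id)
            for cam_id, (x, y) in cameras.items()]
    return [cam_id for dist, cam_id in sorted(hits) if dist <= radius_m]
```

A production system would also test each camera's field of view and pan/tilt limits, not just its distance.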
It also unlocks valuable information in archived footage by providing a simple and reliable means of searching by location as well as date and time.
For more information, have a look at www.intergraph.com
The same question as before, but with a focus on security and infrastructure: how can Intergraph use its knowledge of the energy and utilities infrastructure industries to direct its expertise toward security concerns? And since security in government agencies and energy companies is not in the same hands as GIS, is there any contact at all between the two divisions, and what is Intergraph's strategy for reaching them?
Horst Harbauer: In a perfect world, the GIS/security distinction would not exist. However, some GIS technologies are harder to integrate with real-time information and operational business systems. Intergraph is in a unique position, having experience and products in the three prerequisite areas of capability necessary to deliver critical infrastructure protection: core geospatial technology, integrated security platforms, and industry solutions for infrastructure design and management. Today, Intergraph solutions are providing integrated security for airports, ports, mass transit systems, rail, national borders and nuclear power plants. Besides SG&I (Security, Government & Infrastructure), Intergraph Process, Power & Marine (PP&M), the corporation's second division, is the world's leading provider of enterprise engineering software for the design, construction and operation of process and power plants. Our close relationship with, and insight into, the energy sector means we work with clients wishing to protect next-generation nuclear, petrochemical plants and oil production facilities.
The utilities industry has quite a high pressure to reduce its operating cost. What solutions can Intergraph provide to achieve this goal?
Horst Harbauer: The German Federal Network Agency has requested that the utility industry reduce its operating costs and, at the same time, compensate for the power losses that occur during transmission. To achieve this, many power suppliers focus on status-oriented maintenance. Intergraph's G!NIUS solution provides all the methods and functions needed to collect and document the status of production equipment. This covers the full workflow: production equipment data out to the grid, a graphical user interface for entering results in the field, and recirculation of the collected data back into the office. Funds are then allocated on the basis of the findings of the status-oriented maintenance plan. Furthermore, Intergraph returns the result data to the central ERP-SAP system, where cost calculation can be done.
A GPS Coordinate for Everything in the United States
The contiguous United States, comprising more than 8 million km2, extends westward from a Maine beach on the Atlantic Ocean to the state of Washington’s Pacific coastline. With Canada on its northern border and Mexico on the south, the country’s landforms range from deserts to mountaintops and from grassland prairies to marshland. Each of those 8 million square kilometers of diverse terrain is now part of NEXTMap USA, a high-resolution 3D digital elevation dataset from Intermap Technologies. NEXTMap USA, which also includes the island state of Hawaii, is a companion dataset to NEXTMap Europe, Intermap’s collection of 2.4 million km2 of digital elevation data for all of Western Europe that was made commercially available in May 2009. By Ken Goering
“NEXTMap USA is a remarkable database,” said Brian Bullock, Intermap president and CEO. “Every building, road, and even large rock in the United States now has a GPS address, if you will, and we know its position within 2 meters horizontally and 1 meter vertically.” Each square kilometer in the database includes 40,000 individual elevation postings and 640,000 image pixels, equating to over 600 billion elevation measurements and five trillion image pixels for the nation. The privately funded NEXTMap program developed from Intermap’s recognition that mapping resources for first-world countries could be dramatically improved. “In 1998, after analyzing the United Kingdom, Germany, and the United States, we concluded that the first world was not well-mapped,” said Bullock. “Rather, what existed was an accumulation of decades and decades of maps, with varying degrees of accuracy, all cobbled together.”
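The per-square-kilometre counts quoted by Bullock imply regular grid spacings: 40,000 elevation postings per km² correspond to a 5 m posting interval, and 640,000 image pixels per km² to 1.25 m pixels. The arithmetic, as a trivial check:

```python
import math

def grid_spacing_m(samples_per_km2):
    """Grid spacing implied by a per-square-kilometre sample count
    on a regular square grid (1 km^2 = 1,000,000 m^2)."""
    return math.sqrt(1_000_000.0 / samples_per_km2)
```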
Like those in NEXTMap Europe, the datasets within NEXTMap USA – which include digital surface models, digital terrain models, and orthorectified radar images – are unprecedented in their uniform accuracy and have already been put to use in extraordinarily diverse markets and industries. County governments use the elevation models and images for projects such as water management planning, and U.S. federal government agencies have leveraged the countrywide uniformity of the data, which is of the same accuracy specification from coast to coast and from border to border. In addition, the data is used in an enormous array of geospatial-enabled products and services; in the automotive industry alone, NEXTMap data will be used in 3D in-dash visualization applications, while Intermap's 3D Roads product, derived from NEXTMap data, supports energy management and safety/advanced driver assistance systems (ADAS) applications.
This is a NEXTMap USA colorized shaded-relief digital terrain model (DTM) of the Grand Canyon, which is located in northern Arizona in the southwest United States. The canyon is 446 km long and varies in width from 8 km to 29 km. Grand Canyon National Park was one of the first U.S. national parks; the Colorado River began carving the canyon at least 17 million years ago.
Britain Serves as Prototype
By 2002, Intermap was ready to initiate its first whole-country mapping project and chose Great Britain as a prototype. Intermap collects its data with interferometric synthetic aperture radar (IFSAR) mounted on an aircraft fleet, including Learjets and King Airs, that collects data in swaths up to 10 kilometers wide. The method results in digital elevation databases with sub-meter vertical accuracy, and one particular advantage of IFSAR is its ability to collect data in cloudy or dark conditions, which allows the aircraft to fly regardless of cloud cover or overcast days. England and Wales were completed in 2002, and Scotland was added to NEXTMap Britain in 2003. "We were able to meet the technical specifications and also prove the business model," Bullock said. "The big challenge was to scale that up 50 times and significantly reduce the costs." Bullock said that Intermap wanted to develop a digital database for the United States that was much more accurate than what was available at the time. "It took the U.S. government 60 years and $2 billion to map the United States the first time, and we were setting out to do it at a thousand times more density, and at least ten times more accuracy, and we were going to do it in four or five years with private funding," he said.
NEXTMap USA Begins with California
Based on market demand, NEXTMap USA began with remapping the state of California. However, remapping this single state was a significantly larger project than NEXTMap Britain had been: at nearly 424,000 km2, California is almost twice the size of England, Wales, and Scotland combined. Intermap developed a 150-page project plan that guided the company through this unprecedented project. The plan addressed, in part, ways to ensure that the data was collected as accurately as possible. Intermap's aircraft collect data by flying absolutely straight lines, and the data is subsequently controlled with reflective ground control points (GCPs) placed by field staff using GPS coordinates – and, for a state the size of California, GCP placement was no easy task. Yet as massive as California is, it is only the second-largest state in the contiguous United States (Texas, at nearly 700,000 km2, is larger): Intermap was definitely headed into new territory with NEXTMap USA. Data collection for California was completed in September 2005. The early sales successes of the dataset – including use for floodplain mapping, high-speed rail line planning, and water resource planning, among many other projects in the state – convinced Intermap to continue the initiative of remapping the entire United States. The expansion began with the states of Mississippi and Florida in the southeastern United States. Next, Intermap collected data along the southern and northern borders of the United States for the mutual benefit of the North American governments in managing border and security issues. With the northern and southern borders completed, the rest of the United States was mapped with maximum efficiency as dictated by cooperative weather patterns and the seasons. Coordinating the flights, which could change at a moment's notice depending on extreme weather, took a huge effort from Intermap personnel. "There were times, especially during the winter, when we couldn't fly anywhere in the country," said Ivan Maddox, Intermap director of data acquisition and planning. Coordinating governmental clearance for the NEXTMap USA flights was, compared to data collection for NEXTMap Europe, relatively straightforward: there is only one civil air authority for the country, instead of a different agency for each European country. Still, the flight planning had to be thorough. "Each flight had a standard 12-page briefing that included the precise times of every single turn," said Maddox. For NEXTMap USA, Intermap aircraft flew a total of 2,530 sorties, equating to 10,324 hours of airtime – nearly five years working aloft.
AccuTerra by Intermap is one of the many applications enabled by NEXTMap data. The application, available for Apple's iPhone as well as dedicated GPS devices, allows users to plan, record, and share their outdoor recreational experiences, like hiking and skiing.
Throughout data collection operations for NEXTMap USA and NEXTMap Europe, Intermap was taking significant steps forward in both its technology and methodology. When data for NEXTMap Britain was collected, the aircraft flew lines of only 200 km in length. To maintain the absolutely straight lines needed for accurate data collection, the pilots must continuously adjust the aircraft heading during the flight because of changing winds aloft – which also reorients the antennae mounted on the jets and changes their "look angle". The radar would have to be taken offline so that it could be manually reoriented to correct the look angle, and the aircraft would have to make a turn in order to start collecting data where it had left off. During those periods, the radar wasn't collecting data, but the aircraft was still using fuel and time – both expensive resources. Through intense research and testing, the company's engineers developed a method of automatically reorienting the IFSAR antenna pedestal to account for changing wind directions while continuing to collect data. This advancement allowed the Learjets to fly "ultra long lines" – 1,200 km flightlines restricted in length only by the fuel capacity of the aircraft. "By the end," said Maddox, "we had collected an area about four times the size of California in the same amount of time it took to collect that state." Collection of the data for NEXTMap USA was completed on March 16, 2009, six percent under budget and nine months ahead of schedule. The data was continuously processed and verified in several of Intermap's offices around the world, necessitating tremendous upgrades in computing power and storage capabilities, as well as significant additions to staff. NEXTMap USA required 1,300 GCPs, each placed by an Intermap employee who had to ask the landowner's permission prior to placement. The field staff would regularly drive up to 25,000 miles in a month. "For NEXTMap USA, to initially place and then return to pick up the reflectors, our GCP crew drove the equivalent of two return trips to the moon," said Maddox. A total of 160 Intermap field staff worked on the data collection phase of NEXTMap USA. Various project teams, including the GCP crews, spent a total of 24,463 days (67.5 working years) in the field. Perhaps the most stunning of all the numbers regarding the NEXTMap program is: two. For a significant length of time, data collection (and all of the operations supporting it) and processing were taking place on two continents – North America and Europe – simultaneously, so that NEXTMap USA and NEXTMap Europe could both be completed as quickly as possible. The end result: more than 10 million square kilometers of datasets providing uniformly accurate coverage for the contiguous United States and Western Europe.
Putting NEXTMap USA to Work
While Intermap continues to collect and process data under its NEXTMap program around the world, the company has also transformed itself from a data collection and processing entity into one that creates geospatial products and services based on the NEXTMap database, driven by the varying needs of its customers worldwide. Beyond traditional GIS-based uses for digital elevation data and images, NEXTMap is also used in a wide variety of geospatial-enabled products and services. This year, Intermap is introducing an online risk assessment portal with which insurance companies can accurately gauge their property portfolio's risk of flood damage; the accuracy of NEXTMap data allows this at the level of a specific property address. The company is also launching an online terrain-profile application for microwave link planning, which allows telecommunications companies planning to build or extend a network to ensure that their transmission towers will have a clear line of sight, without expensive field verification. The application also benefits other industries that use transmission lines of any type, such as water, oil and natural gas.
On-demand Data Delivery
The quality, resolution, size, and complexity of geospatial data are increasing exponentially, driving the need for more effective data management and application interoperability. Instead of storing and managing large datasets locally, many users now prefer cost-effective Internet-hosted solutions that are compatible with both their existing application environment and their data access requirements. In response, Intermap's Web services portal – called TerrainOnDemand – is an Open Geospatial Consortium (OGC) "data as a service" platform that natively supports the acquisition, analysis, and delivery of the company's NEXTMap data.
Automotive Applications Abound
NEXTMap data is being evaluated extensively in the automotive industry: Intermap's 3D Roads product is an accurate and homogeneous geometric representation of all roads in a country, based on NEXTMap data. Key vehicle energy management applications enabled by 3D Roads include Eco-routing, which helps plan more fuel-efficient routes (and reduce carbon emissions), and Electric Vehicle Range Prediction, which accurately informs electric or hybrid electric vehicle drivers how far they can travel on their current charge. Also in the automotive sphere, NEXTMap data is enabling applications such as Predictive Front Lighting, which automatically adjusts a vehicle's headlights to illuminate curves in the road, and Curve Speed Warning, which alerts a driver if the vehicle is traveling at an unsafe speed for an approaching curve.
This is a digital surface model (DSM) of Germany from NEXTMap Europe, which reflects the whole-country mapping concept underlying the NEXTMap program. Intermap is leveraging that concept to develop a number of geospatial-enabled products and services using NEXTMap data, including automotive applications that help increase vehicle fuel efficiency and reduce carbon emissions.
Recreational Uses As Well
NEXTMap USA data is the foundation for Intermap's AccuTerra product, a map database for smartphones and dedicated GPS units used by outdoor enthusiasts to plan, record, and share their hiking, skiing, and other outdoor recreational pursuits. Earlier this year, The New York Times used NEXTMap data to create highly detailed, interactive maps of the Winter Olympic venues in British Columbia, Canada, for its Web site.
Ken Goering, Senior Writer at Intermap Technologies. For more information on NEXTMap USA and NEXTMap Europe, visit www.intermap.com.
Pan-sharpening and Geometric Correction
The successful operation of DigitalGlobe's WorldView-2 marks another milestone for high-resolution satellites. Its high-resolution panchromatic sensor, four previously unavailable multispectral bands at 1.8-meter resolution, and advanced geopositional capability provide a range of benefits across applications. By Philip Cheng and Chuck Chaapel
On October 8, 2009, WorldView-2 joined its sister satellites, WorldView-1 and QuickBird, in orbit. WorldView-2 is a remote-sensing satellite principally used to capture high-resolution images of the earth. The images provided by the satellite can be used for applications such as mapping, land planning, disaster relief, exploration, defense and intelligence, visualization and simulation of environments, and classification. WorldView-2 was designed and built by Ball Aerospace & Technologies Corp. of the US on the Ball Commercial Platform (BCP) 5000 spacecraft bus, is operated by DigitalGlobe, and was launched aboard a Boeing Delta II. The satellite can swing rapidly from one target to another, allowing it to image many targets in a single pass. WorldView-2 operates at an altitude of 770 km in a sun-synchronous orbit with an inclination of 97.2° and an orbital period of about 100 minutes. It is the third satellite in DigitalGlobe's constellation, joining its forerunners WorldView-1 (launched in 2007) and QuickBird (launched in 2001), and has been designed for a lifespan of 7.5 years. Large-area collection and rapid retargeting are two important features of the satellite. Enabled by the combination of the 770 km orbiting altitude, state-of-the-art Control Moment Gyroscopes (CMGs) and bi-directional push-broom sensors, WorldView-2's enhanced agility and bi-directional scanning allow the collection of over 10,000 sq km in a single overhead pass, plus efficient in-track stereo collections of over 5,000 sq km. WorldView-2's advanced geopositional technology provides significant improvements in accuracy: the specification has been tightened to 6.5m CE90 directly off the satellite – that is, with no processing, no elevation model and no ground control – and measured accuracy is expected to be approximately 4m CE90. WorldView-2's panchromatic resolution is 46 cm and its multispectral resolution is 1.8 m. Distribution and use of imagery better than 0.50m GSD pan and 2.0m GSD multispectral is subject to prior approval by the U.S. Government. As the first high-resolution commercial satellite to provide eight spectral bands, WorldView-2 offers imagery with a high degree of detail, unlocking a finer level of analytical discernment that enables improved decision-making. In addition to the industry-standard blue, green, red and near-infrared bands, WorldView-2 includes four previously unavailable bands, collected at 1.8m resolution: coastal blue, yellow, red edge and near-infrared 2. These bands offer a range of benefits to analysts, who
Figure 1a: Panchromatic image of Phoenix, USA
Figure 1b: Multispectral image of Phoenix, USA
Figure 1c: Pan-sharpened image of Phoenix, USA
will be able to identify a broader range of classifications (e.g., more varieties of vegetation or water-penetrated objects), extract more features (e.g., distinguishing cotton-based camouflage from natural ground cover), view a truer representation of colors that matches natural human vision, and track coastal changes and infractions. This article examines different aspects of WorldView-2 satellite image data. First, we test pan-sharpening using WorldView-2 panchromatic and multispectral data. Second, we examine the geometric correction method and the accuracy of WorldView-2 data; given that WorldView-2 is equipped with state-of-the-art geolocation accuracy, it is useful to establish the geometric model accuracy with and without ground control points (GCPs). Lastly, we test the geometric correction of WorldView-2 data using Google Earth as a source of GCPs.
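The CE90 figures quoted earlier denote circular error at the 90th percentile: the radius of the circle within which 90 percent of measured horizontal position errors fall. An empirical estimate from check-point residuals can be sketched as follows (an illustration only, not DigitalGlobe's evaluation procedure):

```python
import numpy as np

def ce90(dx, dy):
    """Empirical CE90: the radius containing 90% of horizontal errors.

    dx, dy -- arrays of easting/northing residuals (meters) at check points
    """
    radial = np.hypot(dx, dy)           # horizontal error at each point
    return np.percentile(radial, 90)    # 90th percentile of radial error

# Hypothetical residuals: ten check points with radial errors of 1..10 m
print(ce90(np.zeros(10), np.arange(1.0, 11.0)))
```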
Similar to QuickBird and WorldView-1 data, WorldView-2 data is distributed at five different levels: Basic 1B, Basic Stereo Pairs, Standard 2A, Ortho-Ready Standard (OR2A), and Orthorectified. For custom orthorectification, the Standard 2A and Orthorectified products are not recommended – the Standard 2A product, in particular, because of the coarse DEM correction already applied to the image data. Basic Imagery products are the least processed of the WorldView-2 imagery products. Each strip in a Basic Imagery order is processed individually, so multi-strip Basic Imagery products are not mosaicked. Basic Imagery products are radiometrically and sensor corrected, but not projected to a plane using a map projection or datum; the sensor correction blends all pixels from all detectors into a synthetic array to form a single image, and the resulting GSD varies over the product because the attitude and ephemeris change slowly during the imaging process. Basic Stereo Pairs are supplied as two full scenes with overlap, designed for the creation of digital elevation models (DEMs) and derived GCPs. OR2A has no topographic relief applied, making it suitable for custom orthorectification: it is projected to an average elevation, either calculated from a terrain elevation model or supplied by the customer, and can be ordered from a minimum of 25 km2 from the library, or 64 km2 for new tasking.
For this article, three sets of WorldView-2 OR2A data were obtained from DigitalGlobe, covering Morrison and Phoenix in the USA and Beijing in China. OR2A products are recommended for geometric correction because the panchromatic and multispectral data are resampled to exactly the same geographic extents; hence, it is possible to perform pan-sharpening of the data before geometric correction if a pan-sharpened orthorectified image is desired. This method works for most areas with gentle terrain. Performing pan-sharpening after geometric correction of the panchromatic and multispectral data separately often requires dealing with small misalignments between the orthorectified panchromatic and multispectral data, due to the accuracy of the GCPs and DEM used in the orthorectification process.
The availability of WorldView-2's 0.5m panchromatic band, in conjunction with the 2m multispectral bands, provides the opportunity to create a 0.5m multispectral pan-sharpened image by fusing these images. Based on a thorough study and analysis of existing pan-sharpening algorithms and their fusion effects, an automatic pan-sharpening algorithm was developed by Dr. Yun Zhang at the University of New Brunswick, Canada. This technique solves the two major problems in pan-sharpening: color distortion and operator dependency. A least-squares method is employed to best approximate the grey-level relationship between the original multispectral, panchromatic, and pan-sharpened image bands for the best color representation, and a statistical approach is applied to standardize and automate the pan-sharpening process. The algorithm is commercially available within the PCI Geomatics software. Figures 1a, 1b and 1c provide examples of WorldView-2 panchromatic, multispectral and pan-sharpened images of Phoenix, U.S.A.; Figures 2a, 2b and 2c show the corresponding images of Beijing, China.
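The University of New Brunswick algorithm itself is commercialized through PCI Geomatics, but the core least-squares idea can be sketched in a few lines: fit a synthetic panchromatic band as a linear combination of the multispectral bands, then inject the real panchromatic detail through the ratio of the two. This is an illustrative simplification, not the actual UNB implementation:

```python
import numpy as np

def pansharpen_ls(ms, pan):
    """Least-squares pan-sharpening sketch (illustrative only).

    ms  -- (bands, H, W) multispectral image, resampled to pan resolution
    pan -- (H, W) panchromatic image
    """
    b, h, w = ms.shape
    X = ms.reshape(b, -1).T                        # pixels x bands
    X = np.column_stack([X, np.ones(h * w)])       # add an intercept term
    # Fit a synthetic pan band as a linear combination of the MS bands
    coef, *_ = np.linalg.lstsq(X, pan.ravel(), rcond=None)
    synth = (X @ coef).reshape(h, w)
    ratio = pan / np.maximum(synth, 1e-6)          # detail-injection ratio
    return ms * ratio                              # sharpen each band
```

In the degenerate case where the pan band is an exact linear combination of the multispectral bands, the ratio is 1 everywhere and the output equals the input; on real imagery, the ratio carries the high-frequency spatial detail of the pan band into each multispectral band.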
Figure 2a: Panchromatic image of Beijing, China
Figure 2b: Multispectral image of Beijing, China
Geometric Correction Method and Software
In order to leverage WorldView-2 images for applications such as GIS, it is necessary to orthorectify the imagery. The orthorectification process requires a geometric model, GCPs and a DEM. The Rational Polynomial Coefficient (RPC) model has been the most popular method for orthorectifying high-resolution images because it allows the user to correct an image using no GCPs or only a few. More details about the RPC model can be found in the paper by Grodecki and Dial ("Block Adjustment of High-Resolution Satellite Images Described by Rational Functions", PE&RS, January 2003). For the purposes of testing, the latest version of PCI Geomatics' OrthoEngine software was used. This software supports reading of the raw data, manual or automatic GCP/tie point (TP) collection, geometric modeling of different satellites using Toutin's rigorous model or the RPC model, automatic DEM generation and editing, orthorectification, and either manual or automatic mosaicking. OrthoEngine's RPC model is based on the block adjustment method developed by Grodecki and Dial and was certified by Space Imaging (http://www.pcigeomatics.com/support_center/tech_papers/rpc_pci_cert.pdf).
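The RPC model expresses image line and sample coordinates as ratios of cubic polynomials in normalized latitude, longitude and height. A stripped-down forward evaluation might look like the sketch below; the term ordering and normalization handling are simplified here, whereas real RPC files define 20 coefficients per polynomial in a vendor-specified order, and a zero-order adjustment simply adds a constant offset to the computed line/sample:

```python
import numpy as np

def _terms(P, L, H):
    """Cubic polynomial basis in normalized lat (P), lon (L) and height (H)."""
    return np.array([
        1, L, P, H, L*P, L*H, P*H, L*L, P*P, H*H,
        P*L*H, L**3, L*P*P, L*H*H, L*L*P, P**3, P*H*H, L*L*H, P*P*H, H**3,
    ])

def rpc_sample(lat, lon, h, num, den, norm):
    """Forward RPC evaluation: ground coordinates -> image sample (sketch)."""
    P = (lat - norm["lat_off"]) / norm["lat_scale"]
    L = (lon - norm["lon_off"]) / norm["lon_scale"]
    H = (h - norm["h_off"]) / norm["h_scale"]
    t = _terms(P, L, H)
    s_norm = np.dot(num, t) / np.dot(den, t)       # rational polynomial
    return s_norm * norm["samp_scale"] + norm["samp_off"]

# Toy coefficients: the sample coordinate depends linearly on longitude only
num = np.zeros(20); num[1] = 1.0                   # numerator = L
den = np.zeros(20); den[0] = 1.0                   # denominator = 1
norm = {"lat_off": 0, "lat_scale": 1, "lon_off": 10, "lon_scale": 2,
        "h_off": 0, "h_scale": 1, "samp_off": 5000, "samp_scale": 1000}
print(rpc_sample(0.0, 12.0, 0.0, num, den, norm))  # 6000.0
```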
Morrison Test Results using Survey Points
A total of 13 independent check points (ICPs) with sub-meter survey accuracy were collected from six OR2A datasets. A zero-order polynomial RPC adjustment was used. The ICP root mean square (RMS) errors were 2.6m in X and 1.3m in Y, with maximum errors of 5.7m in X and 3.1m in Y. When one GCP was collected from each image, the ICP RMS errors dropped to 0.7m in X and 1.0m in Y, with maximum errors of 1.4m in X and 1.4m in Y. It is therefore possible to achieve RMS accuracy within 1m with only one accurate GCP per image using the RPC method. Figure 3 shows an orthorectified image of the Morrison dataset overlaid on Google Earth.
Phoenix Test Results using Google Earth
In recent years, Google Earth has provided users with reference imagery that can be used as a source of GCPs anywhere in the world. For most cities, high-resolution data such as GeoEye, QuickBird or air photos are available. By checking the Google Earth imagery against known survey points, it was found that the accuracy of the Google Earth imagery is approximately within 2m in the X, Y and Z directions in most North American cities; accuracy outside North America has not been checked at this time. To test Google Earth imagery as a source of GCPs, eight Phoenix WorldView-2 OR2A datasets were used. Forty ICPs were collected from the Google Earth imagery, and the RMS errors were 0.9m in X and 0.7m in Y, with maximum errors of 1.8m in X and 1.6m in Y. When using one GCP per image, the ICP RMS errors were 1.2m in X and 0.7m in Y, with maximum errors of 2.2m in X and 1.6m in Y. Figure 4 shows the pan-sharpened orthorectified Phoenix image overlaid on Google Earth. It can therefore be concluded that Google Earth can be used as reference imagery for collecting GCPs for near-nadir acquisition angles; for off-nadir acquisitions, more accurate GCPs should be used.
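The per-axis RMS and maximum error figures quoted in these tests are straightforward to reproduce from check-point residuals:

```python
import numpy as np

def icp_statistics(residuals):
    """Per-axis RMS and maximum absolute error from ICP residuals.

    residuals -- (n, 2) array of (X, Y) errors in meters
    """
    r = np.asarray(residuals, dtype=float)
    rms = np.sqrt((r ** 2).mean(axis=0))   # root mean square per axis
    max_abs = np.abs(r).max(axis=0)        # worst case per axis
    return rms, max_abs

# Hypothetical residuals for four check points
rms, mx = icp_statistics([[3, 0], [-3, 0], [0, 4], [0, -4]])
print(rms, mx)
```

Note that by construction the maximum error is always at least as large as the RMS error, which makes pairs of published figures easy to sanity-check.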
Figure 3: Pan-sharpened orthorectified Morrison image overlaid with Google Earth
Figure 4: Pan-sharpened Phoenix image overlaid with Google Earth
Figure 5: Beijing pan-sharpened ortho image overlaid with Google Earth
Beijing Test Results using Google Earth
A similar test was performed on an OR2A dataset of Beijing, China. Five points were collected using Google Earth imagery as the reference image. The ICP RMS errors were 2.5m in X and 9.1m in Y, with maximum errors of 3.7m in X and 8.9m in Y. When one GCP was collected from the imagery, the ICP RMS errors were 2.9m in X and 1.2m in Y, with maximum errors of 3.6m in X and 1.9m in Y. As previously mentioned, the Google Earth imagery may not be very accurate outside North America; however, it is still a useful tool if one simply intends to update an area of Google Earth. Figure 5 shows the pan-sharpened Beijing image overlaid on Google Earth.
This article has examined different aspects of WorldView-2 data. Pan-sharpening of WorldView-2 data can be performed using the OR2A panchromatic and multispectral products before geometric correction. The RPC model with a zero-order polynomial adjustment can be used as the geometric model to orthorectify WorldView-2 data. As with WorldView-1 data, it is possible to achieve RPC model accuracy within 1m RMS with a minimum of one accurate GCP. For areas without accurate GCPs, Google Earth can be used as a source of GCPs.
Figure 2c: Pan-sharpened image of Beijing, China
Dr. Philip Cheng email@example.com is a Senior Scientist at PCI Geomatics. Mr. Chuck Chaapel firstname.lastname@example.org is a Senior Geospatial Engineer at DigitalGlobe.
’Are We There Yet?’
The Location Business Summit
'Location-based services: are we there yet?' and 'How is there (more) money to be made with location business systems?' were the central questions at the Location Business Summit in Amsterdam, April 28-29. With further growth expected in GPS-equipped smartphones, there is certainly room for more location-based systems, and thus money to be made. The question is how, and by whom. And what lessons can be learned from the geospatial web for driving profits? Over two days, more than 50 speakers from the industry gathered in the Okura Hotel in Amsterdam to share their thoughts on these matters. By Eric van Rees
Location-based services have come a long way. Part of their success has to do with technology, part with data providers, and part with companies that use location as a way of displaying data. In addition, that data is free for everyone to use wherever they want. But what is the next step? Who will lead the way in location-based systems and decide what others will do? What are the challenges ahead, and how should they be tackled? What lessons can be learned from geospatial parties that deal with location every day? These questions and more were addressed during the Location Business Summit in Amsterdam, April 28-29. As was to be expected, this was not so much a technological conference as one where different groups of people met, discussed their thoughts and learned from each other. Familiar parties such as Google, Yahoo, Layar, OpenStreetMap and Tele Atlas were present, but so were marketing agencies, telecom companies, and major hardware and software companies like Microsoft and Dell.
Where is the Money?
The primary questions of the conference were addressed by David Gordon, Director of Strategic Planning at Intel. One of the main questions was 'Where is the money?', meaning 'How is there money to be made with location-based services (that is, advertising)?' This question came up during almost every presentation. It's easy to see why: with Google and Nokia offering free map services on mobile devices, mobile system providers are asking themselves how to respond to this move and how to make money with mobile and location-enabled advertising. Considering the diversity of players in this market and the fact that sales of GPS smartphones are still increasing, all parties are eager to take their share of the cake. Google's Geospatial Technologist Ed Parsons followed Gordon's short opening presentation with a talk that focused on data rather than the services around the data. Parsons argued that without context, data itself is irrelevant, because place equals points of interest and people. He made this clear with an example that illustrated how the location where information is shown is just as important as the information itself. Context determines whether a message comes through. This message was repeated in other presentations: everybody seemed to agree that there is a need to personalize location-based information for the user. The question is how, and by what means. On personalizing location-based information, Parsons argued that to personalize content for the individual user, the service should have information about the user so it can give better search results. Google is already doing this, and some speakers agreed that Google is in the driver's seat in the location business market. Everyone was eager to hear Google's presentation on mobile local advertising during the second conference day. One of Google's new initiatives in this field is Google Local Shopping, where shop inventories are made searchable for mobile users through Google. The other way around is also possible: take, for instance, geofencing, where mobile users receive text messages about discounts offered by the shop they are in at that very moment. Although research has shown that geofencing can be quite effective as a marketing tool, it remains to be seen whether people are in favor of such tools, as they may not be personalized and could be considered intrusive.
The audience at the conference was clearly different from that found at a typical geospatial event. This was not a technical conference, which had its strengths and weaknesses. I for one learned a lot more about how businesses can use location-based services and make a profit with them, but honestly there was not much new to be learnt. There were no big announcements or exciting new products. Augmented reality was mentioned in only one presentation, although the topic certainly deserved wider attention; Layar kept its new product announcements to itself but revealed an upcoming Layar event in June. From a geospatial perspective, I was surprised how non-geospatial people, like the majority of those at the conference, take maps for granted. Or mapping, for that matter, or data quality. The big discussion between crowdsourcing (OSM) and a blend of traditional mapping and crowdsourcing (used by Navteq) seemed to go over the heads of most attendees. Ed Parsons remarked that 'people have problems with maps, mapping is not that easy', giving the example that perfect circles on a map in a Mercator projection should be read with suspicion, a sign that something is wrong. But the attendees noticed other barriers preventing location-based systems from fully taking off: roaming costs and battery power are still big obstacles for mobile users. To answer the question 'are we there yet?', I think the answer should be: 'no, not yet'.
For more information, have a look at www.thewherebusiness.com/locationsummit
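Parsons' Mercator remark is easy to quantify: the projection's scale factor grows as 1/cos(latitude), so a fixed ground distance is drawn progressively larger toward the poles, and a circle of fixed map radius therefore covers a very different ground area in Oslo than in Nairobi. A quick illustration:

```python
import math

def mercator_scale(lat_deg):
    """Mercator scale factor: map distance per unit of ground distance."""
    return 1.0 / math.cos(math.radians(lat_deg))

# A 10 km ground distance is drawn twice as long at 60 degrees latitude
# as at the equator.
for lat in (0, 30, 52, 60):
    print(f"{lat:3d} deg  scale = {mercator_scale(lat):.2f}")
```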
The Data Exchange Company
Snowflake Software is a UK-based provider of data exchange solutions. The company combines off-the-shelf software products, consultancy services and an extensive training portfolio to build complete data exchange solutions for its customers in markets such as Defence, Aviation, INSPIRE and data providers. GeoInformatics talked with Snowflake Software's Melissa Burns (Marketing Manager) and Eddie Curtis (Chief Technology Officer) about the company's decision to focus more on data exchange, as well as its implementation of GML. They also explain the roles that the GO Loader and GO Publisher products fulfil. By Eric van Rees
Question: Last year the company made a change by becoming a Data eXchange Company. Could you explain to our readers what caused this change and how this works out in practice, in terms of products, solutions and especially new markets, such as aviation and defence?
Melissa Burns: Since Snowflake started back in 2001, we've noticed a transformation in the needs of geospatial data users. When we first started, the industry was learning, and we were on the learning curve with it. Over the last couple of years, much like the industry itself, we've grown up. For a start, we now have a common set of data exchange standards (that would be GML, then!). However, many users weren't necessarily able to get the most out of their data because many of us technology vendors were not servicing their needs; we were too focused on the technology and the data. We needed to focus on the objective: clear and effective communication. Users were struggling, and we quickly realised that simply having good technical standards is not enough. People need to be able to use those standards without spending months becoming experts in them first.
They need to apply them to their existing business and therefore their existing systems. Step forth Snowflake Software the Data Exchange Company. Today, users need to be able to load, model, manage, transform, translate, publish, visualise and share their data in a way that isn’t inhibited by existing infrastructure (needing new investment), and in a way that seamlessly integrates with other data sets to really get the most out of them. They need to be able to exchange data between internal departments, between external companies, between legacy and next generation infrastructure and between schemas and frameworks. Snowflake’s data exchange approach means that Open Standards are always at the core of our products. The user can quickly benefit from clear and accessible information to make critical business decisions, whether they need to load or publish, or even do both. Whilst our GO Loader and GO Publisher products complete the cycle of open data exchange, the addition of our training and consultancy really supports our customers in being able to get up and running with open data exchange and fully realise the value.
In terms of our penetration into the Aviation and Defence markets, that’s just a sign of how much the industry has grown up. Now that we have GML, we are finding that other industries that have adopted it face exactly the same data exchange challenges as the rest of us. And because the data in those industries is often relied upon for critical and timely decision making, solving the data exchange challenges by using tools like GO Loader and GO Publisher, opening the data up for analysis, management and quick decision making, was key.
Q: Snowflake has been involved in the rollout of GML maps by Ordnance Survey. Have you already seen everything, or does every country experience its own specific problems with the implementation of GML?
Eddie Curtis: We certainly haven’t seen everything yet. GML is much more than just a format for distributing maps. It gets used in many ways for many purposes. However, since we’ve been helping people use GML in a wide variety of sectors, I think we have seen quite a lot of the challenges. With OS MasterMap, the issue was how to distribute a very large data set (around 500 million features) to hundreds of customers. That is very different from the Aviation world, where GML is being used to send notices about changes to airports and airspace on a minute-by-minute basis and temporal aspects of the data are critical. In meteorology the size and complexity of each individual weather forecast can present challenges, but it has many similarities with air quality monitoring. Within a sector, the issues are often similar when moving from one country to another. For example, land parcel information in one country tends to have many similarities to land parcels from other countries. But there are always variations too, since each country has different legal processes and administrative structures. The results of implementing GML as a common standard far outweigh the challenges associated with the implementation.
Q: GML is being implemented step by step as a basis for object oriented map material in Europe. Use is made obligatory by the government, but users don’t seem to be ready for it yet. Platform suppliers are also sending out mixed signals. What is so difficult about a file format that only seems to be supported by smaller organizations such as Safe and Snowflake?
Eddie Curtis: One of the good things about having a mature Open Standard like GML is that it creates opportunities for people to share information that wasn’t previously feasible to exchange. As a result, in many situations where GML is being promoted, it is in the context of wider business change. Quite often, data exchange where GML is to be used is a completely new business process. That kind of change is always hard work because it involves business change as well as changes in technology. The main reason Open Standards are considered a ‘good thing’ is that they allow you to choose best-of-breed components for each task and
integrate components from different suppliers. It should not be surprising to see small companies getting involved to complement established platforms with extra functionality, since this is very much the ethos of the Open Standards approach. Data exchange is an area of expertise just as survey or GIS analysis is, so it makes a lot of sense to use specialist data exchange tools for these new processes, but integrate them with the existing platforms so that you can maintain business as usual for established processes. GML isn’t a difficult technology to work with. If you are implementing open data exchange for the first time, you are bound to face a bit of a learning curve (which is one reason our training courses are so popular!). One important thing to remember is that exchanging data with someone who does the same job as you is very different from communicating with someone in a different walk of life. You can’t rely on them to understand your terminology or your data structures. Before you can exchange information you need to agree a common language that you both understand. GML provides a toolkit for you to define that language, which is known as an ‘application schema’ in GML terminology. Once you have agreed, you will find that it’s not the language you use in your organisation, nor is it the language of the person receiving the data. It is a common ground between the two – a lingua franca for the community sharing data. This means that the provider of the data will have to do some translation to turn their data into something the recipient can understand. The recipient will have to do some further translation to convert the data into a form their applications and business processes can use. Thus GML becomes everyone’s second language and allows people to talk to people they couldn’t previously. That does take a little work to think through and configure, but the benefits of effective communication are immense.
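The ‘lingua franca’ pattern Curtis describes can be sketched in a few lines. The schema and field names below are invented purely for illustration (a real GML application schema is defined in XML Schema and agreed by the whole community), but the shape of the translation chain is the same:

```python
# Sketch of data exchange through a shared application schema.
# Provider and recipient each keep their own internal vocabulary;
# both only ever translate to and from the agreed common model.
# All field names here are hypothetical.

# Provider's internal fields -> common (application schema) terms
PROVIDER_TO_COMMON = {"road_id": "gml_id", "name_txt": "roadName", "class_cd": "roadClass"}
# Common terms -> recipient's internal vocabulary
COMMON_TO_RECIPIENT = {"gml_id": "feature_key", "roadName": "label", "roadClass": "category"}

def translate(record, mapping):
    """Rename a record's fields according to a schema mapping."""
    return {mapping[k]: v for k, v in record.items() if k in mapping}

provider_record = {"road_id": "R42", "name_txt": "High Street", "class_cd": "A"}
common = translate(provider_record, PROVIDER_TO_COMMON)    # provider -> lingua franca
received = translate(common, COMMON_TO_RECIPIENT)          # lingua franca -> recipient
print(received)  # {'feature_key': 'R42', 'label': 'High Street', 'category': 'A'}
```

The point of the middle step is that neither party needs to know the other’s internal model; each maintains only its own mapping to the agreed schema.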
Q: How would you explain the difference between FME (supports 200 file formats) and the GML Viewer, which is used mostly for GML display? Are they competing plugins?
Melissa Burns: This is a question we get asked a lot!
Our GML Viewer is supplementary to our GO Loader and GO Publisher products in the same way that the FME Universal Viewer is a component part of FME. The confusion between Snowflake and FME usually comes in with our loading and publishing tools, and the difference between a data format and a data exchange standard or application schema. At first glance FME and Snowflake’s products might appear to be competitors, but when you look a little closer you will see that they fulfill different roles. In fact, nearly all Snowflake’s customers also use FME, but to solve a different problem. What we are focused on at Snowflake is how to facilitate data flows to and from enterprise IT systems. For us, the creation of Open Standards for spatial data was the big opportunity to help people open up their data. (It is no coincidence that Snowflake was set up at the same time as the first large scale adoption of GML.) With hundreds of data formats, FME is a fantastic ‘Swiss-army knife’ for communicating between geographical systems. GO Publisher lets you reach out by connecting your corporate data via web standards. GO Loader lets you consume information from outside organisations into your internal, corporate systems. There are two things that make those lines of communication work: data model translation, and web services. Giving clients on-the-fly self-service access to data in a choice of data models means that information can move from where it is created into web applications, mash-ups and other corporate data stores, not just into the hands of other professional geographers. At Snowflake we work with standards for business process orchestration, web services and data modeling on a daily basis – not just “geo” standards. Whilst that is the case, people do still ask the question ‘How do you compare to FME?’, so here’s a basic overview of how we’re different:
Q: What’s the difference?
Eddie Curtis: FME – think formats and think existing data – over 250 supported formats. Snowflake – think standards and think past, present and future – infinite numbers supported. We view Safe Software as industry role models and we’ve got a great working relationship with them. We hold Don & Dale in high regard. Ian and I often chat about their success over a beer on a Friday night. If Snowflake could achieve what Safe have, we’d be very happy.
1. GO Loader
Whilst FME is a great tool for transforming to different file formats, GO Loader goes a step further and translates the data between different XML and GML schemas. This generic approach means that GO Loader is a future-proof technology, based on Open Standards, and ensures interoperability. GO Loader offers more than just a loading tool and has several features and functions for managing the data within your database, which means you can get more out of the data. It comes with plug-ins and project packs that add specific support for different datasets such as OS MasterMap in the UK, NEN3610 in the Netherlands or AIXM 5.1 for Aviation (to name but a few). This means you can really get the most out of the data at no additional cost. With its unique ‘schema aware’ technology, when new application schemas and datasets are released, GO Loader can immediately handle the data without the need for software development to catch up with the new technology. It can even handle schemas that haven’t yet been created! A great example of this is in the UK, where our OS MasterMap customers could immediately manage a brand new dataset, OS VectorMap, all because the datasets were released in the data exchange standard, GML.
2. GO Publisher
Whilst FME offers a transformation tool, GO Publisher offers a translation tool that can serve highly complex, on-the-fly model translations. This means that you don’t end up with an extra database or intermediate file. GO Publisher can support multiple output translations easily and cost-effectively. With its unique ‘schema mapping’ technology, when a new GML schema is created for use, GO Publisher can immediately handle the data and translate to the new schema without the need for software development to catch up with the new technology. And because so many organisations are adopting Open Standards, it means you can get even more functionality out of your data, such as publishing to Google Earth for visualisation (because you can translate your data and publish out to KML, the data visualisation standard). GO Publisher was the first commercial product to be awarded the Open Geospatial Consortium’s (OGC) Compliance Certificate for its support of the WFS 1.0.0 and WFS 1.1.0 Open Standards, and we’re currently being tested for WFS 2.0.0 compliance.
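To make the ‘schema aware’ loading idea concrete: deriving database structure from whatever GML arrives, rather than hard-coding it in advance, can be sketched with nothing but Python’s standard library. This is not Snowflake’s implementation, and the namespace and feature names are invented; it only illustrates the general pattern:

```python
import sqlite3
import xml.etree.ElementTree as ET

# A tiny, hypothetical GML-like document; real GML uses agreed application schemas.
GML = """<FeatureCollection xmlns:app="http://example.com/app">
  <app:Road><app:name>High Street</app:name><app:surface>asphalt</app:surface></app:Road>
  <app:Road><app:name>Mill Lane</app:name><app:surface>gravel</app:surface></app:Road>
</FeatureCollection>"""

def load(xml_text, db):
    """Load features into tables derived from the incoming schema itself."""
    root = ET.fromstring(xml_text)
    for feature in root:
        table = feature.tag.split("}")[-1]               # feature type, e.g. 'Road'
        cols = {c.tag.split("}")[-1]: c.text for c in feature}
        # 'Schema aware': create the table from whatever properties arrive.
        db.execute("CREATE TABLE IF NOT EXISTS %s (%s)" % (table, ", ".join(cols)))
        db.execute("INSERT INTO %s VALUES (%s)" % (table, ", ".join("?" * len(cols))),
                   list(cols.values()))

db = sqlite3.connect(":memory:")
load(GML, db)
print(db.execute("SELECT name FROM Road").fetchall())  # [('High Street',), ('Mill Lane',)]
```

A new feature type or property in the input simply becomes a new table or column; no code change is needed, which is the essence of the schema-aware claim.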
Q: Snowflake Software offers a ‘hands-on’ INSPIRE training course. Do you see a difference in demand for courses like this, geographically speaking? If so, how do you explain this ‘difference’? Where did the need for this specific training course originate?
Melissa Burns: Our INSPIRE training course was a recent addition to our GML and XML training program and was based on customer demand. Because of that we’ve been fully booked for almost every course we’ve run. INSPIRE isn’t going away. Government departments and businesses responsible for providing public geo-spatial data are starting to really think about what INSPIRE means for them and how they can implement it. But, because we’re still in the fairly early stages of implementation in terms of the INSPIRE dates, we’re finding a lot of our customers are really confused about exactly what is, and more importantly what isn’t necessary for INSPIRE compliance.
Nested Data Structures
Here at Snowflake we’re blessed with the brains of several INSPIRE experts who run and support the INSPIRE training. Debbie Wilson, who runs the training course, worked at DEFRA as part of the UK Location Programme and offered her expertise on INSPIRE implementation. Additionally, our very own Ian Painter (MD) is on the AGI INSPIRE Working Group and helped compile the training. Geographically, we’re seeing a surge of interest from Eastern European countries such as Croatia, Slovenia and Poland. We’re also seeing massive interest and movement in The Netherlands and Germany, where INSPIRE is really beginning to be taken seriously and they’re ahead of
the game in terms of implementation considerations. Here in the UK, we’re slightly behind. We’re talking about it, but we’re not doing it. Our national mapping agency Ordnance Survey GB still publishes data in GML 2 rather than INSPIRE-compliant GML 3.2.1. At the time of writing, possibly due to our political situation with a national vote due soon, people are stalling. But once the situation settles down and we can see the overall vision for the next couple of years, we expect INSPIRE to really take hold. It’s beginning already, but not as fast as in some other European countries. INSPIRE is complicated. It’s time consuming. It’s confusing. There’s so much information available about INSPIRE, but it’s often product specific and it is difficult to extract the really important nuggets of information from the noise. Organisations with commitments to publish to INSPIRE standards have been screaming out for some simple, practical advice. And that’s what our training offers. We try and only give you the information you need to make INSPIRE easily understandable (and without the product spin). We aim to simplify it so you know exactly what you need to do in terms of implementation and when. We also make it black and white in terms of what you do and don’t need to do.
You can get in touch with Snowflake Software by: Visiting their website: www.snowflakesoftware.com Reading their blog: http://blogs.snowflakesoftware.com/news/ Following them on Twitter: www.twitter.com/sflakesoftware Joining their GML discussion group on LinkedIn (just search ‘GML’ in www.LinkedIn.com).
Latest News? Visit www.geoinformatics.com
Translate, Transform, Integrate and Deliver Data
With a new version of Safe Software’s FME Desktop and Server products, Don Murray (President of Safe Software) and Dale Lutz (VP Development) are travelling the globe and meeting users of their products. But their travels are designed not only to discuss new technology, but also to hear the needs of the user community. During the FMEdays in Münster, Germany, Don Murray and Dale Lutz discussed some of the recent developments in the geospatial domain and their relationship with Safe Software’s market view. Other topics discussed were cloud computing, 3D data, meeting users’ needs, and partnerships with GIS vendors. By Eric van Rees
Before Don Murray and Dale Lutz left for their North American 10-city tour called ‘2010: An FME Odyssey’, they visited Europe for a similar series of user meetings. They started off in Münster, Germany, where during the course of three days a program was put together with user presentations, product presentations and training sessions. During their opening session, Murray and Lutz discussed the new release of FME 2010, which includes a new version of FME Desktop, as well as FME Server. Since an in-depth analysis of FME 2010 has already been covered by this magazine (please refer to GeoInformatics issue 2, March 2010), this interview focuses on Safe Software’s product strategy, technology developments such as cloud computing, and how the company continues to meet the needs of the user.
What is striking about FME user meetings is how well Murray and Lutz know their users and their needs: during presentations, users and their work are addressed personally. The need for better data access through FME technology keeps growing. At the moment there are more than 7,500 users of FME technology across 116 countries. Murray describes several common uses for FME today: “CAD to GIS is a very common scenario that Safe Software has been helping address since 1993. A technical person uses our authoring environment, FME Workbench, to define what needs to happen with their data; specifically, how to read features and attributes from their CAD data and restructure them into a useable dataset for the applications or end users who need access to it.” Data migration is another use of FME: “We’re also seeing organizations that are migrating their technology to spatial databases, and in many cases FME is used to support the legacy applications they used to run, in order to push the data back out.” As far as FME Server goes, the main use case is data distribution: “An organization wants to make their data available to internal or external stakeholders. Now, with FME Server, the technical user who understands the input-output data model can make this knowledge available through FME workspaces on the web. Before FME Server, somebody would have to find the data or hire somebody to actually do that.”
Over the years, GIS vendors have embraced FME technology in their own way. For example, Bentley Systems and Trimble both announced
their FME integration some time ago. Judging by the number of GIS vendors that have embraced FME technology, it becomes clear that Safe Software is complementary to all the vendors who build GIS. Don Murray: “Today, they recognize that more and more, they’re not just a pure one-vendor solution. Their customers need to be able to share data among applications and these vendors are happy to have us do that task by enabling them to leverage our technology.” It so happens that a number of vendors (such as Autodesk, MapInfo, ESRI, Intergraph and ERDAS) license FME technology rather than reinventing it themselves.
Indeed, moving data is the heart of what FME technology does. Through workspaces, a user of FME Desktop can set up a set of rules to translate input data to another format, transform data into a specific data model, or integrate different data types all at once. The end result can be delivered to end users in the structure and format they desire. But as with any data, there can be bad data that will cause problems when used or brought together. To make people aware of the quality of their input data, FME includes a data viewer that enables users to view their data before, during and after the conversion process. Murray: “When solving a data moving challenge, users sometimes think their source data is better than it really is. Our data inspection tool, FME Viewer, helps users see exactly what they have in their source data.” Since FME is a technical product, being able to understand the input data is indispensable. Murray: “You have to know what you’re working with. For instance, with 3D, people need to be able to inspect their data in 3D, and for this we have created the next generation of our data inspector. PDF is a very efficient way of sharing data with people who don’t have a 3D visualization tool. Everybody seems to have Acrobat Reader; it’s sort of a de facto standard.”
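The workspace idea, a declarative chain of readers, transformers and writers, is generic and can be sketched independently of FME itself. The reader, transformer and writer below are invented for illustration and are not FME components:

```python
# Minimal reader -> transformers -> writer pipeline: the pattern behind
# translate/transform/integrate workflows. Purely illustrative.

def read_csv_row(line):                      # 'reader': parse one source record
    x, y, name = line.split(",")
    return {"x": float(x), "y": float(y), "name": name}

def shift(feature, dx=100.0, dy=-50.0):      # 'transformer': a fake coordinate shift
    feature["x"] += dx
    feature["y"] += dy
    return feature

def to_geojson(feature):                     # 'writer': restructure for the target system
    return {"type": "Feature",
            "geometry": {"type": "Point", "coordinates": [feature["x"], feature["y"]]},
            "properties": {"name": feature["name"]}}

def run_workspace(lines, transformers):
    """Run every record through the configured chain of transformers."""
    for line in lines:
        feature = read_csv_row(line)
        for t in transformers:
            feature = t(feature)
        yield to_geojson(feature)

out = list(run_workspace(["1.0,2.0,well", "3.5,4.5,mast"], [shift]))
print(out[0]["geometry"]["coordinates"])  # [101.0, -48.0]
```

Adding another step to the translation is a matter of appending one more function to the transformer list, which is what makes the workspace model so composable.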
Looking ahead, there’s a lot of talk these days about moving into the cloud. In discussing the recent partnership with WeoGeo, a North American company that manages and serves maps and mapping data in the cloud, Lutz and Murray both think this will eventually take place and are confident about the possibilities, but they also see the challenges that are crucial for success. Murray: “It’s a new way of deploying, but there are challenges of course: you have to get the data into the cloud and many organizations at this point are concerned with that. You have to understand what that means: are we giving others rights to the data or not? You also have to move your data into the cloud and out of the cloud, because that is probably the biggest cost. And not so much moving it out, but moving it up.” Murray sees no problems for deploying a desktop application, but this changes when things get bigger: “if you are going to deploy a big application in the cloud, you need a significant investment in the hardware to be able to deploy it. Now with Amazon web services, you can deploy small and as your application grows, it just automatically ramps up and down. Your cost can be incremental as the demands for your services grow. In the cloud, you’re only paying for what you use, as opposed to hardware infrastructure.”
For a company that focuses on moving data, data integration is always a concern. Software vendors have all tried to meet their users’ needs when it comes to this issue. But with the ever increasing amount of data types, data models, standards, products and the like, life certainly hasn’t been made any easier. Dale Lutz has a clear view on the possibility and use of data integration: “It seems to me that the world is complex enough and that any format which could satisfy the needs of all possible applications would itself be so unbelievably complex that no one could ever use it. And any format that tries quickly suffers from that problem.” An explanation for this is that the needs of both worlds are different, and the tools reflect that. Lutz: “For example, consider building information model (BIM) and GIS. These worlds are very different and their tools reflect that. Ultimately a GIS user doesn’t have the same needs as the person who’s building the place.” However, this does not mean that the two universes cannot complement each other. Lutz: “There’s a need only to take over the relevant information. Typically, the information is going to come out of the architects and be dumbed down for the GIS guy. Rarely is it going to go in the other direction.”
3D Data Format Support
With FME 2010, Safe Software has furthered its focus on the 3D realm by making it easier for users to access and visualize data in more 3D formats. Murray: “We’re taking non-3D building blocks, like CAD drawings or building footprints, and then dropping on textures and digital elevation models. If you have to do all that stuff by hand, it would take you a great deal of time, so we’re able to do that and load it into PDF or other 3D things.” The explanation for the large number of different 3D file formats is comparable to what happened earlier in the 2D world, says Murray: “3D is like the 2D story all over again. There are vendor versions and there are standards. For example, there’s IFC, which is a standard, and then you have Revit, which is a vendor product. Then, you have things like Google Sketchup, COLLADA or Presagis OpenFlight.” Not only are there many different data types that cause interoperability problems, but there is also another cause for the new demands in 3D solutions, namely legacy. Dale Lutz: “An example of this is a common 3D format for exchange called OBJ. There isn’t any product left that uses OBJ as a native format, but there’s a legacy of using OBJ as a convenient exchange format. The challenge there is that there’s really no reference application for OBJ, so we have to cope with de facto interpretations of this legacy file format.” Lastly, within a standard there can be many variants, says Lutz: “For instance, for CityGML there’s a noise abatement extension and there are other extensions as well. All this causes us to need a small army of developers at Safe.”
During the FMEdays in Münster, a lot of requests were made for data types to be included in later versions. Both Lutz and Murray were amazed by the fact that things keep changing all the time, causing Safe Software to take notice and incorporate those changes into their products. Lutz: “The question is how to gather all the information and distill it into actions, because we can’t do everything. You have to figure out all of the things that are going on, and what things are going to be the most valuable to people. We try to make good guesses, but to be honest we know we may have hits, and we may have misses.” And there are those hits that have been a complete surprise, apparently. Both give a lot of credit to the user community, and the interaction between the community and the company is clearly inspiring. Lutz: “Part of why Don and I love to come to conferences is that it gives us a great opportunity to learn what’s most important to our users and how we can make our products and company even better.”
Eric van Rees is editor in chief of GeoInformatics. For more data insights from Don and Dale, have a look at Safe’s “It’s All About Data” blog at http://blog.safe.com, including:
http://blog.safe.com/2010/04/working-with-xml-and-loving-it/
http://fmepedia.com/index.php/Converting_Relational_Datasets_to_XML
A Collaborative Project
The Archaeological Potential for Shipwrecks
The “AMAP2 - Characterising the Potential for Wrecks” project (AMAP2), commissioned by English Heritage in October 2009, is a collaborative project between SeaZone and the University of Southampton (UoS) which seeks to improve the management of the marine historic environment by enhancing our understanding of the relationship between shipwrecks and their surrounding environment. This will be achieved through the refinement of baseline data for marine spatial planning and the development of a characterisation of the environmental variables affecting the potential for wrecks to survive on the seabed. The project will provide an evidence base for the assessment of the potential for different marine environments to harbour unrecorded wrecks. By Olivia Merritt
The aim of the AMAP2 project is to study statistical relationships between the physical nature of shipwrecks and their surrounding natural environment. The results will be used to develop a characterisation map of Areas of Maritime Archaeological Potential (AMAP) based on the environmental parameters affecting the survival of wrecks in seabed sediments. Improving the understanding of the relationships between wrecks and their environment, coupled with the results of seabed modelling undertaken by the University of Southampton (UoS), will provide a firm basis for interpreting the variables which affect the potential for wrecks to survive in different marine environments. The term ‘Archaeological Potential’ describes areas of land or seabed where it is anticipated that previously unrecorded archaeology is likely to exist and survive. The project seeks to encourage a considered interpretation of the variables affecting potential in the marine environment by demonstrating relationships between wrecks and their environment. It will not, however, attempt to generate a predictive model that would allow the estimation of the number of wrecks that might be found, or their spatial distribution.
A pilot project completed in 2008 by Bournemouth University demonstrated the potential for correlations to exist within the Eastern English Channel, leading to the commissioning of AMAP2 to further investigate and quantify these relationships across a much larger area encompassing all of England’s territorial waters. Key trends identified during AMAP1 included a strong bias in known wrecks towards the 20th century, with few iron or steel vessels reported lost but remaining unidentified. Iron and steel wrecks were found to cluster in areas of shallow sediments and dynamic seabed, irrespective of their condition, while wooden vessels tended to be concentrated closer inshore. Correlations were suggested between wrecks recorded as buried or partly buried and areas of shallow but dynamic seabed. Relationships were also identified between the materials ships were built of and their distribution, burial and location methods.
Figure 1: Shipwreck
Making the Most of Wreck Data
The information on wrecks will be sourced from two distinct databases: the Wrecks Database managed by the UK Hydrographic Office (UKHO) and licensed through SeaZone, and the National Monument Record (NMR) managed by English Heritage. There are, however, two issues to address before the information held in these databases can be taken forward for analysis. First, there are overlaps between the databases which must be identified and removed so that the project has a single source of wreck information to work from. Second, much of the data which will be useful to the project is held in lengthy descriptive text fields. Therefore, the AMAP2 project will initially seek to compare and identify matching records within the databases (Figure 2), to enable the best use to be made of available physical and circumstantial information on each wreck site. During this process, the project seeks to further develop interoperability between the wreck data published by the UKHO and historical data available from the NMR, thereby enhancing the usefulness and accessibility of both datasets beyond the scope of this project. Data significant to understanding trends in the condition of wrecks on the seabed, such as age, construction materials, distribution on the seabed and burial environment, are being extracted to produce an enhanced database of environmental shipwreck characteristics.
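The deduplication step, matching a UKHO wreck record to its NMR counterpart, is essentially record linkage. The rule, thresholds and sample records below are invented for illustration (the real project works from far richer attributes than a name and a position), but they show the shape of such a matcher:

```python
import math
from difflib import SequenceMatcher

def distance_m(a, b):
    """Approximate distance in metres between two (lat, lon) points (haversine)."""
    r = 6_371_000.0
    p1, p2 = math.radians(a[0]), math.radians(b[0])
    dp = p2 - p1
    dl = math.radians(b[1] - a[1])
    h = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(h))

def is_match(ukho, nmr, max_dist=500.0, min_name_sim=0.6):
    """Hypothetical rule: same wreck if positions are close and names are similar."""
    close = distance_m(ukho["pos"], nmr["pos"]) <= max_dist
    similar = SequenceMatcher(None, ukho["name"].lower(),
                              nmr["name"].lower()).ratio() >= min_name_sim
    return close and similar

ukho_rec = {"name": "SS Mendi", "pos": (50.533, -1.350)}
nmr_rec = {"name": "SS Mendi (wreck)", "pos": (50.534, -1.351)}
print(is_match(ukho_rec, nmr_rec))  # True
```

A real matcher would also have to handle the spatial discrepancies between the two databases' wreck geometries (Figure 2), which is why a distance tolerance rather than exact coordinate equality is the natural starting point.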
ment type and depth, seabed morphology, water depth, sediment mobility.
Modelling Sediment Dynamics
The AMAP2 project is being run in collaboration with UoS with an aim to integrate the results of sediment dynamics modelling techniques conducted for the EU funded MACHU Programme (www.machuproject.eu) with the model built to generate the AMAP2 environmental characterisation. An essential component for effective underwater cultural heritage management is a clear understanding of the sediment dynamics of both regional areas encompassing numerous archaeological sites and individual sites themselves. The successfully developed approach for MACHU has demonstrated the clear capability of such numerical models to identify areas of erosion, accumulation and nil change under a range of ambient and extreme conditions. Further, they are able to identify direction and order of magnitude of sediment transport.
Building a Characterisation of Potential
The characterisation will be constructed using the results of mapping of wreck characteristics to environmental conditions using statistical and spatial analysis. The project draws close parallels with the requirements for generating marine habitat maps and is therefore seeking to adopt or adapt, where possible, the statistical techniques employed for this purpose.
The development of a characterisation map of the environmental variables and trends in wreck data which determine the potential for archaeological materials to survive in different marine environments will encourage a more justified assessment of the archaeological potential for a particular marine environment to harbour shipwrecks during the process of marine planning. The creation of a digital characterisation map of archaeological Figure 2 – Spatial discrepancies between wreck geometries in the potential and the consequent UKHO and NMR databases enhancement of data, core to the MACHU has focused primarily on aggregate licensing process, will the development of a robust, coarse resolution, numerical model for the enhance the approach to marine spatial planning and benefit the marine North-west European Shelf, with a higher resolution nested domain cenindustry as a whole. tred on the Goodwin Sands, in the Dover Straits. SeaZone and UoS believe AMAP2 will help build firm foundations for future The AMAP2 project builds on the work undertaken for the MACHU promarine planning and research, promoting consistency, and encouraging gramme, using high resolution bathymetric modelling generated by the enhancement and interoperability of digital marine data. SeaZone to build a sediment transport model of the Thames Estuary and Olivia Merritt, Heritage/GIS Consultant, SeaZone Goodwin Sands for use in the development of Email: Olivia.Merritt@SeaZone.com the AMAP characterisation across a test area. Internet: www.SeaZone.com The final outputs of the MACHU model are a description of the net sediment transport pathways and the nature of gross and/or sudden changes in seabed level (erosion or accumulation) as a response from either ambient tidal and wave conditions or extreme conditions (the passage of a storm through the area), as well as information of the direction and magnitude of sediment transport (e.g. Figure 3). 
In addition to the sediment dynamics model, a wide range of environmental data considered relevant to the formation of wreck sites and their survival in the marine environment has been collated, with particular focus on datasets available on a national scale. These include the best available data relating to superficial sediments.
Figure 3 - Bed Level Change and sediment transport magnitude and direction for the Goodwin Sands.
The GeoSAR aircraft, a modified Gulfstream-II jet aircraft. P-band antennas are installed in the fairings on each wingtip, while the X-band antennas are in the fairings under the wings near the fuselage.
Making Mapping the ‘Impossible’ Possible
In less than a decade of commercial operations, Fugro EarthData’s GeoSAR system has earned a reputation for mapping the impossible. GeoSAR is a dual-band airborne interferometric radar system that is capable of rapidly mapping large areas in any weather conditions. In 2009 Fugro EarthData, which integrated and operates the system commercially, used GeoSAR to complete one of the most challenging terrestrial mapping projects the firm had ever attempted. By Kevin P. Corbley
“The tropical region of Australasia has been a challenge to every remote sensing platform out there,” said L.G. (Jake) Jenkins, Fugro EarthData’s Senior Vice President. “GeoSAR took it on and successfully mapped it.” Jenkins explained that tropical areas in Australasia embody nearly every geographic and topographic trait that makes mapping difficult. Located just south of the equator, the region is characterized by almost constant cloud cover that renders optical airborne and satellite imaging systems impractical. And even when an optical image can be captured, the dense tropical rain forests that carpet much of the land mass keep the surface terrain hidden from view. For decades, the area’s treacherous topography has thwarted attempts at detailed ground surveys.
With the search for natural resources accelerating around the world, the region has generated considerable interest in recent years. For the project, the government of Australia sought a mapping system that could provide images and terrain models of land concealed below the forest canopy, and identified the airborne GeoSAR system as the only platform that could penetrate both the clouds and the jungle to return accurate surface data over the entire area in a reasonable period of time.
Topographic Mapping with IFSAR
As the system integrator for GeoSAR, Fugro worked closely with the U.S. government to develop a commercially viable interferometric synthetic aperture radar (SAR) platform using custom and off-the-shelf components. For mapping purposes, SAR offers numerous advantages, most notably the ability of radar signals to pass through clouds, many types of vegetation and even unconsolidated surface materials to return images of what lies beneath. “Radar operates day or night in most weather conditions, enabling it to maintain aggressive mapping schedules with few interruptions,” said Roy Hill, Project Manager at Fugro.
Interferometric SAR, or IFSAR, is a variation on synthetic aperture radar technology. It uses two antennas, separated by a precise distance on the aircraft, to send and receive the radar pulses that are emitted from the system, bounce off the Earth’s surface, and return to the sensor. By measuring the phase difference between the reflected signals received at the two antennas, a processor can calculate extremely accurate surface elevation values.
The GeoSAR development team took the IFSAR technology several steps further to expand its mapping capabilities. The decision was made to operate the IFSAR in two separate radar bands, X and P, giving it the ability to map in both short and long wavelengths simultaneously. The X-band operates at a frequency of 9630-9790 MHz with a relatively short 3-centimeter wavelength that passes through clouds but provides a return signal from the first reflective surface it encounters, such as tree canopies, man-made structures and solid ground. The P-band, however, functions at 270-430 MHz and has a one-meter-long wavelength that penetrates dense vegetation, and the top layers of soil and sand in very arid regions. This dual-band collection enables GeoSAR to simultaneously collect first-surface elevation values with the X-band and bare-Earth elevation models with the P-band, generating two valuable DEMs from a single collect.
GeoSAR’s dual-band IFSAR generates a set of black-and-white scenes called magnitude images from the X and P bands, each revealing different information about the topography. The X-band image provides details of surface features covering the ground. In projects completed in South America and Australasia, this surface information was mainly limited to the vegetative canopy and the few rock outcrops and water bodies not obscured by the jungle. By comparison, the P-band peers through the canopy to image bare-Earth features below, including buildings, roads, paths, fences, field delineations, rivers and streams – objects that are often completely invisible in the X-band magnitude image. Together, the X and P bands capture a comprehensive view of the topographic, natural and man-made features across a region. From these datasets, users are able to extract 3D features and generate maps at 1:25,000 and 1:50,000 scale.
Several modifications to the configuration of the IFSAR system on the aircraft have contributed significantly to the accuracy of the elevation data it captures. Among these was the selection of the P band over other radar frequencies: P band was chosen because it was the longest wavelength that could function on a cost-effective aircraft without compromising interferometric performance. Separating the antenna pairs by a distance of multiple wavelengths maximizes the phase differences between emitted and reflected signals, yielding a more accurate elevation measurement.
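The phase-to-height relationship at the core of IFSAR can be sketched numerically. The numbers below are illustrative only – they are not GeoSAR's actual baseline, range or look angle – and the formula is the common single-pass, flat-terrain approximation (with the factor of two of a "ping-pong" mode in which both antennas transmit and receive):

```python
import math

# Illustrative geometry -- assumed values, not GeoSAR's real parameters.
WAVELENGTH_M = 0.03                   # X-band, ~3 cm (from the article)
BASELINE_M = 2.6                      # assumed antenna separation
SLANT_RANGE_M = 20_000.0              # assumed slant range to the ground point
LOOK_ANGLE_RAD = math.radians(45.0)   # assumed look angle

def height_of_ambiguity(wavelength, baseline, slant_range, look_angle):
    """Terrain height change producing one full 2*pi cycle of
    interferometric phase (flat-terrain, ping-pong approximation)."""
    return wavelength * slant_range * math.sin(look_angle) / (2.0 * baseline)

def phase_to_height(delta_phi, wavelength, baseline, slant_range, look_angle):
    """Convert an unwrapped phase difference (radians) between the two
    antennas into a relative terrain height (meters)."""
    h_amb = height_of_ambiguity(wavelength, baseline, slant_range, look_angle)
    return delta_phi / (2.0 * math.pi) * h_amb

h_amb = height_of_ambiguity(WAVELENGTH_M, BASELINE_M, SLANT_RANGE_M, LOOK_ANGLE_RAD)
print(f"height of ambiguity: {h_amb:.1f} m")  # ~81.6 m with these numbers
```

The sketch makes the design trade-off visible: a longer baseline shrinks the height of ambiguity, so each radian of measured phase corresponds to a smaller (more precise) height increment – which is exactly why the antenna pairs are separated by multiple wavelengths.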
Latest News? Visit www.geoinformatics.com
Data layers extracted from dual-band GeoSAR data.
GeoSAR collects X- and P-band IFSAR data simultaneously and on both sides of the aircraft.
At the time of integration, the Gulfstream II had the longest wingspan – 20 meters – of a civilian jet that could be used affordably for mapping, and the P-band antenna pairs are located in pods on its wingtips. The X-band pods do not need as much spacing and are positioned much closer together under the wings near the fuselage. The other major structural configuration contributing to the quality of GeoSAR’s elevation mapping is the system’s dual-side-looking design. In this configuration, each antenna pod contains twin pairs of antennas pointing to different sides of the aircraft, with some overlap in between. As a result, when the aircraft is flown with a standard 30 percent overlap in its flight lines, the dual-sided IFSAR simultaneously collects multiple look angles of every point on the ground with both its X and P bands. This gives the ground processing system a greater number of signal reflectance measurements to use in calculating elevation values. The result is more accurate X-band surface and P-band bare-Earth digital elevation models than could be generated with a single-look system.
GeoSAR maps Mt. Huila through the clouds.
“The dual-side looking capability really pays off in tropical areas where the terrain is rugged and has extreme elevation changes in small areas,” said Hill. At the standard GeoSAR operating altitude of about 12,000 meters, the typical swath width of each antenna pair is about 20 kilometers. Even in relatively rugged terrain, this swath width provides sufficient overlap between the two sets of antennas to capture a minimum of two to four redundant points for each spot on the ground. However, when the topography is accentuated by deep ravines or steep cliff faces, such redundancy might be reduced – or worse, a ground point may be completely blocked from the sensor’s view by the terrain. “To compensate for the extreme terrain, we tightened up our flight lines for greater overlap over rugged areas to ensure we captured at least the minimum number of looks at every ground point,” said Hill. “This allowed us to achieve consistent elevation mapping throughout the project area, totaling 388,000 square kilometers.” In addition to capturing highly accurate elevation measurements, the dual-side look enables GeoSAR to map large areas in short periods of time, covering twice as much ground in one flight line as a single-look SAR can. This capability, combined with the altitude and speed of the Gulfstream II, allows the system to collect image and elevation data at a rate of 300 square kilometers per minute.
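The quoted figures can be sanity-checked with simple arithmetic: dual-side collection gives roughly 2 × 20 km of swath, so a 300 km²/min rate implies an effective along-track speed of 300 / 40 = 7.5 km per minute, or 450 km/h – comfortably within a Gulfstream II's cruise capability, leaving margin for the 30 percent flight-line overlap. A sketch:

```python
# Figures taken from the article; the arithmetic is just a consistency check.
SWATH_PER_SIDE_KM = 20.0   # typical swath width of each antenna pair
SIDES = 2                  # dual-side-looking design
RATE_KM2_PER_MIN = 300.0   # quoted collection rate

total_swath_km = SWATH_PER_SIDE_KM * SIDES

# Along-track ground speed implied by the quoted rate:
implied_speed_km_min = RATE_KM2_PER_MIN / total_swath_km
implied_speed_kmh = implied_speed_km_min * 60.0

print(f"implied ground speed: {implied_speed_kmh:.0f} km/h")  # 450 km/h
```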
GeoSAR in Action
In mapping projects completed in recent years, the unique capabilities of GeoSAR, such as its dual look and dual band designs, have proven advantageous to commercial operations.
Colorized land classification generated using X-band and P-band imagery.
Fast turnaround time was the primary selling point in a 2009 project that Fugro EarthData completed for a European oil company working in South America. The company was under strict time constraints to begin construction of a pipeline from an exploration block in northwest Peru to a terminal located 200 kilometers to the south. Extremely treacherous terrain lay in between, and the operator wanted to map the landscape to select the most cost-effective route for the pipeline. The timeline, cloud cover, dense vegetation and topography eliminated detailed ground surveys and optical airborne imaging as mapping options. “GeoSAR covered the 3,000-square-mile project area, which was divided into multiple potential corridors, in less than two days,” said Caroline Tyra, Fugro Client Program Manager. “We delivered the final end products in a month.” From the oil company’s perspective, the value of quick acquisition by GeoSAR was doubly appealing because the system ultimately generated the two map products required for comparing construction costs in the proposed corridors. The elevation models provided the slope measurements and topographic detail showing which route had the least variation in terrain – a desirable construction trait – while the X- and P-band magnitude images delineated wetlands and other land cover features that would have to be traversed or avoided, with potentially negative impacts on construction costs.
“Prior to the flight, the oil company believed it had selected the best route for the pipeline, but after examining the IFSAR data in three dimensions, the company changed 90 percent of the proposed corridor,” said Tyra. “The new route will save them millions of dollars in construction and operating costs.”
GeoSAR-derived 1:50,000 scale topographic map
Map Once, Use Many Times
While successful projects in Australasia, South America and other tropical regions have established GeoSAR as the airborne mapping system of choice in cloud- and jungle-covered areas, Fugro’s Jenkins observed that interest in the platform has now expanded to other parts of the world and to other end-user applications. The appeal is the speed with which GeoSAR can map large states or entire countries to generate 1:25,000 and 1:50,000-scale framework data layers that are the basis for a broad range of applications, from natural resources management to economic development. “The biggest financial risk to a large-area mapping project is a delay caused by weather,” said Jenkins. “With GeoSAR, that risk is eliminated and an entire country can be mapped in a few months.” To continue expansion of the system, Fugro has invested in assembling a team of IFSAR scientists and technicians at its Frederick, Maryland, facility, where the GeoSAR aircraft is based, to develop other unique uses of GeoSAR data.
Working in collaboration with commercial customers or in independent experimental projects, this team focuses on enhancing the technology and its applications. Among these have been pilot projects in polar areas and arid regions, in addition to the equatorial regions GeoSAR was originally designed for. At the poles, GeoSAR has shown great potential for penetrating snow and ice with its P-band to measure their thickness – data that could be valuable in determining the pace of glacial melt in climate change research. In more commercial activities in arid zones, the same P-band is demonstrating the ability to peer through several meters of sand and unconsolidated overburden to find buried pipes, underground water sources and caves.
A major new IFSAR application now under development by the Fugro science team is the use of GeoSAR for biomass calculation in support of climate change projects in tropical jungles. Calculating biomass from remotely sensed data is a complicated process because the species of tree must be determined along with the location of the trunk. Fugro has accomplished this by combining ground truthing with IFSAR classification techniques. Once the species and trunk locations have been determined, the P- and X-band signals can be integrated to calculate biomass, which in turn can be used to quantify the carbon sequestration capacity of the jungle area.
Fugro has also worked with ESRI Canada to create IFSAR viewing tools with the ESRI PurVIEW ArcGIS extension. This commercially available product greatly simplifies feature extraction from remotely sensed imagery by converting data to 3D with the click of a mouse. The capability is now available for GeoSAR data, allowing end users to generate synthetic stereo pairs from both the X- and P-band datasets to identify and extract features and produce topographic maps for a wide variety of applications.
“GeoSAR is at the cutting edge of large-area mapping, but we believe we have just scratched the surface of its mapping potential,” said Jenkins.
Kevin Corbley is a business consultant located in Denver, Colorado. He may be reached at www.corbleycommunications.com.
DevSum attendees could meet with ESRI development staff to discuss solutions and get their questions answered.
Thriving on Energy of Shared Innovation
2010 ESRI Developer Summit
If success is measured by the din of collaborative exchange, then the 2010 ESRI Developer Summit was a triumph. One could barely walk 10 feet without overhearing attendees exchanging ideas about how they improved their work with GIS tools. The chatter was strong evidence that GIS is an inherently interpersonal discipline that thrives on the energy of shared innovation. By Matthew DeMeritt
That collaborative spirit was enhanced last year with the addition of user presentations to the Developer Summit. With twice as many presentations this year compared to last, developers had a wide range of topics to learn about and apply to their work environments. Many of the presentations packed the largest rooms at the Palm Springs Convention Center and were followed by spirited and informative Q&A. More often than not, Q&A sessions spilled into the lobby of the venue and took on a life of their own.
ESRI selected user presentations based on their usefulness in tackling everyday problems. A standout among user presenters, Timmons Group’s Vish Uma gave two presentations on common obstacles that developers face. His first presentation, on continuous integration (CI), highlighted the commonality in workflows across a dauntingly wide spectrum of software and organizations. Uma shared best practices for overcoming the challenges posed by the GIS software development process and explained how he optimized his workflow through automation. “One of my big revelations with the last release of ArcGIS was how many tasks I could eliminate from my schedule by simply automating them,” said Uma. “All of a sudden, building and deploying solutions became more fun and opened new avenues of inspiration.”
Jack Dangermond addresses the crowd at the Plenary session.
APIs and Mashup Challenge
Presentations on APIs were a top draw. More than half of all user presentations mentioned their applicability in a multitude of scenarios. Brenden Collins and Steven Andari from Blue Raster presented on their ArcGIS …
… prize of $10,000. His Executive Compensation Mashup compared top U.S. executive salaries with the total income for selected counties in the United States. Bouwman and his DTSAgile colleague Brian Noyle also received some of the best attendance of all the user presentations. One of Bouwman’s presentations, “Ruby-fu: Using ArcGIS Server with Rails,” explained how Ruby on Rails, a popular Web development platform that powers Twitter, Hulu, and Basecamp, can be configured to work with ArcGIS Server. One of Noyle’s presentations covered the hot topic of iPhone and Android app writing. Noyle demonstrated the design and implementation of a geocoding-enabled site for location-based feedback within a user’s local community.
Many peer-to-peer discussions took place before and after the user and technical sessions.
For more information, have a look at www.esri.com/events/devsummit/index.html
President of ERDAS Joel Campbell
Emphasis on Understanding
I believe we are entering into the most exciting time of our industry’s history, where geospatial is no longer a niche industry, but has broad reaching relevance.
Joel Campbell, President of ERDAS
Last year, Joel Campbell joined ERDAS as its new President. With over 20 years of experience in the geospatial industry, Campbell is a well-known and highly regarded speaker, lecturer and trainer throughout the world. During more than a decade with ESRI, he held chief leadership and management positions in the U.S. sales operation. GeoInformatics asked Mr. Campbell about his views on the past, present and future of the geospatial industry, and about new business models and technologies such as radar. By the editors
Congratulations on your new position as President of ERDAS. You bring with you over 20 years of experience in the geospatial industry, in a variety of senior roles including sales, business development and product management. With all this experience, what can we expect from ERDAS and its product line and market strategy in the coming years?
Joel Campbell: During the first 30 years, companies within our industry focused on methods of creating and generating geographic data. Collecting current imagery and using it as the source of all geospatial data is still important, but now there is also an emphasis on understanding and thoroughly analyzing change in the data. End-users need more than just vector, raster and terrain data. They need tools that allow them to author, manage, connect and deliver geospatial information. The industry is now focused on producing powerful yet intuitive tools for detecting change and understanding its implications. These tools enable users to deliver the timely geospatial information required. As the industry has progressed, realigning its focus from data collection to data analysis, my career has followed a similar path. My time at ESRI, Definiens and GeoEye all served as significant milestones directing me towards data analysis, and using imagery to exploit change. Throughout my career, I regularly encountered ERDAS, and had a deep respect for the company’s rich history, leadership and innovation.
At ERDAS, we develop all of our plans and products around fulfilling customer needs. Our purpose is to make the customer successful, and we are constantly improving our products to ensure this happens. As the new President, I will continue to lead this organization with a customer focus. I believe we are entering the most exciting time in our industry’s history, where geospatial is no longer a niche industry but has broad-reaching relevance. ERDAS continues to be a leader, as the industry expert in handling all forms of imagery and image analysis, enabling users to easily gather the geospatial information they need.
If intelligence is added right after image capture, doesn’t this take away some of the work GIS technicians are currently doing? Or are you helping them by making their work easier?
Joel Campbell: Our tools do not eliminate the need for technicians – human interaction continues to be important throughout geospatial analysis. GIS technicians will therefore always be needed. However, their responsibilities will change as processes continue to be automated, enabling these individuals to be more efficient in their jobs and to take on additional responsibilities and quality control throughout each project’s workflow.
Looking back at the last two decades in the geospatial industry, what do you think have been the most revolutionary trends, and how do you value the industry and imagery market? What are the challenges ahead for the industry itself?
Joel Campbell: Both satellite imagery and remote sensing continue to revolutionize the way we see and interpret the world. In the 1980s, remote sensing and satellite imagery were oversold. Ours was an industry of specialist niche applications for image classification and heads-up digitizing. Tools during this time were often complex and user-intensive and were not part of a standardized workflow. Later, orthoimages were used to give context to GIS data and were used extensively in the defense realm, where interpretation was done through traditional (albeit digital) human-intensive photo-interpretation rather than software analysis. Weather satellites were one of the few operational examples of using satellite imagery.
In the 1990s and early 2000s, companies like Microsoft, Google and Oracle moved into the geospatial space, benefiting everyone involved by raising overall awareness. However, these new users were mainly using imagery as a backdrop for other data sources, and change detection and exploitation remained largely confined to the expert realm. Since then, there has been greater acceptance of geospatial technologies outside the traditional customer base. In the future, this demand will continue to grow as businesses recognize the importance of thoroughly understanding change.
We see an increasing demand for server technologies, as well as desktop applications that provide automation such as georeferencing (using tools like IMAGINE AutoSync). These kinds of tools simplify and streamline the process when existing imagery doesn’t automatically line up; an operator doesn’t need to be an expert to orthorectify. Nor are these tools limited to imagery – they apply to other types of geospatial information as well. At ERDAS, we are connecting and integrating the strengths of these desktop applications into our server technologies. For example, ERDAS APOLLO enables users to run on-demand change detection, updating existing features and performing primary feature extraction.
Moving forward, we’re introducing time and other business-critical information to make 4D and 5D systems. These are de facto parts of the overall system: people expect not just a 2D view of the world, but a dynamic 4D and 5D view that ties business information to location. Data is always a huge problem. We essentially allow customers to get more than simply static maps – answers to questions like how much green space has changed over the course of the last three years, or what percentage of my land is impervious surface. Where can I land my helicopter in an E911 situation? Where is the slope of the land less than 4%? Customers want live and up-to-date information; they don’t want to look at an elevation model that was created fifteen years ago at 15 m resolution. They want and expect more. Updating spatial information on the fly is now possible, and we’re doing this with the geoprocessing available in ERDAS APOLLO.
Apollo Feature Interoperability
These days everybody talks about the cloud. What is ERDAS’ answer to initiatives such as WeoGeo, which presents itself as an ‘iTunes for Maps’ on a cloud base?
Joel Campbell: This is a significant area of research focus for ERDAS. Within cloud computing, there are two areas of great interest. One is the ability to leverage the cloud for the scalable processing power that you need. If I received all new imagery and wanted to create a whole new orthomosaic of my entire area, that’s a compute-intensive process. Maybe I would have to have a large server system to crunch through that in a meaningful timeframe, but I would only need to do it perhaps once per year. The ability to access that power outside my organization when needed is something that is really interesting to our customers, and we’re working toward doing that.
The second part of the cloud that is equally interesting to us is the software-as-a-service (SaaS) that a cloud environment could offer. If you were an engineering company doing a feasibility study for a new highway, maybe it’s only a 90-day project, with 10 engineers working on it for 90 days. You may not want to buy a lot of software for a 90-day project, but you could use software as a service through the cloud, with the full robustness of the product, and only pay for it on a monthly basis. There’s an interesting business model there that the technology allows us to engage in.
ITC develops and transfers knowledge on geo-information science and earth observation
ITC is the largest institute for international higher education in the Netherlands, providing international education, research and project services. The aim of ITC’s activities is the international exchange of knowledge, focusing on capacity building and institutional development in developing countries and countries in transition.
Programmes in Geo-information Science and Earth Observation
Master of Science (MSc) degree (18 months)
Master degree (12 months)
Postgraduate diploma (9 months)
Diploma (9 months)
Certificate course (3 weeks-3 months)
Distance course (6 weeks)
Courses in the degree programmes
Applied Earth Sciences
Geoinformatics
Governance and Spatial Information Management
Land Administration
Natural Resources Management
Urban Planning and Management
Water Resources and Environmental Management
For more information: ITC Student Registration office, P.O. Box 6, 7500 AA Enschede, The Netherlands. E: email@example.com I: www.itc.nl
INTERNATIONAL INSTITUTE FOR GEO-INFORMATION SCIENCE AND EARTH OBSERVATION
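The on-demand queries Campbell mentions earlier in the interview – for instance, "where is the slope of the land less than 4%?" – reduce to straightforward raster arithmetic over an elevation model. The sketch below uses a tiny invented grid and plain Python; it illustrates the idea only, not ERDAS APOLLO's actual geoprocessing:

```python
# Synthetic 4x4 elevation grid (meters) on a 10 m cell size -- invented data.
CELL_M = 10.0
dem = [
    [100.0, 100.2, 100.4, 100.6],
    [100.0, 100.2, 100.4, 100.6],
    [100.0, 100.3, 100.8, 101.6],
    [100.0, 100.3, 100.8, 101.6],
]

def slope_percent(dem, cell, row, col):
    """Slope at an interior cell from central differences,
    expressed as a percentage (rise over run * 100)."""
    dz_dx = (dem[row][col + 1] - dem[row][col - 1]) / (2.0 * cell)
    dz_dy = (dem[row + 1][col] - dem[row - 1][col]) / (2.0 * cell)
    return 100.0 * (dz_dx ** 2 + dz_dy ** 2) ** 0.5

def cells_below(dem, cell, threshold_pct):
    """Interior cells whose slope is under the threshold -- e.g. candidate
    helicopter landing areas in the E911 example from the interview."""
    rows, cols = len(dem), len(dem[0])
    return [
        (r, c)
        for r in range(1, rows - 1)
        for c in range(1, cols - 1)
        if slope_percent(dem, cell, r, c) < threshold_pct
    ]

print(cells_below(dem, CELL_M, 4.0))  # -> [(1, 1), (1, 2)]
```

A production system would run the same logic as a map-algebra operation over a full-resolution DEM; the point is that "live" answers come from recomputing over current data rather than serving a pre-made map.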
I hear a lot of exciting stories about the use of radar data. Can you talk about the possibilities of this technology, the client base and the applications ERDAS offers for radar data?
Joel Campbell: With over 15 years of experience in radar image processing, ERDAS provides leading radar mapping solutions, with specialized tools for processing radar data in a standard remote sensing or GIS environment. ERDAS continues to focus on operational software, taking the best of developmental algorithms and implementing them in a user-friendly environment, easily accessible to both the novice and the expert user. With tools for georectifying, filtering and calibrating radar images, analysts can derive elevation information regardless of cloud cover, day or night, from stereo or interferometric image pairs using the IMAGINE Radar Mapping Suite. Analysts can save their radar data in any raster format, create color images to emphasize the magnitude of change, derive binary images to detect the most dramatic changes, create shapefiles for GIS applications and more. Utilizing the power of ERDAS IMAGINE, these tools are interoperable, supporting all raster file formats, with the ability to seamlessly connect radar data to any of ERDAS’ portfolio of solutions and many other geospatial products. ERDAS radar mapping products support a growing number of satellite sensors, including ERS-1/2, Envisat, RADARSAT-1/2, TerraSAR-X (including one-meter spotlight modes), COSMO-SkyMed, ALOS PALSAR and more. Data from other radar sensors are supported by a generic import interface.
ERDAS is well aware of how quickly users want data, and how they want it. What is your solution to the possibility of too much data, given that data is captured and updated all over the world, 24/7, and stored in data silos everywhere?
Joel Campbell: As the Earth changes, data organizations also have to be agile and innovative to accommodate the growing volumes of data. ERDAS APOLLO helps organize geographic information, giving users inside and outside an organization the ability to find, view and directly use it. Based on a 100% Service Oriented Architecture (SOA), ERDAS APOLLO provides a Spatial Data Infrastructure (SDI) that manages and delivers terabytes of GIS data, imagery and terrain information to customers. The heart of the SDI is the catalog, and the key component of the catalog is metadata. To achieve the vision of a connected Digital Earth, an interoperable and open catalog is critical. ERDAS APOLLO provides an out-of-the-box environment for cataloguing data and services.
Once data is organized and managed, the next step is to get the geographic information to users. It’s one thing to deliver data to users as web services; it’s another to deliver on-demand geographic information products to a community. This is possible with the delivery of on-demand geoprocessing capabilities. Together, ERDAS IMAGINE and ERDAS APOLLO support an end-to-end workflow from desktop to enterprise geoprocessing. Connecting to a catalog, users can publish spatial models that can then be extended to everyone through the internet, so that information products can be requested, visualized and used.
For more information, have a look at www.erdas.com
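Delivering geospatial data "as web services", as discussed in the interview, typically means OGC standards such as WMS. The sketch below composes a standard WMS 1.3.0 GetMap request URL; the endpoint and layer name are hypothetical, not a real APOLLO server:

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layer, bbox, width, height,
                   crs="EPSG:4326", fmt="image/png"):
    """Compose an OGC WMS 1.3.0 GetMap request URL.

    bbox is (miny, minx, maxy, maxx): WMS 1.3.0 uses the CRS's native
    axis order, which for EPSG:4326 is latitude first.
    """
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "STYLES": "",
        "CRS": crs,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": fmt,
    }
    return f"{base_url}?{urlencode(params)}"

# Hypothetical endpoint and layer -- for illustration only.
url = wms_getmap_url("https://example.com/apollo/wms", "imagery:ortho",
                     (51.0, 1.0, 52.0, 2.0), 512, 512)
print(url)
```

Because the request is a plain URL, any WMS-speaking client – a desktop GIS, a browser, or a script – can pull the same map image, which is the interoperability point being made about an SDI built on open services.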
June
02-04 June: ISPRS Commission VI Mid-Term Symposium: "Cross-Border Education for Global Geo-information", Enschede, ITC, The Netherlands. E-mail: firstname.lastname@example.org Internet: www.itc.nl/isprscom6/symposium2010
02-05 June: ACSM 2010, Baltimore, MD, U.S.A. Tel: +1 317 637 9200 x141 E-mail: email@example.com Internet: www.acsm.org/AM/Template.cfm?Section=Conferences
03 June: COMPASS10 Annual Conference, Dublin, Ireland. Internet: www.compass.ie
07-09 June: Sensors Expo & Conference, Rosemont, IL, Donald E. Stephens Convention Center, U.S.A. Tel: +1 (617) 219 8330 E-mail: firstname.lastname@example.org Internet: www.sensorsexpo.com
07-10 June: 2010 Joint Navigation Conference, Orlando, FL, Wyndham Orlando Resort, U.S.A. Tel: +1 (703) 383-9688 E-mail: email@example.com Internet: www.jointnavigation.org
08-10 June: 58th German Cartographers Day 2010, Berlin and Potsdam, Germany. E-mail: firstname.lastname@example.org Internet: http://dkt2010.dgfk.net
12-14 June: Digital Earth Summit, Nessebar, Bulgaria. Tel: +359 (887) 83 27 02 Fax: +359 (2) 866 22 01 E-mail: email@example.com Internet: www.cartography-gis.com/digitalearth
14-16 June: 2nd Workshop on Hyperspectral Image and Signal Processing, Reykjavik, Iceland. Tel: +354 525 4047 Fax: +354 525 4038 E-mail: firstname.lastname@example.org Internet: www.ieee-whispers.com
14-17 June: Intergraph 2010, Nashville, TN, U.S.A. Internet: www.intergraph2010.com/schedule/aag.aspx
14-18 June: 8th Annual Summer Institute on Geographic Information Science "Interfacing social and environmental modeling", Florence (Firenze), Italy. E-mail: email@example.com Internet: www.vespucci.org
15-18 June: Canadian Geomatics Conference, Calgary, AB, Canada. E-mail: firstname.lastname@example.org Internet: www.geoconf.ca
15-20 June: 3rd International Conference on Cartography and GIS, Nessebar, Bulgaria. Tel: +359 (887) 83 27 02 Fax: +359 (2) 866 22 01 E-mail: email@example.com Internet: www.cartography-gis.com
17 June: 7th ALLSAT OPEN - GNSS Reference Network - Quo Vadis, Hannover, Germany. E-mail: firstname.lastname@example.org Internet: www.allsat.de/en/news/allsat_open/2010.html
20-25 June: 10th International Multidisciplinary Scientific Geo-Conference and Expo – SGEM 2010 (Surveying Geology & Mining Ecology Management), Albena sea-side and SPA resort, Congress Centre Flamingo Grand, Bulgaria. E-mail: email@example.com Internet: www.sgem.org
21-22 June: 2nd Open Source GIS UK Conference, Nottingham, University of Nottingham, U.K. Internet: www.opensourcegis.org.uk
21-23 June: COM.Geo 2010, Washington, DC, U.S.A. Internet: www.com-geo.org
22-24 June: Mid-Term Symposium of ISPRS Commission V: Close range image measurement techniques, Newcastle upon Tyne, Newcastle University, U.K. E-mail: firstname.lastname@example.org Internet: www.isprs-newcastle2010.org
23-25 June: INSPIRE Conference 2010, Krakow, Poland. Internet: http://inspire.jrc.ec.europa.eu/events/conferences/inspire_2010
28-30 June: ISVD 2010, Quebec City, Canada. E-mail: ISVD2010@scg.ulaval.ca Internet: http://isvd2010.scg.ulaval.ca
29 June-02 July: GEOBIA 2010, Ghent, Belgium. Internet: http://geobia.ugent.be
29 June-09 July: Bridging GIS, Landscape Ecology and Remote Sensing for Landscape Planning (GISLERS), Salzburg, Austria. E-mail: email@example.com Internet: www.edu-zgis.net/ss/gislers2010
29 June-09 July: Spatial Data Infrastructure for environmental datasets (EnviSDI), Salzburg, Austria. E-mail: firstname.lastname@example.org Internet: www.edu-zgis.net/ss/envisdi2010
July
10-13 July: 2010 ESRI Education User Conference, San Diego, CA, U.S.A. Tel: +1 909-793-2853, ext. 3743 E-mail: email@example.com Internet: www.esri.com/educ
10-13 July: 2010 ESRI Survey & Engineering GIS Summit, San Diego, CA, U.S.A. Tel: +1 909-793-2853, ext. 4347 E-mail: firstname.lastname@example.org Internet: www.esri.com/segsummit
10-13 July: 2010 ESRI Homeland Security GIS Summit, San Diego, CA, U.S.A. Tel: +1 909-793-2853, ext. 2421 E-mail: email@example.com Internet: www.esri.com/hssummit
11-12 July: 2010 ESRI Business GIS Summit, San Diego, CA, U.S.A. Tel: +1 909-793-2853, ext. 2371 E-mail: firstname.lastname@example.org Internet: www.esri.com/bizsummit
12-16 July: 2010 ESRI International User Conference, San Diego, CA, U.S.A. Tel: +1 909-793-2853, ext. 2894 E-mail: email@example.com Internet: www.esri.com/uc
18-25 July: COSPAR 2010, Bremen, Germany. Tel: +49 (0)421 218-2940 E-mail: firstname.lastname@example.org Internet: www.cospar2010.org
20-23 July: Accuracy 2010, Leicester, U.K. Internet: www.spatial-accuracy.org/Accuracy2010
26-30 July: GeoWeb 2010, Vancouver, Canada. E-mail: email@example.com Internet: www.geoweb.org
29 July-02 August: MAPPS 2010 Summer Conference, Incline Village, NV, U.S.A. Internet: www.mapps.org/events/index.cfm
August
01-05 August: SPIE Optics + Photonics 2010, San Diego, CA, San Diego Convention Center, U.S.A. Internet: http://spie.org/x30491.xml
01-05 August: SPIE Photonics Devices + Applications, San Diego, CA, San Diego Convention Center, U.S.A. Internet: http://spie.org/x13192.xml
01-05 August: SPIE Optical Engineering + Applications, San Diego, CA, San Diego Convention Center, U.S.A. Internet: http://spie.org/x13188.xml
07-12 August: GIslands 2010, Ponta Delgada, Azores Islands, Portugal. E-mail: firstname.lastname@example.org Internet: www.gislands.org
09-12 August: ISPRS Technical Commission VIII Symposium, Kyoto, ICC Kyoto, Japan. Internet: www.isprscom8.org/index.html
16-18 August: 2010 URISA/NENA Addressing Conference, Charlotte, NC, U.S.A. Internet: www.urisa.org/conferences/addressing/info
September 01-03 September RSPSoc 2010 From the Sea-bed to the Cloud-tops Cork, Ireland E-mail: email@example.com Internet: www.rspsoc2010.org 01-03 September PHOTOGRAMMETRIC COMPUTER VISION and IMAGE ANALYSIS Conference - ISPRS Technical Commission III Symposium Paris, France Internet: http://pcv2010.ign.fr 02-03 September COBRA 2010 - RICS Research Conference Paris, France E-mail: firstname.lastname@example.org or email@example.com Internet: www.cobra2010.com
July 01-03 July German-Austrian-Swiss conference for photogrammetry, remote sensing, and spatial information science 04 July ISPRS Centenary Celebration 05-07 July ISPRS TC VII Symposium '100 Years ISPRS-Advancing Remote Sensing Science' Vienna, Austria Internet: www.isprs100vienna.org 03-04 July InterCarto - InterGIS 16 Cartography and Geoinformation for Sustainable Development Rostov (Don), Russia Internet: http://intercarto16.net 04-10 July 20th Anniversary Meeting on Cognitive and Linguistic Aspects of Geographic Space Las Navas, Spain E-mail: firstname.lastname@example.org Internet: www.geoinfo.tuwien.ac.at/ lasnavas2010 06-07 July InterCarto - InterGIS 16 Cartography and Geoinformation for Sustainable Development Salzburg, Austria Internet: http://intercarto16.net 06-09 July GI_Forum 2010 Salzburg, Austria E-mail: email@example.com Internet: www.gi-forum.org
Please feel free to e-mail your calendar notices to: firstname.lastname@example.org
Advertiser Index

GEODIS, www.geodis.cz, page 22
GEOMAX, www.geomax-positioning.com, page 29
Intergeo, www.intergeo.de, page 35
ITC, www.itc.nl, page 52
LEICA Geosystems, www.leica-geosystems.com, page 9
NovAtel, www.novatel.ca, page 2
Optech, www.optech.ca, page 13
RACURS, www.racurs.ru, pages 33, 39
Sokkia, www.sokkia.net, page 56
Spectra Precision, www.spectraprecision.com, page 19
SPOT Image, www.spotimage.com, pages 15, 17
Topcon Europe, www.topcon-positioning.eu, page 55
VEXCEL Imaging, www.microsoft.com/ultracam, page 23
SURVEY AT SPEED
Capture geo-referenced 360-degree images and point clouds with any car in your fleet