Thursday 31 December 2020

Announcing: vscode-map-preview 0.5.7

This final release of anything from me for 2020 fixes our KML content scrubbing code to no longer trash KML icon styles. OpenLayers didn't support KML icons properly when this extension was first created, which necessitated scrubbing that content out when previewing KML files so that at least something would render in the preview instead of nothing.

That is no longer the case, so KML icon styles are now preserved when previewing. Case in point below: we now get cutlery icons instead of the standard pin marker.


One small caveat needs to be observed: due to content security restrictions on the HTML generated by this extension, the KML icon URLs must be https, otherwise nothing will render.

This release also updates OpenLayers to 6.5.0 and ol-layerswitcher to 3.8.3.

Wednesday 9 December 2020

My new rig

This past week, I made the final decision to retire my old PC used for gaming and dev on my various open source projects and start shopping around for a new PC.

The main reason for the decision was my daily YouTube feed being subliminally seeded with videos praising the (just-released) AMD Ryzen 5000 series of CPUs and the AMD Radeon RX 6000 series of GPUs, and how they were absolutely trouncing Intel and Nvidia in CPU and GPU performance respectively.

I also haven't been a fan of Intel CPUs since the advent of the Meltdown and Spectre vulnerabilities, and Linus Torvalds' remarks about Nvidia have long stuck with me and my open source sensibilities, so I knew my next PC was going to have an AMD CPU and an AMD GPU. Now personally, I'm more of a coder than a gamer, so while most of the competition-trouncing benchmarks thrown around were gaming-oriented, I was more interested in benchmarks of developer-style workloads. One particular benchmark in one particular video caught my attention and told me that not only would my next PC be all AMD, I must get this next PC yesterday!


That's right, the new top-of-the-line AMD Ryzen 5950x compiles the Chromium browser (the software whose C++ codebase is most notorious for long build times) in around 45 minutes! The C++ codebases I work with are way smaller than Chromium's but still took considerable time to build on my old 2012 rig, so I was salivating at the thought of being able to compile MapGuide/FDO/GDAL/etc in 5-10 minutes on my brand new Ryzen-powered beast.

So then I started to look around to see who would offer such a beast of a PC in my home country of Australia. This eventually led me to PC Case Gear (PCCG), which offers pre-built Intel and AMD Ryzen gaming PCs at various pricing tiers. Some people may tell me that I should buy the components and build the PC myself, but I'm more of a software person than a hardware person, so when it comes to a new PC I prefer to go with a pre-built one rather than risk screwing things up building it myself.

My personal budget for this new PC was to be no more than $5000 AUD. This sadly put any pre-built systems with a Ryzen 5950x out of my price range, so I settled on their Elite 6800 XT gaming system, whose main components of interest were:

  • AMD Ryzen 9 5900x
  • AMD Radeon RX 6800 XT
  • 4TB (2x2TB) SSD
  • 32 GB of DDR4 RAM
  • Windows 10 Home pre-installed
So I placed my order and gleefully waited in anticipation for this box to arrive. Then reality hit.

These AMD CPUs were selling faster than toilet paper during the early stages of this pandemic. Demand so wildly exceeded supply that there were even dedicated YouTube livestreams tracking availability of AMD Ryzen stock! And so a few days after placing the order, I sadly got the phone call from PCCG that they had run out of stock of the Ryzen 9 5900x and that, for reasons unknown (miscalculation of inventory perhaps?), their website had erroneously reported the PC I ordered as being in stock. In light of this, they offered to replace the out-of-stock Ryzen 9 5900x with a less-but-still-powerful Ryzen 7 5800x at a $300 AUD discount. My dev-oriented mindset at this point was "well... it still compiles Chromium in under 90 minutes! So it should still be great for developer workloads, relatively speaking", so I accepted their revised offer.

A few days later, my doorbell rang and there it was.


PCCG obviously took great care in packaging, wrapping the PC in so much foam and bubble wrap that it could double as a padded cell for a psychiatric hospital. Several minutes of cutting and unboxing later, my new rig was ready to power on and rock!





Now for some developer-oriented first impressions/observations.

Windows 10 Home observations

This PC (like most computers these days) came with Windows 10 Home pre-installed, making it my first computing device with this particular edition. My experiences with Windows 10 thus far have either been from playing with preview builds or using the Pro edition on computers at work. Home editions of past Windows releases really gimped out on the features that I needed or that would be of interest to me as a developer, but Windows 10 Home was a real surprise on this front.

Firstly, it includes IIS, so I can install MapGuide on it with the IIS/.net configuration. Previous versions of Windows left IIS as a Pro-only feature.

The other big surprise was that Windows 10 Home supports WSL2. This one is a game-changer and a feature I seriously did not expect to appear in the Home edition. As someone who builds software for both Windows and Linux, being able to build (and debug) for both OSes from a single host OS without needing to spin up separate virtual machines is a massive productivity boost!

And by supporting WSL2, I can also spin up docker containers, as Docker for Windows uses WSL2 for its container backend. I can run the new docker-based MapGuide/FDO build system completely inside an Ubuntu WSL2 distro while building MapGuide/FDO for Windows at the same time!

MapGuide/FDO observations

So with my new rig up and running, the first order of business was obviously to get my dev environment all set up and see how long MapGuide/FDO take to build from a clean SVN checkout.
  • FDO takes 17 minutes (release build, windows)
  • MapGuide takes 15 minutes (release build, windows)
  • Linux numbers TBD, but I'm expecting comparable numbers
I have never seen C++ compiler output whiz by so fast! I can only imagine how much faster this would be if my new PC had the original Ryzen 9 5900x (or better yet, if the Ryzen 9 5950x had been in stock and not too expensive to fit within my budget).

When all the compiler output is whizzing by, you start to notice the slower parts of the build because their output isn't ticking along as fast. In the case of FDO, the build noticeably slowed down when building the internal OpenSSL library. It turns out that the OpenSSL build system is woefully un-parallel on Windows, using only 16% of my available CPU for the entire build.

Overall though, I really like these numbers, which will only go down once we start doing actual dev work, where we won't be building thirdparty libs and certain projects over and over.

In closing

So far, I am very happy with my new purchase. PCCG were very speedy in their delivery and very helpful in their communications.

My last PC lasted a solid 8 years. I'm certain this new PC will last me a good decade. The only slight disappointment was not being able to get the Ryzen CPU I originally wanted, but then again many others can't get the Ryzen CPU they want either!

Wednesday 18 November 2020

Experimental Azure PaaS support in the SQL Server FDO provider

Do you use MapGuide? Do you also use SQL Server on Azure and have lamented for the longest time the inability of the SQL Server FDO provider to read/write data to an Azure-PaaS-hosted SQL Server db?

Then boy, do I have news for you!

Thanks to some day-job dogfooding of FDO Toolbox and having a test Azure account on hand, I have finally tackled this 7-year-old FDO ticket and landed experimental support for SQL Server on Azure PaaS.

You can download the 7zip archive of the patched provider from here.

Download, extract and overwrite the existing provider dll in your MapGuide Open Source 4.0 preview or FDO Toolbox 1.5.1 installation.

I have so far verified this provider can perform most provider operations needed for use in a MapGuide context.

The only operation I couldn't verify was whether the FDO CreateDataStore API worked for this provider in Azure PaaS due to insufficient permissions in the test account I was given. The CreateDataStore API is not used in the MapGuide context, so if this API doesn't work properly on Azure PaaS, it is not a deal breaker for use in MapGuide.

I'd be interested to hear from anyone who tries this patched provider whether this particular API works on Azure PaaS (ie. whether you can create a new FDO data store on Azure PaaS via FDO Toolbox with this patched provider)

Many thanks to Crispin Hoult of TrueViewVisuals (FKA LinkNode) for the original patch, which I tweaked so that the Azure db detection is a runtime check instead of support you have to explicitly compile into the provider dll. If we had been using Azure PaaS earlier, the patch wouldn't have taken 7 years to finally be applied!

Sunday 15 November 2020

MapGuide dev diary: The beginnings of clearing the final hurdle

I've stated many times in this long and arduous MapGuide Open Source 4.0 development cycle that the final hurdle that must be cleared before 4.0 could ever be considered final is when we can finally generate language bindings for .net, Java and PHP with a vanilla and un-modified version of SWIG.

The reasons for needing to do this were already explained in my previous introductory post to these new bindings, but to re-iterate the cliff notes version:

  • We need to support and bundle PHP 7. This is non-negotiable. The current bundled PHP 5.6 is too old and long past EOL and it is a bad look to have to bundle this version of PHP for a production MapGuide deployment/installation.
  • The latest release of SWIG can generate bindings for PHP 7
  • The cross-platform .net core has grown by leaps and bounds in adoption over the traditional windows-only .net Framework. The just-released .net 5.0 is a sign that the current windows-only .net Framework is dead/legacy and the future of .net is a cross-platform one.
  • As a result, if we're going to be supporting .net in MapGuide, we should be generating a .net binding that can work in both Windows and Linux.
  • And if we need to do that, we might as well do it with the latest release of SWIG
  • And if 2 of the 3 languages require vanilla SWIG, we might as well go for the trifecta and generate our Java binding with it as well!

As this final hurdle involves many steps, I figure this journey is worth documenting with its own mini dev diary series.

So what has changed since the initial announcement of these experimental bindings?

Firstly, I have decided to bring the current binding work into the official MapGuide source in a new vanilla_swig sandbox branch. All development work will continue in this branch. The previous GitHub repo housing this work will no longer be maintained and I will eventually archive/delete it. Going from Git back to SVN might sound like a downgrade (technically, yes), but my developer "inner loop" has sped up a lot by having everything in the same repo and not having to coordinate files/changes across 2 different repos in 2 different locations. Maybe one day we'll permanently migrate the MapGuide source to GitHub, but today is not that day.

Secondly, before I tackle the PHP 7 support, I wanted to see whether the .net/Java bindings were still functional and what other final improvements we can make before all attention is fully diverted to the PHP 7 binding.

For Java, after some minor fix ups, the binding and its test suite were still A-OK. So onto the .net binding.

When I introduced these new experimental bindings, the .net one was back to being a single monolithic assembly (MapGuideDotNetApi). I wasn't fully comfortable with this, as the mg-desktop .net support hangs off of the current Foundation/Geometry/PlatformBase split assemblies, and a monolithic assembly would hamper any attempt to write code that could work in both MapGuide and mg-desktop. So, if possible, we should try to replicate the Foundation/Geometry/PlatformBase/MapGuideCommon/Web split layout in the new .net binding.

Using the current .net binding as a point of reference, splitting the monolithic MapGuideDotNetApi assembly back into its 5 constituent parts was a relatively simple affair. Thanks to the dramatically simplified csproj format, we now have 5 hand-crafted C# projects targeting netstandard2.0 that reference each other and that SWIG dumps all its generated C# source into (without having to add each .cs file to the project itself) for easy compilation, automatically publishing out to respective nuget packages like so.

And because our 5 projects reference each other, those dependencies are also expressed in the nuget packages themselves. That is to say, if you install the MapGuideCommon package, it will automatically install the PlatformBase, Geometry and Foundation packages as well, since they were defined as project dependencies of the MapGuideCommon C# project.

And the final cherry on top? These nuget packages are still self-contained and bundle the native dlls that the .net binding is wrapping. The current nuget packages are already self-contained, but they are only consumable in legacy .net Framework, are windows-only and require kludgy powershell hacks to make sure all the native dlls are copied out to the project's output directory. Our new nuget packages take advantage of the fact that native libraries are now first class citizens in the .net core packaging world.

By adding such dlls to the runtimes/win-x64/native folder of a C# project, they will automatically be bundled into any nuget package created and the .net core project system knows to automatically copy these dlls out to the right location where the .net assembly can P/invoke them. 
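As a rough sketch of what that looks like in a csproj (the dll name and path here are placeholders, not the actual project contents), packing a native dll into the runtimes/win-x64/native folder of the package is just a matter of:

```xml
<!-- Hypothetical fragment: the dll name/path are placeholders.
     Packs a native dll into runtimes/win-x64/native of the nuget
     package, where the .net core project system knows to copy it
     to a consuming project's output directory for P/Invoke. -->
<ItemGroup>
  <Content Include="..\..\bin\release\x64\SomeNativeLib.dll">
    <Pack>true</Pack>
    <PackagePath>runtimes/win-x64/native</PackagePath>
  </Content>
</ItemGroup>
```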

Now, for a multi-platform .net binding to work, we have to get SWIG to generate the same C++ glue code, this time compiled on Linux but with the same library names, so that our SWIG-generated C# code will correctly P/Invoke into the respective Windows .dll or Linux .so. We then pack those compiled .so files into the runtimes/linux-x64/native folder of our 5 C# projects for automatic bundling into our nuget packages.

How we are able to do this will be the topic of a future post once I've figured it all out.

Thursday 5 November 2020

MapGuide 4.0 Showcase: Making WFS/WMS support beyond a box ticking exercise

For the longest time, MapGuide's support for WFS and WMS was nothing too special. The level of support was the bare minimum needed so that we could say "We support WFS/WMS".

For MapGuide 4.0, the WFS and WMS support has been enhanced in the following areas:

GeoJSON format support

As I've previously mentioned, if we're going to serve feature data in a JSON format, we should just go straight to GeoJSON and not bother with anything else.

This now also applies for WFS and WMS operations that return feature data. Namely:
  • GetFeatures for WFS
  • GetFeatureInfo for WMS
For both these operations, specifying application/json as the requested format will return the data in GeoJSON. This support is most useful for WMS GetFeatureInfo: thanks to the ubiquity of GeoJSON support, a WMS GetFeatureInfo response in GeoJSON format can be used as a convenient "selection overlay" to display selected features when clicking on a WMS map.
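Since the response is plain GeoJSON, no special client-side handling is needed. As a rough sketch (the response body below is a stand-in with hypothetical attribute names, not actual MapGuide output), consuming a GetFeatureInfo response requested as application/json is just ordinary JSON handling:

```javascript
// Stand-in for a WMS GetFeatureInfo response body that was requested
// with a format of application/json; the attributes are hypothetical.
const body = JSON.stringify({
  type: "FeatureCollection",
  features: [
    {
      type: "Feature",
      properties: { OWNER: "Smith, J", ACREAGE: 0.23 },
      geometry: { type: "Point", coordinates: [-87.72, 43.75] }
    }
  ]
});

// Plain GeoJSON means plain JSON.parse; the parsed features could be
// fed straight into a GeoJSON-aware layer as a selection overlay.
const collection = JSON.parse(body);
for (const feature of collection.features) {
  console.log(feature.properties.OWNER, feature.geometry.type);
}
```

The same FeatureCollection could be handed directly to any GeoJSON-aware mapping library, which is what makes this format so convenient.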

Configurable geometry output for WMS GetFeatureInfo

Sometimes, one may not wish to have geometry output for certain WMS GetFeatureInfo requests. So for MapGuide 4.0, there is a new _EnableGeometry metadata option for Layer Definition resources that determines whether WMS GetFeatureInfo requests against a layer should return geometry data.

The next release of MapGuide Maestro lets you toggle this setting in the UI without having to mess around with resource header XML.



This setting is only applicable if the Layer Definition itself has been set to be queryable for WMS.

WFS Support for hit count

The spec for WFS GetFeatures defines a special mode where one can request a hit count (ie. a raw total) instead of the actual raw feature data. MapGuide did not previously implement this (optional) part of the WFS spec. For MapGuide 4.0, it is now implemented.

If you pass resultType=hits to your WFS request, you now get a total instead of the feature data.
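As a sketch of what such a request looks like (the server URL and feature type name here are hypothetical), the only change from a normal GetFeature request is the resultType parameter:

```javascript
// Build a WFS GetFeature request that asks for a hit count only.
// The host and TYPENAME are hypothetical; RESULTTYPE=hits is the
// part this section is about.
const params = new URLSearchParams({
  SERVICE: "WFS",
  VERSION: "1.1.0",
  REQUEST: "GetFeature",
  TYPENAME: "SHP_Schema:Parcels",
  RESULTTYPE: "hits"
});
const url =
  "http://myserver/mapguide/mapagent/mapagent.fcgi?" + params.toString();
console.log(url);
```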


As an aside, if you use VSCode on the regular, I highly recommend you install the REST client extension. It has replaced Postman for my HTTP/REST API testing. As evidenced by the above screenshot, testing HTTP requests is dead simple.

Special thanks to OpenSpatial for their assistance in testing out this feature.

Viewer representation for WMS GetMap

As of the 4.0 preview 2 release, the mapagent now also supports a new viewer representation for WMS layers, giving you a built-in way to easily preview any published WMS layer in MapGuide by simply specifying a format of application/openlayers, which is a new option in the GetMap test page.


In this format, an HTML page is returned containing an OpenLayers viewer set up to display the WMS layer(s) in question.


No more needing to fire up a client GIS application like Gaia or QGIS to preview such layers; MapGuide now provides the means to preview them out of the box!

Tuesday 20 October 2020

Announcing: vscode-map-preview 0.5.6

This release updates OpenLayers to the latest 6.4.3, ol-layerswitcher to the latest 3.7.0 and adds support for opacity properties of the mapbox simple style spec for GeoJSON features.



Monday 19 October 2020

Announcing: FDO Toolbox 1.5.1

This is a bugfix release that updates our MapGuide API binaries to the recently-released 4.0 Preview 2 version and fixes the ability to export a schema where one or more classes have an association property.

Download

Thursday 15 October 2020

MapGuide 4.0 showcase: Supercharged tile sets part 2

Previously, we showcased the new tile capabilities of MapGuide Open Source 4.0.

Since the last preview release, we've now introduced a new tile format that will open up new web mapping possibilities not previously possible: Support for Mapbox Vector Tiles.

Vector Tiles are a way to deliver geographic data in small chunks to a browser or other client application. Vector Tiles are similar to raster tiles, but instead of raster images, the data returned is a vector representation of the features in the tile.

As a result, Vector Tiles relieve the MapGuide Server of the burden of rendering/stylization, as stylization is now the responsibility of client mapping applications. This allows for much richer and more interactive user/application experiences, since the MapGuide Server only has to focus on the encoding and delivery of vector tiles.

Consider this new Mapbox Vector Tile sample that ships with the preview 2 release.

In this example, the MapGuide Server is delivering the Mapbox Vector Tiles. The styling and labeling of the features is all done client-side through OpenLayers style functions.

Here's an interesting exercise for the reader (if I don't beat you to it first :)): implement a JS library that automatically translates our basic Vector Layer Definition documents in MapGuide (in clean JSON format) to OL style functions for automatic client-side styling of MVT tiles.
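To give a flavor of what client-side styling of MVT tiles involves, here's a minimal sketch of the per-layer style lookup such an OL style function would perform. The layer names and colors are hypothetical, and the plain objects stand in for the ol/style instances a real OpenLayers application would construct:

```javascript
// Hypothetical per-source-layer styling rules; in a real app each
// entry would be an ol/style Style instance.
const layerStyles = {
  Parcels: { strokeColor: "#228b22", fillColor: "rgba(34,139,34,0.2)" },
  Roads: { strokeColor: "#666666", width: 2 }
};
const fallbackStyle = { strokeColor: "#3399cc", width: 1 };

// An OL style function is invoked per feature; MVT features know which
// source layer they came from, which drives the style choice here.
function styleForLayer(layerName) {
  return layerStyles[layerName] || fallbackStyle;
}

console.log(styleForLayer("Parcels").strokeColor); // → "#228b22"
```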

Mapbox Vector Tiles has been around for a while, so what took so long for support to finally land in MapGuide? A few factors.

  • The lack of consensus on a "standard" for vector tiles. Our usual go-to for geo standards (the OGC) has been somewhat silent on vector tiles (last I checked), and if they did push any kind of standard for them, it certainly wasn't reflected in the popular GIS/mapping software out there. Over time, it became clear that Mapbox Vector Tiles is the de-facto standard for vector tiles, and that if MapGuide was ever to support vector tiles, it should be supporting Mapbox Vector Tiles.
  • MapGuide had not yet standardized on C++11 as the base language version of C++, which limited the range of existing MVT encoding libraries available to us. Now that we have adopted C++11, it was a case of choosing which MVT encoding library to use for MVT support in MapGuide. I had initially tried vtzero for MVT encoding, but the low-levelness of the library and uncertainty around how coordinate systems are meant to be handled meant that I had to look for an alternative. In the end, we chose to graft in the MVT tile encoder code from GDAL/OGR's MVT vector driver, which was a much simpler affair.

So how can you start pumping out MVT tiles from your own MapGuide Open Source 4.0 Preview 2 install? Simply create a new XYZ tile set definition with a tile format of MVT.


Then you can start fetching MVT tiles with your favorite web mapping library, whether that be OpenLayers or Leaflet or anything else on this big list!

And because MVT support in MapGuide is through XYZ tile access semantics, you can use MgTileSeeder or any other XYZ tile seeder to pre-load your MVT tile cache.

Announcing: MapGuide Open Source 4.0 Preview 2

Here is the long awaited second preview release of MapGuide Open Source 4.0.

Refer to the release notes for download links and an overview of what's new and changed since the last preview release.

For MapGuide users on Linux who use Java, we are sad to say that the Preview 2 release currently has a known issue with Java support being broken: all .jsp requests cause Tomcat to return HTTP 403 Forbidden errors, meaning our Java-based viewer and code samples do not work out of the box.

We recently upgraded our bundled Tomcat to the 9.x series, which required some config changes. Java support works on Windows but is failing on Linux, and we lack the proper resources to debug and investigate further. All we know and suspect so far is that some security-related configuration has been needed since Tomcat 8 or 9, but we don't know what that missing configuration is.

If any of you have any idea what the problem is (eg. by installing the preview 2 Linux build into a spare Ubuntu 16.04 VM or docker container and messing with the existing configuration to get the Java viewer into a working state), please let us know of your findings in this trac issue or by posting them to the mapguide-users mailing list.

We will resume our 4.0 showcase blog series shortly to cover the new features of MapGuide Open Source 4.0. To recap, this is what has been showcased so far:

This list is just the tip of the iceberg so stay tuned!

In accordance with my original plans, development efforts will now be focused on the final non-negotiable requirement in order for us to be able to produce a final MGOS 4.0 release: Supporting PHP7 in our MapGuide API so we can finally bundle a version of PHP that is actively supported. This primarily involves resuming development work on our experimental API bindings and getting it into a working shape for bundling with MGOS 4.0.

Thursday 1 October 2020

In awe of what vscode can do

I was originally hoping to drop the long awaited 2nd preview of MapGuide Open Source 4.0 this week, but sadly some show-stopping bugs have crept in on the Linux side, meaning I have to push back the release until at least one particular show-stopper for the PostgreSQL FDO provider is addressed.

Because this bug is present only on Linux, we have to dive into gdb and debug through how this provider is producing garbage SRID values that result in broken PostGIS spatial queries.

Now, normally I would dread this prospect because gdb is command-line based and I would miss being able to easily debug and step through code graphically with Visual Studio. But that was then, and nowadays things are a lot different.

  • We now have VSCode, undoubtedly the most popular code editor, which is also multi-platform.
  • VSCode has extensions for C++ intellisense and integrated debugging with gdb
  • For MGOS 4.0, we now also build MapGuide/FDO for Linux inside docker containers.
  • VSCode also has extensions for remote development inside docker containers.

So this show-stopper has presented the perfect opportunity to see how hard or easy it is to tie all these pieces together for a nice debugging experience.

I start by spinning up the FDO build container and a PostgreSQL docker container to run our test code against.

Then after installing the remote extensions, I click the green box which then gives me an option to attach to a running docker container.


Which then shows the list of running docker containers, which includes my FDO build container


This then spawns a second VSCode instance that allows me to open a folder within the running container. The FDO source code that I want to step through is accessible in this container, so I pick that folder.


Now, if the experience here is the same as if I were debugging this code from *outside* the container, I would then need to make sure the C++ extension is installed. I notice that the extension UI shows locally and remotely installed extensions separately, so I have to install the C++ extension remotely.


If the debug experience for remote sources is the same as for local, then the next step is to make a launch.json set up to run gdb against the executable that contains our test code. VSCode nicely creates a useful starting launch.json for me to tweak to what I need.
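For illustration, a minimal launch.json for launching gdb against a test executable might look like the following (the program path is a placeholder for whatever binary contains your test code):

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "(gdb) Launch FDO test",
      "type": "cppdbg",
      "request": "launch",
      "program": "${workspaceFolder}/path/to/your_test_executable",
      "args": [],
      "stopAtEntry": false,
      "cwd": "${workspaceFolder}",
      "externalConsole": false,
      "MIMode": "gdb"
    }
  ]
}
```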

At this point, I make sure gdb is installed in the FDO build container, find some source code to stick some breakpoints in, hit the play button on the debug tab to start debugging, and lo and behold ...

I am now visually debugging and stepping through the FDO source code! Just like Visual Studio on Windows. There was some small setup involved, but the process was mostly seamless.

VSCode is one truly amazing editor! With the right extensions, it can match any dedicated IDE in capabilities.

Now to tackle the actual show-stopper in question.

Wednesday 30 September 2020

Bootstrapping an Oracle XE database with spatial data using FdoCmd

With the introduction of FdoCmd in FDO Toolbox, I thought I'd revisit this post from 4 years ago.

The cliff-notes were that I couldn't bootstrap a fresh Oracle XE installation with spatial data because the create data store UI failed, and as a result I had to fall back to Oracle's SQLPlus CLI to create the necessary oracle user with the appropriate granted permissions, and then use ogr2ogr to copy the SHP file into the Oracle database.

What caused me to revisit?
  • Turns out the King Oracle provider doesn't implement the FDO create data store command! That's why the UI ultimately didn't work, despite the error message in the referenced post actually referring to something else.
  • The actual "creating" of the data store is a series of SQL commands executed in SQLPlus.
  • I used ogr2ogr to copy that SHP file because I had given up on using FDO Toolbox for any of the remaining setup steps until the data was in.
Because FdoCmd supports executing pass-through SQL queries/commands if the provider supports them (King Oracle does), and because I never actually tried copying the data, this was something worth revisiting and trying again with the advent of FdoCmd.

So for this post, we begin from this starting position.
  • We have a freshly pulled down Oracle XE docker image (in this example, the running container is listening on 192.168.0.3)
  • We have the latest FDO Toolbox that comes with the FdoCmd CLI utility
And our end goal is:
  • We can create the required oracle user through FdoCmd
  • We can grant that oracle user the necessary permissions through FdoCmd
  • We can "context switch" to this created user by running some test SQL commands through FdoCmd with this user's credentials
  • And then the main event: We can bulk copy a SHP file of parcel data into this data store, with FdoCmd automatically creating whatever schemas/classes required
  • Finally, we can preview the data afterwards in both FdoCmd and FDO Toolbox, and confirm that the data is consumable from MapGuide

Creating the mapguide user

The very first thing we need to do is create the mapguide user and grant it the necessary permissions needed for the King Oracle provider to do what it needs to do, which can be done as follows:

FdoCmd.exe execute-sql-command --provider OSGeo.KingOracle --connect-params Username system Password oracle Service //192.168.0.3/xe --sql "CREATE USER mapguide IDENTIFIED BY mapguide"

FdoCmd.exe execute-sql-command --provider OSGeo.KingOracle --connect-params Username system Password oracle Service //192.168.0.3/xe --sql "GRANT CREATE SESSION, ALTER SESSION, CREATE DATABASE LINK, CREATE MATERIALIZED VIEW, CREATE PROCEDURE, CREATE PUBLIC SYNONYM, CREATE ROLE, CREATE SEQUENCE, CREATE SYNONYM, CREATE TABLE, CREATE TRIGGER, CREATE TYPE, CREATE VIEW, UNLIMITED TABLESPACE TO mapguide"

Testing the created Oracle user works

To test that created Oracle user works, we can do a schema listing with our new Oracle user credentials

FdoCmd.exe list-schemas --provider OSGeo.KingOracle --connect-params Username mapguide Password mapguide OracleSchema mapguide Service //192.168.0.3/xe

This should print out KingOra as the schema name. Even though we haven't created any tables yet, merely attempting a schema listing requires opening a King Oracle FDO connection with our new user's credentials first, which makes it a sufficient test.

Copying our SHP file

Unlike for other FDO providers, there are several quirks unique to the King Oracle provider that we have to negotiate around.
  • By default, FDO class names are transcoded to a weird tilde-delimited format. Presumably, this format is the provider's way to support tables with multiple geometry columns or to allow for tables of the same name across different schemas.
  • FDO spatial contexts are automatically inferred from the name, which is of the form OracleSridXXXX. When creating spatial contexts for King Oracle, we don't bother filling all the information we'd normally provide for a spatial context (CS, extents, etc) as the provider disregards this information.
With these quirks now known, here's how we can negotiate around them.

Firstly, we run the expected copy-class command but with a --setup-only flag specified.

FdoCmd.exe copy-class --src-provider OSGeo.SHP --src-connect-params DefaultFileLocation D:\fdo-4.1\Providers\SHP\TestData\Sheboygan --dst-provider OSGeo.KingOracle --dst-connect-params Username mapguide Password mapguide OracleSchema mapguide Service //192.168.0.3/xe --src-schema Default --src-class Parcels --dst-schema KingOra --dst-class Parcels --override-sc-name "WGS84 Lat/Long's, Degrees, -180 ==> +180" --override-sc-target-name OracleSrid4326 --setup-only

The string "WGS84 Lat/Long's, Degrees, -180 ==> +180" happens to be the spatial context name of our source SHP file.

When --setup-only flag is specified, only the setup portion of the bulk copy is executed, which in our case means:
  • The Parcels feature class will be created in Oracle under its transcoded name
  • That feature class will be associated to the 4326 SRID through the OracleSrid4326 spatial context name that we're overriding from the source
Now that the table has been created, and knowing how the provider transcodes FDO feature class names (in our case Parcels -> MAPGUIDE~PARCELS~GEOMETRY), we can re-run the above command but:
  • Omitting the --setup-only flag
  • Using the provider-transcoded FDO class name instead of our normal class name
FdoCmd.exe copy-class --src-provider OSGeo.SHP --src-connect-params DefaultFileLocation D:\fdo-4.1\Providers\SHP\TestData\Sheboygan --dst-provider OSGeo.KingOracle --dst-connect-params Username mapguide Password mapguide OracleSchema mapguide Service //192.168.0.3/xe --src-schema Default --src-class Parcels --dst-schema KingOra --dst-class MAPGUIDE~PARCELS~GEOMETRY --override-sc-name "WGS84 Lat/Long's, Degrees, -180 ==> +180" --override-sc-target-name OracleSrid4326

And the bulk copy should commence.

One thing you'll quickly notice (and the reason this post is sadly more of an intellectual exercise than a practical how-to) is that the bulk copy is extremely slow. Although the provider supports batched inserts, which would allow for greater throughput, the implementation has proven to be buggy when bulk copying my various example test data files.

KingFdoClass setup

At this point the Oracle data store is ready to be consumed in any FDO client application (eg. MapGuide). However, the data store exposes the ugly transcoded class names by default, which complicates use cases like exposing King Oracle feature classes as WFS layers in MapGuide.

Fortunately, the provider supports a feature called the KingFdoClass: a special table in your Oracle schema for registering spatial tables as FDO feature classes. With this table present and specified as a connection property, the provider augments the default FDO schema/class listing behavior with extra classes based on the contents of this table. The names of these feature classes will be whatever's registered in the KingFdoClass table, so no more ugly tilde-fied class names!

The following command will create a table named FDO_FEATURE_CLASSES which we'll nominate as the KingFdoClass table.

FdoCmd.exe execute-sql-command --provider OSGeo.KingOracle --connect-params Username mapguide Password mapguide OracleSchema mapguide Service //192.168.0.3/xe --sql "CREATE TABLE FDO_FEATURE_CLASSES( FDO_UNIQUE_ID NUMBER(*,0),FDO_ORA_OWNER VARCHAR2(64 BYTE), FDO_ORA_NAME VARCHAR2(64 BYTE), FDO_ORA_GEOMCOLUMN VARCHAR2(1024 BYTE), FDO_SPATIALTABLE_OWNER VARCHAR2(64 BYTE), FDO_SPATIALTABLE_NAME VARCHAR2(64 BYTE), FDO_SPATIALTABLE_GEOMCOLUMN VARCHAR2(1024 BYTE), FDO_CLASS_NAME VARCHAR2(256 BYTE), FDO_SRID NUMBER, FDO_DIMINFO MDSYS.SDO_DIM_ARRAY , FDO_CS_NAME VARCHAR2(256 BYTE), FDO_WKTEXT VARCHAR2(2046 BYTE), FDO_LAYER_GTYPE VARCHAR2(64 BYTE), FDO_SEQUENCE_NAME VARCHAR2(64 BYTE), FDO_IDENTITY VARCHAR2(1024 BYTE), FDO_SDO_ROOT_MBR MDSYS.SDO_GEOMETRY , FDO_POINT_X_COLUMN VARCHAR2(128 BYTE), FDO_POINT_Y_COLUMN VARCHAR2(128 BYTE), FDO_POINT_Z_COLUMN VARCHAR2(128 BYTE), FDO_SPATIAL_CONTEXT VARCHAR2(128 BYTE))"

Then we'll register our freshly copied parcels table with a class name of "Parcels"

FdoCmd.exe execute-sql-command --provider OSGeo.KingOracle --connect-params Username mapguide Password mapguide OracleSchema mapguide Service //192.168.0.3/xe --sql "INSERT INTO FDO_FEATURE_CLASSES ( fdo_class_name, fdo_ora_owner, fdo_ora_name, fdo_ora_geomcolumn, fdo_identity ) values ( 'Parcels', 'MAPGUIDE', 'PARCELS', 'GEOMETRY', 'FEATID' )"

When we run our list-classes command with just the default connection parameters

FdoCmd.exe list-classes --provider OSGeo.KingOracle --connect-params Username mapguide Password mapguide OracleSchema mapguide Service //192.168.0.3/xe

We get our default ugly class name

MAPGUIDE~PARCELS~GEOMETRY

Now when we run the same command but with the KingFdoClass property set

FdoCmd.exe list-classes --provider OSGeo.KingOracle --connect-params Username mapguide Password mapguide OracleSchema mapguide Service //192.168.0.3/xe KingFdoClass FDO_FEATURE_CLASSES

We get the "clean" class name in addition to the ugly one

MAPGUIDE~PARCELS~GEOMETRY
Parcels

In conclusion

While we are able to finally bootstrap a new Oracle data store end-to-end solely with FdoCmd.exe, the slow bulk copy performance means I cannot seriously recommend the use of FdoCmd.exe when copying spatial data files of non-trivial size into Oracle with this provider.

If/when the batch insert issue is finally addressed, we may revisit this process to see if it is now actually viable, rather than the intellectual curiosity it currently is.

Wednesday 29 July 2020

Back into the groove of MapGuide things

Getting back into the groove of MapGuide development to finally make some progress on that long arduous journey to MapGuide Open Source 4.0

First item on the books is a small improvement to WMS GetMap. Namely, after this I don't think you will need to whip out a dedicated WMS client to preview your WMS layers anymore; we can just generate an OpenLayers viewer for you straight from the mapagent so you can see for yourself!




Monday 20 July 2020

Announcing: vscode-map-preview 0.5.5

This release adds partial support for the Mapbox SimpleStyle spec for GeoJSON features.




Many thanks to @vohtaski for the PR to add this support.

The other new feature is a new opt-in configuration property to de-clutter vector feature labels. To illustrate, here's a point KML preview with de-cluttering disabled (current behavior)


And here's the same KML preview with de-cluttering enabled


The other points/markers will become visible as you zoom in, when there is more "breathing space" for OpenLayers to draw the extra labels.

This release also updates:

  • OpenLayers to 6.3.1
  • ol-layerswitcher to 3.5.0

Tuesday 7 July 2020

Announcing: FDO Toolbox 1.5

Another release already? Yes indeed!

There were 2 reasons for this new release.

Firstly, the previous 1.4 release had a major oversight where the Windows installer did not bundle a required dll needed for the new FdoCmd tool to work, so it was totally broken out of the box in both the 1.4 and 1.4.1 releases. This release now properly bundles the missing dll, making the tool operational for the first time.

Secondly, this release includes a new major feature that was originally slated to be part of the 1.4 release, but got shelved due to stability issues around the FDO .net wrapper that were resolved with the 1.4.1 release. And for this announcement post, I'll be talking about this feature in great detail.

That feature is the long overdue integration of MgCoordinateSystem from the MapGuide API for coordinate system support. Here's how we use the MgCoordinateSystem integration in FDO Toolbox.

Transforming Features in Bulk Copy

Let's start with the obvious place: bulk copying now supports optional transformation of geometry features. This is expressed in several different ways.

In the main bulk copy editor, when specifying a spatial context override with a different coordinate system WKT like so.


There is now a new flag that you can toggle to indicate that this override should be interpreted as an instruction to transform geometries from the source coordinate system WKT to the coordinate system indicated by the override WKT.



This method of enabling transformation may look a bit un-intuitive if you're accustomed to tools like ogr2ogr, where you state upfront what the source CS is and what target CS you want to transform to. The reason is the concept of spatial contexts in FDO. In FDO, coordinate systems are not set explicitly; they are inferred through spatial contexts. Thus the UI to enable transformation stems from the constraint imposed by FDO's spatial contexts: we can't set a source or target CS, we have to override what is inferred by FDO.

When using the SDF/SQLite dump context menu command, the UI has an option to allow transforming the dumped features to the specified coordinate system.



When you click the Pick CS button, a dialog appears that you should be familiar with. It's the same coordinate system picker dialog from MapGuide Maestro and fulfills the same purpose here in FDO Toolbox as it does in Maestro.



Streamlining Various UI

Any UI that deals with spatial contexts now takes advantage of the newly integrated coordinate system picker to help auto-fill most, if not all, of the fields in question.

For example, the UI for creating a spatial context can now use the coordinate system picker to auto-fill most of its fields



Similarly, when creating a new RDBMS data store, you can use the coordinate system picker to pre-fill the sections about the coordinate system and extents



And from the previous bulk copy example, you can easily override a source spatial context by picking an existing coordinate system.



FdoCmd enhancements

Not only will you finally get a functional FdoCmd tool in this release, it has also been enhanced with various integrations with the new coordinate system functionality.
  • Any command that outputs geometry data will have new options to allow transforming it to a target coordinate system
  • Creating spatial contexts can use an existing coordinate system to fill in most of the required spatial context information.
  • The bulk copy commands support transformation and can use an existing coordinate system as the basis for specifying the WKT of any override spatial context
  • When listing spatial contexts, we'll now use the new coordinate system facilities to resolve/display the corresponding mentor/EPSG code for any coordinate system displayed
FdoCmd also receives a whole series of new commands for interacting with the coordinate system facilities:
  • enumerate-cs-categories for listing all categories from the coordinate system catalog
  • enumerate-cs for listing all coordinate systems for a given category
  • find-cs-by-code for fetching and displaying details of a coordinate system from an input mentor CS code
  • find-cs-by-epsg for fetching and displaying details of a coordinate system from an input EPSG code
  • wkt-to-cs-code for obtaining a mentor CS code from an input coordinate system WKT
  • wkt-to-epsg for obtaining an EPSG code from an input coordinate system WKT
  • cs-code-to-wkt for the inverse of wkt-to-cs-code
  • epsg-to-wkt for the inverse of wkt-to-epsg
  • is-valid-wkt for validating coordinate system WKT strings
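As a rough illustration of what wkt-to-epsg has to do: for WKTs that carry EPSG AUTHORITY clauses, the code can be recovered with a simple parse. (The real command resolves codes through the CS-MAP/MgCoordinateSystem facilities instead; this Python sketch is purely my own illustration, not FdoCmd's implementation.)

```python
import re

def epsg_from_wkt(wkt: str):
    """Crude approximation of wkt-to-epsg: pull the last (outermost)
    EPSG AUTHORITY code out of a coordinate system WKT, or None if the
    WKT carries no AUTHORITY clause at all."""
    codes = re.findall(r'AUTHORITY\["EPSG",\s*"(\d+)"\]', wkt)
    return int(codes[-1]) if codes else None

wkt = ('GEOGCS["WGS 84",DATUM["WGS_1984",SPHEROID["WGS 84",6378137,298.257223563,'
       'AUTHORITY["EPSG","7030"]],AUTHORITY["EPSG","6326"]],'
       'PRIMEM["Greenwich",0,AUTHORITY["EPSG","8901"]],'
       'UNIT["degree",0.0174532925199433,AUTHORITY["EPSG","9122"]],'
       'AUTHORITY["EPSG","4326"]]')
print(epsg_from_wkt(wkt))  # 4326
```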
A small caveat

If you've used CS-MAP (or its MgCoordinateSystem wrapper in MapGuide), you'll know about its legendary support for nearly every coordinate system in existence on Planet Earth.

But that support comes at a price: The size of our installers/installation suffers as a result.

To support so many coordinate systems requires country-specific grid data files that are:
  • Huge
  • Difficult to compress due to their binary nature
So to keep the installer at around 30MB instead of 300MB, this release of FDO Toolbox only bundles the core CS-MAP dictionary data files and omits all of the country-specific grid files. Such files are available for download as a separate zip file on the 1.5 release page.

If you need these grid files for your coordinate system transformations (most likely because you are transforming from one CS to another and it is failing while looking for a certain grid file), just download the zip file (from the releases page) and extract the contents into the Dictionaries directory of your FDO Toolbox installation.

Other changes

  • FDO Toolbox now ships with libpq.dll and libmysql.dll (dlls courtesy of the VS2015 build of GDAL/OGR from gisinternals), allowing the MySQL and PostgreSQL providers to work out of the box without having to source these dlls yourself. The King Oracle provider still requires you to source the Oracle 11g Instant Client binaries yourself.
  • When bulk copying, we no longer try to create spatial contexts for names that already exist on the target connection
  • More cases are handled when trying to convert an incompatible feature class when applying a schema to a target connection

In Closing

Barring bug fix releases to address any critical issues that show up after this release, I believe that FDO Toolbox 1.5 will be the last major release of FDO Toolbox I will be putting out for a while, and the project will most likely return to hibernation. I had restarted this journey a few months ago to address some long standing pain points that had built up, and with the completion of this coordinate system integration in this release, I feel this journey is now complete ...

... Until another series of annoyances and pain points builds up to critical mass in 5/10 years time perhaps :)

Thursday 11 June 2020

Announcing: FDO Toolbox 1.4.1

This release should dramatically improve stability of FDO Toolbox and the new FdoCmd CLI when working with SQLite data files.

In fixing the stability problem, it also finally gave me the ultimate insight into the proper usage patterns of the FDO .net API. I had made my concerns known over a decade earlier about the flakiness of the FDO .net wrapper, and the various answers given didn't quite give me a solid set of "rules of thumb" to work from.

What lit the light bulb was looking at the FDO SQLite provider codebase once more. I noticed that the main connection class not only implements the main connection interface, it also implements several other interfaces for convenience. The other insight was that there are several places in the FDO Toolbox code where a reference to a capability or a property dictionary (off the capability, or some other top-level connection property) could out-live the underlying connection, which meant that when the .net GC started cleaning up references, it would call the finalizers on those references, which would then subtract the ref count on the underlying native pointer.

But because in the case of SQLite the connection implements several FDO interfaces, we may be subtracting the ref count of a pointer to a non-connection interface whose underlying implementation is actually the connection itself, causing an access violation from tearing down the same connection more than once, or tearing down something still attached to the torn-down connection.

I didn't have full evidence to confirm that the above was indeed the case, but it was a solid enough theory that was backed by my observations in running isolated snippets of C# code using the FDO API targeting the SQLite provider.

Here's an example of a crashing snippet:


using OSGeo.FDO.ClientServices;
using OSGeo.FDO.Commands;
using OSGeo.FDO.Commands.DataStore;
using OSGeo.FDO.Connections;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace FdoCrash
{
    class Program
    {
        static void Main(string[] args)
        {
            var conn = FeatureAccessManager.GetConnectionManager().CreateConnection("OSGeo.SQLite");
            using (conn)
            {
                if (HasCommand(conn, CommandType.CommandType_CreateDataStore, "Creating data stores", out var _))
                {
                    using (var cmd = (ICreateDataStore)conn.CreateCommand(CommandType.CommandType_CreateDataStore))
                    {
                        var dict = cmd.DataStoreProperties;
                        foreach (string name in dict.PropertyNames)
                        {
                            Console.WriteLine("{0}", name);
                        }
                    }
                }
            }
        }

        static bool HasCommand(IConnection conn, CommandType cmd, string capDesc, out int? retCode)
        {
            retCode = null;
            if (Array.IndexOf<int>(conn.CommandCapabilities.Commands, (int)cmd) < 0)
            {
                //WriteError("This provider does not support " + capDesc);
                //retCode = (int)CommandStatus.E_FAIL_UNSUPPORTED_CAPABILITY;
                return false;
            }
            return true;
        }
    }
}


And here's the same snippet, refactored to the point it does not crash with System.AccessViolationException:



using OSGeo.FDO.ClientServices;
using OSGeo.FDO.Commands;
using OSGeo.FDO.Commands.DataStore;
using OSGeo.FDO.Connections;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace FdoCrash
{
    class Program
    {
        static void Main(string[] args)
        {
            var conn = FeatureAccessManager.GetConnectionManager().CreateConnection("OSGeo.SQLite");
            using (conn)
            {
                if (HasCommand(conn, CommandType.CommandType_CreateDataStore, "Creating data stores", out var _))
                {
                    using (var cmd = (ICreateDataStore)conn.CreateCommand(CommandType.CommandType_CreateDataStore))
                    {
                        using (var dict = cmd.DataStoreProperties) //Dispose the property dictionary asap
                        {
                            foreach (string name in dict.PropertyNames)
                            {
                                Console.WriteLine("{0}", name);
                            }
                        }
                    }
                }
            }
        }

        static bool HasCommand(IConnection conn, CommandType cmd, string capDesc, out int? retCode)
        {
            retCode = null;
            using (var cmdCaps = conn.CommandCapabilities) //Dispose the command capabilities asap
            {
                if (Array.IndexOf<int>(cmdCaps.Commands, (int)cmd) < 0)
                {
                    //WriteError("This provider does not support " + capDesc);
                    //retCode = (int)CommandStatus.E_FAIL_UNSUPPORTED_CAPABILITY;
                    return false;
                }
            }
            return true;
        }
    }
}


And with this snippet, I have come to what I confidently feel should be the "best practices" for using the FDO .net API.

Firstly, dispose of any reference to a top-level connection property (or a sub-property that hangs off of that property) as soon as you are done with it. You must do whatever you can to eliminate the possibility of such references out-living the main connection if/when the .net GC goes to clean up. The C# using keyword helps streamline this a lot. Other FDO objects do not need such aggressive disposal (the key point is that they don't directly hang off of the main connection or one of its top-level properties), but you should probably do it anyway out of habit.

A thing that may cause confusion (it's confused me for about a decade!) is that disposing a .net FDO object is not the same as disposing an FDO object in C++. In C++, disposing is actual deletion/deallocation of memory; in .net, disposing just subtracts the ref count on the underlying C++ pointer (a way to tell it we're done with the object on the .net side). And because nearly every FDO class/interface exposed to .net implements the IDisposable interface, you are encouraged to dispose early and often once you're done with them.

Secondly, avoid compound statements (some.property.PropertyOrMethod) on anything involving classes/interfaces from the FDO .net API. These are already the rules for working with the C++ API (to prevent memory leaks from not housing intermediate parts of a compound statement in smart pointers) and we should follow the same rules on the .net side just to be safe.
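To see why the intermediates matter, here's a toy Python model of refcounted wrappers (purely illustrative; none of these classes exist in FDO). A compound statement produces a temporary wrapper that nobody disposes, while caching the intermediate in a using-style block releases it deterministically:

```python
class Native:
    """Stand-in for a native FDO object, freed when its ref count hits 0."""
    def __init__(self):
        self.refcount = 1

class Wrapper:
    """Stand-in for a .net wrapper: construction takes a ref on the native
    object, and 'disposing' (exiting the with-block) releases it."""
    def __init__(self, native):
        self.native = native
        native.refcount += 1
    def __enter__(self):
        return self
    def __exit__(self, *exc):
        self.native.refcount -= 1  # the 'Dispose' of this sketch

leaky = Native()
Wrapper(leaky)          # temporary from a compound statement, never disposed
print(leaky.refcount)   # 2: the extra ref lingers until a GC/finalizer runs

clean = Native()
with Wrapper(clean):    # intermediate cached and disposed (C#'s 'using')
    pass
print(clean.refcount)   # 1: released deterministically
```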

With these 2 rules established, I did a sweep of the FDO Toolbox codebase to make sure they were being adhered to, and the end result was that our powershell test harness no longer crashes when the SQLite provider is involved, which gave me confidence that this issue was finally addressed.

And that's the little (and hopefully insightful) side-story to pad out this blog post :)

Download

Friday 5 June 2020

Announcing: FDO Toolbox 1.4

After 5+ years of inactivity, the dam of pain points and personal frustrations finally burst and resulted in 2 months of solid enhancements and quality-of-life improvements, culminating in a new release of FDO Toolbox that I am pleased to finally announce.

Here are the significant changes in this release.

FDO Toolbox is now 64-bit only

You should all be running a 64-bit version of Windows by now, so there's no real point trying to make a 32-bit version available.

Bulk Copy Enhancements

The Bulk Copy feature of FDO Toolbox has undergone many enhancements and quality-of-life improvements to make it ever more robust in getting spatial data out of one spatial data store and into another. This post covers all the significant changes.

New FdoCmd command-line tool

This release includes FdoCmd, a much more powerful and flexible command-line tool that replaces the existing FdoInfo.exe and FdoUtil.exe tools.

FDO 4.1

This release of FDO Toolbox ships with FDO 4.1 (r7973). This is pretty much equivalent to the FDO that ships with MapGuide Open Source 3.1.2 with extra PostgreSQL and MySQL provider enhancements made after the 3.1.2 release.

Improved file extension to provider inference

Most of you are probably accustomed to dragging and dropping a .sdf file or a .shp file into the Object Explorer of FDO Toolbox and it automatically creating a respective SDF or SHP FDO connection.

Unfortunately, the list of file extensions that worked like this was hard-coded. Drag/drop a:
  • .geojson file
  • .csv file
  • .tab file
  • etc
And expect an FDO connection to be created? This was not possible until this release. We no longer hard-code a list of file extensions; we delegate that out to a new, external FileExtensionMappings.xml file. This file (which you can edit) defines all the file extensions an FDO connection can be created from, and it helps to service:
  • The file drag and drop functionality of the Object Explorer
  • The --from-file command line argument of any FdoCmd verb that requires FDO connection details
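I won't reproduce the shipped file here, but conceptually it's just a list of extension-to-provider mappings along these lines (the element and attribute names below are hypothetical; consult the actual FileExtensionMappings.xml in your installation for the real schema):

```xml
<!-- Hypothetical sketch only: the real element/attribute names may differ.
     Each entry maps a file extension to an FDO provider and the connection
     property that should receive the file path. -->
<FileExtensionMappings>
  <Mapping Extension=".sqlite" Provider="OSGeo.SQLite" FileProperty="File" />
  <Mapping Extension=".geojson" Provider="OSGeo.OGR" FileProperty="DataSource" />
</FileExtensionMappings>
```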
Removal of various cruft

5 years is a lot of time in the world of technology so naturally when coming back to this codebase, I took a look at things that were either obsolete or done real half-heartedly and gave them the axe. This includes:

1. Removing the specialized connection UIs for the Autodesk Oracle and SQL Server FDO providers. I have not touched an Autodesk Geospatial product (where these providers are included) in many years, so whether this feature still works I do not know, nor do I want to invest the resources to find out (I'm long gone from the Autodesk reseller/partner game, so it's not like I have easy access to these products). The existing open source SQL Server and Oracle providers are more than adequate for this task.

2. Removing the specialized connection UI for the legacy PostGIS provider. If you don't know what this is, it was the FDO provider for PostGIS that was replaced by the more robust OSGeo.PostgreSQL provider around the FDO 3.5 timeframe. The legacy provider itself has long since been removed, but the UI for it was still present. Not anymore.

3. Removal of Sequential Process support. This feature (which I probably never documented, but you may see references to it in various UI menus) was an XML-based way of wrapping calls to the old FdoInfo/FdoUtil CLI tools. Obviously, with the consolidation of these tools into FdoCmd, Sequential Process support was broken, and since it is much simpler to just write your own powershell wrapper around FdoCmd, the choice to axe it was an easy one.

4. Removing all semblances of scripting and extensibility. My original ambition for FDO Toolbox was for it to be a fully customizable and scriptable Windows GUI application. These ambitions were better realized in MapGuide Maestro, but were left half-baked in FDO Toolbox. Having the opportunity to revisit this codebase, I've come to the conclusion that FDO Toolbox doesn't need customization or an integrated scripting engine.

It is a tool with a singular set of purposes:
  • I want to peek/inspect some spatial data
  • I want to query some spatial data
  • I want to get data in/out of some spatial data source
And in this frame of reference, it was clear that adding a half-baked IronPython scripting engine was overkill. Maestro has legitimate use cases for an integrated scripting engine, whereas for FDO Toolbox I struggle to find such use cases that cannot be satisfied with this release. In terms of scripting/automation, FdoCmd + powershell is already a combination that addresses this niche. So as a result, this release of FDO Toolbox no longer includes the scripting engine UI and has closed off the addin system. There just isn't a strong need for such features.

This release also no longer includes API documentation for the FDO Toolbox core library. Coming back to this codebase, I've found that this library is really just a thin wrapper on top of the FDO .net API and there isn't much value that it adds on top. You are better off just using the FDO .net API directly or using the new FdoCmd + powershell combination.

In closing

This release has addressed all of my personal pain points and frustrations built up in the last 5 years since the last release of FDO Toolbox, and hopefully it does the same for you!