Saturday, February 22, 2014

Pocket Power Plus Starting a 1989 F150

I wanted to know whether a Solutions from Science Pocket Power Plus could start my 1989 F150.  Three starts later, the unit still reports a full charge.  Here are a few snapshots of the first test.

This is what the package looks like; it comes with just about every adapter you could need.

This is the Pocket Power Plus, sitting in front of the battery.  Notice how small the unit is.

The adapter is heavy duty and has beefy clips. 

All hooked up and ready to start!

After the first start, we still have a full charge!

Sunday, February 16, 2014

Champ RTF Micro Airplane

Yesterday, I decided to splurge and spoil myself with a Champ RTF.   The set comes with the aircraft, a controller and a battery charger.  This image shows the controller and aircraft.

The prop is 5in long (about as long as my smartphone) and the aircraft has a 21in wingspan.

It is 16in from nose to tail.

The battery is held in-place by a small strip of velcro on the belly.

In order to charge the battery, you need to disconnect it and remove it from the aircraft.

The rudder actuator protrudes from the right rear.

The elevator actuator protrudes from the left rear.

The ground-steering wheel is driven by the rudder.

The tires are made of a soft foam, which you can easily squish between your fingers.  The material feels like chamois leather.

The body behind the prop is roughly 1 3/4" wide.

And finally, some shots of the controller.  This is the front face.

This is the back face.

I logged a total of 4 flights over 15 minutes today.  This is my first time flying a micro, so it's going to take some time to adapt.  I take off by holding the unit in my hand, throttling up to 50% and then tossing it like a paper airplane.  I am surprised at how easy it is to control; I was able to do flips and low passes on my first flight.  My landing sequence usually consists of throttling down at around fifteen feet or so, then gliding down into the grass and 'crashing with style'.  I had to overcome my fear of bringing the airplane down into my thick St. Augustine grass, but the aircraft has so little mass that the grass brings it to a gentle stop.  Instead of landing on the ground, I would prefer to catch the plane in my hand.  This will take some practice but will likely come in handy when I decide to fly around a parking lot.

Wednesday, February 12, 2014

Placing Point Models into Parcels

Part 1 - Aligning Points to Transportation

Imagine that you have a finite number of polygons.  Each polygon represents an area that can be occupied by a building.  What you would like to do is automatically place a point within each polygon that completely defines the location and orientation of a specific building model.  The selected building model must fit completely within the provided region.  The polygons below are reasonable examples of these regions:

These polygons will have irregular shapes; some of them will be pointed, rounded or elongated.  If we need to place a point in these geometries, where should it be placed?  The first place one would attempt to place a point is at the polygon's "center".  Unfortunately, the definition of center is not always clear; consider the example geometry below:

Simply summing the x and y values of this geometry's vertices and then averaging will result in a point that is positioned in the upper-left corner.  This is due to the geometry's point density in that area.  Naturally, we would expect our point to be placed somewhere toward the center of the whole geometry, as observed in the example below:
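To make the difference concrete, here is a minimal sketch (plain Python; the ring below is an illustrative square with extra vertices packed along one edge) comparing the naive vertex average against the area centroid computed with the shoelace formula:

```python
def vertex_average(ring):
    # Mean of the vertex coordinates; biased toward dense vertex runs.
    n = len(ring)
    return (sum(x for x, _ in ring) / n, sum(y for _, y in ring) / n)

def area_centroid(ring):
    # Area-weighted centroid of a closed ring (shoelace formula).
    a = cx = cy = 0.0
    for (x0, y0), (x1, y1) in zip(ring, ring[1:] + ring[:1]):
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return (cx / (6.0 * a), cy / (6.0 * a))

# A 4x4 square with extra vertices packed along its left edge:
ring = [(0, 0), (0, 1), (0, 2), (0, 3), (0, 4), (4, 4), (4, 0)]
print(vertex_average(ring))  # pulled toward the dense left edge
print(area_centroid(ring))   # (2.0, 2.0), the square's true center
```

The vertex average lands well left of center simply because more vertices live there; the area centroid ignores vertex density entirely.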

This is more along the lines of what we would expect.  If we were to place a 3D model in and around this point, we would have lots of wiggle room.  Unfortunately, a position like this is not always guaranteed to exist, as is the case with many concave geometries.  This topic will be discussed in a future posting.  For now, let's consider a large area that contains many of these simple cases:

If we wish to assign building models to these points, we will need to know which way they should face.  After all, it would be nice if our building models were facing the street.  Let's consider some examples and see if we can come up with a generalized approach for selecting an orientation angle.

Let's start our analysis by making some general observations of the input:

  1. More than one valid orientation may exist for each area of interest
  2. If an orientation cannot be established by examining an individual area, we could use neighboring areas to make a good guess.

The most trivial approach to point orientation is to align each point to the nearest transportation feature.  This is a poor choice for most cases, but it is reasonably trivial to implement and worth some discussion.  I have roughly 400k areas in my test data set, and it takes about a minute to generate these sample points and orient them to the roads.  The screenshot below illustrates the types of anomalies that are produced when you align to the closest point on the transportation feature.

It should be noted that we are not aligning to an existing vertex on the geometry, but rather to the closest point (which may or may not be an existing vertex).  Notice how most of the orientation segments are not centered within the areas.  We may (or may not) prefer that these angles be chosen in a way that preserves the relative straightness within the surrounding area.
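The closest-point alignment can be sketched in a few lines of Python (illustrative names and a brute-force scan over segments; a real run over 400k areas would want a spatial index):

```python
import math

def closest_point_on_segment(p, a, b):
    # Project p onto segment ab; the result may fall between vertices.
    dx, dy = b[0] - a[0], b[1] - a[1]
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:
        return a
    t = ((p[0] - a[0]) * dx + (p[1] - a[1]) * dy) / seg_len2
    t = max(0.0, min(1.0, t))
    return (a[0] + t * dx, a[1] + t * dy)

def align_to_polyline(p, line):
    # Return (closest point on the line, segment angle in degrees).
    best = None
    for a, b in zip(line, line[1:]):
        q = closest_point_on_segment(p, a, b)
        d2 = (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2
        if best is None or d2 < best[0]:
            angle = math.degrees(math.atan2(b[1] - a[1], b[0] - a[0]))
            best = (d2, q, angle)
    return best[1], best[2]

road = [(0, 0), (10, 0), (10, 10)]   # an L-shaped street centerline
q, angle = align_to_polyline((4, 3), road)
print(q, angle)                      # closest point (4.0, 0.0), angle 0.0
```

The point lands mid-segment on the first leg of the road, which is exactly the "closest point, not closest vertex" behavior described above.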

We can see how the results may vary with this approach when more than one orientation exists.  The screenshot below highlights some of the obvious complexities.

This method does not provide a general purpose solution but it does illustrate most of the fundamental issues that are involved with choosing an orientation for building model placement.  In part 2, I will discuss how to improve the alignment of the orientation segment and greatly simplify the resulting model fitting process.  Below are some screenshots of different sample areas.

Sample 1
Large area, generally acceptable results

Sample 2
Generally acceptable results

Sample 3
The closest vert is not always where you think it is...

Sample 4
What do we do when multiple solutions exist?

Sunday, February 9, 2014

Orange County Florida Parcel Map Tiles

I am setting up styles for another set of Orange County Florida parcel map tiles.

Thursday, February 6, 2014

Scatter for Terrain Database Generation

Part 2 - Storing Scatter Points in Shapefiles

In part 1 of this series, I described two general storage methods that could be used for scatter points.  These two methods apply equally to shapefiles, geodatabases and SQL databases:
  1. All scatter points stored in a single container and attributed with a type
  2. Scatter points stored within individual containers according to their types
The big difference between these two methods is how many objects you decide to store in a single container.  Regardless of which method we choose, we need to choose a container.  In addition, we would like this container to be compatible with popular GIS tools.  For this tutorial, I will store the points in shapefiles.

Storing data in a shapefile usually consists of segregating geometry and attributes.  Geometry is stored in the .shp and .shx files and attributes are stored in the .dbf.  Let's try this out using a shapefile to see how it performs.  In this case, we will create one million point features that each have an integer attribute value:

for(j = 0; j < 1000000; ++j) {
    x = j % 360;
    y = j % 360;
    m = j % 10;
    shape = SHPCreateObject(SHPT_POINT, -1, 0, NULL, NULL, 1, &x, &y, NULL, NULL);
    SHPWriteObject(shp_handle, -1, shape);
    SHPDestroyObject(shape);
    DBFWriteIntegerAttribute(dbf_handle, j, 0, m);
}

On my 2.5GHz MacBook Pro (with several standard desktop programs running), this operation took 19 seconds.  I stress that I am using a single integer attribute to persist the feature type.  This practice of using attributes to categorize a feature's characteristics is a traditionally accepted practice in GIS communities.  Let's think outside the box for a minute and consider what would happen if we did not encode this data in an attribute table.  What other options do we have?  Suppose we encoded this data using a measure, aka m-value.  This would completely eliminate the attribute table overhead while mildly impacting our shapefile write performance.  See the snippet below; notice that the DBF write integer call has been eliminated:

for(j = 0; j < 1000000; ++j) {
    x = j % 360;
    y = j % 360;
    z = 0.0;
    m = j % 10;
    shape = SHPCreateObject(SHPT_POINTM, -1, 0, NULL, NULL, 1, &x, &y, &z, &m);
    SHPWriteObject(shp_handle, -1, shape);
    SHPDestroyObject(shape);
}

When you're talking only a million points, 7 seconds vs. 19 seconds is a huge deal.  This benchmark can easily be replicated for 100 million features or more (you just need to wait a bit longer).

What have we sacrificed to achieve this gain?  We have sacrificed the comfort of being able to use an attribute table to manipulate a feature's type ("Oh no!" screams the GIS analyst).  In the case of scattering features for terrain database generation, this is not a huge deal.  The only issue is figuring out how to encode feature types into m-values.  This is where a simple configuration file comes into play; it would define the key-value relationships needed to derive the feature type.

red oak = 0
sugar maple = 1

... = 10
... = 11

The values could also be auto-generated from the type names if a static definition was not required.  From a technical standpoint, this is very efficient because the lookup table will typically be small.  The structure can be represented using a map in C++ or a dictionary in Python.  This also raises a rather interesting (and perhaps new) concept for the M&S industry: real-number mappings.  The suggestion is to use real-number ranges to represent the set of all possible enumeration values for a particular type of feature.  Meaning, we could map all of our point trees to values within the range [0,1) and buildings within the range [1,2).
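Here is a sketch of that lookup in Python (the type names and the [0,1)/[1,2) range scheme are illustrative, not taken from any standard):

```python
# Hypothetical categories; trees map into [0, 1), buildings into [1, 2).
tree_types = ["red oak", "sugar maple"]
building_types = ["warehouse", "school"]

def build_table(names, base):
    # Auto-generate evenly spaced m-values within [base, base + 1).
    step = 1.0 / len(names)
    return {name: base + i * step for i, name in enumerate(names)}

m_of = {}
m_of.update(build_table(tree_types, 0.0))
m_of.update(build_table(building_types, 1.0))
type_of = {m: name for name, m in m_of.items()}  # reverse lookup

print(m_of["sugar maple"])   # 0.5 -> stored as the point's m-value
print(type_of[1.0])          # warehouse
```

An m-value below 1.0 is then enough to say "this is a tree" without ever touching an attribute table.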

At this point it looks like we have everything we need: where to scatter the feature, the type of feature to scatter and how to orient it.  Oh wait... how to orient it...?  We completely missed that!  Perhaps this means that we will be forced into using an attribute and have to pay the attribute table penalties.  Well, not quite.  Review the second code snippet again and observe the z-value (a PointZ shapefile record carries both a z and an m, so switching SHPT_POINTM to SHPT_POINTZ is all it takes).  Scatter features typically do not use z-values.  This is a mighty convenient place to store orientation, which means we have already paid the price.

I understand that this approach is not for everyone.  In fact, from a pure GIS standpoint, this practice would be frowned upon because the data cannot be trivially filtered and styled.  However, from a development standpoint, this method can be used to store a large quantity of simple points that have a type and orientation.  It is a convenient and clever method that utilizes a standard, portable container.

Sunday, February 2, 2014

Python Wrapper for the ESRI File Geodatabase API

I have been developing software around ESRI's products for almost 10 years now.  During these years, I have had quite a few ups and downs, including:
  • the transition to 9.x / 10.x
  • dealing with C++ API changes
  • dealing with C# API changes
  • dealing with Python API changes (geoprocessing vs arcpy...)
  • service packs galore
  • transition from foundation / defense to production mapping
  • and much, much, much more...
When ESRI first announced the release of their File Geodatabase API, I was excited!  This would give me cross-platform access to their latest-and-greatest geospatial container on Linux, OS X and Windows!  I use each of these platforms regularly for desktop, development and data processing purposes (though my desktop at home happens to be a MacBook).  Since the release of the first API, I have used it for multiple tasks and it has become a critical part of my day-to-day development life.

C++ is mighty fine, but tonight I needed to interface with the File Geodatabase API using Python (specifically with 2.6..... do not ask!).  I searched high and low and could not find a Python wrapper for it, so I had a fresh cup of Joe and decided to dive right in.  I made my first attempt using SIP, but decided that SWIG would be the easier route.  After about an hour of fiddling around, I had an interface and a dirty script that would build it all for me.  This is a snapshot of my dirty build script:

swig -python -IFileGDB_API/include -c++ filegdbapi.i
c++ -c filegdbapi_wrap.cxx -IFileGDB_API/include -I/usr/include/python2.6
c++ -shared filegdbapi_wrap.o -LFileGDB_API/lib -lpython2.6 -lFileGDBAPI -o _filegdbapi.so

If you're reading this article and are not familiar with software development, all you need to know is that those couple of lines actually do quite a bit.  In the end, two files of interest are generated: the Python module and the shared library that backs it; everything I needed to get started!  At this point, we are ready to fire open the Python interpreter and examine our new module:

Python 2.6.1 (r261:67515, Jun 24 2010, 21:47:49) 
[GCC 4.2.1 (Apple Inc. build 5646)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import filegdbapi

No errors, that's a good thing!  Let's see what our module has packed inside.

>>> dirs = dir(filegdbapi)
>>> dirs.sort()
>>> for d in dirs:
...     print d

It looks like we are good to go, so let's do something geospatial, like create a new file geodatabase:

>>> g = filegdbapi.Geodatabase()
>>> ret = filegdbapi.CreateGeodatabase('./test.gdb', g)
>>> ret

Because FileGDBCore defines S_OK to be ((fgdbError)0x00000000), a return value of 0 means success!  Though for sanity, we will check:

>>> import glob
>>> glob.glob('*.gdb')

And double check...

Geodatabase creation via our new wrapper, check!  Note to self: make the Python wrapper a little bit nicer by bringing over the integer constants.  Moving on, can I open the file geodatabase that I just created?

>>> g = filegdbapi.Geodatabase()
>>> ret = filegdbapi.OpenGeodatabase('./test.gdb', g)
>>> ret
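Eyeballing raw return codes gets old quickly.  A tiny helper could wrap each call (a sketch; only the S_OK value comes from FileGDBCore, the helper itself is hypothetical):

```python
S_OK = 0  # FileGDBCore defines S_OK as ((fgdbError)0x00000000)

def check(ret, what="FileGDB API call"):
    # Raise if a File Geodatabase API function returned an error code.
    if ret != S_OK:
        raise RuntimeError("%s failed with fgdbError %d" % (what, ret))
    return ret

# usage against the wrapper (not executed here):
# check(filegdbapi.OpenGeodatabase('./test.gdb', g), 'OpenGeodatabase')
```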

It looks like we're on a roll, so let's try creating a new table in our geodatabase.  I will use the Streets.xml straight out of the 'TableSchema' sample provided in the File Geodatabase API distribution.

>>> import filegdbapi 
>>> geodatabase = filegdbapi.Geodatabase()
>>> ret = filegdbapi.OpenGeodatabase('./test.gdb', geodatabase)
>>> ret
>>> table_def = open('Streets.xml').read()
>>> table = filegdbapi.Table()
>>> geodatabase.CreateTable(table_def, '', table)
>>> filegdbapi.CloseGeodatabase(geodatabase)

Isn't it nice when things just work?  Honda sure knows it.  Now it is time to set up an open source project and share this with the world.  I think I will name it the file-geodatabase-api-python-wrapper; ain't that a mouthful!

Saturday, February 1, 2014

Scatter for Terrain Database Generation

Part 1 - Overview

In the field of terrain database generation, feature scatter is a standard practice.  The typical use case consists of converting a line or polygon geometry into one or many different types of features.  For example, suppose we have a collection of land cover polygons:
The green polygons may represent a forest or park that is filled with a variety of vegetation.  From a single polygon, one could choose to scatter multiple points, as illustrated below.

In addition to scattering within a feature, you could choose to scatter points along or around the border of a target polygon, as illustrated below:
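A border scatter can be sketched by walking the ring's perimeter at a fixed spacing (illustrative Python; real tools would also expose jitter and offset controls):

```python
import math

def border_scatter(ring, spacing):
    # Place points every `spacing` units along a closed ring's border.
    points = []
    carry = 0.0  # distance already consumed on the previous edge
    for (x0, y0), (x1, y1) in zip(ring, ring[1:] + ring[:1]):
        edge = math.hypot(x1 - x0, y1 - y0)
        d = carry
        while d < edge:
            t = d / edge
            points.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            d += spacing
        carry = d - edge
    return points

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
pts = border_scatter(square, 5.0)
print(len(pts))   # 8 points around the 40-unit perimeter
```

Carrying the leftover distance from one edge to the next keeps the spacing even around corners.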

Whether it is performed within a terrain tool or manually using a GIS tool, the operation is fundamentally the same.  It all boils down to one thing: placing features in a reasonable place.  However, this raises several questions:
  • How many features should be scattered?
  • What type of features should be scattered?
  • Where should the scattered features be placed?
  • Is the scatter method deterministic or non-deterministic?
Regardless of what is scattered (or how), the points need to be stored somewhere.  Regardless of which container is used (shapefile, geodatabase, SQL database), scatter points can be stored in one of two ways:
  1. All scatter points could be stored in a single container and individually attributed with a type
  2. Scatter points can be stored within individual datasets according to their types
Either way, a data model is required for the attribution that is associated with the individual scatter points.  With respect to vegetation, a good starting point for a data model is the FGDC's Vegetation Classification Standard.  Now, although vegetation is the focus of my example, scatter features are not limited to vegetation.  There are several other kinds of features that get scattered in a terrain database pipeline, such as buildings.  Before we consider the effects of different scatter types, let's add some geometry to the mix and see what happens.  Consider the case where multiple features overlap, as illustrated below:

In this situation we have land cover and hydrology polygons that overlap.  In my geographic viewer, I placed the hydrology on top of the ground cover.  There is a visually obvious, well-defined layering order that can be observed based on how I decided to order the layers.  Should the scatter system be aware of these layering 'rules'?  Are these layering rules everything that scatter needs to do its job?  Let's consider how these layering rules may operate:
  • IF a ground cover polygon and a different ground cover polygon overlap, THEN you may scatter different types of features within the overlapping region.
  • IF a hydrology polygon and a ground cover polygon overlap THEN you may not scatter within the hydrology feature.
After all, you do not want to scatter trees in the water... do you?  Well, actually... you may!  We conclude that the answer is not always that simple.  So what should a good scatter system do?  Since we are still not certain, let's sum up what we have learned so far; a good scatter system:
  • may or may not be deterministic
  • must take feature relationships into account
    • layering appears to be one aspect of this
    • feature relationships is another
  • must know how many features should be placed
    • this may be driven by density
    • this may be driven by location
    • or may not even matter, just as long as there are 'enough'
In the modeling and simulation industry, the most basic approach to scatter is based upon three variables, area, density and avoidance:
  • area - where the features should be placed
  • density - the number of features to place
  • avoidance - features must be placed so that they avoid existing features
Though the variables are simple, some immediate problems come to mind.  Does scatter fail if it cannot fit a certain number of features into a given area?  Does scatter fail if it cannot avoid features in a given area?  In addition to these questions, there are other non-obvious complexities involved with avoidance.  Suppose you are interested in scattering two tree models whose footprints are not rectangular, like those seen below:
Remember, the illustration above shows the footprints of the tree models.  If you choose to scatter these features using only their bounding volumes, you will end up with unnatural-looking scatter results, because the features will be spaced too far apart.  Unless feature model overlap is allowed, the geometry of the trees must be considered during scatter.

Despite these problems, it appears that the variables involved with the 'simple approach' are a subset of what we consider to be a good scatter system.  The critical pieces that are missing are realistic feature relationships.  Avoidance is one type of feature relationship but this is insufficient.  A lack of good feature relationships is what prevents most modern scatter systems from generating realistic results.

In a future post, we will explore what feature relationships mean and how they can be implemented.  In addition, we will examine how feature relationships can be used to enhance terrain database content in other fascinating ways.