Did Robert E. Lee Spend Saturday Night in Toledo, Ohio?

Saturday night in Toledo, Ohio is like being nowhere at all
All through the day how the hours rush by
You sit in the park and you watch the grass die!
Ah, but after the sunset, the dusk and the twilight
When shadows of night start to fall
They roll back the sidewalk precisely at ten
And people who live there are not seen again!
Just two lonely truckers from Great Falls, Montana
And a salesman from places unknown 
All huddled together in downtown Toledo
To spend their big night all alone!


The song is by Randy Sparks, written after a particularly uninspiring night in Toledo.  John Denver started performing it in the early 1970s and was ‘uninvited’ by Toledo Mayor Harry Kessler to perform a concert in the city.  Denver and Toledo eventually kissed and made up, but there’s no denying that Toledo wasn’t, and still isn’t, an entertainment mecca.
_______________________________________________________________________

Everybody’s heard of the great Toledo War, right?

Well, for those of you who haven’t, here’s the synopsis:

In 1835 the State of Ohio and the Territory of Michigan went to war over a strip of land, roughly six miles wide, that extended from Toledo west to the Indiana border.  The war arose from a boundary dispute triggered by an inaccurate boundary description set out in the Northwest Ordinance of 1787 and an inaccurate description of Ohio’s northern boundary set out in the Enabling Act of 1802 (yet another Congressional screw-up).  Both Ohio and Michigan considered this strip of land, known as the Toledo Strip, to be theirs.  You may well ask (hell, you should ask) why all the interest in Toledo?  Well, in the early 1800s Toledo was poised to become a major shipping center on the Great Lakes.  The Erie Canal had just been opened, triggering a trade and settlement boom in the upper Midwest.  Politicians emboldened by the success of the Erie Canal were talking seriously of financing a canal following the Maumee River from Toledo to Fort Wayne, Indiana, and from there on to the Mississippi River.  If this plan came through, bulk goods could move cheaply by water between New York and the Mississippi, and Toledo would become one of the major trading hubs in North America.  Governors and legislatures drooled over the prospect.  Suddenly Toledo was worth fighting for!

The Toledo War was really nothing more than a bunch of alcohol-fueled hotheads on both sides throwing insults and the occasional lead ball across the border.  Still, the federal government had to do something to settle the dispute.  After long negotiations and intervention by President Andrew Jackson (and a little arm twisting to get Michigan to play along), the border issue was ‘settled’.  All that remained was for a formal boundary survey to be conducted and the results agreed to by both Ohio and Michigan.

Enter Lieutenant Robert E. Lee.  He was appointed to a party of Corps of Engineers officers detailed by the War Department to survey and map the Ohio – Michigan border as described in the agreement:

In 1835 (Washington) Hood was associated with Robert E. Lee in a map-making expedition to settle once and for all the Ohio-Michigan boundary dispute. This involved a strip of land averaging six and one half miles in width and extending along the northern border of Ohio west of Lake Erie. Michigan’s claim was based on the boundary laid down by the Northwest Ordinance (1787). Ohio’s claim was based on the line set forth in its state constitution, which the U.S. Congress had neither confirmed nor rejected when Ohio was admitted to the Union.
To settle this dispute, the government sent Captain Andrew Talcott and Lieutenants Robert E. Lee and Washington Hood to survey and map this area. On the basis of this survey both Michigan and Ohio agreed to compromise and Michigan became a state in 1837. This dispute nearly erupted into a border clash and is often referred to as the “Toledo War.”
– Charles R. Steitz, Jr., “Washington Hood: Five Hundredth Graduate of the United States Military Academy”, Pennsylvania Folklife, 1990, Volume 39, No. 3

Which begs the question, just what does a trio of wild and crazy 19th-century West Point grads do for entertainment while in Toledo?  Grab a bite to eat and listen to some jazz music, perhaps?

Tony Packo’s.  A Toledo landmark.  The
best damned hot dogs and potato salad in the world
and home of the Cake Walking Jazz Band.

Although Michigan lost this round, they were given the western three-quarters of what is today their Upper Peninsula as compensation.  At the time it seemed like an unfair trade and there was a lot of grousing about it among Michiganders (or Michiganians, or whatever they call themselves).

Today’s Michiganders think it was one heck of a good deal.  All you have to do is drive through downtown Toledo to understand why.

And the Toledo – Mississippi River canal project?  It petered out.  One word.  Railroads.

– Brian

Gas Station Maps

In olden times, like back in the 1960s, you could pull into any gas station in the US and grab a free road map.  These maps were designed for one purpose – to show the motorist how to get from where he was to where he wanted to be.  The maps were part advertising and part incentive.  The idea was to encourage travel by automobile.  The more you traveled the more gas you burned.

The idea of the free road map was born back in the early 1900s when automobile companies like Ford were involved in a major push to get the state and federal governments to expand and improve roads throughout the country.  Road conditions were simply awful back then and the thought was that better roads would encourage travel and commerce and, of course, spur automobile sales.  This led to the creation of the federal Bureau of Public Roads (later the Federal Highway Administration) and the first allocations of federal money for ongoing road construction and maintenance.

By way of illustration, one of Harry Truman’s standard campaign planks, first as a county official in Missouri, then as Senator and ultimately as President, was better roads.  He felt that no farmer in a rural area should have to travel more than two miles to find a paved road to get his crops to market.  The fact that two miles was viewed as a reasonable distance to haul products before finding a good road reflects the state of road construction in the rural areas of the country right up into the 1950s.

Well, if we’ve got all these good new roads how do we let people know about them?  Why, the road map, of course!  Gasoline companies like Texaco, Shell, BP, Mobil, Standard Oil and many others viewed free road maps as part of the cost of doing business.  The gasoline companies didn’t do the map production themselves.  They farmed out the production to one of the few companies that specialized in making road maps.  Rand McNally, Gousha and General Drafting were the major players in this industry and they cranked out millions of maps between 1920 and 1970.

The other great thing about gas station road maps, besides being free, was that they were kept fairly current.  The compilation of these maps was a cooperative effort between the gasoline producers, the mapping companies and local, state and federal road and transportation bureaus.  Maps were updated and re-published as frequently as every year, depending on the rate of road construction in a particular state.  Of course each gas company’s map was tailored to show company service stations and to proudly trumpet the superiority of their product over their competitors’, but the actual map information tended to be pretty consistent from company to company.

A side benefit from this program was the standardization of road map symbology.  Map makers realized we needed a common map language to depict things like primary roads, secondary roads, city boundaries, rivers and lakes and route symbols.  In very short order common symbols were standardized and used on all road maps, not just those handed out for free in gas stations.  Map symbols were a unifying language on the highways and byways of mid-20th century America.

In addition, millions of American school kids learned map reading from gas station road maps.   Schools regularly integrated map reading into the curriculum, and the map of choice was the good old gas station road map.  I think the peak of America’s map literacy came in the 1950s, when millions of American kids, eager to tell their parents where to go, took over the job of automobile navigation and honed their skills in route finding and trip planning with good old gas station maps.

In the 1950s we planned our journeys using a paper map and imagination.  Today we fire up the GPS and wait for it to tell us where to go.  I fear we have become map dummies.

Let’s take a trip back in time and see what it was like for a mapping company to keep up with changes to roads and road conditions.  Many would be surprised to learn that the methods used today are pretty much the same as we see in this video.  The equipment has changed – it’s all computerized now – but someone still has to drive the roads and note the changes.

– Brian

From The Deck Of The SS Northing & Easting

Earlier this morning I let the dogs out to do their business and stepped out onto my deck to have a look around.  Although it was a bit cloudy out I noticed that the Moon was hanging brightly about 8 degrees above my roof line.  Dawn was just starting to break and I figured it would be a good time for this pseudo-mariner to get some practice sights in with the sextant.  The Moon is entering its last quarter here in Georgia and there was still enough of the orb available for a good upper or lower limb shot.

I grabbed my old Astra IIIB sextant, screwed on the artificial bubble horizon and spent about 10 minutes practicing ‘pulling down the sight’, focusing more on technique than accuracy.  With a bubble horizon you have a lot of room for error because the horizon indicator (the bubble) is so large when viewed through the sight tube.  Don’t worry – around 0720 EST the Moon was hanging at about 40 degrees 4.8 minutes, right where it should be.  The clockwork heavens are still ticking along just fine.

Astra IIIB Sextant
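
Out of curiosity, you can check a sight like this against a computed ephemeris with a few lines of Python and the Skyfield library.  The time and location below are stand-ins, not my actual position, so treat this as a sketch of the method rather than a record of my sight:

```python
# Sketch: compute the Moon's altitude for a given time and place with
# Skyfield (pip install skyfield). Date, time and location are hypothetical.
from skyfield.api import load, wgs84

ts = load.timescale()
eph = load('de421.bsp')                  # JPL ephemeris, fetched on first run
earth, moon = eph['earth'], eph['moon']

observer = earth + wgs84.latlon(34.0, -84.5)   # somewhere in north Georgia
t = ts.utc(2012, 1, 16, 12, 20)                # 0720 EST = 1220 UTC

alt, az, _ = observer.at(t).observe(moon).apparent().altaz()
print(f'Computed altitude of the Moon: {alt.degrees:.2f} degrees')
```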

As I was fiddling with the sextant the winds started pushing the low clouds around and the Moon began darting in and out of view, sometimes partially obscured, sometimes fully obscured.  This made for an interesting practice session as I was forced to time the approach and departure of the heavier cloud patches and practice pulling down the sight quickly before the Moon became too indistinct for a good shot.   This is a common problem in celestial navigation – the navigator is at the mercy of the weather.  That’s why so much emphasis was placed on grabbing a celestial shot whenever the heavens and the weather cooperated.  It is also why so much emphasis was placed on accurate dead reckoning – estimating your current location based on distance and direction traveled from your last known location.  Since you were never sure when you’d be able to get your next celestial fix an accurate running estimate of your position was absolutely crucial.
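
Dead reckoning itself is nothing but arithmetic.  Here’s a minimal sketch of the traditional plane-sailing computation a navigator would work by hand; the starting fix, course and speed are made up for illustration:

```python
import math

def dead_reckon(lat, lon, course_deg, speed_kts, hours):
    """Advance a position by course and distance run (plane sailing).
    Good enough over the distances between celestial fixes."""
    distance_nm = speed_kts * hours
    course = math.radians(course_deg)
    dlat = distance_nm * math.cos(course) / 60.0      # 1 deg latitude = 60 nm
    dlon = distance_nm * math.sin(course) / (60.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

# Last fix at 30N 140W, steaming 045 true at 12 knots for 6 hours:
print(dead_reckon(30.0, -140.0, 45.0, 12.0, 6.0))
```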

I was reminded of the particular problem celestial navigation posed for our submarine crews in WWII.  More than any other arm of the Navy, the Submarine Service operated far into enemy waters in search of victims, and they traveled alone.  Accurate navigation was absolutely essential and the navigators assigned to our submarines were some of the best the Navy produced.

WWII submarines were extremely vulnerable when caught in the wrong combination of circumstances.  Our subs, like the Gato-class boats, were really highly modified surface ships that could spend limited amounts of time under water on battery power.

US Gato-class submarine

The lower spaces of these subs were filled with giant lead-acid batteries that allowed the boat to remain submerged for up to 48 hours at creep speed, or to maneuver submerged at up to about 9 knots for short bursts.  Eventually, however, the sub had to surface to charge her batteries, refill her air tanks and get a navigational fix.  For a boat operating alone in enemy waters this was a hazardous activity.  A submarine was never more vulnerable than when on the surface with low batteries.  It was common practice for a sub to surface in the dark of night and make a high speed dash to a new hunting area while replenishing her batteries.  The problem is that the middle of the night is generally a lousy time for a celestial fix.  Sure, the skies are filled with stars and planets, but the horizon is difficult to distinguish.  The best time for a fix is at nautical twilight, when the sun is 6 – 12 degrees below the horizon.  At this time the nautical horizon is still distinct and key navigational stars and planets are visible in the darkening sky.  But there’s also enough light left to be spotted by an enemy aircraft or nearby surface ship.
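
The twilight window itself is easy to compute today.  Here’s a quick Skyfield sketch that simply checks whether the sun sits 6 to 12 degrees below the horizon; the patrol position and date are invented:

```python
# Sketch: test for nautical twilight (sun 6-12 degrees below the horizon).
from skyfield.api import load, wgs84

ts = load.timescale()
eph = load('de421.bsp')
earth, sun = eph['earth'], eph['sun']

spot = earth + wgs84.latlon(15.0, 135.0)   # hypothetical patrol area
t = ts.utc(1944, 10, 20, 9, 30)            # hypothetical date and time (UTC)

alt, _, _ = spot.at(t).observe(sun).apparent().altaz()
if -12.0 <= alt.degrees <= -6.0:
    print('nautical twilight - good horizon, visible stars')
else:
    print(f'sun altitude {alt.degrees:.1f} degrees - wait for twilight')
```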

This led to a unique ‘navigator’s dance’ on American submarines.  At twilight the Captain would bring the boat to periscope depth to check for enemy ships and aircraft and to check weather conditions.  If the skies were clear of enemy and clouds he’d give the heads up to the navigator, who was usually the boat’s executive officer.  The navigator would have already checked his navigational tables and picked one or more likely celestial objects to try for a fix.  This could be a planet or a bright star or, if he was really lucky, the Moon, already up and far enough above the horizon to provide a good fix.  The navigator would often wear goggles with red lenses to get his eyes adapted to dark conditions.

The Captain would give the command to surface the boat, and once the conning tower was clear of the water the hatch would be opened and the watch personnel would scramble up with binoculars, climb the periscope shears and scan the skies and the horizon for any signs of the enemy.  Once the all-clear was given the navigator would come up with the sextant hanging from his neck by a lanyard.  He would take a series of quick shots on the available celestial bodies and call the sextant readings down to the navigation team in the control room.  The navigation team would note the time of the observations against the boat’s chronometers and begin the process of using the sight readings to establish lines of position.  A quick shot on Polaris gave the navigator an accurate and easily determined latitude, but the shots on the stars and planets to determine longitude took a bit more number crunching.  Things like the height of the navigator above the surface of the water, the time difference from GMT, the uncorrected error built into the sextant and other factors all had to be accounted for.  This process was called ‘sight reduction’.  It was (and still is) straightforward but somewhat tedious math.
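
For the curious, the heart of sight reduction is a single spherical-trigonometry identity plus a handful of corrections.  Here’s a bare-bones sketch; the assumed position, almanac values and sextant reading are all invented for illustration:

```python
import math

def computed_altitude(lat_deg, dec_deg, lha_deg):
    """The classic sight-reduction formula:
    sin(Hc) = sin(lat)*sin(dec) + cos(lat)*cos(dec)*cos(LHA)"""
    lat, dec, lha = map(math.radians, (lat_deg, dec_deg, lha_deg))
    sin_hc = (math.sin(lat) * math.sin(dec) +
              math.cos(lat) * math.cos(dec) * math.cos(lha))
    return math.degrees(math.asin(sin_hc))

def observed_altitude(sextant_deg, index_error_min, height_of_eye_m):
    """Correct the raw sextant reading for index error and dip.
    Dip in arc-minutes is roughly 1.76 * sqrt(height of eye in meters)."""
    dip_min = 1.76 * math.sqrt(height_of_eye_m)
    return sextant_deg - index_error_min / 60.0 - dip_min / 60.0

hc = computed_altitude(34.0, -12.5, 41.0)   # assumed position + almanac values
ho = observed_altitude(29.402, 2.0, 5.0)    # invented sextant reading
intercept_nm = (ho - hc) * 60.0             # toward (+) or away (-), in nm
print(f'Hc = {hc:.3f}  Ho = {ho:.3f}  intercept = {intercept_nm:+.1f} nm')
```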

In the end the navigation team (usually consisting of the executive officer, an enlisted navigator known as a quartermaster and another pair of trained eyes, often those of the Captain) would come up with intersecting lines of position, one for latitude and one for longitude, that provided the boat’s true position at the time the sights were taken.

Here’s an interesting description of the process taken from the book The Underwater War 1939 – 1945 by Richard Compton-Hall:

Away from land every opportunity for taking sun, moon, planet and star sights had to be snatched. Sight-taking with a sextant was treated as an evolution; if surfacing primarily for that purpose it was combined when possible with ditching (trash) — which made matters no easier for the navigator competing in the conning tower and on the crowded bridge with a hustling (trash) party, the lookouts and the sea itself. The smallest drop of water on the sextant mirror made sight-taking impossible and the instrument had to be wrapped tenderly in a towel when not actually bringing the observed body down on to the lurching, irregular horizon which, with so low a height-of-eye, made the task doubly difficult. The ‘exec’ was primarily responsible for navigation in American boats (assisted by excellent quartermasters) but German commanders relied upon the equivalent of a specially trained warrant officer to take sights. Most British captains thought sight-taking far too important to entrust to Vasco (the navigator) and did the sextant work themselves; but they were quite happy to delegate the long and boring working-out of the sights when they were taken! It could easily take an hour to plod through the spherical trigonometry (which actually amounted to no more than straight forward arithmetic) before arriving at a solution which almost invariably produced a large cocked hat; this led to thinly veiled hints from Vasco to the effect that the captain was incapable of reading sextant angles, and to more direct accusations from the captain that the navigator was incapable of simple addition and subtraction. Some boats carried rapid reduction tables derived from air navigation manuals which greatly shortened the time required to produce a fix: but the Royal Navy and most other services clung doggedly to Inman’s Nautical Tables with their long columns of five-figure logarithms.

Today we are spoiled.  Want to know where you are on the face of the earth to within a few hundred feet?  Just turn on your smartphone or GPS receiver.  Within seconds you’ll get a position fix that is far more accurate than any experienced navigator could have calculated using celestial navigation.

Yet I believe it is important we continue to practice the old techniques.  First, it is great mental exercise.  To be a good celestial navigator you need to be at least proficient in basic astronomy and mathematics.  You need to know how to evaluate and calculate error.  You need to be a good problem solver.  Celestial navigation is like golf – it takes just a few months to learn but a lifetime to master.  It sure beats playing another round of World of Warcraft.

Next, celestial navigation gives one a greater appreciation for the technology we have available today, and that appreciation and the resulting awareness of the GPS system’s capabilities and limitations will make you a better navigator overall.

And last, the celestial navigation techniques and tools we use today are exactly the same as those used by history’s great explorers and navigators – Capt. James Cook, Lewis and Clark, Robert Peary, Roald Amundsen, Ernest Shackleton, Robert Scott, Capt. William Bligh (yes, that Capt. Bligh) and many others. Anyone interested in the history of exploration can make a direct and relevant connection to their heroes and better appreciate their achievements by dabbling in celestial navigation.

So that’s today’s report from the deck of the SS Northing & Easting.  I’ll keep the spyglass and blunderbuss handy in case the pirates try to board.

Brian

The NGS Does the IAGS

My blog post last year about the Inter-American Geodetic Survey (IAGS) has proven to be my most popular post, both in the number of pageviews and the number of comments.  Although I’m not burning up the internet, it is interesting to track where visitors’ interests lie.  Surprisingly, my blog post is also the #2 return on Google searches against the term ‘inter american geodetic survey’ (it seems the acronym IAGS is in use by several completely unrelated organizations that generate a lot of traffic, so searches against that term won’t put my post anywhere near the top of the list).

I’m both elated and just a bit saddened by this outcome.  Elated that I seem to have hit on a poorly covered yet important subject that I can contribute significantly to, yet saddened that the Army Corps of Engineers continues to ignore the very crucial contributions their topographic services and personnel made to the professions of mapping, surveying and geodesy.

When I wrote the blog post about the IAGS I noted that there’s very little information about the organization available on the web, and I tried to provide links to as much relevant info as I could find.  One of the sources I completely missed was the excellent article about the IAGS that appeared in the March 1956 edition of the National Geographic Magazine.

March 1956 National Geographic article on the
IAGS.  Click here to read it.

Before the National Geographic gave up serious scholarly writing for feel-good stories about baby seals and the therapeutic effects of tree hugging, it actually published some darned good stories about geography, exploration and adventure.  All three of these elements come together in this great story about the IAGS.  It is probably the best, and perhaps the only, popular account of the agency’s activities.  So follow this link and read about the Men Who Measure the Earth.

Whence The Meridian?

It seems that most folks’ awareness of history reaches back only 20 or 30 years.  Few today can conceive of a world without laptop computers, cable TV, cell phones and Dancing With The Stars.  Oh how dark and disordered life must have been before the internet!  How did man survive?

Old farts like me realize that history is a cumulative progression of events, and the things we take for granted today had an origin in the murky mists of time long past.

So it is with the concept of longitude and the meridian.  Today we take for granted that the zero line of longitude, the international meridian, runs through Greenwich, England.  It seems just so natural.  Most people today don’t realize that getting the meridian at Greenwich accepted as the world-wide standard was a long, drawn out process that spanned over a hundred years and involved most of the major world powers.  What we take for granted today is actually the result of some pretty intense diplomatic and scientific battles.

Man has realized since the time of the ancient Greeks that the Earth is round(ish) and that one of the best ways to refer to one’s position on this big ball is to use angular measurements – degrees, minutes and seconds.

On land this convention wasn’t really important – the common man was content relating his location as distance and direction traveled from known points.  There was always a track, trail or road that led to where he wanted to go.  The history of the march of civilization is the history of road building.  It’s coded into our DNA.

But as soon as man started sailing out of sight of land things changed.  There are no roads in the middle of the Atlantic.  All sailors had to fall back on was positioning by latitude and longitude.  With the explosion of maritime trade in the 18th and 19th centuries most of the big navigation problems were quickly worked out.  By the late 1700s we had reliable navigation instruments (sextants and chronometers), most of the important places of the world had been charted and sea captains could reliably and safely make their way to the other side of the world and back carrying cargoes that made the ship’s owners immensely wealthy.

But there was one last international navigation issue still hanging out there in the late 1800s that needed to be addressed.  The issue of a common meridian.

Latitude and longitude are calculated from an accepted reference line, or zero line.  For latitude the solution was simple.  The equator is the natural zero line of reference.  When calculating latitude you are measuring your location north or south of the equator – the zero line of latitude.  Simply measure the angle between the horizon and the North Star – Polaris – and you have your latitude in degrees, minutes and seconds.  Mariners in the northern hemisphere have been doing this for many centuries.  (If you are south of the equator it’s a little trickier since there is no star that hovers directly over the south pole.  However, there are nearby stars, such as those that make up the Southern Cross, that permit similar measurements.)

Longitude, however, has always been the problem child of navigation.  Part of the problem is that there is no natural zero line of longitude – the earth does not have a natural vertical equator.  The other problem is that there are no fixed or unmoving celestial bodies – stars or planets – that offer an easy reference for longitude measurements the way Polaris does for latitude.  All the useful celestial bodies are in constant motion overhead.  What was needed was first a fixed reference line – a meridian – and then, referenced to that line, the minute-by-minute locations of all useful stars and planets as they marched across the sky.  The designation of a meridian also drove the publication of accurate nautical charts for use by merchantmen and navies.  This was a monumental task that only governments could support.
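
Once you do have a meridian, an accurate clock and an almanac, the arithmetic is almost embarrassingly simple: the Earth turns 15 degrees per hour, so the difference between local time (read from the sky) and time at the reference meridian converts directly to longitude.  A toy example with made-up numbers:

```python
def longitude_from_time(chronometer_hours_at_local_noon):
    """Longitude from the chronometer time of local apparent noon.
    The chronometer keeps reference-meridian time; the Earth turns
    360 / 24 = 15 degrees per hour. East positive, west negative."""
    return (12.0 - chronometer_hours_at_local_noon) * 15.0

# Local apparent noon observed when the chronometer reads 16:54 (16.9 hours):
print(longitude_from_time(16.9))   # -73.5, i.e. 73.5 degrees west
```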

The realization that establishing a meridian and charting the night skies was a national necessity coincided with the scientific revolution of the 1700s.  Nations were willing – even eager – to put their money and their best minds to the task; it became an issue of national pride among the major seafaring nations.  Most also saw it as a national security issue.  These efforts were some of the earliest examples of directed research – scientific investigation not for the sake of enlightenment but to meet a specific economic or military goal.

As a result, everybody who fancied themselves a big shot on the global stage established their own meridian and published navigational charts and celestial almanacs referenced to it.  Most also required their navies and merchant fleets to use their meridian.  Countries such as England (Greenwich), Spain (Madrid), France (Paris), the United States (Washington D.C. and Philadelphia), Portugal (Lisbon), Norway (Oslo), Russia (St. Petersburg) and Japan (Kyoto) all developed their own meridians.  Even non-seafaring nations like Switzerland and Romania tried to get in on the act.

By the mid 1800s the worldwide nautical charting community had become a seafaring Tower of Babel; not everybody spoke the same positional language.  At the same time the development of reliable steam power was driving an explosion of commercial shipping activity.  As merchant marine activities became more globalized, ship captains, owners and insurance companies began demanding standardized navigational charts and nautical almanacs.  A ship captain sailing from Boston needed to know that when he got to Lisbon and needed a new chart he could walk into a chandlery and purchase one that used the same meridian he was trained on and comfortable with.  Seafaring nations realized that a single universally recognized meridian was a good thing.  But whose meridian would be the winner?

In 1884 the President of the United States, Chester A. Arthur, got representatives from all the key nations together in a big room at the State Department in Washington D.C. and told them to figure it out.  By this time the US really didn’t have a dog in the fight; we were ready to accept Greenwich as the standard Prime Meridian, but we were open to alternatives and were not going to push a US-based solution.  My guess is that most of the attendees saw the US as something of a neutral party in this argument, which is why they agreed to show up and work things out.

The ‘International Conference For the Purpose of Fixing a Prime Meridian and a Universal Day’ was convened on October 1st, 1884 and ran for the full month.  The proceedings can be found on the Project Gutenberg website, and they make an interesting read from historical, scientific and political perspectives.  The delegates from France did a lot of talking, extolling the glories of the Empire and the primacy of the observatory in Paris.  In the end, however, Greenwich won out, in large part, I suspect, because at the time over 72% of the world’s shipping used nautical charts based on the Greenwich meridian.  The Greenwich solution had the weight of numbers behind it and came without a lot of Gallic preening and posturing.

And so we have the Final Act:

Note who decided to take a pass on the final vote.  Sore losers I guess.

The conference addressed and adopted a number of other issues including the designation of a ‘universal day’.  This is why we have Greenwich Mean Time, or GMT (the forerunner of today’s UTC), and why the standard solar day starts, and ends, at Greenwich.

After the conference the US Congress moved quickly to adopt Greenwich as the standard prime meridian for all US-based mapping and charting.  The US Navy and the Royal Navy began working jointly on the development and maintenance of nautical almanacs for celestial navigation and the sharing of nautical charts.  It is a collaboration that continues to this day.

So there you have it.  Something as seemingly mundane as the starting point for all longitude measurements around the world actually has a history that impacts us today.

Brian

Spy Satellites Declassified

A KH-9 Hexagon Imagery Satellite.  The thing’s as big
as a Greyhound bus!

On 17 September 2011 the US declassified the KH-9 HEXAGON satellite program and its mission information.

Guess now I can tell my wife what I was doing for most of those 23 years I was in the Army.

What’s not discussed in the story, and I won’t go into too much detail until I know for sure it’s OK to discuss it in full, is the contribution these satellites made to the DoD’s world-wide mapping program.  Suffice it to say, without these birds we would not have been able to accurately map the vast territories of the Soviet Union, Eastern Europe, China and all the other hostile places we thought we might have to go fight in.

More to follow…  Maybe.

Brian

The Software I Hate To Love

In the Geospatial Engineering world there is one Big Dog software developer and a pack of miniature chihuahuas snapping at its heels.  The Big Dog is ESRI, developers of the ArcGIS suite of software products.

ESRI dominates the GIS (geographic information systems) software field in the same way Microsoft dominates the computer operating system field – there are competitors, but nobody even comes close to the market share that ESRI developed and has held for decades.

But unlike Microsoft, ESRI didn’t get to where it is by being predatory and imposing crushing licensing agreements on its clients.  ESRI got its market share the old fashioned way – by simply being the best product on the market for its target users.  ArcGIS is the software product that moved the traditional discipline of topography out of the paper map and overlay era and into the computer-based, analysis-driven discipline of Geospatial Engineering.

ESRI was started by Jack Dangermond, someone I refer to as a “Birkenstock wearin’, Volvo drivin’, granola crunchin’ hippie.”  In the late 1960s and early 70s, building on pioneering GIS work done in Canada (where the discipline got its start), Dangermond developed the land analysis software that became ArcInfo, released as a commercial product in the early 1980s.

Early versions of ArcInfo were hindered by limited computer processing, storage and graphics capability.  Geospatial analysis is very much a visual discipline – you’re making maps, after all.  Early desktop hardware simply didn’t have the capability and capacity to bring the full visual mapping experience to the user.  Up through the mid 1990s only expensive Unix workstations could handle that level of processing.  This all changed around 1995 when desktop computing power started increasing exponentially with each new processor design while at the same time hardware prices dropped like a brick.  Almost overnight inexpensive desktop computers appeared that could easily handle the processing and graphics demands a software package like ArcInfo placed on them.  I was working as a GIS program manager for the US Army when this hardware revolution hit the field and watched as, in less than two years, inexpensive desktop PCs caught up with and then quickly surpassed the processing power of the Unix-based Sun, Silicon Graphics and HP systems we had been relying on.  What also helped was Microsoft’s release of Windows NT at about the same time.  Finally we had a serious network-ready enterprise operating system running on high capacity hardware that didn’t make our budget guys weep every time we said we needed to do an upgrade.

ArcInfo is the flagship product of the ESRI line and is extremely powerful software.  But in the 1980s ESRI realized that not everyone needed the processing power of ArcInfo (nor could they afford the nausea-inducing cost of an ArcInfo software license).  ESRI introduced a lightweight version of ArcInfo that included most of the visualization capability of the high end package but left out the heavyweight analysis and data development functionality.  They named it ArcView.  It was priced right – something small organizations and even individuals serious about GIS could afford (if I remember correctly the GSA schedule price for a single ArcView license ran around $600 in 2000).  The vast majority of today’s GIS professionals cut their teeth on ArcView.

But ESRI’s real contribution to the GIS profession is the development of data types that both support complex spatial analysis and can be shared across different software platforms.  It is Dangermond’s vision that GIS-based mapping and analysis solutions should not be stovepipes, but shared resources.  This drove ESRI to develop the concept of the geodatabase.  A geodatabase is a collection of data in a standard relational database management system (RDBMS) like Oracle or SQL Server, but with spatial values (location in x, y and z coordinates) assigned to the data.  This means that GIS software can leverage the spatial values to relate the data in a location context, and other RDBMS-based software systems can easily share their information with the geodatabase.  The geodatabase only needs to store GIS-unique features and can pull and run analysis against associated data in another database.

ESRI also developed a version of the geodatabase that does not require a high-powered relational database management system as its foundation.  About a decade ago ESRI introduced the file-based geodatabase, designed for use by small organizations or groups.  The file geodatabase is simple to create, yet it is a powerful and extremely flexible data format that brings most of the power of the relational database and complex data analysis to the desktop machine and the individual user.
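
To give a feel for how lightweight this is in practice, here’s roughly what creating and populating a file geodatabase looks like using ESRI’s arcpy Python package (arcpy ships with ArcGIS, so a licensed install is assumed; the paths, names and field below are placeholders):

```python
import arcpy

# Create a file geodatabase and an empty point feature class inside it
arcpy.management.CreateFileGDB(r'C:\data', 'demo.gdb')
arcpy.management.CreateFeatureclass(
    r'C:\data\demo.gdb', 'wells', 'POINT',
    spatial_reference=arcpy.SpatialReference(4326))    # WGS84

# An ordinary attribute stored right alongside the geometry
arcpy.management.AddField(r'C:\data\demo.gdb\wells', 'DEPTH_FT', 'DOUBLE')

# Insert one feature: a coordinate pair plus its attribute value
with arcpy.da.InsertCursor(r'C:\data\demo.gdb\wells',
                           ['SHAPE@XY', 'DEPTH_FT']) as cursor:
    cursor.insertRow([(-83.55, 41.65), 312.0])
```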

But what does the future hold?  ESRI realized long ago that the Internet was the map content delivery vehicle of the future.  Paper maps were headed to obsolescence and what Jack Dangermond describes as the ‘rich web map’ would quickly become the geospatial data visualization and analysis tool of the future.  He’s right, but only very recently has web technology started to catch up with his vision.

For the better part of a decade it was possible to hire professional web developers to create some very nice web mapping applications built on ESRI’s early web technology called ArcIMS.  The problem was that those applications were difficult to develop, difficult to maintain, and required a lot of heavyweight back-end web and database server technology.  Only large enterprises and governments could support the hardware, software, development and maintenance costs.  ESRI’s web solutions were very much limited by the immature web development technologies available at the time.  It is ESRI’s vision that even the average geospatial professional working for a small business or local government should be able to develop, launch and maintain high quality web maps that bring value to the organization they support.  ESRI started laying the groundwork for this vision with their ArcGIS 9 series of software releases and the development of things like ArcGIS Server and the concept of map services.  Two years ago they released ArcGIS 10, which brought a lot of maturity to the concept of integrated and streamlined web mapping using the Microsoft Silverlight and Adobe Flex web development environments, and launched ArcGIS Online with its first look at ‘cloud services’ for hosting GIS data, services and web maps.

At its recent worldwide user conference ESRI announced the pending release of ArcGIS 10.1, with better integrated and streamlined web development tools.  But ESRI also announced two new developments that are generating a lot of interest.  The first is that ESRI has partnered with Amazon.com to host robust, enterprise-level cloud services for GIS web mapping, data hosting and application development.  The idea is that an enterprise purchases an ArcGIS Server software license, passes that license over to Amazon, and Amazon stands up and maintains the necessary database and web development environment for the enterprise.  This is a huge development because it can free the GIS group supporting the enterprise from the often onerous and restrictive shackles placed on it by the local IT department.

The other announcement was the pending release of the ArcGIS Online Organizational Account program.  The Organizational Account program appears to be targeted at smaller enterprises and groups that don’t have the money or the need to purchase full-up cloud services like those offered through Amazon.  Under the Organizational Account concept an organization will be able to purchase data and web hosting services from ESRI on a subscription basis.  It is still a ‘cloud’ model, but on a smaller, more tailorable scale that should allow small organizations to enjoy most of the capabilities of a full-up ArcGIS Server implementation.

The last good thing I need to discuss is another little-known program released this year – ArcGIS for home or personal use.  ESRI’s software licensing fees have escalated to the point that the individual geospatial professional simply can’t afford a copy to keep his or her skills sharp.  I noted above that the GSA price for an ArcView license used to run about $600 – a bearable cost if you were serious about GIS.  However, the cost for an ArcView license now hovers around $1,600, far too much for even the serious home user.  This year ESRI announced the ArcGIS for Home Use program.  Anyone can purchase a 1-year license of ArcView for $100, a very reasonable price.  Not only does this $100 include on-line software training and support, but you also get a very extensive suite of add-on modules like 3D Analyst, Spatial Analyst and Geostatistical Analyst.  The total value of the software you get for your $100 subscription comes to over $10,000.  One hell of a deal.  Of course there are restrictions attached.  The intent of the home use program is just that – you can only use it at home, and only for personal development/training purposes or non-profit use.  Still, like I said, it’s one hell of a deal.

__________________________________________________________

Now, it’s not all rainbows and unicorns when it comes to ArcGIS and ESRI’s position in the GIS world.  All this GIS goodness is of little use unless it’s leveraged in an environment with clearly defined professional standards.  Nor can you allow a professional discipline to be defined by a software application or be inexorably joined to a piece of software.  This is where ESRI has failed the geospatial community, and they have failed in ways they can’t even visualize from where they sit.

Here’s the reality: geospatial engineering is the discipline; the term geographic information systems – GIS – merely describes the tools geospatial professionals use to do their job.  Where ESRI has failed is in using its industry position and influence to help clearly delineate the difference between the two.  As a result, far too many engineering professionals view geospatial professionals as little more than button-pushing software monkeys, one step up from data entry clerks.

Part of the culture Jack Dangermond has fostered and promoted through ESRI is the idea that GIS is for everyone and nobody owns it.  What he is effectively saying is that GIS itself is the discipline – that the tools and the software define the field, not the other way around.

While community ownership is a noble goal, ESRI’s dominance of the field gives the lie to that very philosophy.  Effectively, ESRI ‘owns’ GIS; it is by far the world’s largest GIS software developer.  It has either developed or successfully implemented most of the recognized spatial analysis processes in use today.  Its data management features have driven the development of most of the spatial data standards in use today.  The vast majority of geospatial professionals worldwide learned their trade using ArcGIS.

What is lacking, however, is a clear and recognized definition of just what a geospatial professional is.  Dangermond is correct when he claims it’s not his role to define what a geospatial professional should be – that is the job of the geospatial field and industry as a whole.  But Dangermond has been the biggest catalyst in the geospatial world for the last 30 years.  He and the resources he commands through ESRI have been in the best position to cajole and coerce the private sector, academia and the government to establish the roles, practices and responsibilities that define Geospatial Engineering as a formal discipline.  He should have been the single biggest champion of the concept of Geospatial Engineering as a professional discipline.  Instead he’s been pretty much silent on the whole issue.

It is only in the last few years that the US Department of Labor developed a formal competency model for GIS (GIS, not Geospatial Engineering), and the GIS Professional certification program is just starting to find its footing (after a disastrous grandfathering period that allowed perhaps hundreds of clearly unqualified individuals to get a GISP certificate and do damage to the reputation of the geospatial profession that may take years to overcome).  Great, but all this should have happened 20 years ago.

What this means is that Geospatial Engineering is not respected as a professional discipline.  I can tell you from long personal experience that geospatial professionals are looked down upon by other disciplines such as civil engineering and surveying, in large part because there are no testable and enforced standards that define us as a ‘profession’.  Guess what – they are right!

_________________________________________________________

Many readers are probably asking themselves “Huh?  What’s he getting at here?”  I guess I’d ask the same question myself if I didn’t understand the background issues.

I’ve been a topographer and geospatial engineer for over 30 years.  A few months back I laid out my initial arguments in a post titled In Praise of the Old Topographer.  In that post I made the argument that Geospatial Engineering is just a logical continuation of the older and much-respected profession of Topographer.  I also outlined my argument that geographic information systems, including ArcGIS, are merely the tools the Geospatial Engineer uses to do his or her job.

With this post my goal was to identify one of the main culprits keeping Geospatial Engineering from fully maturing into a recognized profession, a profession with its own standards, roles and responsibilities.

ArcGIS is that culprit.  On the one hand we have extraordinarily capable software that is almost single-handedly responsible for bringing the discipline into the computer age and is poised to bring it fully into the age of the World Wide Web.  On the other hand, ArcGIS and its parent company ESRI are almost single-handedly responsible for holding the discipline back and keeping it from taking its rightful place as a profession on par with other engineering disciplines.

For these reasons ArcGIS is the software I hate to love.

Ohio Is Such a Mess

“On the road above the Bell Company’s dock, Pennsylvania Route 68 invisibly changes to Ohio Route 38, and trees half hide some signs by the roadside.  The place could hardly be more anonymous.  Even someone familiar with the historical significance of this particular spot, who has traveled several thousand miles to find it, and whose eyes are flickering wildly from the narrow blacktop to the grassy verge between the road and river, can drive a couple of hundred yards past it before hitting the brakes.

The language of the signs is equally undemonstrative.  A stone marker carries a plaque headed “The Point of Beginning” that reads “1112 feet south of this spot was the point of beginning for surveying the public lands of the United States.  There on September 30th, 1785, Thomas Hutchins, first Geographer of the United States, began the Geographer’s Line of the Seven Ranges.”

There is nothing to suggest that it was here that the United States began to take physical shape, nothing to indicate that from here a grid was laid out across the land that would stretch west to the Pacific Ocean, and north to Canada, and south to the Mexican border, and would cover more than three million square miles, and would create a structure of land ownership unique in history…”

 – Andro Linklater, “Measuring America” 


In his wonderful book Measuring America, author Andro Linklater explains in detail just how it is that the concept of property ownership, and in particular the ownership of land, is the cornerstone of the American republic.  America was founded on the concept of property rights, and there is no greater realization of that concept than the idea that the common man can buy, hold and own land and that he, his family and his descendants will prosper and profit from the ownership and improvement of land.  The land does not belong to a government or a sovereign, but to the people.  It was a radical concept in 1776 and it is still very much a unique concept in the world today.

At the end of the Revolutionary War the weak federal government was cash poor but increasingly land rich.  Under the Articles of Confederation the federal government had no authority to raise revenue through taxation – that power was still retained by the individual states.  But the states were defaulting on their obligations to provide funding for the federal government.  The federal Army had not been paid for months and was on the brink of mutiny.  We had no navy to speak of.  Revolutionary War veterans were holding IOUs from the Continental Congress that were about to come due and our overseas creditors were demanding payment.  In desperation the federal government turned to the only asset it had available – land.

The Treaty of Paris that ended the Revolutionary War gave the new American nation control of a large tract of land west of the Ohio River in what is today southeastern Ohio.  This was really the only tangible asset the federal government owned that was not already claimed by one of the 13 states.  The Congress of the Confederation hit on the idea of land sales as a way to support the struggling federal government.  The idea was simple – divide up the land and sell it for a dollar an acre.  Cash only, no credit!

But how to divide it?  This new nation needed a land measurement and inventory system that was logical, easy to implement and resulted in land parcels that could be easily and quickly sold.  The resulting system, codified in the Land Ordinance of 1785, gave us what we know today as the township and range land survey system.  Conceptually it was simple – divide the land into townships six miles square, subdivide each township into 36 sections of one square mile each, and subdivide those into quarter sections.  The initial unit of sale was a full section of 640 acres.
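
Part of the system’s genius is that every subdivision is a simple halving or quartering, so acreage can be worked out in your head.  A toy sketch (the description parser is hypothetical and far simpler than a real legal description):

```python
SECTION_ACRES = 640   # one section = one square mile

def aliquot_acres(description):
    """Acreage of an aliquot part like 'NW1/4' or 'NW1/4 SE1/4'.
    Each quarter divides by 4, each half by 2."""
    acres = float(SECTION_ACRES)
    for part in description.split():
        acres /= 4 if part.endswith('1/4') else 2
    return acres

print(aliquot_acres('NW1/4'))         # 160.0 - a quarter section
print(aliquot_acres('NW1/4 SE1/4'))   # 40.0  - a 'quarter-quarter'
```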

But where to start?  The Congress of the Confederation set up a committee to study the issue and appointed Thomas Hutchins, a noted military engineer and surveyor, as Geographer of the United States.  It was decided to start the land survey at the point where Pennsylvania’s northwestern boundary intersects the Ohio River.  This point became the Point of Beginning for all public land surveys in the United States.

So, on a blustery day in late September, 1785, Thomas Hutchins and his survey party walked down to the banks of the Ohio River, drove a stake in the ground, set their survey instruments up and began to lay out what became known as the Seven Ranges region of Ohio.

From this Point of Beginning Thomas Hutchins set in place the land survey system that would ultimately encompass 75% of the land mass of the United States, clearly establish and define private land ownership and set the stage for the explosive westward expansion of the US in the 19th century.  On September 30th, 1785, Hutchins literally drove the stake that established the geographic fabric upon which the United States was built.

Ohio was to be the proving ground for the township and range survey system.  As with a lot of first tries, problems cropped up, adjustments were made and shortcuts were taken.  Part of the problem stemmed from the fact that much of the land in what we today call Ohio was subject to prior claim.  Large areas of northern Ohio were ceded to Native Americans under various treaties.  Connecticut claimed a large region stretching from present day Sandusky, Ohio east to the Pennsylvania border.  Virginia claimed a large tract in the south to use to compensate her veterans.  Other bits and pieces here and there were set aside.  Ohio was a patchwork quilt of land claims, set-asides, treaty lands and private holdings.
Ohio Land Claims – 1800s

But very quickly another series of problems popped up.  Congress was pressured by speculators to sell large chunks of land.  Congress saw this as a way to generate quick cash – sell land at a slight discount for immediate payment and let the speculators carry the cost of the land surveys.  The land speculators saw it as a road to riches – if they could sell fast.  But before any land could be sold it had to be surveyed and the surveys registered.  That meant the surveys needed to be done fast.  Accuracy be damned!

In the 18th century anyone with rudimentary math skills who could afford a surveyor’s compass and chain could call himself a surveyor, and many did.  Since surveyors at the time were paid by the mile, the faster they worked the more they got paid.  This meant the surveys were sloppy, and niceties like calculating the local difference between true north and magnetic north were either not done as often as required or simply not done at all.
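
The correction those surveyors skipped is, ironically, trivial arithmetic once you know the local declination.  A minimal sketch, using the modern convention that east declination is positive (the declination value here is invented):

```python
def true_bearing(magnetic_deg, declination_deg):
    """Convert a magnetic bearing to a true bearing.
    East declination positive, west negative."""
    return (magnetic_deg + declination_deg) % 360.0

# Running a 'north' line by compass where declination is 4.5 degrees west
# actually sends the line off at 355.5 degrees true:
print(true_bearing(0.0, -4.5))
```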

As a result, a lot of Ohio’s township and range section lines take off at odd angles and don’t quite form square parcels.  Eventually the errors accumulated and corrections had to be made.  Often the fix was the simple expedient of offsetting a north-south range line at the start of the next township line.  Since roads in Ohio tended to follow the township and range section boundaries, this led to the quirky (and often dangerous) tendency of country roads ending at a T-intersection for no apparent reason, then picking up again about 100 feet east or west of the end point.  These little jogs are a modern reflection of the corrections the surveyors were forced to build into their work over 200 years ago.

Other times the errors were so extreme that there was really no way to correct them and the government was just forced to incorporate the errors into the public record as-is:

The intersection of surveys for the Symmes Purchase, Virginia Military Reserve
and standard Public Land Survey areas.  There are about three different
interpretations of true north indicated by these township and range layouts!

So there you have it.  Ohio is a darned mess.  But a fascinating mess that leaves us the physical traces of the birth of the survey system that made westward expansion possible.

– Brian

The Military Lensatic Compass

I’m going to kick off our formal evaluation of compass accuracy with a design that has been in continuous use for over 60 years and has seen use by millions of individuals.  It is perhaps the most tested compass design in history, with documented use in jungle, desert, woodland and arctic environments around the world.  It is tried and true, and it is one of my favorite compass designs.

It is the US Army’s Model 1950 lensatic compass.

The M1950 compass is a design born of war.  Its predecessor, the M1938 lensatic compass, was developed and adopted just as WWII opened.  It was a good design that was easy to manufacture.  Equally important, the adoption of the M1938 compass allowed the Army to standardize land navigation training, simplify it and teach it to the millions of young men who were being drafted into the Army and Marines.  The experiences of war taught the Army a few things about compass design.  First, it proved that the lensatic compass design was a good one.  It was accurate, reliable and versatile.  With its compass card graduated in both degrees and mils it was usable by both infantry and artillery.  The military liked the basic design and stuck with it.

A Model 1938 (M1938) lensatic compass manufactured
by the Superior Magneto Company of New York.
Superior Magneto appears to have been the prime supplier of this
compass design during WWII.

However, wartime experience also highlighted some shortcomings in the M1938 design.  It was somewhat fragile.  While not a toy, the M1938 was lightly built – just two stamped aluminum cups fitted together to form the compass bowl and lid.  It had no mechanism to lift the compass card off of the pivot needle when the compass was closed.  A lot of compasses were damaged when the tip of the pivot needle gouged or cracked the pivot jewel through rough handling.  But perhaps the biggest shortcoming of the M1938 compass is that it had no dampening mechanism.  This meant that the compass card would swing wildly and take a good number of seconds to settle down to the point where the Soldier could get an accurate reading.  There were some versions of the M1938, those manufactured by the W. E. Gurley Company (a leading manufacturer of surveying instruments), that included a needle lift mechanism that, with practice, could be used to brake or slow the compass card oscillations.  However, the vast majority of compasses were manufactured by the Superior Magneto Corporation and did not include this needle lift mechanism.

After WWII the Army incorporated induction dampening into the M1938 design.  Induction dampening is a beautifully simple concept.  It takes advantage of the eddy currents generated between a swinging compass needle and a highly conductive but non-magnetic metal like copper.  When a magnetic needle (or bar) is placed inside a cup made of copper and the needle swings (oscillates), that movement induces slight eddy currents in the copper.  When the needle swings to the left the eddy currents pull it to the right.  When it swings to the right they pull it to the left.  The effect is self-canceling; as the needle oscillations decrease the eddy current strength decreases, and very quickly the needle settles down, aligned with magnetic north.  Simple, elegant and effective.

A late model M1938 compass with induction dampening.  This is
a transitional design, bridging the gap between the original M1938 compass
and the M1950.  The white-colored compass bowl is actually
a copper cup that forms part of the induction dampening system.
This compass was made in December 1950 by the
Marine Compass Company out of Pembroke, Massachusetts.
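
You can get a feel for why the copper cup works so well with a toy simulation: treat the needle as an oscillator whose eddy-current drag is proportional to its angular velocity.  All the constants below are illustrative, not measured from any real compass:

```python
import math

def settle_time(theta0_deg, damping, dt=0.005, t_max=100.0):
    """Integrate theta'' = -k*theta - c*theta' and report how long the
    needle takes to settle on north. k and thresholds are arbitrary."""
    k = 40.0                                  # restoring torque constant
    theta, omega, t = math.radians(theta0_deg), 0.0, 0.0
    while t < t_max:
        omega += (-k * theta - damping * omega) * dt   # eddy-current drag term
        theta += omega * dt
        t += dt
        if abs(theta) < 1e-3 and abs(omega) < 1e-3:
            return round(t, 1)                # seconds until settled
    return None                               # never settled within t_max

print('bare needle:', settle_time(30.0, damping=0.2))   # takes about a minute
print('copper cup :', settle_time(30.0, damping=8.0))   # settles in seconds
```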

However, by the late 1940s the Army decided it was time for a whole new design.  The Army took the best functional elements of the M1938 compass – the lensatic sighting design and the combined degree and mil scales on the compass card – added induction dampening, a needle lift device, a much larger sighting lens and a larger thumb loop and placed it all in a beefed-up waterproof aluminum housing.  The resulting compass was designated the M1950 Lensatic Compass.  It is a rugged, versatile device that has remained in use with the US military for over 60 years, pretty much as originally designed.

M1950 Lensatic Compass
This particular compass was manufactured in February 1953 by the
Marine Compass Company out of Pembroke, Massachusetts.   The
cloudy dial cover is the result of the plastic aging.  Remember, this compass
is almost 60 years old!
Same compass with the cover closed, showing the manufacturer and
manufacturing date.

Fast forward almost 60 years and the same compass design is still in use by the US military, and it doesn’t look like they have plans to switch designs any time soon.

M1950 Lensatic Compass manufactured in 2010 by the Cammenga Corporation
out of Michigan.  This is a military-issue compass that uses tritium inserts
for nighttime illumination.
The same compass with the cover closed.  At the time of this writing
Cammenga has been the sole supplier of lensatic compasses to
the US military for over 10 years.

Since 1950 this compass has been produced by a number of manufacturers, including the Marine Compass Company, Jay-Bee, Union Instrument, Cammenga and even Lionel (yes, the train people!).  However, it seems that the single biggest manufacturer of M1950 compasses was Stocker & Yale out of Massachusetts.  I don’t have any specific production numbers for these compass manufacturers so my claim is based solely on personal observation.  Based on the compasses I was issued in the Army and what I see for sale on auction sites or in surplus stores it appears that Stocker & Yale had the highest production numbers.

Now, the US military doesn’t just turn to a manufacturer and say “Make it!”  Like all things military there are clearly defined specifications.  It doesn’t matter if you are building an aircraft carrier or a handheld compass, there must be clearly spelled out specifications!  So it is with the M1950 Lensatic Compass.  Today’s compasses are built and tested in accordance with the DOD military performance specification known as MIL-PRF-10436N (Performance Specification, Compass, Magnetic, Unmounted, Lensatic, Luminous, 5 Degree and 20 Mil Graduations, With Carrying Case).

MIL-PRF-10436N
The document that spells out the design, construction,
performance and testing requirements for the
lensatic compass

(I should note here that the current specification does not use the ‘M1950’ designation.  I’m not sure when the US military dropped the designation, but for our purposes we’ll continue to call it the M1950.  It’s the same compass.)

This performance specification matters because the M1950 is the only US-produced handheld compass I am aware of that is built to a specific performance specification and regularly evaluated against it by an outside agency.  If any test batch of compasses fails the evaluation, the devices never make it out of the factory.  The M1950 is a purpose-built device designed to meet a clearly laid out specification not just for accuracy but for shock resistance, water resistance, illumination, thermal shock, durability and service life.  Manufacturers of other compasses may have their own internal standards (and many are quite good), but the M1950 is the only handheld compass you can buy that is designed to meet demanding military standards and is rigorously tested by an independent agency to ensure it meets those standards.

So just how good is the M1950 compass in the real world?  Pretty damned good!  The 58-year-old example shown above, made by the Marine Compass Co., is still perfectly serviceable and would probably meet all of today’s performance specifications for accuracy and durability.  I have other examples in my collection that have clearly seen hard use, some with broken components or cracked dials, but they still provide reliable and accurate readings.  The M1950 compass is a device that is hard to kill.

I believe that the key to the M1950’s ruggedness is the fact that it is not a liquid-dampened design.  Liquid dampening (where the compass needle or card is suspended in fluid to reduce oscillation) is very effective but is a more fragile design than the induction dampening used in the M1950.  With the liquid-filled design the compass needle or card must be sealed inside a leak-proof capsule*.  The problem is, compass manufacturers have not yet figured out how to make a truly leak-proof capsule.  I have over 15 liquid-filled compasses in my personal collection.  About half have air bubbles inside the capsule, a sure indicator that the fluid is leaking.

How accurate is the M1950 compass?  Every M1950 I’ve used (and after a 23-year Army career I’ve used a lot of them) has at least met, and many exceeded, the performance specification for accuracy.  Now this is where I need to come clean on my evaluation of the M1950.  It is not the most precise handheld compass available.  This compass’s biggest design limitation is that the compass card is divided into only 5-degree increments – pretty coarse even for handheld use.

Compass card of the M1950 compass.
Note the inner degree ring (in red) laid out in 5 degree increments.  The outer
ring (in black) is set out in mils (6400 mils to a circle).

This means that the average user, the common Soldier, can only discern and measure to half of that increment – 2.5 degrees.  Experienced users – mostly infantry and artillery Soldiers who use a compass regularly – can frequently get accurate readings to between 1.5 and 2.0 degrees.  But to be realistic, 2.5 degrees is about as good as anyone can expect to get with this compass card layout.  The M1950 compass card design is a compromise.  The military needed to include a mils scale for use by the Field Artillery.  Mils offer a finer division of the circle (a mil is 1/6400th of a circle), allowing for more precise azimuth determination – very important when you are calling in artillery strikes on distant targets.  To accommodate the mils scale, it had to be printed at the outer edge of the compass card, leaving less space for the degrees scale.  The result is a coarser, less precise degree scale.

The military performance specification states that the compass must be accurate to within 40 mils.

“4.4.1.8 Magnetic performance and compass error.  The compass shall be placed in a horizontal position on a fixed point and by means of the sighting mechanism, the compass shall be sighted on three targets of known magnetic azimuths approximately 120 degrees apart.  With no remedial action by the operator, before, at, or after, a reading shall be taken at each target.  The difference between the known azimuths and readings taken is the compass error.  An error greater than 40 mils or failure of the compass to function correctly shall constitute failure of this test.”

Since one degree equals about 17.8 mils, 40 mils works out to roughly 2.25 degrees.  Let’s round up and call it 2.5.
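For the arithmetic-inclined, here’s a quick sanity check of that conversion, plus a sketch of the spec’s three-target error test, in Python.  The sample target azimuths and readings are made up purely for illustration:

```python
MILS_PER_CIRCLE = 6400.0
DEGREES_PER_CIRCLE = 360.0
MILS_PER_DEGREE = MILS_PER_CIRCLE / DEGREES_PER_CIRCLE  # ~17.78

def mils_to_degrees(mils: float) -> float:
    return mils / MILS_PER_DEGREE

def compass_error_mils(known: float, reading: float) -> float:
    """Smallest angular difference between two azimuths, in mils."""
    diff = abs(known - reading) % MILS_PER_CIRCLE
    return min(diff, MILS_PER_CIRCLE - diff)  # handle wraparound at north

print(mils_to_degrees(40))  # 2.25 -- the 40 mil limit, in degrees

# Hypothetical three-target check per paragraph 4.4.1.8:
# (known azimuth, reading) pairs in mils, roughly 120 degrees apart.
targets = [(800, 830), (2930, 2900), (5070, 5095)]
worst = max(compass_error_mils(k, r) for k, r in targets)
print(worst <= 40)  # True -- this imaginary compass passes the test
```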

I have tested my 2010 production Cammenga compass at a known azimuth station and found it to be accurate to just over 2 degrees when used in the handheld mode and sighting on targets up to 150 feet away.  This compass very easily meets the performance specification.

Before I wrap up this blog post I need to add that the M1950 compass was merely one component of a land navigation system that the Army developed and adopted at the end of WWII.  Along with the M1950 compass came dramatic changes in how the Army mapped the world, developing standardized maps with overprinted grids (the Military Grid Reference System) and plotting tools.  It was all designed to simplify land navigation for the common Soldier, and it was so successful that the methodology is still in use today.

The US Army’s standardized land navigation ‘system’ included the M1950 compass,
standardized topographic maps, plotting tools and training materials.  It was an
extraordinarily successful program that is still used today.

Let’s wrap this up.  Here is my bottom line – I consider the M1950 compass to be the best general purpose handheld compass available.  It is a proven design that is built and tested to exacting standards.  These compasses are readily available new or used to civilians and are one of the best examples of trickle-down military technology I’ve seen.  If you spend any time in the outdoors you need a compass.  You might as well get the best available.  Get an M1950 Lensatic Compass.

Thanks!

– Brian

*One compass manufacturer, K&R out of Germany, claims to make a leak-proof liquid capsule, but I don’t think they have been on the market long enough to have proven the claim.

The US National Map

Earlier this month the US Geological Survey (USGS) released their latest version of The National Map Viewer.
US National Map View of Maumee, Ohio
The same view of Maumee, Ohio with the aerial image
background turned on

The US National Map is not a map per se.  You can’t ring up the USGS and say “Send me a copy of the National Map.”  It doesn’t exist as a single product.  The US National Map is a collection of digital geographic and geospatial data that, when brought together, forms the foundational map of the United States.  Here’s how the USGS describes it:

“As one of the cornerstones of the U.S. Geological Survey’s (USGS) National Geospatial Program, The National Map is a collaborative effort among the USGS and other Federal, State, and local partners to improve and deliver topographic information for the Nation. It has many uses ranging from recreation to scientific analysis to emergency response. The National Map is easily accessible for display on the Web, as products and services, and as downloadable data. The geographic information available from The National Map includes orthoimagery (aerial photographs), elevation, geographic names, hydrography, boundaries, transportation, structures, and land cover. Other types of geographic information can be added within the viewer or brought in with The National Map data into a Geographic Information System to create specific types of maps or map views. The National Map is a significant contribution to the National Spatial Data Infrastructure (NSDI) and currently is being transformed to better serve the geospatial community by providing high quality, integrated geospatial data and improved products and services including new generation digital topographic maps.”


OK, like I said, it’s a collection of digital geographic and geospatial data that forms the foundational map of the US.  Geeze, I think government bureaucrats get paid by the word.

Here is the USGS’s introduction to the National Map program and the National Map Viewer:

The National Map Viewer is the USGS’s on-line portal to all the data that makes up the National Map.

The Viewer is pretty good (if you are at all interested, it is built on ESRI’s ArcGIS Server technology) and offers some neat functionality.  It will provide location information in a number of formats, including US National Grid coordinates; it has a pretty robust reverse geocoding feature (click on a building on the map and the map returns the street address for that location); and it will provide spot elevations from the National Elevation Dataset.  You can do area and distance measurements, add text and simple graphics, and even add data from external sources like a Google Earth KML file or a web mapping service.  You can also bring up indexes for the USGS’s standard map products, like the US Topo series of maps, and link to them for download as GeoPDF files.  For advanced users the Viewer offers some pretty good search and query builder functionality, so you can find specific data that is embedded in the data layers.
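As a simple illustration of the external data feature, here’s a minimal KML placemark of the sort the Viewer (or Google Earth) can load, written out with a short Python script.  The Maumee coordinates are approximate and the file name is just an example:

```python
# Minimal sketch: write a one-placemark KML file for loading into
# the National Map Viewer or Google Earth.  Coordinates approximate.
KML = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Maumee, Ohio</name>
    <Point>
      <!-- KML coordinate order is longitude,latitude -->
      <coordinates>-83.654,41.563</coordinates>
    </Point>
  </Placemark>
</kml>
"""

with open("maumee.kml", "w") as f:
    f.write(KML)
```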

There are some shortcomings, however.  The print function is essentially useless and is perhaps THE major drawback of this Viewer.  About all it does is grab a screenshot of your current view and dump it to a PDF file.  The USGS needs to wake up and realize that people still want quality paper maps, and with today’s technology it should be easy to print a fully detailed paper map with things like a grid, scale indicator, geographic extents, legend, etc.

The Viewer also exhibits a common issue found in web-based maps – map content naming conventions can be pretty obtuse and downright confusing.  While the Viewer does a pretty good job with the base data layer naming conventions, when you start using advanced features like the Query Builder you begin to interact directly with the database field names.  For example, if I’m building a query to identify all the wetlands in my county I’m presented with a list of ‘Columns’ (which are the database field names).  Those column names are confusing and don’t mean anything to most humans.  We get to pick from selections named ‘ATTRIBUTE’ or ‘OBJECTID’ or ‘SHAPE_Area’.  There is an easy solution to this – the GIS professional building the map can establish what are called ‘field aliases’ – a human-friendly nickname for each of the information fields.  ATTRIBUTE can be displayed as ‘Wetland Attribute’, OBJECTID can be displayed as ‘Wetland ID’ and SHAPE_Area can be displayed as ‘Wetland Area’.  This naming convention issue usually reflects the fact that GIS professionals with little cartography experience compiled the data for use in the Viewer.  (If I seem to be nit-picking here it is because I build maps for a living using this same technology.  I know these are issues that are easy to fix and should have been taken care of before the Viewer was opened up to the public.)
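To show how cheap the fix is, here’s a minimal sketch of the field alias idea in Python – just a lookup table applied to query results before they’re shown to the user.  The column names come from the wetlands example above; everything else is hypothetical:

```python
# Map raw database column names to human-friendly display aliases.
FIELD_ALIASES = {
    "ATTRIBUTE":  "Wetland Attribute",
    "OBJECTID":   "Wetland ID",
    "SHAPE_Area": "Wetland Area",
}

def prettify(record: dict) -> dict:
    """Swap raw column names for their aliases before display."""
    return {FIELD_ALIASES.get(k, k): v for k, v in record.items()}

# A hypothetical query result row:
row = {"OBJECTID": 1042, "ATTRIBUTE": "PEM1C", "SHAPE_Area": 5230.7}
print(prettify(row))
# {'Wetland ID': 1042, 'Wetland Attribute': 'PEM1C', 'Wetland Area': 5230.7}
```

In a real GIS stack the aliases live in the map document or service definition rather than in code, but the principle is the same: the user never has to see the raw field names.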

These shortcomings aside, the National Map Viewer is pretty darned good.  I’d say the USGS gets a good solid ‘B’ for this effort.  If they’d improve the damned printing issue I’d give them an ‘A’.

– Brian