Sunday, August 24, 2008

Quick review of Microsoft Photosynth

As regular readers know, I've been an Apple convert for over a year now, and unlike a number of my friends who run multiple operating systems on their Macs, I have had no compelling reason to run Windows, so I have been living a largely Windows-free existence. The two main things I miss by not running Windows are some of the cool 3D things Microsoft has been doing with Virtual Earth, and the impressive Photosynth. When Microsoft announced that Photosynth had moved from a closed demo with a few sample datasets to a version which lets you create your own "synths", I thought (being a keen photographer) that I needed to give it a try.

I went to the site using my Mac and got the following message:
Unfortunately, we're not cool enough to run on your OS yet. We really wish we had a version of Photosynth that worked cross platform, but for now it only runs on Windows. Trust us, as soon as we have a Mac version ready, it will be up and available on our site.
That was actually better than I thought - I'm pleased to see that they have a Mac version planned, and also that they have a bit of a sense of humor in their announcement :).

So anyway, I decided there was nothing for it but to dust off my old Toshiba Windows laptop and give the new Photosynth a try. When I first saw the original Photosynth videos, I was super impressed, but also a bit skeptical about some of the more extravagant claims of automatically hyperlinking all the photos on the Internet - it just seemed to me that there wouldn't be enough differentiated features in many photos for that to work. And that is somewhat borne out by this guide to creating good "synths", and by my experience so far using Photosynth - there are definitely techniques to follow to make sure that the process works well. Check the guide for full details, but in general it is good to take a lot of overlapping photos - more than you might think - with relatively small changes in angle or zoom, to ensure that all the photos are matched.

Photosynth screenshot

I created several synths today, starting with a couple of the inside of my loft, and then doing some exterior ones. You can check them out here (Windows only, as previously noted - and you have to install a plug-in, but it's worth it). Overall the process was very easy and I was impressed with the results. It took about 10-15 minutes to create and upload a small synth, and a little over an hour for the largest one. You have to upload the data and make it publicly available at the moment (it sounds as though there may be more flexibility in this regard in future).

In a few cases, it didn't match photos which I thought it would. In general the issues seem to arise when you change angle or zoom by too large an amount, and it seemed to have a little more trouble with non-geometric objects than with those with a regular shape. In a few cases I also found the navigation didn't quite work as I expected. In the models of the Wynkoop Brewing Company and Union Station, it built everything into one continuous model, but I seem to be able to navigate continuously around only one half of the model or the other (you can jump from one to the other by switching to the grid view and selecting a picture in the other half of the model). If anyone discovers a trick which enables them to navigate around the whole of either of these models in 3D view, let me know. I assume this would probably not be an issue if I had taken more pictures going around a couple of the building corners. I also tried an experiment: I built two smaller synths - one of the Brewing Company and one of Union Station down the street - as well as a larger one which incorporated all the photos in the two smaller ones, plus a number of additional connecting photos. Interestingly, some photos which matched in the smaller models did not match in the larger model (even though the photos they matched with previously were still there).

A cool user interface feature is the ability to display the point cloud generated by Photosynth by holding down the control key, and dragging the mouse to rotate. And another cool thing to try is using a scroll wheel to zoom in dynamically on the current image.

It's fun to be able to take pictures at very different detail levels - if you look around in the larger synth of my loft, you can find one of my favorite recipes and see what's on my computer screens. I think there's quite a bit of scope for doing cool creative things in synths - Paula appears in a few of the photos in my loft, and not in others with similar viewpoints, which gives an interesting effect, and I think you could have some fun with changing small things in between photos (but not so much that Photosynth can't match correctly). I think you could also add annotations to certain images; that is on my list of things to try too. I also plan to experiment with doing some which combine different lighting conditions, and would like to do some HDR photosynths using photos like the following - which will be a bit more work, but I think would be well worth the effort.

View from our rooftop deck

View from our rooftop deck

Coincidentally, I recently heard from Gigapan Systems that I have made it onto the list to get one of their beta panoramic imaging systems, which should be arriving shortly, so it will be interesting to compare the two different approaches to creating immersive image environments. I don't expect to compete with Stefan's impressive Ogle Sweden expedition, but hope to find time to do a few more cool synths and panoramas over the next few weeks.

Thursday, August 21, 2008

Earthscape on iPhone

Thanks to Jesse at Very Spatial for the tip that my friends Tom Churchill and gang at Earthscape have released their virtual globe product for the iPhone. They have developed a lot of very cool virtual globe technology and have been looking at various directions they could take it in (competing head on with Google Earth and Virtual Earth being a tough proposition!), and I think that the iPhone direction is a really promising one for them.



This first version is missing some obvious features such as search, but the slick navigation means that you miss them less than you might think. I imagine that search and other features will come along soon, and I know they had various other cool ideas, so I look forward to seeing more good things in future releases. One feature from the earlier prototypes I saw that is missing is changing the view by tilting the iPhone. I think this is probably a good decision - that approach had novelty value but was hard to control well, and I think the navigation controls they have now are very intuitive.

Overall it definitely has a high "cool factor" for showing off the graphics and touch screen capabilities of the iPhone, which I think is a big factor in driving application sales at this early stage of the iPhone application platform. Congrats to the team at Earthscape on getting the product out!

More on Google Local Search

As I recently discussed, we have decided to use Google Local Search rather than Google Geocoding (well, rather than a hybrid approach) for whereyougonnabe. That post got some interesting comments, including one from Pamela Fox of Google, who said that Local Search will soon be using Tele Atlas data (rather than NAVTEQ, according to one of the other comments, though she didn't say that explicitly), and the implication was that this is the same source data as the Google Geocoder uses. One would hope this means they are bringing the two services together, which would be a lot nicer for users. I also ran into Martin May from Brightkite yesterday at the TechStars investor day (which was excellent) and he mentioned that they are making the same move, which was interesting - it turned out we had been looking at a lot of common issues at the same time.

Anyway, I thought I would follow up with a little more discussion of some of the benefits we see in using Google Local Search, as well as some outstanding issues. One aspect that I really like is that you can give it a search point to provide context, and this is very useful for our application. We are especially leveraging this in our calendar synchronization. A typical (and recommended) way of using the system when you are traveling is to create a high level activity saying that, for example, you are in New York City for four days next week, and then subsequently enter more specific activities for those days. When you add an activity in your calendar, we look at the location string in the context of where we think you are on that date. So for example, if I just said I was having lunch at 1100 Broadway, it would locate me at 1100 Broadway in New York during those four days, or at 1100 Broadway in Denver if I was at home. And as I discussed in the previous post, the feature we like most about Local Search is that you can search for business names, and these searches take the same location context, which makes it very easy to specify many locations. Some examples that I have used successfully at home in Denver include "Union Station", "Vesta Dipping Grill" (or just "Vesta"), "Enspiria" (a company where I have an advisory role), and even "Dr Slota" (my dentist). It's very convenient to be able to use names like this rather than having to look up addresses.
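
To make the date-based context idea concrete, here is a minimal sketch (not our actual code - the function and data shapes are hypothetical) of how a geocoding context point can be chosen from high level activities, falling back to the user's home location:

```python
from datetime import date

HOME = (39.753, -105.001)  # approximate Denver location (illustrative)

def context_for(when, activities, home=HOME):
    """Return the (lat, lng) to use as geocoding context on a given date.

    activities: list of (start_date, end_date, (lat, lng)) tuples for
    high level activities like "in New York City for four days".
    """
    for start, end, point in activities:
        if start <= when <= end:
            return point
    return home

# A four-day New York trip: queries during it get a New York context,
# queries outside it get the Denver home context.
trip = [(date(2008, 8, 25), date(2008, 8, 28), (40.714, -74.006))]
```

With something like this, the same string "1100 Broadway" naturally resolves against whichever city the user is expected to be in on that date.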

So that's great, but there are still a few issues. One is that Local Search really doesn't work well for airports - if you enter a three-letter airport code, it is very hit or miss whether it finds it. I'm pretty sure this used to work a lot better, though I haven't tracked down specific tests to prove that. But we plan to do our own handling of this case (unless we see an improvement really soon), using geonames - which we already use for locating airports in our TripIt interface, though we haven't yet hooked that into our general geocoding.
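
The fallback we have in mind could look something like the following sketch (the function names are hypothetical, and the two-entry table stands in for a real geonames-backed lookup):

```python
AIRPORTS = {
    # tiny illustrative subset - the real lookup would query geonames
    "DEN": ("Denver International Airport", 39.862, -104.673),
    "LGA": ("LaGuardia Airport", 40.777, -73.872),
}

def geocode(query, local_search):
    """Resolve bare three-letter airport codes from our own table,
    otherwise defer to the local search call passed in."""
    code = query.strip().upper()
    if len(code) == 3 and code.isalpha() and code in AIRPORTS:
        name, lat, lng = AIRPORTS[code]
        return {"name": name, "lat": lat, "lng": lng, "source": "airports"}
    return local_search(query)
```

This keeps the airport handling out of the general geocoding path, so we can drop it if Local Search starts handling codes reliably.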

I reported over a year ago that I felt there were some issues with Google Local Search on the iPhone, so I thought it would be interesting to revisit some of the same problem areas I reported on then. These problems generally revolved around local search being too inclusive in the data it incorporates, which is good in terms of finding some sort of result, but the risk is that you get bad data back in some cases. Given the good results I had found in my initial testing this time round, I expected that I would probably see some improvement, so I was a little surprised to find that in general most of the same problems were still there. You can see these by trying the search terms in Google Maps (and in MapQuest for comparison).

My test cases were as follows, centered around my home at 1792 Wynkoop St, Denver:
  • King Soopers (a local supermarket chain): last time, in the top 10 on Google there were three entries with an incomplete address (just Denver, CO), all of which showed as being closer than the closest real King Soopers, and there was one (wrong) entry which said it was an "unverified listing". This time, there was just one incomplete address like this, so that was a definite improvement, but unfortunately it appeared at the top of the list, so the closest King Soopers was not returned as the "best guess" from the local search API. There were two other bogus-looking entries in the top 4, for a total of 3 apparently bad results in the top 10. On MapQuest, 10 legitimate addresses were returned, though in some cases they appear to have multiple addresses for the same store (e.g. entry points on two different streets) - but this is not a serious error, as you would still find a King Soopers. This was the same as last time. I tried typing "King Soopers Speer" (the name of the street where the store is located) and that returned the correct location as the top result.
  • Tattered Cover (a well known Denver book store with 3 locations): MapQuest returns just 3 results, all correct. Google Maps returns 9 entries in its initial list, of which three have incomplete addresses (duplicates of entries which do have complete addresses, but they show up at entirely different locations on the map), one is a store which closed two years ago, and one is a duplicate of a correct store. The old store's entry does say "removal requested", but it seems surprising that a store which closed two years ago is still listed.
  • Searches for Office Depot and Home Depot were more successful, with no obvious errors - hooray :) !! This was the same as last time.
  • Searching for grocery in Google Maps online last time returned Market 41, a nightclub which closed a year before, and four entries with incomplete addresses. This time there were only two which looked bad (including the King Soopers with an incomplete address we saw before). The same search on MapQuest had one incomplete address, the rest of the results looked reasonable. So a definite improvement in this case.
So in summary, Google showed a little improvement over last year but not a huge change - they still could use some more quality control to remove bad data entries. But as I've discussed, overall we've obtained very good results for our application needs, so we are looking forward to rolling out the new Local Search based capabilities shortly.

Improving software quality

I had missed this previously, but Glenn reported from the ESRI conference that the "#1 time saver" in ArcGIS 9.3 is the ability to report product bugs more efficiently :O. Don't get me wrong: all products have bugs, I'm all for being able to report them easily, and this is a useful feature. And I'm sure customers are pleased to hear about a focus on software quality. But for any software company to present this as the top productivity improvement in a major new release sends a pretty negative message to the marketplace about how much time its users spend entering bugs - it seemed like a very odd way to present it to me!

However, ESRI has not gone as far as Microsoft in its dedication to improving software quality. A friend of mine recently drew my attention to the program highlighted in the video below, which I highly recommend watching. I am thinking of instituting a similar program at Spatial Networking :) !!

Wednesday, August 13, 2008

Google Local Search better than Google Geocoding?

Google offers two (apparently) unrelated solutions for doing geocoding (converting a text string into a latitude-longitude and a structured address): the Google Geocoding API (which is part of the Google Maps API) and the Google Local Search API (which has JavaScript and non-JavaScript variants). This post discusses our experiences with both and the results of recent testing we have been doing, which has led us to a decision to switch from our current hybrid approach to using only local search.
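
For anyone who hasn't used them, the two request styles look roughly like this. This is just a sketch: the endpoint URLs and parameter names are from the documentation of that era as best I recall, so treat them as assumptions and check the current docs before relying on them.

```python
from urllib.parse import urlencode

def geocoder_url(address, key="YOUR_KEY"):
    # Geocoding API: the /maps/geo endpoint, JSON output, Maps API key
    return "http://maps.google.com/maps/geo?" + urlencode(
        {"q": address, "output": "json", "key": key})

def local_search_url(query, context=None):
    # Local Search API (non-JavaScript/REST variant); sll optionally
    # supplies a "search point" to give the query location context
    params = {"v": "1.0", "q": query}
    if context is not None:
        params["sll"] = "%f,%f" % context
    return ("http://ajax.googleapis.com/ajax/services/search/local?"
            + urlencode(params))
```

The structural difference matters for our purposes: the geocoder takes only an address string, while local search takes any query plus an optional context point.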

Initially we were attracted to using Google Local Search for whereyougonnabe, as it lets you search for places of interest like "Wynkoop Brewing Company" or "Intergraph Huntsville", rather than having to know an address in all cases - whereas the geocoding API only works with addresses.

However, in our initial testing with local search we found a number of cases where it returned a location and address string successfully, but did not parse the address into its constituents correctly (for example, returning a blank city). For our application it was important to be able to separate out the city from the rest of the address. In our initial testing, the geocoding API seemed to do a better job of correctly parsing out the address. In addition, it returned a number indicating how accurate the geocoding was (for example, street level or city level). So we ended up implementing a rather ugly hybrid solution, where we first used local search to allow us to search for places of interest in addition to addresses, and then passed the address string which was returned to the geocoding API to try to structure it more consistently. In most cases this was transparent to the user, but in a number of cases we hit problems where the second call would not be able to find the address returned by the first call, or where the address would get mysteriously "moved". With so much to do and so little time :) we elected to go with this rather unsatisfactory solution to start with, but I have recently been revisiting this area and doing some more detailed testing of the two options.
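
The hybrid flow described above boils down to something like this sketch (function names are hypothetical; local_search and geocoder stand in for the two API calls):

```python
def hybrid_geocode(query, local_search, geocoder):
    """First resolve the query (possibly a point of interest) via local
    search, then re-run the returned address string through the
    geocoding API to get consistently structured address components."""
    first = local_search(query)
    if first is None:
        return None
    second = geocoder(first["address"])
    # The second call sometimes cannot find the address the first call
    # returned; fall back to the less-structured local search result.
    return second if second is not None else first
```

The fallback branch is exactly where the "mysteriously moved" addresses crept in, which is a big part of why the two-call approach felt so unsatisfactory.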

Before I get into more details of the testing, I should just comment on a couple of other things. One is that a key reason we are revisiting our geocoding in general is that we recently introduced the ability to import activities from external calendar systems, which means we need to call geocoding functionality from our server in a periodic background process, whereas before we just called it from the browser on each user's client machine. This is significant because the Google geocoding API restricts you to 15,000 calls per IP address per day, which is not an issue at all if you do all your geocoding on the client, but quickly becomes an issue if you need to do server based geocoding. Interestingly, Google Local Search has a different approach, with no specified transaction limits (which is not to say that they could not introduce some of course, but it's a much better starting point than the hard limit on the geocoding API).
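
If you do have to geocode server-side against a hard daily limit, you end up writing something like this little guard (an illustrative sketch, not our production code) so the background process can stop and queue work for the next day rather than getting error responses:

```python
class DailyQuota:
    """Track calls against a per-day limit (15,000 per IP per day was
    the documented Geocoding API limit at the time of writing)."""

    def __init__(self, limit=15000):
        self.limit = limit
        self.day = None
        self.count = 0

    def allow(self, today):
        if today != self.day:        # new day: reset the counter
            self.day, self.count = today, 0
        if self.count >= self.limit:
            return False             # over quota: defer until tomorrow
        self.count += 1
        return True
```

It's exactly this kind of bookkeeping (and the risk of hitting the cap as users grow) that makes the no-stated-limit Local Search approach more attractive for server based use.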

Secondly, a natural question to ask is whether we have looked at other solutions beyond these two, and the answer is yes, though not in huge detail. Most of the other solutions out there do not have the ability to handle points of interest in addition to addresses, which is a big issue for us. Microsoft Virtual Earth looks like the strongest competitor in this regard, but it seems that we would need to pay to use it, and we would need to talk to Microsoft to figure out how much, which we haven't done yet - and obviously if we can get a free solution which works well, we would prefer that. Several solutions suffer from a lack of global coverage and/or even lower transaction limits than Google. We are using the open source geonames database for an increasing number of things, which I'll talk about more in a future post, but it won't do address level geocoding, and its points of interest are more limited than Google's (currently at least).

Anyway, on to the main point of the post :) ! I tested quite a variety of addresses and points of interest on both services. Some of these were fairly random; a number were addresses which specific users had reported to us as causing problems in their use of whereyougonnabe. In almost all the specific problem cases, we found that the issue was with the geocoding API rather than local search. The following output shows a sample of our test cases (mainly those where one or other had some sort of problem):

I=Input, GC=geocoding API result, LS=local search API result, C=comments

I: 1792 Wynkoop St, Denver
GC: 1792 Wynkoop St, Denver, CO 80202, USA
LS: 1792 Wynkoop St, Denver, CO 80202
C: LS did not include country in the address string returned, but it was included separately in the country field. The postcode field was not set in LS, even though it appeared in the address string. Other fields including the city and state (called "region") were broken out correctly. This was typical with other US addresses - LS did not set the zip code / postal code, but otherwise generally broke out the address components correctly.
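
As an aside on that missing postcode: since the ZIP appears in the address string but not in the structured fields, a crude workaround is to pull it back out with a regex. This is just a sketch (US ZIP codes only, hypothetical helper name, not what we actually ship):

```python
import re

def zip_from(address):
    """Recover a US ZIP code from an address string for the cases where
    the structured postcode field comes back empty."""
    m = re.search(r"\b(\d{5})(?:-\d{4})?\b", address)
    return m.group(1) if m else None
```

Obviously this doesn't generalize to international postcodes (UK postcodes like "LE7 7" won't match), which is part of why a proper structured field from the API would be much nicer.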

I: 42 Latimer Road, Cropston
GC: 42 Latimer Rd, Cropston, Leicestershire, LE7 7, UK
LS: null
C: The main case where GC fared better was with relatively incomplete addresses, like this one with the country and nearest large town omitted (Cropston is a very small village)

I: 72 Latimer Road, Cropston, England
GC: 72 Latimer Rd, Cropston, Leicestershire, LE7 7, UK
LS: 72 Latimer Rd, Cropston, Leicester, UK
C: Both worked in this case, with slight variations. GC included the postal code (a low accuracy version) and LS did not. LS included the nearby larger town Leicester, which is technically part of the mailing address.

I: 8225 Sage Hill Rd, St Francisville, LA 70775
GC: null
LS: 8225 Sage Hill Rd, St Francisville, LA 70775
C: One of several examples from our users which didn't work in GC but did in LS.

I: Wynkoop Brewing Company Denver
GC: null
LS: 1634 18th St, Denver, CO
C: As expected, points of interest like this do not get found by GC

I: Kismet, NY
GC: Kismet Ct, Ridge, NY 11961, USA
LS: Kismet, Islip, NY
C: Another real example from a user, where GC returned an incorrect location about 45 miles away from the correct location, which was found by LS.

I: 1111 West Georgia Street, Vancouver, BC, Canada
GC: 1111 E Georgia St, Vancouver, BC, Canada
LS: 1111 W Georgia St, Vancouver, BC, Canada
C: Another real example from a user which GC gets wrong - strangely it switches the street from W Georgia St to E Georgia St, which moves the location about 2 miles from where it should be.

I: London E18
GC: London, Alser Straße 63a, 1080 Josefstadt, Wien, Austria
LS: London E18, UK
C: Another real user example. London E18 is a common way of denoting an area of London (the E18 is the high level portion of the postcode). GC gets it completely wrong, relocating the user to Austria, but LS gets it right.


So in summary, in our test cases we found many more addresses that could not be found or were incorrectly located by the Google Geocoding API, but were correctly located by Google Local Search, than vice versa. It is hard to draw firm conclusions without doing much larger scale tests, and it is possible that there is a bias in our problem cases, as our existing application may have tended to surface more problems from the Geocoding API than from Local Search (it is hard to tell whether this is the case or not). But nevertheless, based on these tests we feel much more comfortable going with Local Search rather than the Geocoding API, especially given its compelling benefits in locating points of interest by name rather than address. These point of interest searches can also take a search point, which is another very useful feature for us; I will save discussion of that for a future post.

Advantages of the geocoding API are that it returns an accuracy indicator and generally returns the zip/postal code, neither of which is true for Local Search. Neither of these was a critical issue for us, though we would really like to have the accuracy indicator in Local Search too. For our application we are also not too concerned about precisely how the addresses align with the base maps - one reason I have heard given in passing for Google having these two separate datasets is that they want a geocoding dataset which uses the same base data as Google Maps, to try to ensure that locations returned by geocoding align as well as possible with the maps. If this is a high priority for your application, you might want to test this aspect in more detail. It also appears that the transaction limits are more flexible for server based geocoding with Local Search.

So we'll be switching to an approach which uses only Google Local Search in an upcoming release of whereyougonnabe. I'll report back if anything happens to change our approach, and I'll also talk more about what we're doing with geonames relating to geocoding in a future post.