If you use a product that delivers something in one hour and someone tells you
“Hey you’re wasting time with it, this software can do that job in ten seconds!”
you’ll definitely try it.
Well, this situation can arise today in the GIS world, where executing complex algorithms that analyze large amounts of raster or vector data can take hours, or even days.
If it’s possible to develop multi-threaded implementations of those algorithms, then I bet that some GIS companies are already working on them. Take Manifold, for example: they claim that some of their software’s functionality executes in less time than most of the GIS products that other companies offer. Although this is true for some GIS functions that can be converted to multi-threaded code, porting them with the NVIDIA CUDA SDK or the ATI Stream SDK is only possible in some special cases (yet). These SDKs let the programmer implement algorithms that are parallelized across the hundreds of processors a GPU may offer, instead of the 16, 8, 4 or 2 cores that desktop CPUs can provide (in the short term).
ESRI and other GIS companies will not ignore these technologies, and they are most probably researching new ways of using them in their algorithms and products. Even though nobody has heard about it yet, I’d bet my Euros they are…
Therefore I predict that, within the next year or so, we will see performance improvements of two orders of magnitude for a limited set of geoprocessing functions. How far the improvement goes will depend entirely on the research community’s ability to parallelize the current (well-known) GIS algorithms.
Right now it’s possible to create an application in C, C++, .NET or Java that uses the CUDA C library to communicate with the GPU.
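To make this concrete, here is a minimal sketch of what such a program looks like in CUDA C. The operation is a hypothetical one of my own choosing (adding a constant offset to every cell of a raster band, e.g. a datum shift on an elevation grid); the kernel name and the numbers are illustrative, not taken from any GIS product:

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Hypothetical raster operation: add a constant offset to every cell.
// Each GPU thread processes exactly one cell.
__global__ void addOffset(float *cells, int n, float offset) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)              // guard: the grid may overshoot the cell count
        cells[i] += offset;
}

int main() {
    const int n = 1 << 20;                  // ~1 million raster cells
    const size_t bytes = n * sizeof(float);

    // Host-side raster band, initialized to zero for the example.
    float *h_cells = (float *)calloc(n, sizeof(float));

    // Copy the band to GPU memory.
    float *d_cells;
    cudaMalloc(&d_cells, bytes);
    cudaMemcpy(d_cells, h_cells, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover every cell.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    addOffset<<<blocks, threads>>>(d_cells, n, 10.0f);

    // Copy the result back and inspect one cell.
    cudaMemcpy(h_cells, d_cells, bytes, cudaMemcpyDeviceToHost);
    printf("cell 0 = %.1f\n", h_cells[0]);

    cudaFree(d_cells);
    free(h_cells);
    return 0;
}
```

The point of the sketch is the shape of the workflow, which is the same whatever the host language: copy data to the GPU, launch one thread per data element, copy the result back. That per-element mapping is exactly why only "embarrassingly parallel" GIS functions port easily today.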
[Figure: CPU architecture vs. GPU architecture — the GPU can have hundreds of processor units, with a small memory cache shared across small groups of those units.]
In short, I think this will be a huge step forward for the companies that invest in parallel GPU computing, especially if their products have functionality that takes a long time to execute. Don’t you agree?