There has always been one issue that bugged me: why do I have to go through an intermediate format on disk when I want to import a GRASS raster layer into R? At the moment, when I use
readRAST6(), the raster layer is exported from GRASS into an intermediate format (I don’t recall which format it is) on the HDD, then this format is imported into R, and the intermediate file is deleted. Now – this works reliably and reasonably fast, but somehow I don’t like this intermediate file. So my idea is: why not use Rcpp to access the functions in GRASS which read the raster column-wise, and write a function in R which allows one to
- read the whole raster from the GRASS raster
- read single columns or column ranges from the GRASS raster
- read single cells from the GRASS raster
- read user-specified blocks from the GRASS raster
Vice versa, there is a C function in GRASS which writes columns to a raster – so it would be possible to
- write a whole R raster to a GRASS raster
- write single columns or column ranges to a GRASS raster
- write single cells to a GRASS raster
- write user-specified blocks to a GRASS raster
An example module for GRASS which reads a raster and writes it into a new raster can be found at http://svn.osgeo.org/grass/grass/trunk/doc/raster/r.example/main.c.
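To make the idea a little more concrete, here is a minimal sketch in plain R of what such a block-wise interface could look like from the user's side. The function names (`grassReadBlock()`, `grassWriteBlock()`) are purely my invention, not an existing API, and an in-memory matrix stands in for the GRASS backend:

```r
# Sketch of a hypothetical block-wise raster interface.  The function
# names (grassReadBlock, grassWriteBlock) are my invention, not an
# existing API; a plain matrix stands in for the GRASS raster backend.
grassReadBlock <- function(raster, rows = seq_len(nrow(raster)),
                           cols = seq_len(ncol(raster))) {
  # A user-specified block; whole rasters, single cells and
  # column ranges are all just special cases of this.
  raster[rows, cols, drop = FALSE]
}

grassWriteBlock <- function(raster, block, rows, cols) {
  # In the real thing this would call the GRASS C write functions
  # instead of indexing an in-memory matrix.
  raster[rows, cols] <- block
  raster
}

# Usage with a 4 x 5 "raster"
r        <- matrix(1:20, nrow = 4)
one_cell <- grassReadBlock(r, rows = 2, cols = 3)   # single cell
two_cols <- grassReadBlock(r, cols = 2:3)           # column range
r2       <- grassWriteBlock(r, matrix(0, 2, 2), rows = 1:2, cols = 1:2)
```

The point of the sketch is only that every read/write case in the lists above reduces to one block operation, which is what the C layer would have to provide.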
And now comes the intriguing part: there is the raster package, which uses a similar mechanism to avoid having to load a whole raster into R’s memory. If raster were linked to GRASS by using these functions, there would be a brilliant backend for working with rasters in R.
Now these are just ideas, but I am planning on following them up. Some things which need to be considered and thought through:
- To compile the modules for GRASS, it might be easiest to write the C code in GRASS so that it gets compiled with GRASS, possibly even becoming part of the binary distribution of GRASS. In this way, one would simply have to load the library in GRASS and call the function to read the raster, and it would make it possible for other programs to use it as well. (In my view, GRASS is missing a simple API for this kind of thing, but that is a different story.)
- One could put the C code into an R package and compile it from there, but this might be calling for trouble, as it would be very closely linked to GRASS and dependent on internal changes. So the option of writing a C library as part of GRASS which provides functions to read and write blocks of rasters as well as whole rasters might be the better solution.
- The wrapper around the C library would be relatively straightforward using Rcpp.
- The R part should be GRASS version agnostic, i.e. the same code should work independently of the GRASS version. By specifying the path to the GRASS installation, a specific library would be loaded and used, possibly even making it possible to switch between different GRASS versions.
- It might make sense to split this into two packages: a frontend which defines the functions to be used by the R user, and a backend which supplies the functionality to link these functions to the GRASS backend. This would be similar to the DBI package, which defines the database access functions, with separate backends which link these to different databases. This would enable a common interface to access spatial data in a GRASS database, a PostgreSQL database, a SpatiaLite database, a directory containing the raster layers in a specific format, …
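As a rough illustration of such a DBI-style split (all class and function names here are made up for the sketch): the frontend package would only define generics on a virtual connection class, and each backend – GRASS, PostgreSQL, a plain directory – would implement them for its own connection class. A trivial in-memory backend shows the pattern:

```r
# Sketch of a DBI-like frontend/backend split; all class and function
# names are hypothetical.
library(methods)

# Frontend: defines only the interface.
setClass("SpatialConnection", representation("VIRTUAL"))
setGeneric("readRasterBlock",
           function(conn, name, rows, cols) standardGeneric("readRasterBlock"))

# One possible backend -- a trivial in-memory one for illustration.
# A GRASS backend would implement the same generic against the GRASS
# library, a PostgreSQL backend against the database, and so on.
setClass("MemoryConnection",
         contains = "SpatialConnection",
         representation(data = "list"))
setMethod("readRasterBlock", "MemoryConnection",
          function(conn, name, rows, cols) {
            conn@data[[name]][rows, cols, drop = FALSE]
          })

# Usage
con <- new("MemoryConnection", data = list(elev = matrix(1:9, nrow = 3)))
blk <- readRasterBlock(con, "elev", rows = 1:2, cols = 2:3)
```

User code would only ever see the generic, so swapping the storage system behind a script would mean changing nothing but the connection object – exactly the property that makes DBI so useful.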
OK – so what are the next steps:
- Setting up a github repo where interested parties can contribute and comment: https://github.com/rkrug/grassRLink
- Getting input from the GRASS community and hearing what they think about this
- Getting a structure for the package(s) set up, so that a framework is available in which one can do the coding to satisfy the requirements
I don’t think this is something which can (or should!) be done in a rush, as this framework could possibly form a crucial backbone for spatial processing.
And: if this is there, one can do the same for vectors, spatio-temporal data, …
My feeling is that the time is ripe to give R an interface to the spatial GRASS database which can easily be extended to other spatial storage systems, in the same way that DBI does this for databases.
So: please give feedback and let me know what you think; if you have suggestions, or if you think this is not going to work, tell me.
Cheers and enjoy life.
Four days of sunshine, heat and R – isn’t that a dream? Well, I guess for some this would be a nightmare, but that depends on whether you like heat or not. And it was hot: at 21:00 it was still 35 degrees in the sun. So that aspect is covered, and we can move on to the non-controversial part, which is R.
We all know that R is great, and if you had forgotten, you were constantly reminded that it is. OK – several talks highlighted the shortcomings of and problems with R (speed, parallelization, inconsistent (or actually missing) naming conventions), but there was general agreement: R is great.
There were some unlucky ones who had used other statistics packages before (SAS comes to mind…), but fortunately I have to count myself among the lucky ones.
So how was this year’s useR in Albacete? Great. I enjoyed it very much (from here as well, a thank you to the organizers and the sponsors), and the talks were overall really interesting and inspiring. Nevertheless, I had the feeling that the talks at the last useR I attended (2011 in Warwick) were a little bit broader, but it was definitely worth attending and I learned a lot. The tutorials were again brilliant, and the one about Rcpp by Hadley Wickham (and Romain Francois, one of the two authors of Rcpp; the other is Dirk Eddelbuettel) was outstanding. The second one I attended, on spatial analysis in R, was given by Roger Bivand (one of the authors of the sp package, the core of nearly all spatial packages in R). Although not as hands-on as the one on Rcpp, it was extremely informative – although I have been using sp and spgrass6 for several years already, I learned many new and useful things, and have some ideas about the R–GRASS interface and how to get data from GRASS into R (see my post Read GRASS raster directly into R?).
The invited talks were, for me as a non-statistician, a little bit too mathematical, as most of them dealt with quite technical aspects of statistical (mostly Bayesian) analysis. The exceptions were the talks by Duncan Murdoch, one of the R core team members and THE Windows R core team member, who presented news in R 3.0.x and the way forward, and by Hadley Wickham (one of the “R Rock Stars”).
So what are my take home messages from this useR in Albacete?
- The Beatles are fantastic, and now we know why
- there are other implementations of the R language apart from GNU R, but they are not yet ready for use. They promise to be faster and more memory efficient than GNU R
- Bayes is everywhere, especially where you least expect him to be, and he is getting faster!
- brogramming is not a spelling error but a life style
- either use lowerCamelCase or underscore_separated_names (Hadley is watching you!) but Do.notMixandmatch
- I have to improve on my C++!!!!!!!!!!!!
And if there is only one you remember, remember this:
R is great!!!
Cheers and enjoy life (and R).
Long time no blog – I hope this will change.
Recently, I got a new computer. Well, this is not too much news, as this happens to nearly everybody (especially if the old one is about 6 years old). But I decided that I wanted to have a Mac – Retina – and as I got the opportunity, here it is: a MacBook Pro with Retina display.
I must say, very nice machine.
Now to the OS. There are many links on how to install Ubuntu on a Mac Retina (http://randomtutor.blogspot.co.uk/2013/02/installing-ubuntu-1304-on-retina.html is the one I used, but there are many others around) and I tried it.
Well – this is a story in itself, but after four rounds of re-partitioning the Mac, installing Ubuntu, not getting it to work, deleting the partitions, giving up, deleting the partitions, not managing to translate my workflow into OS X, repartitioning, …, I finally got Ubuntu to boot. Very nice indeed – but:
- The touch pad is unusable
- The screen is nothing compared to OS X
So I gave up. It was not easy – I mean, after all the NSA listening in on the internet and the backdoors in different OSes, Linux would be the choice. But I decided: if the NSA is listening at the nodes, it is irrelevant which OS I am using – they will listen anyway. So I decided to give OS X a try, and we will see if I am happy.
Now this was not as easy as I thought, as there are many pitfalls. As one (I) has (had) to learn, even if OS X is built on FreeBSD, it is definitely not Linux.
Now what were the problems:
- Understanding the differences between Linux and OS X (and there are many!)
- Understanding the similarities between Linux and OS X (and there are many!)
- installing certain programs
- where the heck is the package management????????
- paths, paths and more paths.
Before this gets too long, I will write several follow-up posts describing certain aspects of the migration from Ubuntu to OS X. These will include
- org mode
- whatever comes to mind or is suggested.
So – if I don’t follow up on my promise to blog about this migration: remind and push me. And let me know which aspects interest you the most.
It is a very interesting experience for me, and I think it is worth spreading the word.
So this is it for today.
Cheers and enjoy life
As I was talking recently about reproducible research, I have to post this.
A new paper by Eric Schulte, Dan Davison, Thomas Dye and Carsten Dominik. If you haven’t heard of them, you haven’t been on the org-mode mailing list. They could be called the main contributors to org-mode and to the part of org-mode called Babel, without taking credit away from the numerous other contributors.
The paper is called
A Multi-Language Computing Environment for Literate Programming and Reproducible Research
You can find it at http://www.jstatsoft.org/v46/i03, and it is open access.
Here is the abstract:
We present a new computing environment for authoring mixed natural and computer language documents. In this environment a single hierarchically-organized plain text source file may contain a variety of elements such as code in arbitrary programming languages, raw data, links to external resources, project management data, working notes, and text for publication. Code fragments may be executed in situ with graphical, numerical and textual output captured or linked in the file. Export to LaTeX, HTML, LaTeX beamer, DocBook and other formats permits working reports, presentations and manuscripts for publication to be generated from the file. In addition, functioning pure code files can be automatically extracted from the file. This environment is implemented as an extension to the Emacs text editor and provides a rich set of features for authoring both prose and code, as well as sophisticated project management capabilities.
Definitely worth reading, even though R only plays a small role in it, but the principles are important.
Cheers and enjoy life.
I just found these two gems about debugging in R on r-help today (here is the thread):
1) posted by Thomas Lumley:
traceback() gets you a stack trace at the last error
options(warn=2) makes warnings into errors
options(error=recover) starts the post-mortem debugger at any error, allowing you to inspect the stack interactively.
2) added by William Dunlap: combining the two, options(warn=2, error=recover)
will start that same debugger at each warning.
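The warn=2 tip is easy to try in a script: once warnings are promoted to errors, they abort execution and can be caught with tryCatch() like any other error. A small self-contained illustration (the function f() is just a made-up example):

```r
# Illustration of the warn = 2 tip (f() is just a made-up example).
f <- function() {
  warning("something looks off")
  "finished anyway"
}

# Default behaviour: the warning is emitted (muffled here) and the
# function still returns its value.
res1 <- withCallingHandlers(f(),
                            warning = function(w) invokeRestart("muffleWarning"))

# With warn = 2 the warning is converted into an error, so tryCatch()
# sees it as one -- and options(error = recover) would drop you into
# the interactive post-mortem debugger at this point.
old <- options(warn = 2)
res2 <- tryCatch(f(), error = function(e) "caught as error")
options(old)
```

After running this, res1 is the normal return value, while res2 comes from the error handler – which is exactly why warn=2 is so handy for hunting down warnings buried deep in a computation.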
I think these are very useful ideas to remember – thanks.
Cheers, and enjoy life.
But there was one thing which bothered me after the upgrade: auto-mounting of external drives was not working anymore. Under Natty I used
nautilus -n to start Nautilus in the background and enable auto-mounting. But in Oneiric, auto-mounting has been moved from Nautilus to the gnome-settings-daemon. So I asked on the fluxbox list and got the tip to try udisks, and after installing udisks-glue it worked out of the box. Both are in the Oneiric repo, so
sudo apt-get install udisks-glue
will do the job.
Cheers and enjoy life.