I am especially keen to ensure that the map stays up to date and that everyone who lives in the area can contribute their knowledge and experience, enriching the resource. So the right way to go is crowdsourcing the data…
Over the last couple of years I have been following lots of mapping exercises and various ways of collecting data, and I am continually amazed and inspired by how quickly maps can be put together by people who have never met but who each have information that contributes to the story the map is telling. In the last few days a Google map has been created to track the riots in London. It is both fascinating and heartbreaking to follow developments as the map is populated – it is data by the people, for the people. It is so like what Paul and I are working on, and yet so distant from it.
The evening’s two presenters, Ben Lyons and Hoss Gifford, shared their enthusiasm and knowledge of data visualisation.
Ben presented the results of his recent practical experience using ScraperWiki at the BBC’s Hacks and Hackers day, a one-day project which turned hitherto impenetrable fire service data into a live-updated map and a Twitter feed.
Hoss presented a very engaging and inspiring insight into data visualisation, drawing on various sources, from his own experience of learning times tables to a variety of online visualisation tools.
Recently the multi-talented IRISS programmer Ben Lyons took part in the BBC’s ScraperWiki Hacks and Hackers day. The event focussed on getting hacks (journalists) and hackers (programmers) to work together on developing a tool which would bridge the two worlds. Ben and programmer Paul Miller formed a team with the BBC journalist Chris Sleight, who reports on the Central Scotland area (http://www.youtube.com/watch?v=TeH6Ca77coY).
After some initial digging the team found a way to use ScraperWiki to scrape limited data from the site (around 60 records at a time), and also discovered another 14,460 historical records, which allowed them to start building several outputs.
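A 60-records-per-request cap is a common scraping constraint, and it can be handled with a simple pagination loop. A minimal sketch in Python (the language most ScraperWiki scrapers were written in); the `fetch_page` callback is a stand-in for whatever actually queries the site, not the team’s real scraper:

```python
def paginate(fetch_page, page_size=60):
    """Collect every record from a paginated source.

    fetch_page(offset) should return up to page_size records starting at
    that offset; we keep requesting the next page until one comes back
    short, which signals the end of the data.
    """
    records, offset = [], 0
    while True:
        page = fetch_page(offset)
        records.extend(page)
        if len(page) < page_size:
            return records
        offset += page_size
```

The same loop covers both the live feed and a historical archive, provided `fetch_page` knows how to request each slice.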
Within one day the team managed to build a dynamic Protovis visualisation from within ScraperWiki (http://scraperwikiviews.com/run/firebug_simple_viz_2/), as well as extract some very useful statistics (the one they chose to work with on the day was the percentage of malicious calls per area). They also did some initial work on creating geo-mapping views, which is one of the IRISS data visualisation team’s focuses this year.
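Once the records are scraped, a per-area statistic like this is a small aggregation job. A minimal sketch, assuming each record carries an area name and a malicious-call flag (both field names are invented for illustration, not taken from the actual fire service data):

```python
from collections import Counter

def malicious_call_percentages(records):
    """Return the percentage of calls flagged malicious in each area.

    Each record is assumed to be a dict with an 'area' name and a
    boolean 'malicious' flag -- both hypothetical field names.
    """
    totals, malicious = Counter(), Counter()
    for record in records:
        totals[record["area"]] += 1
        if record["malicious"]:
            malicious[record["area"]] += 1
    return {area: 100.0 * malicious[area] / totals[area] for area in totals}
```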
For the team’s impressive efforts they were awarded second place on the day, and also won the ScraperWiki “best scraper” award.
With the launch of the D-viz toolkit, it’s time to describe a little of what’s happening under the hood (and, to an extent, why).
First, a note (or two) about our intended audience.
One early decision was to use SVG as our base format. As a resolution-independent format it has clear advantages (and it gives us the ability to make visualisations interactive in the future), though it is only supported in IE from the latest version, and we’ve received some indications that even that support is incomplete.
Data is split into two types: words and numbers.
Words are simply copied and pasted into a text box; we hope to allow the upload of MS Word documents at some point in the future.
‘Numbers’ can be uploaded as Excel spreadsheets. This is done using the excellent Drupal Sheetnode module, which both handles the upload of several spreadsheet formats and displays them in a familiar interface for users.
The dashboard is our term for the form which lets you choose your options. Visualisation options are presented depending on the type of data you uploaded; once you have chosen one, you are given supplementary options for that visualisation type.
Visualisation (aka the fun part).
To build the visualisations we built three different ‘engines’ (for lack of a better word). I’ll describe the Protovis engine in depth later, but here are the other two in brief.
Word clouds: the data is parsed and sent off to a remote server, which generates the SVG and returns it.
ISOTYPEs: this engine generates an SVG from preloaded SVG paths for icons, varying the size, colour and number of items according to the data provided.
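To illustrate the idea (this is a sketch of the technique, not the engine’s actual code): take a preloaded icon path drawn in a unit box and stamp out one scaled, coloured copy per unit of data, one row per category.

```python
def isotype_svg(icon_path, counts, colour="#336699", size=20, gap=4):
    """Build a tiny ISOTYPE-style SVG string.

    icon_path is an SVG path drawn in a 1x1 unit box; each category in
    counts gets a row of identical icons, one per unit of data, scaled
    to `size` pixels and filled with `colour`.
    """
    step = size + gap
    icons = []
    for row, (label, count) in enumerate(sorted(counts.items())):
        for col in range(count):
            icons.append(
                '<path d="%s" fill="%s" transform="translate(%d,%d) scale(%d)"/>'
                % (icon_path, colour, col * step, row * step, size)
            )
    width = max(counts.values(), default=0) * step
    height = len(counts) * step
    return ('<svg xmlns="http://www.w3.org/2000/svg" width="%d" height="%d">%s</svg>'
            % (width, height, "".join(icons)))
```

A real engine would also label the rows and scale counts down to sensible units, but the core is just repeating a path with a different transform each time.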
Once Drupal has an SVG file from one of these engines, it creates a visualisation content type and saves the SVG to it. The SVG is also rasterised to a standard image (firstly so it is in a familiar format, secondly to ensure all users can view it), and saved as a PDF.
While this is a rather brief explanation of the whole process, I hope it gives an insight into the architecture of the site, and the many complexities that can be encountered, and overcome, in building a project such as this.
Our datavis site has launched! http://look.iriss.org.uk
15th March was International Social Work Day, and we were very proud to have Alison Petch (Director of IRISS) announce and present the site at the conference in Glasgow. We also had a stand allowing delegates to have a first play with the tools on offer. For my own part it was the culmination of nearly 30 hours’ work over two days to finalise some of the tools, and it was a proud moment for me to see some of them being put to use for the first time.
Work still continues developing and improving the tool, and I’m looking forward to expanding on the visualisations.
Currently the registration system has a couple of issues; however, if you wish to use the site, please email us at firstname.lastname@example.org and we’ll sign you up.
It was an evening full of great ideas and lively discussion. Rikke presented the story and processes behind the bar code chart IRISS has created for Fife Council, using the Scottish Government’s annual deprivation data.
As the new site starts to fall into place, we’ve evaluated several Drupal modules and are working to develop some new ones.
Firstly, we’ve chosen the awesome Sheetnode module, which allows users to upload spreadsheet data in various formats (Excel, Mac Excel and Google Spreadsheets) into a Drupal node. Not only does this offer a simple way for users to get their data into the system, it also allows them to modify the data via an easy-to-use editing interface. Lastly (and best of all), each cell, row and column can be accessed through Views. All that power in a single (and woefully unknown) module. Check it out here: http://drupal.org/project/sheetnode
You may have noticed that we are concentrating on SVG as our base format. We chose it because it is a resolution-independent format (you can resize it without any loss of quality). Our goal is to use ImageMagick to convert the SVG to JPEG when a user wants to download the image and use it in their own site or publication.
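The conversion itself is a single call to ImageMagick’s `convert` tool. A hedged sketch of how a back end might shell out to it from Python; the density value and wrapper functions are our own invention, and it assumes ImageMagick is installed on the server:

```python
import subprocess

def convert_command(svg_path, jpeg_path, density=300):
    """Build the ImageMagick command line for rasterising an SVG to JPEG.

    -density sets the dots-per-inch used when rendering the vector
    source, which controls the pixel dimensions of the output.
    """
    return ["convert", "-density", str(density), svg_path, jpeg_path]

def svg_to_jpeg(svg_path, jpeg_path, density=300):
    """Run the conversion; assumes ImageMagick is installed."""
    subprocess.check_call(convert_command(svg_path, jpeg_path, density))
```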
We’ve been working on wireframes to show our pilot users so they can get a better understanding of how the website will work for them. It’s been useful in getting us thinking about how the tool will work in terms of layout and functions.
These were all sketched out in pencil on paper before being transferred to more finalised versions and inked in with a Sharpie.
We’ve attached the wireframes to this post if you would like to share any thoughts.