List of Datasets, List of Lists of Datasets

Consider this a sort of public-facing list of datasets I've found interesting, have played with, or want to play with.

List of Datasets

Lending Club

Peer-to-peer credit marketplace Lending Club publishes data on issued and declined loans.

World Health Organisation

The WHO publishes many interesting datasets, though they don't do a great job of linking to the raw files: their mortality database, a comprehensive dataset providing mortality rates for all reporting countries, is difficult to find from the site navigation.

New York Times

The New York Times has a fairly comprehensive, well-documented open API.


Divvy

The Chicago public cycle hire scheme Divvy (akin to New York's Citi Bike or London's Barclays 'Boris Bikes') published data on 750,000 trips made, for their data challenge.


Outpan

Outpan aims to provide a single database for turning barcodes into product information. It's not extremely complete yet.


Medicare

In the interests of transparency, a dataset containing information on usage of Medicare. It could complement some of the other medical datasets available.

List of Lists of Datasets


How Spark does Class Loading

Using the Spark shell, you can define classes on the fly and then use them in your distributed computation.

Contrived Example
scala> class Vector2D(val x: Double, val y: Double) extends Serializable {
| def length = Math.sqrt(x*x + y*y)
| }
defined class Vector2D
scala> val sourceRDD = sc.parallelize(Seq((3,4), (5,12), (8,15), (7,24)))
sourceRDD: org.apache.spark.rdd.RDD[(Int, Int)] = ParallelCollectionRDD[5] at parallelize at <console>:13
scala> => new Vector2D(x._1, x._2)).map(_.length).collect()
14/03/30 09:21:59 INFO SparkContext: Starting job: collect at <console>:17
res1: Array[Double] = Array(5.0, 13.0, 17.0, 25.0)

In order for the remote executors here to actually run your code, they must have knowledge of the Vector2D class, yet they’re running on a different JVM (and probably different physical machine). How do they get it?

  • we choose a directory on disk to store the class files
  • a virtual directory is created at SparkIMain:101
  • a Scala compiler is instantiated with this directory as its output directory at SparkIMain:299
  • this means that whenever a class is defined in the REPL, its class file is written to disk
  • an HTTP server is created to serve the contents of this directory at SparkIMain:102
  • we can see info about the HTTP server in the logs:
    14/03/23 23:39:21 INFO HttpFileServer: HTTP File server directory is /var/folders/8t/bc2vylk13j14j13cccpv9r6r0000gn/T/spark-1c7fbed7-5c87-4c2c-89e8-be95c2c7ac54
    14/03/23 23:39:21 INFO Executor: Using REPL class URI:
  • the HTTP server's URL is stored in the Spark config, which is shipped out to the executors
  • the executors install a URL classloader pointing at the HTTP class server at Executor:74

For the curious, we can figure out what the url of a particular class is and then go check it out in a browser/with curl.

import scala.reflect.ClassTag

def urlOf[T: ClassTag] = {
   val clazz = implicitly[ClassTag[T]].erasure
   // the class server's URL travels to the executors in the Spark config,
   // under spark.repl.class.uri; fetch it and append the class's resource path
   val uri = sc.getConf.get("spark.repl.class.uri")
   uri + "/" + clazz.getName.replace('.', '/') + ".class"
}

Do it yourself

It’s pretty trivial to replicate this ourselves – in Spark’s case there is a Scala compiler which writes the class files to disk, but assuming we want to serve classes from a fairly normal JVM with a fairly standard classloader, we don’t even need to bother with the write to disk: we can grab the class file using getResourceAsStream. Nor does it require any Scala magic – here's an example class server in Java using Jetty:

import;
import;
import;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import;
import org.eclipse.jetty.server.NetworkTrafficSelectChannelConnector;
import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.AbstractHandler;

class ClasspathClassServer {
	private Server server = null;
	private int port = -1;

	void start() throws Exception {
		System.out.println("Starting server...");
		if(server != null) {
			return;
		}

		server = new Server();
		NetworkTrafficSelectChannelConnector connector = new NetworkTrafficSelectChannelConnector(server);
		server.addConnector(connector);

		ClasspathResourceHandler classpath = new ClasspathResourceHandler();
		server.setHandler(classpath);

		server.start();

		port = connector.getLocalPort();
		System.out.println("Running on port " + port);
	}

	class ClasspathResourceHandler extends AbstractHandler {
		@Override
		public void handle(String target, Request baseRequest, HttpServletRequest request, HttpServletResponse response)
					throws IOException, ServletException {
			System.out.println("Serving target: " + target);

			try {
				// a target of '/com.example.Foo' means: serve Foo's class file
				// from our own classpath
				Class<?> clazz = Class.forName(target.substring(1));
				InputStream classStream = clazz.getResourceAsStream('/' + clazz.getName().replace('.', '/') + ".class");

				response.setContentType("application/octet-stream");
				response.setStatus(HttpServletResponse.SC_OK);
				baseRequest.setHandled(true);

				OutputStream os = response.getOutputStream();
				IOUtils.copy(classStream, os);
			} catch(Exception e) {
				System.out.println("Exception: " + e.getMessage());
				response.setStatus(HttpServletResponse.SC_NOT_FOUND);
				baseRequest.setHandled(true);
			}
		}
	}
}

It’s then just a matter of setting up a URL Classloader on the other side!

Further Examples

An example of using a similar technique to write a ‘compute server’ in Scala – somewhat akin to a very stripped-down version of Spark.


Chunkify – Grouping series in pandas for easy display

Continuing with the same population dataset from last time, say we want to group countries by the year of their last valid entry in the population data table.

last_year_of_data = pop_stats.groupby('Name')['Year'].max().reset_index()
                  Name  Year
0              Albania  2004
1  Antigua and Barbuda  1983
2            Argentina  1996
3              Armenia  2012
4            Australia  2011

Let’s say we want to display these groups of countries in chunks – enter chunkify.

def chunkify(series, into=4):
    #ensure the series has a sequential index
    series = series.reset_index(drop=True)
    #chunk the series into columns, re-indexing each chunk
    #from zero so the chunks align row-by-row when concatenated
    columns = [series[i::into].reset_index(drop=True)
                    for i in range(into)]
    #stick the columns together
    df = pd.concat(columns, axis=1)
    #rename the columns sequentially
    df.columns = range(into)
    return df

Usage is simple.

last_year_of_data.groupby('Year')['Name'].apply(lambda x : chunkify(x, 3))\



0 1 2
2008 0 Malaysia Maldives
2009 0 Fiji Iceland Ireland
1 Jordan New Zealand South Africa
2010 0 Bahrain Belarus France
1 Georgia Italy Kazakhstan
2 Kyrgyzstan Lithuania Mongolia
3 Romania Russian Federation Slovakia
4 Slovenia Sweden Switzerland
5 TFYR Macedonia United Kingdom United Kingdom, Northern Ireland
2011 0 Australia Bosnia and Herzegovina Brunei Darussalam
1 Bulgaria Cyprus Denmark
2 Egypt Finland Greece
3 Hong Kong SAR Japan Kuwait
4 Malta Mauritius Netherlands
5 Poland Portugal Qatar
6 Republic of Korea Rodrigues Singapore
7 Spain United Kingdom, Scotland
2012 0 Armenia Austria Belgium
1 Croatia Czech Republic Estonia
2 Germany Hungary Israel
3 Latvia Luxembourg Norway
4 Republic of Moldova Serbia Seychelles
5 Ukraine United Kingdom, England and Wales
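
As a quick self-contained sanity check (toy data rather than the WHO table), chunking seven letters into three columns pads the final rows with NaNs:

```python
import pandas as pd

def chunkify(series, into=4):
    # sequential index, then interleave into `into` columns,
    # re-indexing each chunk so the rows line up
    series = series.reset_index(drop=True)
    columns = [series[i::into].reset_index(drop=True) for i in range(into)]
    df = pd.concat(columns, axis=1)
    df.columns = range(into)
    return df

letters = pd.Series(list('abcdefg'))
chunked = chunkify(letters, into=3)
# rows read in order: a b c / d e f / g NaN NaN
```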

Labelled lines – prettier charts in matplotlib

The default legend for matplotlib line charts can leave a little to be desired. With many colours it can also sometimes be a little tricky to match the legend to the appropriate line. Suppose instead we place the labels next to the lines.

Smoothed Annual Population Change – WHO Population Data

Additionally, we’ve removed the top and right axes, increased the font sizes of the labels, and set the ticks to extend outwards. We first take our pandas dataframe population_data_frame and plot it as normal (disabling gridlines and making the lines slightly thicker), then add the larger axis labels.

population_data_frame.plot(legend=False, grid=False, linewidth=2, figsize=(9,6))
plt.xlabel('Year', fontsize=20)
plt.ylabel('Population \n% Change', rotation='horizontal', labelpad=80, fontsize=20)

Next we define a function that, given a set of axes, will

  • Hide the top and right axes
  • Set the ticks to only display on the bottom x-axis and the left y-axis
  • Set the ticks to extend outwards
  • Increase the font-size of the labels

def format_graph(ax):
    # hide the top and right axes
    ax.spines['top'].set_visible(False)
    ax.spines['right'].set_visible(False)
    # ticks only on the bottom x-axis and the left y-axis
    ax.xaxis.set_ticks_position('bottom')
    ax.yaxis.set_ticks_position('left')
    # outward-pointing ticks with larger labels
    ax.tick_params(axis='both', direction='out', labelsize=14)

The label_lines function handles the placing of text next to the final vertex of the line.

def label_lines(ax, offset, label_formatter=None, **kwargs):
    for handle, label in zip(*ax.get_legend_handles_labels()):
        path = handle.get_path()
        #careful with the NaNs
        last_vertex = pd.DataFrame(path.vertices).dropna().values[-1]
        nicer_label = label_formatter(label) if label_formatter else label
        plt.text(last_vertex[0]+offset[0], last_vertex[1]+offset[1], nicer_label, color=handle.get_color(), transform=ax.transData, **kwargs)

Finally we call the above code with the current axes! Given the slight label overlap, let’s outline the text in white with path_effects.

import matplotlib.patheffects as PathEffects
ax = plt.gca()
label_lines(ax, offset=(1,0),
                path_effects=[PathEffects.withStroke(linewidth=3, foreground="w")])

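Putting the pieces together on made-up data (a hypothetical two-line dataframe, plotted with the headless Agg backend so it runs anywhere):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend; no display needed
import matplotlib.pyplot as plt
import pandas as pd

def label_lines(ax, offset, label_formatter=None, **kwargs):
    # place each line's label just past its final (non-NaN) vertex
    for handle, label in zip(*ax.get_legend_handles_labels()):
        path = handle.get_path()
        last_vertex = pd.DataFrame(path.vertices).dropna().values[-1]
        nicer_label = label_formatter(label) if label_formatter else label
        ax.text(last_vertex[0] + offset[0], last_vertex[1] + offset[1],
                nicer_label, color=handle.get_color(),
                transform=ax.transData, **kwargs)

df = pd.DataFrame({'up': [1.0, 2.0, 3.0], 'down': [3.0, 2.0, 1.0]})
fig, ax = plt.subplots()
for column in df:
    ax.plot(df.index, df[column], label=column, linewidth=2)
label_lines(ax, offset=(0.05, 0))
```

Each line now carries its own colour-matched label at its right-hand end, so no separate legend is needed.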


Connecting to Vertica from Spark

So you have a lot of data in Vertica, and you want to do analytics beyond what’s easily expressible in vSQL, at scale, without writing nasty C++ UDFs; or perhaps you have a lot of data already sitting in HDFS to join against.

Enter Spark.

1. Grab the Vertica JDBC drivers and Hadoop connectors from the Vertica support portal and put them on your Spark classpath (e.g. via ADD_JARS)

2. Use something like this class

import org.apache.spark.rdd.RDD
import com.vertica.hadoop._
import org.apache.hadoop.mapreduce._
import org.apache.hadoop.conf.Configuration
import

// note: `query` uses the `sc` SparkContext provided by the spark shell
class Vertica(val hostnames: String,
              val database: String,
              val username: String,
              val password: String,
              val port: String = "5433") extends Serializable {

    def configuration: Configuration = {
        val conf = new Configuration
        conf.set("mapred.vertica.hostnames", hostnames)
        conf.set("mapred.vertica.database", database)
        conf.set("mapred.vertica.username", username)
        conf.set("mapred.vertica.password", password)
        conf.set("mapred.vertica.port", port)
        conf
    }

    def query(sql: String): RDD[VerticaRecord] = {
        val job = new Job(configuration)
        VerticaInputFormat.setInput(job, sql)
        sc.newAPIHadoopRDD(job.getConfiguration, classOf[VerticaInputFormat], classOf[LongWritable], classOf[VerticaRecord]).map(_._2)
    }
}

3. Voilà!

val vertica = new Vertica("my-node-1,my-node-2,my-node-3", "my-db", "username", "password")
val v:RDD[VerticaRecord] = vertica.query("select date, category, sum(amount) from my_transaction_table group by date, category;")

Linux Keylogger Proof of Concept

I’ve just read ‘The Linux Security Circus: On GUI isolation’.

It struck me that a Linux keylogger is perfectly easy to write – I had previously (naïvely) thought such a program would only work with root permissions.

Alas! It’s stupidly easy.

see result of 30 minutes of hacking

The code simply calls xinput test [id of keyboard device] and parses out the keycodes. The id of your keyboard device can be found from the device listing given by xinput list.
