Adventures in biggish data

This is going to be an evolving blog post retracing my current attempt at dealing with a dataset of 65 gigabytes. It will often look silly – that’s because I am not a programmer by training, and I make an effort to record honestly the steps I take – including all mistakes and “doooohh!” moments.

See the bottom of the post for explanations on some questions. Add yours in the comments if you wish, I’ll do my best to respond.

I do this with the goal of exploring this dataset visually (an interesting methodological question, I find) – and maybe foremost, to learn how to work with big datasets in practice. That’s harder than I thought.

The dataset:

This is the IRI dataset, which is documented in the journal Marketing Science (link to the pdf, link to the journal website).

It is delivered by post on an external hard drive containing a hierarchy of folders with csv files (in various formats) and Excel files, holding weekly data on product purchases in drugstores and shopping malls – collected across 10 years in participating stores in the US. The size of each file ranges from a couple of megabytes (Mb) to ~ 800 Mb. In total they amount to ~ 160 Gb, of which I’ll end up using only 65 Gb.

COUNTER OF COSTS SO FAR:

80 euros (server rental costs)
70 euros (one Terabyte external hard drive).

TOTAL ———–> 150 euros.

ACHIEVED SO FAR:

The files have been imported into a database.

Early November 2013

– delivery of the dataset (160Gb) on a 500Gb hard drive.
– reading of the 75-page pdf coming with the dataset. The dataset contains several different parts; I realize I’ll start by using only a portion of it, amounting to 65Gb.
– copy of the dataset on the hard drive of my laptop (450Gb, spinning disk). Note: the laptop has a 2nd hard drive where the OS runs (SSD, 120Gb, almost full).
– I write Java code to parse the files and import them into a Mongo database stored on my 450Gb hard drive, using the wonderfully helpful Morphia (makes the syntax so easy).
– First attempts at importing: I realize that the database will be much bigger than the original flat files. Why? I investigate on StackOverflow and manage to reduce the size of the future db significantly (a sketch of the trick appears right after this list).
– Still, I don’t know the final size of the db, so there is the risk that my hard drive will get full. I buy a 1 Terabyte / USB 3.0 external hard drive (Seagate, 70 euros at my local store).
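For the curious, here is a sketch of the size-reduction trick. I won’t swear this is the only thing that helped, but the classic answer is that MongoDB (at least in its 2013-era storage engine) repeats every field name inside every single document, so long POJO field names inflate the db considerably. With Morphia, the @Property annotation maps a verbose Java field to a one-letter Mongo field. The class and field names below are purely illustrative, not my actual schema (and package names differ between Morphia versions):

import org.bson.types.ObjectId;
import org.mongodb.morphia.annotations.Entity;
import org.mongodb.morphia.annotations.Id;
import org.mongodb.morphia.annotations.Property;

@Entity("purchases")
public class WeeklyPurchase {

    @Id
    private ObjectId id;

    @Property("s")     // stored as "s" instead of "storeId" in every document
    private int storeId;

    @Property("u")     // units sold that week
    private int units;

    @Property("d")     // dollar amount of the weekly sales
    private double dollars;
}

The Java code stays readable; only the documents on disk carry the short names.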

Mid November 2013

– First attempts to import the Excel / csv files into MongoDB on this external hard drive. The laptop grinds to a halt after 2 hours or so: memory issues. What, on my 16Gb RAM laptop? The cause: by design, MongoDB will use all the memory available on the system if it needs it. It’s supposed to leave enough RAM for other processes, but apparently it does not. I feel stuck. Oh wait, running MongoDB on a virtual machine would allow me to allocate a specific amount of RAM to it? I tried Oracle’s VirtualBox but, long story short, I can’t run a 64-bit virtual machine on my 64-bit laptop because a parameter in my BIOS would need to be switched on to allow it, and my BIOS does not offer this parameter (and I won’t flash a BIOS, that’s beyond what I feel able to do).

– At this point I realize that the external hard drive I bought won’t help me here. I need a remote server where Mongo will sit alone. Or were there other options to keep the data locally?

End November 2013

– I try to rent a server from OVH (13 euros for a month + 13 euros setup costs: a 1 Terabyte server with a small processor from Kimsufi, their low-cost offer). I don’t get access to it within the following 3 days, and give up. I got a refund later.

– I rent a server (at ~ 40 euros per month, no setup cost) with 2 Terabyte hard drives, 24Gb of RAM (!!) and a high-performing processor (i9720) from Hetzner’s auction site. Sounds dodgy and too good to be true, yet I get access to it within 3 hours and install Debian and Mongo on it (easier than I thought, given that I am a Linux noob).

– Re-run my Java code on my laptop to import the Excel/csv files onto this remote server. New bottleneck: it takes ages for the data to transfer over my wifi connection to the server. Of course…

– I rent a second server (at ~ 40 euros per month, still at Hetzner), in the same geographical region as the first, where I’ll put the data and run my Java code from.
– Start uploading the data to it: takes ages (more than two weeks at this pace).

Early December 2013

– Went to my university to benefit from their transfer speed. After some hiccups I got the 65Gb to transfer from my laptop to one of the remote servers I rented in just a couple of hours.
– Starting the import of these 65Gb of csv / Excel files from this server to the MongoDB server. Monitoring the thing over the last 30 minutes, I see that already 917,000,000 (close to 1 billion!!) weekly purchase entries have been transferred to the db – and counting! (one entry looks like “this week, 45 packs of Guinness were bought at store XXX located in Austin, Texas, for a total of $200”). Big data here I come! For some reason the store descriptions didn’t get stored yet though. I’ll look into that later. Very excited about the 1 billion transaction thing. Also worried about how to query this. We’ll see.
– For some reason the database crashed after 1.1 billion transactions imported. Trying to relaunch the import where it stopped, I accidentally drop (delete) the database. Oooops.
– Before relaunching the import, I optimize the code a bit, fix a bug, and go!
– 14 hours since this new import started: 2,949 stores found and stored, 138,985 products found and stored, and 1.3 billion transactions found and stored, and counting. Wow. No crash, looks good.
– 2 days after it started, the import has finished without a crash! 2.29 billion “weekly purchase data” entries were found and stored in the db. The csv / Excel files take 65Gb of disk space, but once imported into the db the same data takes 400 Gigabytes. Wow. Next step: building indexes and running a first query.
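For what it’s worth, the indexing and querying step should be short to write with Morphia. A hedged sketch, reusing the hypothetical WeeklyPurchase POJO sketched above with an @Indexed annotation added on its storeId field (the server name, db name and store id are made up):

Morphia morphia = new Morphia();
morphia.map(WeeklyPurchase.class);
Datastore ds = morphia.createDatastore(new Mongo("my-hetzner-server"), "iri");

// builds every index declared with @Indexed; on 2.29 billion documents this will take a long while
ds.ensureIndexes();

// a first query: all weekly purchase entries of one store
List<WeeklyPurchase> oneStore =
        ds.find(WeeklyPurchase.class).field("storeId").equal(236117).asList();

Whether queries like this come back in seconds or in minutes is exactly what I want to find out next.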

QUESTIONS:

– Why not using university infrastructures?

I am transitioning between two universities (from Erasmus University Rotterdam to EMLyon Business School) at the moment, so this is not the right time to ask for a server to be set up, which could take weeks anyway. When I arrive at EMLyon I’ll reconsider my options. The other reason is that I want to learn how “big data” works in practice. My big dataset is still smallish, and I already run into so many issues. So I am happy to go through this, as it will give me a better understanding of what’s involved in dealing with the next scale: terabytes. I feel that this first-hand knowledge will allow me to teach students better, and that I will make more informed choices when dealing with experts (IT admins from the university or the CNRS) when the time comes to launch larger-scale projects in big data.

– Why MongoDB?

I was just seduced by the simplicity of their query syntax. That’s a horrifying decision criterion, I know. Still, I stand by it. I feel that it is indeed a determining factor, because if the underlying performance is good enough (I’ll see about that), then as a coder I can choose the db system that is the least painful / nicest to use (though I don’t use it myself, the MongoDB javascript console is, I think, a main driver behind the adoption of Mongo as a default in the Node.js community). And with the Morphia library added to it, Mongo for Java is just a breeze to use: create POJOs, save POJOs, query POJOs. That’s it:

Morphia morphia = new Morphia();
morphia.map(Employee.class);
Datastore ds = morphia.createDatastore(new Mongo(), "hr"); // connects to a local MongoDB, database "hr"

ds.save(new Employee("Mister", "GOD", null, 0));

// get an employee without a manager
Employee boss = ds.find(Employee.class).field("manager").equal(null).get();

No tables, no rubbish query syntax or whatever.
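For completeness, the Employee POJO behind this snippet is just an annotated class. This is my guess at what it looks like, loosely following the Morphia documentation of the time (the field names and constructor are illustrative, and package names differ between Morphia versions):

import org.bson.types.ObjectId;
import org.mongodb.morphia.annotations.Entity;
import org.mongodb.morphia.annotations.Id;
import org.mongodb.morphia.annotations.Reference;

@Entity("employees")
public class Employee {

    @Id
    private ObjectId id;
    private String firstName;
    private String lastName;
    @Reference
    private Employee manager;   // null for the boss
    private double salary;

    public Employee() {
        // Morphia needs a no-arg constructor
    }

    public Employee(String firstName, String lastName, Employee manager, double salary) {
        this.firstName = firstName;
        this.lastName = lastName;
        this.manager = manager;
        this.salary = salary;
    }
}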

Of course, I’ll see with this current experiment whether Mongo fits the job in terms of performance. If it doesn’t, I’ll explore Neo4j or SQL (in that order).

– Why not Amazon services?

Yes, yes. I am constrained by my attachment to MongoDB here. I would have run MongoDB on Amazon and all would have been fine, maybe. But the instructions on how to run Mongo on Amazon EC2 got me scared.

Benchmark Akka vs regular Java

I think I found *the* solution for dealing with big data / big computations in Java. It’s called Akka, and I learned about it thanks to a tip from Miguel Biraud.

I had tried several solutions to speed things up, but they did not work well:

– multithreading? Yes, but there is a hard limit: the number of threads available on your computer.

– GPGPU computing? Very hard to code, and I was disappointed by the performance of Java libraries supposed to ease the pain, like Ateji.

So, Akka!

That’s a framework for Scala and Java, still evolving rapidly. It uses the logic of actors to distribute work. Actors can create actors, which promises a nice multiplier effect. Actors can be created by the millions!

Anyway, I created a small benchmark to make sure it was easy to code, and that it delivered some speedup even with a quick and dirty test. The results:

TEST 1

double loop through an array of 1,000 integers
operation: product of elements of arrays

nb of actors / operations | Akka (in milliseconds) | regular Java (in milliseconds)
10                        | 150                    | 150
100                       | 1,200                  | 5,600
1,000                     | 11,000                 | 56,000

conclusion test 1: Akka is faster by a factor of 5.

TEST 2

double loop through an array of 10,000 integers
operation: do some complex regex ops on a random String for each of the 10,000 steps

nb of actors / operations | Akka (in milliseconds) | regular Java (in milliseconds)
10                        | 348                    | 874
100                       | 2,231                  | 7,600
1,000                     | 20,000                 | 75,000

conclusion test 2: Akka is faster by a factor of 3 to 4.

The setup was painless. The code was written by adapting the Hello World example provided on the site. The documentation is not that easy to follow, and as Akka versions evolve quickly, it is hard to rely on tutorials or StackOverflow Q&As that are even a few months old. But the logic of operations (actors receiving and sending messages) is quite straightforward.
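To give an idea of what “actors receiving and sending messages” looks like in Java, here is a minimal sketch in the spirit of the first test. This is not the actual benchmark code (which is linked below); it assumes the classic Akka Java API with UntypedActor, which may change as Akka evolves:

import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.actor.UntypedActor;

// a worker actor: multiplies the elements of the int[] it receives and sends the result back
public class ProductWorker extends UntypedActor {

    @Override
    public void onReceive(Object message) {
        if (message instanceof int[]) {
            long product = 1;
            for (int value : (int[]) message) {
                product *= value;
            }
            getSender().tell(product, getSelf());   // reply to whoever sent the array
        } else {
            unhandled(message);
        }
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("benchmark");
        // the benchmark creates many such workers and feeds them arrays in parallel
        ActorRef worker = system.actorOf(Props.create(ProductWorker.class), "worker-1");
        worker.tell(new int[]{1, 2, 3, 4}, ActorRef.noSender());
    }
}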

Note that I did not use the “multiplier effect” of Akka (actors launching actors), which could have improved the results further. Finally, this was performed on a laptop, whereas the real promise of Akka is in distributed environments (several servers working together). I don’t have a use case for that yet, but this benchmark suggests that Akka will be very handy for such cases.

The code of the benchmark, and the results in an Excel file:

https://github.com/seinecle/AkkaTest/blob/master/README.md

I am Clement Levallois, and you can find examples of my work here or follow me on Twitter

Gephi – the possibilities of a data visualization platform

Gephi is a reference for the visualization of networks. It can become much more.

1. The first usage of Gephi is probably to, well, download it, install it and work with it. Simple, that’s how we know Gephi:

[screenshot: the Gephi desktop application]


2. On the Gephi website, we see that a second use is possible: download the “Toolkit” version of Gephi.

[screenshot: the Toolkit download on the Gephi website]

This toolkit version of Gephi is made for programmers: there is no “window” appearing or anything, just pure code to execute Gephi functions automatically and repeatedly. For example (a sketch of such a script follows the list below):

– import a network

– apply a layout

– export the picture of the network to pdf.

– pick another network.

– repeat the previous steps x 1,000
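Here is a hedged sketch of what such a headless script can look like with the Toolkit. The class names follow the Toolkit demos but may differ slightly between Toolkit versions, and the file names are made up:

import java.io.File;
import org.gephi.io.exporter.api.ExportController;
import org.gephi.io.importer.api.Container;
import org.gephi.io.importer.api.ImportController;
import org.gephi.io.processor.plugin.DefaultProcessor;
import org.gephi.project.api.ProjectController;
import org.gephi.project.api.Workspace;
import org.openide.util.Lookup;

public class BatchExport {

    public static void main(String[] args) throws Exception {
        ProjectController pc = Lookup.getDefault().lookup(ProjectController.class);
        ImportController importer = Lookup.getDefault().lookup(ImportController.class);
        ExportController exporter = Lookup.getDefault().lookup(ExportController.class);

        for (int i = 0; i < 1000; i++) {                                      // repeat for each network
            pc.newProject();
            Workspace workspace = pc.getCurrentWorkspace();

            Container container = importer.importFile(new File("network_" + i + ".gexf"));
            importer.process(container, new DefaultProcessor(), workspace);   // import a network

            // ... apply a layout here (for instance a few hundred iterations of ForceAtlas2) ...

            exporter.exportFile(new File("network_" + i + ".pdf"));           // export the picture to pdf
            pc.closeCurrentProject();
        }
    }
}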

As the activity of the Gephi forum shows, many users use Gephi this way.


3. Gephi also comes in a third flavor: Gephi plugins.

Plugins are basically little modifications that add missing functions to Gephi. As a user of Gephi you have probably run into a situation where you wished this or that could be done in Gephi. To name a few:

– add a map as a background

– replace nodes by pictures, or any shape

– import Twitter networks into Gephi

– run your preferred network metrics, not present in the Statistics panel.

Etc…

Gephi plugins can be written to do all that, adding the functionalities that you need and which are not originally present in Gephi. Actually, many plugins have been written by individuals and firms to meet their needs, and they have shared these plugins publicly. Anybody can install these plugins directly from Gephi:

[screenshot: the plugin installation window in Gephi]

(open this window in Gephi by navigating in the menu: Tools -> Plugins)

When a plugin you chose is installed in Gephi, it is integrated seamlessly, so that you see no difference from the original functions of Gephi:

[screenshot: an installed plugin blending into the Gephi interface]

These plugins are also described and cataloged in a convenient way here: https://marketplace.gephi.org/plugins/. I personally developed 2 public plugins: one to sort isolated nodes alphabetically, another to apply a 3D layout. Certainly, many more plugins have been written by firms for their in-house needs.


4. Now, I really believe that Gephi is ready to develop into a 4th flavor: Gephi as a data visualization platform. How?

Gephi can be extended with plugins, we’ve seen that. The thing is, Gephi is itself made of plugins – it is not a monolithic piece of code. Each part of Gephi has the flexibility to be modified and extended. So by creating new, elaborate plugins you are not just adding some minor features to Gephi – you actually transform Gephi into something quite new. Two examples:

– displaying some barcharts in Gephi? Easy:

[screenshot: bar charts displayed inside Gephi]

The bar charts above are made possible simply by adding a new plugin to Gephi (tech note: based on JavaFX; the example above was integrated into Gephi in 15 minutes by following this tutorial).
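For the technically inclined, the general recipe for showing JavaFX content inside a Swing / NetBeans Platform application like Gephi is the JFXPanel bridge. A minimal, illustrative sketch, not the code of an actual Gephi plugin (the data values are arbitrary):

import java.awt.BorderLayout;
import javafx.application.Platform;
import javafx.embed.swing.JFXPanel;
import javafx.scene.Scene;
import javafx.scene.chart.BarChart;
import javafx.scene.chart.CategoryAxis;
import javafx.scene.chart.NumberAxis;
import javafx.scene.chart.XYChart;
import javax.swing.JPanel;

// a Swing panel (the kind of component a Gephi plugin contributes) hosting a JavaFX bar chart
public class BarChartPanel extends JPanel {

    public BarChartPanel() {
        super(new BorderLayout());
        JFXPanel fxPanel = new JFXPanel();       // bridge between Swing and JavaFX
        add(fxPanel, BorderLayout.CENTER);

        Platform.runLater(() -> {                // JavaFX content must be built on the FX thread
            BarChart<String, Number> chart = new BarChart<>(new CategoryAxis(), new NumberAxis());
            XYChart.Series<String, Number> series = new XYChart.Series<>();
            series.getData().add(new XYChart.Data<>("A", 10));
            series.getData().add(new XYChart.Data<>("B", 25));
            series.getData().add(new XYChart.Data<>("C", 5));
            chart.getData().add(series);
            fxPanel.setScene(new Scene(chart));
        });
    }
}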

But that’s just the beginning. I suppose that, just like many others, I often face the Cornelian dilemma: web-based or desktop-based viz? You can actually embed webpages inside Gephi:

[screenshot: the New York Times front page displayed inside Gephi]

That’s the New York Times front page here, but think of JavaScript-based data visualizations – why not a d3.js viz:

[screenshot: a d3.js visualization running inside Gephi]

There are some limits in terms of performance for web-based viz inside Gephi, but the screenshot above shows a perfectly functional and interactive d3.js example running inside Gephi. And it is possible to generate and load these visualizations from local js files based on local data…


In short, Gephi can be seen as a free, open-source, well-architected data visualization platform – not just a network viz app. With the liberal license model chosen for Gephi (free for integration in commercial apps), this is surely a very effective solution to be explored by companies and data-vizzers in general.