
Jupyter Notebook Server with pyspark over SSL

In this post, we will describe how to configure a publicly accessible Jupyter Notebook Server over SSL. The Jupyter notebook is, by default, accessible only via localhost. In some cases, it is useful to expose it publicly. Here is how to do it simply…

Configure a password for the public Notebook server. Open a Python REPL:

    $ python
    >>> from IPython.lib import passwd
    >>> passwd()
    Enter password:
    Verify password:
    'sha1:408a945027ad:fec843e6f020d6c172a16b5ad89989e3c3175d99'

Create a self-signed cert:

    openssl req -x509 -nodes -days...

Apache Zeppelin with SSL

Apache Zeppelin is an awesome web-based notebook that allows for interactive data analytics. It is architected to be language agnostic and (as of today) supports Scala (with Apache Spark), SparkSQL, Markdown and Shell. In this post, we will describe how to configure a Zeppelin notebook server with SSL. Here is how to do it simply…

First, install Zeppelin:

    git clone https://github.com/apache/incubator-zeppelin.git
    mvn clean package -Pspark-1.4 -Dhadoop.version=2.2.0 -Phadoop-2.2 -DskipTests

Note that, eventually,...

Play Framework – Adapting Java 8 CompletableFutures to Play F.Promises

Recently, I have been playing around with Java 8 and the Play Framework (2.4RC1) – bad pun intended :P. As I worked to develop the codebase, I found the need to integrate with Java 8 CompletableFutures. No big deal, right? At first, it didn't seem to be, but as I continued coding, I quickly became very unhappy with how my code was turning out. The Play Framework for Java introduces...

JavaScript minification. It’s easy!

Minification is definitely not rocket science. You just need a few simple tools…

cat or type – to concatenate files (type for Windows users)
minifier – a JS minifier (there are many, if you don't fancy minifier)

Setting up the tools is easy…

cat / type – Nothing to do!
minifier – Simply run:

    npm install -g minifier

>> Don't have npm!?!? It ships with Node.js. Get it here! <<

And using the...

Apache Spark: Convert CSV to RDD

Below is a simple Spark / Scala example describing how to convert a CSV file to an RDD and perform some simple filtering. This example transforms each line in the CSV into a Map of the form header-name -> data-value. Each map key corresponds to a header name, and each data value corresponds to the value of that key on the specific line. This particular example also assumes that the header information is...
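The per-line logic described above can be sketched in plain Scala as follows. The names here (CsvToMap, csvLineToMap, parseLine) are illustrative, not from the post, and the splitting is naive (no quoted-field handling); in Spark, the same function would be applied with something like sc.textFile(path).map(csvLineToMap(header, _)) after separating out the header line:

```scala
// Sketch of the header-name -> data-value transformation.
// Names are hypothetical; the post's own code may differ.
object CsvToMap {
  // Naive CSV split on commas; does not handle quoted fields.
  def parseLine(line: String): Array[String] =
    line.split(",", -1).map(_.trim)

  // Zip header names with the values from one data line.
  def csvLineToMap(header: Array[String], line: String): Map[String, String] =
    header.zip(parseLine(line)).toMap

  def main(args: Array[String]): Unit = {
    // Stand-in for sc.textFile(path): a small in-memory "file".
    val lines  = Seq("name,age,city", "alice,34,Austin", "bob,29,Boston")
    val header = parseLine(lines.head)

    // In Spark this would be an rdd.map over the non-header lines.
    val records = lines.tail.map(csvLineToMap(header, _))

    // Simple filtering by a named column, as the post describes.
    val adults = records.filter(_("age").toInt >= 30)
    adults.foreach(println)
  }
}
```

Working with Maps keyed by header name keeps downstream filters readable (record("age") instead of positional indices), at the cost of building a small Map per line.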
