Spark update brings R support and machine learning chops

June 11, 2015
Spark, one of the most popular big data processing platforms, now supports R, one of the premier statistical programming languages, a combination that could make statistical analysis of very large datasets considerably easier.

"R is the lingua franca of data scientists and its adoption has exploded in the last two years," wrote Patrick Wendell, one of the chief contributors to Spark, in an e-mail. Wendell is also a cofounder and software engineer at Databricks, which offers a commercial cloud-based version of Spark for enterprises.

The new version "will let R users work directly on large datasets, scaling to hundreds or thousands of machines, well beyond the limits of a stand-alone R program," Wendell wrote.

The newly updated Spark, version 1.4, also includes production-ready machine learning capabilities and a more comprehensive set of visual debugging tools.

With more than 2 million users worldwide, R is one of the most widely used programming languages specifically designed for statistical computing and predictive analytics.

An open source project, R was designed to work only on a single computer, which limits the size of the analysis jobs that can be readily executed. There have been a few efforts to run large R jobs on clusters of computers, such as Hewlett-Packard's Distributed R package.

The newly updated Spark provides another boost for running R in parallel. In the past year, the Spark data processing platform, an open source project overseen by the Apache Software Foundation, has grown in popularity, as many organizations have used the technology for analyzing data stored across a cluster of computers.

Companies such as Autodesk, eBay, NASA, OpenTable and Yahoo have all used Spark to make sense of large collections of data. In a December 2014 survey of roughly 3,000 Java professionals conducted by Java tool provider Typesafe, about 17 percent of respondents said they were running Spark in their operations.

Spark 1.4 comes with SparkR, an API (application programming interface) that lets R programs submit analysis jobs to Spark for execution. The data to be analyzed can come from a variety of sources, including Apache Hive data warehouses, the Hadoop Distributed File System (HDFS), the Apache Parquet columnar storage format, or a JSON (JavaScript Object Notation)-formatted data feed.
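As a rough sketch of what that looks like in practice, the following R session uses the SparkR API to load a JSON file into a distributed DataFrame. The local master setting and the people.json file are illustrative assumptions, not details from the release:

    library(SparkR)

    # Connect to Spark; "local" stands in for a real cluster master URL
    sc <- sparkR.init(master = "local", appName = "SparkR-demo")
    sqlContext <- sparkRSQL.init(sc)

    # Read a JSON source into a DataFrame distributed across the cluster
    people <- read.df(sqlContext, "people.json", source = "json")
    printSchema(people)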

"Because SparkR uses Spark's parallel engine underneath, operations take advantage of multiple cores or multiple machines, and can scale to data sizes much larger than stand-alone R programs," noted Wendell, in a blog post announcing the release.

The new release also comes with a production-ready machine learning pipeline, first introduced as an alpha feature in Spark 1.2. Machine learning is the programmatic approach by which computers infer new information through the use of statistical algorithms and copious amounts of data. The new machine learning pipeline comes with a set of commonly used algorithms for preparing and transforming data. Emerging from alpha status means that developers can safely build against the API without worrying that it will change in future editions of Spark.

The new release comes just in time for the Spark Summit user conference, held next week in San Francisco.

Joab Jackson covers enterprise software and general technology breaking news for The IDG News Service. Follow Joab on Twitter at @Joab_Jackson. Joab's e-mail address is Joab_Jackson@idg.com
