DataTorrent tackles complexity of Hadoop data ingestion
To help tame this complexity, Santa Clara, California, start-up DataTorrent has released what it calls the first enterprise-grade ingestion application for Hadoop, DataTorrent dtIngest.
The application is designed to streamline the process of collecting, aggregating, and moving data onto and off of a Hadoop cluster.
The software is based on Project Apex, an open source software package available under the Apache 2.0 license.
Working as a component within a Hadoop platform, dtIngest handles both streaming and batch data. It can exchange data across a variety of file systems and protocols, including NFS, FTP, the Hadoop Distributed File System (HDFS), Amazon Web Services' Simple Storage Service (S3), Kafka, and the Java Message Service (JMS).
The software is fault tolerant, in that it can automatically resume a file transfer after a disruption. It also comes with a point-and-click interface, as well as monitoring and logging capabilities.
The company has released dtIngest for free, in the hope that users will upgrade to its commercial Hadoop data ingestion pipeline software, DataTorrent RTS 3. RTS 3 is built on dtIngest and Project Apex and adds capabilities for operational management, simplified application development, and data visualization.
DataTorrent was co-founded by Amol Kekre and Phu Hoang, a pair of engineers who used to work at Hadoop pioneer Yahoo. The company has formed partnerships with Hadoop distributors Hortonworks and Pivotal, and has drummed up nearly $24 million in early stage funding from investors.
Joab Jackson covers enterprise software and general technology breaking news for The IDG News Service. Follow Joab on Twitter at @Joab_Jackson. Joab's e-mail address is Joab_Jackson@idg.com