Introduction
Hadoop provides a native Java API to support file system operations such as creating, renaming or deleting files and directories, opening, reading or writing files, setting permissions, etc. A very basic example of how to read and write files from Hadoop can be found on the Apache wiki.
This is great for applications running within the Hadoop cluster, but there are use cases where an external application needs to manipulate HDFS, e.g. to create directories and write files to them, or to read the content of files stored on HDFS. Hortonworks developed an additional API to support these requirements based on standard REST functionality.
WebHDFS REST API
The WebHDFS concept is based on the standard HTTP verbs GET, PUT, POST and DELETE. Operations such as OPEN, GETFILESTATUS and LISTSTATUS use HTTP GET; others such as CREATE, MKDIRS, RENAME and SETPERMISSION rely on HTTP PUT. The APPEND operation is based on HTTP POST, while DELETE uses HTTP DELETE.
Authentication can be based on the user.name query parameter (as part of the HTTP query string); if security is turned on, it relies on Kerberos.
The standard URL format is as follows: http://host:port/webhdfs/v1/<path>?op=operation&user.name=username
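As an illustration, such a URL can be assembled with a small helper. This is a hypothetical sketch; the function name and parameters are ours, not part of any Hadoop client library:

```python
from urllib.parse import quote, urlencode

def webhdfs_url(host, port, path, op, user=None, **params):
    """Build a WebHDFS URL for the given HDFS path and operation."""
    query = {"op": op}
    if user is not None:
        query["user.name"] = user
    query.update(params)
    # The HDFS path is appended after /webhdfs/v1, keeping its leading slash.
    return "http://%s:%d/webhdfs/v1%s?%s" % (host, port, quote(path), urlencode(query))

print(webhdfs_url("localhost", 50070, "/tmp", "GETFILESTATUS", user="istvan"))
# → http://localhost:50070/webhdfs/v1/tmp?op=GETFILESTATUS&user.name=istvan
```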
In some cases the namenode returns an HTTP 307 Temporary Redirect with a Location header referring to the appropriate datanode. The client then needs to follow that URL to execute the file operation on that particular datanode.
By default the namenode and datanode ports are 50070 and 50075, respectively; see more details about the default HDFS ports on the Cloudera blog.
In order to enable WebHDFS, we need to add the following property to hdfs-site.xml:
<property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
</property>
WebHDFS examples
As the simplest approach, we can use curl to invoke the WebHDFS REST API.
1./ Check directory status
$ curl -i "http://localhost:50070/webhdfs/v1/tmp?user.name=istvan&op=GETFILESTATUS"
HTTP/1.1 200 OK
Content-Type: application/json
Expires: Thu, 01-Jan-1970 00:00:00 GMT
Set-Cookie: hadoop.auth="u=istvan&p=istvan&t=simple&e=1370210454798&s=zKjRgOMQ1Q3NB1kXqHJ6GPa6TlY=";Path=/
Transfer-Encoding: chunked
Server: Jetty(6.1.26)

{"FileStatus":{"accessTime":0,"blockSize":0,"group":"supergroup","length":0,"modificationTime":1370174432465,"owner":"istvan","pathSuffix":"","permission":"755","replication":0,"type":"DIRECTORY"}}
This is similar to executing the Hadoop ls filesystem command:
$ bin/hadoop fs -ls /
Warning: $HADOOP_HOME is deprecated.
Found 1 items
drwxr-xr-x   - istvan supergroup          0 2013-06-02 13:00 /tmp
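The JSON body returned by GETFILESTATUS is easy to consume programmatically. A minimal Python sketch (the sample payload is copied from the response above; the helper name is ours):

```python
import json

# FileStatus payload as returned by the GETFILESTATUS call above
SAMPLE = ('{"FileStatus":{"accessTime":0,"blockSize":0,"group":"supergroup",'
          '"length":0,"modificationTime":1370174432465,"owner":"istvan",'
          '"pathSuffix":"","permission":"755","replication":0,"type":"DIRECTORY"}}')

def file_status(body):
    """Extract the FileStatus object from a GETFILESTATUS response body."""
    return json.loads(body)["FileStatus"]

status = file_status(SAMPLE)
print(status["type"], status["owner"], status["permission"])
# → DIRECTORY istvan 755
```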
2./ Create a directory
$ curl -i -X PUT "http://localhost:50070/webhdfs/v1/tmp/webhdfs?user.name=istvan&op=MKDIRS"
HTTP/1.1 200 OK
Content-Type: application/json
Expires: Thu, 01-Jan-1970 00:00:00 GMT
Set-Cookie: hadoop.auth="u=istvan&p=istvan&t=simple&e=1370210530831&s=YGwbkw0xRVpEAgbZpX7wlo56RMI=";Path=/
Transfer-Encoding: chunked
Server: Jetty(6.1.26)
The equivalent Hadoop filesystem command is as follows:
$ bin/hadoop fs -ls /tmp
Warning: $HADOOP_HOME is deprecated.
Found 2 items
drwxr-xr-x   - istvan supergroup          0 2013-06-02 12:17 /tmp/hadoop-istvan
drwxr-xr-x   - istvan supergroup          0 2013-06-02 13:02 /tmp/webhdfs
3./ Create a file
Creating a file requires two steps. First we run the command against the namenode, then we follow the redirect and execute the WebHDFS call against the appropriate datanode.
Step 1:
$ curl -i -X PUT "http://localhost:50070/webhdfs/v1/tmp/webhdfs/webhdfs-test.txt?user.name=istvan&op=CREATE"
HTTP/1.1 307 TEMPORARY_REDIRECT
Content-Type: application/octet-stream
Expires: Thu, 01-Jan-1970 00:00:00 GMT
Set-Cookie: hadoop.auth="u=istvan&p=istvan&t=simple&e=1370210936666&s=BLAIjTpNwurdsgvFxNL3Zf4bzpg=";Path=/
Location: http://istvan-pc:50075/webhdfs/v1/tmp/webhdfs/webhdfs-test.txt?op=CREATE&user.name=istvan&overwrite=false
Content-Length: 0
Server: Jetty(6.1.26)
Step 2:
$ curl -i -T webhdfs-test.txt "http://istvan-pc:50075/webhdfs/v1/tmp/webhdfs/webhdfs-test.txt?op=CREATE&user.name=istvan&overwrite=false"
HTTP/1.1 100 Continue

HTTP/1.1 201 Created
Content-Type: application/octet-stream
Location: webhdfs://0.0.0.0:50070/tmp/webhdfs/webhdfs-test.txt
Content-Length: 0
Server: Jetty(6.1.26)
To validate the result of the WebHDFS API we can run the following Hadoop filesystem command:
$ bin/hadoop fs -ls /tmp/webhdfs
Warning: $HADOOP_HOME is deprecated.
Found 1 items
-rw-r--r--   1 istvan supergroup         20 2013-06-02 13:09 /tmp/webhdfs/webhdfs-test.txt
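The same two-step dance can be automated: issue the CREATE request without following the redirect, read the Location header from the 307 response, then PUT the file body to that URL. A sketch of the header-handling part (a pure function, so no cluster is required; the function name is ours):

```python
def redirect_target(status_code, headers):
    """Return the datanode URL from a namenode 307 response, or None otherwise."""
    if status_code != 307:
        return None
    # Header names are case-insensitive per the HTTP spec.
    for name, value in headers.items():
        if name.lower() == "location":
            return value
    return None

# Headers as returned by the namenode in Step 1 above
headers = {
    "Content-Type": "application/octet-stream",
    "Location": "http://istvan-pc:50075/webhdfs/v1/tmp/webhdfs/webhdfs-test.txt"
                "?op=CREATE&user.name=istvan&overwrite=false",
}
print(redirect_target(307, headers))
```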
4./ Open and read a file
In this case we run curl with the -L option to follow the HTTP temporary redirect URL.
$ curl -i -L "http://localhost:50070/webhdfs/v1/tmp/webhdfs/webhdfs-test.txt?op=OPEN&user.name=istvan"
HTTP/1.1 307 TEMPORARY_REDIRECT
Content-Type: application/octet-stream
Expires: Thu, 01-Jan-1970 00:00:00 GMT
Set-Cookie: hadoop.auth="u=istvan&p=istvan&t=simple&e=1370211032526&s=suBorvpvTUs6z/sw5n5PiZWsUnU=";Path=/
Location: http://istvan-pc:50075/webhdfs/v1/tmp/webhdfs/webhdfs-test.txt?op=OPEN&user.name=istvan&offset=0
Content-Length: 0
Server: Jetty(6.1.26)

HTTP/1.1 200 OK
Content-Type: application/octet-stream
Content-Length: 20
Server: Jetty(6.1.26)

Hadoop WebHDFS test
The corresponding Hadoop filesystem command is as follows:
$ bin/hadoop fs -cat /tmp/webhdfs/webhdfs-test.txt
Warning: $HADOOP_HOME is deprecated.
Hadoop WebHDFS test
5./ Rename a directory
$ curl -i -X PUT "http://localhost:50070/webhdfs/v1/tmp/webhdfs?op=RENAME&user.name=istvan&destination=/tmp/webhdfs-new"
HTTP/1.1 200 OK
Content-Type: application/json
Expires: Thu, 01-Jan-1970 00:00:00 GMT
Set-Cookie: hadoop.auth="u=istvan&p=istvan&t=simple&e=1370211103159&s=Gq/EBWZTBaoMk0tkGoodV+gU6jc=";Path=/
Transfer-Encoding: chunked
Server: Jetty(6.1.26)
To validate the result we can run the following Hadoop filesystem command:
$ bin/hadoop fs -ls /tmp
Warning: $HADOOP_HOME is deprecated.
Found 2 items
drwxr-xr-x   - istvan supergroup          0 2013-06-02 12:17 /tmp/hadoop-istvan
drwxr-xr-x   - istvan supergroup          0 2013-06-02 13:09 /tmp/webhdfs-new
6./ Delete a directory
This scenario results in an exception if the directory is not empty, since a non-empty directory cannot be deleted (unless the recursive=true query parameter is set).
$ curl -i -X DELETE "http://localhost:50070/webhdfs/v1/tmp/webhdfs-new?op=DELETE&user.name=istvan"
HTTP/1.1 403 Forbidden
Content-Type: application/json
Expires: Thu, 01-Jan-1970 00:00:00 GMT
Set-Cookie: hadoop.auth="u=istvan&p=istvan&t=simple&e=1370211266383&s=QFIJMWsy61vygFExl91Sgg5ME/Q=";Path=/
Transfer-Encoding: chunked
Server: Jetty(6.1.26)

{"RemoteException":{"exception":"IOException","javaClassName":"java.io.IOException","message":"/tmp/webhdfs-new is non empty"}}
First the file in the directory needs to be deleted; then the empty directory itself can be deleted.
$ curl -i -X DELETE "http://localhost:50070/webhdfs/v1/tmp/webhdfs-new/webhdfs-test.txt?op=DELETE&user.name=istvan"
HTTP/1.1 200 OK
Content-Type: application/json
Expires: Thu, 01-Jan-1970 00:00:00 GMT
Set-Cookie: hadoop.auth="u=istvan&p=istvan&t=simple&e=1370211375617&s=cG6727hbqGkrk/GO4yNRiZw4QxQ=";Path=/
Transfer-Encoding: chunked
Server: Jetty(6.1.26)

$ bin/hadoop fs -ls /tmp/webhdfs-new
Warning: $HADOOP_HOME is deprecated.

$ curl -i -X DELETE "http://localhost:50070/webhdfs/v1/tmp/webhdfs-new?op=DELETE&user.name=istvan"
HTTP/1.1 200 OK
Content-Type: application/json
Expires: Thu, 01-Jan-1970 00:00:00 GMT
Set-Cookie: hadoop.auth="u=istvan&p=istvan&t=simple&e=1370211495893&s=hZcZFDOL0x7exEhn14RlMgF4a/c=";Path=/
Transfer-Encoding: chunked
Server: Jetty(6.1.26)

$ bin/hadoop fs -ls /tmp
Warning: $HADOOP_HOME is deprecated.
Found 1 items
drwxr-xr-x   - istvan supergroup          0 2013-06-02 12:17 /tmp/hadoop-istvan
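Errors like the 403 above arrive as a RemoteException JSON object, so a client can turn them into real exceptions. A minimal sketch (the payload is the one returned above; the helper name is ours):

```python
import json

def raise_for_remote_exception(body):
    """Raise RuntimeError if a WebHDFS response body carries a RemoteException."""
    try:
        payload = json.loads(body)
    except (ValueError, TypeError):
        return  # not JSON, e.g. raw file content returned by OPEN
    if isinstance(payload, dict) and "RemoteException" in payload:
        exc = payload["RemoteException"]
        raise RuntimeError("%s: %s" % (exc.get("exception"), exc.get("message")))

# RemoteException body as returned by the failed DELETE above
body = ('{"RemoteException":{"exception":"IOException",'
        '"javaClassName":"java.io.IOException",'
        '"message":"/tmp/webhdfs-new is non empty"}}')
try:
    raise_for_remote_exception(body)
except RuntimeError as e:
    print(e)
# → IOException: /tmp/webhdfs-new is non empty
```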
Conclusion
WebHDFS provides a simple, standard way to execute Hadoop filesystem operations from an external client that does not necessarily run on the Hadoop cluster itself. The requirement for WebHDFS is that the client needs a direct connection to the namenode and the datanodes via the predefined ports. Hadoop HDFS over HTTP (HttpFS) – which was inspired by HDFS Proxy – addresses this limitation by providing a proxy layer based on a preconfigured Tomcat bundle; it is interoperable with the WebHDFS API but does not require the firewall ports to be open for the client.
Thank you for the great post sir. Good work. I was just wondering if there is any way to download a file through webHDFS without having to open it.
I am not aware of any way how to read a file without opening it. This is the WebHDFS API specification, it says you need to open a file in order to read it: http://hadoop.apache.org/docs/r1.0.4/webhdfs.html#OPEN
I'm getting the error: -bash: curl: command not found
It seems that you need to install curl. For instance, on Ubuntu you could do it using
$ sudo apt-get install curl
On CentOS/Fedora/RedHat the equivalent would be 'sudo yum install curl'.