These projects (enumerated below) allow HDFS to be mounted (on most flavors of Unix) as a standard file system using the mount command. Once mounted, the user can operate on an instance of HDFS using standard Unix utilities such as 'ls', 'cd', 'cp', 'mkdir', 'find', and 'grep', or use standard POSIX calls like open, write, read, and close from C, C++, Python, Ruby, Perl, Java, bash, etc.

All of them, except HDFS NFS Proxy, are based on FUSE, the Filesystem in Userspace project. The WebDAV-based one can be used with other WebDAV tools, but requires FUSE to actually mount.

contrib/fuse-dfs is built on FUSE, some C glue, libhdfs, and the hadoop-dev.jar. fuse-j-hdfs is built on FUSE, FUSE for Java, and the hadoop-dev.jar.

Note that a great thing about FUSE is that you can export a FUSE mount using NFS, so you can use fuse-dfs to mount HDFS on one machine and then export that mount over NFS. The bad news is that FUSE relies on the kernel's inode cache, since FUSE is path-based and not inode-based. If an inode is flushed from the kernel cache on the server, NFS clients get hosed: they try doing a read or an open with an inode the server no longer has a mapping for, and NFS chokes. So, while the NFS route gets you started quickly, for production it is more robust to automount FUSE on all the machines from which you want to access HDFS. Also, since NFS often reorders writes, you can see write failures when you use the NFS -> FUSE -> HDFS route.
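The mount-then-use workflow described above can be sketched as a short shell session. This is a hedged illustration, not a verbatim recipe: the mount point (`/mnt/hdfs`), the NameNode address (`namenode.example.com:8020`), and the `hadoop-fuse-dfs` helper name are assumptions for the sake of the example; consult your Hadoop distribution for the exact binary and URI scheme it ships.

```shell
# Assumed setup: a fuse-dfs build is installed and a NameNode is reachable
# at namenode.example.com:8020. All names here are placeholders.

# Create a mount point and mount HDFS through fuse-dfs
mkdir -p /mnt/hdfs
hadoop-fuse-dfs dfs://namenode.example.com:8020 /mnt/hdfs

# Once mounted, ordinary Unix tools operate directly on HDFS paths
ls /mnt/hdfs
mkdir /mnt/hdfs/scratch
cp /etc/hosts /mnt/hdfs/scratch/
grep localhost /mnt/hdfs/scratch/hosts

# Unmount when finished
fusermount -u /mnt/hdfs
```

If you then want the NFS re-export route described above, you would export `/mnt/hdfs` from this machine's NFS server, keeping in mind the inode-cache and write-reordering caveats.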