Wednesday, August 14, 2013

Don't call filesystem.close in Hadoop

Recently the MapReduce jobs in my project suddenly started failing. It turned out that a colleague had added a line of code that closes the filesystem:

val path = new Path("/user/bewang/data")
val fs = path.getFileSystem(conf)
fs.mkdirs(path)
fs.close()

Nothing seems wrong at first glance: we were taught to clean up the mess we create, so closing the filesystem after using it looks like good practice. Unfortunately, it doesn't work that way in the Hadoop world. The Hadoop client manages all connections to the cluster itself: FileSystem.get (which Path.getFileSystem calls internally) returns a shared, cached instance. If you call fs.close(), you close that shared instance for every other user in the same JVM, the connections to the cluster are broken, and you cannot do anything after that. Don't call close; let Hadoop handle it. (If you genuinely need an instance you can close, FileSystem.newInstance returns a private, uncached one.)
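To see why closing a shared handle is so destructive, here is a toy simulation of the caching behavior described above. This is not Hadoop code: the names ToyFileSystem and getCached are invented for illustration, standing in for FileSystem and FileSystem.get. The point is only that a per-URI cache hands every caller the same object, so one caller's close() breaks all the others.

```scala
import scala.collection.mutable

// Invented stand-in for org.apache.hadoop.fs.FileSystem.
class ToyFileSystem(val uri: String) {
  private var open = true
  def mkdirs(path: String): Boolean = {
    require(open, s"Filesystem for $uri is closed")
    true
  }
  def close(): Unit = { open = false }
  def isOpen: Boolean = open
}

// Invented stand-in for the client-side cache behind FileSystem.get:
// the same URI always yields the same shared instance.
object ToyCache {
  private val cache = mutable.Map.empty[String, ToyFileSystem]
  def getCached(uri: String): ToyFileSystem =
    cache.getOrElseUpdate(uri, new ToyFileSystem(uri))
}

val fs1 = ToyCache.getCached("hdfs://cluster")
val fs2 = ToyCache.getCached("hdfs://cluster")
assert(fs1 eq fs2)   // both callers share one cached instance
fs1.close()          // one caller "cleans up"...
assert(!fs2.isOpen)  // ...and the other caller's handle is now dead
```

With real Hadoop the failure shows up the same way: some unrelated piece of code calls close() on the cached filesystem, and later operations elsewhere in the JVM start throwing "Filesystem closed" errors.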