After 3 days of hectic effort installing, fixing errors, and changing configuration, I managed to run Hadoop 3.x on Windows 10 with Java 12 installed. A series of errors delayed a successful run; I finally worked through each one line by line in the console and resolved them. Here I would like to discuss those errors in detail rather than the installation steps, which are readily available on the internet.
Follow the link below to install and configure Hadoop 3.x on Windows: http://toppertips.com/hadoop-3-0-installation-on-windows/ . Just follow all the configuration steps in it as prescribed.
The main rule to follow: whenever you hit an issue where Hadoop is not running properly, pay close attention to every console window you have (2 console windows when you run start-dfs.cmd, and 2 more when you run start-yarn.cmd).
ERROR#1] When the console shows that the NativeIO libraries are not loaded, it means hadoop.dll and/or winutils.exe, which Hadoop 3.x onwards needs on the Windows platform, is missing from Hadoop's BIN directory. You can download the entire BIN directory needed for Hadoop 3.1.x from the GitHub link below: https://github.com/s911415/apache-hadoop-3.1.0-winutils . Replace it in your Hadoop directory and execute all commands in the following order:
```
stop-dfs.cmd
stop-yarn.cmd
hdfs namenode -format
hdfs datanode -format
start-dfs.cmd
start-yarn.cmd
```
ERROR#2] If an error like “ERROR namenode.NameNode: Failed to start namenode. java.lang.IllegalArgumentException: No class configured for C” shows in the console, go into the hadoop-3.1.2\etc\hadoop path, open hdfs-site.xml, and make sure the path is given without the root drive (example below).
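The original example did not survive in this copy of the post; below is a minimal sketch of what such hdfs-site.xml entries look like. The directory names are assumptions based on the C:/hadoop-3.1.2/data/datanode path mentioned later in this article; the key point is that the values start with a forward slash and carry no drive letter:

```xml
<!-- hdfs-site.xml: paths without the root drive, using forward slashes -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/hadoop-3.1.2/data/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/hadoop-3.1.2/data/datanode</value>
</property>
```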
Make sure a forward slash is what separates the path components. If you use a backslash instead, the console will show a Path not found exception.
ERROR#3] After the Hadoop portal is up and running (http://localhost:9870/ ), when you go to the “Browse file system” menu, you may stumble upon the error message: ” Failed to retrieve data from /webhdfs/v1/?op=LISTSTATUS: Server Error “
This is because the javax.activation component was removed from Java 11 onward, and you are using some version >= Java 11. (Look closely at the console and you can see a similar error message there as well.) Go to https://jar-download.com/?search_box=javax.activation and download the activation jar file. Paste it into the Hadoop directory at “<hadoop root directory>\share\hadoop\common”. Close all Hadoop consoles and execute the commands in the order I quoted above. The issue is resolved!
ERROR#4] After fixing the above error, I stumbled on a permission error while trying to upload or create a directory in the Hadoop file system: ” Permission denied: user=dr.who, access=WRITE, inode=”/”:Binukumar.S:supergroup:drwxr-xr-x ”. This is simply because the user you are logged in as has no write permission by default. For quick results, go into hdfs-site.xml (<hadoop root>\etc\hadoop) and add a property tag as below:
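The property tag itself was lost in this copy of the post; a standard setting that matches the described effect (switching off HDFS permission checking entirely) is `dfs.permissions.enabled`, sketched here:

```xml
<!-- hdfs-site.xml: disables HDFS permission checks; for local testing only -->
<property>
  <name>dfs.permissions.enabled</name>
  <value>false</value>
</property>
```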
That will bypass the permission checks for testing/staging. Not recommended for production, though.
ERROR#5] Even though folders can now be created in the file system, when you start uploading files we are again disappointed, with a message: ” Couldn’t find datanode to write file. Forbidden “. This is because the datanode has not been created properly, or not created at all. The reason it was not created is that you may have formatted the namenode in between but forgot to reset the datanode, so the clusterIDs got mismatched.
Again, if you look closely at the console (the one you got while running start-dfs.cmd), you can see the related message: “Failed to add storage directory [DISK]file:/C:/hadoop-3.1.2/data/datanode java.io.IOException: Incompatible clusterIDs“. Here is the solution: go to the datanode folder you created at installation time and manually delete the folders and files in it. Run all the commands again and you are good to go. Now you can upload files to HDFS successfully.
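If you want to confirm the mismatch before deleting anything, both the namenode and datanode record their clusterID in a plain key=value VERSION file under their `current` subdirectory. A minimal sketch in Python for reading it (the file paths in the comments are assumptions based on the data directories used in this article):

```python
# Sketch: read the clusterID recorded in a Hadoop VERSION file.
# The VERSION file is a plain key=value properties file; if the namenode's
# clusterID differs from the datanode's, you have hit the
# "Incompatible clusterIDs" error described above.

def read_cluster_id(version_file):
    """Return the clusterID value from a VERSION properties file, or None."""
    with open(version_file) as f:
        for line in f:
            if line.startswith("clusterID="):
                return line.strip().split("=", 1)[1]
    return None

# Hypothetical paths, assuming the data directories from this install:
# nn = read_cluster_id("C:/hadoop-3.1.2/data/namenode/current/VERSION")
# dn = read_cluster_id("C:/hadoop-3.1.2/data/datanode/current/VERSION")
# if nn != dn: the datanode directory needs to be wiped, as described above
```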