Category : Hadoop | Sub Category : Hadoop Concepts | By Prasad Bonam Last updated: 2023-07-12 05:17:04
Hadoop Distributed File System (HDFS) example:
Here is an example of using the Hadoop Distributed File System (HDFS) from Java.
To interact with HDFS programmatically, you need to include the Hadoop dependencies in your Java project. Here is an example using Maven:
<dependencies>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>3.3.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>3.3.0</version>
    </dependency>
</dependencies>
Once you have the dependencies set up, you can use the FileSystem class to interact with HDFS. Here is an example that demonstrates how to create a file in HDFS:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.io.BufferedWriter;
import java.io.OutputStream;
import java.io.OutputStreamWriter;

public class HDFSExample {
    public static void main(String[] args) {
        try {
            // Create a configuration object
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://localhost:9000"); // Set the HDFS URI

            // Create a FileSystem object
            FileSystem fs = FileSystem.get(conf);

            // Specify the path of the file to be created in HDFS
            Path filePath = new Path("/user/myuser/example.txt");

            // Create the file and write to it; try-with-resources closes the
            // writer and the underlying output stream automatically
            try (OutputStream outputStream = fs.create(filePath);
                 BufferedWriter writer = new BufferedWriter(new OutputStreamWriter(outputStream))) {
                // Write some data to the file
                writer.write("Hello, HDFS!");
                writer.newLine();
            }

            System.out.println("File created successfully in HDFS.");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
In this example, we first create a Configuration object and set the HDFS URI via the fs.defaultFS property. We then obtain a FileSystem object with FileSystem.get(conf), specify the path of the file to be created in HDFS, and create it with fs.create(filePath). Finally, we write some data to the file, close the writer and output stream, and print a success message.
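To verify the write, the same FileSystem API can be used to read the file back. The sketch below assumes the same cluster URI (hdfs://localhost:9000) and file path as the example above:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class HDFSReadExample {
    public static void main(String[] args) {
        try {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://localhost:9000"); // same HDFS URI as above

            FileSystem fs = FileSystem.get(conf);
            Path filePath = new Path("/user/myuser/example.txt");

            // fs.open() returns an input stream positioned at the start of the file
            try (BufferedReader reader = new BufferedReader(new InputStreamReader(fs.open(filePath)))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    System.out.println(line);
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```

Running this after the write example should print the "Hello, HDFS!" line written earlier, provided the cluster is reachable at the configured URI.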
Make sure to adjust the HDFS URI and the file path according to your Hadoop cluster configuration.
This is a simple example to demonstrate writing a file to HDFS programmatically. HDFS provides a rich set of operations for file manipulation, such as reading, appending, deleting, and listing files. You can explore the Hadoop documentation and APIs for more advanced usage and interactions with HDFS.
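As a brief sketch of two of those operations, listing a directory and deleting a file look like this; the directory and file paths below are illustrative and assume the same cluster configuration as the earlier examples:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HDFSFileOps {
    public static void main(String[] args) {
        try {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://localhost:9000"); // adjust to your cluster

            FileSystem fs = FileSystem.get(conf);

            // List the contents of a directory
            for (FileStatus status : fs.listStatus(new Path("/user/myuser"))) {
                System.out.println(status.getPath() + "  " + status.getLen() + " bytes");
            }

            // Delete a file; the second argument enables recursive
            // deletion and matters only for directories
            boolean deleted = fs.delete(new Path("/user/myuser/example.txt"), false);
            System.out.println("Deleted: " + deleted);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```

The delete call returns true if the path existed and was removed, which is a convenient way to distinguish "deleted" from "was not there" without an extra existence check.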