Scala with HBase - Read, Insert and Update operations

Category : Scala | Sub Category : Scala Programs | By Prasad Bonam Last updated: 2020-10-10 05:07:21 Viewed : 516



Scala with HBase

A distributed Apache HBase installation depends on a running ZooKeeper cluster; all participating nodes and clients must be able to reach the ZooKeeper ensemble. By default, HBase manages a ZooKeeper "cluster" for you: it starts and stops the ensemble as part of the HBase start/stop process. You can also manage the ZooKeeper ensemble independently of HBase and simply point HBase at the cluster it should use. To toggle HBase's management of ZooKeeper, set the HBASE_MANAGES_ZK variable in conf/hbase-env.sh. This variable, which defaults to true, tells HBase whether to start and stop the ZooKeeper ensemble servers as part of its own start/stop.
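As a minimal sketch, the conf/hbase-env.sh setting described above looks like this (the value shown is the default):

```shell
# conf/hbase-env.sh
# true (the default): HBase starts/stops the ZooKeeper ensemble itself.
# false: HBase expects an externally managed ensemble and only connects to it.
export HBASE_MANAGES_ZK=true
```

With HBASE_MANAGES_ZK=false, the ensemble's location is supplied separately via hbase.zookeeper.quorum, exactly as the example program below does from client code.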

Employee table as seen in the HBase browser:


/**
 *  Table Name: Employee
 *  Row key, cf:empId, cf:empName, cf:location, cf:salary
 *  001:Ram,001,Ram,India,50000
 *  002:Rajesh,002,Rajesh,USA,70000
 *  003:xingpang,003,xingpang,China,50000
 *  004:Yishun,004,Yishun,Singapore,60000
 *  005:smita,005,smita,India,70000
 *  006:swetha,006,swetha,India,90000
 *  006:Archana,006,Archana,India,90000
 *  007:Mukhtar,07,Mukhtar,India,70000
 */
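Note that the row keys in the listing follow an empId:empName pattern (e.g. "001:Ram"). A small sketch, using a hypothetical Employee case class that is not part of the example program, showing how such composite keys could be derived:

```scala
// Hypothetical model of one row of the Employee table above (not in the original code).
case class Employee(empId: String, empName: String, location: String, salary: String) {
  // Row keys in the listing are formed as empId:empName, e.g. "001:Ram"
  def rowKey: String = s"$empId:$empName"
}

object EmployeeKeys {
  def main(args: Array[String]): Unit = {
    val ram = Employee("001", "Ram", "India", "50000")
    println(ram.rowKey) // 001:Ram
  }
}
```

Composite row keys like this are a common HBase pattern, since rows are stored sorted by key and can only be looked up efficiently by key or key prefix.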

Example:

The following example illustrates read, insert, and update operations on HBase from Scala.

Save the file as ScalaHBaseExample.scala:

package runnerdev

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hbase.client._
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.hbase.{ HBaseConfiguration, TableName }
import scala.collection.JavaConverters._

object ScalaHBaseExample {

  def main(args: Array[String]): Unit = {
    val ZOOKEEPER_HOST = "10.16.19.112"
    val ZOOKEEPER_PORT = "2170"

    val conf: Configuration = HBaseConfiguration.create()
    conf.set("hbase.zookeeper.quorum", ZOOKEEPER_HOST)
    conf.set("hbase.zookeeper.clientPort", ZOOKEEPER_PORT)

    val connection = ConnectionFactory.createConnection(conf)
    val table = connection.getTable(TableName.valueOf("Employee"))
    try {
      // Put example: insert a record. In HBase an update is the same
      // operation -- a Put to an existing row key simply writes a new
      // version of the cell.
      val put = new Put(Bytes.toBytes("001:Manoj"))
      put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("empId"), Bytes.toBytes("001"))
      put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("empName"), Bytes.toBytes("Manoj"))
      put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("location"), Bytes.toBytes("Mumbai"))
      table.put(put)

      // Get example: read a single row by its row key
      println("Get Example:")
      val get = new Get(Bytes.toBytes("001:Manoj"))
      val result = table.get(get)
      println(result)

      // Scan example: read all rows of the table
      println("Scan Example:")
      val scanner = table.getScanner(new Scan())
      scanner.asScala.foreach(println)
      scanner.close()
    } catch {
      case e: Exception =>
        println("Exception==> " + e.getMessage)
        e.printStackTrace()
    } finally {
      table.close()
      connection.close()
    }
  }
}

Compile and run the example as follows (the HBase client jars and their Hadoop dependencies must be on the classpath):

C:\>scalac ScalaHBaseExample.scala

C:\>scala ScalaHBaseExample

Output:  

Get Example:

keyvalues={001:Manoj/cf:empId/1602239288871/Put/vlen=3/seqid=0, 001:Manoj/cf:empName/1602239288871/Put/vlen=5/seqid=0, 001:Manoj/cf:location/1602239288871/Put/vlen=6/seqid=0}

 

Scan Example:

keyvalues={001:Manoj/cf:empId/1602239288871/Put/vlen=3/seqid=0, 001:Manoj/cf:empName/1602239288871/Put/vlen=5/seqid=0, 001:Manoj/cf:location/1602239288871/Put/vlen=6/seqid=0}

keyvalues={001:Ram/cf: salary/1600677806143/Put/vlen=5/seqid=0, 001:Ram/cf:empId/1600677806143/Put/vlen=3/seqid=0, 001:Ram/cf:empName/1600677806143/Put/vlen=3/seqid=0, 001:Ram/cf:location/1600677806143/Put/vlen=5/seqid=0}

keyvalues={002:Rajesh/cf: salary/1600677806143/Put/vlen=5/seqid=0, 002:Rajesh/cf:empId/1600677806143/Put/vlen=3/seqid=0, 002:Rajesh/cf:empName/1600677806143/Put/vlen=6/seqid=0, 002:Rajesh/cf:location/1600677806143/Put/vlen=3/seqid=0}

keyvalues={003:xingpang/cf: salary/1600677806143/Put/vlen=5/seqid=0, 003:xingpang/cf:empId/1600677806143/Put/vlen=3/seqid=0, 003:xingpang/cf:empName/1600677806143/Put/vlen=8/seqid=0, 003:xingpang/cf:location/1600677806143/Put/vlen=5/seqid=0}

keyvalues={004:Yishun/cf: salary/1600677806143/Put/vlen=5/seqid=0, 004:Yishun/cf:empId/1600677806143/Put/vlen=3/seqid=0, 004:Yishun/cf:empName/1600677806143/Put/vlen=6/seqid=0, 004:Yishun/cf:location/1600677806143/Put/vlen=9/seqid=0}

keyvalues={005:smita/cf: salary/1600677806143/Put/vlen=5/seqid=0, 005:smita/cf:empId/1600677806143/Put/vlen=3/seqid=0, 005:smita/cf:empName/1600677806143/Put/vlen=5/seqid=0, 005:smita/cf:location/1600677806143/Put/vlen=5/seqid=0}

keyvalues={006:Archana/cf: salary/1600677806143/Put/vlen=5/seqid=0, 006:Archana/cf:empId/1600677806143/Put/vlen=3/seqid=0, 006:Archana/cf:empName/1600677806143/Put/vlen=7/seqid=0, 006:Archana/cf:location/1600677806143/Put/vlen=5/seqid=0}

keyvalues={006:swetha/cf: salary/1600677806143/Put/vlen=5/seqid=0, 006:swetha/cf:empId/1600677806143/Put/vlen=3/seqid=0, 006:swetha/cf:empName/1600677806143/Put/vlen=6/seqid=0, 006:swetha/cf:location/1600677806143/Put/vlen=5/seqid=0}

keyvalues={007:Mukhtar/cf: salary/1600677806143/Put/vlen=6/seqid=0, 007:Mukhtar/cf:empId/1600677806143/Put/vlen=2/seqid=0, 007:Mukhtar/cf:empName/1600677806143/Put/vlen=7/seqid=0, 007:Mukhtar/cf:location/1600677806143/Put/vlen=5/seqid=0}
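In the output above, each cell is printed as rowkey/column/timestamp/type/vlen/seqid. The vlen field is simply the byte length of the stored value, because HBase persists every cell value as a raw byte array (which is why the example converts everything with Bytes.toBytes). A small sketch, runnable without a cluster, confirming the lengths shown for row 001:Ram:

```scala
object VlenCheck {
  def main(args: Array[String]): Unit = {
    // HBase stores each cell value as a raw byte array; the vlen shown in a
    // printed Result is that array's length.
    def vlen(value: String): Int = value.getBytes("UTF-8").length

    println(vlen("Ram"))    // empName for row 001:Ram -> vlen=3
    println(vlen("India"))  // location               -> vlen=5
    println(vlen("50000"))  // salary                 -> vlen=5
  }
}
```

The same reasoning explains vlen=2 for 007:Mukhtar's empId: the stored value is the two-character string "07" from the table listing.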


Search
Related Articles

Leave a Comment: