Friday, November 13, 2015

Build Maven-based RPMs

In a daily DevOps world it's necessary to have an easy-to-use mechanism for revisionable software deployment, especially when continuous integration comes into play and software has to be installed, upgraded and removed in an easy and proven way.
Why not use RPM for that? The great thing is that Maven can do this easily.

What you need:
Eclipse (or IntelliJ or any other editor)
Maven (the command "mvn" has to work on the command line)
Git (the command "git" should work on the command line)
rpm-build (sudo yum install rpm-build)
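
To check quickly that the tooling is in place, the usual version switches are enough (no project needed for this):

mvn -version
git --version
rpmbuild --version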

Building an RPM works on Linux systems like RedHat or CentOS.

Build the project, so that the targets are available locally (use -DskipTests if the tests fail on your PC, e.g. because of a missing MongoDB or Tomcat or ...).
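
For example, with standard Maven options:

mvn clean install -DskipTests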

The following parameters of the rpm-maven-plugin are necessary to get it working properly (see the pom.xml sketch below):
directory = where the code should be placed after the RPM is rolled out
filemode = permissions for the installed code
username = user (owner) of the installed files
groupname = group of the installed files
location = local location of the project content which will be included in the RPM

Add the RPM goal to your pom.xml:

<build>
  ...
  <plugin>
    ...
    <defineStatement>_unpackaged_files_terminate_build 0</defineStatement>
    <script>echo "Deploying test-api webapp"</script>
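
For orientation, a more complete plugin section could look like the following minimal sketch. It is not the original post's full configuration: the plugin version, the target directory /opt/test-api, the tomcat user/group and the WAR location are assumptions, derived from the test-api example used above.

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>rpm-maven-plugin</artifactId>
  <version>2.1.5</version>  <!-- example version, pick a current one -->
  <configuration>
    <group>Applications/Internet</group>
    <defineStatements>
      <defineStatement>_unpackaged_files_terminate_build 0</defineStatement>
    </defineStatements>
    <mappings>
      <mapping>
        <!-- where the code is placed after the RPM is rolled out -->
        <directory>/opt/test-api</directory>
        <filemode>644</filemode>
        <username>tomcat</username>
        <groupname>tomcat</groupname>
        <sources>
          <source>
            <!-- local build artifact that goes into the RPM -->
            <location>target/test-api-0.0.1.war</location>
          </source>
        </sources>
      </mapping>
    </mappings>
    <postinstallScriptlet>
      <script>echo "Deploying test-api webapp"</script>
    </postinstallScriptlet>
  </configuration>
</plugin>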

Build the RPM file
mvn rpm:rpm

Check the contents of the RPM file

rpm -q --filesbypkg -p target/rpm/<build-name>/RPMS/noarch/test-api-0.0.1-1.noarch.rpm 
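
Since the whole point is easy installation, upgrading and removal, the resulting package can then be handled with the standard RPM tooling (paths taken from the example above; the 0.0.2 upgrade version is, of course, hypothetical):

sudo rpm -ivh target/rpm/<build-name>/RPMS/noarch/test-api-0.0.1-1.noarch.rpm
sudo rpm -Uvh target/rpm/<build-name>/RPMS/noarch/test-api-0.0.2-1.noarch.rpm   # upgrade to a newer build
sudo rpm -e test-api   # remove the package again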

Tuesday, April 14, 2015

Hive on Spark at CDH 5.3

Hive on Spark is not (yet) officially supported by Cloudera, so some manual steps are required to get Hive on Spark working within CDH 5.3. Please note that there are four important requirements in addition to the hands-on work:
  1. Spark Gateway nodes need to be Hive Gateway nodes as well
  2. In case the client configurations are redeployed, you need to copy the hive-site.xml again
  3. In case CDH is upgraded (also for minor patches, which are often applied without you noticing), you need to adjust the class paths
  4. Hive libraries need to be present on all executors (CM should take care of this automatically)
Log in to your Spark server(s) and copy the active hive-site.xml to Spark:

cp /etc/hive/conf/hive-site.xml /etc/spark/conf/

Start your spark-shell (replace <CDH_VERSION> with your parcel version, e.g. 5.3.2-1.cdh5.3.2.p0.10) and load the Hive context within the shell:

spark-shell --master yarn-client --driver-class-path "/opt/cloudera/parcels/CDH-<CDH_VERSION>/lib/hive/lib/*" --conf spark.executor.extraClassPath="/opt/cloudera/parcels/CDH-<CDH_VERSION>/lib/hive/lib/*"
scala> val hive = new org.apache.spark.sql.hive.HiveContext(sc)
hive: org.apache.spark.sql.hive.HiveContext = org.apache.spark.sql.hive.HiveContext@1c966488

scala> var s1 = hive.sql("SELECT COUNT(*) FROM sample_07").collect()
s1: Array[org.apache.spark.sql.Row] = Array([823])
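
From here on the result behaves like a normal SchemaRDD, so further processing within the same session works as usual. A small sketch, assuming the Hue sample table sample_07 (as in the query above) is present in Hive:

scala> val top = hive.sql("SELECT code, description, salary FROM sample_07 ORDER BY salary DESC LIMIT 10")
scala> top.collect().foreach(println)
scala> top.registerTempTable("sample_07_top10")   // make the result queryable again within this session
scala> hive.sql("SELECT COUNT(*) FROM sample_07_top10").collect()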

Monday, February 16, 2015

Hadoop and trusted MIT Kerberos v5 with Active Directory

To keep this up to date, here's an example of how to enable an MIT Kerberos v5 <=> Active Directory trust from scratch. It should work out of the box; just replace the realms:

HADOOP1.INTERNAL = local server (KDC)
ALO.LOCAL = local kerberos realm
AD.REMOTE = AD realm

with your servers. The KDC should be inside your hadoop network; the remote AD can be anywhere else.

1. Install the bits

At the KDC server (CentOS, RHEL - other OSes should have nearly the same bits):
yum install krb5-server krb5-libs krb5-workstation -y

At the clients (hadoop nodes):
yum install krb5-libs krb5-workstation -y

Install Java's JCE policy (see Oracle documentation) on all hadoop nodes.

2. Configure your local KDC


/etc/krb5.conf (on the KDC and on all hadoop nodes):

[libdefaults]
 default_realm = ALO.LOCAL
 dns_lookup_realm = false
 dns_lookup_kdc = false
 kdc_timesync = 1
 ccache_type = 4
 forwardable = true
 proxiable = true
 fcc-mit-ticketflags = true
 max_life = 1d
 max_renewable_life = 7d
 renew_lifetime = 7d
 default_tgs_enctypes = aes128-cts arcfour-hmac
 default_tkt_enctypes = aes128-cts arcfour-hmac

[realms]
 ALO.LOCAL = {
  kdc = hadoop1.internal:88
  admin_server = hadoop1.internal:749
  max_life = 1d
  max_renewable_life = 7d
 }
 AD.REMOTE = {
  kdc = ad.remote.internal:88
  admin_server = ad.remote.internal:749
  max_life = 1d
  max_renewable_life = 7d
 }

[domain_realm]
 alo.local = ALO.LOCAL
 .alo.local = ALO.LOCAL
 ad.remote.internal = AD.REMOTE
 .ad.remote.internal = AD.REMOTE

[logging]
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmin.log
 default = FILE:/var/log/krb5lib.log

/var/kerberos/krb5kdc/kdc.conf (on the KDC):

[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88

[realms]
 ALO.LOCAL = {
  #master_key_type = aes256-cts
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
 }

/var/kerberos/krb5kdc/kadm5.acl:

*/admin@ALO.LOCAL *

Create the realm on your local KDC and start the services
kdb5_util create -s -r ALO.LOCAL
service kadmin restart
service krb5kdc restart
chkconfig kadmin on
chkconfig krb5kdc on

Create the admin principal
kadmin.local -q "addprinc root/admin"

3. Create the MIT v5 trust in AD

Using the Windows Power(!sic)Shell:
netdom trust ALO.LOCAL /Domain:AD.REMOTE /add /realm /PasswordT:passw0rd

=> On Windows 2003 this works, too:
ktpass /ALO.LOCAL /DOMAIN:AD.REMOTE /TrustEncryp aes128-cts arcfour-hmac

=> On Windows 2008 you have to add:
ksetup /SetEncTypeAttr ALO.LOCAL aes128-cts arcfour-hmac

4. Create the AD trust in MIT v5
kadmin.local: addprinc krbtgt/ALO.LOCAL@AD.REMOTE
password: passw0rd

5. Configure hadoop's mapping rules
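
The config snippet for this step did not survive; in Hadoop the principal-to-user mapping is done via hadoop.security.auth_to_local in core-site.xml. A minimal sketch, assuming users from AD.REMOTE should simply be mapped to their short names (the rule set itself is an assumption - adjust it to your naming conventions):

<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    RULE:[1:$1@$0](.*@AD\.REMOTE)s/@.*//
    RULE:[2:$1@$0](.*@AD\.REMOTE)s/@.*//
    DEFAULT
  </value>
</property>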



Done. Now you should be able to get a ticket from your AD which lets you work with your hadoop installation:

#> kinit alo.alt@AD.REMOTE
#> klist
Ticket cache: FILE:/tmp/krb5cc_500
Default principal: alo.alt@AD.REMOTE
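
As a quick end-to-end check (assuming HDFS is already kerberized), any Hadoop command should now work with that AD ticket, for example:

#> hdfs dfs -ls /user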

Monday, February 9, 2015

Hadoop based SQL engines

Apache Hadoop comes more and more into the focus of business-critical architectures and applications. Naturally, SQL-based solutions are the first to be considered, but the market is evolving and new tools are coming up, often going unnoticed.

Listed below is an overview of currently available Hadoop-based SQL technologies. The must-haves are:
open source (various contributors), low-latency querying, and support for CRUD (mostly!) and statements like CREATE, INSERT INTO, SELECT * FROM (LIMIT ...), UPDATE ... SET ... WHERE, DELETE and DROP TABLE.
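
For reference, the kind of minimal statement set meant here, in plain SQL (table and column names are purely illustrative):

CREATE TABLE clicks (id INT, url VARCHAR(255));
INSERT INTO clicks VALUES (1, 'http://example.com');
SELECT * FROM clicks LIMIT 10;
UPDATE clicks SET url = 'http://example.org' WHERE id = 1;
DELETE FROM clicks WHERE id = 1;
DROP TABLE clicks;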

Apache Hive (SQL-like, with interactive SQL via Stinger)
Apache Drill (ANSI SQL support)
Apache Spark (Spark SQL, queries only, add data via Hive, RDD or Parquet)
Apache Phoenix (built atop Apache HBase, lacks full transaction support, relational operators and some built-in functions)
Presto from Facebook (can query Hive, Cassandra, relational DBs, etc. Doesn't seem to be designed for low-latency responses across small clusters, or to support UPDATE operations. It is optimized for data warehousing or analytics¹)
VoltDB (ACID compatible, ANSI SQL near 92, low-latency multi query engine)
SQL-Hadoop via MapR community edition (seems to be a packaging of Hive, HP Vertica, SparkSQL, Drill and a native ODBC wrapper)
Apache Kylin from Ebay (provides an SQL interface and multi-dimensional analysis [OLAP], "… offers ANSI SQL on Hadoop and supports most ANSI SQL query functions". It depends on HDFS, MapReduce, Hive and HBase; and seems targeted at very large data-sets though maintains low query latency)
Apache Tajo (ANSI/ISO SQL standard compliance with JDBC driver support [benchmarks against Hive and Impala])
Cascading's Lingual² ("Lingual provides JDBC Drivers, a SQL command shell, and a catalog manager for publishing files [or any resource] as schemas and tables.")

Non Open Source, but also interesting
Splice Machine (Standard ANSI SQL, Transactional Integrity)
Pivotal Hawq (via Pivotal HD, ANSI SQL 92, 99 and OLAP)
Cloudera Impala (SQL-like, ANSI 92 compliant, MPP and low-latency)
Impala does not run MapReduce jobs, but leverages the data cached in HDFS on each node to return results quickly. Thus, the overhead of performing a MapReduce job is cut out and one gains runtime improvements.
Conclusion: Impala does not replace Hive. However, it is good for a different kind of job, such as small ad-hoc queries well suited to business analysts exploring data. Robust jobs such as typical ETL tasks, on the other hand, require Hive, because the failure of a single job can be very costly.

Thanks to Samuel Marks, who originally posted this overview on the Hive user mailing list.

Friday, January 9, 2015

Major compact a row key in HBase

Getting a row key via the hbase shell with a scan:
hbase(main):001:0> scan 'your_table', {LIMIT => 5}

See what the row contains:
hbase(main):002:0> get 'your_table', "\x00\x01"

To start the compaction based on the row key, use these few lines in the hbase shell and replace <row key> and <your_table> with the findings above:
hbase(main):003:0> configuration = org.apache.hadoop.hbase.HBaseConfiguration.create
hbase(main):004:0> table = org.apache.hadoop.hbase.client.HTable.new(configuration, '<your_table>')
hbase(main):005:0> regionLocation = table.getRegionLocation("<row key>")
hbase(main):006:0> regionLocation.getRegionInfo().getRegionName()
hbase(main):007:0> admin = org.apache.hadoop.hbase.client.HBaseAdmin.new(configuration)
hbase(main):008:0> admin.majorCompact(regionLocation.getRegionInfo().getRegionName())
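
Depending on your HBase version, the shell's built-in major_compact command also accepts a region name directly, so as an alternative (region name taken from the getRegionName() output above):

hbase(main):009:0> major_compact '<region_name>'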