Tuesday, May 10, 2016

Solr, NiFi, Twitter and CDH 5.7

Since the most interesting Apache NiFi parts are coming from the ASF [1] or Hortonworks [2], I thought I'd use CDH 5.7 and do the same, just out of curiosity. Here's my 30-minute playground, currently running on Google Compute Engine.

On one of my playground nodes I installed Apache NiFi with:
mkdir /software && cd /software && wget http://mirror.23media.de/apache/nifi/0.6.1/nifi-0.6.1-bin.tar.gz && tar xvfz nifi-0.6.1-bin.tar.gz

Then I set only the nifi.sensitive.props.key property in conf/nifi.properties to an easy-to-remember secret. Next, bash /software/nifi-0.6.1/bin/nifi.sh install installs Apache NiFi as a service. After logging in to Apache NiFi's Web UI, download and add the template [3] to Apache NiFi, drag the template icon onto the canvas, open it and edit the Twitter credentials to fit your developer account.
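Condensed into shell commands, that part looks roughly like this (a sketch assuming the install path from above; the sed line and the secret are placeholders, you can of course edit nifi.properties by hand):

# set the sensitive properties key (placeholder secret, illustrative sed)
sed -i 's|^nifi.sensitive.props.key=.*|nifi.sensitive.props.key=changeMe123|' /software/nifi-0.6.1/conf/nifi.properties
# register NiFi as a service and start it
bash /software/nifi-0.6.1/bin/nifi.sh install
service nifi start
# the Web UI listens on port 8080 by default: http://<nifi-node>:8080/nifi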

To use a schemaless Solr index (or Cloudera Search in CDH) I copied some example files into a local directory:
cp -r /opt/cloudera/parcels/CDH/share/doc/solr-doc-4.10.3+cdh5.7.0+389/example/example-schemaless/solr/collection1/conf/* $HOME/solr_configs/conf/

Then I added the Twitter date format to solrconfig.xml, inside the date parsing processor of the <updateRequestProcessorChain name="add-unknown-fields-to-the-schema"> declaration:
<str>EEE MMM d HH:mm:ss Z yyyy</str>

So it looks like:
<processor class="solr.ParseDateFieldUpdateProcessorFactory">
  <arr name="format">
    <str>EEE MMM d HH:mm:ss Z yyyy</str>
    <!-- the other date formats from the example config stay as they are -->
  </arr>
</processor>


Since the new Twitter API delivers the client source field as HTML, I added an HTML strip processor to the same declaration:

<processor class="solr.HTMLStripFieldUpdateProcessorFactory">
  <str name="fieldName">source_s</str>
</processor>

All configs are available as Gists [4,5].

To get the configs running, initialize Solr:

solrctl --zk ZK_HOST:2181/solr instancedir --create twitter $HOME/solr_configs
solrctl --zk ZK_HOST:2181/solr collection --create twitter -s 2 -c twitter -r 2
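
Once the NiFi flow starts pushing tweets, a quick sanity check against the new collection can be done with curl (hostname and port are placeholders for one of your Solr nodes):

curl 'http://solr-node:8983/solr/twitter/select?q=*:*&rows=1&wt=json&indent=true'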

Setting up Banana for Solr is pretty easy. On one of the Solr hosts run:
cd /software && wget https://github.com/lucidworks/banana/archive/release.zip && unzip release.zip && mv banana-release banana && cp -r banana /opt/cloudera/parcels/CDH/lib/solr/webapps/
Then check whether it's running at http://solr-node:8983/banana/src/index.html. To move fast forward, I have a dashboard available as a Gist [5], too.

Screenshot Dashboard:


Apache NiFi flow:


Conclusion
This demo shows how easy it is today to set up more or less complex data flows with readily available tools within a few hours. Apache NiFi is pretty stable, has a lot of sinks available, and has now been running for two weeks in Google Compute Engine, capturing over 200 million tweets and storing them in Solr as well as in HDFS. It's interesting to play around with the data in real time, interactively, driven by Banana.




Tuesday, January 5, 2016

Apache Tez on CDH 5.4.x

Since Cloudera doesn't support Tez in their distribution right now (but it'll come, I'm pretty confident), we experimented with Apache Tez and CDH 5.4 a bit.
Using Tez with CDH isn't that hard, and it works quite well. Our ETL and Hive jobs finished around 30-50% faster.

Anyway, here's the blueprint. We use CentOS 6.7 with the EPEL repo.

1. Install maven 3.2.5 
wget http://archive.apache.org/dist/maven/maven-3/3.2.5/binaries/apache-maven-3.2.5-bin.tar.gz
tar xvfz apache-maven-3.2.5-bin.tar.gz -C /usr/local/
cd /usr/local/
ln -s apache-maven-3.2.5 maven

=> Compiling Tez against protobuf worked only with Maven 3.2.5 in my case

1.1 Install JDK 8u40
mkdir development && cd development   # that's my dev root

wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u40-b26/jdk-8u40-linux-x64.tar.gz"
tar xvfz jdk-8u40-linux-x64.tar.gz
export JAVA_HOME=/home/alo.alt/development/jdk1.8.0_40
export JRE_HOME=/home/alo.alt/development/jdk1.8.0_40/jre
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin

2. Create a maven profile.d file
vi /etc/profile.d/maven.sh
export M2_HOME=/usr/local/maven
export PATH=${M2_HOME}/bin:${PATH}
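
A quick check that the JDK and Maven are picked up as intended (verification only):

source /etc/profile.d/maven.sh
java -version   # should report java version "1.8.0_40"
mvn -version    # should report Apache Maven 3.2.5 and the JDK above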

3. Get Tez
git clone https://github.com/apache/tez.git
cd tez
git checkout tags/release-0.7.0
git checkout -b tristan

Modify pom.xml to use hadoop 2.6.0-cdh5.4.2 by adding a profile:

<profile>
  <id>cdh5.4.2</id>
  <activation>
    <activeByDefault>false</activeByDefault>
  </activation>
  <properties>
    <hadoop.version>2.6.0-cdh5.4.2</hadoop.version>
  </properties>
  <pluginRepositories>
    <pluginRepository>
      <id>cloudera</id>
      <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
    </pluginRepository>
  </pluginRepositories>
  <repositories>
    <repository>
      <id>cloudera</id>
      <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
    </repository>
  </repositories>
</profile>

And apply the patch from https://gist.github.com/killerwhile/23225004a78949d4c849#file-gistfile1-diff

4. Install protobuf
sudo yum -y install gcc-c++ openssl-devel glibc
wget https://protobuf.googlecode.com/files/protobuf-2.5.0.tar.bz2
tar xfvj protobuf-2.5.0.tar.bz2
cd protobuf-2.5.0/
./configure && make && make check
sudo make install && sudo ldconfig && protoc --version

or use the precompiled RPMs:
ftp://ftp.pbone.net/mirror/ftp5.gwdg.de/pub/opensuse/repositories/home:/kalyaka/CentOS_CentOS-6/x86_64/protobuf-2.5.0-16.1.x86_64.rpm 
ftp://ftp.pbone.net/mirror/ftp5.gwdg.de/pub/opensuse/repositories/home:/kalyaka/CentOS_CentOS-6/x86_64/protobuf-compiler-2.5.0-16.1.x86_64.rpm

5. Build Tez against CDH 5.4.2
mvn -Pcdh5.4.2 clean package -Dtar -DskipTests=true -Dmaven.javadoc.skip=true

6. Install Tez
hadoop fs -mkdir -p /apps/tez && hadoop fs -copyFromLocal tez/tez-dist/target/tez-0.7.0.tar.gz /apps/tez/tez-0.7.0.tar.gz

sudo mkdir -p /apps/tez && sudo tar xvfz tez/tez-dist/target/tez-0.7.0.tar.gz -C /apps/tez/
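
To verify that the tarball landed in both places (just a quick check):

hadoop fs -ls /apps/tez     # tez-0.7.0.tar.gz should show up in HDFS
ls /apps/tez                # the unpacked jars and lib/ should show up locally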

6.1 create a tez-site.xml in /apps/tez/conf/
<configuration>
  <property>
    <name>tez.lib.uris</name>
    <value>${fs.default.name}/apps/tez/tez-0.7.0.tar.gz</value>
  </property>
</configuration>

7. Run Tez with Yarn
export TEZ_HOME=/apps/tez
export TEZ_CONF_DIR=${TEZ_HOME}/conf
export HADOOP_CLASSPATH="${HADOOP_CLASSPATH}:${TEZ_CONF_DIR}:$(find ${TEZ_HOME} -name "*.jar" | paste -sd ":")"

hive> set hive.execution.engine=tez;
hive> SELECT s07.description, s07.salary, s08.salary, s08.salary - s07.salary FROM sample_07 s07 JOIN sample_08 s08 ON ( s07.code = s08.code) WHERE s07.salary < s08.salary ORDER BY s08.salary-s07.salary DESC LIMIT 1000;

beeline --hiveconf tez.task.launch.env="LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$YOUR_HADOOP_COMMON_HOME/lib/native" \
--hiveconf tez.am.launch.env="LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$YOUR_HADOOP_COMMON_HOME/lib/native"
Check that you have the lib*.so files available in the native folder (or point to the folder which contains the .so files).
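
A quick way to check (the parcel path below is an assumption based on the default CDH parcel layout):

ls /opt/cloudera/parcels/CDH/lib/hadoop/lib/native/ | grep '\.so'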

Sources:
https://gist.github.com/killerwhile/23225004a78949d4c849#file-gistfile1-diff
http://tez.apache.org/install.html

Friday, November 13, 2015

Build Maven-based RPMs

In a daily DevOps world it's necessary to have an easy-to-use mechanism for revisionable software deployment, especially when continuous integration comes into play, in terms of installing, upgrading and removing software in an easy and proven way.
Why not use RPM for that? The great thing is, Maven can do that easily.

Prerequisites:
Eclipse (or IntelliJ or any other editor)
Maven (the command "mvn" has to work on the command line)
Git (the command "git" should work on the command line)
rpm-build (sudo yum install rpm-build)

Building an RPM works on Linux systems like RedHat or CentOS.

Guide:
Build the project so that the targets are available locally (use -DskipTests if the tests fail on your PC, e.g. because of a missing MongoDB or Tomcat or ...).
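
For example (standard Maven flags, adjust to your project):

mvn clean package -DskipTests=true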

The following params are necessary to get it working properly:
directory = where the files should be placed after the RPM is rolled out
filemode = permissions for the installed files
username = owner (user) of the installed files
groupname = owner (group) of the installed files
location = local location of the artifact which will be included in the RPM

Add the RPM goal to your pom.xml:

<project>
  ...
  <build>
    ...
    <plugins>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>rpm-maven-plugin</artifactId>
        <executions>
          <execution>
            <id>generate-rpm</id>
            <goals><goal>rpm</goal></goals>
          </execution>
        </executions>
        <configuration>
          <license>Apache</license>
          <distribution>Development</distribution>
          <group>Applications/Internet</group>
          <packager>ALO</packager>
          <defineStatements>
            <defineStatement>_unpackaged_files_terminate_build 0</defineStatement>
          </defineStatements>
          <mappings>
            <mapping>
              <directory>/var/lib/tomcat/webapps</directory>
              <filemode>600</filemode>
              <username>tomcat</username>
              <groupname>tomcat</groupname>
              <directoryIncluded>false</directoryIncluded>
              <sources>
                <source>
                  <location>target/test.war</location>
                </source>
              </sources>
            </mapping>
          </mappings>
          <preinstallScriptlet>
            <script>echo "Deploying test-api webapp"</script>
          </preinstallScriptlet>
        </configuration>
      </plugin>
    </plugins>
    ...
  </build>
  ...
</project>

Build the RPM file
mvn rpm:rpm

Check the contents of the RPM file

rpm -q --filesbypkg -p target/rpm/<build-name>/RPMS/noarch/test-api-0.0.1-1.noarch.rpm 
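
Rolling the package out, upgrading and removing it then works with the standard RPM commands (package and version names taken from the example above; the 0.0.2 upgrade is hypothetical):

sudo rpm -ivh target/rpm/<build-name>/RPMS/noarch/test-api-0.0.1-1.noarch.rpm   # install
sudo rpm -Uvh target/rpm/<build-name>/RPMS/noarch/test-api-0.0.2-1.noarch.rpm   # upgrade to a newer build
sudo rpm -e test-api                                                            # remove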

Tuesday, April 14, 2015

Hive on Spark at CDH 5.3

Since Hive on Spark is not (yet) officially supported by Cloudera, some manual steps are required to get Hive on Spark working within CDH 5.3. Please note that there are four important requirements in addition to the hands-on work:
  1. Spark Gateway nodes need to be Hive Gateway nodes as well
  2. In case the client configurations are redeployed, you need to copy the hive-site.xml again
  3. In case CDH is upgraded (also for minor patches, which often happen without you noticing), you need to adjust the classpaths
  4. Hive libraries need to be present on all executors (CM should take care of this automatically)
Log in to your Spark server(s) and copy the running hive-site.xml over to Spark:

cp /etc/hive/conf/hive-site.xml /etc/spark/conf/

Start your Spark shell (replace <CDH_VERSION> with your parcel version, e.g. 5.3.2-1.cdh5.3.2.p0.10) and load the Hive context within spark-shell:

spark-shell --master yarn-client --driver-class-path "/opt/cloudera/parcels/CDH-<CDH_VERSION>/lib/hive/lib/*" --conf spark.executor.extraClassPath="/opt/cloudera/parcels/CDH-<CDH_VERSION>/lib/hive/lib/*"
..
scala> val hive = new org.apache.spark.sql.hive.HiveContext(sc)
hive: org.apache.spark.sql.hive.HiveContext = org.apache.spark.sql.hive.HiveContext@1c966488

scala> var s1 = hive.sql("SELECT COUNT(*) FROM sample_07").collect()
s1: Array[org.apache.spark.sql.Row] = Array([823])

Monday, February 16, 2015

Hadoop and trusted MIT Kerberos v5 with Active Directory

Here is a current example of how to enable an MIT Kerberos v5 <=> Active Directory trust from scratch. It should work out of the box; just replace the realms:

HADOOP1.INTERNAL = local server (KDC)
ALO.LOCAL = local kerberos realm
AD.REMOTE = AD realm

with your own. The KDC should be inside your Hadoop network; the remote AD can be somewhere else.

1. Install the bits

On the KDC server (CentOS, RHEL - other OSes should have nearly the same bits):
yum install krb5-server krb5-libs krb5-workstation -y

At the clients (hadoop nodes):
yum install krb5-libs krb5-workstation -y

Install Java's JCE policy (see Oracle documentation) on all hadoop nodes.
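
The policy files are just two jars dropped into the JRE's security directory, roughly like this (the zip name and layout depend on your JDK version; the target path is the usual one for Oracle JDKs):

# unzip the JCE policy download (placeholder name; depends on the JDK version)
unzip jce_policy.zip
# copy the two jars over the ones shipped with the JDK, on every Hadoop node
cp local_policy.jar US_export_policy.jar $JAVA_HOME/jre/lib/security/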

2. Configure your local KDC

/etc/krb5.conf

[libdefaults]
default_realm = ALO.LOCAL
dns_lookup_realm = false
dns_lookup_kdc = false
kdc_timesync = 1
ccache_type = 4
forwardable = true
proxiable = true
fcc-mit-ticketflags = true
max_life = 1d
max_renewable_life = 7d
renew_lifetime = 7d
default_tgs_enctypes = aes128-cts arcfour-hmac
default_tkt_enctypes = aes128-cts arcfour-hmac

[realms]
ALO.LOCAL = {
kdc = hadoop1.internal:88
admin_server = hadoop1.internal:749
max_life = 1d
max_renewable_life = 7d
}
AD.REMOTE = {
kdc = ad.remote.internal:88
admin_server = ad.remote.internal:749
max_life = 1d
max_renewable_life = 7d
}

[domain_realm]
alo.local = ALO.LOCAL
.alo.local = ALO.LOCAL

ad.remote.internal = AD.REMOTE
.ad.remote.internal = AD.REMOTE

[logging]
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmin.log
default = FILE:/var/log/krb5lib.log


/var/kerberos/krb5kdc/kdc.conf

[kdcdefaults]
kdc_ports = 88
kdc_tcp_ports = 88

[realms]
ALO.LOCAL = {
#master_key_type = aes256-cts
acl_file = /var/kerberos/krb5kdc/kadm5.acl
dict_file = /usr/share/dict/words
admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
}
 
/var/kerberos/krb5kdc/kadm5.acl
*/admin@ALO.LOCAL *

Create the realm on your local KDC and start the services
kdb5_util create -s -r ALO.LOCAL
service kadmin restart
service krb5kdc restart
chkconfig kadmin on
chkconfig krb5kdc on

Create the admin principal
kadmin.local -q "addprinc root/admin"
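
A quick check that the KDC and the admin principal work (verification only):

kadmin -p root/admin -q "listprincs"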

3. Create the MIT Kerberos v5 trust in AD

Using the Windows Power(!sic)Shell:
ksetup /addkdc ALO.LOCAL HADOOP1.INTERNAL
netdom trust ALO.LOCAL /Domain:AD.REMOTE /add /realm /passwordt:passw0rd
ksetup /SetEncTypeAttr ALO.LOCAL RC4-HMAC-MD5 AES128-CTS-HMAC-SHA1-96 AES256-CTS-HMAC-SHA1-96 DES-CBC-CRC DES-CBC-MD5

=> On Windows 2003 this works, too:
ktpass /ALO.LOCAL /DOMAIN:AD.REMOTE /TrustEncryp aes128-cts arcfour-hmac

=> On Windows 2008 you have to add:
ksetup /SetEncTypeAttr ALO.LOCAL aes128-cts arcfour-hmac

4. Create the AD trust in MiTv5
kadmin.local: addprinc krbtgt/ALO.LOCAL@AD.REMOTE
password: passw0rd

5. Configure hadoop's mapping rules

core-site.xml

<property>
<name>hadoop.security.auth_to_local</name>
<value>RULE:[1:$1@$0](.*@\QAD.REMOTE\E$)s/@\QAD.REMOTE\E$//
RULE:[2:$1@$0](.*@\QAD.REMOTE\E$)s/@\QAD.REMOTE\E$//
DEFAULT</value>
</property>
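
The mapping rules can be tested without touching any service, using the resolver class that ships with Hadoop 2.x:

hadoop org.apache.hadoop.security.HadoopKerberosName alo.alt@AD.REMOTE
# should print something like: Name: alo.alt@AD.REMOTE to alo.alt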

Done. Now you should be able to get a ticket from your AD which lets you work with your Hadoop installation:

#> kinit alo.alt@AD.REMOTE
password:
#> klist
Ticket cache: FILE:/tmp/krb5cc_500
Default principal: alo.alt@AD.REMOTE
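
With that ticket a simple HDFS call should work as well (just a smoke test, the path is an example):

hdfs dfs -ls /user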