Thursday, November 12, 2015
How to import/export workbooks in Datameer
To export a workbook:
curl -u <user>:<pass> -X GET http://<datameer_server>:8080/rest/workbook/<workbookid> > <file_name>.json
To import workbook:
curl -u <user>:<pass> -X POST -d @<file_name>.json 'http://<datameer_server>:8080/rest/workbook/'
- It is better to remove the "uuid" line from the exported JSON so the imported workbook gets a new uuid.
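Dropping the "uuid" line can be done with a one-liner before the import. A minimal sketch; the file name and JSON contents below are made up for illustration:

```shell
# Create a stand-in for an exported workbook (illustrative content only).
cat > workbook.json <<'EOF'
{
  "uuid": "4f2c1a9e-0000-0000-0000-000000000000",
  "name": "my_workbook"
}
EOF

# Drop the "uuid" line so the import assigns a fresh uuid.
grep -v '"uuid"' workbook.json > workbook_new.json
cat workbook_new.json
```

The resulting workbook_new.json is what you would pass to the import call with `-d @workbook_new.json`.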
Friday, August 7, 2015
Tuesday, August 4, 2015
How to move data out of Informatica File Archive to Hadoop using sqoop
Although it seems hard and complicated at first, once you discover the correct connection string and driver name, getting data out of Informatica File Archive with Sqoop is pretty simple and straightforward.
Unfortunately, due to the driver's structure, there is no way to specify the schema name other than pulling the data out with the --query option:
sqoop import --driver com.informatica.fas.jdbc.Driver --connect jdbc:infafas://<server_name>:<port: Default=8500>/<database_name> --username xxxx --password xxxxx -m 1 --delete-target-dir --target-dir <target_dir> --query "SELECT * FROM <schema_name>.<table_name> where \$CONDITIONS" --fields-terminated-by \| --lines-terminated-by \\n --hive-drop-import-delims
You have to keep "where \$CONDITIONS" in the query even if you do not specify any conditions.
You also need to copy "infafas.jar" to the Sqoop shared library directory.
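Where that shared library lives depends on your installation; the paths below are assumptions, not the only valid locations:

```shell
# Sketch: make infafas.jar visible to Sqoop. Both destination paths
# are assumptions -- adjust them to your own installation.
cp /path/to/infafas.jar /var/lib/sqoop/
# or, on a parcel-based CDH install (assumed layout):
cp /path/to/infafas.jar /opt/cloudera/parcels/CDH/lib/sqoop/lib/
```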
Please feel free to ask any questions.
Monday, July 27, 2015
Pig: ERROR 1070: Could not resolve org.apache.hcatalog.pig.HCatLoader - [Solved] - CDH 5.4.X+
Hi All;
With newer CDH versions, the class name for the HCatalog loader changed. Unfortunately, this is not clearly documented.
Just change
org.apache.hcatalog.pig.HCatLoader to org.apache.hive.hcatalog.pig.HCatLoader
in your Pig scripts to solve the problem.
I hope you won't waste as much time as I did.
PS: You still need to specify hive-site.xml
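A minimal before/after sketch; the table name and script file are made up for illustration:

```shell
# Write a tiny Pig script that uses the new HCatLoader class name.
# 'sample_table' and 'load_hcat.pig' are illustrative names.
cat > load_hcat.pig <<'EOF'
-- old name (fails on CDH 5.4.x+): org.apache.hcatalog.pig.HCatLoader
A = LOAD 'sample_table' USING org.apache.hive.hcatalog.pig.HCatLoader();
DUMP A;
EOF

# Run it with HCatalog support; hive-site.xml must still be available.
# (Invocation shown for reference, not executed here.)
#   pig -useHCatalog load_hcat.pig
cat load_hcat.pig
```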
Monday, July 13, 2015
Datameer upgrade guide - Step by step instructions
Here is a list of instructions to upgrade Datameer. Basically, you unzip the new Datameer version into a new folder, copy several files over to the new installation, and then run a script that upgrades HSQLDB.
Step 0: Save all Datameer settings (Administration --> Hadoop Cluster)
Save all configurations, connection strings, and especially the YARN settings.
Step 1: Stop Datameer
/<datameer installation path>/<Datameer Application Folder>/bin/conductor.sh stop
Step 2: Back up the Datameer directory, just in case
Step 3: Unzip new Datameer version
/<datameer installation path>/<New Datameer Application Folder>
Step 4: Give the new Datameer folder the same user and group rights as the old one
Step 5: Adjust the Xmx memory settings in das-env.sh to match the old settings
Step 6: Copy the das-data folder from the old distribution to the new location.
cp -r /<old-location>/das-data /<new-location>/
Step 7: Copy over the native libraries that you have added to Datameer (if this applies)
cp -r /<old-location>/lib/native/* /<new-location>/lib/native/
Step 8: Copy over files from etc/custom-jars (these files could be database drivers or 3rd party libraries).
cp -r <old-location>/etc/custom-jars /<new-location>/
Step 9: Copy Datameer Plugins
cp -r <old-location>/etc/custom-plugins /<new-location>/
Step 10: Upgrade HSQLDB:
<new location>/bin/upgrade_hsql_db.sh --old-das-data /<old-location>/das-data --new-das-data /<new-location>/das-data
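The copy steps above (6 through 9) can be exercised as a dry run. The sketch below simulates the old and new install trees with temporary directories; all paths and file names are stand-ins, not real Datameer paths:

```shell
# Simulate upgrade steps 6-9 with temporary directories so the copy
# sequence can be tested without a real Datameer install.
OLD=$(mktemp -d)   # stands in for /<old-location>
NEW=$(mktemp -d)   # stands in for /<new-location>

# Fake the old layout (these already exist in a real upgrade).
mkdir -p "$OLD/das-data" "$OLD/lib/native" "$OLD/etc/custom-jars" "$OLD/etc/custom-plugins"
mkdir -p "$NEW/lib/native" "$NEW/etc"
touch "$OLD/das-data/hsqldb.script" "$OLD/lib/native/libexample.so" \
      "$OLD/etc/custom-jars/example-driver.jar"

cp -r "$OLD/das-data" "$NEW/"                  # step 6: application data
cp -r "$OLD/lib/native/." "$NEW/lib/native/"   # step 7: native libraries
cp -r "$OLD/etc/custom-jars" "$NEW/etc/"       # step 8: custom jars
cp -r "$OLD/etc/custom-plugins" "$NEW/etc/"    # step 9: plugins

ls "$NEW/das-data"
```

In a real upgrade you would substitute the actual old and new installation paths and skip the mkdir/touch scaffolding.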
For official instructions, you can also consult the Datameer documentation.
Monday, March 2, 2015
Solution to Failed to execute command Create Sqoop Database on service Sqoop 2 error
Depending on the Java configuration, Sqoop2 database creation may fail to update the Derby drivers, producing the error "Failed to execute command Create Sqoop Database on service Sqoop 2".
To solve this, apply the following steps:
1. Download the Derby client from db-derby-10.11.1.1-bin.zip
2. Extract derby.jar and derbyclient.jar from the zip file and copy them to /var/lib/sqoop2
3. Copy derby.jar to /opt/cloudera/parcels/CDH-<version>/jars/ as well
4. Delete /opt/cloudera/parcels/CDH-<version>/lib/sqoop2/webapps/sqoop/WEB-INF/lib/derby-<version>.jar soft link.
5. Create a soft link /opt/cloudera/parcels/CDH-<version>/lib/sqoop2/webapps/sqoop/WEB-INF/lib/derby.jar pointing to /opt/cloudera/parcels/CDH-<version>/jars/derby.jar
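Steps 4 and 5 amount to swapping a version-pinned soft link for an unversioned one. A dry-run sketch, with a temporary directory standing in for the parcel tree and an example Derby version number:

```shell
# Simulate steps 4-5. $PARCEL stands in for /opt/cloudera/parcels/CDH-<version>;
# the derby version "10.10.2.0" is only an example.
PARCEL=$(mktemp -d)
WEBLIB="$PARCEL/lib/sqoop2/webapps/sqoop/WEB-INF/lib"
mkdir -p "$PARCEL/jars" "$WEBLIB"
touch "$PARCEL/jars/derby.jar"
ln -s "$PARCEL/jars/derby.jar" "$WEBLIB/derby-10.10.2.0.jar"  # old pinned link

rm "$WEBLIB/derby-10.10.2.0.jar"                    # step 4: delete the pinned link
ln -s "$PARCEL/jars/derby.jar" "$WEBLIB/derby.jar"  # step 5: unversioned link

ls -l "$WEBLIB"
```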
Wednesday, December 17, 2014
UC4 Automic "Automatically deactivate when finished"
You can use the following SQL to query the "Automatically deactivate when finished" status from the UC4 repository.
select OH_IDNR, OH_NAME,
  CASE
    WHEN JBA_AUTODEACT = 'A' THEN 'Always'
    WHEN JBA_AUTODEACT = 'J' THEN 'After error-free execution'
    WHEN JBA_AUTODEACT = 'N' THEN 'No'
    WHEN JBA_AUTODEACT = 'F' THEN 'After an error-free restart'
  END as "Automatically deactivate"
from uc4.oh t1, uc4.JBA t2
where OH_IDNR = JBA_OH_IDNR