Thursday, December 31, 2015



The best news came at the end of the year: my recent posts have been honoured by OTechMag, and the issue is out!

You can find the full paper at

Thanks to everyone supporting me and reading the posts :)

As I say all the time: Enjoy & Share.

Have a happy New Year!


Monday, December 28, 2015

[Exadata] About "ORA-14404: partitioned table contains partitions in a different tablespace" error


In this post, I will talk about the ORA-14404 error I got recently while trying to drop an old tablespace. It was very annoying and time-consuming, because I thought all partitions of all tables resided in the same tablespace :)
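ORA-14404 is raised when the tablespace you are dropping holds some, but not all, partitions of a table. As a minimal sketch (the tablespace name OLD_TBS is an assumption), this dictionary query lists the tables that still have partitions spread across other tablespaces:

```sql
-- Hypothetical sketch: find partitioned tables that keep some partitions
-- in the tablespace being dropped (OLD_TBS) and some elsewhere.
SELECT table_owner, table_name,
       COUNT(DISTINCT tablespace_name) AS tbs_count
FROM   dba_tab_partitions
GROUP  BY table_owner, table_name
HAVING COUNT(DISTINCT tablespace_name) > 1
   AND SUM(CASE WHEN tablespace_name = 'OLD_TBS' THEN 1 ELSE 0 END) > 0;
```

Once these tables are found, their stray partitions can be moved (or the whole table relocated) before retrying the DROP TABLESPACE.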

Oracle Cloud Services Application @ Oracle Cloud Day 2015 Istanbul


As promised, I am sharing the detailed document for the work I presented in my Oracle Cloud Services session at Oracle Cloud Day 2015 Istanbul.

[BigData] Configuring Beeline and Impala Agents on an EdgeNode


In this post, I will configure the beeline and impala clients on an EdgeNode, which you will remember from my previous posts, and point these clients at the Oracle Big Data Appliance (BDA) test and prod targets.

Let me talk about the EdgeNode concept.
You can configure a node outside the BDA as a Hadoop client and enable connections to the BDA from it; such a node is called an EdgeNode. EdgeNodes can be used as access points or as a data landing zone for transporting data to the BDA, and many edge nodes can be defined to handle production load.

In my previous posts, I set up my EdgeNode to support HDFS, Hive and Spark operations, as if working directly on the BDA.
Visit -> for Hadoop client, for Spark client

In this post, I configure the beeline and impala-shell commands to authenticate without the user having to enter connection strings.
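As a sketch of the idea, the connection strings can be baked into shell aliases on the EdgeNode, so developers type a single command per target (the hostnames, ports and Kerberos principal below are assumptions, not my actual setup):

```shell
# Hypothetical sketch: per-target wrappers so no connection string is typed.
alias beeline-prod='beeline -u "jdbc:hive2://bdaprod-node01:10000/default;principal=hive/bdaprod-node01@EXAMPLE.COM"'
alias beeline-test='beeline -u "jdbc:hive2://bdatest-node01:10000/default;principal=hive/bdatest-node01@EXAMPLE.COM"'
alias impala-prod='impala-shell -i bdaprod-node03:21000 -k'
alias impala-test='impala-shell -i bdatest-node03:21000 -k'
```

With a valid Kerberos ticket (kinit), `beeline-prod` then drops the user straight into an authenticated session.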

Tuesday, December 8, 2015

[Hive] Using Nested Queries


As you know, Hive's SQL-like syntax does not give us all the capabilities of native SQL or the enhanced SQL features of commercial databases. In this post, I will talk about nested queries in Hive and about rewriting an SQL query that runs on Exadata.
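Hive historically only allows subqueries in the FROM clause (with limited IN/EXISTS support in WHERE from version 0.13 on), so an Oracle-style correlated scalar subquery has to be rewritten as a join. A minimal sketch, with hypothetical table and column names:

```sql
-- Oracle-style correlated scalar subquery (not supported in Hive):
--   SELECT o.id, (SELECT MAX(e.ts) FROM events e WHERE e.id = o.id)
--   FROM orders o;
-- Hive rewrite: push the subquery into the FROM clause and join to it.
SELECT o.id, m.max_ts
FROM   orders o
JOIN   (SELECT id, MAX(ts) AS max_ts
        FROM   events
        GROUP  BY id) m
ON     o.id = m.id;
```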

Monday, November 30, 2015

[HIVE] Querying hive data dictionary


In one of my scripts I needed column datatypes, but I couldn't get them easily from the Hive CLI (or Beeline). I used a trick to get column names, but datatypes and some other properties require parsing Hive output, which is quite challenging. In this post, I will talk about the HiveServer2 metastore and show how to get a table's specific properties with queries.
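If the metastore runs on MySQL with the standard schema, column names and datatypes can be pulled with a plain join over the metastore tables. A sketch (database `default` and table `my_table` are assumed names):

```sql
-- Hypothetical sketch: column names and datatypes for one Hive table,
-- queried directly from a MySQL-backed metastore.
SELECT c.COLUMN_NAME, c.TYPE_NAME
FROM   DBS d
JOIN   TBLS t       ON t.DB_ID = d.DB_ID
JOIN   SDS s        ON s.SD_ID = t.SD_ID
JOIN   COLUMNS_V2 c ON c.CD_ID = s.CD_ID
WHERE  d.NAME = 'default'
AND    t.TBL_NAME = 'my_table'
ORDER  BY c.INTEGER_IDX;
```

This returns clean rows instead of the formatted text that `DESCRIBE` prints, so it is much friendlier to scripts.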

Wednesday, November 25, 2015

[Impala] About "InvalidStorageDescriptorException: Invalid delimiter: 'X' " error


In this post, I will talk about an error we faced recently when querying data after loading it from an XML file via Beeline. At first we got a generic error, but after running the query on Impala we saw the real reason and solved it.
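Impala raises this error when the table's SerDe delimiter is not a single character. A sketch of checking and correcting it (the table name and delimiter are assumptions):

```sql
-- Hypothetical sketch: inspect the delimiter Impala rejected, reset it
-- to a single character, then refresh Impala's view of the table.
DESCRIBE FORMATTED my_xml_data;          -- shows the SerDe field delimiter
ALTER TABLE my_xml_data
  SET SERDEPROPERTIES ('field.delim' = ',');
INVALIDATE METADATA my_xml_data;         -- run in impala-shell
```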

Monday, November 23, 2015

My Presentation about "Oracle Cloud Services - Complete Example" @ Oracle Cloud Day Istanbul, 2015

Hi Everyone,

I am very happy to announce, sorry I am a bit late :/ , that I had the great honour of giving a presentation about Oracle Cloud in the heart of my city, Istanbul, at a great Oracle event.

Wednesday, November 18, 2015

[BigData] HDFS file formats comparison


In this post, I will compare HDFS file formats on an Oracle BDA cluster. I will move data from the default TEXT format to the ORC and Parquet file formats and run aggregation queries to measure the storage savings and performance gains.
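The conversion itself is a one-liner per format in HiveQL: create a new table stored in the target format from the TEXT table, then run the same aggregation against each copy. A sketch with assumed table and column names:

```sql
-- Hypothetical sketch: re-store a TEXT-format table as ORC and Parquet,
-- then compare sizes (via hdfs dfs -du) and query runtimes.
CREATE TABLE sales_orc     STORED AS ORC     AS SELECT * FROM sales_text;
CREATE TABLE sales_parquet STORED AS PARQUET AS SELECT * FROM sales_text;

-- identical aggregation against each copy:
SELECT region, SUM(amount) FROM sales_orc GROUP BY region;
```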

Wednesday, November 11, 2015

[Exadata] How to partition a Table yearly and monthly with respect to DATE T

Hi again,

In this post I am trying to partition a big table in an Oracle 11gR2 database on Exadata like this: rows earlier than a given DATE value (for example, year 2015) must be separated into yearly partitions, and newer rows must be partitioned monthly.

For instance:

1,'row1','10/03/2010'          to partition        Part_Before_2011
5,'row12','10/08/2010'         to partition        Part_Before_2011
3,'row56','01/11/2013'         to partition        Part_Before_2014
78,'row4','01/03/2015'         to partition        Part_After_2015
327,'row114','03/06/2015'      to partition        Part_After_20156
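One way to sketch this in 11gR2 is INTERVAL partitioning: explicit yearly range partitions up to 2015, and monthly partitions created automatically above that transition point (table and column names are assumptions; Oracle assigns system names such as SYS_Pnnn to the automatic monthly partitions):

```sql
-- Hypothetical sketch: yearly partitions before 2015, automatic monthly
-- partitions afterwards, using 11g INTERVAL partitioning.
CREATE TABLE big_table (
  id        NUMBER,
  name      VARCHAR2(50),
  load_date DATE
)
PARTITION BY RANGE (load_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
( PARTITION part_before_2011 VALUES LESS THAN (DATE '2011-01-01'),
  PARTITION part_before_2012 VALUES LESS THAN (DATE '2012-01-01'),
  PARTITION part_before_2013 VALUES LESS THAN (DATE '2013-01-01'),
  PARTITION part_before_2014 VALUES LESS THAN (DATE '2014-01-01'),
  PARTITION part_before_2015 VALUES LESS THAN (DATE '2015-01-01')
);
```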

Wednesday, November 4, 2015

[NoSQL] How to secure an Oracle NoSQL database


In this post, I will talk about security in Oracle NoSQL Database and how to add security to an existing topology. After security is enabled, pinging or connecting to the NoSQL database requires credentials.
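The broad strokes use the securityconfig utility shipped with the kvstore jar; a sketch of generating a security configuration and restarting the storage node agent with it (paths and the keystore password are assumptions):

```shell
# Hypothetical sketch: create a security configuration under KVROOT,
# then restart the SNA so it picks the new security settings up.
java -jar $KVHOME/lib/kvstore.jar securityconfig config create \
     -root $KVROOT -kspwd MyKeystorePass
java -jar $KVHOME/lib/kvstore.jar stop  -root $KVROOT
java -jar $KVHOME/lib/kvstore.jar start -root $KVROOT &
```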

[NoSQL] Solving "No RMI service for adminX" error on a storage node

I will talk about an error I saw on the NoSQL Administrative Console page, saying that one of the admin services is not running: "Verification violation: admin[3] No RMI service for admin3: service name=commandService". In this post, I will show how to search the storage node logs and how to bring the admin service back up.

Tuesday, October 27, 2015

Oracle NoSQL Database and performing some basic operations

Hi ,

In this post, I am adding a new page to my career and getting acquainted with Oracle NoSQL Database and its functionality. Today, I am going to talk about the "kv approach" and perform some administrative tasks such as connecting and querying data.

Friday, October 23, 2015

About "Turkey postponing the 2015 DST switch by 2 weeks" and rebooting an Exadata Storage Cell without affecting ASM


As you all know, the DST switch was due on 25/10/2015, but in my country, Turkey, it was postponed by 2 weeks, so all systems had to be patched to handle the situation. If your database has columns with the %TIME ZONE% property, you must apply the DB patch as well. In this post, I will show you part of the work we did while patching. On Exadata systems, the cell nodes and DB nodes must be rebooted after the required OS patch is applied.
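The safe cell reboot sequence follows the documented CellCLI pattern: confirm ASM can tolerate the grid disks going offline, deactivate them, reboot, then reactivate and wait for resync. A sketch of the commands run on each cell, one cell at a time:

```shell
# Hypothetical sketch of a rolling cell reboot that does not affect ASM.
cellcli -e "list griddisk attributes name,asmmodestatus,asmdeactivationoutcome"
# proceed only if every asmdeactivationoutcome above is 'Yes'
cellcli -e "alter griddisk all inactive"
shutdown -r now
# after the cell comes back up:
cellcli -e "alter griddisk all active"
cellcli -e "list griddisk attributes name,asmmodestatus"   # wait for ONLINE
```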

Wednesday, October 21, 2015

Dashboard Design in Cloudera Manager and Creating Custom Charts with tsquery Language


In this post, I will talk about charts and dashboard design in Cloudera Manager. I will show how to create a dashboard and how to add existing charts to it, and after that we will create custom charts using the tsquery language, which lets us write chart scripts in an SQL-like style.
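For a taste of the syntax, a minimal tsquery that charts CPU usage across all DataNode roles might look like this (the metric names are assumptions based on the standard Cloudera Manager metric set):

```
SELECT cpu_user_rate, cpu_system_rate WHERE roleType = DATANODE
```

Pasting such a statement into the Chart Builder produces one stream per matching role, which can then be saved onto a dashboard.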

Thursday, October 1, 2015

[ODI 12c] ORA-30088 error while loading data from Exadata to Exalytics


Recently I faced an ORA-30088 error while loading data from a table on an Exadata server into a table on an Exalytics server with ODI 12c (tool version 12.1.3).

Thursday, September 24, 2015

Meet The GDELT project and Google BigQuery


In this post, I will step outside DBA topics and introduce you to The GDELT project. In short, it is a daily-growing event database storing events captured from online news sites. What is magical is that you can see how political events are related across the globe and how online sources report them. Give it a try at

Wednesday, September 23, 2015

Spark client configuration in a Custom Hadoop Client Environment


In this post, I will show how to connect Spark to a Hadoop cluster located on an Oracle BDA. Currently, in Cloudera Manager with CDH 5.4.0, the Spark client is not easily configurable: when you download the client configuration, you will see that the Spark-related files are not usable. After a huge effort (!) I managed to start spark-shell preconfigured to target the BDA environment.

Friday, September 11, 2015

[Validating SYS Password - OAUTH marshaling failure] during Upgrade to EM Cloud Control 12c R5


Recently a workmate of mine faced the following error, [OAUTH marshaling failure], when we upgraded OEM 12c R4 to R5. After some searching we found out that it relates to the Java options used by the WebLogic Server connection.

Wednesday, September 9, 2015

Custom Hadoop Client Configuration

Hi again,

In this post, I will show how to configure a Hadoop client on a user machine and connect it to different Hadoop clusters. I will also set up client configurations on the same machine targeting different Hadoop clusters. I am working on an Oracle Big Data Appliance with Cloudera Manager 5.4.0.

It is very important for developers to access the Prod and Test systems without changing development machines, so custom client configurations matter for a Hadoop developer. In this post, I will connect a developer machine to the Prod and Test Hadoop systems and show how to switch the connection target from the test system to the prod system within the environment.
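The switching itself can be as small as repointing HADOOP_CONF_DIR at the right downloaded client configuration; a minimal sketch, assuming each cluster's config was unpacked into its own directory (the paths are assumptions):

```shell
# Hypothetical sketch: flip one developer shell between Test and Prod
# clusters by repointing HADOOP_CONF_DIR at the matching config dir.
hadoop_env() {
  case "$1" in
    test) export HADOOP_CONF_DIR=/etc/hadoop/conf.cluster-test ;;
    prod) export HADOOP_CONF_DIR=/etc/hadoop/conf.cluster-prod ;;
    *)    echo "usage: hadoop_env {test|prod}" >&2; return 1 ;;
  esac
  echo "HADOOP_CONF_DIR=$HADOOP_CONF_DIR"
}

hadoop_env test
```

After `hadoop_env prod`, every `hadoop fs`, `hive` or `spark-shell` invocation in that shell talks to the prod cluster, with no change to the machine-wide configuration.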

Sunday, September 6, 2015

Establishing HUE LDAP integration and about ticketing filtering auditing issues


In this post, I am writing about how to enable LDAP authentication for the HUE service via Cloudera Manager. When adding users in a commercial environment, it is a must to sync them with their existing accounts on the system. After that, I will show how to track user activities on our cluster. Note that all configuration operations will be done in Cloudera Manager.

Thursday, September 3, 2015

Solving HUE-1711 [core] LDAP username import lowercase problem

Hi again,

In this post, I will write about a username conflict that arises when adding a user from LDAP in HUE via Cloudera Manager. Usernames are created case-sensitively in HUE, but they are lowercase at the OS level, so people working in HUE cannot access their files when they log on from a terminal. I will show how to remove that problem.

Wednesday, August 26, 2015

Thursday, July 30, 2015

Real-time LOG streaming to HDFS with FLUME

Hello friends,

I am continuing to learn Hadoop through examples :)

In this post, I will demonstrate pushing streaming data to HDFS using the FLUME tool. With Flume we can quickly move streaming data, for example the real-time feed of a LOG file, into HDFS. Afterwards we can view the data with query tools and analyze it with Map/Reduce jobs.
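A Flume agent for this is a small properties file wiring a source, a channel and a sink together; a sketch tailing a log file into HDFS (the log path and HDFS URL are assumptions):

```
# Hypothetical sketch: tail a log file and stream the lines into HDFS.
agent.sources  = src1
agent.channels = ch1
agent.sinks    = sink1

agent.sources.src1.type      = exec
agent.sources.src1.command   = tail -F /var/log/app.log
agent.sources.src1.channels  = ch1

agent.channels.ch1.type      = memory

agent.sinks.sink1.type       = hdfs
agent.sinks.sink1.hdfs.path  = hdfs://namenode/flume/logs
agent.sinks.sink1.channel    = ch1
```

Starting the agent with `flume-ng agent -n agent -f <this file>` begins the stream.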

Transferring data into HIVE with SQOOP and querying it with HIVE

Hello friends,

In my previous post, we transferred data from a test database into HDFS with SQOOP and viewed that data as files.

In this post, again using SQOOP, we will transfer data directly from the test database into a test database defined in HIVE, and query it through HUE.

First, I want to talk about HIVE. HIVE is a framework that lets you query data on HDFS with an SQL-like language. Think of a file on HDFS, for example a CSV file. Just as you will remember from SQL*Loader, we can use delimiters to map this data onto a db-table defined in HIVE and then run SQL queries against it as if it were an external table. The difference, of course, is that HIVE automatically runs map-reduce and returns the query results to us in tabular format.
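The mapping described above can be sketched in HiveQL like this (the HDFS path and columns are assumptions for illustration):

```sql
-- Hypothetical sketch: map a comma-delimited CSV directory on HDFS to
-- a Hive table, external-table style, and query it like plain SQL.
CREATE EXTERNAL TABLE emp_csv (
  id     INT,
  name   STRING,
  salary DOUBLE
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/user/test/emp_csv';

SELECT name, salary FROM emp_csv WHERE salary > 1000;
```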

Wednesday, July 29, 2015

Transferring data from a database to HADOOP with SQOOP


The BigData concept has now entered our lives, and it is no longer possible for DBAs to remain indifferent to it :)

As someone just starting out with HADOOP, this new world may seem confusing at first, but once you start playing with it, you see it is actually quite fun.

In this post I will transfer data from a test database to HADOOP with SQOOP. You can find many examples of this on the internet :) You will also see an error I hit on the first import, and its solution. It has probably happened to you too.

First, let me talk a bit about SQOOP:

SQOOP is a tool for transferring data from a relational database to HDFS. After the data has been processed on HDFS, we can transfer it back to the RDBMS.
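A typical import is a single command; a sketch pulling one Oracle table into HDFS (the connection string, credentials and table name are assumptions):

```shell
# Hypothetical sketch: import one table from an Oracle test database
# into an HDFS directory with four parallel mappers.
sqoop import \
  --connect jdbc:oracle:thin:@testdb-host:1521:TESTDB \
  --username scott --password-file /user/scott/.pw \
  --table EMP \
  --target-dir /user/scott/emp \
  --num-mappers 4
```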

SOLUTION : SecureFX - Cannot connect to session #SERVERNAME# due to configuration problem


After a long time, I am back at my machine. Let's start writing again with a short post.

In this post I want to talk about the SecureCRT program. With it, we can see all the sessions we use to connect to our servers in one place and group the connection details. I find it more comfortable to use than PuTTY. You can download it here.