For the past two months, I've been involved with an SAP HANA project on which I'm doing predictive analysis with the Predictive Analysis Library (PAL). Since this is proprietary technology, there is an inherent handicap when it comes to trying anything out of the ordinary. The basic lowdown on this tech is that SAP HANA is an in-memory database. The definition pretty much ends there; no one has compared it with an existing solution like SQLite, which is especially odd when the people at SAP are trying to market it inside a software bundle aimed at startups. Not a cool attitude, SAP, and I wonder why anyone getting this step-motherly treatment would base his or her startup on this app stack.
For doing predictive analysis, the recommended way is to use the PAL library, which consumes memory on the server itself to speed things up, but which is currently liable to crash the server by getting stuck. Call my approach naive, but when I try to run a PAL analysis over an entire table, it simply freezes during execution and no other PAL job can run. Today it took me the entire day to get a calculation view straight, and don't ask me how, because for most of the day I was getting aggregates of the results when the SQL script contained nothing more than:
var_out = select * from ..
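Since a PAL run over a full table can freeze the server, one defensive pattern is to stage a bounded sample in a table variable and feed only that to the PAL procedure. The sketch below is illustrative only: the schema, table, column, and generated procedure names are placeholders I've made up, not actual PAL signatures, and the exact call syntax depends on the HANA version and how the wrapper procedure was generated.

```sql
-- SQLScript sketch (hypothetical names throughout).
-- Stage a bounded sample so a runaway PAL job can't pin the server
-- by scanning the entire table.
sample_data = SELECT TOP 100000 "ID", "FEATURE1", "FEATURE2"
              FROM "MY_SCHEMA"."BIG_TABLE";

-- Invoke the generated PAL wrapper procedure on the sample only.
-- "PAL_KMEANS_PROC" stands in for whatever wrapper your setup generated.
CALL "MY_SCHEMA"."PAL_KMEANS_PROC"(:sample_data, :params, :result);

var_out = SELECT * FROM :result;
```

If the sampled run behaves, the limit can be raised incrementally, which at least turns "the server is stuck" into a tunable trade-off rather than an all-or-nothing gamble.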
There is a lot of talk about integrating PAL with Hadoop, but it covers only unstructured data, and no MapReduce job is available to simplify things for server-side jobs. I hope to see some active work on this in the future.
What is most baffling to me is the SAP HANA Studio, which has tons of blocking calls, like right-clicking on a package to create a new calculation view.
I do hope things improve in future versions so that these problems stop harassing us developers.