Tuesday 6 October 2009

A check list for NRIs buying a house in India

From an industry perspective, investment in the housing sector from any source is welcome in today's Indian real estate scenario. Be it Resident Indians, NRIs or even companies, constructing houses creates jobs for a lot of people. A back-of-the-napkin calculation shows that a 1,000 sq ft house creates about 100 direct employment opportunities (architect, building engineer, masons, helpers, electricians, plumbers, painters, carpenters, etc.) and over 1,000 indirect ones (people working in cement plants, brick kilns, tile kilns, electrical fittings companies, saw mills, steel plants, paint companies, etc.). Of course, the duration of this employment will depend on a number of factors like proximity to supply sources for material and labour, access to high-tech equipment, the architecture, and so on.

But to construct a house is not all that easy. It is not without substance that a Tamil saying goes, "Veetai Katti Par, Kalyanathai Panni Par" (Basically the saying rates constructing a house and having a child's marriage done among the toughest).

To an already difficult task, the sheer distance and absence during construction become problem multipliers for NRIs. There was an NRI based out of the USA who got a wonderful sales pitch from a builder. The salesman met the NRI at his office in the USA, arranged for all the documentation and also sent video clippings of the apartment at Bangalore. Convinced of its genuineness, the NRI transferred Rs.50 lakhs to the builder's account. The date for the house warming was fixed for a month later. The NRI could not make it to the function due to pressing office work and asked his parents to do the poojas.

The parents got the shock of their lives when they landed at the apartment complex the day before the poojas. The complex had only one sample apartment finished (the one in the video). They were told by the Project Manager at the site that the poojas could be done at any time, but the apartment could be delivered only after "6 months".

Another NRI, who was building the house himself using an experienced and well-referenced engineer, found that his house's orientation had been shifted by 15 feet. This left him space on the wrong side of the house, scuttling his plans to build a small commercial complex in the future. They now have space for parking 4 cars but none for building a rent-worthy space!

A number of checks could have been used to be on the safer side in both the above cases:

Thankfully, there are a number of professional builders who are a lot more trustworthy. So doing a bit of research on the track record of a builder can help.

For any real estate purchase, it is preferable to visit the site before buying. This exercise is worth it not only because we are committing a large amount of money but also because reversing the decision proves costly. If the NRI is not able to make it, he can request a trusted friend or relative to make the site visit on his behalf.

Going for a housing loan through a bank will ensure that the money is released in stages only. This keeps the money safe during the construction. Also, all the banks, at their local branches, have a list of shortlisted builders whose constructions are pre-approved for loans. It is better to buy only these constructions, as the banks are quite stringent in their norms for pre-approval and shortlist only those builders who have a proven track record and those projects which comply with all legal norms.

Post construction, the management of the asset is one of the major issues faced by NRIs. There is no easy solution for this. There are some society associations which support the owners of the buildings with services like maintenance and rent collection. There are again the "friendly neighbourhood real estate agents" who may sometimes double up as the maintenance manager too. Many times, though, the "friendly" turns into "greedy" after some time. There are a few professional real estate management firms in most metros, who are now expanding into Tier-2 cities too.

Check whether the construction rate quoted is for the built-up area or the carpet area. Construction is generally quoted for the built-up area, while rental is quoted only for the carpet area. There can be a difference of 15% to 20% between the two based on the type of construction. Today in apartments there is the concept of super built-up area which, apart from the built-up area, includes the staircase, common passages, fire escape passage, etc. The super built-up area can be bloated by as much as 50% of the carpet area.
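To see what this means in money terms (a rough, illustrative calculation using the percentages above): a flat with a carpet area of 1,000 sq ft could have a built-up area of around 1,150 to 1,200 sq ft and a super built-up area of up to 1,500 sq ft. At a quoted rate of, say, Rs.3,000 per sq ft, the same flat costs Rs.30 lakhs if billed on carpet area but Rs.45 lakhs if billed on super built-up area, while the rent you can charge is still based only on the 1,000 sq ft of carpet area.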

Robert Allen, the Real Estate Mogul suggests the 100 - 20 - 10 - 1 rule for any real estate purchase. The idea is to check out 100 properties in person; shortlist 20 of them for a deeper scrutiny; enter into negotiation with sellers for 10 of the properties and finally buy the ONE that is best suited.

Technically there should be a check for all the statutory approvals - town planning (nobody wants a flyover at arm's length from the balcony!), water supply and sewage disposal, safety approval from the local fire department, etc. It is always better to ask for the encumbrance certificate and the title deed from the builder and get a legal opinion from a lawyer.

Don't hesitate to ask. This is probably the most important point. Many times, to avoid being thought of as less intelligent, we ask fewer questions. For any investment, and particularly for real estate, the more questions asked, the better the investment. The genuineness of the promoter can be gauged by the patience, promptness and depth of the answers. Answers like "Don't worry about that, we will manage", without going into the specifics, are danger signs.

Take time. Do not restrict yourself by limiting the property checks and decision making to the time that you are present in India. A 2 to 4 week holiday cannot realistically be converted into a real estate investment window. Start the process before you come here. In case you cannot decide before you leave, it is OK. A Power of Attorney given to a parent or a relative can be used to complete the actual purchase even after you leave the shores of India.
Source: ET

Monday 7 September 2009

Virga


Did you know that rain can evaporate before it hits the ground?

In meteorology, virga is an observable streak or shaft of precipitation that falls from a cloud but evaporates before reaching the ground.

Virga can cause varying weather effects, because as rain is changed from liquid to vapor form, it removes heat from the air due to the high heat of vaporization of water. In some instances, these pockets of colder air can descend rapidly, creating a dry microburst which can be extremely hazardous to aviation. Conversely, precipitation evaporating at high altitude can compressionally heat as it falls, and result in a gusty downburst which may substantially and rapidly warm the surface temperature.

Virga can produce dramatic and beautiful scenes, especially during a red sunset.

The word virga is derived from the Latin for "twig" or "branch". A backronym sometimes found in amateur discussions of meteorology is "Variable Intensity Rain Gradient Aloft."

http://en.wikipedia.org/wiki/Virga

See pic in
http://imgur.com/gUZof.jpg and details in http://www.reddit.com/r/pics/comments/9h5ei/rain_rain_go_away/

Friday 14 August 2009

Chicken Tikka Masala - with yogurt marinated chicken.



Ingredients:
        Boneless chicken - 1kg. (You may use chicken with bones if you fancy)
        2 hot green chillies
        1 teaspoon coriander seeds (dhaniya)
        1 teaspoon fenugreek seeds
        1+1 teaspoon turmeric
        1+1 teaspoon red chilli powder
        2+2 tablespoons finely ground masala (mix of spices) - take any 'chicken tikka masala' pack from Tesco.
        2 tablespoons oil - sunflower or olive as you like. Do not use vegetable oil or any other.

        2 teaspoons salt
        500g yogurt
        2 tomatoes
        4 garlic cloves (buds)
Procedure

        [1]
        Marinate the chicken in yogurt by mixing in 1 teaspoon turmeric, 1 tablespoon finely ground masala and 1 teaspoon red chilli powder. Mix well so that all chicken pieces are immersed in the 'spicy' yogurt.
        Leave for a minimum of 30 mins.
        [2]
        * Cut the 2 hot green chillies into small pieces.
        * Cut the 2 tomatoes into small pieces.
        * Cut the garlic into small pieces.
        [3]
        * Take a pan.
        * Add 2 tablespoons oil. Let it heat a bit, maybe 2 mins on full gas. (Oh yes, and switch on the gas before this.)
        * Add 1 teaspoon fenugreek seeds and 1 teaspoon coriander seeds.
        * Within 4-5 seconds (if the oil is really hot) add the cut green chillies and cut garlic.
        * Mix for 10 secs or so (again, if the oil is hot).
        * Add 1 teaspoon turmeric to this and mix for 4-5 secs.
        * Add 2 tablespoons finely ground masala and 1 teaspoon red chilli powder and mix for 5-10 secs.
        * This will not look like a dry mix of spices - lower the gas to a medium flame now.
        * Add the tomatoes and mix really well. Let the tomatoes kind of melt.
        * Now add the marinated chicken slowly. And mix really well to form a uniform mixture.
        * Increase the gas to full and let this boil now for at least 15-20 mins.
                * Occasionally you will need to stir as the mixture may boil and try to come out of the pan.
         (You can do this in a pressure cooker; you would just need to cook until 3 whistles of the cooker.)
        * Check now if the chicken is cooked. Use a fork to poke in it and see if it is soft and not like rubber.
        * If chicken is cooked, your curry is ready. Add salt to taste - typically 2 tea spoons for this much mixture.
       
        Eat with rice or bread/naan/chapati etc.


               
       

10 home remedies to avoid swine flu


(Source: Times of India)

Are the rising swine flu casualties giving you jitters? Not sure how you can avoid falling prey to the growing epidemic? First and foremost, there is absolutely no need to panic.

Watching television to keep tabs on the progress of H1N1, particularly in the badly affected areas like Pune, is all right. But don't let the hysterical anchors get under your skin and start wearing a mask each time you step out of the house, unless you are visiting a very crowded area. Then too, the mask will protect you only for a specified period.

Without giving in to the swine flu panic and creating a stockpile of Tamiflu and N-95 masks at home and enriching pharma companies, there are a number of other measures you can take to ensure that the virus is not able to get you, irrespective of which part of the world you are in.

It is essential to remember that all kinds of viruses and bacteria can attack you when your immune system is weak, or they can weaken it easily. Hence, building your own defences would be a better, more practical, long-lasting and much more economical idea.

Here are some easy steps you can take to tackle a flu virus of any kind, including swine flu. It is not necessary to follow all the steps at once. You can pick and choose a combination of remedies that suit you best. However, if you are already suffering from flu, these measures can help only up to an extent. And, if you have been infected by H1N1, visiting a hospital and staying in solitary confinement is a must.

1. Have five duly washed leaves of Tulsi (known as Basil in English; medicinal name Ocimum sanctum) everyday in the morning. Tulsi has a large number of therapeutic properties. It keeps throat and lungs clear and helps in infections by way of strengthening your immunity.

2. Giloi (medicinal name Tinospora cordifolia) is a commonly available plant in many areas. Take a one-foot long branch of giloi, add five to six leaves of Tulsi and boil in water for 15-20 minutes or long enough to allow the water to extract its properties. Add black pepper and sendha (salt used during religious fasts), rock or black salt, or Misri (crystalised sugar like lumps to make it sweet) according to taste. Let it cool a bit and drink this kadha (concoction) while still warm. It will work wonders for your immunity. If giloi plant is not available, get processed giloi powder from Hamdard or others, and concoct a similar drink once a day.

3. A small piece of camphor (kapoor), approximately the size of a tablet, should be taken once or twice a month. It can be swallowed with water by adults, while children can take it along with mashed potatoes or banana because they will find it difficult to have it without any aid. Please remember camphor is not to be taken every day, but only once each season, or once a month.

4. Those who can take garlic, must have two pods of raw garlic first thing in the morning. To be swallowed daily with lukewarm water. Garlic too strengthens immunity like the earlier measures mentioned.

5. Those not allergic to milk, must take a glass of hot or lukewarm milk every night with a small measure of haldi (turmeric).

6. Aloe vera (gwarpatha) too is a commonly available plant. Its thick and long, cactus-like leaves have an odourless gel. A teaspoon gel taken with water daily can work wonders for not only your skin and joint pains, but also boost immunity.

7. Take homeopathic medicines — Pyrogenium 200 and Inflenzium 200 in particular — five tablets three times a day, or two-three drops three times a day. While these are not specifically targeted at H1N1 either, these work well as preventive against common flu virus.

8. Do Pranayam daily (preferably under guidance if you are already not initiated into it) and go for morning jog/walk regularly to keep your throat and lungs in good condition and body in fine fettle. Even in small measures, it will work wonders for your body's resistance against all such diseases which attack the nose, throat and lungs, besides keeping you fit.

9. Have citrus fruits, particularly Vitamin C rich Amla (Indian gooseberry) juice. Since fresh Amla is not yet available in the market (not for another three to four months), it is not a bad idea to buy packaged Amla juice which is commonly available nowadays.

10. Last but not the least, wash your hands frequently every day with soap and warm water for 15-20 seconds; especially before meals, or each time after touching a surface that you suspect could be contaminated with flu virus, such as a door handle or a knob, especially if you have returned from a public place or used public transport. Alcohol-based hand cleaners should be kept handy at all times and used whenever soap and warm water are not available.

(The author is an avid reader and follower of alternative therapies including spiritual healing, ayurveda, yoga and homeopathy)


Thursday 13 August 2009

How to calculate the 95th percentile of a set of values in Oracle?



Oracle provides functions to calculate percentile values in a set of ordered data.

Inverse Percentile Functions

Using the CUME_DIST function, you can find the cumulative distribution (percentile) of a set of values. However, the inverse operation (finding what value computes to a certain percentile) is neither easy to do nor efficiently computed. To overcome this difficulty, the PERCENTILE_CONT and PERCENTILE_DISC functions were introduced. These can be used both as window reporting functions as well as normal aggregate functions.

These functions need a sort specification and a parameter that takes a percentile value between 0 and 1. The sort specification is handled by using an ORDER BY clause with one expression. When used as a normal aggregate function, it returns a single value for each ordered set.

PERCENTILE_CONT is a continuous function computed by interpolation, while PERCENTILE_DISC is a step function that assumes discrete values. Like other aggregates, PERCENTILE_CONT and PERCENTILE_DISC operate on a group of rows in a grouped query, but with the following differences:

- They require a parameter between 0 and 1 (inclusive). A parameter specified out of this range will result in an error. This parameter should be specified as an expression that evaluates to a constant.
- They require a sort specification. This sort specification is an ORDER BY clause with a single expression. Multiple expressions are not allowed.

Normal Aggregate Syntax

[PERCENTILE_CONT | PERCENTILE_DISC]( constant expression )
WITHIN GROUP ( ORDER BY single order by expression
[ASC|DESC] [NULLS FIRST| NULLS LAST])
Inverse Percentile Example Basis

We use the following query to return the 23 rows of data used in the examples of this section:
SELECT cust_id, cust_credit_limit, CUME_DIST()
OVER (ORDER BY cust_credit_limit) AS CUME_DIST
FROM customers WHERE cust_city='Marshal';
CUST_ID CUST_CREDIT_LIMIT CUME_DIST
---------- ----------------- ----------
28344 1500 .173913043
8962 1500 .173913043
36651 1500 .173913043
32497 1500 .173913043
15192 3000 .347826087
102077 3000 .347826087
102343 3000 .347826087
8270 3000 .347826087
21380 5000 .52173913
13808 5000 .52173913
101784 5000 .52173913
30420 5000 .52173913
10346 7000 .652173913
31112 7000 .652173913
35266 7000 .652173913
3424 9000 .739130435
100977 9000 .739130435
103066 10000 .782608696
35225 11000 .956521739
14459 11000 .956521739
17268 11000 .956521739
100421 11000 .956521739
41496 15000 1
PERCENTILE_DISC(x) is computed by scanning up the CUME_DIST values in each group until you find the first one greater than or equal to x, where x is the specified percentile value. For the example query, PERCENTILE_DISC(0.5) returns 5,000, as the following illustrates:
SELECT PERCENTILE_DISC(0.5) WITHIN GROUP
(ORDER BY cust_credit_limit) AS perc_disc, PERCENTILE_CONT(0.5) WITHIN GROUP
(ORDER BY cust_credit_limit) AS perc_cont
FROM customers WHERE cust_city='Marshal';
PERC_DISC PERC_CONT
--------- ---------
5000 5000
The result of PERCENTILE_CONT is computed by linear interpolation between rows after ordering them. To compute PERCENTILE_CONT(x), we first compute the row number RN = (1 + x*(n-1)), where n is the number of rows in the group and x is the specified percentile value. The final result of the aggregate function is computed by linear interpolation between the values from the rows at row numbers CRN = CEIL(RN) and FRN = FLOOR(RN).

The final result will be: PERCENTILE_CONT(x) = if (CRN = FRN = RN) then (value of expression from row at RN) else (CRN - RN) * (value of expression for row at FRN) + (RN - FRN) * (value of expression for row at CRN).

Consider the previous example query, where we compute PERCENTILE_CONT(0.5). Here n is 23 (the number of rows returned above). The row number RN = (1 + 0.5*(n-1)) = (1 + 0.5*22) = 12. Putting this into the formula (FRN = CRN = 12), we return the value from row 12, which is 5,000, as the result.

As another example, to compute PERCENTILE_CONT(0.66), the computed row number is RN = (1 + 0.66*(n-1)) = (1 + 0.66*22) = 15.52. So PERCENTILE_CONT(0.66) = (16 - 15.52)*(value of row 15) + (15.52 - 15)*(value of row 16) = 0.48*7,000 + 0.52*9,000 = 8,040. These results are:
SELECT PERCENTILE_DISC(0.66) WITHIN GROUP
(ORDER BY cust_credit_limit) AS perc_disc, PERCENTILE_CONT(0.66) WITHIN GROUP
(ORDER BY cust_credit_limit) AS perc_cont
FROM customers WHERE cust_city='Marshal';
PERC_DISC PERC_CONT
---------- ----------
9000 8040
Inverse percentile aggregate functions can appear in the HAVING clause of a query like other existing aggregate functions.
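
For instance, here is a minimal sketch reusing the customers table from the examples above (the 5,000 cut-off is just an illustrative threshold): list only those cities whose median credit limit exceeds 5,000.

SELECT cust_city, COUNT(*) AS num_customers
FROM customers
GROUP BY cust_city
HAVING PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY cust_credit_limit) > 5000;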
As Reporting Aggregates
You can also use the aggregate functions PERCENTILE_CONT and PERCENTILE_DISC as reporting aggregate functions. When used as reporting aggregate functions, the syntax is similar to that of other reporting aggregates.
[PERCENTILE_CONT | PERCENTILE_DISC](constant expression)
WITHIN GROUP ( ORDER BY single order by expression
[ASC|DESC] [NULLS FIRST| NULLS LAST])
OVER ( [PARTITION BY value expression [,...]] )
This query computes the same thing (the median credit limit for customers in this result set), but reports the result for every row in the result set, as shown in the following output:
SELECT cust_id, cust_credit_limit, PERCENTILE_DISC(0.5) WITHIN GROUP
(ORDER BY cust_credit_limit) OVER () AS perc_disc,
PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY cust_credit_limit)
OVER () AS perc_cont
FROM customers WHERE cust_city='Marshal';
CUST_ID CUST_CREDIT_LIMIT PERC_DISC PERC_CONT
---------- ----------------- ---------- ----------
28344 1500 5000 5000
8962 1500 5000 5000
36651 1500 5000 5000
32497 1500 5000 5000
15192 3000 5000 5000
102077 3000 5000 5000
102343 3000 5000 5000
8270 3000 5000 5000
21380 5000 5000 5000
13808 5000 5000 5000
101784 5000 5000 5000
30420 5000 5000 5000
10346 7000 5000 5000
31112 7000 5000 5000
35266 7000 5000 5000
3424 9000 5000 5000
100977 9000 5000 5000
103066 10000 5000 5000
35225 11000 5000 5000
14459 11000 5000 5000
17268 11000 5000 5000
100421 11000 5000 5000
41496 15000 5000 5000
Inverse Percentile Restrictions

For PERCENTILE_DISC, the expression in the ORDER BY clause can be of any data type that you can sort (numeric, string, date, and so on). However, the expression in the ORDER BY clause must be a numeric or datetime type (including intervals) because linear interpolation is used to evaluate PERCENTILE_CONT. If the expression is of type DATE, the interpolated result is rounded to the smallest unit for the type: for a DATE type, the interpolated value will be rounded to the nearest second; for interval types, to the nearest second (INTERVAL DAY TO SECOND) or to the month (INTERVAL YEAR TO MONTH).

Like other aggregates, the inverse percentile functions ignore NULLs in evaluating the result. For example, when you want to find the median value in a set, Oracle Database ignores the NULLs and finds the median among the non-null values. You can use the NULLS FIRST/NULLS LAST option in the ORDER BY clause, but they will be ignored as NULLs are ignored.
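
Coming back to the question in the title: computing the 95th percentile is just a matter of passing 0.95 as the parameter. Here is a minimal sketch against the same customers table used above (substitute your own table and column):

SELECT PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY cust_credit_limit) AS pct95_cont,
       PERCENTILE_DISC(0.95) WITHIN GROUP (ORDER BY cust_credit_limit) AS pct95_disc
FROM customers
WHERE cust_city='Marshal';

PERCENTILE_CONT(0.95) interpolates between the two rows straddling the 95th-percentile position, while PERCENTILE_DISC(0.95) returns an actual value present in the data.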

Reference: Oracle Database Data Warehousing Guide

Friday 7 August 2009

BI 2.0: Is it really next generation?

We live in real time, minute by minute. News is no longer delayed by days; it is streamed in real time. We bank online and check our real-time balances. We book flights with real-time visibility of seat availability, and we select the seat we want online in real time. All these transactions generate data - lots of data.

To allow us to adapt our business models to today's real-time world, software applications are now built using event-driven technologies. Data moves around in real time over service-oriented architectures (SOAs), using loosely coupled and highly interoperable services that promote standardized application integration.

Yet business intelligence (BI) today has not changed in concept since the invention of the relational database and the SQL query - until the advent of BI 2.0.

BI 2.0 is a term that encapsulates several important new concepts about the way that we use and exploit information in businesses, organizations and government. The term is also intrinsically linked with real-time and event-driven BI but is really about the application of these technologies to business processes.

At the heart of this architecture are events, specifically XML messages. Ultimately, most modern processes themselves are actioned by events. Consequently, when you think about how to add intelligence to modern processes, the humble SQL query looks far from ideal.

The traditional data warehouse has enabled significant advances in our use of information, but its underlying architectural approach is now being questioned. Its architecture limits our ability to optimize every business process by embedding BI capabilities within. We need to look to event-driven, continuous in-process analytics to replace batch-driven reporting on processes after the fact.

In short, how can we build smarter business processes that give our organizations competitive advantage? How can we build the intelligent business?

The Client/Server Legacy

The BI tools most organizations use today were designed to solve a problem that arose in the early 1990s with the spread of the relational database. As more information was stored in databases, simply extracting it became a chore for IT departments because most users weren't interested in becoming experts in writing SQL queries. Getting the data out of databases truly became an end in itself and drove the rise of BI as we know it. Consequently, BI tools today focus on the presentation of data.

As it turns out, though, extracting data that is hours or days old and publishing it into reports, while useful, doesn't provide clear guidance on what users should do right now to improve business performance. As a result, at many companies, BI users don't even review the reports that are sent to them - they relegate them to reference documents. This is often expressed by users who complain that the information arrives too late to be really useful.

Strikingly, this is the antithesis of the real-time, actionable intelligence that many organizations need to provide the quality of service customers demand. At the most basic, such information is a day late and a dollar short in most industries. In retail, for example, three to four percent of potential revenue is foregone due to items not being adequately stocked all the time. The store manager is sent a stock report, but this arrives the next morning, after the close of business and too late to replenish the shelves.

Faster data warehouse queries or prettier dashboard reports — the focus of BI system improvements until now — clearly do not begin to solve the problem because they do not get to the heart of the architectural issue. It is undeniably the case that by the time data has been entered into the data warehouse and then extracted, it is out of date. This isn't a problem for some applications, but it is terminal for those that must run off real-time or near real-time knowledge.

A common misconception is that real-time data isn't needed because there is no way that operations teams could analyze it. This is applying BI 1.0 thinking; simply delivering more reports faster doesn't solve the problem. What's needed is a way to put relevant insight into the hands of operations staff in time to make a difference to day-to-day operations.

Reports are not the optimal deliverable of BI systems. Reports need analyzing and interpreting before any decisions can be made, and there is evidence that users don't look at them until they already know they have a problem. Rather than reporting on the effectiveness of a process after the fact, BI should be used within the process as a way of routing workflow automatically, based on what a customer is doing. In order to do this, you have to not only capture data in real time, but you need to analyze and interpret it as well.

This is essentially event-driven BI - analyzing up-to-the-minute data in the context of historic information - so that actions can be initiated automatically. The data warehouse isn't good at this. Perhaps it is simply being asked to support functions it was not designed to handle.

BI Services Arrive

Over the past few years, companies have started to present their data warehouses as Web services for use by other applications and processes connected by SOA or middleware such as an enterprise service bus (ESB). A fundamental limitation to this approach is that the data warehouse is the wrong place to look for intelligence about the performance of a current process. Real-time process state data, so relevant to this in-process intelligence, is unlikely to be in the data warehouse anyway.

Even layering a BI dashboard onto the data warehouse is inadequate for many operational tasks because they rely on a user noticing a problem based on out-of-date data. Dashboards aggregate and average. They remove details and context and present only a view of the past. Decisions require detail and need to be made in the present.

It's clear that data warehouses will remain, but their role can be clarified as the system of record, as opposed to the only place that BI is done. Reporting and presentation of historical data will continue to be done here - it was designed for that. Given the challenges associated with trying to move to a real-time data warehouse, however, it is clear that information required to support and indeed drive daily operational decisions must come from a different approach to avoid the latency introduced through the extract, transform, load and query cycle.

The Vision for BI 2.0

If the goal of BI 2.0 is to reduce latency - to cut the time between when an event occurs and when an action is taken - in order to improve business performance, existing BI architectures will struggle.

With BI 2.0, data isn't stored in a database or extracted for analysis; BI 2.0 uses event-stream processing. As the name implies, this approach processes streams of events in memory, either in parallel with actual business processes or as a process step itself.

Typically, this means looking for scenarios of events, such as patterns and combinations of events in succession, which are significant for the business problem at hand. The outputs of these systems are usually real-time metrics and alerts and the initiation of immediate actions in other applications. The effect is that analysis processes are automated and don't rely on human action, but can call for human action where it is required.

BI 2.0 gets data directly from middleware, the natural place to turn for real-time data. Standard middleware can easily create streams of events for analysis, which is performed in memory. When these real-time events are compared to past performance, problems and opportunities can be readily and automatically identified.

Intelligent Processes

In order to make a difference to the bottom line, businesses need to make processes smarter. This means either building outstanding ability into automated processes, or providing operations staff with actionable information and changing the day-to-day standard operating procedure to drive data-driven processes. The solution is to leverage the messaging technologies underpinning transactional systems, business process management and SOA, and event-driven real-time BI technologies. These fit together very naturally; you can think of real-time BI as analysis services in an SOA world.

BI 2.0 needs to work with both well-defined processes and less-defined areas. Many processes can't be modeled and explicitly defined using business process management. In fact, the majority of processes today aren't modeled but rather are less explicitly defined. Business users often can't describe their processes accurately enough, and yet operational processes still need intelligence.

BI 2.0 is driven by this need for intelligent processes and has the following characteristics:

Event driven. Automated processes are driven by events; therefore, it is implicit that in order to create smarter processes, businesses need to be able to analyze and interpret events. This means analyzing data, event by event, either in parallel with the business process or as an implicit process step.

Real time. This is essential in an event-driven world. Without it, it is hard to build in BI capabilities as a process step and nearly impossible to automate actions. By comparison, batch processes are informational - they report on the effectiveness of a process but cannot be part of the process itself unless time is not critical. Any application that involves trading, dynamic pricing, demand sensing, security, risk, fraud, replenishment or any form of interaction with a customer is a time-critical process and requires real-time processing.

Automate analysis. In order to automate day-to-day operational decision-making, organizations need to be able to do more than simply present data on a dashboard or in a report. The challenge is turning real-time data into something actionable. In short, businesses need to be able to automatically interpret data, dynamically, in real time. What this means in practice is the ability to compare each individual event with what would normally be expected based on past or predicted future performance. BI 2.0 products, therefore, must understand what normal looks like at both individual and aggregate levels and be able to compare individual events to this automatically.

Forward looking. Understanding the impact of any given event on an organization needs to be forward looking. For example, questions such as "Will my shipment arrive on time?" and "Is the system going to break today?" require forward-looking interpretations. This capability adds immediate value to operations teams that have a rolling, forward-looking perspective of what their performance is likely to be at the end of the day, week or month.

Process oriented. To be embedded within a process in order to make the process inherently smarter requires that BI 2.0 products be process-oriented. This doesn't mean that the process has been modeled with a business process management tool. Actions can be optimized based on the outcome of a particular process, but the process itself may or may not be explicitly defined.

Scalable. Scalability is naturally a cornerstone of BI 2.0 because it is based on event-driven architectures. This is critical because event streams can be unpredictable and occur in very high volumes. For example, a retailer may want to build a demand-sensing application to track the sales of every top-selling item for every store. The retailer may have 30,000 unique items being sold in 1,000 stores, creating 30 million store/item combinations that need tracking and may be selling 10 million items per day. Dealing with this scale is run of the mill for BI 2.0. In fact, this scalability itself enables new classes of applications that would never have been possible using traditional BI applications.

Real-Time, Event-Driven BI

BI 2.0 represents both a bold new vision and a fundamental shift in the way businesses can use information. It extends the definition of BI beyond the traditional data warehouse and query tools to include dynamic in-process and automated decision-making.

In the past, organizations have been forced to rely on out-of-date information and to attempt to fix problems long after they occur. BI 2.0 changes that because it allows BI capabilities to be built into processes themselves - in short, it lets companies create smarter processes.

When BI steps up to identifying problems and initiating corrective actions, not just presenting data, it has definitely evolved. It is ever closer to providing really useful information that can make a difference to the bottom line. Isn't this what BI was supposed to be all along?

Monday 3 August 2009

Top 5 Trends in BI in 2009


Top 5 trends in Business Intelligence for 2009


Trend #1: Complex Event Processing (CEP) comes of age

The first generation of successful enterprise data warehouses uncovered new insights and led to innovative ways to improve business. These systems are optimized for one-time queries on mostly static data captured in the data warehouse long after the event that generated it has occurred. The paradigm is long-lived data, short-lived queries.

CEP, a logical follow-on from business activity monitoring (BAM), enables analysis of data streams and linking of seemingly unrelated events in a meaningful way. Instead of storing data and having the execution of a query as the catalyst for results, a continuous query system effectively "stores" the queries and new results are initiated by the arrival of new data, generating real-time insight and/or triggering appropriate action. (The new paradigm is long-lived queries, short-lived data.)

CEP has heretofore been conspicuously missing from the mainstream BI arena, necessitating stovepipe CEP implementations that are only loosely integrated with organizations' existing visualization, reporting, dashboarding, information modeling, metadata, and other BI infrastructure components. We are seeing indications of that changing as leading BI vendors partner with and acquire CEP engine providers, and as BI users incorporate CEP in growing numbers.


Trend #2: Convergence of structured and unstructured data

This has been a topic of interest for years, but it is coming up in more conversations with current and prospective customers than ever before. In the HP 2009 survey, 60% of respondents indicated that they have an identified need to analyze unstructured data as part of their BI systems, with over half of those either doing the unstructured data analysis today or developing the capability.

As retailers strive to be more customer-centric, healthcare organizations strive to efficiently improve and manage patient care, and financial institutions strive to better detect risk and fraud threats, they will hit limitations by not including unstructured data in the ever-increasing analysis.

Using only structured data as a proxy for "what is happening" and making an inference from that, without correlating with available unstructured data, can lead to very wrong decisions. For example, coded diagnoses targeted for the payer often do not indicate what's really wrong with a patient. Analysis of cancer case reimbursements might indicate that more money should be put into brain cancer research because of its prevalence. But, if any cancer metastasizes to the brain, it's often coded as brain cancer because of the greater likelihood of reimbursement. Patient file notes would indicate the true diagnosis.

Another common business driver is to mine call center service logs and e-mail together to better understand customers, for early problem detection and to discern the actual cause of problems.

Trend #3: The line is blurring between data warehouse, operational data store (ODS) and operational systems

Initially, users expected ERP systems to provide needed reporting. When these systems couldn't meet requirements due to backlog and overload, users turned to the data warehouse. Traditional BI satisfies most strategic reporting and analysis, but not real-time operational reporting with its associated needs for high-volume real-time data updates, high availability and a high throughput rate of operational queries. Operational reporting has high overhead and often ties up the data warehouse, preventing other analytics from running.

We are seeing operational reporting as a top business initiative, and increasing interest in the use of a data provisioning platform as companies need to extend the data warehouse to more operational use. The platform needs to go beyond the capabilities of an ODS, providing operational reporting, data cleansing, metadata management and data warehouse staging. Such a data hub enables agility and new applications, while preserving and enhancing the existing data warehouse structure, and does it in a much more efficient and cost-effective manner than using disparate independent data marts. A hub that connects to existing enterprise service buses (ESBs) and allows architectural flexibility, including federation for remote data, reflects the changing nature of the business while allowing centralized control over data quality and data access privileges.

The result is the ability to do operational BI, which involves embedding and automating analytics in a process so that a person — or another process — can act on generated information in real time, making decisions and taking action in the context of a business process.


Trend #4: Data integration focus gaining new momentum

Many BI systems in place today were built for strategic decisions, the sweet spot of traditional BI. Analysis is done by a small number of people, over a period of time, allowing analysts to manually cleanse and reconcile data from multiple disparate sources, and to ensure that business rules are applied appropriately and consistently. Many organizations would like to increase their intelligence by giving more employees access to these analytic tools, and applying them to operational decisions. But it's more than a matter of increasing capacity for data volumes and query throughput, and giving the users simpler tools. The limitation of first-generation BI systems is not simply their inability to handle large volumes of data and users, but their lack of data integration rigor, including data cleansing, MDM, and metadata management. Operational analysis does not afford the time for manual oversight to ensure proper quality, reconciliation and classification of the source data, which may have to be served to applications or processes where the decisions are then made. Organizations intent on leveraging their data and expanding their analytic capability are recognizing the value of an underlying infrastructure which provides well-integrated, high-quality data to applications, processes and people.

According to Gartner, "Contemporary pressures are leading to an increased investment in data integration in all industries and geographic regions." In addition, "recent focus on cost control has made data integration tools a surprising priority as organizations realize the 'people' commitment for implementing and supporting custom-coded or semi-manual data integration approaches is no longer reasonable." (Gartner, "Magic Quadrant for Data Integration Tools," by Ted Friedman, Mark A. Beyer, Andreas Bitterer, 22 September 2008.) The weak economy will drive M&A in many industries, resulting in a further need to integrate disparate data to get a single view of the business, supporting continued demand for MDM. And financial regulations are likely to increase. Transparency needed for regulatory compliance requires a consistent and complete view of the data which represents the performance and operation of the business.

In addition to early implementations, we are now also seeing the results of more recent data warehouse modernization and data mart consolidation projects that were undertaken to cut costs, improve performance and provide more headroom. Where the approach was to move existing data structures to a new platform to meet those immediate goals without addressing the fundamental data integration issues, organizations are left with the same unwieldy data structure as before, preventing them from expanding the use of the data warehouse to meet additional business needs.

Organizations are realizing the need for an overall enterprise information management (EIM) strategy in order to leverage data as a corporate asset, to apply advanced analytics that will help them achieve a discipline of fact-based decision making, and to eliminate the wastefulness of different teams using different tools with little consistency and lots of overlap and redundancy. They are also seeing that data integration is a critical component of an overall EIM strategy. Inconsistent meanings create barriers to reliable analytics. As the boundaries between application domains like CRM, ERP and product lifecycle management continue to erode, there is a growing need to create an enterprise-wide information strategy to ensure semantic consistency for all users, applications and services.

Trend #5: Analytics moves to the front office — more sophistication in the hands of business users

Companies are looking to apply advanced analytics to ERP, CRM and supply chain management systems in order to achieve strategic competitive differentiation. The traditional approach to analytics has been to hire modelers with PhDs who spend three months developing a model, producing up to a few dozen or a few hundred models per year. The modeling runs offline to do customer segmentation, for example. Capturing this sophistication in tools that can be used by business managers enables the development and use of not hundreds, but thousands of models, with a much shorter time to market. This approach makes it possible for someone who doesn't know what a neural network is to use one as a mainstream capability.

There is an Internet influence on interfaces as well. Instead of pulling data from multiple sources and building an analysis cube, the user will go to a portal and request data elements. Provisioning will be automated rather than manual, assuming that a data integration infrastructure has been put in place.

Monday 13 July 2009

Nilu Fule - gone


Noted Marathi stage and film actor Nilu Fule died of cancer this morning at the age of 80. He acted in a total of 130 films and many theatre shows.
He was an all-rounder in terms of acting, and personally I was a fan of the man for his own special style. Salute.

Adding a New Node to an Oracle RAC Cluster


Contents

1.        Introduction
2.        Preparing Access to the New Node
2.1        Create the operating system user and group on the new node
2.2        Configuring the Secure Shell
3.        Clone the Oracle Clusterware Home Directory
4.        Clone the Automatic Storage Management Home Directory
5.        Clone Oracle Software Home Directory
6.        Creating a Listener on the New Node
7.        Create a new cluster instance on the new node
8.        Conclusions



 

1.        Introduction:
This process describes how to add a new node to an existing Oracle Real Application Clusters (Oracle RAC) environment.

2.        Preparing Access to the New Node
The following steps need to be followed to prepare the node before it is added to the cluster:

2.1        Create the operating system user and group on the new node
When installing Oracle RAC on UNIX and Linux platforms, the software is installed on one node, and OUI uses the Secure Shell (SSH) to copy the software binary files to the other cluster nodes.

a)        If this is the first time Oracle software is being installed on the new node and the dba group does not exist, then create the dba group as follows:



-        Log in as the root user
-        # /usr/sbin/groupadd dba


b)        If the user that owns the Oracle software does not exist on the new node then create the user as follows:


# useradd -u <UID> -g dba -d <Home> -r oracle

Set the password for the oracle account using the following command. The password should be the same as on the other nodes.

# passwd oracle

Note: The UID should be the same as the UID of the oracle user on any existing node.

c) Verify that the attributes of the user oracle are identical on all the existing nodes:

# id oracle

2.2        Configuring the Secure Shell
a)        Log in to the new node as the oracle user
b)        Determine if a .ssh directory exists in the oracle user's home directory. If not, create the .ssh directory and set the directory permission so that only the oracle user has access to the directory, as shown here:


$ ls -a $HOME
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh








c)        Create the RSA-type public and private encryption keys on the new node as follows:


$ /usr/bin/ssh-keygen -t rsa

At the prompts:

        Accept the default location for the key file by pressing the Enter key.
        When prompted for a pass phrase, enter and confirm a pass phrase that is different from the oracle user's password.


This command creates the public key in the /home/oracle/.ssh/id_rsa.pub file and the private key in the /home/oracle/.ssh/id_rsa file.

d)        Create the DSA type public and private keys on the new node as follows:


$ /usr/bin/ssh-keygen -t dsa

At the prompts:

        Accept the default location for the key file by pressing the Enter key.
        When prompted for a pass phrase, enter and confirm a pass phrase that is different from the oracle user's password.


This command creates the public key in the /home/oracle/.ssh/id_dsa.pub file and the private key in the /home/oracle/.ssh/id_dsa file.

e)        Add the Keys to an Authorized Key File


Use Secure Copy (SCP) or Secure FTP (SFTP) to copy the authorized_keys file to the oracle user .ssh directory from any existing cluster node. The following example uses SCP to copy the authorized_keys file to the new node from an existing node.

        Log on to an existing node as oracle user


        $scp ~/.ssh/authorized_keys <New Node>:<oracle user HOME>/.ssh/


You are prompted to accept an RSA or DSA key. Enter yes, and you see that the node you are copying to is added to the known_hosts file.

When prompted, provide the password for the oracle user, which should be the same on all the nodes in the cluster (Note: this is the user password, not the newly specified passphrase). The authorized_keys file is then copied to the remote node.

        Log on to the new node as oracle user where you copied the authorized_keys file. Then change to the .ssh directory, and using the cat command, add the RSA and DSA keys for the new node to authorized_keys file as follows:


$ cat id_rsa.pub  >> authorized_keys
$ cat id_dsa.pub  >> authorized_keys

        Use SCP to copy the authorized_keys file from the new node to all the other existing cluster nodes, overwriting the existing version.


$scp ~/.ssh/authorized_keys <Existing Node>:<oracle user HOME>/.ssh/

        From the new node complete the SSH configuration by using the ssh command to retrieve the date on each node in the cluster.


$ ssh <Existing Node> date

The first time you use SSH to connect to one node from another node, you see a message similar to the following:

The authenticity of host 'docrac1(143.46.43.100) can't be established. RSA key fingerprint is 7z:ez:e7:f6:f4:f2:d1:a6:f7:4e:zz:me:a7:48:ae:f6:7e. Are you sure you want to continue connecting (yes/no)? yes
Enter yes at the prompt to continue. You should not see this message again when you connect from this node to the other node. If you see any other messages or text, apart from the date, then the installation can fail.

        Add the public and private node names for the new node to the /etc/hosts file on the existing nodes


        Verify that the new node can be accessed (using the ping command) from the existing nodes


        Run the following command on any existing node to verify that the new node has been properly configured:


$ cluvfy stage -pre crsinst -n <New Node Name>

3.        Clone the Oracle Clusterware Home Directory
Use Oracle Universal Installer (OUI) to add an Oracle Clusterware home to the new node being added to the Oracle RAC cluster.

        Go to the <Cluster Home>/oui/bin directory of an existing node and run the addNode.sh script.



$ cd <Cluster Home>/oui/bin
$ ./addNode.sh

        OUI starts and first displays the Welcome window.


Click Next.

The Specify Cluster Nodes to Add to Installation window appears.

        Select the new node or nodes that you want to add, then click Next.
        Verify the entries that OUI displays on the Summary Page and click Next.
        Run the rootaddNode.sh script from the <Cluster Home>/install/ directory on the existing node when prompted to do so.


Basically, this script adds the node applications of the new node to the OCR configuration.
        Run the orainstRoot.sh script on the new node which is being added if OUI prompts you to do so.
        Run the <Cluster Home>/root.sh script on the new node to start Oracle Clusterware on the new node.


        Add the new node's Oracle Notification Services (ONS) configuration information to the shared Oracle Cluster Registry (OCR).


-        Obtain the ONS port identifier used by the new node by running the following command from the <Cluster Home>/opmn/conf directory on an existing node:


$cat ons.config

-        After you locate the ONS port number for the new node, you must make sure that the ONS on existing nodes can communicate with the ONS on the new node.


From the <Cluster Home>/bin directory on an existing node, run the Oracle Notification Services configuration utility as shown below, where remote_port is the port number obtained from previous step:

$ ./racgons add_config <New Node>:remote_port

        At the end of the cloning process, you should have Oracle Clusterware running on the new node. To verify the installation of Oracle Clusterware on the new node, you can run the following command as the root user on the newly configured node:


$CRS_home/bin/cluvfy stage -post crsinst -n docrac3 -verbose

4.        Clone the Automatic Storage Management Home Directory
Use Oracle Universal Installer (OUI) to add ASM home to the new node being added to the Oracle RAC cluster.

        Go to the $ASM_HOME/oui/bin directory on an existing node and run the addNode.sh script.



        When OUI displays the Node Selection window, select the new node to be added then click Next.


        Verify the entries that OUI displays on the Summary window, then click Next.


        Run the root.sh script on the new node, from the ASM home directory on that node when OUI prompts you to do so.


You now have a copy of the ASM software on the new node.

5.        Clone Oracle Software Home Directory
Use Oracle Universal Installer (OUI) to add Oracle Software home to the new node being added to the Oracle RAC cluster.

        Go to the $ORACLE_HOME/oui/bin directory on an existing node and run the addNode.sh script.



        When OUI displays the Specify Cluster Nodes to Add to Installation window, select the node to be added, then click Next.


        Verify the entries that OUI displays in the Cluster Node Addition Summary window, then click Next.


        Run the root.sh script on the new node, from the $ORACLE_HOME directory on that node when OUI prompts you to do so.


After completing these steps, you should have an installed Oracle RAC home on the new node.

6.        Creating a Listener on the New Node
To service database instance connection requests on the new node, you must create a Listener on that node. Use the Oracle Net Configuration Assistant (NETCA) to create a Listener on the new node. Before beginning this procedure, ensure that your existing nodes have the $ORACLE_HOME environment variable set correctly.

        Start the Oracle Net Configuration Assistant by entering netca at the system prompt from the $ORACLE_HOME/bin directory.



        Select Listener configuration, and click Next.


        NETCA displays the Listener Configuration, Listener window.


        Select Add to create a new Listener, then click Next.


        NETCA displays the Listener Configuration, Listener Name window.


        Accept the default value of LISTENER for the Listener name by clicking Next.


        NETCA displays the Listener Configuration, Select Protocols window.


        Choose TCP and move it to the Selected Protocols area, then click Next.


        NETCA displays the Listener Configuration, TCP/IP Protocol window.


        Choose Use the standard port number of 1521, then click Next.


        NETCA displays the Real Application Clusters window.


        Select Cluster configuration for the type of configuration to perform, then click Next.


        NETCA displays the Real Application Clusters, Active Nodes window.


        Select the name of the node you are adding, then click Next.


        NETCA creates a Listener using the configuration information provided. You can now exit NETCA.



You should now have a Listener named LISTENER running on the new node.

7.        Create a new cluster instance on the new node
        Start DBCA by entering dbca at the system prompt from the $ORACLE_HOME/bin directory.



        Select Oracle Real Application Clusters database, and then click Next.


DBCA displays the Operations window.

        Select Instance Management, and then click Next.


DBCA displays the Instance Management window.

        Select Add an Instance, then click Next.


DBCA displays the List of Cluster Databases window, which shows the databases and their current status, such as ACTIVE or INACTIVE.

        In the List of Cluster Databases window, select the active Oracle RAC database to which you want to add an instance. Enter the user name and password for the database user that has SYSDBA privileges. Click Next.


DBCA will spend a few minutes performing tasks in the background, then it will display the Instance naming and node selection window.

        In the Instance naming and node selection window, enter the instance name in the field at the top of this window if the default instance name provided by DBCA does not match your existing instance naming scheme.


Click Next to accept the instance name.

DBCA displays the Instance Storage window.

        In the Instance Storage window, you have the option of changing the default storage options and file locations for the new database instance. In this example, you accept all the default values and click Finish.


DBCA displays the Summary window.

        Review the information in the Summary window, then click OK to start the database instance addition operation. DBCA displays a progress dialog box showing DBCA performing the instance addition operation.


        During the instance addition operation, if you are using ASM for your cluster database storage, DBCA detects the need for a new ASM instance on the new node.


When DBCA displays a dialog box asking if you want ASM to be extended, click Yes.

After DBCA extends ASM on the new node and completes the instance addition operation, DBCA displays a dialog box asking whether or not you want to perform another operation. Click No to exit DBCA.

8.        Conclusions
You should now have a new cluster database instance and ASM instance running on the new node. After you terminate your DBCA session, you should run the following command to verify the administrative privileges on the new node and obtain detailed information about these privileges:

$ <Cluster Home>/bin/cluvfy comp admprv -o db_config -d oracle_home -n <New Node Name> -verbose
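
As an additional sanity check (a minimal sketch; it assumes you can connect to the cluster database as a privileged user from SQL*Plus), you can also confirm that the new instance has joined the cluster database:

-- Each cluster database instance appears as one row; the new node should be listed with STATUS = 'OPEN'
SELECT inst_id, instance_name, host_name, status
FROM gv$instance
ORDER BY inst_id;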

Converting date from local time zone to GMT, BST


I just thought I should add some technical bits to my blog that could be helpful to people trying to find some quick help. Here is one.

--*1*--
Oracle functions to transpose time between different time zones to GMT.

Oracle provides a number of functions to transpose between different time zones and GMT. You need to know (and store) the 'Olson' time zone code for the time zone you want to convert from.

Please note that when using these functions, if a region name is used, e.g. 'Europe/London', then the conversion will also account for BST (if the date falls within daylight saving time). However, if the 'GMT' code is used, then the offset will always be GMT.

SELECT
-- Show timestamp as if in 'Europe/London'
CAST ('01-JAN-2009 09:00:01.01' AS TIMESTAMP) at time zone 'Europe/London' as A,
-- Show timestamp as if in 'Turkey'
CAST ('01-JAN-2009 09:00:01.01' AS TIMESTAMP) at time zone 'Turkey' as B,
-- Take timestamp from 'Europe/London' and show as 'Europe/London'
FROM_TZ(CAST('01-JAN-2009 09:00:01.01' AS TIMESTAMP), 'Europe/London' )  AT TIME ZONE 'Europe/London' as C,
-- Take timestamp from 'Turkey' and show as 'Europe/London'
FROM_TZ(CAST('01-JAN-2009 09:00:01.01' AS TIMESTAMP), 'Turkey' )  AT TIME ZONE 'Europe/London' as D,
-- Take timestamp from 'Europe/London' and show as 'Turkey' remove Timezone
CAST (FROM_TZ(CAST ('01-JAN-2009 09:00:01.01' AS TIMESTAMP), 'Europe/London' )  AT TIME ZONE 'Turkey' AS TIMESTAMP) as E,
-- Take timestamp from 'Turkey'
FROM_TZ(CAST('01-JAN-2009 09:00:01.01' AS TIMESTAMP), 'Turkey' ) as F,
-- Show offset for 'EST'
tz_offset('EST') as G,
-- Show offset for 'Turkey'
tz_offset('Turkey') as H
FROM
DUAL

A list of the time zone names can be retrieved from the Oracle view V$TIMEZONE_NAMES.
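
As a worked sketch of the title use case (converting a value stored in UK local time to GMT), assuming a hypothetical ORDERS table with a DATE column ORDER_DATE that holds Europe/London local time:

SELECT order_date,
       -- Interpret the stored value as Europe/London local time and render it in GMT
       FROM_TZ(CAST(order_date AS TIMESTAMP), 'Europe/London') AT TIME ZONE 'GMT' AS order_date_gmt,
       -- The same conversion returned as a plain UTC timestamp (no time zone attached)
       SYS_EXTRACT_UTC(FROM_TZ(CAST(order_date AS TIMESTAMP), 'Europe/London')) AS order_date_utc
FROM orders;

Because 'Europe/London' is a region name, the offset applied is GMT in winter and BST (+01:00) in summer, so the conversion is daylight-saving aware.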

--***--