Friday 14 August 2009

Chicken Tikka Masala - with yogurt-marinated chicken.



Ingredients:
        Boneless chicken - 1 kg (you may use chicken with bones if you fancy)
        2 hot green chillies
        1 teaspoon coriander seeds (dhaniya)
        1 teaspoon fenugreek seeds
        1+1 teaspoons turmeric
        1+1 teaspoons red chilli powder
        2+2 tablespoons finely ground masala (mix of spices) - take any 'chicken tikka masala' pack from Tesco.
        2 tablespoons oil - sunflower or olive, as you like. Do not use vegetable oil or any other.
        2 teaspoons salt
        500 g yogurt
        2 tomatoes
        4 garlic cloves
Procedure

        [1]
        Marinate the chicken in the yogurt by mixing in 1 teaspoon turmeric, 1 tablespoon finely ground masala and 1 teaspoon red chilli powder. Mix well so that all the chicken pieces are immersed in the 'spicy' yogurt.
        Leave for at least 30 minutes.
        [2]
        * Cut the 2 hot green chillies into small pieces.
        * Cut the 2 tomatoes into small pieces.
        * Cut the garlic into small pieces.
        [3]
        * Take a pan.
        * Add 2 tablespoons oil and let it heat a bit, maybe 2 minutes on full gas. (Oh yes, switch on the gas before this.)
        * Add 1 teaspoon fenugreek seeds and 1 teaspoon coriander seeds.
        * Within 4-5 seconds (if the oil is really hot), add the cut green chillies and garlic.
        * Mix for 10 seconds or so (again, if the oil is hot).
        * Add 1 teaspoon turmeric and mix for 4-5 seconds.
        * Add 2 tablespoons finely ground masala and 1 teaspoon red chilli powder and mix for 5-10 seconds.
        * This will no longer look like a dry mix of spices - lower the gas to a medium flame now.
        * Add the tomatoes and mix really well. Let the tomatoes more or less melt.
        * Now add the marinated chicken slowly and mix really well to form a uniform mixture.
        * Increase the gas to full and let this boil for at least 15-20 minutes.
                * Occasionally you will need to stir, as the mixture may boil and try to come out of the pan.
         (You can do this in a pressure cooker instead - you would just need to cook until 3 whistles of the cooker.)
        * Check now whether the chicken is cooked. Poke a piece with a fork and check that it is soft and not like rubber.
        * If the chicken is cooked, your curry is ready. Add salt to taste - typically 2 teaspoons for this much mixture.
       
        Eat with rice or bread/naan/chapati, etc.


               
       

10 home remedies to avoid swine flu


(Source: Times of India)

Are the rising swine flu casualties giving you jitters? Not sure how you can avoid falling prey to the growing epidemic? First and foremost, there is absolutely no need to panic.

Watching television to keep tabs on the progress of H1N1, particularly in the badly affected areas like Pune, is all right. But don't let the hysterical anchors get under your skin and start wearing a mask each time you step out of the house, unless you are visiting a very crowded area. Then too, the mask will protect you only for a specified period.

Rather than give in to the swine flu panic, stockpile Tamiflu and N-95 masks at home and enrich pharma companies, you can take a number of other measures to ensure that the virus is not able to get you, irrespective of which part of the world you are in.

It is essential to remember that all kinds of viruses and bacteria can attack you when your immune system is weak, or they can weaken it easily. Hence, building your own defences would be a better, more practical, long-lasting and much more economical idea.

Here are some easy steps you can take to tackle a flu virus of any kind, including swine flu. It is not necessary to follow all the steps at once. You can pick and choose a combination of remedies that suit you best. However, if you are already suffering from flu, these measures can help only up to an extent. And, if you have been infected by H1N1, visiting a hospital and staying in solitary confinement is a must.

1. Have five duly washed leaves of Tulsi (known as Basil in English; medicinal name Ocimum sanctum) every day in the morning. Tulsi has a large number of therapeutic properties. It keeps the throat and lungs clear and helps fight infections by strengthening your immunity.

2. Giloi (medicinal name Tinospora cordifolia) is a commonly available plant in many areas. Take a one-foot-long branch of giloi, add five to six leaves of Tulsi and boil in water for 15-20 minutes, or long enough to allow the water to extract its properties. Add black pepper and sendha (salt used during religious fasts), rock or black salt, or misri (crystallised sugar-like lumps) to make it sweet, according to taste. Let it cool a bit and drink this kadha (concoction) while still warm. It will work wonders for your immunity. If the giloi plant is not available, get processed giloi powder from Hamdard or others, and make a similar drink once a day.

3. A small piece of camphor (kapoor), approximately the size of a tablet, should be taken once or twice a month. It can be swallowed with water by adults, while children can take it along with mashed potatoes or banana because they will find it difficult to have it without any aid. Please remember that camphor is not to be taken every day, but only once each season, or once a month.

4. Those who can take garlic should have two pods (cloves) of raw garlic first thing in the morning, swallowed daily with lukewarm water. Like the earlier measures mentioned, garlic too strengthens immunity.

5. Those not allergic to milk should take a glass of hot or lukewarm milk every night with a small measure of haldi (turmeric).

6. Aloe vera (gwarpatha) too is a commonly available plant. Its thick, long, cactus-like leaves contain an odourless gel. A teaspoon of the gel taken with water daily can work wonders not only for your skin and joint pains, but also for boosting immunity.

7. Take homeopathic medicines — Pyrogenium 200 and Influenzium 200 in particular — five tablets three times a day, or two to three drops three times a day. While these are not specifically targeted at H1N1 either, they work well as a preventive against common flu viruses.

8. Do Pranayam daily (preferably under guidance if you are not already initiated into it) and go for a morning jog/walk regularly to keep your throat and lungs in good condition and your body in fine fettle. Even in small measures, it will work wonders for your body's resistance against all such diseases that attack the nose, throat and lungs, besides keeping you fit.

9. Have citrus fruits, particularly Vitamin C rich Amla (Indian gooseberry) juice. Since fresh Amla is not yet available in the market (not for another three to four months), it is not a bad idea to buy packaged Amla juice which is commonly available nowadays.

10. Last but not least, wash your hands frequently every day with soap and warm water for 15-20 seconds, especially before meals or after touching a surface that you suspect could be contaminated with flu virus, such as a door handle or knob, and especially if you have returned from a public place or used public transport. Alcohol-based hand cleaners should be kept handy at all times and used when you cannot get to soap and warm water.

(The author is an avid reader and follower of alternative therapies including spiritual healing, ayurveda, yoga and homeopathy)


Thursday 13 August 2009

How to calculate the 95th percentile of a set of values in Oracle?



Oracle provides functions to calculate percentile values in a set of ordered data.
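
For the question in the title, a minimal sketch of the 95th percentile calculation is shown below. It assumes a hypothetical table response_times with a numeric column elapsed_ms; substitute your own table and column. PERCENTILE_CONT interpolates between values, while PERCENTILE_DISC returns an actual value present in the data.

-- 95th percentile of a numeric column (hypothetical table/column names).
-- PERCENTILE_CONT(0.95) interpolates between adjacent values;
-- PERCENTILE_DISC(0.95) returns an actual value from the data set.
SELECT
    PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY elapsed_ms) AS p95_cont,
    PERCENTILE_DISC(0.95) WITHIN GROUP (ORDER BY elapsed_ms) AS p95_disc
FROM response_times;

The rest of this post, from the Oracle Database Data Warehousing Guide, explains how these functions work in detail.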

Inverse Percentile Functions
Using the CUME_DIST function, you can find the cumulative distribution (percentile) of a set of values. However, the inverse operation (finding what value computes to a certain percentile) is neither easy to do nor efficiently computed. To overcome this difficulty, the PERCENTILE_CONT and PERCENTILE_DISC functions were introduced. These can be used both as window reporting functions and as normal aggregate functions.

These functions need a sort specification and a parameter that takes a percentile value between 0 and 1. The sort specification is handled by using an ORDER BY clause with one expression. When used as a normal aggregate function, it returns a single value for each ordered set.

PERCENTILE_CONT is a continuous function computed by interpolation, while PERCENTILE_DISC is a step function that assumes discrete values. Like other aggregates, PERCENTILE_CONT and PERCENTILE_DISC operate on a group of rows in a grouped query, but with the following differences:

* They require a parameter between 0 and 1 (inclusive). A parameter specified outside this range results in an error. This parameter should be specified as an expression that evaluates to a constant.
* They require a sort specification. This sort specification is an ORDER BY clause with a single expression. Multiple expressions are not allowed.
Normal Aggregate Syntax
[PERCENTILE_CONT | PERCENTILE_DISC]( constant expression )
WITHIN GROUP ( ORDER BY single order by expression
[ASC|DESC] [NULLS FIRST| NULLS LAST])
Inverse Percentile Example Basis
We use the following query to return the 23 rows of data used in the examples of this section:
SELECT cust_id, cust_credit_limit, CUME_DIST()
OVER (ORDER BY cust_credit_limit) AS CUME_DIST
FROM customers WHERE cust_city='Marshal';
CUST_ID CUST_CREDIT_LIMIT CUME_DIST
---------- ----------------- ----------
28344 1500 .173913043
8962 1500 .173913043
36651 1500 .173913043
32497 1500 .173913043
15192 3000 .347826087
102077 3000 .347826087
102343 3000 .347826087
8270 3000 .347826087
21380 5000 .52173913
13808 5000 .52173913
101784 5000 .52173913
30420 5000 .52173913
10346 7000 .652173913
31112 7000 .652173913
35266 7000 .652173913
3424 9000 .739130435
100977 9000 .739130435
103066 10000 .782608696
35225 11000 .956521739
14459 11000 .956521739
17268 11000 .956521739
100421 11000 .956521739
41496 15000 1
PERCENTILE_DISC(x) is computed by scanning the CUME_DIST values in each group until you find the first one greater than or equal to x, where x is the specified percentile value. For the example query, PERCENTILE_DISC(0.5) returns 5,000, as the following illustrates:
SELECT PERCENTILE_DISC(0.5) WITHIN GROUP
(ORDER BY cust_credit_limit) AS perc_disc, PERCENTILE_CONT(0.5) WITHIN GROUP
(ORDER BY cust_credit_limit) AS perc_cont
FROM customers WHERE cust_city='Marshal';
PERC_DISC PERC_CONT
--------- ---------
5000 5000
The result of PERCENTILE_CONT is computed by linear interpolation between rows after ordering them. To compute PERCENTILE_CONT(x), we first compute the row number RN = (1 + x*(n-1)), where n is the number of rows in the group and x is the specified percentile value. The final result of the aggregate function is computed by linear interpolation between the values from the rows at row numbers CRN = CEIL(RN) and FRN = FLOOR(RN).

The final result is: PERCENTILE_CONT(x) = if (CRN = FRN = RN) then (value of expression from row at RN) else (CRN - RN) * (value of expression for row at FRN) + (RN - FRN) * (value of expression for row at CRN).

Consider the previous example query, where we compute PERCENTILE_CONT(0.5). Here n is 23, so the row number RN = (1 + 0.5*(n-1)) = 12. Since FRN = CRN = 12, we return the value from row 12 (5,000) as the result.

Another example: to compute PERCENTILE_CONT(0.66), the row number is RN = (1 + 0.66*(n-1)) = (1 + 0.66*22) = 15.52, so PERCENTILE_CONT(0.66) = (16 - 15.52)*(value of row 15) + (15.52 - 15)*(value of row 16) = 0.48*7,000 + 0.52*9,000 = 8,040. These results are:
SELECT PERCENTILE_DISC(0.66) WITHIN GROUP
(ORDER BY cust_credit_limit) AS perc_disc, PERCENTILE_CONT(0.66) WITHIN GROUP
(ORDER BY cust_credit_limit) AS perc_cont
FROM customers WHERE cust_city='Marshal';
PERC_DISC PERC_CONT
---------- ----------
9000 8040
Inverse percentile aggregate functions can appear in the HAVING clause of a query like other existing aggregate functions.
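
As a hedged illustration of that HAVING usage (against the same sample customers table; the 5,000 threshold is arbitrary), one might write:

-- Sketch: keep only cities whose median credit limit exceeds an
-- arbitrary threshold of 5,000.
SELECT cust_city,
       PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY cust_credit_limit) AS median_limit
FROM customers
GROUP BY cust_city
HAVING PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY cust_credit_limit) > 5000;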
As Reporting Aggregates
You can also use the aggregate functions PERCENTILE_CONT and PERCENTILE_DISC as reporting aggregate functions. When used as reporting aggregate functions, the syntax is similar to that of other reporting aggregates.
[PERCENTILE_CONT | PERCENTILE_DISC](constant expression)
WITHIN GROUP ( ORDER BY single order by expression
[ASC|DESC] [NULLS FIRST| NULLS LAST])
OVER ( [PARTITION BY value expression [,...]] )
The following query computes the same thing (the median credit limit for customers in this result set), but reports the result for every row in the result set, as shown in the following output:
SELECT cust_id, cust_credit_limit, PERCENTILE_DISC(0.5) WITHIN GROUP
(ORDER BY cust_credit_limit) OVER () AS perc_disc,
PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY cust_credit_limit)
OVER () AS perc_cont
FROM customers WHERE cust_city='Marshal';
CUST_ID CUST_CREDIT_LIMIT PERC_DISC PERC_CONT
---------- ----------------- ---------- ----------
28344 1500 5000 5000
8962 1500 5000 5000
36651 1500 5000 5000
32497 1500 5000 5000
15192 3000 5000 5000
102077 3000 5000 5000
102343 3000 5000 5000
8270 3000 5000 5000
21380 5000 5000 5000
13808 5000 5000 5000
101784 5000 5000 5000
30420 5000 5000 5000
10346 7000 5000 5000
31112 7000 5000 5000
35266 7000 5000 5000
3424 9000 5000 5000
100977 9000 5000 5000
103066 10000 5000 5000
35225 11000 5000 5000
14459 11000 5000 5000
17268 11000 5000 5000
100421 11000 5000 5000
41496 15000 5000 5000
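To illustrate the PARTITION BY option in the reporting form, here is a small sketch (not from the guide) that reports each city's own median next to every customer row, rather than a single global median:

-- Per-city median credit limit reported on every row.
SELECT cust_id, cust_city, cust_credit_limit,
       PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY cust_credit_limit)
         OVER (PARTITION BY cust_city) AS city_median_limit
FROM customers;
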
Inverse Percentile Restrictions
For PERCENTILE_DISC, the expression in the ORDER BY clause can be of any data type that you can sort (numeric, string, date, and so on). For PERCENTILE_CONT, however, the expression in the ORDER BY clause must be of a numeric or datetime type (including intervals), because linear interpolation is used to evaluate it. If the expression is of type DATE, the interpolated result is rounded to the smallest unit for the type: to the nearest second for a DATE, and for interval types to the nearest second (INTERVAL DAY TO SECOND) or to the month (INTERVAL YEAR TO MONTH).
Like other aggregates, the inverse percentile functions ignore NULLs in evaluating the result. For example, when you want to find the median value in a set, Oracle Database ignores the NULLs and finds the median among the non-null values. You can use the NULLS FIRST/NULLS LAST option in the ORDER BY clause, but it will be ignored because NULLs are ignored.
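
A self-contained sketch of that NULL behaviour, using literal values instead of the sample schema: the NULL row is skipped, so the median is taken over 10, 20 and 30 and comes out as 20.

-- The NULL value is ignored; the median of (10, 20, 30) is 20.
WITH t AS (
  SELECT 10   AS val FROM dual UNION ALL
  SELECT 20   AS val FROM dual UNION ALL
  SELECT 30   AS val FROM dual UNION ALL
  SELECT NULL AS val FROM dual
)
SELECT PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY val) AS median_val
FROM t;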

Reference: Oracle Database Data Warehousing Guide

Friday 7 August 2009

BI 2.0: Is it really next generation?

We live in real time, minute by minute. News is no longer delayed by days; it is streamed in real time. We bank online and check our real-time balances. We book flights with real-time visibility of seat availability, and we select the seat we want online in real time. All these transactions generate data - lots of data.

To allow us to adapt our business models to today's real-time world, software applications are now built using event-driven technologies. Data moves around in real time over service-oriented architectures (SOAs), using loosely coupled and highly interoperable services that promote standardized application integration.

Yet business intelligence (BI) today has not changed in concept since the invention of the relational database and the SQL query - until the advent of BI 2.0.

BI 2.0 is a term that encapsulates several important new concepts about the way that we use and exploit information in businesses, organizations and government. The term is also intrinsically linked with real-time and event-driven BI but is really about the application of these technologies to business processes.

At the heart of this architecture are events, specifically XML messages. Ultimately, most modern processes themselves are actioned by events. Consequently, when you think about how to add intelligence to modern processes, the humble SQL query looks far from ideal.

The traditional data warehouse has enabled significant advances in our use of information, but its underlying architectural approach is now being questioned. Its architecture limits our ability to optimize every business process by embedding BI capabilities within. We need to look to event-driven, continuous in-process analytics to replace batch-driven reporting on processes after the fact.

In short, how can we build smarter business processes that give our organizations competitive advantage? How can we build the intelligent business?

The Client/Server Legacy

The BI tools most organizations use today were designed to solve a problem that arose in the early 1990s with the spread of the relational database. As more information was stored in databases, simply extracting it became a chore for IT departments because most users weren't interested in becoming experts in writing SQL queries. Getting the data out of databases truly became an end in itself and drove the rise of BI as we know it. Consequently, BI tools today focus on the presentation of data.

As it turns out, though, extracting data that is hours or days old and publishing it into reports, while useful, doesn't provide clear guidance on what users should do right now to improve business performance. As a result, at many companies, BI users don't even review the reports that are sent to them - they relegate them to reference documents. This is often expressed by users who complain that the information arrives too late to be really useful.

Strikingly, this is the antithesis of the real-time, actionable intelligence that many organizations need to provide the quality of service customers demand. At the most basic, such information is a day late and a dollar short in most industries. In retail, for example, three to four percent of potential revenue is foregone due to items not being adequately stocked all the time. The store manager is sent a stock report, but this arrives the next morning, after the close of business and too late to replenish the shelves.

Faster data warehouse queries or prettier dashboard reports — the focus of BI system improvements until now — clearly do not begin to solve the problem because they do not get to the heart of the architectural issue. It is undeniably the case that by the time data has been entered into the data warehouse and then extracted, it is out of date. This isn't a problem for some applications, but it is terminal for those that must run off real-time or near real-time knowledge.

A common misconception is that real-time data isn't needed because there is no way that operations teams could analyze it. This is applying BI 1.0 thinking; simply delivering more reports faster doesn't solve the problem. What's needed is a way to put relevant insight into the hands of operations staff in time to make a difference to day-to-day operations.

Reports are not the optimal deliverable of BI systems. Reports need analyzing and interpreting before any decisions can be made, and there is evidence that users don't look at them until they already know they have a problem. Rather than reporting on the effectiveness of a process after the fact, BI should be used within the process as a way of routing workflow automatically, based on what a customer is doing. In order to do this, you have to not only capture data in real time, but you need to analyze and interpret it as well.

This is essentially event-driven BI - analyzing up-to-the-minute data in the context of historic information - so that actions can be initiated automatically. The data warehouse isn't good at this. Perhaps it is simply being asked to support functions it was not designed to handle.

BI Services Arrive

Over the past few years, companies have started to present their data warehouses as Web services for use by other applications and processes connected by SOA or middleware such as an enterprise service bus (ESB). A fundamental limitation to this approach is that the data warehouse is the wrong place to look for intelligence about the performance of a current process. Real-time process state data, so relevant to this in-process intelligence, is unlikely to be in the data warehouse anyway.

Even layering a BI dashboard onto the data warehouse is inadequate for many operational tasks because they rely on a user noticing a problem based on out-of-date data. Dashboards aggregate and average. They remove details and context and present only a view of the past. Decisions require detail and need to be made in the present.

It's clear that data warehouses will remain, but their role can be clarified as the system of record, as opposed to the only place that BI is done. Reporting and presentation of historical data will continue to be done here - it was designed for that. Given the challenges associated with trying to move to a real-time data warehouse, however, it is clear that information required to support and indeed drive daily operational decisions must come from a different approach to avoid the latency introduced through the extract, transform, load and query cycle.

The Vision for BI 2.0

If the goal of BI 2.0 is to reduce latency - to cut the time between when an event occurs and when an action is taken - in order to improve business performance, existing BI architectures will struggle.

With BI 2.0, data isn't stored in a database or extracted for analysis; BI 2.0 uses event-stream processing. As the name implies, this approach processes streams of events in memory, either in parallel with actual business processes or as a process step itself.

Typically, this means looking for scenarios of events, such as patterns and combinations of events in succession, which are significant for the business problem at hand. The outputs of these systems are usually real-time metrics and alerts and the initiation of immediate actions in other applications. The effect is that analysis processes are automated and don't rely on human action, but can call for human action where it is required.

BI 2.0 gets data directly from middleware, the natural place to turn for real-time data. Standard middleware can easily create streams of events for analysis, which is performed in memory. When these real-time events are compared to past performance, problems and opportunities can be readily and automatically identified.

Intelligent Processes

In order to make a difference to the bottom line, businesses need to make processes smarter. This means either building outstanding ability into automated processes, or providing operations staff with actionable information and changing the day-to-day standard operating procedure to drive data-driven processes. The solution is to leverage the messaging technologies underpinning transactional systems, business process management and SOA, and event-driven real-time BI technologies. These fit together very naturally; you can think of real-time BI as analysis services in an SOA world.

BI 2.0 needs to work with both well-defined processes and less-defined areas. Many processes can't be modeled and explicitly defined using business process management. In fact, the majority of processes today aren't modeled but rather are less explicitly defined. Business users often can't describe their processes accurately enough, and yet operational processes still need intelligence.

BI 2.0 is driven by this need for intelligent processes and has the following characteristics:

Event driven. Automated processes are driven by events; therefore, it is implicit that in order to create smarter processes, businesses need to be able to analyze and interpret events. This means analyzing data, event by event, either in parallel with the business process or as an implicit process step.

Real time. This is essential in an event-driven world. Without it, it is hard to build in BI capabilities as a process step and nearly impossible to automate actions. By comparison, batch processes are informational - they report on the effectiveness of a process but cannot be part of the process itself unless time is not critical. Any application that involves trading, dynamic pricing, demand sensing, security, risk, fraud, replenishment or any form of interaction with a customer is a time-critical process and requires real-time processing.

Automate analysis. In order to automate day-to-day operational decision-making, organizations need to be able to do more than simply present data on a dashboard or in a report. The challenge is turning real-time data into something actionable. In short, businesses need to be able to automatically interpret data, dynamically, in real time. What this means in practice is the ability to compare each individual event with what would normally be expected based on past or predicted future performance. BI 2.0 products, therefore, must understand what normal looks like at both individual and aggregate levels and be able to compare individual events to this automatically.

Forward looking. Understanding the impact of any given event on an organization needs to be forward looking. For example, questions such as "Will my shipment arrive on time?" and "Is the system going to break today?" require forward-looking interpretations. This capability adds immediate value to operations teams that have a rolling, forward-looking perspective of what their performance is likely to be at the end of the day, week or month.

Process oriented. To be embedded within a process in order to make the process inherently smarter requires that BI 2.0 products be process-oriented. This doesn't mean that the process has been modeled with a business process management tool. Actions can be optimized based on the outcome of a particular process, but the process itself may or may not be explicitly defined.

Scalable. Scalability is naturally a cornerstone of BI 2.0 because it is based on event-driven architectures. This is critical because event streams can be unpredictable and occur in very high volumes. For example, a retailer may want to build a demand-sensing application to track the sales of every top-selling item for every store. The retailer may have 30,000 unique items being sold in 1,000 stores, creating 30 million store/item combinations that need tracking and may be selling 10 million items per day. Dealing with this scale is run of the mill for BI 2.0. In fact, this scalability itself enables new classes of applications that would never have been possible using traditional BI applications.

Real-Time, Event-Driven BI

BI 2.0 represents both a bold new vision and a fundamental shift in the way businesses can use information. It extends the definition of BI beyond the traditional data warehouse and query tools to include dynamic in-process and automated decision-making.

In the past, organizations have been forced to rely on out-of-date information and to attempt to fix problems long after they occur. BI 2.0 changes that because it allows BI capabilities to be built into processes themselves - in short, it lets companies create smarter processes.

When BI steps up to identifying problems and initiating corrective actions, not just presenting data, it has definitely evolved. It is ever closer to providing really useful information that can make a difference to the bottom line. Isn't this what BI was supposed to be all along?

Monday 3 August 2009

Top 5 Trends in BI in 2009




Trend #1: Complex Event Processing (CEP) comes of age

The first generation of successful enterprise data warehouses uncovered new insights and led to innovative ways to improve business. These systems are optimized for one-time queries on mostly static data captured in the data warehouse long after the event that generated it has occurred. The paradigm is long-lived data, short-lived queries.

CEP, a logical follow-on from business activity monitoring (BAM), enables analysis of data streams and linking of seemingly unrelated events in a meaningful way. Instead of storing data and having the execution of a query as the catalyst for results, a continuous query system effectively "stores" the queries, and new results are initiated by the arrival of new data, generating real-time insight and/or triggering appropriate action. (The new paradigm is long-lived queries, short-lived data.)

CEP has heretofore been conspicuously missing from the mainstream BI arena, necessitating stovepipe CEP implementations that are only loosely integrated with organizations' existing visualization, reporting, dashboarding, information modeling, metadata, and other BI infrastructure components. We are seeing indications of that changing as leading BI vendors partner with and acquire CEP engine providers, and as BI users incorporate CEP in growing numbers.


Trend #2: Convergence of structured and unstructured data

This has been a topic of interest for years, but it is coming up in more conversations with current and prospective customers than ever before. In the HP 2009 survey, 60% of respondents indicated that they have an identified need to analyze unstructured data as part of their BI systems, with over half of those either doing the unstructured data analysis today or developing the capability.

As retailers strive to be more customer-centric, healthcare organizations strive to efficiently improve and manage patient care, and financial institutions strive to better detect risk and fraud threats, they will hit limitations by not including unstructured data in the ever-increasing analysis.

Using only structured data as a proxy for "what is happening" and making an inference from that, without correlating with available unstructured data, can lead to very wrong decisions. For example, coded diagnoses targeted for the payer often do not indicate what's really wrong with a patient. Analysis of cancer case reimbursements might indicate that more money should be put into brain cancer research because of its prevalence. But, if any cancer metastasizes to the brain, it's often coded as brain cancer because of the greater likelihood of reimbursement. Patient file notes would indicate the true diagnosis.

Another common business driver is to mine call center service logs and e-mail together to better understand customers, for early problem detection and to discern the actual cause of problems.

Trend #3: The line is blurring between data warehouse, operational data store (ODS) and operational systems

Initially, users expected ERP systems to provide needed reporting. When these systems couldn't meet requirements due to backlog and overload, users turned to the data warehouse. Traditional BI satisfies most strategic reporting and analysis, but not real-time operational reporting, with its associated needs for high-volume real-time data updates, high availability and a high throughput rate of operational queries. Operational reporting has high overhead and often ties up the data warehouse, preventing other analytics from running.

We are seeing operational reporting as a top business initiative, and increasing interest in the use of a data provisioning platform as companies need to extend the data warehouse to more operational use. The platform needs to go beyond the capabilities of an ODS, providing operational reporting, data cleansing, metadata management and data warehouse staging. Such a data hub enables agility and new applications, while preserving and enhancing the existing data warehouse structure, and does it in a much more efficient and cost-effective manner than using disparate independent data marts. A hub that connects to existing enterprise service buses (ESBs) and allows architectural flexibility, including federation for remote data, reflects the changing nature of the business while allowing centralized control over data quality and data access privileges.

The result is the ability to do operational BI, which involves embedding and automating analytics in a process so that a person — or another process — can act on generated information in real time, making decisions and taking action in the context of a business process.


Trend #4: Data integration focus gaining new momentum

Many BI systems in place today were built for strategic decisions, the sweet spot of traditional BI. Analysis is done by a small number of people, over a period of time, allowing analysts to manually cleanse and reconcile data from multiple disparate sources, and to ensure that business rules are applied appropriately and consistently. Many organizations would like to increase their intelligence by giving more employees access to these analytic tools, and applying them to operational decisions. But it's more than a matter of increasing capacity for data volumes and query throughput, and giving the users simpler tools. The limitation of first-generation BI systems is not simply their inability to handle large volumes of data and users, but their lack of data integration rigor, including data cleansing, MDM, and metadata management. Operational analysis does not afford the time for manual oversight to ensure proper quality, reconciliation and classification of the source data, which may have to be served to applications or processes where the decisions are then made.

Organizations intent on leveraging their data and expanding their analytic capability are recognizing the value of an underlying infrastructure which provides well-integrated, high-quality data to applications, processes and people.

According to Gartner, "Contemporary pressures are leading to an increased investment in data integration in all industries and geographic regions." [6] In addition, "recent focus on cost control has made data integration tools a surprising priority as organizations realize the 'people' commitment for implementing and supporting custom-coded or semi-manual data integration approaches is no longer reasonable." [7] The weak economy will drive M&A in many industries, resulting in a further need to integrate disparate data to get a single view of the business, supporting continued demand for MDM. And financial regulations are likely to increase. The transparency needed for regulatory compliance requires a consistent and complete view of the data which represents the performance and operation of the business.

In addition to early implementations, we are now also seeing the results of more recent data warehouse modernization and data mart consolidation projects that were undertaken to cut costs, improve performance and provide more headroom. Where the approach was to move existing data structures to a new platform to meet those immediate goals without addressing the fundamental data integration issues, organizations are left with the same unwieldy data structure as before, preventing them from expanding the use of the data warehouse to meet additional business needs.

Organizations are realizing the need for an overall enterprise information management (EIM) strategy in order to leverage data as a corporate asset, to apply advanced analytics that will help them achieve a discipline of fact-based decision making, and to eliminate the wastefulness of different teams using different tools with little consistency and lots of overlap and redundancy. They are also seeing that data integration is a critical component of an overall EIM strategy. Inconsistent meanings create barriers to reliable analytics. As the boundaries between application domains like CRM, ERP and product lifecycle management continue to erode, there is a growing need to create an enterprise-wide information strategy to ensure semantic consistency for all users, applications and services.

----
[7] Gartner, "Magic Quadrant for Data Integration Tools," by Ted Friedman, Mark A. Beyer, Andreas Bitterer, 22 September 2008.
----

Trend #5: Analytics moves to the front office — more sophistication in the hands of business users

Companies are looking to apply advanced analytics to ERP, CRM and supply chain management systems in order to achieve strategic competitive differentiation. The traditional approach to analytics has been to hire modelers with PhDs who spend three months developing a model, producing up to a few dozen or a few hundred models per year. The modeling runs offline to do customer segmentation, for example. Capturing this sophistication in tools that can be used by business managers enables the development and use of not hundreds, but thousands of models, with a much shorter time to market. This approach makes it possible for someone who doesn't know what a neural network is to use one as a mainstream capability.

There is an Internet influence on interfaces as well. Instead of pulling data from multiple sources and building an analysis cube, the user will go to a portal and request data elements. Provisioning will be automated rather than manual, assuming that a data integration infrastructure has been put in place.