Gartner BI Summit Part 2

January 29, 2009

As promised in the Mini-Summary, which was written in some haste to appease those who weren’t enjoying the 24-hour party city that isn’t The Hague, here is a little (in fact, rather a lot) more on what went on at the Gartner BI Summit. In the Mini-Summary I covered the keynote in somewhat light detail, but it was probably enough to give a flavour.

I’ll outline the sessions I attended so you know I wasn’t in The Hague just for the craic. It’ll serve as an aide-memoire for me too. It was great to meet up with some of the folks I knew from Twitter, as well as others I met for the first time at the conference. On with the summit.

Tuesday

Data Integration Architecture and Technology: Optimizing Data Access, Alignment and Delivery – Ted Friedman – Gartner

This is an area of interest to me, as one of the products I look after is firmly in this space. A very good presentation containing plenty of survey-based facts, and a case study on Pfizer, who have a ‘data integration shared services’ organization.  I suppose this is a DI version of the Business Intelligence Competency Centre.

ETL is still by far the largest area of DI, with replication, federation and EAI following. In addition, standardization of DI tools/architecture within organizations is still some way off.

The high-level message was that data integration is an absolute foundation of any data operation, whether BI or otherwise. Without good DI, you just end up with the old GIGO scenario. Not too much new for me, as expected, but Ted did put the kibosh on BI as a service, reflecting my own view that in most cases these data environments are ‘too diverse’ to lend themselves easily to the SaaS model, hamstrung by the data integration piece of the puzzle. Narrow, specialized solutions can work, as can simple data environments. However, as was pressed home later in the conference, that’s not the main reason BIaaS will not be as popular as many are projecting.

Innovation and Standardization: The Yin and Yang of BI – Timo Elliott / Donald McCormick of SAP Business Objects

This session started with Timo mashing up some Obama data in Xcelsius and was generally designed to show that SAP Business Objects still has some innovation to show, even now that it is part of the Walldorf Borg. The main highlight (from their point of view) was Polestar. I took a very quick look at the site, but was diverted by the typos “quentity” and “dectect”, as well as noting it was not tested on IE8, so I left it for another day. Looks interesting though.

SAP generously conceded that less than 50% of business data lives in SAP. I am assuming they mean within organizations running SAP. Even then, that’s probably an underestimate. To that end, SAP are introducing federation capabilities.

The Role of BI in Challenging Economic Conditions – Panel Discussion

The panel consisted of some large customers from around Europe, giving their views on how the climate is affecting their BI activities. Key points included reducing the number of tools and vendors in the BI stack and squeezing licence costs – whether by forcing prices down via negotiation, redeploying seldom-used licences, or other BI ROI audit activities. Naturally, I imagine some licences will become available as headcount shrinks this year.

The customers were focusing their BI efforts more on costs than on revenue and margins, which were previously the focus. In this uncertain environment, the speed of decision making is critical, and some of the selection criteria for BI tools and initiatives have changed a lot. One of the customers noted that they used to talk about look and feel, getting down to details such as fonts; now it’s “how much, and how fast to implement?”

BI is going to be more tactical for the short term, with small-scope projects targeted at answering key questions quickly.

Emerging Technologies and BI: Impact and Adoption for Powerful Applications – Kurt Schlegel – Gartner

This session looked at the macro trends in BI, which were as follows:

  • Interactive Visualization (didn’t DI-Diver do this back in the late ’90s?)
  • In-Memory Analytics
  • BI Integrated Search (they showed Cognos here, but strangely there was no mention of MOSS 2007 / BDC, which does this quite nicely)
  • SaaS (they showed a good example where the SaaS provider had a ton of industry information that could be leveraged for decision making, rather than just some generic solution shipping in-house data back and forth)
  • SOA / Mashups
  • Predictive Modelling
  • Social Software

None of this was new to me, but there were some good case studies to illustrate them and the SaaS example was the most realistic I’d seen from a business benefits point of view.

 

Wednesday

Using Corporate Performance Management to Understand the Drivers of Profitability and Deliver Success – Nigel Rayner – Gartner

This was an area I wasn’t too familiar with, but Nigel Rayner did an extremely good job of pitching the information and delivery so as not to overwhelm novices, while not oversimplifying and boring seasoned practitioners.

He kicked off with increasing CEO turnover, then how the market measures CEO performance. Most organizations don’t have a handle on what actually drives profitability, which is where CPM can help with profitability modelling and optimization. The whale curve and Activity Based Costing were also discussed.

A key point: BI is very often separate from financial systems, and CPM links the two together.

Driving Business Performance in a Turbulent Economy with Microsoft BI – Kristina Kerr – Microsoft, Ran Segoli – Falck Healthcare

MS BI case study, focusing on the cost-effectiveness and speed to implement of Microsoft BI.  I have had a lot of exposure to the stack and other case studies, so didn’t make notes.  Sorry.

Does BI = Better Decision Making? – Gareth Herschel – Gartner

I really enjoyed this session, mainly because it took a welcome step back from BI per se to look at decision making in general. He covered the theory of decision-making first, then linked that to BI.

The first area was predicting (root cause) or managing events. If this can be done effectively, the increased speed of detection allows more time to make appropriate decisions – the more time you have, the more options you have available. This ties into CEP (complex event processing) and BAM (business activity monitoring). In addition, data mining can assist in predicting events and scenarios.
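To make that concrete, here is a minimal sketch, in Python, of the early-detection idea behind CEP/BAM: watch a stream of events and flag a rolling measure the moment it crosses a threshold, because the earlier the detection, the more options remain. This is nobody’s actual engine – names, numbers and thresholds are all invented.

    from collections import deque

    def monitor(events, window_size=5, threshold=100.0):
        """Yield an alert as soon as the rolling average of a measure
        crosses a threshold - earlier detection means more decision time."""
        window = deque(maxlen=window_size)
        for ts, value in events:
            window.append(value)
            avg = sum(window) / len(window)
            if avg > threshold:
                yield ts, avg  # hand off to alerting / a BAM dashboard

    # Illustrative usage: a spike in order values is flagged at t=3,
    # well before anyone would spot it in a weekly report.
    events = [(1, 80), (2, 90), (3, 150), (4, 160), (5, 170)]
    for ts, avg in monitor(events, window_size=3):
        print(f"t={ts}: rolling average {avg:.0f} breached threshold")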

This is a discipline that must be constantly reviewed: what happens when prediction and analysis disagree with reality? Either the situation has changed, or you didn’t understand it correctly in the first place.

He went through four key approaches to decision making, rated by how explicable versus instinctive they are, and how much experience they require:

  • Rational/Analytical
  • Emotional
  • Recognition Primed
  • Thin-slicing (“Blink”)

This fed into information delivery methods: selective displays such as dashboards and alerts/traffic lights, or holistic displays such as visualization, which are more ‘decision-primed’ than data-centric displays such as tabular representations.

It was clear that he saw visualization and very narrow, selective displays as the best way to aid decision-making.
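A traffic light is about the narrowest ‘decision-primed’ display there is – the raw number never even appears. A toy sketch, with thresholds that are obviously invented:

    def traffic_light(kpi_value, green_below=0.8, amber_below=1.0):
        """Map a KPI (here, spend as a fraction of budget) to a
        decision-primed status instead of showing the raw number."""
        if kpi_value < green_below:
            return "GREEN"  # no action needed
        if kpi_value < amber_below:
            return "AMBER"  # investigate
        return "RED"        # act now

    print(traffic_light(0.95))  # AMBER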

In my opinion, that’s all fine and dandy if you’re measuring and delivering the right targeted data 100% of the time; otherwise it is very easy to be blindsided.

I would certainly seek him out at other Gartner events for more thought-provoking content.

Gareth made some good book recommendations:

The Art of the Long View

Sources of Power

Various Dan Gilbert stuff on Emotional Decision Making – This is his TED talk.

 

The Impact of Open Source in BI, DW and DI – Andy Bitterer & Donald Feinberg – Gartner

A very good session, surprising at least one of the open source advocates in the audience with its upbeat message. A highlight was Donald Feinberg’s prediction that Unix is dead and the funeral is in 30 years – a response to Unix ceding to Linux in the DBMS world. It appears Gartner have relaxed their usual criteria in order to give OSS a chance to be evaluated, based on support subscription revenue.

Feinberg also strongly recommended that anyone using open source must get a support subscription; to do otherwise would be tantamount to lunacy.

On the BI side of OSS, market penetration is low, with less than 2% of Gartner-surveyed customers using it. However, one growth area is small ISVs using it as an OEM strategy for their BI requirements.

The functionality gaps between commercial and OSS are getting smaller, with Reporting, Analysis, Dashboarding and Data Mining all now offered, but still no Interactive Visualization, Predictive Modelling, Mobile play, Search or Performance Management.

On the DI side, other than the Talend/Bitterer argument, it’s not hotting up too quickly. DI usage is mostly limited to straight ETL of fairly meagre data volumes – daily batches of around 100K records.

Functionality gaps here are in the following areas: Metadata management, Federation/EII, Replication/Sync, Changed Data Capture, Unstructured Content, Native App Connectivity and Data Profiling/Quality.

An overarching barrier to adoption in all areas is the lack of skills.

An interesting scenario that was floated was the creation of an open source stack vendor – namely, Sun snapping up some of the OSS BI players.

The Right Information to the Right Decision-Makers — Metadata Does It – Mark Beyer – Gartner

This was a useful presentation for me, as I am familiar with metadata but not the systems and processes used to manage it. The Gartner definition of metadata is data ABOUT data – the stress is on the ABOUT: metadata describes and improves data, unlocking the value of that data.

I knew some classic metadata SNAFUs, such as projects where measurements across country-separated teams were mixed metric and imperial, leading to untold costs.

Some others that Mark mentioned were very amusing, such as the data members of Gender.  I can’t recall the exact figures, but one government organization had 21. 

On to why metadata matters in decision making: it can be an indicator of data quality, it can indicate data latency, and it can provide a taxonomy for combining data from different areas.

In addition, metadata can provide the business context of the data, as well as mapping importance, user base and various other elements to give an idea of how critical the data is, the benefit of improving it, and the impact of any changes in how it is generated or handled.

Obviously, SOX and Basel II also put increased pressure on managing metadata for the purposes of compliance, governance and lineage.

I think the takeaway for me was the set of key questions that metadata should seek to answer (a toy sketch of a record answering them follows the list).

  • What are the physical attributes of the data (type, categorization, etc.)?
  • Where does it come from?
  • Who uses it?
  • Why do they use it?
  • How often do they use it?
  • What is its quality level?
  • How much is it worth?
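Here’s that toy sketch: one record in a metadata repository, with one field per question above. The field names and example values are mine – illustrative only, not Gartner’s model or any vendor’s schema.

    from dataclasses import dataclass, field

    @dataclass
    class MetadataRecord:
        """One entry in a metadata repository, mirroring the questions above."""
        name: str
        data_type: str                                 # physical attributes
        source_system: str                             # where does it come from?
        consumers: list = field(default_factory=list)  # who uses it?
        purpose: str = ""                              # why do they use it?
        access_frequency: str = ""                     # how often do they use it?
        quality_score: float = 0.0                     # what is its quality level?
        business_value: str = ""                       # how much is it worth?

    rec = MetadataRecord(
        name="customer_revenue",
        data_type="decimal(18,2)",
        source_system="ERP general ledger",
        consumers=["finance", "sales ops"],
        purpose="monthly margin reporting",
        access_frequency="daily",
        quality_score=0.92,
        business_value="high",
    )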

Comparing the Mega Vendors: How Do IBM, Microsoft, Oracle and SAP Stack Up? – Andy Bitterer, Donald Feinberg, James Richardson, Neil Chandler & Bill Hostmann as moderator

Stupidly, I ran out of paper, so had to take some notes on my phone. I don’t like doing that, as it looks like you’re texting people, or Twittering. So, I limited myself to the bare minimum.

PerformancePoint is weak with respect to the competition. I guess it’s even weaker now they’ve ditched Planning.

Donald Feinberg is not a fan of SaaS BI – a view I agree with, partly due to the data integration issues in the real world, as highlighted by Ted Friedman earlier in the week. So, Donald decided to do a straw poll on who would be interested in, or is implementing, SaaS BI. I think there might have been one person, but possibly zero. There goes a bunch of predictions for 2009. The reason for this reticence was one of trust: they just don’t want to throw all this data over the firewall.

Another straw poll was on consolidation to a single vendor: most are doing this, and very few said they were going to buy from a pure-play vendor. I suppose you have to take into account the self-selecting group of end users at a Gartner BI summit, though.

Thursday

BI Professional – Caught in a Cube? – Robert Vos – Haselhoff

An entertaining presentation, but I was suffering with a bad cold and insufficient coffee, so didn’t get the full benefit. He did help me wake up fully for the next presentation, so it can’t have been all bad. No talk of vendors and technology per se here; more stepping back and looking at strategy, organizational elements and making BI work from a people perspective.

Building a BI and PM Strategy Workshop – Andy Bitterer & James Richardson

This was an interactive session – like a mock exam for BI folks, where people were randomly put into groups and asked to design a BI strategy. The results were pretty good, and Andy Bitterer’s wish that they didn’t start naming vendors was fulfilled. However, I did note people tending to think details first, rather than strategy first. I also found it slightly strange that the CEO did not tend to come up as a contender for involvement. I saw more on this in Nigel Rayner’s CPM presentation, with CPM giving the CEO insight into profitability, so it seems to me to make absolute sense to have CEO involvement in the BI strategy, since the BI goals need to be aligned with the business goals. Some groups did pick up on the alignment, but still saw it as within the CIO’s remit. All in all a pretty good showing, but the lines between IT and ‘the business’ were still visible, if somewhat hazier than before.

Data Quality: Your Decision Insurance – Andy Bitterer & Ted Friedman

I took a LOT of notes in this session, so I’ll try to boil it down. The typical situation: a bunch of folks in the boardroom, all claiming different numbers. This leads to risky decision-making when it goes unnoticed, and a huge time sink reconciling the numbers when it is noticed.

Once again, there is a turf aspect involved: data is considered IT’s problem, so IT should be responsible for data quality. However, IT is not the customer for the data, so they don’t really feel the pain that the business feels from data quality issues. In addition, IT don’t know the business rules or have the domain expertise. It’s not a pure technology problem, but IT need to be involved to make it work.

There were some examples of the costs of bad data quality, leading into working out the ROI for investment. With Sarbox et al, of course, there is a new cost involved for the CEO/CFO: going to jail if the numbers are wrong.

Another aspect of the ROI was based on the level of data quality: it may be that 80% is enough, especially when the move to 100% is astronomically expensive. The return on incremental improvements needs to be assessed.

So, who’s on the hook for DQ then? Data stewards, who are seen as people who take care of the data rather than owning it (the organisation owns it). They should know the content and be part of the business function, rather than IT.

One example of exposing data quality within an organisation was a DQ ‘scorecard’, giving an idea of the quality in terms of completeness, duplication, audited accuracy and so on. A problem that I see with this is data quality hubris versus data quality cynicism: if it works well, the scorecard can give the right level of confidence to the decision makers, but if not, it could lead to overconfidence and less auditing.
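As a rough illustration of what might feed such a scorecard, here is a toy Python sketch computing completeness and duplication rates over a batch of records. The fields and figures are invented; a real scorecard would add audited-accuracy samples and trends over time.

    def dq_scorecard(records, required_fields):
        """Compute simple completeness and duplication rates for a batch."""
        total = len(records)
        complete = sum(
            1 for r in records
            if all(r.get(f) not in (None, "") for f in required_fields)
        )
        distinct = len({tuple(sorted(r.items())) for r in records})
        return {
            "completeness": complete / total,
            "duplication": 1 - distinct / total,
        }

    records = [
        {"id": 1, "ssn": "123-45-6789", "phone": "+44 20 7946 0000"},
        {"id": 2, "ssn": "", "phone": "+44 20 7946 0001"},
        {"id": 1, "ssn": "123-45-6789", "phone": "+44 20 7946 0000"},
    ]
    print(dq_scorecard(records, ["ssn", "phone"]))
    # {'completeness': 0.66..., 'duplication': 0.33...}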

So, operationally, the key elements are as follows (a rough sketch of a few of them follows the list):

  • Data Profiling / Monitoring – e.g. how many records are complete?
  • Cleansing – de-duping and grouping
  • Standardization – rationalizing SSN formats, phone numbers, etc.
  • Identification & Matching (not 100% sure here – I see some of this in cleansing)
  • Enrichment – bringing in external data sources (e.g. D&B) to add more value to the data.
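That promised sketch: toy versions of standardization, cleansing (de-duping on a standardized key) and enrichment. All names, and the D&B-style lookup, are invented for illustration.

    import re

    def standardize_phone(raw):
        """Standardization: rationalize phone formats to digits only."""
        return re.sub(r"\D", "", raw)

    def dedupe(records, key):
        """Cleansing: crude de-duplication on a standardized key."""
        seen, clean = set(), []
        for r in records:
            k = key(r)
            if k not in seen:
                seen.add(k)
                clean.append(r)
        return clean

    def enrich(record, reference):
        """Enrichment: merge in external reference data (think D&B),
        here just a hypothetical in-memory lookup."""
        return {**record, **reference.get(record["company"], {})}

    records = [
        {"company": "Acme", "phone": "(020) 7946-0958"},
        {"company": "Acme", "phone": "020 79460958"},
    ]
    reference = {"Acme": {"duns": "000000000", "employees": 250}}
    clean = dedupe(records, key=lambda r: standardize_phone(r["phone"]))
    print([enrich(r, reference) for r in clean])  # one record, enriched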

Ideally, DQ should be delivered as services, which are then reusable and repeatable across many different data sources. That’s the SOA model – although SOA is supposed to be dead, isn’t it? Who knows; maybe the term has died, but the technology and approach certainly live on.

Lastly, DQ ‘firewalls’ were discussed. This is a set of controls used to stop people/systems from poisoning the well: inbound data is analyzed and given the elbow if it isn’t up to snuff. It even incorporates a ‘grass’ element, where DQ criminals are identified and notified.
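A toy sketch of the firewall idea: profile inbound rows against a set of rules, bounce the failures, and keep a tally per source so the ‘criminals’ can be grassed up. The rules and source names are invented.

    def dq_firewall(rows, rules):
        """Reject inbound rows that fail any rule; track offenders by source."""
        accepted, offenders = [], {}
        for row in rows:
            failed = [name for name, check in rules.items() if not check(row)]
            if failed:
                offenders.setdefault(row["source"], []).append(failed)
            else:
                accepted.append(row)
        return accepted, offenders

    rules = {
        "has_customer_id": lambda r: bool(r.get("customer_id")),
        "amount_positive": lambda r: r.get("amount", 0) > 0,
    }
    rows = [
        {"source": "crm", "customer_id": "C1", "amount": 120.0},
        {"source": "web", "customer_id": "", "amount": -5.0},
    ]
    accepted, offenders = dq_firewall(rows, rules)
    print(offenders)  # {'web': [['has_customer_id', 'amount_positive']]}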

Market Players: Magic Quadrant Power Session – Andy Bitterer, Donald Feinberg, Gareth Herschel, James Richardson, Mark Beyer, Neil Chandler & Ted Friedman

The conference was starting to take its toll by this point: a flu-like cold and no more tablets left. Add a few pretty late nights – notably with folks from Sybase, Kognitio, the BeyeNetwork, end-users and even Gartner (not analysts, I hasten to add) – and the writing is on the wall. Deciphering my handwriting is like translating hieroglyphics written by a 3-year-old.

So, the summary of this session is ultra-short.

  • BI MQ: SAP/BO moved down a little, counterintuitively to some.
  • DI MQ: Data services / SOA capability is key. Tools need to supply, and potentially consume, metadata to play well in a real-world environment. Currently 54 vendors are ‘eligible’ for this MQ.
  • DQ MQ: The pace of convergence between DI and DQ is increasing, and it will become critical. Acquisitions will increase, with DI vendors having to fill out their feature sets.

Overcoming The BIg Discrepancy: How You Can Do Better With BI – Nigel Rayner

I made a herculean effort to stay conscious in this session, mainly because I had enjoyed Nigel’s CPM session, and he proved to be a very nice chap when we chatted over a cup of coffee earlier in the week. In addition, I had paid for the third day, so was going to extract every drop of value 😉

Nigel kicked off with “the downturn”, of course.  The message was do not hit the panic button.  BI and PM will play a key role in navigating the downturn:

  • Restoring business trust
  • Understanding why, what and where to cut
  • Identifying opportunities of business growth, or which parts of the business to protect

There was some realism too, in that it is unlikely that “Big BI” projects will be approved in the short term, and you will need to do more with what you already have.

The plan of attack is the three I’s: Initiatives, Investments and Individual Actions.

Initiatives

  • BI/PM Competency Centre
  • BI and PM Platform Standards
  • Enterprisewide Metrics Framework
  • Inject Analytics into Business Processes

Investments

Prioritization of investments is critical.  Targeted short-term, cost-effective investments are the order of the day.  Some suggestions include:

  • Data Quality
  • Data Mining
  • Interactive Dashboards
  • CPM Suites

There was a mention of ‘Spreadsheet hell’ being addressed by CPM.

Individual Actions

  • Take advantage of key skills as companies undertake knee-jerk cost-cutting – AKA get good laid-off people on the cheap.
  • Redeploy key employees to tactical, short-term roles rather than RIF-ing them.
  • Respect “conspicuous frugality” but don’t be defined by it.
  • Learn from others (e.g. BI award winners, case studies, social networks)
  • Evangelize BI

Then, it was a mad rush for the taxi to the station.

For more detailed coverage of the event, check out Timo Elliott’s blog post.


Not going to be a long post, this, but wanted to get a few things down. Will edit/append later with some info from other sessions. Look at my Twitter feed also for some snippets.

Keynote: ‘The BIg Discrepancy: Strategic BI, but no BI Strategy’

BI Analyst Techno with Andy Bitterer and Nigel Rayner, AKA Star Schema: BI-related refrains over banging techno with an ‘Every Breath You Take’ melody.

Once again, BI is #1 on the CIO agenda. This has been the case since 2006, but things are not much further along. This is down to human factors more than BI tools. Many organizations don’t appear to have a strategy for BI, and there are still problems in the following areas:

  • Governance
  • Standards
  • Trust
  • Skills
  • Definitions

There is still a lot of silo thinking and a proliferation of tools; add internal politics to that and you have a heady mix.

A straw poll revealed only 15 hands from customers who had a formal BI strategy. Some BI from Gartner is needed on how many customers were in the keynote – even an approximation would help.

Another key point was the ability of BI to support change, as well as to model the effects of changing variables. BI must be able to adapt quickly to external conditions – for example, the ability to optimize for cost reduction instead of revenue growth.

I was left with the impression that the biggest problem is not the tools, but the bad craftsmen (in the nicest possible sense).

Another idea was that BI Competency Centres would help make BI initiatives succeed, since they would likely be tasked with addressing the problem areas above explicitly.

The next musing was why IT so often sells BI to the business, when it really should be the business users driving it. One possible problem with business users creating BI requirements is that they may not know what is or isn’t possible, retreating to the comfort of reports when asked to define their own requirements. I suppose this is another plus point for a BI competency centre, which could serve the function of business user training and/or demonstrating the techniques and technologies available.

As a follow-on to this, the point was made that IT building BI systems in isolation, away from business users, will very likely lead to failure.

This all reminds me of the work I did on ‘Expert Systems’ shells back in the early 90s, with the Knowledge Engineer (read: IT BI person) and Domain Expert (read: business user) working in conjunction. This was a prerequisite of the approach, not a nice-to-have, as it seems to be with BI.

Unfortunately, the web seems to have failed me for any really good references and solid examples of this – certainly none as detailed, iterative or collaborative as the processes we were using at IBIES back in 1992.

From Engelmore & Feigenbaum:

‘A knowledge engineer interviews and observes a human expert or a group of experts and learns what the experts know, and how they reason with their knowledge. The engineer then translates the knowledge into a computer-usable language, and designs an inference engine, a reasoning structure, that uses the knowledge appropriately’
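The ‘inference engine’ sounds grander than it needs to. A minimal forward-chaining sketch in Python – the rules standing in for knowledge captured from a domain expert, and entirely invented here:

    def forward_chain(facts, rules):
        """Apply rules repeatedly until no new facts can be derived."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    # Rules as (conditions, conclusion) pairs - the domain expert's knowledge.
    rules = [
        ({"margin_falling", "volume_steady"}, "price_pressure"),
        ({"price_pressure"}, "review_discount_policy"),
    ]
    print(forward_chain({"margin_falling", "volume_steady"}, rules))
    # Both 'price_pressure' and 'review_discount_policy' are derived.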

I digress.

Then they went through the 2009 Predicts, which are available here.

I’ll probably add updates later, just wanted to get some info on the keynote down.

Business Intelligence is a term that covers a multitude of sins.  It is also a term which is extremely open to interpretation, depending on your viewpoint, technology mastery, user skillset and information environment.

Creating new terms, especially acronyms, is what the technology industry does best – they delight in it – but it does serve some purpose beyond the amusement of marketing folks and analysts.

To go back to an old paradigm, creating labels or categories is an essential part of the market.  Not just the BI market, but any village market, or supermarket. 

Categories help consumers navigate quickly to the types of products they are interested in – like finding the right aisle by looking up at the hanging signs in the supermarket, or the corner of the village market where the fruit vendors gather. Labels give more information, such as pricing and size, and then it is down to the product packaging and the rest of the marketing the consumer has been exposed to: advertising, brand awareness and so on.

Business intelligence is a pretty long aisle. At one end the labels are pretty narrow; at the other, very, very wide, to accommodate the zeros after the currency symbol and the ‘days to implement’ information.

The problem is the long aisle – vendors need to break it up into manageable (walkable) segments to help the customer navigate quickly to the solution they need.

The other problem is that, in this case, the supermarket is not in charge of the category names – not even the vendors or analysts are. It’s a free-for-all.

This means chaos for the poor consumer, capering around the aisle in a kind of Brownian motion.

Thinking about this after being bombarded with a panoply of BI terms lately, I thought of INCOTERMS, the standard set of terms used in international sales contracts. These terms are strictly defined and published by an independent organization, so that both parties in a transaction know exactly where they stand.

According to Boris Evelson of Forrester, Business Intelligence is “a set of processes and technologies that transform raw, meaningless data into useful and actionable information”.

I’m not sure about that one myself – who acquires and stores meaningless data? Other than maybe Twitter. Other suggestions are most welcome; they might help reveal which technologies Forrester are referring to.

This definition certainly excludes my product, since we work with data that, theoretically, people are probably already making decisions from – they just need to slice and dice it differently.

I can’t complain too much about Forrester though; at least they have Report Mining – our little niche – in their buzzword bonanza (courtesy of Curt Monash).

The concept of transforming raw data is easier to work with (in the Forrester BI definition sense, anyway), as it could refer to something like a web log, which is pretty difficult to gain any insight from in a text editor, unless you have an eidetic memory and the ability to group and summarize the various keys and measures in your head.
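To make that concrete, a toy sketch of the ‘transform raw data’ step: parse a few web-log lines and group/summarize the keys and measures nobody could hold in their head at scale. The (heavily simplified) log format and figures are invented.

    from collections import Counter

    raw_lines = [
        "10.0.0.1 GET /products 200",
        "10.0.0.2 GET /products 200",
        "10.0.0.1 GET /checkout 500",
    ]

    hits_by_page, errors_by_page = Counter(), Counter()
    for line in raw_lines:
        ip, method, page, status = line.split()
        hits_by_page[page] += 1
        if status.startswith("5"):
            errors_by_page[page] += 1

    print(hits_by_page.most_common())  # [('/products', 2), ('/checkout', 1)]
    print(errors_by_page)              # Counter({'/checkout': 1})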

Now, as often is the case when you start writing about a topic, the research you do unearths people who have written pretty much the same thing before you.

Going back to definitions: a define:Business Intelligence search on Google turns up Todd Fox’s decent definition of BI – “A generic term to describe leveraging the organizations’ internal and external information assets for making better business decisions.” That leads to Todd’s own attempts from a data warehousing perspective, which in turn were prompted by James Taylor’s post on the confusion around the term analytics (in the context of BI). Even Kimball got involved, with the “Slowly Changing Vocabulary” section in one of his books.

This at least tells me I’m on the right track, if not entirely original.

In 1989, Gartner’s Howard Dresner defined BI as “a set of concepts and methods to improve business decision making by using fact-based support systems”.

More definitions can be found from Larry English, and probably ad infinitum, or at least ad nauseam.

The depressing thing here is that we have only got as far as BI becoming popularly known as an “umbrella term”.

<Aside> A Dutch student at the University of Amsterdam even wrote a paper titled “Business Intelligence – The Umbrella Term” complete with an umbrella graphic on the cover page.  (It’s a doc file, so I won’t include the link.  Google it if you’re interested)</Aside>

When we start to address even Forrester’s BI buzzword hoard, never mind the others out there, it begins to lead to a total breakdown of the tried and tested categorization mechanism.

To revisit the source of the proliferation: it appears that analysts (likely as a proxy for large vendors) and vendors themselves are the main culprits. The analysts, by virtue of some level of independence and a cross-vendor view, can be seen as the arbiters of the terms. The problem is that the analysts often use slightly different terms, or at least different meanings for the same terms.

Naturally, both vendors and analysts want to proliferate and blur terms to aid differentiation, or to try to give the perception of innovation and progress – although this is very seldom the reality, as new terms are often just fracturing or rehashing existing categories and terms.

However, in some cases, drilling down into narrower categories, or updating terms due to changes in technologies or approaches, is not necessarily a bad thing – provided the terms/categories still help establish a contract of understanding between vendor and consumer.

If we want to accommodate this, the ability to establish a common understanding, based on input from across the board – analysts and vendors alike – would be beneficial to all. The problem is, you need a genuinely independent organization that can accommodate the horse-trading while maintaining an authoritative definition of terms acceptable to all parties.

Some amusing aspects I can foresee: with “Active Data Warehouse”, would you then have to create a new term, “Passive Data Warehouse”, to group the applications that did not fit the criteria of “Active”? I imagine a semantic arms race that would have to be kept in check – IBMCognoStrategyFire pushes for a “Smart ETL” category, which forces the other ETL vendors into the “Dumb ETL” pigeonhole. Dealing with this is what standards bodies do.

This is more musing than actually being stupid enough to think it is ever going to happen. I do have sympathy with the poor customer trying to navigate the shelves of the BI supermarket, though. Even for someone just trying to keep a lazy eye on the machinations of the industry, it can be overwhelming.

Here’s a short quiz.

What BI term does this refer to?

“centralized repository of information about data such as meaning, relationships to other data, origin, usage, and format.”

Was this your first guess? This?

No.  Much earlier.

The more things change, the more they stay the same.

Maybe we could just provide a thesaurus, so when someone is puzzling over the latest buzzword, they can look it up and say “ahh, I know what that is – we tried to implement something like that back in the early nineties”.

 

UPDATE: Read this excellent article from Colin White – I didn’t see it before I wrote this, I promise!

The catalyst for Colin’s article (Stephen Few) can be found here and follow-up by Ted Cuzillo here.

Following an exchange with some BI folks on Twitter, in addition to the various articles on spreadmarts and compliance issues with Excel usage by those pesky ‘users’, there is definitely an element of the old ‘high priest’ model in all of this. However, it is most likely economics that calls the tune.

In essence, when something new is introduced, whether technology or religion, there is always a period of control by the early adopters (and indeed charlatans), who are naturally keen to implement a hierarchy that benefits themselves. They become the required intermediaries for the masses to get what they want. In the IT world, I don’t necessarily claim it’s full of charlatans, or that the hierarchy is there for job security, but the traditional model is what it is.

Quakers, for example, broke away from this and decided to cut out the middle men. 

In their case, it was more a theological gap, but with BI, the gap is the one between supply and demand.

In real-world BI today, this is reflected in the use of familiar and available tools – primarily Excel – to bridge requirement/delivery gaps. To an extent, Microsoft have historically recognized this and provided plenty of rope for users to hang themselves with.

To backtrack a little and revisit what I consider to be BI: some folks associate BI solely with the more complex (and/or esoteric) analysis found in data mining and heavy statistics on massive data sets – for example, the (sadly untrue) beer and nappies (diapers) story. I am a little more catholic than that.

To lift an idea from Michael Gentle, here’s Karl Marx on complex BI:

“A commodity appears at first sight an extremely obvious, trivial thing. But its analysis brings out that it is a very strange thing, abounding in metaphysical subtleties and theological niceties.”

BI is all that, but it covers much more of the mundane as well. If you cannot easily find out how many of a certain widget you sell in the week before a holiday, or how much of a certain widget you sell in a given geographic area, then when you eventually get your hands on this information, it’s business intelligence – or decision support, depending on your vintage.

Marx again:

“From each, according to his ability; to each, according to his need”

Therein lies the rub.

The maths just doesn’t add up in the arms race of IT vs information workers: IT’s capacity is completely swamped by the needs of users. Nothing new here; I would contend that this is accepted wisdom.

To give you an example: of 50 finance users, every week 5 will require new custom information. Sounds reasonable. However, I am talking about a real-world example here, so each piece of new information could come directly from, or from a combination of, 20 different systems. In addition, some systems are not in-house, so a short specification has to be written and sent to the provider; a quote comes back, is examined, eventually approved and implemented. So we are talking about 250 requests per year (5 a week over a 50-week working year), and as you can see, these requests can be pretty costly and/or difficult to fulfill. <potential_product_plug_warning>I know about this stuff, as our product is often used in a guerrilla-style way to sidestep these issues</potential_product_plug_warning>.

The planned economy just doesn’t work here, my friends; we’re going to have to consult Adam Smith.

Information needs come from many sources and many levels of seniority, and have different profiles in terms of how time-critical they are. The internal market for information in a planned-economy scenario (i.e. IT-centric) is often dictated by how important the information requestor is and how easy the request is to fulfill – to meet internal SLAs and so forth.

“The monopolists, by keeping the market constantly understocked, by never fully supplying the effectual demand, sell their commodities much above the natural price.”

When the market is opened up to users, who generally know their data and exactly what to do with it – look, there’s the benevolent figure of Adam Smith again!

“The greatest improvement in the productive powers of labour, and the greater part of the skill, dexterity and judgement with which it is any where directed, or applied, seem to have been the effects of the division of labour”

And the users rejoice (whilst mangling a Smith quote):

“The natural effort of every individual to do their own analysis … is so powerful, that it is alone, and without any assistance, not only capable of carrying on the society to wealth and prosperity, but of surmounting a hundred impertinent obstructions with which the folly of IT too often encumbers its operations.”

However, much as users would rejoice in having these freedoms, there is a downside.

“The property which every man has in his own labour; as it is the original foundation of all other property, so it is the most sacred and inviolable… To hinder him from employing this strength and dexterity in what manner he thinks proper without injury to his neighbour is a plain violation of this most sacred property.”

The downside is the small matter of the injury to his neighbour – which, in this case, is probably a whole street of neighbours: Database Team(s), Apps Team(s), Data Quality folks, Internal Audit/Compliance, etc.

So how do you resolve the conundrum of creativity/productivity vs control/compliance?

Microsoft’s stab at this is Gemini – the name a nod to the twin aspects of IT control (ETL processes feeding into SQL Server/DATAllegro/Analysis Services) and user empowerment (the ubiquitous, familiar Excel client, plus the “social” aspect of sharing creative works through SharePoint).

I have only limited information on Gemini at the moment, so my summary is probably sketchy, although I hope, still accurate.

The hope, as I see it, is to prevent the users from poisoning the well, while still allowing them to drink deeply with the paternalistic arms of IT around them. Kind of like socialism, rather than rabid, laissez-faire capitalism.

 

Hat tips & further reading

Sean O’Grady (Control vs Creativity)

Michael Gentle (Good guy/Bad guy)

Nigel Pendse (Gemini Comment & Review)

Brent Ozar

Doug Henschen

Watch MS BI Conference Keynotes (Mosha Pasumansky)

Andrew Brust

It seems like the rancour around the Open XML / ODF soap opera still simmers away.

Even though the volcano has erupted and the ash has cooled, it won’t be forgotten.

For those who don’t know the tale and want a recap:

Sun buy the commercial product StarOffice, then give it away for free. When that still doesn’t arouse too much interest, they decide to open source it instead, in a move to reduce MS Office dominance. It is called OpenOffice.org, or OpenOffice ->

IBM and Sun dream up a strategy of beating MS Office with the government-mandate stick via the standardisation route. Open Source folk find it an intoxicating bandwagon too ->

The OpenOffice file format is morphed into a standard (ODF) and hurried through standardisation ->

IBM put coin and feet on the street behind the strategy and start to make progress with some government bodies. An IBM services bonanza beckons. Sun wonders (as usual) how THEY are going to make money ->

Microsoft, either reacting to this strategy or, in parallel, being pushed by the European Union, also morph their Office suite format into a standard (Open XML) and hurry it through standardisation ->

(note – slightly more fuss/interest around this than the ODF process)

IBM/Sun/Open Source folks cry foul, and IBM throws its toys out of the pram regarding ISO’s value.

During this debate there are numerous mentions of corruption. Some of it is well documented and looks like the usual lobbying one might expect. IBM and friends went at it hammer and tongs; so did Microsoft.

However, these nebulous claims still remain. I am reminded of Tim Bray every day I look out of my window, since the UK office of OpenText – a company Tim co-founded – is visible from my home.

One would have thought that even a plain speaker like Tim would be able to voice his concerns, which would naturally be weighed up in the context of the ‘theatre’ (in the war sense, and others). They must therefore be so heinous that he fears legal action (from Microsoft, one assumes) and has decided that discretion is the better part of valour.

I have an idea: anyone with any of these obviously very juicy stories can put them up on WikiLeaks. I suppose you could always use Groklaw, but I can’t figure out if it’s one of those ‘ironic’ parody-style sites or not – it smacks of those religious site parodies, which are pretty much indistinguishable from the real thing.

I took a quick look on WikiLeaks, and the only things I could find were all from Stefan Krempl – just links to published articles on heise.de. Not very juicy.

So if anyone has any top-notch dirt, I encourage them to use WikiLeaks. I would welcome some new material to read, as the flywheel still seems to spin on the same old energy. Maybe we’ve found the perpetual motion machine.

Come on people, it’s catharsis time.  You’ll feel better. 

In a more positive vein, it looks like everyone is going to play nicely in the new ISO maintenance structures. I only said it looks like.

In the spirit of full disclosure, the company I work for are Microsoft partners (as we are IBM partners), and we have implemented Open XML spreadsheet reading and writing in our products. We are Excel-centric, so most of our customers have never heard of ODF. Still, maybe once they get hold of Office 2007 SP2 and its ODF support, they’ll start clamouring for it.

“report as mature”

November 24, 2008

In starting to set this blog up, I noticed this amusing option in the blog info section. If you decide to grass someone up using this ageist option, you get the following line of euphemistic tosh:

“If you come across a blog which you believe to be primarily adult-related or too mature for a general audience, please let us know…”

There are a few possibilities:

1. “primarily adult-related” – the target audience for WordPress blogs is so young and vapid that anything that may be related to adults and their nefarious interests is to be frowned upon. Shady news sites such as the BBC and Grauniad.co.kup, or maybe those seedy e-government portals, need to be informed upon immediately and exterminated.

2. “too mature” – perhaps some kind of aversion to issues that might affect the elderly. Maybe they fear a deluge of pensioners’ blogs discussing incontinence and other senior issues that the young just find too icky.

3. To provide balance, there is actually also a “report as immature” setting, but for some reason (probably IE8 standards mode) I can’t see it. I would like this option implemented, if it does not exist.

4. They are dancing daintily around the terminology they really want to use.

Most likely option 4. Maybe they could just say “smut” – a non-offensive word that you don’t have to be ashamed of using in front of your young children. Or WordPress consumers.