Metadata Virtualization and Orchestration from Stone Bond Offers Enterprises a Way to Improve Response Time and ROI
Transcript of a BriefingsDirect podcast on how companies can get a handle on exploding data
with new technologies that offer better data management.
Listen to the podcast. Find it on iTunes/iPod. Sponsor: Stone Bond Technologies
Dana Gardner: Hi. This is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're
listening to BriefingsDirect.
Today, we present a sponsored podcast discussion on the need to make sense of the deluge and
complexity of data and information swirling in and around modern enterprises. Most large
organizations today are able to identify, classify, and exploit only a small
percentage of the total data and information within their systems and processes.
Perhaps half of those enterprises actually have a strategy for improving on this. But as
business leaders recognize that managing and exploiting information is a core business
competency that will increasingly determine their overall success, broader solutions to data
distress are being called for. [Disclosure: Stone Bond is a sponsor of BriefingsDirect podcasts.]
Today, we'll look at how metadata-driven data virtualization and improved orchestration can help
provide the inclusivity and scale to accomplish far better data management. Such access then
leads to improved integration of all information into an approachable resource for actionable
business activities.
With us now to help better understand these issues and the market for solutions to these problems
are our guests. Please join me in welcoming Noel Yuhanna, Principal Analyst at Forrester
Research. Welcome to BriefingsDirect, Noel.
Noel Yuhanna: Thanks.
Gardner: We're also here with Todd Brinegar. He is the Senior Vice President for Sales and
Marketing at Stone Bond Technologies. Welcome, Todd.
Todd Brinegar: Dana, how are you? Noel, great to hear you too.
Gardner: Welcome to you both. Let me start with you, Noel. It's been said often, but it’s still
hard to overstate, that the size and rate of growth of data and information is just overwhelming
the business world. Why should we be concerned about this? It's been going on for a while. Why
is it at a critical stage now to change how we're addressing these issues?
Yuhanna: Well, data has been growing significantly over the last few years because of different
application deployments, different devices, such as mobile devices, and different environments,
such as globalization. These are obviously creating a bigger need for integration.
We have customers who have 55,000 databases, and they plan to double this in
the next three to four years. Imagine trying to manage 55,000 databases. It’s a
nightmare. In fact, they don't even know what the actual count is.
Then, they're dealing with unstructured data, which is more than 75 percent
of the data. It’s a huge challenge trying to manage this unstructured data.
Forget about the intrusions and the hackers trying to break in. You can’t
even manage that data.
Then, obviously, we have challenges of heterogeneous data sources, structured, unstructured,
semi-structured. Then, we have different database types, and then, data is obviously duplicated
quite a lot as well. These are definitely bigger challenges than we've ever seen.
Different data sources
Gardner: We're not just dealing with an increase in data, but we have all these different data
sources. We're still dealing with mainframes. We're still adding on new types of data from mobile
devices and sensors. It has become overwhelming.
I hear many times people talking about big data, and that big data is one of the top trends in IT. It
seems to me that you can’t just deal with big data. You have to deal with the right data. It's about
picking and choosing the correct data that will bring value to the process, to the analysis, or
whatever it is you're trying to accomplish.
So Noel, again, to you, what’s the difference between big data and right data?
Yuhanna: It’s like GIGO, Garbage In, Garbage Out. A lot of times, organizations that deal with
data don’t know what data they're dealing with. They don’t know that it’s valuable data in the
organization. The big challenge is how to deal with this data.
The other thing is making business sense of this data. That's a very important point. And right
data is important. I know a lot of organizations think, "Well, we have big data, but then we want
to just aggregate the data and generate reports." But are these reports valuable? Fifty percent of
the time they're not, and they've just burned 1,000 CPU cycles on this big data.
That's where there's a huge opportunity for organizations that are dealing with big data. First of
all, you need to understand what this big data means, and ask whether you're going to utilize it.
Throwing something into the big data framework is useless and pointless, unless you know the
data.
Gardner: Todd, reacting to what Noel just said about this very impressive problem, it seems that
the old approaches, the old architectures, the connectors and the middleware, aren't going to be
up to the task. Why do we have to think differently then about a solution set, when we face this
deluge, and also getting to the right data rather than just all the data regardless of its value?
Brinegar: Noel is 100 percent correct, and it is all about the right data, not just a lot of data. It's
interesting. We have clients with a multiplicity of databases, some they don't even know about or
no longer use, but there's relevant data in there.

Dana, when you were talking about the ability to attach to mainframes and all legacy systems, as
well as incorporate them into today's environment, that's really a big challenge for a lot of
integration solutions and a lot of companies.
So the ability to come in, attach, and get the right data, and to make that data actionable and
make it matter to a company, is really critical today. So is being able to do that with the lowest
cost of ownership in the market and the fastest time to value, so that companies aren't creating a
huge amount of tech on top of the tech they already have to get to this right data. That's really the
critical part.
Gardner: Noel, thinking about how to do this differently, I remember it didn’t seem that long
ago when the solution to data integration was to create one big honking database and try to put
everything in there. Then that's what you'd use to crunch it and do your queries. That clearly was
not going to work then and it’s certainly not going to work now.
So what’s this notion about orchestrating, metadata, and virtualization? Why are some of these
architectural approaches being arrived at, especially when we start thinking about the real-time
issues?
Holistic dataset
Yuhanna: You have to look at the holistic dataset. Today, most organizations or business users
want to look at the complete datasets in terms of how to make business decisions. Typically,
what they're seeing is that data has always been in silos, in different repositories, and different
data segregations. They did try to bring all of this together, in a warehouse for example, to
deliver this value.
But then the volumes of data, the real-time data needs are definitely a big challenge. Warehouses
weren't meant to be real time. They were able to handle data, but not in real time.
So this whole approach delivers a superior framework for delivering real-time data and the right
data to consumers, to processes, and to applications, whether it's structured, semi-structured, or
unstructured data, all coming together from different sources, not only on-premises but also
off-premises, such as partner data and marketplace data coming together, and providing that
framework to different elements.
We talked about this many years ago and called it the information fabric, which is basically data
virtualization that delivers this whole segregation of data in that layer, so that it could be
consumed by different applications as a service, and this is all delivered in a real-time manner.
Now, an important point here is that it's not just read-only, but you can also write back through
this virtualized layer, so that changes flow back to the underlying data sources.
Definitely, things have changed with this new framework and there are solutions out there that
offer this whole framework, not just accessing and integrating data, but frameworks that also
include metadata, security, integration, and transformation.
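To make the fabric idea concrete, here is a minimal sketch in Python of what such a virtualized
layer does: two hypothetical sources (a relational CRM table and an external order feed) are
federated behind one logical view that supports both reads and write-back. Every name here is an
illustrative assumption, not any vendor's implementation.

    import sqlite3

    # Hypothetical source 1: a relational CRM store (in-memory SQLite for the sketch).
    crm = sqlite3.connect(":memory:")
    crm.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
    crm.execute("INSERT INTO customers VALUES (1, 'Acme Corp', 'ops@acme.example')")
    crm.commit()

    # Hypothetical source 2: a non-relational order feed, keyed by customer id.
    orders = {1: [{"sku": "SRV-100", "qty": 4}]}

    class VirtualCustomerView:
        """A toy fabric layer: one logical record federated from two sources,
        readable as a unit and writable back to the system of record."""

        def read(self, customer_id):
            row = crm.execute(
                "SELECT id, name, email FROM customers WHERE id = ?",
                (customer_id,)).fetchone()
            if row is None:
                return None
            # Federate: join relational attributes with the external feed.
            return {"id": row[0], "name": row[1], "email": row[2],
                    "orders": orders.get(customer_id, [])}

        def update_email(self, customer_id, new_email):
            # Write-back: the change flows through the virtual layer to the
            # underlying source, so every consumer sees one consistent record.
            crm.execute("UPDATE customers SET email = ? WHERE id = ?",
                        (new_email, customer_id))
            crm.commit()

    view = VirtualCustomerView()
    print(view.read(1))                           # unified record from both sources
    view.update_email(1, "billing@acme.example")
    print(view.read(1)["email"])                  # write-back visible on the next read

A production fabric adds the metadata, security, and transformation services Noel mentions; the
sketch shows only the read/write federation at its core.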
Gardner: How about that, Todd Brinegar? When we think about a fabric, when we think about
trying to access data, regardless, and get it closer to real time, what are the architectural
approaches that you think are working better? What are you putting in place yourselves to try to
solve this issue?
Brinegar: It's a great lead-in from Noel, because this is exactly the fabric and the framework that
Enterprise Enabler, Stone Bond’s integration technology, is built on.
What we've done is look at it from a different approach than traditional integration. Instead of
taking old technologies and modifying those technologies linearly to effect an integration and
bring that data into a staging database and then do a transformation and then massage it, we've
looked at it three-dimensionally.
We attach with our AppComms, which are our connectors, to the metadata layer of an
application. We don't install an agent within the application. We get the data about the data. We
federate that data from multiple sources, unlimited sources, and orchestrate it into a view that the
client has. It could be Salesforce.com, SharePoint, a portal, Excel spreadsheets, or anything that
they're used to consuming that data in.
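Enterprise Enabler's AppComm internals are proprietary, so the following is only a hypothetical
Python sketch of the metadata-driven pattern Todd describes: discover the source's schema from
the source itself, then let that discovered metadata drive the projection into whatever view the
consumer expects. The table, field map, and function names are assumptions for illustration, not
Stone Bond's actual API.

    import sqlite3

    # A stand-in source application database.
    src = sqlite3.connect(":memory:")
    src.execute("CREATE TABLE leads (lead_id INTEGER, company TEXT, region TEXT)")
    src.execute("INSERT INTO leads VALUES (42, 'Globex', 'EMEA')")

    def describe(conn, table):
        """Read column metadata from the source itself -- 'the data about the data'."""
        return [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]

    def orchestrate(conn, table, field_map):
        """Project source rows into the shape a consuming view expects,
        driven entirely by the discovered metadata -- no agent in the app."""
        cols = describe(conn, table)
        results = []
        for row in conn.execute(f"SELECT {', '.join(cols)} FROM {table}"):
            record = dict(zip(cols, row))
            results.append({dest: record[src_col] for src_col, dest in field_map.items()})
        return results

    print(describe(src, "leads"))   # ['lead_id', 'company', 'region']
    # Map source columns onto the names a portal or Salesforce-style view uses.
    print(orchestrate(src, "leads", {"company": "Account Name", "region": "Territory"}))

Because the mapping is driven by metadata discovered at run time, a schema change in the source
shows up in describe() without modifying the application itself, which is the point of attaching at
the metadata layer.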
Actionable data
Gardner: Just to be clear Todd, your architecture and solution approach is not only access for
analysis, for business intelligence (BI), for dashboards and insights, but this is also for real-time
running application sets. This is actionable data.
Brinegar: Absolutely. With Enterprise Enabler, we're not only a data-integration tool, we're an
applications-integration tool. So we are EAI/ETL. We cover that full spectrum of integration.
And as you said, it is the real-time solution, the ability to access and act on that information in
real time.
Gardner: We described why this is a problem and why it's getting worse. We've looked at one
approach to ameliorating these issues. But I'm interested in what you get if you do this right.
Let's go back to Noel. For some of the companies that you work with at Forrester, that you are
familiar with, the enterprises that are looking to really differentiate themselves, when they get a
better grasp of their data, when they can make it actionable, when they can pull it together from a
variety of sources, old and new, on-premises and off-premises, how impactful is this? What sort
of benefits are they able to accomplish?
Yuhanna: The good thing about data virtualization is that it's not just a single benefit. There are
many, many benefits of data virtualization, and there are customers who are doing real-time BI
with data virtualization. As I mentioned, there are drawbacks and limitations in some of
the older approaches, technologies, and architectures we've used for decades.
We want real-time BI, in the sense that you can’t just wait a day for this report to show up. You
need this every hour or every minute. So these are important decisions you've got to make for
that.
Real-time BI is definitely one of the big drivers for data virtualization, but also having a single
version of the truth. As you know, more than 30 percent of data is duplicated in an organization.
That’s a very conservative number. Many people don’t know how much data is duplicated.
And you have duplication of different kinds of data -- customer data, product data, internal data.
There are many different types of data that are duplicated. Then the data has a quality issue,
because you may change customer data in one application, which touches one database, but the
other database isn't synchronized. What you get is inconsistent data, and customers and other
business users don't really value the data anymore.
A single version of the truth is a very important deliverable from these solutions. It has never
been possible before, unless you have one single database, but most organizations have multiple
databases.
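To see why a virtual layer helps here, consider a minimal sketch with two hypothetical copies of
the same customer record that have drifted apart; a simple resolution rule applied at read time
yields one consistent answer without physically consolidating the databases. The records and the
last-writer-wins policy are illustrative assumptions only.

    # Two copies of customer 7 that have drifted: the CRM was updated,
    # the billing system was not.
    crm_copy     = {"id": 7, "email": "new@customer.example", "updated": "2011-09-02"}
    billing_copy = {"id": 7, "email": "old@customer.example", "updated": "2011-03-15"}

    def single_version(*copies):
        """Resolve duplicates at read time: the most recently updated copy wins.
        Real policies vary (source precedence, field-level merge); this shows
        only the idea of reconciling inside the virtual layer."""
        return max(copies, key=lambda c: c["updated"])

    print(single_version(crm_copy, billing_copy)["email"])  # new@customer.example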
There's also the whole dashboard use case. You want to get data from different sources and be
able to present business value to the consumers, to the business users, what have you. And in
other cases, like enterprise search, you're able to search data very quickly.
Simpler compliance
Imagine an auditor walking into an organization wanting to look at data for a particular event,
activity, or customer, searching across a thousand resources. It could be a nightmare. A
compliance initiative becomes a lot simpler through data virtualization.
Then, you're doing things like content-management applications, which need federation and
integration of data from many sources to present more valuable information. Also, smartphones
and mobile devices want data from different systems, so that it all ties together for their
consumers, for the business users, effectively.
So data virtualization has quite a strong value proposition and, typically, organizations get the
return on investment (ROI) within six months or less with data virtualization.
Gardner: Todd, at Stone Bond, when you look to some of your customers, what are some of the
salient paybacks that they're looking for? Is there some low-hanging fruit, for example? It sounds
from what Noel said that there are going to be payoffs in areas you might not even have
anticipated, but what are the drivers? What are the ones that are making people face the facts
when it comes to data virtualization and get going with it?
Brinegar: With Stone Bond and our technology, Enterprise Enabler, the ability to virtualize,
federate, and orchestrate, all in real time, is a huge value. The biggest thing, though, is time to
value: how quickly can they get the software configured and operational within their enterprise?
That is really the key that is driving a lot of our clients' actions.
When we do an installation, a client can be up and operational doing their first integration
transformations within the first day. That’s a huge time-to-value benefit for that client. Then, they
can be fully operational with complex integration in under three weeks. That's really astounding
in the marketplace.
I have one client that, on one single project, calculated $1.5 million in personnel cost savings in
the first year. That's not even taking into account a technology that they may be displacing by
putting in Enterprise Enabler. Those are huge components.
Gardner: How about some examples, Todd, some use cases? I know sometimes you can name
companies and sometimes you can't, but if you do have some names that you can share about
what the data virtualization value proposition is doing for them, great, but maybe even some use
cases if not.
Brinegar: HP is a great example. HP runs Enterprise Enabler in their supply chain for their
Enterprise Server Group. That group provides data to all the suppliers within the Enterprise
Server Group on an on-time basis.
They are able to build on demand and take care of their financials in the manufacturing of the
servers much more efficiently than they ever have. They were experiencing, I believe, a 10X
return on investment within the first year. That’s a huge cost benefit for that organization. It's
really kept them a great client of ours.
We do quite a bit of work in the oil business and the oil-field services business, and each one of
our clients has experienced a faster ROI and a lower total cost of ownership (TCO).
We just announced recently that most of our clients experienced a 300 percent ROI in the first
year that they implemented Enterprise Enabler. CenterPoint Energy is a large client of Stone
Bond and they use us for their strategic transformation of how they're handling their data.
How to begin
Gardner: Let's go back to Noel. When it comes to getting started, because this is such a big
problem, it can feel like trying to boil the ocean, given all the different data types and the
legacy involvement. Do you have a sense of where companies that are successful at doing this
have begun?
Is there a pattern, is there a methodology that helps them get moving towards some of these
returns that Todd is talking about, that data virtualization is getting these assets into the hands of
people who can work with them? Any thoughts about where you get started, where you begin
your journey?
Yuhanna: One approach is taking an issue, like an application-specific strategy, and building
blocks on that; another is going out and looking at an enterprise-wide strategy. For the enterprise-
wide strategy, I know that some of the large organizations in financial services, retail, and sales
are starting to embark on looking at all of this data in a more holistic manner:
"I've got customer data that is all over the place. I need to make it more consistent. I need to
make it more real-time." Those are the things they're dealing with, and I think we're going to see
more of this in the coming years.
Obviously, you can't boil the ocean, but you want to start with the data that is most valuable, and
this comes back to the point that you talked about as the right data. Start with
the right data and look at those data points that are being shared and consumed by many users,
business users, and that’s going to be valuable for the business itself.
The important thing is also that you're building block by block on the solution. You can
definitely leverage some existing technologies if you want to, but I would recommend looking at
newer technologies, because they definitely are faster. They do a lot of caching, and they do
much faster integration.
As Todd was mentioning, quicker ROI is important. You don’t have to wait for a year trying to
integrate data. So I think those are critical for organizations going forward. But you also have to
look at security, availability, and performance. All of these are critical when you're making
decisions about what your architecture is going to look like.
Gardner: Noel, you do a lot of research at Forrester. Are there any reports, white papers, or
studies that you could point to that would help people as they are starting to sort through this to
decide where to start, where the right data might be?
Yuhanna: We've actually done extensive research over the last four or five years on this topic. If
you look at Information Fabric, this is a reference architecture we've told customers to use when
you're building a data virtualization layer yourself. You can build the data virtualization layer
yourself, but obviously it will take a couple of years. It's a bit complex to build, and I think that's
why packaged solutions are better at that.
But Information Fabric reports are there. Also, information as a service is something that we've
written about -- best practices, use cases, and also vendor solutions around this topic of
discussion. So information as a service is something that customers could look at and gain
understanding.
Case studies
We have use cases and case studies that talk about the different types of deployments, whether
it's real-time BI implementations, single version of the truth, fraud detection, or other types of
environments. So we definitely have case studies as well.
There are case studies, reference architectures, and even product surveys, which talk about all of
these technologies and solutions.
Gardner: Todd, how about at Stone Bond? Do you have some white papers or research reports
that you can point to in order to help people sort through this and perhaps get a better sense of
where your technologies are relevant and what your value is?
Brinegar: We do. On our website, stonebond.com, we have the blogs of our CTO, Pamela Szabó,
which offer a great perspective on data, big data, and the changing face of data usage and
virtualization.
I wish everybody would explore the different opportunities and the different technologies that
there are for integration and really determine not just what you need today -- that's important --
but what you will need tomorrow. What's the tech that you're going to carry forward, and how much
is the TCO going to be as you move forward, and really make that value decision past that one
specific project, because you're going to live with the solution for a long time.
Gardner: Very good. We've been listening to a sponsored podcast discussion on the need to
make sense of the deluge and the complexity of data and information swirling in and around
modern enterprises. We've also looked at how better data access can lead to improved integration
of all information into approachable resources for actionable business activities and intelligence.
I want to thank our guests. We have been here with Noel Yuhanna, Principal Analyst at Forrester
Research. Thanks so much, Noel.
Yuhanna: Thanks a lot.
Gardner: And also Todd Brinegar, the Senior Vice President of Sales and Marketing at Stone
Bond Technologies. Thanks to you too, Todd.
Brinegar: Much appreciated. Thank you very much, Dana. Thank you very much, Noel.
Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks again for
listening, and come back next time.
Listen to the podcast. Find it on iTunes/iPod. Sponsor: Stone Bond Technologies
Transcript of a BriefingsDirect podcast on how companies can get a handle on exploding data
with new technologies that offer better data management. Copyright Interarbor Solutions, LLC,
2005-2011. All rights reserved.
You may also be interested in:
• Could Data Sprawl in the Cloud Cost You Your Job?
• How to Deal with Data Sprawl? Could a Sticky Policy Standard Help?
• Tips for Managing System and Data Sprawl Issues
• Stone Bond Keeps Focus on Data Integration for the Masses