Sunday, December 28, 2008

Q: How Does Web 2.0 Make Money? A: Government.

A lot of folks are wondering how Twitter will monetize.  Will they sell premium services to businesses that want to make Twitter part of a communications strategy?  What about Flickr, YouTube, and Facebook?  Are the ads working?  Regardless, if this is Web 2.0 then why are we still talking about subscriptions and eyeballs on pages?! 


I want to suggest another strategy:  Sell to government.  

I don't mean this in the general sense.  I mean, specifically, that Twitter, YouTube, Facebook, and a great many others should set up for-government operations.  I wrote to the Delicious team years ago asking when they were coming out with a solution that I could bring to government, i.e., in-house.  I never got an answer.  Regardless, figuring out how to get these companies oriented toward government is not straightforward.  Most Web 2.0 companies literally could not be further outside The Beltway.  I suspect they don't have much in the way of strategy for state, tribal, or local government either.  Google does a good job selling into government with its enterprise appliance model, and with more than just search.  But, of course, Google is a massive company.

Culture has a lot to do with things.  The Pentagon is not a T-shirt and flip flops kind of environment.  "The Bigs," i.e., large-cap companies that provide most of the contracting labor, are not at all oriented to innovate in the Web 2.0 technology space.  You don't see Macs anywhere.  You do see MS Office everywhere.  I'm not entirely sure what conclusions can be drawn from these observations but I am sure the observations are significant.

I suppose the best example of Web 2.0 penetration into the government space is tele-presence.  Adobe Breeze is ubiquitous on Defense Knowledge Online (DKO).  Just about anyone with a DKO government account can create or attend a meeting.  But tele-presence probably isn't the first thing that comes to people's minds when asked to name a Web 2.0 technology and I'm not sure Adobe is the best example of a Web 2.0 company.

Yes, there is something different about Twitter, Flickr, YouTube, Facebook, Delicious, et al.  I happen to think that something different - whatever it is - translates into unrealized opportunity for both buyers and sellers in the government space.  I focus on these technologies specifically, but there are others I would include.  ProgrammableWeb, for example, has a solution for the registry problem.  I don't represent any of these companies, by the way.  I call them out as (mostly) well-known examples of capabilities the government needs.  I don't really care whether Flickr, Picasa, or PhotoBucket is the image repository of choice.  Vimeo and YouTube can and should compete for the video infrastructure.

The point is that government needs platform solutions.

Simple, content-based platform solutions are the most obvious plays for Web 2.0 in government:  images, video, audio.  The federal government processes a staggering amount of this stuff.  The DoD may be the first to get into the mix with TroopTube.

There also are plenty of outfits that would use social tagging tools if only they could bring them in house.  By "bring them in house," what I mean and recommend is providing enterprise services solutions.  It's nice to have applications that provide tagging, but applications are seldom the best enterprise solutions and are hardly social (except, perhaps, in MOSS).  Tagging is a domain, a Web 2.0 partition, if you will, unto itself.  It is a simple-enough-but-not-too-simple utility that scales, and it can be integrated with just about any other application, regardless of whether the application was designed with tagging in mind.  Yes, government needs strategic guidance and support for tagging services, and Web 2.0 tagging companies are just the ones to provide it...if we can figure out how.
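
To make the enterprise-service idea concrete, here's a minimal sketch of the kind of tagging utility I have in mind.  The class and method names are hypothetical, invented for illustration; this is not any vendor's actual API.

```python
from collections import defaultdict

class TagService:
    """A tiny, generic tagging utility any application can call."""

    def __init__(self):
        # tag -> resource URIs, and resource URI -> tags
        self._by_tag = defaultdict(set)
        self._by_resource = defaultdict(set)

    def tag(self, resource_uri, tag):
        """Attach a tag to any addressable resource (document, image, record)."""
        self._by_tag[tag].add(resource_uri)
        self._by_resource[resource_uri].add(tag)

    def resources_for(self, tag):
        """Find everything across the enterprise carrying a given tag."""
        return sorted(self._by_tag.get(tag, set()))

    def tags_for(self, resource_uri):
        return sorted(self._by_resource.get(resource_uri, set()))

# Any application can integrate simply by handing the service a URI:
svc = TagService()
svc.tag("https://intranet.example.mil/docs/report-07.pdf", "logistics")
svc.tag("https://intranet.example.mil/imagery/12345", "logistics")
print(svc.resources_for("logistics"))
```

Because nothing in the service knows or cares what kind of resource a URI points to, it can sit alongside applications that were never designed with tagging in mind.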

The current providers, a.k.a. "The Bigs," are not oriented to provide Web 2.0 tech support.

This is either an opportunity to get new business before The Bigs do or to create a business of showing them the way.  In any case it is an opportunity to make money.  More specifically, it is a way for Web 2.0 technologies to make money.  In the government services space the biz-speak is "priming" and "subbing," as in:  you are either a prime contractor or a subcontractor.  I don't see many Web 2.0 companies subcontracting to big, traditional system integrators, though.

The government is not set up to acquire Web 2.0 technologies.  

While services contracts are fairly common and understood, things are less well-defined in the products and solutions space unless the products and solutions are ubiquitous or otherwise extremely well known.  Government, especially federal, wants to buy everything in bulk.  See also:  Indefinite Delivery, Indefinite Quantity (IDIQ).  New technology is extraordinarily difficult to insert, especially in secure environments.  Protocol, procedure, and process rule the land.  Force-fitting into an existing model is too often the preferred method.

We are getting better at defining and avoiding undue process but process, by definition, is:  1)  necessary and 2)  inherently situational.

When doing business with the federal government, it's important to know how business is done inside The Beltway.  There are most probably things in the mix that need to be undone, too.  To a meaningful extent the situation is no different at state, tribal, and local levels.  Education is needed on both sides of the buyer/seller relationship.  We might "change the world" in the process of implementing Web 2.0 for government, but first we are obliged to have a fair understanding of "the world."  The obligation goes both ways but mostly falls on the shoulders of sellers.

Choices for Web 2.0 companies to make money by doing business with government:
  1. Become, or spin off, an enterprise systems integration unit
  2. Sell enterprise systems solutions a) to government b) to system integrators (Note:  probably can't sell solutions to government without either being an integrator or having the support of one.)
  3. Consult a) to government b) to system integrators on enterprise systems
Of course, "partner" is an option, but it still implies one or more of the previously listed options.

[Update 9:47 PM - I've decided I really don't like these choices at all.  Need to come up with an altogether new business model, perhaps...probably]

There's so much more to this than I can wrap my head around right now, certainly more than I am prepared or qualified to comment on here.

[Update 9:47 PM - Forgot to comment on the need for and evidence of government investment in backbone infrastructure and understanding of cloud architecture; significant issues arise once a bunch of these services are running around on a single network.  And as always, security is different and harder.]

I think it's time Web 2.0 companies, government, and large-cap contracting companies had a grand introduction to one another.  Believe it or not, there are plenty of people who have never even heard of Web 2.0.

Monday, December 22, 2008

Common Web 2.0 Services for Government Brainstorm

  1. SMS - ex., Twitter
  2. Expertise Location - ex., Facebook, LinkedIn, semantic profilers
  3. Tagging - ex., Delicious, Magnolia, Stumble Upon
  4. Image Sharing - ex., Flickr, Picasa, PhotoBucket
  5. Video Sharing - ex., YouTube, Vimeo, TroopTube! (12.28.08)
  6. Audio Sharing - ex., SoundCloud, HuffDuff
  7. Document Sharing - ex., Google Docs, MOSS, WordPress, Blogger, Tumblr
  8. Registry - ex., ACME.ProgrammableWeb, Wiki (?)
  9. Tele-presence - ex., Breeze, WebEx, GoToMeeting
  10. Search - ex., Google, Yahoo!
  11. Visualization - ex., ManyEyes, Swivel
  12. Dictionary - ex., Merriam-Webster.com (http://m-w.com/dictionary/[WORD])
  13. Data Transformation - ex., ??
  14. [Update 12.29.08] Geospatial, mapping - ex., Google Earth/Maps, Microsoft Live Virtual Earth/Maps
  • Utility model (i.e., like electric, water, natural gas)
  • Specialized content and application services infrastructure
  • Government needs a strategy for inserting these technologies
  • Government strategy must be capabilities-based and vendor indifferent, yet cannot be generic
  • How exactly/specifically can government do business with Facebook or LinkedIn, for example?
  • To what extent will/should third-party integrators be involved?  Is Twitter likely to provide labor resources for technology insertion or would they just want to license the platform to third parties?
  • Companies listed above need a strategy for doing business with government.  Some have them, but most don't.
  • The mutual strategy should be for companies to implement their architectures for these platform services in public and government-managed domains.
  • Probably the biggest hurdle is the massive amount of process and procedure required to navigate the government marketplace and interact with customers
  • Culture gaps

More later...

[Update 12.23.08]
  • Security model is much more complicated than just protecting access to personal information; ease of and tracking of information flow a greater concern
  • Security models need to be reconsidered by both buyers and sellers

Saturday, December 13, 2008

A Pattern is as a Pattern Does

I had a great conversation the other day with Mike McKinney.  One topic in particular really stuck with me and now I can't seem to shake it.  Mike and I had been talking about design philosophies and what it takes to do smart implementation, especially as a government contractor.  We both lamented the environment of design and implementation bloat in which we seemed to find ourselves.  Bloat does not fly in government contracting.  Time and money resources are extremely tight and skillsets are scarce.  Mike noted that "ever since the 'Gang of Four' book came out everyone thinks they are an architect and everyone wants to build a framework."  I couldn't have agreed more.  Ugh, that word:  "framework."  I've used it many times myself, but if I never see or hear or use it again I wouldn't be unhappy about it.

The purpose of design patterns isn't necessarily to build huge, abstract libraries of software components.  The purpose of design patterns isn't necessarily to solve common programming problems for other programmers.  A wonderful thing about design patterns is that they are something familiar to rely on when confronted with new problems.

I've used dozens of patterns and I've even participated in writing a framework or two.  But the world needs only so many frameworks.  At the same time there is no shortage of hard problems to solve for which design patterns are quite, quite useful.  I don't have time, money, or expertise for all the frameworks I'd otherwise need to get the job done.  I need robust, reliable patterns and people skilled enough to recognize how to implement them to solve customers' problems.

So, attention all developers:  Don't just think of design patterns as architectural building blocks. Sure, they are that.  But think also of design patterns as tools in a toolbox.  

When we're confronted with a domain-specific problem we should build neither point-to-point solutions nor frameworks.  Instead, use patterns to solve a domain-specific problem in a reliable way.  Point-to-point solutions require niche skillsets, are complicated, and don't scale well.  Yet, we don't have time or resources to build frameworks and there wouldn't be much of a market for them if we did.  Either that or our frameworks have to be raised up to such a level of abstraction that we end up forcing domain-specific stakeholders into our patterns rather than molding our patterns around domain-specific use cases.

So, what is a good example of how we use patterns to solve domain-specific problems without building a framework?  ETL-V is a good example:

You don't need a framework to implement this pattern.  Simply recognize that the solution to interoperability among domain-specific applications is the production and consumption of well-formed Unicode data.  If someone gives you a data source and is looking for a way to visualize it, you could look for a visualization API and use point-to-point integration to read the data and construct objects from the API.  But then what do you do when someone hands you another data source?  What if the new source lends itself to a new visualization technique?  What if the API you chose the first time around doesn't support that technique?  Well, perhaps you have job security as a software integrator.  On the other hand, if you want to maintain separation of concerns and implement a robust, flexible solution, you could follow a pattern based on decoupling data from proprietary or domain-specific formats and transforming data into views using standard, ubiquitous processors.  This effectively changes the integration and interoperability problem into an easier-to-solve scalability problem.  Point-to-point integration is neither simple nor scalable.  However, given any data source expressed as well-formed Unicode, we can write a practically boundless number of transformations to produce a practically boundless number of views and applications.  It takes a pattern but not a framework.
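
As a rough illustration of the pattern (a sketch, not the actual ETL-V implementation), consider the following Python.  The field names and sample data are invented; the point is that the source format is isolated in one extraction step, and every view is just another transformation over the same neutral records.

```python
import csv
import io
import json

# A hypothetical delimited export; in practice this would come from a
# domain-specific application, not be hard-coded in the script.
RAW = """entity,time,x,y
tank_01,0,10.0,20.0
tank_01,5,12.5,21.0
uav_07,0,100.0,40.0
"""

def extract(text):
    """Isolate the source format here; everything downstream sees plain dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def view_as_json(records):
    """One 'view': a JSON document any other tool or processor can consume."""
    return json.dumps(records, indent=2)

def view_observation_counts(records):
    """Another 'view': number of observations per entity."""
    counts = {}
    for r in records:
        counts[r["entity"]] = counts.get(r["entity"], 0) + 1
    return counts

records = extract(RAW)
print(view_observation_counts(records))  # {'tank_01': 2, 'uav_07': 1}
print(view_as_json(records))
```

Adding a new view never touches extraction, and swapping in a new data source never touches the views; that's the separation of concerns at work.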

Tuesday, December 9, 2008

Models and Simulations as Data

A colleague, Anthony Watkins, and I recently conducted a feasibility assessment of ETL-V applied to a DoD tactical simulation called JCATS:  Joint Conflict and Tactical Simulation.  JCATS was created by Lawrence Livermore National Labs in 1997 and has been supported in some fashion or another ever since.  JCATS is not unlike other military sims I've encountered over the years:  OTB, TDF.  In fact, JCATS is generally not unlike all other models and sims with which I have worked in the past 11 years:  aerospace apps, power/energy sims, custom apps.

I AM NOT A MODELING AND SIMULATION EXPERT.  

I took a class in graduate school, did pretty well.  But I studied Computer Science and Applications.  That's my slant.

When I reflect on my scant 11 years of experience designing, developing, implementing, and integrating business processes and software - a fair amount of it in the M&S domain - I come to one conclusion.

Interoperability stinks.

A bit of personal history on why I think that.  Three words:  separation of concerns.  As in, we haven't maintained it.  I observed the problem in my first job at the ODU Research Foundation's Center for Coastal Physical Oceanography:  oceanographers were writing code...bad code.  Eventually the computer scientists got involved, but all we did was write occasionally elegant code wrapped tightly around a domain application.

Then, in September of 2001, I found Object Serialization, Reflection, and XML all at the same time (XML, casually, in '99).  My colleagues and I were using these to create 3D visualization, behavior, and human-computer interaction for the Web.  With the introduction of XML-RPC we were communicating with "back end" data services to drive client-side 3D.  The 3D applications were entirely dynamic without compilation,  the same way all other content was being delivered over HTTP to a standard browser.  It was cooler than dirt.  
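
For flavor, here's what the XML-RPC piece looks like in present-day Python.  The endpoint URL and method name are invented for illustration; they are not what we actually ran back then.

```python
import xmlrpc.client

# Hypothetical back-end data service; URL and method name are invented.
proxy = xmlrpc.client.ServerProxy("http://data.example.org/rpc")

# The browser-delivered 3D front end asks the service for plain data
# structures (lists, dicts, strings) and builds or updates the scene from them.
scene_update = proxy.get_scene_update("demo_feed")
print(scene_update)
```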

Along came Web Services, circa 2003-4 for us. Finally, everything was starting to click.

That brings me back to models and simulations as data and where I am in 2009.  

Simulations aren't applications, they're data sources.

I've written previously about how we use enterprise architecture models as data sources for analytics.  It's basically the same thing for sim integration.  There is one additional component:  simulations often stream data in real time.  The transactional model of the Web and relational databases is not always sufficient.  Regardless, the methodology is not dependent on transfer rate or protocol.  The methodology is based on separation of concerns.  The difference between yesterday and today is that we realized how to take what we were doing above the level of a single application.

Most of the applications with which I am experienced are heavy...very heavy.  Whether we are talking about footprint, documentation, training, number of participants, facilities, or duration, most simulation apps require a lot of it.  Technology insertion and training are costly.  Learning curves are steep.  Exposure and knowledge are limited to very small groups.

It doesn't have to be that way, either.

[Update:  12/10/08 - Well, maybe it does.  I appreciate that M&S is hard and hard takes time.  The point I really want to make is that we can and should reuse the data generated by M&S to increase their ROI and the overall value of their insights as information assets.  We do this by decoupling their data so it can be more easily integrated with other things - things people who aren't modelers or enterprise architects care about.]

If I could wave a magic wand, I'd strip every simulation down to its algorithms and databases.  To be sure, there are some sweet front-ends out there, but they aren't maintaining separation of concerns.

JCATS, for example, is a powerful tactical simulation.  There are good smarts in there.  But JCATS has a limited graphical user interface (2D) and is strictly designed for tactical operational scenarios on maps.  While the designers of JCATS may have thought about 3D and some statistical diagnostics, they certainly did not, and could not have, anticipated or accommodated all the many ways we can use the output of JCATS simulations.

The good news is that JCATS saves huge DAT files, i.e., data files, i.e., JCATS data is portable.  JCATS produces ordinary delimited text files (comma-separated) and puts similar data on the wire in real time (either TCP/IP or UDP, I think).

From here it's easy:
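
A hedged sketch, in Python, of what that first step can look like.  The field names are invented stand-ins (not the actual JCATS record layout) and the port number is arbitrary; the point is that file replay and the live wire feed the same downstream transformations.

```python
import csv
import socket

# Invented stand-in fields; this is NOT the actual JCATS record layout.
FIELDS = ["time", "entity_id", "x", "y", "status"]

def records_from_file(path):
    """Replay: read a saved, comma-separated JCATS-style DAT file."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f, fieldnames=FIELDS):
            yield row

def records_from_udp(port=9999):
    """Live: the same comma-separated records arriving on the wire (UDP)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    while True:
        data, _addr = sock.recvfrom(65535)
        yield dict(zip(FIELDS, data.decode("utf-8").strip().split(",")))

def print_view(records):
    """One of many possible views over the same record stream."""
    for r in records:
        print(f"{r['time']}: {r['entity_id']} at ({r['x']}, {r['y']})")

# Either source feeds the same downstream transformations:
# print_view(records_from_file("jcats_run_01.dat"))
# print_view(records_from_udp())
```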

All of the resulting interactive views were transformations of JCATS data using ETL-V.



Saturday, December 6, 2008

Government 1.0 -> Government 2.0

Observing, describing, and defining a 1.0 -> 2.0 transition may be a difficult undertaking.  Take Government 2.0, for example.  What is our context for "government":  local, state, federal, international?  All of these?  I think if we are going to attempt any articulation of a 1.0 -> 2.0 transition we should start by following a model.  In the case of all things 2.0, that model is obviously the seminal piece by Tim O'Reilly, entitled "What Is Web 2.0?  Design Patterns and Business Models for the Next Generation of Software."

I suppose we should first start by questioning the appropriateness of trying to draw such inferences from one domain to another.  However, this is occurring regardless.  The 2.0 moniker is being attached to everything under the sun, perhaps without consideration of anything but brand/buzz recognition.  So, like it or not, we are now at a point where we must articulate what we mean by Government 2.0, indeed, by 2.0 in general.

I do think it is appropriate to use Web 2.0 as a model for other domains.  Web 2.0 is not just about technology.  It is about technology that recognizes and leverages the profound role of human behavior and scale.  It certainly seems relevant to Government 2.0.

Literally, it may be appropriate to relate the seven top elements of O'Reilly's paper to other domains.  Intuitively I can see such relationships in Government 2.0.  Harnessing Collective Intelligence strikes me as very democratic.  What could be more E Pluribus Unum?  "Data is the Next Intel Inside" maps to Open Government Data (see also:  Kundra).  The sub-elements relate as well:  "A Platform Beats an Application Every Time" is exactly the point being made by Robinson et al. in "Government Data and the Invisible Hand."

But even before we get too carried away with that exercise, perhaps we should start where O'Reilly and MediaLive International started; with a brainstorm.  What are examples of the 1.0 to 2.0 transition in government?

Here are a few examples that are obvious to me:

NOAA active weather bulletins -> NOAA active weather RSS
Agency reports -> Open Government Data
Budget competition -> transparent investment
representatives' web sites -> representatives' blogs
Public council meeting -> public council WebEx
Requests for Proposals -> Contests for Apps (solutions)

Well, perhaps it's a start anyway.
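
The first transition on that list is already something a script can use.  Here's a small sketch of what "bulletins as RSS" buys you; the feed URL below is illustrative, not a specific NOAA endpoint.

```python
import urllib.request
import xml.etree.ElementTree as ET

# Illustrative URL only; substitute the actual NOAA/NWS RSS feed of interest.
FEED_URL = "https://www.weather.gov/rss/example-alerts.xml"

with urllib.request.urlopen(FEED_URL) as resp:
    tree = ET.parse(resp)

# Standard RSS 2.0 layout: rss -> channel -> item -> title/link
for item in tree.findall("./channel/item"):
    title = item.findtext("title", default="(no title)")
    link = item.findtext("link", default="")
    print(f"{title}\n  {link}")
```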

Enterprise Architectures as Data

Enterprise architectures are more than static structures, use cases, and process/sequence models.  Enterprise architectures are also queryable data sources that, once constructed, can be used to answer a great many questions relevant to decision making based on multiple stakeholder concerns:  operational, technical, financial, logistical.  In my experience, this assertion usually surprises people.  Operators want to know what complex (in their minds, complicated) EA models have to do with getting the job done.  Appropriators want to know how system producer-consumer dependencies relate to purchasing decisions.  To anyone unfamiliar with enterprise architecture, EA can be seen not only as having no value, but as an unwelcome cost burden.  Yet each of these perspectives is relevant to an enterprise architecture.  Many architects understand this problem but have been helpless to address it.  Enterprise architecture is a rigid, rigorous discipline.  The language and views of architects are complex and detailed.  The tools architects use are highly specialized.  All of this contributes to a formidable barrier to information and knowledge sharing.


The good news is that there is a relatively simple technology solution that can cut through the complexity and lead to better decision making informed by a variety of stakeholder perspectives.  Now, I'm not saying this is a case of technology riding in on a white horse to save the day, but it's darn close.  To be sure, technology's job here is to get out of the way; to provide the least amount of resistance and friction to business processes and people communicating.  By focusing on a strategy for how EA data is stored, extracted, and transformed we can make the data more versatile.  By making the data more versatile we can make the information those data describe more usable.

The solution strategy, then, is to make enterprise architecture data more versatile using standards.  

By implementing this strategy we can use enterprise architectures as data sources to answer a diverse set of typical stakeholder questions.  Using this strategy we clearly see that otherwise detailed, complex data can be easily queried, extracted, transformed, and visualized in entirely new ways.
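
As a hedged sketch of what "architecture as a queryable data source" can look like, here's a toy example in Python.  The export fragment and element names are invented; real EA tool exports differ, but the idea of answering a stakeholder question straight from the model data is the same.

```python
import xml.etree.ElementTree as ET

# Invented EA export fragment: systems and the data exchanges between them.
EA_EXPORT = """
<architecture>
  <system id="S1" name="Logistics Tracker" owner="J4"/>
  <system id="S2" name="Supply Portal" owner="J4"/>
  <exchange producer="S1" consumer="S2" data="inventory-status"/>
</architecture>
"""

root = ET.fromstring(EA_EXPORT)
systems = {s.get("id"): s.get("name") for s in root.findall("system")}

# One appropriator-style question answered directly from the model:
# "Who depends on whom, and for what data?"
for ex in root.findall("exchange"):
    producer = systems[ex.get("producer")]
    consumer = systems[ex.get("consumer")]
    print(f"{producer} -> {consumer}: {ex.get('data')}")
```

From there the same extracted records can feed visualizations, cost roll-ups, or dependency reports without anyone opening the architecture tool itself.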