Monday, December 7, 2009

Army's milBlog Runs on WordPress

WordPress is awesome.

Monday, November 23, 2009

Iconography Service

A Web Service that returns a MIL-STD-2525C icon in PNG format given echelon (size) and proponent type (unit):

For example:

Infantry Brigade

This could easily be extended to include other iconography standards.
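To make the idea concrete, here's a minimal sketch of the lookup such a service might perform. The echelon letters, type codes, and file-naming scheme are invented for illustration; they are not actual MIL-STD-2525C symbol identification codes.

```python
# Hypothetical lookup tables: map human-readable echelon and proponent
# type to short codes used in icon file names. Illustrative only.
ECHELONS = {"squad": "A", "platoon": "B", "company": "C",
            "battalion": "D", "brigade": "E", "division": "F"}
TYPES = {"infantry": "INF", "armor": "ARM", "artillery": "ART"}

def icon_filename(echelon, unit_type):
    """Return the PNG file name for the requested symbol, or None."""
    ech = ECHELONS.get(echelon.lower())
    typ = TYPES.get(unit_type.lower())
    if ech is None or typ is None:
        return None  # unknown echelon or type; a real service would 404
    return "2525c_%s_%s.png" % (typ, ech)
```

A thin HTTP front end would then just parse the two query parameters, call this function, and stream the file back with a Content-Type of image/png.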

Saturday, November 14, 2009


Update: 12.03.09 - It seems is already providing most of what I'd like to see from a service: anyone can add (you don't even have to authenticate), domain attributes, links to references, a direct URL to each entry. There are some obvious differences in approach. I want to go further with URLs. I'd like to be able to point to a unique definition via a path, e.g., /wiktionary/FOO/1. I also want to do lookups in other dimensions, e.g., return all terms from domain:X.

The government is awash with acronyms. New acronyms are created daily. Acronyms create a barrier to understanding if they cannot be easily resolved, where easy = universal and universal = URL. There are many online dictionaries with entries that are found in Web searches. However, these return results only in highly formatted, not-well-formed HTML that is not always accessible through simple URLs. Furthermore, these dictionaries provide no way for the community to create and share new entries as they are needed. A simple solution to this is:
  1. Use the cloud to store terms and definitions
  2. Use Web services to return definitions through URLs as XML, JSON, and XHTML
  3. Provide a simple form that lets registered users add and edit terms
We have created a proof of concept here using XAMPP and AWS:

It's not perfect or even complete. For example, the XHTML returns errors from the W3C validator. But I think this is a solid start and I'd like to open it up and see it go further. To that end, I submitted it as an idea for Sunlight Labs. We'll see if it garners any votes of interest...
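As a sketch of what steps 1 and 2 might look like, here's a toy lookup that serves the same entries as both JSON and XML. The in-memory store and response shapes are invented stand-ins for the real service.

```python
import json
import xml.etree.ElementTree as ET

# Toy in-memory store standing in for the cloud database.
# The entry fields below are illustrative, not the actual schema.
TERMS = {
    "COTS": [{"definition": "Commercial off-the-shelf",
              "domain": "acquisition"}],
}

def lookup_json(term):
    """Return all definitions for a term as a JSON string."""
    entries = TERMS.get(term.upper(), [])
    return json.dumps({"term": term.upper(), "entries": entries})

def lookup_xml(term):
    """Return all definitions for a term as an XML string."""
    root = ET.Element("term", name=term.upper())
    for entry in TERMS.get(term.upper(), []):
        e = ET.SubElement(root, "entry", domain=entry["domain"])
        e.text = entry["definition"]
    return ET.tostring(root, encoding="unicode")
```

Step 3 is then just a form that POSTs a new entry into the store. Because each term (and each numbered definition) maps to a path, every definition gets a stable URL.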

Saturday, November 7, 2009

Bad Practices of Linked Data

Urban EcoMap San Francisco is a great site that lets you explore emissions by zip code on a color-coded map:

Smartly, they also provide a "link" to download the data as comma-separated values.

Sadly, however, the link is not accessible!

Where is the URL for this data? It's hidden behind a Flash control.

Sunday, October 25, 2009

Mashing up Virginia Beach Police Data with Google Docs

Last Sunday, 10/18/2009, the editor of the neighborhood section of my local paper, The Virginian-Pilot, said that he was looking for ways to improve how he reports crime data. At the same time, I know that neighborhood crime is a hot topic in our community league meetings, casual get-togethers, and email lists.

Witness: hyperlocal supply and demand in Gov 2.0.

In fact, there's plenty my local paper can do to improve the reporting of [crime] data in our city. In the printed edition of the neighborhood section they only print the police "blotter" from the previous week. Online they use Google Maps to report crimes by location, but the interfaces are constrained and inconsistent.

What my paper should do to improve crime reporting is go to the source.

The Virginia Beach Police Department publishes crime data on a city website. Here's the URL:


You can query for all crimes going back to January 1, 2006 (why only that far back?) up to the present day. But you can look at exactly 15 results at a time.

So what my local paper should do is appeal to the city to make [crime] data more accessible to everyone; where everyone includes The Virginian-Pilot. Short of that, the Pilot should take the energy it invests in supporting its Web presence and use it to scrape and publish city data. If only there were a "Machine Friendly" link next to the "Printer Friendly" link. (Both are designed to help people.)
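Scraping a site that caps results at 15 per page mostly means walking the pages. Here's a sketch of the pagination half of that job; the base URL and parameter names are placeholders, since the city site's actual form fields would differ.

```python
import urllib.parse

PAGE_SIZE = 15  # the city site returns exactly this many results per page

def page_offsets(total_results, page_size=PAGE_SIZE):
    """Yield the start offset of each page needed to cover all results."""
    for start in range(0, total_results, page_size):
        yield start

def build_query(base_url, start_date, end_date, offset):
    """Build one page's query URL (parameter names are hypothetical)."""
    params = {"from": start_date, "to": end_date,
              "start": offset, "count": PAGE_SIZE}
    return base_url + "?" + urllib.parse.urlencode(params)
```

A full scraper would fetch each URL, parse the result rows out of the HTML, and append them to a CSV file ready for a spreadsheet.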

Shortcomings notwithstanding, here's what Pilot reporters can do right now for free:

Compile and publish Google spreadsheets with charts:

Wow, who knew that 72% of all crime in Virginia Beach occurs at the Oceanfront?

Update: HUGE, HUGE ERROR in my reporting of this stat. The pie chart above does not show all crime in Virginia Beach. It shows only the top 5 neighborhoods.

This chart, below, is the correct chart. It shows that, while still a standout piece of the pie, 4% of all crime happens at the Oceanfront.

I'm feeling really dense right now, but in a way I'm glad this happened. This is a perfect illustration of a fundamental in data viz (and reporting).

Saturday, October 10, 2009

Ten Things My City Can Do to Improve Our Website

  1. Publish events in iCal format
  2. Publish electronic police reports as XML
  3. Publish 311 data (as XML)
  4. Geocode public works projects
  5. Implement short, guessable URLs
  6. Use a free mapping service
  7. Update the Transportation Data Management System
  8. Add a "Web-Friendly" link next to "Printer-Friendly"
  9. Create a data catalog
  10. Get a .gov address
Details to be added here...
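Item 1 on that list is nearly free to implement. Here's a sketch of emitting one city event in iCalendar (RFC 5545) format using nothing but string handling; the PRODID and the sample field values are placeholders.

```python
def ical_event(uid, start, end, summary, location):
    """Return a minimal VCALENDAR containing one VEVENT.

    iCalendar requires CRLF line endings; dates are in the compact
    UTC-naive form YYYYMMDDTHHMMSS for simplicity here.
    """
    lines = [
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//City of Virginia Beach//Events//EN",  # placeholder
        "BEGIN:VEVENT",
        "UID:" + uid,
        "DTSTART:" + start,
        "DTEND:" + end,
        "SUMMARY:" + summary,
        "LOCATION:" + location,
        "END:VEVENT",
        "END:VCALENDAR",
    ]
    return "\r\n".join(lines) + "\r\n"
```

Serve the result with Content-Type text/calendar and every resident's calendar application can subscribe to city events directly.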

Wednesday, September 30, 2009

Gov 2.0 Doers

There has been some criticism that Gov 2.0 is:

  1. Just another buzz word
  2. Just people talking about stuff, no one doing stuff
I disagree, certainly with the 2nd point. Most of the Gov 2.0 people I know and/or follow on Twitter are the doers. Here's a list of who some of them are, certainly not comprehensive, in no particular order:
  1. Brian Sobel, InnovationGeo, Are You Safe
  2. Dmitry Kachaev, D.C. OCTO R&D
  3. Philip Ashlock, Open Planning Project, Open 311
  4. Josh Tauberer
  5. Jim Gilliam
  6. Andrew Turner, GeoCommons, FortiusOne
  7. Everyone at Sunlight Labs
  8. Carl Malamud
  9. Jen Pahlka, Code for America
  10. Leonard Lin, Code for America
  11. Steve Ressler, GovLoop
  12. Noel Hidalgo, New York State Office of the CIO
  13. Raymond Yee, UC Berkeley
  14. Peter Corbett, iStrategy Labs, Apps for Democracy
  15. Kevin Merritt
  16. Hillary Hartley, NICUSA, Citizen Space
  17. Guy Martin
  18. Silona Bonewald, League of Technical Voters
  19. Kevin Connor
  20. Greg Elin, United Cerebral Palsy
  21. Noel Dickover, DoD Office of the CIO
  22. Jon Udell, Microsoft, ElmCity
  23. Kim Patrick Kobza, Neighborhood America
  24. Jay Nath, City of San Francisco
  25. Wayne Moses Burke, Open Forum Foundation
  26. Micah Sifry, Personal Democracy Forum
  27. George Thomas, GSA
  28. Alan Silberberg
  29. Steve Lunceford
  30. Joseph Porcelli, Neighbors for Neighbors
  31. Luke Fretwell, GovFresh
  32. Chris Rasmussen, Intellipedia, NGA
  33. Pam Broviak, MuniGov 2.0
  34. Bill Greeves, MuniGov 2.0, Roanoke County, VA
  35. Jeff Levy, EPA Web Manager, Federal Web Managers Council
  36. Adrian Holovaty, EveryBlock
BTW, we need both "talkers" and "doers." Some people are both. Some people are connectors. Not everyone is an implementer. I tried to stick to implementers; to pick people who have projects and/or organizations dedicated to or related to Gov 2.0. There are some obvious names not on this list. That's probably because they are not implementers. Doesn't mean they're not important. If I missed someone you think should be on the list leave a comment. Be sure to tell me what project they're on. I certainly don't know everyone.

I'll try to come back with links and pictures.

Thursday, August 27, 2009

Views, Transformations, and Sources

Visualizations, like any research product, should contain bibliographic hyperlinks to sources and transformations. (Click to enlarge.)

Tuesday, August 25, 2009

Why Gov 2.0 Matters To Me

I posted this here last night but took it down because it felt too personal and too much about my story. The reason I wrote it this way is because I didn't want to presume why it should matter to other people, but rather describe why it matters to me and let others find commonality in it...or not. Tim O'Reilly just posed the question: "What does Gov 2.0 mean to me?" I think he may be asking a slightly different question (definition vs. motivation), but given that, I decided to put this back out there.

Friends and family have been asking me why I'm into Gov 2.0 and what I get out of it. I'm going to attempt to answer these questions for them and for me here in this post.

For me, Gov 2.0 is personal. I can't answer the question without relating it to my own story:

I was born on October 14, 1970 at Portsmouth Naval Hospital in Portsmouth, Virginia. My father was stationed at Naval Air Station Oceana in Virginia Beach. From where I sit writing this, Oceana is not more than a quarter of a mile away (as the jets/crows fly).

My mother was a teacher in the Virginia Beach Public Schools system. I grew up in that school system and my daughter is in it now, 5th grade. I have not always lived here. I spent about 10 years in the New River Valley of southwest Virginia for college, work, and graduate school. I did not intend to come back to Virginia Beach after grad school. In 1999 I was graduating with a Master of Science in Computer Science and Applications degree from Virginia Tech. It was a decent ticket to anywhere and I was looking west to San Antonio, San Francisco, and Redmond. But events unfolded and I transitioned from a part-time graduate research assistant at Virginia Tech to a full-time Research Associate at Old Dominion University Research Foundation, in Norfolk, VA. Fast-forward to today, in the late summer of 2009, and I'm right back where I started. I think my internal magnets have this lat/lon set as my home base.

I'm not at ODU anymore. Today I'm a government contractor at a company I co-founded. Mostly I am a Defense contractor. I didn't set out to be that either. In fact, I was explicitly trying to avoid it. I don't want to get into all of the reasons why. It's far too complicated for anyone but me to understand and I'm not even sure I do. At risk of abusing a metaphor, suffice it to say that my internal magnets draw me to other things. But I have grown and learned as a Defense contractor. I have banished narrow prejudices and have adopted new world views. I have also rekindled something in me that was instilled at a very early age by my parents: the importance of public service.

I can still see my dad standing in our driveway at my childhood home, as I would have looked up to him, telling me about public service. I remember he tended to talk specifically about civil service. Though he was in the Navy he didn't necessarily talk about military service. Grandma Curry worked for the U.S. Customs Service and I know my dad admired that. Dad also talked a lot about how involved Grandpa Curry was with Miami-Dade Parks & Recreation, in Miami, FL.

I've rarely shied away from taking initiative where I see something needs to be done and participation in organizations comes naturally to me. I've ignored the call to service once or twice. After 17 years growing up a Navy brat, the effects of having a parent coming and going every 6-8 months, and lacking any of my own life direction, I steered away from ROTC and a long-term commitment to a military career. I could have gone into civil service, I suppose, but another thing about my internal composition: I'm an entrepreneur and I don't care much for layered, abstract bureaucracy. I'm not so much into rising up through the ranks as making my own way. It's not that rising up through the ranks is bad. It's highly admirable, in fact. It's just not me. For me, patience is a trained virtue. If there's one thing I've learned as a government contractor, it's that working either with or for government requires intense patience.

So here I am. I'm a Generation Xer. I have not served in the military or civil service. I am easily frustrated by government process. Yet, I have an intense inclination toward public service. How do those add up? What can I do to serve government? (I think this is a question lots of people like me are asking themselves.)

I think today the answers are non-traditional.

I am active in the PTA for my daughter's school. I try to remain active with our Community League. For the past two years I've organized our neighborhood's participation in Clean the Bay Day. Before then our neighborhood didn't participate in CTBD. Of course, these aren't government activities. I think they help government activities, though, and that's significant.

For me, Gov 2.0 represents a near-perfect fit with my personality, direction, and goals. I feel like it was made just for me or that I was made just to live at this time. I really do.

There is and will continue to be no shortage of debate about what Gov 2.0 is, exactly. In fact, during the second week of September there is a "summit" in Washington D.C. dedicated to the topic. Is it tech? Does it include non tech? How does it work? Who can participate? I'm not sure framing it is as important as learning to recognize it from multiple perspectives and acting. I tend to take a literal interpretation. For me, that means there is a distinct connection between Gov 2.0 and Web 2.0. Therefore, for me, Gov 2.0 tends to be significantly Web/tech enabled. But as someone experienced in working for government, I know that tech requires policy and that both are intended to work for people.

This year I attended three important "unconferences" for the Gov 2.0 movement: Transparency Camp East, Gov 2.0 Camp East, and Transparency Camp West. I will be at the Gov 2.0 Summit in a couple of weeks and at Gov 2.0 Expo next Spring. What most interests me is how I can use Gov 2.0 techniques to improve the way my city functions. I am interested also in how my customers will respond to Gov 2.0 and how I can apply it to their needs, but my motive is not winning more contracts. Sure, some will say that if they think it's what other people want to hear. Let there be no doubt that I want my business to succeed and I want to prosper from it. But I'm willing to take it on faith that both things will happen if I just keep following my passion and don't resist the pull of my internal magnets. I just want to "work on stuff that matters."

Increasingly, I want to work on stuff that matters here in Virginia Beach and in my region. But I also want to work on stuff that matters for my state, our nation, and our world. I am particularly drawn toward applications in education, where I see a growing gap between public and private education that is not just affecting the fringes, but is squarely squeezing out the middle and making our nation "dumber" as a whole. I see Gov 2.0 as a means for correcting this troublesome situation. (In general, I think Gov 2.0 can be particularly effective in areas that lack models, process, and funding.)

Early next year I hope to be kicking off the inaugural CityCamp, along with Jen Pahlka from Tech Web. Our goal is to start an unconference around the theme "Gov 2.0 goes local." Jen also has another project in the works that is very much in line with this theme. It's called Code for America and it's modeled after the successful Teach for America program. I look forward to seeing that take shape.

So, this is about the examples set by my parents (and their parents) and the lessons they taught me. It's about wanting to be a part of the solution in my community and my school system; not just a sideline complainer. It's about recognizing that I have something to offer that government needs. This is my opportunity to serve. This is why Gov 2.0 matters to me.

Friday, August 14, 2009

U.S. Code as Linked Data

"The Office of the Law Revision Counsel prepares and publishes the United States Code, which is a consolidation and codification by subject matter of the general and permanent laws of the United States."

We responded to that RFP. We = Bridgeborn, Inc., and Business Bullpen, LLC.

The Law Revision Counsel's recognition that the online home of the United States Code needs upgrading translates into a wonderful opportunity for the LRC, our companies, and the American People. This is much more than an opportunity to redesign web pages for an online presence.

This is an opportunity to publish the U.S. Code as linked data.

Linked data is important for the U.S. Code because it will make the Code more searchable, navigable, and usable by orders of magnitude. Linked data will also increase accessibility and lower costs of integration by making it easier for more consumers to treat the information according to their needs and possible constraints.

Sites exist already to provide the U.S. Code through styled web pages. The Legal Information Institute of the Cornell University Law School, for example, publishes a searchable HTML index of the U.S. Code. This version, however, is not well-formed, linked data. These sites also omit important text included in the official record published by the LRC, such as the Positive Law Codification actions that have been taken. These sites play an important role in the dissemination of the U.S. Code, so it is our hope that this effort will also make the U.S. Code more accessible and usable for consumers like Cornell's LII.

The LRC Web site isn't too bad really. Essentially what it needs is a global site navigation scheme, search on every page, and a good Cascading Style Sheet. There are features that could be added, such as public request for comments with voting. But the most important thing anyone can do with this site is tag the Code.

For example, the following text:


Could become:

From here, there is nowhere we can't go with the Code. We can put it in any container, we can transform it into any view, we can access it from any device. Given tagged, well-formed, linked data, we can address every element of U.S. Code from a standard Internet URL.

While tagging the code with XML may not fully constitute linked data, it is a big step in the right direction. Decorating those tags with RDF is easily accomplished.
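To illustrate what tagging might look like, here's a sketch that wraps one section of the Code in addressable XML elements. The tag and attribute names are invented for this example, not an official U.S. Code schema.

```python
import xml.etree.ElementTree as ET

# Illustrative only: "title", "section", "heading", and "text" are
# invented element names, not a real U.S. Code markup standard.
def tag_section(title_num, section_num, heading, body):
    """Wrap one section of the Code in addressable XML elements."""
    title = ET.Element("title", num=str(title_num))
    section = ET.SubElement(title, "section", num=str(section_num))
    ET.SubElement(section, "heading").text = heading
    ET.SubElement(section, "text").text = body
    return ET.tostring(title, encoding="unicode")
```

Once every title, chapter, and section carries identifiers like these, a URL path such as /title/17/section/102 can resolve directly to one element of the Code.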

There is no point in enumerating the potential applications of an endeavor such as this. They are infinite.

Friday, June 12, 2009

The Mr. Miyagi School of Software Engineering

This is a re-post from a journal entry I originally wrote over on Slashdot in 2004.  I've made some minor changes and updates for this entry.

Background:  This was a concept born out of necessity about 5 years ago.  I needed a way to train someone with very little experience to work with Bridgeworks but I couldn't afford to spend hours and hours of my own time working with him; too many of my own responsibilities and deliverables.  It was very successful and now he is our best Bridgeworks app developer.  Fast-forward to today when I'm interviewing CS students from a local university for a summer internship.  This is a (well) paid position for which I need someone with decent coding skills who is a self-starter.  Unfortunately, none of my candidates have any practical skills whatsoever.  (I don't know what the universities are teaching, but it's not what we need in industry.)  With these candidates it would be like starting from square one.  Obviously I'm not going to pay what I'm willing to pay for people who don't have the quals.  I am willing to train such a person as an unpaid intern, but that is not without its catches.  I still don't have the time to spend hours and hours with these folks.  Furthermore, ethically, unpaid internships are intended for the sole benefit of the student.  My company should not benefit in any way from the labor of unpaid interns.  Practically speaking, though, you can't train someone without giving them tasks.  Re-enter the Mr. Miyagi School of Software Engineering.

Lessons on how to train a junior programmer:

Let's call him Danielsan.

Mr. Miyagi was a very wise and clever sensei. His methodology, loosely translated, is perfect for any small software company that is bringing new developers into the system. The reason Mr. Miyagi's method works so well is because it provides intense, immersive exposure to the most important lessons while demanding relatively few additional resources from the instructor(s). Think about it. While Danielsan was busy painting the fence and sanding the deck, Mr. Miyagi was out having the time of his life!

The length of each lesson is to be determined on a case by case basis.

Lesson 1: Write SDK Documentation

Even the best developers can be notorious for not adequately commenting their code. Good documentation of any software includes both programmer's notes and comments for automated documentation (e.g., doxygen). This oft-neglected task is perfect for Danielsan. An excellent way for him to learn the software from a developer's perspective is to write the documentation that explains how it all works.

Listen carefully when Danielsan asks questions about the existing code base. Discourage him from asking too many questions, except regarding complex concepts. It is important that Danielsan develop his own understanding of algorithms, relationships, dependencies, etc.
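To make the distinction concrete, here's a tiny Python illustration of the two kinds of commentary Lesson 1 covers: a doc comment that an automated documentation tool can extract, plus a programmer's note explaining a design decision. (The function itself is invented for the example.)

```python
def interpolate(a, b, t):
    """Linearly interpolate between a and b.

    This docstring is the "automated documentation" half: a doc
    generator can extract it into the SDK reference.

    Args:
        a: start value.
        b: end value.
        t: blend factor, normally in [0, 1].

    Returns:
        a when t == 0, b when t == 1, a blend in between.
    """
    # Programmer's note (the other half): no clamping on purpose --
    # callers are expected to pass t in [0, 1], and out-of-range t
    # extrapolates deliberately.
    return a + (b - a) * t
```

Writing both kinds of commentary forces Danielsan to understand not just what the code does, but why it was written that way.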

Lesson 2: Build Company Demos

Documenting the code shows Danielsan how the software developer sees things. Danielsan also needs to see the software from users' perspectives. "Users" include programmers who develop applications from the software and end users of the applications that are developed. Ideally, these would be two separate lessons. Knowing that time and money are always issues, these lessons can be condensed into one by having Danielsan build the company demos.

Architects and senior engineers loathe building company demos. While they are often happy to write test apps for in-house use, company demos bring with them a mountain of maintenance headaches and customer support issues. Whether or not your company is big enough to have its own department(s) for maintenance and support, it's worth putting Danielsan to work on company demos so he can get his hands a little dirty and see firsthand the challenges facing maintenance and support team(s).

Lesson 3: Clean House

Many companies have coding standards that must be followed by all code writers. These standards help everyone to write clean, consistent code that everyone can understand. Unless you work at a sterilized laboratory, it's a safe bet that your house always needs cleaning. A great way for Danielsan to learn this important lesson and also develop habits that are consistent with the team is to set him to work checking for adherence to company coding standards, leaks, potential security issues and the like. It's also a convenient way for you to get your code checked by a fresh pair of eyes.

The Successful Sensei

The successful sensei will know that these lessons are not opportunities for him to relax his own standards or to set Danielsan to work unguided. The successful sensei practices what he preaches. He knows which lessons Danielsan must learn on his own and which require guidance. Naturally, Danielsan may occasionally bemoan his instruction. Perhaps he will consider that his training is too rote or mundane. Since you can't just throw a bunch of karate maneuvers in his face to show him what he's learned, it's important to sometimes let Danielsan work on things he finds interesting and fun. Ask him to write stand-alone utility apps that your company might need. Give Danielsan isolated new tasks within the SDK, perhaps something that requires he work with others to design interface requirements, resource requirements, etc.

to be cont'd.

Axiom 1: Tooltips are better left on.

If you leave your tooltips on, chances are better that you will learn something new each time you use your application(s). For Danielsan, tooltips are especially useful when they instruct him about fundamental principles of programming, those that transcend applications.

Axiom 1a: Some tooltips are better than others.

Monday, May 18, 2009

On Ambient Visualization

I want visualization to be less a part of a specific application that I go to and more of a natural extension of the computer itself, available from everywhere. I want visualization to be an ambient experience. When I encounter any table of data in any document container I'd like to be able to quickly view it as a column chart without starting up a chart-making or data-processing application, without shuffling around through copy & paste. I just want to select rows and columns and pop a "window" with a chart in it in one easy step. If I can recognize with my eyes that a table contains place names or lat/lon pairs, then a computer ought to be able to map it with minimal intervention on the part of the user. I should also be able to put my selected, obviously geospatial data on a virtual Earth model. With just a little more imagination I can see turning lists and tables into nodes and edges, viewable with graph layouts. Think Enso for visualization.

I might want to do more than just look at my chart, map/Earth model, and graph. I might want to start to interact with these views (assume independently for now). It starts to seem like I need an application to do that, but I'm not ready to jump the gun. This is still in the realm of a capability and not necessarily an application. Applications start to assume containers and domain-specific use cases. Most visualization techniques have standard, "off-the-shelf" things you can do with them given basic commands or input devices. Charts can be sorted and transformed into different layouts. One can pan, zoom, and rotate maps and terrain. The technique of "drill-down" and "roll-up," which can be applied to any visualization technique, is nothing more than navigation of linked data at multiple levels of detail, and sometimes across multiple view contexts. At what point is a specialized application needed more than a capability?
We may be overly conditioned to assume the application model when we think of software as having utility. This is changing rapidly on the web. (It was always thus on the Unix command line, yes?) Visualization ought to change with it. Leave the application building up to subject-matter experts within an application domain, not to software programmers. Ah, but wait, lest too much be read into a passing editorial remark. Obviously software programmers play a key role here. The tendency among programmers who attempt to answer that call is to build an "application building framework." Again, the assumption is that subject-matter experts always need an app to make use of viz. I wonder why visualization software shouldn't be a part of an operating system; a core capability for any application or purpose. There have been wonderful advances made through web browser extensions, but even here visualization is at best an afterthought applied to a mostly universal application. (I say "mostly" b/c there are no less than 3 different web browsers installed on my one operating system.) What happens when someone emails me some data in a flat file that I open in a text editor?

Instead, what is needed are document object models for visualization techniques and runtime software that can parse viz documents on the fly. The runtime is optimized for robust interaction and attribute manipulation of high-level visual artifacts, not application-specific tasks. This runtime can be invoked from a background process or "embedded" in (contained in, called from) an application runtime. Devices having different display and user interfaces can choose how to represent what are otherwise well-understood visual metaphors. Data can be more easily passed around and visualized simply by passing text documents describing interactive, dynamically updatable (or not) views. (This seems inherently more secure, too.)
Only this way is ambient visualization possible; something that is available everywhere on my computer, no matter what kind of computer/device/hardware platform I am using.
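One way to picture such a runtime: a view is just a small text document that any process can parse and render however the device allows. Here's a toy sketch in which a "column-chart" view document is rendered as plain text; the document format is invented for illustration.

```python
# A "view document" is plain data describing a chart, not an
# application. Any runtime -- GUI, terminal, phone -- can render it.
def render_column_chart(view):
    """Render a column-chart view document as rows of text bars."""
    scale = view.get("scale", 1)  # data units per '#' character
    rows = []
    for label, value in view["data"]:
        bars = "#" * int(value / scale)
        rows.append("%-10s %s" % (label, bars))
    return "\n".join(rows)

# Example document (shape and field names are invented for this sketch)
doc = {"type": "column-chart", "scale": 10,
       "data": [("Oceanfront", 40), ("Kempsville", 20)]}
```

A richer runtime would render the same document as an interactive chart; the point is that the view travels as a text document, independent of any one application.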

Monday, March 9, 2009

Carl Malamud for Public Printer of the United States

I endorse Carl Malamud for Public Printer of the United States
(Administrator of the US GPO)

Sunday, February 15, 2009

Enterprise Architectures As Data 2

I wrote about this topic last December. Now I want to drill down on a key element of our approach for turning enterprise architecture models into queryable sources. Furthermore, I should address why anyone would ever care about "turning enterprise architecture...blah blah blah."

There is a technology aspect and a human aspect. Both are necessary, but technology alone is never sufficient. Far more important to our success has been a strategy focused on answering key stakeholder questions.

My experience is mostly with Department of Defense Architecture Framework (DoDAF), so I will use examples from that domain. But to the best of my ability I will use language that is portable to other domains.

So, what are "key stakeholder questions?" To what does "key" refer, "stakeholder" or "questions?" The answer to the second question is: both. The answer to the first question is: depends on which stakeholder(s) we are talking about.

In the Army, and DoD in general, organizations are broken down at the general officer level from 1-9 as follows:
  1. HR
  2. Intel
  3. Operations
  4. Logistics/Sustainment
  5. Planning/Doctrine
  6. IT
  7. Training
  8. Finance
  9. Civil-Military Cooperation
These are the key stakeholder perspectives at the highest level.

An Operator wants to know, "How will architecture help me win the fight (i.e., get the work done)?" A Budget Analyst wants to know, "How does architecture help me make smarter spending decisions?" These are reasonable questions.

UPDATE 2009-09-28: THERE IS AN ERROR IN THIS GRAPHIC. THE G8's perspective in this case is "Force Development." He is asking the question, "Is the Force we are developing in line with my capability expectations?"

Indeed, no model exists except to answer questions.

But when presented with architecture in forms that speak only to the IT/engineering perspective, i.e., schematics, flow diagrams and the like, the multi-stakeholder conclusion tends to be, "Architecture doesn't help answer my questions." That's unfortunate because architecture, especially at the scale and complexity of federal government, can be effective in answering many useful questions for all perspectives. The trick is to:
  1. Understand what questions are being asked from each perspective
  2. Extend architectures from models that produce schematics into queryable sources that can produce schematics and also answer questions
  3. Extract the answers to those questions automatically to an open, text-based format
  4. Transform answers into views that are "fit for purpose," i.e., can be understood and used from many perspectives
Therefore, the question IT people should be asking is, "What can I do to make architecture better able to answer stakeholder questions?" To be sure, there are questions architectures can't answer. But it is more productive to focus on the questions that architecture can answer. Following are actual endpoints in use (though they don't point anywhere in this post):

  • GetCountOfPlatformsForArch
    Returns an Xml Node containing a NewDataSet for the distribution of platforms for the given architecture
    Answers the question:
    What are the platforms of this architecture and how many of each?

  • GetCountOfPlatformsForArchAndTOE
    Returns an Xml Node containing a NewDataSet for the distribution of platforms for the given architecture and TOE
    Answers the question:
    What are the platforms of this architecture and TOE and how many of each?

  • GetEquipmentForONN
    Returns an Xml Node describing the equipment used by the given ONN in the given architecture
    Answers the question:
    What equipment does this ONN use in this architecture?

Believe it or not, this type of view can be automatically produced from an enterprise architecture model.
    There is added value in this approach, too. From the technology side of the coin, opening up architecture models through relational databases and ubiquitous, text-based formats makes it much easier to relate architecture data with other data. An architecture can tell you "how many," but "how much" is probably in another database. In fact, there are about 85 software programs the Army knows about that are being used to track costs for systems. To be sure some will be eliminated. But many will stay and those that do must support data portability.

OK, so... why does any of this matter?

Let us start with the fact that government is an enormous compiler of data. It really doesn't matter whether we are talking about an enterprise architecture, a Line Item Number database, a "portfolio management" system, or even a good, old-fashioned spreadsheet. Analysts will never part with their spreadsheets. Wonks stuff data into tools.

Unfortunately, many useful tools that are good at capturing data are lousy at storing it and at saving it back out. Applications too often co-mingle their own data with the subject-matter data, making the two difficult to separate later. Many tools don't have export in mind. (All tools should have export in mind.) Owners of online enterprise databases typically offer their contents only in human-readable formats: PDF and HTML pages. Applications built for one perspective are lousy at supporting other perspectives. The list of interoperability challenges goes on.

A battalion commander is not typically nimble with UML. The same goes for a budget analyst or a resource director. In the world of enterprise architecture, the consequence is that a lot of people are taking a lot of time away from their primary tasks to manually shuffle data into the views their many customers demand. Time is money. My back-of-the-napkin estimate is that architecture shops are working twice as hard for half the output they could otherwise achieve if they offloaded the ETLV problem to IT and got back to building architectures. Of course, plenty will point out that architecture and reality are often different. That's a topic for another discussion. Like it or not, enterprise architecture plays a huge role in government organizations, especially DoD. This is just about getting a better return on that investment. It's an opportunity for both current cost savings and a future "force multiplier."

Finally, I would argue that the strategy of answering key stakeholder questions is useful to knowledge management in general, regardless of subject-matter domain or technology form factor.

Tuesday, January 27, 2009

"Response" to Previous

Update 12.28.09: I just had to record for posterity the flames from some folks who tagged my previous post on Reddit. (So much for polite, constructive discourse.) In truth, I would like to have this conversation with some of these folks, even the harshest ones. But I don't want to register for a service I don't use, and I don't want to get into flame wars over comment threads. I get the frustration. I've experienced it. I too have been, and remain, rather skeptical of the hype, though some level of hype gets things noticed and forces conversations that need to happen but aren't happening. Some of what I said was written and/or read poorly. For example, Mac/Office is about culture, not interoperability. I know that Google came before Web 2.0; that was expressed poorly. My company was also doing a lot of this stuff before we ever heard of Web 2.0. I can't speak for Tim O'Reilly, but I feel confident he realizes he didn't invent something. He made salient observations about the Internet and human behavior and how the two have worked, do work, and perhaps ought to work together. Many of these concepts go back decades to the roots of computer science and the Internet. I've been in workshops with Senior Technical Fellows, holding extremely well-qualified CVs, who said, "We proposed all this in the beginning." My response was, "Exactly. Isn't it past time we got back to that?"

And I know how radical ("insane") my idea sounds, but I am not saying government should fund Web 2.0. This is not about a bailout, as some seem to think. Some of these comments come from the position of having no idea how government runs. I'm saying government is actively spending money, money is being wasted, and I know from education and direct, relevant experience that several of the concepts articulated in the original essay can help taxpaying citizens spend more wisely.

Consider this from the Army:

"The Army spends, under 85 programs, approximately $6.7B annually on Information Technology (I.T.) without a method to converge these systems into a centralized infrastructure designed to improve robustness and dynamically deliver web services to hundreds of thousands of users while reducing risk"

And that is just one service within DoD, and DoD is just one (the biggest) government spender on IT. It's full of waste, and the reason isn't that Web 2.0 is magic Koolaid or that we are already on to so-called Web 3.0. It is because the government still looks at the Web the way it was in the '90s, i.e., with a 1.0 mindset.

More concrete thoughts later on how exactly government might use Twitter, for example. Think SMS. Twitter is a metaphor. The folks who run Twitter just might be the Pros from Dover who can help save government from itself. Think FEMA and USAID...

Here's the juicy stuff:

    Haha, this nutbag thinks that the government should fund web 2.0. And he's serious.

    submitted 4 days ago by candlejac

    Kitchenfire 3 points 4 days ago[-]

    ""The Bigs," i.e., large-cap companies that provide most of the contracting labor, are not at all oriented to innovate in the Web 2.0 technology space. You don't see Macs anywhere. You do see MS Office everywhere. "

    I wonder if this guy knows that Macs can run MS Office. Or that he's insane.

    turkourjurbs 3 points 4 days ago[-]

    "Google does a good job selling into government with its enterprise appliance model, and with more than just search. But, of course, Google is a massive company."

    SIGH!!! Google was aroung long before Tim O'Reilly decided he needed more money, and made up a completely inaccurate and bullshit term to describe something we already have. Even if he didn't decide to look like a technological ignoramus, we'd still have Twitter, Facebook, etc. without calling them something that makes absolutely no sense at all.

    Please, point out which "Web 2.0 Server" I should be using and which "Web 2.0" browsers will work with them. Every underlying technology that's considered "web 2.0" is the same technology that's been behind the web since before there was Web 2.0 We have the web. We have web sites. There is nothing more to it.

    Maybe if we somehow get rid of the head nutbag (O'Reilly), the rest of the web 2.0 delusionists like this one will go with him.

    skymt0 2 points 4 days ago[-]

    Please, point out which "Web 2.0 Server" I should be using

    That would be lighttpd, according to their home page.

    cochico 1 point 4 days ago[-]

    pffft! We're already working on Web 3.0

    funkah 1 point 4 days ago[-]

    Crazy, sure, but I'm not exactly loving the idea of giving tons of money to banks who lost hundreds of billions because of shitty risk analysis, either.

    candlejac 1 point 4 days ago[-]

    At least their risk analysis is better than Twitter's.

    funkah 2 points 4 days ago[-]

    I don't even know what that means.

    candlejac 1 point 4 days ago* [-]

    Twitter is a company with no business plan. They seriously are closest to the underpants gnomes

    1. Build website for 140 character microblog posts
    2. Pay carriers to send txts with updates to subscribers, while not charging for this service
    3. Pay to receive texts on a shortcode
    4. ???
    5. Profit!

    grilled_ch33z 1 point 4 days ago* [-]

    I'd argue that it's more like:

    • Build website for 140 character microblog posts
    • Pay carriers to send txts with updates to subscribers, while not charging for this service
    • Pay to receive texts on a shortcode
    • ???
    • ???

    edit: how do you do numbered lists?

    candlejac 2 points 4 days ago[-]

    numbers and periods, but you must not know much about the underpants gnomes

    Ac3 2 points 4 days ago[-]

    Well he clearly will not profit.