The text below is a chapter I wrote for Threats in the Age of Obama, recently published by Nimble Books and available on Amazon. The book is divided into two parts: one set of chapters on future threats, and another on ideas for dealing with them. My chapter, in the latter section, focuses on information technology solutions.
What is the perfect information technology solution to coming national security threats?
There isn’t one solution to multiple threats. Rather than searching for a single solution, our national security community should adapt its IT procurement strategy to develop many solutions, each addressing a specific threat at the lowest possible cost.
The existing strategy is as follows: after being caught off guard by an unforeseen crisis–a terrorist attack, an outbreak of violence, a surprise nuclear test–we reflect on our failure and identify a single cause. Maybe we didn’t have enough information. Maybe we had too much information and couldn’t sort through it all. Or maybe we had the right information but we didn’t collaborate.
After pinpointing the cause we spend years–and tens of millions of dollars–trying to develop a handful of Perfect Software Tools to remedy the deficiency. Much of that time and money is spent on procurement bureaucracy: the first line of code is written after months of identifying requirements, issuing RFPs, waiting for bids, and awarding contracts.
This is an outdated strategy for two reasons. First, unforeseen crises are rarely the fault of a single deficiency. Fixing one problem while ignoring others is the equivalent of inviting strategic surprise. (UPDATE: To clarify, solutions based on single failures will only solve identical failures; they won’t solve different scenarios. The job of intelligence is foresight, not reaction.) Second, the economics of innovation dictate that creating a few tools will almost never solve our problem. After all, the vast majority of innovations fail. For every Google and Wikipedia there are hundreds of failed search engines and online communities. High-dollar attempts at the Perfect Tool put all of our eggs into a basket that will almost certainly fail us.
If failure is so likely, how do we ever build useful tools? By building more of them. And in order to do that, the price of each attempt must fall drastically. Instead of spending $10 million on one tool, spend it on a thousand. The most valuable lesson for government IT decision makers is that real innovation requires experimenting with many different options.
A much lower cost per product is not as unrealistic as it may seem. Modern software need not be expensive; in fact, it is getting cheaper and cheaper to create increasingly advanced systems. The Web has made this possible because it is a free market for innovation, defined by a few qualities:
- Only good innovations succeed. Users are the only people who decide if something is valuable, so bad products are not rewarded.
- Innovators (many of whom are individuals, not companies) are motivated by passion; after all, they had the idea in the first place. This usually means their work is better.
- Everyone with Internet access can learn how to program and can, within days, write a useful application and make it available to the whole world. No special permission is required.
Compare these qualities to government information technology:
- We buy software before our users can decide if it is valuable and user-friendly, meaning there’s less incentive for developers to make a high-quality product. It also means there’s a high risk that we’ll waste our money.
- The programmers who develop our tools are under contract, and their responsibility is simply to fulfill their contract requirements. They often have little contact with the intended users, which keeps them from understanding users’ needs and preferences.
- It is nearly impossible for a new developer to enter the government contracting market, leaving us with a much smaller pool of software developers to work with.
That’s not to say that our systems haven’t come a long way in the last few years. We now have wikis, blogs, link sharing tools, and all the other basics associated with the “Web 2.0” brand. But we still rely on an outdated, inefficient system for procuring our software. Our systems have recently begun looking like the Web. Now it’s time to start innovating like the Web, too. With recession-induced budget cuts on the way, it is the perfect time to justify a more cost-effective strategy. Here are three ways we can start.
Let analysts solve their own problems. Problems are best understood by those who experience them first hand. Many of the Web’s best tools were not intended to be products; rather, their creators built them simply to solve their own problems. Basecamp, the Web’s most popular project management tool, was created by a small Web development firm to help its remote staff collaborate. It proved so useful that the firm, 37signals, quit creating Web sites and made the Basecamp suite its sole product. Based on the Web model, intelligence officers would ideally write their own software. But that’s a tall order: few officers know how to write software, and fewer still have the time to devote to building it.
The solution is to hire Web programmers and embed them in analysis cells. By working as analysts, they’ll understand our IT needs better than any outside contractor ever could. Give these developer-analysts permission to write and deploy their own code, so that they may test various tools with their colleagues. Those colleagues would provide frequent, unfiltered feedback, a vital aspect of software development that is absent from the government procurement process.
Give independent Web developers a shot. Developer-analysts wouldn’t be the only people with good ideas for new software. Small companies and independent developers have created thousands of Web-based productivity tools that would instantly help intelligence officers do their jobs better: To-do lists, journals, calendars, Gantt chart generators, people directories, mapping tools, timeline builders, concept mappers, and more.
But right now, most small companies and individuals can’t compete for government contracts. Their products are too simple to justify the cost of the bidding process–not just for the developers, but probably for the government as well. Why spend several months acquiring something that takes only a few days to build? As a result, government networks don’t benefit from the tools that make the Web so powerful.
We need to create opportunities for outside developers to get involved. For inspiration, look to Vivek Kundra, the Chief Technology Officer of the Washington, DC government. In an October 2008 contest called Apps for Democracy he procured 40 Web applications from dozens of solo developers and small firms. And he did it in 30 days on a $50,000 budget. He avoided the standard procurement route, which would have cost over $2 million and taken more than a year. Instead, he sponsored a contest that awarded small cash prizes to the best entries. Among the best tools were a carpool coordinator, a bike route mapper, and a neighborhood data visualizer.
The DC government probably never would have thought to request these tools, let alone issue RFPs for them. The entrants–DC residents and everyday programmers–had their own good ideas. Given the chance to realize them, they did.
The national security community should do the same thing. Thousands of Americans would respond with worthwhile contributions. Intelligence officers could team up with developers to help them better understand our needs. Officers with programming experience might even submit their own applications–a digital version of the DNI’s Galileo Awards. In the end, we would have hundreds of tools at a very small cost. If the contest yielded just one useful application, the return on investment would be no less than that of the average mega-contract.
Open our eyes to Open Source Software. Even better than cheap software is free software. Much of the Web’s best software is “open source,” meaning anyone can download, inspect, and modify it at no cost. Most government networks are already running some open source products. Apache was created by volunteers and now powers over half of all Web servers, including Intelink. MediaWiki–the software that runs Wikipedia and Intellipedia–is also an open source project.
Open source software is useful because it is infinitely customizable. Anyone may modify an open source package to fit their precise needs. The government, on the other hand, too often reinvents the wheel: we pay exorbitant prices for proprietary tools when a few modifications to existing open source products would suffice. The FBI wasted over $200 million on SAIC’s ultimately scrapped Virtual Case File, a content management system that would have done much the same thing as WordPress, the popular Weblogging software. In an attempt to fix the terrorist watch list, Lockheed Martin has spent $500 million on a database that cannot search for names. Anyone who has built a Web site with the open source MySQL database knows what a shameful waste that is.
The Web is full of successful open source projects that would immediately prove useful to national security officers. Why aren’t they more abundant on government networks?
Many believe that open source software poses a security risk because anyone may view and contribute to the code. This transparency actually makes the code more secure: for malicious code to make its way into an application, it would have to be approved by the project managers and evade the watchful eyes of hundreds of honest contributors. Should a user have any concerns about portions of the code, he may always remove them from his own copy. Proprietary code, on the other hand, can never be modified by the user.
In the long term, open source software is not completely without cost. It takes time and money to maintain. But the initial investment of simply trying it is minimal. While custom solutions can take years to complete, open source packages literally take minutes to deploy. Rather than treating them as inferior alternatives, we should make them the first place we look to fill our software needs.
The defense industry used to set the standard for information technology. The military was using the Internet and e-mail decades before the general public first heard of them. Today technology transfer works in reverse. New information tools are created by and for the public; bureaucracies catch on years later.
What can we learn from this? Should we try to get our edge back? No. It’s not a bad thing that outsiders have gotten so good at software development. The only problem is our own refusal to adopt their superior innovation model, choosing instead to stick with our bureaucratic process. The Web’s lesson for us is twofold:
- Good software tools require lots of experimentation in order to find the sweet spot with users. Millions of dollars and months of planning for perfection are no replacement for simple trial and error.
- Such experimentation has to be fast and low-cost. It cannot be weighed down by a bureaucratic contracting process.
Becoming comfortable with experimentation is the best thing we can do to prepare for any threat, and here’s why: In order to be truly prepared, the available solutions must outnumber the problems. Otherwise, we’ll have sunk our resources into a handful of tools that address some threats and ignore the rest. And that is not preparation. It is gambling.