
The Danger of Stone-Age Habits in a Cyber World

One of the critical differences between “the cyber era” and the pre-cyber world, call it “the kinetic era,” is how much security distance offered in the kinetic world. In a kinetic world, a robber could only rob a bank by physically being at the bank. Similarly, had the German and Japanese navies attempted to take down the U.S. power grid in WWII, they would have had to get close enough to hit it with carrier-launched aircraft or cannon fire, which was never a realistic possibility when facing the U.S. Navy.

In contrast, in the cyber era, both the U.S. power grid and every bank can be targeted from ANYWHERE in the world. In the kinetic world, an attempt to steal money from banks or to take down the grid would have left clear evidence in the form of broken walls, bomb craters or other signs that somebody had attempted a kinetic attack. These indicators were so clear that an unsuccessful attack was unlikely to be repeated, because the defenders would be alert to the technique.

Similarly, the tools required to execute these attacks, whether aircraft carriers, submarines and bombers, or diggers and blowtorches for bank robberies, were very hard to build, and equally hard to replace if consumed or abandoned, much less captured by defenders, in the attempt. As a result, at the peak of the kinetic era the U.S. Air Force was the largest air force in the world and the second largest belonged to the U.S. Navy. Even nuclear weapons were unlikely to be deployed without making it fairly clear where they originated.

Because the kinetic era was so limited in terms of available tools, skills and proximity, the number of defenses required, whether mechanical, legal, police or military, was relatively small, even if still a multiple of what attackers needed. Defensive measures such as fences were simple to inspect and maintain while having a long lifetime. After a failed attack, an attacker needed substantial resources to rebuild the capabilities consumed in the attempt, while the defender was much better placed to deploy defensive resources as needed. As a result, specialized defensive forces and tools were able to defend relatively limited attack surfaces with good efficacy.

The cyber era turns this upside down: Offensive tools are easy to acquire and easily replaced, and many people can acquire the skills to build and use them. As an example, the first known attack to cause damage to the U.S. power grid appears to have been an amateur exploiting an unpatched firewall. Attacks leave a signature that may be missed completely or that blends in with thousands of other attempts, and the odds of identifying every attacker are slim to none. And anywhere with a decent digital network can be used to launch an attack.

The damage that can be done can be multiplied as easily as an attacker desires. If one bank, or one power generator, does not succumb to an attack, the cost of trying another is small, and many attacks can be launched simultaneously. Similarly, the benefits of an attack are portable in ways that cash never was: digital cash travels at the speed of light and weighs nothing, while the risk of getting caught, much less identified, is slim.

Cyber technology is also born with vulnerabilities of a kind never seen in fences, walls or military equipment. It is probably fair to say that no piece of cyber technology has ever been created without vulnerabilities unknown to both the manufacturer and the user. Add to that the deliberate vulnerabilities that aid the manufacturer or save it money, such as fixed passwords or backdoors, and the deliberate vulnerabilities inserted in the supply chain. As vulnerabilities are discovered, they need to be patched, which requires both that the manufacturer continue to support the equipment and that the user have a process for updating it.

Although we are now well into the cyber era, we still think of the world in kinetic terms. As with buildings or lawnmowers, we treat security the way we treat functionality: a device that isn’t obviously broken must be OK. Something may need to be painted, but it will keep running and functioning as expected, so painting can be deferred until everything needs repainting. Kinetic machines are interdependent in various ways, but those ways are mostly simple to understand, while cyber devices have myriad impenetrable interdependencies, as well as vulnerabilities, that can be exploited in a sequential and escalatory manner.

Not only is the government limited in scale, but its efforts are also self-defeating: every time the government reveals a capability, whether by taking down an attacker’s infrastructure or declaring awareness of an attacker’s efforts, the attackers learn that their security has been compromised – and then improve it. This leads to a cycle of reactions that improves the performance of attackers while making defensive efforts harder and harder. Some degree of capability revelation is unavoidable, but the more a defender reveals, and the less sophisticated the breaches causing those revelations, the harder it is for defenders to maintain an advantage. This is particularly relevant when multiple governments or defenders pursue the same hackers and those hackers are either criminals suborned by nation-states or nation-state hackers moonlighting with government tools.

The fragility of the cyber era requires new ways of managing security. Where critical infrastructure could rely on the government, in the form of legal or military powers, to protect it in the kinetic age, there is no way the government can protect all the vulnerabilities of every critical network today. The government simply is not big enough to police and defend them. As an aside, you might wonder why the government oversees food processing plants, when this service seems to serve no purpose other than letting the people who own and profit from these plants avoid the responsibility of managing them in a healthy manner. Ditto for Wall Street oversight and drug manufacturing.

In the U.S. we have a long tradition of granting corporations rights, or personhood, along with the responsibility to pay some financial penalties. This has allowed corporate officers to use the corporation as a proxy: profits flow in while almost all personal responsibility for deleterious outcomes is deflected away from management. The classic example is the 2008 financial crisis, which caused significant damage to large numbers of people, yet almost no corporate manager bore any personal responsibility for the actions that caused it.

Among the many ways of guaranteeing responsible behavior, we have frequently relied on compliance with some set of rules. For example, the United States Department of Agriculture regulates food safety and the United States Environmental Protection Agency regulates emissions from cars and factories. In both cases they do this by issuing standards and then monitoring compliance with those regulations. Both food safety and pollution are fields that change slowly, although there are always some actors who trigger new regulations, for example by feeding cows’ brains and spinal cords to cows and causing an epidemic of prion diseases among both cows and humans. Similarly, some car manufacturers coded their diesel engine control systems to detect when their pollution compliance was being tested, and otherwise polluted far more than allowed for several years.

Both pollution and food safety are classic examples of the kinetic age in the sense that they apply to a limited number of instances where individual non-compliance has a limited, if still potentially large, impact. In contrast, a single bank, hotel chain or credit monitoring company that relies on cyber technology can affect millions of people in ways that were far less likely in the kinetic age. In addition, the kinds of oversight failures or mismanagement that lead to failures in food safety or pollution are as unsophisticated as the cyber breaches we see happening today. While 0-day attacks do happen, just as there probably are pollution and food safety events that defied prediction, they account for a very small percentage of events. Most cybersecurity events could have been predicted and prevented with adequate security management processes. As Verizon put it so eloquently: “The majority of breaches are discovered by a third party after about three months and could easily have been prevented.”

The technical simplicity of the breaches that affect most people, and the scale at which people are affected, requires us to change the way we regulate in the cyber era. While it is understandable if organizations fail to prevent a 0-day attack, there should be no excuse for a profitable corporation or public entity failing to prevent technically unsophisticated breaches. The best way to achieve this at scale, while still adapting to the rapid pace of change in the cyber domain, is to hold management personally responsible for implementing adequate security and safety processes.

To date, the penalties for failing to prevent cyber breaches have been paltry. For example, the July settlement between Equifax and the Federal Trade Commission promised as much as $425 million to be shared among the 147 million people known to have been affected by the breach. The announcement of the settlement led to a rise in Equifax’s stock price, which has grown slightly since the day the discovery of the breach was announced. Breaches may lead to reduced growth, but there is no indication that individuals are held responsible for allowing them. Indeed, the way we talk about breaches always suggests that the problem is external “hackers,” no matter how simple the tools they use.

From a technical perspective, the Equifax breach involved a vulnerability, publicly identified in early March 2017, that hackers exploited from May 13 until it was discovered on July 30, 2017 – about 77 days. Ironically, the vulnerability, and the way to fix it, had been publicly known for about as long as it took Equifax engineers to find the breach.

If instead fines for third-party data breaches, such as the Equifax breach, were calculated by a fixed formula tied to the scale and duration of the breach, rather than negotiated between the company and the government, the incentives could change. If that fixed formula were $1 per victim per day, from the beginning of the breach until it was stopped and the public informed, Equifax would have paid about 26 times as much money. Add to this a clause holding senior management personally responsible for breaches – for example, a fine amounting to 1 percent of their pay for each million victims per day of a breach – and they would have a significant stake in establishing adequate security control processes, even if paid more than $20 million per year.
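The arithmetic behind those numbers is easy to verify. The Python sketch below uses only figures already cited in this article: the May 13 to July 30 exploitation window, 147 million victims, the $425 million settlement and a $20 million annual salary. The penalty rates are this article’s illustrative proposals, not existing law or the FTC’s method.

```python
from datetime import date

# Back-of-the-envelope check of the breach duration and the proposed
# fixed-formula fine, using the Equifax figures cited above.

exploit_start = date(2017, 5, 13)   # exploitation began
discovered = date(2017, 7, 30)      # breach discovered
breach_days = (discovered - exploit_start).days
print(breach_days)                  # 78 -- the "about 77 days" above

victims = 147_000_000               # people known to be affected
settlement = 425_000_000            # FTC settlement ceiling, in dollars

formula_fine = victims * breach_days         # $1 per victim per day
print(f"${formula_fine:,}")                  # $11,466,000,000
print(f"{formula_fine / settlement:.0f}x")   # ~27x, the "about 26 times" above

# Hypothetical personal penalty: 1% of pay per million victims per day.
annual_pay = 20_000_000             # the article's example salary
personal_fine = annual_pay * 0.01 * (victims / 1_000_000) * breach_days
print(f"${personal_fine:,.0f}")     # $2,293,200,000 -- many times annual pay
```

How do we move forward?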

The way to change our mindset is to accept that all cyber technology has vulnerabilities – to consider the default posture as insecure – and then to work tirelessly to build and manage cyber technologies in ways that compensate for these known and unknown vulnerabilities. This costs money in the short term, but the potential damage from failing to take the vulnerabilities seriously will quickly grow to be as serious as global warming or the end of effective antibiotics. As an example, the Internet of Things (IoT) is full of devices with known and unknown vulnerabilities. One way to limit the vulnerabilities is to require manufacturers to deploy their devices online and fund government-mandated rewards for hackers who find and disclose vulnerabilities in them. If this were a precondition for selling the devices, manufacturers would have an incentive to make secure devices. Only once a device reaches a state where hackers can no longer find vulnerabilities would it be fit to be sold.

Mankind has a long history of ignoring obvious downsides to its actions. When inventors built the first steam engines, they did not consider the impact of burning coal to make steam. Similarly, we use fertilizers without considering the runoff that damages streams, rivers and oceans, and we overuse antibiotics in the hope that the consequences will not arrive while we are alive. These failures to consider the downsides compound with scale. The accumulation of fossil fuel combustion now has a measurable impact on the planet, algae blooms combine with warming oceans to damage marine ecosystems, and our invention of new antibiotics lags seriously behind the ability of bacteria to survive them.

The damage caused by poorly managed cyber technologies scales as quickly as epidemics such as the 1918 influenza pandemic and the Black Death of the Middle Ages, which killed about 25 million people. In the past, people had little knowledge of how the flu or the plague spread, but we do understand how cyber vulnerabilities spread. If we fail to manage this risk, we will likely see injuries and damages commensurate with the scale of cyber adoption. We are already deeply dependent on cyber technology, and the effort to ensure that technical vulnerabilities do not lead to human injuries must begin while the technology, and the culture that manages it, is still malleable.

Hans Holmer
Hans Holmer is a retired CIA Officer. He learned to program computers in 1973, did some FORTRAN in college and was a System Administrator for the US Army from 1983-86. After that he joined the CIA and began working on the interface between humans and technology in the mid-1990s. He received the CIA Intelligence Star for a technology-related operation before the turn of the century and continued to be a thought-leader and pioneer in the domain until he retired in 2012. He is on the board of Bravatek and now lives in Vienna, Austria.
