From Black and Green Review no 4.
The launch of Smarter Planet was not merely the announcement of a new strategy, but an assertion of a new world view.
Amid the global economic crisis of 2008, IBM began a conversation with the world about the promise of a smarter planet and a new strategic agenda for progress and growth.
As the internet grew, so did technology-driven enterprise needs and a truly global workforce. Computational power was being infused into things no one had thought of as computers: phones, cars, roads, power lines, waterways and food crates. A trillion connected and intelligent things were becoming a system of systems — an “internet of things” — and producing oceans of raw data.
This system of systems, which we find littering Earth’s orbit and its environment, down to our immediate surroundings and bodies, is more than just the marketing campaign of a large multinational tech corporation. It certainly has roots deeper than the 2008 global economic crisis, but as it stands today, it is a system that the majority of civilized humanity has come to depend upon for existence.
Much of the promotional material for this new world view centers on the concept of interconnectedness: the promise that humans on disparate ends of the unifying system will now be connected, and remain connected, in ways not possible without the development of this so-called smarter planet. Of course, the connection is physically carried out by various technological apparatus, not by humans, and there is a definite qualitative difference between these types of connection. The human connection is reduced to symbolic representations that the technological apparatus can disseminate. As such, those representations of physical existence become amplified, literally and figuratively, in their importance to a civilization that depends on them. The message of IBM’s “new world view” is clear enough: if you can’t tread in the “oceans of raw data,” then be prepared to drown.
Those of us unwilling to have life defined by the ability to navigate these digital oceans have other ideas, though. On either side of the stormy seas of technology lies an existence not engulfed by our ephemeral technological aspirations. There are various actors working to bring about such an existence, and not all of them are human. In fact, a formidable opponent of the lifeblood of these systems, electricity, is the greatest benefactor of all life on Earth: the sun.
Geomagnetic Storms and the Carrington Event
Geomagnetic storms are disturbances in Earth’s magnetic field that often occur when a coronal mass ejection (CME) or a persistent, high-speed solar wind stream sweeps past Earth, leaving the magnetic field unsettled for an extended period of time. A CME is described as “a giant cloud of solar plasma drenched with magnetic field lines that are blown away from the Sun during strong, long-duration solar flares and filament eruptions.” The effects of such storms on systems dependent on a steady and stable flow of electrons are not hard to imagine. One such event in modern times was the Carrington Event of 1859.
On September 2, 1859, one of the largest geomagnetic storms on record hit the Earth’s atmosphere and wreaked havoc on the communication technology of the day: the cutting-edge telegraph system. The system “shorted out in the United States and Europe, igniting widespread fires.” Telegraph operators experienced electric shocks, telegraph pylons sparked, and the aurora generated by the storm was seen at locations near the equator, which is extremely rare. This geomagnetic storm became known as the Carrington Event, after the solar flare observed and recorded by the English amateur astronomer Richard C. Carrington just before noon on the previous day.
Earth’s magnetic field can protect the surface of the planet from smaller solar storms (they are categorized by strength, much like hurricanes), but the storm of 1859 overwhelmed Earth’s magnetic field and penetrated to the surface of the planet. In 1989, a similar solar storm caused a nine-hour power outage across Quebec, Canada. The 1859 Carrington Event was reported to be three times as strong as the 1989 storm that wiped out power in Quebec. If a storm as strong as the Carrington Event were to happen today, it could be a crippling blow to communications systems worldwide, one that might take years to recover from and, according to the U.S. Homeland Security Committee, “could pose the risk of the largest natural disaster that could affect the United States.”
In 2012, another strong solar storm was observed, though this time it erupted from a section of the Sun that wasn’t pointed directly at the Earth. Our technology was spared the onslaught. “If it had hit, we would still be picking up the pieces,” reported Daniel Baker of the University of Colorado back in 2014.
The solar storm of 2012 wasn’t widely covered in the press and it actually took two more near misses before these events started reaching a wider audience. In March 2014, Michael Snyder published his article, “After Several Near Misses, Experts Warn the Next Carrington Event Will Plunge Us Back Into The Dark Ages,” where he wrote:
Most people have absolutely no idea that the Earth barely missed being fried by a massive [Electromagnetic Pulse] burst from the sun in 2012, in 2013 and just last month. If any of those storms would have directly hit us, the result would have been catastrophic. Electrical transformers would have burst into flames, power grids would have gone down and much of our technology would have been fried. In essence, life as we know it would have ceased to exist – at least for a time. These kinds of solar storms have hit the Earth many times before, and experts tell us that it is inevitable that it will happen again.
Previously in December of 2012, Snyder had warned of the devastation that geomagnetic storms could unleash on our smarter planet:
A single gigantic electromagnetic pulse over the central United States could potentially fry most of the electronics from coast to coast if it was powerful enough. This could occur in a couple of different ways. If a powerful nuclear weapon was exploded at a high enough altitude, it could produce an electromagnetic pulse powerful enough to knock out electronics all over the country. Alternatively, a massive solar storm could potentially cause a similar phenomenon to happen just about anywhere on the planet without much warning. Of course not all EMP events are created equal. An electromagnetic pulse can range from a minor inconvenience to a civilization-killing event. It just depends on how powerful it is. But in the worst case scenario, we could be facing a situation where our electrical grids have been fried, there is no heat for our homes, our computers don’t work, the Internet does not work, our cell phones do not work, there are no more banking records, nobody can use credit cards anymore, hospitals are unable to function, nobody can pump gas, and supermarkets cannot operate because there is no power and no refrigeration. Basically, we would witness the complete and total collapse of the economy. According to a government commission that looked into these things, approximately two-thirds of the U.S. population would die from starvation, disease and societal chaos within one year of a massive EMP attack. It would be a disaster unlike anything we have ever seen before in U.S. history.
Clearly, the administrators of the technological era have taken notice of the potential for destruction that these geomagnetic storms have and they’ve responded.
The UK government announced plans to fund a new space-weather forecasting service in 2013, intended to serve as a backup to the US-based Space Weather Prediction Center, a laboratory and service center of the US National Weather Service, itself part of the larger National Oceanic and Atmospheric Administration (NOAA). And in September 2016, NOAA announced that it would start releasing more accurate forecasts of electromagnetic storms based on a more sophisticated prediction model, informed in part by DSCOVR, a NOAA satellite positioned 1.6 million kilometers from the surface of Earth. The new NOAA model incorporates three models that together describe the path from the solar atmosphere through interplanetary space into Earth’s magnetic realm: one covers the Earth’s entire magnetosphere, another the inner magnetosphere, and a third the electrical activity in the upper atmosphere.
As it has been throughout the history of technological development, the problems of one form of technological innovation have to be ameliorated through the development of more technology. The vulnerability of techno-addicted life is exposed by its dependence on a stable and steady flow of electricity. If that flow is affected by events beyond the control of current technology, then the rush is on to create the technology to control those events. Or at least, the rush is on to create yet another ocean of raw data about those events. With the raw data, technocrats can craft a plan of action to minimize the damage to the technological systems of our creation.
So, what about that aforementioned “system of systems” comprised of a trillion interconnected machines? Perhaps it won’t take an external event, such as a geomagnetic storm, to bring the system to a grinding halt. Perhaps it can come from an internal manipulation of the system itself.
The Internet of Vulnerable Things
The Internet of Things (IoT) is a system of components that traditionally lacked computational power or a means to communicate with each other. The IoT is the result of a push to build computational power into technological devices and enable them to communicate with users, and each other, through the internet. There are many “smart devices” out there filling up that ocean of raw data, but being smart and connected doesn’t make the IoT safe from being deceived and manipulated. In fact, being smart and connected might just be IoT’s biggest vulnerability.
On September 20th, 2016, a record-setting distributed denial-of-service (DDoS) attack was launched against KrebsOnSecurity.com. Engineers at Akamai, the content delivery network that protects the Krebs site from digital attacks, reported that the attack was nearly twice as large as any they had seen and was among the biggest assaults the Internet has ever witnessed. What made the attack so large was its bandwidth: it reached 620 Gbps (gigabits per second). The interesting thing here is the way it was carried out:
There are some indications that this attack was launched with the help of a botnet that has enslaved a large number of hacked so-called “Internet of Things,” (IoT) devices — routers, IP cameras and digital video recorders (DVRs) that are exposed to the Internet and protected with weak or hard-coded passwords.
Each botnet spreads to new hosts by scanning for vulnerable devices in order to install the malware. Two primary models for scanning exist. The first instructs bots to port scan for telnet servers and attempts to brute force the username and password to gain access to the device. The other model, which is becoming increasingly common, uses external scanners to find and harvest new bots, in some cases scanning from the [botnet control] servers themselves. The latter model adds a wide variety of infection methods, including brute forcing login credentials on SSH servers and exploiting known security weaknesses in other services.
In this way, one doesn’t need to personally own the hardware, nor a pipe to the internet big enough to flood the target. This method amounts to a “zombie army” of IoT devices that are manipulated into blasting data back into the system. The result is a targeted attack from, as IBM states, “a trillion connected and intelligent things.”
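The brute-force stage works only because so many devices ship with factory logins that no one ever changes. As a rough, purely defensive illustration, an administrator could audit a device’s credentials against a sample of the default username/password pairs reported to be in the leaked Mirai source code. The pairs below are a small, illustrative subset drawn from public reporting on that list, not the complete list:

```python
# Hedged sketch: check a device login against a sample of the factory-default
# credentials the leaked Mirai source code is reported to brute-force.
# These pairs are an illustrative subset, not the full leaked list.
MIRAI_DEFAULTS = {
    ("root", "xc3511"),
    ("root", "vizxv"),
    ("admin", "admin"),
    ("root", "admin"),
    ("support", "support"),
}

def is_mirai_target(username: str, password: str) -> bool:
    """Return True if the credential pair matches a sampled Mirai default."""
    return (username, password) in MIRAI_DEFAULTS

print(is_mirai_target("admin", "admin"))     # factory default: True
print(is_mirai_target("admin", "h8#kQ2!x"))  # changed password: False
```

Changing a default password removes a device from this particular attack surface, though nothing stops a botnet author from simply extending the list.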
On October 1st, 2016, the following was published on the KrebsOnSecurity blog:
The source code that powers the “Internet of Things” (IoT) botnet responsible for launching the historically large distributed denial-of-service (DDoS) attack against KrebsOnSecurity last month has been publicly released, virtually guaranteeing that the Internet will soon be flooded with attacks from many new botnets powered by insecure routers, IP cameras, digital video recorders and other easily hackable devices.
The leak of the source code was announced Friday on the English-language hacking community Hackforums. The malware, dubbed “Mirai,” spreads to vulnerable devices by continuously scanning the Internet for IoT systems protected by factory default or hard-coded usernames and passwords.
Vulnerable devices are then seeded with malicious software that turns them into “bots,” forcing them to report to a central control server that can be used as a staging ground for launching powerful DDoS attacks designed to knock Web sites offline.
But even before this announcement was made on Krebs’s site, the Mirai botnet was implicated in another attack that more than doubled the size of the Krebs attack.
On September 23, 2016, OVH, the France-based hosting provider, was blasted by the same Mirai botnet that hit Krebs, this time to the tune of 1.5 Tbps (terabits per second), as tweeted by the founder of OVH, Octave Klaba. His tweet reported, “This botnet with 145607 cameras/dvr (1-30Mbps per IP) is able to send >1.5Tbps DDoS.” To quote Dave Larson, chief technology officer at security firm Corero: “The tools and devices used to execute the attacks are readily available to just about anyone; combining this with almost complete anonymity creates a recipe to break the internet.”
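Klaba’s figures are easy to sanity-check. A back-of-the-envelope sketch, using only the numbers from his tweet (145,607 devices, each pushing 1–30 Mbps):

```python
# Rough arithmetic behind the OVH attack figures from Klaba's tweet:
# 145,607 hijacked cameras/DVRs, each able to send 1-30 Mbps.
DEVICES = 145_607

def aggregate_tbps(devices: int, mbps_each: float) -> float:
    """Total botnet traffic in terabits per second (1 Tbps = 1,000,000 Mbps)."""
    return devices * mbps_each / 1_000_000

low = aggregate_tbps(DEVICES, 1)    # floor: every device at 1 Mbps
high = aggregate_tbps(DEVICES, 30)  # ceiling: every device at 30 Mbps
print(f"{low:.2f} to {high:.2f} Tbps")  # prints "0.15 to 4.37 Tbps"
```

The reported >1.5 Tbps sits comfortably inside that range, and no single device contributes more than a home connection’s worth of traffic, which is exactly what makes such floods hard to filter at the source.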
Within a month of OVH being targeted, a hacking collective calling itself New World Hackers claimed to be behind a large-scale attack, built on the Mirai botnet code, that targeted Dyn, a Domain Name System (DNS) provider. The attack occurred on October 21, 2016, and came in three waves, roughly at 7am, noon and 6pm. Dave Allen, general counsel at Dyn, reported that “tens of millions of internet addresses” were being used to blast traffic at the company’s servers. Dyn provides internet infrastructure services to large, popular sites including Amazon.com, BBC, Fox News, PayPal, Reddit, Starbucks, Twitter, Visa, Wired and Yelp. Clearly the attacks were coordinated and their target strategically chosen. The IoT devices strung together in such a botnet are at a disadvantage when it comes to security because they are unable to run the standard security software intended for commercial operating systems. IoT devices are certainly aiding the rise of DDoS attacks, as evidenced by a report from Verisign showing a 75% year-over-year increase in the frequency of such attacks.
Devices built to be controlled over the internet are beginning to proliferate. Business Insider reported, back in 2014, that by 2019 “the Internet of Things” would be the largest device market in the world: “…more than double the size of the smartphone, PC, tablet, connected car and the wearable market combined.” Clearly, many users of these products leave the default security settings in place, meaning the units can be easily manipulated. Many devices are put into service without much thought to their configuration; plug’n’play for the user also makes them plug’n’play for the hacker. While the Mirai botnet targeted consumer equipment and non-infrastructure websites and systems, the ramifications of a spreading IoT change when industrial and business devices enter the picture. The National Security Agency’s Tailored Access Operations chief, Rob Joyce, recently said a large-scale attack on more critical infrastructure “…is something that keeps me up at night.”
Sweet Dreams of the NSA
Rob Joyce leads a team purported to be the best-resourced group of hackers in the world. They infiltrate computer networks to gather foreign intelligence, and they also probe U.S. government networks to improve their security. In a presentation at the January 2016 Enigma conference in San Francisco, Joyce said he considered the IoT a major boon when his team needs to attack a target. He singled out heating and cooling systems as examples of internet-connected devices that offer national-level hackers a route into organizations, one that computer network administrators often overlook.
The above quote, in which Joyce revealed what keeps him up at night, was specifically about SCADA security. SCADA stands for “Supervisory Control and Data Acquisition,” a system for remote monitoring and control that operates with coded signals over communication channels, typically using one communication channel per remote station. It isn’t just Joyce who acknowledges SCADA security as a source of sleep deprivation. Nicholas Weaver, a computer researcher at the International Computer Science Institute in Berkeley, California, was quoted as saying, “I don’t do SCADA research because I like to sleep at night.” However, those who do look into SCADA security find evidence of groups looking to infiltrate industrial systems and exploit their vulnerabilities.
At the Black Hat Conference of 2013 in Las Vegas, Kyle Wilhoit, a researcher with security company Trend Micro, gave a talk on what happened when he set up a dummy water control system that could be accessed via the internet. In his talk, he discussed how a Word document hiding malicious software was used to gain full access to his U.S.-based decoy system, or “honeypot.” The attack was carried out, allegedly, by a China-based hacking group known as Comment Crew, or APT1, and it illustrated the point that there are hacking groups out there targeting infrastructure systems. Wilhoit reported that between March and June of 2013, 12 honeypots deployed in 8 different countries attracted 74 international attacks, 10 of which were sophisticated enough to take complete control of a dummy control system. While his findings were based on decoy systems, Wilhoit noted, “these attacks are happening [to real systems] and the engineers likely don’t know.”
In October 2012, U.S. Defense Secretary Leon Panetta warned that successful attacks had been made on the computer control systems of American electricity and water plants, as well as transportation systems. While Panetta’s report was light on details, it prompted a response from Chris Blask, founder and CEO of ICS Cybersecurity: “Stability and reliability are more important than anything – you have to keep the lights on.” Because we “have to keep the lights on,” these power and water systems run on older hardware and software that simply works. As such, systems remain unpatched and vulnerable to known security issues. And since some of these systems sit in remote areas, companies, contractors and employees have pushed for remote access to them, thus exposing them to infiltration via the internet.
Roy Campbell, who researches the security of critical-infrastructure systems at the University of Illinois at Urbana-Champaign, reported that the pattern of connections between different parts of the electrical grid can create weak spots that would make it relatively easy for a hacker to bring down a wide area: “If you can isolate a power station, for example, it can be difficult to turn it back on because you need power to do that.”
More topics to keep the NSA up at night are discussed at the Black Hat Conference every year. I’m sure they’re listening very closely and building up their defenses, but with the expansion of IoT being so rapid and the network being so dispersed, it would be difficult for them to protect all of it, all of the time. Some demonstrations of hacking attacks covered at the conference in 2013 were:
· Spraying the audience with water from a replica water plant component forced to over-pressurize
· Showing how wireless sensors commonly used to monitor temperatures and pressures of oil pipelines and other industrial equipment can be made to give false readings that trick automatic controllers or human operators into taking damaging action
· Detailing of flaws in wireless technology used in 50 million energy meters across Europe that make it possible to spy on home or corporate energy use and even impose blackouts
· Exploiting a protocol called Modbus that has been used to control industrial equipment since the 1970s and is still in wide use today on devices often connected directly to the internet
It’s been reported that the incentive to update or patch these security flaws is low because current law doesn’t hold energy operators or the manufacturers of control systems liable for the consequences of poor security, such as damage from an explosion or lengthy power outage. So, maybe while the NSA is losing sleep over the possibility of infrastructure hacking attempts causing damage, the private companies don’t feel the same pressure because they aren’t exposed to monetary damages resulting from the failures of their systems.
One final note here related to the sweet dreams of the NSA. In his spare time, HD Moore decided to carry out a personal census of every device on the internet. Moore’s census involved sending simple, automated messages to each of the 3.7 billion IP addresses assigned to devices connected to the internet around the world; not even Google has publicly attempted such a census. The result: many of the 310 million IPs that responded were vulnerable to well-known flaws, or configured in a way that would let anyone take control of them. Moore was quoted as saying, “There [are] some fundamental problems with how we use the Internet today. We’re sitting on mountains of new vulnerabilities.”
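The scale of that census is worth pausing on. A rough sketch of the arithmetic, using only the figures reported above (3.7 billion addresses probed, 310 million responses) against the size of the IPv4 address space:

```python
# Rough arithmetic behind HD Moore's internet census, as reported:
# the IPv4 space holds 2**32 addresses; ~3.7 billion were probed
# and ~310 million devices answered.
TOTAL_IPV4 = 2 ** 32           # 4,294,967,296 possible IPv4 addresses
PROBED = 3_700_000_000         # addresses the scan touched
RESPONDED = 310_000_000        # devices that answered

probed_share = PROBED / TOTAL_IPV4   # share of the whole IPv4 space probed
response_rate = RESPONDED / PROBED   # share of probed addresses that replied
print(f"{probed_share:.0%} of IPv4 probed; {response_rate:.1%} answered")
# prints "86% of IPv4 probed; 8.4% answered"
```

Roughly one in twelve probed addresses answered an unsolicited stranger, and, per Moore, a large share of those were openly takeable. That ratio, not any single flaw, is the “mountain of new vulnerabilities.”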
Cutting the Cords
So while the IoT appears vulnerable to celestial and digital attacks, how does it stand up to some good old-fashioned and therapeutic physical destruction? The good news is that even though the smarter planet invents a virtual layer for programmers to manipulate in the confines of cyberspace, it has to run on physically existing hardware. And that hardware is certainly available for physical attack.
In April 2016, Verizon reported that its equipment had been sabotaged in the wake of a strike by its unionized workers. Fiber-optic cabling was sliced at facilities in New Jersey, Pennsylvania and New York, cutting services to customers that included police and fire departments. A Verizon spokesperson reported that during a normal year the company might experience 5 or 6 deliberate cable cuts, but since the strike started, the number of suspected deliberate cuts had risen to 49. If only humans could recognize that the damage done by and for technology extends far beyond their paychecks…
In July 2015, it was announced that the FBI had begun investigating at least 11 physical attacks on high-capacity internet cables in northern California. In one such attack, near Sacramento, the perpetrators broke into an underground vault and cut three fiber-optic cables belonging to internet service providers. In April 2009, underground fiber-optic cables were cut at four sites, knocking out landlines, cell phones and internet service for tens of thousands in Santa Clara, Santa Cruz and San Benito counties. Similarly, in early 2015, underground fiber-optic cables were sliced in Arizona, cutting tens of thousands of residents off from the internet. In response, JJ Thompson, CEO of Rook Security, said, “When it’s situations that are scattered all in one geography, that raises the possibility that they are testing out capabilities, response times and impact. That is a security person’s nightmare.” Yes, and by the same token, what a nightmare our technological dependence has become! It would be fitting to turn FBI Special Agent Greg Wuthrich’s words around and apply them to carrying out attacks on technological infrastructure: “We definitely need the public’s assistance.”
In March, 2015, USA Today published an article entitled, “Bracing for a big power grid attack: One is too many.” In this article, it was revealed that the U.S. power grid is hit by cyber or physical attacks about once every four days. While many of the attacks are small-scale, the analysis carried out by USA Today on federal energy records lists some important facts:
· Transformers and other critical equipment sit in plain sight, often only protected by chain-link fencing and a few security cameras
· Suspects have never been identified in connection with many of the 300-plus attacks on electrical infrastructure since 2011
· In 2011, an intruder gained access to a critical hydro-electric converter station in Vermont by smashing a lock on a door. In 2013, a gunman fired multiple shots at a gas turbine power plant along the Missouri-Kansas border. Also in 2013, four bullets fired from a highway struck a power substation outside Colorado Springs. No suspects were apprehended in any of these incidents.
· The power grid is susceptible to cascading outages due to its reliance on a small number of critical substations and other physical equipment
And here’s a rundown of the “seven bullets theory” from John Axelrod, a former energy security regulator, taken from a presentation he gave at a 2013 security conference in Louisville. He describes how a mass outage could be triggered by a physical attack targeting key pieces of equipment:
The Eastern power grid is highly interconnected and relies on rolling power between different utilities, he said, according to a video of the presentation.
"If you know where to disable certain transformers, you can cause enough frequency and voltage fluctuation in order to disable the grid and cause cascading outages," said Axelrod, who now heads the power and utilities information security practice at Ernst & Young. "You can pick up a hunting rifle at your local sporting goods store … and go do what you need to do."
Sometimes the system doesn’t even need an external push; it regularly generates failures of its own. In March 2016, Lockheed Martin faced months of delivery delays on the first six of eight GPS III satellites when it was discovered that small cracks in ceramic capacitors supplied by Harris Corporation were causing failures. Apparently, Lockheed has had so much trouble delivering a working system to its customer, the United States Air Force, that the Air Force was considering opening the troubled program up to multibillion-dollar competition. And it’s not only the satellite project that is weighed down with technical problems; the ground-based system for communicating with those satellites, being built by Raytheon, is running five years late. The system itself appears to hobble along on technology it can barely produce well enough to keep up the charade.
John Zerzan has been cataloging manufacturing defects and recalls from various industries for years on his weekly radio show, Anarchy Radio. One could point out, and I believe I have on previous shows, how entropy continually conspires against the aims of technology, dissolving its systems of control and force. Entropy, in simple terms, is the measure of energy in a system that is unavailable for doing useful work. Though the phrase “useful work” makes this somewhat of a loaded definition, the point is clear enough: the laws that science ascribes to nature ensure that all the sound and fury of technological innovation ends in a slow decay away from the order it attempts to impose. As such, entropy describes, in general terms, the trajectory of technological systems and components – they will eventually fail to meet their desired operating specifications.
Keeping all this in mind, I suppose one could say the system will eventually cut its own cords, but with a little determination and help, those cords could be severed more readily.
On their own website (https://dyn.com/about/), Dyn describes itself as “a cloud-based Internet Performance Management (IPM) company that provides unrivaled visibility and control into cloud and public Internet resources. Dyn’s platform monitors, controls and optimizes applications and infrastructure through Data, Analytics, and Traffic Steering, ensuring traffic gets delivered faster, safer, and more reliably than ever.”