March 23, 2017 - From the March 2017 issue

De-Risking Cities: Connectivity and Cybersecurity

With more and more infrastructure coming to rely on a network of computers communicating around the clock, the most critical functions of our cities could soon be more vulnerable than ever before. In the Internet age, urban “resilience” means not only responding to climate change, but also protecting our ubiquitous networks from both hack and glitch. At VX2017, cybersecurity experts David Alexander (LA Department of Water and Power), Cheryl Santor (pictured, Metropolitan Water District of Southern California), Nelson Gibbs (ISACA), and Miguel Villegas (K3DES) discussed how agencies are staying ahead of the curve of the new, advanced technology taking over our cities. TPR presents an excerpt of the illuminating panel. 


Cheryl Santor

"Everything related to smart grids or smart metering carries risk. When you introduce a massive network of millions of connected meters, those all become entry points to your infrastructure." – David Alexander, Director of Information Security, LADWP

David Alexander: We all know that the functions connectivity provides are potentially amazing—from cell phones to Nest and the Internet of Things. But of course, with every golden prize there comes a risk. There are many unrealized risks associated with connectivity, devices, and networking.

For example, all smart grid utility management is done through cyber connectivity. Therefore, everything related to smart grids or smart metering carries risk. When you introduce a massive network of millions of connected meters, those all become entry points to your infrastructure. You have to be mindful of that, and factor it in when you’re designing and architecting your environment.

The biggest challenge we face in cybersecurity is the lack of information and understanding. In many businesses, security decisions are often made by managers and executives who look at them in terms of the bottom line, sales, and operations—or, in cities, service. Risk tolerance is very high, especially in a competitive or service-driven world. But in cyber, a single action can cause a catastrophic failure. So many people can be impacted immediately by one breach.

The major focuses of cybersecurity are confidentiality, integrity, and availability. Combining these elements can result in assurance and reliability. Let’s begin with privacy.

Cheryl Santor: Everything we do nowadays results in some information coming back to us. I recently bought a Ring video doorbell. It allows me to check who’s coming to my front door—and it records it. Is there now a privacy concern for the person coming to my door? If we talk, could that recording come back to haunt me?

We don’t necessarily have to be on pins and needles every single minute, but we need to start thinking beyond, “This is a great device; I love what this can do for me,” and about repercussions that could happen at any time.

Another example is the cars that drive us now. Shortly after I got my car about three years ago, I was driving on the freeway when suddenly a voice came through my stereo and told me to avoid the direction I was going. That was an indication that I was being watched very closely in everything I was doing.

We all have instances where we try to avoid being technically connected. But can you ever truly be technically unconnected anymore? I don’t think so.

Nelson Gibbs: In the cyber space, the barrier to entry for risk and threats is incredibly low.

Connecting systems can enhance the benefits that we get from them, by allowing us to more easily share information within our own organizations or among organizations. But once you start to interconnect systems, you’re also exposing them to anyone else who wants to attempt to connect to them.

In the old days, when people had to gain physical access to systems, you used to be able to put proximity controls in place, and there were high costs for duplicating or connecting to equipment. Now, the internet is ubiquitous, always on, and always accessible. Independent actors, organizations, and nation-states can take advantage of that access at very low cost. Just a laptop is enough to allow many hacks to occur. For the cost of a few hundred dollars, somebody can expose you to the same level of risk that a well-funded, well-staffed adversary could.

Privacy is not the only concern. We also have to consider the availability of systems. If people deny you access to your system, that would inhibit your ability to fulfill your organization’s mission.

Miguel Villegas: We typically focus on risks to general-purpose devices in the office. But there are also risks in industrial and process-control systems, which we live with every day.

In our reviews, auditors are finding that basic controls are just not present. All of these controls are installation-selectable. The technology of cybersecurity controls is evolving and getting better, but people just don’t turn them on. They slow the system down, they’re cumbersome, you have to maintain them, and it isn’t until you’re breached that they become important to you.

To me, that’s one of the biggest risks—not technology, although there are inherent limitations to it, but people like you and me.

David Alexander: There are a lot of other things to consider in terms of privacy. Social media use, even as part of business practices, can generate a lot of information about familial and personal habits.

Other issues are business spending habits and location monitoring. Could health insurance premiums be impacted by spending habits in the grocery store? Could GPS allow your driving habits to be monitored by an insurance company?

Next, let’s have the panelists speak to safety.

Cheryl Santor: Let me tell you about an experience I had at MWD, when we were putting in automatic metering devices. The information from these meters was to be fed back to our industrial control system—the supervisory control and data acquisition (SCADA) systems that run the core operations throughout the company. But the manufacturers did not provide the level of security that we wanted.

The meters were going to be located on the street, where they would be able to be accessed by the public. Yet they did not have enough memory to accommodate a basic password. They were free and open. That doesn’t make sense.

Additionally, all the ports on the devices were open. Anybody could walk up with any kind of serial port or Ethernet cable and plug it in. That’s questionable, too—even if the devices were on a pole 20 feet in the air. We actually made the manufacturer go back and reconstruct these devices. We devised a mechanism whereby the employee responsible for calibrating these devices would have to have a special password that would open the port where they could plug in their computer.

Manufacturers today are not ready to build in security as they should. Unfortunately, they don’t think they need it until customers say, “I need that.” You have to ask those questions, and you have to put somebody who’s knowledgeable about security—who can understand the ins and outs of how the devices work—on all these projects.

We need to get down to the level of the physical firmware in our devices—including infrastructure, heart pumps, hearing aids, and anything that can be hacked. We need to start at the basic level, where those things are developed and put into production and sold to us.

Let’s say you have a device on your air conditioner. If it were hacked, what would you want to happen? You would want it to react by ceasing to function, rather than allowing the hacker to bring your temperature up to 120 degrees. In other words, you would want it to “fail safe.” All IoT devices—in your refrigerator, car, phone, everything you touch anymore—need to be manufactured with some mechanism that enables them to “fail safe.”
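The “fail safe” behavior Santor describes can be sketched in code. The following is a minimal illustration, not any vendor’s actual firmware: the key, setpoint limits, and HMAC scheme are all invented assumptions. The point is that on an unauthenticated or out-of-range command, the device defaults to off rather than obeying.

```python
import hmac
import hashlib

# Hypothetical per-device secret and safety limits (illustrative only).
SECRET_KEY = b"device-provisioned-secret"
SAFE_STATE = "OFF"
MIN_SETPOINT_F = 50
MAX_SETPOINT_F = 90

def sign(payload: bytes) -> str:
    """Compute the HMAC a legitimate controller would attach to a command."""
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def handle_command(payload: bytes, signature: str) -> str:
    """Return the state the device should enter for this command."""
    # Unauthenticated command: fail safe, don't fail open.
    if not hmac.compare_digest(sign(payload), signature):
        return SAFE_STATE
    try:
        setpoint = int(payload)
    except ValueError:
        return SAFE_STATE
    # Physically dangerous setpoint (e.g. 120 degrees): shut off instead.
    if not MIN_SETPOINT_F <= setpoint <= MAX_SETPOINT_F:
        return SAFE_STATE
    return f"HEAT_TO_{setpoint}"

print(handle_command(b"72", sign(b"72")))     # valid command is applied
print(handle_command(b"120", sign(b"120")))   # dangerous setpoint -> OFF
print(handle_command(b"72", "bad-signature")) # tampered command -> OFF
```

The design choice is the same one the panel urges for meters and thermostats: every rejection path converges on the safe state, so a hacker who defeats nothing more than the network still cannot drive the device into a harmful setting.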


David Alexander: We have to engage security very early on in the planning stages—not bolt it on at the end. The cost associated with bolting a solution on at the end is magnitudes higher than if you just architect it from the beginning, or at least seek guidance on it.

Nelson Gibbs: I agree. We aren’t planning for security on the front end; we’re not engineering security into the systems before we deploy them. A good example of that is autonomous vehicles.

Cars have a sensor network that controls how the car operates. It monitors the fuel level, the brake pad level, traction on each of the wheels, etc.—which enables anti-lock braking, for instance. All those sensors are communicating within the car.

Today, cars also have a communications network that enables satellite radio, Bluetooth connection, etc. We’ve created a secondary communications network that reaches outside the car. The problem is that manufacturers have connected these two networks together. That was a very bad idea. You don’t want the safety sensor network inside the car to be accessible from outside the car.

Manufacturers didn’t think about that beforehand, and now there’s a problem. It’s now possible to connect to an individual car from a remote location and take control of it—turn it on, turn it off, activate the brakes, etc.

That’s a huge health and safety threat to individual drivers, and to cities as a whole. Think about bus systems that are connected to networks for routing. The ability to control these buses could become available outside the central command center—to individuals sitting halfway around the world. This is a design flaw. It was not adequately addressed in the engineering from the get-go.

Miguel Villegas: It’s critical to understand that this risk is here today.

I recently saw a presentation in which a UC Irvine professor was able to take control of a car. He was able to maneuver it, start it, turn it off, and everything—because it didn’t have security. Meanwhile, Uber has started to test self-driving cars in Pittsburgh.

There’s a lot of risk in technology today. A worst-case scenario might be someone driving someone else off the road. But more minor hacks can also affect our lives—and automated vehicles are just one example.

David Alexander: Let’s turn to the ways that we can identify risk. Cheryl, speak about the Cybersecurity Framework.

Cheryl Santor: In 2013, the Department of Homeland Security and the National Institute of Standards and Technology (NIST) were mandated by President Obama to create a cybersecurity framework.

The first question the assessment asks is: Do you have an inventory of all of your assets? It then goes through the whole gamut of questions to help you understand your business—what it does, what it’s connected to, how it functions, and what pieces are at play that might have risk associated with them. When you start with a basic understanding of your own core business functions, you can get a good idea of where your gaps are.

When Target was breached in 2013, it was because they didn’t realize that their HVAC system was connected directly to their primary network 24/7. Why wasn’t that ever looked at? Those assessments should have been done. They should have known exactly what touched what, where, when, how, and why.  That being said, you don’t have to take it all on at once. Address the big things right away, and get to the small things later. It’s like they say: How do you eat an elephant? One bite at a time.
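The inventory-first discipline Santor describes can be reduced to a very simple check: compare what is actually talking on your network against what you believe you own. A toy sketch, with invented device names (the “hvac-controller” stands in for the kind of overlooked connection that hurt Target):

```python
# Illustrative gap check: flag devices observed on the network that are
# missing from the official asset inventory. All names are made up.
known_inventory = {"billing-server", "scada-gateway", "hr-workstation"}
observed_on_network = {"billing-server", "scada-gateway", "hvac-controller"}

# Set difference: anything seen on the wire that nobody inventoried.
unknown = observed_on_network - known_inventory

for device in sorted(unknown):
    print(f"UNINVENTORIED DEVICE: {device}")
```

Real environments use discovery scanners and asset databases rather than hard-coded sets, but the underlying question is this one: what touches what, where, when, how, and why.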

We’re not going to rip out all of our old systems. You have to look at how you can put security around information that’s being conveyed through old systems. At times, maybe it’s better to have somebody drive in rather than connect through the internet.

Take NIST’s cybersecurity framework as your compass, and have somebody who works there—not an outside consultant—walk through those questions. The first thing you need to do is understand your environment.

David Alexander: Nelson and Miguel, speak to risk assessment and management.

Nelson Gibbs: There are libraries of standards and information available. If you’re a US organization, NIST has a Risk Management Framework (SP 800-37), which includes a guide to risk assessment (SP 800-30). If your organization is international, or deals with international suppliers, ISO 31000 from the International Organization for Standardization may be more relevant. But they both essentially say the same thing.

Once you have an inventory of what you have that’s at risk, you need to assess the security threats to those assets, the likelihood that something will happen to them, and the impact if something did.

You need to go through that exercise so that you can prioritize your assets, because you’re not going to be able to de-risk everything. You’re going to have to pick and choose those that are most important and focus on them.
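The assess-then-prioritize exercise Gibbs describes is often boiled down to a likelihood-times-impact score per asset. A minimal sketch, with entirely invented assets and ratings on a 1-5 scale:

```python
# Toy risk register: likelihood and impact rated 1-5. The assets and
# ratings are invented for illustration, not real assessments.
assets = [
    {"name": "SCADA network",   "likelihood": 2, "impact": 5},
    {"name": "Smart meters",    "likelihood": 4, "impact": 3},
    {"name": "Office printers", "likelihood": 4, "impact": 1},
]

# Simple risk score: likelihood x impact.
for a in assets:
    a["risk"] = a["likelihood"] * a["impact"]

# You can't de-risk everything, so work the list from the top down.
for a in sorted(assets, key=lambda a: a["risk"], reverse=True):
    print(f"{a['name']}: risk score {a['risk']}")
```

Formal methods (SP 800-30’s qualitative scales, for example) are richer than a single multiplication, but the output is the same: a ranked list that tells you where to spend limited security effort first.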

Although these standards and frameworks can provide initial guidance, the best tool we’ve got is still between our ears. It’s informed, knowledgeable people asking what happens when.

If you’re going to connect a control system, reporting, or monitoring network to the internet, you have to ask yourself: What happens when that’s not available to us? Because that’s going to happen. And if you haven’t asked yourself that question beforehand, it’s going to be an incredibly painful experience to find out.

David Alexander: In a nutshell: Identify. Assess. Prioritize. Categorize. And then build your strategy to de-risk your cities.



© 2019 The Planning Report | David Abel, Publisher, ABL, Inc.