Physical security systems have benefited from the technology revolution. Control modules are network-aware, or at least they connect to a PC. Some are PCs. They interact with a website account, or they contain a website. They provide remote access. They get system updates.
The great thing about this is that users can interact with the system, configure it and review it without needing all those special tools used in the old days. The convenience of using common software to configure systems, monitor online, send email and SMS alerts, remotely access video feeds, and zoom in and out, is all too tempting to resist. So say the hackers, too.
As an adversary, anyone can jump online and buy a zero-day spear-phishing template for a few dollars. This lets a hacker create an email to send to a target, infect and control the target's PC, and find out what software he uses, what gadgets are connected, what passwords he uses and what web addresses he frequents. If the target has remote video monitoring, the hacker now has that too. Neat.
Included here are all detection, analysis, surveillance and alarm systems. For example, many people have global positioning system (GPS) trackers on their cars in case they are stolen. What treasure might this decision expose to online adversaries? Could they hack the website account and monitor the car at will? Would that let them turn on the hands-free kit and listen to a conversation?
History
Physical security systems have a long history. In recent decades, they have become quite sophisticated and have followed the evolution of other digital embedded systems; that is, mechanical systems that contain computer software.
Bigger or older physical security systems needed maintenance from skilled technicians with special cables. This meant that hacking these systems required high levels of creativity. As time passed, security systems became more like network devices that happened to relate to security. They were integrated into building management and automation systems, telecom systems and office networks. They were made easier to operate, and more feature-rich.
This is a natural evolution for such systems. It is necessary for these systems to keep pace with the population’s appetite for risk, convenience and gadgetry. But there is a downside.
While there is a range of sophistication and maturity in the systems, there is also a spectrum of self-preservation capability. The hardness of the system is often balanced against its ease of use. Concessions are also made for reasons of commercial efficiency. Sometimes, comically, there is no rational explanation for a risky design decision.
A lack of operator expertise can compromise the state of the system in terms of IT security. The end result is a wide range of vulnerabilities being exposed with little awareness of the associated risks.
Scenarios
Networked camera vulnerability
A few years ago, a major manufacturer of domestic security cameras accidentally introduced a bug into its camera software. The bug allowed an intruder to view the camera feed over the internet without needing a password.
Bulletin board websites and internet newsgroups started listing the web addresses at which the camera images could be viewed. Numerous breaches occurred, exposing the private video surveillance of households and other premises.
The bug remained in the camera software for a number of years! It was estimated that only five percent of customers had registered with the manufacturer, so it is difficult to know how many people were affected, or how many ever learnt that their camera systems were compromised.
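The class of bug described here can be checked for quite simply. The sketch below, in Python, probes whether a device endpoint serves content without credentials; the URL and path are hypothetical, not the actual vendor's interface, and such a check should only ever be run against devices one owns.

```python
# Hedged sketch: does an endpoint demand credentials, or serve its
# content to anyone? The address used in the usage note is hypothetical.
import urllib.request
import urllib.error

def status_requires_auth(code):
    """HTTP 401/403 mean the endpoint rejected the unauthenticated request."""
    return code in (401, 403)

def feed_is_exposed(url, timeout=5):
    """Return True if the endpoint serves content with no credentials at all."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True  # 200 OK without a password: the bug described above
    except urllib.error.HTTPError as err:
        return not status_requires_auth(err.code)
    except urllib.error.URLError:
        return False  # unreachable: nothing to conclude

# Usage (hypothetical address):
# if feed_is_exposed("http://192.168.1.20/video.cgi"):
#     print("camera feed is viewable without a password")
```

A camera that is working correctly should answer an anonymous request with HTTP 401 or 403, not with video.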
Data breaches
The fact that people’s information is held by a third party means it can be improperly obtained from another source. The infamous Sony data breach (an arbitrary example from among many) demonstrates that big budgets and ‘terms of use’ do not necessarily guarantee security.
If a physical security provider is compromised in this way, what information and capability is exposed?
Now that so-called advanced persistent threat resources are available for hire on the internet, the attack surface of physical security systems and their associated technology must be re-thought as “From where am I visible?”
Stuxnet
The Stuxnet computer worm became famous for targeting uranium-enrichment centrifuges in Iran. It is said to have caused real physical damage in an unmanned sabotage operation. In broad terms, the malware overrode the speed limits on the centrifuges so they spun out of control and tore themselves apart. Although not a completely accurate description, it shows the concept. The systems it targeted were not connected to the internet. It jumped, morphed and hid. Analysts have revered the design of the malware as if it were a magnificent mythical beast.
This shows that a properly motivated intruder can overcome almost any obstacle via design ingenuity in the tactics or the tools.
Social mechanics
Quite often, the hardness of the system alone is not the deciding factor. Much hacker folklore is based on combination attacks. Social engineering is the practice of exploiting human behaviour for tactical advantage. In computer hacking, it is typified by examples such as:
- Arrive at reception dressed like a maintenance guy. Ask for a visitor pass to get in, perhaps to clear a blocked drain (and plug in a little box).
- Use 100 points of identification to change someone’s password or personal identification number (PIN) over the phone.
- Get hold of a support guy’s toolkit. They often contain master passwords – back doors.
- Drop a USB gadget in the car park for an employee to find, inherit and use at work.
- Follow someone through a secured door ('piggybacking'), such as the door to the shared bathrooms corridor, which also holds the telco wiring riser and a wiring distribution frame to play with.
Whose responsibilities are these? Do not consider computer security and physical security as separate forces. They must interweave.
Solutions
Resist gimmicks
The manufacturers of systems rush to give consumers a reason to be interested in them. They will give consumers half-baked software as long as the list of features sounds right. What many people fail to realise is that the gimmick which enticed them to purchase the software, or one of its many unused features, may be the very thing that makes it interesting to an intruder.
Include it in the risk matrix
Anyone on the risk committee, or who gives advice to such committees, should get these issues on the table and state that there are connections between systems and there may be vulnerabilities. What to do? It is okay to just say, “We acknowledge the question, and will consider how to increase our knowledge.” That is an important step.
Ask suppliers questions in writing
Especially if there are any specific concerns, consumers should ask questions via email so they get a written reply and therefore a record of the supplier’s stated position. If the supplier dodges the question, politely restate it. If the consumer ends up in a bad corner, it can be valuable to be able to show that he made conscious efforts.
Test suppliers
Users should ask suppliers for logs or similar records, and see how they respond. Tell them an IT security scenario is being fire-drilled. Do they email a text-based log file that can be easily analysed, or do they fax shadowy pages that can hardly be read? Does it take minutes, or days?
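The point of asking for a text-based log is that it can be filtered and counted in a few lines of code, which a faxed page never can. A minimal sketch, assuming a hypothetical comma-separated access-control log (real systems will vary in format):

```python
# Count denied access attempts per badge from a text log.
# The log format below is hypothetical; field order will vary by vendor.
import csv
import io
from collections import Counter

SAMPLE_LOG = """\
2024-03-01 08:02:11,door-3,badge-1041,GRANTED
2024-03-01 08:05:43,door-3,badge-0007,DENIED
2024-03-01 08:05:51,door-3,badge-0007,DENIED
2024-03-01 09:14:02,door-1,badge-1041,GRANTED
"""

def denied_attempts(log_text):
    """Return a count of DENIED events per badge."""
    counts = Counter()
    for timestamp, door, badge, result in csv.reader(io.StringIO(log_text)):
        if result == "DENIED":
            counts[badge] += 1
    return dict(counts)

print(denied_attempts(SAMPLE_LOG))  # {'badge-0007': 2}
```

Two repeated denials at the same door in eight seconds is exactly the kind of pattern a fire drill should surface, and it takes seconds to find when the supplier can hand over machine-readable data.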
Use security awareness education
Spear phishing cannot be forced on anyone; the target has to fall for it. If consumers know what it looks like, they probably will not fall for it any more.
Staff, clients and system users are the best guardians of the systems, and the best coaches for each other. Empower them to do the work. They will enjoy being competent, and their confidence will spread beyond the office to their personal lives.
Kim Khor is a computer forensics expert. He consults on network security, incident response, risk and compliance, investigations, and electronic evidence management in the Asia Pacific region. He can be contacted at kimkhor@gmail.com