In IT and IT Security there is a constant complaint about the risks of shadow IT and the adoption of consumer collaboration and sharing tools. Over the last couple of years we have also seen the emergence of novel exfiltration techniques: the persistent ultrasonic technique, where an infected device communicates with other compromised hosts via high-frequency audio; the Twitter-based technique, where malware sends out data 140 characters at a time for anyone to read; and the more recent video technique, which encodes data in video files and uploads corporate secrets to video sites for later retrieval.
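To make the Twitter-based technique concrete, here is a minimal sketch of the underlying chunking idea: arbitrary bytes are text-encoded and split into tweet-sized pieces that can be reassembled in order by the receiver. This assumes base64 as the text encoding and performs no actual posting; it only illustrates why such traffic looks like ordinary short messages.

```python
import base64
import textwrap

def to_tweets(data: bytes, limit: int = 140) -> list[str]:
    """Encode arbitrary bytes as ASCII text and split into tweet-sized chunks."""
    encoded = base64.b64encode(data).decode("ascii")
    return textwrap.wrap(encoded, limit)

def from_tweets(tweets: list[str]) -> bytes:
    """Reassemble the original bytes from the ordered chunks."""
    return base64.b64decode("".join(tweets))

secret = b"quarterly-figures: not for release"
tweets = to_tweets(secret)
assert all(len(t) <= 140 for t in tweets)
assert from_tweets(tweets) == secret
```

Because each chunk is just 140 characters of plausible-looking text, nothing about an individual message reveals that a file is leaving the network; only correlation across messages does.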
What can businesses do to protect themselves?
Each of these techniques creates a way of exfiltrating data that is not directly attributable to the party trying to obtain the information. The ultrasonic technique requires the attacker to be in close proximity to one of the compromised devices relaying the data (that said, a compromised laptop that leaves a secure facility is fairly easily accessed); the other methods, however, can be identified through IDS and DLP tools.
Another commonly used method is the implementation of VDI (virtual desktop infrastructure) to centralise and control access to the data that needs protecting, sometimes providing only non-persistent instances of the virtual desktop to guard against potential malware persistence. Many treat this as a complete solution; thankfully not everyone thinks that, but a lot do.
Combining VDI with non-persistent desktops and other security analytics (IDS, statistical analysis, and DLP) could give you a fairly strong security model, though not a foolproof one.
So… a legitimate remote user viewing a screen?
Can you protect yourself against someone simply viewing a screen? When they need to as part of their job? What about when that job is remote technical support? Hold on, it gets worse.
In the true Information Security enthusiast tradition, Ian originally posed this threat vector to peers over 10 years ago and felt that it wasn’t taken seriously; most people believed it was a novel yet impractical method of exfiltrating data. At the beginning of this year Ian decided to put his mind to the problem and, with some help and moral support, created the initial prototype of ThruGlassXfer (TGXf), hoping to shut some of the critics down. I’ve written about it before here, and here.
What does this mean?
The TGXf toolkit highlights a deficiency in the fundamental building blocks of security architecture. It should also force businesses to yet again reconsider the implications of data loss and what to consider when approaching the larger problem.
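To see why screen-based transfer is so hard to block, consider the framing idea behind it: a file is split into sequence-numbered frames, each of which would be rendered on-screen (for example as a barcode) for a camera on the far side of the glass to capture. The sketch below is an illustration of that concept only; the frame size, checksum, and wire format are my own assumptions, not TGXf's actual protocol (see the TGXf FAQ for that).

```python
import base64
import hashlib

FRAME_PAYLOAD = 128  # bytes per on-screen frame (assumed size, not TGXf's real value)

def make_frames(data: bytes) -> list[str]:
    """Sender side: split data into sequence-numbered frames carrying a
    short checksum; each frame would be shown on-screen for camera capture."""
    total = (len(data) + FRAME_PAYLOAD - 1) // FRAME_PAYLOAD
    digest = hashlib.sha256(data).hexdigest()[:8]
    frames = []
    for i in range(total):
        chunk = data[i * FRAME_PAYLOAD:(i + 1) * FRAME_PAYLOAD]
        payload = base64.b64encode(chunk).decode("ascii")
        frames.append(f"{i + 1}/{total}:{digest}:{payload}")
    return frames

def read_frames(frames: list[str]) -> bytes:
    """Receiver side: reorder by sequence number, decode, verify checksum."""
    ordered = sorted(frames, key=lambda f: int(f.split("/", 1)[0]))
    data = b"".join(base64.b64decode(f.split(":", 2)[2]) for f in ordered)
    expected = frames[0].split(":", 2)[1]
    assert hashlib.sha256(data).hexdigest()[:8] == expected, "corrupt transfer"
    return data
```

The crucial point is that nothing in this path crosses the network: the "channel" is photons leaving a monitor that a legitimate user is entitled to look at, which is precisely why IDS and DLP tooling never sees it.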
Some questions to ask are:
- What is the implication of loss of my data into public hands?
- What is the implication of the theft of my IP?
- Who is delivering me services that has access to my data?
- Where will the staff accessing my systems be located?
- What are the potential alternate motivations of staff, based on my systems and data?
- How will staff who have access to my data be vetted?
- What controls are in place already and what do I need to consider?
As yet there are no obvious or easy methods to mitigate TGXf (see the TGXf FAQ). What it should do is force businesses that care about their data security to consider not only who accesses their systems and how, but also where those systems are accessed from; especially given the continued push for remote working.