Archive

Archive for the ‘Security’ Category

Thoughtlet – Are we moving to a single device?

June 15th, 2013 Comments off
Reading Time: 6 minutes

This isn’t a fully fleshed out thought. It is the beginning of some musings after looking at the Apple WWDC announcements and how Apple is building tighter integration between OS X and iOS. It was also spurred on by this article. As users are driven by portability, as the gap in feature parity between devices shrinks, and given the history and trends of personal computing purchases, are we finally moving to the “single device”? What will this new “single device” look like, and what effect will it have on current trends in the market?


If you don’t like my picture there are others to choose from

Personal computing kicked off in the 1980s with the personal computer. This was the first time that general and flexible computing was available to the average person.

In the 1990s mobile phones took off, as did the personal digital assistant (PDA) in the mid to late ’90s, taking communications and personal computing mobile. Given the limited capabilities of PDAs at the time, most people still had a desktop PC. Those lucky enough also had access to laptops in the ’90s; these too had limitations, and for the more powerful users they increased the device count further.

In the late 1990s PDAs merged with phones to create the first smart phone, reducing the number of devices a person carried.

The 2000s brought the advancement of the laptop as the norm, and the latter part of the decade saw the introduction of netbooks and ultrabooks as a way of increasing the portability of computing. It also saw the paradigm (can’t believe I used paradigm) shift in mobile telephony with the introduction of the iPhone. This new interface changed people’s view of mobile computing forever.

By 2010 tablet computing, on the back of smartphones, came to market and introduced another compromise to computing. This now sees people with three devices, each needed for a specific purpose: the notebook as the data entry and manipulation device, the smartphone as the all-purpose device, and the tablet as the compromise of the two, meeting somewhere in the middle.

In 2013 we now see a decline in PC sales and an increase in smartphone sales, with tablets of varying specification and size trying to balance capability and portability, as well as smartphones so large that they challenge the smaller tablets on the market. Why? This jostling and positioning is trying to meet consumers’ needs, but what are these needs?

 

I argue that people are trying to get that balance right. Ideally they don’t want both a phone and a tablet, but the phone screen is too small, or the tablet too big to always have with them. If this is truly the case then the real future is going to look a lot different from where we are now, reaching an almost sci-fi climax.

 

I think what will eventually happen is that the processing power of a mobile phone will be comparable with that of the ultrabooks of today. Once this happens, is there really a need for everyone to have multiple devices? The new device will be like the smartphone of today, with a docking capability to turn it into a powerful data entry and manipulation tool, or a sleeve that gives it a bigger, interactive display like that of a tablet or laptop.


vision of future of personal computing

If this is the case, what are the implications to current enterprise trends?

 

Cloud Services – Today file sharing tools like Box and Dropbox allow us to share files with others, but most people tend to use them as a way of syncing and backing up their own personal data. In the single device world this won’t need to change: whilst the sync capability will be less of a concern, the sharing capability will grow as it does today, moving from file sharing to collaborative content creation and manipulation.

BYOD – the Bring Your Own Device phenomenon, like cloud, is moving past being a disruptive trend and becoming the norm. With a single device, the only barrier is the compartmentalisation of work and personal. As mobile computing power increases, so will the ability to have capabilities like personas or profiles, allowing seamless switching between work and personal contexts.

Security implications – This will cement the concept of the micro-perimeter (see really crappy Figure 2 below). Mobile computing and secure code execution are becoming more and more mature, as has desktop computing. We’ve moved from the personal firewall and the hypervisor to the micro-visor (see Figure 1 below), providing the ability to secure the execution of the operating system itself, as well as temporary sandboxed instantiation of applications as they are used. Incorporating the Mobile Device Management (MDM) platform concept into a policy-based micro-visor allows the seamless movement from personal device to multifunction device, with employers able to specify policies for the components under their control.


Figure 1: Hypervisor to Micro-visor


Figure 2: Evolution of the micro-perimeter

I think that the trends of today are not going to change much or slow down; each seems to fuel the other in regards to personal computing. There are still niches in the market to help consumers and businesses ease into this new paradigm (there you have it, I used paradigm again)!

UPDATE – 18/6/13: After a brief Twitter exchange with Brian Katz (@bmkatz) and Ian Bray (@appsensetechie) I realised that I conflated the concepts of Mobile Device Management, Mobile Application Management and device data management into the MDM terminology.

I see Mobile Device Management, device control, as the initial stage in the evolution of dealing with the data management problem. Application management is controlling the conduit to the data by enforcing trusted applications (another potential flaw). Ultimately the data is the only thing that anyone truly cares about. This is an oversimplification of the problem, as there are other concerns and factors that come into it.

UPDATE – 22/6/13: A further comment from Tal Klein (@VirtualTal) reminds me that there will always be a multi-device pull driven by consumption/creation, as well as an aggregation and administration drive towards consolidation of devices. I can see that there will continue to be those that have specific needs and require multiple devices (driven by technology adoption, or scenarios). I’m also influenced by watching my family’s adoption: I’m the only one that really has multiple machines; everyone else utilises dual devices, and only uses the secondary device due to a lack of feature parity on the primary iDevices.

Brute force becomes DoS

May 27th, 2013 Comments off
Reading Time: 1 minute

I noticed that I had started to get a few emails from Wordfence about invalid login attempts. As I have both Wordfence and Google two-factor authentication in place I wasn’t worried, though I thought I’d do a large IP range block just to cut down on the noise.

 blocked login
What I found was that my provider was being really awesome in their pro-activeness: they had started automatically detecting brute force attacks on WordPress sites and removing the login.php.
As I stated above, I have Wordfence installed, which will automatically block users and IP addresses that have made too many attempts to log in to a site. But I also have Google 2-factor authentication set up, stopping these clowns.
 2FA
So whilst my provider was doing an awesome job preventing those-bad-guys™ from getting to my site, they had in essence locked me out too. Hats off to the support team for pulling this together. But the next stage really needs to include not only scanning for the fact I run WordPress to block attacks, but scanning for plugins too. Or even better, allow me to opt out.

Optimising Security

September 18th, 2012 Comments off
Reading Time: 1 minute

There is a great post today by my friend Daniel Baird over at his site Outside the Asylum on Optimising Security.

It shows the relationship between the cost of security, risk and profitability of an organisation.

As I commented on his site, I can see a number of follow up posts on this and how you flesh out the data-points that support it. It is a juggling act that every one of us in the information security space plays.


Are passwords the new security theatre?

September 10th, 2012 1 comment
Reading Time: 7 minutes
Offline Password

Offline password by binaryCoco available at “http://www.flickr.com/photos/binarycoco/2704267877/”

As you may have noticed, there have been a lot of website and business breaches in the last 3–4 months where usernames, passwords and occasionally some personal information have been taken. You can see a consolidated, up to date list here at liquidmatrix.org. Given that passwords are so easily “lost” these days, are they doing much more than providing security theatre?

This has been an ongoing topic of discussion for several years inside the info-sec community and I thought I’d get my current thoughts out on the subject as it seems to be coming to a head again.

It is becoming generally accepted that users cannot be trusted/expected to look after their credentials, and more and more businesses are looking at offering additional ways to secure user accounts beyond the humble password.

Background

A bit of background: the issue really comprises two parts, the businesses supplying their services and the people that use those services.

Part of the problem is that the businesses breached don’t always take appropriate care when managing credentials; these are stored in plain text (readable by anyone) or in a poorly hashed or encrypted form (one that allows the passwords to be cracked or reversed).

This is not always something that is malicious and there can be any number of reasons why this happens. For example, the people building these websites are web developers and not security people, they don’t necessarily know that the standard library or function that they call when building a web application is 10 years old, calls a deprecated function/hashing algorithm and doesn’t do what is required in this day and age.
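To make the contrast concrete, here is a minimal sketch in Python (standard library only) of the kind of salted, slow hashing a site should use instead of storing passwords in plain text. The function names and iteration count are illustrative, not any particular site’s implementation:

```python
import hashlib
import hmac
import os


def hash_password(password: str) -> tuple[bytes, bytes]:
    # A unique random salt per user defeats precomputed (rainbow) tables.
    salt = os.urandom(16)
    # PBKDF2 applies the hash many times, slowing down offline cracking.
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest


def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(candidate, digest)
```

Even this only raises the cost of cracking; a breach still leaks the hashes, which is why the rest of this post matters.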

A more pessimistic take is that, historically, businesses that have been caught out by these breaches don’t always take a hit financially (unless the breach leads to privacy violations and they are fined or sued), and so they weigh up the cost of doing things right vs. the likelihood of something going wrong and having to pay compensation. This attitude is definitely changing, as will be described. More and more businesses are beginning to offer alternatives.

The other factor in this is that users tend to reuse their passwords across multiple sites. Users tend to do this for any number of reasons, mostly because it is convenient to only have to remember a small set of credentials to get around work and social media sites.

This too is understandable, as most people don’t realise that once there is a breach and your credentials are leaked, people (hackers or script kiddies) will automatically try them against other popular sites or even your place of work (as one Dropbox employee apparently found out).

What’s the hoopla anyway?

To those that say, “yeah, great for clear text passwords, but mine is/was encrypted, how does that cause an issue?”: for a great overview of the problem with password breaches and cracking, head over to Ars Technica. The summary of the article, however, is that with the cracking tools available today, each breach feeds the beast and makes cracking easier the next time there is a breach.

The other issue today is that Microsoft (live), Facebook, Google, and Yahoo!, to name a few, offer the ability to provide federated authentication services through OpenID, SAML, OAuth or similar services.

This means that you can use your credentials, username and password, for one of these systems to authenticate (verify you are who you say you are) to another, completely separate system that then authorises you (provides permissions to do things based on who you authenticated as). So if your Facebook account is compromised, so is every other account that you have linked to Facebook.

People also tend to cascade the linking of their accounts, so that when you’ve forgotten your password you have Facebook, Twitter or Apple email your Gmail account with the password reset token, allowing the compromise of one account to open up the possibility of access to a lot more.

Whilst you can point the finger and blame the companies that were breached, your username and password, and the management of them, are ultimately your responsibility.

What can you do?

Given that this makes it look like the sky is falling, and that every password leaked makes it easier and easier to get into systems, what can you do? You can invest in a password generation and management tool, or look at the 2-factor authentication methods offered by vendors.

Password Management

The first thing you can do is start using a password generation tool like LastPass or 1Password. Most of the tools out there have the ability to generate passwords given a number of different parameters like whether it is pronounceable, includes numbers, capitalisations, hyphens, etc (see the example below of 1Password browser plugin for password generation).

Couple this with a tool that remembers your passwords and you now have the ability to generate new and unique passwords for each and every application and website you can think of.

Most of these applications have browser plugins too that automate the entire process so there isn’t even the need to do more than follow the prompts.
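The core of what these generators do can be sketched in a few lines; the parameters below (length, whether to include symbols) are illustrative stand-ins for the options tools like 1Password expose:

```python
import secrets
import string


def generate_password(length: int = 20, symbols: bool = True) -> str:
    # secrets (rather than random) draws from the OS's cryptographic RNG.
    alphabet = string.ascii_letters + string.digits
    if symbols:
        alphabet += "-_!@#%"
    return "".join(secrets.choice(alphabet) for _ in range(length))


print(generate_password())  # a fresh random password on every call
```

The point is not the code but the workflow: a unique machine-generated password per site, remembered by the manager rather than by you.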

Passwords – becoming too hard

Given that all of this is quite complicated and relies heavily on you to do the work, more and more businesses are realising that trusting their user base to create unique passwords is not necessarily the best thing, and they offer a number of additional mechanisms to assist in protecting themselves and authenticating their users.

This second factor authentication mechanism is something that you should always take advantage of.

 

2-Factor authentication

What is 2-Factor authentication? Two factor authentication takes the something you know (your password) and then adds in either something you have (like a security token) or something you are (biometrics).

The “something you have” can be any number of things:

  • Digital certificate;
  • Smart card (generally stores a digital certificate);
  • Physical Token (generates a one time password or pin on a screen of a device);
  • Soft token (generates a one time password or pin via an application); or
  • SMS (short message service) one time password or pin.

The something you are is exactly that, something that is uniquely you:

  • Fingerprint;
  • Retina scan;
  • Palm print;
  •  etc.

This second factor when coupled with your password makes it a lot harder for your account and personal information to be compromised should one or the other components be lost.
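The soft tokens mentioned above (Google Authenticator and friends) are mostly implementations of TOTP (RFC 6238). As a sketch of how a one-time code is derived from a shared secret and the current time, using only the Python standard library:

```python
import base64
import hmac
import struct
import time


def totp(secret_b32: str, digits: int = 6, period: int = 30, now=None) -> str:
    """Time-based one-time password per RFC 6238 (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32)
    # The moving factor is the number of whole periods since the epoch.
    counter = int((time.time() if now is None else now) // period)
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation: take 4 bytes at an offset given by the last nibble.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code depends on the time window, a stolen password alone is useless without the shared secret held on your device.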

Most financial organisations offer a number of options for 2 factor authentication. The most common of these are SMS based one time passwords for transactions. Others opt to provide their customers with physical tokens that generate one time passwords.

Other organisations have started offering 2-factor authentication methods for their users –

Google offers both SMS and soft tokens for unauthenticated devices or services across their services like Reader, Gmail, etc.- http://googleblog.blogspot.com.au/2011/02/advanced-sign-in-security-for-your.htm

Dropbox have just added soft tokens for previously unauthenticated devices – https://blog.dropbox.com/index.php/another-layer-of-security-for-your-dropbox-account/

WordPress are now offering Vasco tokens – http://www.scmagazine.com.au/News/313736,wordpress-adds-vasco-one-time-password-technology.aspx and support for the Google Authenticator application –  http://wordpress.org/extend/plugins/google-authenticator

Facebook now supports SMS-based tokens for unauthenticated devices – http://www.facebook.com/note.php?note_id=10150172618258920

The above list is certainly not exhaustive, but shows that there is now a move away from the old Username and Password as the way in which to authenticate a person.

 

What should I do?

The short answer to this is as follows:

  • Never reuse passwords. Ever;
  • Be aware of the risks in linking accounts to each other;
  • Use a password manager; and
  • Take advantage of 2-factor authentication.

Following these four simple things won’t guarantee that you and your accounts will not be compromised, but it will limit the damage if they are.

All about the path

October 20th, 2011 Comments off
Reading Time: 7 minutes

Recently on the twittersphere there was a short exchange as to why a security professional would care about VXLAN when they don’t care about the ASICs in a switch.

Instead of putting the conversation up (I’m lazy and can’t be bothered screen capturing it) I thought I’d share my $0.10 worth here.

In short, you always need to be concerned about the path of data. Just as network architects want to know packet paths for engineering purposes, a security professional is concerned with which systems or processes the data crosses and where the enforcement (choke) points are. But most importantly, you need to know the data path so you can secure it.

I’ll start by briefly explaining what VXLAN is and its deficiencies, and then take a quick look at why a security professional needs to be concerned with data paths, irrespective of whether they are virtual or physical.

What is VXLAN?

VXLAN (IETF reference: http://tools.ietf.org/html/draft-mahalingam-dutt-dcops-vxlan-00) is “A Framework for Overlaying Virtualized Layer 2 Networks over Layer 3 Networks”. This is a VMware and Cisco (amongst others) initiative. That’s the fluffy way of saying it is a tunneling protocol; more precisely, an L4 tunneling protocol.

Why have VXLAN?

Personally I wouldn’t. It is a VMware kludge for a very specific problem. However, if you are a VMware proponent then you have your uses. Anyone dealing with large virtualised environments, especially multi-tenant ones, will run into the limitations of their L2 network (even a well designed one) really quickly; roughly 4,000 VLANs doesn’t go far in multi-tenant environments.

  • How do you migrate VMs across L3 boundaries?
  • How do you move devices about without changing IP addresses?
  • Do all applications support the readdressing of VMs?
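To put that VLAN ceiling in numbers: the 802.1Q VLAN ID is a 12-bit field (with two reserved values), while the VXLAN draft defines a 24-bit segment identifier, the VNI:

```python
# 802.1Q carries a 12-bit VLAN ID; values 0 and 4095 are reserved.
usable_vlans = 2 ** 12 - 2

# The VXLAN draft defines a 24-bit VNI (VXLAN Network Identifier).
vxlan_segments = 2 ** 24

print(usable_vlans)    # 4094
print(vxlan_segments)  # 16777216
```

In a multi-tenant data centre where each tenant wants many segments, the first number runs out fast; the second effectively does not.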

Let’s take a fairly simple example of vMotion; this is not the ideal example but sufficient to get the concept across. vMotion is the movement of one Virtual Machine (VM) from a physical server to another physical server. Figure 1, below, shows a fairly simple setup with servers connected to the same switch, which we will assume are in the same Layer 2 Domain.

Figure 1 – Servers with VMs in a single L2 Domain

I won’t go into gratuitous detail on how vMotion works (you can look that up for yourself), but for the VM to move from one physical machine to another there needs to be Layer 2 (L2) adjacency. Figure 2 below shows how the machine is replicated: the path runs from the virtual machine on the left (highlighted in green), through the hypervisor (in this instance I’ve used VMware), out the physical NIC into the switch, out the switch, into the first NIC of the second server, up through its hypervisor, and finally comes to rest.

Stay with me there is a point to all of this.. I hope.

Figure 2 – VMotion path through switch.

Now, throw a router in the middle (see Figure 3) and the whole process breaks. Because there is no L2 adjacency (the traffic crosses an L3 boundary), the machine cannot move, let alone retain its IP address, etc.

Figure 3 – VM Servers in two L2 domains, separated by router.

 

As mentioned previously, VXLAN is essentially a tunneling protocol. It takes a Layer 2 frame and wraps it in a Layer 4 datagram; Figure 4 depicts two VXLAN networks, green and pink. This, with some other wizardry, gets over some of the aforementioned limitations: the virtual machines are tied to a VXLAN ID by the hypervisor (the IDs need to be the same), which is then passed to the VTEP (VXLAN Tunnel End Point), which wraps everything in a UDP packet (OK, that is slightly simplified).
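As a sketch of what that wrapping looks like on the wire: the draft defines an 8-byte VXLAN header (a flags byte with the I bit set, a 24-bit VNI, and reserved fields) that sits inside the UDP payload, in front of the original Ethernet frame. The helper below is an illustrative reconstruction from the draft, not any vendor’s code:

```python
import struct


def vxlan_encap(vni: int, inner_frame: bytes) -> bytes:
    """Prepend the 8-byte VXLAN header to an inner Ethernet frame.

    Header layout (draft-mahalingam-dutt-dcops-vxlan):
      1 byte flags (0x08 = I bit set, meaning the VNI is valid),
      3 reserved bytes, 3 bytes VNI, 1 reserved byte.
    """
    assert 0 <= vni < 2 ** 24, "VNI is a 24-bit field"
    header = struct.pack("!II", 0x08 << 24, vni << 8)
    return header + inner_frame
```

In the real protocol the VTEP then adds outer UDP/IP/Ethernet headers; the receiving VTEP strips them, reads the VNI, and delivers the inner frame into the matching segment.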

Figure 4 – VXLAN allows pseudo L2 connectivity across L3.

Now there is an L2 adjacency between the two servers, they can now replicate VMs between each other whilst maintaining the same L2 and L3 addresses. See Figure 5.

Figure 5 – vMotion across L3 boundaries is now possible with VXLAN.

Security Issues:

This is where I hope the rant has some value – If you didn’t care how the protocol works (even in a rudimentary sense) or how the data flowed in the scenarios above you wouldn’t understand what the limitations, and therefore potential security flaws, are or understand what issues it solves. That said VXLAN doesn’t solve any of the existing security issues we find in Ethernet today. It still requires processes to lock down deficiencies in the technology.

  1. Just like Ethernet, you can spoof ARPs;
  2. Broadcast storms are still possible (turned into multicast within the VXLAN domain);
  3. There are no security mechanisms built into VXLAN;
  4. If you are on the same network as a VXLAN segment, you can spoof anything; and
  5. Through the very nature of the VXLAN function you are also now removed from the ability to provide hardware handoff for L2–L7 security measures (think ACLs, VACLs, firewalls, NAT, load balancing, etc). You need to have these gated between VXLAN domains via a VM that spans both.

Even ignoring all these new issues (and yes, they are new issues; is this a new way of accessing an environment? Of course it is!), what about the policies, enforcement points, network taps, etc., that may already be in place, in the line/path of the new VXLAN tunnel that you are proposing?

These are all why a security engineer/architect/designer needs to care about the path.

Now to the comment “security guys don’t care about the path through a switch’s ASIC…”. Generally speaking, most won’t care, but they should. Just because it is a switch doesn’t mean that nothing happens to a packet between going in one side and coming out the other. ASICs are customised, programmable chips that allow the device to perform a number of different functions; if you’ve ever looked into this you will know that no two platforms are the same (even from the same vendor).

In the diagram below (which is a fictitious switch construct) you can see that there are some capabilities built into the different components of the switch, specifically the ASICs.

Getting traffic from one port to another port isn’t necessarily a simple thing. Can I get straight from Eth1 to Eth2 directly, or do I have to go up through the switching fabric to the L2 switching engine? What happens when I have to route (below shows a path of a packet when going via the routing engine)? Does, or can, the packet be modified/duplicated inside the switch and if so what happens? At what point in the packet’s path through the switch is policy enforced, and in what order?

The list goes on and on.

Most enterprise switches, even basic ones, will allow you to mirror ports. I’m pretty sure that security folk would care about which ports are set up as mirrors and, more importantly, what’s at the other end of those ports.

Conclusion:

The scenarios above are fairly simple; more advanced switches allow far more sophisticated things to happen. You can rewrite packets, or pass packets off to other processors (routing, load balancing, packet inspection, policy enforcement, etc). Again, you need to know the path through the switch and what touches a packet to understand the risk and potential impact to an application.

UPDATE:

Chris Neal sums up my rant nicely below: