Friday, February 3, 2017

Security+ Topic - Wireless Antennas and Power

I'm not sure I know of anyone who doesn't use wifi. It has become part of our everyday lives, and without it I am not sure some people would know how to cook. When it comes to our homes, most of us will simply plug in an access point and we are good with the setup as long as we can get a signal in our living room. In business, most IT departments will set up a distribution of access points to cover the entire area as seamlessly as possible. Both of these are great strategies, but there is another scenario I would like to bring up, as the normal setup provides an easy gateway for any attacker to monitor your wireless network.

I worked with a gentleman who set up wireless communications between buildings for companies. Now this sounds like a task that almost any IT person could accomplish, but the reason he was contracted for it was that this connection needed to be secure. Some of you may be saying at this point that they could just set up WPA2 and be done. The little bit of information that is missing here is the antenna used for the communication. Generally, antennas will be omni-directional, which means that the signal goes in every direction. This is good for most setups but not for the ultra-secure setup needed by this company. Antenna design is something that can go very in depth, and I have experimented with designs as an amateur radio operator, so I will not be going into the details here except to give you an overview.

The antenna that needs to be used for this scenario is a Yagi antenna, which focuses the signal as much as possible in one direction. This is not to say that the signal will be one hundred percent in one direction. Signals will still propagate in every direction, but the Yagi antenna does a great job of concentrating them one way. Generally there will be a small back lobe, but the radiation in the direction opposite of where you are pointing is not that significant. There are two benefits to this. One is that signals can be pinpointed to the target. The second is that a Yagi antenna can help extend the distance of the signal. I encourage you to take a look at the Yagi radiation pattern if this is something that sounds interesting to you.
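To put some rough numbers on the power side of this, the gain of a directional antenna adds straight onto the transmitter power (minus any cable loss) to give the effective radiated power toward the target. Here is a minimal sketch of that arithmetic; the 2.2 dBi omni and 12 dBi Yagi figures are illustrative assumptions, not specs from any particular product.

```python
# Rough EIRP comparison: omni vs. yagi (example figures only).
def eirp_dbm(tx_power_dbm: float, antenna_gain_dbi: float, cable_loss_db: float = 0.0) -> float:
    """Effective isotropic radiated power in dBm."""
    return tx_power_dbm + antenna_gain_dbi - cable_loss_db

tx = 20.0                                        # 20 dBm (100 mW) transmitter
print(eirp_dbm(tx, 2.2))                         # typical omni dipole: ~22.2 dBm in all directions
print(eirp_dbm(tx, 12.0, cable_loss_db=1.0))     # hypothetical 12 dBi yagi: ~31 dBm toward the target
```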

Now comes the part of actually verifying that your antenna is doing what you want it to do by performing a site survey. I personally use software called Heatmapper, where I can import my own image (or floor plan) and map where the signal is the strongest. Basically you walk around the office clicking on where you are, and the software creates a heat map of how strong the wireless signal is. In the original application, it is good for seeing whether every square inch of the office is able to get a wireless signal. For the second part of this topic, the Yagi, it works wonders for verifying that your antenna is working correctly by only giving signal in one direction. Basically we are looking for an oblong shape of signal, and the heat map software will show strong signal in one direction away from the access point antenna.

Security+ Topic - End to End Security

As we move data from one computer to the next, we can do it by transferring files in clear text or by securing that data. One of the big questions is how to secure that data for transfer. We can use our browsers, a file transfer tool, or text shells to move this data, and every one of them has some type of encryption it can utilize. The days of making excuses for why we do not encrypt our data over the wire are over.

When it comes to utilizing our browser, it comes pre-set to take advantage of SSL and TLS. Secure Sockets Layer is a widely used security standard for establishing an encrypted link between a web server and a browser. It creates a behind-the-scenes connection for passing data between server and client in a secure manner. Only a few years ago you could generate a self-signed certificate for a personal web server and rest a little easier about people seeing the private data transferred. Recently the major browser developers decided that if a certificate is self-signed, or does not match the URL, the browser will give an error about the connection being insecure. This is due to man-in-the-middle attacks: an attacker spoofs the connection that you thought was secure and then forwards the requests to the true web server, so you wouldn't know they were decrypting and re-encrypting all of the data along the way.
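As a minimal sketch of what the browser is doing for you, Python's standard ssl module performs the same chain and hostname checks, so a self-signed or mismatched certificate fails loudly. The hostname here is just a placeholder.

```python
import socket
import ssl

# The default context verifies the certificate chain and the hostname, so a
# self-signed or wrong-name certificate (a common MITM symptom) raises an error.
context = ssl.create_default_context()

hostname = "example.com"  # placeholder host
try:
    with socket.create_connection((hostname, 443), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
            print("Verified certificate, issued by:", dict(x[0] for x in cert["issuer"]))
except ssl.SSLCertVerificationError as err:
    print("Certificate did not verify (self-signed, expired, or wrong name):", err)
```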

Now we have more secure protocols such as TLS, which came out to address certain limitations of the Secure Sockets Layer protocol. TLS gives additional security to the transfer of data over wide area network connections. While the older SSL 3.0 is still in use today, the minor upgrades made in TLS 1.0 make it much more secure. Where possible, each one of your servers should be set up to force TLS encryption if the client is able to do so.
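Forcing a modern protocol can be as simple as setting a floor on the TLS version a connection will accept. Here is a small sketch using Python's ssl module; picking TLS 1.2 as the minimum is my assumption, so use whatever floor your policy dictates.

```python
import ssl

# Client side: refuse anything older than TLS 1.2 during the handshake.
client_ctx = ssl.create_default_context()
client_ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Server side: the same attribute works on a server context, so legacy
# SSL 3.0 / TLS 1.0 clients are simply turned away.
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```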

Now we have the matter of encrypting the whole network connection instead of just one protocol. VPN connections force traffic to go across a virtual private network. Most of the time these connections are then routed out the destination network, and the public IP of the remote machine will be an IP on the VPN server's network. These connections are made using IPsec, which is ideal for authentication, integrity, and confidentiality. Each of these is a core item for this process to work, because if any part is skipped or not authorized, the connection could be compromised.

One final item I want to touch on is the use of SSH. This is the default tool used for almost every Linux server and is a required item for server deployments. The secure shell created has a high level of encryption, so anything sent over it is sure to be safe. I am actually a little surprised that Windows hasn't embraced the use of SSH to connect to Windows servers in order to provide a quicker remote session. Even with Windows Server Core you need to make a Remote Desktop connection to use the command prompt... weird. SSH is able to do some really cool stuff such as tunneling. Similar in behavior to the VPN connection, SSH is able to move more than just remote commands on the connection. SCP and SFTP are built on SSH and are able to move files securely. Even your browser can make a local proxy connection over the SSH connection to transfer all browser traffic through SSH.
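Here is a hedged sketch of running a command and moving a file over one SSH connection using the third-party paramiko library; the host, user, key path, and file names are all placeholders.

```python
import os
import paramiko  # third-party SSH library

host, user = "files.example.com", "deploy"          # hypothetical server and account
key = os.path.expanduser("~/.ssh/id_ed25519")       # key-based auth assumed

client = paramiko.SSHClient()
client.load_system_host_keys()                      # trust keys already in known_hosts

client.connect(host, username=user, key_filename=key)

# Run a remote command over the encrypted channel...
stdin, stdout, stderr = client.exec_command("uname -a")
print(stdout.read().decode())

# ...and move a file the same way SFTP/SCP tools do.
sftp = client.open_sftp()
sftp.put("report.txt", "/tmp/report.txt")
sftp.close()
client.close()
```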

Security+ Topic - Data Wiping, Retention, Storage

What happens to your devices when you are through with them? Do you put them in a closet and call it a day? When it comes to the expiration date of your hardware, there are a few things that need to be done to ensure that your data is safe after you are done with it. Even after you hit the delete key, there are methods and tools available to recover data from your system even though it was deleted. So what does it mean for you as an IT admin? It means that you need to securely wipe your devices of all old data.

Just like a lot of things in the IT industry, there is more than one way to skin a cat. The first option is a full format of a hard drive. This will mark the drive as blank and make it much harder to recover data. Still, that data exists on the drive if someone is very motivated to get it. After the format, one option is to overwrite the data with new dummy data. For most consumer tools, this basically guarantees that the most basic of tools will be unable to recover the data. I'll jump ahead at this point to the wiping standards of the military. The tools used for this, such as the Darik's Boot and Nuke (DBAN) live CD, make many passes over the entire hard drive to the point that it becomes nearly impossible to recover the data. I say nearly impossible because, without physically destroying it, there is a one in a gazillion chance that one sector may be recoverable.
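As a rough illustration of the multi-pass idea at the file level, here is a sketch that overwrites a file with random data a few times before deleting it. This is an assumption-laden toy: the pass count is arbitrary, and on SSDs with wear leveling a hardware secure erase or full-disk encryption is the more reliable route.

```python
import os

def wipe_file(path: str, passes: int = 3) -> None:
    """Overwrite a file with random data several times, then unlink it."""
    size = os.path.getsize(path)
    with open(path, "r+b", buffering=0) as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                chunk = min(remaining, 1024 * 1024)
                f.write(os.urandom(chunk))       # overwrite with random bytes
                remaining -= chunk
            os.fsync(f.fileno())                 # push each pass out of the OS cache
    os.remove(path)
```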

The flip side of this whole situation is the retention of the data. When it comes to how long you are to hold the data, it boils down to company or industry standards. Some companies will only require that seven days of backup data be held, while others such as financial institutions will require the data to be held for years. While this does touch into the realm of backups a bit more than security, the security aspect of the requirements must be addressed. It is not enough to simply install a server somewhere, encrypt the transmission via SSL, and then call your backups good. Take for example a remote datacenter that shuts down. They let you pull your data off and then shut everything off. All that hardware gets re-sold to salvage companies, and the hard drives are scanned by curious people who are able to recover your secret files. The data held in long-term retention must be encrypted at the same or a higher level than your local data, because you may not have physical access to it.

One consideration here is that you may not be able to wipe the data remotely without physical access. That remote storage is way out of your control, so it may be worth an investment in remote wiping capability. In this area there are a lot of options, from failed access attempts triggering a data wipe to a timeout wipe. In the first scenario, the remote server is set up to automatically wipe the data after a certain number of failed login attempts (similar to cell phones these days). The other option is a data wipe that happens after a certain amount of time. It tries to heartbeat with a certain user or group, and if it doesn't hear anything after a while, it will automatically wipe the data.
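The timeout variant is essentially a dead-man switch. Here is a toy sketch of the logic; the one-week window and the wipe_all() callback are invented for illustration, not taken from any particular product.

```python
import time

HEARTBEAT_WINDOW = 7 * 24 * 3600          # one week, in seconds (arbitrary example)
last_heartbeat = time.time()

def record_heartbeat() -> None:
    """Called whenever an authorized user or group checks in."""
    global last_heartbeat
    last_heartbeat = time.time()

def check_and_wipe(wipe_all) -> None:
    """If no heartbeat arrived within the window, trigger the wipe routine."""
    if time.time() - last_heartbeat > HEARTBEAT_WINDOW:
        wipe_all()                         # e.g. run wipe_file() over the data set
```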

Security+ Topic - Removable Media Encryption

Do you remember the days of sneakernet? That was a long time ago, when people would move files between machines with a floppy disk because there was no network infrastructure. These days it's quite simple to transfer a file over the network, but for some reason the use of USB flash drives seems to have brought new light to the term sneakernet. Being so small that they sometimes fit nicely onto your keychain, USB flash drives have found their way back to being mainstream for moving files around. Part of this is due to the mobility of laptops. In a desktop environment, files are usually moved on the network no problem, but as people get together with laptops, it is much quicker to transfer files with a USB drive.

There are a couple of big concerns with this process that need to be addressed. To start, you never know what is on that drive. Most operating systems will have an automount and then an autoplay function to make it easy for you to open it up. While this is a nice feature, it also lets in potential dangers. An attacker may decide to have software set up on the flash drive that loads when inserted and then installs some sort of backdoor or phone-home software. There are even campfire stories of hackers installing malicious software onto cheap USB drives and then purposely leaving them around the city for people to plug into their computers. So what is there to prevent this? Disabling autorun would be a good start. Making sure your anti-virus software is up-to-date, with on-access scanning, would also be good.

The above paragraph is really just the background I want to give for this part. What about those files that YOU put on the drive? Say you work from home sometimes and your internet service provider connection is really slow, so you decide that you will put your work onto a USB drive and offload it onto your desktop in the office the next day. Sounds like a simple plan, but what about that USB drive in transit? Wouldn't it be quite easy for it to slip out of your bag or fall out of the door of your car? I could describe quite a few scenarios here, but I hope you get the point. Someone is most likely going to pick up that USB drive and plug it into their computer. If that USB drive is not encrypted in some way, then you have opened up all your secret files to the public.

There are also a lot of cool ways to protect those files. The easiest way would be to simply add a password to the file if the software allows you to. This would still allow someone to see the files and possibly brute force their way into them. Another option is encryption software such as BitLocker or TrueCrypt. These can encrypt the entire USB drive, so when someone plugs it in, the operating system just thinks it needs to be formatted because it cannot read the drive properly. One of my favorite 'cool' ways of USB drive protection is my fingerprint-reading USB flash drive. When you first plug it in, the user is presented with a small accessible filesystem. It also mounts a fake CD drive with portable fingerprint-reading software. After my fingerprint is authenticated, it unmounts the public filesystem and then mounts my private filesystem. Neat eh?
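If you just need to protect individual files before they ride along on the keychain, file-level encryption works too. Here is a minimal sketch using the third-party cryptography package's Fernet recipe; the file names and drive letter are placeholders, and the key has to live somewhere other than the USB drive itself.

```python
from cryptography.fernet import Fernet   # third-party 'cryptography' package

# Generate a key once and keep it OFF the USB drive (password manager, vault, etc.).
key = Fernet.generate_key()
box = Fernet(key)

with open("quarterly_numbers.xlsx", "rb") as f:          # placeholder source file
    ciphertext = box.encrypt(f.read())

with open("E:/quarterly_numbers.xlsx.enc", "wb") as f:   # the copy that travels on the drive
    f.write(ciphertext)

# Back at the office, box.decrypt(ciphertext) restores the original bytes;
# a found drive without the key is just noise.
```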

Thursday, February 2, 2017

Security+ Topic - Drive Encryption

Data encryption is a major part of computer security, and it comes in every form that you can think of. Where the data originates, how it is transferred, and how it is stored long term must all be taken into consideration. Take for instance your mother's secret recipe. It was on her fridge for years, so you decide to make a copy of it on your computer. Years later you donate the computer to the thrift store. Then someone checks out the hard drive, and the secret recipe is used to make millions at a chain restaurant. Is this a silly example? No. It happens all the time: data is not secured and thus is exploited down the road.

Securing your data starts with where it originates most of the time: your computer. Making sure that your computer is encrypted is usually thought of in terms of laptops, but it really does impact every computer you ever touch. In the laptop realm it boils down to the fear of the computer being stolen. Something happens at a coffee shop and the next thing you know it was stolen with no recourse. Things like this happening are why employers require laptops to be encrypted. They never know what may be permanently or temporarily stored on your laptop while you are away from the office. It's a level of insurance and safety for company secrets.

I'm going to break off here into full disk encryption on virtual machines. This is something that you don't see much (these days), and most people don't think about it. Virtual machines are the first thing people reach for when it comes to spinning up an environment for their needs. They will then go through the process of firewall, hardening, password management, and more to make sure those machines are secure. As the environment grows, what about the VHD or VMDK? Backups will be taken and snapshots made. The important takeaway here is the risk of whole virtual machine theft and the ease of access once someone has the virtual machine file itself. Without encrypting the drive inside the virtual machine file, they can mount the drive and take what they want.

There are two lines of thought here for encrypting the drive. One is to simply encrypt the HDD where the virtual machine drive files exist. This is fine, except it is not fully protecting you. Sure, if someone walks away with the physical hard drive then it's useless to them, but if they can copy the virtual machine hard drive while the system is turned on, then you just handed them unencrypted data. The other line of thought is installing the encryption software inside of each virtual machine. In large deployments this can be a nightmare to manage, especially if you are rebooting the server remotely and have no way to see the console for entering the encryption password. There are trade-offs for each scenario that must be taken into account.

Finally, there is concern about the speed of HDD responses with encryption software in place. With today's encryption options such as TrueCrypt (no longer supported, sadly) and BitLocker, writes are basically as fast as writing directly to the hard drive. In situations where encrypting the drive is not an option due to certain company requirements, you may be left with the only option of a literal lock and key. Lock down the network so there is no way to copy off the virtual hard drive, and place the virtual host behind lock and key with no options for removable media.

Wednesday, February 1, 2017

Security+ Topic - Basic Network Security Tools

A common topic that comes up from people just getting into the computer security realm is what tools they can use to break into a computer. Well, that is a seriously loaded question with a lot of different directions you could go. I want to take a moment and step back from the question and really dig into what most people are actually asking. To me it seems they want to know, right from the start, how to hack their neighbor in under 10 seconds like in the movies. While you may be able to accomplish something similar deep into your career, it most likely isn't going to happen from the start.

What really needs to happen at this point is gaining a basic understanding of ports and what services may be associated with them. I'm going to assume at this point that you have some sort of networking knowledge and will be able to follow the conversation without me breaking down every point moving forward. Every service listens on a port to do its normal functions. Using this information, some smart people have developed tools that can scan through a set of common ports to see what is currently accepting connections. These port scanners can be very simple, just opening a TCP connection and nothing more, or they can see what service is listening on that port by sending a query. Generally speaking though, tools such as nmap will simply open a TCP connection and then close it when done. The list of ports it finds is then reported back with the common services associated with them.
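A TCP connect scan is simple enough to sketch with nothing but Python's socket module. The host address and the short port list here are placeholders; only point something like this at a network you own.

```python
import socket

COMMON_PORTS = {21: "ftp", 22: "ssh", 23: "telnet", 80: "http", 443: "https", 445: "smb", 3389: "rdp"}

def connect_scan(host: str, ports=COMMON_PORTS) -> list[int]:
    """TCP connect scan: open a connection, note success, close it."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            if s.connect_ex((host, port)) == 0:   # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

for p in connect_scan("192.168.1.10"):            # placeholder address
    print(f"{p}/tcp open  ({COMMON_PORTS[p]})")
```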

We can then move on from the information about which ports are listening to something a little deeper. A vulnerability scanner will take the list of ports it found to be open and start analyzing the information on those ports. An example here would be an SMB server that the vulnerability scanner sends specific packets to in order to get more information. A locked-down system would simply report back that the port is open. A less secure system would give out all sorts of information, such as the software version number, which the scanner can then act on. This is where the 'vulnerability scanner' part comes into play. A simple port scanner is just for ports, but a vulnerability scanner uses the version information reported back and does a search through its databases for known issues. If it finds that you are running version 1.2.3 and there is a known issue with that version, it could formulate a specially crafted packet to take advantage of the exploit. Now, not all software does this. Some white-hat software simply lets you know that there is a known exploit and then provides you with the CVE numbers for you to take action.
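The version information usually comes from a banner the service volunteers as soon as you connect. Here is a minimal banner-grabbing sketch; the host and the sample banner in the comment are made up.

```python
import socket

def grab_banner(host: str, port: int) -> str:
    """Read whatever the service announces right after connect
    (FTP, SMTP, and SSH all send a version string unprompted)."""
    with socket.create_connection((host, port), timeout=2) as s:
        try:
            return s.recv(256).decode(errors="replace").strip()
        except socket.timeout:
            return ""

banner = grab_banner("192.168.1.10", 21)   # placeholder host
print(banner)                              # e.g. "220 SomeFTPd 1.2.3 ready"
# A scanner would now look "SomeFTPd 1.2.3" up against its CVE database;
# a hardened service gives it nothing to work with.
```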

The next step in this whole thing is the protocol analyzers. Generally speaking these are not used by the average Joe. Sure, they are great tools for seeing what is going on with your network, but until there is something interesting (such as an exploit) that you are able to take action on, there is not a ton to see. Oh, you made it this far? Good. I then want to talk for a moment about protocol analyzers and insecure transmissions. The easiest way to explain insecure protocols such as FTP and Telnet to people is to capture one of the sessions. It becomes clear as day how insecure these protocols are and how a protocol analyzer can capture the data. There is even protocol analyzer software available which monitors the network for these insecure connections and presents them in a GUI for the user to review.
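To see just how clear that clear text is, here is a hedged sketch using the third-party scapy library to watch for FTP logins crossing the wire. It needs root privileges and should only be run on a lab network you own.

```python
from scapy.all import sniff, TCP, Raw   # third-party 'scapy' package

def show_ftp_creds(pkt) -> None:
    """Print FTP USER/PASS commands as they cross the wire in clear text."""
    if pkt.haslayer(Raw):
        payload = pkt[Raw].load.decode(errors="ignore")
        if payload.startswith(("USER ", "PASS ")):
            print(pkt[TCP].sport, "->", pkt[TCP].dport, payload.strip())

# Capture FTP control traffic only; store=False keeps memory use down.
sniff(filter="tcp port 21", prn=show_ftp_creds, store=False)
```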

Tuesday, January 31, 2017

Security+ Topic - Account Policy Enforcement

Oh, the dreaded 'your password is about to expire'! How many of you absolutely hate it when the time comes for a forced password update, and you wait until the very last second when it forces it upon you? Well, this change is one that is for the better no matter how you slice it. For the two big operating systems, Linux and Unix (ok, Windows too), there are password enforcement options for your environment. While I won't go into how those get implemented in this article, since it is a generalized overview, it is simple enough to do an internet search on the implementation process.

Let's start with the credential management itself. Most commonly used would be an Active Directory infrastructure, and the domain controller is one of the most important servers to protect. This server is mostly seen in the user realm of things, getting everyone logged in after their morning coffee. Sure, this server may seem like its only function is making sure passwords get changed every 90 days, but securing it is a big deal in the scope of your security best practices. In the event that this server were to be compromised, it may be possible for someone to inject their own administrative account into the domain. Guess what happens after that? The entire network is then at the mercy of the attacker. They no longer need anything special, as they gave themselves administrative credentials to every server and every application that uses the centralized credential system. Now, is this to say that we should not use a centralized credential system? Oh no no no. Without these important parts of the puzzle, IT admins would be going crazy with lost passwords and logins that never existed in the first place. It just boils down to ensuring that your server is protected.

Now let's move on to the password complexity portion. I'm sure that you have encountered a password complexity requirement that adds one more special character than your common password normally has. How do you deal with it? Most people just add a ! at the end and call it a day. My suggestion is to start off with an extremely secure password in the first place. Utilizing a password generator and manager is a little more than I want to go into at the moment, but if you can generate some crazy passwords and still have easy, secure access to them, I say go for it.
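Generating those crazy passwords does not take much. Here is a small sketch using Python's secrets module; the 20-character length and the re-roll-until-complex check are just my assumptions about a reasonable policy.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        # Re-roll until every common complexity bucket is represented.
        if (any(c.islower() for c in candidate) and any(c.isupper() for c in candidate)
                and any(c.isdigit() for c in candidate)
                and any(c in string.punctuation for c in candidate)):
            return candidate

print(generate_password())
```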

Well, what about lockout and disabling of accounts in your environment? This is one item of concern that must be part of your password security policy. Some departments will cycle through employees quickly and some have rather long tenure. No matter the situation, you absolutely need account lockout and password expiration. If and when an employee leaves the company, that user can slip through the cracks of user removal. Or it could be that, due to the employee's status at the time of leave (a legal issue), the user account is required to remain active so that information can be gathered as part of the legal process. With automatic lockout and password expiration, you prevent a former employee from removing items after they leave the company.

What it boils down to is putting up with the crap side of security: inconvenience. Make sure that you have a password and account policy in place per best practices, and it goes a long way toward giving attackers one more hurdle to jump if they compromise the network. It's all about the layers in security.

Security+ Topic - Data Backups with Security Policy and Procedures

At present I work for a backup company, making sure that when your server goes down, you will be able to recover the data. The big question that comes up is the security of that data off-site. Traditional backup software would simply make a copy of your file and place it onto a drive someplace else. Then, when changes were made, it would write a new file to another folder. Later software placed all those backed-up files into a container file and wrote differentials or incrementals into new files. The main issue with these backups is that once the file is at a remote location, there are extra security issues that need to be accounted for.

Your data is only as safe as what is protecting it. Let's take for example an extremely locked-down database server that you spent many hours protecting. Your backup admin then has the needed permissions to perform that backup and sends the data over to their backup server on another subnet. It is very possible that the backup data could be easily reached by an outsider by compromising the backup server instead of the production server. A lot of hard work goes down the drain when the same safety precautions are not applied to the backup server as are applied to the production server. It can make sense for a company to have multiple backup servers in separate subnets, so that production servers can have their backup servers locked down to high standards, while a user backup server is still locked down but does not need to meet high standards such as PCI compliance.

This all boils down to your company's policy for their backups. A lot of IT admins that I speak to do not have this policy in place, but it is best practice for their sake and their company's. Items to include are over-the-wire encryption, encryption at rest, password authentication to retrieve data, and general requirements for making sure the server is secure. While this does venture into another topic I will cover later, it presents an initial headache that acts as insurance down the road. In the event of a data breach, ensuring the backup server has met the requirements of IT policy, as defined by management, protects the IT admin.

The procedures for making sure your backups are secure can be quite different across companies. This goes back to how they are securing their data. One procedure or process could simply be an encrypted backup on an encrypted USB drive. This may be enough for a small company that walks the backup to a bank safe once a week. Another process or procedure may be to ensure that a backup server meets strict industry standards such as PCI compliance. For larger companies it is not feasible to secure all their data at a bank vault weekly, so they must make sure the server is protected from outside threats.

Before deciding on backup software to use, it is very important to cover where and how the backed-up data will be stored. Each piece of software you find will have many options for your needs, and by generating a list or going off company policy, you can ensure that everything meets the security industry standards your company has agreed to abide by.

Monday, January 30, 2017

Security+ Topic - FTP & Telnet Security (or lack thereof)

Usually the secure protocols that we enjoy today have a rough and shaky past. This is extremely true for protocols that I can assure you will use during your time in IT. Even major companies are still using these insecure protocols because they are easy to use and reduce customer stress when moving files through the internet. I am firstly referring to the FTP protocol, which almost every company happily uses. Secondly, I would like to address the use of Telnet, which is much less common as a connection method across the internet but widely used on internal networks.

So why not use FTP? What if I encrypt my files and then transfer via FTP? What about SFTP? Well, let's talk about each one of these. The very basics of the situation are that FTP sends everything in clear text on the wire. You may have heard this before, but you need to understand what it means. When you open a network packet, there are of course ones and zeros. More importantly, the header of the packet contains everything it needs, such as MAC addresses and IP information. This stuff is required for network travel and cannot be encrypted for the network to work. The same process used to view the header information is then used to analyze the payload of the packet. While the actual data (such as your passwords.xls document) may be encrypted, the FTP protocol itself, with authentication information, is sent in the clear.

After someone is able to compromise the username and password of your FTP connection, they can then log into the FTP server and grab anything that may have been uploaded. Even if the files you transferred to the FTP server are encrypted themselves, you are still at risk. Once those files are downloaded to the attacker's computer, they can brute force or password crack a file as fast as their machine or server cluster will allow. So why are big companies still using it? Risk vs. cost. Generally speaking it is cheaper to just implement an FTP server and go from there, and if something happens, it was the user who decided to transmit their data via the insecure FTP protocol.

This brings up the SFTP and SCP protocols that can be used to transfer files. Why this hasn't become the gold standard for transferring files is beyond me. It is entirely possible to set up an SSH server (SSH is what SFTP/SCP use to move data) that is locked down to nothing but transferring those files.

On a final note of this whole transmitting-in-clear-text issue is Telnet. Telnet has historically been used for the management of network routing and switching. It is possible to have a server set up with Telnet, but most IT admins are very aware that if they allow a Telnet connection, they are basically opening the server to the world. Telnet transmits everything, and I mean everything, in clear text on the wire. This means that if you type "my password is abc123", whoever is looking at the packets will see it word for word. Because everything can be viewed, unless you are on an extremely secure network and one hundred percent sure that no one is sniffing the wire, you could log into a device to reset the password, log out, and the attacker would know exactly what the new password is.

What's the moral of the story? STOP USING FTP and TELNET! If for some reason your device doesn't support SFTP/SCP/SSH, then there are alternate methods of connecting to that device. One item I will specifically mention here is a Linux server with a console connection to an older Cisco router. The older routers only supported Telnet, but if you can SSH into a server and then console into the router, you completely eliminate the insecure protocol from the equation.

Security+ Topic - SSID Disable and MAC Filter

When it comes to wireless, it seems that everyone and their dog (literally their dog) has a wireless network connection. This particular post comes into play at the basic levels of wireless security, for keeping out the next-door neighbor kid. It's a good starting point for consumer-grade gear, but do not be fooled. Anyone with even a little bit of skill can blow your SSID disabling and MAC filtering totally out of the water.

Let's start out with what it is. SSID disabling is exactly what it sounds like: the access point does not broadcast the SSID used to make a connection. This doesn't mean that the SSID doesn't exist, just that it is not being advertised. This can be a great deterrent for anyone driving by your access point and seeing if anything is available. Anyone who drives by and is able to pick up on your wireless network but notices that it's hidden will probably move on to an easier target. But wait, didn't I just say that it is disabled and not broadcasting? Yes I did. Even though you are not broadcasting your SSID, the SSID is still in use for your network devices to talk to the access point. Anyone with a little bit of time is able to sniff some wireless packets and determine the access point SSID even though it is not being broadcast.

The process of gathering SSID information uses a simple wireless network sniffer. There are a lot of tools available, from the Linux savvy to the Windows savvy. These are not your simple network protocol analyzers such as Wireshark or tcpdump. One of those tools will analyze packets on the wire (or wireless) after a network connection is made. Yes, yes, they could be used before that, but for the sake of this post, let's not get too deep. What happens is that your computer sends out network packets that get tagged for the specific SSID you are communicating with. Many access points are able to utilize the same network frequency, so the SSID is used by the access point to determine whether the packet is destined for it or not. Just because you are connected to an access point doesn't mean that everyone else isn't getting those same packets.
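To make that concrete, here is a hedged sketch using the third-party scapy library to pull SSIDs out of management frames even when the beacon is 'hidden'. It assumes a wireless card already in monitor mode, and the interface name is a placeholder.

```python
from scapy.all import sniff, Dot11, Dot11Elt   # third-party 'scapy' package

def reveal_ssid(pkt) -> None:
    """Print the SSID carried in management frames (probe requests/responses,
    association requests), which leaks even when the beacon hides it."""
    if pkt.haslayer(Dot11Elt) and pkt[Dot11Elt].ID == 0:   # element ID 0 = SSID
        ssid = pkt[Dot11Elt].info.decode(errors="ignore")
        if ssid:                                           # hidden beacons carry an empty SSID
            print(pkt[Dot11].addr2, "->", ssid)

sniff(iface="wlan0mon", prn=reveal_ssid, store=False)      # interface name is an assumption
```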

Ok, so now let's take it a step further by saying that only specific computers are allowed onto your network. This is where MAC address filtering comes into play. Pretty much every access point you can buy comes with this feature, which is great for keeping the kids from connecting to your access point when they discover your password on the sticky note under your keyboard. Basically the access point looks at an authorized (or unauthorized) list to decide whether it will allow a node to authenticate. Simple enough, right? Sure, for some basic SOHO security. If your password is discovered but you are allowing only specific MAC addresses, the attacker simply has to change the MAC address on their network adapter. If you were paying attention in your CompTIA A+ class, then you may be scratching your head at this point, as MAC addresses should not be able to be changed. Well, programmers can be tricky people, and software can fool the hardware side of things into sending out a fake MAC address.
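On Linux the 'fake MAC' is a couple of commands; here is a sketch that wraps them in Python. The interface name and address are placeholders, the commands need root, and Windows accomplishes the same thing through the adapter's driver properties.

```python
import subprocess

# Placeholder interface and spoofed address; run as root on Linux.
iface, fake_mac = "wlan0", "aa:bb:cc:11:22:33"

subprocess.run(["ip", "link", "set", iface, "down"], check=True)
subprocess.run(["ip", "link", "set", iface, "address", fake_mac], check=True)
subprocess.run(["ip", "link", "set", iface, "up"], check=True)
```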

The network side of things gets a little tricky at this point if there are two computers with the same MAC address. An attacker trying to utilize an authenticated MAC address as their own will usually wait until that machine goes offline, which frees up the network connection (think shutting the lid on your laptop or leaving the house with your phone). Any way you dice the situation, these are great security measures for cheap to help with security in layers. Layer the security deep and make your wireless network a harder target so the attacker will move on to something easier.

Security+ Topic - VLAN Management

Those of us that have been around for a while remember the days of one hub or switch being the physical connection that determined a user's network. Sure, you and your neighbor at work could be on different subnets, but the management of setting it up was much more hands-on. The wiring to the patch panel was all the same, and the big difference came from the patch panel to the switch. Ever seen one of those messy cable management photos? Most likely it came from an older setup before VLANs were in place. Any time a user switched desks, the cable needed to be pulled from one part of the patch panel to another.

Now, with the implementation of VLANs, we are able to cleanly make network connections, sometimes without the need for a patch panel, and virtually move a user's network connection on the back end. This becomes a big deal for ease of management, as someone simply plugs their computer into the new spot and off they go doing their daily activities. The common phrase that ease of access leads to lack of security is extremely true in this case. If your co-worker is able to move their computer to a new desk and immediately get network access, doesn't that mean anyone could? Even someone from outside the company? Yes. Yes it does.

There are many ways to implement your VLANs to focus on security, and the easiest one to enforce is the default VLAN (VLAN 1 on most switches). By putting the default VLAN on a network with no outside resources, you allow users to plug into the network easily but not have any access they should not have. An implementation of DHCP paired with a DNS redirect to an internal HTTP server gives you a simple company splash page informing the user to contact the IT department to gain access. Even doing nothing with the default VLAN and having all connected machines get an APIPA address can do the trick from a basic security perspective. First, any connected and authorized user will immediately go to the IT department with their issue of not being able to access company resources. This is good. Second, an unauthorized machine is stuck in that default VLAN with an APIPA address that they cannot do anything with... kind of. If for some reason people on your network are using hostnames to talk to each other across that APIPA link, then there could be cause for concern.

The next part of managing your VLANs to be secure would be making sure that no user access port is enabled for trunking. Even with the most secure VLAN setup, if a port is set to allow trunking then you have lost your awesome secure VLAN implementation. A user or attacker could use that port to negotiate a trunk and then gain access to any VLAN on that switch they see fit to start scanning or attacking. Most vendors have the option to set a port to access, trunk, or automatic. Generally these are set to automatic and can be set to access with a simple command. Set your port modes manually and you do not have to worry about it.
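Here is a hedged sketch of pushing that 'simple command' to a Cisco switch with the third-party netmiko library. The switch address, credentials, interface range, and VLAN number are all placeholders; the IOS commands pin the ports to access mode and turn off DTP negotiation.

```python
from netmiko import ConnectHandler   # third-party 'netmiko' package

switch = {"device_type": "cisco_ios", "host": "10.0.0.2",
          "username": "admin", "password": "REDACTED"}

commands = [
    "interface range GigabitEthernet1/0/1 - 24",   # user-facing ports (example range)
    "switchport mode access",                      # never negotiate a trunk
    "switchport access vlan 10",                   # example user VLAN
    "switchport nonegotiate",                      # disable DTP entirely
]

with ConnectHandler(**switch) as conn:
    print(conn.send_config_set(commands))
```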

Wednesday, January 18, 2017

Security+ Topic - URL Squatting Hosts File Fake

Have you ever entered the wrong URL and gotten that lovely browser message saying that it cannot find what you are looking for? Lucky for you, that was all you got. Let's say that you accidentally typed in the name of your bank incorrectly, but this time a page loaded that looks just like your bank. Would you have noticed that the URL is incorrect? Might you have thought that the bank just bought the mis-typed URL so that they can redirect you and make it easier on you? What if that incorrect URL was actually not your bank at all?

The case of a URL being squatted on with a fake website is a real security threat. It is also a very hard threat to protect against. While it may be easy to say bankone.com is allowed and bnakone.com is not allowed, it can be a bit harder to say whether bank1.com is or is not allowed. A lot of this can be taken care of by making sure that users are educated and the HTTPS portion of the URL is looked for when loading a page. Really, the best answer is to have the end user educated. If something doesn't look right, then STOP. Make a phone call to the financial institution, or send an email to the company, asking to verify that the URL you loaded is right for the company.
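If you do want to get ahead of the obvious look-alikes, it is easy to enumerate them. Here is a small sketch that generates transposition and single-character-substitution candidates for a domain; bankone.com is just the example name used above, and a real tool would also cover added/dropped letters and look-alike characters.

```python
import string

def typosquat_candidates(domain: str) -> set[str]:
    """Generate simple look-alike domains: adjacent-letter swaps and
    single-character substitutions on the registrable name."""
    name, tld = domain.rsplit(".", 1)
    candidates = set()
    for i in range(len(name) - 1):                       # transpositions: bnakone
        swapped = name[:i] + name[i + 1] + name[i] + name[i + 2:]
        candidates.add(f"{swapped}.{tld}")
    for i in range(len(name)):                           # substitutions: bank0ne, bamkone, ...
        for c in string.ascii_lowercase + string.digits:
            candidates.add(f"{name[:i]}{c}{name[i + 1:]}.{tld}")
    candidates.discard(domain)
    return candidates

print(len(typosquat_candidates("bankone.com")))          # a few hundred names to watch or block
```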

A big question at this point is whether the invalid site, the copy, is easily recognized as a fake website. Sometimes yes and sometimes no, depending on the level of complexity from the bad guy. The site may be set up with a simple splash page mimicking the original and that is all. It could also be set up with all links working correctly, taking you to more fake pages. The simplest way to verify real or fake is the URL at the top, and every financial institution will have the 'lock' showing the page is secured with SSL. The reality of it is that this impacts most people who are not security savvy. They simply want to load their supposedly secure website and then move on.

One more item of concern here is the hosts file on your computer. A little story here about some fun I had with a co-worker. Playing a trick on him, we changed his hosts file to point google.com, yahoo.com, etc., to another server on our network, which made it look like his system was hijacked. Based on the way the system was reacting, with only specific websites doing this, it was obvious that the hosts file had been modified. He was one hundred percent convinced that his system had a major virus, and he had wiped the system within the hour. Long story short, I got called away for a bit and was not there to let him in on the prank, and thus not able to prevent him from wiping his system. Let's break this down though. I could have used the exact same method to re-route his URL requests to a server of my own. The major difference here is that the URL WOULD BE CORRECT. It wouldn't have the lock for being secured with SSL, but everything else would LOOK ok. This method is not quite as ideal for a remote bad guy, though, as it can be easily identified. If they lose access to your system, or their fake website goes offline, you immediately see the page no longer loads and start investigating. Plus, you have a record of the fake server's IP address to block moving forward.
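A quick way to audit for this kind of tampering is to scan the hosts file for entries that override names you care about. A minimal sketch follows; the watch list is a placeholder, and on Windows the file lives under System32\drivers\etc.

```python
import platform
from pathlib import Path

WATCHED = {"google.com", "yahoo.com", "bankone.com"}     # placeholder watch list

hosts_path = (Path(r"C:\Windows\System32\drivers\etc\hosts")
              if platform.system() == "Windows" else Path("/etc/hosts"))

for line in hosts_path.read_text().splitlines():
    line = line.split("#", 1)[0].strip()                 # drop comments and blanks
    if not line:
        continue
    ip, *names = line.split()
    hits = WATCHED.intersection(n.lower() for n in names)
    if hits:
        print(f"hosts file points {', '.join(hits)} at {ip} -- verify this is intentional")
```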

Monday, January 9, 2017

Security+ Topic - Vulnerability vs Penetration Testing

I had a college classmate who was in the computer security penetration testing business. I always wondered what that must be like and finally got a small taste of it when I started using scanning tools on my own network. I say that I only got a sample of it because I wasn't really doing any penetration testing at all. What I was doing was simply looking for any potential vulnerabilities. There is a big difference between the two, which I am going to go over here.

Starting with vulnerability scanning, this is the easiest thing that you can do. It starts with a basic scan of a network looking for any ports that may be open or any servers that you may be able to get access to. By scanning the network for open ports, you could identify a rogue FTP server or someone who has set up a personal file server not allowed by company policy. This plays into vulnerability scanning because of any old software these systems may be running. It doesn't even have to be a rogue device, either, as you can scan your own equipment to see if there is anything showing on the network that shouldn't be.

A scan of the network can be much more than just a scan. A vulnerability scan is where you get into the meat beyond a simple port scan. For basic network processing it is required that the listening software report back certain version or function information, so that the software wanting to talk to the listening port knows what or how to communicate. The vulnerability scan will check that information against an internal database of vulnerabilities it is aware of. As an example, the banner it gets back could indicate an outdated FTP server that, when certain attacks are performed, allows login without a password. The information gained could also give away the operating system, so that an attacker may know where to start on some specific attacks. At this point in the game, vulnerability scanning is all about information gathering. Gathering for the bad guys to find a hole, or gathering for the good guys to prevent a hole from being exploited.

Now comes the next part, the penetration testing. Taking all of the previous information, we can then push forward with actually breaking into a remote system. Penetration testing is simply the act of using exploit code or brute forcing your way through a barrier. Sometimes an attacker can do a scan of the system and find that remote desktop is open. Then they brute force their way into the system via the remote desktop protocol. Another scenario is that the previous scan finds an exploit where, if they pass certain code to the remote desktop protocol, it overloads the service and then allows passwordless login. Penetration testing can get quite in-depth, so the actual work could be performed by you or a penetration testing company. Usually these companies do a great job, as they don't care what you have in place. What I mean by that is you may have some insecure way to initially connect and then use a secure method to go through the WAN. Well, that penetration testing company will try anything, and if they find the hole you left open, even though you thought it was secure, then you can bet on it eventually being exploited by the bad guys. Better for an unbiased company to find it and report it to you than to deal with a security breach.

Friday, January 6, 2017

Security+ Topic - NAT

Network address translation was developed to provide one public network address to many internal private network nodes. In the realm of network security this becomes a big part of layered security. Any nodes behind the network translation device are then easily hidden from everything else on the public side of the network. For full disclosure, NAT in generic terms is a loosely thrown-around acronym for network address translation in combination with port address translation. A Google search can give you some good information about the two in combination, as that overview is outside the scope of this blog article.

Take for example a small business network with limited finances to put toward network security. A network address translation solution is perfect for them, as it requires minimal effort. Another assumption here is that the small business has a DSL or cable modem as their primary mode of internet connection. It doesn't have to be limited to that, but it makes the example easy to understand. Using only one consumer-grade router (term used loosely) that comes standard with network address translation, the business can connect their publicly facing server directly to the ISP router. Any port forwarding needed to provide services can be done on the ISP router, which makes for easy management. Then the second router in the environment has its uplink/ISP port plugged into another LAN port on the same ISP router. Company desktops or laptops are then plugged into this second router and are invisible to the public-facing server. As far as the server is concerned, all requests from the company users come from the uplink address on the second router.

As a company grows, they are then able to expand on this idea while still keeping things cheap. Let's say the company has grown enough for them to have a sales department in addition to their developers. You wouldn't want them on the same network, as it's best practice to keep everyone on their own subnet. The purchase of just one more router, plugged into another LAN port on the ISP router, instantly provides another security boundary with its own network. Each network will be able to reach the servers plugged into the ISP router without easy access to the other network. Port forwarding could be put in place for any special needs, but you need to practice good security setup and only allow through what is absolutely needed. A server on the shared network can be used as the central hub for any cross-network functions.

Obviously, if the company has hit the point of needing to expand even further, then a different solution would be needed, such as a dedicated Cisco router and firewall. One thing to also note is that by default each network may have the same addressing; 192.168.0.x would be the most common. As part of setup and for easier troubleshooting, each add-on router should have its internal DHCP set up with a different scope. For example, router 1 having internal addressing of 172.16.1.x and router 2 having 172.16.2.x would work well.
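A quick sanity check that the scopes really are distinct is easy to script. This sketch uses Python's ipaddress module with the example ranges from above; the router names are just labels.

```python
import ipaddress

# Example scopes: ISP router keeps its default LAN, each add-on router gets its own range.
scopes = {
    "isp_router":   ipaddress.ip_network("192.168.0.0/24"),
    "dev_router":   ipaddress.ip_network("172.16.1.0/24"),
    "sales_router": ipaddress.ip_network("172.16.2.0/24"),
}

names = list(scopes)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        if scopes[a].overlaps(scopes[b]):
            print(f"warning: {a} ({scopes[a]}) overlaps {b} ({scopes[b]})")
```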

The security aspect of the network visibility works in reverse as well. If for some reason the servers directly connected to the ISP router become compromised, they will have no easy way to see any of the user computers behind the routers. Of course, if the router you are putting users behind has a flaw that allows GUI/TUI access, then an attacker could create a hop into the network or forward ports as needed. Either way, the added layer of security provided for cheap makes a great addition for small businesses.

Tuesday, January 3, 2017

Security+ Topic - Access Control Lists

When we talk about network and system security, there is an increasing focus on complicated attack scenarios with complex routines written for defending against them. Sure, there may be some complex attacks out there that we should prevent, but let's get back to basics for a moment. What is the one most simple thing that you can do to keep your network resources safe? Access control lists.


Some environments could have an awesome firewall in place that takes care of all their security requirements, and some environments could simply have a DSL router. In either case, they can benefit from access control lists set up to block any number of kinds of network traffic. In the simplest of terms, let's take a scenario with two network segments: one for servers and one for users. Normally your users only need specific access to the servers, so you would allow one or two ports into your server network from your user network. Sounds like a firewall, right? Yes, an ACL is similar to your firewall in this scenario. An ACL takes it a bit further, though, at the router level, so that traffic gets dropped before it even hits a network or host.
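The way a router walks such a list is simple: first match wins, with an implicit deny at the end. Here is a toy sketch of that evaluation order; the subnets and ports are invented for the two-segment example above.

```python
import ipaddress

RULES = [
    # (action, source network, destination network, destination port or None for any)
    ("permit", "10.0.20.0/24", "10.0.10.0/24", 443),    # users -> servers, HTTPS only
    ("permit", "10.0.20.0/24", "10.0.10.0/24", 3306),   # users -> servers, database port
]

def acl_decision(src: str, dst: str, dport: int) -> str:
    """Return the action of the first matching rule; implicit deny otherwise."""
    for action, src_net, dst_net, port in RULES:
        if (ipaddress.ip_address(src) in ipaddress.ip_network(src_net)
                and ipaddress.ip_address(dst) in ipaddress.ip_network(dst_net)
                and port in (None, dport)):
            return action
    return "deny"                                        # implicit deny any any

print(acl_decision("10.0.20.15", "10.0.10.5", 443))      # permit
print(acl_decision("10.0.20.15", "10.0.10.5", 22))       # deny
```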


When the environment gets a bit larger, though, a dedicated firewall becomes the ideal solution for sorting what traffic is allowed to which networks. Still, an access control list is very important. Security is all about layers, right? So why not layer ACLs alongside your firewall to restrict access as traffic travels through the network? The ACL may not be as granular, which lets your router focus on routing instead of firewall rules. One example here is that you could put an ACL in place to block everything but outbound stateful connections on port 80 for a specific department, then set your fancy firewall to filter the URLs or destinations that are trying to be reached. This drops unwanted traffic before it even attempts to escape that network, as well as reducing firewall load so the firewall can focus on the HTTP traffic.


This brings up ACLs as part of containing a network-wide security breach. An access control list can be put in place to block ICMP traffic between networks but allow it to the router. If one machine gets compromised and the malicious user simply tries to ping hosts to find where to make their next attack, then you have slowed their host discovery and possibly eliminated the threat if they rely on that ping response for their next attack. One common scenario you may encounter is a SQL server physically or logically behind an internet-facing server. The SQL server would only have one network connection, to a second NIC on the internet-facing server, making it look much more secure on an isolated network. Again, back to layers of security: a network ACL could be put into place here allowing only specific items through, so if something does get compromised, the attacker would have one more layer to break before being able to move compromised data out of your network.


Are access control lists perfect? No. That is why there are network firewalls, IDS, IPS, and a slew of other technologies available. Are they still relevant? YES! Adding this layer of security is very important, as every layer counts. Most of the time it is already built into your network routing products, so you may as well use it!