Tuesday, February 3, 2026

Security+ Topic - Backups

  Backups of the environment are an unsung hero of server and network device security.  It’s not just about having a backup; you have to do the backups correctly.  Don’t get me wrong, having some sort of backup is better than nothing.  Just make sure you are securing your backups as well as your production servers.  A good backup solution implements a Grandfather-Father-Son rotation, but it also takes into consideration where those backups live, how often they run, and whether they are secured with encryption.
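As a sketch of how Grandfather-Father-Son works, the rotation decision can be as simple as a date check.  This assumes GNU date, and the tier boundaries here (first of the month, Sundays, everything else) are just one common GFS layout:

```shell
#!/usr/bin/env bash
# Decide which Grandfather-Father-Son tier tonight's backup belongs to.
# Assumes GNU date; the tier boundaries are one common convention, not a rule.
gfs_tier() {
  local d=$1   # date in YYYY-MM-DD
  if [ "$(date -d "$d" +%d)" = "01" ]; then
    echo grandfather   # first of the month: long-term retention (e.g. 12 months)
  elif [ "$(date -d "$d" +%u)" = "7" ]; then
    echo father        # Sunday: medium-term retention (e.g. 5 weeks)
  else
    echo son           # every other day: short-term retention (e.g. 7 days)
  fi
}
```

A real rotation would also prune each tier down to its retention count, but the tiering decision above is the core of GFS.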


Let’s start with the basics.  Plug in a USB hard drive, copy your files, and congrats, you’ve got a backup!  Technically yes, but please take it a few steps further than this.  If your system becomes compromised, it’s not terribly difficult for a malicious actor to wipe the backup.  At that point, if ransomware were installed, they would have removed your ability to restore.  From a simple data availability standpoint, a USB drive can get you started, but from a security perspective it is a very bad idea.  What about disconnecting the USB drive after each backup?  Being air gapped is secure, right?  That depends on how you reconnect that drive.  If your system is still compromised, plugging the drive back in could get it wiped or encrypted too, so you’d have to remember not to reconnect it until the system is verified clean.  Plus it’s your only copy of the data, which is a separate problem.


Now what about an upgrade: sending your data to a different server?  In this example we’ve decided to install backup software that sends our data to a backup server.  Nice upgrade!  This will protect your data from any local malware that makes it onto your system.  My next question: how did you send that data to the server?  If it was simply over an SMB share, you might not be as safe as you think.  If you’ve got access to it, so does the attacker.  Make sure the backup software you install stores the data out of band from standard file-sharing methods.  If it can be deleted, your backups are not secure.
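One way to keep backups out of band is a pull model: the backup server fetches the data over SSH with a restricted key, so the production host holds no credentials that could reach (or delete) the copies.  A config sketch, with hypothetical paths and hostnames (the rrsync helper ships with rsync, but its install location varies by distro):

```shell
# On the PRODUCTION host, in ~/.ssh/authorized_keys: the key the BACKUP
# server uses to pull, locked to a single read-only rsync command.
command="/usr/share/rsync/scripts/rrsync -ro /srv/data",restrict ssh-ed25519 AAAA... backup-puller

# On the BACKUP server's crontab: pull into storage the production host
# cannot see, so a compromised host cannot touch its own backups.
# 0 2 * * * rsync -a prod-host:data/ /backups/prod-host/
```

The key design point is direction: the production server never logs into the backup server, only the reverse.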


Having the backup server on-site is not a bad thing, though.  Having to restore a server or data while your ISP is down or crawling could be very problematic.  Make sure your on-site backup server is secured in its own segment of the network.  This could be via a simple SoHo router/NAT, a separate segment of your Cisco routing with ACLs, or an in-line network firewall with rules that allow backup traffic only.  The point is that you need to secure your backup server so that when it comes time to restore, you are actually able to restore.  Quick note about offsite backups: you can also send your data offsite to keep multiple copies in multiple locations, but ensure the offsite location is also secure.  There are a variety of ways to accomplish multi-location backups, but that’ll need to be a topic for another day.  Just know that you’ll be in a better security posture by doing so.
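For the in-line firewall option, the ruleset can be very short.  A config sketch in iptables syntax; the subnets and the port (rsyncd’s 873) are illustrative assumptions about one particular network:

```shell
# In-line firewall protecting the backup segment (10.0.9.0/24).
# Only the backup daemon's port is reachable, and only from production.
iptables -P FORWARD DROP
iptables -A FORWARD -s 10.0.1.0/24 -d 10.0.9.0/24 -p tcp --dport 873 -j ACCEPT
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -j LOG --log-prefix "backup-seg-drop: "
```

With a default-deny FORWARD policy, anything not explicitly allowed into the backup segment is dropped and logged.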


The last thing I want to touch on is making sure your data is secure.  Every step of the data’s journey needs to be reviewed to ensure it is not vulnerable.  At a high level you’ve got the data being read from the disk, encrypted by software, sent over a network connection that should itself be encrypted, and then written to the backup server in an encrypted format.  I know that’s a lot, but if any one of those vectors becomes compromised, you could be leaking data.
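A minimal sketch of the “encrypted before it leaves the host” step, using openssl for client-side encryption.  The key file, directory names, and cipher choice are placeholders; a real setup would manage keys far more carefully:

```shell
#!/usr/bin/env bash
# Demo data standing in for the real files being backed up
mkdir -p demo-data restore
echo "payroll-2026" > demo-data/records.txt

# Generate a key file once, then guard it like a password (local key file
# is an assumption; a key server or hardware token is better)
openssl rand -hex 32 > backup.key

# Encrypt the archive BEFORE it leaves the host; ship only the .enc file
tar -czf - demo-data | openssl enc -aes-256-cbc -pbkdf2 \
  -pass file:backup.key -out backup.tar.gz.enc

# Restore path: decrypt with the same key, then unpack
openssl enc -d -aes-256-cbc -pbkdf2 -pass file:backup.key \
  -in backup.tar.gz.enc | tar -xzf - -C restore
```

Because the ciphertext is all the backup server ever sees, a compromise of that server leaks nothing readable, though losing the key file means losing every backup encrypted with it.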


Security+ Topic - Monitoring

  Monitoring your network, servers, or general devices is one of the most important aspects of overall systems security.  I have reviewed many environments that simply put up a server and, when there is a problem, do not know about it because of a lack of monitoring.  When an attacker takes over, they don’t know about it.  We like to think that everything is running just fine because the server is responding, but monitoring goes beyond just a ping.  It even goes beyond a port check to make sure a service is responding.  True monitoring gets into the full flow of data through an application.


At the most basic level, a ping is better than nothing.  Most general monitoring software implements a ping check.  Great for a simple status, but not so great for security.  Some security experts will even tell you to block ping on your servers or network nodes, but there is always a trade-off with security.  If you block ping, you could break monitoring services that rely on it for status or uptime information.  It could also hinder useful tools such as traceroute.  From a security perspective, blocking ping adds a layer of obscurity for your servers, but it has other considerations.  The first issue is the more easily addressed.  Generally speaking, if a server is on the network, it has a listening port providing a service, so you can determine that a server is offline by checking that service; no ping is needed.  For the second issue, a more round-about method can be used to check node-to-node connectivity using your documentation.  If you know that RouterA is supposed to talk to RouterB, check for a listening SSH port (or whatever management service applies) throughout the network.  In general terms, ping is not the worst thing to have, but if you really want to lock down the environment, disabling it does mean implementing workarounds.
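Checking the service instead of pinging doesn’t even need extra tools; bash’s /dev/tcp pseudo-device attempts a real TCP connect.  A small sketch (the router IP in the usage comment is a documentation-range placeholder):

```shell
#!/usr/bin/env bash
# Return 0 if host:port accepts a TCP connection, non-zero otherwise.
# Uses bash's /dev/tcp pseudo-device, so no ICMP (ping) is required.
check_port() {
  local host=$1 port=$2
  # timeout guards against filtered ports that hang instead of refusing
  timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null
}

# Example: is SSH up on RouterA, even though ping is blocked?
# check_port 192.0.2.1 22 && echo "RouterA reachable" || echo "RouterA DOWN"
```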


Next up is the extremely useful and easy port check.  Monitoring for a listening port is part of the basics of server ownership.  This type of monitoring gives you immediate status information: if you are the victim of a DDoS attack, you’ll know right away that your server is no longer responding.  The same goes for the server going offline, rebooting, or a firewall change.  Many software suites such as Nagios or SolarWinds offer the ability to do a simple TCP connection to make sure the service is alive.  This is where monitoring which ports are open on the server makes a large impact.  The software will alert you to an issue, but there is an even better way to make sure your system is online and not compromised.
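Beyond checking that the expected port answers, you can flag ports that should not be open at all.  A sketch that compares listening ports against an approved list; the ss parsing in the usage comment and the approved set are assumptions about one particular Linux server:

```shell
#!/usr/bin/env bash
# Read listening TCP port numbers (one per line) on stdin and warn about
# any that are not on the space-separated approved list.
audit_ports() {
  local approved=" $1 " status=0 p
  while read -r p; do
    case "$approved" in
      *" $p "*) ;;                                   # expected service
      *) echo "WARNING: unexpected listening port $p"; status=1 ;;
    esac
  done
  return $status
}

# On a Linux host, feed it the live socket table:
#   ss -tln | awk 'NR>1 {n=split($4,a,":"); print a[n]}' | sort -u \
#     | audit_ports "22 80 443"
```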


Monitoring the actual function of a site or the content of the service gets you full circle in verifying the status of the environment and the security of its setup.  While website defacement is much less common these days, it’s still a good idea to keep an eye on the content of the website.  I use websites as my example here as they’re the most common target of a defacement attack.  One method that seems to work OK on static websites is a simple md5sum.  By running curl against the website home page, you can pipe the output to md5sum and get a fingerprint of the site.  This is a process that might require frequent updates to the whitelisted string, but it is a starting point.  The other option, which is much more involved but provides proof that a service is functional and not tampered with, is the submission of data and the retrieval of that data.  For example, I set up a script that sent an email through my Exchange server to an email address outside of the environment.  Then I had a second script that retrieved emails from the outside server and checked the included security string and timestamp.  This verified to me that my email server was not maliciously shut down, disabled, blocked, or otherwise toyed with.
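The curl-and-md5sum idea can be wrapped into a small check.  The URL and baseline hash in the usage comments are placeholders you would record from a known-good copy of your own site:

```shell
#!/usr/bin/env bash
# Compare page content (on stdin) against a known-good md5 fingerprint.
# Exit 0 = unchanged, 2 = changed (Nagios-style CRITICAL).
check_fingerprint() {
  local expected=$1 actual
  actual=$(md5sum | awk '{print $1}')
  if [ "$actual" = "$expected" ]; then
    echo "OK: content matches baseline"
  else
    echo "CRITICAL: content changed (got $actual)"
    return 2
  fi
}

# Record a baseline once, then check on a schedule (URL is a placeholder):
#   curl -s https://example.com/ | md5sum | awk '{print $1}'   # save this value
#   curl -s https://example.com/ | check_fingerprint "<saved-hash>"
```

Any legitimate edit to the page means re-recording the baseline, which is the maintenance cost of this approach.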


There are a lot of options for monitoring your environment, but I highly encourage you to take the first step of securing your environment with at least some level of monitoring, so you can take action if something were to happen.


Monday, February 2, 2026

Security+ Topic - Disable Network Ports

  Disabling ports to secure your environment is one of the first things that should be done when setting up your servers or other network-connected devices.  Let’s think about this for a minute.  What is the simplest way that someone is going to find your device to try and attack it?  Answer: with a port scan!  You could argue that setting your password might be the most important step, or another setup item, but as far as the network is concerned, you need to lock down the attack vector right away.


A lot of environments will start their setup journey behind a Network Address Translation (NAT) device, which by default is going to provide a level of port protection.  If for some reason you’re setting up a brand new device with a public IP, keep a network firewall in between so you can block random internet traffic.  Sounds obvious, right?


It seems so simple, and really it is.  The issue is that people forget.  Or worse, an operating system update forces a service to turn on and/or open a port.  When was the last time you reviewed your firewall ports?  A lot of people probably never have.  The simple thought process is that “it’s working”.  Ask yourself if you really know what’s being allowed through the firewall, because I’ll bet it will surprise you.  The last time I reviewed the Windows Firewall on one of my servers, there were all sorts of Xbox connections being allowed, along with odd sharing rules.


Now this brings up a huge security issue that I couldn’t believe was happening.  After taking a lot of time to go through my firewalls and disable the rules that I explicitly wanted disabled, a Windows update went through and ENABLED them again!  My monitoring software alerted me to this huge security flaw because I have it monitoring the firewall rules.  Could you imagine knowing 100% that a service had a security flaw, so you blocked it, and then during an OS upgrade the developers decided they know better than you and enabled it as part of the update?  Well, that’s really what happened!


This is why reviewing the ports you’re allowing into your server is very important.  It’s not just about watching out for what you know about; it’s about watching out for what you don’t know about.  I understand that monitoring a server for all 65,535 ports is not a feasible undertaking, but monitoring your firewall is something that can be done.  For example, I use Nagios with a custom script to complete the job.
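I won’t reproduce my exact Nagios plugin here, but the core idea can be sketched: save a known-good copy of the ruleset, then alert the moment it drifts.  The iptables-save usage and baseline path in the comments are assumptions about a Linux host; on Windows you would diff the exported firewall policy instead:

```shell
#!/usr/bin/env bash
# Nagios-plugin-style check: compare the firewall ruleset on stdin against a
# stored known-good copy. Exit 0 = OK, 2 = CRITICAL (Nagios convention).
fw_drift() {
  local baseline=$1
  if diff -u "$baseline" - >/dev/null; then
    echo "FIREWALL OK: ruleset matches baseline"
  else
    echo "FIREWALL CRITICAL: ruleset changed since baseline"
    return 2
  fi
}

# Record the baseline once (root required; path is an assumption):
#   iptables-save > /var/lib/checks/fw.baseline
# Then schedule the check:
#   iptables-save | fw_drift /var/lib/checks/fw.baseline
```

This is exactly the kind of check that would have caught the Windows update silently re-enabling rules.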


Finally, what about your secure environment?  Why disable rules or make sure server ports are locked down when you’re behind a network firewall?  This is a matter of trust and layers.  Disabling the port on the server is one layer.  Blocking the port on the network firewall is another layer.  You never know when a bad actor is going to find a flaw in that network firewall which would allow connections deeper into your network.  That flaw could let them port forward 3389 (RDP), for example, without your knowledge.  By locking down those ports locally on your server, you’ve created another barrier to the attack vector even if something gets through.
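As one concrete flavor of that local layer, a few host-level rules can refuse RDP outright even if the perimeter fails.  A config sketch in ufw syntax; the admin subnet is purely illustrative:

```shell
# Host-level defense in depth (ufw syntax; subnet is an assumption):
ufw default deny incoming                              # nothing in unless listed
ufw allow from 10.0.1.0/24 to any port 22 proto tcp    # admin SSH only
ufw deny 3389/tcp                                      # explicitly refuse RDP
ufw enable
```

Even if an attacker tricks the perimeter firewall into forwarding 3389, the server itself still refuses the connection.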