If you have a public-facing website (or even just an intranet), you need to protect your site from the threats that can put you, your customers and your site at risk. Security isn't a commodity; it's a requirement. Generally, I talk about web performance, but if the server returns a 500 or 503, it really doesn't matter whether I linted the CSS and JS or optimized all the images. Like performance, security is more than just a feature.
There are many potential threats, including probes for weaknesses through known and unknown exploits, and attacks that take advantage of poor coding practices. Attacks can include SQL injection and cross-site scripting (XSS), two of the OWASP top 10 critical security risks (PDF). These can be either automated or done manually.
My personal site has been taken down twice. Once it was an SQL injection that overwrote my WordPress database. The other time it was a Distributed Denial of Service (DDoS) attack — someone created hundreds of thousands of requests to bring my site down. Even though my personal blog is not a money maker, it has been a huge investment of time and a good (great?) community resource. While I definitely want to protect my personal site from going down, security is even more important for my clients.
Some goals for most (or all) web properties include:
- Protecting user experience
- Protecting servers from attacks designed to infiltrate, modify, or steal
- Keeping web applications available to end users with 100% uptime
- Minimizing the costs of threats
While the cobbler's children may have no shoes, here are nine security features I look for when protecting my client sites:
- Shielding the origin server's IP address
- Encrypting traffic
- Dropping requests to ports other than 80 or 443
- A web application firewall (WAF)
- Security rule optimization
- Anomaly scoring
- Blacklisting IP ranges and geographic blocks
- Excellent monitoring, customer service and support
- Control over their DNS
1. Shielding origin server IP addresses
To find the IP address of most websites, you can simply enter "ping example.com" at the command line. The last thing I want to do is provide easy access to any troll who gets momentarily upset at a tweet. Shielding makes it more difficult (though not impossible) for a novice troll to identify my origin server's IP address. Pinging a shielded website returns the IP of the security service, which is well equipped to cope with a DDoS attack.
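To make the idea concrete, here is a minimal Python sketch of the check an attacker (or you, auditing your own setup) would effectively perform. The provider ranges and IP addresses below are illustrative documentation addresses, not real assignments; real shielding providers publish their address ranges.

```python
import ipaddress

# Hypothetical published ranges for a shielding/CDN provider.
# Real providers publish their ranges; these are illustrative only.
PROXY_RANGES = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def origin_is_shielded(resolved_ip: str) -> bool:
    """Return True if the public DNS answer points at the proxy,
    not at the origin server itself."""
    addr = ipaddress.ip_address(resolved_ip)
    return any(addr in net for net in PROXY_RANGES)

# What pinging the site would reveal in each case (illustrative IPs):
print(origin_is_shielded("198.51.100.7"))  # proxy answers: origin stays hidden
print(origin_is_shielded("192.0.2.10"))    # DNS points straight at the origin
```

If the second case is what your own domain reveals, every attacker who can run `ping` knows exactly which machine to flood.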
2. Encrypting traffic

Encrypting traffic between the client and the server helps prevent man-in-the-middle attacks and otherwise protects data in transit. Encrypting traffic is easier and cheaper than it has been in the past with the emergence of services like Let's Encrypt that provide certificates at little to no cost.
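As a small illustration of enforcing modern encryption when you terminate TLS yourself, Python's standard `ssl` module lets you start from secure defaults and refuse outdated protocol versions (a sketch of the configuration step only, not a full server):

```python
import ssl

# Start from secure defaults (certificate verification on,
# hostname checking on), then refuse anything older than TLS 1.2.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```

Older SSL/TLS versions have known weaknesses, so pinning a minimum version is a cheap win regardless of which server or proxy actually holds your certificate.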
3. Dropping requests to ports other than 80 or 443
TCP port 80 is the standard port for traffic requested over HTTP. TCP port 443 is the standard for HTTPS traffic encrypted with TLS (formerly SSL). Dropping all traffic to ports other than 80 and 443 by default is a good idea, as it blocks traffic that isn't standard HTTP or HTTPS. This shouldn't be your only defense strategy, though: attackers know to send malicious traffic through these same ports, so you still want to sanitize requests with a web application firewall.
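The default-drop policy boils down to a tiny decision function. In practice this lives in a firewall or a cloud security group rather than application code; the sketch below just makes the rule explicit:

```python
ALLOWED_PORTS = {80, 443}  # standard HTTP and HTTPS

def should_accept(port: int) -> bool:
    """Drop everything by default; accept only the standard web ports."""
    return port in ALLOWED_PORTS

assert should_accept(443)
assert not should_accept(22)    # e.g. SSH probes get dropped
assert not should_accept(8080)  # non-standard HTTP alternates too
```

Note the shape of the rule: an allowlist with everything else denied, rather than a blocklist of known-bad ports you have to keep up to date.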
4. Web Application Firewalls
Origin shielding, encryption and dropping requests received on non-standard ports are not enough to protect your application. A good web application firewall (WAF) is also required. Basically, a WAF is a reverse proxy that filters out bad traffic and routes good traffic through. It does double duty, detecting specific attack vectors such as injection, broken authentication and session management, XSS, and security misconfiguration.
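As a toy illustration of the filtering idea only: real WAFs rely on large, maintained rule sets (such as the OWASP ModSecurity Core Rule Set), not a handful of regexes like the deliberately naive ones below.

```python
import re

# Deliberately naive signatures for two common attack vectors.
SUSPICIOUS = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),  # crude SQL injection probe
    re.compile(r"(?i)<script\b"),              # crude reflected-XSS probe
]

def looks_malicious(query_string: str) -> bool:
    """A WAF-style check: inspect the request before it reaches the app."""
    return any(p.search(query_string) for p in SUSPICIOUS)

assert looks_malicious("id=1 UNION SELECT password FROM users")
assert looks_malicious("q=<script>alert(1)</script>")
assert not looks_malicious("q=union station schedule")  # benign: passes through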
5. Ability to fine tune and optimize security rules
One thing that is really important when choosing a hosting and security provider is the ability to tune and optimize rules to each client's specific needs. A binary on/off switch for a particular rule is not the best solution. Rather, having a range of options for most features helps ensure there aren't too many false positives or false negatives.
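For example, a rate-limiting rule is far more useful with a tunable threshold than as an on/off switch. A sketch (the threshold values are illustrative, not recommendations):

```python
def over_rate_limit(requests_last_minute: int, threshold: int = 300) -> bool:
    """A tunable rule: the same check serves a low-traffic blog
    (threshold=60) or a busy storefront (threshold=3000)."""
    return requests_last_minute > threshold

assert not over_rate_limit(250)            # fine under the default threshold
assert over_rate_limit(250, threshold=60)  # same traffic, stricter tuning
```

The same 250 requests per minute is normal for one site and anomalous for another, which is why a single global on/off rule can't fit every client.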
6. Anomaly scoring
With so many variables impacting how we distinguish good traffic from bad, does it make sense to have simple binary rules that can accidentally turn away wide swathes of traffic? Of course not. Anomaly scoring, in which multiple rules have to be triggered before a request is flagged as malicious, is one of the most effective ways to reduce both false positives and false negatives. It allows highly targeted, granular rules that act as a system of checks and balances. Anomaly scoring is helpful, if not necessary, for security rule optimization.
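A minimal sketch of the scoring idea (the rule names, weights and threshold are all illustrative): each triggered rule contributes points, and only the combined score blocks a request.

```python
# Each rule contributes a weight instead of blocking outright.
RULE_WEIGHTS = {
    "sql_keyword_in_query": 3,
    "missing_user_agent": 2,
    "unusual_geo": 2,
    "high_request_rate": 4,
}
BLOCK_THRESHOLD = 5  # block only when enough rules agree

def anomaly_score(triggered_rules: set) -> int:
    return sum(RULE_WEIGHTS.get(rule, 0) for rule in triggered_rules)

def should_block(triggered_rules: set) -> bool:
    return anomaly_score(triggered_rules) >= BLOCK_THRESHOLD

# One weak signal alone isn't enough -> fewer false positives.
assert not should_block({"missing_user_agent"})
# Corroborating signals cross the threshold -> attacks still get caught.
assert should_block({"sql_keyword_in_query", "missing_user_agent"})
```

This is the "checks and balances" property: no single rule can take down legitimate traffic by itself, yet the rules together remain sensitive to genuinely malicious requests.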
7. Identifying and blacklisting IP address ranges and geographic blocks
When it comes to preventing DDoS attacks, identifying bad traffic IP address ranges and blacklisting should be a basic service of every security provider. Another feature to look for is being able to block IP addresses by geographic areas.
When I was DDoS'ed, I looked at where the attack originated and added geographic blocking for the locations producing the bulk of the irregular traffic. My site, like most blogs, isn't available in China (you need a permit or license to do business in China). Even though Chinese IP addresses accounted for only about 6% of the traffic during the DDoS attack, temporarily blocking China was a no-brainer. I was also able to temporarily block Hungary and Romania, which is where this particular attack seemed to be emanating from.
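A sketch of combining the two kinds of block (the address ranges, country codes and geo mapping below are made up for illustration; a real setup would use the provider's blacklist and a GeoIP database): a request is rejected if its IP falls in a blacklisted range or maps to a blocked country.

```python
import ipaddress

# Illustrative attack ranges identified from logs (documentation IPs).
BLACKLISTED_RANGES = [ipaddress.ip_network("203.0.113.0/24")]
BLOCKED_COUNTRIES = {"HU", "RO"}  # temporary geographic blocks

def country_of(ip: str) -> str:
    """Stand-in for a real GeoIP lookup (e.g. a MaxMind database)."""
    GEO = {"198.51.100.9": "HU", "192.0.2.10": "US"}  # illustrative
    return GEO.get(ip, "??")

def should_reject(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    if any(addr in net for net in BLACKLISTED_RANGES):
        return True  # known-bad range
    return country_of(ip) in BLOCKED_COUNTRIES

assert should_reject("203.0.113.55")    # inside a blacklisted range
assert should_reject("198.51.100.9")    # geo-blocked country
assert not should_reject("192.0.2.10")  # ordinary visitor passes
```

The point of making both checks cheap and reversible is that, as in my case, geographic blocks are often temporary responses to a specific attack, not permanent policy.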
8. Excellent monitoring, customer service and support
Make sure you can get always-available, excellent customer service and support BEFORE you actually need it. Like many vocal women online, I have had my share of trolls. One particularly pesky one tried to take down my server with a DDoS in 2011. It turns out he did me a HUGE favor. I was paying a fortune for a co-located server with 24x7 support. By "24x7 support", the provider meant a total of 24 hours per week: 10am to 4pm CST, Monday through Thursday. Upon discovering their lack of support, I changed hosting providers. I have been saving $125/month and getting excellent service any time of day, every day of the week, for 5 years. (Yes, my Internet troll has led me to save over $7,500. Joke's on him.)
Lesson learned: look for a service provider with customer support that is always available and has a history of being able to respond during any attack. A good provider continuously monitors traffic. An even better provider can not only withstand a DDoS attack, but can add additional capacity on-demand — like when this post goes so viral it feels like a DDoS attack when it's actually legitimate traffic.
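Telling a traffic spike apart from the baseline is the kind of check a good monitoring provider runs continuously. A toy sketch (the window and multiplier are illustrative; note that a spike alone cannot distinguish a DDoS from a viral post, which is exactly why human support and on-demand capacity matter):

```python
def is_traffic_spike(history: list, current: int, multiplier: float = 10.0) -> bool:
    """Flag the current minute's request count if it dwarfs the
    recent average. A spike alone doesn't prove an attack: legitimate
    viral traffic produces the same signal."""
    baseline = sum(history) / len(history)
    return current > baseline * multiplier

normal_minutes = [120, 140, 110, 130]  # requests per minute
assert not is_traffic_spike(normal_minutes, 200)     # busy, but normal
assert is_traffic_spike(normal_minutes, 50_000)      # investigate now
```

Detection is only step one; what distinguishes providers is whether a person answers when the alert fires at 3am on a Sunday.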
9. Ability to manage DNS
Lastly, you always want to be in control of your own DNS. When I was DDoS'ed, I retained control of all my DNS records: I could direct my domain, subdomains and email wherever I needed. My site was temporarily difficult to reach, but I never lost control of my assets.
No matter the size of a website, security must be a consideration. Being able to protect your and your customers' assets is critical, and you never know if or when an attack may occur. You may choose to start small and add additional security protections over time, but doing nothing is NOT an option.