SECURITY SHELF

Reconsidering firewall design 

In the words of Cheswick and Bellovin, “We cannot trust corporate system administrators to keep their machines secure. We are not even sure that we can do it, except perhaps in the case of a minimalist gateway machine. Therefore, we cannot afford to allow IP connectivity between the corporate networks and the Internet.”

At the time of publication, most organizations connecting to the Internet relied on simple stateless packet filters, usually implemented in routers. These first-generation firewalls were largely ineffective because large port ranges had to remain open to allow return packets. A common configuration was to allow any inbound traffic to UDP and TCP ports above 1024.
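To make that weakness concrete, here is a minimal sketch, in Python and purely for illustration (it is not any particular router's rule syntax), of the kind of stateless filtering described above. Because the filter has no memory of outbound connections, the only way to admit reply packets is to leave every port above 1024 open, which also exposes any internal service listening on those ports.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    protocol: str   # "tcp" or "udp"
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    inbound: bool   # True if the packet arrives from the Internet

def first_gen_filter(pkt: Packet) -> bool:
    """Stateless decision: each packet is judged with no memory of others."""
    if not pkt.inbound:
        return True                      # allow all outbound traffic
    if pkt.protocol == "tcp" and pkt.dst_port in (25, 80):
        return True                      # published services (mail, web)
    # The problematic rule: replies to outbound connections arrive on
    # ephemeral ports, so everything above 1024 must be left open.
    return pkt.dst_port > 1024
```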

Cheswick and Bellovin’s book defined the second-generation firewall: harden a UNIX server, turn off all network services, and implement application-layer gateways for each protocol required to traverse the firewall. Their design broke every session at the gateway and relayed all traffic to and from the Internet through proxies capable of understanding the protocols involved. These proxies also performed authentication where appropriate.
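The essential idea of such a gateway fits in a few dozen lines. The Python fragment below is a sketch only; a production proxy of the kind Cheswick and Bellovin describe would parse the protocol far more thoroughly, handle errors, and authenticate users, and the hard-coded upstream host here is purely illustrative. The point is that the client's session terminates at the gateway, which inspects the request before opening its own outbound connection.

```python
import socket
import threading

ALLOWED_METHODS = {b"GET", b"HEAD", b"POST"}

def handle_client(client: socket.socket) -> None:
    # The client's session ends here; the gateway reads and validates the
    # request before any packet reaches the outside world.
    request = client.recv(65535)
    method = request.split(b" ", 1)[0] if request else b""
    if method not in ALLOWED_METHODS:
        client.sendall(b"HTTP/1.0 403 Forbidden\r\n\r\n")
        client.close()
        return
    # Hypothetical upstream host for illustration; a real gateway would
    # parse the Host header and apply per-user policy and authentication.
    upstream = socket.create_connection(("example.org", 80))
    upstream.sendall(request)
    while chunk := upstream.recv(65535):
        client.sendall(chunk)
    upstream.close()
    client.close()

def run_gateway(port: int = 8080) -> None:
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", port))
    listener.listen()
    while True:
        conn, _addr = listener.accept()
        threading.Thread(target=handle_client, args=(conn,), daemon=True).start()
```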

As network protocol diversity and complexity increased, application-layer gateways fell out of favour. Third-generation Stateful Packet Inspection (SPI) firewalls made it easier to handle new protocols: instead of installing (or writing) a new proxy for each protocol, the firewall was simply configured to allow traffic in or out. SPI was definitely an improvement over first-generation packet filtering, but the convenience came at the cost of reduced security compared to application-layer gateways.
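The stateful idea can be sketched as a connection table: outbound packets that match policy create an entry, and inbound packets are admitted only when they belong to a tracked session. The sketch below is a simplified illustration; the Flow abstraction and field names are invented for clarity, and real implementations also track TCP flags, sequence numbers, and timeouts.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    protocol: str
    client_ip: str
    client_port: int
    server_ip: str
    server_port: int

class StatefulFirewall:
    def __init__(self, allowed_outbound_ports: set[int]):
        self.allowed = allowed_outbound_ports
        self.sessions: set[Flow] = set()

    def outbound(self, flow: Flow) -> bool:
        if flow.server_port in self.allowed:
            self.sessions.add(flow)      # remember the session
            return True
        return False

    def inbound(self, flow: Flow) -> bool:
        # Only replies belonging to a tracked session are admitted; nothing
        # above port 1024 needs to stay open, unlike a stateless filter.
        return flow in self.sessions
```

With something like StatefulFirewall({80, 443}), any outbound web connection is remembered and only its replies are let back in, without leaving high ports open to the world.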

In time, this security vs. ease-of-configuration trade-off became widely recognized as problematic. For example, configuring an SPI firewall to allow outbound packets destined for TCP port 80 is necessary to permit HTTP traffic. However, other applications, including malware, can just as easily use a different protocol on the same port. SPI firewalls are not enough.
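The limitation is easy to demonstrate: a rule keyed only on the destination port never examines the payload, so a browser and a program tunnelling something else over the same port look identical to the firewall. The toy example below uses invented payloads to make the point.

```python
# A port-based rule cannot tell these apart; it never sees the payload.
def port_rule_allows(dst_port: int) -> bool:
    return dst_port == 80            # "allow outbound TCP to port 80"

browser = {"dst_port": 80, "payload": b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"}
tunnel  = {"dst_port": 80, "payload": b"\x8a\x01\x7f..."}   # arbitrary non-HTTP bytes

assert port_rule_allows(browser["dst_port"])   # legitimate web traffic passes
assert port_rule_allows(tunnel["dst_port"])    # so does anything else on port 80
```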

Despite this, some companies and the vast majority of consumers continue to rely upon basic third-generation SPI firewalls. While they do a good job of restricting inbound sessions to authorized services and mitigating some network-related risks, SPI firewalls do not address prominent attack vectors. Inbound attacks pass through them on authorized ports. They do not prevent attacks on vulnerable web applications.

Malware in email, HTTP, and HTTPS traffic also passes straight through them. SPI firewalls provide little control over outbound traffic because they cannot distinguish between legitimate HTTP or HTTPS traffic and other protocols that simply use the same TCP ports. SPI firewalls are porous as far as malware, peer-to-peer, and other unwanted applications are concerned.

Security-conscious organizations responded by adding web and email proxies to their perimeter. While these products are more sophisticated than the 20-year-old proxies described by Cheswick and Bellovin, the concept is the same: force Internet traffic through an application that is capable of understanding it.

Standalone products continue to evolve. Websense, perhaps the best known product in the web proxy space, allows organizations to control the types of sites visited by employees and screen inbound traffic for known malware. While this may not prevent targeted attacks, it can significantly reduce the incidence of malware entering the organization.

The demand for better perimeter security also resulted in Unified Threat Management (UTM) and Next Generation Firewall (NGFW) products. These fourth-generation firewalls combine SPI and application-level gateways. Vendors such as Checkpoint, Fortinet, Palo Alto, and Sophos offer firewalls that allow some protocols to pass using SPI, while others are subject to application-layer policy enforcement and anti-malware scanning.

Product features and approaches to the market vary. Sophos offers physical appliances and software. To promote their products, they offer a free 50-IP home-use licence for their full UTM software. Checkpoint offers software and appliances. Fortinet sells a range of appliances suitable for SOHO, branch office, and enterprise applications. Palo Alto boasts the ability to detect the protocol in use regardless of the TCP or UDP port on which traffic occurs.

Fourth-generation firewalls provide a higher level of security, and features such as web filtering and anti-malware help mitigate some threats. However, improvements in firewall technology have been outpaced by the proliferation of protocols on enterprise and home networks. Malicious applications have evolved to communicate over protocols such as DNS and HTTPS in order to masquerade as authorized traffic.

A significant challenge facing those seeking to enforce security at the perimeter is the sheer number of protocols currently in use. In the past, corporations might only have allowed outbound HTTP and HTTPS traffic via a proxy. Email and other applications could be constrained to the local network. Cloud-based services, mobile devices, and the Internet of Things have changed this. Enforcing outbound traffic control is increasingly complex.

Individuals and organizations invest in software and mobile devices that use numerous protocols to connect to cloud-based services such as email, instant messaging, video conferencing, and application notifications. They then face the choice of enabling a wide range of outbound connectivity, or investing in increasingly sophisticated firewalls that attempt to dynamically discover the protocols in use. This does not make sense from a security or economic perspective.

The situation is even worse for consumers and the increasing number of employees who work from home. The vast majority of consumer firewalls are designed only to restrict inbound traffic and offer little control over what data leaves the home.

To strengthen perimeter controls, vendors must start working together and adopt a permission-based approach to Internet connectivity. Rather than assuming they have an unfettered right to make network connections, applications and devices should first identify themselves to the firewall, articulate their desired connectivity, and obtain permission. Instead of applying a static set of outbound rules, the firewall should understand the type of device, the application, and the reason connectivity is required, thereby enabling better security policy decisions.
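No such negotiation exists today, so the sketch below is purely hypothetical: every type, field, and policy key is invented to illustrate what a permission-based exchange might look like. The application describes itself and its purpose, and the firewall returns an explicit decision with a reason that can be surfaced to the user.

```python
from dataclasses import dataclass

@dataclass
class ConnectivityRequest:
    device_type: str       # e.g. "thermostat", "laptop"
    application: str       # e.g. "vendor-telemetry"
    destination: str       # host the application wants to reach
    port: int
    protocol: str
    purpose: str           # human-readable reason for the connection

@dataclass
class Decision:
    allowed: bool
    reason: str            # surfaced to the application and the user

def evaluate(request: ConnectivityRequest, policy: dict) -> Decision:
    rule = policy.get((request.device_type, request.application))
    if rule is None:
        return Decision(False, "No policy registered for this application")
    if request.port not in rule["ports"]:
        return Decision(False, f"Port {request.port} denied by policy")
    return Decision(True, "Permitted by policy")

# Example: a hypothetical IoT thermostat asks to reach its vendor's cloud.
policy = {("thermostat", "vendor-telemetry"): {"ports": {443}}}
req = ConnectivityRequest("thermostat", "vendor-telemetry",
                          "telemetry.example.com", 443, "tcp",
                          "Upload temperature readings")
print(evaluate(req, policy))   # Decision(allowed=True, reason='Permitted by policy')
```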

Adopting a permission-based paradigm would also benefit application developers and users. Instead of encountering error conditions when packets are dropped by the firewall, the application could accurately report to the user that the connection was denied by policy. This in turn would facilitate productive discussions among developers, users, IT, and security operations staff.

In an ideal world we would not need firewalls. Computers, appliances, and other devices would be properly designed and capable of protecting themselves. However, our world is far from ideal and perimeter security continues to provide an essential defensive layer. Instead of accepting the fact that firewalls have become increasingly porous, it is time to reconsider firewall design.

Disclosure: Sophos kindly provided the author with a complimentary UTM software licence because he has more than 50 IP addresses in use on his home network.

Have a security question you’d like answered in a future column? Email eric.jacksch@iticonline.ca.
