Chapter 13

Tunneling

Introduction

Or “Where Are We Going, and Why Am I in This Handbasket?”

“Behold the beast, for which I have turned back;

Do thou protect me from her, famous Sage,

For she doth make my veins and pulses tremble.”

“Thee it behoves to take another road,”

Responded he, when he beheld me weeping,

“If from this savage place thou wouldst escape;

Because this beast, at which thou criest out,

Suffers not any one to pass her way,

But so doth harass him, that she destroys him …”

—Dante’s Inferno, Canto I, as Dante meets Virgil (trans. Henry Wadsworth Longfellow)

It is a universal rule of computer science (indeed, of management itself) that no solution is perfectly scalable: a process built to handle a small load can rarely, if ever, scale up to an arbitrarily large one, and vice versa. Databases built to handle tens of thousands of entries struggle mightily to handle millions; a word processor built to manage full-length books becomes too baroque and unwieldy to tap out a simple e-mail. More than mere artifacts of programming skill (or lack thereof), such limitations are generally and unavoidably a consequence of design decisions regarding exactly how the system might be used. Presumptions are made in design that lead to systemic assumptions in implementation. The best designs have presumptions flexible enough to handle unimaginably diverse implementations, but everything assumes.

Transmission Control Protocol/Internet Protocol (TCP/IP) has been an astonishing success; over the course of the late 1990s, the suite of communication protocols did more than just supplant its competition—it eradicated it. This isn’t always appreciated for the cataclysmic event that it was: Windows 95 supported TCP/IP extraordinarily well, but not by default—by far the dominant networking protocols of the time were Novell’s IPX and Microsoft/IBM’s NetBIOS. A scant three years later, neither IPX nor NetBIOS was installed by default. Windows 98 had gone TCP/IP only, reflecting the networks it was being installed upon.

The TCP/IP protocol suite didn’t take over simply because Microsoft decided to “get” the “Net,” that much is obvious. Some might credit the widespread deployment of the protocol among the UNIX servers found throughout corporations and universities, or the fact that the World Wide Web, built upon TCP/IP, grew explosively during this time. Both answers ignore the underlying question: Why? Why was it widespread among UNIX servers? Why couldn’t the Web be deployed with anything else? In short, why TCP/IP?

Of course, many factors contributed to the success of the protocol suite (notably, the protocol and the reference BSD implementation were quite free), but certainly one of the most critical in a networking context can be summarized as “Think Globally, Route Locally.”

NetBIOS had no concept of an outside world beyond what was directly on your LAN. IPX had the concept of other networks that data needed to get to, but required each individual client to discover and specify in advance the complete route to the destination. TCP/IP, by contrast, allowed each host to simply know the next machine to send data along to—the full path was just assumed to eventually work itself out. If TCP/IP can be thought of as simply mailing a letter with the destination address, IPX was the equivalent of needing to give the mailman driving directions. That didn’t scale too well.

That being said, reasonably large-scale networks were still built before TCP/IP, often using various solutions that made it appear that a far-away server was actually quite close and easy to access. Such systems were referred to as tunnels. The name is apt—one enters, passes through normally impenetrable terrain, and emerges in a completely different place. Tunnels are nontrivial to build, and they are generally point-to-point pathways that prevent you from jumping anywhere else between the two destinations. Their capacity varies, but it is generally less than what might be built if there were no barriers in the first place.

TCP/IP, requiring much less central coordination and allowing for far more localized knowledge, obviated the need for “band-aid” tunnels spanning the vast gaps in networks and protocols. Nothing the scale of the Internet could really have been built with much else, yet the protocol was still light enough to scale down for LAN traffic. It worked well—then security happened.

Disturbingly quickly, the massively interconnected graph that was the Internet became a liability—the protections once afforded by network locality and limited interest were vastly overtaken by global connectivity and the Venture Capital Feeding Frenzy. The elegant presumptions of TCP/IP—how sessions can be initiated, how flexible port selection might be, the administrative trust that could be assumed to exist in any directly network-connected host—started falling apart. Eventually, global addressability itself was weakened, as the concept of Network Address Translation (NAT)—which hides arbitrary numbers of backend clients behind a single network-layer server/firewall—was deployed in response to both a critical need for effective connection interrogation/limitation and a bureaucratic boondoggle in gaining access to IP address space.

And suddenly, old problems involving the interconnection of separated hosts popped up again. As always, old problems call for old solutions … and tunneling was reborn.

It’s not the same as it used to be. More than anything else, tunneling in the 21st century is about virtualizing the lack of connectivity through the judicious use of cryptography. We’ve gone through somewhat of a pendulum shift—first there was very limited global network access, then global network access was everywhere, then there was a clampdown on that connectivity, and finally holes are poked in the clampdown for those systems engineered well enough to be cryptographically secure. It’s this engineering that this chapter hopes to teach. These methods aren’t perfect, and they aren’t claimed to be—at times they’re down and dirty, but they work. The job is to get us from here to there and back again. We mostly use SSH and the paradigm of gateway cryptography to do it.

Strategic Constraints of Tunnel Design

Determining an appropriate method of tunneling between networks is far from trivial. Choosing from the wide range of available protocols, packages, and possible configurations can be a daunting task. The purpose of this chapter is to describe some of the more cutting-edge mechanisms available for establishing connectivity across any network architecture, but equally important is to understand just what makes a tunneling solution viable. Uncountable techniques could be implemented; the following helps you know what should be implemented … or else.

Make no bones about it: Tunneling is quite often a technique of bypassing overly restrictive security controls. This is not always a bad thing—remember, no organization exists merely for the purpose of being secure, and a bankrupt company is particularly insecure (especially when it comes to customer records). But, it’s difficult to argue against security restrictions when your own solution is blisteringly insecure! Particularly in the corporate realm, the key to getting permission (or forgiveness) for a firewall-busting tunnel is to preemptively absorb the security concerns the firewall was meant to address, thus blunting the accusation that you’re responsible for making systems vulnerable.

Tools & Traps …

Encapsulation versus Integration

Two basic methodologies exist for securing the link between two hosts. The first is to encapsulate a general purpose, unencrypted link inside of a system dedicated to encrypting such links generically. The second is to integrate the design of the cryptographic subsystem into the protocol being used for some specific application. Usually, pressures to integrate come from a desire to keep all code in-house, and to perhaps be able to directly tweak the cryptosystem to account for special needs, like inter-packet independence, partial public decryptability, or key escrow (where certain other parties retain the capability to decrypt traffic outside the end-to-end link).

Encapsulation, as this section shows, certainly has its risks that may possibly be exploited. But they are nothing compared to the embarrassing history of integrative approaches. Nobody trusts a vendor that creates its own encryption algorithm (“4096-bit custom encryption!”); similarly, a vendor that designs its own replacement to Secure Sockets Layer (SSL) is looked upon with justifiable suspicion. The cold reality is that most software can’t be trusted to manage passwords with any degree of cryptographic correctness, and security resources are much better spent addressing sanity checks against Trojan inputs rather than in engineering a communication system that can’t be broken into.

You need to understand that designing a security system really is quite different than designing anything else. Most code is built to add capabilities—render this, animate that, print a letter. Security code is built to remove capabilities—don’t break this, don’t allow that, prevent all the paper from being frittered away. What functionality giveth, security taketh away—mostly from the untrusted, but always a slight bit from those trusted as well. Much as newspapers found a successful model in the “Chinese wall” approach between their editorial departments (which brought in readership) and advertising departments (which resold readership), security protocols generally benefit greatly from as much separation as possible between the restriction of access and the expansion of capabilities. Encapsulation provides a “sandbox” within which anything may be done—and although sometimes this sandbox can exceed the amount of trust really granted to the players, at least there are some trustable limits that can’t be integrated away.

The systems described in this chapter integrate methods suitable for encapsulating arbitrary content.

Privacy: “Where Is My Traffic Going?”

Primary questions for privacy of communications include the following:

■ Can anyone else monitor the traffic within this tunnel? Read access, addressed by encryption.

■ Can anyone else modify the traffic within this tunnel, or surreptitiously gain access to it? Write access, addressed primarily through authentication.

Privacy of communications is the bedrock of any secure tunnel design; in a sense, if you don’t know who is participating in the tunnel, you don’t know where you’re going or whether you’ve even gotten there. Some of the hardest problems in tunnel design involve achieving large scale n-to-n level security, and it turns out that unless a system is almost completely trusted as a private solution, no other trait will convince people to actually use it.

Routability: “Where Can This Go Through?”

Primary questions facing routability are:

■ How well can this tunnel fit with my limited ability to route packets through my network? Ability to morph packet characteristics to something the network is permeable to.

■ How obvious is it going to be that I’m “repurposing” some network functionality? Ability to exploit masking noise to blend with the surrounding network environment.

The tunneling analogy is quite apropos for this trait, for sometimes you’re tunneling through the network equivalent of soft soil, and sometimes you’re trying to bore straight through the side of a mountain. Routability is a concept that normally refers to whether a path can be found at all; in this case, it refers to whether a data path can be established that does not violate any restrictions on types of traffic allowed. For example, many firewalls allow Web traffic and little else. It is a point of some humor in the networking world that the vast permeability of firewalls to HTTP traffic has led to all traffic eventually getting encapsulated into the capabilities allowed for the protocol.

Routability is divided into two separate but highly related concepts: First, the capability of the tunnel to exploit the permeability of a given network (as in, a set of paths from source to destination and back again) to a specific form of traffic, and to encapsulate traffic within that form regardless of its actual nature. Second, and very important for long-term availability of the tunneling solution in possibly hostile networks, is the capability of that encapsulated traffic to exploit the masking noise of similar but nontunneled data flows surrounding it.

For example, consider the difference between encapsulating traffic within HTTP and HTTPS, which is nothing more than HTTP wrapped in SSL. While most networks will pass through both types of traffic, on the basis of the large amount of legitimate traffic both streams may contain, illegitimate unencrypted HTTP traffic stands out—the tunnel, if you will, is transparent and open for investigation. By contrast, the HTTPS tunnel doesn’t even need to really run HTTP—because SSL renders the tunnel quite opaque to an inquisitive administrator, anything can be moving over it, and there’s no way to know someone isn’t just checking their bank statement.

Or is there? If nothing else, HTTP is not a protocol that generally has traffic in keystroke-like bursts. It is a stateless, quick, and short request driven protocol with much higher download rates than uploads. Traffic analysis can render even an encryption-shielded tunnel vulnerable to some degree of awareness of what’s going on. During periods of wartime, simply knowing who is talking to who can often lead to a great deal of knowledge about what moves the enemy will make—many calls in a short period of time to an ammunition depot very likely means ammo supplies are running dry.

The connection to routability, of course, is that a connection discovered to be undesirable can quickly be made unroutable pending an investigation. Traffic analysis can significantly contribute to such determinations, but it is not all powerful. Networks with large amounts of unclassifiable traffic provide the perfect cover for any sort of tunneling system; there is no need to be excessively covert when there’s someone, somewhere, legitimately doing exactly what you’re doing.

Deployability: “How Painful Is This to Get Up and Running?”

Primary questions involving deployment and installation include the following:

■ What needs to be installed on clients that want to participate in the tunnel?

■ What needs to be installed on servers that want to participate in the tunnel?

Software installation stinks. It does. The code has to be retrieved from somewhere—and there’s always a risk such code might be exchanged for a Trojan—it has to be run on a computer that was probably working just fine before, it might break a production system, and so on. There is always a cost; luckily, there’s often a benefit to offset it. Tunnels add connectivity, which can very well be the difference between a system being useful/profitable and a system not being worth the electricity needed to keep it running. Still, there is a question of who bears the cost… .

Client upgrades can have the advantage that they’re highly localized in exactly the right place: those who most need additional capabilities are often most motivated to upgrade their software, whereas server-level upgrades require those most detached from users to do work that only benefits others. (The fact that upgrading stable servers is generally a good way to fix something that wasn’t broken for the vast majority of users can’t be ignored either.)

Other tunneling solutions take advantage of software already deployed on the client side and provide server support for them. This usually empowers an even greater set of clients to take advantage of new tunneling capabilities, and provides the opportunity for administrators to significantly increase security using only a few simple configurations—like, for example, automatically redirecting all HTTP traffic through an HTTPS gateway, or forcing all wireless clients to tunnel in through the PPTP implementation that shipped standard in their operating system.

Generally, the most powerful but least convenient tunneling solutions require special software installation on both the client and server side. It should be emphasized that the operative word here is special—truly elegant solutions use what’s available to achieve the impossible, but sometimes it’s just not feasible to achieve certain results without spreading the “cost” of the tunnel across both the client and the server.

The obvious corollary is that the most convenient but least powerful systems require no software installation on either side—this happens most often when default systems installed on both sides for one purpose are suddenly found to be co-optable for completely different ones. By breaking past the perception of fixed functions for fixed applications, we can achieve results that can be surprising indeed.

Flexibility: “What Can We Use This for, Anyway?”

Primary questions in ensuring flexible usage are

■ What can we move over this tunnel?

■ Is there a threat from too much capacity in this tunnel?

“Sometimes you’re the windshield, sometimes you’re the bug.” In this case, sometimes you’ve got the Chunnel, but other times you’ve got a rickety rope bridge. Not all tunneling solutions carry identical traffic.

Many solutions, both hand-rolled and reasonably professionally done, simply encapsulate a bitstream in a crypto layer. TCP, being a system for reliably exchanging streams of data from one host to another, is accessed by software through the structure known as sockets. One gets the feeling that SSL, the Secure Sockets Layer, was originally intended to be a drop-in replacement for standard sockets, but various incompatibilities prevented this from being possible. (One also gets the feeling there will eventually be an SSL “function interposer,” that is, a system that will automatically convert all socket calls to Secure Socket calls.)

Although its best performance comes when forwarding TCP sessions, SSH is built to forward a wide range of traffic, from TCP to shell commands to X applications, in a generic but extraordinarily flexible manner. This flexibility makes it the weapon of choice for all sorts of tunneling solutions, but it can come at a cost.

To wit: Highly flexible tunneling solutions can suffer from the problem of “excess capacity”—in other words, if a tunnel is established to serve one purpose, could either side exploit the connection to achieve greater access than it’s trusted for?

X-Windows on the UNIX platform is a moderately hairy but reasonably usable architecture for graphical applications to display themselves within, and one of its big selling points is its network transparency: A given window doesn’t necessarily need to be displayed on the computer that’s running it. The idea was that slow and inexpensive hardware could be deployed all over the place for users, but each of the applications running on them would “seem” fast because they were really running on a very fast and expensive server sitting in the back room. (Business types like this, because it’s much easier to get higher profit margins on large servers than small desktops. This specific “carousel revolution” was most recently repeated with the Web, Java/network computers, and of course, .NET, to various degrees of success.)

One of the bigger problems with stock X-Windows is that the encryption is nonexistent and, worse, authentication is both difficult to use and not very secure (in the end, it’s a simple “Ability To Respond” check). Tatu Ylonen, in his development of the excellent Secure Shell (SSH) package for highly flexible secure networking, included a very elegant implementation of X-Forwarding. Tunneling all X traffic over a virtual display carried within SSH replaced a complex and ultimately unwieldy procedure of managing DISPLAY variables and xhost/xauth arguments with simply typing ssh user@host and running an X application from the shell that came up. Security is nice, but let’s be blunt: Unlike before, it just worked!
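The difference is visible in a sketch like the following (username and hostname are illustrative; OpenSSH is assumed on both ends, with X11Forwarding enabled on the server):

```shell
# One command replaces the whole DISPLAY/xhost/xauth dance.
# -X requests X forwarding (modern OpenSSH disables it by default).
ssh -X user@host

# In the shell that comes up, sshd has already pointed DISPLAY
# at a virtual display carried inside the encrypted channel,
# so any X client just works:
xterm &
```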

The solution was and still is quite brilliant; it ranks as one of the better examples of the most obvious but often impossible to follow laws of upgrade design: “Don’t make it worse.” Even some of the best security or tunneling solutions can be somewhat awkward to use—at a minimum, they require an extra step, a slight hesitation, perhaps a noticeable processing hit or reduced networking performance (in terms of either latency or bandwidth). This is part of the usually unavoidable exchange between security and liberty that extends quite a bit outside the realm of computer security. Even simply locking the door to your home obligates you to remember your keys, delays entry into your own home, and imposes an inordinately large cost should keys be forgotten (like, for example, the ever-compounding cost of leaving your keys in the possession of a friend or administrator, and what may indeed become an emergency call to that person to regain access to one’s own property). And of course, in the end a simple locked door is only a minor deterrent to a determined burglar! Overall, difficult to use and not too effective—this is a story we’ve heard before.

There was a problem, though, an instructive one at that: X Windows is a system that connects programs running in one place to displays running anywhere. To do so, it required the capability to channel images to the display and receive mouse motions and keystrokes in return.

And what if the server was compromised?

Suddenly, that capability to monitor keystrokes could be subverted for a completely different purpose—monitoring activity on the remote client. Type a password? Captured. Write a letter? Captured. And, of course, this sensitive information would tunnel quite nicely through the very elegantly encrypted and authenticated connection. Oh. The security of a tunnel can never be higher than that of the two endpoints.

The eventual solution was to disable X-Forwarding by default. ssh -X user@host in OpenSSH will now enable it, provided the server was willing to support it as well. (No, this isn’t a complete solution—a compromised server can still abuse the client if it really needs to forward X traffic—but at some level the problem becomes inherent to X itself, and with most SSH sessions having nothing to do with X, most sessions could be made secure simply by disabling the feature by default. Moving X traffic over VNC is a much more secure solution, and in many slower network topologies is faster, easier to set up, and much more stable—check www.tightvnc.org for details.)

In summary, the problem illustrated is simple: Flexibility can sometimes come back to bite you; the less you trust your endpoints, the more you must lock down the capabilities of your tunneling solutions.

Quality: “How Painful Will This System Be to Maintain?”

Primary questions to face regarding system quality include:

■ Can we build it?

■ Will this be stable?

■ Will this be fast enough?

There are some things you’d think were obvious; some basic concepts so plainly true that nobody would ever assume otherwise. One of the most inherent of these regards usability: If a system is unusable, nobody is going to use it. You’d think that whole “not able to be used” thing might be a tip-off, but it really isn’t. Too many systems are out there that, by dint of their extraordinary complexity, cannot be upgraded, hacked upon, played with, specialized to the needs of a given site, or whatnot because all energy is being put towards making them work at all. Such systems suffer even in the realm of security, for those who are too afraid they’ll break something are loath to fix anything. (Many, many servers remain unpatched against basic security holes on the simple logic that a malicious attack might be a possibility but a broken patch is guaranteed.) So a real question for any tunnel system is whether it can be reasonably built and maintained by those using it, and whether it is so precariously configured that any necessary modifications run the risk of causing production downtime.

Less important in some cases but occasionally the defining factor, particularly on server-side aggregators of many cryptographic tunnels, is the issue of speed. All designs have their performance requirements; no solution can efficiently meet all possible needs. When designing your tunneling systems, you need to make sure they have the necessary carrying capacity for your load.

Designing End-to-End Tunneling Systems

There are many types of tunnels one could implement; the study of gateway cryptography tends to focus on which tunneling methodologies should be implemented. One simple rule specifies that whenever possible, tunnels ought to be end-to-end secure. Only the client and the server will be able to decrypt and access the traffic traveling over the tunnel; though firewalls, routers, and even other servers may be involved in passing the encrypted streams of ciphertext around, only the endpoints should be able to participate in the tunnel. Of course, it’s always possible to request that an endpoint give you access to the network visible to it, rather than just services running on that specific host, but that is outside the scope of the tunnel itself—once you pass through the Chunnel from England to France, you’re quite free to travel on to Spain or Germany. What matters is that you do not drown underneath the English Channel!

End-to-end tunnels execute the following three functions without fail:

■ Create a valid path from client to server.

■ Independently authenticate and encrypt over this new valid path.

■ Forward services over this independent link.

These functions can be collapsed into a single step—such as accessing an SSL encrypted Web site over a permeable network. They can also be expanded upon and recombined; for example, authenticating (and being authenticated by) intermediate hosts before being allowed to even attempt to authenticate against the final destination. But these are the three inherent functions to be built, and that’s what we’re going to do now.

Drilling Tunnels Using SSH

So we’re left with a bewildering set of constraints on our behavior, with little more than a sense that an encapsulating approach might be a method of going about satisfying our requirements. What to use? IPSec, for all its hype, is so extraordinarily difficult to configure correctly that even Bruce Schneier, practically the patron saint of computer security and author of Applied Cryptography, was compelled to state “Even though the protocol is a disappointment—our primary complaint is with its complexity—it is the best IP security protocol available at the moment.” (My words on the subject were something along the lines of “I’d rather stick red-hot kitchen utensils in my eyes than administer an IPSec network,” but that’s just me.)

SSL is nice, and well trusted—and there’s even a nonmiserable command-line implementation called Stunnel (www.stunnel.org) with a decent amount of functionality—but the protocol itself is limited and doesn’t facilitate many of the more interesting tunneling systems imaginable. SSL is encrypted TCP—in the end, little more than a secure bitstream with a nice authentication system. But SSL extends only to the next upstream host and becomes progressively unwieldy the more you try to encapsulate within. Furthermore, standard SSL implementations fail to be perfectly forward-secure, essentially meaning that a key compromise in the future will expose data sent today. This is unnecessary and honestly embarrassing.
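As a taste of that limitation, Stunnel’s whole job fits on one command line: a secure bitstream to the next upstream host, and nothing more. A sketch using the 3.x-series syntax (hostname and ports are illustrative):

```shell
# Client mode (-c): accept plaintext connections on local port
# 1143 and wrap each one in SSL before relaying it to the
# IMAPS port on the remote server.
stunnel -c -d 1143 -r mail.example.com:993
```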

We need something more powerful, yet still trusted. We need OpenSSH.

Security Analysis: OpenSSH 3.02

The de facto standard for secure remote connectivity, OpenSSH, is best known for being an elegant and secure replacement for both Telnet and the r* series of applications. It is an incredibly flexible implementation of one of the three trusted secure communication protocols (the other two being SSL and IPSec).

Security

One of the mainstays of open source security, OpenSSH is often the only point of entry made available to some of the most paranoid networks around. Trust in the first version of the SSH protocol is eroding in the face of years of intensive analysis; OpenSSH’s complete implementation of the SSH2 protocol, its completely free code, and its unique position as the only reliable migration path from SSH1 to SSH2 (this was bungled miserably by the original creators of SSH), have made this the de facto standard SSH implementation on the Internet. See Table 13.1 for a list of the encryption types and algorithms OpenSSH supports.

Table 13.1

Cryptographic Primitive Constructs Supported By OpenSSH

[Table 13.1 appears as an image in the original; its contents are not reproduced here.]

Routability

All traffic is multiplexed over a single outgoing TCP session, and most networks allow outgoing SSH traffic (on 22/tcp) to pass. ProxyCommand functionality provides a convenient interface for traffic maskers and redirectors to be applied, such as a SOCKS redirector or an HTTP encapsulator.
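A hypothetical ~/.ssh/config fragment shows how ProxyCommand slots a redirector into the connection path (the “connect” HTTP-proxy helper tool and all hostnames here are assumptions, not requirements):

```shell
# ~/.ssh/config
# ssh runs the ProxyCommand and speaks the SSH protocol over its
# stdin/stdout; %h and %p expand to the target host and port.
Host tunneled
    HostName ssh.example.com
    ProxyCommand connect -H proxy.example.com:8080 %h %p
```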

Deployability

Both client and server code is installed by default on most modern UNIX systems, and the system has been ported to a large number of platforms, including Win32.

Flexibility

Having the ability to seamlessly encapsulate a wide range of traffic (see Table 13.2) means that more care needs to be taken to prevent partially trusted clients from appropriating unexpected resources. Very much an embarrassment of riches. One major limitation is the inability to internally convert from one encapsulation context to another, that is, directly connecting the output of a command to a network port.

Table 13.2

Encapsulation Primitives of OpenSSH

Encapsulation Type            Possible Uses
UNIX shell                    Interactive remote administration
Command forwarding            Remote CD burning, automated backup, cluster management, toolchain interposition
Static TCP port forwarding    Single-host network services, like IRC, Mail, VNC, and (very) limited Web traffic
Dynamic TCP port forwarding   Multihost and multiport network services, like Web surfing, P2P systems, and Voice over IP
X forwarding                  Remote access to graphical UNIX applications
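The two TCP forwarding rows in the table can be sketched as follows (hosts and ports are illustrative):

```shell
# Static: local port 5901 is carried through the tunnel to
# whatever the server sees as localhost:5901 (here, VNC).
ssh -L 5901:localhost:5901 user@host

# Dynamic: ssh becomes a local SOCKS proxy on port 1080; each
# application connection names its own destination, so multihost
# traffic like Web surfing fits down a single tunnel.
ssh -D 1080 user@host
```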
Quality

OpenSSH is very much a system that “just works.” Syntax is generally good, though network port forwarding does tend to confuse those new to the platform. Speed can be an issue for certain platforms, but the one-to-ten MB/s level appears to be the present performance ceiling for default builds of OpenSSH. Some issues with command forwarding can lead to zombie processes. Forked from Tatu Ylonen’s original implementation of SSH and expanded upon by Theo de Raadt, Markus Friedl, Damien Miller, and Ben “Mouring” Lindstrom of the highly secure OpenBSD project, it is under constant, near-obsessive development.

Setting Up OpenSSH

The full procedure for setting up OpenSSH is mostly outside the scope of this chapter, but you can find a good guide for Linux at www.helpdesk.umd.edu/linux/security/ssh_install.shtml. Windows is slightly more complicated; those using the excellent UNIX-On-Windows Cygwin environment can get guidance at http://tech.erdelynet.com/cygwin-sshd.asp; those who simply seek a daemon that will work and be done with it should grab Network Simplicity’s excellent SSHD build at www.networksimplicity.com/openssh/.

Note this very important warning about versions: Modern UNIX distributions all have SSH daemons installed by default, including Apple’s Macintosh OS X; unfortunately, a disturbing number of these daemons are either SSH 1.2.27 or OpenSSH 2.2.0p2 or earlier. The SSH1 implementations in these packages are highly vulnerable to a remote root compromise, and must be upgraded as soon as possible. If it is not feasible to upgrade the daemon on a machine using the latest available at www.openssh.com (or even the official SSH2 from ssh.com), you can secure builds of OpenSSH that support both SSH1 and SSH2 by editing /etc/sshd_config and changing Protocol 2,1 to Protocol 2. (This has the side effect of disabling SSH1 support entirely, which is a problem for older clients.) Obscurity is no defense in this situation, either—the version of any SSH server can easily be queried remotely, as in the following:

effugas@OTHERSHOE ~

$ telnet 10.0.1.11 22

Trying 10.0.1.11…

Connected to 10.0.1.11.

Escape character is ‘^]’.

SSH-1.99-OpenSSH_3.0.1p1

Another important note is that the SSH server does not necessarily require root permissions for the majority of its functionality. Any user may execute sshd on an alternate port and even authenticate against it. The SSH client in particular may be installed and executed by any normal user—this is particularly important when some of the newer features of OpenSSH, like ProxyCommand, are required but unavailable in older builds.

Tools & Traps …

OpenSSH under Windows

There are many “nice” implementations of the SSH protocols for Win32, including F-Secure SSH and SecureCRT. They’re not very flexible, at least not in terms of the flexibility we’re interested in: They’re great tools for fooling around with a shell on a remote machine, but most of the nonstandard techniques in this chapter are built on the ability for UNIX tools to be dynamically recombined, in all sorts of unexpected ways, simply using pipes and redirections provided by users themselves.

Luckily, there’s an alternative: Use the real thing!

Cygwin, available at www.cygwin.com, is an astonishingly complete and useful UNIX-like environment that runs directly under Windows. OpenSSH has been ported to this environment, and thus all the techniques of this chapter may be used natively within Microsoft environments. There are two ways to gain access to this environment:

image Install the entire Cygwin environment. At press time, this involves running www.cygwin.com/setup.exe, selecting a number of packages, and allowing the environment to install from one of many mirrors. One major thing to keep in mind: Although Cygwin ships with an excellent implementation of rxvt, a standard UNIX command window environment, it does not execute it by default. This can be easily remedied by right-clicking on the desktop, selecting New, then Shortcut, and inputting the following inordinately long path:

    c:\cygwin\bin\rxvt.exe -rv -sl 20000 -fn "Courier-12" -e /bin/bash --login -i

    (Be sure to amend the path listed if you installed Cygwin to an alternate directory.) Name the shortcut whatever you like. You may want to tweak your terminal slightly; this command line implements reverse video, a twenty-thousand-line scrollback buffer, 12-point Courier text, and a default Bash prompt.

image Use Dox SSH, a miniature OpenSSH/Cygwin distribution developed specifically for this chapter. You may find it at www.doxpara.com/doxssh or within the Syngress Solutions Web site for this book (www.syngress.com/solutions).

Both solutions look like Figure 13.1.

image

Figure 13.1 OpenSSH on Win32 through Cygwin and rxvt

That being said, two notable alternative SSH implementations exist. The first is MindTerm, by Mats Andersson and available at www.appgate.com/mindterm/. MindTerm, possibly the killer app for Java, is a complete SSH1/SSH2 implementation that can load securely off a Web page. The second, PuTTY, is a simple but absolutely tiny terminal-only implementation of SSH1/SSH2 for Windows. You can find it at www.chiark.greenend.org.uk/˜sgtatham/putty or www.doxpara.com/putty. Both implementations are compact, well featured, fast, and impressively written.

Open Sesame: Authentication

The first step to accessing a remote system in SSH is authenticating yourself to it. All systems that travel over SSH begin with this authentication process.

Basic Access: Authentication by Password

“In the beginning, there was the command line.” The core encapsulation of SSH is and will always be the command line of a remote machine. The syntax is simple:

dan@OTHERSHOE ~

# ssh user@host

$ ssh [email protected]

[email protected]’s password:

FreeBSD 4.3-RELEASE (CURRENT-12-2-01) #1: Mon Dec 3 13:44:59 GMT 2001 $

Throw on a -X option, and if an X-Windows application is executed, it will automatically be tunneled. SSH’s password handling is interesting—no matter where in the chain of commands ssh is, if a password is required, ssh will almost always manage to query for it. This isn’t trivial, but it is quite useful.

However, passwords have their issues—primarily, if a user’s password is shared between hosts A and B, host A can spoof being the user to host B, and vice versa. Chapter 12 goes into significantly more detail about the weaknesses of passwords, and thus SSH supports a more advanced mechanism for authenticating the client to the server.

Transparent Access: Authentication by Private Key

Asymmetric key systems offer a powerful method of allowing one host to authenticate itself to many—much like many people can recognize a face but not copy its effect on other people, many hosts can recognize the private key referenced by their public component, but not copy the private component itself. So SSH generates private components—one for the SSH1 protocol, another for SSH2—which hosts all over may recognize.

Server to Client Authentication

Although it is optional for the client to authenticate using a public/private keypair, the server must provide key material such that the client, having trusted the host once, may recognize it in the future. This diverges from SSL, which presumes that the client trusts some certificate authority like VeriSign and then can transfer that trust to any arbitrary host. SSH instead accepts the risks of first introductions to a host and then tries to take that first risk and spread it over all future sessions. This has a much lower management burden, but presents a much weaker default model for server authentication. (It’s a tradeoff—one of many. Unmanageable systems aren’t deployed, and undeployed security systems generally are awfully insecure.) First connections to an SSH server generally look like this:

effugas@OTHERSHOE ~

$ ssh [email protected]

The authenticity of host ‘10.0.1.11 (10.0.1.11)’ can’t be established.

RSA key fingerprint is 6b:77:c8:4f:e1:ce:ab:cd:30:b2:70:20:2e:64:11:db.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added ‘10.0.1.11’ (RSA) to the list of known hosts.

[email protected]’s password:

FreeBSD 4.3-RELEASE (CURRENT-12-2-01) #1: Mon Dec 3 13:44:59 GMT 2001 $

The Host Key, as it’s known, is generated automatically upon installation of the SSH server. This often poses a problem—because the installation routines are pretty dumb, they’ll sometimes overwrite or misplace existing key material. This leads to a very scary error on clients, proclaiming that somebody might be spoofing the server—but usually it just means that the original key was legitimately lost. The result is that users just go ahead and accept the new, possibly spoofed key. This is problematic and is being worked on. For systems that need to be very secure, the most important thing is to come up with decent methods for securely distributing ~/.ssh/known_hosts and ~/.ssh/known_hosts2, the files that contain the list of keys the client may recognize. Much of this chapter is devoted to discussing exactly how to distribute files of this type through arbitrarily disroutable networks; upon finding a technique that works in your network, a “pull” design—having each client go to a central host, query for a new known-hosts file, and pull it down—might work well.
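A minimal sketch of such a pull design follows. So that it can run anywhere, a scratch directory stands in for the central host; in production the cp would be an scp from a central machine, and the host name keymaster, the key line, and all paths are assumptions, not anything prescribed by SSH itself.

```shell
# Hypothetical "pull" update of a client's known_hosts file.
# In production, the cp below would be something like:
#   scp keymaster:/etc/ssh/ssh_known_hosts "$clienthome/.ssh/known_hosts.new"
central=$(mktemp -d)       # stand-in for the central host
clienthome=$(mktemp -d)    # stand-in for the client's home directory

echo "10.0.1.11 1024 35 1234567890" > "$central/known_hosts"   # fake master list

mkdir -p "$clienthome/.ssh"
cp "$central/known_hosts" "$clienthome/.ssh/known_hosts.new"
# Replace the live file only if the master copy actually differs.
if ! cmp -s "$clienthome/.ssh/known_hosts.new" "$clienthome/.ssh/known_hosts" 2>/dev/null
then
    mv "$clienthome/.ssh/known_hosts.new" "$clienthome/.ssh/known_hosts"
fi
```

Run from cron on each client, something of this shape spreads the risk of first introduction once, centrally, rather than once per user per server.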

Client to Server Authentication

Client asymmetric keying is useful but optional. The two main steps are to generate the keys on the client, and then to inform the server that they’re to be accepted. First, key generation, executed using ssh-keygen for SSH1 and ssh-keygen -t dsa for SSH2:

effugas@OTHERSHOE ~

$ ssh-keygen

Generating public/private rsa1 key pair.

Enter file in which to save the key (/home/effugas/.ssh/identity):

Enter passphrase (empty for no passphrase): <ENTER>

Enter same passphrase again: <ENTER>

Your identification has been saved in /home/effugas/.ssh/identity.

Your public key has been saved in /home/effugas/.ssh/identity.pub.

The key fingerprint is:

c7:d9:12:f8:b4:7b:f2:94:2c:87:43:14:5a:cf:11:1d effugas@OTHERSHOE

effugas@OTHERSHOE ~

$ ssh-keygen -t dsa

Generating public/private dsa key pair.

Enter file in which to save the key (/home/effugas/.ssh/id_dsa):

Enter passphrase (empty for no passphrase): <ENTER>

Enter same passphrase again: <ENTER>

Your identification has been saved in /home/effugas/.ssh/id_dsa.

Your public key has been saved in /home/effugas/.ssh/id_dsa.pub.

The key fingerprint is:

e0:e2:a7:1b:02:ad:5b:0a:7f:f8:9c:d1:f8:3b:97:bd effugas@OTHERSHOE

Now, you need to inform the server to check connecting clients for possession of the private key (.ssh/identity for SSH1, .ssh/id_dsa for SSH2). This is done by sending the server the key’s public element and appending it to a file in some given user’s home directory—.ssh/authorized_keys for SSH1, .ssh/authorized_keys2 for SSH2. There’s no real elegant way to do this built into SSH, and it is by far the biggest weakness in the toolkit and very arguably the protocol itself. William Stearns has done some decent work cleaning this up; his script is available at www.stearns.org/ssh-keyinstall/ssh-keyinstall-0.1.3.tar.gz. The following process is messy and doesn’t try to hide that, but it will remove the need for password authentication using your newly generated keys, with the added advantage of not needing any special external applications (note that you still need to enter a password once):

effugas@OTHERSHOE ~

$ ssh -1 [email protected]

[email protected]’s password:

Last login: Mon Jan 14 05:38:05 2002 from 10.0.1.56

[effugas@localhost effugas]$

Okay, deep breath. Now you need to read in the key generated using ssh-keygen and pipe it out through ssh to 10.0.1.10, username effugas. Make sure you’re in the home directory, set file modes so nobody else can read what you’re about to create, create the directory if needed (the -p option makes directory creation optional), then receive whatever is being piped in and append it to ~/.ssh/authorized_keys, which the SSH daemon will use to authenticate remote private keys. Why there isn’t standardized functionality for this is a great mystery; this extended multipart command, however, will get the job done reasonably well:

effugas@OTHERSHOE ~

$ cat ~/.ssh/identity.pub | ssh -1 [email protected] "cd ~ && umask 077 && mkdir -p .ssh && cat >> ~/.ssh/authorized_keys"

[email protected]’s password:

Look ma, no password requested:

effugas@OTHERSHOE ~

$ ssh -1 [email protected]

Last login: Mon Jan 14 05:44:22 2002 from 10.0.1.56

[effugas@localhost effugas]$

The equivalent process for SSH2, the default protocol for OpenSSH:

effugas@OTHERSHOE ~

$ cat ~/.ssh/id_dsa.pub | ssh [email protected] "cd ~ && umask 077 && mkdir -p .ssh && cat >> ~/.ssh/authorized_keys2"

[email protected]’s password:

effugas@OTHERSHOE ~

$ ssh [email protected]

Last login: Mon Jan 14 05:47:30 2002 from 10.0.1.56

[effugas@localhost effugas]$
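The quoted server-side half of those pipelines is worth dissecting on its own. Here it is run against a scratch directory standing in for the remote home, with a fake key line standing in for identity.pub (both stand-ins are assumptions, so this can be tried without a second host):

```shell
# The remote end of the key-install pipeline, exercised locally.
# "$home" stands in for the remote account's home directory.
home=$(mktemp -d)

echo "ssh-rsa AAAAfakekey effugas@OTHERSHOE" | \
  ( cd "$home" && umask 077 && mkdir -p .ssh && cat >> .ssh/authorized_keys )

ls -ld "$home/.ssh"    # umask 077 made the directory unreadable to others
```

The umask matters: sshd will refuse (or at least should refuse, given StrictModes) to honor an authorized_keys file that other users could have tampered with.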

Tools & Traps …

Many Users, One Account: Preventing Password Leakage

One very important thing to realize is that there may be many entries in each user account’s authorized_keys files. This is often used to allow one user to authenticate to a server from many different accounts; hopefully the various end-to-end techniques described in this chapter will limit the usage of that insecure methodology. (The more hosts can log in, the more external compromises may lead to internal damage.)

However, there is still an excellent use for the fact that authorized_keys and authorized_keys2 may store many entries—giving multiple individuals access to a single account, with none of them knowing the permanent password to that account. New members of a group add their public component to some account with necessary permissions; from then on, their personal key gets them in. Should they leave the group, their individual public element is removed from the list of authorized_keys; nobody else has to remember a new password!

A slight caveat—known_hosts2 and authorized_keys2 are being slowly eliminated, being condensed into the master known_hosts and authorized_keys files. Servers that don’t work by using the SSH2-specific files may work simply by cutting off the 2 from the end of the file in question.

Passwords were avoided because we didn’t trust servers, but who says our clients are much better? Great crypto is nice, but we’re essentially taking something that was stored in the mind of the user and putting it on the hard drive of the client, ripe for grabbing. Remember that there is no secure way to store a password on a client without another password to protect it. Solutions to this problem aren’t great. One system supported by SSH involves passphrases—passwords that are parsed client-side and used to decrypt the private key whose possession the remote server wishes to verify. You can add passphrases to both SSH1 and SSH2 keys:

# add passphrase to SSH1 key

effugas@OTHERSHOE ~

$ ssh-keygen.exe -p

Enter file in which the key is (/home/effugas/.ssh/identity):

Key has comment ‘effugas@OTHERSHOE’

Enter new passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved with the new passphrase.

# add passphrase to SSH2 key

effugas@OTHERSHOE ~

$ ssh-keygen.exe -t dsa -p

Enter file in which the key is (/home/effugas/.ssh/id_dsa):

Key has comment ‘/home/effugas/.ssh/id_dsa’

Enter new passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved with the new passphrase.

# Note the new request for passphrases

effugas@OTHERSHOE ~

$ ssh [email protected]

Enter passphrase for key ‘/home/effugas/.ssh/id_dsa’:

FreeBSD 4.3-RELEASE (CURRENT-12-2-01) #1: Mon Dec 3 13:44:59 GMT 2001 $

Of course, now we’re back where we started—we have to enter a password every time we want to log into a remote host! What now?

Well, the dark truth is that most people just trust their clients and stay completely passphrase-free, much to the annoyance of IT administrators who think disabling passwords entirely will drive people towards a really nice crypto solution that has no huge wide-open holes. SSH does have a system that tries to address the problem of passphrases being no better than passwords, by allowing a single entry of the passphrase to spread among many authentication attempts. This is done through an agent, which sits around and serves private key computations to SSH clients run under it. (This means, importantly, that only SSH clients running under the shell of the agent get access to its key.) Passphrases are given to the agent, which then decrypts the private key and lets clients access it password-free. A sample implementation of this, assuming keys created as in the earlier example and authorized on both 10.0.1.11 and 10.0.1.10:

First, we start the agent. Note that a child shell is named as an argument. If you don’t name a shell, you’ll get an error along the lines of “Could not open a connection to your authentication agent.”

effugas@OTHERSHOE ~

$ ssh-agent bash

Now, add the keys. If there’s no argument, the SSH1 key is added:

effugas@OTHERSHOE ~

$ ssh-add

Enter passphrase for effugas@OTHERSHOE:

Identity added: /home/effugas/.ssh/identity (effugas@OTHERSHOE)

With an argument, the SSH2 key is tossed on:

effugas@OTHERSHOE ~

$ ssh-add ~/.ssh/id_dsa

Enter passphrase for /home/effugas/.ssh/id_dsa:

Identity added: /home/effugas/.ssh/id_dsa (/home/effugas/.ssh/id_dsa)

Now, let’s try to connect to a couple hosts that have been programmed to accept both keys:

effugas@OTHERSHOE ~

$ ssh -1 [email protected]

Last login: Mon Jan 14 06:20:21 2002 from 10.0.1.56

[effugas@localhost effugas]$ ^D

effugas@OTHERSHOE ~

$ ssh -2 [email protected]

FreeBSD 4.3-RELEASE (CURRENT-12-2-01) #1: Mon Dec 3 13:44:59 GMT 2001

$

Having achieved a connection to a remote host, we now have to figure out what to do with it. For any given SSH connection, we may execute commands on the remote server or establish various forms of network connectivity. We may even do both, sometimes providing ourselves a network path back to the very server we just connected to.

Command Forwarding: Direct Execution for Scripts and Pipes

One of the most useful features of SSH derives from its heritage as a replacement for the r* series of UNIX applications. SSH possesses the capability to cleanly execute remote commands, as if they were local. For example, instead of typing:

effugas@OTHERSHOE ~

$ ssh [email protected]

[email protected]’s password:

FreeBSD 4.3-RELEASE (CURRENT-12-2-01) #1: Mon Dec 3 13:44:59 GMT 2001

$ uptime

3:19AM up 18 days, 8:48, 5 users, load averages: 2.02, 2.04, 1.97

$

We could just type:

effugas@OTHERSHOE ~

$ ssh [email protected] uptime

[email protected]’s password:

3:20AM up 18 days, 8:49, 4 users, load averages: 2.01, 2.03, 1.97

Indeed, we can pipe output between hosts, such as in this trivial example:

image

Such functionality is extraordinarily useful for tunneling purposes. The basic concept of a tunnel is something that creates a data flow across a normally impenetrable boundary; there is little that is generically as impenetrable as the separation between two independent pieces of hardware. (A massive amount of work has been done in process compartmentalization, where a failure in one piece of code is almost absolutely positively not going to cause a failure somewhere else, due to absolute memory protection, CPU scheduling, and whatnot. Meanwhile, simply running your Web server and mail server code on different systems—possibly many different systems, possibly geographically spread over the globe—provides a completely different class of process separation.) SSH turns pipes into an inter-host communication subsystem; the rule becomes: Almost any time you’d use a pipe to transfer data between processes, SSH allows the processes to be located on other hosts.
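The rule can be seen in miniature below. So the sketch runs on one machine, sh -c stands in for ssh; the quoted command string is exactly what ssh would hand to the remote shell, and the user and host in the comment are assumptions:

```shell
# Any pipe stage can live on another host: replace `sh -c` with
# `ssh [email protected]` and the quoted `sort -n` runs remotely instead.
REMOTE='sh -c'                 # production: REMOTE='ssh [email protected]'
result=$(printf '3\n1\n2\n' | $REMOTE 'sort -n' | tr '\n' ' ')
echo "$result"                 # prints: 1 2 3
```

Generation happens on one side of the pipe, sorting on the other; neither stage knows or cares where its counterpart is running.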

NOTE

Not all commands were built to be piped—those that take over the terminal screen and draw to it, like lynx, elm, pine, or tin, require what’s known as a TTY to function correctly. TTYs use otherwise unused characters to allow for various drawing modes and styles, and as such are not 8-bit clean in the way pipes need to be. SSH still supports TTY-using commands, but the -t option must be specified.

Remote pipe execution can be used to great effect—very simple command pipelines, suddenly able to cross server boundaries, can have extraordinarily useful effects. For example, most file transfer operations can be built using little more than a few basic tools that ship with almost all UNIX and Cygwin distributions. Some base elements are listed in Table 13.3:

Table 13.3

Useful Shell Script Components for SSH Command Forwards

image

From such simple beginnings, we can actually implement the basic elements of a file transfer system (see Table 13.4).

Table 13.4

Transferring Files Using Generic Shell Components

image
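As one concrete instance of the kind of primitive such tables describe, an entire directory can be copied with nothing but tar on both ends of a pipe. The receiving subshell below would become ssh user@host 'cd /dest && tar xf -' in the cross-host version; the host, user, and paths are illustrative assumptions.

```shell
# Pipe-built file transfer: pack on one side, unpack on the other.
# Locally: a subshell. Remotely: ssh [email protected] 'cd /dest && tar xf -'
src=$(mktemp -d)
dst=$(mktemp -d)
echo "hello" > "$src/file.txt"

( cd "$src" && tar cf - . ) | ( cd "$dst" && tar xf - )

cat "$dst/file.txt"            # prints: hello
```

Because tar preserves permissions, timestamps, and directory structure, this one pipeline reproduces most of what a dedicated file transfer tool would do.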

One of the very nice things about SSH is that, when it executes commands remotely, it does so in an extraordinarily restricted context. Trusted paths are actually compiled into the SSH daemon, and the only binaries SSH will execute without an absolute path are those in /usr/local/bin, /usr/bin, and /bin. (SSH also has the capability to forward environment variables, so if the client shell has any interesting paths, their names will be sent to the server as well. This is a slight sacrifice of security for a pretty decent jump in functionality.)

Notes from the Underground …

su: Silly User, Root Is For Kids

The su tool is probably the ultimate paper tiger of the secure software world. As a command-line tool intended to allow an individual to “switch user” permissions, it is held up as a far superior alternative to directly connecting to the required account in the first place. Even the venerable OpenBSD makes this mistake:

$ ssh [email protected]

[email protected]’s password:

Last login: Fri Dec 28 02:02:16 2001 from 10.0.1.150

OpenBSD 2.7 (GENERIC) #13: Sat May 13 17:41:03 MDT 2000

Welcome to OpenBSD: The proactively secure Unix-like operating system.

Please use the sendbug(1) utility to report bugs in the system. Before reporting a bug, please try to reproduce it with the latest version of the code. With bug reports, please try to ensure that enough information to reproduce the problem is enclosed, and if a known fix for it exists, include that as well.

Terminal type? [xterm]

Don’t login as root, use su

spork#

This advice is ridiculous, however well intended: The idea is that a user should go about his business normally in his normal account and, should he need to complete some administrative task, instruct his user shell—the one not trusted to administer the system—to launch a program that will ask for a root password and in return provide a shell that is indeed trusted.

That would be great if we had any assurance that the user shell was actually going to execute su! Think about it—there are innumerable opportunities for a shell to be corrupted, if nothing else by the .bashrc/.profile/.tcshrc automatic and invisible configuration files. Each of these files could specify an alternate executable to load rather than the genuine su, one which would capture the keyboard traffic of a root password being entered and either write it to a file or send it over the network. If there is to be a dividing line between the account of an average user and the root account, what sense does it make to pipe that which upgrades from the former (untrusted) to the latter (trusted) through a resource wholly owned and controlled in “enemy territory”? It’s exactly analogous to leaving the fox in charge of the henhouse; the specific entity we fail to trust is being given the keys to the realm we absolutely need to keep secure, and our assumption is that with those keys no evil will be done.

If we trusted it to do no evil, we wouldn’t be putting restrictions upon it in the first place!

Unfortunately, particularly when multiple people share root access on a machine, it’s critical to know who came in and broke something at what time. The su tool is nice because it provides a very clean log entry that shows who traveled from lower security to higher. Even creating individual authorized_keys entries in root doesn’t handle this sufficiently, because it doesn’t really log which key was used to get into which account (this should be fixed in a later release). This need for accountability is so great that it can reasonably outweigh the restriction concept on individual accounts, which may not even be there as a real security system anyway—in other words, root is something you always have access to, but you want to be able to prevent accidental and casual command-line work from wiping out the server!

Can we keep this accountability without forcing a critical password through an insecure space? Yes—using SSH. When SSH executes a command forward, it does so using the very limited default environment that the shell provides. This default environment—a combination of the root-owned sshd and the root-owned /bin/sh, with an ignorable bit from the client—is immune to whatever corruptions might happen to the shell via its configuration files or whatnot. That makes it a perfect environment for su!

ssh user@host -t "/bin/su -l user2"

This drops down into the first user’s account just long enough to authenticate—the environment is kept as pure as the root–owned processes that spawned it. In this pure environment, su is given a TTY and told to switch to some second user. Because it’s the pure environment, we know it’s actually su that’s being executed, not anything else.

Note that only /bin/sh can be trusted to maintain command-environment purity. Bash, for example, will load its config files even when simply being used to execute a command, so a chsh (change shell) to /bin/sh will need to be executed for this method to remain safe. This doesn’t, however, mean that users must give up bash—by placing exec bash --login -i in the .profile in their home directory, a user gets bash when logging in interactively while still keeping the safe environment available for remote commands.

There is another problem, little known but of some import. Even for command forwards, the file ~/.ssh/environment is loaded by sshd to set custom environment variables. The primary parameter to attack would be the launch path for the remote su; by redirecting the path to some corrupted binary owned by the user, anything typed at the command line would be vulnerable. It’s nontrivial to disable ~/.ssh/environment parsing, but it’s easy to simply specify an absolute path to su—/bin/su, usually, though it’s occasionally /usr/bin/su—that path hacking can’t touch. The other major environment hack involves library preloads, which change the functions a given app depends on to execute. Because su is a setuid app, the system automatically ignores any library preloads.

Finally, it is critical to use the -l option to su to specify that a full, clean login environment should be created once the switch is made. Otherwise, pollution from the user shell will spread up to the root shell!

Port Forwarding: Accessing Resources on Remote Networks

Once we’ve got a link, SSH gives us the capability to create a “portal” of limited network connectivity from the client to the server, or vice versa. The portal is not total—simply running SSH does not magically encapsulate all network traffic on your system, any more than the existence of airplanes means you can flap your arms and fly. However, there do exist methods and systems for making SSH an extraordinarily useful network tunneling system.

Local Port Forwards

A local port forward is essentially a request for SSH to listen on one client TCP port (UDP is not supported, for good reason but to great annoyance) and, should any traffic arrive on it, to pipe that traffic through the SSH connection to some specified machine visible from the server. Such local traffic could be sent to the external IP address of the machine, but for convenience “127.0.0.1” and usually “localhost” refer to “this host,” no matter the external IP address.

The syntax for a Local Port Forward is pretty simple:

ssh -L listening_port:destination_host:destination_port user@forwarding_host

Let’s walk through the effects of starting up a port forward, using IRC as an example.

This is the port we want to access from within another network—very useful when IRC doesn’t work from behind your firewall due to identd. This is the raw traffic that arrives when the port is connected to:

effugas@OTHERSHOE ~

$ telnet newyork.ny.us.undernet.org 6667

Trying 66.100.191.2…

Connected to newyork.ny.us.undernet.org.

Escape character is ‘^]’.

NOTICE AUTH :*** Looking up your hostname

NOTICE AUTH :*** Found your hostname, cached

NOTICE AUTH :*** Checking Ident

We connect to a remote server and tell our SSH client to listen for localhost IRC connection attempts. If any are received, they are to be sent to what the remote host sees as newyork.ny.us.undernet.org, port 6667.

effugas@OTHERSHOE ~

$ ssh [email protected] -L6667:newyork.ny.us.undernet.org:6667

Password:

Last login: Mon Jan 14 06:22:19 2002 from some.net on pts/0

Linux libertiee.net 2.4.17 #2 Mon Dec 31 21:28:05 PST 2001 i686 unknown

Last login: Mon Jan 14 06:23:45 2002 from some.net

libertiee:~>

Let’s see if the forwarding worked—do we get the same output from localhost that we used to be getting from a direct connection? Better—identd is timing out, so we’ll actually be able to talk on IRC.

effugas@OTHERSHOE ~

$ telnet 127.0.0.1 6667

Trying 127.0.0.1…

Connected to 127.0.0.1.

Escape character is ‘^]’.

NOTICE AUTH :*** Looking up your hostname

NOTICE AUTH :*** Found your hostname, cached

NOTICE AUTH :*** Checking Ident

NOTICE AUTH :*** No ident response

Establishing a port forward is not enough; we must configure our systems to actually use the forwards we’ve created. This means going through localhost instead of directly to the final destination. The first method is to simply inform the app of the new address—quite doable when addressing is done “live,” that is, not stored in configuration files:

$ irc Effugas 127.0.0.1

*** Connecting to port 6667 of server 127.0.0.1

*** Looking up your hostname

*** Found your hostname, cached

*** Checking Ident

*** No ident response

*** Welcome to the Internet Relay Network Effugas (from newyork.ny.us.undernet.org)

More difficult is when configurations are buried down a long tree of menus that are annoying to modify each time a simple server change is desired. For these cases, we actually need to remap the name—instead of the name newyork.ny.us.undernet.org returning its actual IP address to the application, it needs to return 127.0.0.1. For this, we modify the hosts file. This file is almost always checked before a DNS lookup is issued, and it allows a user to manually map names to IP addresses. The syntax is trivial:

bash-2.05a$ tail -n1 /etc/hosts

10.0.1.44 alephdox

Instead of sending IRC to 127.0.0.1 directly, we can modify the hosts file to contain the line:

effugas@OTHERSHOE /cygdrive/c/windows/system32/drivers/etc

$ tail -n1 hosts

127.0.0.1 newyork.ny.us.undernet.org

Now, when we run IRC, we can connect to the host using the original name—and it’ll still route correctly through the port forward!

effugas@OTHERSHOE /cygdrive/c/windows/system32/drivers/etc

$ irc Timmy newyork.ny.us.undernet.org

*** Connecting to port 6667 of server newyork.ny.us.undernet.org

*** Looking up your hostname

*** Found your hostname, cached

*** Checking Ident

*** No ident response

*** Welcome to the Internet Relay Network Timmy

Note that the location of the hosts file varies by platform. Almost all UNIX systems use /etc/hosts; Win9x uses \WINDOWS\HOSTS; WinNT uses \WINNT\SYSTEM32\DRIVERS\ETC\HOSTS; and WinXP uses \WINDOWS\SYSTEM32\DRIVERS\ETC\HOSTS. Considering that Cygwin supports symlinks (using Windows Shortcut files, no less!), it would probably be good for your sanity to execute something like ln -s /cygdrive/c/windows/system32/drivers/etc/hosts /etc/hosts, adjusting the path to match your platform.

Note that SSH port forwards aren’t really that flexible. They require destinations to be declared in advance, carry a significant administrative expense, and have all sorts of limitations. Among other things, although it’s possible to forward one port for the listener and another for the destination (for example, -L16667:irc.slashnet.org:6667), you can’t address different port forwards by name, because they all end up resolving back to 127.0.0.1. You also need to know exactly which hosts need to be forwarded—attempting to browse the Web, for example, is a dangerous proposition. Besides the fact that it’s impossible to adequately deal with pages served off multiple addresses (each port 80 HTTP connection is sent to the same server), any servers that aren’t included in the hosts file will “leak” onto the outside network.

Mind you, SSL has similar weaknesses for Web traffic—it’s just that HTTPS (HTTP-over-SSL) pages are generally engineered not to spread themselves across multiple servers (indeed, doing so is a violation of the spec, because the lock and the address bar would refer to multiple hosts).

Local forwards, however, are far from useless. They’re amazingly useful for forwarding all single-port, single-host services. SSH itself is a single-port, single-host service—and as we show a bit later, that makes all the difference.

Dynamic Port Forwards

That local port forwards are a bit unwieldy doesn’t mean that SSH can’t be used to tunnel many different types of traffic. It just means that a more elegant solution needed to be employed—and indeed, one has been found. Some examination of the SSH protocols revealed that, while the listening port began awaiting connections at the beginning of the session, the client didn’t actually inform the server of the destination of a given forward until a connection was actually established. Furthermore, this destination information could change from TCP session to TCP session, with one listener being redirected, through the SSH tunnel, to several different endpoints. If only there were a simple way for applications to dynamically inform SSH of where they intended a given socket to point, the client could create the appropriate forward on demand—enter SOCKS4.

An ancient protocol, SOCKS4 was designed to provide the absolute simplest way for a client to inform a proxy of which server it actually intended to connect to. A proxy is little more than a server with the network access clients wish to use: the client issues to the proxy a request naming the server it really wants, and the proxy issues the network request itself and relays the response back to the client. That’s exactly what we need for the dynamic direction of SSH port forwards—so why not use a proxy control protocol like SOCKS4? Composed of but a few bytes back and forth at the beginning of a TCP session, the protocol has zero per-packet overhead, is already integrated into large numbers of pre-existing applications, and even has mature wrappers available to make any (non-suid) network-enabled application proxy-aware.
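Those “few bytes back and forth” are easy to see in code. A sketch of the SOCKS4 CONNECT exchange follows—the helper names are our own, but the field layout is the SOCKS4 protocol’s: a version byte, a command byte, a two-byte port, a four-byte IPv4 address, and a NUL-terminated userid; the proxy answers with eight bytes, where result code 90 means “granted.”

```python
import socket
import struct

def socks4_connect_request(dst_ip, dst_port, user=""):
    """Build a SOCKS4 CONNECT request: VN=4, CD=1 (CONNECT),
    destination port, destination IPv4, NUL-terminated userid."""
    return (struct.pack(">BBH", 4, 1, dst_port)
            + socket.inet_aton(dst_ip)
            + user.encode("ascii") + b"\x00")

def socks4_reply_granted(reply):
    """The proxy's reply is 8 bytes: a zero version byte, then a
    result code, where 90 (0x5A) means the request was granted."""
    return len(reply) == 8 and reply[0] == 0 and reply[1] == 90
```

Once the proxy answers with the eight-byte grant, the TCP stream is handed to the application untouched—hence the zero per-packet overhead.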

It was a perfect fit. The applications could request and the protocol could respond—all that was needed was for the client to understand. And so support was built into OpenSSH, with the first public release in 2.9.2p2 (only the client needs to be upgraded, though newer servers are much more stable when used for this purpose)—and suddenly, the poor man’s VPN was born. Starting up a dynamic forwarder is trivial; the syntax merely requires a port to listen on: ssh -D listening_port user@host. For example:

effugas@OTHERSHOE ~/.ssh

$ ssh [email protected] -D1080

Enter passphrase for key '/home/effugas/.ssh/id_dsa':

Last login: Mon Jan 14 12:08:15 2002 from localhost.localdomain

[effugas@localhost effugas]$

This will cause all connections to 127.0.0.1:1080 to be sent encrypted through 10.0.1.10 to any destination requested by an application. Getting applications to make these requests is a bit inelegant, but is much simpler than the contortions required for static local port forwards. We’ll provide some sample configurations now.

Internet Explorer 6: Making the Web Safe for Work

Though simple Web pages can easily be forwarded over a simple, static local port forward, complex Web pages just fail miserably over SSH—or at least, they used to. Configuring a Web browser to use the dynamic forwarder described earlier is pretty trivial. The process for Internet Explorer involves the following steps:

1. Select Tools | Internet Options.

2. Choose the Connections tab.

3. Click LAN Settings. Check Use a Proxy Server and click Advanced.

4. Go to the text box for SOCKS. Fill in 127.0.0.1 as the host, and 1080 (or whatever port you chose for the dynamic forward) for the port.

5. Close all three open windows by clicking OK.

Now go access the Web—if it works at all, it’s most likely being proxied over SSH. Assuming everything worked, you’ll see something like Figure 13.2.

image

Figure 13.2 FARK over SSH

To verify that the link is indeed traveling over SSH, type ~# in your SSH window. This will bring up a live view of which port forwards are active:

image

image

Tools & Traps …

Limitations of Dynamic Forwarding and SOCKS4

No special software needs to be installed on a server already running the SSH daemon to use it as a “poor man’s VPN,” but the newer the version of SSHD, the more stable the forwarded link will be. Older daemons will temporarily freeze the connection if a connection attempt is made to a non-existent or unreachable host. These failures would also occur if a static local port forward pointed to a broken host; the difference is that static forwards are usually pointed only at hosts that are completely stable. This issue can be resolved by installing a more advanced build of OpenSSH on the remote machine (see the setup section for how to do this; you don’t necessarily need root).

Of much more serious concern is the fact that SOCKS4 forwards only the traffic itself; it does not forward the DNS request used to direct the traffic. So although your connection itself may be secure, an administrator on your local link can monitor whom you’re connecting to, and can even change the destination. This may very well be a severe security risk, and will hopefully be resolved in the near future with a SOCKS5 dynamic forwarding implementation in the stock OpenSSH client.
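For what it’s worth, this DNS leak is exactly what the SOCKS4a extension (and SOCKS5’s domain-name address type) addresses: the client sends a deliberately invalid IP of the form 0.0.0.x, then appends the hostname, so the proxy end resolves the name itself. A sketch of the request format (the helper name is ours):

```python
import struct

def socks4a_connect_request(hostname, dst_port, user=""):
    """SOCKS4a CONNECT: an IP field of 0.0.0.1 signals that a
    NUL-terminated hostname follows the userid, to be resolved
    by the proxy rather than by the (leaky) local resolver."""
    return (struct.pack(">BBH", 4, 1, dst_port)
            + b"\x00\x00\x00\x01"              # invalid IP: 0.0.0.1
            + user.encode("ascii") + b"\x00"
            + hostname.encode("ascii") + b"\x00")
```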

In the meantime, both problems—ancient servers and protocols pushed past their limits—can be mitigated slightly by installing a small piece of code on the server to take over SOCKS handling. My preferred system is usocksd, available at http://sites.inka.de/sites/bigred/sw/usocksd-0.9.3.tar.gz. Usocksd supports only SOCKS5, but will remotely resolve names and remain stable through adverse network conditions. Launching it isn’t too bad:

Dan@EFFUGAS ~

$ ssh -L2080:127.0.0.1:2080 [email protected] "./usocksd -p 2080"

[email protected]'s password:

usocksd version 0.9.3 (c) Olaf Titz 1997-1999

Accepting connections from (anywhere) ident (anyone)

Relaying UDP from (anywhere)

Listening on port 2080.

We use both command forwarding and port forwarding here—the SSH session starts the daemon by remote command and forwards its output back to the client, while the port forward lets the client reach the daemon’s TCP port. It’s a bit awkward, but it works.

Speak Freely: Instant Messaging over SSH

Though there will probably be a few old-school hackers who might howl about this, instant messaging is one of the killer applications of the Net. Two things are incredibly annoying about public-level (as opposed to corporate/internal) instant messaging circa early 2002. First, to be blunt, there’s really very little privacy. Messages are generally sent in plaintext from your desktop to the central servers and back out—and anyone at your school or your work might very well sniff the messages along the way.

The other major annoyance is the lack of decent standards for instant messaging. Though the IETF is working on something known as SIMPLE (an extension of SIP), everyone has their own protocol, and none of the clients interoperate. We don’t need four phones to communicate voice across the world, yet we need up to four clients to communicate words across the Internet.

But such has been the cost of centralized instant messaging, which has significantly more reliability and firewall penetration than a peer-to-peer system like ICQ (which eventually absorbed some amount of centralization). Still, it’d be nice if there were some way to mitigate the downsides of chat.

One Ring To Bind Them: Trillian over SSH

Trillian, a free and absolutely brilliant piece of Win32 code, is an extraordinarily elegant and full-featured chat client with no ads but support for Yahoo, MSN, ICQ, AOL, and even IRC. It provides a unified interface to all five services, as well as multiuser profiles for shared systems.

It also directly supports SOCKS4 proxies—meaning that although we can’t easily avoid raw plaintext hitting the servers (there is, however, a SecureIM mode that allows two Trillian users to communicate more securely), we can at least export our plaintext beyond our own local networks, where eyes pry hardest—if the traffic can pass through at all. Setting up SOCKS4 support in Trillian is pretty simple:

1. Click on the big globe in the lower left-hand corner and select Preferences.

2. Select Proxy from the list of items on the left side—it’s about nine entries down.

3. Check off Use Proxy and SOCKS4.

4. Insert 127.0.0.1 as the host and 1080 (or whatever other port you used) for the port.

5. Click OK and start logging into your services. They’ll all go over SSH now.

You Who? Yahoo IM 5.0 over SSH

Yahoo should just work automatically when Internet Explorer is configured for the localhost SOCKS proxy, but it tries to use SOCKS version 5 instead of 4, and version 5 isn’t supported by the dynamic forwarder yet. Setting up Yahoo over SOCKS4/SSH is pretty simple anyway:

1. Select Login | Preferences before logging in.

2. Select Use Proxy.

3. Check Enable SOCKS Proxy.

4. Use Server Name 127.0.0.1 and Port 1080 (or whatever else you used).

5. Select Ver 4.

6. Click OK.

Just make sure you actually have a dynamic forward bouncing off an SSH server somewhere and you’ll be online. Remember to disable the proxy configuration later if you lose the dynamic forward.

Cryptokiddies: AOL Instant Messenger 5.0 over SSH

Setting this up is also pretty trivial. Remember—without that dynamic forward bouncing off somewhere, like your server at home or school, you’re not going anywhere.

1. Select My AIM | Edit Options | Edit Preferences.

2. Click Sign On/Off along the bar on the left.

3. Click Connection to “configure AIM for your proxy server”.

4. Check Connect Using Proxy, and select SOCKS4 as your protocol.

5. Use 127.0.0.1 as your host and 1080 (or whatever else you used) for your port.

6. Click OK on both windows that are up. You’ll now be able to log in—just remember to disable the proxy configuration if you want to directly connect through the Internet once again.

BorgChat: Microsoft Windows Messenger over SSH

Just more of the same:

1. Select Tools | Options.

2. Click the Connections tab.

3. Check I Use A Proxy Server, and make sure SOCKS4 is selected.

4. Enter 127.0.0.1 as your Server Name and 1080 (or whatever) as your port.

5. Click OK.

That’s a Wrap: Encapsulating Arbitrary Win32 Apps within the Dynamic Forwarder

Pretty much any application that uses outgoing TCP connections can quite easily be run through dynamic forwarding. The standard tool on Win32 (we discuss UNIX in a bit) for SOCKS encapsulation is SocksCap, available from the company that brought you the TurboGrafx-16: NEC. NEC invented the SOCKS protocol, so this isn’t too surprising. Found at www.socks.nec.com/reference/sockscap.html, SocksCap provides an alternate launcher for apps that occasionally need to operate from the far side of a SOCKS proxy without necessarily having the benefit of the 10 lines of code needed to support the SOCKS4 protocol (sigh).

SocksCap is trivial to use. The first thing to do upon launching it is go to File | Settings, put 127.0.0.1 into the Server field and 1080 for the port. After you click OK, simply drag shortcuts of apps you’d rather run through the SSH tunnel onto the SocksCap window—you can actually drag entries straight off the Start menu into SocksCap Control (see Figure 13.3). These entries can either be run directly or can be added as a “profile” for later execution.

image

Figure 13.3 Windows SOCKS Configuration with SocksCap

Most things “just work;” one thing in particular is good to see going fast through SSH: FTP.

File This: FTP over SSH Using LeechFTP

FTP support has long been a bit of an albatross for SSH; the need to somehow handle a highly necessary but completely inelegant protocol has long haunted the package. SSH.com and MindTerm both implemented special FTP translation layers in their latest releases to address this need; OpenSSH, by contrast, treats FTP as just another nontrivial protocol—and handles it well.

The preeminent FTP client for Windows is almost certainly Jan Debis’ LeechFTP, available at http://stud.fh-heilbronn.de/~jdebis/leechftp/files/lftp13.zip. Free, multithreaded, and simple to use, LeechFTP encapsulates beautifully within SocksCap and OpenSSH. The one important configuration it requires is a switch from Active FTP (where the server initiates additional TCP connections back to the client, within which individual files are transferred) to Passive FTP (where the server names TCP ports that, should the client connect to them, will carry the content of an individual file). This is done as follows:

1. Select File | Options.

2. Click the Firewall tab.

3. Check PASV Mode.

4. Click OK and connect to some server. The lightning bolt in the upper left-hand corner (see Figure 13.4) is a good start.

image

Figure 13.4 LeechFTP at Work

And how well does it do? Take a look at Figure 13.4. Seven threads are sucking data at full speed using dynamically specified ports—works for me.

Summoning Virgil: Using Dante’s Socksify to Wrap UNIX Applications

Though some UNIX tools directly support SOCKS for firewall traversal, the vast majority don’t. Luckily, we can add SOCKS support at runtime to any dynamically linked application using the client component of Dante, Inferno Nettverks’ industrial-strength implementation of SOCKS4/SOCKS5. You can find Dante at ftp://ftp.inet.no/pub/socks/dante-1.1.11.tar.gz; though complex, it compiles on most platforms.

After installation, the first thing to do is set up the system-level SOCKS configuration. It’s incredibly annoying that we have to do this, but there’s no other way (for now). Create a file named /etc/socks.conf and place this into it:

image
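What goes into socks.conf is Dante’s route syntax. As a hedged sketch only—written against Dante 1.1.x, and assuming the dynamic forwarder listens on 127.0.0.1:1080, the default used throughout this chapter—it looks something like this:

```
# /etc/socks.conf -- hypothetical example for the Dante client
route {
        from: 0.0.0.0/0   to: 0.0.0.0/0   via: 127.0.0.1 port = 1080
        protocol: tcp                    # forward TCP connections
        proxyprotocol: socks_v4 socks_v5
}
```

Adjust the port to match whatever you passed to -D.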

Now, when you execute applications, prefacing them with socksify will cause them to communicate over a dynamic forwarder set up on port 1080. Because we’re stuck with a centralized SOCKS configuration file, we both need root access to the system we’re working on and must restrict ourselves to only one dynamic forwarder at a time—check www.doxpara.com/tradecraft or the book’s Web site, www.syngress.com/solutions, for updates on this annoying limitation. Luckily, a few applications—Mozilla and Netscape, most usefully—do have internal SOCKS support and can be configured much as Internet Explorer was. Unluckily, setuid apps (ssh often included, though it doesn’t need setuid anymore) cannot be generically forwarded in this manner. All in all, though, most things work. After SSHing into libertiee with -D1080, this works:

image

Of course, we verify the connection is going through our SSH forward like so:

image

Remote Port Forwards

The final type of port forward that SSH supports is the remote port forward. Although local and dynamic forwards effectively import network resources—an IRC server in the outside world becomes mapped to localhost, or every app under the sun starts talking through 127.0.0.1:1080—remote port forwards export connectivity available to the client onto the server it’s connected to. The syntax is as follows:

ssh -R listening_port:destination_host:destination_port user@forwarding_host

It’s just the same as a local port forward, except that now the listening port is on the remote machine, and the destination ports are those normally visible to the client.

One of the more useful services to forward, especially on the Windows platform (we talk about UNIX-style forwards later), is WinVNC. WinVNC, available at www.tightvnc.com, provides a simple-to-configure remote desktop management interface—in other words, I see your desktop and can fix what you broke. Remote port forwarding lets you export that desktop interface outside your firewall and into mine.

Do we have the VNC server running? Yup:

Dan@EFFUGAS ~

$ telnet 127.0.0.1 5900

Trying 127.0.0.1…

Connected to 127.0.0.1.

Escape character is ‘^]’.

RFB 003.003

telnet> quit

Connection closed.

Connect to another machine, forwarding its port 5900 to our own port 5900.

Dan@EFFUGAS ~

$ ssh -R5900:127.0.0.1:5900 [email protected]

[email protected]'s password:

FreeBSD 4.3-RELEASE (CURRENT-12-2-01) #1: Mon Dec 3 13:44:59 GMT 2001

Test whether the remote machine sees its own port 5900, just as we tested our own:

$ telnet 127.0.0.1 5900

Trying 127.0.0.1…

Connected to localhost.

Escape character is ‘^]’.

RFB 003.003

Note that remote forwards are not particularly public; other machines on 10.0.1.11’s network can’t see this port 5900. The GatewayPorts option in SSHD must be set to allow this—however, such a setting is unnecessary, as later sections of this chapter will show.

When in Rome: Traversing the Recalcitrant Network

You have a server running sshd and a client with ssh. They want to communicate, but the network isn’t permeable enough to allow it—packets are getting dropped on the floor, and the link isn’t happening. What to do? Permeability, in this context, is usually determined by one of two things: What’s being sent, and who’s sending. Increasing permeability then means either changing the way SSH is perceived on the network, or changing the path the data takes through the network itself.

Crossing the Bridge: Accessing Proxies through ProxyCommands

It is actually a pretty rare network that doesn’t directly permit outgoing SSH connectivity; when such access isn’t available, it is often because the network restricts all outgoing connectivity, forcing it through application-layer proxies. This isn’t completely misguided: proxies are a much simpler method of providing back-end network access than modern NAT solutions, and for certain protocols they have the added benefit of being much more amenable to caching. So proxies aren’t useless. There are many, many different proxy methodologies, but because they generally add little or nothing to the cause of outgoing connection security, the OpenSSH developers had no desire to place support for any of them directly inside the SSH client—implementing each of these proxying methodologies would be a Herculean task.

So instead of direct integration, OpenSSH added a general-purpose option known as ProxyCommand. Normally, SSH directly establishes a TCP connection to some port on a given host and negotiates an SSH protocol link with whatever daemon it finds there. ProxyCommand disables this TCP connection, instead routing the entire session through the standard I/O streams of some arbitrary application. That application applies whatever transformations are necessary to get the data through the proxy; as long as the end result is a completely clean link to the SSH daemon, the software is happy. The developers even added a minimal amount of variable completion, with %h and %p flags corresponding to the host and port the SSH client would have connected to had it initiated the TCP session itself. (Host authentication, of course, matches this expectation.)

A quick demo of ProxyCommand:

image
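The option drops just as naturally into the client configuration file. As a sketch—the host alias and proxy address here are placeholders, and connect is the helper program discussed next:

```
# ~/.ssh/config -- hypothetical example
Host inside
    HostName 10.0.1.10
    # route the TCP stream through an HTTP CONNECT proxy
    ProxyCommand connect -H proxy.example.com:8080 %h %p
```

With that in place, plain ssh inside picks up the proxy routing automatically.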

The most flexible ProxyCommand developed so far is Shun-Ichi Goto’s connect.c. You can find this elegant little application at www.imasy.or.jp/~gotoh/connect.c or www.doxpara.com/tradecraft/connect.c. It supports SOCKS4 and SOCKS5 with authentication, and HTTP without:

SSH over SOCKS4

    effugas@OTHERSHOE ~

    $ ssh -o ProxyCommand="connect.exe -4 -S [email protected]:20080 %h %p"
    [email protected]

    [email protected]'s password:

    Last login: Mon Jan 14 03:24:06 2002 from 10.0.1.11

    [effugas@localhost effugas]$

SSH over SOCKS5

    effugas@OTHERSHOE ~

    $ ssh -o ProxyCommand="connect.exe -5 -S [email protected]:20080 %h %p"
    [email protected]

    [email protected]'s password:

    Last login: Mon Jan 14 03:24:06 2002 from 10.0.1.11

    [effugas@localhost effugas]$

SSH over HTTP (HTTP CONNECT, using connect.c)

    effugas@OTHERSHOE ~

    $ ssh -o ProxyCommand="connect.exe -H 10.0.1.11:20080 %h %p"
    [email protected]

    [email protected]'s password:

    Last login: Mon Jan 14 03:24:06 2002 from 10.0.1.11

    [effugas@localhost effugas]$

Tools & Traps …

Borrowing Trails: Using Other Services’ Ports

So you’re working on a network that won’t allow you to directly establish an SSH connection to the server of your choice—but there aren’t any obvious proxies in place, and indeed HTTP and HTTPS traffic work just fine. It may be that SSH is being blocked for no other reason than that it travels over a port other than 80/tcp (HTTP) or 443/tcp (HTTP over SSL).

One really obvious solution is to just run an SSH daemon on these ports! There are a couple ways to implement this:

Reconfigure SSHD Add additional Port entries in sshd_config. Now, which sshd_config is actually the interesting question; due to various configuration screwups, a particular machine can often have several different sshd configurations, only one of which is actually being loaded. Generally, logging in as root and typing ps -xf | grep sshd will reveal the path of the SSH daemon being run; executing /path/sbin/sshd -h will then show which sshd_config file is loaded by default—there will be something along these lines:

    -f file Configuration file (default /usr/local/etc/sshd_config)

    Simply adding Port 80 or Port 443 below the default Port 22 will be sufficient.

Reconfigure inetd Most UNIX systems run a general-purpose network services daemon called inetd, with its configuration file in /etc/inetd.conf. Inetd listens on TCP ports named in /etc/services and launches a specified application when a connection is received. Netcat (nc) can quite effectively be chained with inetd to create port forwardings, as in the following modification to /etc/inetd.conf:

    https stream tcp nowait nobody /usr/local/bin/nc nc 127.0.0.1 22

    It is significant to note that nothing forces netcat to point at localhost; we could just as well point to some other backend SSH daemon by specifying this:

    https stream tcp nowait nobody /usr/local/bin/nc nc 10.0.1.11 22

Create a localhost gateway port forward This is cheap but effective for temporary use: Execute ssh [email protected] -g -L443:127.0.0.1:22 -L80:127.0.0.1:22. The -g option, meaning Gateway, allows nonlocal hosts to connect to local port forwards. Because we’re logged in as root, we can create listeners on ports lower than 1024. So, without having to permanently install any code or modify any configurations, we get to spawn additional listening ports, 80 and 443, for our SSH daemon. The port forward persists only as long as the SSH client stays up, though.

However it’s done, verify TCP connectivity from the client to the SSH daemon by executing telnet host 80 or telnet host 443. If either works, simply running ssh user@host -p 80 or ssh user@host -p 443 is significantly simpler than jonesing for a proxy of some sort.

No Habla HTTP? Permuting thy Traffic

ProxyCommand functionality depends on the capability to redirect the necessary datastream through standard input/output—essentially, what comes from the “keyboard” and is sent to the “screen” (though these concepts get abstracted). Not all systems support this level of communication, and one in particular—nocrew.org’s httptunnel, available at www.nocrew.org/software/httptunnel.html—is extraordinarily useful, for it allows SSH connectivity over a network that will pass genuine HTTP traffic and nothing else. Any proxy that supports Web traffic will support httptunnel—although, to be frank, you’ll certainly stick out even though your traffic is encrypted.

Httptunnel operates much like a local port forward—a port on the local machine is set to point at a port on a remote machine, though in this case the remote port must be specially configured to support the server side of the httptunnel connection. Furthermore, whereas with local port forwards the client may specify the destination, httptunnel’s destination is configured at server launch time. This isn’t a problem for us, though, because we’re using httptunnel only as a method of establishing a link to a remote SSH daemon.

Start the httptunnel server on 10.0.1.10 that will listen on port 10080 and forward all httptunnel requests to its own port 22:

[effugas@localhost effugas]$ hts 10080 -F 127.0.0.1:22

Start an httptunnel client on the client machine; it will listen on port 10022 and bounce any traffic that arrives, through the HTTP proxy at 10.0.1.11:8888, into whatever is being hosted by the httptunnel server at 10.0.1.10:10080:

effugas@OTHERSHOE ~/.ssh

$ htc -F 10022 -P 10.0.1.11:8888 10.0.1.10:10080

Connect ssh to the local listener on port 10022, making sure that we end up at 10.0.1.10:

effugas@OTHERSHOE ~/.ssh

$ ssh -o HostKeyAlias=10.0.1.10 -o Port=10022 [email protected]

Enter passphrase for key '/home/effugas/.ssh/id_dsa':

Last login: Mon Jan 14 08:45:40 2002 from 10.0.1.10

[effugas@localhost effugas]$

Latency suffers a bit (everything is going over standard GETs and POSTs), but it works. Sometimes, however, the problem is less in the protocol and more in the fact that there’s just no route to the other host. For these issues, we use path–based hacks.

Show Your Badge: Restricted Bastion Authentication

Many networks are set up as follows: One server is publicly accessible on the global Internet, and provides firewall, routing, and possibly address translation services for a set of systems behind it. Such publicly accessible servers are known as bastion hosts—they are the interface between the private network and the real world.

It is very common for the occasion to arise that an administrator wants to remotely administer one of the systems behind the bastion. This is usually done like this:

effugas@OTHERSHOE ~

$ ssh [email protected]

[email protected]'s password:

FreeBSD 4.3-RELEASE (CURRENT-12-2-01) #1: Mon Dec 3 13:44:59 GMT 2001

$ ssh [email protected]

[email protected]'s password:

Last login: Thu Jan 10 12:43:40 2002 from 10.0.1.11

[root@localhost root]#

Sometimes it’s even summarized nicely as ssh [email protected] “ssh [email protected]”. However it’s done, this method is brutally insecure and leads to horribly effective mass penetrations of backend systems. The reason is simple: Which host is legitimately trusted to access the private destination? The original client, generally with the user physically sitting in front of its CPU. Which host is actually accessing the private destination? Whose SSH client is talking to the final SSH server? The bastion’s! It is the bastion host that receives and retransmits the plaintext password. It is the bastion host that decrypts the private traffic and may or may not choose to retransmit it unmolested to the original client. It is only by choice that the bastion host does not permanently retain that root access to the backend host. (Even one-time passwords will not protect you from a corrupted server that simply does not report the fact that it never logged out.) These threats are not merely theoretical—major compromises of Apache.org and SourceForge, two critical services in the Open Source community, were traced back to Trojan horses in SSH clients on prominent servers.

These threats can, however, be almost completely eliminated.

Bastion hosts provide the means to access hosts that are otherwise inaccessible from the global Internet. People authenticate against them so as to gain access to these pathways. This authentication is completed using an SSH client, against an SSH daemon on the bastion. Because we already have one SSH client that we (have to) trust—our own—why are we depending on someone else’s as well? Using port forwarding, we can parlay the trust the bastion has in us into a direct connection to the host we wanted to reach in the first place. We can even gain end-to-end secure access to network resources available on the private host, from the middle of the public Net!

image

Like any static port forward, this works great for one or two hosts, when the user can remember which local ports map to which remote destinations, but usability begins to suffer terribly as the need for connectivity increases. Dynamic forwarding provides the answer: We’ll have OpenSSH dynamically specify the tunnels it requires to administer the private hosts behind the bastion. Because OpenSSH lacks the SOCKS4 client support necessary to direct its own dynamic forwards, we’ll once again use Goto’s connect as a ProxyCommand—only this time, we’re bouncing off our own SSH client instead of some open proxy on the network.

image

Access another host without reconfiguring the bastion link. Note that nothing at all changes except for the final destination:

image

Still, it is honestly inconvenient to have to set up a forwarding connection in advance. One solution would be, by some method, to have the bastion SSH daemon pass us, via standard I/O, a direct link to the SSH port on the destination host. With this capability, SSH could act as its own ProxyCommand: The connection attempt to the final destination would proxy through the connection to the intermediate bastion.

This can actually be implemented, with some inelegance. SSH, as of yet, has no capacity to translate between encapsulation types—port forwards can’t point at executed commands, and executed commands can’t directly travel to TCP ports. Such functionality would be useful, but we can do without it by installing, server side, a translator from standard I/O to TCP. Netcat, by Hobbit (Windows port by Chris Wysopal), exists as a sort of “network Swiss Army knife” and provides this exact service.

effugas@OTHERSHOE ~

$ ssh -o ProxyCommand="ssh [email protected] nc %h %p" [email protected]

[email protected]’s password:

[email protected]’s password:

Last login: Thu Jan 10 15:10:41 2002 from 10.0.1.11

[root@localhost root]#
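All nc contributes in that ProxyCommand is a translation between standard I/O and a TCP socket. As a rough illustration of that role—a hypothetical stand-in, not the real netcat—the core loop fits in a few lines of Python:

```python
import os
import select
import socket
import sys

def relay(sock, in_fd, out_fd):
    """Shuttle bytes between a connected TCP socket and a pair of file
    descriptors (stdin/stdout in real use) until either side closes."""
    while True:
        readable, _, _ = select.select([sock, in_fd], [], [])
        if sock in readable:
            data = sock.recv(4096)
            if not data:            # remote side closed the connection
                return
            os.write(out_fd, data)
        if in_fd in readable:
            data = os.read(in_fd, 4096)
            if not data:            # our caller closed stdin
                return
            sock.sendall(data)

if __name__ == "__main__":
    host, port = sys.argv[1], int(sys.argv[2])
    with socket.create_connection((host, port)) as s:
        relay(s, 0, 1)              # 0 = stdin, 1 = stdout
```

The daemon executes this on the bastion, the SSH session carries its standard I/O, and the client speaks SSH straight through to the final destination.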

Such a solution is moderately inelegant—the client should really be able to do this translation internally, and in the near future there may very well be a patch to ssh providing a -W host:port option that does the translation client side instead of server side. But at least using netcat works, right?

There is a problem. Some obscure cases of remote command execution leave file descriptors open even after the SSH connection dies. The daemon, wishing to serve these descriptors, refuses to kill either the app or itself. The end result is zombified processes—and unfortunately, command-forwarding nc can trigger this case. As of the beginning of 2002, these issues are a point of serious discord among OpenSSH developers, for the same code that obsessively prevents data loss from forwarded commands also quickly forms zombie processes out of slightly quirky forwarded commands. Caveat Hacker!

Network administrators wishing to enforce safe bastion activity may go to such lengths as to remove all network client code from the server—Telnet, ssh, even lynx. As a choke point running user-supplied software, the bastion host makes for a uniquely attractive and vulnerable concentration of connectivity to attack. If it weren’t even less secure (or technically infeasible) to trust every backend host to completely manage its own security, the bastion concept would be more dangerous than it’s worth.

Bringing the Mountain: Exporting SSHD Access

A bastion host is quite useful, for it allows a network administrator to centrally authenticate mere access to internal hosts. Using the standards discussed in the previous chapter, a user who cannot strongly authenticate to the host in the middle cannot even transmit connection attempts to backend hosts. But centralization has its own downsides, as Apache.org and SourceForge found—catastrophic and widespread failure is only a single Trojan horse away. We got around this by restricting our use of the bastion host: As soon as we had enough access to reach the one unique resource the bastion host offered—network connectivity to hosts behind the firewall—we immediately combined it with our own trusted resources and refused to expose ourselves any further.

End result? We are left as immune to corruption of the bastion host as we are to corruption of the dozens of routers that may stand between us and the hosts we seek. This isn’t unexpected—we’re basically treating the bastion host as an authenticating router and little more. Quite useful.

But what if there is no bastion host?

What if the machine to manage is at home, on a DSL line, behind one of LinkSys’s excellent Cable/DSL NAT Routers (the only devices known that can NAT IPSec reliably), and there’s no possibility of an SSH daemon showing up directly on an external interface?

What if, possibly for good reason, there’s a desire to expose no services whatsoever to the global Internet? Older versions of SSH and OpenSSH developed severe issues in their SSH1 implementations—even the enormous respect the Internet community has for SSH doesn’t justify the risk of being penetrated.

What if the need for remote management is far too fleeting to justify the hardware or even the administration cost of a permanent bastion host?

No problem. Just don’t have a permanent server. A bastion host is little more than a system through which the client can successfully communicate with the server; although it is convenient to have permanent infrastructure and user accounts set up to manage this communication, it’s not particularly necessary. SSH can quite effectively export access to its own daemon through the process of setting up Remote Port Forwards. Let’s suppose that the server can access the client, but not vice versa—a common occurrence in the realm of multilayered security, where higher levels can communicate down:

# 10.0.1.11 at work here

bash-2.05a$ ssh -R2022:10.0.1.11:22 [email protected]

[email protected]’s password:

[effugas@localhost effugas]$

image

So even though the host at work that we're sitting on is firewalled from the outside world, we can SSH to our box at home and hand it a port that connects back to the SSH daemon on our work machine.
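Putting the round trip together, assuming the home box is reachable as home.example.com and the work machine's daemon listens on 10.0.1.11:22 (host names, user names, and the port 2022 here are all illustrative):

```shell
# On the work machine (behind the firewall): export our own SSH daemon
# to port 2022 on the home box via a remote port forward.
ssh -R2022:10.0.1.11:22 effugas@home.example.com

# Later, from the home box itself: connect back into work through the
# forwarded port. HostKeyAlias keeps known_hosts keyed to the real
# identity rather than to "127.0.0.1".
ssh -p 2022 -o HostKeyAlias=work.example.com effugas@127.0.0.1
```

The forward lives only as long as the first ssh session does, which is exactly the point: no permanent server, no permanent exposure.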

Tools & Traps …

“Reverse” Clients

The problem of client access when servers can initiate sessions with a client but not vice versa is usually solved with "clients" that wait around for "servers" to send them a session, X Windows style, and indeed every so often somebody publicly asks for a mode in the SSH client that allows sshd to connect to it. Such solutions, if not engineered in from the beginning of the protocol and implementation, are misguided at best and horribly insecure at worst. Using remote port forwards to forward SSHD, instead of Web access or anything else, is merely a unique extension of well-established and generically secure methodologies that are used all the time; embedding a barely used client in sshd and a server in ssh is an overspecialized and unnecessary disaster waiting to happen.

This is primarily in response to a constant stream of requests I’ve seen for this type of feature. (Take the vitriol with a grain of salt, however: Somebody’s going to have a bone to pick with half the techniques in this chapter, if not this book.)

Echoes in a Foreign Tongue: Cross–Connecting Mutually Firewalled Hosts

Common usage of the File Transfer Protocol among administrators managing variously firewalled networks involves the host that can’t receive connections always generating outgoing links to the host that can, regardless of the eventual direction of data flow. (FTP itself, a strange protocol to say the least, needs to be put into something called Passive Mode in order to keep its connections ordered in the same direction. Passive Mode FTP involves the server telling the client a port that, if connected to, will output the contents of a file. By contrast, Active Mode involves the client, which had earlier initiated an outgoing connection to the server, now asking the server to make an outgoing connection back to the client on some random port in order to deposit a file. Since the direction of the session changes, and the ports vary unpredictably, firewalls have had great difficulty adjusting to what otherwise is one of the grand old protocols of the Internet.) Both Napster and Gnutella have systems for automatically negotiating which side of a transaction can’t receive connection requests, and having the other one create the TCP link. Upon an establishment of the link, the file is either pushed (with a PUT) or pulled (with a GET) onto the host that requires the file.

Notes from the Underground …

Handshake-Only Connection Brokering

Full connection bouncing can place a serious bottleneck on the bouncer in the middle, because it must see all traffic in each direction twice: once as it receives the packets, and again as it sends them on. Hence the lack of support for these systems within even the most ambitious P2P projects. There are highly experimental systems for allowing the host in the middle to simply broker the connection, providing connection acceptance “glue” for two hosts both requesting outgoing links. Those methods are described at the end of Chapter 12 and are not guaranteed to work at all (we barely developed them in time for the production of this book!). The methods described here, by contrast, are far more proven and reliable.

This works great when one side or the other can receive connection requests, but what if neither side can? What if both hosts are behind home NAT routers, and even have the exact same private IP address? Worse, what happens when both hosts are running behind a hardcore Cisco corporate firewall layer, and there's a critical business need for the two to be able to communicate? Generally, management orders both IT staffs to fight it out over which one has to pop a hole in their firewall to let the other side through. Because the most paranoid members of IT are necessarily the ones who manage the firewall, this can be a ludicrously slow and painful process, completely impossible unless the need is utterly undeniable—and possibly permanent.

Sometimes, a more elegant (if maverick and possibly job–threatening—Caveat Hacker Redux) solution is in order. The general purpose solution to a lack of direct network connectivity is for a third host, called a Connection Bouncer, to receive outgoing connections from both hosts, then bounce traffic from the first to the second and vice versa.

Proxy servers in general are a form of connection bouncer, but they rarely do any gender changing—an outgoing connection request is forwarded along for an incoming connection response from some remote Web server or something of that sort. That's not going to be useful here. There are small applications that will turn a server into a bouncer, but they're somewhat obscure and not always particularly portable. They also almost universally lack cryptographic functionality—not always necessary, but useful to have available.

Luckily, we don't need either. Recall that we first described a system by which a client, unable to initiate a link directly with a server, instead authenticated itself to a bastion host and used the network path available through that host to create an end-to-end secure SSH link. Then, we described a system where, there being no bastion host for the client to connect to, the server itself initiated its own link to the outside world, exporting a path via a remote port forward for the client to tunnel back through. Now, it just so happened that this path was exported directly onto the client—but it didn't need to be. In fact, the server could have remote port forwarded its own SSH daemon onto any host mutually accessible to both itself and the client; the client would merely then have to treat this mutually accessible host as the bastion host it suddenly was. Combining the two methods:

image

image
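Concretely, the combination might look like the following sketch. The mutually reachable machine middle.example.com, the user names, and the ports 2022/3022 are all illustrative:

```shell
# Host A (firewalled): export its own SSH daemon to port 2022 on a
# machine both sides can reach, turning it into an ad-hoc bastion.
ssh -R2022:127.0.0.1:22 effugas@middle.example.com

# Host B (also firewalled): import that exported port back home...
ssh -L3022:127.0.0.1:2022 effugas@middle.example.com

# ...then connect end-to-end. HostKeyAlias verifies we really reached
# Host A's daemon, not one belonging to the host in the middle.
ssh -p 3022 -o HostKeyAlias=hostA effugas@127.0.0.1
```

Neither firewall had to change; the middle host sees only ciphertext, exactly as a router would.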

Not In Denver, Not Dead: Now What?

After any number of contortions, you've finally found yourself at the endpoint you've been attempting to tunnel to this entire time. Which raises the question: Now what? Of course, you can administer whatever you need to through the remote shell, or connect to various network hosts that this launching point possesses network access to. But SSH offers quite a bit more, especially once command forwarding is brought into the picture. The most important thing to take away from this chapter is that all these methods chain together quite well; the following examples show methods described earlier being connected together, LEGO-style, in new and interesting ways.

Standard File Transfer over SSH

The standard tool for copying files inside of an SSH tunnel is Secure Copy (scp). The general syntax mirrors cp quite closely, with paths on remote machines specified as user@host:/path. For example, the following copies the local file dhcp.figure.pdf to /tmp on the remote host 10.0.1.11:

image

Much like cp, copying a directory requires the addition of the -r flag, ordering the tool to recursively travel down through the directory tree. Scp is modeled after rcp, and does the job, but honestly doesn't work very well. Misconfigured paths often cause the server side of scp to break, and it is impossible to specify ssh command-line options. That doesn't mean it's impossible to use some of the more interesting tunneling systems; scp does allow ssh to be reconfigured through the more verbose config file interface. You can find the full list of configurable options by typing man ssh; the following specifies a HostKeyAlias for verifying the destination of a locally forwarded SSH port:

image
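Depending on your OpenSSH version, these config options can either be passed on the scp command line with -o or placed in ~/.ssh/config. A sketch, with the port 2022 and the alias assumed from a forward set up earlier with -L2022:10.0.1.10:22:

```shell
# Copy through the locally forwarded port. HostKeyAlias makes scp
# verify the host key of 10.0.1.10, the real destination, rather than
# the key of "127.0.0.1".
scp -o "Port 2022" -o "HostKeyAlias 10.0.1.10" \
    dhcp.figure.pdf root@127.0.0.1:/tmp
```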

Now, we're getting root access to 10.0.1.10, and it's being piped through 10.0.1.11. What if 10.0.1.11, instead of respecting our request to forward packets along to another host's SSH daemon, sent them off to its own? In other words, what if the server was corrupted to act as if it had been issued -L2022:127.0.0.1:22 instead of -L2022:10.0.1.10:22? Let's try it:

image

image

There is a major caveat to this: It is very important to actually manage identity keys for SSH! It is only because a valid key was in the known_hosts2 file in the first place that we were able to tell the SSH daemon that responded when we were negotiating with the correct host apart from the one that responded when we were negotiating with the wrong one. One of the biggest failings of SSH is that, due to some peculiarities in upgrading the servers, it's a regular occurrence for servers to change their identity keys. This trains users to accept any change in keys, even if the change comes from an attacker. Dug Song exploited this usability pitfall in his brilliant sniffing package, dsniff, available at www.monkey.org/~dugsong/dsniff/, and showed how users can be easily tricked into allowing a “monkey in the middle” to take over even an SSH1 session.

Incremental File Transfer over SSH

Though a standard component of only the most modern UNIX environments, rsync is one of the most highly respected pieces of code in the Open Source constellation. rsync is essentially an incremental file updater; both the client and the server exchange a small amount of summary data about the file contents they possess, determine which blocks of data require updating, and exchange only those blocks. If only 5MB of a 10GB disk have changed since the last rsync, total bandwidth spent syncing the client with the server will be little more than five megs.

You can find rsync at http://rsync.samba.org, which is unsurprising considering that its author, Andrew Tridgell, was also responsible for starting the Samba project that allows UNIX machines to participate in Windows file sharing.

The tool is quite simple to use, especially over ssh. Basic syntax closely mirrors scp:

dan@OTHERSHOE ~

$ rsync -e ssh dhcp.figure.pdf [email protected]:/tmp

[email protected]’s password:

Unlike scp, rsync is rather silent by default; the -v flag will provide more debugging output. Like scp, -r is required to copy directory trees; particularly on the Windows platform, there is a significant delay for directory scanning before any copying begins.

rsync has a nicer syntax for using alternate variations of the ssh transport; the -e option directly specifies the command line to be used for remote command execution. To force the use of not only SSH but specifically the SSH1 protocol, simply use the following command:

dan@OTHERSHOE ~

$ rsync -e "ssh -1" dhcp.figure.pdf [email protected]:/tmp

[email protected]’s password:

rsync is an extraordinarily efficient method of preventing redundant traffic, and would be particularly well suited for efficient updates to the type of dynamic content we see regularly on Web sites. A recent entry on the inimitable Sweetcode (www.sweetcode.org) described Martin Pool’s rproxy, an interesting attempt to migrate the rsync protocol into HTTP itself. It’s a good idea, elegantly and efficiently implemented as well. Martin reports “An early implementation of rproxy achieved bandwidth savings on the order of 90 percent for portal Web sites.” This is not insignificant, and certainly justifies additional processing load. Though it remains to be seen how successful his effort will be, rsync through httptunnel’d SSH works quite well. (Again, httptunnel is available from the folks at nocrew; point your browser at www.nocrew.org/software/httptunnel.html). To wit:

Start the httptunnel server:

[effugas@localhost effugas]$ hts 10080 -F 127.0.0.1:22

Start a httptunnel client:

effugas@OTHERSHOE ~/.ssh

$ htc -F 10022 -P 10.0.1.11:8888 10.0.1.10:10080

Rsync a directory through local port 10022, verifying that the tunnel terminates at 10.0.1.11. Show which files are being copied as we copy them by using the -v flag:

image
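Assuming the htc client above is listening on local port 10022, the rsync invocation might look like the following sketch (the directory name is illustrative; the HostKeyAlias value follows the text's verification target):

```shell
# Push a directory tree through the httptunnel'd SSH session; -v shows
# each file as it is copied, -r recurses through the tree.
rsync -v -r -e "ssh -p 10022 -o HostKeyAlias=10.0.1.11" \
    backups/ effugas@127.0.0.1:/tmp
```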

Tools & Traps …

Improving the Performance of SSH

SSH has been designed with many goals in mind; performance, actually, has not until quite recently become a point of serious development. (The observant will note that, for all the discussion of file transfer methodologies, SFTP, the heir apparent for secure remote file access, is not discussed at all. I don’t feel it’s mature yet, though this is debatable.) There are a number of steps that can be taken to speed up traffic on an SSH session that are useful to know:

image Enable compression by using the -C flag. At the cost of some processor time and probably latency, SSH will apply zlib compression to the datastream. This can significantly increase overall throughput for many kinds of traffic.

image Change symmetric crypto algorithms by using the -c cipher flag. Triple-DES is many things, but even remotely efficient is not among them. AES128-cbc, for 128-bit AES in Cipher Block Chaining mode, is used by default for SSH2 connections. This is generally agreed to be as trustworthy as Triple-DES, despite the mild hand-wringing over its number of rounds. However, both blowfish and especially arcfour are much faster algorithms, and they work in both SSH1 and SSH2.

image Downgrade to SSH1 using the -1 flag. This is honestly not recommended, but it is still better than spewing plaintext over the wire.

image Obviously, the more hacks in place to achieve network connectivity, the slower the system is going to be. Often, it is useful to use SSH as a method of solving chicken–and–egg problems where a change won’t occur until value is shown, but value cannot be shown until the change has occurred. Once the hack (call it a “proof of concept”) is in place via SSH, the value can be shown and the change approved.
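Combined, the flags above might look like this in practice (host names are illustrative, and exact cipher names vary between protocol versions and builds; check man ssh for what yours supports):

```shell
# SSH2: enable compression and swap in a cipher cheaper than 3DES;
# arcfour is among the fastest OpenSSH offers for protocol 2.
ssh -C -c arcfour user@host

# SSH1 fallback: blowfish beats the 3DES default here as well.
ssh -C -1 -c blowfish user@host
```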

CD Burning over SSH

The standard UNIX method for burning a set of files onto a CD–ROM disc uses two tools. First, mkisofs (Make ISO9660 File System) is invoked to pack a set of files into the standard file system recognized on CD–ROMs. Then, the resulting “ISO” is sent to a separate app, cdrecord, for burning purposes. The entire procedure usually proceeds as follows:

image

Then, we select a directory or set of files we wish to burn, and have mkisofs attach both Joliet and Rock Ridge attributes to the filenames; this enables longer filenames than the base ISO9660 standard supports. It's also often useful to add the -f flag to mkisofs, so that it will follow symlinks, but we'll keep it simple for now:

image

Notice that we had to sit around and wait while a bunch of disk space got wasted. A much more elegant solution is to take the output from mkisofs and stream it directly into cdrecord—and indeed, this is how most burning occurs on UNIX:

image

image

Once again, the important rule to remember is that almost any time you’d use a pipe to transfer data between processes, SSH allows the processes to be located on other hosts. Because file system creation and file system burning are split, we can create on one machine and burn onto another:

image
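That cross-host pipeline might look like the following sketch; the dev= target and speed are illustrative (cdrecord -scanbus on the burning machine lists the real ones):

```shell
# Build the ISO9660 image locally with Joliet and Rock Ridge names,
# stream it over SSH, and burn it on the machine that actually has the
# CD writer; no intermediate file ever touches a disk.
mkisofs -J -R /path/to/files | \
    ssh effugas@burnhost "cdrecord dev=0,6,0 speed=4 -"
```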

The speed and reliability of the underlying network architecture is critical to maintaining a stable burn; if too much time passes without new content to send to the disc, nothing further can be written at all and the disc is wasted (unless your drive supports a new and useful technology called BurnProof, which most do not). If a burn needs to be executed over a slow or unreliable network, we can take advantage of SSH's ability to remotely execute not just one but a sequence of commands: in this case, to retrieve the ISO, burn it, then delete it afterward. The following formatting exists for readability only; the only thing necessary to execute multiple commands using a single invocation of ssh is a semicolon between commands.

image
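A sketch of that sequence, with the path and device target assumed; the backslashes are purely for readability, and only the semicolons between the remote commands are required:

```shell
# One ssh invocation: receive the ISO onto the burner's local disk,
# burn it from there at full speed, then clean up.
mkisofs -J -R /path/to/files | \
    ssh effugas@burnhost "cat > /tmp/burn.iso; \
        cdrecord dev=0,6,0 speed=4 /tmp/burn.iso; \
        rm /tmp/burn.iso"
```

Because cdrecord reads from fast local disk rather than the network, a slow or jittery link can no longer starve the burn.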

Acoustic Tubing: Audio Distribution over TCP and SSH

Occasionally, you need to do something just because, well, it's actually cool. Although copying files all around is useful, it's not necessarily entertaining. Using a FreeBSD machine hooked up to your stereo system as output for Winamp in your lab/office/living room—now that's entertainment! How can it work? Winamp has a plug-in, called the SHOUTcast DSP, built for streaming the output of the player to an online radio station for redistribution to other players. It encapsulates whatever comes out of Winamp in a compressed fixed-bitrate MP3 stream and sends it off to the radio server. I see a general-purpose encapsulator for Winamp sound, and I have a better idea:

1. Because you're going to be playing a streaming MP3 directly to speakers from a UNIX environment, you'll need player software: either mpg123 or madplay. mpg123 is the de facto standard UNIX MP3 player, but has its weaknesses in sound quality. madplay is an extremely high-quality player, but at least on FreeBSD has occasional stability issues. You can find mpg123 at www.mpg123.de; madplay is retrievable from www.mars.org/home/rob/proj/mpeg/.

2. You’re not just streaming an MP3 brought in from somewhere—you have to look like you’re a radio station, at least a little. Don’t worry, there’s no need to re–implement their entire protocol. You just need to act like you accept their password, whatever it is. That basically means sending them an “OK” the moment they connect, upon which you start receiving their MP3 stream. So, instead of

    mpg123 - # play MP3s being piped in

    we use

    sh -c 'echo OK; exec mpg123 -' # first say OK, then play MP3s being piped in

3. Choose a port for SHOUTcast, then add one: the port you chose refers to what users listen on, not what your player streams into. SHOUTcast on port 8000 serves data to users on 8000 but receives music on 8001. It's a bit nonstandard, but it does simplify things. Add the port+1 to /etc/services as the service "shout", like so:

    su-2.05a# grep shout /etc/services
    shout           8001/tcp

    (We’ll presume for the rest of this document that you picked 8000.)

4. Now that you’ve got a port to listen on and a “daemon” that knows what to do, you can combine the two in inetd.conf and actually play whatever comes in:

    shout stream tcp nowait root /bin/sh sh -c 'echo OK; exec mpg123 -'

    It's almost always a bad thing to see "root" next to "sh" in an application that's connected to the network (it is guaranteed that efficiency-obsessed MP3 players have buffer overflows), but you do need access to the sound device. You can loosen permissions on the sound device by typing chmod 0666 /dev/dsp or chmod 0666 /dev/dsp0, then execute mpg123 with no special permissions except the right to be noisy:

    shout stream tcp nowait nobody /bin/sh sh -c 'echo OK; exec mpg123 -'

    Linux Users, Especially Red Hat: It is possible that your distribution ships with xinetd instead of inetd—you’ll know because of the presence of the directory /etc/xinetd.d. In that case, your process is instead:

a. Create a file, /etc/xinetd.d/shout.

b. Throw the following text into it:

image

c. Restart xinetd by typing /etc/rc.d/init.d/xinetd restart.

5. Finally, you need the SHOUTcast DSP, available at www.shoutcast.com/download/broadcast.phtml. For various reasons, you’re going to encapsulate it inside of Mariano Hernan Lopez’s excellent SqrSoft Advanced Crossfading Output plug–in, available at www.winamp.com/plugins/detail.jhtml?componentId=32368. First, you need to set up the cross–fader:

a. Load Winamp and right-click on the face of it. Choose Options | Preferences, then Plugins—Output. Choose SqrSoft Advanced Crossfading and click Configure.

b. Click the Buffer tab. Match the setting shown in Figure 13.5.

image

Figure 13.5 Cross Fading Configuration

c. Click the Advanced tab. Activate Fade-On–Seek.

d. Click the DSP tab. Choose the Nullsoft SHOUTcast Source DSP.

e. Click OK for everything and restart Winamp.

6. At this point, a new window will pop up with Winamp—this controls the SHOUTcast DSP and annoyingly can’t be minimized. Here’s how to configure it:

a. Click the Input tab. Make sure the Input Device is Winamp. (You can also set this system to work off your sound card, meaning you could pipe the output of your system microphone out to the world.)

b. Click the Encoder tab. Make sure Encoder 1 is set to MP3 Encoder, with settings of 256Kbps, 44,100Hz, Stereo.

c. Click the Output tab. Set Address to the IP address of your server, and use port 8000, one less than the port the server is actually listening on. Make sure Encoder is set to 1.

d. Click Connect and Play on Winamp itself. Ta–dah! (see Figure 13.6)

image

Figure 13.6 Winamp Streaming to a Remote Audio System

7. This wouldn't be complete without a discussion of how to tunnel this over SSH. There are two main methods: the first applies when the daemon exists independent of the tunnel (for example, if you're streaming to an offsite radio server after all!); the second, when the daemon is started with the tunnel. The second has the advantage of not leaving a permanent path open for anyone to spew noise out of what might be good speakers … for a short while.

    

image Independent daemon Assuming you have enough access to modify inetd.conf or xinetd, just execute ssh -L8001:127.0.0.1:8001 user@mp3player. Either launch Winamp using SocksCap or, more likely, just change the IP address for server output to 127.0.0.1. If you're actually trying to tunnel into a real shoutcast/icecast server, replace 8001 with the port everyone listens on plus one.

image Dependent daemon This requires netcat, compiled with -DGAPING_SECURITY_HOLE at the client side, no less. Still, it's a decently useful general-purpose method to know. It works like this:

    $ ssh -L18001:127.0.0.1:18001 [email protected] "nc -l -p 18001 -e ./plaympg.sh"

    [email protected]’s password:

    (plaympg.sh is little more than a file containing #!/bin/sh -c 'echo OK; exec mpg123 -'.)
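The handshake trick at the heart of step 4 is easy to sanity-check locally before wiring it into inetd. Substituting cat for mpg123 (a stand-in; both just consume stdin) shows the "OK" being emitted before the stream is passed through:

```shell
# Stand-in for the inetd payload: greet the SHOUTcast DSP with "OK",
# then hand the rest of stdin to the player. 'cat -' substitutes for
# 'mpg123 -' so this can be run on any machine, sound card or not.
printf 'MP3DATA' | sh -c 'echo OK; exec cat -'
# Prints "OK" on its own line, then echoes the streamed bytes.
```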

Summary

“My son, you’ve seen the temporary fire and the eternal fire; you have reached the place past which my powers cannot see. I’ve brought you here through intellect and art; from now on, let your pleasure be your guide; you’re past the steep and past the narrow paths. Look at the sun that shines upon your brow; look at the grasses, flowers, and the shrubs born here, spontaneously, of the earth. Among them, you can rest or walk until the coming of the glad and lovely eyes–those eyes that, weeping, sent me to your side. Await no further word or sign from me: your will is free, erect, and whole–to act against that will would be to err: therefore I crown and miter you over yourself.” — [Virgil’s last words to Dante as he gives Dante the power to guide himself. Canto XXVII, Purgatorio (IGD Solutions)]

Various issues have forced the return of explicit tunneling solutions. When designing these solutions, looking for generic encapsulations usually leads to more effective solutions, though your mileage may vary. Primary concerns for tunnel design include the following:

image Privacy (“Where Is My Traffic Going?”)

image Routability (“Where Can This Go Through?”)

image Deployability (“How Painful Is This to Get Up and Running?”)

image Flexibility (“What Can We Use This for, Anyway?”)

image Quality (“How Painful Will This System Be to Maintain?”)

As a general rule, we want to create tunnels that are end-to-end secure—despite whatever methods are needed to get a link from point A to point B, the cryptography should run between these two endpoints alone whenever possible. To be specific, the process involves creating a path from client to server, independently authenticating and encrypting over this new valid path, then forwarding services over this virtual independent link. OpenSSH is one of the better packages available for creating end-to-end tunnels.

Authentication in OpenSSH is handled as follows: Clients authenticate servers using stored host keys; the first connection is used to authenticate all future links. The keys may be distributed in advance but no unified and particularly elegant solution yet exists to do this. Servers authenticate clients using passwords or remotely verified private keys. Clients may place a password on their keys, and use agent software to prevent themselves from needing to once again type in a password for every connection attempt. It deserves special note that a single account—even a root account—can authorize access to multiple keyholders.

OpenSSH can forward commands. Simply appending the command name you wish to execute at the end of an ssh invocation will cause the command to be executed remotely as if it were a local command. A -t option is needed if the remote command expects to be able to draw to the screen. Command forwarding allows for significant work to be done with simple pipes, like highly customized file transfer. Finally, su can be made secure, thanks to the highly restricted environment within which ssh can be made to execute commands.

OpenSSH can also forward TCP ports. Local port forwards import a single port of connectivity from afar, limiting their usefulness for many protocols. Dynamic port forwards import an entire range of connectivity from afar, but require applications to be able to issue SOCKS requests to point their forwards as needed. Many Windows applications have inherent SOCKS support, and most apps on both Windows and UNIX can be “socksified” using publicly available wrappers. Finally, remote port forwards export a single port of connectivity to the outside world.

OpenSSH has special capabilities for traversing hard to navigate networks. ProxyCommands allow SSH’s connectivity to be redirected through arbitrary command–line applications. One application, Connect, grants SSH the capability to tunnel over a wide range of proxies. This can be overkill, though—often simply using SSH over the HTTP or HTTPS ports (80 or 443) is enough to get through many networks. When this isn’t possible, HTTPTunnel allows for SSH to travel over any network that supports normal Web traffic.

OpenSSH can also authenticate itself against a bastion host that stands between client and server, set up a route through that host, and independently authenticate against the originally desired server. The server can also SSH into the client, export access to its own SSH daemon, and thus be remotely administered. These can be combined, thus access can be both imported and exported allowing two mutually firewalled hosts to meet at some middle ad–hoc bastion host and establish a session through there.

There are some interesting and useful techniques you can deploy. You can easily copy files over scp, which itself can be forwarded using methods described earlier. You can incrementally (and efficiently) update entire directory trees using rsync, even through an HTTP tunnel. You can burn CDs over a network by running mkisofs locally and piping the output into a remote cdrecord process. You can stream audio over a network directly into an audio system using SHOUTcast, inetd, and mpg123. You can also encrypt that audio while in transit.

Solutions Fast Track

Strategic Constraints of Tunnel Design

image Encapsulating approaches that capture traffic without needing to know the nature of it are generally more effective solutions.

image End-to-end security will limit threats from intermediary hosts and routers. Primary concerns of tunnel design include privacy (where is my traffic going?), routability (where can this go through?), deployability (how painful is this to get up and running?), flexibility (what can we use this for, anyway?), and quality (how painful will this system be to maintain?).

Designing End–to–End Tunneling Systems

image End–to–end tunnels a la gateway cryptography create a valid path from client to server, independently authenticate and encrypt over this new valid path, and forward services over this independent link.

image End–to–end security limits threats from intermediary hosts and routers.

image OpenSSH is one of the best packages available for creating end–to–end tunnels.

Open Sesame: Authentication

image Basic SSH connection syntax: ssh user@host

image Clients authenticate servers by using stored host keys; the first connection is used to authenticate all future links. The keys may be distributed in advance but no elegant solution yet exists to do this.

image Servers authenticate clients by using passwords or remotely verified private keys. Clients may place a password on their keys and use agent software to prevent themselves from needing to once again type in a password for every connection attempt.

image A single account—even a root account—can authorize access to multiple keyholders.

image OpenSSH public key authentication commands include:

image Generate SSH1 or SSH2 keypair ssh-keygen or ssh-keygen -t dsa

image Cause remote host to accept SSH1 keypair in lieu of password cat ~/.ssh/identity.pub | ssh -1 [email protected] "cd ~ && umask 077 && mkdir -p .ssh && cat >> ~/.ssh/authorized_keys"

image Cause remote host to accept SSH2 keypair in lieu of password cat ~/.ssh/id_dsa.pub | ssh [email protected] "cd ~ && umask 077 && mkdir -p .ssh && cat >> ~/.ssh/authorized_keys2"

image Add passphrase to SSH1 or SSH2 key ssh-keygen.exe -p or ssh-keygen.exe -d -p

image Start SSH key agent (prevents you from having to type the passphrase each time) ssh-agent bash

image Add SSH1 or SSH2 key to agent ssh-add or ssh-add ~/.ssh/id_dsa

Command Forwarding: Direct Execution for Scripts and Pipes

image Simply appending the command name you wish to execute at the end of an SSH invocation will cause the command to be executed remotely as if it were a local command. A -t option is needed if the remote command expects to be able to draw to the screen.

image Command forwarding allows for significant work to be done with simple pipes, like highly customized file transfer.

image Execute command remotely ssh user@host command

image Pipe output from remote command into local command ssh user@host "remote_command" | local_command

image Get file ssh user@host "cat file" > file

image Put file cat file | ssh user@host "cat > file"

image List directory ssh user@host ls /path

image Get many files ssh user@host "tar cf - /path" | tar -xf -

image Put many files tar -cf - /path | ssh user@host "tar -xf -"

image Resume a download ssh user@host "tail -c remote_filesize-local_filesize file" >> file

image Resume an upload tail -c local_filesize-remote_filesize file | ssh user@host "cat >> file"

image su can be made secure, thanks to the highly restricted environment within which ssh can be made to execute commands.

image Safely switch users ssh -t user@host "/bin/su -l user2"

Port Forwarding: Accessing Resources on Remote Networks

■ Local port forwards import a single port of connectivity from afar, limiting their usefulness for many protocols.

■ Dynamic port forwards import an entire range of connectivity from afar, but require applications to be able to issue SOCKS requests to point their forwards as needed.

■ Many Windows applications have inherent SOCKS support, and most apps on both Windows and UNIX can be "socksified" using publicly available wrappers.

■ Remote port forwards export a single port of connectivity to the outside world.

■ OpenSSH port forwarding commands include:

■ Forward local port 6667 to some random host's port 6667 as accessed through an SSH daemon: ssh user@host -L6667:remotely_visible_host:6667

■ Dynamically forward local port 1080 to some application-specified host and port, accessed through an SSH daemon: ssh user@host -D1080

■ Forward remote port 5900 to some random host's port 5900 as accessible by our own SSH client: ssh user@host -R5900:locally_visible_host:5900
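Forwards you use repeatedly need not be retyped; the same three forms can live in the client configuration. A sketch of an assumed ~/.ssh/config entry, with host names and ports mirroring the examples above:

```
# ~/.ssh/config -- persistent equivalents of -L, -D, and -R
Host tunnelhost
    HostName host
    User user
    LocalForward 6667 remotely_visible_host:6667
    DynamicForward 1080
    RemoteForward 5900 locally_visible_host:5900
```

With this in place, a bare "ssh tunnelhost" establishes all three forwards at once.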

When in Rome: Traversing the Recalcitrant Network

■ ProxyCommands allow SSH's connectivity to be redirected through arbitrary command-line applications. One application, Connect, grants SSH the ability to tunnel over a wide range of proxies.

■ To summarize OpenSSH ProxyCommands:

■ Basic usage: ssh -o ProxyCommand="command" user@host

■ Use netcat instead of the internal TCP socket to connect to the remote host: ssh -o ProxyCommand="nc %h %p" user@host

■ Use Goto's connect.c to route through a SOCKS4 daemon on proxy_host:20080 to connect to the remote host: ssh -o ProxyCommand="connect.exe -4 -S proxy_user@proxy_host:20080 %h %p" user@host

■ Use Goto's connect.c to route through a SOCKS5 daemon on proxy_host:20080 to connect to the remote host: ssh -o ProxyCommand="connect.exe -5 -S proxy_user@proxy_host:20080 %h %p" user@host

■ Use Goto's connect.c to route through an HTTP daemon on proxy_host:20080 to connect to the remote host: ssh -o ProxyCommand="connect.exe -H proxy_user@proxy_host:20080 %h %p" user@host

■ Often, simply running SSH over the HTTP or HTTPS ports (80 or 443) is enough to get through many networks.
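When every outbound connection must traverse the same proxy, the ProxyCommand can be pinned in the client configuration rather than supplied on each invocation. A sketch of an assumed ~/.ssh/config entry, with the proxy address mirroring the examples above:

```
# ~/.ssh/config -- route all SSH connections through the HTTP proxy
Host *
    ProxyCommand connect.exe -H proxy_user@proxy_host:20080 %h %p
```

A more restrictive Host pattern can limit this to destinations that actually require the proxy.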

■ HTTPTunnel allows SSH to travel over any network that supports normal Web traffic.

■ Forward HTTP traffic from local port 10080 to the SSH daemon on localhost: hts 10080 -F 127.0.0.1:22

■ Listen for SSH traffic on port 10022, translate it into HTTP-friendly packets, throw it through the proxy on proxy_host:8888, and have it delivered to the httptunnel server on host:10080: htc -F 10022 -P proxy_host:8888 host:10080

■ Send traffic to localhost port 10022, but make sure we verify our eventual forwarding to the final host: ssh -o HostKeyAlias=host -o Port=10022 user@127.0.0.1

■ SSH can authenticate itself against a bastion host that stands between client and server, set up a route through that host, and independently authenticate against the originally desired server.

■ The server can also SSH into the client, export access to its own SSH daemon, and thus be remotely administered.

■ Access can be both imported and exported, allowing two mutually firewalled hosts to meet at some middle ad hoc bastion host and establish a session through there.

■ Commands for importing access to an SSH daemon from a bastion host:

■ Set up a local forward to an SSH daemon accessible through a bastion host: ssh -L2022:backend_host:22 user@bastion

■ Independently connect to the SSH daemon made accessible in the preceding bullet: ssh -o HostKeyAlias=backend_host -p 2022 user@127.0.0.1

■ Set up a dynamic forwarder to access the network visible behind some bastion host: ssh -D1080 user@bastion

■ Connect to some SSH daemon visible to the bastion host connected in the preceding bullet: ssh -o ProxyCommand="connect -4 -S 127.0.0.1:1080 %h %p" user@backend_host

■ Set up no forwarder in advance; directly issue a command to the bastion host to link you with some backend host: ssh -o ProxyCommand="ssh user@bastion nc %h %p" user@backend_host
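That last form generalizes nicely: a single client-side configuration stanza can make every machine behind the bastion reachable by name. A sketch of an assumed ~/.ssh/config entry (the backend_* naming pattern is illustrative):

```
# ~/.ssh/config -- hop through the bastion for any backend host
Host backend_*
    ProxyCommand ssh user@bastion nc %h %p
```

Thereafter, "ssh user@backend_host" transparently tunnels through the bastion, and host key verification still happens end to end against the backend itself.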

■ Commands for exporting SSH connectivity to a bastion host (or client) from a system with an SSH daemon:

■ Export access to our SSH daemon to some client's local port 2022: ssh -R2022:127.0.0.1:22 user@client

■ Connect back through an exported port forward, while verifying the server's identity: ssh -o HostKeyAlias=backend_host -p 2022 user@127.0.0.1

■ It's possible to both import and export, creating a "floating bastion host" at which both hosts meet. This is most useful for allowing two hosts, mutually firewalled from one another, to securely meet at some arbitrary site and safely communicate with one another.

Not in Denver, Not Dead: Now What?

■ Files may easily be copied using scp, which itself can be forwarded.

■ Copy a file to a remote host: scp file user@host:/path

■ Copy a file over a local port forward: scp -o 'HostKeyAlias backend_host' -o 'Port 2022' file user@backend_host:/tmp

■ Entire directory trees can be incrementally (and efficiently) updated by using rsync, even through an HTTP tunnel.

■ Synchronize a file with a remote host (only update what's necessary): rsync -e ssh file user@host:/path/file

■ Specify SSH1 for rsync: rsync -e "ssh -1" file user@host:/path/file

■ Rsync through an HTTP tunnel:

■ Start the HTTPTunnel server: hts 10080 -F 127.0.0.1:22

■ Start the HTTPTunnel client: htc -F 10022 -P proxy_host:8888 host:10080

■ Rsync an entire directory through the tunnel, with details: rsync -v -r -e "ssh -o HostKeyAlias=host -o Port=10022" path user@127.0.0.1:/path

■ CDs can be burned directly over a network by running mkisofs locally and piping the output into a remote cdrecord process.

■ Directly burn a CD over SSH: mkisofs -JR path/ | ssh user@burning_host "cdrecord dev=scsi_id speed=# -"

■ Burn a CD over SSH after caching the data on the remote host: mkisofs -JR path/ | ssh user@host "cat > /tmp/burn.iso && cdrecord dev=scsi_id speed=# /tmp/burn.iso && rm /tmp/burn.iso"

■ Music may be streamed over a network directly into an audio system by using SHOUTcast, inetd, and mpg123. You can also encrypt that audio while in transit.

■ Forward all MP3 data sent to localhost:18001 to an MP3 decoder on a remote server: ssh -L18001:127.0.0.1:18001 user@host "nc -l -p 18001 -e ./plaympg.sh" (plaympg.sh contents: #!/bin/sh -c 'echo OK; exec mpg123 -')

Frequently Asked Questions

The following Frequently Asked Questions, answered by the authors of this book, are designed both to measure your understanding of the concepts presented in this chapter and to assist you with real-life implementation of these concepts. To have your questions about this chapter answered by the author, browse to www.syngress.com/solutions and click on the "Ask the Author" form.

Q: Don’t all these techniques mean that any attempt at regional network control are doomed, especially systems that try to divine where your computer is sitting by what its IP address is?

A: For the most part, oh yes. This isn't a particularly new discovery—proxy hopping of this type has been done for years in places where, without it, there would be no real Internet access. There are probably techniques out there in the hands of average people that put this chapter's theatrics to shame—necessity is the mother of invention, and all that. However, keep in mind that traffic analysis is a powerful thing, and connections that start in one direction and end up sending the vast majority of their data in the other don't particularly blend in. Even systems that bounce data off hosts in the middle aren't impervious to simple monitoring of traffic flows: even without a content correlation between what is sent to the midpoint and what the midpoint sends to the final destination, there's a near-unavoidable time correlation between when data hits the midpoint and when some equivalently sized chunk of data hits the endpoint. This is a consequence of minimizing latency and not including masking noise.

Q: Port forwards aren't working for me. Even though I set up an encrypted tunnel to www.host.com, port 80, using -L80:www.host.com:80, my connections to http://www.host.com don't seem to be tunneling. Why?

A: It's critical to understand that local port forwards remap connectivity in userspace—tell your operating system to connect to www.host.com, and it will try to do so directly. You have to tell your operating system to loop back through this userspace forwarder, in this case listening on 127.0.0.1 port 80. This is done either by providing your application with the alternate IP or by modifying the name lookup rules in your hosts file.
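Concretely, the name-lookup approach is a single line in the hosts file (/etc/hosts on UNIX; %SystemRoot%\System32\drivers\etc\hosts on NT-family Windows); the host name here mirrors the question above:

```
# hosts file -- send www.host.com to the local port forwarder
127.0.0.1   www.host.com
```

Remember to remove the entry when the tunnel comes down, or connections to the site will simply fail.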

Q: Your methods are wrong, inelegant, and horrifying.

A: I never said they were perfect; there are security risks with them, as there are with anything else. In fact, I mostly agree with the above assessment. They are the wrong way to build a network; but the wrong networks have been built. TCP/IP has had all sorts of restrictivity, particularly at the route level, patched onto it by necessity in an integration framework gone quite awry. Inelegance brought us here … it will have to take us out.
