The OSI data link layer is intended to frame data from higher layers for transmission across the physical layer. The message frames are used to mitigate data errors received from the physical layer. For multinode networks, the data link layer provides node-addressing support, so nodes do not spend time processing message frames that are intended for some other node. The data link layer also manages the transmission and flow control for the physical layer, ensuring that two nodes do not collide at the physical layer. Even though this layer provides all three of these functions, the implementation can be significantly simplified on a point-to-point network.

A point-to-point network implies that there are only two nodes on the network. All data transmitted by the first node is intended for the second node, and vice versa. As such, the data link layer can be simplified. Two common protocols that provide data link services over point-to-point networks are the Serial Line Internet Protocol (SLIP) and Point-to-Point Protocol (PPP).

SIMPLIFIED DATA LINK SERVICES

In a point-to-point network, many of the complexities from a multinode network vanish, and core functionality can be simplified. This includes simpler framing methods, flow control, and address support.

Simplified Flow Control

The physical layer ensures that the data received matches the data transmitted, but it does not address transmission collisions. When two nodes transmit at the same time, data becomes overwritten and corrupted. The data link layer determines when it appears safe to transmit, detects collisions, and retries when there are data errors. This is required functionality on a multinode network because any node may transmit at any time.

In a ring network, such as Token Ring or FDDI, the data link layer arbitrates when data is safe to transmit. Without this arbitration, any node could transmit at any time.

In a point-to-point network, there are only two nodes, so arbitration and flow management can be dramatically simplified. The most common types of point-to-point networks use simplex, half-duplex, or full-duplex connections.

Simplex

A simplex channel means that all data flows in one direction; there is never a reply from a simplex channel. For networking, there are two simplex channels, with one in each direction. Because there is never another node transmitting on the same channel, the flow control is simplified: there is no data link layer flow control because it is always safe to transmit. Examples of simplex networks include ATM, FDDI, satellite networks, and cable modems.

Half-Duplex

Half-duplex channels are bidirectional, but only one node may transmit at a time. In this situation, either a node is transmitting, or it is listening. It is very plausible for one node to dominate the channel, essentially locking out the other node. In half-duplex networks, each node must periodically check to see if the other node wants to transmit. Examples of half-duplex networks include some dialup, serial cable, and wireless network configurations.

Full-Duplex

In a full-duplex network, one channel is bidirectional, but both nodes can transmit at the same time. Examples include many fiber-optic systems, where the transmitted light pulses do not interfere with the received light pulses. Some FDDI, optical cable networks, and telephone modem protocols use full-duplex channels. When using a full-duplex channel in a point-to-point network, the data link flow control system is unnecessary because it is always clear to transmit.

Simplified Message Framing

Message framing ensures that the transmitted data is received in its entirety. Although the physical layer ensures that the data received matches the data transmitted, simultaneous transmissions can corrupt the data. The message frame identifies possible collisions, but the flow control from point-to-point networks limits the likelihood of a collision. In point-to-point networks, the message frame only needs to indicate when it is clear for the other side to transmit.

Simplified Address Support

In a multinode network, the message frame must be addressed to the appropriate node. In a point-to-point network, however, all transmitted data is intended only for the other node on the network. As such, hardware addresses are not required. For backward compatibility, the LLC may report the hardware address as being all zeros or a set value based on the current connection’s session.

For example, a user with a dialup modem may have the data link layer assign a random MAC address to the connection. After the connection terminates, the next connection may have a different MAC address. In many implementations, the data link layer’s MAC address may appear to be the system’s assigned IP address; an assigned dialup IP address of 1.2.3.4 may generate the MAC address 00:00:01:02:03:04. The next time the user dials into the service provider, the system may be assigned a new IP address and generate a new MAC address.
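The IP-to-MAC convention described above can be sketched in a few lines. This is a hypothetical illustration of the padding scheme implied by the 1.2.3.4 example; real drivers may derive the address differently, and the function name is not from any actual implementation.

```python
# Hypothetical sketch: pad the four IPv4 octets with two zero octets
# to form a six-octet MAC-style address, as in the example above.
def mac_from_ip(ip: str) -> str:
    octets = [0, 0] + [int(part) for part in ip.split(".")]
    return ":".join(f"{o:02x}" for o in octets)

print(mac_from_ip("1.2.3.4"))  # 00:00:01:02:03:04
```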

POINT-TO-POINT PROTOCOLS

Two common point-to-point protocols are SLIP and PPP. These are commonly used over direct-wired connections, such as serial cables, telephone lines, or high-speed dialup such as DSL. Besides direct-wired connections, they can also be used for VPN tunneling over a network.

SLIP

The Serial Line Internet Protocol (SLIP) is defined by RFC1055. This simple protocol was originally designed for serial connections. SLIP provides no initial headers and defines only a few special bytes:


• END: The byte 0xC0 denotes the end of a frame.
• ESC: The byte 0xDB denotes an escape character.
• Encoded END: If the data stream contains the END byte (0xC0), then SLIP re-encodes the character as 0xDB 0xDC.
• Encoded ESC: If the data stream contains the ESC byte (0xDB), then SLIP re-encodes the character as 0xDB 0xDD.

When the OSI network layer has data to transmit, SLIP immediately begins transmitting the data. At the end of the transmission, an END byte is sent. The only modifications to the data are the encoded escape sequences, which are used to prevent the interpretation of a premature END.
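The framing rules above can be sketched in a few lines of Python. The byte values come from the list above; the function names are illustrative, not part of RFC1055 or any real driver.

```python
# SLIP framing sketch using the special bytes defined above.
END = 0xC0      # end-of-frame marker
ESC = 0xDB      # escape character
ESC_END = 0xDC  # 0xDB 0xDC decodes back to 0xC0
ESC_ESC = 0xDD  # 0xDB 0xDD decodes back to 0xDB

def slip_encode(payload: bytes) -> bytes:
    """Escape any END/ESC bytes in the payload, then append END."""
    out = bytearray()
    for b in payload:
        if b == END:
            out += bytes([ESC, ESC_END])
        elif b == ESC:
            out += bytes([ESC, ESC_ESC])
        else:
            out.append(b)
    out.append(END)
    return bytes(out)

def slip_decode(frame: bytes) -> bytes:
    """Reverse the escaping for a single frame terminated by END."""
    out = bytearray()
    i = 0
    while i < len(frame):
        b = frame[i]
        if b == END:
            break  # an immediate END yields a zero-length datagram
        if b == ESC:
            i += 1
            out.append(END if frame[i] == ESC_END else ESC)
        else:
            out.append(b)
        i += 1
    return bytes(out)
```

Note that a bare END byte (the END END case described later) decodes to a zero-length datagram, which higher layers must be prepared to reject.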

Although SLIP is one of the simplest data link protocols, its simplicity leaves the network open to a variety of risks: it provides no error detection, no maximum data size, no network service identification, and no configuration options.

Error Detection

SLIP contains no mechanism for error detection, such as a framing checksum. SLIP assumes that higher layer protocols will detect any possible transmission corruption. This can lead to serious problems when the physical layer is prone to errors (e.g., noisy phone line) or when the network layer uses a simple checksum system.

Maximum Transmission Units

The maximum transmission unit (MTU) defines the largest size message frame that is permitted. Larger message frames are usually rejected rather than risk a buffer overflow in the data link driver. The SLIP message frame has no MTU restriction; the receiving system must be able to handle arbitrarily large SLIP message frames.

As a convention, most SLIP drivers restrict frames to 1006 bytes (not including the END character). This size is based on the Berkeley Unix SLIP drivers, a de facto standard. But nothing in the protocol specifies an actual maximum size. SLIP also permits transmitting an END END sequence, resulting in a zero-length datagram transmission. Higher layer protocols must be able to handle (and properly reject) zero-length data.

No Network Service

Many multinode data link protocols include support for multiple network layer protocols. When data is received, the information is passed to the appropriate network layer driver. SLIP contains no header information and no network service identifier. As such, SLIP is generally used with TCP/IP only.

Parameter Negotiations

A SLIP connection requires a number of parameters. Because it relies on a TCP/IP network layer, both nodes need IP addresses. The MTU should be consistent between both ends of the network, and the network should authenticate users to prevent unauthorized connections.

Unfortunately, SLIP provides no mechanism for negotiating protocol options or authenticating traffic. The user must configure the SLIP driver with an appropriate network layer IP address and MTU. Flow control, such as simplex or half-duplex, is determined by the physical layer.

Authentication for SLIP connections generally occurs at the physical layer. As users connect to the service provider, they are prompted for login credentials. After authenticating, the service provider may supply the appropriate MTU and IP addresses before switching over to the SLIP connection. SLIP does not support other options, such as encryption and compression.

PPP

In contrast to SLIP, the Point-to-Point Protocol (PPP) provides all data link functionality. Many RFCs cover different aspects of PPP: RFC 1332, 1333, 1334, 1377, 1378, 1549, 1551, 1638, 1762, 1763, and 1764. The main standard for PPP is defined in RFC1661. The PPP message frame includes a header that specifies the size of data within the frame and type of network service that should receive the frame. PPP also includes a data link control protocol for negotiating IP addresses, MTU, and basic authentication. In addition, PPP provides link quality support, such as an echo control frame for determining if the other side is still connected. Unfortunately, PPP does not provide a means to protect authentication credentials, detect transmission errors, or deter replay attacks.

Authentication

For authentication, PPP supports both PAP and CHAP, but only one is used per connection. The PPP driver should first attempt the stronger of the two (usually CHAP) and then fall back to the other if that authentication method is unavailable. Although CHAP and PAP both provide a means to transmit authentication credentials, neither provides encryption; the authentication is vulnerable to eavesdropping and replay attacks at the physical layer.

Neither PAP nor CHAP natively supports password encoding. Some extensions to these protocols, such as DH-CHAP (Diffie-Hellman negotiation for hash algorithms), MS-CHAP and MS-CHAPv2 (Microsoft NT hashing), and EAP-MSCHAPv2 (includes the use of the Extensible Authentication Protocol), extend the authentication system to use one-way encryption rather than a plaintext transfer. Although PPP is vulnerable to physical layer attacks, these attacks are rare. The larger risk comes from the storage of login credentials on the server. For PAP and CHAP, the login credentials must be stored on the PPP server. If they are stored in plaintext, then anyone with access to the server may compromise the PAP/CHAP authentication. Even if the file is stored encrypted, the method for decrypting the file must be located on the PPP server.

Transmission Error Detection

Although PPP includes a frame header and tail, it does not include a checksum for frame validation. The recipient will not detect frames modified in transit. Noisy phone lines, bad network connections, and intentional insertion attacks are not detected. As with SLIP, PPP assumes that error detection will occur at a higher OSI layer.

Replay Attacks

PPP’s message frame header contains packet and protocol information but not sequence identification. Because PPP was designed for a point-to-point network, it assumes that packets cannot be received out of order. Non-sequential, missing, and duplicate packets are not identified by PPP. As with SLIP, PPP assumes that a higher OSI layer will detect these issues.

Although these are not significant issues with serial and dialup connections, PPP can be used with most physical layer options. If the physical layer permits non-sequential transmissions, then PPP may generate transmission problems. Examples include using PPP with asynchronous transfer mode (ATM) or grid networks. In these examples, data may take any number of paths, may not be received sequentially, and in some situations may generate duplicate transmissions. Similarly, PPP may be used as a tunnel protocol. Tunneling PPP over UDP (another asynchronous protocol) can lead to dropped or non-sequential PPP message frames. Ideally, PPP should be used with physical layer protocols that only provide a single route between nodes.

An attacker with physical access to the network may insert or duplicate PPP traffic without difficulty. This opens the network to insertion and replay attacks.

Other Attacks

As with SLIP, PPP provides no encryption. An attacker on the network can readily observe, corrupt, modify, or insert PPP message frames. Because the network is point-to-point, both ends of the network can be attacked simultaneously.

Tunneling

VPNs are commonly supported using PPP (and less commonly SLIP). In a VPN, the physical layer, data link layer, and possibly higher OSI layers operate as a virtual physical medium. A true network connection is established between two nodes, and then PPP is initiated over the connection, establishing the virtual point-to-point network. Common tunneling systems include PPP over SSH and PPP over Ethernet (PPPoE). When tunneling with PPP or SLIP, the tunneled packet header can degrade throughput performance. CPPP and CSLIP address this issue.

PPP over SSH

Secure Shell is an OSI layer 6 port forwarding protocol that employs strong authentication and encryption. SSH creates a single network connection that is authenticated, validated, and encrypted. Any TCP (OSI layer 4) connection can be tunneled over the SSH connection, but SSH only tunnels TCP ports. Under normal usage, the remote server’s network is not directly accessible by the SSH client. By using an SSH tunneled port as a virtual physical medium, PPP can establish a point-to-point network connection. The result is an encrypted, authenticated, and validated PPP tunnel between hosts. This tunnel can forward packets from one network to the other—even over the Internet—with little threat from eavesdropping, replay, or insertion attacks.

An example PPP over SSH connection can be established using a single PPP command (Listing 1). This command establishes an SSH connection between the local system and the remote host, and then initiates a PPP connection. The PPP connection is created with the local IP address 10.100.100.100 and remote address 10.200.200.200. No PPP authentication (noauth) is provided because the SSH connection already authenticates the user.

LISTING 1 Example PPP over SSH Connection

% sudo pppd nodetach noauth passive pty \
    "ssh root@remotehost pppd nodetach notty noauth" \
    ipparam vpn 10.100.100.100:10.200.200.200

Using interface ppp0

Connect: ppp0 /dev/pts/5

Deflate (15) compression enabled

local IP address 10.100.100.100

remote IP address 10.200.200.200

For the sample in Listing 1, the remote user must be able to run pppd noauth. This usually requires root access. When using this command without SSH certificates, there are two password prompts. The first is for the local sudo command, and the second is for the remote ssh login. Because pppd is executing the ssh command, no other user-level prompting is permitted.

After establishing the PPP over SSH connection, one or both sides of the network may add a network routing entry such as route add -net 10.0.0.0 netmask 255.0.0.0 gw 10.200.200.200 or route add default gw 10.200.200.200. The former routes all 10.x.x.x subnet traffic over the VPN tunnel. The latter sets the VPN as the default gateway; this is a desirable option for tunneling all network traffic.

PPPoE

PPP may be tunneled over another data link layer. PPP over Ethernet (PPPoE), defined in RFC2516, extends PPP to tunnel over IEEE 802.3 networks. Many cable and DSL connections use PPPoE because PPP provides a mechanism for negotiating authentication, IP address, connection options, and connection management.

The PPPoE server is called the access concentrator (AC) because it provides access for all PPPoE clients. PPPoE does have a few specified limitations:

MTU: The MTU can be no greater than 1,492 octets. The MTU for Ethernet is defined as 1,500 octets, but PPPoE uses 6 octets and PPP uses 2.

Broadcast: PPPoE uses a broadcast address to identify the PPPoE server. Normally the PPPoE server responds, but a hostile node may also reply.

AC DoS: The AC can undergo a DoS if there are too many requests for connection allocations. To mitigate this risk, the AC may use a cookie to tag requests. Multiple requests from the same hardware address should receive the same client address.
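The MTU limit listed above follows directly from the cited header sizes:

```python
# PPPoE MTU arithmetic: Ethernet's 1,500-octet MTU minus the 6 octets
# consumed by the PPPoE header and the 2 octets consumed by PPP.
ETHERNET_MTU = 1500
PPPOE_OVERHEAD = 6
PPP_OVERHEAD = 2

print(ETHERNET_MTU - PPPOE_OVERHEAD - PPP_OVERHEAD)  # 1492
```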

CSLIP and CPPP

When tunneling PPP or SLIP over another protocol, the overhead from repeated packet headers can significantly impact performance. For example, TCP surrounds data with at least 20 bytes of header. IP adds another 20 bytes. So IP(TCP(data)) increases the transmitted size by at least 40 bytes more than the data size. When tunneling PPP over an SSH connection, the resulting stack MAC(IP(TCP(SSH(PPP(IP(TCP(data))))))) contains at least 80 bytes of TCP/IP header plus the PPP and SSH headers. The resulting overhead leads to significant speed impacts:

Transmit Size: When tunneling, there are 80 additional bytes of data per packet. More headers mean more data to transmit and slower throughput.

Message Frames: Many data link protocols have well-defined MTUs. Ethernet, for example, has an MTU of 1,500 bytes. Without tunneling, TCP/IP and MAC headers are at least 54 bytes, or about 4 percent of the MTU. With tunneling, the TCP/IP and PPP headers add an additional percent plus the overhead from SSH.

Stack Processing: When transmitting and receiving, twice as many layers must process each byte of data. This leads to computational overhead.

Although the increase in transmitted data and message frames can be negligible over a high-speed network, these can lead to significant performance impacts over low-bandwidth connections, such as dialup lines. For modem connections, a 3 percent speed loss is noticeable. The stack-processing overhead may not be visible on a fast home computer or workstation, but the overhead can be prohibitive for hardware systems, mission-critical systems, or real-time systems. Generally, hardware-based network devices can process packets faster than software, but they use slower processors and do not analyze all data within the packet. Tunneling through a hardware device is relatively quick; however, tunneling to a hardware device requires processing the entire packet. The limited processing capabilities can significantly impact the ability to handle tunneled connections.
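The size overheads described above can be tallied with some quick arithmetic. The header sizes are the minimums cited in the text; the 14-byte Ethernet MAC header is an assumed value for the untunneled case, and the SSH and PPP headers are omitted, so the real tunneled overhead is somewhat larger.

```python
# Back-of-the-envelope overhead arithmetic for tunneling PPP over SSH.
TCP_HDR, IP_HDR, MAC_HDR = 20, 20, 14
MTU = 1500  # Ethernet

plain_headers = MAC_HDR + IP_HDR + TCP_HDR   # untunneled: 54 bytes
tunneled_tcpip = 2 * (IP_HDR + TCP_HDR)      # two TCP/IP stacks: 80 bytes

print(plain_headers, round(100 * plain_headers / MTU))  # 54 bytes, ~4 percent of the MTU
print(tunneled_tcpip)                                   # 80 bytes before SSH/PPP headers
```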

To address the increased data size, RFC1144 defines a method to compress TCP/IP headers (Van Jacobson compression). The technique includes removing unnecessary fields and delta-encoding the fields that change little between packets. For example, every TCP and IP header includes a checksum. Because the inner data is not being transmitted immediately, however, there is no need to include the inner checksums. This results in a 4-byte reduction. Together with the delta encoding, the initial 40-byte header can be reduced to approximately 16 bytes. Using this compression system with PPP and SLIP yields the CPPP and CSLIP protocols. Although there is still a size increase from the tunnel, the overhead is not as dramatic with CPPP/CSLIP as it is with PPP and SLIP.

Although CPPP and CSLIP compress the tunneled TCP/IP headers, they do not attempt to compress internal data. Some intermediate protocols support data compression. For example, SSH supports compressing data. Although this may not be desirable for high-speed connections, the SSH compression can be enabled for slower networks or tunneled connections.

Unfortunately, there are few options for resolving the stack processing issue. Tunneling implies a duplication of OSI layer processing. To resolve this, a non-tunneling system should be used. For example, IPsec provides VPN support without stack repetition.

COMMON RISKS

The largest risks from PPP and SLIP concern authentication, bidirectional communication, and user education. Although eavesdropping, replay, and insertion attacks are possible, these attacks require access to the physical layer. Because a point-to-point network only contains two nodes on the network, physical layer threats against the data link layer are usually not a significant consideration.

Authentication

SLIP provides no authentication mechanism. Instead, an authentication stack is usually used prior to enabling SLIP. In contrast, PPP supports both PAP and CHAP for authentication [RFC1334]. PAP uses a simple credential system containing a username and password; the username and password are sent to the server unencrypted and then validated against the known credentials. In contrast to PAP, CHAP uses a more complicated system based on a key exchange and a shared secret key (Figure 1). Rather than transmitting the credentials directly, CHAP transmits a username from the client (peer) to the server (authenticator). The server replies with an 8-bit ID and a variable-length random number. The ID is used to match challenges with responses—it maintains the CHAP session. The client returns an MD5 hash of the ID, shared secret (account password), and random number. The server computes its own hash and compares it with the client’s hash. A match indicates the same shared secret.
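The CHAP exchange described above can be sketched as follows. The response is the MD5 hash of the one-octet ID, the shared secret, and the challenge, in that order; the secret and challenge values here are purely illustrative.

```python
# Sketch of the CHAP challenge/response described above.
import hashlib
import os

def chap_response(ident: int, secret: bytes, challenge: bytes) -> bytes:
    """Hash the one-octet ID, the shared secret, and the random challenge."""
    return hashlib.md5(bytes([ident]) + secret + challenge).digest()

# Authenticator (server): pick a session ID and a random challenge.
ident, challenge = 7, os.urandom(16)

# Peer (client): hash with the shared secret and return the digest.
response = chap_response(ident, b"account-password", challenge)

# Authenticator: recompute with its stored secret; a match proves both
# sides hold the same secret without ever transmitting it.
assert response == chap_response(ident, b"account-password", challenge)
```

Note that the server must hold the shared secret in a recoverable form to perform this comparison, which is exactly the credential-storage risk discussed earlier.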

Unlike PAP, CHAP authentication is relatively secure and can be used across a network that may be vulnerable to eavesdropping; however, CHAP has two limitations. First, it only authenticates the initial connection. After authentication, PPP provides no additional security; someone with eavesdropping capabilities can hijack the connection after the authentication. Second, if an eavesdropper captures two CHAP negotiations that use short random numbers, then the hash may be vulnerable to a brute-force cracking attack.

Bidirectional Communication

PPP and SLIP provide full data link support in that the node may communicate with the remote network, and the remote network may communicate with the node; the communication is fully bidirectional. As such, any network service operating on the remote client is accessible by the entire network. Most dialup users do not use home firewalls, so any open network service may leave the system vulnerable. Software firewalls, or home dialup firewalls such as some models of the SMC Barricade, offer approaches to mitigate the risk from open network services.

User Education

More important than bidirectional communication is user education. Most dialup, DSL, and cable modem users may not be consciously aware that their connections are bidirectional. Moreover, home firewalls can interfere with some online games and conferencing software such as Microsoft NetMeeting. As such, these preventative measures may be disabled and leave systems at risk.

You Can’t Do That!

Most universities provide dialup support for students and faculty. One faculty member was positive that his dialup connection was not bidirectional. He challenged members of the computer department’s support staff to prove him wrong. The proof took three minutes. First, a login monitor was enabled to identify when the faculty member was online. Then his IP address was identified. This allowed the attackers to know which of the dialup connections to attack. After being alerted to the victim’s presence, the attackers simply used Telnet to connect to the address and were greeted with a login prompt. The login was guessed: root, with no password. Then the attacker saw the command prompt, showing a successful login. (The faculty member had never bothered to set the root password on his home computer.) A friendly telephone call and recital of some of his directories changed his viewpoint on dialup security. Ironically, this faculty member worked for a computer science department. If he was not aware of the threat, how many home users are aware?

SIMILAR THREATS

The risks from point-to-point networks, such as PPP and SLIP, extend to other point-to-point systems. High-speed dialup connections such as DSL and ATM use point-to-point physical connections—DSL uses PPPoE [RFC2516], and ATM uses PPPoA [RFC2364]. For these configurations, the data link layer provides a virtually transparent connection. An attacker with physical layer access is not impeded by any data link security.

The data link layer performs three main tasks: segment data into message frames, address message frames, and control the physical layer flow. In a point-to-point network, these tasks are greatly simplified because there are assumed to be only two nodes on the network. Because of this assumption, physical layer attacks remain the greatest risk to the data link layer; the assumption is that there is no third node capable of accessing the network.

The SLIP and PPP protocols take advantage of the simplified network architecture. SLIP reduces the data link layer to a minimal frame requirement, whereas PPP provides minimal support for network layer protocols, authentication, and link control. Unfortunately, PPP provides no additional security beyond the initial authentication.

Both SLIP and PPP can be used as tunneling protocols, creating a virtual point-to-point network over a more secure network connection. Although useful for establishing a VPN, the primary impact is performance. As with any data link protocol, SLIP and PPP are bidirectional. When connecting to the Internet, both enable external attackers to access the local computer system.

