Rating: 8.0/10.
Chapter 1: The History of Cryptography. Attempts to encode messages date back to ancient times. However, up until WW1, many of these encoding methods weren’t very effective: codes were frequently cracked, and coordination problems between sender and receiver often rendered messages unreadable. WW2 marked a significant shift; with the advent of the first computers, more sophisticated mathematical codes emerged. Coded messages were also transmitted in Native American languages (the code talkers). During the Cold War, computers evolved further, and public key cryptography emerged in the 1970s. This period, especially the 70s and 80s, also saw the rise of hacking and cybercrime, issues that persist today.
Chapter 2: Overview of Security and Privacy. This chapter covers definitions of different types of security, the primary objectives of security, and other attributes that might conflict with these objectives. A prevalent attack method is social engineering, such as phishing, which aims to deceive humans rather than compromise computer systems. Authentication deals with identifying a system’s user, while authorization is about determining a user’s permissions. Unix uses three permission bits each to determine what the user, the group, and the world can do with a file. It’s a good idea to maintain audit logs, which are read-only, to trace activities and detect potential breaches. Frequently, computer security involves human interaction; for instance, while passwords need to be secure, they shouldn’t be so complex that users have to write them down. Similarly, with certificates, there are UI challenges related to determining trustworthy authorities and displaying error conditions in the UI, all of which intertwine with the core computer security protocols.
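To make the permission bits concrete, here is a minimal sketch (the file name audit.log is just a placeholder): each octal digit packs the read/write/execute bits for the user, the group, and the world.

```c
#include <stdio.h>
#include <sys/stat.h>

int main(void) {
    /* Hypothetical file name. Mode 0640 in octal means:
     *   owner: rw-  (read + write)
     *   group: r--  (read only)
     *   world: ---  (no access)
     * Each octal digit is one group of three permission bits. */
    if (chmod("audit.log", 0640) != 0) {
        perror("chmod");
        return 1;
    }
    printf("audit.log set to mode 0640\n");
    return 0;
}
```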
Case study on hacking automobiles. Cars can be targeted in various ways. The most obvious attack is stealing vehicles that can be started with wireless keys; other entry points are the interfaces designed for cell phones and MP3 players. Proposals for vehicle networks, where cars connect to optimize traffic flow, present new vulnerabilities, and some vehicles offer remote services for a subscription fee, making them potential targets for cyberattacks.
Chapter 3: Cryptography Primer. Symmetric key cryptography uses the same key to encrypt and decrypt; in contrast, asymmetric key cryptography uses different keys for these tasks. Typically, cryptographic algorithms aren’t kept secret; only the keys are. There are several types of attacks. The most common types include ciphertext-only, where the attacker has just the ciphertext; known-plaintext, where the attacker knows some plaintext messages; and chosen-plaintext, where the attacker chooses the message to encrypt. Sometimes, attackers can even modify the key. However, often, the attack isn’t on the cryptography itself but on its implementation, management errors, or through social engineering.
The simplest cipher is the rotation cipher, like Caesar or ROT13. A slightly more advanced version is the substitution or monoalphabetic cipher, which can be deciphered with frequency analysis. A polyalphabetic cipher, such as the Vigenère cipher, uses a key repeated over and over to determine the shift amount; however, it can be broken with periodic analysis that divides it into multiple monoalphabetic ciphers.
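As a toy illustration of these two cipher families, here is a short C sketch of ROT13 and a repeating-key Vigenère; the message and key are arbitrary examples.

```c
#include <stdio.h>
#include <string.h>
#include <ctype.h>

/* Rotation (Caesar) cipher: shift every letter by a fixed amount. */
void caesar(char *text, int shift) {
    for (size_t i = 0; text[i]; i++) {
        if (isalpha((unsigned char)text[i])) {
            char base = isupper((unsigned char)text[i]) ? 'A' : 'a';
            text[i] = (char)((text[i] - base + shift) % 26 + base);
        }
    }
}

/* Vigenère cipher: the shift amount cycles through the letters of the key. */
void vigenere(char *text, const char *key) {
    size_t klen = strlen(key), j = 0;
    for (size_t i = 0; text[i]; i++) {
        if (isalpha((unsigned char)text[i])) {
            char base = isupper((unsigned char)text[i]) ? 'A' : 'a';
            int shift = tolower((unsigned char)key[j % klen]) - 'a';
            text[i] = (char)((text[i] - base + shift) % 26 + base);
            j++;
        }
    }
}

int main(void) {
    char msg1[] = "Attack at dawn";
    char msg2[] = "Attack at dawn";
    caesar(msg1, 13);          /* ROT13 */
    vigenere(msg2, "lemon");   /* repeating key */
    printf("ROT13:    %s\n", msg1);
    printf("Vigenere: %s\n", msg2);
    return 0;
}
```

The repetition of the Vigenère key is exactly what periodic analysis exploits: every k-th letter (for key length k) forms its own monoalphabetic cipher.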
A block cipher encrypts a fixed-size block of bits at a time, passing it through multiple rounds that use various operations to scramble the information with the key. DES, which uses a 56-bit key, was vulnerable by the 90s to brute-force attacks and to differential cryptanalysis, which examines differences in plaintext inputs and finds statistical correlations that break the cipher in less time than brute force. Hence, AES was developed, with a more intricate design and a longer key, making it more secure. It still has some vulnerabilities, such as when attackers have physical access to the device and can modify key bits. In electronic codebook (ECB) mode, block ciphers encrypt and decrypt each block independently, making them susceptible to replay attacks. It’s advisable to use cipher block chaining (CBC) mode instead, where each block’s ciphertext is fed into the encryption of the next block.
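To show the chaining idea without pulling in a real crypto library, the sketch below uses a stand-in “block cipher” (a plain XOR with the key, purely illustrative and not secure). The point is how CBC mixes each ciphertext block into the next encryption, so identical plaintext blocks stop producing identical ciphertext blocks as they would in ECB mode.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK 16  /* AES-style 16-byte blocks */

/* Stand-in for a real block cipher such as AES; just XOR with the key so the
 * chaining structure is visible. Do not use this for real encryption. */
static void block_encrypt(const uint8_t key[BLOCK], const uint8_t in[BLOCK],
                          uint8_t out[BLOCK]) {
    for (int i = 0; i < BLOCK; i++)
        out[i] = in[i] ^ key[i];
}

/* CBC mode: each plaintext block is XORed with the previous ciphertext block
 * (or the IV for the first block) before being encrypted. */
void cbc_encrypt(const uint8_t key[BLOCK], const uint8_t iv[BLOCK],
                 const uint8_t *plain, uint8_t *cipher, size_t nblocks) {
    uint8_t chain[BLOCK], tmp[BLOCK];
    memcpy(chain, iv, BLOCK);
    for (size_t b = 0; b < nblocks; b++) {
        for (int i = 0; i < BLOCK; i++)
            tmp[i] = plain[b * BLOCK + i] ^ chain[i];
        block_encrypt(key, tmp, &cipher[b * BLOCK]);
        memcpy(chain, &cipher[b * BLOCK], BLOCK);
    }
}

int main(void) {
    uint8_t key[BLOCK] = {7}, iv[BLOCK] = {1};
    uint8_t plain[2 * BLOCK], cipher[2 * BLOCK];
    memset(plain, 'A', sizeof plain);      /* two identical plaintext blocks */
    cbc_encrypt(key, iv, plain, cipher, 2);
    /* Under ECB the two output blocks would be identical; under CBC they differ. */
    for (size_t i = 0; i < sizeof cipher; i++) printf("%02x", cipher[i]);
    printf("\n");
    return 0;
}
```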
Public key cryptography, like RSA and elliptic curve cryptography, is math-heavy. The book doesn’t go into these mathematical details. Hash functions map values into a smaller range, and constructing a hash collision should ideally be no easier than the birthday attack, which requires only about the square root of n trials for a hash with n possible outputs.
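A quick back-of-the-envelope check of the birthday bound, using the standard approximation p ≈ 1 − e^(−k(k−1)/2n); the 32-bit hash space below is just a toy figure.

```c
#include <stdio.h>
#include <math.h>

/* Approximate probability of at least one collision after k random draws
 * from n equally likely hash values: p ~ 1 - exp(-k(k-1)/(2n)). */
static double collision_prob(double k, double n) {
    return 1.0 - exp(-k * (k - 1.0) / (2.0 * n));
}

int main(void) {
    double n = pow(2.0, 32);   /* toy 32-bit hash space */
    double k = sqrt(n);        /* ~65,536 trials */
    printf("n = 2^32, k = sqrt(n) = %.0f\n", k);
    printf("collision probability after k trials ~ %.2f\n", collision_prob(k, n));
    /* Doubling the hash length to 64 bits squares the required work. */
    printf("for n = 2^64, sqrt(n) = %.3e trials\n", sqrt(pow(2.0, 64)));
    return 0;
}
```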
Key management: Shamir’s protocol allows two parties to obtain a shared key without eavesdroppers learning the key, though it needs a trusted third-party server for identity verification. With public key cryptography, a certificate authority signs a key that can be easily verified.
Chapter 4. SSL and TLS are often used interchangeably; they act as a layer between the TCP and HTTP layers to implement HTTPS. The process starts with a handshake: the server sends its X.509 certificate, which contains the information the client needs to verify the server’s identity. The entire protocol can be expressed as a Casper script, which allows automatic formal analysis of its security.
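As a rough sketch of what the client side of this handshake and certificate check looks like with OpenSSL (the host name example.com is a placeholder and the error handling is deliberately minimal):

```c
/* Build with: gcc tls_check.c -lssl -lcrypto */
#include <stdio.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>
#include <openssl/ssl.h>
#include <openssl/x509.h>

int main(void) {
    const char *host = "example.com";          /* hypothetical server */

    /* Plain TCP connection to port 443. */
    struct addrinfo hints = {0}, *res;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, "443", &hints, &res) != 0) return 1;
    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) return 1;
    freeaddrinfo(res);

    /* TLS handshake: the server presents its X.509 certificate chain, which
     * OpenSSL verifies against the system's trusted CA store. */
    SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
    SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);
    SSL_CTX_set_default_verify_paths(ctx);
    SSL *ssl = SSL_new(ctx);
    SSL_set_fd(ssl, fd);
    SSL_set_tlsext_host_name(ssl, host);   /* SNI */
    SSL_set1_host(ssl, host);              /* check the certificate's hostname */

    if (SSL_connect(ssl) == 1) {
        printf("handshake ok, verify result: %ld\n", SSL_get_verify_result(ssl));
        X509 *cert = SSL_get_peer_certificate(ssl);
        if (cert) {
            X509_NAME_print_ex_fp(stdout, X509_get_subject_name(cert), 0, 0);
            printf("\n");
            X509_free(cert);
        }
        SSL_shutdown(ssl);
    }
    SSL_free(ssl);
    SSL_CTX_free(ctx);
    close(fd);
    return 0;
}
```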
However, some web elements remain insecure, like DNS. With DNS cache poisoning, an attacker whose forged responses get cached by DNS servers can redirect domain names to malicious IPs. This vulnerability exists because DNS isn’t authenticated and is transmitted in plain text. The IP layer also lacks security by default; with routes and destinations sent in plain text, man-in-the-middle attackers can spoof packets and insert them into the stream.
The X.509 certificate contains details about the certificate authority (CA) and is signed with the CA’s private key. This setup allows the client to verify it using a known public key. However, sometimes CAs are deceived into issuing fraudulent certificates. To counter this, there’s a revocation list, but it often proves problematic. Attacking the certificate parsing routine is also possible through bugs associated with null-terminated strings. Implementing SSL presents multiple challenges, especially when CAs aren’t trustworthy and issue certificates without thorough verification. Users often encounter unclear warning messages and feel pressured to click “allow” just to continue their tasks.
The author expresses skepticism about SSL and HTTPS due to these issues with trusting CAs. It’s worth noting, however, that this book was published in 2013. By the late 2010s, around 2017, HTTPS adoption had grown dramatically, and now most websites use it.
Chapter 5. Firewalls secure networks by filtering packets. They can be as simple as a packet filter that examines one packet at a time, or a more complex stateful filter that inspects a stream of multiple TCP packets. Virtual private networks (VPNs) essentially tunnel traffic through a secure layer, allowing devices across the internet to appear as if they’re on the same local network.
Wireless security — WPA2 is currently preferred over both WEP and WPA protocols. While the newer protocol is more secure, it’s still fairly easy to launch a denial-of-service attack on a Wi-Fi system. Intrusion detection systems (IDS) aim to use statistical methods to identify unusual network patterns that might indicate an intrusion. However, distinguishing these from regular network traffic is often challenging and usually, these systems either have high false positive rates or are computationally intensive.
Denial-of-service (DoS) attacks focus on overloading a server with traffic. These attacks can be executed at various layers of the network protocol stack. For example, the SYN flood attack forces the server to keep many half-open connections, draining its memory.
Chapter 6. This chapter involves hands-on exercises with network probing tools: setting up a VPN between two computers, working inside a virtual machine (VM), which is highly recommended for security research, and using Wireshark to inspect network packets.
Chapter 7: insertion attacks. SQL injection is the most common; it relies on SQL queries that are improperly built by concatenating user input into the query string, which makes it relatively easy to execute arbitrary SQL. Other commands can also be injected; for instance, if printf is called with a user-controlled format string, it can be made to read and write arbitrary memory locations.
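A minimal sketch of the printf misuse: when the format string itself comes from the user, specifiers such as %p walk the call stack and leak values, and %n can even write to memory. The root cause is the same as in SQL injection, where untrusted input is interpreted as instructions instead of being treated purely as data (with SQL, the fix is binding parameters rather than concatenating strings).

```c
#include <stdio.h>

int main(int argc, char **argv) {
    /* Imagine this string came from a user or the network. */
    const char *input = (argc > 1) ? argv[1] : "%p %p %p %p";

    /* UNSAFE: the user controls the format string. "%p" makes printf walk
     * the stack and print whatever values it finds; "%n" would let an
     * attacker write to memory. Compilers warn about this pattern with
     * -Wformat-security. */
    printf(input);
    printf("\n");

    /* SAFE: a fixed format string; the input is treated purely as data. */
    printf("%s\n", input);
    return 0;
}
```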
Viruses are programs that infect executable files and can spread across networks, while worms spread over the network on their own without needing a host executable. The SIR model, commonly used to study the spread of epidemics, can also model the spread of viruses in a network. In many scenarios, the virus will spread exponentially before any countermeasures can take effect.
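A small discrete-time SIR simulation illustrates the early exponential phase; the network size and the infection/removal rates below are arbitrary assumed values.

```c
#include <stdio.h>

/* Discrete-time SIR model: S susceptible, I infected, R removed hosts.
 * beta is the infection rate per contact, gamma the cleanup/patch rate.
 * Early on, when S is close to N, I grows roughly exponentially. */
int main(void) {
    double S = 9999.0, I = 1.0, R = 0.0;   /* hypothetical 10,000-host network */
    double N = S + I + R;
    double beta = 0.5, gamma = 0.1;        /* assumed rates per time step */

    for (int t = 0; t <= 50; t += 5) {
        printf("t=%2d  S=%7.1f  I=%7.1f  R=%7.1f\n", t, S, I, R);
        for (int k = 0; k < 5; k++) {
            double new_infections = beta * S * I / N;
            double new_removals   = gamma * I;
            S -= new_infections;
            I += new_infections - new_removals;
            R += new_removals;
        }
    }
    return 0;
}
```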
Chapter 8. This chapter demonstrates how to execute a buffer overflow attack. The most straightforward method is stack smashing: if a variable on the stack is written beyond its allocated memory region, it can overwrite the return address stored nearby in the stack frame, causing the program to jump to an arbitrary location and continue executing instructions from there. For clarity, it’s helpful to disable compiler optimizations, which otherwise confuse beginners, and to use GDB to work out memory locations relative to the array pointer. The payload of the exploit is known as shellcode: since instructions are stored as binary values, the payload bytes can encode arbitrary assembly instructions and perform any desired action.
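A classic stack-smashing target in the spirit of the book’s exercises (this exact program is my own sketch, not the book’s): a fixed-size stack buffer filled by strcpy with no length check, built with the stack protector disabled so the overflow is easy to observe.

```c
/* Build with protections disabled so the overflow is easy to observe, e.g.:
 *   gcc -O0 -g -fno-stack-protector overflow.c -o overflow
 */
#include <stdio.h>
#include <string.h>

void greet(const char *name) {
    char buf[16];          /* fixed-size buffer on the stack */
    strcpy(buf, name);     /* no length check: longer input spills over the
                              saved frame pointer and return address */
    printf("hello, %s\n", buf);
}

int main(int argc, char **argv) {
    if (argc > 1)
        greet(argv[1]);    /* try an argument longer than 16 bytes */
    return 0;
}
```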
A related attack is heap smashing. It’s more challenging because data structures allocated on the heap typically contain no executable code, but it’s possible to target the linked-list structures used for memory management. Arc injection involves overwriting the string that an application passes to a system call command. Pointer clobbering overwrites function pointers, which are sometimes passed into functions. In situations where the exact jump location is unknown, a NOP sled is practically useful: a long section of memory is overwritten with NOP instructions, followed by the actual payload, so that a jump anywhere into that region ends up executing the payload.
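A sketch of pointer clobbering, under the assumption of a struct whose buffer sits right before a function pointer; the overflow is simulated locally rather than delivered by a real attacker.

```c
#include <stdio.h>
#include <string.h>

/* A record with a buffer immediately followed by a function pointer; an
 * unchecked copy into the buffer overwrites the pointer, so the next call
 * through it jumps wherever the attacker's bytes point. */
struct handler {
    char name[16];
    void (*callback)(void);
};

void safe_action(void) { printf("expected callback\n"); }
void evil_action(void) { printf("attacker-chosen code ran\n"); }

int main(void) {
    struct handler h;
    h.callback = safe_action;

    /* Simulate the attacker's input: 16 filler bytes, then a pointer value.
     * A real exploit would deliver these bytes via a file or the network. */
    char payload[16 + sizeof(void *)];
    memset(payload, 'A', 16);
    void (*p)(void) = evil_action;
    memcpy(payload + 16, &p, sizeof p);

    memcpy(h.name, payload, sizeof payload);  /* writes past name[] into callback */
    h.callback();                             /* now calls evil_action() */
    return 0;
}
```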
There are various countermeasures to buffer overflow vulnerabilities: safe type systems that ensure array types can’t be accidentally overflowed; making memory for executable code read-only; using canary values that detect and stop array overflow attempts; and using address space randomization to make memory layout unpredictable. The chapter concludes by presenting a short C program source code that’s vulnerable to many of these attacks, suitable for practice.
Chapter 9. A virus has several parts: it looks for executables on the system to infect, then it runs its payload before launching the original executable. The virus also needs to check if the executable has already been infected to avoid re-infections. Detecting viruses is a challenging task; it’s akin to solving the halting problem. In general, antiviruses can’t reliably detect new, unseen viruses without high false alarm rates. Instead, they look for specific signatures tied to known viruses.
A basic example of a virus is the Timid virus, which targets .COM executable files. These are easier to infect than .exe and Linux ELF files because .COM files have a simpler structure: the system just copies the entire file into memory and starts executing at a predefined address. The virus then goes on to infect any executable the user can access, so if the user doesn’t have root access, its spread is limited.
To evade detection, some viruses display polymorphism, changing their code when they copy themselves into another executable. This can be done by rearranging the order of routines or inserting irrelevant code.
Chapter 10. Common web security attacks include cross-site scripting (XSS), where attackers trick the browser into executing arbitrary JavaScript; and cross-site request forgery (CSRF), where the browser is tricked into making requests to other sites using the user’s session cookies. A less common attack is man-in-the-browser (MITB), where attackers insert code directly into the browser. Penetration testing involves a company hiring a team to find vulnerabilities in a website; they need to be more systematic than a hacker, who only needs to find a single vulnerability to succeed.
Chapter 11. Anonymity allows users to perform actions without revealing their identity. A common measure of this is the anonymity set, where an individual is only known to belong to a specific group, which can be quite large. While many encrypted protocols aren’t entirely anonymous (for instance, SSL doesn’t hide the destination of packets), there are solutions like proxy servers that forward requests to their actual destinations. Another tool is mix networks, like Tor, which route packets randomly through a network, ensuring no single node knows both the source and destination of a packet. Computer forensics involves tracking past actions and ensuring the history’s integrity. The chapter also touches on privacy laws and the ethics of publishing private individual details.
Chapter 12. Side-channel attacks, like power analysis, try to obtain bits of a key by observing power consumption, as different code branches can have distinct power usage patterns. Statistical analysis of traffic and timing can reveal patterns about user actions, such as which keys they’re likely pressing. Defenses include restricting physical computer access or ensuring consistent timing and power usage; simply adding noise often doesn’t work, as statistical methods can filter it out over many trials.
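A common timing-channel defense is constant-time comparison. The sketch below contrasts an early-exit comparison, whose running time leaks how many leading bytes of a secret match, with a constant-time version that always touches every byte.

```c
#include <stddef.h>
#include <stdio.h>

/* Early-exit comparison: the running time depends on how many leading bytes
 * match, which an attacker can measure to recover a secret byte by byte. */
int leaky_equals(const unsigned char *a, const unsigned char *b, size_t n) {
    for (size_t i = 0; i < n; i++)
        if (a[i] != b[i]) return 0;
    return 1;
}

/* Constant-time comparison: always processes every byte and only folds the
 * differences together at the end, so timing reveals nothing about where
 * the first mismatch occurred. */
int ct_equals(const unsigned char *a, const unsigned char *b, size_t n) {
    unsigned char diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= (unsigned char)(a[i] ^ b[i]);
    return diff == 0;
}

int main(void) {
    unsigned char secret[16] = "correct-mac-tag";
    unsigned char guess[16]  = "correct-mac-xxx";
    printf("leaky: %d  constant-time: %d\n",
           leaky_equals(secret, guess, 16), ct_equals(secret, guess, 16));
    return 0;
}
```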
Chapter 13. Gives a brief history of copyright and fair use. Attempts at copy protection, like for DVDs, have generally failed.
Chapter 14. Achieving security can be challenging in real-world scenarios, as it often conflicts with other vital business objectives like user convenience and speed to market.