Rating: 7.1/10.
Identity-Native Infrastructure Access Management: Preventing Breaches by Eliminating Secrets and Adopting Zero Trust by Ev Kontsevoy, Sakshyam Shah, Peter Conrad
This book about security best practices is aimed at enterprises and large-scale systems, with the goal of protecting cloud infrastructure in the most secure way. It also aims to prevent the accidental sharing of secrets and to keep employees or services from gaining more privileged access than intended. Although it’s a fairly short book of about 130 pages, it provides only high-level design principles and doesn’t offer much in the way of specifics about which tools to use in practice. Therefore, it may not be very useful, especially for smaller tech companies.
Chapter 1. Many security breaches are similar in that hackers somehow get inside a resource and pivot to gain access to the rest of the network. We can assume that human error will eventually lead to some secrets being leaked, so a perimeter design is unreliable: an attacker will eventually get inside the perimeter. Instead, we should design a zero trust model, where the entire architecture assumes it is running on an unsecured public network.
Authorization involves granting fine-grained permissions based on identity, and this process relies on authentication. Security often conflicts with engineering, which aims to build things quickly; if security becomes too cumbersome, engineers usually devise workarounds that are often insecure. In a centralized SSO flow, a user goes to an Identity Management (IdM) service for a token and passes it to a specific application. The token grants very limited access for a short period of time, making it much less valuable to steal. However, this may be difficult to integrate with older technologies that cannot accept authorization tokens.
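The short-lived token idea can be sketched in a few lines. This is a minimal illustration, not any particular IdM product's API: the key name, claim fields, and TTL are all hypothetical, and the token is a simple HMAC-signed payload scoped to one user, one resource, and a short expiry window.

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical IdM signing key; in practice this lives only inside the IdM service.
IDM_KEY = b"idm-signing-key-demo"

def issue_token(user: str, resource: str, ttl_seconds: int = 300) -> str:
    """The IdM issues a short-lived token scoped to one user and one resource."""
    claims = {"sub": user, "aud": resource, "exp": time.time() + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(IDM_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def verify_token(token: str, resource: str) -> bool:
    """The application checks the signature, the audience, and the expiry."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(IDM_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return claims["aud"] == resource and claims["exp"] > time.time()

token = issue_token("alice", "billing-db")
print(verify_token(token, "billing-db"))   # True: correct audience, not expired
print(verify_token(token, "payments-db"))  # False: token is scoped to another app
```

A stolen token of this kind is worth little: it only works against one application and expires within minutes.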
Chapter 2. Identity is tied to your physical self, and you prove it through a combination of something you know (like a password), plus either something you have (such as a multifactor authentication key) or something you are (like biometrics). Once this is proven, a credential is issued that represents your identity. Static secrets are bad because they are vulnerable to human error; secrets shared between multiple team members are especially bad. A digital certificate is issued by a certificate authority and encodes both authentication and authorization; it has an expiry date and an audit trail. A hardware TPM (Trusted Platform Module) proves hardware identity, and no data can be stolen from it. A credential or certificate is meant to bind an identity to a specific usage context (e.g., the user’s IP, a time window, and fine-grained permissions), so that even if it is stolen, it is not very useful to the attacker. A certificate authority in an organization effectively reduces the number of static secrets to just one: the private key the CA uses to issue certificates.
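The "identity plus usage context" idea can be sketched as a small credential check. The fields and names here are hypothetical, not from the book or any real certificate format; the point is only that validation tests the context of use, not just the credential itself.

```python
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class Credential:
    """Hypothetical certificate-like credential: an identity bound to a usage context."""
    identity: str
    allowed_ip: str           # context: where the credential may be used from
    permissions: frozenset    # context: fine-grained allowed actions
    expires_at: float         # context: time window

def is_valid(cred: Credential, client_ip: str, action: str) -> bool:
    """A stolen credential fails unless the attacker also matches its context."""
    return (time.time() < cred.expires_at
            and client_ip == cred.allowed_ip
            and action in cred.permissions)

cred = Credential("alice", "10.0.0.5", frozenset({"db:read"}), time.time() + 3600)
print(is_valid(cred, "10.0.0.5", "db:read"))     # True: used in its intended context
print(is_valid(cred, "203.0.113.9", "db:read"))  # False: same credential, wrong IP
print(is_valid(cred, "10.0.0.5", "db:write"))    # False: action not permitted
```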
Chapter 3: Symmetric encryption is fast but requires both parties to know a key. Asymmetric encryption, which is slow but uses a public and private key pair, is therefore used to exchange a key that can then be used for symmetric encryption. The perimeter-less design is recommended: instead of a VPN that separates inside from outside traffic, this design assumes that all network traffic is untrusted, so every network request must handle authorization, including requests between internal services. The benefit of this approach is that an attacker who somehow gets inside the perimeter does not automatically compromise the whole system.
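The asymmetric-then-symmetric handoff can be illustrated with a toy Diffie-Hellman exchange. The parameters below are deliberately tiny for demonstration; real systems use vetted 2048-bit+ groups and authenticated key exchange, never hand-picked values like these.

```python
import hashlib
import secrets

# Toy parameters: 2**127 - 1 is prime but far too small for real use.
P = 2**127 - 1
G = 3

a = secrets.randbelow(P - 2) + 2   # Alice's private value (never transmitted)
b = secrets.randbelow(P - 2) + 2   # Bob's private value (never transmitted)
A = pow(G, a, P)                   # public values, exchanged in the clear
B = pow(G, b, P)

shared_alice = pow(B, a, P)        # (G**b)**a mod P
shared_bob = pow(A, b, P)          # (G**a)**b mod P -- the same value
assert shared_alice == shared_bob

# Derive a symmetric key from the shared secret; from here on, both sides
# can switch to a fast symmetric cipher such as AES-256.
key = hashlib.sha256(str(shared_alice).encode()).digest()
print(len(key))  # 32 bytes
```

The slow modular exponentiation happens only once per session; all subsequent traffic uses the fast symmetric key.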
Chapter 4: Authentication has several desirable properties: robustness (against brute-force attacks and human error), ubiquity across different situations, and scalability. A static secret is the most convenient (whether a password, session token, API key, etc.), but it is not the most secure: there is no way to know whether you are sending a secret to a legitimate party, and additional infrastructure is needed for sharing secrets. Public key cryptography is more secure but still requires many public keys to be managed. Certificates are much more scalable and avoid human error, since the entire process is automated. Single sign-on (SSO) is a method where you authenticate once, and the identity service then passes the credential to the resources you want to access, either by injecting it into the session or by using federated authentication protocols such as SAML or OpenID Connect. Device attestation is a form of identity proofing in which the device contains a TPM that cannot be cloned; combined with a biometric from the user, it allows authentication without any secrets at all.
Chapter 5: Authorization is the process of determining which resources someone can access. The simplest way to do this is through an Access Control List (ACL), which defines a matrix of which users can read, write, and execute which objects. This method is used in Linux, but it requires a lot of centralized oversight and suits a small number of users when the security requirements are not too strict. Mandatory Access Control (MAC) systems, like Multics, define multiple levels of security and rules about which actions users may perform on objects at different levels.
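An ACL is essentially a lookup table with a default-deny rule. A minimal sketch (the users, paths, and operations are made up for illustration):

```python
# Hypothetical ACL: a matrix mapping (user, object) to permitted operations.
acl = {
    ("alice", "/etc/app.conf"): {"read", "write"},
    ("bob", "/etc/app.conf"): {"read"},
    ("alice", "/usr/bin/deploy"): {"read", "execute"},
}

def allowed(user: str, obj: str, op: str) -> bool:
    """Default-deny: anything not explicitly listed is refused."""
    return op in acl.get((user, obj), set())

print(allowed("bob", "/etc/app.conf", "read"))    # True
print(allowed("bob", "/etc/app.conf", "write"))   # False: not in bob's entry
print(allowed("carol", "/etc/app.conf", "read"))  # False: no entry at all
```

The centralized-oversight problem is visible even here: every new user or object means another row someone must maintain by hand.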
Best practice is the principle of least privilege: give each person the fewest permissions needed to do their job. Full admin privileges should not be given to anybody, and some sensitive actions should require two people to authorize before they can be done. However, in modern systems, infrastructure complexity makes it easy to misconfigure things, e.g., granting access to an instance or a CI/CD pipeline that has more permissions, allowing privilege escalation. Cloud permissions are complex; for example, EC2 alone has more than 500 different permissions. Administrators often give elevated permissions to developers to get them unblocked, as their time is expensive. The book recommends a centralized service that handles identity and credentials for everything in a company, treating humans and machines equally.
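The two-person rule mentioned above is simple to sketch: a sensitive action runs only when two distinct authorized users have approved it. The approver names and action string are hypothetical.

```python
# Hypothetical two-person authorization check.
AUTHORIZED_APPROVERS = {"alice", "bob", "carol"}

def can_execute(action: str, approvals: set) -> bool:
    """Require at least two distinct, authorized approvers for a sensitive action."""
    valid = approvals & AUTHORIZED_APPROVERS
    return len(valid) >= 2

print(can_execute("delete-prod-db", {"alice", "bob"}))      # True: two approvers
print(can_execute("delete-prod-db", {"alice"}))             # False: only one
print(can_execute("delete-prod-db", {"alice", "mallory"}))  # False: mallory not authorized
```

Using set intersection means duplicate or unauthorized approvals are silently discarded, so one person cannot approve twice.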
Chapter 6. It is useful to have access logs that record who accessed which resource and when. Also avoid aliases such as ‘admin’, which obscure the true identity of the user. For ease of aggregation, a consistent format is necessary; syslog, a standard Linux logging system, includes a facility (source) and a severity level. Several open-source tools, such as auditd and osquery, can augment syslog. Once collected, logs can be stored and gradually transitioned from hot to cold storage: hot storage is more expensive but provides easier access and searchability, while cold storage is for long-term retention and is not easily searchable. Log aggregation is useful for tracking an attempted intrusion across multiple network services, with the aim of stopping it before too much damage is done.
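The syslog facility and severity are packed into a single priority number, `priority = facility * 8 + severity` (per RFC 5424). A minimal decoder, with the log line itself invented for illustration:

```python
import re

# Syslog severity levels 0-7, per RFC 5424.
SEVERITIES = ["emerg", "alert", "crit", "err", "warning", "notice", "info", "debug"]

def parse_pri(line: str):
    """Extract (facility, severity) from a '<PRI>' prefix, e.g. '<86>...'."""
    m = re.match(r"<(\d+)>", line)
    if not m:
        return None
    facility, severity = divmod(int(m.group(1)), 8)
    return facility, SEVERITIES[severity]

# facility 10 (authpriv) * 8 + severity 6 (info) = 86
print(parse_pri("<86>Jan 12 10:00:01 host sshd[123]: Accepted publickey for alice"))
# → (10, 'info')
```

Because every syslog line carries this structured prefix, an aggregator can filter and route logs from many services without parsing each service's free-form message text.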
Chapter 7. Teleport is an open-source infrastructure access platform that provides an auth service and an identity-aware proxy.