Security engineering is the discipline of building secure systems.
Its lessons are not just applicable to computer security. In fact, in this repo, I aim to document a process for securing anything, whether it's a medieval castle, an art museum, or a computer network.
Please contribute! Create a pull request or just create an issue for content you'd like to add: I'll add it for you!
- What is security engineering?
- High level process
- Follow known best practices
- Understand your adversaries
- Security policies
- Security models
- Improve defenses
- Find vulnerabilities
- Assurance
- Popular mechanisms
- Learn about how real world systems are secured
- Physical facilities
- Nuclear command and control
- Monitoring and metering
- Banking and bookkeeping
- Distributed systems
- Copyright and DRM
- Web browsers
- Web applications
- Android
- BeyondCorp & zero trust
- Apple
- Cloud providers
- Computer networks
- Operating systems
- seL4
- SELinux
- AppArmor
- Chromebook
- Prisons
- Voting
- Museums
- Counterintelligence
- Casinos
- Military Architecture
- Books
- Papers
- Content wanted
- In the future
Security engineering isn't about adding a bunch of controls to something.
It's about coming up with security properties you'd like a system to have, choosing mechanisms that enforce these properties, and assuring yourself that your security properties hold.
- "What is security engineering?" (from Anderson's book) - my notes
- What's the problem? (from Saydjari's book) - my notes
- Computer security in the real world
- Human Adversaries – Why Information Security Is Unlike Engineering
- Natural Advantages of Defense: What Military History Can Teach Network Security - Part 1, Part 2
- Availability and Security: Choose One
Here's the process I like for securing things:
- We follow as many known best practices as we can. If humans already know how to secure something well, why try to derive the answer ourselves?
- We learn about the adversaries we want to defend against
- We write down our security policies, or high level security goals
- We develop a security model, or a spec we follow to satisfy our policies
- We reduce attack surface, follow security design principles, brainstorm ideas for and implement additional security controls, and more -- to improve our security
- We test our design by assessing our controls, assessing protocols, looking for side channels, and more
- We write assurance cases to prove we satisfy our security policy.
Before anything else, I'd Google for the best practices for securing whatever you're trying to secure and implement all of them.
If you're in a corporate environment, set up SSO and 2FA. If you're securing a physical facility, see if there's a well-regarded physical security standard you can comply with.
I'd study how people have defended what you're defending in the past. I'd also talk to the people who are the very best at defending it today, and learn what they do that most people don't.
Doing this will make you significantly more secure than the majority of people, who don't do this.
There's no such thing as a system being secure, only being secure against a particular adversary.
This is why it's important to understand who your adversaries are, as well as the motivation behind and capabilities of each adversary.
Consider non-human threats, too. If you're asked to secure a painting in a museum, a fire may technically not be a security issue -- but it's something to guard against, regardless.
Also, study the history of attacks. If I were designing a prison, I'd learn about all the past prison breakouts that I could.
- Agile threat modeling - my notes
- Threat Modeling: Designing for Security
- Awesome Threat Modeling
- Some thoughts on threat modeling
- Threat modeling: 12 available methods
- Approachable threat modeling
- OWASP Threat Model Cookbook
- Threat modeling for dummies
- Threat modeling cheatsheet (from OWASP)
- A Guide to Understanding Security Modeling in Trusted Systems
- Adversaries: Know Thy Opponent (from Saydjari's book)
- "Who is your opponent?" (from Anderson's book)
Policies are the high level properties we want our system to have. Policies are what we want to happen.
Let's say we're designing a prison.
I'd start with a strong policy:
No prisoner may escape the prison.
Of course, time, money, and manpower are all limited. The goal isn't to eliminate risk entirely, but bring it down to an acceptable level.
As I go through the next couple steps and learn what controls I need and how costly they'll be, I might refine my security policy to something like this:
No more than 10 out of 10,000 (0.1%) prisoners may escape our prison in any given time period.
Looking at benchmarks may help us come up with this number.
Any system has requirements beyond its security requirements. These two sets of requirements may conflict, so you may need to relax your security requirements.
Going back to the example above, our policy is that only a tiny percentage of prisoners may leave the prison without permission. But what if there's a fire?
If you've achieved this low escape rate by building a fully autonomous fortress with no fire detection or human override, the results may be suboptimal.
We can then turn our policy into a more detailed model. A model is a set of rules, a specification, we can follow to achieve our policy. Our policy is our "what", the model is our "how".
Each individual in the prison facility must have an ID that identifies him/her as a "prisoner" or "not a prisoner".
A prisoner may leave only with the written consent of the warden.
A non-prisoner may leave at any time.
- Computer Security: Art and Science covers this topic very well
- Flask security architecture
- Sancus 2.0: A Low-Cost Security Architecture for IoT Devices
Luckily, in information security, our policies often revolve around confidentiality, integrity, and availability and so there are popular existing security models for each of these policies.
For confidentiality, for example, you can choose between:
- multilevel security, for which Bell-LaPadula can be used
- multilateral security, for which compartmentation, the BMA model, or the Chinese Wall model can be used (according to Anderson's book)
See also this Wikipedia article and this one on computer models.
Here are some useful techniques I've found for improving the security of a system.
Also see if any of the mechanisms in popular mechanisms would help.
See tptacek's HN comment on this:
For instance: you can set up fail2ban, sure. But what's it doing for you? If you have password SSH authentication enabled anywhere, you're already playing to lose, and logging and reactive blocking isn't really going to help you. Don't scan your logs for this problem; scan your configurations and make sure the brute-force attack simply can't work.
The same goes for most of the stuff shrink-wrap tools look for in web logs. OSSEC isn't bad, but the things you're going to light up on with OSSEC out of the box all mean something went so wrong that you got owned up.
Same with URL regexes. You can set up log detection for people hitting your admin interfaces. But then you have to ask: why is your admin interface available on routable IPs to begin with?
- OWASP Attack Surface Analysis Cheat Sheet
- See the papers in this folder
When evaluating a design, it's useful to see how much of the system must be trusted in order for a security goal to be achieved. The smaller this trusted computing base is, the better.
Also, once you identify the TCB for an existing system, you know that you only need to secure your TCB. You don't need to worry about securing components outside your TCB.
You want to make your TCB as small, simple, unbypassable, tamper-resistant, and verifiable as you can, as I write about here.
- OS Security Concepts (from CS 161 from UC Berkeley)
- Design patterns for building secure systems - my notes
- TSAFE: Building a Trusted Computing Base for Air Traffic Control Software
- Ten page intro to trusted computing
- Reducing TCB Complexity for Security-Sensitive Applications: Three Case Studies
- The Nizza Secure-System Architecture
When designing a system, a great way to mitigate the impact of a successful attack is to break the system down into components based upon their privilege level.
Then, ask what's the least amount of privilege each component needs -- and then enforce the allowed privileges with a sandbox (if applicable).
Say one of our SREs SSHes into a production EC2 instance as root to check the instance's memory and CPU usage. Instead, we can assign the SRE a non-root account. Better, we can whitelist the commands this account can run. Better still, we can remove SSH access entirely and set up Prometheus for monitoring.
- Lecture 4: Privilege Separation (6.858 from MIT) - my notes
- SSH daemon (from Niels Provos)
- OKWS paper
- Security architecture of the Chromium browser
- Nested Kernel: An Operating System Architecture for Intra-Kernel Privilege Separation
- Privilege bracketing - see also this page
- Exploit mitigation techniques in OpenBSD
- Privtrans: Automatically Partitioning Programs for Privilege Separation
- Ways to minimize privileges
- Make least privilege a right (not a privilege)
- Plash: tools for least privilege
- SHILL: A Secure Shell Scripting Language
- Security wrappers and Bernstein chaining
- Weakest link security - ignore the title!
- Secure by default – the case of TLS
- Configure Safely and Use Safe Defaults
The way I see it, every defense falls into one of these categories:
- Prevent: consists of deter, stop
- Detect
- Respond: consists of delay, contain, investigate, remediate
Take any attack. Then, for each of the seven sub-categories above (deter, stop, detect, delay, contain, investigate, remediate), brainstorm defenses that fall into that category.
By mapping out an adversary's kill chain, we can then identify controls to counteract each step in the kill chain. Check out MITRE ATT&CK.
I would go down this list and see if there are any principles you can apply to your system.
- Secure the weakest link
- Defense in depth
- Fail securely
- Secure by default - discussed earlier in the repo
- Least privilege - discussed earlier in the repo
- Separation of privilege - discussed earlier in the repo
- Economy of mechanism - controls should be as simple as possible
- Least common mechanism - limit unnecessary sharing. see this
- Open design - your design should be secure without obscurity. obscurity is discussed later in the repo
- Complete mediation - applies to reference monitors, which many controls are. The idea is to perform a check on every request. If you cache results, a request that should be rejected after things changed might still be allowed (see the sketch after this list). See this link
- Work factor - find ways to make the attacker need to do several times more work to break something than it takes you, the defender. Here's a paper on dynamic network reconfiguration being used to increase recon work for attackers
- Security is economics - discussed later in the repo
- Human factors matter - if a control relies on a human to do something, make sure your control is usable or the person just won't do it
- Know your threat model & update it - keep your threat model up to date with threats, and your defenses too
- Trust only trustworthy channels - see this article
- Set up a trusted path - see this article
Sources
- Stop buying bad security prescriptions
- Design principles (from US CERT)
- Principles for building secure systems
- More principles
- Even more principles (from David Wagner)
- The Information Security Practice Principles
The techniques below help you find vulnerabilities in a proposed design for you to fix.
Theories of security derive from theories of insecurity. - Unknown
If you're a great attacker you can be "logically" a great defender. However, a great defender cannot be a great attacker, nor would I say they could be a "great" defender. - Caleb Sima, VP of Security at Databricks
Any person can invent a security system so clever that she or he can't think of how to break it - Schneier's Law
More important than the attacks in subsequent sections is being able to think creatively, like an attacker. I do believe this skill is essential if you want to assess the security of your designs effectively.
This section describes some techniques for developing this skill that I've gathered.
Read this post by John Lambert first. It's about how attackers think in graphs, while defenders think in lists, so attackers win.
I've copied the list of links below from John's post above.
- Heat-ray: Combating Identity Snowball Attacks Using Machine Learning, Combinatorial Optimization and Attack Graph
- Two Formal Analyses of Attack Graphs
- Using Model Checking to Analyze Network Vulnerabilities
- A Graph-Based System for Network-Vulnerability Analysis
- Automated Generation and Analysis of Attack Graphs
- Modern Intrusion Practices
- Attack Planning in the Real World
After building an attack tree, you can query it easily: "list all the attack paths costing less than $100k". (Remember: we don't seek absolute security, but rather security against a certain set of adversaries.)
Also, remember the weakest link principle. You can query your attack tree for the lowest cost attack path and ensure that the cost isn't too low.
If a security control does not have the qualities above, then an attacker can violate a system's security properties by subverting its controls.
- Can the attacker turn off the control?
- Can the attacker get you to turn off the control?
- Can the attacker get around your control?
- Does the control depend on something that the attacker can disable?
- Are there any cases where the control doesn't work?
- Does the control fail open or closed? If it fails open, can the attacker make the control fail?
Take a burglar confronting a home security system which calls the police if someone crosses the lawn at night:
- Can the burglar turn off the control? Probably not
- Can the burglar get you to turn off the control? Yes, they could set off the alarm everyday until you turn it off
- Can the burglar get around your control? Yes, they could land on the roof
- Does the control depend on something that the burglar can disable? Yes, the burglar can cut the electric wire or the fiber cable used to call the police
- Are there any cases where the control doesn't work? Yes, the burglar can buy the control themselves and learn that the alarm doesn't go off if they tiptoe.
I like using a statement/conclusion format to draw out my assumptions about my controls.
Statement: I have a home security system which calls the police if someone crosses the lawn.
Conclusion: I won't get robbed.
Assumptions:
- For every single attacker that tries to cross my lawn, my home security system calls the police. (If the answer to any of the questions above is yes, this assumption is false.)
- The police will arrive before any attacker is able to steal anything and stop the theft.
- What if the attacker impersonates the homeowner and tells the police that my home security system is faulty, so they shouldn't come if it calls them?
- What if the attacker makes hundreds of 911 calls while they are robbing the house?
- What if the police are blocked by a "car accident"? What if the attacker has arranged for a getaway helicopter?
Saydjari writes an entire chapter on this:
We want our security controls to fail closed, not open. There are two ways to analyze the ways something might fail: fault tree analysis (FTA), which is top down, and failure modes and effects analysis (FMEA), which is bottom up.
- How to avoid failures
- Hazard Analysis Techniques for System Safety
- Intro to Systems Theoretic Process Analysis (STPA)
Protocols aren't a tool for securing something. But all communication between two components of a system is done through a protocol, so it's worth learning how to analyze protocols for vulnerabilities.
- "Protocols" (from Anderson's book)
- A logic of authentication
- Programming Satan's computer
- Using Encryption for Authentication in Large Networks of Computers
- Three systems for cryptographic protocol analysis
Even if something isn't vulnerable to attacks (on confidentiality, integrity, or availability), it may leak information which makes these attacks easier.
For example, take a login program that checks if the username is valid, returns a generic "login failed" error if it's not, then checks if the password is valid, and returns the same generic error if it's not.
At first glance, determining whether a particular username is valid may seem impossible. After all, the error message is the same regardless of whether the username is invalid or the username is valid and the password is invalid.
However, an attacker could examine the time it takes to get the error to determine if the username is valid or not.
- "Side channels" (from Anderson's book)
- Covert vs overt vs side channels - See this Stack Overflow answer too
- A Guide to Understanding Covert Channel Analysis of Trusted Systems
- List of ways to find side channels in hardware/software
The goal of security engineering is to build a system that satisfies certain security properties -- not just to add a lot of controls. Assurance is how we prove that our system satisfies the properties we want it to.
- Assuring cybersecurity: getting it right (from Saydjari's book)
- Notes on high-assurance security methods by nickpsecurity - see also his writeup on software distribution
- Guidelines for Formal Verification Systems
- Constructing a high assurance mail guard
- Designing The Gemsos Security Kernel For Security And Performance
- SIFT: Design and Analysis of a Fault-Tolerant Computer for Aircraft Control
- Design and Verification of Secure Systems
- Commercial Product Evaluations
- Make computers keep secrets
- Hints for High-Assurance Cyber-Physical System Design
- High-Assurance Separation Kernels: A Survey on Formal Methods
- Separation Virtual Machine Monitors
- The Cross Domain Desktop Compositor: Using hardware-based video compositing for a multi-level secure user interface
- CH26 Managing the Development of Secure Systems (from Anderson's book)
- CH27 Assurance & Sustainability (from Anderson's book)
- We need assurance!
- The Orange Book
- Public Pentesting Reports
- High-Assurance Smart Card Operating System for Electronic Visas
- A Touch of Evil: High-Assurance Cryptographic Hardware from Untrusted Components
- Lessons learned from building a high assurance crypto gateway
- Formal Specification and Verification of a Microkernel
- Final evaluation report of SCOMP
- See the papers in this folder
In order to secure something, you need to know what tools are available to you. Here are some that can be used in many different contexts.
A lot of tools are context-specific, however. Before I start trying to secure a building, for example, I'd spend the time to learn about all the tools I can use: walls, sensors, natural barriers, guards, CCTV cameras, etc.
- Cryptography: A Sharp and Fragile Tool (from Saydjari's book)
- Applied Cryptography: Protocols, Algorithms and Source Code in C
- "Cryptography Engineering" book
- "Cryptography" (from Ross Anderson's book)
- "Advanced Cryptographic Engineering" (from Ross Anderson's book)
- Don't trust the math (Bruce Schneier)
The idea here is to make it economically, not technically, infeasible for the attacker to attack us. He can still attack us, but his expected effort will exceed his expected gain.
Say a scammer manages to scam one of every hundred people out of $5. If we can add a $0.10 fee to every call, then they'd need to pay $10 in fees to earn $5.
Another example would be not storing credit card data ourselves, and instead outsourcing this to a payment processor, so the reward of attacking us is less.
If the attacker isn't motivated by money, this doesn't work.
Deterrence has three parts: certainty, severity, and swiftness. In other words, to deter attackers most effectively, someone should be able to catch most or all of them, do this quickly, and then sufficiently punish them once they're caught.
This someone could be the government, via laws and regulations against whatever you're trying to defend against. The government may not catch everyone, but these laws and regulations will deter most people. Copyright protection, anti-shoplifting, and anti-trespassing laws all are examples of this.
The government is not the only third party who can deter attacks on you. Organizations, like NATO, can as well.
Alternatively, you can try to retaliate against attacks yourself. Take, for example, media companies that sue people that pirate their movies.
- "Physical tamper resistance" (from Anderson's book)
- Cryptographic processors – a survey
- Smartcard Handbook
- Tamper Resistance - a Cautionary Note
- Low Cost Attacks on Tamper Resistant Devices
- Design Principles for Tamper-Resistant Smartcard Processors
If we can't prevent tampering, we can try to make it obvious when something has been tampered with.
This is one reason why bags of chips or gallons of milk, for example, are sealed.
- "Access Control" (from Anderson's book)
- OS Security Concepts (from CS 161 from UC Berkeley)
- Understanding Discretionary Access Control In Trusted Systems
- Awesome Object Capabilities and Capability-based Security
- What are capabilities?
- Capability Security Model
The three ways to authenticate someone are:
- what you know (eg, PIN, password, picture passwords)
- what you have (eg, Yubikey, smartphone, smartcard, token hardware)
- what you are (eg, a fingerprint)
While not a standalone factor, you can consider the environment, too, such as where the user is or what time it is.
- DOD Password Management Guideline
- A Guide to Understanding Identification and Authentication in Trusted Systems
- Authentication (from Saydjari's book)
Without authorization, anyone who authenticates to our system would have full access to everything. We'd like to make it more difficult than that for attackers, and likely don't trust all insiders that much, either.
- Authorization (from Saydjari's book)
- Domain type enforcement
- Type enforcement chapter (from SELinux book)
- Domain and type enforcement for Linux
Think about the intel classification hierarchy: some documents are top secret, others are secret, others are confidential, and so on. This is a multi-level scheme.
- "Multilevel Security" (from Anderson's book)
- An analysis of the systemic security weaknesses of the U.S. Navy fleet broadcasting system, 1967-1974, as exploited by CWO John Walker
Even if an analyst has a secret clearance, you may not want him to be able to access any documents from other departments. This is a multi-lateral scheme.
- "Boundaries" (from Anderson's book)
- Security in clinical information systems
- Implementing access control to protect the confidentiality of patient information in clinical information systems in the acute hospital
- Privacy in clinical information systems in secondary care
- The Chinese Wall security policy
The idea is simple: to authorize certain actions, more than one person must consent. This helps protect against malicious insiders.
While an individual, anonymized database may not be enough to de-anonymize people, a combination of anonymized databases may make this possible. Inference control aims to prevent this.
I haven't seen this concept outside of computer security, yet.
Privilege separation is dividing a system into different components, based on what permission level each component should have.
Least privilege is then making the permission level for each component as small as possible.
The way you enforce this minimal permission level is via a sandbox.
I haven't seen this concept outside of computer security, yet.
- On Safes, Sandboxes, and Spies (CS 161 at UC Berkeley)
- A Theory and Tools for Applying Sandboxes Effectively
- Chrome Sandbox Design Doc
- Chrome Sandbox Design FAQ
- Sandboxing Applications
- A Security Study of Chrome’s Process-based Sandboxing
- SELinux, Seccomp, Sysdig Falco, and you: A technical discussion
- gvisor
- sandy
To me, logging is the act of collecting event data, and auditing is looking for malicious activity in those events. The terms are used interchangeably, however.
Logging is useful for deterrence (insiders especially are less likely to do bad things if they're being recorded), detection, and investigation. It can provide non-repudiation, or the inability of an attacker to deny their malicious activity.
It's practiced in many fields from information security (think SIEMs) to healthcare (tracking who accesses someone's medical records).
- An approach to air-gapped deployment
- Network air locks, not air gaps, to preserve LAN security
- Air gaps (post by Bruce Schneier)
- Bin Laden Maintained Computer Security with an Air Gap
Obscurity, on its own, does not count as security. However, it can be added on top of real security measures to make attacks on you require more time and a higher skill level.
- Obscurity is a valid security layer - see the HN comments as well
- Techniques for defeating high-strength attackers
- Replacing Intel or x86 chips for security reasons
The chapters in Anderson's book fall into two categories, in my view: mechanisms for securing systems and examples of how some real world systems are secured.
We've already learned about the first category; this section is about the second category.
- Introduction to physical security
- "Physical protection" (from Anderson's book)
- Design and evaluation of physical protection systems
- Physical security: 150 things you should know
- The complete guide to physical security
- Physical security systems handbook
- A Burglar's Guide to the City
- The Feather Thief: Beauty, Obsession, and the Natural History Heist of the Century
- The Man Who Robbed the Pierre
- "Nuclear command and control" (from Anderson's book)
- Nuclear Security Recommendations on Physical Protection of Nuclear Material and Nuclear Facilities
- Nuclear Security Series
- "Monitoring and metering" (from Anderson's book)
- Electronic Postage Systems: Technology, Security, Economics
- Reliability of Electronic Payment Systems
- Security and Privacy Analysis of Automatic Meter Reading Systems
- On the Security of Digital Tachographs
- "Banking and bookkeeping" (from Anderson's book)
- The Bank Employee's Fraud and Security Handbook: Everything You Need to Know to Detect and Prevent Loss
- How Coinbase Builds Secure Infrastructure To Store Bitcoin In The Cloud - my notes
- Future Banks Live in The Cloud: Building a Usable Cloud with Uncompromising Security
- Where the Money Is: True Tales from the Bank Robbery Capital of the World
- Norco '80: The True Story of the Most Spectacular Bank Robbery in American History
- Pizza Bomber: The Untold Story of America's Most Shocking Bank Robbery
- The Great Heist - The Story of the Biggest Bank Robbery in History: And Why All the Money Was Returned
- "Copyright and DRM" (from Anderson's book)
- The Protection of Computer Software: Its Technology and Application
- European Scrambling Systems, Circuits, Tactics and Techniques
- Security architecture of the Chromium browser
- Native OKL4 Web Browser
- Designing and Implementing the OP and OP2 Web Browsers
- Secure Browser Architecture Based on Hardware Virtualization
- Browser security: lessons from Google Chrome
- Towards High Assurance HTML5 Applications
- Privilege separation in HTML5 applications
- Principled and Practical Web Application Security
Most companies need to be able to answer the question, "is this client one of ours," when protecting sensitive resources.
Most companies will instead answer the question, "is the client on our network," and pretend that it was the same question. The fact that it clearly is not has some very obvious security implications and attack vectors that we've been living with for decades.
Beyondcorp tries to more directly answer the original question about device identity rather than subbing in the network question in its place.
The fact that this approach is novel says a lot about the maturity of our industry.
-- tyler_larson, a Hacker News comment, 01/22/2018
Google's BeyondCorp removes the concept of firewalls and VPNs altogether.
Instead, every request to access internal services must be authenticated, authorized, and encrypted, and that's all -- regardless of what network the request originates from.
For a request to be authenticated, it must be from:
- an authenticated user
- who's on a corporate device (a device in Google's Device Inventory Database, identified with a certificate stored in the device's TPM or in a certificate store).
- All of Google's services are put behind an access proxy, which "enforces encryption between the client and the application".
- The user's device must present a valid certificate, and the user must log on via SSO + a hardware security key, to pass the access proxy.
- BeyondCorp's Trust Inference dynamically determines how much trust to assign a user or a device.
- A user accessing services from a strange location would decrease trust; a less secure device would also decrease trust.
- BeyondCorp's Access Control Engine ingests device inventory data, user data, and this trust score, and decides whether to allow access to the requested service.
- The Access Control Engine can also "enforce location-based access control" and can restrict access to services based on user role + device type.
Quoting from the paper linked above:
- The request is directed to the access proxy. The laptop provides its device certificate.
- The access proxy does not recognize the user and redirects to the SSO system.
- The engineer provides his or her primary and second-factor authentication credentials, is authenticated by the SSO system, is issued a token, and is redirected back to the access proxy.
- The access proxy now has the device certificate, which identifies the device, and the SSO token, which identifies the user.
- The Access Control Engine performs the specific authorization check configured for codereview.corp.google.com. This authorization check is made on every request:
a. The user is confirmed to be in the engineering group.
b. The user is confirmed to possess a sufficient trust level.
c. The device is confirmed to be a managed device in good standing.
d. The device is confirmed to possess a sufficient trust level.
e. If all these checks pass, the request is passed to an appropriate back end to be serviced.
f. If any of the above checks fails, the request is denied.
For an attacker to gain access to a service under BeyondCorp, they'd need to:
- choose an employee who can access this service
- obtain that employee's SSO credentials
- obtain an employee's hardware security key
- obtain an employee's (any employee's?) managed device which can access this service
- obtain the password to this managed device
- bypass any location based access control
- do all of this before either the user's or device's access is cut off (as every request is checked)
Before: the attacker has to execute one digital attack (gain VPN access) to gain access to services.
Even if the VPN requires 2FA, if it's not done with a hardware security key, the attacker can phish the employee into giving up his 2FA code or accepting the Duo push.
After: the attacker has to execute two digital attacks (obtain SSO password, obtain device password) and two physical attacks, which might be done at once (device, hardware security key).
Lesson learned: shift digital attacks to physical attacks wherever possible (and safe). Google does this by using hardware security keys and only letting managed laptops access services.
- BeyondCorp I: A new approach to enterprise security
- BeyondCorp II: Design to deployment at Google
- BeyondCorp III: The access proxy
- Migrating to BeyondCorp
- BeyondCorp: The user experience
- Maintaining a healthy fleet
- Zero Trust Networks: Building Secure Systems in Untrusted Networks
- Apple platform security
- There are no secure smartphones - and HN discussion with nickpsecurity's comments here
- Google Cloud Security Whitepapers (97 page PDF)
- Google Infrastructure Security Design Overview
- NEXT 2016 Keynote: Security by Niels Provos
- Google Infrastructure Security Design (Google Cloud Next '17)
- Google Data Center Security: 6 Layers Deep
- How Do You Explain The Unreasonable Effectiveness Of Cloud Security?
- Securing a community cloud
- Trusted Network Interpretation (272 pages)
- Improved Port Knocking with Strong Authentication
- Coreguard from Dover Systems
- Wireguard: fast, modern, secure VPN tunnel (Blackhat 2018)
- Operating System Security (by Trent Jaegar)
- List of security-focused operating systems
- List of UNIX alternatives with desirable capabilities
- Linux, OpenBSD, AND Talisker: A comparative complexity analysis
- The Jury Is In: Monolithic OS Design Is Flawed
- Unikernels: The Next Stage of Linux’s Dominance
- Design of the EROS Trusted Window System
- Setuid Demystified
- A distributed secure system
- Lessons from VAX/SVS for High-Assurance VM Systems
- UCLA Secure UNIX
- The Evolution of Operating Systems
- Jail Design Guide (National Institute of Corrections). See Chapter 8 in particular. - notes
- Technical Guidance for Prison Planning - notes
- Correctional Facility Design and Detailing
- Free and Fair - see their ElectionGuard project
- Museum Property Security and Fire Protection (from Interior Dept. Museum Property Handbook)
- Suggested Practices For Museum Security
- Museum Collections Security
- Museum Security and Protection
- Why Corporate Security Should Be Like Museums
- Stealing the Show: A History of Art and Crime in Six Thefts
- Stealing Rembrandts: The Untold Stories of Notorious Art Heists
- Nazi Gold: The Sensational Story of the World's Greatest Robbery – and the Greatest Criminal Cover-Up
- Master Thieves: The Boston Gangsters Who Pulled Off the World's Greatest Art Heist
- Casino Security and Gaming Surveillance
- Casino Surveillance and Security: 150 Things You Should Know
- Casino Surveillance: the Eye That Never Blinks
- 150 things You Should Know About Security - see the casino chapter
- Exploiting Online Games: Cheating Massively Distributed Systems
- Great Gambling Scams: True Stories of the World's Most Amazing Hustlers
- Loaded Dice: True Story of a Casino Cheat
Also known as: fortifications
- Fortifications and Siegecraft: Defense and Attack through the Ages
- Defending Your Castle: Build Catapults, Crossbows, Moats, Bulletproof Shields, and More Defensive Devices to Fend Off the Invading Hordes
"Recommended" is subjective...YMMV!
- Computer Security: Art and Science (by Bishop) - I'd read this first; it teaches security engineering in the right order: policies and models, then mechanisms, then assurance.
- Security Engineering (by Ross Anderson)
- Engineering Trustworthy Systems (by Sami Saydjari)
- "Security in Computing" (by Pfleeger) - I liked the chapter on trusted operating systems in particular.
- Building Secure and Reliable Systems
- Time Based Security - my notes
- "Engineering Information Security" (by Jacobs)
- "The Craft of System Security" (by Smith and Marchesini)
- "Cyber Security Engineering" (by Woody and Mead)
- Cybersecurity for Space
- Engineering Security
- NSA STIGs
- NSA/DOD Rainbow Series
- Building a Secure Computer System
- NIST SP 800-160 Vol. 1: Systems Security Engineering
- NIST SP 800-160 Vol. 2: Developing Cyber-Resilient Systems
- Learning from the enemy: the GUNMAN project (NSA)
- The spy in Moscow station
- Principled Assuredly Trustworthy Composable Architectures
- Analogue Network Security
- Security from First Principles
- Electronic Access Control
This list is from Science of Cybersecurity.
- The Protection of Information in Computer Systems - my notes
- Proposed Technical Evaluation Criteria for Trusted Computer Systems - my notes
- Thirty Years Later: Lessons from the Multics Security Evaluation - my notes
- Dynamic protection structures
- A note on the confinement problem - my notes
- Security Controls for Computer Systems: Report of Defense Science Board Task Force on Computer Security
- Computer Security Technology Planning Study
- Computer Security Threat Monitoring and Surveillance
- Secure Computer System: Unified Exposition And Multics Interpretation
- Integrity Considerations For Secure Computer Systems
- Protection Analysis: Final Report
- Preliminary Notes on the Design of Secure Military Computer Systems
- Jobstream Separator System Design
- The Design and Specification of a Security Kernel for the PDP-11/45
- Security controls in the ADEPT-50 time-sharing system
- A postmortem for a time sharing system
- Protection (by Lampson)
- Protection in an information processing utility
- Protection and the Control of Information Sharing in Multics
- A Hardware Architecture for Implementing Protection Rings
- Design for Multics Security Enhancements
- Initial Structured Specifications for an Uncompromisable Computer Security System
- Reflections on Trusting Trust - revisited
- Protection in Operating Systems
- Subversion: The Neglected Aspect of Computer Security
- Multics Security Evaluation: Vulnerability Analysis
- Secure Minicomputer Operating System
- Information Security: An Elusive Goal
- Implementation Of A Secure Data Management System For The Secure Unix Operating System
- Audit and Evaluation of Computer Security: System Vulnerabilities and Controls II
- Operating System Structures to Support Security and Reliable Software
- A Provably Secure Operating System
- Relational database access controls
- Looking back at the Bell-LaPadula Model
- Original Bell-LaPadula Paper
- Encapsulation: an approach to operating system security
- Specification of a trusted computing base
- Minicomputer Architectures for Effective Security Kernel Implementations
- Security Analysis And Enhancements Of Computer Operating Systems
- Access Control based on Execution History
- A Distributed Trust Model
- Abstraction and Refinement of Layered Security Policy
- Policy Management for Networked Systems and Applications
- Containing the Hype
- Distributed trust model
- Issues in Designing a Policy Language for Distributed Management of IT Infrastructures
- Planning and Integrating Deception into Computer Security Defenses
- Applying the TCSEC Guidelines to a Real-time Embedded System Environment
- Safety analysis for the extended schematic protection model
- A Secure and Reliable Bootstrap Architecture
- Adaptive Cyberdefense for Survival and Intrusion Tolerance
- Basic Concepts and Taxonomy of Dependable and Secure Computing
- The Base-Rate Fallacy and the Difficulty of Intrusion Detection
- A Linear Time Algorithm for Deciding Subject Security
- Applying the take-grant protection model
- Formal models for computer security
- Capability myths demolished
- The Inevitability of Failure: The Flawed Assumption of Security in Modern Computing Environments
- Lattice-based access control models
- A lattice model of secure information flow
- A framework for dynamic subversion
- NGSCB: A Trusted Open System
- EROS: a fast capability system
- LOMAC: Low Water-Mark Integrity Protection for Linux
- Access Control Model For Distributed Conferencing System
- A Comparison of Commercial and Military Computer Security Policies
- Security Policies and Security Models
If you know of any good books, talks, papers, or other resources on the topics below, please submit a pull request, or even easier, just create an issue and I'll add the resources to the repo for you.
- How is online gambling kept secure?
- How are casino slot machines kept secure both from insiders (see Ocean's 13!) and outsiders?
- How are facilities containing hazardous biological or chemical material secured?
- What about nuclear facilities?
- What security requirements does the DOD have for its JEDI Cloud?
- How do chip fabs and bio facilities prevent contamination?
- Are there any things we can apply from safety engineering to security engineering?
- Do unhackable systems exist? How would you build one?
- Write up case studies on how I'd use my process to secure different things
- Create practical, step by step checklists for doing each of the parts of my process
- Have interviews with people who design security for museums, banks, prisons, casinos, etc