Why Apple's Product Security Documentation is So Fascinating
Apple has published a white paper that explains its product security in considerable detail. For some reason, there is also a Japanese version.
- ~~(PDF version) https://manuals.info.apple.com/MANUALS/1000/MA1902/ja_JP/apple-platform-security-guide-j.pdf~~ EDIT: The Japanese version seems to have disappeared
- (PDF version) https://help.apple.com/pdf/security/ja_JP/apple-platform-security-guide-j.pdf EDIT: Published at a new URL
- (PDF version) https://help.apple.com/pdf/security/en_US/apple-platform-security-guide.pdf
This document is, so to speak, a declaration of intent to build a business on user privacy, and it can be said that Apple believes this level of mechanism and disclosure is necessary to gain the "trust" of users. (However, this is not unique to Apple; for example, 1Password provides a similar level of disclosure: https://1password.com/files/1Password-White-Paper.pdf)
The web version is a bit lacking in terms of overview, so I recommend taking a look at the PDF version from top to bottom.
Interesting Points
...Since a bare recommendation isn't very persuasive, here are some of the points I found interesting.
Items Not Mentioned (= Real-world Security Crises)
This document is continuously updated, but there are some details that are not described.
EDIT: Fixed vulnerabilities are announced on the pages linked from the URLs shown during system updates. For example, there is a page with a brief explanation of the IOMobileFrameBuffer fix mentioned below.
For example, the iMessage zero-click attacks made famous by reports such as The Great iPwn were addressed in iOS 14 with the introduction of BlastDoor, a fundamental mitigation. From this document alone, you can hardly tell that such attacks existed or were actually dealt with.
(The sudden appearance of Tamagotchi illustrations in the zero-click attack slides is because the Google researcher is a fan and is also famous for reverse engineering Tamagotchis.)
However, as noted in a recent Amnesty International report (and an Engadget article), reports suggest that zero-click attacks via iMessage continue to exist even in iOS 14. This implies that there are limits to methods like BlastDoor that protect the parsing of untrusted messages on the device.
EDIT: Corellium reached a settlement. Also, since Corellium hasn't released specific information about the operating hardware, it's highly likely that the devices are simply being emulated.
Also, real-world iOS gets jailbroken even in its latest versions. There is even a service that offers devices up to jailbroken iPhone 12s, run under its own hypervisor, as remote devices, and that has been in litigation with Apple. Jailbreaking should not be possible if the security mechanisms described in this document worked perfectly. Conversely, the text never states what Apple is currently "failing at." Even the latest iOS 14.7.1 fixes a kernel vulnerability in IOMobileFrameBuffer, showing that security is an endless process.
Dedicated C Compiler
While the explanation of BlastDoor highlighted its Swift implementation, Apple also utilizes a dedicated C compiler for its bootloader.
In iOS 14 and iPadOS 14, to improve security, the C compiler toolchain used to build the iBoot bootloader was changed. The modified toolchain implements code designed to prevent memory and type safety issues that typically occur in C programs.
Although not mentioned in this document, Apple has traditionally used Scheme (a dialect of Lisp) for configuring security sandboxes. In the Google write-up of BlastDoor mentioned above, sandbox definition code written in Scheme (S-expressions) can be seen.
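For reference, sandbox profiles in this S-expression style look roughly like the following. This fragment is purely illustrative and is not taken from an actual Apple profile; the service name in particular is hypothetical.

```scheme
;; Illustrative SBPL-style sandbox profile (not an actual Apple profile)
(version 1)
(deny default)                          ; deny everything not explicitly allowed
(allow file-read*
       (subpath "/System/Library"))     ; read-only access to system frameworks
(allow mach-lookup
       (global-name "com.example.messages.decoder"))  ; hypothetical service name
```

A default-deny policy plus a short allow-list is exactly the shape you want for a process like BlastDoor that parses untrusted input.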
Use of L4 Kernel
Security systems sometimes surprise the world by adopting minor operating systems. For example, Intel ME is said to run MINIX, and its creator, Andrew Tanenbaum (the same person who had the famous Linux vs. MINIX debate with Linus Torvalds), even issued an open letter.
Like MINIX, L4 adopts a microkernel design, and Apple uses it as the OS for the Secure Enclave Processor:
The Secure Enclave processor runs an Apple-customized version of the L4 microkernel. It’s designed to work efficiently at a lower clock speed, which helps protect it against clock and power attacks.
However, compared to MINIX, L4 is a relatively well-known OS in the field and had been used in Qualcomm chipsets even before the iPhone.
Differences in Implementation Security by SoC Generation
Recently, as iOS 15's continued support for iPhone 6s and later became a topic of interest, people are concerned about when their devices will lose support. Currently, the oldest SoC supported by the iOS 15 family is the A8 in Apple TV HD, which is also the oldest SoC mentioned in this document at this time.
Note: A12, A13, S4, and S5 products first released in the fall of 2020 have second-generation secure storage components; while earlier products based on these SoCs have first-generation secure storage components.
Additionally, the security feature implementation table only includes SoCs from the A10 onwards.
Currently, iOS support is determined by whether the SoC supports Metal, but in the future, it might be differentiated based on the presence of security features. If so, it might start from the A12 onwards, where security features are more fully equipped.
Up to the A11, there is the well-known bootrom vulnerability (checkm8), which allows arbitrary code execution on devices in USB DFU mode. Combining this with another vulnerability in the Secure Enclave has become a major jailbreaking method today.
Personalization of Firmware Updates
A critical aspect of platform security is "preventing persistence." For example, the current jailbreak using checkm8 is called a "tethered jailbreak," and its effects disappear when the power is turned off. This makes it difficult to surreptitiously install on someone else's phone and hard to exploit.
Since installing new firmware via an iOS update is equivalent to making code changes persistent, it appears to be designed and implemented very carefully.
Proper use of these secure processes allows Apple to stop signing older versions of operating systems with known vulnerabilities and helps prevent downgrade attacks.
On Apple-designed silicon (such as Mac computers with a security chip and an Intel processor), the requirement for a network connection to Apple when updating the device exists in order to perform the personalization process.
Firmware updates are performed after mutual authentication. In other words, a device requesting a firmware update must communicate with Apple's servers and obtain approval from Apple along with the signature of the update to be installed.
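This flow can be sketched as a toy model. Everything here is an assumption for illustration: HMAC stands in for Apple's actual (asymmetric) signature scheme, and names like `sign_ticket` and `ECID-0001` are hypothetical. The point is how binding the approval to the firmware hash, the device ID, and a fresh nonce rejects downgrades and replays.

```python
import hashlib
import hmac
import secrets

SIGNING_KEY = secrets.token_bytes(32)   # stand-in for the server's signing key

def sign_ticket(fw_hash: bytes, device_id: bytes, nonce: bytes) -> bytes:
    """Server side: approve this exact firmware for this exact device and request."""
    return hmac.new(SIGNING_KEY, fw_hash + device_id + nonce, hashlib.sha256).digest()

def verify_ticket(ticket: bytes, fw_hash: bytes, device_id: bytes, nonce: bytes) -> bool:
    """Device side: accept the update only if every bound value matches."""
    expected = hmac.new(SIGNING_KEY, fw_hash + device_id + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(ticket, expected)

device_id = b"ECID-0001"                # unique per device (hypothetical value)
nonce = secrets.token_bytes(16)         # fresh per update request
fw_new = hashlib.sha256(b"firmware 14.7.1").digest()
ticket = sign_ticket(fw_new, device_id, nonce)

print(verify_ticket(ticket, fw_new, device_id, nonce))   # True: approved update
fw_old = hashlib.sha256(b"firmware 13.5").digest()
print(verify_ticket(ticket, fw_old, device_id, nonce))   # False: downgrade rejected
print(verify_ticket(ticket, fw_new, device_id, secrets.token_bytes(16)))  # False: replayed ticket rejected
```

Once the server stops issuing tickets for an old firmware hash, no stored ticket can be reused to install it, which is exactly the "stop signing older versions" behavior the quote describes.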
One important aspect is that Apple devices operating offline basically do not generate profit for Apple (they don't upload user photo data or make purchases in the App Store), so there is no need to support them.
Without the assumption of being online, personalization wouldn't function. ...Perhaps one could imagine calling to provide a device activation key and having update files mailed to them, but...
Physical Buttons Directly Connected to the Secure Enclave
The Secure Enclave features the capability to authorize the use of cryptographic functions through physical buttons.
...Since the connection itself is wireless, there are further considerations for the M1 Mac + wireless keyboard combination.
(As a result, the security features of a wireless keyboard with Touch ID are exclusive to M1 Macs.)
One big caveat is that Touch ID only works on Macs with M1 chips. That leaves out a whole lot of the Macs currently on the market. If you have one of those fancy new systems, you can use Touch ID for secure logins, purchases, and the like. These limitations are likely because Touch ID uses the Secure Enclave built into the newer chips.
The use of dedicated buttons for expressing security intent is effective in preventing malicious software from performing operations against the user's will. For example, the reason Windows requires CTRL+ALT+DEL at login is that this key combination cannot be manipulated via Windows APIs (it is used as a Secure Attention Sequence).
The Weakest Encryption
Basically, it appears that a reasonable level of encryption is used in most places. The weakest encryption mentioned in the document seems to be RSA 1024-bit.
AirPlay also uses an authentication IC to verify that the receiver is certified by Apple. For AirPlay audio and CarPlay video streams, communication between the accessory and the device is encrypted using MFi-SAP (Secure Association Protocol) in AES128 counter (CTR) mode. Also, as part of the Station-to-Station (STS) protocol, ephemeral keys are exchanged via ECDH key exchange (Curve25519) and signed using the authentication IC’s 1024-bit RSA key.
This only verifies the authenticity of the authentication IC and does not guarantee the correctness of the protocol implementation itself (it appears to have been optional until AirPlay 2), so it is perhaps not a very critical use of encryption.
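The ephemeral key exchange in an STS-style handshake can be sketched as follows. This is a deliberately toy model: the real MFi-SAP uses Curve25519 ECDH, whereas this demo does finite-field Diffie-Hellman with a tiny generator and is NOT secure; it only shows how both sides arrive at the same session key from ephemeral keys.

```python
import hashlib
import secrets

# Toy finite-field Diffie-Hellman (NOT secure; for illustration only).
# The real protocol uses Curve25519 ECDH instead.
P = 2**255 - 19
G = 2

def ephemeral_keypair():
    priv = secrets.randbelow(P - 2) + 1          # fresh secret per session
    return priv, pow(G, priv, P)                 # (private, public)

dev_priv, dev_pub = ephemeral_keypair()          # device side
acc_priv, acc_pub = ephemeral_keypair()          # accessory side

# Each side combines its own private key with the peer's public key
dev_secret = pow(acc_pub, dev_priv, P)
acc_secret = pow(dev_pub, acc_priv, P)
assert dev_secret == acc_secret                  # same shared secret

# Derive a 128-bit key for AES in CTR mode; in the real protocol the
# accessory's contribution is additionally signed by the auth IC's
# 1024-bit RSA key, so the device knows it is talking to certified hardware
session_key = hashlib.sha256(dev_secret.to_bytes(32, "big")).digest()[:16]
print(len(session_key))  # 16 bytes -> AES-128
```

The RSA-1024 signature is thus only an endpoint-authenticity check layered on the exchange; the confidentiality of the stream rests on the ephemeral keys and AES-128, which matches the reading above that this is not a very critical use of the weak key.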
English Version
There are also English versions of the web pages and PDFs.
...The English version has more pages. Even though the content seems the same at first glance... There are two or three very minor typos unique to the Japanese version.
Thoughts
Is all of this really necessary to operate as a platform company?
Apple Key
A distinctive lock symbol, a padlock designed to look like the Apple logo, is used as the page icon. This lock mark appears in Apple's communications about security, notably in the document on the harms of app sideloading released during the litigation with Epic.
- (PDF) https://www.apple.com/privacy/docs/Building_a_Trusted_Ecosystem_for_Millions_of_Apps.pdf -- Does a Japanese version not exist?
- Engadget Japan article: https://japanese.engadget.com/apple-sideloading-privacy-risk-110026727.html
As a result, users would have to constantly be on the lookout for scams and would only be able to download a handful of apps from a limited number of developers. On the other hand, Apple describes the App Store as a "trusted place," stating that multiple layers of security provide "unparalleled levels of protection from malicious software" and give users peace of mind.
While the lawsuit itself focuses on the so-called "Apple Tax," the key point is that the impact of sideloading is discussed within the context of security.
In fact, as demonstrated by AltStore (a sideloading helper created to distribute game console emulators), sideloading is already possible today as long as the application is signed. It wouldn't be difficult for Apple to block AltStore, for instance by requiring manual operations on a webpage via reCAPTCHA for free signing, but in practice they haven't gone that far.
Therefore, what Apple calls "sideloading" probably doesn't include things as cumbersome as AltStore (which requires a re-signing daemon to remain resident on the PC side); rather, they likely want to prevent apps that haven't been approved by Apple from running on top of the "trust" they've built at a massive cost.
Is Security Based on Imperfect Trust Possible?
Apple builds out complete security at significant cost and, on that basis, has users place implicit trust in it, on top of which it provides various services. This model enables applications like Apple Pay, but it also breeds dissatisfaction, as seen in the lawsuit with Epic.
Is it possible to provide equivalent services in a model that does not require ultimate trust?
For example, current DRM systems function without requiring ultimate trust. In Microsoft's PlayReady, the Insecure world and Secure world (TEE) are clearly distinguished, and it is designed to function even in the presence of an Insecure world.
What if there were a USB-connected credit card?
Suppose a credit card issued by VISA had a USB terminal, and you could perform payment processing on a PC using that card. What kind of problems would arise?
(Excluding the fact that it would look extremely uncool.)
An immediate problem is that there is no secure way to enter a PIN on a PC. Since PINs are not updated frequently, if a keylogger or similar is installed, it's game over. In the case of Apple Pay, this problem does not exist (provided there are no bugs) because a secure OS handles the PIN entry or proxy entry via fingerprint authentication.
How to Securely Connect User Intent to the Computer
I believe this reveals part of the true nature of the "trust" that Apple product users have. Specifically, being certain that the UI provided by Apple:
- Reflects the user's intent accurately, without that intent being communicated to unintended third parties.
- Ensures that no person other than the user can communicate intent by impersonating them.
Confirming that these points are being realized is the essence of the trust users place in Apple products, and with that trust, Apple implements applications that rely heavily on user privacy, such as Apple Pay, photos, and Siri.
Requiring that only authorized applications can run is the simplest way to fulfill this expectation; other methods are difficult. For instance, no matter how prominently a web browser displays SSL information in the address bar, it has little effect on phishing scams, and it is difficult to establish trust in that approach through user education.
Currently, the industry is moving toward gaining trust through system-wide consistency in a somewhat simplistic way, such as Windows 11 requiring TPM 2.0. However, I wonder if it might be possible to consider more seriously what it is that users actually want to trust, and gain that trust in a more compact way.
Discussion