Author: Daniel De Graaf <[email protected]>
================================================================================
Overview
================================================================================
This document describes example platforms which use virtual TPMs to provide
security properties for guests running on the platforms. There are several
tradeoffs between flexibility and trust which must be considered when
implementing a platform containing vTPMs.
================================================================================
Example 1: Trusted Domain 0
================================================================================
This is the simplest example and provides maximal flexibility for testing the
vTPM Manager and vTPMs. The vtpmmgr, vtpm, and guest domains are created using
xl from the command line on domain 0.

Provisioning on domain 0:

  # dd if=/dev/zero of=/images/vtpmmgr-stubdom.img bs=2M count=1
  # dd if=/dev/zero of=/images/vtpm-guest1.img bs=2M count=1
  # dd if=/dev/zero of=/images/vtpm-guest2.img bs=2M count=1

Each 2MB image provides the disk on which the corresponding stub domain
stores its encrypted persistent state.

The vtpmmgr configuration file (vtpmmgr.cfg):

  name="vtpmmgr"
  kernel="/usr/lib/xen/boot/vtpmmgr-stubdom.gz"
  extra="tpmlocality=2"
  memory=8
  disk=["file:/images/vtpmmgr-stubdom.img,hda,w"]
  iomem=["fed42,1"]
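  # Note: iomem passes through one page of IO memory starting at page
  # 0xfed42 (address 0xfed42000), the hardware TPM's locality 2
  # register page, matching tpmlocality=2 above.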

The vtpm configuration files (vtpm-guest1.cfg, vtpm-guest2.cfg):

  name="vtpm-guest1"
  kernel="/usr/lib/xen/boot/vtpm-stubdom.gz"
  extra="loglevel=debug"
  memory=8
  disk=["file:/images/vtpm-guest1.img,hda,w"]
  vtpm=["backend=vtpmmgr,uuid=ac0a5b9e-cbe2-4c07-b43b-1d69e46fb839"]
name="vtpm-guest2"
kernel="/usr/lib/xen/boot/vtpm-stubdom.gz"
extra="loglevel=debug"
memory=8
disk=["file:/images/vtpm-guest2.img,hda,w"]
vtpm=["backend=vtpmmgr,uuid=6c3ff5f1-8d58-4fed-b00d-a5ea9a817f7f"]

The guest configuration files (guest1.cfg, guest2.cfg):

  name="guest1"
  kernel="/usr/lib/xen/boot/pv-grub-x86_64.gz"
  memory=1024
  disk=["file:/images/guest1.img,xvda,w"]
  vif=['mac=00:01:02:03:04:05,bridge=br0']
  vtpm=["backend=vtpm-guest1"]
name="guest2"
kernel="/usr/lib/xen/boot/pv-grub-x86_64.gz"
memory=1024
disk=["file:/images/guest2.img,xvda,w"]
vif=['mac=00:01:02:03:04:06,bridge=br0']
vtpm=["backend=vtpm-guest2"]

Starting domains:

  # xl create vtpmmgr.cfg
  # xl create vtpm-guest1.cfg
  # xl create guest1.cfg
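
Once guest1 is running, the vTPM can be sanity-checked from inside the
guest. This is an illustrative sketch rather than part of the setup: it
assumes the guest kernel includes the Xen TPM frontend driver and that
the guest has the TrouSerS daemon (tcsd) and tpm-tools installed.

  # ls /sys/class/tpm
  # tcsd
  # tpm_version

The frontend should appear as tpm0, and tpm_version should report the
TPM 1.2 device implemented by the vtpm stub domain.
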
================================================================================
Example 2: Domain Builder with Static vTPMs
================================================================================
This example uses the domain builder to construct a TPM Manager and vTPM which
do not require trusting the hardware domain with the vTPM's secrets. However,
it is not possible to construct additional vTPMs after the system is booted, and
the guests with access to vTPMs may not be rebooted without rebooting the entire
platform.

The domain builder (dom0) constructs:

  dom1 - xenstore  system_u:system_r:xenstore_t
  dom2 - hardware  system_u:system_r:hwdom_t
  dom3 - vtpmmgr   system_u:system_r:vtpmmgr_t
  dom4 - vtpm-hw   system_u:system_r:vtpm_t
  dom5 - vtpm-g1   guest1_u:vm_r:vtpm_t
  dom6 - vtpm-g2   guest2_u:vm_r:vtpm_t
  dom7 - guest1    guest1_u:vm_r:guest_t
  dom8 - guest2    guest2_u:vm_r:guest_t

It unpauses dom1 and dom2 after setting up Xenstore. The hardware domain
is not permitted access to the IO memory page at 0xfed42 (the physical
TPM's locality 2 registers); that page is instead accessible to the
vtpmmgr domain. The two guest domains may be instantiated using pv-grub,
or using the same kernel as the hardware domain to conserve space in the
domain builder's initrd.
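
Although these domains are created by the domain builder rather than by
xl, the labels above are the same ones xl would assign with the seclabel
key; a hypothetical fragment for dom5 and dom7, assuming the labels are
defined in the loaded FLASK policy:

  # vtpm-g1
  seclabel="guest1_u:vm_r:vtpm_t"

  # guest1
  seclabel="guest1_u:vm_r:guest_t"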

Once the hardware domain boots, it runs:

  # xl block-attach vtpmmgr 'backendtype=phy,backend=hardware,vdev=hda,access=w,target=/dev/lvm/vtpmmgr'
  # xl block-attach vtpm-hw 'backendtype=phy,backend=hardware,vdev=hda,access=w,target=/dev/lvm/vtpm-hw'
  # xl block-attach vtpm-g1 'backendtype=phy,backend=hardware,vdev=hda,access=w,target=/dev/lvm/vtpm-g1'
  # xl block-attach vtpm-g2 'backendtype=phy,backend=hardware,vdev=hda,access=w,target=/dev/lvm/vtpm-g2'
  # xl block-attach guest1 'backendtype=phy,backend=hardware,vdev=xvda,access=w,target=/dev/lvm/guest1'
  # xl block-attach guest2 'backendtype=phy,backend=hardware,vdev=xvda,access=w,target=/dev/lvm/guest2'
  # xl vtpm-attach vtpm-hw uuid=062b6416-ed46-492a-9e65-a2f92dc07f7f backend=vtpmmgr
  # xl vtpm-attach vtpm-g1 uuid=e9aa9d0f-ece5-4b84-b129-93004ba61a5f backend=vtpmmgr
  # xl vtpm-attach vtpm-g2 uuid=3fb2caf0-d305-4516-96c7-420618d98efb backend=vtpmmgr
  # xl vtpm-attach hardware uuid=062b6416-ed46-492a-9e65-a2f92dc07f7f backend=vtpm-hw
  # xl vtpm-attach guest1 uuid=e9aa9d0f-ece5-4b84-b129-93004ba61a5f backend=vtpm-g1
  # xl vtpm-attach guest2 uuid=3fb2caf0-d305-4516-96c7-420618d98efb backend=vtpm-g2

Once these commands are complete, the domains are unpaused and may boot.
Note that each vTPM's uuid appears twice above: once when the vTPM is
attached to the vTPM Manager, and again when the corresponding client
domain is attached to that vTPM. The XSM policy must be configured so
that none of the domain types named above can be created by any domain
except the domain builder; guests created by the hardware domain, or by
one of the primary guests acting as a control domain, must have a
different type. The type vtpmmgr_t may only map grants from vtpm_t;
vtpm_t may only map grants from a domain of type guest_t or hwdom_t with
the same user field, as sketched below.
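
A minimal sketch of those grant rules, assuming the FLASK policy syntax
used by Xen's example policy under tools/flask/policy; the permissions
exist in the grant class, but the constrain rule is illustrative and
would need careful scoping in a real policy so that it does not break
grant mapping between unrelated domains:

  allow vtpmmgr_t vtpm_t:grant { map_read map_write unmap };
  allow vtpm_t guest_t:grant { map_read map_write unmap };
  allow vtpm_t hwdom_t:grant { map_read map_write unmap };
  # Illustrative: a vTPM and its client must share the XSM user field,
  # with an exception for the system-labeled manager, which maps grants
  # from guest-owned vTPMs.
  constrain grant { map_read map_write } (u1 == u2 or t1 == vtpmmgr_t);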

This example may be extended to allow dynamic creation of domains by
using a domain builder that accepts build requests. A single build
request would create a pair of domains using an unused XSM user field: a
vTPM and a pv-grub domain that requires the presence of that vTPM. To
bind the configuration of the guest to the vTPM, the guest may use
full-disk encryption that is unlocked by an unseal operation, as
sketched below; using the wrong vTPM will then yield a non-functioning
guest.
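
As an illustration of the unseal-based unlock, the guest's initramfs
might run something like the following; the sealed key file, the
tpm-tools commands, and the /dev/xvda2 root volume are assumptions for
the example, with the key having been sealed earlier against the
expected PCR values using tpm_sealdata:

  # tcsd
  # tpm_unsealdata -i /boot/rootkey.sealed -o /tmp/rootkey
  # cryptsetup luksOpen --key-file /tmp/rootkey /dev/xvda2 root

If the domain was attached to the wrong vTPM, the PCR values will not
match those used at sealing time, the unseal fails, and the root volume
stays locked, yielding the non-functioning guest described above.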

In order to use pv-grub to obtain measurements of the guest kernel in
PCRs 4 and 5, it must not be possible to attach to a guest's vTPM
without booting a fresh guest image. This requires pairing every vTPM's
launch with the launch of a guest, as described above, and using the
--vtpm-label= argument to pv-grub so that it refuses to launch a guest
if it could not write to the vTPM.

To permit the hardware domain, which cannot use pv-grub, to use a vTPM
in this situation, multiple vTPM groups must be used in the TPM Manager.
Group 0 would be for the hardware domain only, and would only support
vTPMs with label "system_u:system_r:vtpm_t". Group 1 would support vTPMs
with label "*:vm_r:vtpm_t", and would be used for all guest vTPMs. The
EK quote used in initial provisioning and any deep quotes produced later
would include the label, which allows a verifier to reliably determine
whether the vTPM's PCR 4 contains the hash of the domain's kernel.
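
For concreteness, a hypothetical guest configuration using the
--vtpm-label= argument; the label value and the menu.lst path are
illustrative, and the exact argument syntax should be checked against
the pv-grub build in use:

  name="guest1"
  kernel="/usr/lib/xen/boot/pv-grub-x86_64.gz"
  extra="--vtpm-label=guest1_u:vm_r:vtpm_t (hd0,0)/boot/grub/menu.lst"
  vtpm=["backend=vtpm-g1"]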