kvm: Disallow wraparound in kvm_gfn_to_hva_cache_init
Previously, in the case where (gpa + len) wrapped around, the entire
region was not validated, contrary to what the comment claimed. It
doesn't actually seem that wraparound should be allowed here at all.

Furthermore, since some callers don't check the return code from this
function, it seems prudent to clear ghc->memslot in the event of an
error.

Fixes: 8f96452 ("KVM: Allow cross page reads and writes from cached translations.")
Reported-by: Cfir Cohen <[email protected]>
Signed-off-by: Jim Mattson <[email protected]>
Reviewed-by: Cfir Cohen <[email protected]>
Reviewed-by: Marc Orr <[email protected]>
Cc: Andrew Honig <[email protected]>
Signed-off-by: Radim Krčmář <[email protected]>
jsmattsonjr authored and bonzini committed Dec 21, 2018
1 parent ba7424b commit f1b9dd5
Showing 1 changed file with 21 additions and 20 deletions.
41 changes: 21 additions & 20 deletions virt/kvm/kvm_main.c
@@ -2005,32 +2005,33 @@ static int __kvm_gfn_to_hva_cache_init(struct kvm_memslots *slots,
 	gfn_t end_gfn = (gpa + len - 1) >> PAGE_SHIFT;
 	gfn_t nr_pages_needed = end_gfn - start_gfn + 1;
 	gfn_t nr_pages_avail;
+	int r = start_gfn <= end_gfn ? 0 : -EINVAL;
 
 	ghc->gpa = gpa;
 	ghc->generation = slots->generation;
 	ghc->len = len;
-	ghc->memslot = __gfn_to_memslot(slots, start_gfn);
-	ghc->hva = gfn_to_hva_many(ghc->memslot, start_gfn, NULL);
-	if (!kvm_is_error_hva(ghc->hva) && nr_pages_needed <= 1) {
+	ghc->hva = KVM_HVA_ERR_BAD;
+
+	/*
+	 * If the requested region crosses two memslots, we still
+	 * verify that the entire region is valid here.
+	 */
+	while (!r && start_gfn <= end_gfn) {
+		ghc->memslot = __gfn_to_memslot(slots, start_gfn);
+		ghc->hva = gfn_to_hva_many(ghc->memslot, start_gfn,
+					   &nr_pages_avail);
+		if (kvm_is_error_hva(ghc->hva))
+			r = -EFAULT;
+		start_gfn += nr_pages_avail;
+	}
+
+	/* Use the slow path for cross page reads and writes. */
+	if (!r && nr_pages_needed == 1)
 		ghc->hva += offset;
-	} else {
-		/*
-		 * If the requested region crosses two memslots, we still
-		 * verify that the entire region is valid here.
-		 */
-		while (start_gfn <= end_gfn) {
-			nr_pages_avail = 0;
-			ghc->memslot = __gfn_to_memslot(slots, start_gfn);
-			ghc->hva = gfn_to_hva_many(ghc->memslot, start_gfn,
-						   &nr_pages_avail);
-			if (kvm_is_error_hva(ghc->hva))
-				return -EFAULT;
-			start_gfn += nr_pages_avail;
-		}
-		/* Use the slow path for cross page reads and writes. */
+	else
 		ghc->memslot = NULL;
-	}
-	return 0;
+
+	return r;
 }
 
 int kvm_gfn_to_hva_cache_init(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
