Message ID | 20200903110831.353476-1-philmd@redhat.com |
---|---|
Series | hw: Forbid DMA write accesses to MMIO regions |
On 03/09/20 13:08, Philippe Mathieu-Daudé wrote:
> Do not allow PCI slaves to write to indirect memory
> regions such MMIO.
>
> This fixes LP#1886362 and LP#1888606.

What is a "PCI slave"? Which devices would still be allowed to write?

I'm worried that there are cases of MMIO reads that would be broken.
They are certainly niche these days, but they should still work; the
most "famous" one is perhaps the old BASIC

    DEF SEG=&HB800
    BLOAD "picture.pic", 0

Paolo
On 9/3/20 2:26 PM, Paolo Bonzini wrote:
> On 03/09/20 13:08, Philippe Mathieu-Daudé wrote:
>> Do not allow PCI slaves to write to indirect memory
>> regions such MMIO.
>>
>> This fixes LP#1886362 and LP#1888606.
>
> What is a "PCI slave"?

TBH I had a quick look at the PCIe SPEC, but not PCI. PCIe might be able
to use the requester_id to check whether the access is valid, but I'm not
sure where to add this check.

> Which devices would still be allowed to write?

As of this patch, all the non-PCI ones, but I plan to add a similar check
for USB on top of this series.

> I'm worried that there are cases of MMIO reads that would be broken.
> They are certainly niche these days, but they should still work; the
> most "famous" one is perhaps the old BASIC
>
>     DEF SEG=&HB800
>     BLOAD "picture.pic", 0

This looks like ISA stuff. I don't think ISA does such checks (and I
didn't plan to add them there), but I'd need to verify. Do you have an
acceptance test?

> Paolo
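For context, the idea from the cover letter can be sketched roughly as below. The `direct_access` attribute and the `MEMTX_BUS_ERROR` result come from the series itself; the helper name, and the assumption that the device side sets the bit before issuing the write, are illustrative only and not necessarily how the actual patches wire it up.

```c
#include "qemu/osdep.h"
#include "exec/memattrs.h"
#include "exec/memory.h"

/*
 * Hypothetical helper for a DMA master that should only target RAM/ROM.
 * 'direct_access' is the MemTxAttrs bit the series introduces; with the
 * series applied, the memory core is expected to fail the dispatch with
 * MEMTX_BUS_ERROR when 'addr' resolves to an MMIO region.
 */
static MemTxResult dma_write_ram_only(AddressSpace *as, hwaddr addr,
                                      const void *buf, hwaddr len)
{
    MemTxAttrs attrs = { .direct_access = 1 }; /* field added by the series */

    return address_space_write(as, addr, attrs, buf, len);
}
```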
On Thu, Sep 03, 2020 at 02:58:19PM +0100, Peter Maydell wrote:
> On Thu, 3 Sep 2020 at 14:37, Laszlo Ersek <lersek@redhat.com> wrote:
> > Peter mentions an approach at the end of
> > <https://bugs.launchpad.net/qemu/+bug/1886362/comments/5> that I believe
> > to understand, but -- according to him -- it seems too much work.
>
> It also would only be effective for MMIO, not for qemu_irq lines...
>
> > I don't think such chains work unto arbitrary depths on physical
> > hardware either.
>
> Real hardware by and large doesn't get designed with this kind
> of DMA-to-self as a consideration either, but unfortunately it's
> not really very useful as a model to base QEMU's behaviour on:
>
> (1) real hardware is usually massively parallel, so the logic
>     that handles incoming MMIO is decoupled anyway from logic
>     that does outgoing DMA. (Arguably the "do all DMA in a
>     bottom-half" idea is kind of following the hardware design.)
>     Similarly simple "raise this outbound signal" logic just
>     works as an instantaneous action that causes the device on
>     the other end to change its state/do something parallel,
>     whereas for QEMU we need to actually call some code in the
>     device on the other end and so we serialize this stuff,
>     sandwiching a bit of "device B code" in the middle of a
>     run of "device A code". So a lot more of this stuff "just
>     happens to work" on h/w than we get with QEMU.
>
> (2) if software running on real h/w does do something silly with
>     programming a device to DMA to itself then the worst case is
>     generally that they manage to wedge that device (or the whole
>     machine, if you're really unlucky), in which case the response
>     is "don't do that then". There isn't the same "guest code
>     can escape the VM" security boundary that QEMU needs to guard
>     against [*].
>
> [*] I do wonder about hardware-device-passthrough setups; I
>     don't think I would care to pass through an arbitrary device
>     to an untrusted guest...

Hmm, I guess it would make sense to have a configurable option in KVM
to isolate passthrough devices so they only can DMA to guest RAM...

Cheers,
Edgar
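For readers unfamiliar with the "do all DMA in a bottom-half" idea mentioned in (1): the MMIO write handler only latches the request and schedules a bottom half, so the transfer runs later from the main loop instead of re-entrantly inside the MMIO dispatch. A minimal sketch follows; the device and function names are made up for illustration.

```c
#include "qemu/osdep.h"
#include "qemu/main-loop.h"

typedef struct MyDevState {
    QEMUBH *dma_bh;
    /* DMA parameters latched by the guest via MMIO register writes */
    uint64_t dma_src, dma_dst, dma_len;
} MyDevState;

/* Runs later from the main loop, not inside the MMIO dispatch. */
static void mydev_dma_bh(void *opaque)
{
    MyDevState *s = opaque;

    /* Do the actual transfer here, e.g. with dma_memory_rw()/pci_dma_rw(). */
    (void)s;
}

/* Called from the device's MMIO write handler (the "kick" register). */
static void mydev_kick_dma(MyDevState *s)
{
    /* Don't perform the DMA now; just schedule it. */
    qemu_bh_schedule(s->dma_bh);
}

static void mydev_realize(MyDevState *s)
{
    s->dma_bh = qemu_bh_new(mydev_dma_bh, s);
}
```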
On 03/09/20 16:24, Edgar E. Iglesias wrote:
>> [*] I do wonder about hardware-device-passthrough setups; I
>> don't think I would care to pass through an arbitrary device
>> to an untrusted guest...
>
> Hmm, I guess it would make sense to have a configurable option in KVM
> to isolate passthrough devices so they only can DMA to guest RAM...

Passthrough devices are always protected by the IOMMU, anything else
would be obviously insane^H^H^Hecure. :)

Paolo
On 03/09/20 17:50, Edgar E. Iglesias wrote:
>>> Hmm, I guess it would make sense to have a configurable option in KVM
>>> to isolate passthrough devices so they only can DMA to guest RAM...
>>
>> Passthrough devices are always protected by the IOMMU, anything else
>> would be obviously insane^H^H^Hecure. :)
>
> Really? To always do that blindly seems wrong.
>
> I'm referring to the passthrough device not being able to reach registers
> of other passthrough devices within the same guest.

Ah okay; sorry, I misunderstood. That makes more sense now!

Multiple devices are put in the same IOMMU "container" (page table
basically), and that takes care of reaching registers of other
passthrough devices.

Paolo

> Obviously the IOMMU should be setup so that passthrough devices don't reach
> other guests or the host.
On Thu, Sep 03, 2020 at 07:53:33PM +0200, Paolo Bonzini wrote:
> On 03/09/20 17:50, Edgar E. Iglesias wrote:
> >>> Hmm, I guess it would make sense to have a configurable option in KVM
> >>> to isolate passthrough devices so they only can DMA to guest RAM...
> >>
> >> Passthrough devices are always protected by the IOMMU, anything else
> >> would be obviously insane^H^H^Hecure. :)
> >
> > Really? To always do that blindly seems wrong.
> >
> > I'm referring to the passthrough device not being able to reach registers
> > of other passthrough devices within the same guest.
>
> Ah okay; sorry, I misunderstood. That makes more sense now!
>
> Multiple devices are put in the same IOMMU "container" (page table
> basically), and that takes care of reaching registers of other
> passthrough devices.

Thanks, yes, that's a sane default.

What I was trying to say before is that it may make sense to allow the
user to "harden" the setup by selectively putting certain passthrough
devices on a separate group that can *only* DMA to guest RAM (not other
devices' registers).

Some devices need access to other devices' registers, but many passthrough
devices don't need DMA access to anything but RAM (e.g. an Ethernet MAC).
That could mitigate the damage caused by wild DMA pointers...

Cheers,
Edgar
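To make the suggestion concrete: with VFIO, whatever DMA mappings are installed into a container are all the device can reach, so a "RAM-only" container would simply omit mappings for peer devices' BARs. A rough userspace sketch under that assumption is below; the group number, sizes, and helper name are made up, error handling is omitted, and QEMU/KVM would of course do this internally rather than in a standalone program.

```c
#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Map only guest RAM into the device's IOMMU address space, so its DMA
 * cannot reach other passthrough devices' registers. */
static int setup_ram_only_container(void *guest_ram_hva, uint64_t ram_size)
{
    int container = open("/dev/vfio/vfio", O_RDWR);
    int group = open("/dev/vfio/26", O_RDWR);   /* group number is made up */

    /* Attach the group to the container, then pick an IOMMU backend. */
    ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
    ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

    struct vfio_iommu_type1_dma_map map = {
        .argsz = sizeof(map),
        .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
        .vaddr = (uint64_t)(uintptr_t)guest_ram_hva, /* host virtual addr */
        .iova  = 0,                                  /* guest physical addr */
        .size  = ram_size,
    };
    /* Note: no mappings are added for peer devices' MMIO BARs. */
    return ioctl(container, VFIO_IOMMU_MAP_DMA, &map);
}
```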
On Wed, 9 Sep 2020 at 10:12, Stefan Hajnoczi <stefanha@gmail.com> wrote:
> On Thu, Sep 03, 2020 at 01:08:19PM +0200, Philippe Mathieu-Daudé wrote:
> > The main patch is:
> > "exec/memattrs: Introduce MemTxAttrs::direct_access field".
> > This way we can restrict accesses to ROM/RAM by setting the
> > 'direct_access' field. Illegal accesses return MEMTX_BUS_ERROR.
>
> QEMU needs to simulate the behavior of real hardware. What is the
> behavior of real hardware?

It varies, depending on the hardware. The most common thing
is probably "happens to work by luck", which is OK for hardware
but doesn't help us much since our software implementation is
naturally more serialized than hardware is and since we don't
want to allow guests to make QEMU crash or misbehave.

thanks
-- PMM
On Wed, Sep 9, 2020 at 2:23 PM Peter Maydell <peter.maydell@linaro.org> wrote:
> On Wed, 9 Sep 2020 at 10:12, Stefan Hajnoczi <stefanha@gmail.com> wrote:
> > On Thu, Sep 03, 2020 at 01:08:19PM +0200, Philippe Mathieu-Daudé wrote:
> > > The main patch is:
> > > "exec/memattrs: Introduce MemTxAttrs::direct_access field".
> > > This way we can restrict accesses to ROM/RAM by setting the
> > > 'direct_access' field. Illegal accesses return MEMTX_BUS_ERROR.
> >
> > QEMU needs to simulate the behavior of real hardware. What is the
> > behavior of real hardware?
>
> It varies, depending on the hardware. The most common thing
> is probably "happens to work by luck", which is OK for hardware
> but doesn't help us much since our software implementation is
> naturally more serialized than hardware is and since we don't
> want to allow guests to make QEMU crash or misbehave.

The memory API bounce buffer mechanism is evidence that some board(s) need
or needed it. At a minimum we need to find out the reason for the bounce
buffer mechanism to avoid breaking guests.

Stefan
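For context, the bounce buffer Stefan mentions sits behind address_space_map(): when the target is MMIO the memory core cannot hand out a direct host pointer, so it lends out a single temporary buffer and performs the actual MMIO accesses at unmap time. Devices using the map/unmap DMA pattern can therefore reach MMIO today. A rough sketch of that pattern, with the helper name and error handling purely illustrative:

```c
#include "qemu/osdep.h"
#include "exec/memory.h"

/* Copy 'size' bytes of device data into guest memory using map/unmap.
 * If 'addr' points at MMIO, 'p' is the bounce buffer and the real MMIO
 * writes happen inside address_space_unmap(). */
static void mydev_dma_out(AddressSpace *as, hwaddr addr,
                          const void *data, hwaddr size)
{
    hwaddr len = size;
    void *p = address_space_map(as, addr, &len, true, MEMTXATTRS_UNSPECIFIED);

    if (!p) {
        return; /* nothing mapped, or the bounce buffer is already in use */
    }
    memcpy(p, data, len);
    address_space_unmap(as, p, len, true, len);
}
```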