Message ID | 1457446804-18892-4-git-send-email-lersek@redhat.com
---|---
State | New
On 03/18/16 15:38, David Woodhouse wrote:
> On Tue, 2016-03-08 at 15:20 +0100, Laszlo Ersek wrote:
>> +    UINT64 PciExBarBase;
> ...
>>      if (mHostBridgeDevId == INTEL_Q35_MCH_DEVICE_ID) {
> ...
>> +      PciExBarBase = FixedPcdGet64 (PcdPciExpressBaseAddress);
> ...
>>      }
> ...
>>      if (mHostBridgeDevId == INTEL_Q35_MCH_DEVICE_ID) {
> ...
>> +      AddReservedMemoryBaseSizeHob (PciExBarBase, SIZE_256MB, FALSE);

Ugh, please don't snip context from patches. It's much harder to find the
context without the quote.

> The mHostBridgeDevId variable is global. Therefore as far as the
> compiler knows, it can change between those two if() statements.
>
> VS2008 complains thus:
>
> e:\edk2\ovmfpkg\platformpei\platform.c(275) : warning C4701: potentially uninitialized local variable 'PciExBarBase' used

Sure, VS2008 is wrong. :)

> If only we had some kind of code submission process where your
> submission could be automatically subjected to build tests on all
> platforms, and the status of those builds could be automatically
> displayed right there with your request... :)

I would already be using the Microsoft toolchains for test building, if
they were available at zero cost for the purposes I need them for. :)

You don't have to convince me about the utility of a build farm. I think
I (co-)proposed it years ago. Who will build it and administer it? Who
will foot the electricity & cooling bills? Etc etc etc.

If github can cross-build (or, heck, natively build) ArmVirtPkg for
aarch64, and also build OVMF with all Microsoft toolchains we care about,
now *that* would be "value add". (And, if such a facility existed by now,
I wouldn't depend on github to test-build my stuff for me; I'd use the
facility myself.)

Thanks
Laszlo

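For readers following along, here is a minimal sketch of the pattern behind
that warning. The names and the 0x29C0 device ID below are stand-ins, not
the actual OVMF source; the point is that the local is assigned only under
a test of a global variable, and read later under the same test, which the
compiler cannot prove still holds.

  #include <stdint.h>

  /* Global variable: as far as the compiler knows, its value may change
     between any two reads, e.g. from code it cannot see. */
  uint16_t gDevId;

  extern void Use (uint64_t Value);

  void
  Example (void)
  {
    uint64_t Base;

    if (gDevId == 0x29C0) {       /* stand-in for the Q35 device-ID check */
      Base = 0x80000000ULL;       /* Base is only assigned on this path   */
    }

    /* ... unrelated code ... */

    if (gDevId == 0x29C0) {       /* textually the same condition, but the
                                     compiler cannot prove it still holds */
      Use (Base);                 /* VS2008: warning C4701, "potentially
                                     uninitialized local variable" used   */
    }
  }
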
On 03/18/16 16:12, David Woodhouse wrote:
> Commit 7b8fe6356 ("OvmfPkg: PlatformPei: enable PCIEXBAR (aka MMCONFIG /
> ECAM) on Q35") broke the VS2008 build, causing it to complain of
> "potentially uninitialized local variable 'PciExBarBase' used" at
> line 275.
>
> On this occasion it isn't actually wrong — the mHostBridgeDevId variable
> is global, so theoretically it *could* change between the two if()
> statements.
>
> Of course VS2008 is also a steaming pile of poo, and even if we use
> a local boolean variable which definitely *can't* change in the middle,
> it still can't work it out and still emits the warning.
>
> So just initialise PciExBarBase to zero to shut the compiler up.
>
> Contributed-under: TianoCore Contribution Agreement 1.0
> Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
> ---
> Oops, my build test on Windows still had the temporary 'PciExBarBase=0'
> to shut the compiler up, so wasn't really exercising the fix I was
> submitting. Sorry for the noise.
>
> It turns out that VS2008 really is just a stupid pile of crap.
>
>  OvmfPkg/PlatformPei/Platform.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/OvmfPkg/PlatformPei/Platform.c b/OvmfPkg/PlatformPei/Platform.c
> index 0fc2278..20008d0 100644
> --- a/OvmfPkg/PlatformPei/Platform.c
> +++ b/OvmfPkg/PlatformPei/Platform.c
> @@ -212,7 +212,7 @@ MemMapInitialization (
>
>    if (!mXen) {
>      UINT32 TopOfLowRam;
> -    UINT64 PciExBarBase;
> +    UINT64 PciExBarBase = 0;
>      UINT32 PciBase;
>      UINT32 PciSize;
>
> --
> 2.5.5
>

Thank you for the patch. I have three problems with it:

(1) The commit message uses at least one non-ASCII character, the EM DASH
(U+2014). Can you please replace it with a "--"? Yes, I know you hate me
for asking this. Please just write me off as stupid and replace the
character.

(2) In the edk2 coding style, initialization of local variables is not
permitted. Can you please add an assignment instead?

(3) I tried to apply your patch nonetheless. It doesn't apply. I looked
at the patch email that I saved from Thunderbird. The code section of the
email contains =C2=A0 quoted-printable sequences. Since the charset is
UTF-8 (according to the Content-Type header), 0xC2 0xA0 translates to
U+00A0, "NO-BREAK SPACE". In the code, we just have plain space
characters.

Can you please submit a v3 with the above addressed?

Thanks!
Laszlo

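As a reference point for item (2): a sketch, assuming the edk2 base types
(UINT64, VOID) from MdePkg, of what the coding-style rule asks for. This is
illustrative only, not the hunk that was eventually committed; the
declaration stays bare and the value is assigned in a separate statement,
which also quiets C4701 without using an initializer.

  //
  // Illustrative only -- not the committed change. The edk2 style forbids
  // "UINT64 PciExBarBase = 0;", so the variable is declared bare and then
  // assigned in a separate statement.
  //
  VOID
  StyleExample (
    VOID
    )
  {
    UINT64 PciExBarBase;      // declaration carries no initializer

    PciExBarBase = 0;         // separate assignment is permitted by the
                              // style, and is enough to silence C4701
  }
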
On 03/18/16 18:45, David Woodhouse wrote:
> On Fri, 2016-03-18 at 17:53 +0100, Laszlo Ersek wrote:
>>
>> (1) The commit message uses at least one non-ASCII character, the EM
>> DASH (U+2014). Can you please replace it with a "--"? Yes, I know you
>> hate me for asking this. Please just write me off as stupid and replace
>> the character.
>
> No. This isn't 1994.

Really? I wouldn't know. If I look at what ISO C features we're allowed
to use, we certainly seem to be stuck in 1995. Or, well, sometime before
1989, because we can't even use structure assignment.

> If you wish to change it, go ahead.

Thank you, I will.

> Don't ask me
> to do stupid things for no good reason.

There is a good reason. You know it and you don't care about it. Not a
problem; I'll fix up the commit message.

>> (2) In the edk2 coding style, initialization of local variables is not
>> permitted. Can you please add an assignment instead?
>
> Er, really? What insanity is this?

Please ask your colleagues at Intel (but please also make sure that you
don't take offense on their behalf, when you call their guidelines insane
;) ).

> Where did these guidelines come from
> and can they be fixed? Why would initialisation of local variables
> *ever* be frowned upon? How do they make this stuff up?

Please consult "EDK II C Coding Standards Specification.pdf", section 6.8
"C Function Layout":

*Initializing a variable as part of its declaration is illegal.*

(Emphasis theirs.)

>> (3) I tried to apply your patch nonetheless. It doesn't apply. I looked
>> at the patch email that I saved from Thunderbird. The code section of
>> the email contains =C2=A0 quoted-printable sequences. Since the charset
>> is UTF-8 (according to the Content-Type header), 0xC2 0xA0 translates to
>> U+00A0, "NO-BREAK SPACE".
>
> Hm, that would appear to be an Evolution bug.

\o/ Finally something I'm not held responsible for!

(BTW, why did you mail out the patch with Evolution, rather than
git-send-email? Are you sure you are using the tools as they were
intended? :))

> I think it triggers when
> lines have trailing whitespace. I'll investigate. Thanks for pointing
> it out.
>
> I can put it in a PR if you prefer... :)

Works for me, but please send the pull request to the mailing list. I'll
rebase the commit for the commit message fixup, and for adding my R-b,
but I'll keep the base commit / history intact.

Thanks!
Laszlo

On 03/18/16 19:40, David Woodhouse wrote:
> But this is different. This is the commit messages. And what would you
> know... the last commit message in the log which isn't ASCII *isn't*
> that other one I pointed out; it's one from you (7daf2401) in which you
> commit the heinous crime of slepping Michał Zegan's name correctly :)

I went to extreme lengths to commit his name correctly. It wasn't at all
natural for me. But names are names.

> No, I genuinely don't know of any reason to eschew non-ASCII characters
> in commit messages.

Because they are a PITA for people who don't use UTF-8 generally? (I
don't.)

People still write RFC's today, and I think those are all pure ASCII as
well.

Anyway, we won't convince each other; we've been through this. Feel free
to post non-ASCII in the commit messages; should I come across them, I'll
try to fix them up (except in names).

Thanks
Laszlo

On 03/18/16 22:13, David Woodhouse wrote:
> On Fri, 2016-03-18 at 20:02 +0100, Laszlo Ersek wrote:
>> On 03/18/16 19:40, David Woodhouse wrote:
>>
>>>
>>> But this is different. This is the commit messages. And what would you
>>> know... the last commit message in the log which isn't ASCII *isn't*
>>> that other one I pointed out; it's one from you (7daf2401) in which you
>>> commit the heinous crime of slepping Michał Zegan's name correctly :)
>> I went to extreme lengths to commit his name correctly. It wasn't at all
>> natural for me. But names are names.
>>
>>>
>>> No, I genuinely don't know of any reason to eschew non-ASCII characters
>>> in commit messages.
>
>> Because they are a PITA for people who don't use UTF-8 generally? (I
>> don't.)
>
> This doesn't really make sense at any level. You must have to jump
> through hoops to still operate a legacy 20th-century character set on a
> modern Linux distribution, surely?

Not too many. Good locale support and good internationalization means
that my ISO8859-2 preference is well accommodated.

> And it must confer *no* benefit to
> you

It certainly does. It allows me to use my text editor of choice. I've
been using that editor for ~15 years, and in the last five years, it's
been running 10 hours per day or more.

> — unless you count it as a benefit that you have to go to 'extreme
> lengths' just to get someone's *name* right — something that normally
> ought to Just Work™ when you use 'git am'.
>
> And once it's been committed, either your system just doesn't display
> the commit logs correctly at all (and gets names like Michał's wrong),
> or you shouldn't even *notice* the emdash. Which is it?

It happens to display Michał's name correctly, because it fits in latin2.

  [i18n]
          logOutputEncoding = latin2
          commitencoding = latin2

The extreme lengths that I had to go to were necessary to convince
git-send-email not to mess up Michał's CC in the email headers, picking
it up from the commit message. The commit message was all right, but the
email header got mis-encoded (it's a git bug; it thought the CC line was
UTF-8, despite knowing that the full commit message was latin2).

emdash doesn't display correctly for me indeed, in the output of "git
log". It's not a big annoyance, but if I am to apply a patch, I try to
prevent it.

> I strongly suspect the latter, and that you only noticed because you
> were looking closely at the encoding because of that Evolution bug?

No. I tend to notice glyphs by the naked eye that are not ascii / latin1
/ latin2. Here's another example:

http://thread.gmane.org/gmane.comp.bios.edk2.devel/8563/focus=8566

>> Anyway, we won't convince each other; we've been through this. Feel free
>> to post non-ASCII in the commit messages; should I come across them,
>> I'll try to fix them up (except in names).
>
> No. Please don't. If this was another of the bizarre but enforced-by-
> the-project things then I was prepared to tolerate it until it could be
> fixed, but if you are just mangling my commit messages to make them
> less grammatically correct for an unsupportable personal preference,
> then that's not OK.
>
> The correct character to use in that situation is the emdash, If you
> *absolutely* must, then rewrite the whole sentence to avoid using it.
> Do *not* replace it with hyphens.

Okay. I've googled the use of emdash in the English language, and it
seems to be more or less interchangeable with parens. Is that okay?

> And when you do tweak a commit
> message, it is best practice to *note* that you have done so, and why.

Absolutely.

> Rewriting it for this reason is *not* acceptable.

Fine. Please ask Jordan to apply the patch then.

Laszlo

On 03/18/16 22:13, David Woodhouse wrote:
> Rewriting it for this reason is *not* acceptable.
BTW I have no clue why not, if I keep your S-o-b in the first place, and
document the changes. In other projects, maintainers even rewrite
contributors' patches, and (at best) credit the contributor with a
tag. I think it's entirely within a maintainer's jurisdiction to adapt the
commit message as he sees fit.
It may be idiosyncratic, and you can yell at me for it; but all that
will do is convince me to avert my eyes when you post an OvmfPkg patch.
Whenever you contribute to a project, do you always start with making a
huge noise, calling everyone around (or their rules) insane, "makes no
sense at all", and so on? The stuff we've been operating by thus far
has been mostly okay. I have updated a few commit messages as well, and
haven't received complaints. The operational changes you are
force-feeding the project (or the rate thereof) seem to surpass all the
changes it has undergone since I started participating.
CRLF has been working out for most people. Rebasing patches has been
working out. Staying away from non-ASCII in commit messages ditto.
Keeping full context when quoting patches ditto. You want edk2 to change
its ways in all aspects at once. And since I seem to be the guy who
talks to you the most, I'm the one you yell at the most. I've never
experienced this before. I'm stunned.
Laszlo
On 03/18/16 23:52, David Woodhouse wrote:
> On Fri, 2016-03-18 at 22:53 +0100, Laszlo Ersek wrote:
>> The extreme lengths that I had to go to were necessary to convince
>> git-send-email not to mess up Michał's CC in the email headers, picking
>> it up from the commit message. The commit message was all right, but the
>> email header got mis-encoded (it's a git bug; it thought the CC line was
>> UTF-8, despite knowing that the full commit message was latin2).
>
> I'd be interested in a reference to the upstream bug report for that
> one.

I didn't report it. First, I expected to be beat down with "just use
UTF-8". Second, I'm using a fairly old git release
(git-1.8.3.1-6.el7.x86_64, on RHEL-7.2), and the most recent git release
might not have the bug.

> It's going to be caused by the use of 'i18n.commitencoding=latin2'.
>
> Hm... did you realise that when you use that setup and you commit and
> push directly, that means you are putting latin2-encoded objects into
> the upstream EDK2 git repository?

Yes, I realized it. Latin2 text can be cleanly transcoded to UTF-8 (which
is how most people will read it), i.e., from i18n.commitEncoding to
i18n.logOutputEncoding.

> I'm surprised that even works without corrupting things for everyone —
> it looks like your commits actually carry an explicit label marking
> them as latin2.

Yes. I had read the discussion in the git-config and git-log manuals, and
saw it was safe.

> So yes, I'm interested in the bug because it should be fixed. But
> basically, you brought it upon yourself by operating in a mode that is
> *known* to invite such errors, and was abandoned by most other people a
> *long* time ago.

(Hm, didn't I just predict exactly this argument above?...)

> And you're pushing that choice into the EDK2 history
> so that others are exposed to the same class of bugs. :(

The facility I'm using is a documented feature of git; git will transcode
Michał's name to UTF-8 for others (or whatever they have in
logOutputEncoding).

Laszlo

On 03/19/16 01:19, David Woodhouse wrote:
> Isn't that i18n.commitencoding=latin2 config on your part somewhat
> gratuitous? The idea is that the tools are converting everything on the
> way in and out, and that *only* affects the commit objects that are
> actually encoded in the repository, right? So letting that be UTF-8 to
> match the default shouldn't even be *visible* to you, should it? Except
> that you probably wouldn't have suffered that git-send-email bug you
> mentioned?

With i18n.commitencoding=UTF-8, git expects me to write the commit
messages in UTF-8. My main editor doesn't do UTF-8.

Most of the time neither UTF-8 nor latin[12] are necessary; I write my
own commit messages only in ASCII. So that is certainly accommodated by
sticking with the default (UTF-8).

However, occasionally I have to make an excursion into latin2 (which
partially overlaps latin1 too), for people's names. This can be covered
by setting i18n.commitencoding to latin2 -- it doesn't break ASCII, and
allows me to paste most (latin script) names transparently, using my main
editor.

If I switched i18n.commitencoding back to UTF-8, then I'd have to edit
commit messages with a UTF-8 capable editor whenever ASCII didn't
suffice, even for names in simple latin script that would otherwise fit
in latin2. That's a huge chore for me. I utterly hate most wide-spread
editors that happen to have good UTF-8 support (emacs, vi*, gedit). "joe"
I don't hate, but it still doesn't have all the macros and commands that
I use all the time in my main editor, even while editing commit messages.
(Sometimes I draw ASCII diagrams in commit messages. I have shortcuts for
pasting frequent Cc:'s quickly. I have a dedicated wrapping mode for
commit messages. And so on.) People customize the heck out of their
programmable editors (mostly emacs and vim); I do the same with mine, it
just happens to lack UTF-8.

Hm.... Wait a second. I just realized there is a git hook called
"commit-msg":

    commit-msg
        This hook is invoked by git commit, and can be bypassed with
        --no-verify option. It takes a single parameter, the name of the
        file that holds the proposed commit log message. Exiting with
        non-zero status causes the git commit to abort.

        The hook is allowed to edit the message file in place, and can be
        used to normalize the message into some project standard format
        (if the project has one). It can also be used to refuse the
        commit after inspecting the message file.

        The default commit-msg hook, when enabled, detects duplicate
        "Signed-off-by" lines, and aborts the commit if one is found.

Okay, here's what I'll do. I will switch i18n.commitencoding back to
UTF-8. And, I will add a commit-msg hook that converts the commit message
in-place from latin2 to UTF-8, with "iconv". That should keep us both
happy. Deal?

Laszlo

On 03/19/16 02:15, David Woodhouse wrote:
> So we treat it as an opaque sequence of bytes on the way *in*, then
> make assumptions on the way *out* about what it was?

On the way in, it is assumed to be UTF-8, unless the user says otherwise.
If the user says otherwise (in i18n.commitencoding), that statement is
captured in the commit object. Either way, the commit message is not
converted.

On the way out, the commit message is always converted. From: what the
commit object says; default is UTF-8, or else what the user stuck in
there. To: UTF-8 by default, or what the user specified in
i18n.logOutputEncoding.

So normally there is a transparent UTF-8 --> UTF-8 conversion on output.

> Which is just one of the set of classic "oops, I dropped the label"
> bugs which rendered the legacy charsets unworkable, but this time it
> almost seems to be *deliberate*.
>
> The logic behind not re-coding is silly. Because throwing away the
> charset label on the input text and assuming it's already in
> i18n.commitencoding definitely *isn't* a reversible operation :)

Right; if git had considered my LC_CTYPE locale category setting, and had
automatically converted between it and its internal UTF-8 representation,
I would have never looked at "i18n.*". :(

Laszlo

On 03/19/16 08:52, David Woodhouse wrote:
> If you're providing a message with -m on the comment line, then sure,
> LC_CTYPE is correct. But when it comes from a file? Every file can come
> from a different place, and can be in its *own* character set. I bet
> you have *plenty* of text files on your system which aren't in latin2.

Probably. (I didn't create them though! ;))

> And if you're applying a patch with 'git-am' then the temporary file
> storing the extracted message is almost *certainly* in UTF-8 or the
> original charset from the email, not your LC_CTYPE. Right?

git-am seems to work surprisingly well in this regard, I think. It seems
to match git-format-patch.

When you format a patch whose git-internal encoding is not UTF-8 (let's
say, it's latin2), and the commit message actually digresses outside of
ASCII, then the formatted patch email will receive:

  MIME-Version: 1.0
  Content-Type: text/plain; charset=latin2
  Content-Transfer-Encoding: 8bit

This will make the message body display correctly in all MUAs.

(The only snag, as I mentioned, is that the Cc: mail header that git
constructs from the (otherwise correct) body-cc incorrectly becomes:

  =?UTF-8?q?QUOTED_PRINTABLE_LATIN2_BYTE_STRING?= <email@example.com>

I.e., the embedded encoding identifier doesn't actually match the byte
string.)

In turn, when git-am processes such a patch email, it picks up the
charset from Content-Type, and passes it to git-commit as the
commit-encoding. If I remember correctly.

> Few people fully comprehend the true horror of the "simple" system they
> try to cling to...

My primary motivation is not the "simplicity" of single-byte encodings --
I agree with the quotes here. My primary motivation is this killer app
called NEdit that I can't exist without.

Laszlo

diff --git a/OvmfPkg/OvmfPkgIa32.dsc b/OvmfPkg/OvmfPkgIa32.dsc
index 9b603f001fe5..aae1972950ee 100644
--- a/OvmfPkg/OvmfPkgIa32.dsc
+++ b/OvmfPkg/OvmfPkgIa32.dsc
@@ -395,6 +395,14 @@ [PcdsFixedAtBuild]
   gEfiMdePkgTokenSpaceGuid.PcdDebugPropertyMask|0x2F
 !endif
 
+  # This PCD is used to set the base address of the PCI express hierarchy. It
+  # is only consulted when OVMF runs on Q35. In that case it is programmed into
+  # the PCIEXBAR register.
+  #
+  # On Q35 machine types that QEMU intends to support in the long term, QEMU
+  # never lets the RAM below 4 GB exceed 2 GB.
+  gEfiMdePkgTokenSpaceGuid.PcdPciExpressBaseAddress|0x80000000
+
 !ifdef $(SOURCE_DEBUG_ENABLE)
   gEfiSourceLevelDebugPkgTokenSpaceGuid.PcdDebugLoadImageMethod|0x2
 !endif
diff --git a/OvmfPkg/OvmfPkgIa32X64.dsc b/OvmfPkg/OvmfPkgIa32X64.dsc
index 1d68eb95d69e..0422dda09fbe 100644
--- a/OvmfPkg/OvmfPkgIa32X64.dsc
+++ b/OvmfPkg/OvmfPkgIa32X64.dsc
@@ -400,6 +400,14 @@ [PcdsFixedAtBuild]
   gEfiMdePkgTokenSpaceGuid.PcdDebugPropertyMask|0x2F
 !endif
 
+  # This PCD is used to set the base address of the PCI express hierarchy. It
+  # is only consulted when OVMF runs on Q35. In that case it is programmed into
+  # the PCIEXBAR register.
+  #
+  # On Q35 machine types that QEMU intends to support in the long term, QEMU
+  # never lets the RAM below 4 GB exceed 2 GB.
+  gEfiMdePkgTokenSpaceGuid.PcdPciExpressBaseAddress|0x80000000
+
 !ifdef $(SOURCE_DEBUG_ENABLE)
   gEfiSourceLevelDebugPkgTokenSpaceGuid.PcdDebugLoadImageMethod|0x2
 !endif
diff --git a/OvmfPkg/OvmfPkgX64.dsc b/OvmfPkg/OvmfPkgX64.dsc
index b971ee8bb56b..18517e337649 100644
--- a/OvmfPkg/OvmfPkgX64.dsc
+++ b/OvmfPkg/OvmfPkgX64.dsc
@@ -400,6 +400,14 @@ [PcdsFixedAtBuild]
   gEfiMdePkgTokenSpaceGuid.PcdDebugPropertyMask|0x2F
 !endif
 
+  # This PCD is used to set the base address of the PCI express hierarchy. It
+  # is only consulted when OVMF runs on Q35. In that case it is programmed into
+  # the PCIEXBAR register.
+  #
+  # On Q35 machine types that QEMU intends to support in the long term, QEMU
+  # never lets the RAM below 4 GB exceed 2 GB.
+  gEfiMdePkgTokenSpaceGuid.PcdPciExpressBaseAddress|0x80000000
+
 !ifdef $(SOURCE_DEBUG_ENABLE)
   gEfiSourceLevelDebugPkgTokenSpaceGuid.PcdDebugLoadImageMethod|0x2
 !endif
diff --git a/OvmfPkg/PlatformPei/PlatformPei.inf b/OvmfPkg/PlatformPei/PlatformPei.inf
index 8480839efc8e..6dc5ff079f53 100644
--- a/OvmfPkg/PlatformPei/PlatformPei.inf
+++ b/OvmfPkg/PlatformPei/PlatformPei.inf
@@ -94,6 +94,9 @@ [Pcd]
   gEfiMdeModulePkgTokenSpaceGuid.PcdPropertiesTableEnable
   gUefiCpuPkgTokenSpaceGuid.PcdCpuLocalApicBaseAddress
 
+[FixedPcd]
+  gEfiMdePkgTokenSpaceGuid.PcdPciExpressBaseAddress
+
 [FeaturePcd]
   gUefiOvmfPkgTokenSpaceGuid.PcdSmmSmramRequire
 
diff --git a/OvmfPkg/PlatformPei/Platform.c b/OvmfPkg/PlatformPei/Platform.c
index 8e4da41001e1..0fc227803a84 100644
--- a/OvmfPkg/PlatformPei/Platform.c
+++ b/OvmfPkg/PlatformPei/Platform.c
@@ -212,17 +212,20 @@ MemMapInitialization (
 
   if (!mXen) {
     UINT32 TopOfLowRam;
+    UINT64 PciExBarBase;
     UINT32 PciBase;
     UINT32 PciSize;
 
     TopOfLowRam = GetSystemMemorySizeBelow4gb ();
     if (mHostBridgeDevId == INTEL_Q35_MCH_DEVICE_ID) {
       //
-      // On Q35 machine types that QEMU intends to support in the long term,
-      // QEMU never lets the RAM below 4 GB exceed 2 GB.
+      // The MMCONFIG area is expected to fall between the top of low RAM and
+      // the base of the 32-bit PCI host aperture.
       //
-      PciBase = BASE_2GB;
-      ASSERT (TopOfLowRam <= PciBase);
+      PciExBarBase = FixedPcdGet64 (PcdPciExpressBaseAddress);
+      ASSERT (TopOfLowRam <= PciExBarBase);
+      ASSERT (PciExBarBase <= MAX_UINT32 - SIZE_256MB);
+      PciBase = (UINT32)(PciExBarBase + SIZE_256MB);
     } else {
       PciBase = (TopOfLowRam < BASE_2GB) ? BASE_2GB : TopOfLowRam;
     }
@@ -248,6 +251,30 @@
     AddIoMemoryBaseSizeHob (0xFED00000, SIZE_1KB);
     if (mHostBridgeDevId == INTEL_Q35_MCH_DEVICE_ID) {
       AddIoMemoryBaseSizeHob (ICH9_ROOT_COMPLEX_BASE, SIZE_16KB);
+      //
+      // Note: there should be an
+      //
+      //   AddIoMemoryBaseSizeHob (PciExBarBase, SIZE_256MB);
+      //
+      // call below, just like the one above for RCBA. However, Linux insists
+      // that the MMCONFIG area be marked in the E820 or UEFI memory map as
+      // "reserved memory" -- Linux does not content itself with a simple gap
+      // in the memory map wherever the MCFG ACPI table points to.
+      //
+      // This appears to be a safety measure. The PCI Firmware Specification
+      // (rev 3.1) says in 4.1.2. "MCFG Table Description": "The resources can
+      // *optionally* be returned in [...] EFIGetMemoryMap as reserved memory
+      // [...]". (Emphasis added here.)
+      //
+      // Normally we add memory resource descriptor HOBs in
+      // QemuInitializeRam(), and pre-allocate from those with memory
+      // allocation HOBs in InitializeRamRegions(). However, the MMCONFIG area
+      // is most definitely not RAM; so, as an exception, cover it with
+      // uncacheable reserved memory right here.
+      //
+      AddReservedMemoryBaseSizeHob (PciExBarBase, SIZE_256MB, FALSE);
+      BuildMemoryAllocationHob (PciExBarBase, SIZE_256MB,
+        EfiReservedMemoryType);
     }
     AddIoMemoryBaseSizeHob (PcdGet32(PcdCpuLocalApicBaseAddress), SIZE_1MB);
   }
@@ -317,6 +344,47 @@ NoexecDxeInitialization (
 }
 
 VOID
+PciExBarInitialization (
+  VOID
+  )
+{
+  union {
+    UINT64 Uint64;
+    UINT32 Uint32[2];
+  } PciExBarBase;
+
+  //
+  // We only support the 256MB size for the MMCONFIG area:
+  // 256 buses * 32 devices * 8 functions * 4096 bytes config space.
+  //
+  // The masks used below enforce the Q35 requirements that the MMCONFIG area
+  // be (a) correctly aligned -- here at 256 MB --, (b) located under 64 GB.
+  //
+  // Note that (b) also ensures that the minimum address width we have
+  // determined in AddressWidthInitialization(), i.e., 36 bits, will suffice
+  // for DXE's page tables to cover the MMCONFIG area.
+  //
+  PciExBarBase.Uint64 = FixedPcdGet64 (PcdPciExpressBaseAddress);
+  ASSERT ((PciExBarBase.Uint32[1] & MCH_PCIEXBAR_HIGHMASK) == 0);
+  ASSERT ((PciExBarBase.Uint32[0] & MCH_PCIEXBAR_LOWMASK) == 0);
+
+  //
+  // Clear the PCIEXBAREN bit first, before programming the high register.
+  //
+  PciWrite32 (DRAMC_REGISTER_Q35 (MCH_PCIEXBAR_LOW), 0);
+
+  //
+  // Program the high register. Then program the low register, setting the
+  // MMCONFIG area size and enabling decoding at once.
+  //
+  PciWrite32 (DRAMC_REGISTER_Q35 (MCH_PCIEXBAR_HIGH), PciExBarBase.Uint32[1]);
+  PciWrite32 (
+    DRAMC_REGISTER_Q35 (MCH_PCIEXBAR_LOW),
+    PciExBarBase.Uint32[0] | MCH_PCIEXBAR_BUS_FF | MCH_PCIEXBAR_EN
+    );
+}
+
+VOID
 MiscInitialization (
   VOID
   )
@@ -393,6 +461,11 @@ MiscInitialization (
       POWER_MGMT_REGISTER_Q35 (ICH9_RCBA),
       ICH9_ROOT_COMPLEX_BASE | ICH9_RCBA_EN
       );
+
+    //
+    // Set PCI Express Register Range Base Address
+    //
+    PciExBarInitialization ();
   }
 }
 