UEFI News and Commentary
Thursday, June 19, 2014
Update to Setting Up the EDK2's Windows-Hosted UEFI Environment With Visual Studio 2012
Some of EDK2's build components and file names have been updated. The article HOW-TO: Set Up the EDK2's Windows-Hosted UEFI Environment With Visual Studio 2012 has been revised to reflect this.
Tuesday, June 10, 2014
UEFI and SoCs and Chipsets and Firmware Complexity
When I switch back and forth between Insyde's x86 and ARM partners, I have to make a mental vocabulary switch. In the x86 world we talk about "chipsets," but on the ARM side it's almost universally "SoCs." Part of this is historical: the NEAT chipset was, in fact, a set of chips that sat alongside the Intel CPU. More recently, there has been a traditional division of labor between the north bridge (memory controller) and the south bridge (I/O controller). This split allowed CPU, memory, and I/O technologies to progress at different speeds and allowed different pairings to meet different market requirements. But in the ARM world, the term SoC (system-on-a-chip) means a single-chip packaging of a CPU and all of the attendant hardware bits that are needed. New technology? Just spin a new SoC. The x86 world has responded with single-chip solutions, but the term "chipset" still dominates.
In many ways, this difference in philosophy is reflected in firmware architectures. UEFI is designed so that many hardware pieces, likely from different vendors, can be combined to boot an operating system. The fundamental unit of UEFI is a driver (or a module, if you include PEI and you're a grognard for terminology). You add drivers for specific chips and their attendant technologies. You remove drivers that you don't need. Seems logical, right?
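To make "driver as the fundamental unit" concrete, here is a minimal sketch of an EDK2-style DXE driver entry point. The driver name is made up for illustration; a real driver for hardware would typically also implement the UEFI Driver Model's Supported/Start/Stop binding functions.

    /* Minimal EDK2-style DXE driver sketch. "MyIpBlockDxe" is a
       hypothetical name; the point is that support for one piece of
       hardware is one self-contained, addable/removable module. */
    #include <Uefi.h>
    #include <Library/DebugLib.h>

    EFI_STATUS
    EFIAPI
    MyIpBlockDxeEntry (
      IN EFI_HANDLE        ImageHandle,
      IN EFI_SYSTEM_TABLE  *SystemTable
      )
    {
      /* Initialize the hardware, publish protocols, or register
         callbacks here; returning EFI_SUCCESS keeps the driver
         resident. */
      DEBUG ((DEBUG_INFO, "MyIpBlockDxe: loaded\n"));
      return EFI_SUCCESS;
    }

Adding or removing support for a device then amounts to adding or removing the driver's INF file from the platform build.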
But as gate density has increased, the number of technologies that can be stuffed into a single chip has increased with it, and so has the number of UEFI drivers. The difference can be startling: the number of modules (libraries, drivers, and other modules) required to support one chipset can be ten times the number required to support another.
Is this reasonable? If I have one chip on the motherboard, it seems reasonable that I should add one thing (driver, module, package, whatever) to the build. That would be my ideal world.
Indeed, there are initiatives moving in that direction. My friend, Intel guru Vincent Zimmer, recently wrote about Intel's FSP (Firmware Support Package) on his blog. The FSP attempts to hide some of the complexity by packaging up most of the chip-support firmware into a single binary blob with three entry points. Dig a little deeper and you find that this "blob" is really a specially formatted UEFI firmware volume containing PEI drivers (aka PEIMs). But, along with binary editing of board-specific options, it provides a good starting point for answering objections about silicon-related firmware complexity. It still struggles with all of the traditional problems of binary deliverables, such as debugging and hot fixes. And it doesn't solve everything for industry standards like ACPI and SMBIOS, or even UEFI's own HII, not to mention OS-specific add-ons (like those for Windows 8.1). It tries to maintain the flexibility of UEFI while simplifying the silicon vendor's side of the equation. A good start.
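To sketch what "three entry points" means in practice: the FSP binary begins with an information header whose offsets locate TempRamInit, FspInit, and NotifyPhase. The structure below is a simplified paraphrase of the FSP 1.0 header, not a verbatim copy; field names and layout are approximations, so check Intel's documentation before relying on them.

    /* Simplified paraphrase of the FSP 1.0 information header; shown
       only to illustrate how a boot loader finds the three entry
       points and the binary-editable configuration region. */
    #include <stdint.h>

    typedef struct {
      uint32_t Signature;              /* 'FSPH' */
      uint32_t HeaderLength;
      uint8_t  Reserved1[3];
      uint8_t  HeaderRevision;
      uint32_t ImageRevision;
      char     ImageId[8];
      uint32_t ImageSize;
      uint32_t ImageBase;
      uint32_t ImageAttribute;
      uint32_t CfgRegionOffset;        /* board options, editable in the binary */
      uint32_t CfgRegionSize;
      uint32_t ApiEntryNum;
      uint32_t TempRamInitEntryOffset; /* entry 1: temporary RAM setup */
      uint32_t FspInitEntryOffset;     /* entry 2: memory and silicon init */
      uint32_t NotifyPhaseEntryOffset; /* entry 3: phase notifications */
    } FSP_INFO_HEADER;

    /* A boot loader turns an offset into a callable address like so
       (FSP calling-convention details omitted for brevity): */
    static inline uintptr_t
    FspEntryAddress (const FSP_INFO_HEADER *Hdr, uint32_t Offset)
    {
      return (uintptr_t) Hdr->ImageBase + Offset;
    }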
Insyde (my company) has been pursuing this at the source-code level, improving how entire chip "packages" (an EDK2 term) come together to create the final firmware. Our goal: one command brings in the entire support for a chip. Sure, tweak it from there. Sure, highlight the couple of places where engineer input is required at build time. But don't make the engineer go hunting through a read-me. Oh, and do it the same way for every chipset/SoC, because when SoCs change often, your mind spins with which/where/what in a codebase.
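As an illustration of that "one command" goal, a platform description file might pull in a chip's entire support with a single include, with the few required decisions surfaced as explicit overrides. The package and PCD names below are hypothetical, not actual Insyde or EDK2 names:

    # Hypothetical platform .dsc fragment: one line brings in the whole
    # chip package (libraries, PEIMs, DXE drivers, default PCDs).
    !include SampleSocPkg/SampleSocPkg.dsc.inc

    # The places where engineer input really is required at build time
    # appear as explicit overrides instead of buried read-me steps.
    [PcdsFixedAtBuild]
      gSampleSocTokenSpaceGuid.PcdSampleMemoryConfig|0x01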
One of the hidden advantages of the UEFI driver model is that it works for SoCs as well. In order to keep up with new technologies, silicon vendors keep spinning new versions of their SoCs, with new sets of peripherals, new versions of memory controllers, and upgraded CPU cores. Many of these SoCs share exactly the same IP blocks inside, with only a few tweaks. From a firmware perspective, I'd like to grab the same piece of code and use it to support every SoC in which the IP block is included. Sounds like a driver model to me, based no longer on chips on a motherboard but on IP blocks within a chip.
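A sketch of what "a driver per IP block" might look like: the same source compiles unchanged for every SoC that integrates the block, and the only per-SoC knowledge (here, the block's MMIO base address) is fed in by the platform as a PCD. The PCD and register names are hypothetical:

    /* Hypothetical driver for one IP block, reused across SoCs. The
       per-SoC difference is confined to a platform-supplied PCD. */
    #include <Uefi.h>
    #include <Library/IoLib.h>
    #include <Library/PcdLib.h>

    #define IP_BLOCK_CTRL_REG   0x00    /* hypothetical register offset */
    #define IP_BLOCK_CTRL_EN    0x01    /* hypothetical enable bit */

    EFI_STATUS
    EFIAPI
    IpBlockDxeEntry (
      IN EFI_HANDLE        ImageHandle,
      IN EFI_SYSTEM_TABLE  *SystemTable
      )
    {
      /* Each SoC's platform .dsc sets the hypothetical
         PcdIpBlockBaseAddress to wherever its designers wired the
         block; the driver source never changes. */
      UINTN  Base = (UINTN) PcdGet64 (PcdIpBlockBaseAddress);

      MmioOr32 (Base + IP_BLOCK_CTRL_REG, IP_BLOCK_CTRL_EN);
      return EFI_SUCCESS;
    }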
Longer term, that means some firmware complexity creep is inevitable as SoCs increase in complexity. But it also means that firmware systems must improve to keep up with the increased rate of SoC/chipset change and to reduce the effort required to configure and customize those platforms. Inevitably, the BIOS guys get blamed for every delay: the motherboard is ready, the chip is ready, so why isn't the BIOS ready? Simulation (another topic) is one way to get ahead. Runtime debug/log infrastructures help catch what you missed. But well-designed build systems and firmware delivery models simplify the problem up front.