UEFI

UEFI News and Commentary

Thursday, June 19, 2014

Update to Setting Up the EDK2's Windows-Hosted UEFI Environment With Visual Studio 2012

Some of the build components and file names in EDK2 have been updated. The article HOW-TO: Set Up the EDK2's Windows-Hosted UEFI Environment With Visual Studio 2012 has been updated to accommodate this.

Click here to view the article.

Tuesday, June 10, 2014

UEFI and SoCs and Chipsets and Firmware Complexity

When I switch back and forth between Insyde's x86 and ARM partners, I have to do a mental vocabulary switch. In the x86 world, we talk about "chipsets", but on the ARM side it's almost universally "SoCs". Part of this is historical: the NEAT chipset was, in fact, a set of 3 chips that sat alongside the Intel CPU. More recently, there has been a traditional division of labor between the north bridge (memory controller) and south bridge (I/O controller). This split allowed CPU, memory and I/O technologies to progress at different speeds and allowed for different pairings to meet different market requirements. But in the ARM world, the term SoC (system-on-a-chip) denotes a single-chip packaging of a CPU and all of the attendant hardware bits that are needed. New technology? Just spin a new SoC. The x86 world has responded with single-chip solutions, but the term "chipset" still dominates.

In many ways, this difference in philosophy is reflected in firmware architectures. UEFI is designed so that many hardware pieces, likely from different vendors, can be combined to boot an operating system. The fundamental unit of UEFI is a driver (or a module, if you include PEI and you're a grognard for terminology). You add drivers for specific chips and attendant technologies. You remove drivers that you don't need. Seems logical, right?

But as gate density has increased, the number of technologies that can be stuffed into a single chip has also increased, and with it the number of UEFI drivers. The difference can be startling: the number of modules (libraries, drivers and other modules) required to support one chipset can be 10x more than the number required to support another.

Is this reasonable? If I have one chip on the motherboard, it seems reasonable that I should add 1 thing (driver, module, package, whatever) to the build. That would be my ideal world.

Indeed, there are initiatives headed in that direction. My friend, Intel guru Vincent Zimmer, recently wrote about Intel's FSP (Firmware Support Package) on his blog. The FSP attempts to hide some of the complexity by packaging up most of the chip-support firmware into a single binary blob with three entry points. Dig a little deeper and you find that this "blob" is really a specially formatted UEFI firmware volume with PEI drivers (aka PEIMs). But, along with binary editing of board-specific options, it provides a good starting point for answering the objections about silicon-related firmware complexity. It still struggles with all of the traditional problems of binary deliverables, such as debugging and hot fixes. And it doesn't solve everything for industry standards like ACPI and SMBIOS, or even UEFI's own HII. Not to mention OS-specific add-ons (like those for Windows 8.1). It tries to maintain the flexibility of UEFI while simplifying the silicon vendor side of the equation. Good start.

Insyde (my company) has been pursuing this at the source code level, improving how entire chip "packages" (an EDK2 term) come together to create the final firmware. Our goal: 1 command brings in the entire support for a chip. Sure, tweak it from there. Sure, highlight the couple of places where engineer input is required at build. But don't make them hunt through a read-me. Oh, and do it the same way for every chipset/SoC, because when SoCs change often, your mind spins with which/where/what in a codebase.

One of the hidden advantages of the UEFI driver model is that it works well for SoCs, too. In order to keep up with new technologies, silicon vendors keep spinning new versions of their SoCs with new sets of peripherals, new versions of memory controllers and upgraded CPU cores. Many of these SoCs share the exact same IP blocks inside, with only a few tweaks. From a firmware perspective, I'd like to grab the same piece of code and use it to support all of the SoCs in which the IP block is included. Sounds like a driver model to me, based no longer on chips on a motherboard but on IP blocks in a chip.

Longer term, that means that some firmware complexity creep is inevitable as SoCs increase in complexity. But it also means that firmware systems must improve to support the increased rate of SoC/chipset change and reduce the effort required to configure and customize those platforms. Inevitably, the BIOS guys get blamed for every delay: the motherboard is ready, the chip is ready, so why isn't the BIOS ready? Simulation (another topic) is one way. Runtime debug/log infrastructures help catch what you missed. But well-designed build systems and firmware delivery models simplify the problem up front.

Friday, April 04, 2014

The Tale of Three Conferences

This week has been a blizzard of news and announcements from three conferences that I care about. First, there was EELive! in San Jose, CA, which focused on embedded systems (or, as they now like to call it, the IoT). Second, there was Intel's Developer Forum in Shenzhen, China, where my company (Insyde Software) was exhibiting and speaking. Finally, there was Microsoft's Build 2014, which I watched with avid interest via live streaming and the flow of press releases. UEFI was there, to be promoted or vilified or both, at all three.

At first glance, at EELive!, you would think that no one is paying attention to UEFI. Part of this was because Intel was busy promoting FSP, touting how it could be plugged into any boot loader, including coreboot. But the Galileo board that they were showing comes with a UEFI solution. And, under the hood, FSP is really a set of firmware drivers extracted from Intel's UEFI-based reference code, packaged in UEFI-standard firmware volume format, with a little director binary inserted to allow direct calls into the driver entry points. The other reason is that 32-bit and smaller processors still dominate the IoT space, and many of those are ARM designs. 32-bit ARM platform designs have traditionally used other boot loaders, but with 64-bit, ARM itself is heavily pushing UEFI as a standard boot architecture. Many discussions around UEFI have to do with complexity. And there is something to these discussions, since the very power and flexibility of UEFI has led to implementations (like that on tianocore.org) which are broken into hundreds of pieces, where assembling the right ones requires the right recipes. Most embedded vendors don't need their firmware distribution to be as complicated as their Linux distribution (see yoctoproject.org).

Then there's IDF. Of course, there was the Insyde poster chat, "Implementing Dual OS Solutions with UEFI Firmware" (how to switch between two active OS sessions with just firmware support). Intel delivered their obligatory Quark and FSP remarks. But they also put out two additional UEFI-related notes. The first appeared in the unlikely session titled "Delivering Compelling User Experiences on Intel® Platforms: Audio, Voice, Speech and Fingerprint Sensors and Biometric Authentication". In the very back of this presentation, they talked about security issues pertinent to BIOS, including replay-attack prevention related to Real-Time Clock battery removal and secure firmware updates using the UEFI capsule update method described in the UEFI 2.4 specification. It really seemed out of place, but hey... They also recommended their new CHIPSEC tool, which performs security checks on chipset and firmware settings. It is available on GitHub.

And then the Intel Android team showed their Android build tool, which would create a BSP for your platform and, hey, also customize your firmware at the same time. It leverages the Unified Binary Management Suite (UBMS), which you can see here at about the 21-minute mark. This shows the increasing co-design process required for configuring your firmware and your OS installation. Many times, the firmware and the OS need to know the same types of information about the platform: which drivers to include, GPIO routing, etc. Especially on OSes that don't use ACPI and don't rely on the firmware passing them anything (like Android).

Finally, there was Microsoft's Build event. We got a definitive date for Update 1 (with no major new BIOS requirements! Sigh of relief), Microsoft's plan to offer Windows for $0 on certain platforms, and Windows booting on Quark(!). And a lot of advice about how to integrate off-SoC sensors and how to write apps that span Windows and Windows Phone.

Hard to breathe. Need air. Next week I'll have a chance to reflect further on what some of these mean. How can we take advantage of vertical integration? How can we reassure folks of security in a world where firmware is increasingly decentralized and under attack? (Did I mention EELive! had a Black Hat track???) More later....

Tuesday, April 01, 2014

Something About the Game I Made with a Thermometer In It

After making the simple app that uses composite images and transparency to change the appearance of an image based on user input (as described in this article), I decided to expand it a little into the beginnings of a simple game.  As it is now, it is simply a shell for a game, in which you can move a sprite around and change modes.

In this article, I will describe the thought process behind creating this as well as the code I wrote.  This article assumes you have read the referenced article above on composite images and transparency.

Source code for this project can be found here.

I knew I wanted this "game" to include the thermometer I had created, and have something affected by the changing temperature.  I decided I could have a little character run around the screen and his color would change with the temperature.  Because I was using the same thermometer as I did in my last project, I was able to use the same code.  The only change I made was a simple image re-size to make it a little smaller.

The next question was how I wanted the playable area to be laid out.  Would it be laid out in a grid, where each object occupied a square, or would it be free movement?  Since collision detection in a free movement environment is much trickier than a grid, I decided to lay it out in a grid.  This was a fairly easy thing to implement.

First, I created a struct to represent a square in the grid, which I called "Box."  It has its X and Y coordinates, an array containing pointers to the adjacent Boxes in each of the four cardinal directions, and a boolean value indicating whether or not the box is occupied by an object.


The next step was to initialize the grid, which is simply an array of boxes.  So I started by clearing all the memory and initializing the first box's coordinates to (0, 0).
Now from here, I could have hard-coded all the values for every single box, but if I had decided to change the size of the grid, it would have been a big pain to rewrite, so I wrote a loop that initializes the coordinates and the array of adjacent boxes for each box.  For each of the boxes, I first set its Occupied value to false (since each box is initially empty).  I then look in the East (right) direction to see if the box is on the right edge of the grid.  If it's not, the East adjacent box is set to the next box in the array, and if it is, East is set to NULL.  Next, I check the North (up) direction to see if it is on the top edge of the grid.  Like the East check, I initialize the North box if it's not on the top edge, and set it to NULL if it is.

So along with initializing the array of adjacent boxes, we also need to initialize the coordinates, which we can do in the checks for the West edge.  We do as we have done with North and East, and initialize the West value.  After that, we can begin initializing coordinates.  If the box is not on the West (left) edge of the grid, then it can base its X coordinate on the box to its left, simply adding the width of an image (which is the same as the width of a box).  Its Y coordinate is simply the same as the box to its left.  If it is a left-edge box, then its X coordinate is 0, and the Y coordinate can be computed by adding the box height to the Y coordinate of the box above it.  Finally, we do the South check, just like the others, and finish initializing the boxes.
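Put together, the Box struct and the initialization loop described above might look something like the sketch below. This is plain C rather than EDK2 code, and names like GRID_COLS, BOX_WIDTH and InitGrid are placeholders I made up for illustration; the real app uses EDK2 types such as BOOLEAN.

```c
#include <stddef.h>

#define GRID_COLS  8    /* hypothetical grid dimensions           */
#define GRID_ROWS  6
#define BOX_WIDTH  32   /* assumed sprite (= box) size in pixels  */
#define BOX_HEIGHT 32

typedef enum { EAST, NORTH, WEST, SOUTH } DIRECTION;

typedef struct _BOX {
  int          X;
  int          Y;
  struct _BOX *Adjacent[4];  /* neighbor in each cardinal direction, NULL at an edge */
  int          Occupied;     /* BOOLEAN in EDK2 terms */
} BOX;

void InitGrid(BOX *Grid)
{
  for (int i = 0; i < GRID_ROWS * GRID_COLS; i++) {
    BOX *b   = &Grid[i];
    int  Col = i % GRID_COLS;
    int  Row = i / GRID_COLS;

    b->Occupied = 0;  /* every box starts empty */

    /* Edge checks: a neighbor is NULL when the box sits on that edge. */
    b->Adjacent[EAST]  = (Col < GRID_COLS - 1) ? &Grid[i + 1]         : NULL;
    b->Adjacent[NORTH] = (Row > 0)             ? &Grid[i - GRID_COLS] : NULL;
    b->Adjacent[WEST]  = (Col > 0)             ? &Grid[i - 1]         : NULL;
    b->Adjacent[SOUTH] = (Row < GRID_ROWS - 1) ? &Grid[i + GRID_COLS] : NULL;

    if (b->Adjacent[WEST] != NULL) {
      /* Not on the left edge: X is the left neighbor's X plus one box width,
         Y is the same as the left neighbor's. */
      b->X = b->Adjacent[WEST]->X + BOX_WIDTH;
      b->Y = b->Adjacent[WEST]->Y;
    } else {
      /* Left edge: X is 0; Y adds one box height to the box above (0 on the top row). */
      b->X = 0;
      b->Y = (b->Adjacent[NORTH] != NULL) ? b->Adjacent[NORTH]->Y + BOX_HEIGHT : 0;
    }
  }
}
```

Because the boxes are visited in array order, the West and North neighbors of any box are always initialized before the box itself needs their coordinates.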

Now I needed to deal with the images.  In this application, I had six images: the background image, the character sprite, an object sprite, and the three thermometer images.  I did the same thing as described in the articles referenced above, except with more images.  All the image setup was done in the same function, and the image buffers stored in global variables.  Details about the image setup can be seen in the previous articles or the source code (link provided above).  In the source code, I have included a miniature .FDF file that includes the additional lines I added which allow the appropriate images to be included.  In order to use this, copy the text from my .FDF file and paste it into the FILE statements section of Nt32Pkg.FDF.

The only thing I did differently with regards to image display was in ConvertBmpToGopBlt().  One of the conditions for images was unnecessarily strict, and would sometimes cause the function to falsely report an invalid bitmap image, so I removed it.  The line used to read:
if ((BmpHeader->Size != BmpImageSize) ||
    (BmpHeader->Size < BmpHeader->ImageOffset) ||
    (BmpHeader->Size - BmpHeader->ImageOffset != BmpHeader->PixelHeight * DataSizePerLine)) {
  return EFI_INVALID_PARAMETER;
}
I removed the third condition so it now reads:
if ((BmpHeader->Size != BmpImageSize) || (BmpHeader->Size < BmpHeader->ImageOffset)) {
  return EFI_INVALID_PARAMETER;
}
which lets those valid bitmaps through.

Before we move on to the components of the actual game, we need to remember that in this game, we will be moving a character around as well as changing the temperature on the thermometer.  We could use different keys to change the temperature and move around, but I decided it would be best if the character were standing still while the temperature changed, so I made it so the user can change which "mode" they are in: movement or temperature change.  The mode is implemented using an enum, and stored in a global variable.

Up next is the actual character itself that the user can move around.  It contains a pointer to the buffer containing the original image, a pointer to the altered image (since we are going to change the image's color over the course of the game), its coordinates, and the box it is currently occupying.  We make the character a global variable, since there is only one.

Initializing the character struct is simple and can be done as soon as the images and grid are set up.  We first set the mode to be "MOVE" since we would like that to be the initial setting.  Image is set to point to the buffer referenced by the global variable.  The image is then copied into a new buffer, and ChangedImage points to that new buffer.  This way, ChangedImage can be altered without affecting the base original image.  Then, the coordinates and box are set to be the upper left-hand corner (just an arbitrary position).
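A rough standalone sketch of the Character struct and its setup is below. Plain stdint types stand in for the EDK2 pixel buffers, and InitCharacter, BaseImage and PixelCount are hypothetical names of my own, not the real code's.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Minimal stand-in for the grid square described earlier. */
typedef struct {
  int X, Y;
  int Occupied;
} BOX;

typedef struct {
  uint32_t *Image;         /* the shared original image buffer             */
  uint32_t *ChangedImage;  /* private copy that the recoloring code alters */
  int       X, Y;          /* current pixel coordinates                    */
  BOX      *Box;           /* grid square the character currently occupies */
} CHARACTER;

/* Copy the base image so recoloring never touches the original pixels. */
int InitCharacter(CHARACTER *c, uint32_t *BaseImage, size_t PixelCount, BOX *Start)
{
  c->Image        = BaseImage;
  c->ChangedImage = malloc(PixelCount * sizeof(uint32_t));
  if (c->ChangedImage == NULL) {
    return -1;
  }
  memcpy(c->ChangedImage, BaseImage, PixelCount * sizeof(uint32_t));
  c->X   = Start->X;
  c->Y   = Start->Y;
  c->Box = Start;
  Start->Occupied = 1;
  return 0;
}
```

The key design point is the memcpy: ChangedImage is a disposable working copy, so any color transformation can always restart from the pristine pixels in Image.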
After this, I decided to have some sort of additional object on the map along with the character.  So I created a basic Object struct.  It has an Image, ChangedImage, coordinates, and the box it occupies.  This is essentially the same as the Character struct, and in the future, I could go back and combine the two into one.  There is only one non-character object in this application, but multiple objects could simply be stored in a global array.

Initializing and setting up objects is similar to that of initializing the character.  In this program, I only included one object, but it would be simple enough to expand the function to deal with multiple objects.  Like the character image initialization, we set Image to be the globally accessible buffer, and ChangedImage to be a copy of the Image buffer.  The coordinates and grid position are set, and then we set the assigned box to be occupied.

Now we turn our attentions towards player actions.  The first and most basic action a player can take is to move.  Since we used a grid, the movement is easy to do.  Basically all it does is check the box adjacent to where the character currently is, and see if it is null or occupied.  If not, it changes the character's box and coordinates to that of its new location.
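In plain C, that move check might reduce to something like this. MoveCharacter is a hypothetical name, and the Box and Character types here are just the minimal versions the fragment needs to stand alone.

```c
#include <stddef.h>

typedef enum { EAST, NORTH, WEST, SOUTH } DIRECTION;

typedef struct _BOX {
  int          X, Y;
  struct _BOX *Adjacent[4];   /* NULL when the box is on that edge of the grid */
  int          Occupied;
} BOX;

typedef struct {
  int  X, Y;
  BOX *Box;
} CHARACTER;

/* Move one square in direction d; returns 1 on success, 0 if blocked or off-grid. */
int MoveCharacter(CHARACTER *c, DIRECTION d)
{
  BOX *Dest = c->Box->Adjacent[d];
  if (Dest == NULL || Dest->Occupied) {
    return 0;                 /* edge of the grid, or square already taken */
  }
  c->Box->Occupied = 0;       /* vacate the old square                    */
  Dest->Occupied   = 1;       /* claim the new one                        */
  c->Box = Dest;
  c->X   = Dest->X;
  c->Y   = Dest->Y;
  return 1;
}
```

Notice that the grid does all the collision work: a move is legal exactly when the destination pointer is non-NULL and unoccupied, which is what made the grid layout so much easier than free movement.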

The next is the color change.  Now, this method does nothing incredibly special.  It simply multiplies the pixel colors by the percentage height difference of the thermometer.  It makes sure the colors never go above their maximum value, and are never negative.  The nice thing about this is that it will not alter black pixels (which, in my implementation, are interpreted as transparent), so pixels intended to be transparent will remain that way, whatever color changes might occur.  Notice that the transformations are based on the original image's colors.  This way, the colors at a given temperature are always the same.  Another possible implementation would have the blue component increase and the red decrease as the temperature decreases, and vice versa as the temperature increases.
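A sketch of that transformation is below. RecolorImage is a made-up name, but the PIXEL byte order matches a GOP blt buffer (blue, green, red, reserved).

```c
#include <stddef.h>
#include <stdint.h>

/* Byte order matches EFI_GRAPHICS_OUTPUT_BLT_PIXEL: blue, green, red, reserved. */
typedef struct {
  uint8_t Blue, Green, Red, Reserved;
} PIXEL;

/* Scale each channel of the original image by Percent (0-100) into Changed.
   Always working from the original pixels keeps a given temperature mapped to
   the same color, and black (transparent) pixels stay black. */
void RecolorImage(const PIXEL *Original, PIXEL *Changed, size_t PixelCount, unsigned Percent)
{
  if (Percent > 100) {
    Percent = 100;            /* clamp so channels never exceed their maximum */
  }
  for (size_t i = 0; i < PixelCount; i++) {
    Changed[i].Blue     = (uint8_t)(Original[i].Blue  * Percent / 100);
    Changed[i].Green    = (uint8_t)(Original[i].Green * Percent / 100);
    Changed[i].Red      = (uint8_t)(Original[i].Red   * Percent / 100);
    Changed[i].Reserved = Original[i].Reserved;
  }
}
```

Since 0 times anything is 0, black pixels fall out of the math unchanged, which is exactly the transparency-preserving property described above.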

Finally we get to the actual game loop itself.  It is just a do-while loop that runs until an end condition is reached (in this case, until the END key on the keyboard is pressed).  All it does is sit around and wait for the user to press a key.  If it's one of the arrow keys and the game is in MOVE mode, it moves the character.  If it's in TEMP mode and the key is an up or down arrow, it changes the temperature and the player's color.  In this implementation, the mode is changed by pressing the PgUp key.  After that, it displays the images in their new positions.  Once the end condition has been met, the memory is freed, the screen is cleared, and we return success!  In the images below, the image on the left shows the key scanning, and the right image shows the image display and cleanup.
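Stripped of the EDK2 keyboard protocol, the per-key decision logic might reduce to a dispatcher like this. HandleKey and both enums are placeholders of mine standing in for the real scan-code handling.

```c
typedef enum { MODE_MOVE, MODE_TEMP } GAME_MODE;
typedef enum { KEY_UP, KEY_DOWN, KEY_LEFT, KEY_RIGHT, KEY_PGUP, KEY_END, KEY_OTHER } KEY;

/* One pass of the do-while body: returns 0 when the END key says to quit. */
int HandleKey(KEY k, GAME_MODE *Mode, int *Temperature, int *MovesMade)
{
  if (k == KEY_END) {
    return 0;                               /* end condition reached */
  }
  if (k == KEY_PGUP) {
    *Mode = (*Mode == MODE_MOVE) ? MODE_TEMP : MODE_MOVE;
  } else if (*Mode == MODE_MOVE &&
             (k == KEY_UP || k == KEY_DOWN || k == KEY_LEFT || k == KEY_RIGHT)) {
    (*MovesMade)++;                         /* the real code calls the move routine */
  } else if (*Mode == MODE_TEMP && (k == KEY_UP || k == KEY_DOWN)) {
    *Temperature += (k == KEY_UP) ? 1 : -1; /* and then recolors the sprite */
  }
  return 1;                                 /* redraw and keep looping */
}
```

The real loop wraps this in a do-while that waits for a keystroke, calls the handler, and redraws; when the handler returns 0, it frees the buffers, clears the screen and returns success.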
   


And that's all she wrote!  

Friday, January 24, 2014

UEFI: A Retrospective

This nifty article by my friend and co-author Vincent Zimmer takes a look at the UEFI specification from his perspective as one of its early proponents within Intel, as an author (Beyond BIOS), and more recently as a steward of the specification and innovator post-UEFI.

From my perspective as a BIOS architect, the process was never a smooth one. The rise of the industry specification bodies is something we now take for granted. If you look at older PC specifications, like ACPI and APM and BBS, they were cooperatively developed by a small cadre of companies, like Microsoft, Toshiba, Phoenix and Compaq. As EFI was coming along from Intel, the BIOS vendors were also developing and promoting standards of their own. Never heard of PowerBIOS from Award? Or how about Manticore or CSI from Phoenix? With the industry standards group approach, any one of these could have become what UEFI is now. Maybe. If the originating company was willing to loosen its grip. Accepting that key point is what allowed Intel and partners like HP to gain critical agreement within the PC ecosystem.

Sometimes relinquishing control gets you what you want, but the process is more chaotic. Certainly true for UEFI.


Thursday, October 31, 2013

Linux and UEFI: Linaro and LinuxCon

This week I've spent hanging around Linaro Connect 2013 USA, which is dedicated to bringing all of the open-source (and Linux) goodness to the ARM platform. For the 64-bit ARM architecture (AArch64), ARM has been promoting UEFI for quite a while, but Linaro and its members are doing a lot of the really hard work of making sure that it is a reality, with engineers dedicated to working from the reset vector up through the Linux kernel and into some server applications, making sure that all the necessary bits are there in open source. There were a lot of sessions this week dealing with the thorny issues of UEFI (secure boot), power management (PSCI) and ACPI (interaction with existing drivers and FDT).  To top it off, this event was co-located with ARM TechCon, which doubled the fun, including an interesting keynote by Simon Segars, CEO of ARM.

This follows on the heels of the UEFI Plugfest, which was also co-located, this time with LinuxCon in New Orleans. My colleague Jeff Wheeler from Insyde got to attend, and he found folks interested in the question, "Does it really work?" And the answer was, "Yes, it does." That is also the impression I got from Bruno Cornec's blog article "First UEFI PlugFest for Linuxers". Good communication, good testing and, more importantly, UEFI and Linux working together.

The relationship between Linux and UEFI has not always been easy, with conspiracy theories and suspicious kicking of tires. But these two events have shown that they can and do work together, on x86 and ARM.

Tuesday, October 29, 2013

ACPI Specification Now Managed By UEFI, and Why Anyone Should Care

This article by my UEFI colleagues Dong Wei (HP, VP of UEFI) and Andrew Sloss (ARM, ARM Binding Sub-Team Chair) talks about a large recent development in the firmware world: ACPI is now managed by UEFI. I was a part of the ACPI specification's development, starting with ACPI 2.0 up through the current ACPI 5.0 specification, while I was employed with Phoenix Technologies. That specification seems unusual to me now, in that it was essentially a five-way agreement between Intel, Microsoft, Compaq/HP, Phoenix and Toshiba. But back in the day when it was conceived, it wasn't that unusual, as the BIOS Boot Specification (BBS) or Advanced Power Management (APM) will attest. For every revision of the specification, the 5-way consortium was essentially reborn, occasionally (such as when Phoenix was added or Compaq merged with HP) with a change of membership.

Due to the unusual structure and the fact that it was tied in many ways to Microsoft's release schedule and Intel's hardware schedule, releases tended to be big and cumbersome. Adding new members was problematic, since the number of signatures required from legal departments grows exponentially.

On the other hand, UEFI has functioned pretty well since 2005 in taking input and releasing regular errata and specification updates. So with Mark Doran (Intel, UEFI president) assuming the helm of both efforts, it seemed like a good time to push them together. They really address the same target audiences: system firmware providers, OS vendors, OEMs and chip manufacturers, with a smattering of plug-in card and application vendors. It probably helps convince folks like Linaro (working on ARM) and Red Hat (working on Linux) to adopt ACPI if more than Intel and Microsoft are represented.

Now the process that is used to gather input, hash out differences and formulate solutions for UEFI can be applied to ACPI with, in my opinion, great effect.

"UEFI brings in an amazing level of standardizaton for boot loaders"

The title comes from one slide in the "Is UEFI EDK II ready for Android mobile" session at the Linaro Connect 2013 event in Santa Clara. Here is what was presented as the advantages of UEFI for ARM and Android mobile:

  • Very detailed specification from UEFI forum of industry experts
  • Reduces cost of development; code is highly organized and structured - easy to add support for new platforms.
  • Drivers can be independently developed and distributed by the peripheral/controller manufacturer, just like in any high-level OS
  • Brings in concept of "application" at boot loader level itself, can be used for test suite development, new functionality like flashing utilities, splash screens, etc.