Linux Kernel Monkey Log

Random bits from Greg Kroah-Hartman

Kdbus Details

Now that linux.conf.au is over, there has been a bunch of information running around about the status of kdbus and the integration of it with systemd. So, here’s a short summary of what’s going on at the moment.

Lennart Poettering gave a talk about kdbus at linux.conf.au. The talk can be viewed here, and the slides are here. Go read the slides and watch the talk, odds are, most of your questions will be answered there already.

For those who don’t want to take the time watching the talk, lwn.net wrote up a great summary of it, and that article is here. For those of you without a lwn.net subscription, what are you waiting for? You’ll have to wait two weeks until it comes out from behind the paid section of the website, sorry.

There will be a systemd hack-fest a few days before FOSDEM, where we should hopefully pound out the remaining rough edges on the codebase and get it ready to be merged. Lennart will also be giving his kdbus talk again at FOSDEM if anyone wants to see it in person.

The kdbus code can be found in two places, on Google Code and on GitHub, depending on where you like to browse things. In a few weeks we’ll probably be creating some patches and submitting it for inclusion in the main kernel, but more testing with the latest systemd code needs to be done first.

If you want more information about the kdbus interface, and how it works, please see the kdbus.txt file for details.

Binder vs. kdbus

A lot of people have asked about replacing Android’s binder code with kdbus. I originally thought this could be done, but as time has gone by, I’ve come to the conclusion that this will not happen with the first version of kdbus, and possibly can never happen.

First off, go read that link describing binder that I pointed to above, especially all of the links to different resources from that page. That should give you more than you ever wanted to know about binder.

Short answer

Binder is bound to the CPU; D-Bus (and hence kdbus) is bound to RAM.

Long answer

Binder

Binder is an interface that Android uses to provide synchronous calling (CPU) from one task to a thread of another task. There is no queueing involved in these calls, other than that the caller process is suspended until the answering process returns. RAM is not interesting here, beyond the fact that it is used to share the data between the different callers. The fact that the caller process gives up its CPU slice to the answering process is key to how Android works with the binder library.

This is just like a syscall, and it behaves a lot like a mutex. The communicating processes are directly connected to each other. There is an upper limit on how many different processes can be using binder at once, and I think it’s around 16 for most systems.

D-Bus

D-Bus is asynchronous: it queues (RAM) messages, keeps the messages in order, and the receiver dequeues them. The CPU does not matter at all, other than that it is used to do the asynchronous work of passing the RAM around between the different processes.

This is a lot like network communication protocols. It is a very “disconnected” communication method between processes. The upper limit on message sizes and numbers is usually around 8MB per connection, and a normal message is around 200-800 bytes.
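
To make the store-and-forward idea concrete, here is a tiny userspace C sketch of the D-Bus-style model: the sender stores a message in RAM and returns immediately, and the receiver drains the queue later, in order. This is purely an illustration of the messaging model; the names and sizes are made up, and it has nothing to do with the actual kdbus implementation.

```c
/* Toy model of the D-Bus store-and-forward idea: the sender stores a
 * message in RAM and returns immediately; the receiver dequeues the
 * messages later, in the order they were sent.  Illustration only,
 * with made-up names and sizes; not the kdbus implementation. */
#include <string.h>

#define QUEUE_DEPTH 16
#define MSG_MAX     256

struct msg_queue {
	char msgs[QUEUE_DEPTH][MSG_MAX];
	int head, tail, count;
};

/* sender side: store the message and return at once (asynchronous) */
static int enqueue(struct msg_queue *q, const char *msg)
{
	if (q->count == QUEUE_DEPTH)
		return -1;	/* queue full: the sender sees backpressure */
	strncpy(q->msgs[q->tail], msg, MSG_MAX - 1);
	q->msgs[q->tail][MSG_MAX - 1] = '\0';
	q->tail = (q->tail + 1) % QUEUE_DEPTH;
	q->count++;
	return 0;
}

/* receiver side: drain messages in FIFO order, whenever it runs */
static const char *dequeue(struct msg_queue *q)
{
	const char *msg;

	if (q->count == 0)
		return NULL;	/* nothing queued */
	msg = q->msgs[q->head];
	q->head = (q->head + 1) % QUEUE_DEPTH;
	q->count--;
	return msg;
}
```

The two sides never run in lockstep; the sender’s CPU time is spent only on copying the message into RAM, which is the “bound to RAM” half of the comparison above.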

Binder

The model of binder was created for a microkernel-like device (side note: go read this wonderful article about the history of Danger, written by one of the engineers at that company, for a glimpse into where the Android internals came from, binder included). The model of binder is very limited and inflexible in its use-cases, but very powerful, extremely low-overhead, and fast. Binder ensures that the same CPU timeslice will go from the calling process into the called process’s thread, and then come back into the caller when finished. There is almost no scheduling involved, and it is much like a syscall into the kernel that does work for the calling process. This interface is very well suited for cheap devices with almost no RAM and very low CPU resources.

So, for systems like Android, binder makes total sense, especially given the history of it and where it was designed to be used.

D-Bus

D-Bus is a create-store-forward, compose-reply, then create-store-forward messaging model, which is more complex than binder, but because of that it is extremely flexible, versatile, network transparent, much easier to manage, and makes it very easy to let fully untrusted peers take part in the communication (hint: never let this happen with binder, or bad things will happen…). D-Bus can scale up to huge amounts of data, and with the kdbus implementation it is possible to pass gigabytes of buffers to every connection on the bus if you really want to. CPU-wise, it is not as efficient as binder, but it is a much better general-purpose solution for general-purpose machines and workloads.

CPU vs. RAM

Yes, it’s an oversimplification of two complex IPC methods, but these three words should help you explain the differences between binder and D-Bus, and why kdbus isn’t going to be able to easily replace binder anytime soon.

Never say never

Ok, before you start to object to the above statements: yes, we could add functionality to kdbus to have some blocking ioctl calls that implement something like “write question -> block for reply -> read reply” in a single call on the request side, with the server side doing “read request -> write answer -> block in read”. That would get kdbus a tiny bit closer to the binder model, by queueing stuff in RAM instead of relying on a thread pool.
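
As a rough sketch in plain C of what that would look like from the caller’s point of view: the request side collapses write-question/block/read-reply into one call. All names here are invented for illustration, and a direct function call stands in for “queue the message in RAM and sleep until the peer answers”; the real thing would be a blocking ioctl.

```c
/* Hypothetical "binder-ish" request/reply sketch; not real kdbus API. */
#include <string.h>

#define REPLY_MAX 128

/* server side: read request -> write answer (-> block in read again) */
static void serve(const char *request, char *reply)
{
	if (strcmp(request, "ping") == 0)
		strcpy(reply, "pong");
	else
		strcpy(reply, "unknown");
}

/* request side: write question -> block for reply -> read reply,
 * collapsed into one call the way a blocking ioctl would collapse it */
static void ask(const char *question, char *answer)
{
	serve(question, answer);	/* stands in for queue + sleep + wake */
}
```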

That might work, but it would require a lot of work on the binder library side in Android, and as a very limited number of people have write access to that code (they can all be counted on one hand), and it’s a non-trivial amount of work for a core function of Android that is working very well today, I don’t know if it will ever happen.

But anything is possible, it’s just software you know…

Thanks

Many thanks to Kay Sievers who came up with the CPU vs. RAM description of binder and D-Bus and whose email I pretty much just copied into this post. Also thanks to Kay and Lennart for taking the time and energy to put up with my silly statements about how kdbus could replace binder, and totally proving me wrong, sorry for having you spend so much time on this, but I now know you are right.

Also thanks to Daniel Mack and Kay for doing so much work on the kdbus kernel code, that I don’t think any of my original implementation is even present anymore, which is probably a good thing. Also thanks to Tejun Heo for help with the memfd implementation and cgroups help in kdbus.

Binary Blobs to C Structures

Sometimes you don’t have access to vim’s wonderful xxd tool, and you need to generate some .c code based on a binary file. This happened to me recently when packaging up the EFI signing tools for Gentoo. Adding a build requirement of vim for a single autogenerated file was not an option for some users, so I created a perl version of the xxd -i command line tool.

This works because everyone has perl in their build systems, whether they like it or not. Instead of burying it in the efitools package, here’s a copy of it for others to use if they want/need it.

#!/usr/bin/env perl
#
# xxdi.pl - perl implementation of 'xxd -i' mode
#
# Copyright 2013 Greg Kroah-Hartman <gregkh@linuxfoundation.org>
# Copyright 2013 Linux Foundation
#
# Released under the GPLv2.
#
# Implements the "basic" functionality of 'xxd -i' in perl to keep build
# systems from having to build/install/rely on vim-core, which not all
# distros want to do.  But everyone has perl, so use it instead.

use strict;
use warnings;
use File::Slurp qw(slurp);

my $indata = slurp(@ARGV ? $ARGV[0] : \*STDIN);
my $len_data = length($indata);
my $num_digits_per_line = 12;
my $var_name;
my $outdata;

# Use the name of the file we read from as the variable name, converting
# '/' and '.' to '_', or, if this is stdin, just use "stdin" as the name.
if (@ARGV) {
        $var_name = $ARGV[0];
        $var_name =~ s/\//_/g;
        $var_name =~ s/\./_/g;
} else {
        $var_name = "stdin";
}

$outdata .= "unsigned char $var_name\[] = {";

# trailing ',' is acceptable, so instead of duplicating the logic for
# just the last character, live with the extra ','.
for (my $key= 0; $key < $len_data; $key++) {
        if ($key % $num_digits_per_line == 0) {
                $outdata .= "\n\t";
        }
        $outdata .= sprintf("0x%.2x, ", ord(substr($indata, $key, 1)));
}

$outdata .= "\n};\nunsigned int $var_name\_len = $len_data;\n";

binmode STDOUT;
print {*STDOUT} $outdata;

Yes, I know I write perl code like a C programmer, that’s not an insult to me.
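
For reference, feeding a hypothetical three-byte file foo.bin containing the text “abc” through the script produces a .c fragment along these lines (the trailing comma after the last byte is deliberate, as the comment in the script explains):

```c
/* hypothetical output of: xxdi.pl foo.bin
 * ('a' = 0x61, 'b' = 0x62, 'c' = 0x63) */
unsigned char foo_bin[] = {
	0x61, 0x62, 0x63,
};
unsigned int foo_bin_len = 3;
```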

Booting a Self-signed Linux Kernel

Now that The Linux Foundation is a member of the UEFI.org group, I’ve been working on the procedures for how to boot a self-signed Linux kernel on a platform so that you do not have to rely on any external signing authority.

After digging through the documentation out there, it turns out to be relatively simple in the end, so here’s a recipe for how I did this, and how you can duplicate it yourself on your own machine.

We don’t need no stinkin bootloaders!

When building your kernel image, make sure the following options are set:

CONFIG_EFI=y
CONFIG_EFI_STUB=y
...
CONFIG_FB_EFI=y
...
CONFIG_CMDLINE_BOOL=y
CONFIG_CMDLINE="root=..."
...
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE="my_initrd.cpio"

The first two options here enable EFI mode, and tell the kernel to build itself as an EFI binary that can be run directly from the UEFI BIOS. This means that no bootloader is involved at all: the UEFI BIOS just boots the kernel, with no “intermediate” step needed. As much as I love gummiboot, if you trust that the kernel image you are running is “correct”, this is the simplest way to boot a signed kernel.

As no bootloader is going to be involved in the boot process, you need to ensure that the kernel knows where the root partition is, what init is going to be run, and anything else that the bootloader normally passes to the kernel image. The option listed above, CONFIG_CMDLINE, should be set to whatever you want the kernel to use as the command line.

Also, as we don’t have an initrd passed by the bootloader to the kernel, if you want to use one, you need to build it into the kernel itself. The option CONFIG_INITRAMFS_SOURCE should be set to your pre-built cpio initramfs image you wish to use.

Note, if you don’t want to use an initrd/initramfs, don’t set this last option. Also, currently it’s a bit of a pain to build the kernel, build the initrd using dracut with the needed dracut modules and kernel modules, and then rebuild the kernel adding the cpio image to the kernel image. I’ll be working next on taking a pre-built kernel image, tearing it apart, and adding a cpio image directly to it, with no need to rebuild the kernel. Hopefully that can be done with only a minimal use of libbfd.

After setting these options, build the kernel and install it on your boot partition (it is formatted as FAT, so that UEFI can find it, right?). To have UEFI boot it directly, you can place it at /boot/EFI/boot/bootx64.efi, so that UEFI will treat it as the “default” bootloader for the machine.

Lather, rinse, repeat

After you have a kernel image installed on your boot partition, it’s time to test it.

Reboot the machine, and go into the BIOS. Usually this means pounding on the F2 key as the boot starts up, but all machines are different, so it might take some experimentation to determine which key your BIOS needs. See this post from Matthew Garrett for the problems you might run into trying to get into BIOS mode on UEFI-based laptops.

Traverse the BIOS settings, find the place where the UEFI boot mode is specified, and turn the “Secure Boot” option OFF.

Save the settings and reboot; the BIOS should find the kernel located at /boot/EFI/boot/bootx64.efi and boot it directly. If your kernel command line and initramfs (if you used one) are set up properly, you should now be up and running and able to use your machine as normal.

If you can’t boot properly, ensure that your kernel command line was set correctly, or that your initramfs has the needed kernel modules in it. This usually takes a few times back and forth to get all of the correct settings properly configured.

Only after you can successfully boot the kernel directly from the BIOS in “insecure” mode should you move to the next step.

Keys to the system

Now that you have a working kernel image and system, it is time to start messing with keys. There are three different types of UEFI keys that you need to learn about: the “Platform Key” (known as a “PK”), the “Key-Exchange Keys” (known as a “KEK”), and the “Signature Database Key” (known as a “db”). For a simple description of what these keys mean, see the Linux Foundation whitepaper about UEFI secure boot, published back in 2011. For a more detailed description of the keys, see the UEFI specification directly.

For a very simple description: the “Platform Key” shows who “owns and controls” the hardware platform, the “Key-Exchange Keys” show who is allowed to update the hardware platform, and the “Signature Database keys” show who is allowed to boot the platform in secure mode.

If you are interested in how to manipulate these keys, replace them, and do neat things with them, see James Bottomley’s blog for descriptions of the tools you can use and much more detail than I provide here.

To manipulate the keys on the system, you need the UEFI keytool USB image from James’s website called sb-usb.img (md5sum 7971231d133e41dd667a184c255b599f). dd the image to a USB drive, and boot the machine into the image.

Depending on the mode of the system (insecure or secure), you will either be dropped to the UEFI console or be presented with a menu. If you get a console, type KeyTool to run the KeyTool binary; if you get a menu, select the option to run KeyTool directly.

Save the keys

The first thing to do is save the keys that are currently on the system, in case something “bad” ever happens and you really want to be able to boot another operating system in secure mode on the hardware. Go through the menu options in the KeyTool program and save off the PK, KEK, and db keys to the USB drive, to the hard drive, or to another USB drive you plug into the system.

Take those keys and store them somewhere “safe”.

Clear the machine

Next you should remove all keys from the system. You can do this from the KeyTool program directly, or just reboot into the BIOS and select an option to “remove all keys”, if your BIOS provides this (some do, and some don’t.)

Create and install your own keys

Now that you have an “empty” machine, with the previous keys saved off somewhere else, you should download the sbsigntool and efitools packages and install them on your development system. James has built the latest versions of these packages in the openSUSE build system for all RPM and DEB-based Linux distros. If you have a Gentoo-based system, I have checked the needed versions into portage, so just grab them directly from there.

If you want to build these from source, the sbsigntool git tree can be found here, and the efitools git tree is here.

The efitools README is a great summary of how to create new keys, and here are the commands it says to follow in order to create your own set of keys:

# create a PK key
openssl req -new -x509 -newkey rsa:2048 -subj "/CN=my PK name/" -keyout PK.key -out PK.crt -days 3650 -nodes -sha256

# create a KEK key
openssl req -new -x509 -newkey rsa:2048 -subj "/CN=my KEK name/" -keyout KEK.key -out KEK.crt -days 3650 -nodes -sha256

# create a db key
openssl req -new -x509 -newkey rsa:2048 -subj "/CN=my db name/" -keyout db.key -out db.crt -days 3650 -nodes -sha256

The option -subj can contain a string with whatever name you wish to have for your key, be it your company name, or the like. Other fields can be specified as well to make the key more “descriptive”.

Then, take the PK key you have created, turn it into a EFI Signature List file, and add a GUID to the key:

cert-to-efi-sig-list -g <my random guid> PK.crt PK.esl

Where <my random guid> is any valid GUID you wish to use (I’ve seen some companies use all ‘5’s as their GUID, so I’d recommend picking something a bit more “random”, to make it look like you know what you are doing with your key…).

Now take the EFI Signature List file and create a signed update file:

sign-efi-sig-list -k PK.key -c PK.crt PK PK.esl PK.auth

For more details about the key creation (and to see where I copied these command lines from), see James’s post about owning your own Windows 8 platform.

Take these files you have created, put them on a USB disk, run the KeyTool program, and use it to add the db, KEK, and PK keys into the BIOS. Note, apply the PK key last, as once it is installed, the platform will be “locked” and you should not be able to add any other keys to the system.

Fail to boot

Now that your own set of keys is installed in the system, flip the BIOS back into “Secure Boot” mode, and try to boot your previously successful Linux image again.

Hopefully it should fail with some type of warning, the laptop I did this testing on provides this “informative” graphic:

Sign your kernel

Now that your kernel can’t boot, you need to sign it with the db key you placed in your BIOS:

sbsign --key db.key --cert db.crt --output bzImage.signed bzImage

Take the bzImage.signed file and put it back in the boot partition, copying over the unsigned /boot/EFI/boot/bootx64.efi file.

Profit!

Now, rebooting the machine should cause the UEFI BIOS to check the signature of the signed kernel image, and boot it properly.

Demo

I’ve recorded a video of a Gateway laptop booting a signed kernel, with my own key, here. The demo tries to boot an unsigned kernel image that is on the hard disk, but it fails. I plug in a signed kernel that is on the USB disk, and it properly boots.

I did the test with a CoreOS image as it provides a very small self-contained Linux system that allows for easy testing/building from a development machine.

Future plans

Now that you have full control over your system, running only a Linux kernel image that you sign yourself, a whole raft of possibilities opens up. Here are a few that I can think of off the top of my head:

  • A signed Linux system self-contained in the kernel image (with initramfs), booting into RAM, with nothing on the disk other than the original kernel image.
  • A signed kernel image whose initramfs validates the other partitions with a public key, to ensure they haven’t been tampered with before mounting and using them (ChromeOS does this exact thing quite well). This passes the “chain of trust” on to the filesystem image, giving you assurances that you are running code you trust, on a platform you trust.
  • Combine signed kernel images with TPM key storage to unlock encrypted partitions.

If you are interested in these types of things, I’ll be at the Linux Plumbers Conference in a few weeks, where a bunch of people will be discussing secure boot issues with Linux. I’ll also be at LinuxCon North America, Europe, and Korea if you want to talk about UEFI and Linux issues there.

Longterm Kernel 3.10

As I’ve discussed in the past, I will be selecting one “longterm stable” kernel release every year, and maintain that kernel release for at least two years.

Despite the fact that the 3.10-stable kernel releases are not slowing down at all, and there are plenty of pending patches already lined up for the next few releases, I figured it was a good time to let everyone know now that I’m picking the 3.10 kernel release as the next longterm kernel, so they can start planning things around it if needed.

I’m picking this kernel after spending a lot of time talking about kernel releases, and product releases and development schedules from a large range of companies and development groups. I couldn’t please everyone, but I think that the 3.10 kernel fits the largest common set of groups that rely on the longterm kernel releases.

This also means that the LTSI project will be rebasing their patchset on 3.10 as well, which is good news for people using that project as a basis for kernel releases for their products.

3.10 Linux Kernel Development Rate

While working on the latest statistics for the yearly Linux Foundation “Who Writes Linux” paper, I noticed the rate-of-change for the 3.10 kernel release that just happened this weekend:

Every year I think we can’t go faster, and every year I’m wrong.

Note, the “number of employers” row is not correct, I haven’t updated those numbers yet, that takes a lot more work, which I will be doing this week.

Spreadsheet source, the scripts used to generate the numbers, and “cleaned up” kernel logs can be found in my kernel-history repo here.

How to Create a sysfs File Correctly

One common Linux kernel driver issue that I see all the time is a driver author attempting to create a sysfs file in their code by doing something like:

int my_driver_probe(...)
{
        ...
        retval = device_create_file(my_device, &my_first_attribute);
        if (retval)
                goto error1;
        retval = device_create_file(my_device, &my_second_attribute);
        if (retval)
                goto error2;
        ...
        return 0;

error2:
        device_remove_file(my_device, &my_first_attribute);
error1:
        /* Clean up other things and return an error */
        ...
        return -ENODEV;
}

That’s a good first start, until they get tired of adding more and more sysfs files and discover attribute groups, which allow multiple sysfs files to be created and destroyed all at once, without having to handle the unwinding of things if problems occur:

int my_driver_probe(...)
{
        ...
        retval = sysfs_create_group(&my_device->kobj, &my_attribute_group);
        if (retval)
                goto error;
        ...
        return 0;

error:
        /* Clean up other things and return an error */
        ...
        return -ENODEV;
}

And everyone is happy, and things seem to work just fine (oh, you did document all of these sysfs files in Documentation/ABI/, right?)

Anyway, one day the developer gets an email saying that, for some reason, userspace can’t see the sysfs files that are being created. The user is using a library, or a udev rule, and the attribute seems to not exist. This is quite odd, because if you look at sysfs, the files are there, yet libudev doesn’t think they are. What is going on?

It turns out that the driver is racing with userspace to notice when the device is being created in sysfs. The driver core, and at a more basic level, the kobject core below it, announces to userspace when a new device is created or removed from the system. At that point in time, tools run to read all of the attributes for the device, and store them away so that udev rules can run on them, and other libraries can have access to them.

The problem is, by the time a driver’s probe() function is called, userspace has already been told the device is present, so any sysfs files created at this point will probably be missed entirely.

The driver core has a number of ways to solve this, making the driver author’s job even easier, by allowing “default attributes” to be created by the driver core before the device is announced to userspace.

These default attribute groups exist at lots of different levels in the driver / device / class hierarchy.

If you have a bus, you can set the following fields in struct bus_type:

struct bus_type {
        ...
        struct bus_attribute    *bus_attrs;
        struct device_attribute *dev_attrs;
        struct driver_attribute *drv_attrs;
        ...
}

If you aren’t dealing with a bus, but rather a class, then set these fields in your struct class:

struct class {
        ...
        struct class_attribute          *class_attrs;
        struct device_attribute         *dev_attrs;
        struct bin_attribute            *dev_bin_attrs;
        ...
}

If you aren’t in control of the bus logic or class logic, but you have control over the struct device_driver structure, then set this field:

struct device_driver {
        ...
        const struct attribute_group **groups;
        ...
}

Sometimes you don’t have control over the driver either, or want different sysfs files for different devices controlled by your driver (platform_device drivers are like this at times.) Then, at the least, you have control over the device structure itself. If so, then use this field:

struct device {
        ...
        const struct attribute_group **groups;
        ...
}

By setting this field, you don’t have to do anything in your probe() or release() functions at all in order for the sysfs files to be properly created and destroyed whenever your device is added to or removed from the system. And, most importantly, it is done in a race-free manner, which is always a good thing.
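
Putting it together, a minimal sketch of the race-free pattern looks like the following. The struct definitions here are simplified stand-ins so the sketch is self-contained; in a real driver they come from <linux/sysfs.h> and <linux/device.h>, and you would normally declare the attributes with the DEVICE_ATTR() family of macros.

```c
/* Sketch of the race-free pattern: declare the attributes and the
 * attribute group statically, and point the device's ->groups field at
 * them before the device is registered.  The stand-in struct
 * definitions below are simplified so the sketch stands alone; real
 * drivers use the kernel's own definitions instead. */
#include <stddef.h>

struct attribute {
	const char *name;
	unsigned short mode;
};

struct attribute_group {
	const char *name;		/* optional subdirectory name */
	struct attribute **attrs;	/* NULL-terminated list */
};

static struct attribute my_first_attribute  = { .name = "first",  .mode = 0444 };
static struct attribute my_second_attribute = { .name = "second", .mode = 0644 };

static struct attribute *my_attrs[] = {
	&my_first_attribute,
	&my_second_attribute,
	NULL,				/* list must be NULL-terminated */
};

static const struct attribute_group my_attribute_group = {
	.attrs = my_attrs,
};

/* this is what you would assign to the device's ->groups pointer, e.g.
 *	my_device->groups = my_attribute_groups;
 * before the device is registered and announced to userspace */
static const struct attribute_group *my_attribute_groups[] = {
	&my_attribute_group,
	NULL,
};
```

Because the groups are in place before registration, the driver core creates every file before userspace is told the device exists, so udev and friends can never see a half-populated device.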

Hardware, Past, Present, and Future.

Here’s some thoughts about some hardware I was going to use, hardware I use daily, and hardware I’ll probably use someday in the future.

Thunderbolt is dead, long live Thunderbolt.

Seriously, it’s dead, use it as a video interconnect and don’t worry about anything else.

Ok, some more explanation is probably in order…

Back in October of 2012, after a meeting with some very smart Intel engineers, I ended up the proud owner of a machine with Thunderbolt support, some hard disks with Thunderbolt interfaces, and most importantly, access to the super-secret Thunderbolt specification on how to make this whole thing work properly on Linux. I also had a MacBook Pro with a Thunderbolt interface which is what I really wanted to get working.

So I settled in and read the whole spec. It was fun reading (side note, it seems that BIOS engineers think Windows kernel developers are lower on the evolutionary scale than they are, and for all I know, they might be right…), and I’ll summarize the whole super-secret, NDA-restricted specification, when it comes to how an operating system is supposed to deal with Thunderbolt, shhh, don’t tell anyone that I’m doing this:

Thunderbolt is PCI Express hotplug, the BIOS handles all the hard work.

Seriously, it’s that simple, at least from the kernel point of view. So, it turns out that Linux should work just fine with Thunderbolt, no changes needed at all, as we have been supporting PCI hotplug in one form or another for 15+ years now (you remember CardBus, right?)

Some patches were posted by the engineers at Intel to get the one known motherboard with Thunderbolt support to work properly (it seems that the ACPI tables were, of course, wrong, so work-arounds were needed), and that should be it, right?

Wrong.

It turns out that Apple, in their infinite wisdom, doesn’t follow the specification; rather, they require a kernel driver to do all of the work that the BIOS is supposed to be doing. This works out well for them, as they can share the same code between their BIOS and their kernel, but any other operating system, which doesn’t know how to talk directly to the hardware at that level, is out of luck. So, no Thunderbolt support on Apple hardware for Linux (at least through May 2013; maybe newer models will change this, but I’m not counting on it.)

But wait, what about Thunderbolt support on other hardware? I was in Hong Kong in early 2013, and of course took the chance to visit the local computer stores. On one wall of a shop I saw all of the latest motherboards, brand new, that would be sold all around the world for the next 6+ months. None of them had Thunderbolt support. It’s almost impossible to find Thunderbolt on a motherboard these days, and that doesn’t look to change any time soon.

Then I read this interesting article that benchmarked Thunderbolt mass-storage devices against USB ones. It turns out that the speeds are the same. And that’s with the decades-old USB storage specification, which is so slow it’s not funny. Wait until manufacturers come out with devices supporting the latest UAS specification (and until the USB host controller drivers support it as well; Linux doesn’t yet because there is no hardware out there, a wonderful chicken-and-egg problem…). When that happens, USB storage speeds are going to be way above Thunderbolt’s.

So Thunderbolt is dead, destined for the same future as FireWire: a special interconnect that almost no one outside of Apple hardware circles uses, with USB taking over the mass market instead.

Note, all of this is about Thunderbolt the PCI interconnect, not the video connection. That works just fine on Linux, as it isn’t PCI Express, just a video pass-through. No problems there.

Present

I’ve been lucky to be using a Chromebook Pixel for the past few months, thanks to some friends at Google who got it for me. It’s the best laptop I’ve used in a very long time, and I love the thing. I also hate it, and curse it daily, but wouldn’t give it up at all.

I’m running openSUSE Tumbleweed on it, not Chrome OS, which of course is the main reason I’m having the problems listed below. If you stick with Chrome OS, it’s wonderful, seriously great. My day job (Linux kernel work) means that I can’t use Chrome OS, as I can’t change the kernel, but almost everyone else can, especially if your company uses Google Apps for email and the like. Chrome OS is really good, I like it, and I think it is the way forward for a large segment of laptop users. My daughter asks me weekly if I’m willing to give the laptop to her to reinstall Chrome OS on it, as that’s her desktop of choice, and this laptop runs it better than anything I’ve seen.

Here’s the things that drive me crazy:

  • small disk size. It’s ok for normal kernel work, but when I was trying to build some full-system virtual machines for testing, I quickly ran out of space.
  • slow disk speed. It’s an “SSD”, but I’m used to real SSD speeds, not this slow thing, where I can easily max out the I/O path doing kernel builds, as the processor quickly outraces it.
  • USB 2 ports. I could get around the disk size and speed if I had USB 3.0, and I totally understand why there are only USB 2 ports in the laptop, but hey, I can wish, right?
  • various EC issues, the Embedded Controller in the laptop is “odd” and when you run a different operating system than Chrome OS, the quirks come out. I’ve learned to live with them, but I would love to see an update for the BIOS that fixes the known problems that are already resolved within the code trees. It’s just up to Google to push that out publicly.

Here’s the things that make me love this laptop:

  • the screen
  • the screen
  • the screen
  • seriously, the screen. It’s beautiful, and is worth any problem I’ve had with this laptop.
  • wireless just works, no issues at all, great Atheros driver / hardware.
  • it’s the best presentation laptop I’ve ever had. Gnome 3 works wonderfully with it, and the external display adaptor can easily handle a different resolution. LibreOffice’s presentation mode, with the speaker notes on the laptop’s huge screen, looks wonderful, and the slides at a much lower resolution on the projector look just fine. No problems at all with this; just plug the laptop into the projector and go.
  • very fast processors. Full kernel builds in less than 5 minutes, no problem.

There are some things that originally bothered me, but have been fixed, or I’m now used to:

  • suspend / resume didn’t work, that’s fixed in 3.10-rc kernels.
  • resume used to throttle the CPU to only half speed, again, fixed in 3.10-rc kernels.
  • keyboard backlights don’t survive suspend/resume, there are fixes out there that hopefully will get into 3.11, it doesn’t bother me at all.
  • lack of PgUp/PgDown/Home/End/Delete keys. The ever-talented Dirk Hohndel made a patch for the PS/2 driver (seriously, a PS/2 keyboard?) that overloads the right Alt key plus the arrow keys to provide these, so this is solved, but it would be good to get it merged upstream one day for others to use.
  • trackpad was annoying at first, but now I’m used to the three-finger tap for middle click. Oh, and I got a good wireless mouse to make it easier.

It’s a great laptop, built really solid. I’d recommend it to anyone who uses Chrome OS, and for anyone else if you like tinkering with your own kernels (a small market, I know.) Later this year new hardware should be coming out, with the same type of high-resolution display, and beefier processors and bigger storage devices. When that happens, I’ll get one of them, and my daughter will greedily grab this laptop and install Chrome OS, but until then, this is what I use to travel the world with.

The future is glass

A few weeks ago, a friend of mine came over with a newly acquired Google Glass device. I played with it for a few minutes, and was instantly amazed at the possibilities it will provide. I, like probably lots of you, have been reading books that describe different types of heads-up or “embedded computers” for many many years, and I’ve always been waiting for the day that this will become a reality.

Google Glass might not be the device described in science fiction books, but it’s the closest I’ve seen so far. The interface is completely natural, the display is amazing, and the potential is huge.

And yes, you do look like a dork while wearing them, but that will either become acceptable, or the device will shrink over time. I’m betting on a combination of both of them.

But what I found even more amazing is what happened when the kids put them on. The youngest put them on, and, as I explained on Google+ after it happened, his responses went, in order:

  • “You could watch movies with this in class!”
  • “Google Glass, what is Iron Man?”
  • “Google Glass, what is 7 * 24”

So that went from YouTube time-waster, to movie background information, to homework solver, in a matter of minutes. Total acceptance, no hesitation at all; I think that’s proof of just how big this will eventually be.

Later that day, we went to a neighborhood yogurt shop, and my friend ended up stalling the checkout line for a long time as the teenagers running the store insisted on trying the device out, taking pictures of each other, and doing Google searches to see just how popular their store was (hint: it wasn’t the highest ranked, which was funny). After we finally paid for our dessert, my friend was stuck demoing the device for just about everyone who came into the shop for the next 20 minutes. People of all ages, kids to retirees, all instantly got the device and enjoyed it.

So, if you’ve made fun of Google Glass in the past, try one out, and consider the potential of it.

And of course, it runs Linux, which makes me happy.