The quest towards trusted client applications: a rambling

Some of you are surely aware of my work with PolicyKit and my efforts on making privilege escalation in KDE/Linux easy and accessible. The final result of those efforts was KAuth, now part of kdelibs. However, for quite some time now I have been wondering how to improve the whole escalation/trust experience, which also covers a part of our middleware that is still not quite there.

One of the main issues in a Linux system at the moment is that we are not capable of identifying and ensuring that a client application *is* actually the application we are expecting. Anyone can fake a DBus service on the session bus, and the same goes for the process name and much more. This is also one of the big pitfalls of Polkit: as much as the authorization is guaranteed to be unique for each client, there is no way to determine whether a specific client is *actually* supposed to execute a specific action or not.

These days I am at UDS, and I have been talking about the subject with Alex (Fiestas) and others quite extensively, thinking about a potential way of fixing it. It is not an easy topic at all and the solution is anything but obvious. We came to consider the following points:

  • A client application should be “signed” somehow against a central trusted entity
  • This trusted entity should be managing the authorization towards both privileged and unprivileged actions

This design aims to solve several problems beyond privilege escalation – namely, API access restriction. A very good example is inhibition. At the moment, any application can decide independently to prevent suspension on your system, making your laptop fry if it persists throughout the session. In an ideal world, we would like just a set of “trusted” applications to be able to execute that API. Of course, the concept of a trusted application can either come from the distribution or the user – we don’t really care about this. What we really care about is determining that a specific application IS actually the application we expect it to be, and that it has a set of specific privileges.
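To make the inhibition example concrete, here is a minimal sketch of what such API gating could look like on the daemon side. The table, the application identifiers, and the action names are all hypothetical; the point is only that a verified identity maps to an explicit set of allowed actions, instead of every caller being allowed everything:

```python
# Hypothetical policy table inside the central trusted entity.
# Identifiers and action names are illustrative, not a real API.
POLICY = {
    "org.kde.powerdevil": {"inhibit-suspend", "set-brightness"},
    "org.example.mediaplayer": {"inhibit-suspend"},
}

def is_action_allowed(app_identity: str, action: str) -> bool:
    """Return True only if the (already verified) application
    identity is listed as trusted for the requested action."""
    return action in POLICY.get(app_identity, set())
```

An unknown or unlisted caller simply gets denied: `is_action_allowed("evil.app", "inhibit-suspend")` is False, regardless of what name the process advertises for itself.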

So our central “trusted” entity, ideally a daemon started in a way in which it could not be easily faked (I guess systemd might help here), would act as a central policy handler for several userspace features and applications. The main problem in this design is how to implement the signature check for each application, and how to make sure that only the application itself can advertise its own signature. The following is the solution that seems most sensible to us at the moment:

  • The application gets installed through a package management system
  • A post-installation hook creates a user named after the application’s executable, or named in some other way that is easily identifiable by the application.
  • The same hook takes care of generating a key pair for the application. One key gets stored in a location accessible only by the newly created user, the other is stored in the database of the central trusted entity and is associated uniquely with the application (inside the central daemon)
  • The application would be a setuid executable. This way, it can escalate to its dedicated user for the signature handshake process, and drop back down to the caller for everything else. This should not be a threat to security, as the only privilege of the “special” user would be access to that specific directory.
  • This way, any malicious application without root access would be unable to fake its identity towards a “trusted” client application
  • Of course, this implies that only the package management tool and a developer tool should be able to create new certificates inside the central entity, which would be guaranteed via the very same mechanism. We start from the assumption that the OS installation is not compromised, which is sensible.
  • Before you complain about the bootstrapping phase, here is how it works. We assume that the package manager will always be installed first. Upon its installation, the package manager installs its own key (no need for a dedicated user, since it always runs as root anyway) into a known path. When the central trusted entity is installed, it self-installs that specific certificate into its database, trusting only the package manager in the first place. This way the system is ready to perform further certificate installations.
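The signature handshake sketched above could work as a simple challenge–response. As a rough illustration (Python’s standard library has no asymmetric crypto, so this sketch substitutes an HMAC over a per-application secret for the real key pair; all names and the file layout are illustrative, not part of the actual design):

```python
import hashlib
import hmac
import secrets

def issue_challenge() -> bytes:
    """The central entity sends a fresh random nonce to the client."""
    return secrets.token_bytes(32)

def client_response(app_secret: bytes, challenge: bytes) -> bytes:
    """Computed while the setuid client runs as its 'special' user,
    the only user able to read the per-application credential."""
    return hmac.new(app_secret, challenge, hashlib.sha256).digest()

def verify(stored_secret: bytes, challenge: bytes, response: bytes) -> bool:
    """The central entity recomputes the MAC from its own copy of the
    credential and compares in constant time."""
    expected = hmac.new(stored_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

A process that cannot read the application’s credential cannot produce a valid response, which is exactly the property the per-application user and setuid dance are meant to provide; the real implementation would sign the challenge with the private half of the key pair instead.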

I would love to hear opinions on this before we can start working on a proof of concept approach and see if all that we said is somehow feasible. Do you see any flaws? Any obvious things we’ve missed? Any ideas for doing what we planned in a better/simpler way? Please discuss and let us know.

~ by Dario on 11 May, 2012.

14 Responses to “The quest towards trusted client applications: a rambling”

  1. I think this a dangerous step towards the “treacherous (‘trusted’) computing” non-goal. Binaries having to be signed by a central authority to be able to perform basic tasks on the system runs counter to the core principles of Free Software.

    • As long as root is able to update the key store or disable it completely it should not matter from a free software perspective

      • I just need to become root to manipulate the system then ?

        By the way, I would recommend taking a look at Aegis or Smack to see what many security experts already accomplished for Harmattan and MeeGo back then. I spent a year and a half there, and there were people with more expert knowledge than me, from what I saw.🙂

        They are security frameworks that already exist, built from the ground up, with user space utilities and so forth.

        “Of course we start from the assumption that the OS installation is not compromised, which is sensible.” -> This statement suggests to me that this idea would only address a very small subset of a full-fledged security framework.

        I would be interested in seeing this idea added to the already existing projects; why not extend those if there are problems with them:
        http://meego.gitorious.org/meego-platform-security/

        When we wrote the “libcreds” library back then, it was written with upstreaming in mind as a general credential library, and not something tied to one or another underlying Linux kernel solution.

    • It’s just one step, but every step which brings us closer to treacherous computing is a problem. Even that step alone already makes it harder to modify or develop software, and those people who are doing that will likely not be able to benefit from the proposed security feature, at least not fully.

  2. How are applications which are not installed via the package manager handled?
    Can any install script implement such a hook?

  3. Great. A HIPS for Linux. Will it work with apps installed from source? Presumably everything can be done by hand, but I’d prefer a semi-automagic way.

    Also, is it only for D-Bus? What about AppArmor, SELinux and other stuff?

  4. I don’t think this is the right place to post the following, but:

    At the moment, when an application like Dolphin takes actions like copying files to /lib/modules/ without root privileges, you get an error message, which is unusable for inexperienced Linux users. Isn’t it possible to grant programs like Dolphin temporary root access for certain actions, popping up a kdesudo message box? When you click on a .sh file you downloaded, wouldn’t a popup that asks you for a password and explains the situation be much easier, instead of opening the code in Kate or doing nothing?

    ‘kdesudo dolphin’ works, for users who know something about Linux and KDE, etc. But besides being not very user-friendly, running Dolphin as root could be rather dangerous…

    On topic:
    Nothing to say, I just agree with the decision.

  5. I think this is a good idea for people who require that extra security, and being open source it will probably be done in a flexible, open way so people can set up an application with these keys/authentication (even when compiled from source🙂

    I don’t know much about it, but aren’t SELinux / AppArmor already there to limit what an application can do? Would it be possible to use or extend these to handle this as well? I would hate to see a reinvented wheel when the work could be done by leveraging an existing tool.

    It is great to see that people are still moving Linux security forward despite already being so much more secure than the competitors.

  6. Thinking for 5 seconds about this: inventing a new credential mechanism seems very hard; maybe reusing an existing credential mechanism would work?
    For example, if your service was provided by an executable file, you could use the user/group/other permission bits to allow/deny running the executable.
    Of course, starting an executable uses lots of resources, so this is only a hack, but I think that it does ~90% of what you want.

  7. I think the problem you’re trying to solve is very important in order to make security accessible to users. Let me know if I am wrong, but can we summarize what you’re doing as “trying to implement capability-based security at user level”?

    Of course, I think the perfect solution here would be some kernel support, but it seems to me this is not going to happen, so let’s stay pragmatic and consider the user-space options.

    First things first: like many “system” services, I think great care should be taken not to make this “desktop dependent”. I mean, it’s too bad we have kio on KDE, gvfs on GNOME, and fuse for non-desktop applications. Seems to me your solution is good in that respect.

    Another question is how heavy the solution is on the user, and in that respect I think the options you propose can be improved. Your approach has a few flaws:
    * if I understand correctly, it requires a modification of the applications to implement the key transaction.
    * it requires a lot of user IDs from the kernel, possibly making user management more complex.
    * it requires a specific set of permissions on the application file that might conflict with other requirements.
    * it requires explicit user intervention at install time (that can be done by a package manager hook, sure, but what about applications installed another way?)

    If you decide to go this way anyway, please take a look at Google’s solution for Android; it’s not that different from what you propose.

    However, I see a simpler option; please tell me if it’s not feasible for some reason:
    * Get the PID of the application making the request (the one that opened the dbus socket); this might require some dbus patch, I agree
    * Use the hash of the executable found in /proc/$PID/exe as the identifier of the application in the permission database.
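The two steps this commenter proposes can be sketched directly; /proc is Linux-specific, and whether the PID obtained over the bus can be trusted (PID reuse, races between check and use) is exactly the open question:

```python
import hashlib
import os

def executable_hash(pid: int) -> str:
    """Hash the binary behind /proc/<pid>/exe. The kernel resolves this
    symlink to the executable it actually mapped, so a renamed process
    or faked argv does not change the result."""
    exe_path = os.path.join("/proc", str(pid), "exe")
    h = hashlib.sha256()
    with open(exe_path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# The central entity would look this digest up in its permission
# database instead of trusting any self-reported application name.
```

One caveat worth noting: the hash changes on every package upgrade of the application, so the permission database would need updating from the same post-installation hook discussed in the post.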

  8. Some thoughts:

    1. I hope I’m not misinformed on this, but how are you going to mitigate the fact that any X client can do anything to any other X client, including the window manager? I think Wayland has a solution to this.

    2. Instead of pre-trusted applications, and following the “Principle of least privilege”, we could take a page out of Android’s book and run *every* application in a sandbox. Taking advantage of recent and yet-to-come Linux kernel improvements, this sandbox could consist of separate PID/FS/network/dbus namespaces. The FS namespace could be FUSE-managed, providing for a way to dynamically give access to filesystem resources to the application while it’s running. Descriptions for these sandboxes could be provided in the applications’ .desktop files, or by expanding the existing solutions found in AppArmor/SELinux.

    3. With the above in place, I think both the goal of trusted access to PolicyKit and elimination of the current Free For All access to the user’s resources by any application, with all the dangers it entails, could be achieved.
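The sandbox idea in point 2 maps fairly directly onto primitives that were landing in the kernel and util-linux around this time. As a rough editorial sketch (the flag selection and helper names are mine, and actually launching this requires a kernel and unshare(1) new enough to support unprivileged user namespaces), the per-application sandbox could be wrapped around util-linux’s unshare:

```python
import shutil
import subprocess

def sandbox_command(argv, *, pid=True, net=False, mount=False):
    """Build an unshare(1) invocation that runs `argv` in fresh
    namespaces. --user/--map-root-user avoid needing real root on
    kernels that allow unprivileged user namespaces; --fork and
    --mount-proc give the child a sane view of being PID 1."""
    cmd = ["unshare", "--user", "--map-root-user"]
    if pid:
        cmd += ["--pid", "--fork", "--mount-proc"]
    if net:
        cmd.append("--net")  # empty network namespace: no sockets out
    if mount:
        cmd.append("--mount")
    return cmd + list(argv)

def run_sandboxed(argv):
    """Actually launch the sandboxed child; returns its exit code."""
    if shutil.which("unshare") is None:
        raise RuntimeError("util-linux unshare not available")
    return subprocess.run(sandbox_command(argv), check=False).returncode
```

The FUSE-managed filesystem view and the D-Bus namespace from the comment would sit on top of this; the sketch only covers the PID/network/mount isolation layer.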

    • Sorry for replying to myself,

      in case it isn’t apparent, I would also like to note that the above solution renders the whole signing business (and the resulting management headaches) redundant, as you are already guaranteed that the calling application isn’t modified by another hostile application.

      It also transforms the “trusted application emerging from an untrusted environment” scenario to “trusted application emerging from a trusted environment”, getting rid of the security whack-a-mole game usually found in the former.

      PS: Sorry for my english, not a native speaker.🙂
      PPS: Please enable comment previews!

  9. Signed binaries are one thing that is already being done by some distros before execution; however, I love sudo, which has many advantages and none of the backchannel difficulties with SELinux that DBus has. It follows the Unix security model of everything-is-a-file and inherits the signed execution. All it requires is for devs to stop being scared of modifying sudoers despite the blunders of the past, even if rules are hashed by default. A bonus is enforcing better-designed small privileged processes, in similar fashion to the proven opensshd, and it’s always clear what privileges are granted. Guess what: sudo comes from the same highly renowned camp.

