tamperfree

I'd like to write about part of my master's thesis project. For my thesis I wrote about a mostly theoretical whistleblowing system. A part of that system was a component which verified that the JavaScript client, used for the submission of new whistleblowing leaks, had not been tampered with. This was to ensure that we could better trust the client, which may be hosted somewhere outside the organisation.

The idea is simple. Basically I've created a program which lets you verify the static content of a hidden service. I assume that most people using Tor are using the Tor Browser. This is the recommended way to stay anonymous: by not standing out, you are part of a larger anonymity set. Compare this to the normal internet, where there are multiple browsers, versions and operating systems to keep track of. Because most people on Tor use the same client, we can emulate that client programmatically to create requests that are indistinguishable from those of a real user. Since the verifier uses the same client, it displays the same behaviour and sends the same HTTP headers as a real user. On the regular internet this would not work, because these things are so diverse that they can uniquely identify a user. On Tor, however, the Tor Browser has been specifically engineered not to fingerprint individual users, meaning that all users look mostly the same.

For a verifier this is useful because it makes it difficult for the server to tell a verification request apart from a real one. The verifier can monitor the server without the server being able to alter its behaviour by knowing when it's being watched.

One problem this might solve is the untrustworthiness of doing clientside cryptography in JavaScript. A big reason we can't trust JavaScript crypto is that there is no guarantee that the server hosting the cryptographic routines has not introduced a backdoor (like removing the encryption altogether). With this verifier we can have some guarantee that the JavaScript has not been altered by the server.

Prototype

I implemented a prototype to test out this idea. Here's a link to the repository.

Run it once to stamp the "state" of a hidden service...

tamperfree stamp <url.onion>

...then run it again later to verify that the state has not changed.

tamperfree verify <url.onion>

When stamping a website, tamperfree tries to identify and save a secure hash of the raw content received from the server for each path visited. It does this by working as a proxy between the Tor Browser and the Tor SOCKS proxy that it uses to connect to the network. It then opens the target url in a Selenium-controlled instance of the Tor Browser. When verifying, it captures the same raw content, computes the hashes and compares them against the saved ones.
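To make the idea a bit more concrete, here is a minimal sketch of the stamp/verify step in Python. This is not the actual tamperfree code: the captured dictionary stands in for whatever the proxy recorded (path mapped to raw response bytes), and the file name and function names are made up.

import hashlib
import json

def stamp(captured, stamp_file="stamps.json"):
    # Hash the raw bytes seen for each path and save the result.
    hashes = {path: hashlib.sha256(body).hexdigest() for path, body in captured.items()}
    with open(stamp_file, "w") as f:
        json.dump(hashes, f, indent=2)

def verify(captured, stamp_file="stamps.json"):
    # Recompute the hashes for a fresh capture and compare against the saved ones.
    with open(stamp_file) as f:
        saved = json.load(f)
    fresh = {path: hashlib.sha256(body).hexdigest() for path, body in captured.items()}
    return fresh == saved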

There are some major caveats to this tool:

  • Can't use it on HTTPS. The proxy can't see the plaintext, since it passes encrypted straight through to the browser, and the ciphertext is not static (I'm guessing different IVs or some such are used).
  • Can't use it on dynamic websites (i.e. websites that change the content).
  • Using it on non-Tor websites is kind of pointless since the server would easily identify it by the user agent.
  • Sites requiring user interactions or multiple page loads are not supported. This would require spoofing user behaviour, adding further complexity to the tool. I think it's doable for smaller sites to build a tree of possible user interactions and generate fake traffic based on every sub-tree. However, most users will likely navigate a site the same way, meaning that certain patterns would emerge and make real users stand out from the fake traffic.

So the use case for my tamperfree tool is pretty slim. You're limited to a single-page webapp which loads everything it needs from the first given url. Oh, and it has to run as a Tor hidden service.

You want to run this tool often. So often that a malicious server, if it tampers with a request picked at random, is more likely to hit a verification request than a real one. To do this we simply want the tool to send more requests to the server than real users do. For example, with five real visits and fifty verification runs per day, a server tampering with one request at random has a 50/55, roughly 91%, chance of picking a verification request. For my use case, a whistleblowing site, the number of real visitors expected is actually pretty low (a few per day), so this is something we can do.

Timing is also important. If the verifier runs on a regular schedule, it's trivial for the server to figure out that incoming requests on that schedule are verification requests. Therefore it's important that the verifier runs on a random schedule that the server can not determine. E.g. running the verifier every 5 minutes would make it easy for the server to figure out the pattern and only serve its malicious responses to requests outside that schedule. However, just running a sleep(rand()) would also lead to scenarios where, if there are multiple requests happening in quick succession, the server can see that they are unlikely to come from the verifier. I'm currently trying to figure this part out, and I think I will dedicate my next blogpost to it.
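As a rough illustration of what a "random schedule" could mean, here is a sketch that draws the wait between verification runs from an exponential distribution, so the checks form a memoryless Poisson process with no fixed pattern for the server to learn. The mean interval is an arbitrary number, and note that this still doesn't solve the harder problem above of imitating the bursty way real visitors arrive.

# A rough illustration only: exponential waits give a memoryless schedule,
# but this does not imitate the bursty arrival pattern of real visitors.
import random
import subprocess
import time

MEAN_INTERVAL = 600  # average seconds between runs; made-up number

def run_forever(onion_url):
    while True:
        time.sleep(random.expovariate(1 / MEAN_INTERVAL))
        subprocess.run(["tamperfree", "verify", onion_url])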

That's it for now. This post is a bit rambling and I apologize for that. When I get the time I will go back and try to clean it up, maybe add some pictures to make it easier to read. I'm publishing this now to try to get into the groove of posting again. If you have any feedback or ideas on this, feel free to contact me.

Easier Authentication for your Mobile Apps

In my opinion, signing in to your mobile apps should be easier than having to type a username and password. In my case it tends to be especially painful, since I usually use KeePass to generate my passwords as random strings. Typing these long and random passwords on a touch device sucks. I'd like to see more alternatives for authentication that are easier to use.

These are some ideas I'd like to see used more often.

The Sign-In Link

Slack uses a great method to authenticate mobile users. They get you to type your team name and your email address, after which you can use a "magic sign-in link" that signs you in automagically. The way it works is that they send a link to your email; when pressed, it signs you into your Slack account. So no password is required; instead, proving that you have access to your email account is enough to log you in.

Slack magic link (I took the above image from https://auth0.com/blog/2015/12/04/how-to-implement-slack-like-login-on-ios-with-auth0/)

I really like this idea. I would love it if the HBO app had such a system, because it seems to have a knack for forgetting that I was logged in whenever I want to watch Game of Thrones. Especially for apps where users don't really interact with billing and paying for stuff, I can see that such a scheme would be nice to use.

Access to your email already means access to most of your accounts. Only a select few sites enable some sort of two-factor authentication that doesn't just let a potential hacker reset your password via a "forgot password" function.

It also means one less password to remember. One less password that is potentially reused, and later hacked.

It does mean relying on email infrastructure though, which is already not so secure, depending on who your provider is. Spoofing and phishing will probably remain problems for email a long time into the future, and they could also be used against this type of scheme.

A protocol for such an authentication scheme might look as follows (a rough server-side sketch follows the list):

  1. Mobile App Requests Sign In from server, providing email address.
  2. Server sends an email with random token embedded in link, unique to email address.
  3. User presses link in the email. Link opens server webpage which in the backend sends a token to the mobile app.
  4. Mobile app receives fresh authentication token and uses it for future authentication.
  5. Server keeps track of active tokens in case they need to be revoked.
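Here is a hypothetical sketch of the server side of that flow, using Flask. The endpoint names, the in-memory stores and the send_email stand-in are all made up; a real implementation would need persistence, token expiry and rate limiting.

import secrets
from flask import Flask, request, jsonify

app = Flask(__name__)
pending = {}  # sign-in token -> request id, waiting for the link to be clicked
issued = {}   # request id -> auth token (step 5: tracked so it can be revoked)

def send_email(address, link):
    print(f"(pretend) mailing {address}: {link}")  # stand-in for a real mailer

@app.route("/signin/request", methods=["POST"])    # step 1: the app submits an email address
def request_signin():
    email = request.json["email"]
    token = secrets.token_urlsafe(32)              # step 2: random token, unique per request
    request_id = secrets.token_urlsafe(16)
    pending[token] = request_id
    send_email(email, "https://example.com/signin/confirm/" + token)
    return jsonify(request_id=request_id)

@app.route("/signin/confirm/<token>")              # step 3: the user presses the emailed link
def confirm(token):
    request_id = pending.pop(token, None)
    if request_id is None:
        return "unknown or expired link", 404
    issued[request_id] = secrets.token_urlsafe(32)  # step 4: a fresh auth token is created
    return "Signed in - you can return to the app."

@app.route("/signin/poll", methods=["POST"])       # the app polls to pick up its auth token
def poll():
    return jsonify(token=issued.get(request.json["request_id"]))

In the list above the backend pushes the token to the app; in this sketch the app simply polls for it instead, which is just the easiest thing to write down. A push notification or a deep link that opens the app directly would work too.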

Of course, this is very similar to other authentication schemes where you rely on a third party to verify your identity. Other popular choices are Facebook, Google or other website logins that you have already authenticated on the device. However, Facebook and Google both come with privacy concerns (as large companies selling your data for adverts), which is a reason you may not want them handling your authentication. They will likely track which applications you use.

Google tracking my netflixing

But other transports could be interesting too. For example, you could authenticate over SMS if it's a phone. This would require a phone number instead of an email address. It would also incur a cost on your business, since you have to send out the SMS.

Key-Exchange over Local Network

In general I'd like to see more apps that communicate locally, that is, my computer to my phone or vice versa. An app I'd like to code when I get the time is a basic notifier that forwards alerts from my PC to my phone, for example. However, I'd like it if the applications did not always rely on a backend server on the Internet. When I'm at home and my devices share a local network, it seems very unnecessary and sometimes insecure to have to route messages between my devices via a server on the internet.

Instead I would like to see a protocol which performs a key exchange over the local network. This way you can set up a secret key for devices to communicate with, so long as you trust your local network.

I think one way to do this would be to communicate via UDP: one client broadcasts so that the other can find it. Once they find each other, they perform a key exchange like Diffie-Hellman, and finally ask the user to verify the connection by comparing a fingerprint. This is probably very similar to how Bluetooth pairing works, but over an IP network instead.
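A minimal sketch of that idea in Python, using UDP broadcast for discovery and an X25519 key exchange from the cryptography package. The port, the message format and the pairing label are made up, and a real protocol would need to authenticate the exchange properly; here the user comparing fingerprints on both screens is what stands in for that.

import hashlib
import socket
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey, X25519PublicKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

PORT = 37020  # arbitrary port picked for this example

def pair(listen=False):
    priv = X25519PrivateKey.generate()
    pub = priv.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    if listen:
        sock.bind(("", PORT))
        peer_pub, addr = sock.recvfrom(32)   # wait for the other device to announce itself
        sock.sendto(pub, addr)               # reply with our own public key
    else:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(pub, ("255.255.255.255", PORT))  # broadcast our public key
        peer_pub, _ = sock.recvfrom(32)

    # Diffie-Hellman step: both devices derive the same symmetric key.
    shared = priv.exchange(X25519PublicKey.from_public_bytes(peer_pub))
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"local-pairing").derive(shared)

    # Show a short fingerprint; the user checks that it matches on both screens.
    print("fingerprint:", hashlib.sha256(key).hexdigest()[:16])
    return key

One device runs pair(listen=True), the other runs pair(); if the fingerprints match, the derived key can be trusted as far as the local network is.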

Once the secret keys have been set up, your devices can then safely communicate end-to-end encrypted over the Internet.

Verification via QR-code

Continuing on devices that communicate locally, another way of exchanging keys could be to use QR codes. This could be done by outright sharing a secret key, showing it as a QR code for another device to scan with its camera. Or, in the case of authenticating to a third-party service, the code could contain a token or signature which the new device can use to prove that the user already has access.

The obvious advantage here is that the exchange can be done offline. It does, however, require a camera on at least one of the devices. For a key exchange, the key also has to fit in a QR code.

For a token-based system, where two devices connect to the same API, you could let an already authenticated device help a second device authenticate in the following way using a QR code (a small sketch of step 1 follows the list).

  1. Authenticated device generates token. Sends token to backend server to save. Shows token in QR code.
  2. Unauthenticated device scans token. Uses it to authenticate with backend server.
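A hypothetical sketch of step 1 on the already authenticated device, just to show how little there is to it. The backend URL and endpoint are made up, and qrcode and requests are the usual PyPI packages.

import secrets
import qrcode
import requests

def show_pairing_code(session_token):
    pairing_token = secrets.token_urlsafe(32)          # step 1: generate a one-time token
    requests.post("https://api.example.com/pairing",   # let the backend save it
                  json={"token": pairing_token},
                  headers={"Authorization": "Bearer " + session_token},
                  timeout=10)
    qrcode.make(pairing_token).save("pairing.png")     # show it as a QR code

# The unauthenticated device then scans the code and posts the token back
# (step 2) to receive its own credentials.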

Setting up a Tor hidden service

As part of my thesis, I'm looking at using Tor for an anonymous submission system. For this I set up a small hidden service to test, and I figured I'd write down how it's done. It's pretty easy.

Assuming you have some sort of TCP server you want to serve over Tor, proceed as follows. Install tor first; this is pretty much the only package you need. On Arch Linux:

pacman -S tor

Then all you need to do is edit the torrc file, usually found at /etc/tor/torrc, and add two lines; the default config file describes them pretty well. In my case it looked as follows.

HiddenServiceDir /var/lib/tor/hidden_service/
HiddenServicePort 80 127.0.0.1:5000

Here HiddenServiceDir is the location where tor will store the private key for the hidden service, as well as the hostname. Note that you can have several hidden services running on different addresses; just add more HiddenServiceDir lines with different directories. HiddenServicePort acts as a port forward from the first specified port on the onion address to the specified IP address and port. In my case this forwarded traffic from port 80 on my onion address to a local Python webservice I was developing.
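For example, a second service could be added right below the first one in torrc; the directory and local port here are just placeholders:

HiddenServiceDir /var/lib/tor/other_hidden_service/
HiddenServicePort 80 127.0.0.1:8080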

Once you've added the lines to your configuration, you can then restart the tor service to start forwarding traffic.

systemctl restart tor.service

Finally, you can get the hostname of your hidden service by reading the /var/lib/tor/hidden_service/hostname file:

cat /var/lib/tor/hidden_service/hostname

Now use this onion address to connect to your service, e.g. in the Tor Browser.

Revisiting the Free Wifi on Destination Gotland

I'm on the boat from Gotland again after having spent a week there with my family. Again I find myself in need of some, preferably free, Wifi. This time, with a new Linux OS, I had to learn the newer ip command instead of the ifconfig command I used last time.

So this is one way of getting "free" Wifi on the Marlink Internet at Sea service found on the Destination Gotland ferry and I guess other ferries.

The attack is very simple: again, all we need to do is spoof the MAC address of an authenticated device. We can find an authenticated device quite easily using a wireless sniffer; I use Wireshark for this. Look for any packets going to an external network. I suggest filtering for TLS or HTTP; all we need is the MAC address.

Once you have what looks like an authenticated device, bring your interface down, spoof the MAC you want, and bring it back up. These are the ip commands used, where <device> is the name of your wireless interface (wlp4s0 in my case).

ip link set <device> down
ip link set <device> address [MAC ADDRESS TO SPOOF]
ip link set <device> up

Take this post as a good reminder to RTFM every now and then. It's a bit of a challenge to do stuff without the almighty Google, but the man command and some patient reading do a good job when you are stuck offline :)

Teaching

Disclaimer: Some thoughts on reason, ethics, and teaching. There is probably more refined philosophy on this somewhere, but I figured I would write down my thoughts. This post might be subject to change in the future. I might also move these types of posts to a separate feed/blog to separate it from computer science stuff.

Consider the following ethical problems. Forget about possible sidetracks and just answer yes if you think it is morally right and no if you think it is morally wrong.

First problem: You have a button. When you press it, it gives you a minor happiness, but at the same time it causes a lot of unhappiness for someone else. Is it morally right to press the button?

I think most people would answer no to this first question. You could rephrase it as asking whether it is right or wrong to steal from someone. Now consider the same question, but without the bad part.

Second problem: You have a button. When you press it, it gives you a minor happiness. Is it morally right to press the button?

Obviously it does not matter, morally speaking, right? If we assume full knowledge of the consequences then there is no wrongdoing. From a utilitarian standpoint it would be a good thing to press that button; our utility would go up. However, what if the consequences aren't actually that well known? Suppose this is your perception of the hypothetical button: out of curiosity, you pressed it one day to see what would happen. It gave you that minor boost of happiness and, to your knowledge, had no other effects.

Third problem: You have a button. When you press it, it gives you a minor happiness. After a few turns of pressing the button, you learn that every time you press it someone is killed. Is it now morally right to press the button?

No, right? You might be forgiven for having pressed the button before, but now you don't have the same excuse for doing so. Killing someone for a minor gain is hard to justify morally.

I would argue that knowledge of the consequences very much depends on the ability to reason and on our perception. Do we hold someone liable when they don't know that what they're doing is wrong? In many cases, no. However, we could say that negligence is wrong, and that the person should have known that there were bad consequences to their actions. Knowledge of the bad consequences will likely make you reason that continued pressing of the button is bad.

Fourth problem: Your friend has found this same button and has started pressing it to get happiness. However, she does not know of the bad consequences. Should you inform her of them?

I think yes. If we go by the reasoning from the previous problem, then we help our friend understand that what she is doing is wrong and so she will likely stop pressing the button.

So finally, where am I trying to get with this? I find the flux between knowing and unknowing interesting. It opens up further questions. What is your responsibility for knowledge within your field? Obviously, for someone like a doctor, knowledge might impact lives more directly, and therefore a doctor has a higher responsibility than, say, a mailman. Another question, which was supposed to tie into the title of this blogpost, is when we are responsible for informing others. Clearly there is some limit to this, depending on your own self-confidence and whatnot. If we went about informing everyone about every single little wrongdoing they might be doing, we would quickly lose friends. Unsolicited advice gets old quick.

However, going by this logic where knowing the consequences seems to give a higher moral responsibility, there are some nice conclusions. Learning and teaching become good things.

Show people the consequences of their actions and perhaps they are more likely to do the right thing. In this sense teaching/learning feels like it is an instrumental way to goodness.

Url Secrets

Found a neat little hack using "World Wide Web URLs"

A reference to a particular part of a document may, including the fragment identifier, look like

  http://www.myu.edu/org/admin/people#andy

in which case the string "#andy" is not sent to the server, but is retained by the client and used when the whole object had been retrieved.

Reference.

This gave me an idea: use the "fragment identifier", aka what's behind the #, to send secrets which can be seen by other browsers but not by the server. Secrets like, for example, passphrases which can be used for cryptographic purposes.

I made a small proof-of-concept project using this idea for a service to share secret messages. Using clientside crypto, the user can submit an encrypted message to the server. The server returns a uuid to identify the message. The clientside script then creates a url containing the uuid and the encryption key, like so:

http://domain.tld/[uuid]#encryptionkey

The user sends this url to his friend. When the friend opens the url, the server sends back the encrypted message. The clientside script grabs the encryption key from the fragment and decrypts the message. The plaintext message is never seen by the server.

pretty diagram

You can find a demo here (note unsigned https) and the github repo here.

Obviously, this idea implies trusting the clientside script that is sent by the server. If the server were adversarial, it could easily modify the script to remove the encryption or send the encryption key back to the server.

Apache HTTPS Configuration

Here is the virtualhost configuration I use to set up https on apache for a site. Today is 2015-02-24 and this configuration currently achieves an A+ grade on the Qualys SSL Test, provided you have a trusted certificate.

A+

Read more…

Rolling my own Certificate Authority

I'm about to deploy a small sideproject I've been coding for the past two weeks, and I want to make sure the site is served over https. Since I'll likely put it on a subdomain of my own company's domain and I don't want to go get another signature from a public authority, I thought I would try to roll my own mini-CA chain which I can use for these types of situations.

Read more…

SSLStrip

Continuing with the theme of wifi attacks, tonight I'm looking at the SSLStrip tool.

Read more…

Ettercap, Arpspoof and DNSSpoof Examples

I'm spending the night learning about the tool ettercap. May as well write down what I learn for future reference.

Read more…