Categories
iOS Security Uncategorized

Reverse Engineering the DirecTV App’s DVR Authentication

Disclaimer: The work below was done nearly a year ago. I have no way to test to see if the information is still accurate. I’m guessing it is, but if you’re able to check and see, feel free to let me know if you still get the same results I did.

A while back a friend came to me interested in writing their own mobile app to control their DirecTV DVR. DirecTV already had its own mobile app, but this person wanted to customize some of its behavior. The existing app gave us a good starting point for figuring out how to communicate with the DVR.

The first step was to inspect the local network traffic between the mobile app and the DVR. Any proxying tool should work; I went with Charles Proxy. Doing this, we observed an interesting pattern in the traffic. Every time the app sent a request, the DVR would respond with 401 Unauthorized. The app would then send a second request, identical to the first, but this time with an Authorization header. The server would accept this second request and respond. This didn’t just happen once at the beginning of a session: every single request would get a 401 the first time, then be resent with an Authorization header.

Inspecting the server’s 401 response, we found that it contained a WWW-Authenticate header with four keys: realm, qop, nonce, and opaque. A quick online search for these keys revealed that the server appeared to be issuing a digest authentication challenge.

A digest authentication challenge is part of digest access authentication, an authentication method that can be used with web servers. The way digest authentication works is that the client and server each know a pre-shared secret (a password). When the server responds to a client with a digest authentication challenge, it’s telling the client how to authenticate itself. The client then generates two strings:

string1 = md5(username:realm:password)
string2 = md5(method:digestURI)

These two strings are then used by the client to generate the authentication response:

response = md5(string1:nonce:nonceCount:cnonce:qop:string2)

If we wanted to talk to this DVR, we needed to figure out how to authenticate ourselves. To generate the authentication response, we needed a username, realm, password, method, digestURI, nonce, nonceCount, cnonce, and qop.

As mentioned above, the server’s challenge gave us the realm, qop, and nonce. From the client’s plaintext HTTP request we were also able to obtain the username (c0pi10t), the method (GET), and the digestURI (the path of the requested URL).

This left us still needing the password, nonceCount, and cnonce. The cnonce is an arbitrary value chosen by the client (us!), and the nonceCount can simply always be 00000001. So really all that was left was the password. The password is the very thing that makes digest authentication secure: the client and server both ship already knowing the shared secret, so it never has to be transmitted over the wire.
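Putting those pieces together, the client-side computation is small. Here’s a minimal Swift sketch of it (CryptoKit obviously postdates this work, and every value below other than the username, method, and nonceCount is a placeholder; the realm, nonce, and qop come from the server’s challenge, and the password is the piece we still lacked):

import Foundation
import CryptoKit

// Hex-encoded MD5, as digest auth specifies (MD5 is used here because the
// protocol requires it, not because it's a good general-purpose hash).
func md5Hex(_ string: String) -> String {
    let digest = Insecure.MD5.hash(data: Data(string.utf8))
    return digest.map { String(format: "%02x", $0) }.joined()
}

// Placeholder values; the real ones come from the challenge and the request.
let username   = "c0pi10t"
let realm      = "realm-from-challenge"
let password   = "the-shared-secret"
let method     = "GET"
let digestURI  = "/path/of/the/request"
let nonce      = "nonce-from-challenge"
let nonceCount = "00000001"
let cnonce     = "any-client-chosen-value"
let qop        = "auth"

let string1  = md5Hex("\(username):\(realm):\(password)")
let string2  = md5Hex("\(method):\(digestURI)")
let response = md5Hex("\(string1):\(nonce):\(nonceCount):\(cnonce):\(qop):\(string2)")
// `response` is the value that goes into the Authorization header,
// alongside the username, realm, nonce, cnonce, nc, qop, and URI.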

In order to obtain the password, one option was brute force. Digest authentication is also used with SIP, for which a couple of brute-forcing tools already exist. However, if the password being used is sufficiently complex, brute force is impractical. I took an existing tool and tweaked it a bit so I could at least start a brute-force run while working on some other ideas.

While that ran, I inspected the application binary itself. Sometimes developers do silly things: they leave files around with interesting information, store secret values in insecure places, or don’t bother to obfuscate strings in their binary. Knowing the username gave me a known value to search for. Unfortunately, cursory searches didn’t reveal any clues inside the binary; I couldn’t even find a match for our username, so the developers seemed to at least be doing something to obfuscate the strings in the application binary.

I also decided to start skimming through the digest access authentication RFC (RFC 2069). Looking through the table of contents, one section immediately jumped out: Security Considerations. This section covered some of the benefits that digest access authentication has over basic auth, as well as possible attacks.

Section 3.3 – Man in the Middle – A simple but effective attack would be to replace the Digest challenge with a Basic challenge to spoof the client into revealing their password.

https://tools.ietf.org/html/rfc2069

The RFC also goes on to explain how this attack could be combated. The developers would likely have coded the client to respond only to digest authentication challenges and to ignore challenges asking for basic auth. The client knows that the server will be using digest authentication, and there’s no reason it should accept a basic auth challenge, especially when an RFC that’s over 20 years old clearly outlines this attack. But it never hurts to try.

Once again using Charles Proxy, when the client sent an unauthenticated request and the server responded with a digest challenge, we modified the response to carry a “WWW-Authenticate: Basic” header instead, indicating to the client that it should authenticate itself with Basic auth (a base64-encoded username and password).

When we tried this, much to our surprise, the client actually answered our spoofed challenge with Basic auth. The Authorization header contained a base64-encoded, colon-delimited string, which decodes to: c0pi10t:8th5Bre$Wrus. We already had the username (the first part), and now we also had the password.

At this point the door is pretty wide open for writing our own client that can talk to the DirecTV DVR. We have all the pieces we need to authenticate all of our requests to the DVR so that it believes it’s talking to the official DirecTV mobile app. And not just this specific DVR, but any DirecTV DVR that’s capable of working with the mobile app. Based on my (very novice) understanding of digest access authentication, the password must be the same for any DVRs that want to work with the mobile app.

Categories
iOS Networking Privacy Security

Network Security Changes Coming to iOS

Changes to App Transport Security

Last year, with iOS 9, Apple introduced App Transport Security, an enforcement of best practices for encrypted networking. By default, App Transport Security requires the following:

  • NSURLSession and NSURLConnection traffic be encrypted
  • AES-128 or better and SHA-2 used for certificates
  • TLS v1.2 or higher
  • Perfect forward secrecy
In other words, it requires that your app keep your users’ network traffic reasonably protected.

While ATS is enabled by default in iOS 9, Apple recognized that many developers don’t control their backends, and included the ability to set exceptions to App Transport Security. The problem is that many developers went right for the NSAllowsArbitraryLoads key, which disables ATS entirely, and then never bothered to revisit their configuration to get everything working nicely and securely.

Last week, at WWDC 2016, Apple announced that beginning January 1, 2017, App Transport Security will be required (this topic starts about 1m20s into the What’s New In Security session). You should really watch the talk for the exact details from Apple, but in short, all of your app’s traffic needs to be secured with HTTPS and will need to use TLS v1.2 or higher.

Apple will still allow some exceptions, but the rules will be much less relaxed than they have been:

  • Most exceptions will now need to be justified to Apple. NSAllowsArbitraryLoads, NSExceptionAllowsInsecureHTTPLoads, and NSExceptionMinimumTLSVersion will all require a reasonable justification for use.
  • NSExceptionRequiresForwardSecrecy will not require a justification for now. If used, this exception will be granted automatic approval. Presumably this will change down the road as forward secrecy becomes more widespread.
  • Streaming media using AVFoundation is exempt, but Apple says you should still encrypt with TLS if possible.
  • Content loaded inside of WKWebView does not have to be encrypted. This requires specifying the NSAllowsArbitraryLoadsInWebContent key as true in your Info.plist.
  • Data that is already encrypted does not have to go over HTTPS.
The specific example Apple gives in the talk of a case that still needs exceptions is communicating with a third-party server whose cipher suites you can’t control. Even if this is your situation, it sounds like needing to justify exceptions may slow review time, so it’s probably worth talking to any third parties you work with to ensure they can meet ATS requirements.
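If the web-content exception is the one you need, the Info.plist change is small. A rough sketch (the key is the one named in the list above; the explicit false for NSAllowsArbitraryLoads is just there to make the default obvious):

<key>NSAppTransportSecurity</key>
<dict>
    <!-- Keep ATS enforced for the app's own networking... -->
    <key>NSAllowsArbitraryLoads</key>
    <false/>
    <!-- ...but allow non-ATS loads for content displayed in WKWebView. -->
    <key>NSAllowsArbitraryLoadsInWebContent</key>
    <true/>
</dict>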

If the reason you haven’t supported ATS yet is because it seemed like too much work, now’s the time to start making changes. One of the difficulties you may face implementing ATS is diagnosing failures. On the surface, you’ll likely just see data fail to load in your app with no real indication as to why. What I’ve found most useful is the nscurl --ats-diagnostics https://example.com command. It will test different ATS configurations against the specified domain and report back showing which scenarios pass and which fail. The failed scenarios will make it much more clear what you need to change on your server to support ATS. Tim Ekl has some more helpful info about debugging in this blog post on ATS from last year.

If the reason you don’t yet support ATS is the cost or difficulty of getting set up with SSL certificates, be sure to check out the Let’s Encrypt project. Let’s Encrypt is a free, automated Certificate Authority that operates with support from a large number of sponsors such as Mozilla, the EFF, Chrome, and Cisco.

Certificate Transparency

Certificate Transparency is a standard for monitoring and auditing certificates (TLS certificates in this case). Its purpose is to help protect against fraudulently issued certificates. Last year when Apple went over App Transport Security, they also briefly covered the ability to require certificate transparency in apps using the NSRequiresCertificateTransparency key, though they said this functionality was off by default.

This year, certificate transparency is still not required, but Lucia Ballard spent several minutes discussing it in the same What’s New in Security session, announcing that Apple is “joining the effort for certificate transparency”. She also took some time to cover OCSP stapling, a method for checking whether a certificate has been revoked, and presented it as a recommended practice. I think the verbiage used around certificate transparency and OCSP stapling is noteworthy. I’ll include two quotes from the session here for your consideration:

We think now is the time for folks to move to it and start adopting [OCSP stapling].

And later:

It would be a great time to start experimenting with certificate transparency, find certificate authorities that are participating, and get integrated into this ecosystem. And please go enable OCSP stapling.

I strongly encourage everybody to go watch, at a minimum, the certificate transparency portion of the What’s New in Security session from WWDC 2016. It’s less than 10 minutes long, starting at 7m55s and ending at 16m00s (though really you should watch the entire session). The speaker gives a great overview of both Certificate Transparency and OCSP stapling. They also recommend certificate-transparency.org as a developer resource on the subject.

Updating your apps

Your first priority should be to get your app ATS compliant before the January 1st deadline. Really you should start now because many developers are going to encounter obstacles in doing so. Don’t procrastinate.

Once you have ATS working, it seems like it will be worth your time to start getting familiar with certificate transparency and OCSP stapling. Not only will this benefit the security of your users, but it may make your life easier down the road should Apple choose to encourage its adoption more strongly.

Categories
iOS Mac Privacy Security

Working with Apple’s App Transport Security

Update 6/23/15: Apple now has official documentation for App Transport Security.

With iOS 9 and OS X El Capitan, Apple has introduced App Transport Security. In a nutshell, App Transport Security enforces best practices for secure network connections, notably TLS 1.2 and forward secrecy. In the future, Apple will also update these requirements so that they continue to reflect the best practices that keep network data secure.

App Transport Security is enabled by default when using NSURLSession, NSURLConnection, or CFURL in iOS 9 or OS X El Capitan. Unfortunately, for many developers this may mean that things break as soon as they build for iOS 9 or OS X 10.11. Fortunately, Apple offers some configuration options to leverage App Transport Security where possible, while disabling it in the places where you cannot support it.

You can opt out of ATS for certain domains in your Info.plist by using NSExceptionDomains. Within the NSExceptionDomains dictionary you can explicitly define the domains that need exceptions to ATS. The exception keys you can use are:

  • NSIncludesSubdomains
  • NSExceptionAllowsInsecureHTTPLoads
  • NSExceptionRequiresForwardSecrecy
  • NSExceptionMinimumTLSVersion
  • NSThirdPartyExceptionAllowsInsecureHTTPLoads
  • NSThirdPartyExceptionMinimumTLSVersion
  • NSThirdPartyExceptionRequiresForwardSecrecy

Each of these keys allows you to granularly disable ATS, or particular ATS options, on domains where you are unable to support them.

Sample ATS plist

In the first beta of iOS 9, these keys are incorrect; instead you’ll need to use the following:

  • NSTemporaryExceptionAllowsInsecureHTTPLoads
  • NSTemporaryExceptionRequiresForwardSecrecy
  • NSTemporaryExceptionMinimumTLSVersion
  • NSTemporaryThirdPartyExceptionAllowsInsecureHTTPLoads
  • NSTemporaryThirdPartyExceptionMinimumTLSVersion
  • NSTemporaryThirdPartyExceptionRequiresForwardSecrecy

These keys will undoubtedly be fixed in a future seed. If you can, you should use the first set of keys above, which Apple is officially supporting; if you’re using the temporary keys instead, they should continue to work in future betas. Thanks to Juan Leon for bringing this to my attention. I was told the same in the labs.

Below are examples of different scenarios developers may encounter.

Example A: ATS for all

This is the easiest one. The only thing you need to do is use NSURLSession, NSURLConnection, or CFURL. If you’re targeting iOS 9 or OS X El Capitan or later, ATS’s best practices will apply to all of your NSURLSession, NSURLConnection, and CFURL traffic.

Example B: ATS for all, with some exceptions

If you expect all of your domains to work with ATS except a few that you know will not, you can specify exceptions for the places where ATS should not be used, while leaving all other traffic opted in. For this scenario, you’ll want to use an NSExceptionDomains dictionary to specify the domains for which you wish to override ATS’s default settings. To opt an entire domain or subdomain out, create a dictionary for the domain you want excluded from ATS, then set NSExceptionAllowsInsecureHTTPLoads to true. You can also override more specific rules with NSExceptionRequiresForwardSecrecy and NSExceptionMinimumTLSVersion if you don’t want to completely disable ATS for those domains.

ATS for All
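A minimal sketch of this configuration, with legacy.example.com standing in for the one domain you can’t yet secure:

<key>NSAppTransportSecurity</key>
<dict>
    <key>NSExceptionDomains</key>
    <dict>
        <key>legacy.example.com</key>
        <dict>
            <key>NSIncludesSubdomains</key>
            <true/>
            <!-- Opt this domain (and its subdomains) out of ATS entirely -->
            <key>NSExceptionAllowsInsecureHTTPLoads</key>
            <true/>
        </dict>
    </dict>
</dict>

Everything not listed under NSExceptionDomains keeps the default ATS behavior.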

Example C: ATS disabled, with some exceptions

Conversely, you may only want ATS active on domains you specifically know can support it. For example, if you develop a Twitter client, there will be countless URLs you may want to load that may not be able to support ATS, though you would still want things like login calls and other requests to Twitter itself to use ATS. In this case you can disable ATS as your default, then specify the domains for which you do wish to use ATS.

In this case you should set NSAllowsArbitraryLoads to true, then define the domains that you want kept secure in your NSExceptionDomains dictionary. Each domain you wish to keep secure should have its own dictionary, and the NSExceptionAllowsInsecureHTTPLoads key in that dictionary should be set to false.

ATS Disabled with Exceptions
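A minimal sketch of this configuration, with api.example.com standing in for the domain you want to keep locked down:

<key>NSAppTransportSecurity</key>
<dict>
    <!-- Default for arbitrary content: ATS off -->
    <key>NSAllowsArbitraryLoads</key>
    <true/>
    <key>NSExceptionDomains</key>
    <dict>
        <!-- But our own API still gets the full ATS treatment -->
        <key>api.example.com</key>
        <dict>
            <key>NSExceptionAllowsInsecureHTTPLoads</key>
            <false/>
        </dict>
    </dict>
</dict>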

Example D: Downgraded ATS

In some cases you may want ATS on all, or some, of your URLs, but may not be ready to fully support all of ATS’s best practices. Perhaps your servers support TLS 1.2 but don’t yet support forward secrecy. Rather than completely disabling ATS on the affected domains, you can leave ATS enabled but disable forward secrecy. In this scenario you would create an NSExceptionDomains dictionary, add an entry for each domain whose settings you need to override, then set the NSExceptionRequiresForwardSecrecy value to false. Similarly, if you wish to keep forward secrecy enabled but need the minimum TLS version to be lower, you can define your supported TLS version with the NSExceptionMinimumTLSVersion key.

Downgraded ATS
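A minimal sketch, again with a placeholder domain, relaxing forward secrecy and the minimum TLS version while leaving the rest of ATS in place:

<key>NSAppTransportSecurity</key>
<dict>
    <key>NSExceptionDomains</key>
    <dict>
        <key>api.example.com</key>
        <dict>
            <!-- Still HTTPS-only, but without the forward secrecy requirement -->
            <key>NSExceptionRequiresForwardSecrecy</key>
            <false/>
            <!-- And allow an older TLS version if the server needs it -->
            <key>NSExceptionMinimumTLSVersion</key>
            <string>TLSv1.1</string>
        </dict>
    </dict>
</dict>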

Example E: NSA-friendly Mode

If you want to opt-out of ATS entirely (which you really shouldn’t do unless you fully understand the implications), you can simply set NSAllowsArbitraryLoads to true in your Info.plist.

NSA-friendly Mode
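The corresponding plist entry is a single key:

<key>NSAppTransportSecurity</key>
<dict>
    <!-- Disables ATS for every connection the app makes -->
    <key>NSAllowsArbitraryLoads</key>
    <true/>
</dict>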

Third-party keys

You may have noticed a few keys that appear to be duplicates of other keys, with the addition of “ThirdParty” in the name:

  • NSThirdPartyExceptionAllowsInsecureHTTPLoads
  • NSThirdPartyExceptionMinimumTLSVersion
  • NSThirdPartyExceptionRequiresForwardSecrecy
Functionally these keys will have the same result as the keys that don’t have “ThirdParty” in them. The actual code being invoked behind the scenes will be identical regardless of whether you use the ThirdParty keys or not. You should probably use whichever key best fits your exceptions, but no need to overthink it.

Certificate Transparency

While most security features for ATS are enabled by default, certificate transparency is one you must opt-in to. If you have certificates which support certificate transparency, you can enable certificate transparency checks with the NSRequiresCertificateTransparency key. Again, if your certificates don’t yet support certificate transparency, by default this check will be disabled.
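As I understand it, the key is set per domain, alongside the other exception keys; a rough sketch with a placeholder domain:

<key>NSAppTransportSecurity</key>
<dict>
    <key>NSExceptionDomains</key>
    <dict>
        <key>api.example.com</key>
        <dict>
            <!-- Require valid signed certificate timestamps for this domain -->
            <key>NSRequiresCertificateTransparency</key>
            <true/>
        </dict>
    </dict>
</dict>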

If you need help debugging issues that arise from having App Transport Security enabled, setting CFNETWORK_DIAGNOSTICS to 1 will log all NSURLSession errors including the URL that was called and the ATS error that resulted. Be sure to file radars for any issues you encounter so that ATS can be improved and flexibility expanded.

All of the above information was provided in Apple’s Networking with NSURLSession session at WWDC 2015. Finally, Apple emphasized in the talk that you should report any issues that you run into and keep an eye out for any changes that may be coming in future betas.

Categories
iOS Mac Privacy Security

Security & Privacy Changes in iOS 8 and OS X Yosemite

I’ve been sifting through this year’s WWDC videos looking for all of the interesting bits around security & privacy. I’m not anywhere close to being done. Fortunately Luis Abreu has done the hard work for all of us and compiled his findings into a very handy post. The post has a lot of great info for developers, QA, and designers around what’s new and what’s changing. Of course you’ll still want to go do your own research before implementing any changes, but Luis’ post serves as a great quick-start guide.

Link: lmjabreu.com
Source: iOS Dev Weekly

Categories
Security

Evaluating the Security of Hosted Build Servers

A few weeks back there was an abuse of Apple’s Enterprise Developer Program that gathered a lot of attention after it was used to distribute a GameBoy emulator outside of the App Store. Obviously the focus at the time was on the fact that anybody could build an app to distribute outside of the App Store by using one company’s enterprise certificate. What got no attention, as far as I can tell, was the lack of security on the build server.

Enterprise Abuse

MacBuildServer.com had previously stated that their goal was to make open source iOS projects more accessible to developers and users. Their service allowed a user to enter the URL of an iOS GitHub project and out the other end they would get an IPA that they could install on their devices. Developers had the option of providing their own certificate to sign the app with, but if they didn’t have one they could just use MacBuildServer’s enterprise certificate. Most developers would probably agree that this isn’t a great idea as it’s a glaring violation of Apple’s terms of use. In fact, as most could probably have predicted, it wasn’t long before Apple revoked the company’s enterprise certificate, leaving any apps that had been signed with it completely dead. In response, MacBuildServer changed their service to instead require a developer to provide their own certificate to sign the app with. End of story, right?

Secrets of an Error Log

Not quite. When my boss, Jay Graves, went to try out MacBuildServer, his build failed as a result of a required variable that had not been set. When a build fails in Xcode, you get output telling you what went wrong so you can try to fix it. Many continuous integration servers, like Jenkins, will give you console output from the entire build process so you can see everything that took place, any contributing factors that may have led up to the build failure, and the error that ultimately caused it to fail. MacBuildServer provides the same: if a build fails, you get the option to view the error log. One of the first things Jay noticed was that, at the top of the log, the build server had echoed the password for the certificate it was signing with. We had a laugh about it, but without the certificate itself the password doesn’t do you much good. But what if we could get the certificate?

I created an empty application in Xcode. In my project settings, I navigated to the Build Phases tab for my target. Under this tab you’ll find an item called “Run Script”, which does exactly what you might expect: it executes any commands you enter, as part of a shell script, during the build process. I started off with a simple whoami just to see if it would work. I followed this with a non-existent command in order to guarantee my build would fail right after my script had run, so I could see the output. I uploaded my sample project to GitHub, pointed MacBuildServer at it, chose the option to sign with their enterprise certificate, and as expected, my build failed. When I clicked to view the error log, I was given the server’s output from my build, and at the end of it was the output from my whoami command showing the user the build had run as.

The output of whoami wasn’t terribly interesting, but what it represented was rather significant. At this point it’s pretty much game over for the build server: I had the ability to run any command that I could as a normal user logged in to the computer. The build failure logs told me everything I needed to know about where the enterprise certificate and corresponding keychain were being stored. Combine this with the certificate password that the build log was already handing out, and you have the ability to locally sign any app you want with this company’s enterprise certificate. Of course, this is less meaningful now that the certificate has been revoked. However, you could also run any number of other commands to cause harm to the build server, or use access to the server to launch other attacks from inside their network. From navigating the filesystem, to copying files, to tampering with other people’s git repos, the possibilities are just about endless. I promptly contacted MacBuildServer about this problem and they have since disabled scripts in Build Phases and Build Rules.

What It Means for Users

While this seems like a lesson for administrators of hosted build servers, this should also serve as a warning to users. Using a hosted build server requires that you trust a third-party with your code, all of your signing credentials, as well as trusting that the IPA you get out the other end has not been tampered with. In the case of this particular service, the GitHub repository you use has to be public, but a number of other hosted build server solutions will build from private repositories. If the server you entrust isn’t properly secured, you may unwittingly be giving access to your code to a malicious party.

Of course, this is true of any third-party service. We often hand sensitive information over to a third party and trust that they have properly secured their services to protect our information. But build servers may inherently have a greater risk of being exploited. The very nature of a build server requires that it downloads remote code that is controlled by a user. Usually this is exactly the kind of action that you would want to try and prevent your users from doing on your servers.

Additionally, build servers are usually set up in trusted environments. When an organization sets up a build server internally, it usually has a trusting nature because the need for convenience outweighs security needs. A build server hosted inside of an organization, on a private network, to compile and run code that was written by developers who work for that company will naturally have less need for security than a build server accessible from the Internet to anybody who wants to sign up for a free account. If you’re building the latter, a fair amount of work will need to go into locking down the build server to limit access; that’s not the environment they’re usually set up for. It’s not impossible, but you need to know what you’re doing.

The Takeaway

None of this is to say that hosted build servers should not be used. Developers should simply be aware of the potential risk involved in trusting a third party like this. Similarly, anybody offering hosted build servers as a service should be aware of the risk involved, and be familiar with the security measures necessary to protect themselves and their users.

This post should also not be construed as a strict criticism of MacBuildServer. When I contacted MacBuildServer about the issues I had found, they were receptive to and appreciative of my feedback, and they promptly addressed the issue. I have not taken the time to investigate other hosted build server solutions, but I would be surprised to learn that other services don’t suffer from their own security problems.

Special thanks to Carl Veazey for his help in Xcode and eternal willingness to answer my stupid questions.

Categories
iOS Security

Trusting the Client

A lesson learned long ago in the world of desktop computing is that a server can’t trust what a client tells it. If a user logs in to your server, your server checks the credentials; you would never leave it up to the client-side application to tell you the user is authenticated, because you have no way of knowing whether it’s telling the truth. You have control over your server and how it behaves, which is why it should be left in charge of making important decisions, providing the client with just enough data to act on those decisions.

This lesson is still being learned in the mobile world. One of the latest examples is Simple Bracket, a very nicely designed iOS app for choosing your March Madness bracket. When playing around with Simple Bracket, a couple of things jumped out at me. The first is that if you sniff the app’s traffic, you can see that when searching for pools, the server responds with data for a list of matching pools, including their PINs. This seems to allow the app to quickly and cleanly accept or reject your PIN when you try to join a pool. The obvious downside to this client-side checking is that anybody can see the PIN for any pool, rendering the PIN completely useless.

The second thing I noticed is the way Simple Bracket determines when it should show certain data. Currently, if you go to view another user, the app tells you that you won’t be able to see their bracket until March 21st, when March Madness begins and brackets are locked. This data is kept private from other users so that nobody can copy somebody else’s bracket. Normally in a situation like this you would want the server to determine when this data should be displayed, and not make the data available to the client until that time. In Simple Bracket they decided to let the client determine when to show this data, and the server happily provides it. As such, by simply changing your device’s date to March 21, 2013, you can see anybody’s current bracket.

This is just one more example in a trend where many developers view iOS as a closed system that they can trust, and it’s not. As mobile platforms mature and develop, we should be cautious not to repeat the mistakes that were already made on other platforms. There’s no need to repeat these mistakes and endure the same pain and consequences when we can learn and grow from the past.

Categories
iOS Security

Award for most ironic Appy Award goes to Fandango

Congratulations to all of those who won Appy Awards this year. A very special shout-out to Fandango, who somehow won an Appy for Mobile Payments despite not properly securing customer credit card information. Fandango’s app accepts self-signed SSL certificates; combine this with the fact that it transmits all of your credit card details in plaintext (albeit over an HTTPS connection) when you check ticket availability, and it’s a bit ironic that they would win an award in this category. More than two months after being contacted about these issues, they have yet to respond or do anything to secure their customers’ info. Well done, Fandango.

Update 3/17/13: As reader iOSSneak points out below in the comments, Fandango seems to have fixed the SSL issue in version 5.5.1 of the Fandango app, which was released the day after this post. The Fandango app no longer accepts self-signed SSL certificates.

Related post: iPhone Apps Accepting Self-signed SSL Certificates

Categories
iOS Security Uncategorized

Escrow Keybags and the iOS 6 lock screen bug

There was a lot of fuss last week about an alleged iPhone lock screen bug that allowed an attacker to bypass the lock screen and access your iPhone’s filesystem. I wrote an article on iMore explaining that this wasn’t the case and the confusion seemed to be the result of a misunderstanding on how iPhone passcodes work.

As mentioned in the article, when an unlocked iPhone is plugged in to a computer with iTunes running (or a locked phone that you then enter the passcode on), iTunes has a mechanism that allows it to access the device’s filesystem in the future without requiring the user to enter their passcode. I ambiguously referred to this “mechanism” because I had absolutely no clue how it worked. All I knew is that the first time you plug a locked device in to a computer, iTunes will give you an error message. However, once you enter the device’s passcode, iTunes will never prompt you again to enter it on that computer. Whether the phone is locked or not, iTunes (and any other app that can talk to your iPhone) will gladly read files from the device and write new data to it.

Twitter user Hubert was kind enough to send me more info about this process. He sent me an article that looks at how iOS 5 backups work. The article briefly explains the use of Escrow Keybags by iTunes (technically I think usbmux is what uses them) to talk to locked devices after they have been unlocked once. The gist seems to be that there are certificates and keys which are exchanged via iTunes when an iPhone is plugged in and unlocked, which are then stored in an Escrow Keybag on the host computer. The Escrow Keybag for every device you’ve ever plugged in to your machine can be found in /private/var/db/lockdown and is named with the UDID for the device that it is for. When a device is plugged in to your computer in the future, this Escrow Keybag is used to unlock the device without the need for the user to enter the passcode.

The confusion on the alleged lock screen bypass bug came from this. The bug discovered simply made the screen go black with a status bar still visible. If you plugged this device in to a computer that it had previously been plugged in to while unlocked, the computer used the Escrow Keybag in order to read the file system without needing the passcode. This would have happened with or without causing the black screen glitch. If the person who found the glitch had plugged their iPhone into a computer that it had never been connected to, they would have gotten an error from iTunes saying that the device was locked and they would need to enter their passcode.

The original video demonstrating the bug can be found here. A video of the same bug being reproduced, but with the phone plugged in to a computer it has not been unlocked with before can be seen here. My sincerest apologies that I don’t have any block rocking beats accompanying my video.

Categories
iOS Networking Security

iPhone Apps Accepting Self-Signed SSL Certificates

I recently spent some time looking at a number of iPhone apps in the App Store to see how well they were implementing SSL. It was a little surprising to see how many big-name apps ignored SSL errors and even more surprising to see some that didn’t use SSL at all. If you want the short version, head on over to iMore.com. Here I wanted to take some time to take a closer look at the issues that I found and how I found them in hopes that other developers can avoid making the same mistakes.

Charles Proxy is a tool I’ve discussed here previously. It’s a proxy tool that allows you to intercept traffic from your device. One of the features Charles has is SSL proxying. iOS ships with a list of trusted Certificate Authorities on every iPhone, iPad, and iPod touch. These CAs are organizations, responsible for issuing SSL certificates, that Apple has deemed trustworthy. Modern browsers, as well as other software and hardware, ship with similar lists that tell them which SSL certificates can be trusted and which cannot. If an iPhone, for example, goes to make a secure connection to https://google.com, the device looks at the SSL certificate and, among other things, checks the CA that issued it. In Google’s case, the device would see that the SSL certificate was issued by Verisign, and since Verisign is a trusted CA, it will accept the SSL certificate for this communication.

Conversely, if you were to try to connect to https://insecure.skabber.com in Safari on your device, you would receive an alert saying the server’s identity can’t be verified and asking if you want to continue anyway. This is because the certificate was issued by the Ministry of Silly Walks, which is not a trusted CA. If an app attempts to make a connection and runs into an issue like this, that app should fail the connection. Since the identity of the server you’re connecting to cannot be trusted, the application should err on the side of caution and refuse to send any data, lest it turn over sensitive details to a malicious entity. To understand why these warnings should be heeded, and what might cause them to appear, it may be helpful to give some more background.

The reason encryption works is that a person trying to eavesdrop doesn’t have the keys to decrypt the data. But what if they did? In a man-in-the-middle attack, a malicious third party sits between you and the server, decrypting your traffic. The attacker pretends to be the server to you, and pretends to be you to the server. When you send your encrypted data to what you believe is the server, it’s actually the attacker, who now has the keys to decrypt the data, because the attacker is the one you established the connection with. After decrypting the data, looking at it, and manipulating it if they want, the attacker re-encrypts the data and sends it across their own secured connection to the actual server. When the server responds, the attacker can do the same: decrypt the data, look at it, then re-encrypt it before sending it along to you. But in order to do this, the attacker needs to possess an SSL certificate for the server you’re trying to connect to.

As it turns out, anybody can create their own certificate authority. As such, anybody can issue an SSL certificate. Furthermore, that SSL certificate can be for any server that they want. You could go generate an SSL certificate for google.com right now if you wanted. But what would prevent you from ever being able to make use of that certificate, like in the scenario described above, is that it would not be issued by a trusted CA. You could say you were google.com, but nobody would believe you because your certificate didn’t come from a trusted source. SSL hinges on this chain of trust. And this is where some apps fail.

Instead of only accepting SSL certificates from iOS’s list of trusted CAs, some apps accept SSL certificates issued by anybody. Because of this, the man-in-the-middle attack described above is possible against traffic sent by these apps. An attacker can use an SSL proxying tool that will generate SSL certificates on the fly for whatever server the app is trying to connect to. Since the apps trust any CA, they will allow these certificates and open up communication with the attacker, who can then view the traffic and send it on to the server. Credit Karma, Fandango, Cinemagram, Flickr, eFax, WebEx, TD Ameritrade, E*TRADE, Monster, H&R Block, ooVoo, Match.com, PAYware (by Verifone), EVERPay Mobile POS, and Learn Vest all suffer from this issue. Some companies have already responded. E*TRADE already has a partial fix available, and a more complete fix on the way. Credit Karma and TD Ameritrade say they have updates in the works. Cisco has opened tickets in their bug tracker (Cisco.com login required) and a CVE for users to track the issue, and is also working on a fix. Monster responded but has not been able to say whether or not they will fix it. eFax’s stance is that once your data is outside of their network, they cannot protect it. None of the others have responded. Users should be particularly careful about buying tickets through Fandango, since credit card details are transmitted.

These apps are easy enough to find on your own by using a tool like Charles Proxy or mitmproxy. Normally, for these tools to work for testing, you would install a certificate on your device to tell the device to trust your CA. Simply skip that step (or uninstall the certificate, under Settings > General > Profiles, if you already have it) and fire up your proxy with SSL proxying enabled. With traffic going through your untrusted proxy, you can open apps like Facebook and see that they won’t work. Looking at the requests in your proxy, you’ll see that a connection was attempted, but then closed, and no traffic was sent. Now, if you go and open an app that ignores warnings about untrusted CAs, like Flickr, and try to log in, you’ll see something different. You’ll see the app successfully log in, and in Charles you’ll see the request that was sent to the server and, inside it, your username and password. If an attacker were performing a man-in-the-middle attack on you, this is what they would be seeing.

Flickr Login

While searching for apps with this problem, I stumbled upon a few that, while not suffering from this issue, had other security problems related to logins. Flixster, iSlick, Camera Awesome!, TechCrunch, and Photobucket all send some login information over HTTP. In the case of Camera Awesome! it’s only Photobucket credentials, and in TechCrunch it’s only authentication for Pocket. But iSlick, Flixster, and Photobucket all send usernames and passwords for their respective accounts in plaintext. This means an attacker wouldn’t have to do any SSL certificate generation or proxying; they need only be on the same network as you and sniff the network traffic. Obviously this poses an even larger security risk than just accepting self-signed SSL certificates, particularly if users reuse the same usernames and passwords for other accounts. In Photobucket’s case, they hash your password with MD5, but the password can easily be retrieved using a reverse MD5 hash lookup tool.

So what should be done? In the case of the apps not using encryption, they should be using SSL for those login requests. iSlick says that SSL support will be coming in a future update; I haven’t heard any updates from the others. As for the apps that are using SSL but not implementing it correctly, they need to make sure they respond to SSL warnings properly. From what I’ve seen, most networking libraries refuse self-signed SSL certificates by default. Perhaps ignoring these warnings was done for development purposes and the developers forgot to pull that code out later. Whatever the case, I hope each of these companies will respond quickly with updates for their users, and hopefully we can all learn from their mistakes and be a little more cautious about how we implement security in our apps. If you decide to do some research on the apps you use, be sure to let the developers know if you find a problem so they can fix it.
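As a footnote for developers: here’s a minimal sketch of what responding to these challenges properly can look like with today’s URLSession delegate API (the apps above predate it, and simply not implementing the challenge delegate at all gets you the same safe default):

import Foundation
import Security

final class StrictTLSDelegate: NSObject, URLSessionDelegate {
    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        guard challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodServerTrust,
              let trust = challenge.protectionSpace.serverTrust else {
            completionHandler(.performDefaultHandling, nil)
            return
        }

        // The bug described in this post is effectively the opposite of the code
        // below: calling completionHandler(.useCredential, URLCredential(trust: trust))
        // unconditionally, which accepts any certificate, self-signed or otherwise.

        // Evaluate the chain against the device's trusted CAs and refuse to
        // send anything if that evaluation fails.
        if SecTrustEvaluateWithError(trust, nil) {
            completionHandler(.useCredential, URLCredential(trust: trust))
        } else {
            completionHandler(.cancelAuthenticationChallenge, nil)
        }
    }
}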

Categories
iOS Networking Security

Simple Security

We previously looked at how to peek inside app bundles to get an inside look at how they might be handling data. Another important element of an app’s security is how it handles data externally. You can read through the Terms of Service and the Privacy Policy, but the only real way to know what data your applications are sending is to take a look at their traffic. I recently received an invitation to Simple, a service that seeks to replace your bank. Naturally, the first thing I did after signing up was start poking and prodding at their app. All too often these endeavors to get a glimpse at what apps are doing behind the scenes are surprising and eye-opening. Sadly, Simple was no exception.

Simple Logo

 

After looking at the app bundle with PhoneView and failing to find anything interesting there, I fired up Charles Proxy to have a look at the app’s traffic. The good news is Simple appears to be sending everything over SSL. This means that if, for instance, you’re at the airport and your iPhone is on an open wi-fi network, an attacker will just see encrypted garbage if they look at the traffic going between your iPhone and Simple’s servers. Now the bad news.

Charles Proxy has a wonderful option called SSL proxying. SSL proxying is handy for debugging when your app’s traffic is encrypted but you want to see the contents. The first issue I noticed with Simple is that it doesn’t take any steps to make sure it’s talking to its own servers. SSL guarantees that your connection is encrypted, but it doesn’t necessarily guarantee who is on the other end of that encrypted tunnel. Without the app validating that it’s talking to the server it should be talking to, it’s possible for an attacker to perform a man-in-the-middle attack.

Editor’s Note: In iOS (and any modern web browser), there is a list of trusted CAs that is bundled with the OS. These CAs are the organizations that issue SSL certificates. When an SSL connection is established, if the SSL certificate doesn’t match the URL that a connection is being made to, a warning will be raised and the connection will fail. This means that a malicious attacker can’t use an SSL certificate he may have acquired for fakebank.com to perform a MITM attack on requests that are going to https://simple.com. Some sort of compromise of the chain of trust in SSL would have to take place, like a CA being hacked and generating a phony certificate for Simple’s servers. When I originally said that Simple doesn’t take any steps to make sure it’s talking to its own servers, what I meant was that it’s not performing any validation beyond what SSL inherently gives on iOS. There is an additional step that can be performed, called SSL pinning. With SSL pinning, the app can take matters into its own hands and perform additional validation of who it’s talking to, rather than relying on the more general list of trusted CAs. This would ensure that in cases like a CA being hacked, there would be an additional layer of validation by the app to confirm it’s still actually talking to Simple’s servers. However, it’s worth emphasizing that without such a compromise taking place, user data is encrypted and secure with Simple’s app. All data is going over an SSL connection, and some compromise of SSL would need to occur for this data to be at risk in this scenario.

Normally the way SSL works is that the client sets up an encrypted connection with the server before sending data to it. The way a man-in-the-middle attack works is that the attacker intercepts your communications to the server and the server’s communications back to you. The client has an encrypted tunnel set up with what it believes to be the server; the server has an encrypted tunnel set up with what it believes to be the client. In reality, both are just talking to the attacker, who relays the traffic on after taking a look at it.

A MITM attack isn’t extremely likely on iOS because iOS ships with a list of trusted SSL certificate authorities. Using an app like Charles Proxy, you use a self-signed SSL certificate which iOS won’t trust unless you explicitly tell it to. That said, SSL is not foolproof and there have been instances in the past of it being exploited, and even of attackers creating phony SSL certificates signed by a legitimate and trusted CA.

Ultimately this isn’t something users have to worry about too much, but it is something Simple should fix. The app should be making sure it’s talking to Simple’s servers and Simple’s servers should make sure that it’s really the app that’s talking to them.
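For what it’s worth, here’s a rough sketch of what that extra validation (the SSL pinning described in the editor’s note) can look like with current APIs. The bundled certificate name is a placeholder, this is not Simple’s code, and a real implementation would also need to plan for certificate rotation:

import Foundation
import Security

final class PinnedSessionDelegate: NSObject, URLSessionDelegate {
    // Hypothetical DER-encoded copy of the expected server certificate, shipped in the app bundle.
    private let pinnedCertificate: Data? = {
        guard let url = Bundle.main.url(forResource: "pinned-server", withExtension: "cer") else { return nil }
        return try? Data(contentsOf: url)
    }()

    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        guard challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodServerTrust,
              let trust = challenge.protectionSpace.serverTrust,
              let pinned = pinnedCertificate,
              SecTrustEvaluateWithError(trust, nil),                                   // normal CA validation first
              let leaf = (SecTrustCopyCertificateChain(trust) as? [SecCertificate])?.first,
              SecCertificateCopyData(leaf) as Data == pinned else {                    // then the pin check
            completionHandler(.cancelAuthenticationChallenge, nil)
            return
        }
        completionHandler(.useCredential, URLCredential(trust: trust))
    }
}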

What’s more interesting is what you find once you start looking at Simple’s traffic. Since Simple isn’t doing any certificate pinning, we can enable SSL proxying in Charles and watch the traffic that Simple is sending and receiving. The first thing that jumps out is the request to https://api.simple.com/user-api/mobile-auth-tokens when you sign in to your account. Included in the request are your plaintext username and plaintext passphrase. The request is sent over SSL, but that alone doesn’t guarantee security, and when dealing with such sensitive data, additional security measures should be taken.

To be fair, up until this point I haven’t seen anything that Simple is doing wrong that other banks are doing right. Of the two other banks I have accounts with, neither pins SSL certificates or adds any additional security to protect your username and password when signing in. So far Simple’s behavior seems par for the course. So far their security is just as disappointing as my other banks’.

But wait, there’s more… There’s another request sent when you sign in: https://api.simple.com/user-api/users/UUID. The body of the request isn’t all that exciting, but the contents of the response are. The response seems to include all of your personal info: full name, email, street address, date of birth, and Social Security number. Yes, your Social Security number. Simple’s server is sending all of this data back to your app when you log in, and since most of this info doesn’t seem to be used by the app anywhere, it’s unclear why they feel the need to send it.

After reading their security policy I felt hopeful that they’d be receptive to my concerns, so I sent an email to their security team. After eight days, I sent a followup to make sure my original email hadn’t been lost. Finally, a reply:

Hi Nick,

My apologies for not responding sooner; we did receive your report and I’ve discussed the issues you’ve raised with the team.

We are continuously working to improve the security of our solutions, and you can expect to see steady improvements as we continue to build out our platform. The challenges with ensuring the effectiveness of SSL are well-understood and under constant review inside the organization. In addition, it does look like we are transmitting more data than we absolutely need to and that is something that we are actively working to improve.

Thank you again for your message, and please do not hesitate to contact us again. We also provide a GPG key for communicating security issues with Simple on the Security page at www.simple.com; please feel free to use that in the future if you are able.

Justin Thomas, Director of Information Security, Simple

Simple’s 404 image – I can relate

So it sounds like maybe they’ll fix this at some point… or maybe not, I’m not sure. I’ve seen restaurants respond with more urgency when they screw up my hamburger. For a company that I’ve entrusted with my personal and financial data, I expect them to treat security as being of the utmost importance. I expect them to fix security oversights as quickly and apologetically as possible. Since the company does not seem to share my enthusiasm there, I felt that the responsible thing to do was to inform other users of these shortcomings so that they can make an educated decision about what to do with their own accounts. I’m still hopeful that Simple will eventually fix these issues, but in the meantime their users deserve to know how the company is handling their information.

Update #1 12/20/12 3:00PM - Simple has responded to several users on Twitter in order to try and address their concerns. Happy to see them actively responding to users and looking forward to seeing the issues actually fixed.

 

Update #2 12/21/12 9:00AM – I received a tweet from Brian Merritt, the Director of Engineering at Simple, asking if I’d be willing to give him a call and talk.

Good news. The SSN issue has been fixed as of last night. If you go look at your traffic now, your SSN is no longer being sent back by the server. Brian also reassured me that the lack of SSL certificate pinning is an issue they were aware of and have been working on. Unfortunately, that sort of fix requires an app update, so it’s something we’ll have to wait on, but it does indeed sound like the fix is coming.