Categories
iOS Networking Security

iPhone Apps Accepting Self-Signed SSL Certificates

I recently spent some time looking at a number of iPhone apps in the App Store to see how well they were implementing SSL. It was a little surprising to see how many big-name apps ignored SSL errors and even more surprising to see some that didn’t use SSL at all. If you want the short version, head on over to iMore.com. Here I wanted to take some time to take a closer look at the issues that I found and how I found them in hopes that other developers can avoid making the same mistakes.

Charles Proxy is a tool I’ve discussed here previously; it allows you to intercept traffic from your device. One of the features Charles offers is SSL proxying. iOS ships with a list of trusted Certificate Authorities on every iPhone, iPad, and iPod touch. These CAs are the organizations, responsible for issuing SSL certificates, that Apple has deemed trustworthy. Modern browsers, as well as other software and hardware, ship with similar lists that tell them which SSL certificates can be trusted and which cannot. If an iPhone, for example, goes to make a secure connection to https://google.com, the device looks at the SSL certificate and, among other things, checks the CA that issued the certificate. In Google’s case, the device would see that the SSL certificate was issued by Verisign, and since Verisign is a trusted CA, it will accept the SSL certificate for this communication.

Conversely, if you were to try to connect to https://insecure.skabber.com in Safari on your device, you would receive an alert saying the server’s identity can’t be verified and asking if you want to continue anyway. This is because the certificate has been issued by the Ministry of Silly Walks, which is not a trusted CA. If an app attempts to make a connection and runs into an issue like this, that app should fail the connection. Since the identity of the server you’re connecting to cannot be trusted, the application should err on the side of caution and refuse to send any data, lest it turn over sensitive details to a malicious entity. To understand why these warnings should be heeded and what might cause them to appear, some more background may be helpful.

The reason encryption works is that a person trying to eavesdrop doesn’t have the keys to decrypt the data. But what if they did? In a man-in-the-middle attack, a malicious third party sits between you and the server, decrypting your traffic. The attacker pretends to be the server to you, and pretends to be you to the server. When you send your encrypted data to what you believe is the server, it’s actually the attacker who now has the keys to decrypt the data, because it’s the attacker you established the connection with. After decrypting the data, looking at it, and manipulating it if they want, the attacker then re-encrypts the data and sends it across their own secured connection to the actual server. When the server responds, the attacker can do the same in reverse: decrypt the data, look at it, then re-encrypt it before sending it along to you. But in order to do this, the attacker would need to possess an SSL certificate for the server you’re trying to connect to.

As it turns out, anybody can create their own certificate authority. As such, anybody can issue an SSL certificate. Furthermore, that SSL certificate can be for any server that they want. You could go generate an SSL certificate for google.com right now if you wanted. But what would prevent you from ever being able to make use of that certificate, like in the scenario described above, is that it would not be issued by a trusted CA. You could say you were google.com, but nobody would believe you because your certificate didn’t come from a trusted source. SSL hinges on this chain of trust. And this is where some apps fail.

Instead of only accepting SSL certificates from iOS’ trusted list of CAs, some apps accept SSL certificates issued by anybody. Because of this, the man-in-the-middle attack described above is possible against traffic sent by these apps. An attacker can use an SSL proxying tool that generates SSL certificates on the fly for whatever server the app tries to connect to. Since the apps trust any CA, they will accept these certificates and open up communication with the attacker, who can then view the traffic and send it on to the server. Credit Karma, Fandango, Cinemagram, Flickr, eFax, WebEx, TD Ameritrade, E*TRADE, Monster, H&R Block, ooVoo, Match.com, PAYware (by Verifone), EVERPay Mobile POS, and LearnVest all suffer from this issue. Some companies have already responded. E*TRADE has a partial fix available and a more complete fix on the way. Credit Karma and TD Ameritrade say they have updates in the works. Cisco has opened a ticket in their bug tracker (Cisco.com login required) and a CVE for users to track the issue, and is also working on a fix. Monster responded but has not been able to say whether or not they will fix it. eFax’s stance is that once your data is outside of their network, they cannot protect it. None of the others have responded. Users should be particularly careful buying tickets through Fandango since credit card details are transmitted.

These apps are easy enough to find on your own using a tool like Charles Proxy or mitmproxy. Normally for these tools to work for testing, you would install a certificate on your device to tell the device to trust your CA. Simply skip that step (or uninstall the certificate if you already have it, under Settings > General > Profiles) and fire up your proxy with SSL proxying enabled. With traffic going through your untrusted proxy, you can launch apps like Facebook and see that they won’t work. Looking at the requests in your proxy, you’ll see that a connection was attempted, but then closed, and no traffic was sent. Now, if you open an app that ignores warnings about untrusted CAs, like Flickr, and try to log in, you’ll see something different. The app will successfully log in, and in Charles you’ll see the request that was sent to the server and, inside it, your username and password. If an attacker were performing a man-in-the-middle attack on you, this is what they would be seeing.

While searching for apps with this problem, I stumbled upon a few that, while not suffering from this, had other security problems related to logins. Flixster, iSlick, Camera Awesome!, TechCrunch, and Photobucket all send some login information over HTTP. In the case of Camera Awesome!, it’s only Photobucket credentials, and in TechCrunch it’s only authentication for Pocket. But iSlick, Flixster, and Photobucket all send usernames and passwords for their respective accounts in plaintext. This means an attacker doesn’t have to do any SSL certificate generation or proxying; they need only be on the same network as you and sniff the network traffic. Obviously this poses an even larger security risk than accepting self-signed SSL certificates, particularly if users reuse the same usernames or passwords for other accounts. In Photobucket’s case, they hash your password with MD5, but the password can easily be retrieved using a reverse MD5 hash lookup tool.
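
To see why an unsalted MD5 hash offers so little protection, here’s a minimal sketch (using today’s CryptoKit framework for brevity; the password is a made-up example):

```swift
import CryptoKit
import Foundation

// Hash a common password with plain, unsalted MD5. CryptoKit files MD5
// under "Insecure" to discourage exactly this kind of use.
let password = "hunter2" // hypothetical example password
let digest = Insecure.MD5.hash(data: Data(password.utf8))
let hex = digest.map { String(format: "%02x", $0) }.joined()
print(hex)

// Because every common password has long since been hashed and indexed,
// pasting this hex string into a public "reverse MD5" lookup site
// returns the original password almost instantly.
```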

So what should be done? Well, in the case of apps not using encryption, they should be using SSL for those login requests. iSlick says that SSL support will be coming in a future update; I haven’t heard any updates from the others. As for the apps that are using SSL but not implementing it correctly, they need to make sure they respond to SSL warnings properly. From what I’ve seen, most networking libraries refuse self-signed SSL certs by default. Perhaps ignoring these warnings was done for development purposes and developers forgot to later pull that code out. Whatever the case, I hope each of the companies will respond quickly with updates for their users, and hopefully we can all learn from their mistakes and be a little more cautious about how we implement security in our apps. If you decide to do some research on some of the apps you use, be sure to let developers know if you find a problem so they can fix it.
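
For developers wondering what that leftover development code tends to look like, here’s a minimal sketch of the anti-pattern using today’s URLSession API (the apps above likely used NSURLConnection or a third-party library, but the mistake is the same):

```swift
import Foundation

class TrustEverythingDelegate: NSObject, URLSessionDelegate {
    // DANGEROUS: accepts any server certificate, including self-signed ones.
    // Code like this gets added to talk to development servers and is
    // sometimes shipped by mistake.
    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        if challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodServerTrust,
           let trust = challenge.protectionSpace.serverTrust {
            // Wraps the server's certificate in a credential without ever
            // evaluating it, so an attacker's on-the-fly certificate passes too.
            completionHandler(.useCredential, URLCredential(trust: trust))
        } else {
            completionHandler(.performDefaultHandling, nil)
        }
    }
}

// The fix is usually just deleting the override: URLSession's default
// handling rejects certificates that don't chain to a trusted CA.
```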

Categories
iOS

An Introductory Look at iOS Crash Logs

There’s a nice writeup over on Ray Wenderlich on Demystifying iOS Application Crash Logs. Even if you already know the ins and outs of crash logs, you may find some interesting bits, or at the very least a good reference to tuck away for later.

Categories
iOS

Poke Her? You Hardly Know Her!

Tirelessly working to further their goal of giving every individual piece of functionality its own app, Facebook has released the Facebook Poke app today. Many are describing it as an app for sexting, but you may want to think twice about sending that photo you think will only be seen once.

One of the largely publicized features of the Poke app is the ability to send a photo, video, or text message that is only viewable for a short period of time. The sender decides whether the content is visible for 1, 3, 5, or 10 seconds before it’s gone forever. The clock starts once the receiver taps and holds on the message to view it. After the time expires, the message is gone for good. Or at least you hope.

The first obvious way to get around the time limitation is to take a screenshot while the message is visible. Fortunately, Facebook planned for this: if the person you message takes a screenshot of it, you’re alerted by a flash icon that appears on that message. However, that’s not the only way to save these timeless memories.

When you launch the Poke app, it reaches out to Facebook’s servers and downloads any photos, videos, messages, or pokes that you have waiting for you. By proxying SSL requests to https://attachment.fbsbx.com when the app is launched, you can see the URL that the app is downloading the file from. The request will look something like this: https://attachment.fbsbx.com/pokemedia.php?id=4668708209916&accesstoken=really-long-token-here. If you copy and paste that URL into a browser, you’ll see the content displayed and be able to download it to your computer. But what if you didn’t have the foresight to have a proxy running and the app has already downloaded the file?

Once the media is downloaded by the app, it is stored on your device until you view it. Using an app like PhoneView, we can dive into the Poke app bundle. If you drill down into Library > Caches > FBStore > number > MediaCache, you’ll find any photos or videos you have not yet viewed. By copying them from here onto your computer, you ensure you can open that embarrassing photo or video any time you’re feeling a little down.

Neither of the above-mentioned methods will alert the sender that you have saved off a copy of the package they sent. However, once the media has been viewed on your iPhone, it gets deleted from the Facebook URL mentioned above, and seems to be deleted off the device as well. So if you’ve already viewed the message and are looking for a way to get it back, you’re out of luck for now, and the sender can breathe a sigh of relief. In the end, people just need to use common sense. Remember that if you’re sending somebody sensitive content, there is always a risk that it will not remain private. So maybe don’t send pictures of your junk to your friends.

Categories
iOS Networking Security

Simple Security

We previously looked at how to peek inside app bundles to see how apps might be handling data internally. Another important element of an app’s security is how it handles data externally. You can read through a Terms of Service and Privacy Policy, but the only real way to know what data your applications are sending is to take a look at their traffic. I recently received an invitation to Simple, a service that seeks to replace your bank. Naturally, the first thing I did after signing up was start poking and prodding at their app. All too often these endeavors to get a glimpse at what apps are doing behind the scenes are surprising and eye-opening. Sadly, Simple was no exception.

After looking at the app bundle with PhoneView and failing to find anything interesting there, I fired up Charles Proxy to have a look at the app’s traffic. The good news is Simple appears to be sending everything over SSL. This means that if, for instance, you’re at the airport and your iPhone is on an open wi-fi network, an attacker will just see encrypted garbage if they look at the traffic going between your iPhone and Simple’s servers. Now the bad news.

Charles Proxy has a wonderful option called SSL proxying, which is handy for debugging when your app’s traffic is encrypted but you want to see the contents. The first issue I noticed with Simple is that it doesn’t take any steps to make sure it’s talking to its own servers. SSL guarantees that your connection is encrypted, but it doesn’t necessarily guarantee who is on the other end of that encrypted tunnel. Without the app validating that it’s talking to the server it should be talking to, it’s possible for an attacker to perform a man-in-the-middle attack.

Editor’s Note: iOS (like any modern web browser) is bundled with a list of trusted CAs, the organizations that issue SSL certificates. When an SSL connection is established, if the certificate doesn’t match the host being connected to or doesn’t chain to a trusted CA, a warning is raised and the connection fails. This means a malicious attacker can’t use an SSL certificate he may have acquired for fakebank.com to perform a MITM attack on requests that are going to https://simple.com. Some sort of compromise of SSL’s chain of trust would have to take place, like a CA being hacked and generating a phony certificate for Simple’s servers. When I originally said that Simple doesn’t take any steps to make sure it’s talking to its own servers, what I meant was that it’s not performing any validation beyond what SSL inherently gives on iOS. There is an additional step that can be performed, called SSL pinning. With SSL pinning, the app takes matters into its own hands and performs additional validation of who it’s talking to, rather than relying on the more general list of trusted CAs. This ensures that even in cases like a CA being hacked, there is an additional layer of security in the app to validate that it’s still actually talking to Simple’s servers. However, it’s worth emphasizing that without such a compromise taking place, user data is encrypted and secure with Simple’s app. All data goes over an SSL connection, and some compromise of SSL would need to occur for this data to be at risk.

Normally the way SSL works is that the client sets up an encrypted connection with the server before sending data to it. In a man-in-the-middle attack, the attacker intercepts your communications to the server and the server’s communications back to you. The client has an encrypted tunnel set up with what it believes to be the server. The server has an encrypted tunnel set up with what it believes to be the client. In reality, both are just talking to the attacker, who relays the traffic on after taking a look at it.

A MITM attack isn’t terribly likely on iOS, because iOS ships with a list of trusted SSL certificate authorities. An app like Charles Proxy uses a self-signed SSL certificate, which iOS won’t trust unless you explicitly tell it to. That said, SSL is not foolproof; there have been instances in the past of it being exploited, and even of attackers creating phony SSL certificates signed by a legitimate and trusted CA.

Ultimately this isn’t something users have to worry about too much, but it is something Simple should fix. The app should be making sure it’s talking to Simple’s servers and Simple’s servers should make sure that it’s really the app that’s talking to them.
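
For developers curious what pinning looks like in practice, here’s a minimal sketch using today’s URLSession API, comparing the server’s leaf certificate against a copy bundled with the app (the class and resource names are made up for illustration):

```swift
import Foundation
import Security

class PinnedSessionDelegate: NSObject, URLSessionDelegate {
    // A copy of the server's certificate shipped in the app bundle.
    // "pinned-server.cer" is a hypothetical resource name.
    private lazy var pinnedCertData: Data? = {
        guard let url = Bundle.main.url(forResource: "pinned-server", withExtension: "cer") else { return nil }
        return try? Data(contentsOf: url)
    }()

    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        guard challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodServerTrust,
              let trust = challenge.protectionSpace.serverTrust,
              SecTrustEvaluateWithError(trust, nil),              // standard CA validation first
              let serverCert = SecTrustGetCertificateAtIndex(trust, 0),
              let pinned = pinnedCertData,
              SecCertificateCopyData(serverCert) as Data == pinned else {
            // Either the chain failed or the leaf isn't the certificate we pinned.
            completionHandler(.cancelAuthenticationChallenge, nil)
            return
        }
        completionHandler(.useCredential, URLCredential(trust: trust))
    }
}
```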

What’s more interesting is what you find once you start looking at Simple’s traffic. Since Simple isn’t doing any certificate pinning, we can enable SSL proxying in Charles and watch the traffic that Simple is sending and receiving. The first thing that jumps out is the request to https://api.simple.com/user-api/mobile-auth-tokens when you sign in to your account. Included in the request are your plaintext username and plaintext passphrase. The request is sent over SSL, but as described above, that alone doesn’t guarantee security; when dealing with such sensitive data, additional measures should be taken.

To be fair, up to this point I haven’t seen anything that Simple is doing wrong that other banks are doing right. Of the two other banks I have accounts with, neither pins SSL certificates or has any additional security to protect your username and password when signing in. So far Simple’s behavior seems par for the course; their security is just as disappointing as my other banks’.

But wait, there’s more… There’s another request sent when you sign in: https://api.simple.com/user-api/users/UUID. The body of the request isn’t all that exciting, but the contents of the response are. It includes all of your personal info: full name, email, street address, date of birth, and Social Security number. Yes, your Social Security number. Simple’s server sends all of this data back to your app when you log in, and since most of this info doesn’t seem to be used by the app anywhere, it’s unclear why they feel the need to send it.

After reading their security policy I felt hopeful that they’d be receptive to my concerns, so I sent an email to their security team. After eight days, I sent a followup to make sure my original email hadn’t been lost. Finally, a reply:

Hi Nick,

My apologies for not responding sooner; we did receive your report and I’ve discussed the issues you’ve raised with the team.

We are continuously working to improve the security of our solutions, and you can expect to see steady improvements as we continue to build out our platform. The challenges with ensuring the effectiveness of SSL are well-understood and under constant review inside the organization. In addition, it does look like we are transmitting more data than we absolutely need to and that is something that we are actively working to improve.

Thank you again for your message, and please do not hesitate to contact us again. We also provide a GPG key for communicating security issues with Simple on the Security page at www.simple.com; please feel free to use that in the future if you are able.

Justin Thomas
Director of Information Security
Simple

Simple’s 404 image – I can relate

So it sounds like maybe they’ll fix this at some point… or maybe not, I’m not sure. I’ve seen restaurants respond with more urgency when they screw up my hamburger. For a company that I’ve entrusted with my personal and financial data, I expect security to be treated with the utmost importance, and I expect security oversights to be fixed as quickly and apologetically as possible. Since the company does not seem to share my enthusiasm there, I felt that the responsible thing to do was inform other users of these shortcomings so that they can make an educated decision about what to do with their own accounts. I’m still hopeful that Simple will eventually fix these issues, but in the meantime their users deserve to know how the company is handling their information.

Update #1 12/20/12 3:00PM – Simple has responded to several users on Twitter to try to address their concerns. I’m happy to see them actively responding to users, and I’m looking forward to seeing the issues actually fixed.

Update #2 12/21/12 9:00AM – I received a tweet from Brian Merritt, the Director of Engineering at Simple, asking if I’d be willing to give him a call and talk.

Good news. The SSN issue has been fixed as of last night. If you go look at your traffic now, your SSN is no longer being sent back by the server. Brian also reassured me that SSL certificate pinning is an issue they were aware of and have been working on. Unfortunately, that sort of fix requires an app update, so it’s something we’ll have to wait on, but it does indeed sound like a fix is coming.

Categories
iOS

iOS Support Matrix 2.0 – Winter Edition

A few months ago I posted an iOS Support Matrix put together by the folks at Empirical Magic. Now they’ve released version 2.0 of the matrix, which includes the 5th gen iPod touch, iPhone 5, iPad mini, and iPad 4. In addition to the new devices, they’ve also added some more useful info to the matrix. The matrix is available in a number of resolutions as well as a PDF here.

While this matrix is great for tracking which devices you support and high-level specs like iOS version and whether a device is Retina or not, sometimes you need more info. If you’re looking for something a little more technical, you should check out this Google Doc spreadsheet maintained by Jonathan Wight and this iOS Device Specification Grid by Blake Patterson (h/t http://iosdevweekly.com). Both provide nice compilations of various technical information about iOS devices, though each has some data that the other does not. I’m sure there are other great references like this out there that I’m not aware of. If you know of one that wasn’t mentioned here, please share it in the comments below.

Categories
iOS

Using Network Link Conditioner in iOS 6

I previously covered using Network Link Conditioner to test how your app does in less-than-ideal network scenarios. One of the inconveniences of using it is that, in order to test your app on a device, you have to take the extra steps of configuring a proxy on your computer that your device can connect to over a wi-fi connection. Well, not anymore. With iOS 6, Apple has given us Network Link Conditioner right on the device.

To get started with Network Link Conditioner, your device must first be set up as a developer device. If you haven’t done this yet, you’ll need to go to Xcode, open Organizer (Shift-Command-2), select your device, and click the button that says Use for Development to add the device to your developer account. Once you’ve done that, you should be able to open Settings on your device and see a Developer menu near the bottom, just above the apps list. Tapping on that reveals some helpful settings, Network Link Conditioner Status obviously being the one relevant to our interests here.

The options available should look familiar if you’ve previously used Network Link Conditioner in OS X. There are some useful presets immediately available to you, and you can also set your own by adding a profile. One notable difference from OS X when creating your own profile is the ability to specify the interface, so you can create profiles that only affect 3G or only affect wi-fi (or both), which is nice. Once you have the profile you want, enable Network Link Conditioner and you’re good to go.

Be careful to not leave Network Link Conditioner on once you’re done testing as there is no indicator in the status bar or anywhere else outside of Settings to tell you that it’s on. I foresee many occasions where I forget to disable Network Link Conditioner, then later wonder why my connection is so slow.

While Charles Proxy still gives you more control for network testing, it’s really nice to have the option to test a slow network without needing to run a proxy, have my MacBook Pro out, or have my device on a wi-fi network with my laptop. Now you can just pull out your device, enable Network Link Conditioner, and quickly (or rather, not so quickly) test different network conditions right on your device.

Categories
iOS

A Mind Map for iOS Testing

Update 10/8/13: A newer version of the mind map can now be found here.

Update 10/30/12: Version 1.1 of the mind map has been updated with the following changes:

  • Added iPad mini & iPhone 5 under Hardware
  • Added in-app purchase testing under Software
  • Added right-to-left text input under Internationalization (h/t Matthew Henderson)
  • Added caching under Data (h/t Brad Dillon)
iOS Testing Mind Map 1.1 Zip File Download

I had previously posted about a mind map for getting started with mobile testing that I had come across on the Ministry of Testing website. What I was really hoping to eventually see was an iOS specific one, which I’m happy to say we now have. Bernard and I have put together this mind map for testing iOS.

I’ve been wanting something like this for myself to serve as a checklist: things to make sure I’m covering, or have at least considered, when testing. We tried to strike a balance of being thorough without going overboard with details. If you think it sucks, that we missed a bunch of important stuff, or that it has a bunch of things you don’t care about, luckily for you the .smmx file is available to modify to your heart’s content! Should you find yourself feeling so adventurous, the application I used was SimpleMind Free, though you’ll need to grab the paid version of SimpleMind to be able to import an existing mind map.

The .smmx file is included in this handy zip file (updated zip here) that Rosie was kind enough to make. The zip also contains a few other useful formats (PDF, PNG, text, OPML, MM).

Categories
iOS

iOS Support Matrix

If you’ve ever sat down to try to figure out which iOS versions run on which hardware in order to decide which combinations you should be testing and need to support, you know what a tedious chore this can be. If you have the time and patience, Wikipedia has a thorough history of iOS versions, but wouldn’t it be nice to have a pretty picture that sums it all up? Well, the folks at Empirical Magic seemed to think so, and they’ve been kind enough to share their results with the rest of us in this iOS Support Matrix (see update below). It looks like there are a few pieces of missing information, but overall this serves as a great guide.

Update: Some bug fixes have been made and version 1.2 of the iOS Support Matrix is available here.

Categories
iOS Security

In-AppStore.com Continues to Allow Users to Steal In-App Purchase Content

There has been a lot of talk this week about hacker Alexy V. Borodin (who goes by the handle ZonD80), who put up a service that facilitates illicit transactions of IAP (in-app purchase) content from iOS apps. While the exploit identified security problems with the way IAP transactions work, it also underlined the need for developers to be diligent about secure practices in their apps.

The exploit requires users to first install two certificates on their iOS device: an SSL certificate for Alexy’s “fake App Store” and a certificate to trust the CA that signed that SSL certificate. Once this is done, the user changes their DNS settings to point to Alexy’s DNS servers. Now when an IAP request is made, instead of talking to Apple’s servers, the user’s device is talking to Alexy’s, and instead of being charged and given a valid receipt, the user is not charged and Alexy’s server simply hands back a receipt it has cached. As far as the app on the user’s device is concerned, the user has paid for the IAP content and the app received a receipt from Apple verifying this.

There are two problems (as far as I can tell) with Apple’s current implementation that allow this to happen. The first is that a user can trust 3rd-party certificates for in-app purchases. If StoreKit had been locked down to only trust Apple’s certificates for this type of traffic, then the receipt likely never could have been sniffed and cached by Alexy in the first place. Even if he had acquired receipts somehow, users would be unable to install the SSL and CA certificates that are required to perform the fake transactions. However, since users have (and in many legitimate cases need) the ability to install 3rd-party, untrusted certificates in iOS, I don’t know if it’s possible for StoreKit to ignore the OS-wide certificate list and only use its own list provided by Apple.

The second problem with Apple’s implementation (as pointed out by Alexy) is that the receipts contain no uniquely identifying information. The receipts aren’t tied to any particular device or Apple ID. This means that only one person ever has to buy the IAP content; they can then share that receipt with others, who can re-use it to fool apps on their devices into thinking they paid for the content. If Apple were to fix only this part of the problem by having apps check that the receipt belongs to the current device, it would still leave the potential for users to recycle their own receipts over and over. For example, if a user were playing Angry Birds and decided to buy the $0.99 IAP for 20 Space Eagles, they could capture the receipt from the transaction, then re-use it on subsequent transactions for the same in-app purchase. So by paying $0.99 one time, they could acquire unlimited Space Eagle in-app purchases.

All of this boils down to the fact that we have a very broken in-app purchase system right now, so how do we fix it? The bad news is, we really can’t; Apple has to. Signs already point to Apple taking measures to fix the system. They’ve started including a unique identifier in App Store receipts, which apps could potentially (with an update) use to validate that the receipt does actually belong to that device. I’m not sure if they have any plans to lock down StoreKit to make sure it only ever talks to Apple.

While developers can’t fix this broken system, there are steps that can be taken to minimize the risk involved. In fact, developers who follow Apple’s instructions on validating receipts against their own servers seem to be unaffected so far. When this story first broke, it was thought that developers who were validating receipts against their own servers were immune, which isn’t quite true. Receipts validated against developer servers can be forged in the exact same way: anybody can capture the response from the developer’s server that occurs during validation, and then reuse it. However, to my knowledge, this isn’t currently being done. While it’s not a solution, it does provide one more hoop for people to jump through to steal IAP content; a hoop which Alexy doesn’t seem to be jumping through, as evidenced by the large number of users on his website complaining about apps with which his service does not work.
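
As a rough illustration of the client side of that server-assisted validation (the endpoint and JSON field names here are hypothetical, and the server would in turn check the receipt with Apple’s verifyReceipt service before answering):

```swift
import Foundation

// Sketch: ask our own server whether a transaction receipt is genuine,
// and only unlock the content if it says yes.
func verify(receiptData: Data, productID: String,
            completion: @escaping (Bool) -> Void) {
    var request = URLRequest(url: URL(string: "https://api.example.com/verify")!) // hypothetical endpoint
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try? JSONSerialization.data(withJSONObject: [
        "receipt": receiptData.base64EncodedString(),
        "product_id": productID
    ])
    URLSession.shared.dataTask(with: request) { data, _, _ in
        // Grant the purchase only on an affirmative answer from our server.
        let response = data.flatMap {
            try? JSONSerialization.jsonObject(with: $0) as? [String: Any]
        }
        completion(response?["valid"] as? Bool ?? false)
    }.resume()
}
```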

It looks like Alexy may soon offer the same circumvention services for Mac App Store apps (then again, maybe not). Apple has been making efforts this week to minimize the damage done by Alexy’s service, will surely have a complete solution available shortly, and is hopefully already ahead of him with the Mac App Store, working on a solution there as well. In the meantime, developers should at least make sure they’re validating receipts against their own servers, and if they can’t, look into a 3rd-party solution like Beeblex.

Update: Apple is now providing best practices to developers on closing this loophole. More info here: http://www.neglectedpotential.com/2012/07/in-appstoh-yes-they-did

Update 2: The service has now gone live for stealing Mac App Store IAP: http://91.224.160.136/osx.html

Categories
iOS Security

Peeking Inside App Bundles

With iOS being a relatively closed system, it’s easy for developers to get lulled into a false sense of security, believing their apps are a black box that users can’t look into. Exploring a few different apps from the App Store, it’s clear that some developers either don’t realize that users can explore their app bundles, or simply forget this fact. Using a tool like PhoneView or iExplorer, we’re going to peek inside app bundles and look for curious bits that developers may not have intended for others to see.

When exploring app bundles there are generally three different categories of files that are of interest and which developers should be careful with:

  1. Files that can be manipulated by a user to affect an application
  2. Sensitive user data
  3. Sensitive company data

User Manipulated Files

Files in this category are usually some type of data store that holds information about how an application should function, or the current state of the application for a user. To see an example of this, install Pocket Planes, run it on your device, then:
  1. Plug your device into your computer and fire up PhoneView.
  2. Navigate to Pocket Planes/Documents/playerData. In there you’ll find the file user-playerData.db.
  3. Double-click it to save it to your Documents folder. You’ll need an app that can read SQLite 3 database files (my personal preference is Base 2).
  4. Open user-playerData.db in your preferred SQLite app.

After opening user-playerData.db, you’ll see there are a few different tables. My favorite is “AttemptingtoeditthisfilemaymakePocketPlanes_unplayable”. Apparently they’re aware that people might attempt to play around in here, so they were kind enough to leave a warning for us to be careful. Take the opportunity here to make a backup of this file, just in case something goes wrong.

With that done, let’s look at another interesting table: playerMeta. Looking at some of the keys and values in this table, it quickly becomes apparent that this is where all of the player’s game data is stored: how many coins you have (“b”), how many bux you have (“c”), your level, amount of experience, and a few other interesting bits. By now you might see where this is going. You can edit the values in here and change something like your coins to have a value of 9,999,999, as sketched below. Then simply drag the newly updated .db file back onto your device in the same playerData directory you got it from, force quit the application on your device, relaunch, and voilà! You’re rich. There’s more fun to be found in this app, but I’ll leave that as an exercise for the curious reader.
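
For the programmatically inclined, the edit might look something like this: a sketch using the SQLite C API from Swift, where the table name and the “b” key come from above, but the exact schema (a key/value layout with key and value columns) is an assumption:

```swift
import Foundation
import SQLite3

// Open the copied database and bump the coin count.
// The key/value column names are assumptions about the schema.
var db: OpaquePointer?
guard sqlite3_open("user-playerData.db", &db) == SQLITE_OK else {
    fatalError("could not open database")
}
let sql = "UPDATE playerMeta SET value = '9999999' WHERE key = 'b';"
if sqlite3_exec(db, sql, nil, nil, nil) != SQLITE_OK {
    print("update failed: \(String(cString: sqlite3_errmsg(db)))")
}
sqlite3_close(db)
```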

So what’s the impact here? We have a free application that presumably makes money from in-app purchases of bux, with which users can buy coins to advance more quickly in the game. But because the value of both the coins and the bux is stored unsecured in a database, any user can modify these values and avoid having to pay for anything. The impact to developers here is the potential loss of revenue. Additionally, in multiplayer games there’s the possibility for some users to cheat (as was once the case with Hero Academy).

With these types of files, you have a few options as a developer. If you don’t expect the data to change, or any changes could be handled with an app update, then store that data inside your compiled code. This ensures the data is in the signed binary, out of the user’s reach. If you must update these values regularly, you can implement some kind of checksum for your application to validate. For an example of this, look at Temple Run: it stores a lot of the user’s data (like coins) in a plain text file, but the data is also checksummed, and if the checksum is not correct, the game ignores it. One more option would be to store these values securely on, or validate them against, a server. Obviously this requires more planning, upkeep, and resources than the other options.
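
Here’s a minimal sketch of the checksum approach, using an HMAC via CryptoKit. The embedded key is hypothetical, and since it ships inside the binary this only raises the bar against casual editing; it is not real security:

```swift
import CryptoKit
import Foundation

let secret = SymmetricKey(data: Data("app-embedded-secret".utf8)) // hypothetical key

// Append a 32-byte authentication tag when saving player data.
func sealed(_ payload: Data) -> Data {
    let tag = HMAC<SHA256>.authenticationCode(for: payload, using: secret)
    return payload + Data(tag)
}

// On load, ignore the file entirely if it was edited and the tag no longer matches.
func unsealed(_ blob: Data) -> Data? {
    guard blob.count > 32 else { return nil }
    let payload = blob.prefix(blob.count - 32)
    let tag = blob.suffix(32)
    guard HMAC<SHA256>.isValidAuthenticationCode(tag, authenticating: payload, using: secret) else {
        return nil
    }
    return Data(payload)
}
```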

Sensitive User Data

It wasn’t that long ago that Facebook and Dropbox caught some flak for insecurely storing user authentication tokens. Basically, the insecure storage of these tokens meant that a malicious user with physical access to somebody else’s device could gain unauthorized access to the victim’s account. Less than ideal. Unfortunately, there is no shortage of other apps out there with similar insecurities.

One such app that I recently came across is My Secret Apps Lite. This app claims it will securely store photos, videos, bookmarks, browsing history, notes, and contacts. It also has several mechanisms intended to inform you if somebody tries to gain unauthorized access. It even goes as far as letting you set a decoy pin, so that if you’re forced to tell somebody your pin, you can give them the decoy, which will show them your decoy data.

So how does its security hold up? As you may have guessed, not well at all. Not only can all of your “secure” data be accessed in the Library/Application Support directory (bookmarks, history, photos, notes, security log; it’s all in there), but by simply looking at Library/Preferences/my.com.pragmatic.My-Apps-Lite.plist, you can see the pin and the decoy pin (if one is set). In reality, this app provides more novelty than actual security.
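
At a minimum, a pin like this belongs in the Keychain rather than in a plaintext plist. A minimal sketch (the service identifier is hypothetical; storing only a salted hash of the pin would be better still):

```swift
import Foundation
import Security

// Store the user's pin in the Keychain, where it's encrypted by the OS,
// instead of writing it to a readable plist.
func savePIN(_ pin: String) -> Bool {
    let base: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: "com.example.secretapps.pin" // hypothetical identifier
    ]
    SecItemDelete(base as CFDictionary) // replace any existing item

    var add = base
    add[kSecValueData as String] = Data(pin.utf8)
    // Keep the item inaccessible until the device is unlocked, on this device only.
    add[kSecAttrAccessible as String] = kSecAttrAccessibleWhenUnlockedThisDeviceOnly
    return SecItemAdd(add as CFDictionary, nil) == errSecSuccess
}
```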

Other apps might store most of your sensitive data securely but still leave some of it unprotected. For instance, with Mint I couldn’t find anything along the lines of account numbers or login credentials; however, it does look like they store your transaction history in a plaintext database. They made the effort to secure other data in the application, so I’m not sure why this data was left unsecured.

Chrome is another application that stores most of your data securely, but not all of it. Google made good decisions like having password encryption turned on by default, but one file that is not stored securely is Library/Application Support/Google/Chrome/Default/UIWebViewCookies. If you take a device where a user has logged in to their Google account and copy that Default directory to another device, then after relaunching Chrome you’ll be able to navigate to any Google website and be logged in to the user’s account.

You can also glean such details as which websites the user has saved their login info for, as well as the usernames for all of those sites. While they’re not storing your password in plaintext, they’re leaving enough info on the table to make you (or at least me) uncomfortable.

What makes me more uncomfortable is Google’s philosophy regarding mobile security. As you can see in their response to my bug report, their position is that you have no expectation of security if you lose your phone. I would agree that if a user loses their phone, it’s in their best interest to assume all data has been compromised and respond accordingly, but I also feel developers have a duty to their users to make reasonable efforts at keeping data secure.

1Password is an example of an application whose developers take their users’ security seriously. Even if an attacker gains full access to your phone and you have 1Password on it, they won’t be able to access any sensitive information by looking in the app bundle, because it’s all encrypted. Google made the effort to encrypt passwords; I’m not sure why they wouldn’t bother to secure everything. This is especially troublesome because one of the best features of Chrome is having all of your data synced across all of your devices. If a company is storing all of my data, I expect them to make every effort to secure it. Google isn’t alone, either. After noticing that a popular Craigslist app saves the user’s username and password in plaintext, I emailed their support. Over three months later, I haven’t heard anything back, and while they’ve released an update since then, it didn’t fix this problem. If you’re storing a user’s sensitive data, please ensure you’re doing so responsibly and securely. If you’re a user storing sensitive data in your apps, make sure you check how secure that data actually is.

Sensitive Company Data

Sometimes information goes into your app bundles that is meant to be kept private from others, including your users. One specific example is API tokens for services like Facebook and Twitter. A few months ago I found that eBay’s iPhone app had their Twitter and Facebook API tokens stored in plaintext in the app. Such practices are against the terms and conditions of at least Twitter. A malicious party could use those tokens to spam or otherwise abuse the APIs and get your valid token revoked, breaking Twitter and Facebook functionality for any of your apps or services that use it. eBay fixed this problem in a recent update, and I haven’t yet come across another app doing it, but it’s certainly something to be mindful of. Sometimes you may also come across debug or test files in an app that developers forgot to remove before shipping.

A few months back I was looking for a way to launch into a specific Pandora station using Launch Center (now Launch Center Pro). Looking at the Info.plist in Pandora.app, we can see in the CFBundleURLSchemes section that Pandora has three custom URL schemes: pandora, pandorav2, and pandorav3. Unfortunately, this doesn’t tell us any of the arguments you can pass to those URLs to perform more complex tasks, and when I emailed Pandora they were unable to provide any additional information. Fortunately for us, a helpful developer left behind a test page in the app bundle. If you look in Pandora.app, there is a file called testAdWebPage.html. Opening the file in a text editor, we find exactly what we’re looking for near the end of the page: “pandorav2:/createStation?stationId=” + stId. All you need is a station ID to plug in, which you can find by going to pandora.com and clicking on the station you want; the station ID will be at the end of the URL. Now with a URL like pandorav2:/createStation?stationId=165870829537384993, you can launch Pandora directly into a specific station. If you have test pages or images or scripts in your project, make sure you clean out anything you don’t want users to see.
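
Calling that scheme from your own app is then just a couple of lines (the station ID is the example from above; on current iOS you’d also need to list pandorav2 under LSApplicationQueriesSchemes for canOpenURL to succeed):

```swift
import UIKit

// Launch Pandora directly into a specific station via its custom URL scheme.
if let url = URL(string: "pandorav2:/createStation?stationId=165870829537384993"),
   UIApplication.shared.canOpenURL(url) {
    UIApplication.shared.open(url)
}
```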

As you can see, it’s not hard to find apps that have sensitive data stored with varying levels of insecurity and severity. It’s a good idea to look at your app bundles before shipping to make sure there isn’t anything in there that you don’t want people to see. If your app is storing sensitive user information, make sure you’re securing it; you owe it to your users.

It’s important to note that if you have a passcode set on your device, you won’t be able to use a tool like PhoneView until you enter the passcode. Obviously it’s a good idea to have a passcode set on your device so if a malicious person does gain access to your device, that will provide some type of barrier between them and your data. Sadly not all users choose to use a passcode, and even for those of us that do, it’s not a guarantee. The best thing to do as a developer is assume that security (or your users) will fail, and plan for the worst. You don’t want to be the weakest link in the chain.