Categories
iOS

Poke Her? You Hardly Know Her!

Tirelessly working to further their goal of giving every individual piece of functionality its own app, Facebook released the Facebook Poke app today. Many are describing it as an app for sexting, but you may want to think twice before sending that photo you think will only be seen once.

One of the most publicized features of the Poke app is the ability to send a photo, video, or text message that is only viewable for a short period of time. The sender determines how long the content is visible: 1, 3, 5, or 10 seconds before it’s gone forever. The clock starts once the receiver taps and holds on the message to view it. After the time expires, the message is gone for good. Or at least you hope.

The first obvious way to get around the time limitation is to take a screenshot while the content is visible. Fortunately, Facebook planned for this: if somebody takes a screenshot of a message you sent, you’re alerted by a flash icon that appears on that message. However, that’s not the only way to save these timeless memories.

When you launch the Poke app, it reaches out to Facebook’s servers and downloads any photos, videos, messages, or pokes that you have waiting for you. By proxying SSL requests to https://attachment.fbsbx.com when the app is launched, you can see the URL that the app is downloading the file from. The request will look something like this: https://attachment.fbsbx.com/pokemedia.php?id=4668708209916&accesstoken=really-long-token-here. If you copy and paste that URL into a browser, you’ll see the content displayed and be able to download it to your computer. But what if you didn’t have the foresight to have a proxy running and the app has already downloaded the file?
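If you’d rather script it than paste URLs into a browser, a few lines will do the download for you. Here’s a minimal Swift sketch; the id and access token in the URL are placeholders standing in for whatever you actually captured in your proxy.

```swift
import Foundation

// Placeholder URL: substitute the id and accesstoken captured via your proxy.
let captured = URL(string: "https://attachment.fbsbx.com/pokemedia.php?id=0000000000000&accesstoken=really-long-token-here")!

let task = URLSession.shared.downloadTask(with: captured) { tempURL, _, error in
    guard let tempURL = tempURL else {
        print("Download failed: \(error?.localizedDescription ?? "unknown error")")
        exit(1)
    }
    // Move the downloaded file somewhere permanent before the temp file is cleaned up.
    let dest = URL(fileURLWithPath: "poke-media")
    try? FileManager.default.removeItem(at: dest)
    do {
        try FileManager.default.moveItem(at: tempURL, to: dest)
        print("Saved to \(dest.path)")
    } catch {
        print("Couldn't save file: \(error)")
    }
    exit(0)
}
task.resume()

// Keep the script alive while the asynchronous download runs.
RunLoop.main.run()
```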

Once the app downloads the media, it’s stored on your device until you view it. Using an app like PhoneView, we can dive into the Poke app bundle. If you drill down into Library > Caches > FBStore > number > MediaCache, you’ll find a list of any photos or videos you have not yet viewed. By copying them from here onto your computer, you ensure you can open that embarrassing photo or video any time you’re feeling a little down.

Neither of the methods mentioned above will alert the sender that you have saved off a copy of the package they sent. However, once the media has been viewed on your iPhone, it gets deleted from the Facebook URL mentioned above, and seems to be deleted from the device as well. So if you’ve already viewed the message and are looking for a way to get it back, you’re out of luck for now, and the sender can breathe a sigh of relief. In the end, people just need to use common sense. Remember that if you’re sending somebody sensitive content, there is always a risk that it will not remain private. So maybe don’t send pictures of your junk to your friends.

Categories
iOS Networking Security

Simple Security

We previously looked at how to peek inside app bundles to get a look at how apps might be handling data internally. Another important element of an app’s security is how it handles data externally. You can read through an app’s Terms of Service and Privacy Policy, but the only real way to know what data your applications are sending is to take a look at their traffic. I recently received an invitation to Simple, a service that seeks to replace your bank. Naturally, the first thing I did after signing up was start poking and prodding at their app. All too often, these glimpses at what apps are doing behind the scenes are surprising and eye-opening. Sadly, Simple was no exception.

After looking at the app bundle with PhoneView and failing to find anything interesting there, I fired up Charles Proxy to have a look at the app’s traffic. The good news is Simple appears to be sending everything over SSL. This means that if, for instance, you’re at the airport and your iPhone is on an open wi-fi network, an attacker will just see encrypted garbage if they look at the traffic going between your iPhone and Simple’s servers. Now the bad news.

Charles Proxy has a wonderful option called SSL proxying, which is handy for debugging when your app’s traffic is encrypted but you want to see the contents. The first issue I noticed with Simple is that it doesn’t take any steps to make sure it’s talking to its own servers. SSL guarantees that your connection is encrypted, but it doesn’t necessarily guarantee who is on the other end of that encrypted tunnel. Without the app validating that it’s talking to the server it should be talking to, it’s possible for an attacker to perform a man-in-the-middle attack.

Editor’s Note: In iOS (and any modern web browser), there is a list of trusted CAs bundled with the OS. These CAs issue SSL certificates. When an SSL connection is established, if the certificate’s hostname does not match the host the connection is being made to, a warning is raised and the connection fails. This means a malicious attacker can’t use a certificate he may have acquired for fakebank.com to perform a MITM attack on requests going to https://simple.com. Some sort of compromise of SSL’s chain of trust would have to take place, like a CA being hacked and generating a phony certificate for Simple’s servers. When I originally said that Simple doesn’t take any steps to make sure it’s talking to its own servers, what I meant was that it performs no validation beyond what SSL inherently gives on iOS. There is an additional step that can be taken, called SSL pinning. With SSL pinning, the app takes matters into its own hands and performs additional validation of who it’s talking to, rather than relying on the more general list of trusted CAs. In cases like a CA being hacked, this adds an extra layer of security by letting the app validate that it’s still actually talking to Simple’s servers. However, it’s worth emphasizing that without such a compromise taking place, user data is encrypted and secure with Simple’s app. All data goes over an SSL connection, and some compromise of SSL would need to occur for this data to be at risk.
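For the curious, here’s roughly what SSL pinning looks like in practice. This is only a minimal, modern sketch (Simple’s actual implementation, if they add one, would differ, and the pinned-cert.der file name is made up): the app ships with a copy of the server’s certificate and rejects any connection that presents a different one.

```swift
import Foundation
import Security

// A minimal pinning sketch: "pinned-cert.der" is assumed to be a DER-encoded
// copy of the server's certificate shipped in the app bundle.
final class PinningDelegate: NSObject, URLSessionDelegate {
    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        guard let trust = challenge.protectionSpace.serverTrust,
              let serverCert = SecTrustGetCertificateAtIndex(trust, 0),
              let pinnedURL = Bundle.main.url(forResource: "pinned-cert", withExtension: "der"),
              let pinnedData = try? Data(contentsOf: pinnedURL) else {
            completionHandler(.cancelAuthenticationChallenge, nil)
            return
        }
        // Compare the certificate the server presented against our bundled copy.
        // A Charles-generated certificate (or one from a hacked CA) won't match.
        if SecCertificateCopyData(serverCert) as Data == pinnedData {
            completionHandler(.useCredential, URLCredential(trust: trust))
        } else {
            completionHandler(.cancelAuthenticationChallenge, nil)
        }
    }
}

let session = URLSession(configuration: .default,
                         delegate: PinningDelegate(),
                         delegateQueue: nil)
// Requests made with `session` now fail unless the server presents the pinned cert.
```

The trade-off is that when the server’s certificate changes, the app needs an update, which is part of why pinning takes some planning.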

Normally, the way SSL works is that the client sets up an encrypted connection with the server before sending data to it. In a man-in-the-middle attack, the attacker intercepts your communications to the server and the server’s communications back to you. The client has an encrypted tunnel set up with what it believes to be the server; the server has an encrypted tunnel set up with what it believes to be the client. In reality, both are just talking to the attacker, who relays the traffic on after taking a look at it.

A MITM attack isn’t terribly likely on iOS because iOS ships with a list of trusted SSL certificate authorities. An app like Charles Proxy uses a self-signed SSL certificate, which iOS won’t trust unless you explicitly tell it to. That said, SSL is not foolproof; there have been instances in the past of it being exploited, and even of attackers creating phony SSL certificates signed by a legitimate and trusted CA.

Ultimately this isn’t something users have to worry about too much, but it is something Simple should fix. The app should make sure it’s talking to Simple’s servers, and Simple’s servers should make sure it’s really the app talking to them.

What’s more interesting is what you find once you start looking at Simple’s traffic. Since Simple isn’t doing any certificate pinning, we can enable SSL proxying in Charles and watch the traffic that Simple is sending and receiving. The first thing that jumps out is the request to https://api.simple.com/user-api/mobile-auth-tokens when you sign in to your account. Included in the request are your username and passphrase, both in plaintext. The request is sent over SSL, but SSL alone doesn’t guarantee security, and when dealing with such sensitive data, additional measures should be taken.

To be fair, up to this point I haven’t seen Simple doing anything wrong that other banks are doing right. Of the two other banks I have accounts with, neither pins SSL certificates or adds any extra protection for your username and password when signing in. So far Simple’s behavior is par for the course: their security is just as disappointing as my other banks’.

But wait, there’s more… There’s another request sent when you sign in: https://api.simple.com/user-api/users/UUID. The body of the request isn’t all that exciting, but the contents of the response are. It seems to include all of your personal info: full name, email, street address, date of birth, and social security number. Yes, your social security number. Simple’s server sends all of this data back to the app when you log in, and since most of it doesn’t seem to be used by the app anywhere, it’s unclear why they feel the need to send it.

After reading their security policy, I felt hopeful that they’d be receptive to my concerns, so I sent an email to their security team. After eight days, I sent a follow-up to make sure my original email hadn’t been lost. Finally, a reply:

Hi Nick,

My apologies for not responding sooner; we did receive your report and I’ve discussed the issues you’ve raised with the team.

We are continuously working to improve the security of our solutions, and you can expect to see steady improvements as we continue to build out our platform. The challenges with ensuring the effectiveness of SSL are well-understood and under constant review inside the organization. In addition, it does look like we are transmitting more data than we absolutely need to and that is something that we are actively working to improve.

Thank you again for your message, and please do not hesitate to contact us again. We also provide a GPG key for communicating security issues with Simple on the Security page at www.simple.com; please feel free to use that in the future if you are able.

Justin Thomas
Director of Information Security
Simple

Simple’s 404 image – I can relate

So it sounds like maybe they’ll fix this at some point… or maybe not, I’m not sure. I’ve seen restaurants respond with more urgency when they screw up my hamburger. For a company I’ve entrusted with my personal and financial data, I expect security to be treated with the utmost importance, and I expect security oversights to be fixed as quickly and apologetically as possible. Since the company does not seem to share my enthusiasm there, I felt the responsible thing to do was to inform other users of these shortcomings so they can make an educated decision about their own accounts. I’m still hopeful that Simple will eventually fix these issues, but in the meantime their users deserve to know how the company is handling their information.

Update #1 12/20/12 3:00PM - Simple has responded to several users on Twitter to address their concerns. I’m happy to see them actively responding and look forward to seeing the issues actually fixed.

Update #2 12/21/12 9:00AM – I received a tweet from Brian Merritt, the Director of Engineering at Simple, asking if I’d be willing to give him a call and talk.

Good news: the SSN issue has been fixed as of last night. If you go look at your traffic now, your SSN is no longer being sent back by the server. Brian also reassured me that SSL certificate pinning is an issue they’re aware of and have been working on. Unfortunately, that sort of fix will require an app update, so it’s something we’ll have to wait on, but it does indeed sound like the fix is coming.

Categories
Mac OS X Server Uncategorized

Mac Mini Servers: A Cautionary Tale

This post deviates from the type of content I generally post here. It isn’t really related to QA; instead it deals with a recent problem we encountered when upgrading our build servers. I’m posting about it in hopes that the lesson we learned may save others some time and trouble. If you’re interested, read on; if not, hopefully I’ll get some more QA posts up over the holidays. If you just want to know what the problem was and how we fixed it, you can scroll right to the bottom.

We recently decided to give our build servers an upgrade. Our existing setup was a Mac mini running Jenkins and four Xserves serving as slaves. While the Xserves were great as servers, their product line has been discontinued, they’re noisy, they’re large, and they require more power than Mac minis. And even though redundant networking and power supplies are nice, they’re overkill for our needs. So when we went to update the servers, we decided to go with Mac mini servers.

Setting up the first slave took a little time and patience. We decided to start clean and set these up from scratch rather than migrate the existing servers. We had a general idea of what was needed and figured out the missing pieces through trial and error. Taking notes as we went through the first one expedited the process on the subsequent servers. When we finally got the first one set up, we launched it as a slave on Jenkins, assigned a project to build on it, kicked it off and, with some tweaking, got the build to succeed.

With my set of steps to follow, I quickly ran through the second server and had it up and going in an hour or so. Feeling fairly confident in the process at this point, I decided to try a third server near the end of the day, but this one was a special case: it was going to replace the server that also ran our VPN. Initially, to avoid re-creating all the VPN users and making everybody set up a new password, I set up this Mac mini as a clone of the existing server it would be replacing. During the cloning process, though, I began to give things more thought. After talking with my boss, we decided that the VPN would actually be moved off the build servers to a more dedicated box. Once the cloning finished, I booted the mini into the recovery menu and did a re-install.

An hour or so later the re-install was complete, but it hadn’t gone quite as I had hoped. Certain user data and settings seemed to have persisted through the re-install, but it was good enough for now. I ran through my list of steps, skipping a few for software that had persisted, and got the server added to Jenkins. Feeling satisfied with my work for the day, I decided to hold off until the next day to do the remaining servers.

The next morning we noticed something strange: Jenkins was showing response times of a few seconds for two of the new servers, while the old servers were showing response times of a few milliseconds. Jenkins was also reporting the clocks on the two new servers as a few seconds ahead, while the old ones all showed in sync. We also noticed a few builds had failed overnight, but all at different points in the build process. Some of them were even failing in the middle of uploading builds to an external server.

Our first guess was a networking problem, but why? When we created the new build servers, we gave them the same names as the predecessors they were to retire. We also assigned them the same IP addresses, but due to some technical hurdles, the static IPs weren’t being handled by the router. Instead, we were telling the machines to use DHCP but manually assigning them the IPs we wanted them to have. We wondered if these factors had caused some weird networking problems in reaching these new machines that had taken over old hostnames and IPs. While trying to diagnose the issue, I noticed that if I pinged one of the new servers continuously, several packets would intermittently time out.

Oddly, the third server wasn’t exhibiting any of the symptoms the other two were. I had more or less followed the same steps, but one notable difference was that this server had gotten a new hostname and IP, since the existing Xserve needed to stay up for now to run our VPN. We were already thinking it seemed like some kind of network issue, and now we had one server that hadn’t taken over an existing IP and hostname and wasn’t having any of the same problems.

Now that we thought we had figured out the issue, it was just a matter of moving the other two servers we had already set up to new hostnames and IPs. The change only took a couple of minutes and we were back in business. Jenkins was showing the response times as low and the clocks in sync. Everything looked great… until a little while later, when I got an email notifying me that a build had failed. I looked and sure enough, it was one of our new Mac minis, and like the other failures, it happened in the middle of the build. Looking at the Jenkins interface, the slave had now been marked as down because its response times had gotten so bad. I looked at the third server that had shown the initial success, and it was still fine. Once again, I was perplexed.

I tried searching online for any clues about a drifting clock in Jenkins, with no luck. I went to an OS X Server channel on IRC and asked for help, but was directed to try the Jenkins channel. I went to the Jenkins channel, and nobody had any suggestions. I spent the night mulling things over and was running out of ideas.

The next morning I talked with Casey, one of our developers who had been helping me troubleshoot the issue the last couple of days. Casey had an idea: what about the Energy Saver preferences? I took a look at the two problematic servers and noticed they were set to put the computer to sleep after 30 minutes of inactivity. I checked the third server, and this option was disabled. I quickly disabled the option on the other two servers and hoped for the best. Initially things looked better, but I had already been fooled several times by other solutions that initially had good results.

We let the servers go for a few hours, and this time the good results actually stuck. That was it; Casey had figured it out. In retrospect it all makes sense. Well… almost all of it. The build processes didn’t count as activity because they were running in the background; the OS was relying on mouse and keyboard input to tell it the computer was in use. The reason the third server didn’t have any issues is that one of the settings that persisted across the reinstallation was the Xserve’s Energy Saver preferences, and the Xserves seem to have shipped with a much saner default of “Never” for the computer sleep option. I still don’t quite understand why the clock showed as drifting ahead, or why the response time appeared slow rather than non-existent. Regardless, this was a much harder lesson than it probably should have been. It should have occurred to us much sooner to check the Energy Saver preferences, but I do hope Apple will change the default on Mac mini servers to disable sleep and save others the trouble and confusion. I don’t imagine many people running servers want them going to sleep every 30 minutes.

tl;dr: Mac mini servers running as build servers had symptoms of slow response times with Jenkins, Jenkins reporting their system clocks as ahead, and builds failing at random points, but not all the time. This was fixed by changing the Mac mini server Energy Saver preferences from the default of putting it to sleep after 30 minutes of inactivity, to never putting it to sleep.
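If you’re provisioning these machines over SSH rather than clicking through System Preferences, the same fix can be scripted with pmset, the stock OS X power-management tool. Here’s a small Swift sketch that shells out to it; it’s equivalent to running sudo pmset -a sleep 0.

```swift
import Foundation

// Disable system sleep on all power sources: the same change as setting
// "Computer sleep" to "Never" in the Energy Saver preference pane.
// Must be run as root.
let pmset = Process()
pmset.executableURL = URL(fileURLWithPath: "/usr/bin/pmset")
pmset.arguments = ["-a", "sleep", "0"]

do {
    try pmset.run()
    pmset.waitUntilExit()
    print(pmset.terminationStatus == 0
          ? "System sleep disabled."
          : "pmset failed (are you running as root?)")
} catch {
    print("Couldn't launch pmset: \(error)")
}
```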

Categories
iOS

iOS Support Matrix 2.0 – Winter Edition

A few months ago I posted an iOS Support Matrix put together by the folks at Empirical Magic. Now they’ve released version 2.0 of the matrix which includes the 5th gen iPod touch, iPhone 5, iPad mini and iPad 4. In addition to the new devices, they’ve also added some additional useful info to the matrix. The matrix is available in a number of resolutions as well as a PDF here.

While this matrix is great for tracking which devices you support and high-level specs like iOS version and whether a device is Retina, sometimes you need more info. If you’re looking for something more technical, check out this Google Doc spreadsheet maintained by Jonathan Wight and this iOS Device Specification Grid by Blake Patterson (h/t http://iosdevweekly.com). Both provide nice compilations of technical information about iOS devices, though each has some data that the other does not. I’m sure there are other great references like this out there that I’m not aware of. If you know of one that wasn’t mentioned here, please share it in the comments below.

Categories
iOS

Using Network Link Conditioner in iOS 6

I previously covered using Network Link Conditioner to test how your app fares in less-than-ideal network scenarios. One of its inconveniences was that in order to test your app on a device, you had to configure a proxy on your computer and connect your device to it over wi-fi. Well, not anymore. With iOS 6, Apple has given us Network Link Conditioner right on the device.

To get started with Network Link Conditioner, your device must first be set up as a developer device. If you haven’t done this yet, go to Xcode, open Organizer (Shift-Command-2), select your device, and click the Use for Development button to add the device to your developer account. Once you’ve done that, you should be able to open Settings on your device and see a Developer menu near the bottom, just above the apps list. Tapping on it reveals some helpful settings, Network Link Conditioner Status obviously being the one relevant to our interests here.

The options available should look familiar if you’ve previously used Network Link Conditioner in OS X. There are some useful presets immediately available, and you can also set your own by adding a profile. One notable difference from OS X when creating your own profile is the ability to specify the interface, so you can create profiles that only affect 3G, only affect wi-fi, or both, which is nice. Once you have the profile you want, enable Network Link Conditioner and you’re good to go.

Be careful not to leave Network Link Conditioner on once you’re done testing, as there is no indicator in the status bar or anywhere else outside of Settings to tell you that it’s on. I foresee many occasions where I forget to disable Network Link Conditioner, then later wonder why my connection is so slow.

While Charles Proxy still gives you more control for network testing, it’s really nice to have the option to test a slow network without needing to run a proxy, have my MacBook Pro out, or have my device on a wi-fi network with my laptop. Now you can just pull out your device, enable Network Link Conditioner, and quickly (or rather, not so quickly) test different network conditions right on your device.

Categories
iOS

A Mind Map for iOS Testing

Update 10/8/13: A newer version of the mind map can now be found here.

Update 10/30/12: The mind map has been updated to version 1.1 with the following changes:

  • Added iPad mini & iPhone 5 under Hardware
  • Added in-app purchase testing under Software
  • Added right-to-left text input under Internationalization (h/t Matthew Henderson)
  • Added caching under Data (h/t Brad Dillon)
iOS Testing Mind Map 1.1 Zip File Download

I had previously posted about a mind map for getting started with mobile testing that I had come across on the Ministry of Testing website. What I was really hoping to eventually see was an iOS specific one, which I’m happy to say we now have. Bernard and I have put together this mind map for testing iOS.

I’ve been wanting something like this to serve as a checklist: things to make sure I’m covering, or have at least considered, when testing. We tried to strike a balance between being thorough and not going overboard with details. If you think it sucks, that we missed a bunch of important stuff, or that it includes a bunch of things you don’t care about, luckily for you the .smmx file is available to modify to your heart’s content! Should you find yourself feeling so adventurous, the application I used was SimpleMind Free, though you’ll need the paid version of SimpleMind to import an existing mind map.

The .smmx file is included in this handy zip file (updated zip here) that Rosie was kind enough to make. The zip file also contains a few other useful formats (PDF, PNG, text, OPML, MM).

Categories
iOS

iOS Support Matrix

If you’ve ever sat down to figure out which iOS versions run on which hardware in order to decide which combinations you should test and support, you know what a tedious chore it can be. If you have the time and patience, Wikipedia has a thorough history of iOS versions, but wouldn’t it be nice to have a pretty picture that sums it all up? Well, the folks at Empirical Magic thought so, and they’ve been kind enough to share their results with the rest of us in this iOS Support Matrix (see update below). There are a few pieces of missing information, but overall it serves as a great guide.

Update: Some bug fixes have been made and version 1.2 of the iOS Support Matrix is available here.

Categories
Uncategorized

Apple to Fix IAP Vulnerability in iOS 6

Apple recently announced that the IAP vulnerability discovered earlier this week will be fixed in iOS 6. They have also released documentation outlining best practices developers should follow to ensure they aren’t affected by this attack going forward. One interesting bit is that Apple actually instructs developers to make use of private APIs to secure their apps.

Thanks to Matthew Panzarino for the heads up.

Categories
iOS Security

In-AppStore.com Continues to Allow Users to Steal In-App Purchase Content

There has been a lot of talk this week about hacker Alexy V. Borodin (who goes by the handle ZonD80), who put up a service that facilitates illicit transactions of IAP (in-app purchase) content from iOS apps. While the service exposed security problems in the way IAP transactions work, it also underlined the need for developers to be diligent about security practices in their own apps.

The exploit requires users to first install two certificates on their iOS device: an SSL certificate for Alexy’s “fake App Store” and a certificate to trust the CA that signed that SSL certificate. Once this is done, the user changes their DNS settings to point to Alexy’s DNS servers. Now when an IAP request is made, instead of talking to Apple’s servers, the user’s device is talking to Alexy’s, and instead of being charged then given a valid receipt, the user is not charged and Alexy’s server simply hands back the receipt it has cached. As far as the app on the user’s device is concerned, the user has paid for the IAP content and the app received a receipt from Apple verifying this.

There are two problems (as far as I can tell) with Apple’s current implementation that allow this to happen. The first is that a user can trust third-party certificates for in-app purchases. If StoreKit had been locked down to only trust Apple’s certificates for this type of traffic, the receipt likely never could have been sniffed and cached by Alexy in the first place. Even if he had acquired receipts somehow, users would be unable to install the SSL and CA certificates required to perform the fake transactions. However, since users have (and in many legitimate cases need) the ability to install third-party, untrusted certificates in iOS, I don’t know if it’s possible for StoreKit to ignore the OS-wide certificate list and only use its own list provided by Apple.

The second problem with Apple’s implementation (as pointed out by Alexy) is that the receipts contain no uniquely identifying information. The receipts aren’t tied to any particular device or Apple ID. This means only one person ever has to buy the IAP content; they can then share that receipt with others, who can re-use it to fool apps on their devices into thinking they paid for the content. If Apple were to fix only this part of the problem by having apps check that the receipt belonged to the current device, it would still leave the potential for users to recycle their own receipts over and over. For example, if a user playing Angry Birds bought the $0.99 IAP for 20 Space Eagles, they could capture the receipt from the transaction, then re-use it on subsequent transactions for the same in-app purchase. So by paying $0.99 once, they could acquire unlimited Space Eagle in-app purchases.

All of this boils down to the fact that we have a very broken in-app purchase system right now, so how do we fix it? The bad news is, we really can’t; Apple has to. Signs already point to Apple taking measures to fix the system. They’ve started including a unique identifier in App Store receipts, which apps could potentially (with an update) use to validate that the receipt does actually belong to that device. I’m not sure if they have any plans to lock down StoreKit to make sure it only ever talks to Apple.

While developers can’t fix this broken system, there are steps that can be taken to minimize the risk. In fact, developers who follow Apple’s instructions on validating receipts against their own servers seem to be unaffected so far. When this story first broke, it was thought that developers who validate receipts against their own servers were immune, which isn’t quite true. Receipts validated against developer servers can be forged in exactly the same way: anybody can capture the response from the developer’s server during validation and then reuse it. However, to my knowledge, this isn’t currently being done. While it’s not a solution, it does add one more hoop for people to jump through to steal IAP content; a hoop Alexy doesn’t seem to be bothering with, as evidenced by the many users on his website complaining about apps his service doesn’t work with.
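To make that concrete, here’s a rough sketch of the client side of server-based receipt validation. The api.example.com endpoint is hypothetical; your server would in turn check the receipt with Apple at https://buy.itunes.apple.com/verifyReceipt, and could also apply its own logic, like refusing receipts it has already seen.

```swift
import Foundation

// Hypothetical sketch: forward the transaction receipt to your own server
// and let the server verify it with Apple before unlocking content.
func verifyReceipt(_ receiptData: Data, completion: @escaping (Bool) -> Void) {
    var request = URLRequest(url: URL(string: "https://api.example.com/verify")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try? JSONSerialization.data(
        withJSONObject: ["receipt-data": receiptData.base64EncodedString()])

    URLSession.shared.dataTask(with: request) { data, _, _ in
        // Assume the server relays Apple's verification result, where a
        // status of 0 means the receipt checked out.
        guard let data = data,
              let json = (try? JSONSerialization.jsonObject(with: data)) as? [String: Any],
              let status = json["status"] as? Int else {
            completion(false)
            return
        }
        completion(status == 0)
    }.resume()
}
```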

It looks like Alexy may soon offer the same circumvention services for Mac App Store apps (then again, maybe not). Apple has been making efforts this week to minimize the damage done by Alexy’s service and will surely have a complete solution available shortly; hopefully they’re already ahead of him on the Mac App Store and working on a solution there as well. In the meantime, developers should at least make sure they’re validating receipts against their own servers, and if they can’t, look into a third-party solution like Beeblex.

Update: Apple is now providing best practices to developers on closing this loophole. More info here: http://www.neglectedpotential.com/2012/07/in-appstoh-yes-they-did

Update 2: The service has now gone live for stealing Mac App Store IAP: http://91.224.160.136/osx.html

Categories
iOS Security

Peeking Inside App Bundles

With iOS being a relatively closed system, it’s easy for developers to get lulled into a false sense of security, believing their apps are a black box that users can’t look into. Exploring a few different apps from the App Store, it’s clear that some developers either don’t realize that users can explore their app bundles, or simply forget this fact. Using a tool like PhoneView or iExplorer, we’re going to peek inside app bundles and look for curious bits that developers may not have intended for others to see.

When exploring app bundles there are generally three different categories of files that are of interest and which developers should be careful with:

  1. Files that can be manipulated by a user to affect an application
  2. Sensitive user data
  3. Sensitive company data

User Manipulated Files

Files in this category are usually some type of data store that hold information about how an application should function, or the current state of the application for a user. To see an example of this, go install Pocket Planes, run it on your device, then:
  1. Plug your device into your computer and fire up PhoneView.
  2. Navigate to Pocket Planes/Documents/playerData. In there you’ll find the file user-playerData.db.
  3. Double-click it to save it to your Documents folder. You’ll need an app that can read SQLite 3 database files (my personal preference is Base 2).
  4. Open user-playerData.db in your preferred SQLite app.
After opening user-playerData.db, you’ll see there are a few different tables. My favorite is “AttemptingtoeditthisfilemaymakePocketPlanes_unplayable”. Apparently they’re aware that people might attempt to play around in here, and were kind enough to leave a warning for us to be careful. Take the opportunity to make a backup of this file, just in case something goes wrong.

With that done, let’s look at another interesting table: playerMeta. Looking at some of the keys and values, it quickly becomes apparent that this is where all of the player’s game data is stored: how many coins you have (“b”), how many bux you have (“c”), your level, your experience, and a few other interesting bits. By now you might see where this is going. You can edit the values in here and change something like your coins to 9,999,999. Then simply drag the newly updated .db file back onto your device in the same playerData directory you got it from, force quit the application on your device, relaunch, and voilà! You’re rich. There’s more fun to be found in this app, but I’ll leave that as an exercise for the curious reader.
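If you’d rather script the edit than click around a GUI, the change is a one-line UPDATE. Here’s a sketch using the SQLite C API from Swift; note that the column names (key, value) are my guesses based on the key/value layout described above, so check the actual schema before running anything.

```swift
import SQLite3

var db: OpaquePointer?
guard sqlite3_open("user-playerData.db", &db) == SQLITE_OK else {
    fatalError("Could not open database")
}
defer { sqlite3_close(db) }

// "b" is the key Pocket Planes uses for the player's coin count;
// the column names here are assumptions.
let sql = "UPDATE playerMeta SET value = '9999999' WHERE key = 'b';"
if sqlite3_exec(db, sql, nil, nil, nil) != SQLITE_OK {
    print("Update failed: \(String(cString: sqlite3_errmsg(db)))")
}
```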

So what’s the impact here? We have a free application that presumably makes money from in-app purchases of bux, with which users can buy coins to advance more quickly in the game. But because the values of both the coins and the bux are stored unsecured in a database, any user can modify them and avoid having to pay for anything. The impact to developers here is the potential loss of revenue. Additionally, in multiplayer games there’s the possibility for some users to cheat (as was once the case with Hero Academy).

With these types of files, you have a few options as a developer. If you don’t expect the data to change, or any changes can be handled with an app update, store that data inside your compiled code. This ensures the data is in the signed binary, out of the user’s reach. If you must update these values regularly, you can implement some kind of checksum for your application to validate. For an example of this, look at Temple Run. It stores a lot of the user’s data (like coins) in a plain text file, but the file is also checksummed, and if the checksum doesn’t match, the game ignores that data. One more option is to store these values securely on, or validate them against, a server. Obviously this requires more planning, upkeep, and resources than the other options.
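To make the checksum idea concrete, here’s a minimal sketch using an HMAC with a key compiled into the binary (all names are illustrative, and this is not how Temple Run actually does it). It raises the bar for casual editing, though anyone determined enough to dig the key out of the binary can still re-sign modified data.

```swift
import Foundation
import CryptoKit

// Key compiled into the binary; an attacker would have to extract it
// before they could forge a valid checksum.
let secretKey = SymmetricKey(data: Data("compiled-in-secret".utf8))

func save(_ payload: Data, to url: URL) throws {
    // Prepend a 32-byte HMAC-SHA256 of the payload.
    let mac = HMAC<SHA256>.authenticationCode(for: payload, using: secretKey)
    try (Data(mac) + payload).write(to: url)
}

func load(from url: URL) throws -> Data? {
    let blob = try Data(contentsOf: url)
    let mac = blob.prefix(32)
    let payload = blob.dropFirst(32)
    // If the checksum doesn't match, ignore the data, as Temple Run does.
    guard HMAC<SHA256>.isValidAuthenticationCode(mac, authenticating: payload,
                                                 using: secretKey) else {
        return nil
    }
    return Data(payload)
}
```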

Sensitive User Data

It wasn’t that long ago that Facebook and Dropbox caught some flak for insecurely storing user authentication tokens. The insecure storage of these tokens meant that a malicious user with physical access to somebody else’s device could gain unauthorized access to the victim’s account. Less than ideal. Unfortunately, there is no shortage of other apps out there with similar insecurities.

One such app that I recently came across is My Secret Apps Lite. This app claims it will securely store photos, videos, bookmarks, browsing history, notes, and contacts. It also has several mechanisms intended to inform you if somebody tries to gain unauthorized access. It even goes as far as letting you set a decoy pin, so that if you’re forced to tell somebody your pin, you can give them the decoy, which will show them your decoy data.

So how does its security hold up? As you may have guessed, not well at all. Not only can all of your “secure” data be accessed in the Library/Application Support directory (bookmarks, history, photos, notes, security log, it’s all in there), but by simply looking at Library/Preferences/my.com.pragmatic.My-Apps-Lite.plist, you can see the pin and decoy pin (if there is one set). In reality, this app provides more novelty than actual security.

Other apps might store most of your sensitive data securely but still leave some of it unprotected. For instance, with Mint I couldn’t find anything along the lines of account numbers or login credentials; however, it does look like they store your transaction history in a plaintext database. They made the effort to secure other data in the application, so I’m not sure why this data was left unsecured.

Chrome is another application that stores most of your data securely, but not all of it. They made good efforts with decisions like having password encryption turned on by default, but one file that is not stored securely is Library/Application Support/Google/Chrome/Default/UIWebViewCookies. If a user has logged in to their Google account on one device, and you copy that device’s Default directory to another device, then after relaunching Chrome you’ll be able to navigate to any Google website and be logged in to the user’s account.

You can also glean such details as what websites the user has saved their login info for, as well as the usernames for all of those sites. While they’re not storing your password in plaintext, they’re leaving enough info on the table to make you (or at least me) uncomfortable.

What makes me more uncomfortable is Google’s philosophy regarding mobile security. As you can see in their response to my bug report, their position is that you have no expectation of security if you lose your phone. I agree that if a user loses their phone, it’s in their best interest to assume all data has been compromised and respond accordingly, but I also feel developers have a duty to make reasonable efforts at keeping their users’ data secure.

1Password is an example of an application whose developers take their users’ security seriously. Even if an attacker gains full access to your phone, they won’t be able to pull any sensitive information out of 1Password’s app bundle, because it’s all encrypted. Google made the effort to encrypt passwords; I’m not sure why they wouldn’t bother to secure everything else. This is especially troublesome because one of Chrome’s best features is syncing all of your data across all of your devices. If a company is storing all of my data, I expect them to make every effort to secure it.

Google isn’t alone, either. After noticing that a popular Craigslist app saves the user’s username and password in plaintext, I emailed their support. Over three months later, I haven’t heard anything back, and while they’ve released an update since then, it didn’t fix the problem. If you’re storing a user’s sensitive data, please ensure you’re doing so responsibly and securely, as sketched below. And if you’re a user storing sensitive data in your apps, check how secure that data actually is.
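On iOS, the right home for something like a pin or an auth token is the Keychain, not a plist or a plaintext file in the app sandbox. Here’s a minimal sketch (the service and account names are illustrative):

```swift
import Foundation
import Security

// Store a PIN as a generic-password Keychain item instead of writing it
// to a plist where anyone with PhoneView can read it.
func storePIN(_ pin: String) -> Bool {
    let base: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: "com.example.myapp",
        kSecAttrAccount as String: "user-pin"
    ]
    SecItemDelete(base as CFDictionary) // replace any existing item

    var attrs = base
    attrs[kSecValueData as String] = Data(pin.utf8)
    // Only readable while the device is unlocked, and never migrated
    // to another device via backup.
    attrs[kSecAttrAccessible as String] = kSecAttrAccessibleWhenUnlockedThisDeviceOnly
    return SecItemAdd(attrs as CFDictionary, nil) == errSecSuccess
}
```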

Sensitive Company Data

Sometimes information goes into your app bundle that is meant to be kept private from others, including your users. One specific example is API tokens for services like Facebook and Twitter. A few months ago I found that eBay’s iPhone app had its Twitter and Facebook API tokens stored in plaintext in the app. Such practices are against the terms and conditions of at least Twitter. A malicious party could use those tokens to spam or otherwise abuse the APIs and get your valid token revoked, breaking Twitter and Facebook functionality for any of your apps or services that use it. eBay fixed this problem in a recent update, and I haven’t yet come across another app doing it, but it’s certainly something to be mindful of. Sometimes you may also come across debug or test files that developers forgot to remove before shipping.

A few months back I was looking for a way to launch into a specific Pandora station using Launch Center (now Launch Center Pro). Looking at the Info.plist in Pandora.app, we can see in the CFBundleURLSchemes section that Pandora has three custom URL schemes: pandora, pandorav2, and pandorav3. Unfortunately, this doesn’t tell us any of the arguments you can pass to those URLs to perform more complex tasks, and when I emailed Pandora they were unable to provide any additional information. Fortunately for us, a helpful developer left behind a test page in the app bundle. If you look in Pandora.app, there is a file called testAdWebPage.html. Opening it in a text editor, we find exactly what we’re looking for near the end of the page: “pandorav2:/createStation?stationId=” + stId. All you need is a station ID to plug in, which you can find by going to pandora.com and clicking on the station you want; the station ID will be at the end of the URL. Now with a URL like pandorav2:/createStation?stationId=165870829537384993, you can launch Pandora directly into a specific station. If you have test pages or images or scripts in your project, make sure you clean out anything you don’t want users to see.
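And if you want to trigger this from your own app rather than Launch Center, it’s a single call. (On current iOS you’d also need to list pandorav2 under LSApplicationQueriesSchemes in your Info.plist for canOpenURL to succeed.)

```swift
import UIKit

// Launch Pandora directly into a station using the scheme and argument
// found in testAdWebPage.html. The station ID comes from pandora.com.
if let url = URL(string: "pandorav2:/createStation?stationId=165870829537384993"),
   UIApplication.shared.canOpenURL(url) {
    UIApplication.shared.open(url)
}
```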

As you can see, it’s not hard to find apps storing sensitive data with varying degrees of insecurity. It’s a good idea to look at your app bundle before shipping to make sure there isn’t anything in there that you don’t want people to see. If your app is storing sensitive user information, make sure you’re securing it; you owe it to your users.

It’s important to note that if you have a passcode set on your device, a tool like PhoneView won’t work until the passcode is entered. It’s obviously a good idea to set one, so that if a malicious person does gain access to your device, there’s some kind of barrier between them and your data. Sadly, not all users choose to use a passcode, and even for those of us who do, it’s not a guarantee. The best thing to do as a developer is assume that security (or your users) will fail, and plan for the worst. You don’t want to be the weakest link in the chain.