Category: iOS

Security & Privacy Changes in iOS 8 and OS X Yosemite

I’ve been sifting through this year’s WWDC videos looking for all of the interesting bits around security & privacy, and I’m not anywhere close to being done. Fortunately, Luis Abreu has done the hard work for all of us and compiled his findings into a very handy post. The post has a lot of great info for developers, QA, and designers around what’s new and what’s changing. Of course, you’ll still want to do your own research before implementing any changes, but Luis’ post serves as a great quick-start guide.

Link: lmjabreu.com
Source: iOS Dev Weekly

What Developers Should Know About Apple’s TestFlight

When Apple acquired Burstly, makers of TestFlight, earlier this year, many were hopeful that Apple was finally ready to provide developers with an easy way to manage beta testing. So naturally, developers responded to Apple’s official announcement of the (re)launch of TestFlight at WWDC with great applause. Since then, many (including Apple) have rejoiced that the days of dealing with UDIDs and provisioning profiles are over. Many already believe that TestFlight spells the end for HockeyApp. But looking at what we know so far about TestFlight, I’m not so sure that’s the case.

The Promise

TestFlight will bring two big changes to ad hoc app distribution as we know it. First, test devices are now managed using email addresses rather than UDIDs. Second, the test device limit was drastically increased from 100 devices per account to 1,000 users per app. Many developers have been frustrated for some time now with Apple’s 100-device limit, and this has only gotten worse over the years as more new devices are released. The current system of UDIDs and provisioning profiles wastes a lot of developer time, requiring close monitoring and management of test devices and troubleshooting of esoteric error messages.

With TestFlight, developers won’t need to ask testers for UDIDs anymore—they will simply send them an email invite from iTunes Connect. Once a user accepts the invite and installs the TestFlight app, they’ll be able to view details for apps, install betas, and provide feedback to developers. Developers will be able to use iTunes Connect to upload builds, manage users, and even get insights into how testers are using the app. So far, so good, but here come the caveats.

Internal Testers vs. Beta Testers

TestFlight users will fall into two buckets: internal testers and beta testers. Beta testers will be the ones invited via email and, as previously mentioned, you will be able to have up to 1,000 of them per app. Internal testers will be managed in iTunes Connect by creating an account for each tester; you are limited to 25 of these. Builds for beta testers will need to be reviewed and approved by Apple before testers have access to them, while internal testers will have access to builds as soon as they are uploaded. As others have already pointed out, reviews will only be required for the initial submission and any builds that contain major changes to the app; minor changes won’t require another review. But this is already a hurdle that developers didn’t previously have to deal with. And for many companies, 25 internal tester slots simply won’t be enough. It’s great to see Apple get rid of the 100-device limit, and for a lot of developers the new system may be sufficient. But for many, the old hassle isn’t going away; it’s just being replaced with a similarly frustrating limit of 25 internal testers.

Supported Platforms

Not surprisingly, TestFlight is only available for iOS 8. While TestFlight originally worked on Android, that support was dropped shortly after Apple acquired Burstly and there’s no indication it will return. There also hasn’t been any mention of TestFlight supporting Mac apps (which Hockey does). So if you’re planning on making an iOS 8-only app, then you’re in good shape. But anybody supporting older iOS versions, Android, or wanting to have a Mac app will need to look elsewhere.

Build History

The demonstration shown in Apple’s iTunes Connect session seems to suggest that testers will only be able to install the latest version of a beta. When a new build is uploaded, the previous build is marked as Inactive. Most of the time this is perfectly fine, but it’s not uncommon for developers and testers to need to install previous builds of an app. You may need to reproduce a bug in an old build, compare a design element that has changed, check old builds to determine when a regression was introduced, or test database migrations from old builds… there are plenty of perfectly valid reasons for needing the ability to install previous builds. Perhaps I’m reading too much into Apple’s demonstration, but if this winds up being the case, Hockey will be able to add it to the list of features it has over TestFlight.

Automated Builds & Continuous Integration

TestFlight builds will be uploaded and managed through iTunes Connect. Unless Apple provides command line tools for this (and there has been no hint of that), developers who use any sort of continuous integration environment for automating builds and uploads will be out of luck. Each update for testers will require a developer to manually build and upload through iTunes Connect. Obviously there are plenty of developers who currently do manual builds, and for them this won’t be a big deal. But for the many of us using CI to upload multiple builds a day, this lack of functionality alone is enough to immediately rule out moving from Hockey to TestFlight.

Crash Reporting

Another big feature TestFlight will offer is crash reporting. Apple has provided crash reporting for App Store apps for a while now, but many have found it lacking and use 3rd-party solutions instead. So far, Apple has only mentioned TestFlight supporting crash reporting for App Store submissions, not betas. It’s better than no crash reporting at all, but it’s not a complete solution, and it still falls short of the many 3rd-party solutions currently out there. Oh, and one more thing: Apple said that crash reporting and symbolication won’t arrive until later next year.

Support

Last but not least, let’s talk about support. Hockey won me over early on with their beyond-stellar support. I frequently receive responses to my support requests in less than 15 minutes, at all hours of the day and night. One time I submitted a support request to find out if I could aggregate crash information in a particular way through the Hockey website, and was told it wasn’t currently possible. The next morning, Hockey support sent me a Ruby script to accomplish what I wanted using their API. Apple won’t be providing that kind of support, and likely has no interest in doing so.

Where TestFlight Fits In

Believe it or not, I’m not trying to talk anybody out of using TestFlight. I’m quite glad to see TestFlight; I think it’s a big step in the right direction. TestFlight will be entirely sufficient for many developers’ testing needs, and the fact that those developers will be able to use TestFlight instead of relying on, and having to pay for, a 3rd-party service is great news. Even developers who continue to use services like Hockey to fulfill needs unmet by TestFlight will likely benefit from TestFlight. Hockey may make the most sense during development when you’re cranking out builds and rapidly iterating, but as you near release, being able to get your app into the hands of a much larger group before finally shipping to the App Store is a huge plus. The more users you have testing your app on more devices, the greater your ability to catch strange edge cases and ship a more polished app. That’s not only great news for developers, it’s great news for users, who easily become collateral damage of insufficient testing.

Maybe down the road Apple will expand TestFlight to compete more aggressively with Hockey. Maybe they’ll even kill off manual management of distribution profiles for good. But looking at what we know about these upcoming changes so far, TestFlight and HockeyApp are two different services that serve two very different needs. TestFlight isn’t going to kill Hockey, it’s going to complement it.

iPhone Touchscreen Accuracy – A lesson in understanding test requirements and goals

Effective problem solving requires that you fully understand the problem you’re trying to address. This holds true in life and in programming. Effective testing requires that you have a good understanding of what you are testing and why. Without this solid foundation, at a minimum you’ll cause some confusion, and oftentimes you’ll end up wasting time, money, and energy investigating problems that aren’t really problems. This week, a company called OptoFidelity provided the perfect opportunity to discuss this challenge that engineers and testers commonly face.

OptoFidelity is a technology company that, among other things, provides automated test solutions. They recently performed a number of automated tests on the iPhone 5c, iPhone 5s, and Samsung Galaxy S3. One of the tests is meant to measure the accuracy of a touch panel. The test is performed by a robot with an artificial finger that performs hundreds of precise taps across the entire display. The location of each tap is compared against where the device registered it. If the actual location and registered location are within 1mm of each other, the tap is displayed as a green dot (a pass). If they differ by 1mm or more, the tap is displayed as a red dot (a failure).
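
To make the pass/fail criterion concrete, here’s a minimal sketch of the distance check in Swift (my own illustration of the rule described above, not OptoFidelity’s actual test code):

import CoreGraphics

// A tap "passes" if the registered location is within 1mm of the actual
// tap location. pixelsPerMM depends on the display being tested
// (a 326ppi Retina display works out to roughly 12.8 pixels per mm).
func tapPasses(actual: CGPoint, registered: CGPoint, pixelsPerMM: CGFloat) -> Bool {
    let dx = actual.x - registered.x
    let dy = actual.y - registered.y
    let distanceInMM = (dx * dx + dy * dy).squareRoot() / pixelsPerMM
    return distanceInMM < 1.0
}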

Touchscreen Accuracy Test Results

The image above comes from OptoFidelity and shows the results of the test. As you can see, the Galaxy S3 performs very well in this test, only losing accuracy at the very edge of the display. The iPhones show a somewhat alarming amount of inaccuracy, with roughly 75% of the touchscreen yielding inaccurate results. The obvious conclusion to draw here is that the iPhone 5c and 5s clearly have subpar touchscreen accuracy, at least when compared to the Samsung Galaxy S3. But something sticks out about the iPhone results.

The green area in the iPhone results, where taps registered within 1mm of the actual tap location, falls into an area that is easily tappable with your thumb when holding the phone in your right hand. If you pick up your phone with your right hand and try tapping with your right thumb, it’s easy to see that this area can be tapped with a fair amount of accuracy. You’re not stretching your thumb as you would when you go for the top of the screen, or scrunching it up as you would when trying to tap close to the right edge. You also wind up tapping with the same part of your thumb while in this area. Put more concisely, this is the area of the screen where you’re most likely to tap exactly where you mean to. So what about where the circles turn red? What’s going on there?

Let’s look at one specific red area that OptoFidelity calls out in their study: the left and right edges of the keyboard.

Keyboard Accuracy

In this image, you can see the results of the touchscreen accuracy test overlaid on the top row of the iOS keyboard. In the center, over letters like T and Y, the black circles of the robot’s tap show green dots nearly centered inside, indicating that the iPhone registered the taps very close to the center of where the tap actually took place. As you move left or right of the center, you see the dots start to shift in the same direction. As you move over to the letters E and W on the left, you see the green dot moving to the left side of the actual tap circle, and by the time you get to Q, the iPhone is now registering taps 1mm or more to the left of where the actual tap took place. The conclusion of the test indicates this is a failure in accuracy on the part of the touchscreen, but is this a failure or a feature?

Looking at the displacement of taps as you move away from the green area, there’s a definite pattern. The farther you move from the easily-tappable area, the greater the “inaccuracy” of the tap. But the inaccuracy skews in a way that would make the effective target slightly closer to the starting position of your thumb (likely the most frequently used digit for tapping). As your thumb stretches out from your hand, likely positioned near the bottom of the phone, the portion of your thumb that actually comes into contact with the screen changes. Your perception of the screen also changes slightly: the higher you go on the screen, the less likely you are to be viewing it at exactly a 90-degree angle. These are factors that this automated test does not account for. The robot doing the test views its tap target at a perpendicular angle to the screen, and it taps at a perpendicular angle every time. This isn’t generally how people interact with their phones.

I haven’t been able to find official documentation on this, but I think this behavior is intentional compensation being done by Apple. Have you ever tried tapping on an iPad or iPhone while it’s upside-down to you, like when you’re showing something to a friend and you try tapping while they’re holding the device? It seems nearly impossible. The device never cooperates. If the iPhone is compensating for taps based on assumptions about how it is being held and interacted with, this would make total sense. If you tap on a device while it’s upside-down, not only do you not receive the benefit of the compensation, it actively works against you. The iPhone assumes you meant to tap higher when, in reality, you’re upside down and likely already tapping higher than you mean to, resulting in you completely missing what you’re trying to tap.

Commentators across the Internet have already chimed in saying “I’ve noticed this too! I’m always tapping the wrong button!” It’s a touchscreen: you’re going to miss. If the report had revealed the opposite, that the Galaxy S3 was inaccurate, you would have had a swarm of S3 users supporting the study, citing that they sometimes tap the wrong button or key. The bottom line is that the testing performed here bears no resemblance to real-world usage. OptoFidelity tested how closely each device maps a tap to the actual position of the tap. That accuracy would be extremely important if you had robot fingers tapping very small, closely-spaced targets at a 90-degree angle. If you’re looking for a phone to use this way, steer clear of the iPhone. What the test didn’t show is the accuracy of taps on a device relative to a user’s intended tap target. I would not be surprised if this was exactly the sort of testing Apple did when they decided to skew the touch accuracy of their devices.

This comes up in testing all of the time. In order to properly test something, you need to understand what it is that you’re testing. If you don’t understand what you’re testing, it’s easy to misinterpret the results. Every tester out there has filed a bug, only to have it explained to them why it’s actually the expected behavior for an app (and not in the joking “it’s not a bug, it’s a feature” kind of way). A critical part of our jobs as testers is not just reporting what something does, but asking why it behaves that way. Consider a real-world example: you come across a light switch where flipping it down turns the lights on and flipping it up turns them off. It could be that the switch was installed upside-down. Or it could be a three-way switch, with another switch elsewhere controlling the same lights. In the latter case, the behavior of the switch could not be considered a bug. Arriving at that conclusion requires an understanding of what you’re testing in order to know the expected result.

I could be completely wrong about the accuracy of the iPhone. I am not a touchscreen expert, and I have no proof of what’s going on. I am in no better position than OptoFidelity to make claims about the accuracy of the iPhone touchscreen. My point is that they should be asking questions. Testers and engineers should always ask questions. By asking questions and trying to look below the surface, you gain a better understanding of the problems you’re trying to solve and the original questions you were trying to answer. As developers and testers, asking questions is how we build better products and deliver the best results.

360iDev 2013 – How to Break Your Apps Before I Do

I had the wonderful opportunity to speak at 360iDev this year. My talk, entitled How to Break Your Apps Before I Do, covers some of the methods and mentality of a good QA person, as well as how developers can get the most out of testing.

The conference organizers, John and Nicole Wilker, have generously decided to make the session recordings available to everybody for free this year. Those interested can check out the recording of my talk, as well as all of the other sessions.

The slide deck is also available.

iOS Testing mind map 1.2 – Now with more stuff

Nearly a year since the last refresh of the iOS testing mind map, it seemed due for an update. The changes in this version are outlined below.

  • Hardware
    • Added iPhone 5s (64-bit)
    • Added iPhone 5c
    • Removed iPhone 3G
  • Network
    • Added LTE
  • Date
    • Time Settings
      • Added 24 hour clock
  • Software
    • iOS
      • Added 7.x
  • Functionality
    • Added Motion Activity
    • Added Restrictions
      • Added Disabled Safari
      • Added No IAP password caching
      • Added Disabled Camera
    • Added Privacy
      • Added Location Services, Contacts, Calendars, Reminders, Photos, Bluetooth Sharing, Microphone, Motion Activity, and Social Networking
    • Added Push Notifications

iOS Testing Mind Map 1.2

Once again, this mind map seeks to be thorough without getting so detailed as to become unmanageable or unhelpful. The mind map is not exhaustive, but helps serve as a template that you can customize to suit the needs of your apps. Anyone wishing to modify the mind map can do so by downloading SimpleMind Free and the SMMX mind map file (you may need to right-click to save), then dragging the file onto the SimpleMind app icon in your dock. Clicking on the image above will take you to a full-resolution PNG. You can also download this zip file which contains PDF, PNG, text, OPML, MM and SMMX versions of the mind map.

Vesper Beta Collaboration

Just about every beta I’ve participated in has been set up so that feedback goes only to the developers. Vesper was my first beta where a collaboration tool was set up for the testers, in this case Glassboard. For those unfamiliar with Glassboard, it’s a sort of social network that allows you to create private boards for groups of people to communicate. Its lightweight structure is well-suited for private communication amongst a small group, like during a beta. Brent Simmons explained to me that this is how he has always done betas. Whether it be email, Glassboard, or some other tool, Brent has always set up a way for testers to discuss the project with one another.

Not long after the first beta release, the Vesper Glassboard began to fill with feature requests, design feedback, and general comments. It was interesting to watch all the discussion taking place around design, features, and interactions, but my specialty has always been breaking things, so that’s what I did. This took an interesting turn one night after I emailed a list of about 20 bugs I had found in my latest run through the app. Dave responded with “How about I save myself the time and just give you Lighthouse access?”

This didn’t just make things easier for him, it made things easier for me too. Glassboard is a great collaboration tool, but by no stretch of the imagination was it set up to be a bug tracker. With direct access to Lighthouse, I could open tickets for bugs as I found them, rather than trying to compile a large list of items to submit at once. It also let me better detail my tickets, and attach photos and videos for bugs that were more difficult to explain. Lighthouse access also meant that I could make sure everything reported by testers on Glassboard got tracked. Not to mention how much easier it became to follow up on bugs once they were fixed.

The difference between a bad app and a good app is apparent to most. The difference between a good app and a great app is much more subtle. It’s the small details that 99% of users would never consciously miss if they weren’t there. It’s the one or two out-of-place pixels that cause a slight distraction in a user’s mind, even if they never realize it. I love helping with that refinement: producing a seamless app that allows people to forget that what they’re actually looking at is software executing thousands and thousands of lines of code in order to display a sequence of colors onto the millions of pixels that make up a screen; creating that suspension of disbelief where users interact with an app as if it’s actually made up of physical components. Q Branch wants to ship the absolute best apps that they can, and I’m thrilled and honored to be a part of that.

Spoofing Location Services in Your iOS Apps

Of all the instruments available on the iPhone, GPS is easily one of the most utilized. Access to Location Services can greatly enhance the user experience in your app by adapting behavior to what best suits your user based on his or her whereabouts. One critical piece of utilizing location services is making sure your code behaves the way you expect. Fortunately, in recent releases of Xcode, Apple has made this job a little bit easier by allowing us to spoof our location in the Simulator and on devices.

The way you decide to utilize Xcode’s location spoofing will depend on how your app uses Location Services. For example, if you’re making a gorgeous weather app for the iPad, you’ll want to spoof a static location to simulate getting a user’s local weather. On the other hand, if your app does something like map a user’s bike ride, you’ll need to spoof a set of locations that simulate a user’s location changing over time. Xcode has some presets you can use for both scenarios, but more than likely you’ll want to construct your own custom data to feed to Xcode.

First, you will create a GPX file. GPX is a standard file format used for expressing GPS locations. The format for GPX files that Xcode looks for is fairly straightforward:

<gpx>
    <wpt lat="38.897678" lon="-77.036517"></wpt>
</gpx>

This GPX file gives the latitude and longitude of The White House. Throw that text into a text editor and save it as WhiteHouse.gpx. In Xcode, open a project you have for an iPhone or iPad app, and build it to your device. Once the app is running on your device, go to Product > Debug > Simulate Location, and select the last option in the list that says “Add GPX File to Project…”, then select the WhiteHouse.gpx file we just created. Xcode will pop up some options for adding the file; the defaults should be fine, so just click Finish. Now, if you go back to Product > Debug > Simulate Location, you should see WhiteHouse near the top of the list; click on it (you can also get to this list from the debug bar at the bottom of Xcode by clicking on the location services arrow icon). Your device should now think you’re at the White House. To check, press the home button on your device to close your app and go to the Maps app. If it didn’t seem to work, check Xcode and make sure the location arrow in the debug bar is blue. If it’s still gray, Xcode doesn’t like something about your GPX file.
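
If you’d rather verify inside your own app than in Maps, a minimal Core Location sketch like this will log whatever coordinates the system reports (written in modern Swift; the class name is my own):

import CoreLocation

// Logs whatever location the system reports. With Simulate Location
// active, the coordinates come from your GPX file, not from GPS.
// (Requires the NSLocationWhenInUseUsageDescription key in Info.plist.)
class LocationLogger: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    override init() {
        super.init()
        manager.delegate = self
        manager.requestWhenInUseAuthorization()
        manager.startUpdatingLocation()
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        guard let location = locations.last else { return }
        print("lat: \(location.coordinate.latitude), lon: \(location.coordinate.longitude)")
    }
}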

One thing you might be wondering now is if Maps uses your simulated location, will other apps use it also? The answer is yes. Think about all the apps you have and what they are using location services for. Twitter clients posting your location with tweets, sports apps using your location to black out games, camera apps that put your location into Exif data, apps like Foursquare that check you into places – they will all look at your simulated location. I won’t go into detail here, but you should play around with this; it can be interesting.

Visiting the White House was fun, but what if you are making an app for people to track their sweet rollerblade workouts? You will need to make a GPX file with multiple location points. The format is the same but with more wpt elements that have additional latitude and longitude coordinates. I found a GPX file for the New York City Marathon here but had to make some modifications for Xcode to like it. Grab the modified file here. Click on the location button in the debug bar of Xcode, and select “Add GPX File to Project…” again. Point it to your New York City Marathon file, and accept the defaults as before. Finally, click the location button again, and select New York City Marathon from the list. Now, if you go back into Maps, you should see your blue dot making its way through the New York City Marathon; every rollerblader’s life ambition.
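
For reference, a multi-waypoint file follows the same minimal structure as before, just with more wpt elements; something like this (the coordinates here are illustrative, not the actual marathon route data):

<gpx>
    <wpt lat="40.602600" lon="-74.055000"></wpt>
    <wpt lat="40.678100" lon="-73.980000"></wpt>
    <wpt lat="40.776700" lon="-73.976100"></wpt>
</gpx>

Xcode will walk through the waypoints in order, simulating movement from one point to the next.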

We’ve had quite a day so far. Any developer or tester would be exhausted after visiting The White House and rollerblading the New York City Marathon, so I won’t keep you much longer; just one more thing to show. With your app still running in Xcode, unplug your device (I will not be held responsible for any bootstrap errors that result). If you go back into Maps, you’ll find your rollerblader is stuck hanging out in New York. If you don’t stop your app in Xcode before unplugging your device, your device will continue to use the simulated location until you restart it, or until you plug it back in and turn off the location simulation. This is extremely handy for testing without being tethered to your computer. However, this can also be confusing for your mom when she sees your Facebook post from Antarctica, so don’t forget to turn it off when you have finished playing.

Hopefully, Apple will eventually give us the ability to control how quickly Xcode advances through the file. Maybe if we’re really lucky, they’ll give us a way to easily generate GPX files, or let us just specify a location without the need to create a GPX file. In the meantime, Location Simulation, while requiring a little bit of hands-on work, is an extremely handy tool for testing location services in your app.
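
Until Apple does, rolling your own generator takes only a few lines. Here’s a minimal sketch (makeGPX is a hypothetical helper of my own, not an Apple API) that turns an array of coordinates into a GPX file Xcode will accept:

import CoreLocation
import Foundation

// Builds a GPX document in the minimal format Xcode expects,
// one <wpt> element per coordinate.
func makeGPX(from coordinates: [CLLocationCoordinate2D]) -> String {
    let waypoints = coordinates
        .map { String(format: "    <wpt lat=\"%.6f\" lon=\"%.6f\"></wpt>", $0.latitude, $0.longitude) }
        .joined(separator: "\n")
    return "<gpx>\n\(waypoints)\n</gpx>"
}

// Example: write a two-stop route to disk, then add the file to Xcode
// via Product > Debug > Simulate Location > "Add GPX File to Project..."
let route = [
    CLLocationCoordinate2D(latitude: 38.897678, longitude: -77.036517),
    CLLocationCoordinate2D(latitude: 40.748400, longitude: -73.985700),
]
try? makeGPX(from: route).write(toFile: "Route.gpx", atomically: true, encoding: .utf8)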

Update: For some additional information on this topic check out this article from Brandon Alexander, an iOS developer at Black Pixel. He has links for some handy tools and also covers an alternate way to enable location simulation using schemes.

Testing with the Extended Status Bar

iOS Simulator comes with a number of debug options to assist you in testing your iPhone and iPad apps. Two of my favorite and most used options are found near the bottom of iOS Simulator’s Hardware menu: Simulate Memory Warning and Toggle In-Call Status Bar. We haven’t yet covered the in-call status bar, and it’s an often overlooked and under-tested scenario that deserves some attention.

Toggle In-Call Status Bar simply enables the double-height status bar that users would normally see when they are on a call outside of the Phone app. The extended status bar also appears with other background processes like tethering, voice recordings, GarageBand recordings, and Skype calls, among others. Because of the increasing number of apps that can cause this extended status bar to appear, it’s becoming more and more likely that your users will use your app in such a scenario, and therefore increasingly important that you test for it.

There are a few different scenarios for testing the extended status bar in each view: turning on the extended status bar while you’re in a view, turning it on prior to going into a view, disabling it while you’re in a view, and disabling it while you’re out of a view. Just because a view handles one of these properly doesn’t mean it will look good in all of these scenarios. Getting familiar with the ⌘-Y shortcut to enable and disable the extended status bar will help this testing go pretty quickly.

For most apps, the main thing to look for when enabling the extended status bar is that you can still scroll to the bottom of the view and nothing is cut off. Another bug that commonly manifests is a blank area at the top or bottom of the view that shows up when turning off the extended status bar. While many users may never encounter these scenarios while using your app, those who do will appreciate your attention to detail if you handle them properly. Even if they don’t notice, at least they won’t be leaving you a one-star review for bugs like having the back button half covered by the status bar.
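
When a view does break in one of these scenarios, the usual fix is to respond to the status bar frame changing rather than assuming a fixed height. A minimal sketch, using the long-standing UIApplication notification (shown here in modern Swift; later iOS versions deprecate this in favor of safe areas):

import UIKit

class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Listen for status bar frame changes, e.g. the double-height
        // in-call bar appearing or disappearing.
        NotificationCenter.default.addObserver(
            self,
            selector: #selector(statusBarFrameWillChange(_:)),
            name: UIApplication.willChangeStatusBarFrameNotification,
            object: nil
        )
    }

    @objc private func statusBarFrameWillChange(_ notification: Notification) {
        guard let value = notification.userInfo?[UIApplication.statusBarFrameUserInfoKey] as? NSValue else { return }
        let newFrame = value.cgRectValue
        // Adjust scroll view insets or relayout here so content
        // isn't cut off under the taller status bar.
        print("Status bar frame will change to \(newFrame)")
    }
}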

The Real Value of Panic’s Status Board

Today Panic announced the release of their new iPad app, Status Board. I was fortunate enough to have the opportunity to spend some time with it before its release, and I can confirm what most of you probably already anticipate: it’s a phenomenal app. I’ll spare you another review, because there are already great ones to be read elsewhere. But there are a couple of features in Status Board that I wanted to talk about, ones that I think have a tremendous amount of value and potential.

The first feature that I’m really excited about is HockeyApp integration. One of the widgets available in Status Board is a custom graph. HockeyApp has made it incredibly simple to generate URLs that you hand to Status Board, feeding the widget the data necessary to chart a graph of your crash numbers. You can fit up to 6 different graphs comfortably in Status Board’s landscape orientation, and up to 8 in portrait. I’m hopeful that daily crash numbers are just the beginning. The HockeyApp API offers a lot of useful data, and the data you can graph in Status Board is really only limited by what people decide to make scripts for. This leads me to the second thing that I’m excited about.

Status Board also has widgets for custom tables and do-it-yourself panels. Combined with the custom graph widget, there are countless possibilities for the data and information to be displayed in Status Board. I have a suspicion (or possibly more of a hope) that a lot of users will quickly see different scenarios and opportunities to create scripts that will populate interesting and useful data for various widgets. Maybe the number of builds they have each day, or GitHub pushes, or Pivotal velocity, or bugs closed in Lighthouse. The list goes on and on. I’ve already seen a number of people on Twitter getting excited just thinking about the possibilities. As developers create tools to generate data for their own widgets, I hope they’ll be kind enough to share them with others. Before long we may have a laundry list of tools that you can use to create entertaining and helpful widgets for your status board.

So with that, I encourage you to go check out Status Board if you haven’t already, and get cracking on one of the first must-have dataset generators.

Update: Chris Patterson has already gotten the ball rolling over here.

Goalie: The Native iPhone HockeyApp Client

Wonderful news in the HockeyApp community today. Brian Gilham and Mark Pavlidis have released their native HockeyApp client for iOS: Goalie. HockeyApp is a platform for managing your apps. It offers everything from beta management to crash reporting, and it has become an indispensable tool for many developers and testers in the iOS community (and elsewhere), myself included. If you haven’t checked out HockeyApp yet, you should.

HockeyApp offers a web clip (a web page with an icon on your home screen), but its functionality is pretty limited. Goalie is the first native iOS client for HockeyApp. For testers, it offers functionality similar to the web clip, allowing you to install your available apps. For developers, it offers many more features for managing betas, including viewing crash reports, handling user feedback, adding new apps, viewing analytics, and managing your teams.

While the app is free, you’ll want to buy the in-app purchase to unlock all of the features for managing betas; it’s well worth the price. For anybody who uses HockeyApp on any sort of regular basis, this is a must-have app.

iTunes Link