On this page I plan to collect and deconstruct complex Apple-specific jargon while researching the iOS platform. Hopefully this page won't get too long; if it does, I'll reconsider how I lay out this information. Anything too complex to properly break down here will get a link out to a relevant blog post, either one I write or an article on the web that I feel answers the question well enough.
Why does every class in the iOS standard library start with NS?
The standard Apple libraries go back all the way to the NeXTSTEP days. NeXTSTEP was an operating system written by NeXT Inc., the company Steve Jobs set up during his time away from Apple (1985–1997).
The programming language of NeXTSTEP was Objective-C, which is an extension of C. Seeing as C didn't have a module system or any way to namespace components, a common pattern was to prefix library/framework elements with some sort of unique prefix. For NeXT, the original prefix was NX (though that wasn't always the case; a lot of framework components didn't have prefixes at first).
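To make the collision problem concrete, here's a tiny C sketch. The function names are invented for illustration, not real NeXT APIs: because C has no namespaces, two frameworks can only coexist in one program if their public symbols don't clash at link time, so each picks a prefix.

```c
#include <string.h>
#include <assert.h>

/* Hypothetical sketch: C has no namespaces, so two frameworks that both
 * want to export a "library name" routine avoid clashing at link time
 * by prefixing their public symbols. NX/NS here are illustrative names. */

/* what a NeXT-era framework might export */
const char *NXLibraryName(void) { return "NX AppKit"; }

/* what a later, NS-prefixed framework might export */
const char *NSLibraryName(void) { return "NS Foundation"; }
```

Without the prefixes, both would be a bare `LibraryName` symbol and the linker would reject the program.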
The big change that made the NS prefix take off was the creation of NSString. Seeing as a string is one of the most common data objects you are likely to use in any language, the prefix kind of went viral (and was probably enforced internally).
Contrary to popular belief, the NS did not come from the combination of NeXT and Sun under the OpenStep standard. It would seem that was just a happy accident.
What exactly is a sandbox?
The sandbox on iOS and OS X is a security policy system that apps need to interact with if they wish to run on these Apple operating systems (mandatory on iOS, only mandatory on OS X if you want to distribute through the Mac App Store). The idea comes down to the Principle of Least Privilege: if you don't ask for an operating system/file system privilege, you don't get it.
The presentations I've watched by Apple engineers liken the sandbox concept to the automotive safety industry. Cars have smart methods of preventing motor vehicle accidents. However, if the worst were to happen, the industry also puts a lot of focus on damage mitigation during an accident (seat belts, airbags, etc.). Sandboxing is the damage-mitigation system for modern operating systems.
Ivan Krstić, an Apple engineer on the Core OS security team, talked a bit about the history of sandboxing. One thing Ivan pointed to was the way the original Unix systems enforced the policy that operating system users should be protected/blocked from each other, but they stopped at that level and allowed a process total access within its user's execution space.
Sandboxing removes that assumption and enforces stricter security within the user's process execution space. That way, if an application were to run amok, whether deliberately or through genuine bugs, it would be less likely to delete or corrupt important user data that has nothing to do with that application.
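The least-privilege idea boils down to default-deny: an operation is blocked unless the app declared the matching entitlement up front. A minimal C sketch of that check (all names here are invented; the real sandbox is far more sophisticated):

```c
#include <assert.h>

/* Toy sketch of default-deny, least-privilege checking. The privilege
 * names and Sandbox type are invented for illustration. */
enum {
    PRIV_NONE        = 0,
    PRIV_READ_PHOTOS = 1 << 0,
    PRIV_NETWORK     = 1 << 1
};

typedef struct {
    unsigned granted;   /* privileges the app asked for up front */
} Sandbox;

/* Nothing passes unless it was explicitly granted: default-deny. */
int sandbox_allows(const Sandbox *s, unsigned priv) {
    return (s->granted & priv) == priv;
}
```

An app that only declared network access gets the network and nothing else; a bug that tries to read the photo library is simply refused.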
What is ARC?
For those of us who deal with dynamic languages for our day jobs, memory management isn't something we find ourselves thinking about too frequently. I did low-level programming in C and C++ in college, but long gone are the days of malloc/free, alloc/dealloc, new/delete and constructors/destructors.
ARC stands for Automatic Reference Counting. In languages without managed memory (C, C++, Objective-C, etc.) you have to ask the OS for memory for each object your system wishes to hold, and when you've finished with it you need to release that memory. It's all about being a good citizen, and in constrained environments like iOS, being greedy/inefficient can get your app killed by the OS.
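That ask/release cycle looks like this in plain C (the `Player` type and helper functions are invented for illustration):

```c
#include <stdlib.h>
#include <string.h>
#include <assert.h>

/* Minimal sketch of manual memory management: every allocation the
 * program asks the OS for must eventually be handed back with free(),
 * or the memory leaks for the life of the process. */
typedef struct {
    char name[32];
    int  score;
} Player;

Player *player_create(const char *name, int score) {
    Player *p = malloc(sizeof *p);           /* ask the OS for memory */
    if (p == NULL) return NULL;
    strncpy(p->name, name, sizeof p->name - 1);
    p->name[sizeof p->name - 1] = '\0';
    p->score = score;
    return p;
}

void player_destroy(Player *p) {
    free(p);                                 /* give the memory back */
}
```

Forget the `player_destroy` call and the memory is gone for good; call it twice and you corrupt the heap. That bookkeeping burden is exactly what ARC takes away.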
Before ARC there was manual reference counting, a convention for managing memory/objects by hand in your application. The idea is that your objects/data can have complex relationships, so you can't just discard the memory/object as soon as you hit the end of a particular method. Instead, when you create an object you tell the reference-counting system, "I have created this object and it is being used/owned by this other parent object; don't destroy it." Later, that parent object relinquishes control by decrementing the counter, and as soon as the underlying reference system sees a count hit zero it destroys the object and frees the memory back to the OS.
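The convention above can be sketched as a toy reference-counting scheme in C (`rc_retain`/`rc_release` are invented names; the real Objective-C messages are `retain` and `release`):

```c
#include <stdlib.h>
#include <assert.h>

/* Toy manual reference counting: each object carries a counter of how
 * many owners it has, and it is freed when the last owner lets go. */
typedef struct {
    int refcount;
    int payload;
} RCObject;

RCObject *rc_new(int payload) {
    RCObject *o = malloc(sizeof *o);
    o->refcount = 1;                 /* the creator owns one reference */
    o->payload  = payload;
    return o;
}

void rc_retain(RCObject *o) {
    o->refcount++;                   /* "I'm using this object too" */
}

/* Returns 1 if this release destroyed the object, 0 otherwise. */
int rc_release(RCObject *o) {
    if (--o->refcount == 0) {
        free(o);
        return 1;
    }
    return 0;
}
```

Every `rc_retain` must be balanced by exactly one `rc_release`; get that wrong by hand and you either leak or crash, which is why automating it was such a big deal.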
ARC is a technology designed by Apple that does all the complex donkey work of incrementing/decrementing counters automatically for you. ARC is not "garbage collection" as you might know it from a virtual machine; it is done at compile time. Whereas VMs have complex technology that traces an object throughout its lifecycle, reference counting just adds a little overhead to each object to hang onto its reference-count information. A tracing garbage collector runs elaborate algorithms that trace live objects and move them between zones as they age; it's too complicated to explain here.
ARC is possible thanks to technology baked into the Clang compiler. As long as you give Clang the right pieces of general information describing how your various objects relate to one another, it injects the reference-counting calls into your code at compile time. That way, when your program runs, it does so as though it were a hand-coded, manually memory-managed system.
There are reasons why you'd prefer garbage collection over reference counting, and it's clear that GC has largely won out (look at the .NET stack, for instance). Objective-C and Swift are reference counted because of the nature of the underlying technologies. It's likely that if the system were redesigned all over again, it would use some sort of tracing garbage collector. But don't quote me on that, I'm still learning about all this stuff.
If you want to learn more about the guts of ARC, read the Apple docs on “Transitioning to ARC Release Notes”.
What is Autolayout?
Previously in iOS, designing fluid, resizable layouts was difficult. It involved a lot of computation on the part of app programmers, which meant the programming/design cycle could be quite long. Even worse, a designer might suggest something a programmer hadn't anticipated, and achieving the design would mean rewriting lots of code.
Autolayout is what is referred to as a constraint-based layout system for iOS apps. Put simply, you tell iOS that you want a button to be roughly in a location and about a certain size, and you leave the layout system to do all the rest of the work. If you want a button near the bottom of the screen taking up about 90% of the width of the screen, you can achieve that easily enough. What's better is that when the screen size or even the screen orientation changes (which is very common on mobile devices), the layout copes and resizes everything to the right proportions.
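Under the hood, each constraint is essentially a linear equation of the form attribute = multiplier × otherAttribute + constant, and the layout engine re-solves the system whenever the screen changes. Here's a hand-rolled C sketch of solving the button example above (illustrative arithmetic only, not Apple's API):

```c
#include <assert.h>

/* Each field below is the solution of one constraint equation:
 *   button.width   = 0.9 * screen.width
 *   button.height  = 44            (fixed intrinsic height, assumed)
 *   button.centerX = screen.centerX
 *   button.bottom  = screen.bottom - 20
 */
typedef struct {
    double x, y, width, height;
} Frame;

Frame layout_button(double screenW, double screenH) {
    Frame f;
    f.width  = 0.9 * screenW;             /* 90% of the screen width */
    f.height = 44.0;                      /* fixed height */
    f.x      = (screenW - f.width) / 2;   /* centered horizontally */
    f.y      = screenH - f.height - 20;   /* 20pt above the bottom */
    return f;
}
```

Rotate the device and you just call the solver again with width and height swapped; the same four equations produce a sensible landscape frame with no per-orientation code.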
In ways it probably sounds familiar to web developers, because of the responsive design trends that have appeared over the years on the web. Constraint-based layout systems aren't new, not by any stretch of the imagination. There are papers going back to OOPSLA '88, a famous computer science conference, that talk about building GUIs using constraint-based layout.
Autolayout is a well tested and well worn way of making UIs when you know you need to target very different screen sizes, which all iOS apps need to do nowadays.
What was/is Chipgate?
Chipgate was a "scandal" that plagued the iPhone 6s launch at the Apple fall event of 2015. It became public knowledge pretty soon after launch that Apple had involved two different chip manufacturers, Samsung and TSMC, to create their new A9 chip. This of course wasn't unusual or even unprecedented; there was plenty of analysis that said the A8 had dual manufacturers as well (again Samsung and TSMC).
One of the main bugbears was that there seemed to be different battery consumption between the two A9 chips, with the Samsung chip supposedly more power hungry. From what I've heard, that claim is quite dubious. My main perspective on this is based on what John Poole of Geekbench said on episode 79 of the Debug podcast.
The general gist is that the various benchmarking tools/analyzers people were using to substantiate the claim that battery performance was worse on one chip than the other weren't representative of real-world usage. Mostly they would just pin the chip for a few hours and look at the plain figure of how long the battery took to die. This is naive because, for one, phones have dedicated hardware for various kinds of load; if you're watching video, your phone uses its H.264 hardware codec, which is super low power and efficient.
Either way, there are a lot of variables that can make one phone's battery usage differ from another's (with the same hardware). Most of the data showed differences that could just as easily be explained by naive benchmark tools, poor control conditions during testing, and many other factors.
- “Chipgate FAQ: Everything you need to know about iPhone 6s controversy” by Evan Killham
- That Mitchell and Webb Look sketch "Watergategate"
- Recode article "Teardown Shows Apple's iPhone 6 Cost at Least $200 to Build" by Arik Hesseldahl
- Wikipedia article on the Apple A9
- Wikipedia article on "Lithium-ion batteries"