Review: The iPhone XS Camera

The dual 12-megapixel cameras on the iPhone XS models have specs virtually unchanged from the original iPhone X: two vertically aligned 12-megapixel cameras, an f1.8 wide-angle lens and an f2.4 2x telephoto lens, and a 7-megapixel forward-facing camera as part of the TrueDepth module and its passel of sensor technologies. However, all of it is backed with new image sensors, lenses, and a brand-new image signal processor that comes as part of the new A12 Bionic chipset.
Like me, Pete Souza, who served as chief official White House photographer during the Obama administration, marveled over the updated portrait mode capabilities that let you adjust the background blur effect after you take the photo, with either the front or the rear dual-camera system. He told me he didn’t think consumers would take notice of the f-stop numbers on the interface.

Later, on the phone, Souza said, “I think they’ll use it and not really understand it.” But he added that consumers will understand the results and see how “when they go in one direction, everything other than what’s in focus gets less in focus and the other way things get more in focus.”

“I love this decision by the team to honor art of photography and the work that went into characterizing how great lenses work,” said Apple senior vice president of worldwide marketing Phil Schiller when I asked about the decision to include f-stop numbers in the depth editor interface.

Schiller, along with Graham Townsend, Apple’s senior director of camera hardware, and Sebastien Marineau-Mes, Apple’s vice president of software, sat down late on the afternoon of iPhone XS launch day to peel away the veil of secrecy surrounding at least one part of Apple’s iPhone technology matrix: how they design and develop their photo and video capture hardware and software.

The numbers consumers will see on these phones and through the photo editing app are not just an old-school nod to how f-stops and aperture control work on DSLR cameras. Schiller told me Apple engineered an exact model of how a physical lens at those aperture numbers would behave.
In a physical camera, a higher-numbered f-stop represents a smaller aperture opening and a deeper depth of field. A wide-open setting of f1.4, for example, keeps the front of a subject’s face in focus while the background goes fuzzy, whereas a setting of f16 puts almost everything, front and back, in focus.
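
For readers who want the underlying optics, the standard thin-lens approximation makes the relationship concrete. The sketch below is my own illustration, not anything from Apple; the 85 mm lens and the distances are arbitrary examples.

```swift
import Foundation

// Diameter of the blur circle ("circle of confusion") on the sensor, in millimeters,
// for an object at `objectDistance` when the lens is focused at `focusDistance`.
// Standard thin-lens approximation; all distances in millimeters.
func blurCircleDiameter(focalLength f: Double,   // e.g. 85 (mm)
                        fNumber n: Double,       // e.g. 1.4 or 16
                        focusDistance s: Double, // distance to the in-focus subject
                        objectDistance d: Double) -> Double {
    let aperture = f / n // the aperture opening shrinks as the f-number grows
    return aperture * f * abs(s - d) / (d * (s - f))
}

// Background 3 m behind a subject at 2 m, shot with an 85 mm lens:
let wideOpen    = blurCircleDiameter(focalLength: 85, fNumber: 1.4, focusDistance: 2000, objectDistance: 5000)
let stoppedDown = blurCircleDiameter(focalLength: 85, fNumber: 16,  focusDistance: 2000, objectDistance: 5000)
// wideOpen is roughly 1.6 mm of blur on the sensor, stoppedDown roughly 0.14 mm:
// the higher f-stop keeps far more of the scene acceptably sharp.
```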

The first expression of this kind of photography on smartphones appeared in 2016 with the iPhone 7 Plus and its portrait mode, which used the two images grabbed by its dual-lens system (and some algorithmic magic) to create a background-blur, or bokeh, effect. That alone was a radical innovation for amateur iPhone photographers, transforming mundane portraits into studio-quality images. Even so, like virtually all other smartphone-based portrait-mode photography that followed, it was a two-plane version of the depth effect: the images held the foreground object in focus and blurred the back plane.

Samsung was the first to introduce adjustable blur that could be used during photography and in post-processing, but Samsung’s Live Focus still sees the image as two planes.

Like a lens

What’s clear from using the new iPhone XS and XS Max is that the depth slider captures almost unlimited planes between the foreground and background. Apple calls this “lens modeling.”
“We turned the model of a lens into math and apply it to that image,” explained Marineau-Mes. “No one else is doing what this does. Others just do ‘blur background.’” And the post-processing works equally well whether you’re taking a selfie with the iPhone XS’s single 7-megapixel front-facing camera, a portrait with the dual-lens system on the iPhone XS or XS Max, or a photo with the single 12-megapixel rear camera on the iPhone XR.
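
Apple didn’t share the actual math, but the general shape of the idea can be sketched: take a per-pixel depth map plus the slider’s simulated f-number and derive a blur radius for every pixel, so the blur varies continuously with distance from the focus plane rather than switching between two planes. Everything below (function names, the scaling constant) is illustrative only, not Apple’s algorithm.

```swift
import Foundation

// A generic illustration of "lens modeling" for synthetic depth of field:
// given a per-pixel depth map and the simulated f-number chosen on the slider,
// derive a blur-kernel radius for every pixel. Blur grows with distance from
// the focus plane and shrinks as the simulated aperture closes down.
func blurRadiusMap(depthMap: [[Double]],      // depth per pixel, in meters
                   focusDepth: Double,        // depth of the subject the user tapped
                   simulatedFNumber: Double,  // the depth slider's value, e.g. 1.4...16
                   maxRadius: Double = 25) -> [[Double]] {
    depthMap.map { row in
        row.map { depth in
            // Relative distance from the focus plane, scaled down by the f-number,
            // mirroring how a physical lens behaves (wider aperture, more blur).
            let defocus = abs(depth - focusDepth) / max(depth, 0.001)
            return min(maxRadius, defocus / simulatedFNumber * 100)
        }
    }
}

// Sliding from f16 toward f1.4 increases every background pixel's blur radius by
// roughly 11x (16 / 1.4) until it hits the maxRadius clamp, while pixels at the
// focus depth stay at radius 0 (sharp) at every slider position.
```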


Put simply, Apple is employing three distinctly different depth-information-capturing technologies to drive the same depth editing result. Townsend described it to me as using three different sources of information: the dot-based depth sensor in the TrueDepth module, the dual-lens stereo imagery of the 12-megapixel cameras on the back of both the XS and XS Max, and an almost entirely algorithmic solution on the XR.

Apple’s depth editing is all the more remarkable because it lets you adjust the aperture in post without touching the exposure. In traditional and DSLR photography, every adjustment of the f-stop has to be met with a correlative adjustment of the exposure setting. A smaller aperture means less light while a more open one blows out the exposure in the photo unless you increase the shutter speed.

However, sliding the depth editor back and forth on an iPhone XS image adjusts that blur exponentially while somehow maintaining your original exposure. It’s a heavy lift computationally, but Marineau-Mes said they do it all in real-time.

Seeking professional quality

Souza, who had been test-driving the iPhone XS at Washington, D.C.’s Natural History Museum, described the depth edit feature to me as “pretty, pretty nice.”

When he tried portrait mode on the life-size early-human heads in the dioramas at the museum — even through the display glass — the results were impressive. “To be able to change the f-stop and get your pinpoint focus… I was using the stage lighting [one of the settings in portrait mode] to darken the background, yet the eyes are still sharp as a tack,” he said.

“I would compare it to… I use Canon, a Canon 85 mm lens. I use it as a widest aperture. That’s the effect you’re getting,” Souza added.

An admitted Apple and iPhone enthusiast (other than early flip phones, the 63-year-old photographer said he has never owned a different brand of phone), Souza said that while he almost invariably used a DSLR when photographing Obama (“the images were going in the National Archive,” he explained), he always had the iPhone (models 5 through 7) to shoot more casual photos. He noted, “I have hundreds of snow pictures, pictures of Rose Garden, nobody in it, [taken] with the iPhone.”

Obviously, professional photographers know the limits of smartphone photography. Even with multiple lenses, telephoto capability, and post-processing, it’s hard to replace what you can do with full-frame 35 mm sensors and a 55 mm or larger lens.

But that hasn’t stopped Apple from trying. Apple’s multi-pronged effort to put pro-level photographic capabilities in the hands of millions of iPhone users starts with addressing image capture (both photo and video) as a system.

“We’re not like a hardware company; we’re not like a software company. We’re a system company,” Townsend said, emphasizing what’s become the hallmark of Apple’s success across a wide array of consumer electronics categories: the ability to control the full stack, from design and development through hardware and software and virtually everything in between.
“We have the privilege of being able to design everything from the photon first entering the lens right through to the format of the captured file. Only Apple is able to customize and match together,” Townsend added.


Part of the reason Apple does this, especially with a system as intricate as imaging, is that components that work well do not always work well together. Apple’s penchant for bespoke components from third-party partners is well-known, but it goes further than that.
To manufacture something like the dual-camera system, Apple has to ensure that the two lenses are at a precise point and tilt.

“We have tight specs,” Schiller said, smiling.
If a partner says it can’t reproduce Apple’s spec, Apple brings in experts dedicated to the camera manufacturing process. They work with the manufacturers, Apple has said, to re-engineer custom versions of their equipment.

“One of the really big things we aim for is the first phone, the 1 millionth phone, and the 10 millionth phone we want that experience to be as close as we can humanly manage, and we put a lot of effort, and it’s not something we talk a lot about… but it’s really important to us that there’s no big variation in performance between any phone anywhere the world,” Townsend said.
Internally, Apple’s hardware and software teams have been meeting regularly to define the imaging features they’ll deliver through fresh silicon like the A12 Bionic (which has taken three years to develop) and on new hardware like the iPhone XS.

“The architectural decision to deliver HDR [High Dynamic Range] has to come at the beginning of conversation of chip architecture,” Townsend noted.

Even so, different components can arrive at different times. So, there’s the agreed-upon architecture plan, and then there are the on-the-fly adjustments.

If the A12’s development started three years ago and two or three generations of iPhones and camera systems are delivered in that time, adjustments have to be made to match new lenses and image sensors with the image signal processor (ISP). Fortunately, Marineau-Mes says his team can still program the ISP, which is part of the A12’s chipset, to match the lenses and sensors found on the iPhone XS, XS Max, and XR.

And there can be tradeoffs. “We’ll do this in the lens, it’s going to give us a better image, but we have to do this in the ISP,” said Marineau-Mes.
That kind of cross-department coordination proved crucial in the development of Smart HDR, the second jewel in Apple’s image processing crown.

In traditional HDR, a pair of images captured at different exposures is used to preserve details in the dark and over-bright areas of a photo. Depending on the disparity, it’s likely that, even in the best HDR images, some detail (or maybe a lot) will be lost, or the final combined image will have significant amounts of noise, especially in the case of action shots. Additionally, HDR can introduce a bit of shutter lag, which makes motion photography almost impossible.
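
To make that concrete, here is a toy sketch of a two-exposure merge (my own illustration, not any camera vendor’s pipeline): each pixel is blended from whichever frame is better exposed. Real pipelines also align the frames and tone-map the result, which is exactly where the ghosting and noise problems described above creep in.

```swift
import Foundation

// Toy version of classic two-exposure HDR: blend an underexposed and an overexposed
// frame per pixel, weighting whichever frame is better exposed (farther from pure
// black or pure white). Frame alignment and tone mapping are skipped here.
func mergeExposures(dark: [Double], bright: [Double]) -> [Double] {
    // Pixels are normalized luminance values in 0...1.
    zip(dark, bright).map { d, b in
        // "Well-exposedness" peaks at mid-gray and falls to zero at the clipped extremes.
        let wd = 1.0 - abs(d - 0.5) * 2.0
        let wb = 1.0 - abs(b - 0.5) * 2.0
        let total = wd + wb
        // If both pixels are clipped, fall back to a plain average.
        return total > 0 ? (d * wd + b * wb) / total : (d + b) / 2.0
    }
}
```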

In my tests with the iPhone XS and XS Max, Smart HDR produced high-quality images in challenging situations that had stumped even the year-old iPhone X. There are, as Schiller said during the keynote speech, trillions of operations occurring with each photo to make this possible, but it starts with the ISP and its high readout capabilities.

As Marineau-Mes explained to me, the camera starts by capturing two image frames at different exposures in one-thirtieth of a second. That information is passed on to the software pipe and the A12’s neural engine, which starts analyzing the images.

Smart HDR doesn’t stop there. “Raw material is captured at 30 fps (that’s a pair every thirtieth of a second), and the fusing happens in a few hundred milliseconds,” Marineau-Mes pointed out.
Inside the A12 chip is the neural engine that analyzes frames not just for exposure but for discrete image elements. It’s identifying facial features and looking for motion. If the system detects motion, it looks for the frame with the sharpest capture of that motion and adds it to the image. Similarly, an image with red-eye isn’t just fixed; it’s replaced with a frame where the eye isn’t red, or with the reference eye color from a frame without red-eye.

Earlier in the day, Apple had shown me a photo of a dreadlocked man standing in a lake. He’s dramatically backlit, though I could easily see his torso. His hair is mid head-toss, so his dreads flare out and water is captured flying off into the air. If I were shooting the image with a DSLR, I’d set the shutter speed to at least 1/500th of a second but keep the aperture somewhat closed, maybe f11, to try and maintain some of the image depth. I’d also have to raise the ISO level to pull in enough light, which would probably introduce a lot of grain.

This perfectly frozen and exposed photo, however, was taken with an iPhone XS.
“We set a reference frame and fuse in information from multiple frames,” Marineau-Mes said. The image I saw was a composite of multiple frames. Some of those frames contained pieces of what would become the final image, like the perfectly sharp hair and water.
“As you stack the frames, if you have the same image, you have lower and lower noise and better and better detail,” he explained.
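
That noise claim is easy to sanity-check with a toy simulation of my own (this is just the statistics, not Apple’s fusion code): average several already-aligned noisy captures of the same scene and the random error drops roughly with the square root of the frame count.

```swift
import Foundation

// Toy demonstration of why stacking frames cuts noise: average several noisy,
// already-aligned captures of the same scene and the random noise shrinks roughly
// with the square root of the frame count while real detail survives.
func stack(frames: [[Double]]) -> [Double] {
    guard let first = frames.first else { return [] }
    var sum = [Double](repeating: 0, count: first.count)
    for frame in frames {
        for (i, value) in frame.enumerated() { sum[i] += value }
    }
    return sum.map { $0 / Double(frames.count) }
}

// Simulate 8 noisy captures of a flat gray patch whose true value is 0.5.
let truth = 0.5
let frames = (0..<8).map { _ in
    (0..<1_000).map { _ in truth + Double.random(in: -0.1...0.1) }
}
let fused = stack(frames: frames)
let averageError = fused.map { abs($0 - truth) }.reduce(0, +) / Double(fused.count)
print("average error after stacking 8 frames:", averageError) // roughly 1/sqrt(8) of a single frame's error
```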

It takes an incredibly powerful ISP and neural engine backed by an equally powerful GPU and CPU to do all this processing, Marineau-Mes said.
All that heavy lifting starts before you even press the iPhone camera app’s virtual shutter button. Schiller said that what users see on their iPhone XS, XS Max, and XR screens is not dramatically different from the final image.

“You need for that to happen,” said Schiller. “It needs to feel real-time for the user.”
When I asked what all this gathered information meant for file size, they told me that Apple’s HEIF format results in higher quality but smaller file sizes.

Sometimes Apple’s engineers arrive at better image technology almost by accident. Last year, Apple introduced flicker detection, which looks for light-source refresh frequencies and tries to reduce flicker in still and video imagery. While incandescent and fluorescent lights have consistent refresh frequencies, which makes it easy to figure out exposure times, modern energy-saving LEDs operate at all sorts of frequencies, especially the ones that change hue, Townsend explained. This year, Apple engineers widened the range of recognized frequencies to further cut down on flicker. In doing so, they realized they could now also immediately identify when the sun is in the picture (“The sun doesn’t flicker,” Townsend noted) and instantly adjust the white balance for the natural light.
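
Conceptually, flicker detection comes down to looking for periodic brightness variation at plausible lighting frequencies. The sketch below is my own crude illustration of the idea; the candidate frequency list and the threshold are made-up examples, and this is not Apple’s implementation.

```swift
import Foundation

// Crude illustration of flicker detection: correlate a sampled luminance signal
// against a handful of candidate flicker frequencies (mains lighting typically
// flickers at 100 or 120 Hz; LEDs can sit almost anywhere) and report the
// strongest one. A flat response suggests a steady source such as sunlight.
func dominantFlickerFrequency(luminance: [Double],
                              sampleRate: Double,
                              candidates: [Double] = [100, 120, 240, 300, 480],
                              threshold: Double = 0.01) -> Double? {
    guard !luminance.isEmpty else { return nil }
    let mean = luminance.reduce(0, +) / Double(luminance.count)
    var best: (freq: Double, power: Double)? = nil
    for f in candidates {
        // Single-frequency DFT bin: project the zero-mean signal onto sin/cos at f.
        var re = 0.0, im = 0.0
        for (n, y) in luminance.enumerated() {
            let phase = 2.0 * Double.pi * f * Double(n) / sampleRate
            re += (y - mean) * cos(phase)
            im += (y - mean) * sin(phase)
        }
        let power = (re * re + im * im) / Double(luminance.count * luminance.count)
        if best == nil || power > best!.power { best = (f, power) }
    }
    // Below the threshold, treat the light as non-flickering ("the sun doesn't flicker").
    guard let winner = best, winner.power > threshold else { return nil }
    return winner.freq
}
```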

“Our engineers were kind of working at that, and they spotted this extra information. So, this is the bonus that we get from the flicker detect,” Townsend said.

Video and new frontiers

All of these image capture gymnastics extend to video as well, where the same frame-to-frame analysis is happening in real-time to produce video with more details in high and low light.
It occurred to me that with all that intelligence, Apple could probably apply the depth editor to video, adding the professional polish of a defocused background to everyday video shoots. But when I asked Schiller about it, he would only say that Apple does not comment on future plans.

Video or stills, the end result is a new high-water mark for Apple and, perhaps, smartphone camera photography in general. The company gets emails, Townsend told me, where people say, “I can’t believe I took this picture.” It’s an indication that Apple’s achieving a larger goal.

“We make cameras for real people in real situations,” Townsend said. “They’re not on tripods; they’re not in the labs. They want their picture to be a beautiful picture without thinking very much about it.”
Back on the phone, Souza, whose book on the photographic contrast between the Obama and Trump administrations, Shade, will be on sale starting in October, told me he’s long been impressed with the iPhone’s closeup shot capabilities. “I’m continually amazed how close you can get to your subject with an iPhone. The minimum focusing distance is better on an iPhone [than] a DSLR unless you have a macro [lens] with you.”

That morning, Souza had printed out a pair of his iPhone XS shots. He said he thought they looked like they’d been captured with a DSLR.

I emailed Souza one last question: If he were asked by a future president to photograph the administration, would he use an iPhone for official photos or still rely on his DSLR?
“DSLR,” he wrote back, perhaps crushing a few Apple dreams, “though [I] would still use iPhone for some Instagram posts as I did during the Obama administration.”

That calculation, switching between the always-in-your-pocket iPhone and an expensive DSLR camera, surely is one that Apple must hope Souza and other photographers won’t always have to make.
Lance Ulanoff

Review: iPhone Xs & Xr  All you need to know

Review: iPhone Xs & Xr   All you need to know

This year we said goodbye to the home button with the introduction of 3 new iPhones during the Apple Event in Cupertino. Centre stage was taken by two products: the updated iPhone Xs and Xr lineup and the new Apple Watch Series 4.

In this article I am summarising the main highlights from Apple’s keynote. In short, this year’s event can mainly be characterised by small incremental updates and spec bumps, and there is actually nothing to really get excited about. So let’s start with the new iPhone X lineup.


iPhone XS & XS Max

Design

In terms of the design, not a lot has changed. The new iPhone lineup will rock a familiar design from last year’s iPhone X, with the same combination of stainless steel and glass. It will come in 3 colours: Gold (new), Silver and Space Grey.

The display

In terms of the screen, both the iPhone Xs and Xs Max will come with TrueTone OLED Super Retina panels. During the keynote, they didn’t talk about the screen-to-body ratio, so we can assume that the bezels didn’t shrink.


This year, the iPhone X comes in 2 screen sizes:

5.8" screen called the iPhone Xs
6.5" screen called the iPhone Xs Max

The bigger screen (iPhone Xs Max) will offer a resolution of 2688×1242 with a 458 ppi density. In addition to that, the screen will offer a 60% greater dynamic range, TrueTone and of course the standard Apple notch, which is packed with an upgraded set of sensors. Thanks to the updated sensors, the new iPhones should offer better FaceID performance and security. Again, Apple was not very specific here when talking about how much faster and more secure FaceID will be.

Chipset

The biggest changes actually happened under the hood with the introduction of their new A12 Bionic chip. Throughout the keynote, Apple was flexing the muscles of this new chip by demonstrating the AR and machine learning capabilities of their new iPhone lineup. In short, Apple claims that this new A12 Bionic chip will be:

15% faster
40% more power efficient
50% faster in GPU performance

All this combined should result in 30% faster app opening speeds, better AR performance and a “console-like” gaming experience.

Camera

Let’s talk now about the second biggest change: the camera. Both the Xs and Xs Max will rock a dual-lens camera with a primary wide-angle 12 MP shooter at an f1.8 aperture and a secondary 12 MP telephoto shooter with an f2.4 aperture. Both of them are optically stabilised, and they will offer Smart HDR, which enables you to take bursts of photos with zero shutter lag.

They will also come with a TrueTone flash. The front camera comes with a 7 MP shooter at an f2.2 aperture. An additional feature that this new chip and camera combo offers is the ability to edit the depth of field of your portraits. In terms of video, there will now be support for wide stereo audio recording and playback.

Storage

Storage-wise there are 3 options: 64 GB, 256 GB and 512 GB, but of course, in line with Apple’s tradition, there won’t be any expandable storage option.

Battery

The battery got a slight bump as well. But as always, Apple was very vague in terms of actual battery capacity. What they promised is that:

The iPhone Xs will have 30 min more battery life compared to the iPhone X from last year
The iPhone Xs Max will pack an even bigger battery and should deliver 1.5 hours more battery life than last year’s iPhone X

Other

Not to forget some other things like IP68 water resistance and, for the first time in the history of the iPhone, dual SIM support (one physical SIM and one eSIM).


Price

iPhone Xs starts at $999 for the base model
iPhone Xs Max starts at $1,099 for the base model and goes up to a very uncomfortable $1,449 price tag for the 512 GB version

Please note that for both of them you are not getting a fast charger or a headphone adapter. These items have to be purchased separately.

iPhone Xr

Now let’s focus on what’s, in my opinion, one of the biggest highlights of the event: the introduction of the more “affordable” member of the new lineup, called the iPhone Xr.
Why do I say “the biggest highlight of the event”? It’s because the iPhone Xr offers you a very compelling package of the latest Apple technology for a much more affordable price compared to the bigger Xs and Xs Max models.

The iPhone XR packs a very similar edge-to-edge screen design to its bigger brothers, with water and dust resistance.
It will come in 6 colours: white, black and the more vibrant yellow, red, blue and coral.

But there are of course some sacrifices made to get to this reduced price tag of $749 for the base model.
The display

Instead of an OLED screen you are getting a 6.1" TrueTone LCD screen that Apple calls “Liquid Retina”. This places the iPhone Xr right in between the 5.8" Xs and 6.5" Xs Max.

It offers a resolution of 1792×828 and, like the other models, it supports FaceID, has the TrueDepth sensors and offers a tap-to-wake option.

Chipset

Under the hood it packs the identical A12 Bionic chip. So if you combine the lower-resolution screen with this chip, this phone should be a real powerhouse.
An additional sacrifice is made in the camera department, where the dual-lens camera is replaced by a 12 MP single-lens camera with an f1.8 aperture. Hardware-wise, this camera is identical to its bigger brothers’. It will offer the same experience, with TrueTone flash, Smart HDR and the same bokeh experience with depth control. The same applies to the front camera.

Storage

It will also offer more humble storage options: 64 GB, 128 GB and 256 GB.

Battery

For the battery, again, there is no specific capacity information, but it should be able to deliver 1.5 hours more than the iPhone 8 Plus. During the keynote they did not mention fast charging or wireless charging, but the website says it supports both.

Price

The base model of the iPhone Xr will have a price tag of $749.
Please note that, just like for the Xs and Xs Max models, you are not getting a fast charger or a headphone adapter. These items have to be purchased separately.

Apple Watch Series 4

Apple also announced an updated version of the Apple Watch, the Series 4. By showcasing both the hardware and software capabilities, we can see that Apple is aiming to become an all-round health guardian.

With the improved watch sensors, it is now able to detect potential heart irregularities, and it can also recognise if you fall to the ground and let you make an SOS call or notify your emergency contacts with your exact location. But in essence, the story here is the same as for the iPhones: not a lot has changed!

The screen got a bit bigger. They are talking about an edge-to-edge screen but from the keynote it was visible that the screen doesn’t really stretch edge to edge. The chip has improved, the speaker is 50% louder and the battery delivers the same 18 hours of operational time. And that’s it!

Conclusion

There is nothing to get too excited about this year; as said, it was mostly small, incremental updates on both the iPhone and the Apple Watch.

This is what I can recommend:

If you already own an iPhone X, there is no need to upgrade. In essence you would be getting the same phone, just slightly faster, and that’s practically it.
If you own an iPhone 8 or 7, the more compelling option is the iPhone XR, because it offers a familiar screen experience from the iPhone 8, just refreshed to the standards of 2018 with an edge-to-edge design.

The iPhone XR offers the full Apple experience with only a slight compromise on the screen and camera. The smaller storage capacity compared to the more expensive iPhones is not really an argument either, since the majority of your photos, music and movies are probably in the cloud by now.

If you own an Apple Watch Series 3, there is also no reason to upgrade. The only exception is maybe if you are suffering from a cardiovascular disease, where you could benefit from the newly implemented ECG sensor.

Prices

Another thing we witnessed this year is Apple’s rude push towards even higher smartphone prices. Why am I saying rude? Asking $1,449 for the 512 GB iPhone Xs Max model, which in Europe by the way costs $1.9K, is just plain disrespectful. There is nothing in this phone that justifies a price tag of over $1,000, and yet nobody can deny that this is the way the smartphone industry is heading.

With these insanely inflated prices, we are already stepping into premium laptop territory. Just think about it… For the price of a base model iPhone Xs Max you could easily get a 12" MacBook, and for the price of the 512 GB version you can buy a new MacBook Pro 13".

Why does Apple’s 3D Touch fail miserably?

This week Apple revealed their new offerings for their most successful product and brand, the iPhone.

Most things announced in their keynote were well within the expectations of Apple’s product evolution. New improvements at the top of the line, including a more modern and bigger phone with a 6.5-inch diagonal, their largest phone ever.

However, there was a massive diversion in their product strategy with the introduction of the iPhone XR. This move is an extremely rare product decision that will likely shape the future of the iPhone and change the way it’s produced and marketed.


For those who didn’t watch the keynote or haven’t read enough about it, the XR is a weird phone. Here are some reasons why this phone is completely different than anything that we have seen from Apple before:

It’s cheaper than the XS but larger (the XS has 5.8" diagonal, while the XR has 6.1").
It has the same A12 Bionic processor found in the XS. However, it comes with 3GB of RAM (same as the X), while the XS comes with 4GB. This means that this phone is faster than the X and slightly slower than the XS.

It comes with a 128GB storage option. The only phone of the X family to offer it.
It has a single camera instead of a dual camera, but it’s still capable of producing a depth of field effect in portrait photos by relying on its software (a la Google Pixel).
It comes in six different colors. An old Apple strategy to create diversified demand. However, it does have a premium glass back, which makes a huge difference in how it’s marketed (remember the iPhone 5C? That felt cheap; this doesn’t).

It comes with what Apple calls “Liquid Retina,” an LCD screen that is worse than the OLED display used in the X and XS, but better than the previous LCD iteration used in the iPhone 8.
Although these phones are marketed as “bezel-less,” they all still have a small rounded bezel. This screen has a bezel that is noticeably thicker than the one found in the X and XS.
Last but not least, this screen DOESN’T come with 3D Touch. Apple apparently is giving up on this feature, and it decided to drop it from a phone that is meant to readjust their product strategy for the coming years.


The last point was for me the most important reveal of Apple’s upcoming strategy. It seems that Apple is preparing us for a future that will drop 3D touch from the top flagship line.

So this begs the question: why did this feature fail, and what does it mean for the future of the iPhone?
I took a couple of days to break down the UX and history of the infamous 3D touch and try to shed some light on how the disappearance of the 3D touch feature will reshape the future of Apple’s most important product.

3D touch, a solution begging for a problem.
Apple introduced 3D Touch / Force Touch in 2014 as an embedded technology in their evolved trackpads. The technology came as a companion to the Taptic Engine, whose primary goal was to recreate the haptic feeling of mechanically pressed or actuated buttons. This was a significant evolution in Apple’s industrial design strategy, which has always favored the reduction of mechanical parts that are prone to break with use.

The technology made its way into the iPhone and Apple Watch in 2015, with the idea of bringing a new dimension of interaction to touch-capable devices. The “game-changing” technology was one of the main selling points of the iPhone 6S.

Here is one of those now iconic Jony Ive videos explaining the technology.

As a designer, I remember watching this video for the first time and getting extremely excited about this new interaction level. But watching it three years later makes things abundantly clear.
This was a solution begging for a problem. None of the interactions demoed in the video are remotely useful.

For example, the interactions that Jony calls “peek and pop” are merely gimmicky alternatives for opening resources like photos and URLs.
The video even fails to show how a complete shortcut flow enabled by 3D touch works, and only focuses on the contextual menus.

The video presents the technology as a step forward in touch sensing, but the reality is that even in a marketing piece it had very little practical use. It seems that Apple’s strategy with 3D touch was to provide the technology as a primitive in their sensor offering and let the developer community figure out creative and smart ways to use it to augment their experiences. It wouldn’t be the first time Apple has done this, since some of their most important breakthroughs came from similar rationales.
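
For context on what developers were being asked to adopt, wiring up one of these icon shortcut menus takes only a few lines of UIKit. Here is a minimal sketch using the public quick-action API; the “open-camera” identifier and the camera action are hypothetical examples, not taken from any app I tested.

```swift
import UIKit

// Minimal sketch of a dynamic home screen quick action: the menu that appears
// when you force touch an app icon. The identifier and the action it maps to
// are hypothetical; the UIKit API itself is the public one apps use for this.
final class AppDelegate: UIResponder, UIApplicationDelegate {

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Register a quick action for the icon's 3D touch menu.
        let cameraAction = UIApplicationShortcutItem(
            type: "com.example.myapp.open-camera",        // hypothetical identifier
            localizedTitle: "Camera",
            localizedSubtitle: "Jump straight to capture",
            icon: UIApplicationShortcutIcon(type: .capturePhoto),
            userInfo: nil
        )
        application.shortcutItems = [cameraAction]
        return true
    }

    // Called when the user picks the shortcut from the icon's menu.
    func application(_ application: UIApplication,
                     performActionFor shortcutItem: UIApplicationShortcutItem,
                     completionHandler: @escaping (Bool) -> Void) {
        let handled = shortcutItem.type == "com.example.myapp.open-camera"
        if handled {
            // ...navigate to the camera screen...
        }
        completionHandler(handled)
    }
}
```

Static shortcuts can also be declared up front in the app’s Info.plist under the UIApplicationShortcutItems key.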

So why didn’t that happen? After doing an in-depth breakdown of the feature, here is what I found out.

Extremely poor developer adoption
This may be either a cause or a consequence of 3D touch’s failure. Either way, it’s a fact.

I tested the app icon shortcut menu in a sample of 200 popular iPhone apps. Only 40% of the apps had a 3D touch shortcut menu. This adoption rate doesn’t sound terrible until you start digging into the details of their implementations.

For example, Google seems to include a 3D touch shortcut menu in every app, but I was surprised to find a lack of consistency in their implementations. The Sheets app doesn’t have a shortcut menu while the Docs app does.

This is a weird inconsistency for a group of apps that are part of the same suite.

Google Sheets — No 3D Touch Menu

Google Docs — 3D Touch Menu
Popular apps like Lyft and Bumble don’t have a shortcut menu, and Uber has it only for the ride app but not for the UberEats app.

Many apps have shortcuts that don’t even work. While testing different shortcuts, I noticed some apps attempting to deep link into the described view or functionality and then getting stuck on white screens. This happened so often that I didn’t even make an effort to document it. Just go and test it yourself.

Most of the apps that provide a 3D touch-enabled shortcut menu are not offering a lot of value in their menus either. Take, for instance, the DoorDash app, whose shortcut menu provides only one option: “Search.”

DoorDash 3D Touch Shortcut Menu
The shortcuts are dull and repeated flows that can be achieved without the 3D touch interaction.
I elaborate more on this in the following point.

Duplicated and overlapping user paths
Let’s imagine for a minute that 3D touch was the godsend productivity and time-saving feature that it claimed to be. That would mean that by using it, you would speed up your workflows and achieve things with fewer steps. Right?

Well, this is far from the truth. In all my tests I wasn’t able to find a single shortcut that was more practical and usable than merely using the app with the standard touch capabilities.
Instagram is an excellent example of this failure. Let’s take for instance the camera shortcut.
If I want to open the camera via the 3D touch menu I have to do the following things:

1) Locate the Instagram icon.
2) Force touch it.
3) Tap on the Camera menu item.

Now, if I want to open the camera via the traditional touch interactions, I have to do the following things:
1) Locate the Instagram icon.
2) Tap it.
3) Swipe from left to right or tap the camera icon at the top left.

Instagram 3D Touch Shortcut Menu
Given that it takes the same number of steps to reach the same destination with both interaction methods, there’s no good incentive to diverge from the traditional inputs. This problem is also real for other menu options that are slightly more functional and shortcut-y, like switching accounts. There’s no reason to rely on an interaction that gives limited improvements, and sometimes no improvement at all.

Extremely poor discoverability
This point is perhaps the most well-known issue with this technology. 3D touch is exceptionally undiscoverable in the UI layer.
If you want to understand the type of actions this interaction enables, you have to force touch everything on your screen and hope to get some output. And even when you do get output, it’s often hard to understand what kind of augmentation or functionality the interaction is enabling.

Apple doesn’t even attempt to provide guidance on how to increase the discoverability of 3D touch. Their Human Interface Guidelines don’t provide context on this topic and only explain the nuances of the useless “peek and pop” concept.

An unreliable interaction and an ergonomic nightmare
If you want to experience a bit of what arthritis feels like, I would suggest 3D touching things on your iPhone for a full day. This thing is an ergonomic nightmare. Its main problem is how hard it is to determine the right amount of pressure to trigger the interaction. At its default sensitivity setting (medium), it sometimes seems that a light touch will trigger it, but most of the time it won’t. After a failed attempt to trigger 3D touch, most users will then apply extreme pressure to counterbalance the apparent force required to enable the interaction.

A home-made test using a food weighing balance and a hand-to-hand comparison revealed that a user can sometimes apply well above 100 grams of pressure trying to trigger 3D touch.
I’m not saying that’s the actual force required to trigger the interaction, but it might very well be what’s on the user’s mind once it fails the first time. The fact that you sometimes find yourself applying a quarter of a pound of pressure to trigger 3D touch makes this feature utterly impractical for daily use.

Concept collision in the interaction spectrum

The iPhone is primarily a touch device. It has other input mechanisms, like the mic, the accelerometer/gyroscope combo and the camera, but none of these can compare with the effectiveness and efficiency of touching a screen to register intent.
Touch is such a dominant input mechanism that the anatomy of the human hand is expected to change based on how we use technology devices with our hands.

This rationale might be the reason why Apple thought that amplifying the number of available touch interactions was a no-brainer. They did it quite successfully with the introduction of multi-touch capacitive screens and the range of motions and interactions that came from that technology.
But 3D touch is different. 3D touch doesn’t provide any practical advantage over a typical capacitive touch. In fact, it does the complete opposite.
By being a feature that relies on physical pressure, 3D touch sits in a place in the interaction spectrum that clashes with, and negates, the continued success of the light-touch interactions enabled by capacitive screens.

Remember how frustrating it was to use a phone or device with a resistive touchscreen? 3D touch is a technology that brought back all the unnecessary impedance that made pressure-based screens so frustrating.

Nokia 5800 with a Resistive Screen. The most frustrating phone ever.
While it makes sense to find ways to amplify the range of available interactions, a technology like 3D touch was inevitably going to suffer an existential crisis, especially since it indirectly competes with one of the features that made the iPhone so beloved and successful.

A limited technology with poor inputs and monopolizing outputs.
As mentioned in one of the earlier points, 3D touch is an unreliable technology.

It’s too hard to determine the amount of pressure required to trigger 3D touch, which makes it hard to use consistently. But the input is not even the worst part of this technology. 3D touch is so limited within the UI layer that it cannibalizes the potential experience benefits that other technologies, like the haptic engine, could provide to normal touch interactions.

Since 3D touch interactions usually come paired with haptic feedback, the job of a fantastic technology like the haptic engine is reduced to the role of “peasant 3D touch chaperone”.

Although this is not necessarily a reason why 3D touch was a failure, it shows how limiting the micro-universe created by 3D touch is. The feature doesn’t really add value to the final UX of the iPhone, yet it’s dominant enough to feel like a limiting factor rather than simply an unused technology.

The future of 3D Touch

With the introduction of the iPhone XR and the removal of 3D touch from that device, Apple’s intentions regarding the future of the technology are crystal clear. However, removing the feature from the top of the line is a more challenging process than simply deciding not to add it to future phones.

Apple’s strategy of testing price inelasticity on their high-end models seems to be working. But this strategy only works if Apple keeps adding features in a way that justifies the price increments.

Removing 3D touch would be a challenge mainly because it means removing one of the technologies that helps justify the price of their high-end devices.

It’s unlikely that Apple will remove the technology without first finding an exchangeable replacement, even if it’s just a software-based alternative input like knuckle detection.

There’s also a small chance that Apple is looking into a new iteration of this technology based on other technologies, like the one described in this patent, or even some sort of weight-sensing screen that would allow devices like the iPad to work as small food weighing balances.

Of course, anything is possible, and the XR could be an indication that Apple’s product strategy for high-end models will also be affected by the new approach to technology integration and deployment seen in this model.

Or maybe the replacement for 3D touch, in what would be the final nail in Steve Jobs’ legacy, could be the addition of Apple Pencil support (something that has been expected for the last two years) and the introduction of an Apple Pencil Mini and special iPhone cases with pencil holders.

In any case, the present reality is that 3D touch is dead and Apple is still dealing with its body…

So what are your thoughts? Are you a hardcore user of 3D Touch and will you miss it once it’s gone?