A personal opinion on writing on a touchscreen device

Writing is still writing no matter the platform. It’s all about getting the words out and giving them a physical form, be it on screen or on paper. You can write on a piece of paper using a pen. You can write using your smartphone. You can write on your laptop or a desktop computer.

But what I have discovered is that writing on a touchscreen just feels weird and difficult. Some people no doubt won’t have any problems. It’s just not the thing for me.

I got the iPhone X. With its 5.8-inch, nearly edge-to-edge display, it’s way bigger than the iPhone 6s and 7 Plus displays I used in the past. That means with apps like iA Writer, I can see way more of the text with the keyboard below. The Super Retina HD display means that text is sharp and clear. Writing on that device has been a joy.

Yet, whenever I try to write long form, like a short story, my fingers get really tired from attempting to hit the keys. My fingers are rather fat. Combine that with hyperhidrosis, and either the wrong keys get pressed and I have to hit delete, or the key presses aren’t registered as they should be. It slows my writing down a lot, which is irritating when your thoughts are faster than the words appearing on the screen.

The other issue I have with typing on a touchscreen is the lack of tactile feedback. This is one of the reasons why I prefer to write using a keyboard. The sound of my fingers hitting the keys and the clacky feel when you press a key just feel so good. I know you could enable haptic feedback on the phone so that every key press gives you a vibration. But that vibration is missing when you set the phone to silent mode via the switch. Not only that, vibration requires the motors in the phone to work hard and causes the battery to drain faster. On the iPhone X, that vibration mode is gone; what you get instead is simulated keyboard clicks, something you won’t hear if your phone is on permanent silent mode.

The third issue I have is dealing with the weight of the device while typing. I know smartphones are small and considered rather light. After all, you carry one in your pocket every day. But it does become heavy when you are holding it in your hands for long periods of time as you type. And that particular use case happens quite often if you are writing a long article, an essay or stories. Note-taking is fine, actually, because those are short bursts and you probably won’t be doing it for one or two hours straight.

So those three reasons are why I will always prefer to write on a keyboard. And in order to write on the go, a portable typing machine is needed. Thus, I decided to reuse the 13-inch MacBook Pro (2015) that was in storage. The 15-inch MacBook Pro that I’m currently using is just a tad bigger and heavier than what I would like. You know what? Without the keyboard cover, typing on that classic chiclet keyboard is rather delightful. I can type equally fast on it.

And now I’m curious: what’s the primary device you use to write every day, and why?

An opinion on improving voice user interface while ensuring privacy

Voice user interfaces are going to be one of the main ways we interact with our devices as we go about our daily lives. It’s just a very intuitive way for us, because we communicate primarily via voice, with text and images to complement it.

But there are still various problems that people need to work on to improve the overall experience. One of them is how the AI behind a voice user interface can interact with us more naturally, like how we interact with fellow human beings.

This article written by Cheryl Platz got me thinking about that. It also covered a little on privacy and why it is a contributing factor that makes it difficult for the current generation of AIs to speak more naturally and understand the context when we speak. Unless, of course, companies don’t give a shit about our privacy and start collecting even more data.

In this article, I am going to share what I think could help improve the AI and ensure user privacy.

Current Implementations and Limitations

To get better at understanding us and responding in ways most useful to us, an AI needs processing power, a good neural network that allows it to self-learn, and a database to store and retrieve whatever it has learnt.

The cloud is the best way for an AI to gain access to a huge amount of processing power and a large enough database. Companies like Amazon and Microsoft offer cloud computing and storage services via their AWS and Azure platforms respectively at very low cost. Even Google offers such services via its Compute Engine.

The problem with the cloud is a reduced level of confidence where privacy is involved. Anything you store up there is vulnerable to wholesale retrieval through security flaws or misconfigurations. Companies could encrypt that data end-to-end to help protect users’ privacy, but the problem is that the master keys are owned by said companies. They could decrypt the data whenever they want.

Or you could do it like Apple did with Siri: store data locally and use Differential Privacy to help ensure anonymity. But that reduces the AI’s capabilities, because it doesn’t have access to a sufficient amount of personal data. On top of that, Siri runs on devices like the Apple Watch, iPhones and iPads, which could be a problem when it comes to processing and compute capabilities, and to having enough information to understand the user.

Although those devices have more processing power than room-sized mainframes from decades ago, it’s still not enough, energy-efficiency and capability wise, to handle the highly complex neural networks needed for a better experience with voice user interfaces.

Apple did try to change that with its A11 Bionic SoC, which has a neural engine. Companies like Qualcomm, Imagination Technologies, and even NVIDIA are also contributing to increasing local processing power with energy efficiency for AI through their respective CPU and GPU products.

Possible Solution

The work on hardware by these companies should continue, so that there will be even more powerful and energy-efficient processors for AI to use.

In addition to that, what we need is a standard wireless protocol (maybe Bluetooth) for the AIs on our devices, irrespective of vendor, to talk to each other when they are near each other and on our home network. This way, the AI on each of those devices can share information and perform distributed computing, thereby improving its accuracy and overall understanding of the user, and responding accordingly.
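To make the idea concrete, here is a minimal sketch of what a message envelope for such a protocol could look like. Everything here is hypothetical: the `AssistantMessage` name, the field names, and the message kinds are my own illustration, not any existing standard, and in practice the payload would be encrypted with user-owned keys before it ever leaves the device.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AssistantMessage:
    """Hypothetical envelope for assistant-to-assistant exchange on a home network."""
    sender: str     # e.g. "siri@living-room-ipad"
    recipient: str  # e.g. "alexa@kitchen-echo"
    kind: str       # e.g. "context_share", "compute_request", "compute_result"
    payload: dict   # in a real protocol this would be encrypted end-to-end

    def to_json(self) -> str:
        # Serialize to a wire-friendly format.
        return json.dumps(asdict(self))

    @staticmethod
    def from_json(data: str) -> "AssistantMessage":
        # Reconstruct the message on the receiving device.
        return AssistantMessage(**json.loads(data))
```

A shared schema like this is what the “common software kernel” would standardize, so that Siri could parse a context update produced by Alexa without either side knowing the other’s internals.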

A common software kernel is also necessary to give different implementations of neural networks a standardized way of doing distributed computing efficiently and effectively.

So now, imagine Siri talking to Alexa, Google Assistant or even Cortana via this protocol and vice versa.

Taking privacy into account, information exchanged via this protocol should be encrypted by default, with keys owned only by the user. Any data created or stored should reside only on the device, also encrypted, and nowhere else. Taking a page out of Apple’s playbook, the keys should be generated by some kind of hardware-based “Secure Enclave”.

To further improve the neural network, Differential Privacy should be applied to any query or information sent by the AI to the cloud for processing.
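To illustrate the idea behind Differential Privacy, here is a sketch of randomized response, one of the simplest differentially private mechanisms. Each device reports its true answer only with probability `p` and otherwise answers at random, so the cloud cannot trust any individual report, yet the aggregate rate can still be recovered. This is a textbook technique for illustration, not Apple’s actual implementation (which uses more sophisticated local mechanisms).

```python
import random

def randomized_response(true_value: bool, p: float = 0.75, rng=None) -> bool:
    """Report the true answer with probability p; otherwise report a fair coin flip.

    The cloud never knows whether any single report is genuine.
    """
    rng = rng or random
    if rng.random() < p:
        return true_value
    return rng.random() < 0.5

def estimate_true_rate(responses, p: float = 0.75) -> float:
    """Recover the aggregate rate from noisy responses.

    observed = p * true + (1 - p) * 0.5, so true = (observed - (1 - p)/2) / p.
    """
    observed = sum(responses) / len(responses)
    return (observed - (1 - p) / 2) / p
```

With enough reports, the estimate converges on the true population rate even though every individual answer is deniable, which is exactly the trade-off that lets the cloud learn from users without learning about any one user.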

Conclusion

The above is really just my personal thought on how the current AIs powering voice user interfaces can be improved.

In the end, it’s really up to the companies to decide whether they want to come together and improve all our lives while taking our privacy and security into account.