A plethora of third-party keyboards have been created for iPhone and iPad since Apple opened its mobile platform to them with iOS 8 in 2014. None so far, however, is quite like Keymochi.
You know that old saying about how it’s not what you say, but how you say it? Keymochi takes that idea and runs with it. Created as a research project by three Cornell Tech students, Hsiao-Ching Lin, Huai-Che Lu, and Claire Opila, Keymochi uses information about the way we type to gauge our emotions.
This data includes typing speed, punctuation changes, smartphone motion-sensor readings, time of day, number of backspaces, distance between keys, and — yes — a smattering of sentiment analysis. All of it is fed into a machine-learning model that infers how you’re feeling, reportedly with 82 percent accuracy.
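Keymochi’s actual feature pipeline isn’t public, but the typing-dynamics features the article lists are straightforward to compute. Here is a minimal sketch in Python of what extracting a few of them from a keystroke stream might look like; the `Keystroke` schema and feature names are illustrative assumptions, not Keymochi’s real data model.

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical keystroke event; fields are illustrative, not Keymochi's schema.
@dataclass
class Keystroke:
    char: str
    timestamp: float  # seconds since session start
    x: float          # key position on the keyboard layout (arbitrary units)
    y: float

def extract_features(events: List[Keystroke]) -> Dict[str, float]:
    """Compute the kinds of typing-dynamics features the article mentions:
    typing speed, backspace count, and mean distance between consecutive keys."""
    if len(events) < 2:
        return {"speed_cps": 0.0, "backspaces": 0, "mean_key_dist": 0.0}
    duration = events[-1].timestamp - events[0].timestamp
    speed = len(events) / duration if duration > 0 else 0.0
    backspaces = sum(1 for e in events if e.char == "\b")
    # Euclidean travel distance between each pair of consecutive keypresses.
    dists = [((b.x - a.x) ** 2 + (b.y - a.y) ** 2) ** 0.5
             for a, b in zip(events, events[1:])]
    return {
        "speed_cps": speed,                       # characters per second
        "backspaces": backspaces,                 # number of deletions
        "mean_key_dist": sum(dists) / len(dists)  # average inter-key travel
    }

events = [
    Keystroke("h", 0.0, 0.0, 0.0),
    Keystroke("i", 0.2, 1.0, 0.0),
    Keystroke("\b", 0.5, 5.0, 1.0),
    Keystroke("i", 0.8, 1.0, 0.0),
]
feats = extract_features(events)
```

A feature vector like this, combined with motion-sensor and sentiment signals, would then be the input to whatever classifier produces the reported emotion labels.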
“For us, emotion-sensing is fascinating because it’s currently a missing part in today’s human-AI communications,” researcher Huai-Che Lu told Digital Trends. “Imagine a chatbot can recommend you a healing song when you are sad or suggest that you take some deep breaths when you are nervous.”
As Lu pointed out, detecting emotions from physiological changes is not completely new. An entire field is dedicated to “affective computing,” and one key aspect of it, called sentic modulation, studies the physiological changes that accompany shifting emotions, such as facial expression, voice intonation, and heart rate.
One of Keymochi’s key differentiators — beyond its focus on the lesser-studied topic of emotion detection via mobile keyboards — is that it aims to protect user privacy by doing all data preprocessing on the device. The server receives only pseudonymous metadata, so Keymochi’s creators cannot reverse-engineer what users actually typed.
Unfortunately, the Keymochi app isn’t available to the public just yet. The keyboard is unlikely to find its way into the App Store anytime soon, although Lu said that the plan is to release its different components under open-source licenses. This would allow researchers to use the work to, for example, collect data (with the knowledge of users) in future HealthKit or CareKit research projects.
As for what is next, Lu said the team would like to build on this work.
“In the next semester, we’re planning to pivot a little bit by developing a productivity app to help people be more focused in their work based on the activity patterns on laptops,” he said. “Based on different focus levels, the app would suggest tasks that best suit to your current status — like doing some coding when you are ‘in the flow,’ and writing some emails when you are drained, as well as toggling the ‘do-not-disturb’ mode automatically.”