The world is shifting to devices with smaller screens. We have seen a move from the desktop to the laptop and then to the mobile device. This rise in mobile usage has prompted app developers to produce a large number of mobile applications that perform a variety of tasks.
When it comes to mobile apps, the user interface plays a crucial role in determining retention. According to research, people prefer apps that are interactive and engaging.
Facebook has come up with a technology intended to enhance the user interfaces that developers build. It has developed a system that enables app developers to take input in the form of gestures. This will enhance the user experience and also expand the capabilities of the apps developers create.
Why this technology?
The need for this technology arises from the fact that the number of mobile users has grown rapidly in recent times. This has pushed app developers to build new apps using all the latest technologies.
But integrating the latest technologies, like voice input, image recognition, and gesture input, has remained one of the great challenges. Now, however, Facebook has developed a technology that makes it easy for app developers to integrate gesture input into their apps.
There is a need for faster and more efficient methods and systems for adding gesture control to an application's user interface. The application could be a prototype that is currently in development. Such methods and interfaces could replace conventional ways of defining gestures for an application's user interface.
How does this technology work?
A method is performed at an electronic device that has one or more processors and a memory capable of storing instructions for execution by those processors. The instructions can include a utility for prototyping a user interface that has one or more layers.
The method runs a process for each of one or more images in the user interface: it selects an image patch, then selects a layer patch, and couples an image output of the image patch to an image input of the layer patch.
Here, each of the one or more images corresponds to one of the one or more layers. A gesture patch is then selected, and a gesture for that patch is specified. The gesture patch is associated with an underlying gesture-recognition engine.
The system couples the output of the gesture patch to an input of a first layer patch, where that input corresponds to a display parameter of the first layer. It then generates the user interface for the given display in accordance with the coupling of each image patch, each layer patch, and the gesture patch.
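The patch-coupling steps above can be sketched as a small graph of nodes with named inputs and outputs. This is a minimal illustration only: the class and method names (`ImagePatch`, `LayerPatch`, `GesturePatch`, `couple`) are hypothetical and do not reflect Facebook's actual API.

```python
# Hypothetical sketch of the patch graph: image patches feed layer
# patches, and a gesture patch drives a layer's display parameter.

class Patch:
    """A node in the prototyping graph with named inputs and outputs."""
    def __init__(self, name):
        self.name = name
        self.inputs = {}  # input name -> (source patch, output name)

    def couple(self, output_name, target, input_name):
        """Couple one of this patch's outputs to a target patch's input."""
        target.inputs[input_name] = (self, output_name)

class ImagePatch(Patch):
    def __init__(self, name, image_file):
        super().__init__(name)
        self.image_file = image_file  # the image this patch outputs

class LayerPatch(Patch):
    """One layer of the UI; its inputs drive its display parameters."""

class GesturePatch(Patch):
    def __init__(self, name, gesture):
        super().__init__(name)
        self.gesture = gesture  # e.g. "swipe", handled by the engine

# Build the graph: one image coupled to one layer, plus a gesture
# patch coupled to a display parameter (here, the layer's x position).
image = ImagePatch("photo", "slide1.png")
layer = LayerPatch("slide_layer")
swipe = GesturePatch("swipe_recognizer", gesture="swipe")

image.couple("image", layer, "image")  # image output -> layer image input
swipe.couple("progress", layer, "x")   # gesture output -> display parameter

# Generating the UI amounts to resolving each layer's coupled inputs.
for inp, (src, out) in layer.inputs.items():
    print(f"{layer.name}.{inp} <- {src.name}.{out}")
```

The point of the graph structure is that gestures and images are wired to layers the same way, so adding gesture control to a prototype is just one more coupling.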
Whenever the user performs a gesture, the device receives user-interaction data for manipulating the user interface. This data is passed to the system, and in response the device updates the display of the user interface.
For example, if the user has given an instruction to move the image in a slideshow, the device interprets this instruction and changes the display accordingly, i.e., moves the slideshow forward.
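The slideshow example can be sketched as a simple update loop: recognized gesture data arrives, the coupled display parameter changes, and the screen is redrawn. All names here (`SlideshowUI`, `handle_gesture`, the gesture strings) are hypothetical illustrations, not the actual system's API.

```python
# Hypothetical sketch: a recognized gesture updates which slide is shown.

class SlideshowUI:
    def __init__(self, slides):
        self.slides = slides
        self.current = 0  # index of the slide currently on screen

    def handle_gesture(self, gesture):
        """Translate recognized gesture data into a display update."""
        if gesture == "swipe_left":
            # move the slideshow forward, clamped to the last slide
            self.current = min(self.current + 1, len(self.slides) - 1)
        elif gesture == "swipe_right":
            # move the slideshow backward, clamped to the first slide
            self.current = max(self.current - 1, 0)
        return self.slides[self.current]  # the slide now displayed

ui = SlideshowUI(["intro.png", "demo.png", "outro.png"])
print(ui.handle_gesture("swipe_left"))  # moves the slideshow forward
```

In the patent's terms, the gesture string would come from the gesture-recognition engine behind a gesture patch, and `current` would be the display parameter its output is coupled to.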
What’s next?
The next step for Facebook will be to implement this technology in real life. It will also have to deal with the accuracy of the device in predicting results, and it will face competition from companies like Microsoft and Google, which are believed to be working on similar technology.
The adoption of this technology will also depend on how easy it is for developers to integrate it into their apps.