In this case study I describe my graduate project at the University of Applied Sciences in Mainz. I started working on it in April 2015, before the massive refugee crisis in Germany. A business coach suggested creating a version for learning German as a foreign language, and I saw the project's potential. It won an award at Kreativsonar 2015, a competition for business ideas in the Rheinland-Pfalz and Saarland region.

Lingit is an app for learning foreign languages on mobile devices. Because learning is based on experience, Lingit lets you experience objects and scenarios. The app works like a game: you can touch everything you see, and the linked word is displayed in a bubble. You can move objects and discover what is inside them. Multiple tests let you prove your knowledge, and everything is displayed in the language you want to learn, so you start to think like a native speaker.


At the beginning of 2015 I traveled to India. In this multilingual country, people speak different languages depending on the region they live in. English is widely known, of course, but especially in rural areas people use only their local language. Schools are not accessible to everyone, and for many the only way to learn English is through contact with tourists. During a safari I met some of these people. We were able to communicate using simple words, but reading and writing were almost impossible for them. I was surprised that even though these people are poor, they have smartphones and internet access, even in the desert. With an application that taught them a written language through simple gestures, they could access the knowledge of the world.

The same situation occurred during the refugee crisis in 2015: many people with different native languages came to a new country and had to learn a new language and culture. My goal was to simplify this process by building an intuitive application that makes learning entertaining. It should not be based on translations, but on how we learn our native language. In other words, you should learn the language by touching, playing and repeating what you have learned.

How we learn

Before creating the first wireframes, I did some research on how we learn from a psychological perspective. Broadly, memory can be split into working, short-term and long-term memory. The working memory handles problem-solving: it distinguishes between necessary and unnecessary information based on our experience from life and can hold information for a few hours. The long-term memory provides knowledge based on experience and has practically unlimited capacity. The goal of every learning process is to store information in the long-term memory, and to do that efficiently we have to repeat and recall what we learned on a regular basis. The short-term memory holds information for our operative behavior; this information is lost quickly, especially when we do routine things. Successful learning is based on associations: the brain creates connections between our experience and the new things we learn, so it is important to build associations between objects and words, e.g. a dog with a bone. My application should support audiovisual learning to connect as much information as possible. A further step was to create test levels, to ensure that newly learned information stays in the long-term memory.
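The "repeat on a regular basis" principle can be sketched as a simple spaced-repetition scheme, for example a Leitner system. This is only an illustration of the idea, not code from Lingit; the class names and box intervals are assumed values.

```python
from dataclasses import dataclass

# Illustrative review intervals in days per box (assumed values).
INTERVALS = [1, 2, 4, 8, 16]

@dataclass
class VocabCard:
    word: str        # the word in the target language
    box: int = 0     # current box; box 0 is reviewed most often
    due_in: int = 0  # days until the next review

def review(card: VocabCard, correct: bool) -> VocabCard:
    """Promote the card after a correct answer, pushing it toward
    long-term memory; demote it to box 0 after a wrong answer."""
    if correct:
        card.box = min(card.box + 1, len(INTERVALS) - 1)
    else:
        card.box = 0
    card.due_in = INTERVALS[card.box]
    return card
```

A card answered correctly moves up one box and is reviewed less often; a wrong answer sends it back to daily review.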

Creating an entertaining application for productive learning

My goal was to combine imagery with sound and written words. To make interaction feel natural, I used gestures most users already know from touch devices, so interacting with the world feels like playing a mobile game.

Example compositions from the learning session „at home“


Let’s assume the user or learner wants to learn Russian so that within one month he can describe his apartment to a colleague at work.

As a learner I have the following needs:

In the app I want to see these objects as soon as possible.

I want to know the names of these furnishings.

I need to have a spatial connection between these objects to remember them better.

I need to test my knowledge.

In a test I want to choose the right word out of three options for a displayed object.

According to the learner's needs, the application should respond as follows:

The application contains a learning category „at home“.

The application is structured logically, so that a learner can navigate to desired objects by using touch-gestures.

When the learner touches an object, it displays the right word in a text-box and plays a sound.

The application displays a scenario where objects are placed where the learner would expect them.

The application displays an object and offers three words from which the learner can choose.

The application checks if the learner has touched the right text-box and gives visual feedback.
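The last three requirements could be sketched as follows. This is a hypothetical illustration, not the app's actual code: the function names, the small vocabulary list and the feedback colours are my assumptions.

```python
import random

def build_question(target: str, vocabulary: list[str], rng=random) -> list[str]:
    """Build the three answer choices for a displayed object:
    the correct word plus two random distractors, in shuffled order."""
    distractors = rng.sample([w for w in vocabulary if w != target], 2)
    choices = [target] + distractors
    rng.shuffle(choices)
    return choices

def check_answer(chosen: str, target: str) -> str:
    """Return the visual feedback for the touched text-box:
    green for the right word, red otherwise (assumed colours)."""
    return "green" if chosen == target else "red"

# Illustrative vocabulary: fridge, table, chair, lamp in Russian.
vocab = ["холодильник", "стол", "стул", "лампа"]
choices = build_question("стол", vocab)
```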

Visual Design

I chose a low-poly design for its illustrative feel and low performance cost. Since we live in a 3D world, I wanted the illustrations to be three-dimensional so you can interact with them naturally. Everything had to stay in a miniature style with a minimalistic graphical user interface, so that the learner is not distracted by buttons or alert messages. The interface contains a menu button, back and previous buttons, and the text-boxes. The main text-box at the top displays the current object, and the list at the bottom shows the names of objects the learner has already touched. Text-boxes and other UI elements have rounded corners to separate them from the boxy low-poly world. If the learner touches an object whose inside can be explored, the interface provides feedback with an animated circle.


The application uses familiar gestures such as pinch, tap and swipe. The following UML diagram explains how the pinch gesture works.
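The core of pinch recognition can also be sketched in a few lines: compare the distance between the two fingers at the start and at the end of the gesture. The 0.8 and 1.2 ratio thresholds below are assumed values for illustration, not taken from the app.

```python
import math

Point = tuple[float, float]

def distance(p1: Point, p2: Point) -> float:
    """Euclidean distance between two touch points."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

def detect_pinch(start: tuple[Point, Point], end: tuple[Point, Point]) -> str:
    """Classify a two-finger gesture by the ratio of the final
    finger distance to the initial one.

    Fingers moving together -> 'pinch-in' (e.g. zoom out of a composition),
    fingers moving apart -> 'pinch-out' (zoom in), otherwise 'none'.
    The thresholds are illustrative assumptions.
    """
    ratio = distance(*end) / distance(*start)
    if ratio < 0.8:
        return "pinch-in"
    if ratio > 1.2:
        return "pinch-out"
    return "none"
```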

The process of a learning-session

Successful learning is achieved by practicing units of a fixed length on a daily basis. These units are called learning sessions. The following activity diagram shows the procedure of a learning session. For clarification: a composition is the scene with objects that the learner sees on the screen. For example, when a kitchen is displayed in the scene, the kitchen is the composition and the fridge is an object. If the learner taps the fridge to explore what is inside, the fridge moves to the front and the kitchen disappears; the fridge then becomes the composition, containing objects like apples or cheese.
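The composition/object relationship described above forms a simple tree: tapping an object that itself contains objects makes it the new composition. A minimal sketch, with illustrative names and Russian vocabulary (not the app's actual data model):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A scene element. A composition is simply a node whose
    children are the objects currently shown on screen."""
    word: str
    children: list["Node"] = field(default_factory=list)

# Illustrative scene: a kitchen containing a fridge and a table;
# the fridge contains an apple and cheese.
apple = Node("яблоко")
cheese = Node("сыр")
fridge = Node("холодильник", [apple, cheese])
kitchen = Node("кухня", [fridge, Node("стол")])

def tap(composition: Node, word: str) -> Node:
    """Tapping an object that contains other objects makes it the
    new composition; otherwise the current composition stays."""
    for obj in composition.children:
        if obj.word == word and obj.children:
            return obj
    return composition
```

Tapping the fridge inside the kitchen returns the fridge node, whose children (apple, cheese) become the visible objects, exactly as in the example above.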

The future: augmented-reality

An augmented-reality version of this application is currently under development. Displaying the objects as holograms that the learner can manipulate with hand gestures in 3D space should make learning even more effective, thanks to the natural interaction.
