Two-day redesign sprint

I recently did a 48-hour redesign sprint on a travel app by the British villa tour operator James Villa. The focus of the sprint was to redesign one of the main screens and rethink both the architecture and the visual design. I structured the process in the following stages: scope, understanding, research/analysis, explore/test, and design. You can see the existing app here.

I chose to redesign the “In the area” and “Map” screens. James Villa brands the app as “the perfect travelling companion”, so I matched this vision against the app and found that the features for getting an overview of and planning holiday activities worked poorly. Combined, I believe the two sections could create a powerful tool, but in the current app they don’t complement each other very well, and there are a lot of weird overlaps between them.

From activities such as mapping the user flow, three quick ’n’ dirty user tests, and follow-up analysis, I identified the following three main pain points in the current app: 1) It’s not possible for the user to save content (e.g. destinations) or plan ahead in the app. 2) There is a lack of useful and engaging information about the different locations and activities. 3) The list and the map don’t talk to each other, which makes it very hard to get a good overview and plan.

In parallel with the research, I sketched a lot of quick wireframes to externalize ideas and discuss them with the participants. I ended up with 24 sketches that each suggested a fix or a new piece of functionality.

I prioritized these wireframes and turned the most urgent ones into final wireframes. To sum up quickly, I ended up with the following three design changes: 1) Merge the “Map” and “In the Area” pages into a new “Discover” page, to mix the best from both worlds and add a dynamic relationship between the two functions. 2) Implement a new “save” feature (the heart), which allows the user to save items for later. 3) Restructure the hierarchy of some pages and make the layout support more relevant and engaging content.

The new wireframes are displayed below.

Through the sprint I found many issues beyond what has been addressed in this redesign, especially regarding navigation and the overall app structure. However, due to the time limit, I tried to keep my scope narrow and focused, and therefore left some of the bigger fundamental issues out.


Sound Classifier tool

In my final thesis project at K3, I’ve explored a design space of how sounds beyond speech can be used as input in the design of natural interfaces.

My prototyping approach has mainly been built around interactive machine learning (IML) with Wekinator, as a starting point for exploring different kinds of sonic gestures as inputs and how they can be classified into specific outputs. Besides the possibility of prototyping new, exciting, and advanced input-output relations, I also believe that IML can facilitate a more human-centered design approach, through tools that support rapid iterations and explorations on the spot together with domain experts.

Early on I encountered a lack of tools for easily using sound as input to IML systems while preserving the temporal aspects of the sound. Discussing this problem with Andreas and Lasse from Støj Studio, they pointed me in the direction of running image recognition on spectrograms of sounds, as explained in this article by Boris Smus.

Inspired by this, I created a sound classification tool for the Processing environment that makes live spectrograms of the microphone input and sends the pixel array to Wekinator through OSC. I included controls to easily change analysis parameters such as the frequency spectrum, thresholds, and the temporal window. This has served as a way for me to experiment with these different features of sound, and with how different settings work for different situations and desired outcomes.
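
The actual tool lives in Processing, but the core idea can be sketched in a few lines of Python. The frame size, hop, and 16 x 16 grid below are illustrative choices, not the tool's actual parameters: each audio frame is windowed and FFT'd, the resulting spectrogram is downsampled to a fixed grid, and the flattened pixel array is what would be sent to Wekinator over OSC.

```python
import numpy as np

def spectrogram_frames(signal, frame_size=256, hop=128):
    """Slice the signal into overlapping windows and take the magnitude
    FFT of each, keeping only the positive frequencies."""
    window = np.hanning(frame_size)
    frames = []
    for start in range(0, len(signal) - frame_size + 1, hop):
        frame = signal[start:start + frame_size] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames)  # shape: (time steps, frequency bins)

def feature_vector(spec, n_time=16, n_freq=16):
    """Downsample the spectrogram to a fixed n_time x n_freq grid and
    flatten it, mimicking the pixel array sent to Wekinator over OSC."""
    t_idx = np.linspace(0, spec.shape[0] - 1, n_time).astype(int)
    f_idx = np.linspace(0, spec.shape[1] - 1, n_freq).astype(int)
    return spec[np.ix_(t_idx, f_idx)].flatten()

# A one-second 440 Hz test tone standing in for microphone input.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
spec = spectrogram_frames(tone)
vec = feature_vector(spec)
print(vec.shape)  # (256,)
```

Trimming the frequency range and the number of time steps kept per vector is what the tool's controls for frequency spectrum and temporal window effectively change.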

The tool is available on my GitHub.

I’ve used this sound classifier tool to design prototyping platforms that can, for example, be used to simulate a smart TV. This has been done with a virtual keyboard robot.


Poster design for the Practio Office

One of the last tasks I did at Practio was designing eight posters intended to decorate the office space with a collection of cultural values and quotes. The set consisted of seven small posters (700 mm x 500 mm) and one big poster (1500 mm x 1500 mm) summarizing them all. My task was to create a design that could clearly convey the information in a delightful way in everyday office life. Inspired by the classic Swiss style, and especially the work of Josef Müller-Brockmann, I designed the posters to balance the communication of a clear message with a layout that would still invite examination of the content, while leaving a calm feel in the physical space.


My first drawing machine

A few months ago I decided to build a drawing machine, inspired by the many DIY and open-source projects available online. I found the V-plotter type to be a good “beginner” drawing machine, and based on the software and essential principles of the Polargraph drawing machine by Sandy Noble, I tried to build my own. One of my goals was to make the machine portable and easy to attach to surfaces of different sizes. I wanted it to draw on 1 x 1 m canvases as well as 3 x 3 m canvases without changing anything in the design. I achieved this with a clamp design for the motors and small clamps for the strings, making it possible to adjust the length of the string in use. Some of my first test drawings were processed using the Polargraph software and can be seen below.
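
The geometry that makes the adjustable mounting work boils down to simple inverse kinematics: with the origin at the left motor and y pointing down, every pen position maps to two string lengths by Pythagoras. A minimal sketch (the 2000 mm mounting width is just an example value, not the machine's actual dimensions):

```python
import math

def string_lengths(x, y, width):
    """Inverse kinematics of a V-plotter: the pen position (x, y),
    measured from the left motor with y pointing down, corresponds to
    the two string lengths via Pythagoras."""
    left = math.hypot(x, y)            # distance to left motor
    right = math.hypot(width - x, y)   # distance to right motor
    return left, right

# Example: a 2000 mm wide mounting with the pen centred, 1000 mm down.
left, right = string_lengths(1000, 1000, 2000)
print(round(left), round(right))  # 1414 1414
```

Because only the width parameter changes between a 1 x 1 m and a 3 x 3 m setup, the same firmware and clamps work for any surface size.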

The next step for me is to create a custom algorithm that can output G-code from generative drawings made with Processing.
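
Since this step hasn't been built yet, here is only a rough sketch of what such a translation could look like: polylines from a generative Processing sketch become G0 travel moves and G1 drawing moves. Pen up/down commands are machine-specific and omitted, and the feed rate is an arbitrary placeholder.

```python
def points_to_gcode(paths, feed_rate=1500):
    """Turn a list of polylines (lists of (x, y) points, in mm) into
    basic G-code: a G0 rapid move to the start of each path, then G1
    drawing moves along it."""
    lines = ["G21 ; units in mm", "G90 ; absolute positioning"]
    for path in paths:
        x0, y0 = path[0]
        lines.append(f"G0 X{x0:.2f} Y{y0:.2f}")  # travel to path start
        for x, y in path[1:]:
            lines.append(f"G1 X{x:.2f} Y{y:.2f} F{feed_rate}")  # draw
    return "\n".join(lines)

# A single 10 mm square, as it might come out of a generative sketch.
square = [[(0, 0), (10, 0), (10, 10), (0, 10), (0, 0)]]
gcode = points_to_gcode(square)
print(gcode.splitlines()[2])  # G0 X0.00 Y0.00
```

From there the Polargraph firmware (or any interpreter that understands the same dialect) could consume the output directly.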


Capacitive sensing + Machine Learning

During a four-week project on Play and Ludic Interaction, I explored capacitive sensing as the starting point for the project. In the first week my group and I explored the qualities of different conductive materials combined with capacitive sensing. We worked with the Tact library by NANDstudio, which is capable of capturing very rich data from the sensor. Different materials and objects each afford different interactions, and this also affects which spectrum readings the sensor picks up. For example, when a jar of water is used as a capacitive sensor, it peaks when the water is touched, whereas a bag of wet sand peaks when squeezed tightly.

Through these explorations we found two especially interesting ways of using capacitive sensing:

  1. It’s possible to create a “chain” of sensors that works through non-conductive materials (wood, glass, acrylic, etc.). For example, we had a path of aluminium foil that could sense proximity and touch on a jar of water through a 4 mm layer of wood/acrylic.

  2. Classifying the data with machine learning (we used Wekinator) works very well for recognizing different gestures. This is mainly due to the rich data from the Tact library; we used 32 inputs per sensor.
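
Wekinator handled the actual classification, but the principle can be illustrated with a minimal nearest-centroid classifier over 32-value readings. The synthetic "touch"/"squeeze" data below is made up for the example and is not our recorded sensor data.

```python
import numpy as np

def train_centroids(examples):
    """Average the 32-value Tact-style readings recorded for each
    gesture label into one centroid per class."""
    return {label: np.mean(vectors, axis=0)
            for label, vectors in examples.items()}

def classify(centroids, reading):
    """Assign a new reading to the label of the nearest centroid."""
    return min(centroids,
               key=lambda label: np.linalg.norm(centroids[label] - reading))

# Synthetic stand-ins for recorded sensor spectra: "touch" readings
# cluster around 1.0, "squeeze" readings around 3.0.
rng = np.random.default_rng(0)
touch = rng.normal(1.0, 0.1, size=(10, 32))
squeeze = rng.normal(3.0, 0.1, size=(10, 32))
centroids = train_centroids({"touch": touch, "squeeze": squeeze})
print(classify(centroids, rng.normal(3.0, 0.1, size=32)))  # squeeze
```

The richness of the 32-value spectrum is what makes even a classifier this simple separate the gestures; Wekinator's models do the same job far more robustly.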

These findings have been the core of our project in Play and Ludic Interaction, and the mechanics we developed can be seen in the gifs/video. Hopefully I can soon share how we have applied this technology in a concept.


Ice cube painting toy

During a two-day project in the course Play and Ludic Interaction, we were set to explore the creation of toys and playful interactions based on a material. Together with my group I was set to explore water. This resulted in a series of small experiments, each exploring the playful expressions and characteristics of water.

The findings from this explorative process led to several ideas for water-based toys. We ended up prototyping a maze that used ice cubes with food colouring to paint abstract paintings. Inspired by generative art, our idea was to have each of our 11 co-students navigate an ice cube through the maze to create individual paintings. The final toy and the experiments can be seen below.