
Innovation Design Engineering (MA/MSc)

Yi-Fan Hsieh

Yi-Fan Hsieh is an experienced Taiwanese industrial designer specializing in synthesizing technology and design into solutions that are both applicable and appealing to the market. His practice embodies simplicity, subtlety and practicality, drawn from his distinctive view of his subjects.

During the IDE course, his group project, Fallback, was featured in Dezeen and Creative Applications and received a Student Notable honour in the Core77 Design Awards for Design for Social Impact. In the 2019 Grand Challenge module set by CERN and the RCA, his group, the Knowtrition team, was the final winner in the Healthcare and Wellbeing category. He was also one of the selected applicants for the 2018 Circular Economy Fellowship programme held by IDEO CoLab London. Previously, he worked for TG0, a London-based startup developing future 3D interactive hardware products.

Contact

+44 7549262039

Portfolio website

LinkedIn

Behance

Degree Details

School of Design

Innovation Design Engineering (MA/MSc)

As an industrial designer, he aims to deliver engaging, tangible experiences that connect the physical and digital worlds.

He believes that bringing technical and design knowledge together will be an essential core skill for industrial designers, allowing them to act as the joint within diverse teams. Industrial designers therefore have an obligation to make such innovations happen, to give innovation genuine value and to go beyond current product forms and categories.

Loci — The concept of Loci is to create channels between the physical and digital worlds.

Concept Introduction
The Loci interface creates new possibilities for extending interaction with computer interfaces onto physical objects. It leverages augmented reality and the spatial dimensions of objects, attaching any object to digital interfaces through a corresponding channel of spatial anchors. With Loci, even non-digital physical objects can be connected to your digital world and act as an adaptive collection of personal interfaces.

The Loci anchor is the core of this new interactive interface, linking tangible objects to computer user interfaces through spatial intelligence. An anchor can be placed on a selected surface of an object recognized by computer vision, while on the computer side the anchor defines the interactive area of the graphical user interface it corresponds to. Loci anchors are created, placed and navigated in the environment with a mouse, which embodies the traditional pointer of the computer peripheral.
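A minimal sketch of how such an anchor-to-interface binding could be modelled in software is shown below; the SurfaceRegion, UIRegion, LociAnchor and AnchorRegistry names and the dispatch logic are illustrative assumptions, not the project's actual implementation.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SurfaceRegion:
    # A patch on a physical object's surface, as recognized by computer vision.
    object_label: str   # e.g. "mug" or "notebook"
    u: float            # normalized coordinates of the anchor on the surface (0..1)
    v: float
    radius: float       # extent of the anchor patch

@dataclass
class UIRegion:
    # The interactive area of the graphical user interface the anchor corresponds to.
    window: str         # e.g. "Maps"
    control: str        # e.g. "zoom buttons"

@dataclass
class LociAnchor:
    # A Loci anchor binds a physical surface region to a GUI control.
    surface: SurfaceRegion
    ui: UIRegion

class AnchorRegistry:
    # Stores anchors placed with the mouse and routes touches on objects to the GUI.
    def __init__(self) -> None:
        self.anchors: List[LociAnchor] = []

    def place(self, anchor: LociAnchor) -> None:
        self.anchors.append(anchor)

    def dispatch(self, object_label: str, u: float, v: float) -> Optional[UIRegion]:
        # Return the GUI region of the anchor (if any) under a pointer event on an object.
        for anchor in self.anchors:
            s = anchor.surface
            if s.object_label == object_label and (u - s.u) ** 2 + (v - s.v) ** 2 <= s.radius ** 2:
                return anchor.ui
        return None

# Example: the mug-to-map-zoom binding described in Prototype 2.0-1.
registry = AnchorRegistry()
registry.place(LociAnchor(SurfaceRegion("mug", 0.5, 0.4, 0.1), UIRegion("Maps", "zoom buttons")))
print(registry.dispatch("mug", 0.52, 0.41))  # UIRegion(window='Maps', control='zoom buttons')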

Prototype1.0-1 — The first version of the prototype creates a direct physical address for the digital interface.


Prototype1.0-2 — This demonstration showcases the possibility of seamless interaction, embedding digital content into our environment.


Prototype2.0-1 — This refined prototype controls an anchor point on the surface of an object (a mug), which is linked to a particular control (the zoom buttons) of a map.


Prototype2.0-2 — This prototype demonstrates a customized, adaptive user interface that allows people to create a digital document library interface on their notebook.

Loci plug-in software — A Loci anchor attached to a linear moving element of the digital interface.

Loci plug-in software — A Loci anchor attached to a folder in the computer's operating system.

A second layer of information is given to environmental objects and spaces and displayed in front of users through computer and smartphone screens; this is currently the primary interface of augmented reality. In the future we imagine, everyone will wear a smaller display on their head, or augmented-reality filters over their eyes. I am very interested in exploring the transitional stage in the immediate future.

Loci mouse

Hardware composition

In computer operation, the mouse can only move around on the surface of the interface; in reality, the mouse is also your physical cursor, and you can treat a spatial object as a digital icon. Yet the spatial anchors we embed in physical objects require a camera to capture them. What if this mouse perceived the world with more than just an IR sensor?

Traditionally, spatial interaction has meant placing a camera on the other side to capture gestures, which I think is different from holding a camera to capture a digital object: the user has more autonomy than when trying to fit objects into a camera's field of view.

EarlyEXP1 — "Navigation of the hidden data" - How can we understand the meaning of information represented through a hidden signal?

EarlyEXP2 — "Simplify multimodal interface interaction" - What if we applied the user interface conventions we are used to to this new spatial interface?

EarlyEXP3 — "From navigation to sensing an invisible existence" - What if we could recognize the functions of data through direct feedback in 3D space?


EarlyEXP4 — "Giving digital content a body" - What if the location of the physical body were where the data is stored on the computer?

EarlyEXP5 — "Computer interface and environment merge into one" - How do we enable users to move from a traditional computer interface to the entire environmental interface?

EarlyEXP6 — "The shape of spatial objects as interface" - What if we could touch the objects around us as a form of computer interaction?

This project started with the idea that the value of digital content is that it has no physical form: it can be transmitted, copied, and exist anywhere in our environments. Yet it still needs a material representation to convey its informational value. By limiting it to spatial dimensions and storing it within a specific scope, could we collect digital content into our pockets like picking up stones on the roadside? Or would it be more like smell, with different information delivered to people according to different substances and their corresponding context?

Assuming we can physically interact with digital content existing in space, what sort of feedback will it trigger in people? The approach of this experimental project is to generate hypotheses through a series of imagined scenarios.
