Research

Research - wearables as physical interfaces with unique characteristics

To correct:
  • correct the translations
  • replace with original quotes
  • correct, format and add references list
  • superscript the reference markup

Physical and Tangible User Interfaces

Definitions

HCI - human-computer interaction
HCI, or human-computer interaction, is a rapidly developing field of technology. It deals with the interaction between humans and computers, particularly from the perspective of the users of the designed systems. The Association for Computing Machinery (ACM) defines HCI more precisely as
"a discipline concerned with the design, evaluation and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them".2
HCI is also referred to as HMI (human-machine interaction) or CHI (computer-human interaction). The rapid development of technology has brought significant changes to the field.

In HCI terms, a computer is a special kind of tool whose scope of application is so broad that its use is compared to dialogue: two-way communication similar to that between people.3 This paradigm, and the analogy to interpersonal interaction, forms the basis of theoretical reflection on human-machine interaction.

HCI is linked to many fields, including computer science, design, communication, psychology, sociology, anthropology and marketing, so there are many research methods that situate HCI and its elements in a specific context.
"Because HCI studies the human and the machine in the process of communication, it draws on knowledge of both sides. On the machine side, computer graphics techniques, operating systems, programming languages and development environments are relevant. On the human side, communication theory, graphic design, industrial design, linguistics, the social sciences, cognitive psychology, social psychology and human factors such as user satisfaction are important. And, of course, engineering and design methods are key."4
For this reason, only a multidimensional and multidisciplinary treatment of human-computer interaction allows the phenomenon and its significance to be fully understood in a modern world where the role of technology continues to grow.
Interface
One of the key concepts in HCI research is the interface. It is defined as follows:
"a user interface is a connection between a person and a device or system that allows the user to interact with (for example, exchange information with) that device or system. An interface is a boundary or intermediary between two dissimilar objects, devices or systems through which information is transmitted. The connection can be physical or logical."
In short, the interface is the point of contact between an information system and its user. In recent decades a number of ways of enabling human-computer communication have been developed. The most popular and currently dominant is the GUI (graphical user interface). Alongside it there are also the CLI (command line interface, based on the text console), VI (voice interface) and the most recent, the TUI (tangible user interface).

The physical interface can be understood in many ways. The English terminology widely adopted in most domains based on computer systems means that there is no settled translation between Polish and English. The still-unsettled theoretical foundations of this area likewise preclude an unambiguous, universal definition of the concept. The most commonly cited term used to describe physical interfaces is Tangible User Interfaces, developed by Ullmer and Ishii:
"The tangible user interface (TUI) builds on users' skills and situates physically embodied digital information in physical space. Its design challenge is the seamless extension of the physical affordances of objects into the digital domain.6"
"The key idea of TUIs remains: to give physical form to digital information, allowing it to serve as both representation and control for its digital counterparts. Through their physical embodiment, TUIs make digital information directly manipulable with our hands and perceptible through our peripheral senses.7"
Since the publication of Ishii and Ullmer's work, technological development has produced many solutions enabling communication between user and computer system in unprecedented ways. Examples include motion and distance sensors and biosensors of the body's physiological functions. The most significant is the development of a depth camera that interprets a person's position in three-dimensional space: the Microsoft Kinect. It opened up entirely new possibilities for computer vision based on the user's position and movement, in a way that is unobtrusive and intuitive for the user and requires no direct contact with devices.
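To make this kind of position-based, contact-free input concrete, the sketch below locates a user in a single depth frame. It assumes only that the frame is available as a NumPy array of per-pixel distances, which is the general form in which Kinect-style cameras deliver data through their various SDKs; the function name and the simple nearest-blob heuristic are illustrative, not any SDK's actual API.

```python
import numpy as np

def locate_user(depth_mm: np.ndarray, max_range_mm: int = 4000):
    """Return the (row, col) centroid of the nearest large object.

    depth_mm: 2-D array of per-pixel distances in millimetres,
    with 0 marking invalid pixels. Hypothetical helper, sketching
    the idea of position-based input without touching a device.
    """
    valid = (depth_mm > 0) & (depth_mm < max_range_mm)
    if not valid.any():
        return None
    # Crude segmentation: everything within 30 cm of the nearest
    # valid pixel is treated as the "user".
    nearest = depth_mm[valid].min()
    segment = valid & (depth_mm < nearest + 300)
    rows, cols = np.nonzero(segment)
    return rows.mean(), cols.mean()
```

An application can poll such a function every frame and map the centroid to interface parameters, so the user steers the system simply by moving in front of the camera.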

Another example of an interface in which the interactor's body becomes a kind of artifact is a system based on sensors of physiological functions. An illustration of this model is the game Brainball,8 in which players' brain-wave activity (recorded by EEG) moves a physical ball on the board. By relaxing, competitors can push the ball toward the opponent's side and win the game. Smart products, tangible interaction and tangible augmented reality are all based on integrating a physical interface with information systems. Areas employing the physical interface include industrial design, whole-body interaction, motion interaction, interactive surfaces and tables, embodied interfaces, ambient technology, ubiquitous and pervasive computing, interactive architecture and interior fittings, and organic interfaces.9

Not all of these are based on interaction with physical objects through manual manipulation, and therefore not all meet the criteria of the tangible user interface as defined by Ullmer and Ishii. In order to capture these phenomena, which I consider significant and promising, I have decided to lean toward the later (2006) and more open approach developed by Hornecker and Buur, who identify as features common to systems with a physical interface:
"corporeality and materiality physical embodiment of data physical interaction embedding in real spaces and contexts " 10
They thereby complement the most frequently quoted term, tangible user interface, in Ullmer and Ishii's sense. The broad definition proposed by Hornecker and Buur, covering the most important issues relating to the physical dimension, has been adopted by many researchers in the discipline. How broadly the physical interface should be understood remains controversial, and opinions are divided within the research community. Some researchers regard interfaces based on gesture and on reading the interactor's movement as full-fledged physical interfaces; others count only systems based on the manipulation of a three-dimensional object, an artifact other than the user's body. The TEI conference series (co-organized by Hornecker), which in 2010 changed its name from "Tangible and Embedded" to "Tangible, Embedded and Embodied Interaction", can be taken as an illustration of the concept's evolution and of this discussion, signaling that interfaces using the interactor's movement and gesture belong to the group of physical interfaces.11

Due to the multidisciplinary nature of research on human-computer interaction described above and the resulting multiplicity of theoretical approaches, and because the interface is inextricably linked to the context of the entire interaction process it belongs to, I have decided not to limit my work to the interface itself but to present it in a broader context. This approach derives from Paul Dourish's research on interaction, which resulted in the coining of the term tangible interaction, described by Hornecker as follows:
"Tangible interaction adopts the terminology preferred by the design community, which focuses on the user's experience and interaction with the system. As a broader perspective, it emphasizes corporeality and materiality, the physical embodiment of data, bodily interaction and embedding in real spaces and contexts. This embedding makes physical interaction always situated in a physical and social context."12

Evolution of physical interfaces

Background
The separation of the user from the surrounding real world in favor of virtual elements displayed on screens, as proposed by the GUI, was for many years the only alternative to the even more limited text console. The desire to restore physical aspects to the process of interacting with computer systems encouraged several researchers to attempt a different, richer HCI paradigm. The era of mass-produced, commercially available products from Xerox, and later Apple, established conventions not only for the graphical interface but also for the devices controlling it. The first computers, however, were built as single units, so all their components were developed for a specific application and a specific context. Control was exercised through buttons, knobs, sliders and other electronic components, each assigned a separate function.
Ubiquitous Computing
One of the first concepts opposing the virtualization of the interface is the idea of ubiquitous computing developed by Mark Weiser, published in the article "The Computer for the 21st Century" in Scientific American (1991). The author describes it as "a new way of thinking about computers in the world, one that takes into account the natural human environment and allows the computers themselves to vanish into the background".29 Ubiquitous computing has become something of a buzzword and is sometimes referred to as the "third wave" of computing.30

The main idea of ubiquitous computing, in Weiser's view, is to blur the boundary between the physical and virtual worlds, a boundary that in earlier applications was precisely the interface. This does not amount to pushing technology into the background; rather, it aims to gradually accustom people to its presence until it ceases to be seen as a separate element at all. Users then no longer treat technology as a tool but as an integral part of their environment, with which they solve problems or perform tasks. This process has become a central research issue in many fields under various names: "compiling" (Herbert Simon, computer scientist, economist and Nobel laureate), the "tacit dimension" (Michael Polanyi, philosopher), "visual invariants" (J.J. Gibson, psychologist), the "horizon" (Hans-Georg Gadamer, philosopher), "ready-to-hand" (Martin Heidegger, philosopher) and the "periphery" (John Seely Brown of Xerox PARC).31

In the publication "Designing Calm Technology", Mark Weiser and John Seely Brown introduced a division of technologies into two groups: those that inform and those that calm.32 According to the authors, IT systems, or rather their individual elements, constantly compete for the user's attention and focus. The alternative to this approach is "calm" technology, which uses not only the center of attention but also its periphery. In Weiser's words, it "engages both the center and the periphery of our attention, and in fact moves back and forth between the two".33

The periphery, in Weiser and Brown's sense, consists of those aspects of the environment that users are aware of but do not attend to.34 Only sudden changes, e.g. a working device falling silent, make the user aware of its presence again. Using the periphery of the user's attention is particularly advantageous for systems that perform many complex, time-consuming operations requiring no further action from the user until they complete. Peripheral technologies are classified as "calm" because they let users decide whether, and to which of them, to direct their attention in whatever way they find beneficial. This allows the flow of information to be graded and selected rather than overwhelming the user with competing signals. As Weiser warns, "We must learn to design for the periphery so that we can most fully command technology without being dominated by it."35
Graspable User Interfaces
One of the concepts built on ubiquitous computing was the so-called graspable user interface. It also drew on the earliest computer interfaces described above, based not on conventional input devices but on knobs, buttons and switches, whose operation required heightened, two-handed user activity. The pioneering project was Bricks, a system created by George Fitzmaurice, Bill Buxton and Hiroshi Ishii, on the basis of which the authors also created a theoretical framework for describing a new approach to the design of information systems.36

The most important term describing this interface became "graspable function handles": elements of the input device that closely corresponded to the virtual functions they performed. Fitzmaurice identified five of their basic features:37
1. space-multiplexed - each virtual operation has one corresponding handle, and they do not change over time
2. highly specialized - each handle performs only one function
3. concurrent - the handles act as parallel points of user control
4. spatially defined - each handle has a position in space relative to a specified frame or point
5. spatially variable - the handles can be moved in space, changing position relative to that reference
The features listed by Fitzmaurice remain relevant to today's physical interfaces, although his understanding of function handles is described as "pre-tangible".38 They are more a transfer of graphical interface elements into the physical world than a new paradigm built on physical representation and a substantive use of material qualities. Fitzmaurice and Buxton attempted to combine graspable function handles with Weiser's idea of calm technology, creating the concept of "graspable media".39
Tangible Bits
In the work "Tangible bits: Towards Seamless interfaces between people, bits and atoms" published in 1997, Hiroshi Ishii and brygg Ullmer introduced the term "tangible user interface" - often in the form of the acronym tui40. Their goal was to make technology and computer science truly invisible by using "natural physical affordances" 41. The intention was to achieve better readability and continuity (English seamlessness) of interaction between people and machine data.42 According to Ishii and Ullmer, tangible user Interfaces :“enrich the real physical world by pairing digital information with everyday physical objects and media.”43

Like the graspable user interface, Tangible Bits was modeled on metaphors taken from the graphical interfaces of the time. Windows, menus and widgets had counterparts in physical objects, through whose manipulation the user communicated with the system. The approach to interaction and to the representation of data and functions did not itself change radically.
Emerging Frameworks
The transfer of interaction elements from the digital world back into the physical one laid the groundwork for a new paradigm, which Ullmer and Ishii developed in their later publication "Emerging Frameworks for Tangible User Interfaces" (2000), providing a theoretical basis for the emerging new branch of HCI: physical interfaces and physical interaction.44 The approach was innovative in that it broke the remaining ties to the graphical interface and the limitations they implied. Graphical user interfaces had adopted the model-view-controller (MVC) structure, separating input from output. Ullmer and Ishii proposed an alternative scheme, model-control-representation (physical and digital), or MCRpd, with the following characteristics:45
1. Physical representations (rep-p) are computationally coupled to underlying digital information (the model);
2. Physical representations embody mechanisms for interactive control;
3. Physical representations are perceptually coupled to actively mediated digital representations (rep-d);
4. The physical state of interface artifacts partially embodies the digital state of the system.
The MCRpd model follows from Fitzmaurice's function handles and, like them, ties physical control to digital representation. Ullmer and Ishii's approach, however, offers richer possibilities for semantically interpreting the relations between these elements.
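A minimal sketch of the MCRpd idea in code may help. The names below (Model, PhysicalToken, render_digital_representation) are hypothetical, not from the paper; the point is that the physical artifact embodies control, its physical state partially expresses the model, and a coupled digital representation mediates feedback.

```python
from dataclasses import dataclass

@dataclass
class Model:
    """Underlying digital information (the 'M' in MCRpd)."""
    volume: float = 0.5

class PhysicalToken:
    """A physical representation (rep-p): it both embodies control
    and partially expresses the digital state through its own
    physical state (here, a knob-like rotation angle)."""
    def __init__(self, model: Model):
        self.model = model
        self.angle = 0.0

    def rotate(self, degrees: float):
        # Control: manipulating the artifact updates the model...
        self.angle = (self.angle + degrees) % 360
        self.model.volume = self.angle / 360
        # ...and a coupled digital representation (rep-d) gives feedback.
        render_digital_representation(self.model)

def render_digital_representation(model: Model):
    # rep-d: e.g. a projected volume bar next to the token.
    print(f"[projection] volume = {model.volume:.2f}")

token = PhysicalToken(Model())
token.rotate(90)   # physical action, digitally coupled
```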
Embedded Interaction
Another novel publication dealing with physical interfaces was Paul Dourish's book "Where the Action Is: The Foundations of Embodied Interaction", published in 2001. It analyzes sociological and philosophical theories relevant to the study of human-computer interaction. The book is built around the concept of embodied interaction, which the author defines as "the creation, manipulation, and sharing of meaning through engaged interaction with artifacts".46 Analyzing and combining positions from many of the humanities, the author proposes six principles of interaction design:
"1. computation is a medium; 2. meaning arises on multiple levels; 3. users, not designers, create and communicate meaning; 4. users, not designers, manage coupling; 5. embodied technologies participate in the world they represent; 6. embodied interaction turns action into meaning"47
The first two principles, in Dourish's view, imply that interactive technologies matter not only through their functionality but also through their symbolic and social value and their embedding in practice and everyday life. The interactivity of the medium means that meaning is created through communication with the system, but it can also be shared among the system's different users.

The next pair of principles builds on the previous ones. It refers to the fact that users of interactive systems themselves choose how to use the technology, according to their own preferences and needs. These, in turn, result from a complex network of connections between cognitive and social aspects and the performance of a particular activity. The actions people take are the result of processes of reasoning, including their connection with prior experience and acquired knowledge.

The last two principles concern the relationship between technology and the environment in which it is embedded. The way a system and its interface are designed influences how interactors use it, but it is not the only defining factor. Users themselves, through the subjective adaptation of the actions they perform, also give meaning to the system they use.

Dourish concludes that actions generate meaning not only in a functional sense but also in a social context. The "primacy of action" he advocates allows the physical interface to be considered through the prism of the sociological and digital effects of actions occurring in physical reality.48 Unlike earlier research on physical interaction, which focused on its computational and technological aspects, Dourish concentrated on the broader context in which the whole process takes place, in particular on action and the assignment of meaning to it. His research contributed to the development of user-centered design methods.

Characteristics of physical interfaces

Physical interfaces have evolved intensively in recent years. A number of theoretical frameworks have been developed that variously define and characterize the features of these interfaces and the whole process of interaction between IT systems and their users. Since physical interfaces build on the graphical interface, gradually exploring and reinterpreting its individual properties, the following characterization is based on the similarities and differences between the two approaches to interaction.49
Advantages
Communication
The interface's main task is to allow the user to communicate with the system. Since a system is a product of human activity, which shapes it and defines its possible functions, it can be said that it is the system designer who conducts a dialogue with the user through the interface.50 The appropriate choice of design resources can determine a project's success in given circumstances. Physical interfaces outperform graphical ones in stimulating learning, especially learning grounded in immediate, real-time interaction, as shown by studies of, for example, children and people on the autism spectrum.51

Hornecker and Buur52 point to three possible reasons for this relationship. Familiarity and "affordances" stemming from everyday life in a world full of physical objects make active engagement both more likely and easier for users. Compared with the GUI, physical interfaces have been credited with an "inviting quality",53 attracting more visitors, especially children and women, to explore them in a museum context.

The multiple access points from which the interface can be operated reduce the interaction bottleneck, so several interactors can use the system at the same time, in contrast to a graphical interface usually controlled by a single mouse. All operations performed by users manipulating analog objects are visible to the others present around the installation. Observing the behavior of other participants allows activities to be coordinated within the group. Well-designed interfaces can provide so-called embodied facilitation,55 gently constraining and guiding user behavior.56

A limited number of objects, the interface's resources, can encourage users to share and exchange them, to interact and to negotiate. This has been particularly evident in studies of games requiring joint decisions by users. One of the valued qualities of card and board games is face-to-face play. This type of interaction between users is not possible in fully digital systems, which hinders the building of connections between players who are not present in the same real space.57 Based on their research comparing player experience in digital and analog games, van den Hoven et al. state that the latter provide so-called social information,58 thanks to which players remain social beings. Sharing a game space in the real world enables multilevel, also non-verbal, communication. Board games, unlike their digital counterparts, use physical objects that are usually designed to encourage interaction not so much with the game itself as with the other players. Boards, for example, can be viewed from many sides and moved from place to place, and the rules can be agreed and changed on the fly. Fully computerized games based on graphical interfaces, by contrast, allow very complex rules, dependencies, calculations and simulations, but isolate players from one another more than they encourage cooperation and interaction. Van den Hoven et al. therefore propose a new model of hybrid games, based on a physical interface to a computer system, that combines the advantages of both worlds.59

One of the leading researchers of physical learning interfaces, Antle, indicates that they offer "both space and affordances for many users",60 so that they can produce "space for friends",61 thereby facilitating cooperation and interaction of the audience not only with the system but also with the other participants in the activity.

One of the most frequently cited examples of a physical interface that facilitates user collaboration is reacTIVision and the Reactable, developed in 2003 by Sergi Jordà, Günter Geiger, Martin Kaltenbrunner and Marcos Alonso as part of interaction research at Pompeu Fabra University in Barcelona.62 The reacTIVision system is now available under an open-source license. The Reactable is an interactive table consisting of a projector, a translucent glass surface and a camera reading markers developed specifically for the project. The application analyzes their position and rotation, translating them into corresponding virtual parameters - in the original, the controls of music synthesizers. By manually manipulating glass or plastic tokens (usually discs) with markers attached underneath, the user can perform music live.
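As a concrete illustration of this pipeline, the sketch below listens for the marker data that reacTIVision broadcasts and maps one token's rotation to a synthesizer-style parameter. It assumes reacTIVision's default TUIO-over-OSC output on UDP port 3333 and uses the python-osc package; a dedicated TUIO client library would handle the full protocol (alive/set/fseq messages), so treat this as a minimal sketch rather than a robust client.

```python
# pip install python-osc
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_2dobj(address, *args):
    # reacTIVision publishes /tuio/2Dobj messages; "set" messages carry
    # session id, fiducial (marker) id, x/y position and angle, among
    # other parameters.
    if args and args[0] == "set":
        session_id, fiducial_id, x, y, angle = args[1:6]
        # Illustrative mapping: token rotation -> a 0..1 "cutoff" value.
        cutoff = (angle % 6.2832) / 6.2832
        print(f"token {fiducial_id}: pos=({x:.2f}, {y:.2f}) cutoff={cutoff:.2f}")

dispatcher = Dispatcher()
dispatcher.map("/tuio/2Dobj", on_2dobj)

# 3333 is the default TUIO port used by reacTIVision.
BlockingOSCUDPServer(("0.0.0.0", 3333), dispatcher).serve_forever()
```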

As a "feature inviting" many users to interact with the system at the same time, one can note the round shape of the table, which has not been used before, so that each of the interactors has an equivalent position63. In the publication summarizing and describing the project, the team notes that the Interactive table surpasses the capabilities of the graphical interface in terms of real-time data manipulation, such as musical performance, thanks to the simultaneous incorporation of the actions of the Interactor, the reaction of the system and the visibility of the action-reaction for passive viewers and other active users.Other examples of a system where live action and direct manipulation are significant include LogJam for group video registration and Toontown for mapping user presence in audio space. In both applications, you need a quick system response to an event that is not in the focus of the user, busy with his main goal.65
Embeddedness in the world
By having a body, a person becomes part of space and has a spatial dimension. Space is the environment of human life as defined by Merleau-Ponty.66 The body becomes the reference point for perceiving the surrounding world and other people. Spatial relationships translate into the perception of the environment and have psychological significance. Human presence and interaction with a physical interface mean moving in real space and creating context, whether through deliberate manipulation or through exploration of the possibilities on offer. Every movement or change can generate meaning.67

The embedding of physical interfaces in the world goes back to the description of ubiquitous computing, which was to blur the boundary between the real and virtual worlds by gradually accustoming users to the presence of technology in their environment until it went unnoticed.68 In the case of physical interfaces, as the name suggests, the user and the interaction process itself remain embedded in the real world. Dourish notes that such interfaces "can occupy the same world as us and be located in our lifeworld".69 It is this emphasis on physicality, on the material aspect of the interface, that is the fundamental concern of the paradigm, making the technologies based on it a particular type of ubiquitous computing system.

The virtual nature of graphical interfaces means they exist only on screens and have no tangible shape. In contrast, the actual form of the physical interface means it occupies a specific place in space, like architectural elements, products and devices. As Hornecker and Buur note, physical interaction is always embedded in a real, analog, defined space.70 This means that the meaning generated by interaction with the system also depends on the location and circumstances in which it occurs. Conversely, an interactive installation can affect the perception of a given space.

A thorough understanding of, and designing for, interaction in context is one element of the shift that the social sciences have described as the "practice turn". In HCI it means moving away from seeing the interface merely as a carrier of information and a representation of data (as reflected in Ishii and Ullmer's work on Tangible Bits) toward treating it as an element of the process of physical interaction, incorporating human activity, control, creativity and the social dimension.71 This position explores how technology connects with everyday human activity and how it relates to the environment and circumstances in which it is used. Fernaeus et al. define this position as "the user's perspective in action and interaction with technology".72

Hornecker and Shaer invoke the concept of "offline activities": actions taken by interactors around the interface that are not aimed at direct communication with the system or at changing its state.73 Their role lies in the material layer or the social aspect of interaction. They are especially valued in interfaces based on many physical artifacts, whose manipulation requires proper management of resources and available space. Examples include counting objects, setting them aside in an inactive area, and exchanging them between users.

The issue of interaction with physical objects is discussed in the publication "Reality-Based Interaction: A Framework for Post-WIMP Interfaces" by Jacob et al.74 It explores realism in interfaces, whose deepening, in the authors' opinion, can reduce the distance between the digital and physical worlds, creating a more natural experience for the system's user. The main themes addressed in the framework are: naïve physics, body awareness and skills, environment awareness and skills, and social awareness and skills.

Naïve physics refers to the commonly understood rules of the real world, for example physical principles such as gravity. Body awareness and skills concern awareness of one's own material body and the skills necessary to control and coordinate it. Environment awareness and skills address people's perception of their surroundings and their ability to manipulate and navigate within them; this theme connects to the previously mentioned periphery of users' attention, as well as to the physical manipulation of material interface elements. Social awareness and skills are grounded in the perception of other people in the environment and the competence to communicate with them.
Tangible thinking
It is through existing in a body, an essentially physical form, that a person comes to know the world.75 76 Research shows that children develop their spatial cognitive abilities through motor experiences and learn abstract concepts through bodily manipulation of physical objects.77 People in occupations based on spatial reasoning, such as designers, architects and engineers, often use mock-ups and three-dimensional objects to analyze and visualize complex problems.78 Exploiting these dependencies makes physical interfaces, unlike their purely graphical counterparts, capable of supporting so-called tangible thinking: reasoning based on actions, physical manipulation and tangible representation.79 The most prominent researchers of tangible thinking are Klemmer and his team. In their publication they outline several dimensions of the concept, based on external representation,80 distributed cognition81 and studies of gesture.82
Gesture
Gesticulation is usually regarded as one of the ways humans communicate with each other. Studies show, however, that it has more functions: in both adults and children it can relieve cognitive load.83 Gesturing has also been shown to facilitate the planning and delivery of speeches and presentations.84

In a graphical interface operated with mouse and keyboard, the user has limited mobility: one or both hands are busy with the input devices. The situation is the opposite with a physical interface. The user can gesticulate freely while interacting with the system or with other users; their position and mobility are not constrained by position-sensing devices, since physical interfaces provide multiple equivalent access points. Moreover, installations using analog objects permit manipulation in three dimensions, which is impossible in graphical applications limited by the two-dimensionality of screens.85

Some physical interfaces use gestures as a way for the user to convey information for the system to read. These can be symbolic gestures, in an established convention or defined specifically for the project, or natural gestures imitating the corresponding actions performed in ordinary circumstances. Some physical interfaces are built around the latter.86 Examples include steering wheels in video games, which correspond to real control systems in cars. Controllers of this type are often used in military, security and medical simulations, where the most faithful possible reproduction of real instruments is needed; skills acquired in operating them can then be applied in real situations.

These examples refer to the paradigm of mimetic interfaces developed by the game researcher Jesper Juul.87 In this model, actions performed in the game reproduce actions from the outside world. Such interfaces are easier to use for players who are not adept with more conventional controllers. This is the case in Guitar Hero, which is controlled with a model guitar: a familiar and intuitive object.

Mimetic interfaces are one category of so-called natural interfaces, based on natural interaction. Wigdor and Wixon define the term as follows: "natural refers more to the user's behavior and feelings during the experience than to being the product of some organic process".88 The main design issue is the use of the interactor's existing skills for communication with the system. People acquire skills throughout their lives and practice them constantly, ensuring their survival in the world.

Competences such as interpersonal, verbal and non-verbal communication are crucial for life in society. Natural interaction occurs when people interact with a system much as they do when communicating with other people, through facial expressions, gestures, intonation and movement. As Savari89 notes, throughout the history of technology people have adapted to the possibilities of the technology at hand, learning to communicate with devices. Today, however, it is possible to develop technological solutions that adapt to human language, with its expressions, presentation and narration.90 Natural interfaces rely on reading body movement, gesture and sound, enabling interaction between users and the computer.

Technologies that read the user's gestures as input draw on kinesthetic memory: the ability to sense, remember and reproduce muscular effort, body position and movement in order to acquire skills.91 According to Kirk, the kinesthetic memory triggered by performing a gesture, such as moving an object, increases the user's awareness of the actions performed, increases the sense of control over the system, and reduces the likelihood of mistakes and imprecise actions.92

There is a group of motor skills that require experience and usually long practice, but allow an almost instinctive reaction to a stimulus without apparent mental effort. They rely on body-centric experiential cognition. Examples include riding a bicycle or driving a car. The trait is most clearly visible in sports practice, e.g. martial arts.
Epistemic actions and "thinking props"
Motor capacities can also be harnessed in the cognitive process through the use of physical elements as props for thinking and as external operations. The notion of "thinking props" derives from Kirsh and Maglio,94 who categorized actions into pragmatic ones, which have functional consequences for achieving the goal, and epistemic ones, which do not contribute directly to the intended effect but affect the understanding and cognitive processing of the task at hand. Among the latter they count diagnosing possible options, comparing them, and supporting the process of reasoning. Manipulating existing objects reduces cognitive load and improves user performance. Moving, pointing at, rotating, grouping, hiding or adjusting objects in order to visualize the available options can be regarded as epistemic activities, and the manipulated artifacts as thinking props.

Physical interfaces also afford "offline"95 activities, which are not interpreted by the computer and are not reflected in any way in the system's digital state. These lie beyond the "affordances" intended in the design. Nevertheless, they allow users to explore and appropriate the interface, and they support epistemic activities.96 97
Physical representation
In a broader context, Zhang and Norman98 have shown that the form of representation has a significant impact on the analysis and performance of human activities. Based on studies of alternative representations (isomorphs) of games such as tic-tac-toe and the Towers of Hanoi, they found that increasing the number of external representations improved players' performance, both in the time needed to find a solution and in the proportion of correct solutions. In a later publication Zhang99 concluded that external representations are integral elements of many cognitive activities, because they guide, constrain and even determine cognitive behavior.100

In contrast to graphical interfaces, physical interfaces use real, three-dimensional objects for the external representation of data and digital functions. Both approaches, however, draw on similar domains, rendering them in two or three dimensions respectively. Some domains lend themselves naturally to integration with a physical interface: architecture and chemistry, for example, have established presentation standards based on actual physical features, geometric or topographic. Other domains, such as economics, biology or music, lack such standards but have conventions that can be adapted to tangible representations. The last category comprises domains that impose no standards for presenting information, leaving decisions about symbolic and metaphorical relations to the designers.

Zuckerman introduced a classification of representations in physical interfaces into Froebel-inspired manipulatives (FiMs) and Montessori-inspired manipulatives (MiMs).101 The former use materials simulating real models, e.g. wooden blocks used for building architectural structures. Montessori manipulatives, in turn, are based on abstract representations: for example, blocks of systems supporting the learning of arithmetic and other mathematical operations, or blocks teaching the basics of programming.102

Another issue is the interdependence of individual elements in interfaces using three-dimensional objects to represent data and functions. Ullmer et al.103 proposed the "token+constraint" approach. Tokens are, in the authors' terms, objects that can be placed in or removed from corresponding constraints. An example is the Marble Answering Machine designed by Durrell Bishop.104 It is an answering machine whose messages are represented by marbles; the user can place a marble in a channel to play back the recording, or play it again. In this case the token is the marble and the channel is the constraint: it defines how the token can be manipulated.
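A minimal sketch of the token+constraint idea, with hypothetical names (Token, PlaybackChannel), in the spirit of the Marble Answering Machine: each token carries digital data, and the constraint both limits how tokens can be manipulated and gives that manipulation its meaning.

```python
from dataclasses import dataclass, field

@dataclass
class Token:
    """A physical object paired with digital data (here: a message)."""
    message: str

@dataclass
class PlaybackChannel:
    """A constraint: a rack that accepts tokens one behind another."""
    slots: list = field(default_factory=list)

    def insert(self, token: Token):
        # Placing a marble in the channel plays back its message.
        self.slots.append(token)
        print(f"playing: {token.message!r}")

    def remove(self) -> Token:
        # Taking the marble out removes the message from the queue.
        return self.slots.pop()

channel = PlaybackChannel()
channel.insert(Token("Call me back after five."))
replay = channel.remove()
channel.insert(replay)   # re-inserting the token plays it again
```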

The fit between compatible elements is expressed in their physical and mechanical properties, e.g. size, weight and shape, and is legible to users without experimentation. The structure of the physical interface signals the possibilities of interaction, suggesting the available ways of manipulating objects and preventing operations not intended by the designer.105 This makes operating the technology intuitive. The property draws on so-called perceptual inference: direct apprehension by human perception, with no need for complex logical deduction.106 107

Another theme related to physical representation is the concept of evocative objects:108 everyday objects that serve as emotional supports, evoke memories and inspire new ideas. Their influence on cognitive and emotional development was described by Sherry Turkle in the publication "Evocative Objects: Things We Think With". The use of evocative objects in interfaces allows actions to take on a more emotional, metaphorical or symbolic meaning.
Spatial dependence and directness of interaction
Control of an interactive system can be based on time-multiplexing: manipulation of a single input device whose functions change over time. This is how graphical interfaces most often work. The alternative is space-multiplexing, in which each function of the system has its own controller, and each controller occupies its own place in space. This is characteristic of physical interfaces containing more than one controlling object, each with a dedicated scope of action.109
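A toy sketch of the contrast, with hypothetical names: in a time-multiplexed GUI, one device (the mouse) is re-bound to whatever function currently has focus; in a space-multiplexed physical interface, every function has its own dedicated controller.

```python
# Time-multiplexing: one input whose meaning depends on current focus.
class Mouse:
    def __init__(self):
        self.focus = None            # re-assigned as the user clicks around

    def drag(self, amount: float):
        if self.focus is not None:
            self.focus(amount)       # same gesture, different meaning

# Space-multiplexing: one dedicated controller per function.
class Knob:
    def __init__(self, function):
        self.function = function     # fixed for the controller's lifetime

    def turn(self, amount: float):
        self.function(amount)        # always means the same thing

volume = lambda a: print("volume", a)
tempo = lambda a: print("tempo", a)

mouse = Mouse()
mouse.focus = volume
mouse.drag(0.1)                      # must re-focus before adjusting tempo

volume_knob, tempo_knob = Knob(volume), Knob(tempo)
volume_knob.turn(0.1)                # both functions reachable in parallel,
tempo_knob.turn(-0.2)                # no selection step needed
```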

In traditional, time-multiplexed GUIs, each mouse click can produce a different effect depending on where the user's "focus" currently lies: in which window it occurs and which object is selected. The standard controller must therefore have a universal, often abstract shape and appearance, neutral with respect to the currently assigned function. In space-multiplexed systems, by contrast, the differentiated functions performed by the artifacts permit a differentiation of their physical characteristics. This increases the functionality and ease of use of the system. The invariant relationship between token and function allows tokens to be tailored in appearance, giving them expressive value and creating "affordances".110 According to Hornecker et al., this makes physical interfaces strongly specific (strong-specificity).111

In graphical user interfaces the user can perform only one operation at a time, which imposes sequential work. Physical interfaces, thanks to multiple objects controlling the system, allow many manipulations to be performed in parallel. This speeds up the execution of tasks. Performing several operations synchronously is also made possible by eliminating the need to pay visual attention to the artifact currently being moved. This process engages what Fitzmaurice calls "muscle memory", exploiting human spatial memory instead of visual memory.112

Time-multiplexing assumes that the object the user wants to manipulate must first be selected and brought into the "focus state". The sequence required in systems based on this model - select function, execute action, select the next function - is eliminated in applications based on spatial multiplexing. This makes them more efficient than their predecessors.113

As Beaudouin-Lafon states, the immediacy of interaction increases as the number of mediating steps between system and user is minimized. The author categorizes interface elements into input devices (e.g. the mouse), instruments (e.g. slider, knob, button) and interaction objects (the currently active entities, e.g. text), and then proposes measures of the immediacy of interaction based on the relationships between these three components: the spatial and temporal distance between the instrument and the interaction object, the difference in degrees of freedom between the input device and the instrument, and the similarity between the manual action or gesture and the effect it produces on the object. The interaction object can also take a corresponding physical form, thus becoming its own input device and controller. These characteristics make physical interfaces, in Beaudouin-Lafon's sense, enhance the immediacy of interaction to a much greater extent than other forms of HCI, so the user can be more involved in the interaction.

In the view of Hutchins, Hollan and Norman,116 the sense of immediacy depends on the distance between the user's thoughts and the physical requirements of the application in use, and on the sense of directly manipulating an object. The authors describe the relationship between the user's intentions and the possibilities offered by the system using the concepts of the gulf of execution - the distance between human expectations and their interpretation by the computer - and the gulf of evaluation - the distance between the feedback provided by the machine and its understanding by the user.

Hutchins et al. distinguish two types of distance: semantic distance, describing the relationship between the interactor's intentions and the meaning of expressions in the interface language, and articulatory distance, referring to the relationship between those expressions and their physical form. This issue is reflected in the trade-off between generality and specificity of representation. For example, articulatory distance can be reduced by using a mimetic interface based on a movement that has an equivalent outside the system.
High specificity supporting iconicity and affordances (?)
The high specificity of input objects presented earlier can suggest possible actions, functions and modes of interaction through the use of physical "affordances". This contributes to an intuitive translation of actions into desired results, which benefits both users and system designers.

The concept of affordance was coined by the ecological psychologist James J. Gibson.119 He defined affordances as properties enabling action between a human or animal and the surrounding world and its elements. He gave the example of a flat surface, which makes sitting on it possible, and a pointed one, on which this is impossible because the appropriate characteristic is absent. For Gibson, affordances are inalienable, unchanging characteristics beyond the designer's control. Gibson's ideas were carried into design by Donald Norman. In his view, affordances "play very different roles in physical products than in the world of screen products".120 Unlike his predecessor, Norman found that affordances may depend on the user's previous experience, knowledge and culture.121 On this basis he distinguishes the properties of objects that are invariant from those given to them by the designer: in analog objects both real and perceived affordances can be controlled, whereas in digital products only the perceived affordances depend on the designer.122

The analog form of artifacts means they remain unchanged at all times. Once correctly associated mentally with the function they perform, they require no further interpretation. As Fitzmaurice demonstrates in his research, technologies implementing high controller specificity perform better in tasks with interdependent structures, speeding up the process of user internalization.123

The physical properties of artifacts can be configured in ways that carry meaning. This applies, for example, to shape, color, size, temperature, hardness and the many other characteristics by which any material object in the real world can be described. For controllers in interfaces, beyond the arrangement of the objects themselves, it also matters how the user can manipulate them. This is determined, among other things, by weight, texture and the shape of the edges (sharp ends discourage touching; rounded corners create an impression of friendliness). Smaller, lighter objects can be picked up and moved freely in space. The size of an artifact also determines how it is grasped, which has itself become a topic of research.124 For the average person, objects 5-10 cm in size are the easiest to manipulate in the hand. Precise manipulation involves the thumb and one or two fingers, which suits objects smaller than 5 cm, while objects larger than 10 cm are usually grasped with the whole hand. It follows that physical properties impose, or at least suggest, the way a user can interact with them.

Manipulating three-dimensional objects requires manual action, physical contact, on the interactor's part. Operating them produces tactile feedback,125 so no visual confirmation, and therefore no eye contact, is needed. This distinguishes physical interfaces from those based solely on feedback through projections or displays, which usually engage only one sense.

Building on the exploration of the "inviting qualities" that some objects possess, the term "irresistibles"126 was developed. According to Overbeeke et al., it characterizes objects that encourage people to interact with them through manual manipulation: the "promise of pleasure".127

The physical form of interface elements should follow from the best use of their "affordances".128 129 Precise design guidelines cannot be formulated, however, because physical properties are determined by a large number of parameters, and the final impression an object makes on interactors depends on a combination of many factors. It should also be noted that an "affordance" is closely tied to the possibility of a particular person taking a particular action; it is therefore a relative term. This fact matters greatly to interface designers, and adaptation to the target audience is one of the main concerns of user-centered design. Because of the larger size of their hands, for example, adults grasp objects differently than children, and products intended for children have handles that adults cannot use comfortably.

Limitations

Physical interfaces have many features that can appeal to both users and designers. However, the pursuit of originality and technological novelty should not obscure other elements important to the interaction process, in particular efficiency and user experience. Ishii et al. drew attention to another problem physical interfaces can generate:
"Such systems require a careful balance between physical and graphical expression in order to avoid physical clutter and to exploit the contrasting strengths of the different forms of representation. This balance between physical and digital representations is one of the main design challenges of physical interfaces."130
Some of the limitations of physical interfaces are closely related to their physical nature, while others stem from how currently available technology works and from its imperfections. The development of electronics and computing may make it possible to remove some of these restrictions or to establish entirely new solutions.
Scalability and risk of loss of components
Physical interfaces are best suited to applications that do not require complex structures. Complex systems of representation, operation and relationships between elements are difficult to translate into physical objects. Using material artifacts means adapting to their inherent physical characteristics, including size and volume. Tokens occupy space on the interface's working surface, which has its own limits, for example the range of the camera reading the markers. User convenience also matters: manipulating artifacts requires easy access to them, so they should be within the interactors' reach. The interface surface thus defines the number and size of objects that can be placed on it. In graphical interfaces the equivalent notion is "screen real estate",131 referring to the virtual elements that can be displayed on screen at once.

The problem of "physical clutter" 132 arises from spatial constraints.Filling the space to the maximum makes it impossible to stack more tokens on it. In this case, the size of the working surface becomes a constraint. Artifacts require space not only when they are in use, but also when they do not participate in the interaction process. It is then necessary to find a neutral place to hold them. For interfaces based on computer vision, this must be a location outside the camera's range of vision, which must also be taken into account in the design process. Due to the small size of the tokens and the need to move from the storage point to the place of use, it is also possible to accidentally lose or destroy them-for example, dropping a glass element on the floor causing damage.133

Virtual interfaces let designers change the appearance of elements and give users the ability to adjust certain parameters to their preferences or needs; a map or text can be enlarged, for example, so that visually impaired people can read it easily. In a physical interface the artifacts have a fixed size imposed by the designers, in a certain proportion to the other elements. In many cases enlarging one object would require adjusting the size of all the others. Edge and Blackwell call this property bulkiness.134

The limited space available to users of physical interfaces also limits their ability to juxtapose and compare the available options (juxtaposability135). The situation is different for digital representations and their layouts, which can easily be copied, saved and reopened, not only within a session but even after a long time. Graphical interfaces also allow multiple objects to be selected and moved arbitrarily in virtual space, and the view to be panned or scaled if it proves suboptimal for the task. In physical interfaces, by contrast, the lack of scalability and the spatial constraints translate into "premature commitment", as Edge and Blackwell call it.136 Once an arrangement of objects has begun to take shape at a given location in the interactive space, moving it later, for example when room is needed for a second layout, is much more difficult: repositioning requires recreating an entire, often complex arrangement of items that can usually only be moved one at a time.
Versatility and adaptability
As Poupyrev137 notes, graphical interfaces are based on digital data and representations that are easy to manipulate. Physical interfaces, based on objects with a material form, are different. The high specificity described earlier, which exploits "affordances" and characterizes this model of interaction, makes such interfaces more intuitive to use. But this comes at the cost of universality, versatility and responsiveness to change.

The WIMP model used in graphical interfaces implies the use of windows, which let the user switch the system view to whatever is of interest at a given moment, leaving the others minimized even as they continue to perform operations. It is therefore possible to carry out many tasks at once, such as playing music while writing messages. Physical interfaces, by contrast, are because of their properties most often dedicated to much more limited tasks, with a lower degree of complexity and a restricted range of data and operations that can be captured.

Physical interfaces, as already shown, are particularly useful in tasks requiring real-time activity, as in musical improvisation with the Reactable. The system continuously interprets the current state: the position and rotation of tokens relative to a reference point, or other parameters supplied to the application. This continuous coupling of the analyzed signal with the response makes it impossible, or much harder, to implement functions such as undoing an operation, browsing the history of actions, or replaying them, which by contrast have wide application in other interfaces.138

Jacob et al.139 point out that designers should strive to make interfaces correspond as closely as possible to reality. This is particularly important for gesture-based systems and naturally mapped interfaces. However, the realism of the representations and their high specificity reduce their adaptability and versatility. This trade-off requires an appropriate balance, adapted to the results desired in a specific application by a specific target group.
Physical activity requirement
Interaction with a physical interface has consequences typical of all manual operations: it requires a much greater range of motion from the user than is needed to operate a graphical interface.140 The latter is most often based on small movements of a computer mouse, mainly from the wrist, and the hand can rest its weight on the device or the surface of the table; these are not energy-intensive activities. Muscle strain can result in pain and, in severe cases, health consequences. In physical interfaces, "tokens are constantly being lifted and moved, which is their basic model of interaction".141 Manipulating three-dimensional objects, or interacting with a system that reads gestures and user movement, requires the use not only of the hand or arm but sometimes of the whole body. Depending on the size of the interactive area and the size and weight of the tokens, use of the system may be limited and tiring over longer periods. The physical interface therefore also needs to be designed with ergonomics in mind.

The issue of the user-friendliness of an interface is particularly present in game design. A significant component of the user experience is fun, which consists of fantasy, challenge and curiosity.142 UX in games is also associated with the problem of difficulty. Csikszentmihalyi introduces the concept of flow,143 a state of immersion in a given activity that can be achieved by striking the right balance between competence and challenge. Tasks that are too easy quickly become tedious; difficult ones can be frustrating, but they also reward effort with satisfaction. This relationship also translates into the design of physical game controllers, which can make operating the system easier or form part of the challenge. As Johnson and Wiles144 write: "achieving mastery of system control is an important part of most games." A well-chosen physical interface can completely change the perception of the game's mechanics. It can create a user experience impossible with a graphical interface and standard methods, and encourage the player to engage in the interaction for many hours. These opportunities are exploited in educational games, and often also in institutions such as museums and theme parks.
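To illustrate the flow balance described above, the Python sketch below (an illustration only, not taken from the cited sources; all names and values are assumptions) nudges a game's difficulty toward the player's measured skill, keeping the challenge in the band between boredom and frustration.

```python
# An illustrative sketch of the flow balance described above:
# difficulty tracks the player's measured skill so the challenge
# stays between "too easy" (boredom) and "too hard" (frustration).
def adjust_difficulty(difficulty: float, success_rate: float,
                      target: float = 0.7, step: float = 0.05) -> float:
    """Nudge difficulty so the player's success rate approaches `target`.

    success_rate > target  -> task too easy, raise difficulty
    success_rate < target  -> task too hard, lower difficulty
    """
    difficulty += step * (success_rate - target)
    return min(max(difficulty, 0.0), 1.0)  # clamp to a 0..1 scale

# Example: a player winning 90% of rounds gets a slightly harder game.
level = 0.5
for observed in (0.9, 0.85, 0.6):
    level = adjust_difficulty(level, observed)
    print(round(level, 3))
```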

Summary

In recent years, physical interfaces and the interaction based on them have become a fully recognized field of human-computer interaction. New paradigms have made it possible to develop innovative methods of communication between humans and machines. Moving away from graphical interfaces, blurring the line between the real and the virtual world, the ubiquity of technology, support for cognitive functions, collaboration and tangible thinking are just some of the related issues that define the directions in which research is conducted.

This work adopts a broad treatment of the topic and widens the definition to capture phenomena hitherto overlooked, particularly in older source materials, which are of ever-increasing importance as technology develops. The original and most popular definition of physical interfaces, developed by Ullmer and Ishii, did not include gesture-based interactions that involve no artifacts and engage the whole body. The newer approach developed by Hornecker et al. not only treats them as full-fledged physical interfaces, but also points to the non-autonomy of the interface, treating it as one element of a whole, rich and complex interaction process.

A broad context has been taken into account by considering the whole course of the interaction, and not just the interface itself, given its non-autonomous nature. Thanks to this, the new paradigm has been placed in the light of contemporary technological solutions and discussed using theories originating in the many fields related to interaction. This is necessary for such a complex process, involving psychological, sociological, semantic, cognitive, design and technical aspects. The multidimensional treatment of the topic in the theoretical sphere can be reflected in practical approaches to designing physical interfaces, giving designers a solid foundation that takes into account the diverse needs of users.

The paper presents a cross-section of researchers' achievements to date and an attempt to structure them in the form of a characterization of physical interfaces, presented in relation to graphical user interfaces. The two were originally considered opposites, but they can be complementary elements of interaction. The characterization showed the properties in which physical interfaces surpass graphical ones: the possibility of physical manipulation, location in the real world, tangible thinking, spatial dependence and immediacy of interaction, and high specificity enabling iconicity and affordances. They also have their limitations: limited scalability and the risk of damage to components, low versatility and adaptability, and the requirement of physical activity from the user.

Bearing in mind that the user's experience, satisfaction and ease of achieving the desired goal are of paramount importance in the interaction process, the design process should include the choice of the interface whose properties are optimal for a given project and which best supports the chosen style of interaction. Physical interfaces are a good alternative to graphical ones, but they do not displace them completely. Each has its optimal range of applications, following from its nature and characteristics. Designers must choose between them, or combine them, depending on the intended purpose, the audience, the nature of the interaction, and the spatial and social context.

A critical evaluation of the possibilities of a given interaction style, analyzing the many factors affecting the quality of the user experience, is the basis of the design process. A good design exploits the strengths and minimizes the shortcomings of the chosen approach. For example, the advantages of physical and graphical interfaces can be combined, making it possible to achieve results impossible with either one alone.145

As Hornecker146 and Jacob147 point out, from a research perspective experimental searches for innovative solutions are also valuable, even if they do not end in a design with optimal properties for the user. They allow the advantages and limitations of physical interfaces to be tested, and sometimes also the development of alternative possibilities and models of operation for technologies and interfaces.

Hornecker and Shaer identify the following possible directions for the future development of interaction research in the physical context:
- "actuation -from physical interfaces to organic interfaces - from physical representation to physical resources for actions interaction not studied -whole-body and performative interaction, and - long-term interaction studies " 148
One of the implications of the novelty of the field and the early stage of scientific research on it is the lack of long-term tests. Most experiments are conducted in short sessions (a few minutes, a few hours or, at most, a few days). The results obtained in this way describe the system's initial attractiveness to the user and its ease of use, but say nothing about long-term and repeated use.

The relative novelty of physical interfaces and of research on them (less than 20 years) compared with the whole field of human-computer interaction means that the theoretical description of the phenomenon is not yet firmly grounded. There are many approaches, originating in various related fields, which therefore interpret physical interfaces and the whole process of interacting with them in diverse, interdisciplinary ways. The continuous development of technology and the opportunities it provides also mean that the paradigm keeps evolving at a rate exceeding researchers' efforts to describe it.

The true significance of the phenomenon, and how it will affect our everyday communication with technology in the future, remains an open question. It requires further scientific and practical research into the possibilities arising at the intersection of the real and digital worlds and into technological mediation between humans and machines.

FURTHER RESEARCH: Wearables as interfaces: unique characteristics, scope of definitions, paradigm shift

social context

physical features - flexibility and softness

intersection of the digital and physical worlds

the essence of embodiment - the body

wearability - no fixed position or spatial constraints, full mobility

Awareness vs no awareness of interaction, intimacy, privacy and consent discourse

Haptic feedback loops with on-body actuators

Touchless seamless interaction paradigm shift

more human-like communication, more intuitive experience

No action in interaction?

Interactive systems vs. simple wearables with no microcontrollers

No circuit - no interaction? Is it still an interface? The scope of the definition of interface

New generation of Graspable User Interfaces

Contrary to one of the earliest approaches, which defined physical interfaces as Graspable User Interfaces built from tangible tokens with semi-fixed spatial constraints, over the last 30 years the technology has evolved from simple artifacts awaiting human action into complex systems embedded directly on human bodies. Now wearables have the ability to "grasp" humans within themselves (Fitzmaurice, George W., Ishii, Hiroshi and Buxton, Bill (1995): Bricks: Laying the Foundations for Graspable User Interfaces. In: Katz, Irvin R., Mack, Robert L., Marks, Linn, Rosson, Mary Beth and Nielsen, Jakob (eds.) Proceedings of the ACM CHI 95 Human Factors in Computing Systems Conference, May 7-11, 1995, Denver, Colorado, pp. 442-449).

What's next? Future development and application possibilities


