Ebook: DHM2020
Digital human modeling (DHM) is an active field of research directed towards the goal of creating detailed digital models of the human body and its functions, as well as assessment methods for evaluating human interaction with products and production systems. These have many applications in ergonomics, design and engineering, in fields as diverse as the automotive industry and medicine.
This book presents the proceedings of the 6th International Digital Human Modeling Symposium (DHM2020), held in Skövde, Sweden from 31 August to 2 September 2020. The conference was also accessible online for those unable to attend in person because of restrictions due to the Covid-19 pandemic. The symposium provides an international forum for researchers, developers and users to report their latest innovations, summarize new developments and experiences within the field, and exchange ideas, results and visions in all areas of DHM research and applications.
The book contains the 43 papers accepted for presentation at the conference, and is divided into 6 sections which broadly reflect the topics covered: anthropometry; behavior and biomechanical modeling; human motion data collection and modeling; human-product interaction modeling; industry and user perspectives; and production planning and ergonomics evaluation.
Providing a state-of-the-art overview of research and developments in digital human modeling, the book will be of interest to all those who are active in the field.
This book of proceedings contains papers accepted for the 6th International Digital Human Modeling Symposium (DHM2020), hosted by the University of Skövde in Sweden, and held at the ASSAR Industrial Innovation Arena in Skövde, as well as online, August 31–September 2, 2020.
The International Digital Human Modeling Symposium provides an international forum for researchers, developers and users to report their latest innovations, summarize new developments and experiences within the field, and exchange ideas, results and visions in all areas of digital human modeling research and applications. It is a major event for researchers, academics and industrialists engaged in topics such as:
– Anthropometry and 3D human body modeling
– Human functional data
– Musculoskeletal human models
– Motion capture and reconstruction
– Posture and motion simulation
– Modeling for subjective responses
– Applications and software demonstration
– Virtual reality and DHM
– Mental/cognitive models and integrated models
– Older, disabled and other populations
– Verification and validation of DHMs
– Model standards and protocols
– Body part modeling
– Virtual humans’ appearance
– Biomechanical modeling
– Human vibration modeling
– DHM in safety applications
– DHM in game applications
The proceedings of DHM2020 consist of 43 papers, subdivided into six parts that reflect the topics addressed at the symposium.
Part 1 is entitled Anthropometry. It contains papers on the collection and processing of anthropometric data, and on the development of methods for using anthropometric data in DHM settings, e.g. in the design of truck interiors and protective equipment. Also included in this part are methods for handling 3D scan data and skewed data, and for generating full body shapes from a limited number of measures.
Part 2 is entitled Behaviour and Biomechanical Modeling. It contains papers on cognitive modeling of roadside human interactions, and on physical musculoskeletal modeling of jaw motions. Modeling of hand-eye strategies and vision behaviour is also covered, representing areas at the intersection of cognitive and physical modeling. Also presented are modeling technologies, including optimal control and neural networks.
Part 3 is entitled Human Motion Data Collection and Modeling. It contains papers on reach and grasp modeling, as well as on posture stability and hand trajectories. This part also includes papers on how to gather motion data with 3D textiles and smart clothing, and how to store motion data in databases.
Part 4 is entitled Human-Product Interaction Modeling. It contains papers on how vehicle drivers interact with automotive interiors. Seat interaction for vehicle drivers and pilots is presented, as well as papers on models for human-seat foam interaction. Also included in this part is the modeling of exoskeletons as human support.
Part 5 is entitled Industry and User Perspectives. It contains papers offering industry, health, and medical sector perspectives. Examples are given of applications of DHM software and associated technologies. Future needs and identified gaps are discussed. Several papers focus on the usability of DHM software, both on desktop and in VR. Also included in this part is the gamification of DHM.
Part 6 is entitled Production Planning and Ergonomics Evaluation. It contains papers on DHM as an ergonomics evaluation tool. Gender perspectives on DHM are presented, as well as a case from the maritime sector. The development of a multi-objective approach for DHM simulation and evaluation is presented. DHM simulations are compared with motion capture data. Also included in this part are DHM tools with VR functionality, combined with motion capture and AI technologies.
We would like to thank:
– all authors of papers,
– the members of the Scientific Committee who assisted with the blind peer review of the papers submitted and presented at the symposium,
– the keynote speakers Cecilia Berlin, Johan Iraeus, and Kalle Sandzén for sharing their experiences,
– our sponsors ESI, Volvo Group, IPS, and University of Skövde for supporting the event,
– IOS Press and editor-in-chief Josip Stjepandić for accepting to publish the DHM2020 proceedings in the book series Advances in Transdisciplinary Engineering (ATDE),
– everyone who has contributed to the proceedings of DHM2020, and to the realization of the symposium.
Organizing committee DHM2020
Lars Hanson – General Chair
Dan Högberg – Program Chair
Erik Brolin – Local Chair
Pernilla Klingspor – Event Coordinator
Amila Domi – Finance Officer
University of Skövde, Sweden
Over several decades, police officer body dimensions have increased, as have the body dimensions of many Americans. But the external dimensions of a law enforcement officer completely outfitted in all of his or her gear have increased dramatically, with the near-constant use of body armor and the addition of body cameras, radios and a host of other work-related items. At the same time, the available space in his or her police cruiser has decreased, with the addition of dash cameras, radios, and computers and the modernization of the bucket seat design. The result is a disaccommodation problem that grows with each addition of new equipment for either the officer or the vehicle.
Digital human modeling is an ideal tool to help solve this accommodation problem, by creating realistic models of officers wearing their gear. To create a database for use in that modeling, we recruited approximately 1000 officers from 12 locations around the US and obtained whole body, head, hand, and foot scans from each. In addition, we measured them for a series of traditional anthropometric dimensions both semi-nude and fully equipped in their uniform, body armor and gear. The differences obtained between the equipped and semi-nude officers will allow the verification of future models. Further, we collected data on the vehicles they use, any difficulties with their current vehicles, and the ancillary equipment worn on the body. This paper presents partial results of that study.
The significant challenge going forward will be to create models that take into account the wide diversity in how officers wear their equipment on the body. For example, many officers carry a weapon on the duty belt; some carry it in a thigh holster. Some officers carry a radio on the shirt; others carry it on the duty belt, sometimes in front, and sometimes on the side. We conclude with a suggestion that modelers use data from this survey to accommodate the variability added by the equipment.
Digital human modeling systems (DHMs) benefit from detailed, up-to-date anthropometric data. Whereas the clothing industry focuses on anthropometric measures according to ISO 18825-2, ergonomic- and safety-related measures are defined in ISO 7250-1. For the current research project, body scan data was collected as part of an epidemiological study (Study of Health in Pomerania, SHIP). ISO 20685-1 recommends a validation study comparing manual and 3D body scan data from at least 40 persons if the data are to be included in anthropometric databases. The current study evaluated data from 44 participants. The scans and the manual measurements for each participant were taken successively on the same day. The definitions of anatomical landmarks differed for some parameters between the ISO 7250-1 standard and the standard operating procedures (SOPs) of the SHIP study. As it was not possible to change the methods of the SHIP study, the authors performed a relative offset calculation. With few exceptions, the validation measures exceeded the maximum error allowances from ISO 20685-1:2018. The paper discusses possible root causes of the evaluated differences.
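The comparison below is a minimal sketch of such a validation check, not the SHIP study scripts: it computes the bias and 95% confidence interval of paired manual and scan-derived values and compares them against an example maximum allowable error in the spirit of ISO 20685-1 (the 4 mm allowance and the measurement values are illustrative assumptions).

```python
# Hypothetical paired data (mm); not taken from the SHIP study.
import numpy as np

manual = np.array([1752.0, 1688.0, 1801.0, 1745.0])   # manual measurements
scan   = np.array([1755.5, 1693.0, 1804.0, 1750.5])   # scan-derived values

diff = scan - manual
bias = diff.mean()                                    # systematic offset
ci95 = 1.96 * diff.std(ddof=1) / np.sqrt(len(diff))   # 95% CI of the mean difference
allowance_mm = 4.0                                    # example allowance, not the normative value

print(f"bias = {bias:.1f} mm, 95% CI = ±{ci95:.1f} mm")
print("within allowance" if abs(bias) + ci95 <= allowance_mm else "exceeds allowance")
```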
Some anthropometric measurements, such as body weight, often show a positively skewed distribution. Different types of transformations can be applied when handling skewed data in order to make the data more normally distributed. This paper presents and visualises how square root, log-normal, and multiplicative inverse transformations can affect the data when creating boundary confidence ellipses. The paper also shows the differences in the created manikin families, i.e. groups of manikin cases, when using transformed distributions or not, for three populations with different skewness. The results from the study show that transforming skewed distributions when generating confidence ellipses and boundary cases is appropriate in order to more accurately consider this type of diversity and correctly describe the shape of the actual skewed distribution. Transforming the data to create accurate boundary confidence regions is thought to be advantageous, as this would create digital manikins with enhanced accuracy that would produce more realistic and accurate simulations and evaluations when using DHM tools for the design of products and workplaces.
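As an illustration of the idea (a minimal sketch under assumed synthetic data, not the paper's exact procedure), the following snippet log-transforms a positively skewed weight variable, constructs a 95% confidence ellipse together with stature in the transformed space, places boundary cases on the ellipse, and back-transforms them to the original units.

```python
import numpy as np

rng = np.random.default_rng(0)
stature = rng.normal(1755, 85, 2000)                 # mm, roughly normal
weight = np.exp(rng.normal(np.log(78), 0.18, 2000))  # kg, positively skewed

X = np.column_stack([stature, np.log(weight)])       # transform the skewed axis
mean, cov = X.mean(axis=0), np.cov(X.T)
vals, vecs = np.linalg.eigh(cov)                     # ellipse axes from the covariance

k = np.sqrt(5.99)                                    # ~95% coverage for 2 DOF (chi-square)
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
unit = np.column_stack([np.cos(angles), np.sin(angles)])
boundary = mean + (unit * k * np.sqrt(vals)) @ vecs.T

# Back-transform the weight axis to obtain manikin boundary cases in kg.
cases = np.column_stack([boundary[:, 0], np.exp(boundary[:, 1])])
print(np.round(cases, 1))
```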
The paper addresses the established concept of multivariate anthropometry, a term explained in several other studies. The method consists of creating a family of manikins using a combination of more than one anthropometric variable in order to accommodate more users in a physical ergonomics analysis scenario. The paper explains how this method was adopted at a truck OEM and how it affects the design aspect of ergonomics. The paper also describes how such a method would have been difficult to apply before the advent of DHM and CAD environments. Communicating this working method, which is within reach of many industries, is a history worth being aware of and sharing.
Another section of the paper describes how different DHM tools deal with this multivariate method, and what possibilities are open to users when they create their manikin families in different software.
The statistical concept of the percentile is also addressed, presented not as a surpassed method but rather as one important variable of the multivariate approach.
The use of statistical body shape models (SBSM) offers the possibility to generate a realistic full body shape from a limited number of measures/predictors, such as traditional anthropometric dimensions, surface landmarks, etc. The purpose of the present work is to explore the possibility of creating a personalized surface model with a small set of easily measurable parameters, and to compare the quality of SBSM-based prediction as a function of the predictors used. A sample of 164 full body scans in a standing posture from European and Chinese males was selected based on stature and BMI. After cleaning the raw scans, a non-rigid mesh deformation method was used to fit a customized template onto the scans. Then, a principal component analysis (PCA) was performed to build SBSMs with different sets of predictors, including anthropometric dimensions, landmark coordinates, and postural parameters. Partial least squares regression was used to account for the correlated nature of the predictors. As statistical models cannot exactly match the target values of the predictors, an optimization was further proposed to better match the targets while not deviating too much from the initial prediction of the statistical regression. A leave-one-out (LOO) procedure was used to evaluate the quality of the SBSMs built with the different sets of predictors.
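The pipeline can be pictured with a minimal sketch such as the one below, which stands in for the paper's workflow under synthetic data and assumed array names: PCA compresses the registered body-shape vertices, partial least squares regression maps simple predictors to the principal component scores, and a leave-one-out loop estimates the prediction error. The optimization step that nudges the prediction to exactly match the target predictor values is omitted here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(1)
n_subjects, n_vertices = 164, 5000
shapes = rng.normal(size=(n_subjects, 3 * n_vertices))   # flattened (x, y, z) vertices
predictors = rng.normal(size=(n_subjects, 4))            # e.g. stature, BMI, sitting height, arm length

pca = PCA(n_components=20).fit(shapes)                   # compact shape space
scores = pca.transform(shapes)

errors = []
for train, test in LeaveOneOut().split(predictors):
    pls = PLSRegression(n_components=4).fit(predictors[train], scores[train])
    pred_scores = pls.predict(predictors[test])
    pred_shape = pca.inverse_transform(pred_scores)
    # mean vertex-to-vertex error (in the units of the vertex coordinates)
    diff = (pred_shape - shapes[test]).reshape(-1, 3)
    errors.append(np.linalg.norm(diff, axis=1).mean())

print(f"LOO mean vertex error: {np.mean(errors):.1f}")
```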
Advanced driver assistance systems are supposed to assist the driver and ensure their safety while at the same time providing a fulfilling driving experience that suits their individual driving style. What a driver will do in any given traffic situation depends on the driver’s mental model, which describes how the driver perceives and interprets the observable aspects of the environment, as well as on the driver’s goals and beliefs about the actions applicable to the current situation. Understanding the driver’s mental model has hence received great attention from researchers, where defining the driver’s beliefs and goals is one of the greatest challenges. In this paper we present an approach to establish individual drivers’ temporal-spatial mental models by considering driving to be a continuous Partially Observable Markov Decision Process (POMDP) wherein the driver’s mental model can be represented as a graph structure following the Bayesian Theory of Mind (BToM). The individual’s mental model can then be obtained automatically through deep reinforcement learning. Using the driving simulator CARLA and deep Q-learning, we demonstrate our approach through the scenario of keeping the optimal time gap between the own vehicle and the vehicle in front.
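A generic deep Q-learning update for the time-gap scenario might look like the sketch below; the two-dimensional state, the reward centered on a 2 s gap, and the three discrete actions are illustrative assumptions rather than the paper's CARLA setup.

```python
import torch
import torch.nn as nn

actions = [-1.0, 0.0, 1.0]                 # brake, hold, accelerate (m/s^2), assumed action set
q_net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, len(actions)))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.99

def reward(time_gap):
    return -abs(time_gap - 2.0)            # penalize deviation from a 2 s target gap

def q_update(state, action_idx, next_state):
    """One temporal-difference update of the Q-network."""
    r = reward(next_state[0].item())
    with torch.no_grad():
        target = r + gamma * q_net(next_state).max()
    pred = q_net(state)[action_idx]
    loss = nn.functional.mse_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# state = (time gap in s, relative speed in m/s)
s  = torch.tensor([1.4, -0.5])
s2 = torch.tensor([1.6, -0.3])
print(q_update(s, action_idx=2, next_state=s2))
```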
During concept design of new vehicles, workplaces, and other complex artifacts, it is critical to assess the positioning of instruments and regulators from the perspective of the end user. One common way to do these kinds of assessments during early product development is through the use of Digital Human Modelling (DHM). DHM tools are able to produce detailed simulations, including vision. Many of these tools include evaluations of direct vision, and some tools are also able to assess other perceptual features. However, to our knowledge, all DHM tools available today require manual selection of the manikin viewpoint. This can be both cumbersome and difficult, and requires that the DHM user possesses detailed knowledge about the visual behavior of the workers in the task being modelled. In the present study, we take the first steps towards automatic selection of the viewpoint through a computational model of eye-hand coordination. We report descriptive statistics on visual behavior in a pick-and-place task executed in virtual reality. During reaching actions, results reveal a very high degree of eye-gaze towards the target object. Participants look at the target object at least once in essentially every trial, even during a repetitive action. The object remains in focus during large proportions of the reaching action, even when participants are forced to move in order to reach the object. These results are in line with previous research on eye-hand coordination and suggest that DHM tools should, by default, set the viewpoint to match the manikin’s grasping location.
Natural human locomotion contains variations, which are important for creating realistic animations. In particular, when simulating a group of avatars, the resulting motions will appear robotic and unnatural if all avatars are simulated with the same walk cycle. While there is a lot of research work focusing on high-quality, interactive motion synthesis, this work does not include rich variations in the generated motion. We propose a novel approach to high-quality, interactive and variational motion synthesis. We successfully integrated concepts of variational autoencoders into a fully connected network. Our approach can learn the dataset’s intrinsic variation inside the hidden layers. Different hyperparameters are evaluated, including the number of variational layers and the frequency of random sampling during motion generation. We demonstrate that our approach can generate smooth animations including highly visible temporal and spatial variations and can be utilized for reactive online locomotion synthesis.
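The core idea can be sketched as follows (an illustrative architecture, not the trained model from the paper): a fully connected pose-to-pose network with a variational bottleneck, so that sampling epsilon at generation time injects temporal and spatial variation.

```python
import torch
import torch.nn as nn

class VariationalMotionNet(nn.Module):
    def __init__(self, pose_dim=63, hidden=256, latent=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(pose_dim, hidden), nn.ELU())
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        self.decoder = nn.Sequential(nn.Linear(latent, hidden), nn.ELU(),
                                     nn.Linear(hidden, pose_dim))

    def forward(self, prev_pose):
        h = self.encoder(prev_pose)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        eps = torch.randn_like(mu)                      # random sampling step
        z = mu + torch.exp(0.5 * logvar) * eps          # reparameterization trick
        next_pose = self.decoder(z)
        kl = -0.5 * torch.mean(1 + logvar - mu ** 2 - logvar.exp())
        return next_pose, kl                            # KL term regularizes the latent space

net = VariationalMotionNet()
pose = torch.zeros(1, 63)                               # dummy previous pose
next_pose, kl = net(pose)
print(next_pose.shape, kl.item())
```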
Many digital human model applications are based on optimal control simulations of the musculoskeletal system. These simulations usually involve the derivatives of the underlying kinematic and dynamic model, which are in general not easy to derive analytically. In the direct transcription method DMOCC, we use the discrete Euler-Lagrange equations together with a discrete null space matrix and a nodal reparametrization, which are embedded into a constrained optimization problem. The abstract and formalizable structure of this method offers many possibilities for automation. Therefore, we use the CasADi nonlinear optimization and algorithmic differentiation tool to automatically derive the discrete Euler-Lagrange equations and a valid discrete null space matrix. This allows for an efficient and easy implementation of the DMOCC method for large multibody systems.
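The flavour of this automation can be illustrated with a minimal CasADi sketch, here for a single planar pendulum with a midpoint discrete Lagrangian (the discretization and the example system are assumptions, not the DMOCC implementation): the discrete Euler-Lagrange residual is obtained by algorithmic differentiation rather than derived by hand.

```python
import casadi as ca

m, l, g, h = 1.0, 1.0, 9.81, 0.01          # mass, length, gravity, step size

q_prev = ca.SX.sym('q_prev')               # q_{k-1}
q_curr = ca.SX.sym('q_curr')               # q_k
q_next = ca.SX.sym('q_next')               # q_{k+1}

def Ld(qa, qb):
    """Discrete Lagrangian (midpoint rule) for a simple pendulum."""
    q_mid = 0.5 * (qa + qb)
    v_mid = (qb - qa) / h
    T = 0.5 * m * (l * v_mid) ** 2          # kinetic energy
    V = -m * g * l * ca.cos(q_mid)          # potential energy
    return h * (T - V)

# Discrete Euler-Lagrange equations: D2 Ld(q_{k-1}, q_k) + D1 Ld(q_k, q_{k+1}) = 0
residual = (ca.gradient(Ld(q_prev, q_curr), q_curr)
            + ca.gradient(Ld(q_curr, q_next), q_curr))

DEL = ca.Function('DEL', [q_prev, q_curr, q_next], [residual])
print(DEL(0.1, 0.11, 0.12))                 # evaluate the residual numerically
```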
For the virtual evaluation of universal design products, it is necessary to synthesize natural grasps for various hands, including those with disabilities. As one such disability, we focused on the limitation of the thumb’s range of motion (ROM). For example, carpal tunnel syndrome (CTS) is a typical disease that limits the thumb’s ROM. Though there is no doubt that the range of motion affects the whole grasp, the detailed grasp strategy has not been studied so far, due to the difficulty of collecting data from such patients. Therefore, in this paper, we propose to synthesize grasping postures with thumb ROM-limited digital hands based on the observation of an actual subject whose hand was artificially disabled. The synthesized postures of the healthy hand and the thumb ROM-limited hand were obviously different. We applied a contact-region-based method for grasp synthesis for the ROM-limited hand and succeeded in synthesizing grasping postures that reflect the features of grasps by thumb ROM-limited hands.
This paper presents research performed on behalf of Transport for London in the UK addressing the over-representation of trucks involved in accidents with vulnerable road users, where issues with driver vision are often cited as the main causal factors. A Direct Vision Standard for London, and potentially for Europe, has been developed that utilizes a volumetric assessment of field of view performance. This paper presents research into how to contextualize the somewhat abstract volumetric performance scores into real-world metrics using digital human models. The research modelled 27 trucks currently available from major manufacturers and analyzed their volumetric performance. It also explored a supplementary process using digital human models to define the minimum threshold of field of view performance. The current proposal utilizes thirteen human models, representing 5th %ile Italian females, positioned to the front, left and right of the cab. The minimum standard was developed to ensure that no blind spot exists between the regulations for mirror coverage and the new Direct Vision Standard. The research is ongoing in line with the finalization of the standard at a European level.
Temporomandibular disorder (TMD) is a prevalent dental disease, alongside dental caries and periodontitis. The major symptoms of TMD are masticatory muscle pain, temporomandibular joint (TMJ) pain, and impairment of jaw movement due to the pain and pathologic derangement of the TMJs. However, there are few studies using TMD patient-specific motion data to drive a musculoskeletal model that can elucidate the kinematic and biomechanical characteristics of the patient. The purpose of this study is to develop a workflow for musculoskeletal modeling of the mandible with jaw motion data obtained from a TMD patient. This involves the establishment of patient-specific boundary conditions representing the characteristics of the TMJ. The jaw motion of a TMD patient was recorded and used as an input for driving the model.
In IPS-IMMA, the operation sequence planning tool offers an easy and powerful way to construct, analyze, and simulate sequences of human operations. So far, the simulations created using this tool have been quasi-static solutions to the operation sequence. In this paper we present new functionality for motion planning of digital human operation sequences which also takes the dynamics of the human into consideration. The new functionality is based on discrete mechanics and optimal control and will be seamlessly integrated into the IPS-IMMA software through the operation sequence planning tool. First, the user constructs an operation sequence using the operation sequence planning tool in IPS-IMMA. The operation sequence is then converted into a discrete optimal control problem which is solved using a nonlinear programming solver. Finally, the solution can be played back and analyzed in the graphical interface of IPS-IMMA. In order to obtain physically correct solutions to complex sequences consisting of several consecutive and dependent operations, we view the digital human as a hybrid system, i.e. a system containing both continuous and discrete dynamic behavior. In particular, the optimal control problem is divided into multiple continuous phases, connected by discrete events. The variational integrators used in discrete mechanics are particularly well suited for modelling the dynamics of constrained mechanical systems, which is almost always the case when considering complex human models interacting with the environment. To demonstrate the workflow, we model and solve an industrial case where the dynamics of the system plays an important part in the solution.
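As a toy illustration of the multi-phase idea (not the IPS-IMMA implementation), the sketch below splits a one-dimensional reaching task into two phases joined by a continuity constraint and solves the resulting nonlinear program with CasADi and IPOPT; the dynamics, grasp point, and cost are assumed for the example.

```python
import casadi as ca

N = 20                       # steps per phase
opti = ca.Opti()

# Phase 1: move a 1-D "hand" from 0 to an intermediate grasp point.
x1 = opti.variable(N + 1)    # positions
u1 = opti.variable(N)        # velocities (controls)
# Phase 2: move from the grasp point to the final position.
x2 = opti.variable(N + 1)
u2 = opti.variable(N)

h = 0.05                     # step size
for k in range(N):           # simple explicit-Euler dynamics in each phase
    opti.subject_to(x1[k + 1] == x1[k] + h * u1[k])
    opti.subject_to(x2[k + 1] == x2[k] + h * u2[k])

opti.subject_to(x1[0] == 0.0)        # start
opti.subject_to(x1[N] == 0.4)        # discrete event: reach the grasp point
opti.subject_to(x2[0] == x1[N])      # continuity between the phases
opti.subject_to(x2[N] == 1.0)        # final position

# Minimize control effort over both phases.
opti.minimize(ca.sumsqr(u1) + ca.sumsqr(u2))

opti.solver('ipopt')
sol = opti.solve()
print(sol.value(x2[N]))
```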
Accidents between vulnerable road users and trucks have been linked to the inability of drivers to directly see the areas in close proximity to the front and sides of the vehicle cab. The lack of direct vision is mitigated through the use of mirrors. The coverage requirements of mirrors are standardized in Europe. Direct Vision for trucks is not currently standardized in any way. Research by the authors identified key requirements for a Direct Vision Standard (DVS). Transport for London funded this work. This standard is now being applied in London, and a European version is in development. A key element of the definition of this standard was the application of DHM software to define a standardized eye point. This is used to create simulations of the volume of space to the exterior of the cab that a driver can see. Eye point definitions exist in standards for trucks, but the standards are defined in a manner which allows variability in the eye point location. This variability allowed some truck designs to gain an advantage over their competitors, leading to the requirement for a new definition for a common eye point. The paper describes the process that has been followed to define this eye point.
We propose an evidence-based methodology for the systematic analysis and cognitive characterisation of multimodal interactions in naturalistic roadside situations such as driving, crossing a street, etc. Founded on basic human modalities of embodied interaction, the proposed methodology utilises three key characteristics crucial to roadside situations, namely: explicit and implicit modes of interaction, formal and informal means of signalling, and levels of context-specific (visual) attention. Driven by the fine-grained interpretation and modelling of human behaviour in naturalistic settings, we present an application of the proposed model with examples from a work-in-progress dataset consisting of baseline multimodal interaction scenarios and variations built therefrom, with a particular emphasis on joint attention and the diversity of modalities employed. Our research aims to open up an interdisciplinary frontier for the human-centred design and evaluation of artificial cognitive technologies (e.g., autonomous vehicles, robotics) where embodied (multimodal) human interaction and normative compliance are of central significance.
The precision of human movements within the reach envelope has been poorly described, yet it could play an important role, particularly in the simulation of human movements or in virtual interactions. We therefore describe an experiment in which virtual target points need to be reached. The targets are visualized in the three-dimensional reach envelope using the HTC Vive Pro, and participant movements are recorded using an optical motion-capturing system. The targets consist of 60 spheres which appear at different ranges, height angles and side angles. We measured 43 test subjects under three test conditions (as fast as possible, as precise as possible, fast and precise). Two forms of human movement precision were measured: static holding precision and dynamic reaching precision. As a result, the static and dynamic precision is described in terms of the speed, the distance between the hand and the virtual target, as well as the position within the reach envelope. Fast movements seem to be more precise at the end of the movement phase (26 mm deviation). Precise movements result in better dynamic precision at the end of the adjustment (7 mm deviation) and holding precision (5 mm deviation). Future work involves the evaluation of movement strategies.
Foot positioning has a significant impact on human body stability control when completing a manufacturing task. In classical Digital Human Models (DHM), the use of stepping strategies to generate stable postures relies on simplistic models, which generally locate the DHM center of mass (COM) at half the distance between the feet contacts or limit the zero moment point (ZMP) projection to within the base of support (BOS). Developing more comprehensive stepping models requires rigorous experimental studies to extract human movement coordination strategies during manufacturing tasks, which can be used to validate DHM models. The objective of this study is to develop an experimental test bench representing industrial conditions and to carry out experiments to provide these DHM models with parameters of postural stability. The postural stability parameters assessed in this study were the support length, which is a variation of the step length, and the ZMP position with respect to the BOS. Results obtained from a pilot subject showed that the contralateral and ipsilateral legs move, respectively, to expand the BOS in the direction of ZMP displacement in order to maximize stability.
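A minimal sketch of the underlying stability check, assuming the common cart-table approximation rather than the authors' test-bench processing, computes the ZMP from center-of-mass kinematics and tests whether its projection lies within a rectangular BOS.

```python
import numpy as np

g = 9.81

def zmp_xy(com_pos, com_acc):
    """ZMP under the cart-table model: p = c - z_c * c_acc / (z_acc + g), in x and y."""
    x, y, z = com_pos
    ax, ay, az = com_acc
    denom = az + g
    return np.array([x - z * ax / denom, y - z * ay / denom])

def inside_bos(zmp, bos_min, bos_max):
    """True if the ZMP lies within an axis-aligned rectangular BOS (an assumed shape)."""
    return bool(np.all(zmp >= bos_min) and np.all(zmp <= bos_max))

com_pos = np.array([0.02, 0.01, 0.95])     # m, example COM position
com_acc = np.array([0.8, -0.3, 0.0])       # m/s^2, example COM acceleration
zmp = zmp_xy(com_pos, com_acc)
print(zmp, inside_bos(zmp, np.array([-0.1, -0.15]), np.array([0.25, 0.15])))
```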
Current finite element (FE) approaches to modeling clothing on the human body, in terms of personal protective equipment (PPE), are mainly bound to the discretization of the outer element layer of the human body model (HBM) and the given posture. Costs for PPE prototyping could be lowered drastically if an efficient and posture-independent clothing modeling method were available, so that the effectiveness of PPE in terms of injury risk mitigation could be assessed in a donned configuration. In the present study, an FE modeling method was developed to map 2D planar clothing structures onto arbitrary 3D human body contours. The method was successfully applied to the GHBMC M50-PS with a modular-design-based ballistic vest including all components, joints and fasteners. The 3D shaped clothing models, in combination with arbitrary HBMs, allow the structural interaction of protective clothing with the human body to be analyzed in unforeseen dangerous situations. The presented method facilitates the building of fully featured FE models of PPE in donned configurations.
Automated driving is currently one of the most important research areas in the automotive industry. If automation reaches its system limits, the driver is obliged to take over the driving of the vehicle again. In that scenario, the driver first has to put his hands back on the steering wheel if he was occupied with a non-driving task. This work shows a method to precisely model these hand movements from a specific task back to the steering wheel. These movements are analyzed depending on the individual parameters of each person, such as age, gender, and body height. For this purpose, a test stand was developed and assembled, on which a study with 52 participants was carried out. It can be observed that the hand trajectories lie in a plane in three-dimensional space, with an orthogonal deviation from the plane smaller than 10 mm. This finding allows the trajectory to be modeled as a combination of a polynomial and the orientation of the individual plane. The results show that the trajectory of the hand movement deviated on average only about 2 mm from its main movement plane. The trajectories from this study were fitted using polynomials. Trajectories were parametrized and fitted to a linear mixed effects model using the “lmer” [1] and “afex” [2] R packages.
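The plane-plus-polynomial representation can be sketched as follows (illustrative Python with synthetic data, not the study's R analysis): the dominant movement plane is found via SVD, the trajectory is expressed in in-plane coordinates, and a polynomial is fitted to the planar path.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 100)
# Synthetic hand path in mm: mostly planar, with ~2 mm of out-of-plane noise.
traj = np.column_stack([400 * t, 150 * t ** 2, 2 * rng.normal(size=t.size)])

centroid = traj.mean(axis=0)
_, _, vt = np.linalg.svd(traj - centroid)       # rows of vt: principal directions
u, v, normal = vt                               # two in-plane axes and the plane normal

coords = (traj - centroid) @ np.column_stack([u, v])    # 2-D in-plane coordinates
out_of_plane = (traj - centroid) @ normal
print(f"mean |out-of-plane| deviation: {np.abs(out_of_plane).mean():.2f} mm")

coeffs = np.polyfit(coords[:, 0], coords[:, 1], deg=3)  # planar path as a polynomial
print(np.round(coeffs, 4))
```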
Driver posture monitoring is beneficial for identifying the driver’s physical state as well as for optimizing passive safety systems to mitigate injury outcomes during collisions. In recent years, depth cameras have been increasingly used to monitor the driver’s posture. However, good driver posture data for developing accurate posture recognition methods is missing. In this study, we introduce a method to build an in-vehicle driver posture database for training posture recognition algorithms based on a depth camera. Driver motion data was collected from 23 participants performing both driving and non-driving activities using an optical motion capture system (Vicon). Motions were reconstructed by creating personalized digital human skeletons and applying an inverse kinematics approach. By taking advantage of techniques developed in computer graphics, a recorded driver motion can be efficiently retargeted to a variety of virtual humans to build a large database including synthetic depth images, and ground truth labels of body segments and skeletal joint centers. Examples of motion reconstruction, data augmentation and preliminary posture prediction results are given.
This paper presents a method to calculate spatiotemporal gait parameters using a chest-worn accelerometer. Accuracy was compared with an optical system that consists of a walkway of transmitting and receiving bars (Microgait, Optogait, Bolzano, Italy). To this purpose, seventeen healthy males wore a smart shirt with a chest-worn accelerometer and walked a five-meter walkway, delimited by five meters of OptoGait™ optical bars, three times. Spatiotemporal parameters such as the gait cycle and gait phases were analysed and compared between the two systems. The smart shirt based on a chest-worn accelerometer proved to be a non-intrusive way of calculating the gait cycle, phases and sub-phases. In addition, the inverted pendulum model based on the chest-worn accelerometer proved to be a good model for calculating step length variation and, consequently, speed. Our results are in line with previous literature, presenting an average of 60.24% stance phase, 39.75% swing phase, a foot flat subphase of 17.60%, a terminal stance subphase of 21.42%, a pre-swing subphase of 10.65%, and a step length of 0.74 m for an average speed of 1.37 m/s using the smart shirt.
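A minimal sketch of an inverted pendulum step-length estimate from a chest-worn accelerometer is shown below; the double-integration drift handling, the sampling rate, and the leg length are assumptions, and the authors' exact pipeline may differ.

```python
import numpy as np

def vertical_excursion(acc_z, fs):
    """Double-integrate detrended vertical acceleration to get the trunk excursion h."""
    acc_z = acc_z - np.mean(acc_z)               # crude drift removal
    vel = np.cumsum(acc_z) / fs
    vel -= np.mean(vel)
    disp = np.cumsum(vel) / fs
    return np.max(disp) - np.min(disp)           # peak-to-peak vertical excursion

def step_length(h, leg_length):
    """Inverted pendulum geometry: 2 * sqrt(2*l*h - h^2)."""
    return 2.0 * np.sqrt(2.0 * leg_length * h - h ** 2)

fs = 100.0                                       # sampling rate in Hz (assumed)
t = np.arange(0, 0.55, 1 / fs)                   # one step of ~0.55 s
acc_z = 0.9 * np.sin(2 * np.pi * 1.8 * t)        # synthetic vertical acceleration, m/s^2
h = vertical_excursion(acc_z, fs)
print(step_length(h, leg_length=0.92))           # meters, for an assumed leg length
```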