While advances in sensor and display technology make it possible to operate military vehicles with indirect vision, a number of unique issues must be addressed to make such systems usable by their human operators. This article presents a human factors view of these issues and ways to approach them.
Indirect vision systems have been envisioned for use in aircraft since the 1950s. They can be classified as Synthetic Vision Systems (SVS) or Enhanced Vision Systems (EVS).
SVS present an image of the scene ahead that is generated from a stored database. The database contains terrain features and may contain cultural features such as buildings and power lines. An SVS image is generated in much the same way as the scene in a training simulator or video game. SVS allows the pilot to see a color, daylight view regardless of the time of day or the weather conditions.
In contrast, EVS imagery comes from sensors aboard the vehicle. The sensors can include daytime cameras, infrared sensors, image intensifiers, and millimeter wave radar. While these sensors can outperform the human eye in many useful ways, they have limitations. SVS and EVS imagery is usually presented on a head-down display on the instrument panel (dashboard) of an aircraft. This allows an unobstructed view of the outside scene through the aircraft’s windshield.
The FAA has certified a number of SVS and EVS for use in commercial and general aviation aircraft. Some of these systems allow an approach to continue to a lower altitude before the runway must be seen directly than would be permissible without the SVS or EVS.
Recognizing that soldiers operating ground vehicles are at risk whenever they are not under armor protection, the U.S. Army directed that the Manned Ground Vehicle (MGV) component of the Future Combat System (FCS) program would use indirect vision as the primary mode of operation, leveraging the work that had been conducted in the aviation domain. To accomplish this, LCD displays were designed into the MGV (similar to replacing the windshield of the High Speed Civil Transport (HSCT) with displays), and cameras were added to provide 360° coverage around the vehicle.
The vehicle manufacturer’s main goal when building these systems is to meet performance goals at the desired cost and to achieve a balanced solution. The human is a critical element of these systems, but is sometimes also the weakest link. An overbuilt engine is of no use if the indirect vision system won’t let the driver operate the vehicle at top speed.
The Role of the Human Factors Engineer
The role of human factors engineers (HFEs) in Soldier Machine Interface design is to understand the soldier’s capabilities (and sometimes weaknesses) and to modify the system design to best support the soldier. The goal is to create a soldier-centric solution in the context of the balanced design, using data from prior human performance research and gathering new data where required. Fortunately, HFEs are often able to identify and apply information from previous studies to address the human-system design problem. In the case of indirect vision there is a large body of work performed by the U.S. government and by academic organizations. HFEs are a valuable asset to the design team because they can assess the relevance and empirical strength of that prior work. They bring these data to the design space and inform product teams of the expected performance benefits and losses associated with each design path.

In some cases the existing body of work does not completely answer the design team’s questions, and additional studies are required. The Human Factors Engineer has the skills to design simulation and field studies that apply to the target population, comply with Federal Regulations concerning the use of human subjects in DoD research, and provide meaningful, reliable data to ensure confidence in a subsequent design decision.
Key Indirect Vision System Design Parameters
The remainder of this article briefly discusses some of the design parameters that we, as human factors professionals, consider important to the design of an indirect vision system. In the authors’ prior work these issues were traded off against one another and against other system design parameters (such as cost, weight, and volume) to develop a COTS indirect vision system that proved effective for day and night off-road driving of a surrogate vehicle.
Glass-to-Glass Lag. When viewing a scene directly, light takes virtually no time to travel between the object and the soldier’s eye. However, in an indirect driving system the imagery seen by the driver is delayed. This delay is measured as the elapsed time from the light initially striking the lens of the sensor (the first piece of “glass”) until the light is emitted by the driver’s display (the second piece of “glass”). The delay at the sensor is usually a function of the frame rate. At 30 frames per second the delay is approximately 33 msec. Similarly, if the display updates at 60 frames per second the delay is approximately 17 msec. Additional delays are added for the transmission of the imagery and for each processing phase or piece of equipment in the video stream. When overlaying symbology on a video scene the additional delay can be significant.
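As a rough illustration of how these delays accumulate, the sketch below sums a hypothetical latency budget. The sensor and display contributions follow directly from the frame and refresh rates given above; the compression, transmission, and symbology values are illustrative assumptions, not measurements from any fielded system.

```python
# Illustrative glass-to-glass latency budget (non-frame-rate values are assumptions).
SENSOR_FPS = 30      # sensor frame rate
DISPLAY_HZ = 60      # display refresh rate

budget_ms = {
    "sensor frame capture": 1000.0 / SENSOR_FPS,   # ~33 ms at 30 fps
    "video compression":    15.0,                   # assumed
    "transmission":         5.0,                    # assumed
    "symbology overlay":    20.0,                   # assumed
    "display refresh":      1000.0 / DISPLAY_HZ,    # ~17 ms at 60 Hz
}

total_ms = sum(budget_ms.values())
for stage, ms in budget_ms.items():
    print(f"{stage:22s} {ms:6.1f} ms")
print(f"{'glass-to-glass total':22s} {total_ms:6.1f} ms")
```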
Glass-to-glass lag affects the ability of the operator to drive because the vehicle moves in the interval between the time the image is obtained by the sensor and the time the image is displayed to the driver. This is usually a small error, but it can matter when precise driving is critical. Arguably more important is the effect that glass-to-glass lag has on the ability of the driver to judge the effect of control inputs. With large lags the driver’s control inputs get out of phase with the vehicle, often leading to imprecise and non-aggressive vehicle control. In extreme cases this can produce abrupt changes in the vehicle’s heading, similar to what aviators refer to as “Pilot Induced Oscillation”.
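To put the position error in perspective, the short calculation below shows how far the vehicle travels while the displayed image is stale; the speeds and lag values are chosen only for illustration.

```python
# Distance the vehicle travels during the glass-to-glass lag.
# Speed and lag values below are illustrative assumptions.

def stale_distance_m(speed_kph: float, lag_ms: float) -> float:
    """Distance traveled while the displayed image is out of date."""
    return (speed_kph / 3.6) * (lag_ms / 1000.0)

for speed in (15, 30, 60):          # km/h
    for lag in (40, 100, 250):      # milliseconds
        print(f"{speed:3d} km/h, {lag:3d} ms lag -> "
              f"{stale_distance_m(speed, lag):.2f} m of scene staleness")
```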
Human performance research suggests that the smallest glass-to-glass lag known to produce decrements in tracking performance is approximately 40 milliseconds (Boff & Lincoln, 1988). This value comes from studies of humans performing compensatory tracking tasks that are very similar to vehicle control tasks.
Control lag. The next generation of vehicles is likely to use drive-by-wire technology. With drive-by-wire there is no physical connection between the steering wheel and the tires or tracks, which introduces a lag between the time the driver makes a steering wheel input and the time the tires or tracks begin to execute that input. Fortunately, this delay is small with modern data buses, such as the CAN and FlexRay buses increasingly used in commercial automotive applications. However, computer processing of the steering input (e.g., for safety reasons) may add to the delay. Unfortunately, there are few data available that let us accurately predict how glass-to-glass and control lags interact with one another to affect vehicle control.
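In the absence of such data, a first-order way to frame the problem is simply to track the total round-trip delay between a steering input and the driver seeing its effect on the display. The sketch below does only that additive bookkeeping with assumed stage values; it does not model how the two lags interact perceptually.

```python
# Naive additive round-trip delay from steering input to visible response.
# All stage values are assumptions; real systems require measurement, and
# a simple sum says nothing about interaction effects on driver control.

control_lag_ms = {
    "steering sensor sampling": 5.0,    # assumed
    "bus transport (e.g. CAN)": 2.0,    # assumed
    "steering controller":      10.0,   # assumed safety/processing logic
    "actuator response":        30.0,   # assumed
}
glass_to_glass_ms = 90.0                # assumed, from the earlier budget

round_trip_ms = sum(control_lag_ms.values()) + glass_to_glass_ms
print(f"driver perceives steering response after ~{round_trip_ms:.0f} ms")
```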
Image minification and magnification. The images of objects presented on a display can be smaller, larger, or the same angular size as those objects viewed directly through the windshield (See Figure 1). Many designers have chosen to minify sensor imagery in order to fit a wider field of view onto the displays, but this can result in distance and speed judgment errors. Humans use the angular size of familiar objects as one source of information about the distance to an object. If a familiar object, such as an automobile ahead on the road, subtends a larger angle on the retina than it would if viewed directly (a magnified image), it will tend to be perceived as closer to the viewer than the same object subtending a smaller angle (a minified image). So, for accurate distance perception when operating a vehicle one would expect that the image should be shown at 1:1 magnification. However, there is some evidence that magnifying the image on a screen by about 30% results in the most accurate distance perception (Roscoe, Hasler, & Dougherty, 1966).
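The familiar-size cue alone can be approximated with simple geometry: scaling the displayed angular size by a magnification factor m makes a familiar object appear at roughly 1/m of its true distance. The sketch below works that through for an assumed 1.8 m wide vehicle at an assumed 50 m range; the numbers are illustrative only, and the geometric cue is only one of several cues drivers actually use.

```python
import math

# Apparent distance of a familiar-size object under image magnification.
# Object width and true distance are illustrative assumptions.

def apparent_distance_m(true_dist_m: float, obj_width_m: float, mag: float) -> float:
    """Distance at which the object would subtend the displayed angular size."""
    true_angle = 2 * math.atan(obj_width_m / (2 * true_dist_m))
    displayed_angle = mag * true_angle
    return obj_width_m / (2 * math.tan(displayed_angle / 2))

car_width = 1.8   # m, assumed familiar object
true_dist = 50.0  # m, assumed

for mag in (0.7, 1.0, 1.3):   # minified, unity, magnified
    print(f"magnification {mag:.1f}: {true_dist:.0f} m object "
          f"appears ~{apparent_distance_m(true_dist, car_width, mag):.1f} m away")
```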
Image magnification presents the sensor imagery so that it subtends a larger FOV on the screens than it does at the sensor. Image minification takes a wider FOV from the sensor and presents it within a narrower FOV on the display. With both approaches the spatial correspondence to objects in the real world (i.e., the heading of an object with respect to the vehicle) is lost, and speed and distance are not perceived correctly. “Unity vision” means that the sensor FOV equals the FOV the displays subtend at the driver’s eye. With unity vision the spatial relationships between the driver and objects in the environment are preserved, and objects appear in the same direction on the displays as they do through direct vision or periscopes.
An often overlooked drawback of minification or magnification of an indirect vision image is the loss of directional accuracy (See Figure 2). Specifically, when the image on a screen is minified or magnified, the direction from the observer to an object is distorted in a continuous but non-linear manner; the farther from straight ahead the object is, the larger the directional error. This directional error would likely go undetected until the driver attempts to locate the object through a direct vision device, such as vision blocks, which neither magnify nor minify the image. In that situation, the heading of the object on the display would diverge from that seen in the direct vision device. This may increase the time a soldier requires to locate and visually identify the object in the vision block, and negatively affect his situation awareness.
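One simple geometric model of this effect, assuming a flat forward-facing display and uniform linear scaling of the image, is that an object at true azimuth θ appears at atan(m·tan θ), where m is the magnification factor. The sketch below uses that assumed model with an assumed minification of 0.7 to show the error growing with off-axis angle.

```python
import math

# Directional error under uniform image scaling on a flat, forward-facing display.
# Assumed model: an object at true azimuth theta appears at atan(m * tan(theta)),
# where m is the magnification factor (m < 1 is minification).

def perceived_azimuth_deg(true_az_deg: float, mag: float) -> float:
    theta = math.radians(true_az_deg)
    return math.degrees(math.atan(mag * math.tan(theta)))

MAG = 0.7   # assumed minification used to widen the displayed FOV

for true_az in (5, 15, 30, 45):
    seen = perceived_azimuth_deg(true_az, MAG)
    print(f"true azimuth {true_az:2d} deg -> appears at {seen:5.1f} deg "
          f"(error {seen - true_az:+5.1f} deg)")
```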
Vertical and Horizontal Field of View. The vertical and horizontal field of view (FOV) of the scene may be altered independently of the magnification by making the displays wider or taller, or by adding more displays. However, in most vehicles space and weight are at a premium, so changing the eye-to-screen distance (moving the screen(s) closer to the driver to increase the FOV) is often preferred, provided that the screens don’t interfere with controls, impair ingress or egress, or present other hazards to the driver or other crewmembers. However, changing the FOV subtended by the screens requires a matching change in the FOV of the sensors providing the imagery in order to keep the same image magnification.
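The geometry behind that trade is straightforward: the FOV a screen subtends at the driver’s eye depends on screen width and eye-to-screen distance, and for unity magnification the sensor FOV must be set to the same value. The screen width and eye distances below are assumptions chosen only to illustrate the relationship.

```python
import math

# FOV subtended by a screen at the driver's eye; for unity magnification the
# sensor FOV must match this value. Screen width and eye distances are assumed.

def display_fov_deg(screen_width_m: float, eye_dist_m: float) -> float:
    return math.degrees(2 * math.atan(screen_width_m / (2 * eye_dist_m)))

SCREEN_WIDTH = 0.40   # m, assumed

for eye_dist in (0.75, 0.60, 0.45):   # moving the screen progressively closer
    fov = display_fov_deg(SCREEN_WIDTH, eye_dist)
    print(f"eye-to-screen {eye_dist:.2f} m -> display (and matching sensor) "
          f"FOV ~{fov:.1f} deg")
```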
The importance of FOV varies depending on the driving conditions. On a smooth road with moderate slopes a limited FOV is acceptable. However, when operating off-road the vertical FOV needs to be large enough to allow drivers to make correct height and depth estimations of obstacles ranging from ditches to hills. In extreme off-road conditions a very large vertical FOV is needed, or a means of aiming the sensor so that it looks up or down is required. Similarly, when driving in urban situations the ability to look 90° left and right is required to safely negotiate right-angle turns at intersections, and some ability to look upward is essential to providing area security against rooftop threats. With the display FOV fixed by the physical screen layout, and the magnification fixed to provide accurate and reliable depth perception, a user interface allowing the soldier to aim the sensors (or select imagery from other sensors aimed in the desired direction) is needed.
Conclusions
Indirect vision is one example of an area in which Human Factors Engineers, applying their human research skills, provide value to a military vehicle design team. Other areas include crew station physical layout and Graphical User Interface (GUI) design.
As vehicle development efforts continue to incorporate new technologies such as indirect vision, intelligent sensor algorithms, vehicle protection systems, and autonomous mobility solutions, Human Factors Engineers will be valuable assets in ensuring that human-machine systems work as desired and accomplish mission objectives.
References
Boff, K.R., & Lincoln, J.E. (1988). Engineering Data Compendium: Human Perception and Performance. Wright-Patterson AFB, OH: AAMRL.
Roscoe, S.N., Hasler, S.G., & Dougherty, D.J. (1966). Flight by Periscope: Making Takeoffs and Landings; The Influence of Image Magnification, Practice, and Various Conditions of Flight. Human Factors, pp. 13-40.