Andrew T. Miranda - Understanding Human Error in Naval Aviation Mishaps


Understanding Human Error in Naval Aviation Mishaps

Andrew T. Miranda, Naval Safety Center, Norfolk, Virginia

Objective: To better understand the external factors that influence the performance and decisions of aviators involved in Naval aviation mishaps.

Background: Mishaps in complex activities, ranging from aviation to nuclear power operations, are often the result of interactions between multiple components within an organization. The Naval aviation mishap database contains relevant information, both in quantitative statistics and qualitative reports, that permits analysis of such interactions to identify how the working atmosphere influences aviator performance and judgment.

Method: Results from 95 severe Naval aviation mishaps that occurred from 2011 through 2016 were analyzed using Bayes' theorem probability formula. Then a content analysis was performed on a subset of relevant mishap reports.

Results: Out of the 14 latent factors analyzed, the Bayes' application identified 6 that impacted specific aspects of aviator behavior during mishaps. Technological environment, misperceptions, and mental awareness impacted basic aviation skills. The remaining 3 factors were used to inform a content analysis of the contextual information within mishap reports. Teamwork failures were the result of plan continuation aggravated by diffused responsibility. Resource limitations and risk management deficiencies impacted judgments made by squadron commanders.

Conclusion: The application of Bayes' theorem to historical mishap data revealed the role of latent factors within Naval aviation mishaps. Teamwork failures were seen to be considerably damaging to both aviator skill and judgment.

Application: Both the methods and findings have direct application for organizations interested in understanding the relationships between external factors and human error. It presents real-world evidence to promote effective safety decisions.

Keywords: human error, safety, aviation, HFACS

Address correspondence to Andrew T. Miranda, Naval Safety Center, 375 A St., Norfolk, VA 23511-4399, USA; e-mail: andrew.miranda@navy.mil

Author(s) Note: The author(s) of this article are U.S. government employees and created the article within the scope of their employment. As a work of the U.S. federal government, the content of the article is in the public domain.

HUMAN FACTORS, Vol. 60, No. 6, September 2018, pp. 763-777. DOI: 10.1177/0018720818771904. Copyright 2018, Human Factors and Ergonomics Society.

Introduction

Human factors researchers have advocated for decades that "human error" in accidents and mishaps is actually the result of difficult working conditions shaped by external influences (Dekker, 2014; Fitts & Jones, 1947; Norman, 1988; Rasmussen, 1983; Reason, 1990; Woods & Cook, 1999). The partial nuclear meltdown at Three Mile Island in 1979, for instance,

is an apt example that demonstrates that operator error on the day of the accident was the result of interactions between poorly designed displays, inadequate training for the operators, and mechanical failures (Meshkati, 1991). To better monitor these types of interactions and the external influences large organizations can exert on workers, the Department of Defense (DoD) instructed the safety communities within the services to establish procedures to help observe and categorize "human error" data. The DoD Human Factors Analysis and Classification System (HFACS) became the standardized taxonomy in 2005. Developed by Doug Wiegmann and Scott Shappell (2003), DoD HFACS is inspired by Reason's (1990) "Swiss cheese" model of accident causation and attempts to identify how specific error tendencies (classified as active failures, known as unsafe acts) are shaped by higher-level influences (classified as latent failures, known as preconditions, unsafe supervision, and organizational

influences) (see Figure 1). As an aviation example, a hard landing may be the result of an aviator not following procedures (unsafe act). But that occurred because critical information was not communicated (precondition), and it was influenced by inadequate risk assessment (unsafe supervision) and organizational culture (organizational influences). DoD HFACS aids safety professionals, with or without formal human factors training, to investigate beyond the person (e.g., the aviator or maintenance personnel) who happens to be closest in time and space to the scene of the mishap. Results of previous research have suggested that DoD HFACS is an effective tool toward reducing mishap rates (e.g., Belland, Olsen, & Lawry, 2010).

[Figure 1. The Department of Defense Human Factors Analysis and Classification System (DoD HFACS 7.0).]

The primary purpose of HFACS is to be a tool used by safety professionals to help identify unsafe practices wherever they

may occur within an organization. At the active failure level, there are two categories of unsafe acts: errors and violations. Errors are defined as unintentional deviations from correct action, whereas violations are defined as deliberate deviations from rules or instructions. Errors are further delineated into two subcategories: (1) performance-based errors (PBE) and (2) judgment and decision-making errors (JDME). The motivation to distinguish between the two types of errors is inspired by the seminal work from human factors scholars Jens Rasmussen (1983) and James Reason (1990), who both studied the underlying cognitive mechanisms that contribute to how errors can manifest depending on the task and person. Wiegmann and Shappell (2001, 2003) describe PBE (originally defined as skill-based errors) as occurring when there is a breakdown of the basic skills that are performed without significant conscious thought. For instance, an aviator may forget to perform a highly practiced task,

such as lowering the landing gear on approach, because of a warning light distraction. JDME, on the other hand, are considered "honest mistakes." They are outcomes of intentional behaviors and choices that turn out to be inadequate for the situation (Wiegmann & Shappell, 2001, 2003). For example, an aviator may choose to fly into a seemingly mild storm but underestimate the storm's severity, leading to an unsafe situation. Within the HFACS framework as well as the greater human factors community, both error types are viewed as symptoms of deeper trouble within an organization (Dekker, 2014; Wiegmann & Shappell, 2001). That is, the term human error is often considered an unhelpful and reductive label used to identify the solitary person(s) as the weak component within a complex system encompassing numerous people, tools, tasks, policies, and procedures (Dekker, 2014). That is why the latent conditions, the higher-level factors that

ultimately shape the work for the individuals who are typically directly involved with the mishap, must be examined when assessing safe practices. DoD HFACS divides latent conditions into three tiers: preconditions (subdivided into seven categories), unsafe supervision (subdivided into three categories), and organizational influences (subdivided into four categories) (see Figure 1). HFACS has been adopted within a variety of domains, including the mining industry (Patterson & Shappell, 2010) and aviation maintenance (Krulak, 2004). The framework is typically modified to better accommodate the type of work being performed. Theophilus et al. (2017), for instance, developed a version of HFACS for the oil and gas industry that featured additional categories covering industry regulatory standards. The version used by the DoD has evolved to the current DoD HFACS 7.0, which includes an additional layer of specificity, known as nanocodes, within each category. For example, the teamwork category within the precondition tier includes the nanocodes critical information not communicated and task/mission planning/briefing inadequate.
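To make the tier/category/nanocode hierarchy described above concrete, the following short Python sketch shows one way the DoD HFACS structure could be represented for analysis. It is purely illustrative: the tier and category names follow Figure 1 and Table 1 of this article, the two nanocodes under teamwork are the examples quoted in the text, and the dictionary layout and the sample mishap record are our own assumptions rather than an official DoD data format.

    # Illustrative sketch (not an official DoD HFACS data format): the four tiers,
    # their categories (per Table 1), and the example nanocodes quoted in the text.
    DOD_HFACS = {
        "unsafe_acts": {                      # active failures
            "performance_based_error": [],
            "judgment_decision_making_error": [],
            "violation": [],
        },
        "preconditions": {                    # latent failures, tier 1
            "mental_awareness": [],
            "teamwork": [
                "critical information not communicated",
                "task/mission planning/briefing inadequate",
            ],
            "state_of_mind": [],
            "sensory_misperception": [],
            "technological_environment": [],
            "physical_environment": [],
            "physical_problem": [],
        },
        "unsafe_supervision": {               # latent failures, tier 2
            "inadequate_supervision": [],
            "supervisory_violations": [],
            "planned_inappropriate_operations": [],
        },
        "organizational_influences": {        # latent failures, tier 3
            "policy_and_process_issues": [],
            "climate_culture_influences": [],
            "resource_problems": [],
            "personnel_selection_and_staffing": [],
        },
    }

    # A hypothetical mishap record could then be coded as the set of categories
    # cited by the investigation board, for example:
    example_mishap = {
        "unsafe_acts": {"judgment_decision_making_error"},
        "preconditions": {"mental_awareness", "teamwork"},
    }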

A more thorough description of DoD HFACS can be found on the Naval Safety Center Web site (DoD HFACS, 2017).

Numerous studies have analyzed safety data obtained from the application of HFACS. The most informative methods of HFACS analysis are those that go beyond a descriptive understanding of how often particular failures or errors appear and instead attempt to learn the linkages between latent failures and active failures. For example, Li and Harris (2006) analyzed HFACS data from Republic of China Air Force mishaps from 1978 to 2002. Their findings revealed a variety of relationships between active and latent failures, including physical/mental limitations as a precondition leading to higher likelihoods of both judgment and skill-based errors. Hsiao, Drury, Wu, and Paquet (2013a, 2013b) created a modified version of

HFACS tailored to aviation maintenance operations. The researchers incorporated HFACS data obtained from historical safety audit reports (instead of past safety performance), along with known mishap rates, into an artificial neural network that predicted monthly safety performance with moderate statistical validity (Hsiao et al., 2013b). Chen and Huang (2014) also developed a modified HFACS framework for aviation maintenance and incorporated their data into a Bayesian network that revealed how various latent factors influenced maintenance performance (e.g., how the physical posture of the worker likely influenced their ability to inspect aircraft parts). These studies have demonstrated the potential for historical HFACS data to help understand how latent factors can shape performance. Similar work has been conducted on DoD HFACS data obtained from military aviation accidents. Tvaryanas and Thompson (2008), for example, performed an exploratory factor analysis on DoD HFACS data

alongside data from mechanical failures obtained from 95 safety events, including mishaps and near misses, of U.S. Air Force unmanned aircraft systems. The results revealed most of the events in their data set were the result of deficiencies within organizational process and the technological environment. The authors therefore suggested a need to incorporate HFACS into a broader human-systems integration framework and for human error data to be considered early in the organizational process of acquiring new technologies (Tvaryanas & Thompson, 2008). Walker, O'Connor, Phillips, Hahn, and Dalitsch (2011) performed similar analysis on DoD HFACS data obtained from 487 Navy and Marine Corps (henceforth referred to as Naval) aviation mishaps. Using what they called "lifted probabilities," they observed certain facilitatory/inhibitory relationships between specific DoD HFACS nanocodes (e.g., mental fatigue contributed to an aviator ignoring a caution or warning). The findings of the

previous studies have demonstrated value in analyzing the relationships between active and latent failures within the DoD HFACS framework. The HFACS research discussed to this point has analyzed collections of existing HFACS data points. Other research has examined the more practical use of HFACS as a tool used by safety professionals. For example, in a study simulating Naval aviation mishap investigations, O'Connor and Walker (2011) found that different investigation teams may disagree about what DoD HFACS nanocodes should be assigned as mishap causal factors, suggesting DoD HFACS has poor interrater reliability. The implications raise concerns about the effective use of DoD HFACS at the specific level of the nanocodes. Cohen, Wiegmann, and Shappell (2015) attempted to address this concern by reviewing a collection of studies assessing the rater reliability of HFACS. They concluded that though DoD HFACS has questionable

reliability at the specific nanocode level, it does provide adequate agreement and reliability at the broader category level. With this conclusion in mind, there remains a gap in the literature: examining the relationships between active and latent failures within DoD HFACS from severe mishaps at the category level. We wanted to explore this gap while also presenting a new method to analyze DoD HFACS data. The method we chose to implement was Bayes' theorem. This method of conditional probability analysis, when applied to DoD HFACS data, allows us to introduce prior occurrences of both active and latent failures to more accurately observe how the presence of certain latent failures is related to certain unsafe acts. Bayes' theorem allows us to answer questions of conditional probabilities while minimizing the potential for over- or underestimation (Ramoni & Sebastiani, 2007). With this method, we could address the following two research questions:

Research Question 1: What latent failures impact PBE and JDME separately?

Research Question 2: What latent failures impact both PBE and JDME?

Bayes' Theorem Method

We sought to use Bayes' theorem to identify conditional probabilistic relationships between active and latent failures within a DoD HFACS data set. The data set was coded from 95 Class A Naval aviation mishaps from 2011 through 2016. Naval Class A mishaps are defined as mishaps in which any fatality or permanent total disability occurs, the total cost of damage is $2,000,000 or greater, and/or any aircraft is destroyed (DoD, 2011). We chose to focus our analysis on Class A mishaps because, unlike less severe mishaps, Class As garner further investigative scrutiny in which the investigation team includes individuals from various departments (i.e., safety, operations, maintenance, and medical) and likely aid from professional aviation mishap investigators (Department of the Navy, 2014). Furthermore, previous HFACS research has shown that

HFACS error types can change depending on severity level (Wiegmann & Shappell, 1997). Specifically, JDME are more likely to be associated with severe mishaps, whereas PBE are associated with less severe ones. By examining a single severity level, we avoided unwanted influence from that confounding variable. Lastly, we chose to focus our analysis on unsafe acts that were determined to be unintentional (i.e., JDME and PBE) rather than including violations, which are considered deliberate misconduct.

Bayes' Theorem and HFACS

Statistically speaking, the strength of Bayes' theorem is that it allows us to make accurate inferences about the probability of events based on prior knowledge of how often the conditions that surround those events occur (Ramoni & Sebastiani, 2007). This statistical operation is a convenient formula that lends itself toward a better understanding of relationships among various conditional probabilities (Winkler, 2003). Therefore, it allows us to acquire deeper

meaning from the categories within DoD HFACS and move beyond the unhelpful label of human error. When applied to HFACS, the formula yields an accurate conditional probability that considers prior occurrences of both active and latent failures:

P(Active | Latent) = [P(Active) × P(Latent | Active)] / P(Latent)

The modified Bayes' theorem formula provides the answer to the conditional probability question that reads as, "Given the presence of a latent failure, what is the probability of an active failure?" For example, in our data set, 63 out of 95 mishaps cited a JDME, giving P(JDME) = 0.66. Among the 63 JDME mishaps, 30 cited mental awareness as a latent failure, giving P(Mental Awareness | JDME) = 0.48. Mental awareness as a latent failure was cited in 62 out of the 95 total mishaps, giving P(Mental Awareness) = 0.65. These three values plugged into Bayes' theorem yield 0.48. Now we can say that, given a mental awareness failure, the probability of a JDME is 48%.

Table 1: Frequency of Active (Unsafe Acts) and Latent (Precondition, Unsafe Supervision, and Organizational Influences) Failures Across 95 Class A Mishaps

Failure Type                              Frequency   Probability (%)
Unsafe act
  Performance-based error                     74           77.89
  Judgment/decision-making error              63           66.32
  Violations                                  19           20.00
Precondition
  Mental awareness                            62           65.26
  Teamwork                                    59           62.11
  State of mind                               55           57.89
  Sensory misperception                       26           27.37
  Technological environment                   17           17.89
  Physical environment                        15           15.79
  Physical problem                            10           10.53
Unsafe supervision
  Inadequate supervision                      44           46.32
  Supervisory violations                      20           21.05
  Planned inappropriate operations            17           17.89
Organizational influences
  Policy and process issues                   46           48.42
  Climate/culture influences                  11           11.58
  Resource problems                            9            9.47
  Personnel selection and staffing             5            5.26

Note. The sum of total frequencies within and between Department of Defense Human Factors Analysis and Classification System tiers exceeds 95 because some mishaps report multiple failures.
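The arithmetic of the worked example above can be reproduced directly from the mishap counts. The following Python sketch is illustrative only; the counts are those reported in the text, while the function and variable names are ours and not part of the mishap database.

    def p_active_given_latent(n_total, n_active, n_latent, n_both):
        """Bayes' theorem applied to HFACS counts (illustrative sketch).

        n_total  -- total mishaps in the data set (95)
        n_active -- mishaps citing the active failure, e.g. JDME (63)
        n_latent -- mishaps citing the latent failure, e.g. mental awareness (62)
        n_both   -- mishaps citing both failures (30)
        """
        p_active = n_active / n_total              # P(Active) = 63/95, about 0.66
        p_latent = n_latent / n_total              # P(Latent) = 62/95, about 0.65
        p_latent_given_active = n_both / n_active  # P(Latent | Active) = 30/63, about 0.48
        # Bayes' theorem: P(Active | Latent) = P(Active) * P(Latent | Active) / P(Latent)
        return p_active * p_latent_given_active / p_latent

    # Worked example from the text: P(JDME | Mental Awareness)
    print(round(p_active_given_latent(95, 63, 62, 30), 2))   # prints 0.48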

With this product, we can now observe both the magnitude of how much a latent failure impacts the probability of certain error types as well as similarities and differences between the impact of latent failures on active failures.

Bayes' Theorem Results

Before applying Bayes' theorem to the DoD HFACS data set, we first determined the prior probabilities of all three necessary variables. Table 1 presents the frequency of all 17 failure categories across all four DoD HFACS tiers. PBE were the most common type of unsafe act cited within the mishap data set. Mental awareness failures were the most common precondition, being cited in 62 mishaps. Inadequate supervision was the most common unsafe supervision failure, and policy and process issues was most common among organizational influences. Once we obtained the prior probabilities presented in Table 1 and the necessary prior conditional probabilities, we applied Bayes' theorem probability formula to

the DoD HFACS data set. The results are split into three groups: latent failures with greater impact on JDME, latent failures with greater impact on PBE, and latent failures with equal substantial impact on both types of errors (see Table 2).

Table 2: Bayes' Theorem Probabilities Across Both Error Types

                                      JDME Probability (%)   PBE Probability (%)
Greater JDME impact
  Planned inappropriate operations             82                    18
  Climate/cultural influences                  64                    27
Greater PBE impact
  Sensory misperception                        27                    73
  Technological environment                    29                    71
  Mental awareness                             48                    73
Equal impact
  Teamwork                                     66                    68

Note. The latent failures not included in this table were withheld because they had equally inconsequential impact on both error types. JDME = judgment and decision-making errors; PBE = performance-based errors.

[Figure 2. Differences in Bayes' probability between JDME and PBE across five latent failures. Each column in the graph represents the difference between the values presented in the columns of Table 2. The addition of the 95% confidence interval error bars allows us to confirm these latent failures likely impact only one type of error. JDME = judgment and decision-making errors; PBE = performance-based errors.]

Five total latent failures produced different levels of impact on the two types of errors. The two latent failures that provided the greater impact on JDME were planned inappropriate operations and climate/cultural influences. The three latent failures that provided greater impact on PBE were sensory misperception, technological environment, and mental awareness. Teamwork was the only latent failure that provided equal substantial impact on both types of errors (i.e., greater than 50% probability for both error types). To better interpret the level of impact that latent failures have on certain error types, we graphed the difference between the two types of errors across the five latent failures (see Figure 2). This allowed us to add statistical error bars that help answer questions about the plausibility of the analysis (i.e., if the error bars overlapped with zero, it would not be likely that the latent failure impacts only one type of error).

Bayes' Theorem Discussion

There are two valuable findings from the results of the present study. First, the application of Bayes' theorem to a historical DoD HFACS data set demonstrated an efficient solution toward understanding relationships between latent and active failures. To our knowledge, this was the first application of the Bayes' theorem probability formula to any HFACS data set that identifies how specific error types are impacted by latent failures. These results confirm that these relationships must be examined when using HFACS with the intent to improve safety. The second valuable finding is the results of the Bayes' theorem application itself. Five latent failures were observed to have substantial impact on specific error

types, and one latent failure substantially impacted both. Sensory misperception, mental awareness, and technological environment were factors that specifically impacted PBE. All three of these latent failures are in the preconditions tier, which Wiegmann and Shappell (2001, 2003) define as the conditions within and surrounding the individual that lead to unsafe acts. Sensory misperception and mental awareness both concern the perception, attention, and cognitive factors of the aviator (e.g., visual illusions, spatial disorientation, and fixation). This finding is supported by previous research showing misperceptions and miscomprehensions can lead to degradation of situation awareness (Endsley, 1995). While on its own this conclusion does not give us a deeper understanding of "human error," it is likely related to the third factor within the precondition tier also specifically impacting PBE: the technological environment. Within DoD HFACS, the technological environment identifies the

overall design of the workspace. In aviation, for example, the design of the displays and controls in the cockpit and the personal equipment in use are included in this latent factor. Though the technological environment was cited in only 17 out of 95 mishaps, the current findings demonstrate the profound role it plays in disrupting basic aviator skill and performance during severe mishaps. This finding is supported by previous research showing aviator performance is improved when using flight displays that feature basic human factors design principles (e.g., Andre, Wickens, & Moorman, 1991). Furthermore, previous research and discussions of situation awareness, which theoretically encompasses misperceptions and miscomprehensions, have emphasized the necessity of well-designed technology for effectively displaying information to the aviator for maintaining safe performance (Endsley, 2015). This suggests the technological environment latent failure within DoD HFACS may also

be related to sensory misperceptions and mental awareness failures within the preconditions tier, further emphasizing the importance of the technological environment latent factor. These findings advance our perpetual efforts as human factors researchers to demonstrate the importance of well-designed tools and tasks to maintain safe performance. The two latent factors that were observed to have impact on JDME specifically are both categorized outside the preconditions tier within the DoD HFACS model. Planned inappropriate operations and climate/cultural influences are located within the unsafe supervision and organizational influences tiers, respectively. These two latent factors reflect how the working environment is shaped by leadership (at both the supervisory and organizational levels). As a reminder, JDME within HFACS are considered "honest mistakes" and are the results of individuals making incorrect choices (Wiegmann & Shappell, 2001, 2003). The latent failure

planned inappropriate operations is defined as a failure of supervision to adequately plan or assess the hazards associated with an operation. The current findings demonstrate that this latent factor may indicate that aviators are put into unfamiliar situations and therefore situations with increased risk. While it does not degrade basic aviation skills per se, it may create unnecessary demands on the aviator's decision-making abilities. The presence of the latent factors of planned inappropriate operations, climate/cultural influences, and teamwork and the influences they had on "human error" within the current mishaps remain unclear. All three categories still encompass human decision makers within an organization, whether it be the cooperation of aviators working toward a common goal (teamwork) or their leadership, the squadron or unit commanding officers, approving their operations and assessing the risks of their missions (planned

inappropriate operations and climate/cultural influences). Therefore, determining that these latent failures are the causes of "human error" committed by the single aviator is simply displacing the label human error to higher levels within the organization where humans are still working within the constraints and influences of the complex system. By acknowledging that "human error," wherever it occurs within an organization, is still a result of the working conditions, we determined that additional analysis beyond the data provided by DoD HFACS was necessary. This allowed us to more thoroughly address our research question of what latent factors were influential within the current Naval aviation mishaps.

Limitations of Error Classification

The primary goal of the Bayes' analysis was to determine what latent factors impacted specific error types within Naval aviation mishaps. This analysis was able to identify relationships between latent factors and specific error types. But the

results have demonstrated that learning of these relationships does not guarantee we will obtain a deeper and more meaningful understanding of the conditions that instigate "human error." Therefore, we felt it necessary to address the limitations of DoD HFACS and error classification systems in general. The DoD HFACS framework has demonstrated its effectiveness as an error classification system, but the current results reveal a limitation: it does not provide the valuable contextual information imperative for gaining a deeper understanding of "human error." Studying the contextual influences and constraints that provoked the human to err is a hallmark of human factors applied in real-world settings (e.g., Fitts & Jones, 1947). DoD HFACS and error classification systems in general have been criticized for a variety of reasons. First, as classification systems, they are not effective at capturing information relevant to the context and constraints workers faced

when involved in an accident (e.g., Dekker, 2003). Second, they are considered too reliant on hindsight bias, which hinders a deeper understanding of why individuals did what they did and particularly why they considered, at the time, that their actions (or inactions) would not lead to a mishap (Woods & Cook, 1999). Lastly, others have found that error classification systems and accident models in general can unintentionally direct accident investigations in such a way that provides an arbitrary stop-rule for the investigation (Lundberg, Rollenhagen, & Hollnagel, 2009). The safety investigation, being guided by the accident model or error taxonomy, will not be inclined to consider other possible contributors to the accident. Researchers and safety professionals, whether in aviation or other domains, should recognize and consider these criticisms and limitations of error classification systems when analyzing their data. For the present study, we chose to address the

limitations by extracting more qualitative, contextual data from the mishap reports themselves. This, in essence, is what Fitts and Jones (1947) understood when studying "human error": performance problems are better understood by examining the real-world constraints placed on people's behavior. Latent error categories within DoD HFACS have helped narrow down this examination, but we reached a point where the data set could not provide sufficient resolution to answer these questions. We sought to extract the context and meaning behind the teamwork breakdowns, planned inappropriate operations, and climate/cultural influences. The next section presents the methods of the content analysis used to obtain qualitative data about each mishap.

Content Analysis Method

Qualitative research emphasizes gathering data relevant to the meaning of people's lives within their real-world settings, accounting for the real-world contextual conditions, and providing insights that may help explain

social behavior and thinking (Yin, 2016). By analyzing individual mishap reports for thematic patterns, we can illuminate concerns about the constraints or expectations the people involved in the mishaps faced as the events were unfolding. A methodical review of the mishap reports encourages us to focus on what the people involved knew and anticipated, thus minimizing the hindrance of hindsight bias prevalent in error classification systems. The purpose of the content analysis was to search across the mishap reports to reveal thematic explanations: the pattern of information across the reports that provides meaningful understanding of the conditions and constraints affecting the performance of the individuals involved. This would provide insight into how teamwork broke down during the mishaps as well as what influenced the commanders to both plan inappropriate operations and create an unsafe working atmosphere with climate/cultural influences. Each Naval

aviation mishap report is comprised of various sections, including an event narrative, list of lines of evidence, set of rejected causal factors (i.e., factors the mishap investigation board falsified as being causal to the mishap), accepted causal factors, and list of recommendations. The content analysis was a methodical, iterative process comprised of three activities: (1) disassembly, (2) reassembly, and (3) interpretation. Disassembly consisted of focusing on only the areas of interest within each mishap report (i.e., the narrative and relevant causal factor analysis). If additional information was needed, it would be referenced in the lines of evidence, other causal factors, or supplemental evidence stored outside the report within the data repository. Researcher-derived notes were recorded during each read-through. These included paraphrases and early interpretations about recurring factors that may have been part of a larger pattern. The derived notes became the ingredients for

the thematic explanations. During reassembly, the derived notes were organized to develop possible ideas for explanations. The purpose of this activity was to find, assess, and challenge the robustness of the themes being abstracted. For example, one of the initial themes that began to emerge early during the content analysis was the difficulties inherent in complex geopolitical coordination. Several events occurred during joint operations, either within or alongside foreign support. It seemed at first that the influences of this massive challenge may have been a factor in planning inappropriate operations. This theme dissolved, however, as the disassembly-reassembly iterations progressed and was not considered a meaningful influence across the mishaps. Lastly, during interpretation, and as the thematic explanations began to formalize, the content analysis became dedicated to establishing the pattern of how or why things happened across the mishap reports. All three activities were part

of an iterative process to foster a flexible and regular assessment of thematic explanations.

Content Analysis Results

In an effort to reduce speculation about what took place during each event, we established criteria for determining which mishap reports would be acceptable for a content analysis. First, we screened reports by examining the amount of information provided within the narrative and accepted causal factor analysis to determine whether it was adequate and relevant to the latent factors. For instance, to build a subset of reports for teamwork, only reports that provided information specific to communication and cooperation between team members actively involved in the event were included. Because planned inappropriate operations and climate/cultural influences were both observed to impact JDME in the Bayes' analysis, these factors were grouped together for the content analysis. This also allowed for a larger subset of mishap reports that provided enough substantiating

information relevant to studying the contextual conditions commanders faced when assessing and approving the risk of their missions. The criteria selection process resulted in 22 reports unique to teamwork breakdowns and 23 reports unique to the planned inappropriate operations and climate/cultural influences grouping. The content analysis provided three distinct thematic explanations: one for teamwork factors and two for planned inappropriate operations and climate/cultural influences. All privileged safety information (e.g., locations, dates/times, specific squadron or unit information, witness interviews) as well as specific information about aircraft model, mission type, or specific aerial maneuvers and tactics has been withheld from this paper. This was done to both comply with the Naval Aviation Safety Management System guidelines for not releasing privileged information from a mishap report (Department of the Navy, 2014) and take care in protecting the anonymity of the

groups and individuals involved with the mishaps. Each thematic explanation begins with a paraphrased quote from a particular report that provides exemplar context of the conditions and constraints faced by the people involved in the mishap. Within the quotes, certain words and phrases are bolded because they are direct reflections of the underlying patterns observed during the content analysis. Further discussion is then provided. The following three thematic explanations were abstracted from the mishap reports following the content analysis.

Plan Continuation Aggravated by Diffusion of Responsibility

Throughout the approach, [instructor pilot, IP] had recognized multiple mistakes [by the student pilot, SP], but attempted to balance the requirements of both instructing and evaluating, acting as a competent copilot, while still keeping the aircraft safe. The IP allowed the maneuver to proceed and was looking for the SP to make the necessary control inputs along with the [lookout

crewman, LCM] providing advisory calls. The [LCM] felt that with the pilots looking out the right, he should cover the left side as well. Consequently, the [...] maneuver progressed beyond a reasonable margin of safety.

Plan continuation is a concept in complex, dynamic systems, like aviation, where human operators do not notice that a situation is gradually deteriorating from safe to unsafe. When human operators begin to perform challenging and hazardous tasks, they will first notice clear and unambiguous cues that help them recognize the situation as familiar by remembering previous, similar experiences (Lipshitz, Klein, Orasanu, & Salas, 2001). Individuals are not making decisions by assessing the pros and cons of all choices available but rather are sensitive to certain occurrences they have seen and experienced before; thus, they apply their previous experience to guide their actions and decisions in the present. Plan continuation errors occur,

however, when the event slowly progresses toward a more hazardous and riskier situation and subsequent cues are much less clear, more ambiguous, and overall weaker (Orasanu, Martin, & Davison, 2002). These cues do not pull the people into a different course of action, mostly because they are anchored to the original, stronger cues, thus making them less likely to change their plans (e.g., Bourgeon, Valot, Vacher, & Navarro, 2011). The current quote starts with an experienced instructor pilot observing a less experienced student pilot make mistakes. In hindsight, this is an opportunity for the instructor to pause the training and take over. But in the moment, the instructor did not consider it unusual for the student to be making seemingly harmless mistakes characteristic of a pilot-in-training. This problem was exacerbated by the social dynamics of diffusion of responsibility. Often referred to as the bystander effect, diffusion of responsibility is the social tendency whereby

onlooker intervention during an emergency situation is suppressed by the mere presence of other onlookers (Darley & Latane, 1968). As an unsafe situation is unfolding, an onlooker will observe other onlookers not intervening, thus confirming that this situation must not be an emergency. Others have observed this tendency within military settings, where the onlooker is not a bystander per se and is still personally invested in the outcome of the event (Bakx & Nyce, 2012; Snook, 2000). The previous quote demonstrates diffusion of responsibility during that mishap. The mission was comprised of three crew members: an instructor pilot, a student pilot, and a lookout crewman. Already with this many people involved, there is the potential for diffused responsibility. As the aircrew was maneuvering around physical obstructions, the lookout crewman was executing the single task he intended to complete: look out the left side of the aircraft while the pilots look out the right. Diffusion of

responsibility encouraged the two pilots to make the reverse assumption about the lookout crewman: he would look out for hazards in general, both to the right and left, while they flew the aircraft. In the end, no one was at fault for not looking out the right side because no one was considered responsible for looking out the right side. This thematic explanation applied to 18 out of the 22 mishap reports featuring teamwork breakdowns. Each report discussed the beginning of a multi-crew event that at first seemed benign and manageable. As it progressed, however, it became more unstable and unsafe, but not obviously enough to signal to the aircrew that they should stop. With two or more aviators and/or aircrew involved, the diffusion of responsibility worsened matters by unintentionally fostering a context that encouraged people to miss important information and thus not be able to share it with one another. The combination of these factors evoked conditions that allowed

small, subtle changes and threats to go unnoticed, eventually making it more difficult to recover from error. The remaining two thematic explanations relate to the planned inappropriate operations and climate/cultural influences factor grouping.

Risk Mitigation Program Incompatible With Unexpected Hazards and Risks

The [risk assessment] document itself, as well as the [risk management] program supporting it, while utilized in its current form, was inadequate in identifying the risks apparent after the mishap. The command culture in the execution of the [risk management program] failed to identify unusual risks unique to the [current situation].

Naval aviation squadrons follow the operational risk management, or ORM (Department of the Navy, 2018), program when assessing potential hazards (i.e., any condition with the potential to negatively impact the mission) and risks (i.e., the chance that a hazard will actually cause harm). ORM describes hazard assessment as "the foundation of the entire [risk

management] process. If a hazard is not identified, it cannot be controlled" (Department of the Navy, 2018, enclosure 1, p. 7). Thirteen out of 23 mishaps revealed that squadron commanders were given unreasonable expectations to algorithmically identify the exhaustive collection of hazards and risks. These expectations are incompatible with human judgment in general, including the ability to assess or anticipate risk. The present quote demonstrates the expectation that the risk management document and program would identify unusual risks unique to the current event. As with plan continuation, only hindsight would reveal cues that were subtle and went unnoticed by risk assessors. Meanwhile, previous research has demonstrated the high level of uncertainty within risk assessment and identification. Orasanu (2010), for instance, reported on the mismatch between aviators' assessment of risk salience versus risk frequency.

Generally, aviators tend to overemphasize salient risks (risks more familiar and severe) and underemphasize frequent risks (less familiar and seemingly inconsequential). This was observed specifically in one of the mishap reports within the subset. It mentioned a squadron commander approving unsafe operations because he overemphasized one risk (crew workday and rest) and did not accurately anticipate another risk (flying in a visually degraded environment). This result is also supported by previous research emphasizing the inherent subjectivity of assessing risk (Orasanu, Fischer, & Davison, 2002). The ORM program may unintentionally be placing squadron commanders and planners in overly demanding situations for making judgments. In these rare circumstances where unforeseen hazards create an unsafe situation, it is unreasonable to expect commanders to suddenly become risk prognosticators who can foresee all potentially adverse outcomes, particularly in an environment of perpetually

increasing complexity. The results suggest that existing conceptual models of risk and hazard management assessment should be examined for their effectiveness (e.g., Aven, 2016).

Limited Opportunities for Deliberate Practice of Challenging Tasks

It is also worth noting that the proficiency concerned here, with regard to [this particular aviation tactic], cannot be obtained in a flight simulator. Current versions of flight simulation do not have the fidelity to simulate [the specific task demands]. Therefore, such proficiency must be gained with actual, dedicated aircraft training.

Like any organization, Naval aviation has limited resources. These organizational limitations can exacerbate stress at the squadron level, particularly when there are expectations for the aviators to perform a certain number of operations or hours. With limited time, equipment, and people, commanders are then considered accountable because they put the aviators into unsafe situations where the

task demands exceeded their capabilities. The content analysis found that 11 out of 23 mishap reports supported this thematic explanation. When an aviator, occasionally an inexperienced one, was put into a situation where the task demands exceeded performance ability and it resulted in a mishap, the commander was considered to have planned inappropriate operations or to have set up a climate/culture that encourages unsafe behavior. Each of these events, however, reported that the demanding tactics or maneuvers themselves, for a variety of reasons, were rarely practiced. Deliberate practice is considered the intentional and effortful engagement in a task with the intent to improve performance (e.g., Ericsson, Krampe, & Tesch-Römer, 1993). An important aspect of deliberate practice is learning the subtle yet vital contextual constraints that can impact performance. As the quote demonstrates, performance of particular tactics can best be refined in the actual context. Across the 11 cases,

there was one of three explanations for why these particular skills were rarely practiced. First, as the quote suggests, there were resource limitations: certain rarely exercised skills could only be practiced in the real environment. Second, there were no existing policy requirements for the tactic to be requalified, thus allowing the skill to go unpracticed for prolonged periods of time. Lastly, the skill itself was unique and simply seldom exercised. There was not enough information within the mishap reports to delineate the specific nature of the challenging tasks or the cognitive mechanisms actively involved within the task. Regardless, this finding suggests a need to evaluate the specific types of skills required for these unique tactics and their susceptibility to performance degradation. There are a variety of factors that play a role in skill degradation, but degradations of overall flight skills have been documented in civil aviation

pilots (e.g., Childs, Spears, & Prophet, 1983). Clearly, work is needed to better understand the specific demands of these tasks to help inform practical solutions for limiting degradation. This thematic explanation provides a thorough description of a contextual constraint encountered by squadron commanders.

Content Analysis Discussion

The results of the content analysis revealed three thematic explanations for the three remaining latent failures. First, teamwork breakdowns were found to be influenced by plan continuation aggravated by diffusion of responsibility. Individuals within a multi-crew arrangement were observed to not intervene or speak up when the team members all had a shared expectation that a particular task was being performed by someone else, occurring as an event was slowly progressing toward a riskier situation. Second, commanders were placed in difficult conditions of judgment when using risk mitigation programs. They were held to an unreasonably high expectation

of accurately foreseeing all potential hazards when hazards have been seen to go unnoticed or underemphasized due to their subjective nature. Lastly, resource limitations resulted in rare opportunities for aviators to actively improve their skills within challenging tasks. These thematic explanations helped provide a deeper understanding of "human error" within Naval aviation mishaps. The underlying problems within all three thematic explanations could potentially be mitigated by examining principles of resilience engineering (Hollnagel, Woods, & Leveson, 2007). In short, resilience engineering emphasizes that successes and failures are more closely related than we think. When systems fail, it is typically not because a single component, whether human or mechanical, broke; rather, failure emerges from the vast interactions across the web of a complex system of components. Therefore, resilient systems emphasize gathering evidence and data from normal events, not

just mishaps, to assess human performance variation under differing conditions. This type of holistic view of human performance within the complex system will lead to improved conceptual risk models. Hazards originally unknown or considered to be minor threats may turn out to be more threatening than originally considered. Weber and Dekker (2017) recently provided a method for assessing pilot performance during normal events. Observing and understanding pilot performance during normal events helps provide a deeper understanding of performance constraints during mishaps. For example, Weber and Dekker reported pilots not following strict procedures during normal operations; such deviations would normally be considered causal to an accident during a mishap investigation but are actually intentional deviations made during normal events to maintain safety in demanding situations. Understanding how the front-line pilots, aviators, or human operators within any complex

system improvise to get the job done as safely as possible is essential to gain a deeper understanding of the conditions that contribute to mishaps. These principles also apply outside Naval aviation and could be implemented within health care, oil and gas, transportation, maritime operations, and almost any other complex system (Hollnagel et al., 2007).

General Discussion

The goal of this paper was to gain a deeper understanding of the factors that contributed to "human error" within severe Naval aviation mishaps. The first attempt to answer that question was moderately successful. Applying Bayes' theorem to the DoD HFACS data set helped identify that the technological environment was strongly associated with performance-based errors. The remaining results, however, provided little insight for examining beyond "human error," as the label was displaced to elsewhere within the framework where humans were still involved. To better address the research questions, a subsequent content

analysis was conducted on a subset of mishap reports to extract qualitative data revealing the contextual constraints and conditions faced by the people involved. Three thematic explanations were derived from the content analysis, all providing a deeper understanding of "human error" within Naval aviation mishaps. The content analysis performed on the information within the mishap reports was, to our knowledge, the first examination of this kind on these types of rare, extreme cases. The motivation to perform the analysis came when the results of the DoD HFACS analysis revealed a need to keep pursuing more context and information on what influenced the errors to occur. Other domains or organizations that have implemented error classification systems could benefit from this lesson. Error classification systems are effective at just that: classifying error. Populating an error database based on safety investigations does not guarantee that the information within the database can

be leveraged to predict future "human error" occurrences. The current project, however, demonstrates the limits of error classification systems regarding the goal of investigating beyond "human error." This project adds to the growing body of literature emphasizing the need to look beyond "human error" as an acceptable explanation for why mishaps and accidents occur (e.g., Tamborello & Trafton, 2017). The motivation for applying Bayes' theorem to DoD HFACS data and the accompanying content analysis originated from the understanding that "human error" is not independent of the operating context, supervisory practices, and organizational influences surrounding aviators and squadron commanders. By expanding our knowledge of how contextual conditions influence human performance in real-world military aviation mishaps, we can begin to work toward solutions that address the underlying systemic issues. The current project demonstrated complementary analysis methods to

provide a meaningful understanding of "human error" in Naval aviation mishaps.

Practical Implications

Identifying the specific latent factors and contextual conditions that influence the performance of Naval aviators provides valuable information about where authority figures can allocate resources and apply interventions to improve performance. The commanders of Naval aviation squadrons will want to know the specific areas of performance that should take priority. The Naval Aviation Command Safety Assessment Survey, for instance, is periodically administered to all members of a squadron and assesses the attitudes, perceptions, and overall views of safety practices within a squadron (for more details, see O'Connor & O'Dea, 2007). If results of the survey reveal concerns about teamwork performance within the squadron, the current project provides evidence, both from the DoD HFACS analysis and content analysis, specific to Naval aviation

that this issue should take top priority. Furthermore, the current project provides practical implications beyond its specific application to Naval aviation. Error taxonomies are established in a variety of domains where "human error" is susceptible to being considered causal to accidents within complex systems (e.g., Taib, McIntosh, Caponecchia, & Baysari, 2011). The results of the Bayes' theorem analysis revealed an inherent limitation of error taxonomies: they lack the ability to capture the context, meaning, and constraints faced by the people involved. These aspects of human work are essential for accurately assessing the conditions that cause human performance to drift outside the parameters of safe operation.

ACKNOWLEDGMENTS

The views and opinions expressed in this paper are those of the author and do not necessarily represent the views of the U.S. Navy, Department of Defense, or any other government agency. The author would also like to acknowledge the

valuable contributions from Krystyna Eaker, Paul Younes, and Shari Wiley in support of this paper.

Key Points

•• "Human error" is an unhelpful yet common explanation for the cause of accidents and mishaps in complex activities featuring vast combinations of people and technology (e.g., aviation).

•• To better understand the conditions that influence human error within Naval aviation mishaps, we analyzed DoD Human Factors Analysis and Classification System (DoD HFACS) data and found that technological environment impacted performance-based errors among Naval aviators.

•• The DoD HFACS analysis, however, was insufficient at providing meaningful contextual information imperative for investigating beyond "human error."

•• A subsequent content analysis of mishap reports found that teamwork failures were the result of plan continuation aggravated by diffusion of responsibility.

•• Resource limitations and risk management deficiencies were observed to constrain the judgments made by squadron commanders when planning missions.

References

Andre, A., Wickens, C., & Moorman, L. (1991). Display formatting techniques for improving situation awareness in the aircraft cockpit. The International Journal of Aviation Psychology, 1(3), 205-218.

Aven, T. (2016). Risk assessment and risk management: Review of recent advances on their foundation. European Journal of Operational Research, 253(1), 1-13.

Bakx, G. C. H., & Nyce, J. N. (2012). Is redundancy enough?: A preliminary study of Apache crew behaviour. Theoretical Issues in Ergonomics Science, 14(6), 531-545.

Belland, K., Olsen, C., & Lawry, R. (2010). Carrier air wing mishap reduction using a human factors classification system and risk management. Aviation, Space, and Environmental Medicine, 81(11), 1028-1032.

Bourgeon, L., Valot, C., Vacher, A., & Navarro, C. (2011). Study of perseveration behaviors in military aeronautical accidents and incidents: Analysis of plan continuation errors. Proceedings of the Human Factors and Ergonomics Society 55th Annual Meeting (pp. 1730-1734). Santa Monica, CA: Human Factors and Ergonomics Society.

Chen, W., & Huang, S. P. (2014). Human reliability analysis in aviation maintenance by a Bayesian network approach. In D. Frangopol & G. Deodatis (Eds.), Safety, reliability, risk and life-cycle performance of structures and infrastructures (pp. 2091-2096). London: CRC Press.

Childs, J. M., Spears, W. D., & Prophet, W. W. (1983). Private pilot flight skill retention 8, 16, and 24 months following certification (Accession Number ADA133400). Daytona Beach, FL: Embry-Riddle Aeronautical University.

Cohen, T. N., Wiegmann, D. A., & Shappell, S. A. (2015). Evaluating the reliability of the Human Factors Analysis and Classification System. Aerospace Medicine and Human Performance, 86(8), 728-735.

Darley, J. M., & Latane, B. (1968). Bystander intervention in emergencies: Diffusion of responsibility. Journal of Personality and Social Psychology, 8(4), 377-383.

Dekker, S. (2003). Illusions of explanation: A critical essay on error classification. The International Journal of Aviation Psychology, 13(2), 95-106.

Dekker, S. (2014). The field guide to understanding "human error" (3rd ed.). Burlington, VT: Ashgate Publishing Company.

Department of Defense. (2011). Mishap notification, investigation, reporting, and record keeping (DoD Instruction 6055.07). Washington, DC: Author.

Department of the Navy. (2014). Naval aviation safety management system (OPNAV Instruction 3750.6S). Washington, DC: Author.

Department of the Navy. (2018). Operational risk management (OPNAV Instruction 3500.39D). Washington, DC: Author.

DoD Human Factors Analysis and Classification System. (2017). Department of Defense Human Factors Analysis and Classification System: A mishap investigation and data analysis tool. Retrieved from http://www.public.navy.mil/NAVSAFECEN/Documents/aviation/aeromedical/DOD HF Anlys Clas Sys.pdf

Endsley, M. (1995). Toward a theory of situation awareness in dynamic systems. Human Factors, 37, 32-64.

Endsley, M. (2015). Situation awareness misconceptions and misunderstandings. Journal of Cognitive Engineering and Decision Making, 9(1), 4-32.

Ericsson, K. A., Krampe, R. T., & Tesch-Römer, C. (1993). The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100(3), 363-406.

Fitts, P., & Jones, R. (1947). Analysis of factors contributing to 460 "pilot error" experiences in operating aircraft controls (AMC Memorandum Report TSEAA-694-12). Dayton, OH: Wright-Patterson AFB, Air Material Command.

Hollnagel, E., Woods, D. D., & Leveson, N. (2007). Resilience engineering: Concepts and precepts. Burlington, VT: Ashgate Publishing Company.

Hsiao, Y. L., Drury, C., Wu, C., & Paquet, V. (2013a). Predictive models of safety based on audit findings: Part 1: Model development and reliability. Applied Ergonomics, 44(2), 261-273.

Hsiao, Y. L., Drury, C., Wu, C., & Paquet, V. (2013b). Predictive models of safety based on audit findings: Part 2: Measurement of model validity. Applied Ergonomics, 44(4), 659-666.

Krulak, D. C. (2004). Human factors in maintenance: Impact on aircraft mishap frequency and severity. Aviation, Space, and Environmental Medicine, 75(5), 429-432.

Li, W. C., & Harris, D. (2006). Pilot error and its relationship with higher organizational levels: HFACS analysis of 523 accidents. Aviation, Space, and Environmental Medicine, 77(10), 1056-1061.

Lipshitz, R., Klein, G., Orasanu, J., & Salas, E. (2001). Taking stock of naturalistic decision making. Journal of Behavioral Decision Making, 14(5), 331-352.

Lundberg, J., Rollenhagen, C., & Hollnagel, E. (2009). What-You-Look-For-Is-What-You-Find: The consequences of underlying accident models in eight accident investigation manuals. Safety Science, 47(10), 1297-1311.

Meshkati, N. (1991). Human factors in large-scale technological systems' accidents: Three Mile Island, Bhopal, Chernobyl. Industrial Crisis Quarterly, 5(2), 133-154.

Norman, D. A. (1988). The psychology of everyday things. New York, NY: Harper & Row.

O'Connor, P., & O'Dea, A. (2007). The U.S. Navy's aviation safety program: A critical review. International Journal of Applied Aviation Studies, 7(2), 312-328.

O'Connor, P., & Walker, P. (2011). Evaluation of a human factors analysis and classification system as used by simulated aviation mishap boards. Aviation, Space, and Environmental Medicine, 82(1), 44-48.

Orasanu, J. (2010). Flight crew decision-making. In B. Kanki, R. Helmreich, & J. Anca (Eds.), Crew resource management (2nd ed., pp. 147-179). San Diego, CA: Elsevier.

Orasanu, J., Fischer, U., & Davison, J. (2002). Risk perception: A critical element of aviation safety. IFAC Proceedings Volumes, 35(1), 49-58.

Orasanu, J., Martin, L., & Davison, J. (2002). Cognitive and contextual factors in aviation accidents. In E. Salas & G. Klein (Eds.), Naturalistic decision making (pp. 343-358). Mahwah, NJ: Lawrence Erlbaum.

Patterson, J. M., & Shappell, S. A. (2010). Operator error and system deficiencies: Analysis of 508 mining incidents and accidents from Queensland, Australia using HFACS. Accident Analysis & Prevention, 42(4), 1379-1385.

Ramoni, M., & Sebastiani, P. (2007). Bayesian methods. In M. Berthold & D. Hand (Eds.), Intelligent data analysis: An introduction (2nd ed., pp. 131-168). New York, NY: Springer.

Rasmussen, J. (1983). Skills, rules, and knowledge: Signals, signs, and symbols, and other distinctions in human performance models. IEEE Transactions on Systems, Man, and Cybernetics, SMC-13, 257-266.

Reason, J. (1990). Human error. New York, NY: Cambridge University Press.

Snook, S. A. (2000). Friendly fire: The accidental shootdown of U.S. Black Hawks over northern Iraq. Princeton, NJ: Princeton University Press.

Taib, I. A., McIntosh, A. S., Caponecchia, C., & Baysari, M. T. (2011). A review of medical error taxonomies: A human factors perspective. Safety Science, 49(5), 607-615.

Tamborello, F. P., & Trafton, J. G. (2017). Human error as an emergent property of action selection and task place-holding. Human Factors, 59(3), 377-392.

Theophilus, S. C., Esenowo, V. N., Arewa, A. O., Ifelebuegu, A. O., Nnadi, E. O., & Mbanaso, F. U. (2017). Human factors analysis and classification system for the oil and gas industry (HFACS-OGI). Reliability Engineering & System Safety, 167, 168-176.

Tvaryanas, A. P., & Thompson, W. T. (2008). Recurrent error pathways in HFACS data: Analysis of 95 mishaps with remotely piloted aircraft. Aviation, Space, and Environmental Medicine, 79(5), 525-532.

Walker, P., O'Connor, P., Phillips, H., Hahn, R., & Dalitsch, W. (2011). Evaluating the utility of DOD HFACS using lifted probabilities. Proceedings of the Human Factors and Ergonomics Society 55th Annual Meeting (pp. 1793-1797). Los Angeles, CA: Human Factors and Ergonomics Society.

Weber, D. E., & Dekker, S. W. (2017). Assessing the sharp end: Reflections on pilot performance assessment in the light of Safety Differently. Theoretical Issues in Ergonomics Science, 18(1), 59-78.

Wiegmann, D., & Shappell, S. (1997). Human factors analysis of postaccident data: Applying theoretical taxonomies of human error. The International Journal of Aviation Psychology, 7(1), 67-81.

Wiegmann, D., & Shappell, S. (2001). Human error analysis of commercial aviation accidents: Application of the Human Factors Analysis and Classification System (HFACS). Aviation, Space, and Environmental Medicine, 72(11), 1006-1016.

Wiegmann, D., & Shappell, S. (2003). A human error approach to aviation accident analysis: The Human Factors Analysis and Classification System. Burlington, VT: Ashgate Publishing Company.

Winkler, R. (2003). An introduction to Bayesian inference and decision (2nd ed.). Gainesville, FL: Probabilistic Publishing.

Woods, D. D., & Cook, R. I. (1999). Perspectives on human error: Hindsight biases and local rationality. In F. Durso & R. Nickerson (Eds.), Handbook of applied cognition (pp. 141-171). New York, NY: John Wiley & Sons.

Yin, R. K. (2016). Qualitative research from start to finish (2nd ed.). New York, NY: Guilford Press.

Andrew T. Miranda is an aerospace experimental psychologist at the Naval Safety Center. He received a PhD in human factors psychology in 2015 from Wichita State University.

Date received: July 19, 2017
Date accepted: March 12, 2018