Strategic Warning Intelligence
Metadata
- Author: John A. Gentry and Joseph S. Gordon
- Full Title: Strategic Warning Intelligence
- Category: #books
Highlights
- Jack Davis, a longtime CIA analyst, defined strategic warning: “Warning analysis seeks to prevent or limit damage to US national security interests via communication of timely, convincing, and decision-enhancing intelligence assessments that assist policy officials to effect defensive and preemptive measures against future threats and to take action to defend against imminent threats.”3 (Location 493)
- Tags: orange
- communication to senior national decision-makers of the potential for, or actually impending, events of major significance to national interests and recommendations that leaders consider making policy decisions and/or taking actions to address the situations. (Location 506)
- Tags: orange
- This important point is worth repeating: the warning recommendation is that senior leaders make decisions of their own choosing. (Location 510)
- we do not consider the IC’s failure to prevent the attacks of September 11, 2001, which killed many people but did not pose an existential (and thereby strategic) threat to the United States, to be a strategic warning failure.8 The IC warned repeatedly and persuasively that an al-Qaeda attack on the United States was imminent, but it did not identify specific attack-related activities and plans and therefore was unable to thwart them.9 (Location 513)
- Strategic warning is one of the four core functions of intelligence analysis, along with basic research (or strategic) intelligence, current intelligence, and estimative intelligence.11 (Location 525)
- National estimates, which typically focus on comprehensive evaluation of a specific international situation, often to support an impending policy decision, also serve a strategic warning function if they alert policymakers to impending, significant threats or opportunities as part of their assessments.14 (Location 531)
- Current intelligence, which examines the recent past and immediate future—typically measured in small numbers of days—sometimes has a tactical warning element, perhaps including notification that a threat strategic warning analysts long have forecast has finally arrived. (Location 533)
- strategic warning intelligence always is about relatively distant events—normally six months to two years in the future—whose outlines and courses are uncertain but whose implications might be significant. (Location 575)
- Successful strategic warning analysts must identify trends of potential importance and bring them to the attention of decision-makers before the actual course of events is clear—sometimes even to the people who eventually will perform those acts. (Location 579)
- other: adequate collection, sound analysis, and persuasive communication to decision-makers about the significance of the warning message. (Location 604)
- We define warning success as the integrated process of (1) managing available collection assets well enough to obtain information adequate to make sound warning-related judgments, (2) assessing the information accurately in ways that may generate a specific prediction but more commonly enable analysis or forecasting of aspects of an emerging situation of concern or opportunity in ways that reduce decision-makers’ uncertainty about the situation, and (3) persuasively communicating relevant information and judgments in a timely manner that enables decision-makers to understand the factual and judgmental components of the warning, to make informed decisions about whether to act or not, and to act effectively if they choose to act. (Location 650)
- Former senior CIA manager John McLaughlin recalls that a colleague, confronted by charges of intelligence failure, reminded a former secretary of state that she had warned the secretary of an impending war, to which he responded, “You told me, but you didn’t persuade me.”42 (Location 660)
- Bird also noted the importance of timeliness: “Warning that comes too late is not warning, it is entertainment.”46 (Location 675)
- Descriptions of apparent warning failures frequently fall into two general categories: (1) missing an actual threat, which generates the most attention and anxiety, and (2) warning of dire events that do not in fact occur. The first type of failure is usually obvious and may be associated with violence and death—as at Pearl Harbor. The second is a less noticed but more common failing that is costly in many ways, including unnecessary policy decision-making time and effort, unnecessary remedial action that may be financially or diplomatically costly, and damage to the credibility of the warning function. (Location 694)
- Excessive or unwarranted threat warning often is called the “cry wolf syndrome” or, less commonly, the “Chicken Little syndrome.”51 (Location 699)
- Tags: orange
- These types of errors have formal names in academic, especially statistical, usage. Failures to detect actual phenomena are “Type II” errors, or “false negatives.”52 “Type I” errors, or “false positives,” identify phenomena that do not in fact exist. Accurate identification of Type I errors is often much harder than seeing Type II errors of the Pearl Harbor sort. The infamous 2002 NIE on Iraqi weapons of mass destruction (WMD), which mistakenly warned that Iraq’s WMD programs were robust and threatening, was a false positive.53 (Location 701)
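To make this error taxonomy concrete, here is a minimal Python sketch (the function and labels are our own illustration, not from the book) that maps a (warning issued, event occurred) pair onto the four confusion-matrix cells the highlight describes:

```python
def classify_warning(warned: bool, occurred: bool) -> str:
    """Map a (warning issued, event occurred) pair to the highlight's taxonomy."""
    if warned and occurred:
        return "true positive: successful threat warning"
    if warned and not occurred:
        return "Type I error / false positive (e.g., the 2002 Iraq WMD NIE)"
    if not warned and occurred:
        return "Type II error / false negative (e.g., Pearl Harbor)"
    return "true negative: no warning, no event"

print(classify_warning(warned=True, occurred=False))
# -> Type I error / false positive (e.g., the 2002 Iraq WMD NIE)
```

Note the caveat in the next highlight: under the "paradox of warning," a warning followed by a non-event may be a success that prevented the event, not a Type I error.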
- Warning succeeds if it stimulates decision-makers to take remedial action that in turn indicates to some foreign entity that the action it contemplated is no longer viable or desirable and leads to abandonment of the warned-about activity. This kind of success, sometimes known as the “paradox of warning” or the “warning conundrum,” or what Michael Handel calls “self-negating prophecy,” is hard to identify but is the pinnacle of warning success even though it may generate recriminations about another apparent example of the cry wolf syndrome.56 (Location 714)
- Hitler merely verbally abused or transferred them, but he made clear what he wanted and generally got it. He engaged in “mirror imaging”—imagining what he would do if he were in the Allies’ position—and based his analytic judgments on those biases.59 He paid close attention to the assignments of Allied generals he respected, believing the Allies surely must think similarly. (Location 1189)
- AMAN monitored Egyptian and Syrian activities, including the Egyptian exercises, and issued at least eleven warnings of strategic threats in 1973—that is, of Arab military activities not designed for training or defensive purposes.92 When Israeli intelligence warned of Egyptian and Syrian military activities in April and May 1973 and the IDF mobilized for the third time in 1973 but no attack occurred, the cry wolf syndrome emerged to discourage later warnings and to reduce political leaders’ receptivity to such warning.93 On October 5, 1973, the day before the attack, Major General Eli Zeira, the IDF’s Director of Military Intelligence (DMI), told Prime Minister Golda Meir that observations of mounting Arab military activity in late September and early October probably reflected defensive preparations driven by fear of the IDF, not an offensive threat, and that a serious attack was unlikely because of the Arabs’ inferior air forces.94 (Location 1348)
- The Conception thus became a heuristic, a theory or model or core belief that its adherents in AMAN used to process new information. The two core beliefs that guided Conception’s influence on AMAN’s thinking became (1) Egypt would not go to war with Israel until it had gained an ability to conduct deep strikes into Israel to nullify Israeli air superiority and (2) Syria would not launch a large-scale attack without Egyptian participation.100 Therefore, when AMAN picked up signs of Egyptian and Syrian preparations in the weeks before October 6, under Zeira’s guidance IDF analysts dismissed their significance by viewing them as either normal activity or as parts of an ostensibly routine exercise the Egyptians called “Tahrir 41.” Because, in Israel’s political/military/intelligence organizational scheme, Zeira as the DMI was chief of Israeli national intelligence estimates, including warnings of war, his views prevailed. (Location 1378)
- five general types of governmental warning structures: (1) leaders as principal warning analysts; (2) organizations in which every analyst has warning analysis as an additional duty, which we call the Every-Analyst-a-Warning-Analyst (EAAWA) model; (3) dedicated, specialized warning organizations; (4) hybrid organizations that combine elements of types 2 and 3, with the warning function coordinated by a dedicated, senior-level warning specialist; and (5) a whole-of-government effort conducted by government agencies generally, meaning that warning is not solely an intelligence function. (Location 1714)
- In the EAAWA model, all analysts who see threatening activities or opportunities during the course of their research, estimative, or current intelligence responsibilities report them through normal channels. (Location 1751)
- The Watch Committee and the NIC developed a routine and issued “warning” reports periodically, normally weekly, just as current intelligence was published.24 They further developed the I&W method.25 As the I&W method and warning function became institutionalized, each major military command built a variant of the NIC in its intelligence directorate. These centers mainly supported their own commanders but also interacted with the broader Defense warning establishment. (Location 2106)
- Also in 1979, in the wake of the HPSCI report on warning and another warning failure—the surprise overthrow of the shah of Iran66—DCI Turner issued DCI Directive 1/5, which created a new National Intelligence Warning system that built on earlier methods and created the position of NIO/W.67 The directive gave the NIO/W specific responsibilities, including coordination of the warning activities of the regional and other functional NIOs, and provided some definitions that remain valid:
  a. Warning as used herein encompasses those measures taken, and the intelligence information produced, by the Intelligence Community to avoid surprise to the President, the NSC [National Security Council], and the Armed Forces of the United States by foreign events of major importance to the security of the United States. It includes strategic, but not tactical warning.
  b. Strategic Warning is intelligence information or intelligence regarding the threat of the initiation of hostilities against the US or in which US forces may become involved; it may be received at any time prior to the initiation of hostilities. It does not include tactical warning.
  c. Tactical warning is notification that the enemy has initiated hostilities. Such warning may be received at any time from the launching of the attack until it reaches its target.68 (Location 2280)
- Robert Vickers was NIO/W from 1996 to 2004, the longest tenure of any NIO/W. He and his small staff of analysts, mainly from the CIA, produced a series of warning products, usually in association with other NIOs—an example of collegiality that seems to have paid dividends in producing respect for the warning function and generally good warning performance. Vickers published a monthly warning document, assembled in coordination with other NIOs, which looked in depth at indicators associated with key warning issues.118 He also published a weekly one-page “Watch List” of the major issues of concern. He compiled these reports after consulting via teleconference with representatives of CIA, DIA, NSA, State/INR, and the National Photographic Interpretation Center.119 This group was called the National Warning Committee. Vickers normally issued warning messages after a majority of the agencies voted to do so.120 Beginning in 1999, Vickers published his Atrocities Watchlist quarterly.121 (Location 2454)
- Vickers says he believes two warning failures occurred on his watch: the 1998 India nuclear test and the 9/11 attacks. The CTC-McCarthy agreement excluding the NIC from responsibility for terrorism warning gave him considerable cover for the tactical warning failure of 9/11, but DCI George Tenet told Vickers to expect to take blame for the nuclear test, and Gannon confirms that line analytic units that had been convinced the Indians would not test tried to blame Vickers.135 Vickers survived both failures, however, and his long tenure indicates that the warning function can succeed in challenging times if it is performed well. (Location 2497)
- damaged the strategic warning function during his tenure. For example, after 2003, the United States was involved in two wars, and the attention of Defense intelligence shifted appreciably to supporting the war efforts. DIA soon abandoned its practice of teaching its new analysts how to conduct military analysis, incongruously arguing that war-support requirements were too pressing to devote time to preparing analysts enough to make the support competent—a poorly considered decision the DIA eventually reversed.141 (Location 2520)
- Third, Clapper’s decision to go to a relatively strong form of EAAWA—and explicitly to reject the hybrid approach—degrades the function bureaucratically and damages its performance. We see this move as derived, in part, from the inability of later NIO/Ws to perform the function in ways the IC as a whole found credible. (Location 2659)
- Nicoll observed six types of analytic and analysis-related errors that he believed recurrently plagued British efforts to avoid surprise. These differ slightly in some cases from similar American concepts:
  1. Mirror imaging—the assumption that factors constraining the British government would equally constrain leaders of one-party states. In particular, Nicoll argued British intelligence inappropriately assessed that foreign actors would be constrained by the same forces of international opinion that constrain British use of force.
  2. Transferred judgment—the assumption that foreign actors would make the same judgments about military balances, and thus about prospects for success, as did British analysts. This is not the same thing as mirror imaging but is similar. It reflects an inability to fully “empathize” with foreign actors’ situations, even when one consciously recognizes that they see things differently.
  3. Perseveration—the belief that judgments made early in the development of a crisis would remain valid. Nicoll argued that it is important to keep an open mind and alter judgments as appropriate in the face of new information. (The term is roughly equivalent to what the intelligence literature usually calls confirmation bias.)
  4. War as a deliberate act—failure to recognize that because wars are deliberate acts that often take a long time to prepare, it is possible to identify emerging threat situations as they are prepared, not just when forces are deployed, thereby increasing warning times. (Cynthia Grabo made this point strongly about US warning failures.226)
  5. Coverage—the Assessments Staff found it hard to warn about events in low-priority areas, which reflected weak collection and sometimes meant less collection and analyst attention in the future.
  6. Deception—Nicoll recognized that aggressors virtually always devote considerable effort to deception, meaning analysts must constantly keep in mind that foreign actors are trying to deceive them.227 (Location 2865)
- state. Analysts then task collection assets to gather information that would enable timely recognition of movement toward the actualization of a scenario. When events associated with an indicator change, analysts have an indication of possible movement toward (or away from) the end-state. (Location 3937)
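A rough Python sketch of this indicator-monitoring logic (the indicator list, statuses, and scoring rule are illustrative assumptions; the book prescribes no specific implementation):

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    NOT_OBSERVED = "not observed"
    AMBIGUOUS = "ambiguous"
    OBSERVED = "observed"

@dataclass
class Indicator:
    """An observable event tied to a hypothesized scenario end-state."""
    description: str
    status: Status = Status.NOT_OBSERVED

def movement_score(indicators: list[Indicator]) -> float:
    """Fraction of indicators observed: a cue of movement toward (rising)
    or away from (falling) the end-state, not proof of intent."""
    if not indicators:
        return 0.0
    return sum(i.status is Status.OBSERVED for i in indicators) / len(indicators)

# Hypothetical indicators for an attack-preparation scenario
scenario = [
    Indicator("Reserve mobilization announced"),
    Indicator("Bridging equipment moved forward"),
    Indicator("Military leave canceled"),
    Indicator("Field hospitals established near the border"),
]
scenario[0].status = Status.OBSERVED
scenario[2].status = Status.OBSERVED
print(f"Movement toward end-state: {movement_score(scenario):.0%}")  # 50%
```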
- Warning problems extend over variable but sometimes considerable periods of time, and new ones develop periodically, sometimes rapidly. The DoD now calls these warning problems “enduring” and “emerging” problems, respectively. The names are new; the concepts behind them are not. The current terminology is preferable, in our judgment, to a perspective John McCreary, a onetime head of the NWS, proposed in 1983—that warning issues should be characterized as “gradual” or “sudden.”24 (Location 4032)
- The I&W method simplifies many analytic challenges because, as Gregory Treverton notes, if well done it helps convert mysteries into potentially solvable puzzles.39 It does so by establishing expected relationships between variables using many techniques, including the structured analytic techniques of “backcasting” and “timelines.” Backcasting posits events that may occur backward in time from the end-state, while timelines establish event order and duration. (Location 4091)
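The backcasting/timeline pairing can be sketched the same way: fix a hypothetical end-state date, then step backward through assumed preparation events to get the order and dates at which each indicator should become observable. The events and lead times below are invented for illustration, not drawn from the book:

```python
from datetime import date, timedelta

# Assumed preparation steps and lead times (days before the end-state);
# purely illustrative values.
steps = [
    ("Political decision to act", 180),
    ("Logistics stockpiling begins", 120),
    ("Large-scale exercise used as cover", 30),
    ("Forces move to staging areas", 7),
]

def backcast(end_state: date, steps: list[tuple[str, int]]):
    """Yield (date, event) pairs working backward from the end-state."""
    for name, lead_days in steps:
        yield end_state - timedelta(days=lead_days), name

# Sorting the backcast output produces the forward timeline of indicators.
for when, event in sorted(backcast(date(2026, 6, 1), steps)):
    print(when.isoformat(), event)
```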
- Several British intelligence and non-intelligence agencies have adopted variants of horizon scanning, partly in response to Lord Butler’s recommendation in his autopsy of 2003 Iraq War intelligence-related problems. (Location 4411)
- resource, social, political, technological, and military variables, they see an expanding set of possibilities, which range from the “probable,” or most likely, events in the center of the cone, to less likely but “possible” events and then “plausible” but unlikely events at the outside of the cone.13 The goal is to identify possibilities in the “probable” core of the cone. There is not, unfortunately, a set of suggestions for ways to keep analyses in general or warning messages focused on the center of the cone. (Location 4424)
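One way to operationalize the cone's bands is a simple likelihood cutoff; the thresholds below are our own hypothetical choices, since the source offers no rule for keeping analysis focused on the "probable" core:

```python
def cone_band(likelihood: float) -> str:
    """Place a scenario in a cone-of-plausibility band.
    Cutoffs are illustrative assumptions, not from the source."""
    if likelihood >= 0.50:
        return "probable (center of the cone)"
    if likelihood >= 0.20:
        return "possible"
    if likelihood >= 0.05:
        return "plausible (outer edge of the cone)"
    return "outside the cone"

print(cone_band(0.60))  # probable (center of the cone)
print(cone_band(0.08))  # plausible (outer edge of the cone)
```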
- Robert Jervis suggests that one reason the intelligence communities of the world so badly erred in their assessments of Iraq’s WMD programs in 2002 was that “Saddam did not have a plan, his control was less than complete, and the regime was less than fully competent.”59 (Location 4659)
- The challenge of understanding the perceptions of others is substantial. Robert Jervis suggests that the “fundamental cause” of intelligence analytic errors is that people around the world see things differently and that grasping the worldviews of others is difficult.63 He calls this challenge the “Rashomon effect” after a Japanese fable in which participants in an event saw the course of events very differently.64 (Location 4674)
- Tags: orange
- A much-discussed pathology, variants of mirror imaging—expecting others to act as one would oneself—frequently occur and are exploitable. (Location 5132)
- emphasize. People with mediocre warning skills—a core feature of the EAAWA model—may, when working against competent deceivers, therefore be worse than useless. The best defense against competent deceivers is first-rate individual analysts and teams that identify inconsistencies in collected information that others miss. (Location 5281)
- Senior national leaders—elected as well as career civilian and military officials—are typically smart and experienced people who have overcome significant challenges to reach positions of significant authority. They have personal, political, and ideological agendas, and they virtually always have constituencies they want to reflect and please. They frequently work hard to convince reluctant audiences to support their programs, sometimes using intelligence selectively for this purpose.2 Senior political leaders of states and the military leaders of national armed forces are busy people. They tend to have little patience for wide-ranging philosophical discussions and give their intelligence people modest amounts of their time. They are confident people who need to, and like to, act decisively to further their agendas.3 Senior political leaders therefore like intelligence that supports their programs. (Location 5808)
- Leaders are, therefore, differently inclined than are most intelligence analysts, who self-select into jobs that require at least some reflection about complex issues and are supposed to be, and usually are, politically neutral.5 John McLaughlin, a longtime CIA analyst and former deputy DCI, argues that these differences amount to a fundamental cultural difference between the worlds of intelligence and policymaking.6 A key job for intelligence professionals is to make the “worlds” interact in ways that primarily serve leaders but also are mutually beneficial. (Location 5818)
- As we have argued, warning is often about suggesting to leaders that they make decisions they would rather not make; hence, understanding how specific leaders respond to stress can be the difference between persuasive and unpersuasive warning. (Location 5827)
- Psychologists have demonstrated that people, including national political leaders, rely heavily on personal experiences when making decisions and that they value information consistent with their beliefs—not necessarily their desires—more than information inconsistent with their perceptions or assumptions; they thereby demonstrate “confirmation bias.” (Location 5834)
- Challenges to Good Intelligence Producer-Consumer Relations (Location 5904)
- President John Kennedy’s decision-making structure initially included a haphazard reading of intelligence briefs and undisciplined decision-making processes that frustrated his national security advisor, McGeorge Bundy.35 George calls it the “collegial” model, one Jimmy Carter also used.36 Kennedy became disenchanted with intelligence after the Bay of Pigs debacle in 1961—a fiasco for which both he and CIA bear significant responsibility.37 Kennedy eventually began to pay more attention to intelligence, and CIA produced a tailored daily briefing for him called the President’s Intelligence Checklist.38 Kennedy used intelligence well during the Cuban Missile Crisis of October 1962 but ignored CIA warnings that the war in Vietnam, to which Kennedy increased American commitments dramatically, was going badly and was not likely to end successfully. The CIA also warned that Vietnam’s President Ngo Dinh Diem, whom Kennedy supported, was a problematic leader but that there was no obvious replacement for him.39 Kennedy administration officials ignored the warning and tacitly approved the November 1963 coup that killed Diem and his brother, precipitating an extended period of political instability in Saigon that appreciably damaged the war effort. (Location 5943)
- Johnson disliked both bad news and the people who delivered it.41 He did not like DCI John McCone (a Kennedy administration appointee and a Republican), gave him little time, and did not appreciate warnings that the war in Vietnam probably would continue to go badly.42 McCone therefore resigned in April 1965. Johnson replaced him with retired Vice Admiral William Raborn, a fellow Texan with whom he got along well but who knew little about intelligence, and then with Richard Helms, a career intelligence officer he personally liked.43 He stated his view of intelligence in an earthy comment that Helms overheard at a White House dinner: Let me tell you about these intelligence guys. When I was growing up in Texas, we had a cow named Bessie. I’d go out early and milk her. I’d get her in the stanchion, seat myself and squeeze out a pail of fresh milk. One day I’d worked hard and gotten a full pail of milk, but I wasn’t paying attention, and old Bessie swung her shit-smeared tail through the bucket of milk. Now, you know that’s what these intelligence guys do. You work hard and get a good program or policy going, and they swing a shit-smeared tail through it.44 (Location 5956)
- President Bill Clinton cared little for intelligence early in his first term, leading his first DCI, James Woolsey, to resign after two years of frustrating lack of access to the Oval Office.56 When world events forced Clinton to pay attention to it, he made clear that he wanted only certain messages. In the unpleasant aftermath of the “Black Hawk Down” episode of October 1993 in Mogadishu, Somalia, he told his staff he did not want another military operation in Africa.57 He therefore discouraged warning of impending genocide in Rwanda in 1994, and subsequent reports of its occurrence, by banning use of the word “genocide” at the White House. Clinton re-emphasized his preference for favorable intelligence by publicly criticizing the IC and NIO for Latin America Brian Latell for intelligence assessments critical of the character of deposed Haitian President Jean-Bertrand Aristide—a favorite of Clinton and his political allies in the Congressional Black Caucus.58 Once again, a president publicly displayed a lack of receptivity to intelligence and blatantly encouraged intelligence to self-police the content of intelligence messages—a form of politicization the IC finds easy and acceptable, especially in small doses.59 The IC absorbed Clinton’s political use of intelligence but was much less willing to accept policies of Republican Presidents Nixon, Reagan, George W. Bush, and Trump—against whom intelligence professionals leaked and otherwise expressed their unhappiness.60 This selective self-policing continues to have potentially significant ramifications for the content and timeliness of warning by line units. (Location 6015)
- Less well known is that much of the IC accurately warned that the war in Iraq would not go smoothly and that post-invasion chaos and an insurgency were likely.64 Military intelligence, however, thought the war would soon be over without an extended occupation period and with no insurgency. Senior administration officials accepted the military’s view, which was consistent with administration hopes but was badly flawed for less understandable reasons than the WMD estimate.65 (Location 6034)
- Modern warning entities and analysts should have the following:
  1. A “mental attitude” (of analysts), which facilitates early understanding of adversary intentions. The warning officer must have assumptions, or working hypotheses subject to evaluation, about the goals and plans of potential adversaries (and sometimes “friends”). Skepticism and a degree of controlled paranoia in warning personnel are good things!
  2. A body of doctrine that guides the collection, handling, and analysis of relevant information. In other words, warning personnel should develop formal processes for doing at least some of their work, particularly for monitoring enduring warning problems.
  3. Means for “developing new techniques and methods for collection, processing, evaluation, and analysis significant principally or solely for purposes of strategic early warning.” That is, analysts should develop mechanisms for improving their work in many important areas as they develop; the I&W method is the best historical example of such innovation. Warning analysts must address methodological issues as well as learn a lot about countries and issues of possible warning concern, thereby also generating substantive expertise.
  4. Organizations within an intelligence community that help foster continuous processes of collection, analysis, and communication to senior consumers about the intelligence community’s insights about possible strategic threats and opportunities.2 (Location 6767)
- Jack Davis in 2003 made Gannon’s point in more popular contemporary terms. He suggested that warning be viewed as a kind of “alternative analysis”—a term embraced by the IC under congressional pressure after the failures of 9/11 and the Iraq WMD NIE.6 (Location 6824)