The word “indicator” is defined as a generalized, theoretical statement of a course of action or decision that is expected to be taken in preparation for an aggressive act and that can be used to guide intelligence collection resources.[i] This definition appears in a lexicon of intelligence warning terminology produced in 2001 by the Defense Intelligence Agency’s (DIA) then-Joint Military Intelligence College (JMIC). (The author could not locate a more recent lexicon produced by the National Intelligence University (NIU), formerly JMIC.)
Unfortunately, in my experience, the word “indicator” has been misused by intelligence analysts so greatly as to limit their credibility in the eyes of customers making key decisions. These misuses consist of taking what are simply suspicious or noteworthy activities, observed while monitoring events or targets of interest, and “marketing” those activities as known precursors of a likely outcome. The point is not that suspicious or noteworthy activities cannot be indicators, but simply to remind readers that for such activity to rise to the level of “indicator,” back testing of all related and similar events MUST be conducted within a reasonable timeframe. Back testing is where quantifiable correlations between independent and dependent variables may be calculated to ascertain whether any significant relationship truly exists (the establishment of causation being one hopeful outcome).[ii]
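To make the back-testing step concrete, here is a minimal sketch (all event records, names, and the toy history are invented for illustration) of one simple way to quantify the relationship: computing the phi coefficient over a 2x2 history of “activity observed” versus “incident followed.” A value near zero suggests the candidate “indicator” has no real relationship to the outcome it is being marketed as predicting:

```python
# Hypothetical back test: does observing activity X actually precede
# incident Y? All event records below are invented for illustration.
from math import sqrt

def phi_coefficient(records):
    """records: list of (activity_seen, incident_followed) booleans.
    Returns the phi coefficient of the 2x2 contingency table; values
    near 0 suggest no relationship between activity and outcome."""
    a = sum(1 for x, y in records if x and y)         # activity, incident
    b = sum(1 for x, y in records if x and not y)     # activity, no incident
    c = sum(1 for x, y in records if not x and y)     # no activity, incident
    d = sum(1 for x, y in records if not x and not y) # neither
    denom = sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom if denom else 0.0

# Toy history: the "suspicious activity" is followed by an incident
# only half the time, and incidents occur just as often without it.
history = ([(True, True)] * 5 + [(True, False)] * 5 +
           [(False, True)] * 5 + [(False, False)] * 5)
print(round(phi_coefficient(history), 3))  # prints 0.0 — no correlation
```

In practice the analyst would also test statistical significance (e.g., a chi-square test) and, crucially, enforce the temporal ordering — the activity must precede the incident within the lookback window — before promoting the activity to “indicator.”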
The negative consequences of a lack of focused analytical rigor in identifying true indicators may include fostering incredulity among the very audiences meant to be served or, worse yet, total belief in a connection where none actually exists.
National Security-Related Intelligence Operations in the Cyber Domain
Within the cyber domain, events occur in fractions of a second yet can set back a targeted enterprise’s operations, information, and communications security for years afterwards! Bringing closure to such an event requires the conscientious intelligence analyst to quickly retrieve extensive amounts of data, process those data correctly, and then properly analyze the reformatted data to produce what will hopefully be an accurate assessment addressing the original request for information (RFI). Moreover, if multiple activities are detected against the targeted network during a single campaign, the additional time needed to process greater amounts of data compounds the difficulty of producing a timely product. (Timeliness is a U.S. Intelligence Community Analytic Standard that must be CONSIDERED during the production of deliverables, as dictated by the Office of the Director of National Intelligence (ODNI).)[iii]
It is at this point, in my opinion, that the cognizant cyber threat intelligence analyst will either SHINE or FAIL TO FULFILL THEIR OBLIGATIONS. The reason: it is at this point that the decision will be made either (A) to use a pre-existing indicator that may have been proven valid sometime in the past but has not been re-verified for continued accuracy in a dynamic environment, or (B) to hold back a product – risking timeliness issues with the deliverable – in order to process the most recent tranche of data, possibly finding that pre-existing indicators have been overtaken by a more valid indicator; presumably, the latter yields a potentially more accurate assessment. (“Makes accurate judgments and assessments” is Analytic Tradecraft Standard #8 of 10, which must be EXERCISED in the production of deliverables, as dictated by ODNI.)[iv] We will assume, for the remainder of this missive, that the analyst chose to push forward with a deliverable using a pre-existing indicator (choice “A”).
As a consequence, the cognizant analyst’s assessment has a higher probability of being inaccurate, and the mitigating actions customers take – only after further, hefty deliberation – could leave those customers confused when outcomes do not remotely align with expectations. Again, this confusion will likely be attributed to the manner in which the threat intelligence was delivered: namely, as an impactful outcome based on notable (yet not properly vetted) activity given the title of “indicator.” Because evidence of malicious cyber activity tends to be captured in information network audit logs or some other IT system of record (often exported in spreadsheet format), the processing of large data sets is UNAVOIDABLE; there is no getting around it if the analyst is to provide the highest quality of assessment to a customer.
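Because those audit-log exports are typically tabular, the unavoidable processing step can start as simple row aggregation over the most recent tranche of data. The sketch below is illustrative only – the log schema, column names, and records are all invented, not any particular product’s format – and counts one kind of suspicious event per source address in a CSV export:

```python
# Minimal sketch of processing a tabular audit-log export.
# The schema and records are invented for illustration.
import csv
import io
from collections import Counter

RAW = """timestamp,src_ip,event
2024-05-01T00:00:01Z,203.0.113.7,failed_login
2024-05-01T00:00:02Z,203.0.113.7,failed_login
2024-05-01T00:00:03Z,198.51.100.9,failed_login
2024-05-01T00:00:04Z,203.0.113.7,failed_login
"""

def failed_logins_by_source(csv_text):
    """Count failed-login events per source address, so the analyst can
    see which hosts dominate the most recent tranche of data."""
    counts = Counter()
    for row in csv.DictReader(io.StringIO(csv_text)):
        if row["event"] == "failed_login":
            counts[row["src_ip"]] += 1
    return counts

print(failed_logins_by_source(RAW).most_common(1))  # [('203.0.113.7', 3)]
```

Aggregations like this do not themselves produce indicators; they produce the candidate activities that must then survive the back testing described earlier before being presented to a customer as such.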
Ironically, line analysts may not be (totally) at fault for the aforementioned shortcomings in tradecraft, given the inclination of some management and leadership to prioritize “feeding the book.” “The book,” in this case, is typically a binder packed daily with the very latest deliverables, used to inform key customers of any events thought worthy of their consideration. The mismanagement of this information tool arises when the view that “there can never be NOTHING going on” dictates that SOMETHING must be manufactured to “feed the book,” lest the perception arise that analysts are not working optimally. The result: invaluable time is wasted addressing less substantive issues in a domain where fractions of a second mean the difference between building a proper network defense and having that network “owned.”
I offer two suggestions for remedying this situation. The first requires the cognizant analyst to convey to cyber threat intelligence customers that any initial product addressing a cyber event is likely the first in a series. Managing customer expectations in this way allows customers to understand that updates to the initial deliverable may substantially change its assessment, and it buys the analyst the time needed to process data that may yield more actionable insights. It also mitigates the “one and done” view of production, which compels many analysts to misuse the term “indicator” while “packing” a deliverable with everything it is believed the customer might want to know at that time.
The second suggestion is for cyber threat intelligence customers to help ensure they receive optimal deliverables by making the hard yet beneficial decision to forego shorter-term “snippets” that “feed the book.” This step will likely allow affected analysts to conduct, as quickly as possible, the meticulous data processing and analytical rigor sought by all cyber threat intelligence customers. As a consequence, customers can demand more comprehensive and well-researched deliverables – assessments based on the MOST CURRENT and ACCURATE indicators available – without continually forcing management and subordinate analysts to satisfice in attempts to meet short-burst deadlines for lesser deliverables that may not withstand greater than cursory inspection of their analytical rigor.