Embracing Grading, Handling, and Dissemination Practices in OSINT
This article explores the significance of grading frameworks, such as the 3x5x2 model, and how these practices enhance the credibility, operational security, and effectiveness of sharing OSINT.
Authors: Paul Wright and Neal Ysart (The Coalition of Cyber Investigators)
Modern investigative work increasingly relies upon open-source intelligence (OSINT). However, an effective intelligence-gathering process depends on more than just collecting data: structured practices of grading, handling, and securely disseminating intelligence are essential. These methods help guarantee operational security, foster credibility, and enable effective collaboration.
For investigators working in areas like undercover operations, covert internet investigations, surveillance, crime scene forensics, digital forensics, and "whistleblower" case management, these practices are vital for success and help avoid "tipping off" subjects of investigations. Additional areas can involve victims, witnesses, and appeals for information from third parties, collectively known as "sources."
Any information OSINT investigators obtain should be recorded through the frameworks discussed in this article.
Grading Frameworks
One of the main challenges with any grading framework is that it must be comprehensible. Intelligence sharing is vital in responding more effectively to emerging threats and improving investigative capabilities. However, there is no use in disseminating intelligence to interested parties through a framework they do not understand. If someone cannot comprehend the grading system, they cannot accurately gauge the reliability of the intelligence they have been provided with. This makes it harder to determine what weight to assign to it when formulating a response or deciding how to use it effectively. To address this issue, a nationally standardised grading system is available to both public and private investigation bodies, providing them with an easily understood way to share intelligence and assess its trustworthiness.
Once information is acquired, it is documented and evaluated through a process called information/intelligence recording. This is a systematic method for managing information, involving skilled personnel and, often, technological tools. An effective information management process leads to the further development of intelligence, facilitates research and analysis, and helps allocate resources for deployment more appropriately. Investment in an intelligence/information recording system can provide investigators and organisations with a range of tactical deployment options quickly and effectively.
When managing information, four key aspects must be considered:
- How information is collected and recorded
- How intelligence is evaluated and authorised within the applicable intelligence framework
- How intelligence is accessed and disseminated
- Retention or deletion of intelligence
Intelligence stored within the framework needs regular review to ensure all information remains relevant, precise, and meets the original legitimate purpose that authorised its collection and retention. This includes managing and resolving any data conflicts and linking connected records. Upon completion of these processes, any intelligence products that have been developed are ready to provide information for the tasking and coordination process. This systematic approach ensures that the information management function complies with human rights, intelligence ethics, data protection legislation, and regulatory codes.
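The four management aspects and the review cycle described above can be sketched as a simple record structure. This is a minimal, hypothetical illustration only: the field names, the "2B1" grade string, and the 180-day review interval are assumptions for the example, not part of any standard schema.

```python
# Hypothetical sketch of an intelligence record covering collection,
# evaluation/authorisation, dissemination, and retention review.
# All field names and the review interval are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date, timedelta


@dataclass
class IntelRecord:
    summary: str                  # what was collected and recorded
    grade: str                    # evaluation, e.g. "2B1" under a 3x5x2 scheme
    authorised_by: str            # who authorised it within the framework
    disseminated_to: list = field(default_factory=list)  # access/sharing log
    collected_on: date = field(default_factory=date.today)
    review_interval: timedelta = timedelta(days=180)      # retention cycle

    def review_due(self, today: date = None) -> bool:
        """True when the record is due for a relevance/retention review."""
        today = today or date.today()
        return today >= self.collected_on + self.review_interval
```

A scheduled job could iterate over stored records, flagging any where `review_due()` is true so that an analyst can confirm the intelligence still meets its original legitimate purpose or delete it.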
In this standardised system, intelligence grading is expressed through numerical and alphabetical values. These values can be communicated quickly, even verbally, and are highly auditable. Typically, the numbers in a framework's name (for example, 3x5x2) denote the scales used for source evaluation, information assessment, and handling/sharing rules. Together, these evaluations and rules assess the reliability and validity of any intelligence received.
History of Grading Frameworks
Intelligence grading frameworks have evolved, with several variations introduced since their inception. The first recorded use of an information grading framework can be traced back to 1939, when the British Admiralty's Naval Intelligence Division (NID) implemented the "Admiralty System" or "Admiralty Code." This framework used a combination of letters and numbers (A1 to D5) to evaluate both the source's reliability and the information's credibility.
Since then, various intelligence grading systems have been developed and used by different organisations. For example:
- North Atlantic Treaty Organisation (NATO) Standard: An evolved version of the Admiralty System, using six grades for source reliability (A-F) and six grades for information credibility (1-6).
- US Army System: Used in their Human Intelligence Collector Operations Field Manual.
- Dutch Law Enforcement System: Initially a 4x4x4 system, modified to a 5x5x5 system in 2003.
- UK Law Enforcement Systems:
- 5x5x5 model: Used until 2016.
- 3x5x2 model: Introduced in 2016 to replace the 5x5x5 model.
While sharing common roots, these frameworks have been adapted over time to meet the specific needs of different organisations and countries. The evolution of these grading frameworks reflects ongoing efforts to improve the accuracy and reliability of intelligence evaluation.
A notable shortfall in the private sector is the lack of professionalism, including the inconsistent use of grading models. While some organisations have adapted older systems like the 5x5x5 framework, many fail to adopt grading, limiting their ability to collaborate effectively with law enforcement or leverage the benefits that grading can bring. Treating OSINT with the same rigour as human intelligence (HUMINT) ensures that even internet-based intelligence is appropriately graded, reflecting its value and reliability.
Sample of an Intelligence Grading Framework
In intelligence evaluation systems like the 3x5x2 model, numerical and alphabetical values provide a structured and standardised intelligence assessment. Here’s a breakdown of the critical components:
Source Evaluation
The numerical value (1–3) evaluates the reliability of the source providing the intelligence:
- 1 (Reliable): The source has consistently provided accurate information in the past. Examples include official government databases or trusted institutional data.
- 2 (Untested): The source's reliability is unknown or has not been previously established.
- 3 (Unreliable): The source has a history of providing inaccurate or misleading information.
This grading enables decision-makers to assess how much weight they should place on the source of the information.
Intelligence Handling
The alphabetical value (A–E) evaluates the reliability of the intelligence itself:
- A (Confirmed): Multiple independent sources or evidence verify and corroborate the intelligence.
- B (Probably True): The intelligence is not confirmed but appears plausible based on the context and available information.
- C (Possibly True): The intelligence is plausible but lacks substantial supporting evidence.
- D (Doubtful): The intelligence is questionable or unlikely to be accurate based on available information.
- E (Improbable): The intelligence is almost certainly false or disinformation.
This grading indicates how reliable the actual content of the intelligence is, regardless of its source.
Sharing/Dissemination Rules
The final value (1 or 2) is a handling code governing how the intelligence can be shared:
- 1: Restricted – Controlled dissemination.
- 2: Unrestricted – Can be freely shared.
Why Use This System?
These codes simplify communication and ensure consistency across different organisations or agencies. For example:
- Intelligence graded as "1A2" comes from a reliable source, is considered confirmed, and carries no restrictions on sharing.
- Intelligence graded as "3D1" indicates an unreliable source providing doubtful intelligence, with dissemination restricted.
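A compact grade such as "1A2" can be decoded mechanically. The sketch below is a hypothetical illustration, assuming the three-character string encoding used in the examples above; the labels mirror the descriptions in this article, and real deployments may use different wording or additional handling codes.

```python
# Hypothetical parser for a 3x5x2-style grading code such as "1A2".
# Label text follows this article's descriptions and is illustrative.
SOURCE = {"1": "Reliable", "2": "Untested", "3": "Unreliable"}
ASSESSMENT = {
    "A": "Confirmed",
    "B": "Probably True",
    "C": "Possibly True",
    "D": "Doubtful",
    "E": "Improbable",
}
HANDLING = {"1": "Restricted", "2": "Unrestricted"}


def parse_grade(code: str) -> dict:
    """Split a compact grade like '3D1' into its three components."""
    if len(code) != 3:
        raise ValueError(f"Expected a 3-character grade, got {code!r}")
    src, assess, handle = code[0], code[1].upper(), code[2]
    try:
        return {
            "source_reliability": SOURCE[src],
            "intelligence_assessment": ASSESSMENT[assess],
            "handling": HANDLING[handle],
        }
    except KeyError as exc:
        raise ValueError(f"Invalid grade component: {exc}") from None
```

Rejecting malformed codes at the point of entry, as `parse_grade` does, helps ensure that only valid, auditable grades are ever attached to stored intelligence.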
By standardising evaluation, grading ensures that intelligence can be trusted, understood, and used effectively across operational environments.
Protecting Sources and Intelligence Integrity
As previously stated, effective intelligence handling involves more than mere collection. It ensures the protection of sources and compliance with regulations like GDPR. For example, sanitisation is a critical practice, where identifiable details about sources are removed from shared intelligence. This process ensures operational security while maintaining the reliability of the intelligence. However, sanitisation introduces a challenge: once a source’s identity is removed, the reader must rely on the intelligence grade to assess its credibility.
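Sanitisation before dissemination can be as simple as stripping source-identifying fields while preserving the grade, so recipients can still weigh credibility. The following is a minimal sketch under assumed field names; it is not a standard schema, and real sanitisation may also need to address identifying details inside the free-text summary itself.

```python
# Hypothetical sanitisation step: remove source-identifying fields from a
# record before sharing, keeping the grade so recipients can assess
# credibility. Field names here are illustrative assumptions.
SOURCE_FIELDS = {"source_name", "source_contact", "handler"}


def sanitise(record: dict) -> dict:
    """Return a copy of the record with source-identifying fields removed."""
    return {k: v for k, v in record.items() if k not in SOURCE_FIELDS}


raw = {
    "summary": "Invoices routed through an offshore shell entity",
    "grade": "2B1",
    "source_name": "Jane Doe",
    "source_contact": "jane@example.com",
}
shared = sanitise(raw)  # retains summary and grade, drops identifying details
```

Because the recipient never sees who provided the material, the retained grade becomes the sole basis for judging credibility, which is exactly the challenge noted above.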
Proper handling should also extend to the classification of sources. Spontaneous contacts offering information, termed "opportunistic sources," must be carefully documented and evaluated. For instance, a whistleblower providing corporate data may initially be unverified. After validation and further interaction, they might be temporarily classified as an "activated opportunistic source" for the duration of the investigation. This primary classification framework organises intelligence and ensures investigators maintain a professional relationship with sources based on their credibility and relevance.
Balancing Operational Needs with Risk Management
Any dissemination must carefully balance operational effectiveness with the need to protect sources. The dissemination arrangements, including, most importantly, codes of practice, procedures, processes, and best practices, must ensure that the risk of leaks and "tipping off" is managed and that operational security is embedded, particularly in sensitive undercover or covert internet operations. When shared, intelligence should be anonymised and released in a way that enables the recipient to understand and evaluate its reliability and validity in line with its grading.
Standardised grading frameworks like 3x5x2 streamline this process, making it easier for private and public organisations to collaborate. However, intelligence sharing must remain segmented, and where appropriate, "Chinese Walls" should be put in place, particularly when dealing with sensitive information, to help minimise exposure and mitigate the risk of tipping off those under investigation.
Why Grading Matters for OSINT Investigators
Historically, OSINT has often been blanket-graded, failing to reflect nuances at the source level. The need for nuanced OSINT evaluation is compounded when its use in compliance, due diligence, or law enforcement fails to distinguish reliable data from less credible material. This gap highlights the need for a robust methodology, combined with advanced analytics, for accurate intelligence gathering.
For instance, UK law enforcement traditionally assigned all internet-derived intelligence the same grade. This outdated approach does not do justice to OSINT's potential in today's information-rich environment. By differentiating on source reliability and value, investigators can attach more precise grades, lending credibility to the intelligence while making it more usable.
Shared grading between private and public sector investigators is also valuable. It standardises methods and builds confidence. It enhances the auditability of intelligence, a critical element of compliance and operational oversight. For example, assigning confidence levels and grades can further refine assessments, helping investigators prioritise and corroborate their intelligence more effectively.
The Coalition of Cyber Investigators believes intelligence grading is the next step in intelligence collection. Intelligence should be graded so that anyone reading it can confidently rely on it. When submitted, intelligence should undergo a grading process in which a handling code is attached as part of an initial risk assessment. Grading allows this source and intelligence risk assessment to be expressed quickly and accessibly, and provides for sanitisation to protect the source.
Conclusion
To enhance their methods, OSINT investigators should incorporate grading, handling, and dissemination practices. Structured constructs, such as 3x5x2, create consistency in the evaluation process, while proper handling protects sources and helps ensure compliance with regulations. Such practices reduce risks, such as tipping off, and help improve collaboration between private investigators and public organisations.
Prioritising information recording can help achieve efficiency in recording intelligence. A control mechanism ensures that the most essential information is recorded first, imposing a scale of priorities in line with the control strategy and current tactical plans. This approach prioritises information recording based on strategic and tactical objectives, aims, or high-risk issues. Secondary candidates for recording can encompass emerging trends that, though lower priority, still warrant intelligence research.
Investigators and analysts need timely, valid information inputs if intelligence products are to add real value. Any information that is inaccurately presented, whether untruthful content or material gathered without adherence to codes of practice, procedures, or processes, will inhibit the development of high-quality intelligence products.
Embedding intelligence grading into OSINT so it aligns with other intelligence disciplines can help investigators achieve better operational outcomes, protect sources, and engender confidence in their findings. Adopting such methodologies forms a best practice and is increasingly necessary in today's high-speed, high-stakes investigative environment.