Review of this Week: OMG and WIS3


This week, we made some pretty good progress toward formalizing the conceptual threat modeling work at OMG:

OMG SysA Meeting

On Wednesday the team presented the results of the last three months of work on the conceptual threat model (please find the presentation here). Overall, we have explored the problem area and have started to develop a decent understanding of how a specification would need to be written in order to enable semantic interoperability across different domains and protocol stacks.

In general, the conceptual model we are working on describes the abstract concepts within the overall problem space of threat and risk modeling. With such a model, semantic mappings to specific logical models (such as the cyber-domain-specific model that STIX is creating) can be generated. This will allow the automated generation of ‘semantic glue’, i.e. mappings that transform different representations of data. The benefit of using a conceptual model lies in the fact that it is independent of the respective underlying logical models for each domain or protocol. This approach has been around for some time now, but it is currently being formalized in the Semantic Information Modeling for Federation (SIMF) project.


Currently, we have a (very early) draft model up on our GitHub project page which captures some of the most important aspects of the model:

  • Domain agnostic concepts – all elements of the model can be equally applied to any threat domain. Currently, we are trying to capture the domain identification through an enumeration containing biological, chemical, cyber, electromagnetic, human, physical, and other domains. Eventually, this should be an extension point to the model.
  • Symmetry of offense and defense – all elements of the model can be instantiated by offensive and defensive entities. In fact, we expect that more complex scenarios will invariably leverage both offensive and defensive capabilities across their strategies and campaigns. One example would be using offensive elements in a defensive strategy, in order to implement an ‘active defense’ philosophy.
  • Identification of assets, patterns, and behaviors – all significant elements in the model are either assets (‘things’), patterns (‘plans’), or behaviors (‘events’). In this way, patterns are realized by behaviors, most commonly through the activity of actors.

The model itself lends itself fairly cleanly to extension to safety modeling as well: for example, a natural disaster would be modeled through Operations with specific Effects, but without (offensive) Actors with Strategies or M.O.s.
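To make the three categories concrete, here is a purely illustrative sketch of how they could be expressed in code. None of the class or field names below are taken from the actual draft model on GitHub; they are assumptions made for this example:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Domain(Enum):
    # Domain enumeration from the current draft; eventually this should
    # become an extension point of the model rather than a fixed list.
    BIOLOGICAL = "biological"
    CHEMICAL = "chemical"
    CYBER = "cyber"
    ELECTROMAGNETIC = "electromagnetic"
    HUMAN = "human"
    PHYSICAL = "physical"
    OTHER = "other"

@dataclass
class Asset:          # 'things'
    name: str
    domain: Domain

@dataclass
class Pattern:        # 'plans'
    name: str
    domain: Domain

@dataclass
class Behavior:       # 'events' -- realizes a pattern, typically via an actor
    name: str
    realizes: Pattern
    actor: Optional[str] = None

# The safety-modeling extension mentioned above: a natural disaster can be
# expressed as a behavior without any offensive actor.
flood = Behavior("river flood", Pattern("flooding", Domain.PHYSICAL))
```

Note how the actor-free instantiation captures the safety case directly, while an attack would simply set `actor` to an offensive entity.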

As part of the review of the body of work, we decided to join forces with another emerging project on Risk Management within the Systems Assurance TF. The goal will be to produce a joint RFP in time for the June 2014 OMG meeting, and then commence the full specification development.

The reason for combining forces with the risk management project is fairly clear: a conceptual threat model will be significantly more useful if a risk metric can be associated in a repeatable way. Metricizing the threat space this way will allow a number of interesting applications, even beyond pure risk management. A way to leverage this in modeling is by identifying the threats in a use case diagram like this:

[Figure: Generic Threat Modeling use case diagram]

Another one of the ideas raised in earlier discussions was to use the model for simulation. In this application, an experimenter can evaluate a large number of threat scenarios by averaging over the threat and environmental parameters using Monte Carlo methods, and then test the effectiveness of different defensive strategies. If the threat model is metricized with a stable risk methodology, the implementation and evaluation of such simulations can be much more effective.
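To give a flavor of this idea, a minimal Monte Carlo sketch might look like the following. The threat parameters, the normal distribution, and the success criterion are entirely made up for illustration; a real simulation would draw them from a metricized threat model:

```python
import random

def simulate_scenario(attacker_skill, defense_strength, rng):
    # One trial: the attack succeeds if the attacker's (noisy) effectiveness
    # exceeds the defense. The distribution here is a placeholder.
    effectiveness = rng.gauss(attacker_skill, 0.2)
    return effectiveness > defense_strength

def expected_breach_rate(attacker_skill, defense_strength, trials=10_000, seed=42):
    # Average over many randomized trials to estimate the breach probability.
    rng = random.Random(seed)
    breaches = sum(
        simulate_scenario(attacker_skill, defense_strength, rng)
        for _ in range(trials)
    )
    return breaches / trials

# Compare two hypothetical defensive strategies against the same threat:
weak = expected_breach_rate(attacker_skill=0.6, defense_strength=0.5)
strong = expected_breach_rate(attacker_skill=0.6, defense_strength=0.9)
```

With a repeatable risk metric feeding the parameters, the same loop can rank defensive strategies by their estimated residual risk.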

WIS3 – Threat Panel

On Thursday, we participated in the annual WIS3 conference hosted by IJIS, PM-ISE, and OMG. We had a great panel consisting of representatives from the public and private sector:


  • Greg Rattray, Delta Risk
  • Antonio Scurlock, ESSA, DHS
  • Andy Pendergast, Cyber Squared
  • Justin Stekervetz, NIEM PMO, DHS
  • Cory Casanave, Model-Driven Software

With the perspectives of the panelists, we were able to discuss the current state of threat sharing and its implications for national security, retail, law enforcement, and other domains. The discussions touched on the history, current state, and future of threat modeling, and on how the different initiatives in this area can help implementors move forward. One of the great takeaways of the workshop was the announcement of the Project Interoperability initiative by PM-ISE, which aims at creating a community for enabling information sharing projects and serves as a launch point to various development sites.

 

CND → CICyber


There are a lot of organizational Tactics, Techniques, and Procedures (TTPs) in Counterintelligence (CI) that can inform more efficient Defensive Cyber Operations (DCO) or more generally Computer Network Defense (CND)1 processes and strategies. While the mapping is not perfect, it would be interesting to explore further how traditional CI organizations can collaborate with information security groups. A perfect first step in this direction would be setting up better information sharing activities between CI and CND to assist in identifying and neutralizing insider activities: getting deep insight on potential bad actors inside an organization can help the defensive team to architect and deploy better monitoring and accounting systems in a targeted way. In the private sector, this would mean a very close collaboration between the human resources department and the security office.

Hypothesis: CND → CI|Cyber

Computer Network Defense was until May 2013 defined in the context of military operations as:

Actions taken through the use of computer networks to protect, monitor, analyze, detect and respond to unauthorized activity within Department of Defense information systems and computer networks.

This definition is mutatis mutandis very much applicable to how non-military domains approach information systems security. While not comprehensive (what is ever in security?), it scopes the area of activity for the security professional fairly nicely, especially if you include notions of addressing the insider threat and resilience (i.e. the capability to operate under attack).

Here is a proposition: The mapping from Computer Network Defense to the restriction of Counter Intelligence to the Cyber domain is a monomorphism:

CND → CI|Cyber

While this is not a strict mapping in the sense in which mathematical morphisms are defined, it is really intended to say this: the methods and procedures used in CND are very similar to those that are used for the application of counterintelligence to investigating information systems.

CI and CND Activities

How can one make this argument? A recent article by Michelle Van Cleave2 describes the general field of CI. Specifically, she illustrates the areas where CI is active:

  • Identify
  • Assess
  • Neutralize
  • Exploit

Looking at any effective security program, most of these areas are very relevant to the network defender:

1. Identify

Similar to the need of CI to “identify the foreign intelligence activities directed against the United States”3, including attribution, types of activities, targets, etc., network defenders (both public and private sector) need to better understand the threat landscape that they are facing. Identifying the threat actors and their goals is a necessary prerequisite for all subsequent activities in defending against attacks for both CI and CND: the similarities in identifying the adversary in both fields seem obvious, and on closer inspection they require similar approaches.

This becomes especially striking in the context of identifying insider threats.  Indeed, dealing with insider threats — the heart of any CI activity — presents the tightest mapping between CI and CND across many of the areas discussed here. Historically, CI organizations have had severe challenges with Insider Threats, and some of these challenges are likely due to the disconnect between the CI organization’s goal and the monitoring capabilities that were available.

2. Assess

As for CI, it is important for information systems security to accurately assess the situation. The awareness of threats and the understanding of how they may affect critical assets is very important for the identification of the actual risks, and the development of effective countermeasures to limit the potential impact from those threats or – alternatively – accept the risk consciously. In CND, threat and subsequent risk assessments are the departure point for formulating the defensive strategy. This activity also encompasses the analysis of the Attacker’s business processes and intention from a Defender’s perspective. In the figure below, this analysis informs the understanding of the box on the left side by the analyst (“Good Actor”) on the right.

[Figure: Threat Modeling Defender-Attacker]

3. Neutralize

Once the threat is clearly characterized, both CI and CND need to address it and determine the best treatment for the risk it poses. While CI – based on the broad set of tools legally available to them – can be much more effective, CND and its more limited means aim to neutralize the threat as well. Regular defensive tools can deny the adversary access to the assets that they seek. In addition they can blind the reconnaissance activities of attackers by minimizing the cyber profile of the targets, or respond to attacker’s probe requests with wrong data.

A significant advantage for CI is that they have a wide arsenal of offensive capabilities such as search warrants and trained investigators that do not necessarily translate into similar tools for CND, at least for private sector organizations. For public sector organizations, other means (such as Computer Network Attack – see reference in footnote 1) are available as additional approaches to neutralize the threat. Some approaches such as using honeypots for deceiving the adversary are acceptable in certain circumstances, but “active defense” is still very much a legal gray area: cyber vigilantism is not currently acceptable in most regulatory regimes.

With both the threat actors and the targets under the control of the defender (such as in the case of the insider threat), the possibilities for neutralizing insider threats are less constrained, even in the private sector.   Concerns restricting active defense against an outsider, such as risks of imperfect identification and collateral damage, are less pressing.  Fairly heavy-handed interdiction or quarantine — up to confiscation of resources and physical restraint of the (human) attacker — may be possible.  At the same time, any activities along such lines would have to be carefully orchestrated to not run afoul of legal limitations.

4. Exploit

Exploitation of the attacker’s weaknesses is typically not within the scope of CND, but would be covered by Computer Network Exploitation (CNE) and subsequent Computer Network Attacks (CNA) in the sense of JP 3-13. Again, to a limited extent private sector defenders are engaging in these activities when they take an “Active Defense” position, which is much less common and may have legal repercussions for the defending side. Again, there may be greater latitude here in dealing with insider threats.   While considerations such as preservation of evidence can put prudential limitations on exploiting attacker-controlled systems operating “inside” the defender, the general legal prohibitions on unauthorized access to systems will not apply.

Discussion

The mapping discussed above seems fairly straightforward, but there are a number of additional aspects that should be considered. Initially, it is important to investigate what type of expertise is expected to perform tasks associated with identification, assessment, neutralization, and exploitation in either CND or CI. As such it may be helpful to visualize how specific expertise has been common in either discipline:

[Figure: CND-CI-Cyber expertise mapping]

While this mapping is not particularly accurate and leaves out a number of more complex aspects in either CND or CI, it still illustrates how the two disciplines may effectively cooperate. One prerequisite for successful collaboration is a shared understanding of how responsibilities should be assigned and what the core concepts really mean (i.e. a shared ontology). Creating a mapping between CND and CI activities may assist in establishing this joint understanding4.

For large organizations the CND and CI functions may be implemented in different groups, leading to specialization and the danger of disconnectedness from each other. For example, traditional CI in national security has in the past mainly focused on the Social Domain and indicators derived from HUMINT and related disciplines. On the other hand, CND has in the past often operated in a “CI vacuum”, i.e. without considering motivations, psychological aspects, and other elements of the Social Domain when trying to defend cyber assets.

Smaller or private sector organizations typically do not have the resources to operate separate CND and CI groups. Instead, these functions are typically implemented by a security office or – less ideally – scattered across the entire organization between human resources, IT staff, marketing/business intelligence, and operational staff.

Thanks to Richard, Grant, Mike, John, Rick, John, and KP for feedback.

 

1 CND was formally defined in Joint Publication 3-13  until May 2013 at http://www.dtic.mil/doctrine/new_pubs/jp3_13.pdf. For the purpose of this article, I am also using it in a more general sense, i.e. the practice of defending any information system against attackers.

2 Michelle Van Cleave, “What is Counterintelligence?”, The Intelligencer: Journal of U.S. Intelligence Studies, Fall/Winter 2013-2014

3 ibid.

4 Interestingly enough, this approach and the identification of specific conceptual domains echoes modern understanding in Command and Control (C2) theory: David Alberts has discussed a modern model for C2 in which he identifies how different conceptual domains are necessary for developing an understanding of the battle space, creating plans, and executing them. Alberts’ Social Domain mainly comprises the social interactions and relationships of the friendly force’s members, but in order to better understand the adversary it seems natural to extend the Social Domain to all participants, including adversaries and neutral entities. See e.g. David Alberts, “Understanding Command and Control”, CCRP Publication Series, 2006. Note that Alberts differentiates between Information Domain and Physical Domain. For the purpose of discussing CND, the Cyber Domain will need to be treated on the same basis as the physical domain, since this is the space where actions can be performed and effects can be observed. The Information Domain is about the communication space – Cyber only enters this picture by enabling an effective Information Domain.

 

Refining Use Cases for Threat Modeling and Next Steps

Here is some progress we made recently as part of the OMG threat modeling working group: in order to guide the development of the threat meta-model, we have agreed to scope the work by defining specific use cases and scenarios. The idea is to focus and bound the work so that it will be easier to identify the specific individual tasks and shape the overall threat model architecture. So far, we have identified the following:

  • Large Company Use Case: a large multi-national corporation with sensitive information systems, datacenters, and multiple offices, with cyber and physical security systems
    • External Attack Scenario: outside criminals or competitors, looking for financial gain or competitive advantages
    • Insider Threat Scenario: malicious insider, motivated by financial gain, desire for retaliation, or industrial espionage
  • Critical Infrastructure Use Case: the Northeastern US power grid
    • Terrorist Attack Scenarios: capable terrorists disable critical cyber systems, resulting in physical destruction of core equipment

Going forward we will refine and extend these use cases to develop a clearer picture of what the threat meta-model will look like.

Next steps in the threat modeling itself will focus on generalizing the indicator concept to cover non-cyber related threat indicators. This will really be a starting point for the broader model.

 

 

Quick Update on Threat Modeling

We had a number of successful meetings after the OMG Technical Meeting in December, culminating in a Kick-Off Meeting on Jan 6, 2014. At that meeting, we reviewed the current status of the project, and received great guidance and support from Kshemendra Paul, Donna Roy, and Richard Soley.

Going forward, we have decided to use GitHub as a repository for collaboration. For Phase 1 the repo is at https://github.com/omg-threat-modeling/phase1 and will host all relevant documentation and code. Starting next Monday, Jan 13, we will have recurring conferences from 2-3pm ET.

Threat Modeling Project

At the OMG Technical Meeting in Santa Clara, CA today I presented some thoughts on creating a comprehensive model for describing information security threats. My session was hosted by the System Assurance Task Force as part of their charter to improve overall systems security and resilience. Prior to this meeting, I had a large number of discussions with various stakeholders in the emerging (really: maturing) field of threat information sharing.

Motivation

Given that there are already a number of specifications for threat information sharing, the question arises why we should bother looking at creating a complex project that would result in a threat model. There are – in my mind – a number of reasons why this not only makes sense, but in fact is a necessity for furthering security on the internet. Some of these reasons include:

  1. Existing working specifications (such as STIX, OpenIOC, IODef, etc.) do work for their respective use cases, but are somewhat siloed and do not allow for cross-platform information sharing. In homogeneous environments this is not a concern, but most systems – especially in large organizations or in a cloud-centric world – are quite heterogeneous and will require interoperability between different stacks of threat sharing ecosystems. Without a consistent model that fosters semantic interoperability, we may see occasional convergence, but certainly no ability to federate at a multi-lateral level.
  2. Specific solutions are optimized for their original use cases, but have a hard time extending cleanly beyond the original intended use. While it will always be possible to expand the scope and potential use cases for an existing ecosystem, it commonly either (i) takes increasingly complex extension mechanisms to support new areas, or (ii) starts off from an under-profiled specification that can be used for anything, but leaves enough room for interpretation in implementing it that interoperability is effectively non-existent. Having a high-level conceptual model addresses both problems by (i) allowing a semantically consistent federation of best-of-breed standards, or (ii) guiding the development of profiles for enabling interoperability.
  3. Extension to new domains (such as supply chain threats or traditional law enforcement threats) or interfacing with adjacent problem areas (such as threat and risk management) is awkward at best, if not outright impossible. Having a platform independent model or even a threat meta-model would make this connection significantly easier.

The core idea is to enable system engineers and architects to actually build systems-of-systems that implement and leverage the capabilities we have been building up to. Attached to this article are the presentation and the paper that I prepared for the SysA TF meeting – feel free to use and share as you see fit.

Approach and Goals

To address these issues, I propose a multi-phase approach that is intended to leverage existing work that has already been proposed in various communities. The goal is to understand the problems that will inevitably arise in identifying a shared ontology for threat sharing in a limited initial phase, and then broaden the scope to develop a comprehensive meta-model that can be leveraged in various ways. Specifically, for phase 1 we can learn a lot by leveraging the UML Profile for NIEM to generate a “Cyber Domain PIM” and a STIX Platform Specific Model (PSM) that is – effectively – a rendering of the Cyber Domain PIM in STIX-compliant XML. Using the existing UML profile for NIEM is effective since it provides the necessary language to express the STIX model in a platform-independent way. As an interesting side note: completing this step will align two complex models that were originally defined in XML Schema and establish a semantic interoperability layer through a conceptual model.

In a second phase, the Cyber Domain PIM can then be broadened in multiple ways: Firstly, additional PSMs (such as for OpenIOC, IODef, Snort Rules, SI*, NIEM JSON, etc.) can be derived from the PIM in a consistent fashion, enabling the creation of meaningful mappings between these different languages. As a result, a properly profiled STIX Course of Action could then be automatically translated into a corresponding Snort rule that effectively implements a meaningful countermeasure against a specific threat. Secondly, the PIM can be extended with the capabilities supported by the new PSMs: for example, a Secure I* PSM could provide guidance on extending the PIM with advanced actor-based social and behavioral modeling.
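As a toy illustration of what such an automated translation could look like: a course of action expressed in a platform-independent form is rendered into a Snort-style rule string. The field names (`blocked_ip`, `rationale`) and the rule template are invented for this sketch and do not reflect actual STIX or Snort tooling:

```python
def coa_to_snort(coa: dict, sid: int) -> str:
    # Render a (hypothetical) platform-independent course-of-action record
    # into a Snort-style drop rule. In a real PIM/PSM pipeline this mapping
    # would be generated from the models rather than hand-written.
    return (
        f'drop ip {coa["blocked_ip"]} any -> $HOME_NET any '
        f'(msg:"{coa["rationale"]}"; sid:{sid};)'
    )

rule = coa_to_snort(
    {"blocked_ip": "203.0.113.7", "rationale": "Block known C2 node"},
    sid=1000001,
)
```

The point is not the rule syntax but the shape of the pipeline: once both ends are expressed against the same PIM, such renderings can be derived consistently rather than coded ad hoc.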

Finally, the second phase should include a pathway towards a threat meta-model that (i) captures the domain-independent aspects of the threat model, and (ii) aligns the project with adjacent work such as the emerging risk management meta-model. In a (yet to be concretized) third phase, the cross-domain capabilities of the meta-model can effectively be used to integrate existing non-cyber domains (e.g. in the context of the Common Alerting Protocol – CAP), or guide the development or emerging non-cyber specifications in aligned specification stacks (such as the Supply Chain Observable eXpression project – SCOX – in STIX).

Road Ahead

Based on the positive feedback from the Santa Clara meeting, I am looking into starting work on phase 1 as soon as possible. There is currently great momentum within OMG and its member organizations as well as some of the affected communities, so I think that we can get an RFC for phase 1 ready in the March or June timeframe. To facilitate this, we will be starting a working group early January that will focus on creating the Cyber Domain PIM and the STIX PSM. If interested in observing or actively participating, please send me a note.

Email – Identity Management Gone Awry

Yahoo has started a program where users may request to have existing account and email names transferred to them. If a requested account has not been in use for an extended period of time, Yahoo will transfer this account to the requestor and they can use it going forward. From Yahoo’s perspective this is completely understandable: the namespace under @yahoo.com has been in use for more than a decade, and it is practically impossible to get a halfway decent email address, since the good ones were used up a long time ago.

From a user’s perspective this is not so great: if you have used your email in the past for sensitive services (such as banking, shopping, or access to your health information), losing access to the account that controls them all could mean a world of hassle, potential loss of sensitive information, and ultimately personal harm. This is rooted in the fact that service providers have encouraged (and in many cases forced) users to use their email address as identifier (i.e. username) when signing up to a new service.

The argument that re-purposing the username as global identifier was a bad choice definitely has merit: ultimately, as a service provider you outsource at least components of your identification process to an external entity over which you (the service provider) have no control. ‘Bad practice’ doesn’t even start to describe this. On the other hand, we have seen many other identity systems fail epically: either there was no acceptance in the user base, with the service providers, or – most commonly – in both groups. The reasons are as diverse as the proposed identity management systems: ease-of-use, technical complexity, cost, and other factors have all contributed to the situation.

So where are we and how do we get out of this mess? In today’s world, the ability to prove ownership of an email address is a core pillar of identity proofing on the internet. The vast majority of providers allow password resets through email, often without any further proofing. Unilaterally changing the expectation of continued ownership by Yahoo is bound to cause a lot of problems for Yahoo, the service providers, and ultimately the end-users. While Yahoo has (inadvertently?) pointed the finger at the obvious gaping hole in our identification regime, their move makes me (as a blog owner, service provider, or web site operator) doubt whether to accept their users in the future. Minimally, I will have to either (i) make significant investments into e.g. knowledge-based secondary authentication with limited expectation of success, or (ii) take the risk of allowing access to sensitive or personal information to people who should not have it. Either way, I – along with my users – will be paying for Yahoo’s move to refresh their pool of available usernames. Great.

Time will tell if this approach will be accepted by the internet eco-system and the courts, but I have serious reservations. The “Require-Recipient-Valid-Since” approach may mitigate some of the impact of this debacle, but broad adoption (never mind full standardization) may take years, leaving a gaping hole in the email security for the future. What annoys me most is the unilateral approach in changing the terms of use after training users for years to rely on email security for identification. That is simply put a colossal breach in trust into the integrity of the email provider. Maybe the comparison is slightly hyperbolic, but in a way Yahoo and Mailinator are now in the same bucket.

Multi Factor for Personal Use


Right now, I am looking into alternative multi-factor authentication solutions. There are the obvious contenders such as SecurID or smartcards, but they tend to be on the pricey side, especially if you want to use them for your blog, your home VPN, or generally for fun.


Enter Duo Security: this company offers a cell phone based solution for multi-factor. Not overly exciting in itself, since this is something that has been around for quite a while. However, their solution is pretty flexible: they support the usual callback and text-based flows, but also a smartphone app that leverages the device’s HSM for protection. The neat thing about this (semi-)soft token is that it provides an OTP PIN solution based on hardware crypto providers, but without the hassle of keyfob distribution management.
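Duo’s token internals are not public, but app-based OTP generators are generally built on the standard HOTP construction from RFC 4226 (HMAC over a counter, then dynamic truncation). Here is a minimal sketch of that algorithm, verified against the RFC’s published test vector:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226 HOTP: HMAC-SHA1 over the big-endian 8-byte counter,
    # followed by dynamic truncation to a short decimal code.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test vector: ASCII secret "12345678901234567890", counter 0
print(hotp(b"12345678901234567890", 0))  # -> 755224
```

A hardware-backed soft token differs mainly in where the secret lives (inside the phone’s HSM rather than in application memory), not in the code-generation math.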


In addition to the phone-based authentication flows (which obviously also work with landlines, since they support voice callbacks), they also support hardware tokens, including Yubikey. It is ultimately up to the administrator to determine which devices are sufficient, but Duo supports the option to allow multiple devices and let the user choose.

Now, the really nice part about all this is that their Personal Edition for up to 10 users is free. This means that I can finally start to take a look at my personal stuff and determine if and where to enable multi-factor. The first step is this blog, since Duo provides all necessary components (cell phone app, service, WordPress plugin) out of the box. Setting this up took me about 15 minutes. VPN into my home network will be next on my list. 

Some Changes

demandware logo

Starting with the new year, I am working with Demandware as their new Chief Security Officer. In this role I will be responsible for developing and implementing a comprehensive corporate information security governance and management framework, covering all aspects of security and privacy within the company and our product. In addition, I take ownership of Demandware’s expanding compliance portfolio (PCI, TRUSTe, EUDPA, SOC) and will work closely with both the Engineering and Technology organizations on measuring and continuously strengthening the overall security and privacy capabilities of the Demandware eCommerce platform.


At this point I would like to note that my time at MITRE was an outstanding experience: I was lucky to be able to work with some extremely talented and motivated people on both the Air Operation Center Modernization as well as on hData. During my tenure I learned a lot about the federal government and how it works (or sometimes not), but also had the opportunity to expand my own responsibilities and knowledge significantly.

Looking forward I am very excited about the opportunities and the high operational tempo at Demandware.

Dynamic Aggregated Confidence Score

Peter and Anil recently made a very important point about why attributes cannot be assigned “assurance” levels akin to authentication decisions. Instead, they suggested that attributes and their sources may be assigned “confidence scores” that may allow a service provider to make an informed decision about trusting an attribute source, or not.

In this context, I would like to expand a little on Anil's thoughts on how we can evaluate our confidence in authorization.

In the end, the service providers (or their delegates) are really interested in a metric for the confidence of an authorization decision. As a SP, I am only mildly interested in the confidence in attributes: what I really care about is whether the authorization decision that needs to be made is ok, or not. This is driven by a number of different factors (non-exhaustive list):

  • Identification and authentication – as a service provider, I need to know who wants to access my resources.
  • Roles, attributes, and other authorization factors – determine what users are allowed to do, based on their characteristics. This process includes a very complex exercise of translating natural language policy into a conceptual access control model, and then into a machine-interpretable set of policies and facts. These policies can be evaluated, using the facts as inputs, to compute the binary decision “grant access” or “don’t grant access”.
  • Overall trustworthiness of the system components – this extends from the reliability of the authentication decision (reasonably well captured by the LOA) to the trust the service provider has in the authorization decision.
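As a toy example of the second factor above, evaluating machine-interpretable policies against facts to produce a binary decision: the policy structure and attribute names here are invented for this sketch, not taken from any particular *BAC standard:

```python
def evaluate(policy: dict, facts: dict) -> bool:
    # A policy is a set of required attribute values; access is granted only
    # if every requirement is matched by the presented facts.
    return all(facts.get(attr) == required for attr, required in policy.items())

policy = {"role": "physician", "department": "cardiology"}

# Matching facts -> grant; mismatched role -> deny.
assert evaluate(policy, {"role": "physician", "department": "cardiology"}) is True
assert evaluate(policy, {"role": "nurse", "department": "cardiology"}) is False
```

Real policy languages (XACML and friends) add obligations, combining algorithms, and hierarchies on top, but the core remains this facts-in, binary-decision-out evaluation.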

The need to address the authorization trustworthiness is reflected in the discussion presented in Peter’s and Anil’s article. Looking at the problem of attribute confidence, it makes sense to dig a little into literature on data quality metrics: at the end of the day, attribute confidence is essentially a data quality problem. There are a number of articles that have been published on this topic, including a number of articles from Rich Wang et al.

I think that it would be a good exercise for the IdAM community to revisit the data quality work, and start looking into profiling this work for computing a quantitative value for confidence in an authorization decision. In a sense, any *BAC model incorporates a number of data sources (including the LOA of the authentication) into the authorization process, and computes a binary result from them.

Applying the data quality metrics would allow a confidence score to be calculated from the component data sources at runtime, reflecting both

  • The existing service level agreements of the attributes that Anil mentions, as well as
  • The current operational status of the data sources, i.e. some consideration as to whether the source can be trusted right now.

This would ultimately result in a system that expresses the trustworthiness of an authorization decision in quantitative terms, which in turn may be used to make truly risk-adaptive access control decisions.
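A minimal sketch of such an aggregated confidence score, assuming a simple weighted average. The source names, weights, and threshold are placeholders; a real system would derive the weights from SLAs and the live operational status of each source:

```python
def confidence(scores: dict, weights: dict) -> float:
    # Weighted aggregate of per-source confidence scores, each in [0, 1].
    total = sum(weights.values())
    return sum(scores[s] * weights[s] for s in scores) / total

# Hypothetical component sources feeding an authorization decision:
sources = {"idp_loa": 0.9, "hr_attributes": 0.8, "device_posture": 0.6}
weights = {"idp_loa": 3.0, "hr_attributes": 2.0, "device_posture": 1.0}

score = confidence(sources, weights)
# Risk-adaptive step: grant only if the aggregate clears a policy threshold.
grant = score >= 0.75
```

The interesting part is that `score` can be recomputed per request, so a source that degrades operationally lowers the aggregate and can flip the decision without any policy change.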

Relying on Attributes

Anil talks about LoA for attributes in response to some of the discussion at the recent IDTrust at NIST. This discussion came up a couple of times before, and I seem to recall talking about this:

In the bigger picture, “assigning” an LoA for attributes is pretty pretentious, especially when there is no clearly defined relationship between the certifier and the attribute consumer. The ultimate decision to release information lies with the logical custodian of that information (in OAuth: the resource owner, in XACML: the service provider). This decision authority may be delegated to PEPs, PDPs, or be exercised within a workflow.

As the decision authority now pulls in additional information from attribute providers, the environment, and other pertinent data sources, it (the decision authority) must make a determination whether to utilize and trust these sources or not. This determination will depend on a number of factors which ultimately result in the need to perform a risk assessment answering the question:

“If data source A is used for an access control decision, is the risk of making type 1 and/or type 2 mistakes acceptable for my use case?”

Obviously, this question can ultimately only be answered by the logical data custodian, or its delegate. So instead of having an external entity assign a “Level of Assurance” to a particular attribute provider (or more generally: a data source), attribute providers should make a set of metrics available to potential consumers, so that they can make an informed risk decision. Among these metrics, I would think that the following list would be useful for access control decisions:

  • Freshness – is the data up to date?
  • Comprehensiveness – is the offered data sufficient to make a decision?
  • Completeness – is the data available for all identities?
  • Correctness – is the data accurate?
  • Availability – will the attribute provider be available at all times, on all relevant networks?
  • Operational soundness – are the business processes for the attribute provider sufficiently trustworthy to protect confidentiality, integrity, and availability of the data?
  • Privacy/secrecy – is access to the data performed in a way that protects the data or the data consumer from unwanted disclosures?
  • Accountability – is the data provider willing to accept responsibility for mistakes on their part?
  • Arbitration – if something goes wrong, is there a binding arbitration process to determine responsibility?

There are probably many more, but this would be my shortlist.
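As a sketch of how a relying party might consume such published metrics, here is a minimal acceptance check. The metric names follow the shortlist above; the numeric thresholds and the idea of scoring each metric in [0, 1] are assumptions made for this illustration:

```python
# Minimum requirements a relying party might set before trusting an
# attribute provider for access control decisions (illustrative values).
REQUIRED = {"freshness": 0.90, "completeness": 0.95, "availability": 0.99}

def acceptable(provider_metrics: dict) -> bool:
    # The provider is acceptable only if every required metric meets or
    # exceeds the relying party's threshold; missing metrics count as 0.
    return all(provider_metrics.get(m, 0.0) >= t for m, t in REQUIRED.items())

good = {"freshness": 0.95, "completeness": 0.99, "availability": 0.999}
stale = {"freshness": 0.50, "completeness": 0.99, "availability": 0.999}

assert acceptable(good)
assert not acceptable(stale)
```

Softer metrics like accountability or arbitration do not reduce to a single number as cleanly, but the same pattern of provider-published metrics against consumer-set thresholds still applies.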
