Peter and Anil recently made an important point about why attributes cannot be assigned “assurance” levels akin to authentication decisions. Instead, they suggested that attributes and their sources may be assigned “confidence scores” that allow a service provider to make an informed decision about trusting an attribute source, or not.
In this context, I would like to expand a little on Anil’s thoughts on how we can evaluate our confidence in authorization.
In the end, service providers (or their delegates) are really interested in a metric for the confidence of an authorization decision. As an SP, I am only mildly interested in the confidence in attributes: what I really care about is whether the authorization decision that needs to be made is sound, or not. This is driven by a number of different factors (a non-exhaustive list):
- Identification and authentication – as a service provider, I need to know who wants to access my resources.
- Roles, attributes, and other authorization factors – these determine what users are allowed to do, based on their characteristics. This process includes the complex exercise of translating natural language policy into a conceptual access control model, and then into a machine-interpretable set of policies and facts. These policies can be evaluated, using the facts as inputs, to compute the binary decision “grant access” or “don’t grant access”.
- Overall trustworthiness of the system components – this extends from the reliability of the authentication decision (reasonably well captured by the LOA) to the trust the service provider has in the authorization decision.
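To make the second point concrete, here is a minimal sketch of how a *BAC engine reduces policies and facts to a binary decision. The attribute names (`role`, `loa`) and the rules themselves are illustrative assumptions, not a prescribed model:

```python
# Sketch of *BAC-style evaluation: each policy rule is a predicate over
# attribute facts; the combined evaluation yields a binary grant/deny.
from typing import Callable, Dict, List

Facts = Dict[str, object]
Rule = Callable[[Facts], bool]

def evaluate(rules: List[Rule], facts: Facts) -> bool:
    """Grant access only if every policy rule is satisfied by the facts."""
    return all(rule(facts) for rule in rules)

# Hypothetical policy: user must hold a given role AND have been
# authenticated at a sufficient level of assurance (LOA).
rules: List[Rule] = [
    lambda f: f.get("role") == "clinician",  # role check
    lambda f: f.get("loa", 0) >= 3,          # minimum authentication LOA
]

print(evaluate(rules, {"role": "clinician", "loa": 3}))  # grant -> True
print(evaluate(rules, {"role": "clinician", "loa": 2}))  # deny  -> False
```

Note that the output is strictly binary: the engine gives no indication of how much the SP should trust that “True”.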
The need to address authorization trustworthiness is reflected in the discussion presented in Peter’s and Anil’s article. Looking at the problem of attribute confidence, it makes sense to dig a little into the literature on data quality metrics: at the end of the day, attribute confidence is essentially a data quality problem. A number of articles have been published on this topic, including several from Rich Wang et al.
I think it would be a good exercise for the IdAM community to revisit the data quality work, and start profiling it for computing a quantitative value for confidence in an authorization decision. In a sense, any *BAC model incorporates a number of data sources (including the LOA of the authentication) into the authorization process, and computes a binary result from them.
Applying data quality metrics would allow a confidence score to be computed from the component data sources at runtime, reflecting both
- the existing service level agreements for the attributes that Anil mentions, as well as
- the current operational status of the data sources, i.e. some consideration as to whether the source can be trusted right now.
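One way such a score could be computed is sketched below. The quality dimensions (SLA, freshness, availability), the weighted geometric mean per source, and taking the minimum across sources are all illustrative assumptions on my part, not an established formula:

```python
# Hedged sketch: combine per-source data-quality dimensions into a single
# confidence value for an authorization decision. Dimensions and weights
# are invented for illustration; all values lie in [0, 1].
import math
from typing import List, Dict

def source_confidence(sla: float, freshness: float, availability: float,
                      weights=(0.5, 0.3, 0.2)) -> float:
    """Weighted geometric mean of a source's quality dimensions."""
    dims = (sla, freshness, availability)
    return math.prod(d ** w for d, w in zip(dims, weights))

def decision_confidence(sources: List[Dict[str, float]]) -> float:
    """The decision is only as trustworthy as its weakest input source."""
    return min(source_confidence(**s) for s in sources)

sources = [
    # SLA-derived quality vs. runtime operational status per source:
    {"sla": 0.99, "freshness": 0.95, "availability": 1.00},  # HR attribute store
    {"sla": 0.90, "freshness": 0.80, "availability": 0.97},  # federated IdP (LOA)
]
print(round(decision_confidence(sources), 3))
```

The SLA terms capture the static, contractual part of the score, while freshness and availability can be re-sampled at decision time, so the same policy evaluation can yield a different confidence from one request to the next.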
This would ultimately result in a system that expresses the trustworthiness of an authorization decision in quantitative terms, which in turn may be used to make truly risk-adaptive access control decisions.
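A risk-adaptive check could then be as simple as comparing the runtime confidence score against a threshold chosen per resource sensitivity. The sensitivity tiers and threshold values below are invented for the example:

```python
# Illustrative risk-adaptive gate: a positive policy decision is only
# honored when the confidence score clears the threshold for the
# resource's sensitivity tier. Tiers/thresholds are assumptions.
THRESHOLDS = {"public": 0.5, "internal": 0.8, "restricted": 0.95}

def risk_adaptive_grant(policy_decision: bool, confidence: float,
                        sensitivity: str) -> bool:
    return policy_decision and confidence >= THRESHOLDS[sensitivity]

print(risk_adaptive_grant(True, 0.88, "internal"))    # True
print(risk_adaptive_grant(True, 0.88, "restricted"))  # False: not confident enough
```

The same policy “grant” is thus honored for an internal resource but refused for a restricted one, purely because the quantified trust in the decision differs from the risk the resource carries.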