Hi there,
No ThoughtSiphon blog post in almost 9 months, but I just wanted to put up a small note to say that I'm currently blogging actively over on the site of my employer, Pirean.
My current and all future posts can be found at the following address: http://www.pirean.com/industry-insight/authors/stephen-williams/
For ThoughtSiphon I'm quite keen to put up some posts on topics such as the latest incarnation of 3D HD TV that I have at home, my VOD/Music-On-Demand hub, why I'm thinking that time might not exist, and why cats are, in my opinion, one of the smartest (or just most manipulative) animals on the planet. Must make time to write those up soon.
Until then please check out my Pirean posts and papers, and the Pirean LinkedIn Group, to which I'm regularly contributing.
Thought Siphon
Tuesday, 3 April 2012
Tuesday, 12 July 2011
Shifting away from traditional personal computing - or at least trying!
I've not posted anything for a while as I'm currently engaged on one of the biggest, if not the biggest, Tivoli Identity Manager projects ever embarked upon, with a side order of SSO. All this is well on the way to a full live deployment, so I'll be sure to post what I can on its many interesting features once that's up and running.
In the meantime I thought a post on a bit of home technology geekery would get me back into the swing of things.
A question that's been doing the rounds in my head for a while now is whether it is truly possible to remove the need for a traditional desktop/laptop and instead move to a thin-client, private-Cloud type setup. The advantage for me personally would be that when working remotely I could swap out my heavy, temperamental and constantly warm laptop for something light, thin and easy to use. What started this process of self-examination was a realisation that I use my laptop in basically two modes: information gathering (mail, browsing, researching) and system development. When working in information gathering mode my dual-core, 3GB, 64-bit Windows laptop is overkill. I can imagine the CPU rapping its fingers on the table or playing Civilization or Angry Birds while I work. When working in system development mode the laptop is not powerful enough, and is limited by the number of virtual machines I can store locally on the laptop's (or my external) hard disk, as well as by whatever system resources Windows 7 will spare me.
A better approach I believe would be where:
- All heavy lifting (running a VM, disk space utilisation, backup) would be done remotely in a 'private Cloud' (or remote data center as it used to be called).
- A tunnel exists into my private Cloud that is highly secure, reliable and easy to use
- The private Cloud uses the minimum amount of power
- The chosen thin-client has an excellent human-computer interface (i.e. it is designed to be an interface rather than a jack-of-all-trades mini-PC), with a high-resolution screen, great connectivity and a rapid startup time.
- The connection between my thin-client and private Cloud has great reliability, speed and no data limits.
- I can still effectively work in an information gathering or system development mode with no change or reduction in productivity.
To start the process of (hopefully) achieving my goal of retiring my laptop, I have started to construct my own private Cloud, which will help with the first point above - 'offload all heavy lifting'. Initially I thought I'd have to buy a super-expensive server and spend a great deal of time and effort setting it up. That's when I came across the fantastic HP ProLiant MicroServer.
HP were (perhaps still are) offering £100 cashback on these fantastic servers, which when taken in light of the average purchase price of £200 is an absolute steal. The dual-core 1.3GHz CPU seems small; however, it has a number of virtualisation features that give it much more punch. Since receiving my MicroServer around a week ago I have installed 3 mixed HDDs that I had lying around, along with 2x4GB sticks of RAM, which cost me just £50. I've got my eye on a Samsung EcoGreen F4 2TB drive as well, which is just £50, if I need more space. I'm currently in the process of installing ESXi 4.1 onto the ProLiant, after which I'll start migrating my existing VMs into the 1TB of collective disk space that this unit now has.
In parallel with the setup of my ProLiant I'm also currently working on the second and third points. The secure tunnel will be achieved using OpenVPN, which I have found to be very easy to use and which comes with some great utilities for supporting certificate-based user authentication. My home router supports Dynamic DNS registration and port forwarding, which will allow me to dial into the VPN server remotely and present my VPN certificate. The power requirement will be met by putting the machine into hibernate mode when I'm not using it and then kicking it into life with a Wake-on-LAN (WoL) packet when needed. You can also get a LightsOut card for the ProLiant for under £30, which would allow me to power the unit off and on remotely - something to consider at a later date.
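For anyone curious about the WoL step, waking a machine boils down to broadcasting a 'magic packet': 6 bytes of 0xFF followed by the target NIC's MAC address repeated 16 times, sent over UDP. A minimal Python sketch of the idea is below; the MAC address is a placeholder for my MicroServer's NIC.

```python
# Minimal Wake-on-LAN sketch: a magic packet is 6 bytes of 0xFF followed by
# the target NIC's MAC address repeated 16 times, sent as a UDP broadcast.
# The MAC address below is a placeholder for the MicroServer's NIC.
import socket

def send_wol(mac, broadcast="255.255.255.255", port=9):
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

if __name__ == "__main__":
    send_wol("00:11:22:33:44:55")  # placeholder MAC
```

Port 9 (the discard port) is the common convention for WoL; in practice the packet would be sent to the home LAN's broadcast address once the VPN tunnel is up.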
I'm currently a day or two away from getting this all completed, which at the very least will help with the system development and testing I need to do on a daily basis. Having the ability to dial in remotely and start up one of the many VMs I have will be invaluable, along with the fact that these will be backed up and ready for duplication at the drop of a hat.
I'll be sure to post on the success of the above, as well as on the process of choosing a thin-client, which is currently not so clear. For info, what I have in mind is an Android tablet that has USB and VGA/HDMI connectivity. Connectivity to the private Cloud will be achieved via an app that supports RDP-type connections over a VPN. Looking at the apps currently available on the Android app store I can see that there are some good candidates.
Feedback from anyone else who's interested in moving away from the traditional PC way of working would be most welcome.
Friday, 1 April 2011
Adapting to a changing IT landscape through the adoption of Cloud technologies
Recent significant advances in virtualization technology have allowed organizations to make enormous efficiency, resource and cost savings, as well as reduce their carbon footprint. This, coupled with an explosion in outsourced IT processes through the use of industrial-strength Cloud-based SaaS offerings, has contributed to a significant change in the IT service management and hosting landscape. An increasingly global and distributed user base, together with social media offerings such as Twitter and Facebook, has changed the online user experience so that users now increasingly expect disparate services to be seamlessly 'joined up' and personalized. At the same time the source and variety of security risks has increased exponentially, due to the continued uptake of online services by the public and changes in the way these users consume them. All these changes place massive demands on legacy IT infrastructures; the days when the data center was king are gone. Organizations will continue to struggle to meet these challenges without significant disruption and process re-engineering. Given the impracticality of ripping up and replacing legacy systems, the only recourse is to extend existing infrastructure in a secure and standards-based manner.
Achieving greater IT service flexibility and functionality whilst preserving existing investments and processes is obviously a non-trivial task. Ensuring that any (re)engineered services are still governed by an organization's existing set of security controls and policies requires a very different set of IT solutions. By extending, enriching and securing disparate systems in a standardized manner, using Cloud technologies and standards such as SAML, OAuth and XACML, organizations can maximize their existing IT investments and gain increased service agility.
Security Assertion Markup Language (SAML) is a standard that allows a user population to securely access resources regardless of their location and HTTP domain. In 2005 SAML v2.0 became an OASIS standard, representing the convergence of SAML v1.x and the Identity Federation Framework (ID-FF) v1.2. Since that time SAML has become the default protocol for Federated Identity Management (FIM) solutions and has given rise to other FIM-related standards. Through the exchange of SAML tokens between Identity Provider (IdP) and Service Provider (SP) entities, users can seamlessly move between different parties within a Federation in a secure manner, supported at all times by a predefined 'Circle of Trust'. Within the context of a business and its employees, SAML can be used to facilitate Single Sign-On between a set of divorced systems, such as two Access Management vendor products within a single organization, or an existing in-house Access Management solution and a Cloud-based SaaS offering. In this context it can be seen that the use of SAML tokens would allow an organization to preserve its existing Identity and Access Management processes and extend them to encapsulate an outsourced service based in the Cloud.
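To give a flavour of what an IdP actually asserts, the sketch below uses only the Python standard library to build the bare skeleton of an unsigned SAML 2.0 assertion: an Issuer, a Subject and an audience-restricted validity window. Real assertions are produced and signed by a federation product or SAML library; the issuer, subject and audience values here are placeholders.

```python
# Illustrative only: the skeleton of a SAML 2.0 assertion an IdP might issue.
# Real assertions are digitally signed and produced by a SAML library/product;
# the issuer, subject and audience below are placeholder values.
import uuid
import xml.etree.ElementTree as ET
from datetime import datetime, timedelta, timezone

SAML_NS = "urn:oasis:names:tc:SAML:2.0:assertion"

def build_assertion(issuer, subject, audience, lifetime_minutes=5):
    now = datetime.now(timezone.utc)
    stamp = lambda dt: dt.strftime("%Y-%m-%dT%H:%M:%SZ")
    ET.register_namespace("saml", SAML_NS)
    assertion = ET.Element(f"{{{SAML_NS}}}Assertion", {
        "ID": "_" + uuid.uuid4().hex,
        "Version": "2.0",
        "IssueInstant": stamp(now),
    })
    ET.SubElement(assertion, f"{{{SAML_NS}}}Issuer").text = issuer
    subj = ET.SubElement(assertion, f"{{{SAML_NS}}}Subject")
    ET.SubElement(subj, f"{{{SAML_NS}}}NameID").text = subject
    conditions = ET.SubElement(assertion, f"{{{SAML_NS}}}Conditions", {
        "NotBefore": stamp(now),
        "NotOnOrAfter": stamp(now + timedelta(minutes=lifetime_minutes)),
    })
    restriction = ET.SubElement(conditions, f"{{{SAML_NS}}}AudienceRestriction")
    ET.SubElement(restriction, f"{{{SAML_NS}}}Audience").text = audience
    return ET.tostring(assertion, encoding="unicode")

print(build_assertion("https://idp.example.com", "swilliams", "https://sp.example.com"))
```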
Open Authorization (OAuth) is a standard that increases data portability by facilitating the distribution of private user data between disparate IT systems through the exchange of tokens rather than credentials. Published in April 2010 as RFC 5849 and currently at v1.0a (with v2.0 in draft), OAuth is gaining popularity - most notably in 2010 when Twitter adopted it as its default mechanism for integrating with 3rd party applications. Although SAML tokens can be used to exchange user information (as 'attributes' or 'claims') in a trusted manner, the key advantage of OAuth is that it places consent in the hands of the user whom the data describes. For organizations that store and use sensitive user data as part of an existing business service, OAuth can be used to easily and securely allow other consuming IT systems to request and reuse this user data in a consent-driven manner, without the need for expensive application interface development. By increasing the distribution of user data between its IT systems, an organization can increase the richness and personalization of its user-centric systems whilst mitigating or reducing enterprise application integration costs.
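As an illustration of the 'tokens not credentials' idea, the sketch below shows roughly how an OAuth 1.0a client signs a request: the consumer and token secrets never leave the client - only an HMAC-SHA1 signature travels with the request. The consumer key, token, secrets and URL are placeholder values, and a real client would normally lean on a tested OAuth library.

```python
# Illustrative OAuth 1.0a request signing (RFC 5849): the client proves it holds
# the consumer/token secrets via an HMAC-SHA1 signature, without ever sending them.
# Consumer key/secret, token/secret and the URL below are placeholder values.
import base64
import hashlib
import hmac
import time
import uuid
from urllib.parse import quote

def pct(s):
    return quote(str(s), safe="~")  # RFC 3986 percent-encoding, '~' left unescaped

def sign_request(method, url, params, consumer_secret, token_secret):
    # Normalise parameters: percent-encode, sort, join as key=value pairs.
    encoded = sorted((pct(k), pct(v)) for k, v in params.items())
    normalised = "&".join(f"{k}={v}" for k, v in encoded)
    # Signature base string: METHOD & encoded-URL & encoded-parameter-string.
    base_string = "&".join([method.upper(), pct(url), pct(normalised)])
    signing_key = f"{pct(consumer_secret)}&{pct(token_secret)}"
    digest = hmac.new(signing_key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

params = {
    "oauth_consumer_key": "my-consumer-key",   # placeholder
    "oauth_token": "my-access-token",          # placeholder
    "oauth_signature_method": "HMAC-SHA1",
    "oauth_timestamp": str(int(time.time())),
    "oauth_nonce": uuid.uuid4().hex,
    "oauth_version": "1.0",
    "status": "Hello from OAuth",              # an application parameter
}
params["oauth_signature"] = sign_request(
    "POST", "https://api.example.com/1/statuses/update", params,
    consumer_secret="consumer-secret", token_secret="token-secret")
print(params["oauth_signature"])
```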
Authorization policy management across an entire IT estate is traditionally a very resource-intensive task, as any change to a policy requires the modification (and testing) of an entire suite of applications. Externalizing authorization policy definition and decision making, through the use of eXtensible Access Control Markup Language (XACML) definitions, can significantly increase the flexibility of an application portfolio and reduce the cost and effort required to make any future change. First ratified by OASIS in 2003 and currently at v2.0, XACML support can increasingly be found in vendor product portfolios such as IBM's Tivoli Security Policy Manager (TSPM), ForgeRock's OpenFM and Axiomatics' Policy Server. Version 3.0 is currently in draft and will add important new features such as the Multiple Decision Profile, delegated administration and Obligation statements. In a business environment XACML definitions can be used to centrally define the policy governing which user entitlements are required to access a particular resource, thereby providing a single and auditable view of security risk across the entire estate. If a new application resource is deployed or a new security risk is identified, an organization with a separate Policy Decision Point (PDP) would simply need to make the necessary XACML policy changes, which take effect immediately.
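The value of a central decision point is easier to see with a toy example. The sketch below is not XACML syntax - just a hint of the pattern in which an application's Policy Enforcement Point asks a shared Policy Decision Point for a permit/deny verdict based on subject attributes; the attribute names and policy rules are invented for illustration.

```python
# Toy illustration of externalised, attribute-based authorization - the shape of
# what a XACML Policy Decision Point (PDP) does, not the XACML language itself.
# Attribute names and the example policy are invented for illustration.

POLICY = [
    {"resource": "payments/approve",
     "require": {"department": "finance", "clearance": "high"}, "effect": "Permit"},
    {"resource": "payments/view",
     "require": {"department": "finance"}, "effect": "Permit"},
]

def pdp_decide(subject_attributes, resource):
    """Return Permit/Deny for a request, based only on centrally held policy."""
    for rule in POLICY:
        if rule["resource"] == resource and all(
            subject_attributes.get(k) == v for k, v in rule["require"].items()
        ):
            return rule["effect"]
    return "Deny"  # default-deny when no rule applies

# A Policy Enforcement Point (PEP) embedded in an application just asks the PDP:
print(pdp_decide({"department": "finance", "clearance": "high"}, "payments/approve"))  # Permit
print(pdp_decide({"department": "sales"}, "payments/view"))                            # Deny
```

Changing who may approve payments now means editing the central policy, not redeploying every application.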
In summary, significant advances in virtualization, the rise of Cloud services and the raft of new Federated Identity Management related standards have together had an enormous impact on the IT industry. These advances have given rise to a multitude of new opportunities, such as Green IT through data center consolidation, smarter and richer IT services through increased user data portability, and increased business agility through the federation of disparate services. Organizations that do not or cannot adopt standards such as SAML, OAuth and XACML will continue to experience great difficulty when attempting to adapt their legacy systems to an ever-changing IT service landscape.
Labels:
authentication,
authorization,
Cloud,
federation,
OAuth,
SaaS,
saml,
XACML
Tuesday, 29 March 2011
Giving an organization the moon on a stick - is this ever correct?
An organization providing IT solution delivery to a business, be it as an external third party or an internal IT department, is primarily driven by the requirements of its client. Ideally these requirements are generated through a process of business analysis directed towards an overarching vision, such as "how can the cost of identity management be reduced?", "how can the efficiency of service delivery be increased?" and "how do we more closely meet our compliance and auditing needs?". I said 'ideally' as I have come across many supposed 'critical business requirements' in my time that almost certainly came about through a process of 'business analysis by mood'. In a situation like this, where the rationale for a given requirement is difficult if not impossible to justify, what course of action should the conscientious and experienced solution delivery partner take?
Where such requirements increase the complexity of a solution, the minimum that can happen is scope creep, which can result in an increase in project resource and time requirements, thereby driving up the overall project cost and the time required to see a return on investment. At the extreme, a bloated set of requirements can make the project impossible to deliver, due to an inability to hit deadlines and show any real value to key stakeholders. Where the 'additional' requirements introduce extra middleware, processes and customization, the result can be a solution that is difficult to use, maintain and extend. Regardless of their intention and ultimate delivery path, ill-conceived business requirements included within a project can ultimately lead to its untimely end. With this in mind, it would be easy to say that all client requirements that have no purpose or business benefit should be refused or pushed out of scope. In reality, a relationship must exist between the solution delivery party and its client in which the provider is receptive to any and all requirements, regardless of their conception, impact and intention.
So, for the sake of increasing its billable time and perhaps software license sales (I'm excluding internal IT departments here), should a solution provider accept every ad-hoc business requirement and 'back of a post-it' design? The 'right' answer here is of course no, although this can regularly be seen in IT departments across every industry sector. By giving an organization the 'moon on a stick', what ultimately happens is that neither party in the relationship benefits. The client organization ends up with a solution that poorly fits its true requirements whilst also costing more than could ever be justified. Organisations that have been through several projects delivered in this manner commonly adopt a 'rip and replace' strategy to purge themselves of a solution that (in their mind) is not fit for purpose. By not working with its client in a responsible manner, the service provider can eventually lose credibility, a referenceable engagement and any repeat business. Where the service provider is an internal department this can result in the outsourcing of large parts (or all) of a solution delivery team. Where a solution provider has outsourced some or all of the solution delivery tasks to another provider, such as software development, the need for effective and transparent project management is even greater, principally due to the lack of visibility regarding roles and responsibilities within the partnership.
In conclusion, the most effective solution delivery projects will be those where the solution provider is able to listen, understand and respond to all customer requirements, whilst also being responsible and strong enough to raise objections and concerns over requirements that would endanger the delivery process or the deliverable. On the client side it is critical that they understand their own business model, drivers and technology before they start interacting with a solution provider. Any feedback from the provider on the defined list of requirements should be accepted in an open and unbiased manner by the client. Finally, in addition to requirements management, it is of paramount importance that an open and trusted relationship exists between the two parties, as both have goals that they are trying to achieve.
Thursday, 17 February 2011
Adding a slice of reality to user entitlement policy definitions
Role Based Access Control (RBAC) as a means of defining user entitlements is as ubiquitous across the Identity Management industry as Facebook is to the 21st century - there were several key players before it and it isn't always the best tool for the job, but it sure is popular.
Since its very inception and initial rollout, RBAC has forced the implementer to carry out some quite intensive business analysis tasks, which, to name a few, include:
- The need for (sometimes massive amounts of) role 'mining' to understand what business functions currently exist and how these map to a set of logical roles.
- Mapping logical roles to user entitlements within target business applications
- In very large deployments, the need to create role hierarchies to rationalise the overall number of roles
Capturing common user entitlements and exceptions is obviously an extremely useful business analysis task, and one that is required during any Identity Management implementation. A risk that must be mitigated during such an activity is the loss of user entitlement fidelity. By trying to force 'square pegs into round holes', organisations can find that their resultant Identity Management solution is costly to build, difficult to maintain, and inflexible. Primarily this is because the real user entitlements on the ground must be (re)mapped to a set of roles during the initial deployment, and regularly throughout the entire lifecycle of the Identity Management solution. This is a key point that Attribute Based Access Control (ABAC) is able to alleviate through the implementation of standards such as XACML. For those with even a basic RBAC/ABAC knowledge this view is nothing new; however, I think it's worth outlining a real-world use case, and the conclusions that are being drawn on the ground, to back this up.
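As a toy illustration of what that (re)mapping looks like in practice, the sketch below groups users with identical raw entitlement sets into candidate roles; a user whose entitlements don't quite match forces either a new role or an exception. The user names and entitlements are invented, and real role-mining tools are of course far richer.

```python
# Toy illustration of the role 'mining' step: group users who hold identical
# sets of raw entitlements and propose each distinct set as a candidate role.
# User names and entitlements are invented; real mining tools are far richer.
from collections import defaultdict

raw_entitlements = {
    "alice": {"AD:Finance", "SAP:payment_create"},
    "bob":   {"AD:Finance", "SAP:payment_create"},
    "carol": {"AD:Finance", "SAP:payment_create", "SAP:payment_approve"},
    "dave":  {"AD:Engineering", "Git:committer"},
}

candidate_roles = defaultdict(list)
for user, entitlements in raw_entitlements.items():
    candidate_roles[frozenset(entitlements)].append(user)

for i, (entitlements, users) in enumerate(candidate_roles.items(), start=1):
    print(f"Candidate role {i}: {sorted(entitlements)} held by {users}")
# carol's extra SAP:payment_approve entitlement produces a separate, near-duplicate
# candidate role - the loss-of-fidelity / role-explosion trade-off described above.
```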
Before I jump in with the real-world examples, it might be useful to briefly cover XACML and how it relates to user entitlement definition. OASIS (who originally published the XACML specification in 2003) provide the following useful description: "XACML is a general-purpose access control policy language. This means that it provides a syntax (defined in XML) for managing access to resources".
Through XACML definitions, an organisation is able to model user entitlements in the form of roles (to preserve legacy implementations) and/or as direct application-specific attributes. Such functionality within XACML may finally allow organisations and implementers of IdM to break away from an entitlement interpretation stance and start defining user entitlements with never-before-seen expressivity.
Returning to RBAC, a business requirement that I am currently working on with a customer quite plainly shows the significant limitations of RBAC, as opposed to ABAC, in an easy-to-appreciate manner. The implementation is specifically tasked with the definition of Separation of Duties (SoD) policy. Tivoli Identity Manager (TIM), like many of its peers, implements SoD policy using roles, allowing the IdM administrator to define which role combinations are invalid in their organisation. The prerequisite of this task is clearly that an organisation must have already distilled the existing user entitlements found across the business into a set of roles. To attain the level of role granularity required to implement even a simple set of SoD rules for a small set of business applications (COTS and infrastructure such as Active Directory) requires the definition and management of a massive number of subtly different roles. Such an implementation, within even a small to medium sized business, can lead to so-called 'Role Explosion'.
TIM tries to address this deficiency through the use of role hierarchies. However, the approach of chaining roles together, thereby reducing their overall number, does not increase an organisation's ability to freely express its user entitlements. Instead, it could be argued that this approach swaps the problem of role explosion for another - increased role complexity.
A better approach would be to give an organisation the ability to define SoD policies on the actual account entitlements that are under scrutiny, instead of on a set of logical interpretations that do not actually exist anywhere. At Pirean we have worked hard to alleviate this problem for TIM customers by supplementing the existing product with our own Pirean Risk Manager (PRM) solution. PRM, which is directly integrated into TIM and appears as an onscreen menu item, allows IdM administrators to express SoD rules in terms of business application attributes, thereby moving away from an RBAC-centric approach and more towards ABAC. Of course, if an application under SoD policy itself uses RBAC then this can also be supported, as the role (or group) attribute within a business application will be viewed as just another user attribute.
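To illustrate the difference in expressiveness - and to be clear, this is an invented sketch rather than how PRM is actually implemented - an attribute-based SoD rule can be evaluated directly against the entitlements that really exist on a user's accounts:

```python
# Invented sketch of attribute-based Separation of Duties checking. It is not
# the PRM implementation, just an illustration of expressing SoD rules against
# real account entitlements rather than against logical roles.

# Each rule names two application entitlements that must never be held together.
SOD_RULES = [
    {"name": "Raise vs approve payments",
     "first": ("SAP", "payment_create"), "second": ("SAP", "payment_approve")},
    {"name": "Administer AD and audit AD",
     "first": ("ActiveDirectory", "Domain Admins"), "second": ("AuditSystem", "ad_auditor")},
]

def sod_violations(accounts):
    """accounts: mapping of application name -> set of entitlement/group attributes."""
    violations = []
    for rule in SOD_RULES:
        app1, ent1 = rule["first"]
        app2, ent2 = rule["second"]
        if ent1 in accounts.get(app1, set()) and ent2 in accounts.get(app2, set()):
            violations.append(rule["name"])
    return violations

user_accounts = {
    "SAP": {"payment_create", "payment_approve"},
    "ActiveDirectory": {"Domain Users"},
}
print(sod_violations(user_accounts))  # ['Raise vs approve payments']
```

Note that the rules never mention a role; an application's own group membership is simply treated as one more attribute.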
Within the IBM portfolio, ABAC is appearing in products such as IBM Tivoli Security Policy Manager (TSPM). This product is currently more focused on the real-time processing of authorization requests from an access management entity (or Policy Enforcement Point in XACML terminology).
To wrap up, whilst the main Identity Management vendors already have ABAC-compatible products in their portfolios, I believe that over time these capabilities will be brought directly into their Identity Management and Compliance offerings. This will result in the products taking a more ABAC-based approach as their customers look to express their user entitlements with greater clarity and increase the flexibility of their Identity Management processes.
Thursday, 27 January 2011
Achieving Windows Desktop SSO (SPNEGO) across distinct AD domains
I've recently been working on a customer requirement concerning SPNEGO-based authentication for WebSphere Application Server (WAS), specifically for a set of AD domains that are bound together with an 'External Domain Trust'. Based on my findings, and the lengths I had to go to in order to find the relevant information, I thought it would be useful to write it all up in a post.
So, to start with, let me provide some (obfuscated) background on the target AD domains, which are named staff.ouA.com and staff.ouB.org. These two domains are linked using a two-way non-transitive 'External Domain Trust'. The two AD domains were originally completely separate, each serving its own part of the organisation. Over time requirements changed and a unified approach was required. This was the point at which I arrived, as centralised identity management was needed to integrate and manage the patchwork of AD domains hosted by this customer.
The customer requirement was quite understandable and logical. Users from any of the existing AD domains should be able to securely access any secure business web applications by carrying out Windows Desktop SSO (SPNEGO).
The main difficulty I faced was that Microsoft's documentation on the subject of Windows authentication across an External Domain Trust, and perhaps to some degree IBM's also, is quite inconsistent. The fallout of this inaccuracy is that an organisation could believe that it is not possible to carry out SPNEGO authentication for disparate AD domains, and be forced to come up with an unnecessarily complex security architecture.
For those who are less familiar with Active Directory Trusts, Microsoft Technet provides the following useful description, as well as an explanation of the various Trust types:
An Active Directory Trust:
Trust relationships between domains establish a trusted communication path through which a computer in one domain can communicate with a computer in the other domain. Trust relationships allow users in the trusted domain to access resources in the trusting domain.
Trust Types:
- External - A trust between domains within two distinct AD forests
- Forest - A trust between domains in the same forest
- Realm - A trust between a non-Windows Kerberos realm and a Windows Server 2003 domain
- Shortcut - A trust between two domains in the same AD forest, predominantly for the purpose of improving system performance
The key point about the External Domain Trust that exists between my two target AD domains is that (almost) all of the official Microsoft documentation states that Kerberos authentication is not possible, only NTLM (NT LAN Manager). IBM's stance is that WebSphere ND does not support SPNEGO when an External Trust is used, probably because Microsoft has stated that only NTLM tokens can be generated.
Authentication using NTLM tokens is widely regarded as a weak security mechanism, with several well-publicised vulnerabilities that can lead to a complete breach of the credential and therefore of the systems it protects. The severity of a single NTLM breach is so high because its tokens contain a hashed copy of a user's password. Kerberos tokens, in contrast, contain an encrypted ticket which is usable only for a single session.
What I have found, using a couple of hard-to-find Microsoft documents and a Windows lab, is that SPNEGO/Kerberos authentication using WebSphere is indeed possible across an External Domain Trust. The architecture of my environment is illustrated below.
Using the above solution and the below scenarios, I was able to prove that Kerberos authentication is possible in such an environment.
Scenario 1: Single domain Windows SSO
- Log on to the staff.ouA.com domain as user swilliams2, using a standard Windows desktop machine (I used WinXP)
- Open a browser and request the secured target application protected.application.com
- The user is prompted for SPNEGO authentication (a 401 HTTP response is returned by WAS)
- The user contacts AD (actually the Kerberos Key Distribution Center - KDC) for a Service Ticket, referencing the Service Principal Name (SPN) HTTP/protected.application.com
- AD searches its repository for an account that is mapped to the required SPN
- AD finds a matching account
- AD generates a Service Ticket for the user and encrypts this using the account's credential
- The Service Ticket is returned to the user, who in turn sends this onto WAS
- WAS consumes the received encrypted Service Ticket, which has been packaged up, base64 encoded, in the HTTP 'Authorization: Negotiate' header (see the sketch after these steps)
- Using its Kerberos keytab, WAS decrypts the Service Ticket (thus validating it) and extracts the Kerberos Principal name
- The extracted principal name is defined as the user's identity
- WAS searches its repository for an account with a matching principal name (uid/cn)
- A WAS session is created for the user and a set of JSESSIONID (session and load balancing) and LTPA (authentication) cookies are returned
- The user is presented with their target page
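To make the browser/WAS exchange in the steps above a little more concrete, here is a bare-bones sketch (Python standard library only) of the server side of the handshake: return 401 with a 'WWW-Authenticate: Negotiate' header, then receive the base64-encoded SPNEGO token in the Authorization header. Actually validating the token against the keytab and extracting the principal requires a Kerberos/GSS-API library and is only hinted at here; this is not how WAS is implemented, just an illustration of the HTTP mechanics.

```python
# Bare-bones sketch of the SPNEGO challenge/response seen in Scenario 1 above.
# It only demonstrates the HTTP mechanics: the 401 'Negotiate' challenge and the
# base64 SPNEGO token in the Authorization header. Validating the token against
# a keytab requires a Kerberos/GSS-API library and is not shown here.
import base64
from http.server import BaseHTTPRequestHandler, HTTPServer

class SpnegoDemoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        auth = self.headers.get("Authorization", "")
        if not auth.startswith("Negotiate "):
            # Challenge the browser to perform SPNEGO authentication.
            self.send_response(401)
            self.send_header("WWW-Authenticate", "Negotiate")
            self.end_headers()
            return
        token = base64.b64decode(auth[len("Negotiate "):])
        # A real server would now pass 'token' to GSS-API/Kerberos, decrypt it with
        # the HTTP/protected.application.com keytab and extract the principal name.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"Received SPNEGO token of %d bytes\n" % len(token))

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), SpnegoDemoHandler).serve_forever()
```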
Scenario 2: Windows SSO across an External Domain Trust
- Log on to the staff.ouB.org domain as user swilliams4 (the UID could have been the same as in the ouA.com domain) using a standard Windows desktop machine
- ........ and then surprisingly exactly the same as above!
To achieve this there are a few key points to bear in mind when working with either a pre-existing or a newly created set of domains that are linked using an External Domain Trust.
- The required SPN must be mapped to an AD account in every AD domain that is part of the environment requiring SSO
- The passwords of all the service accounts sharing the same SPN must be exactly the same
- The UIDs of the service accounts can be different
The drawback of this solution is that an AD account must be created in every required domain and be mapped to the required SPN(s). In addition, the passwords of all these accounts must be resynchronised whenever they are changed (be that weekly, monthly or annually). To streamline this account management I used IBM Tivoli Identity Manager and its Active Directory adapter. By using this product to create these accounts in the required domains, and then associating them all with a Service entity with password synchronisation enabled, I was able to make sure that the solution did not become 'un-stitched' over time.
A better approach would be to have one AD account defined within a chosen 'primary' AD domain, with the required SPN(s) associated with just that account. From discussing this approach with my Windows AD colleagues it appears that AD definitions called 'Routing Hints' could be defined within the peer (non-primary) domains so that the External Domain Trust is actually used to go and find the required SPN within the 'primary' domain. I will be testing this approach in the coming days, so I'll post an update on the result of this.
To close, the customer I worked with on this requirement has been able to use this approach to define an enterprise-wide SSO solution, which will positively affect the application experience and security controls for every one of their employees. In addition, it will significantly simplify the SSO design for all existing and future applications that the customer deploys. It has since been discovered, however, that this configuration would be unsupported by Microsoft.
Wednesday, 5 January 2011
Improving the audio quality of my portable digital music player
I have owned an Apple iPod Touch (3rd generation) for about a year and a half and have used it like I'm sure a great proportion of the population do, i.e. loaded it with all the music that I owned and have subsequently purchased, regardless of whether or not I intend to actually listen to it. Since catching the audiophile bug I've wondered if and how I could improve the audio playback quality of my current digital music player. This post outlines how I tried to achieve this and some of the interesting observations I made on the way.
So, as mentioned in a previous post I have centralised the storage and structure of my digital music collection within a Network Attached Storage (ReadyNAS) appliance. All of the music has been tagged using MediaMonkey and had album artwork added where missing. My chosen digital audio format is FLAC. This is because it:
- is lossless (the original recording is untouched, but the data is compressed)
- is able to support very high (HD) sampling rates (MP3 files are commonly sampled at 44.1kHz, but original recordings can be made at rates of 48, 96 and even 192kHz, giving you significantly more detail)
- has fantastic software and hardware support (almost!) and is the format of choice for the digital audio authoring and publishing industries
Moving this lossless collection onto my iPod threw up two issues. The most obvious and perhaps understandable issue was size. A 'typical' MP3 track that is a few minutes long, sampled at 44.1kHz, will take up on average around 5MB of disk space. The same track in lossless form at 44.1kHz (not to mention 48 or 96kHz) would take around 25MB. Scaling this up to an entire album means an average album comes in at about 300MB, as opposed to 60MB in MP3 format. If the average music collection contained 100 albums, then this would require perhaps 30GB of your once-limitless disk space. As my iPod Touch only has 32GB of storage (actually 29.1GB usable), which is partially used by the iPod OS and other nice things like Apps, Photos and Videos, I needed to sit down and think about what music I actually wanted to enjoy on the move, and what I wanted to reserve for my home audio system. Although not expected, this wasn't such a bad thing I believe, as it allowed me to better appreciate the music I had and prioritise quality over quantity.
The second, less welcome, issue was specifically caused by my iPod. Apple does not support FLAC as a lossless digital music format, despite its widespread usage. Instead Apple have their own format called ALAC (Apple Lossless Audio Codec). Because FLAC and ALAC are both lossless formats, the data they contain can be converted into either format easily using a number of free third-party tools. To get my nicely tagged and formatted lossless music into iTunes, and therefore onto my iPod, I first needed to convert all my chosen music into ALAC. What would have been really useful is if iTunes supported FLAC to ALAC conversion as an integrated feature.
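For anyone wanting to automate the FLAC to ALAC step, the sketch below (assuming ffmpeg is installed and on the path) walks a folder of FLAC files and writes ALAC copies in .m4a containers that iTunes will import; the source and destination paths are placeholders for my own NAS layout.

```python
# Sketch of a batch FLAC -> ALAC conversion, assuming ffmpeg is installed and on
# the PATH. ALAC output goes into an .m4a container that iTunes will import.
# Source and destination paths are placeholders.
import subprocess
from pathlib import Path

SOURCE = Path("/music/flac")         # placeholder: FLAC library on the NAS
DEST = Path("/music/alac-for-ipod")  # placeholder: converted copies for iTunes

for flac_file in SOURCE.rglob("*.flac"):
    target = DEST / flac_file.relative_to(SOURCE).with_suffix(".m4a")
    target.parent.mkdir(parents=True, exist_ok=True)
    if target.exists():
        continue  # already converted
    # -vn skips embedded artwork streams to keep the example simple;
    # -c:a alac re-encodes the audio losslessly into Apple Lossless.
    subprocess.run(
        ["ffmpeg", "-i", str(flac_file), "-vn", "-c:a", "alac", str(target)],
        check=True,
    )
```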
I don't want you to think that I have any issues with Apple products; I am very happy with my Touch (less so with iTunes on Windows 7, but that's another story), I just think that Apple could make it much easier for its users to work with lossless digital music files if they have them. As a side note, this restriction is not specific to iPods. Before this I had a Creative Nomad Zen Xtra (they really knew how to name a product!), which was in my opinion much more straightforward to use, as it was essentially a portable (40GB) hard disk that could play music. This too could not work with FLAC files and instead expected lossless tracks to be in WAV format.
Once I had jam-packed my iPod with lots of lossless music I turned to the last piece of the 'improving audio quality' challenge - headphone/earphone quality. The earphones that come with any digital music player are almost always of a very low quality. Perhaps this is because large portions of the public have no desire to investigate better audio quality, so manufacturers see an obvious way to cut production costs. Replacing the out-of-the-box earphones with even just a sub-£15 set from manufacturers such as Sennheiser, Creative or Shure (amongst many others) would give a massive improvement in return.
This Christmas I received a pair of Sennheiser CX400-II noise-isolating earphones which I'm just breaking in now. These coupled with my lossless audio tracks have really opened my eyes (or should I say ears) to what is possible and what the original musicians were trying to achieve when they originally recorded their tracks. Pink Floyd's 'Wish you were here' and Miles Davis' 'Kind of Blue' for example sound simply amazing now.
I'm just trying to understand why I didn't go down this path much (much) sooner.
Hope this has been useful. Happy listening.