Monday, October 29, 2012

Detour, Detour

I don't think many people consciously sit down to create an application or environment thinking, "I'm going to create an insecure and potentially costly-to-remediate system today". I'm also confident that few people intentionally put their company's intellectual property or customer data at risk. That's what I'd prefer to think at least.

It's unfortunate that bad habits permeate the intricacies of technology to a point where I doubt anyone can attest to the actual cost of true security remediation. Potential privileged account misuse is a case of "running amok" that has to be addressed in companies, regardless of size or regulatory requirements. How many privileged accounts should there be in an environment? Should there be one super user account that is used across the enterprise? Who should have access to the privileged account information? How often should it be changed? Should it be more than 8 characters? Should the userid and password be prevented from matching? Should there be a use-on-demand policy with approval at a senior level? Would any of these help?

Let's talk about Detour. I got a call a few years ago from a former colleague who wanted to vent. Detour was the name of a service account so ingrained in an environment he worked in that it would, according to the development manager, take thousands of man-hours to remediate the millions of lines of code Detour had been used in. Why would you want to remediate something like that? I guess I shouldn't have asked that question. The response was that the password was also "detour". Evidently every developer who had worked for the company for the previous ten years knew about Detour, detour. Additionally, it was used across an entire suite of applications for the developers' convenience. There was no supporting documentation; there had been no code review. The account had been set up in Active Directory with instructions that the password should never be changed or it would break multiple revenue-generating applications.

The reason for my former colleague's frustration that particular day? An application support guy was troubleshooting an issue and tried to log on as detour, detour. Unfortunately, he kept mistyping detour when he typed in the password. The Active Directory setting for number of mistyped passwords before lockout? 3. The application support guy didn't realize what he had done, but monitoring screens in the Ops area turned red with all of the applications that were suddenly down. The development manager, who also functioned as the application support manager, called screaming bloody murder and threatening someone's job if Ops didn't figure out what was going on and get it corrected ASAP.

While there were a number of issues that needed to be corrected about the above scenario, the aspect that caught my interest the most was that there was a significant single point of failure in a company that appeared to be either undocumented or unrealized. It was obvious I didn't have to start listing the issues with the scenario; my former colleague got it.

  • Same userid used across multiple environments
    • Creating no separation of duty across a suite of products
    • Creating a single point of failure
  • Matching userid and password
    • A 6-character password made up of only lowercase letters can be brute-forced almost instantly
  • Development and application roles being shared
  • Developers, and thereby application support, having access to userid and passwords used in both development and production environments
    • Terminated employees had knowledge of this userid and password combination
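To put a number on that last bullet about weak passwords, here's a back-of-envelope calculation in Python. "detour" is six lowercase letters; the guesses-per-second figure is my own assumption for an offline attack with commodity GPU hardware, not a claim about this company's environment.

```python
# Back-of-envelope: search space for a short, lowercase-only password.
ALPHABET = 26          # lowercase letters only
LENGTH = 6             # "detour" is six characters

combinations = ALPHABET ** LENGTH
print(f"Total combinations: {combinations:,}")   # 308,915,776

# Assumed offline cracking rate; real GPU rigs vary widely.
GUESSES_PER_SECOND = 10_000_000_000
print(f"Worst-case time: {combinations / GUESSES_PER_SECOND:.4f} seconds")
```

Even if my assumed rate is off by a couple of orders of magnitude, a lowercase six-character password falls in well under a minute. Matching the userid just saves the attacker the trouble.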

I'm not a developer, so I can't say whether the development manager was exaggerating about his estimate of thousands of man-hours to remediate the userid and password issue, but I am fairly intelligent. I did take a few programming classes in college before I decided I wasn't interested in developing code. I have been an alpha and beta tester for multiple applications, so I have "some" knowledge of programming. At a minimum, I would like to see a report with the number of times the userid and password appear in the application's code. Then, someone could make an educated decision about the remediation efforts and whether the effort was greater than the potential liability. Perhaps that had already been done in the organization. Perhaps my former colleague was an alarmist who was constantly seeing threats where there were none.
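Producing that occurrence report doesn't require thousands of man-hours; a simple scan does it. Here's a hypothetical sketch in Python that builds a throwaway demo tree so it runs as-is; in practice you'd point it at the real source tree, and "detour" stands in for whatever the hardcoded credential is.

```python
# Hypothetical sketch: count how many files and lines reference a
# hardcoded credential before estimating remediation effort.
from pathlib import Path
import tempfile

CREDENTIAL = "detour"   # stand-in for the real hardcoded value

# Tiny demo tree; replace with Path("/path/to/source") in real use.
root = Path(tempfile.mkdtemp())
(root / "billing.py").write_text('conn = connect("detour", "detour")\n')
(root / "orders.py").write_text('login(user="detour")\nretry("detour")\n')
(root / "reports.py").write_text('print("no credentials here")\n')

total = 0
for path in sorted(root.rglob("*.py")):
    hits = path.read_text().count(CREDENTIAL)
    if hits:
        print(f"{path.name}: {hits} occurrence(s)")
        total += hits
print(f"Total occurrences: {total}")
```

A count like this won't tell you how hard each call site is to change, but it turns "thousands of man-hours" from an assertion into something you can actually scope.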

A few steps that would help prevent this type of event from occurring in environments would include:

  • Service account guidelines including password complexity and lifecycle
  • Separation of duty between development and production environments
  • Separation of duty between development and application support roles
  • Risk documentation and analysis of the potential liability posed by the perceived threat
  • Documented SDLC (Software Development Lifecycle)
  • Peer Reviews
  • Architectural Reviews prior to a system going into production with sign-off and approval by multiple managers (this should include a risk matrix documenting inherent and residual risks)
  • Accountability and consequences for non-compliance (REAL consequences and accountability not management turning a blind eye because someone says it'll take a lot of time)
    • I know red flags are going up for people here. In certain environments, this could be called whistleblowing. Not if the facts are presented without any agenda other than securing the environment; in that case, "it is what it is".
    • There should be assigned roles with responsibilities and timelines with reports to the business and IT executives until such time as the issue has been corrected. This gives the business the opportunity to be aware of the recognized risk.
  • Privileged Use Monitoring (this can get really pricey, so the cost would have to be justified by the potential liability)
  • Maybe a discussion with HR about appropriate behavior for managers with a bad temper.
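On the first bullet, service account guidelines are only useful if the passwords actually meet them. Here's a minimal sketch, assuming a hypothetical guideline of at least 24 characters with mixed character classes; the length and symbol set are my illustrative choices, not a standard.

```python
# Hedged sketch: generate a service-account password that satisfies a
# hypothetical complexity guideline (24+ chars, all four classes).
import secrets
import string

SYMBOLS = "!@#$%^&*"   # illustrative symbol set

def generate_service_password(length: int = 24) -> str:
    alphabet = string.ascii_letters + string.digits + SYMBOLS
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        # Require at least one of each character class before accepting.
        if (any(c.islower() for c in candidate)
                and any(c.isupper() for c in candidate)
                and any(c.isdigit() for c in candidate)
                and any(c in SYMBOLS for c in candidate)):
            return candidate

print(generate_service_password())
```

Note the use of Python's `secrets` module rather than `random`, since the latter is not suitable for generating credentials. A password like this is the opposite of detour, detour: nobody memorizes it, so it has to live in a vault, which is exactly the point.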

While this company may never suffer a breach due to the lack of security surrounding detour, detour, doesn't it speak to an overall attitude regarding ease of support versus providing secure environments?

Monday, October 8, 2012

Do you know where your data is? Who else does?

A few months ago I visited a Chinese restaurant for lunch. I was with a client, and although I don't normally eat at Chinese buffets, that was his pick, so we ate there. Critical information? No, except for one aspect of the visit – we were seated behind a group of employees from a local company who spent the hour talking about their "P drive".

I'm not sure what prompted the discussion, but after sitting behind them for about thirty minutes, I can tell you their company leaves a lot to be desired as far as data governance is concerned. I'll provide a few more details about the mystery company. They are a fairly large and well-known company in Jacksonville. The Jacksonville office is their headquarters. They have offices up and down the east coast but also in other southern states. They are not in a regulated business, but they do business with regulated companies. How do I know all of this? The answer is far easier than you could imagine. The company employees were wearing logoed shirts. Combine the loose lips with poor data governance and it could be a recipe for disaster if anyone sitting around the employees had been a hacker. I can assure you I did not have to go to a lot of effort to hear the conversation. The client who was with me does use that company's services and was horrified.

From the conversation, I gathered that the company's P drive is a dumping ground for anything that an employee wants to share with another employee. NTFS permissions (the access controls on files and directories) were inconsistent, and anyone could create a directory off the P drive. Keep in mind, this was idle conversation between a bunch of guys at lunch. It's entirely possible that what was represented was not completely correct – but the gist of the conversation was that they had, at different times, stumbled across data that should have been considered private employee information and/or corporate intellectual property.

What's wrong with this picture?

A lot of companies choose to use specific network drive letters to help end users remember common repositories. For example, "P" for this mystery company stands for Public. Other common drive letters used are "H" for Home directories, "U" for User directories and, as already stated, "P" for Public directories. Generally, however, that is the end of the rules for data. Without a documented data governance plan, a company can end up with data being stored in shares, directories and email mailboxes that were never intended for such use.

Problems created:

  • Sensitive employee data can be exposed,
  • Corporate strategies or plans can be delivered to the wrong individuals,
  • Intellectual Property such as trademarked material can be viewed,
  • Customer data can be exposed,
  • eDiscovery and litigation efforts can be exponentially prolonged,
  • It would be difficult to create a true business continuity plan without a full system recovery,
  • It would be difficult to document application workflows,
  • It would be impossible to secure all of the critical data


How do you go about creating a successful data governance plan?

  • Define what data can be housed by your corporation,
    • While this may seem like a curious statement, unstructured corporate file servers are ripe for employees to use as storage repositories for music and pictures
  • Define who will own the data,
    • What will be the record of source?
    • Who can access the data?
      • Create a review process to confirm the access accuracy
    • Can the data be copied or shared?
      • If so, who makes that decision?
      • What manner will be used to copy or share the data?
    • The data owners or their designated person should become data governance advocates in order to ensure adherence
  • Define the lifecycle of the data,
    • Confirm the legal requirements for data retention
  • Define the data backup schedule and methodology
    • Confirm that this makes sense for recovery needs
  • Define who makes the governance decisions regarding data access
    • This should NOT be the technology department. The technology department will create the shares and grant the permissions the business requires but should have no part in the decision making process.
  • Educate end users on the criticality of maintaining the data structure once created.


What ELSE should be done in the "mystery environment"?

  • Educate end users on information security to include social engineering concerns
  • Include business leaders and technologists in all discussions regarding data governance
    • Data governance HAS to be a top-down initiative
  • It probably wouldn't hurt to talk to employees about avoiding bringing up specific information about their environment while in public (that's what water coolers are for)