Friday, December 14, 2012

Lessening the impact of a DDOS attack


Bank robbery used to be simple. People in masks walked in with guns, real or fake, and took whatever money was in the local vault. The first warning anyone got that a robbery was underway was when the robbers burst through the doors in their ski or novelty masks. Today’s “robbers” don’t have to walk in the doors to be effective. They can sit comfortably in their living rooms with their feet propped up and, in moments, commit crimes that undermine consumer confidence and a financial institution’s reputation.
From a technologist’s standpoint, the technology behind a DDOS (Distributed Denial of Service) attack is brute force in nature. The attack targets internet-facing servers, which can accept only a finite number of connections and are overwhelmed when flooded with more than they can handle; it is basic and easy to perform.

There are steps you can proactively take to lessen the impact of a potential attack. These require:

Planning

  • Banks with established incident response teams are better positioned to control the impact of a denial of service attack.
  • Teams should rehearse an attack and the planned response.
  • Teams should have assigned roles and responsibilities, with multiple methods of contact.
  • If a bank is a consistent target, cyber insurance should be considered.

Communication

  • Banks need to decide who will be the liaison with the FBI Cyber Unit, Homeland Security and any other security agencies that manage cyber incidents.
  • A phone tree should be created with security, legal, compliance, marketing or Public Relations and technology individuals who have actionable roles.
  • A plan for communicating with customers through a channel other than the public call center numbers should be established.

Active monitoring

  • Internet providers have tools that monitor traffic 24/7. Servers have tools that report the number of connections, whether successful, waiting or failed. Metrics should be readily available that reflect normal traffic for the time of the month and day. There may be occasional outliers, but for the most part traffic is somewhat predictable. A rise in connections could be an attack beginning. When IT staff see this type of increase in traffic, it should be investigated and preventative measures taken to keep an attack from completely shutting down the bank’s websites. (A minimal monitoring sketch follows this list.)
  • If a bank does not have this type of active monitoring, it should consider using a third party to either a) host its web servers or b) implement monitoring on its behalf.
  • Monitoring the web server interfaces will again offer insight into predictable traffic patterns. Outliers should be considered potential signs of an attack.
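As a rough illustration of what server-level monitoring can look like, here is a minimal sketch in Python, assuming a Linux web server where the ss utility is available. The thresholds are placeholders; real baselines should come from your own traffic metrics, and the alert should feed a paging or ticketing system rather than print to a console.

    import subprocess
    from collections import Counter

    # Hypothetical thresholds; in practice these should come from your own
    # baseline metrics for the time of day and day of the month.
    THRESHOLDS = {"ESTAB": 5000, "SYN-RECV": 500}

    def connection_counts() -> Counter:
        """Count TCP connections by state using `ss -tan` (Linux)."""
        out = subprocess.run(["ss", "-tan"], capture_output=True, text=True, check=True)
        lines = out.stdout.splitlines()[1:]          # skip the header row
        return Counter(line.split()[0] for line in lines if line.strip())

    def check_for_anomaly(counts: Counter) -> list:
        """Return warnings for connection states that exceed their baseline."""
        return [
            f"{state}: {counts[state]} connections (baseline limit {limit})"
            for state, limit in THRESHOLDS.items()
            if counts[state] > limit
        ]

    if __name__ == "__main__":
        for warning in check_for_anomaly(connection_counts()):
            print("POSSIBLE ATTACK:", warning)       # wire this into paging/alerting

Run on a schedule, a script like this gives early warning that connection counts are drifting away from the norm and deserve investigation.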

Training

  • Providing employees with training on how to detect an attack will go a long way toward lessening the potential impact.
  • Providing customers with training on ways to recognize potential malware that could launch an attack will also help.
  • Create two-factor authentication requirements and train customers on the need to use separate passwords for online banking and for their other accounts. (A minimal one-time-password sketch follows this list.)
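For readers unfamiliar with how a common second factor works under the hood, below is a minimal time-based one-time password (TOTP, RFC 6238) sketch in Python using only the standard library. The shared secret shown is a placeholder, and a real deployment would rely on a vetted authentication product rather than hand-rolled code; the sketch is only meant to show how the code is derived from a shared secret plus the current time.

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32, interval=30, digits=6):
        """Compute an RFC 6238 time-based one-time password."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // interval                 # 30-second time step
        msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                             # dynamic truncation (RFC 4226)
        code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    if __name__ == "__main__":
        # Placeholder shared secret for illustration only; real secrets are
        # provisioned per user by the authentication system.
        print("Current one-time code:", totp("JBSWY3DPEHPK3PXP"))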

Successful patching program

  • Although a bank can’t do much about zero-day exploits that security vendors have not yet identified, a number of institutions are lax in their patching processes. Windows servers are no longer the lone targets. Teams can underestimate the hypervisor as a target, and with many institutions using virtual environments to lessen physical server overhead, an unpatched hypervisor is a potential gold mine for Trojans and malware.

If a bank is the target of a DDOS attack, chances are there will be some impact. The steps above are designed to lessen that impact.

Friday, December 7, 2012

Ode to Doug Woods


 
Over the years, I’ve had some interesting work experiences. You can’t work for thirty years without having some great experiences and some not so great. OR, you could do as I have done and wonder: is it the situation or is it me? My preference is to think it’s the situation. I say that rather tongue-in-cheek, as people who know me would understand.

I met Doug Woods after I moved to Jacksonville. I was working as a consultant on an Active Directory project that resulted from a less than glorious audit finding. Doug’s career had been spent on the development side of the banking and mortgage industry. As a consultant, I was only exposed to him in progress meetings.

My first observations were that Doug was soft-spoken and exuded authority. I later found that he was a seasoned professional who asked intelligent questions and quickly got to the point without a lot of rhetoric. Little did I know at the time, there were a lot of changes going on in the department. At the end of the Active Directory project, I would go from being a consultant to being the Director of Operations Infrastructure. That’s when I learned about the man, Doug Woods.

Doug was a cowboy at heart. I don’t just mean someone who rode a horse and wore a cowboy hat and boots. Doug did do those things but he reminded me of the cowboy heroes that could be counted on to always try to do the right thing. Doug grew up in Oklahoma so I am guessing hard work was in his backbone; it definitely seemed to be. I’m not going to try to make Doug out to be an all-seeing, all-knowing super-hero. He was not a saint and he had his faults. If I tried to state otherwise, he would smack me on the back of the head, much like Gibbs in NCIS. I’m sure he could do it, all the way from heaven.

What Doug did do was allow people to work within their strengths. He allowed for the fact that no one person can know everything and not make mistakes. He had a phrase and a tone that I remember to this day, “That’s not good”. While he said that in an even tone, you could feel the disappointment wash over you. It’s not that Doug ever withheld direction from you. He expected you to put forth your best effort and if he suspected you had done less than that, you knew – he knew. I’ll never forget getting a call from him on my way to the office one morning and an invitation to meet him at Starbucks. I’m certain Starbucks lost money when Doug passed away. I didn’t know at the time that a morning invitation to meet at Starbucks was his way of having a conversation that didn’t place you in the formality of being in the CIO’s office. I appreciated his grace and thoughtfulness. I had many Starbucks meetings with Doug over the years we worked together. Some were to discuss problems in the department, some to talk about strategy and focus. Those meetings were when personal details came to light and I learned to appreciate the man Doug Woods was.

At some point in his life, Doug had the wisdom to learn patience. I suspect patience was not in his initial makeup. I know it certainly was not in mine. I had and still have an impatience for people not putting forth their best effort. Doug helped me temper that impatience, though, and become more understanding. Doug and I shared an empathy for people. We shared a belief that people should be truthful in their dealings and that a handshake or an agreement was golden. I’ll admit Doug and I both experienced disappointments when dealings didn’t turn out that way. Doug had been in the business world long enough to believe you hold people, whether they be employees, peers or vendors, to certain standards. He wasn’t blind to people’s shortcomings but was tolerant and gave people the opportunity to make amends. He treated vendors like partners but knew how to get the best possible deal. He was always willing to be a reference for vendors and gave a lot of them opportunities to grow because of his support.

Doug was a gentleman who had a respect for women that I appreciated. Women in technology are a minority and having worked in technology for over twenty-five years, I had been met with prejudice and discrimination on many occasions. Doug asked my opinion and for my expertise in infrastructure many times and appreciated my willingness to tell him when something was outside of my area of expertise.

An open-minded manager, Doug encouraged open conflict in an effort not to stymie the creative process, but also knew when to put the brakes on to keep the discussions from becoming personal. At times, he would cut through the postulating and territorial behavior to find common ground. His expectation was that everyone would work together to find the best possible solution for the business. If anything was his undoing, it was his underestimating the ferocity of people stuck in their ways.

It’s been three years since Doug Woods passed away from cancer. He fought his cancer with a quiet strength that spoke of the way he lived. He is sorely missed.

Thursday, November 29, 2012

It’s Q4, is your house in order?

4th Quarter always seems to be the busiest quarter of the year; add in elections and you have a quarter fraught with disruptions. For CIOs and IT Directors, it's the inevitable question – "did my team contribute to the success of the company this year?" There should be metrics readily available to answer that question. Hopefully, the question isn't, "Did my team's dysfunctions take away from the stability of the company this year?" If so, there are bigger issues that probably can't be resolved by (successful) strategy meetings with the business before the end of the year.

I'm sure there are some who are thinking, it's December 1 – what can you possibly accomplish in 30 days? Actually, plan on 19 work days. Many of those days will be spent by IT employees catching up on vacation time they couldn't take at any other point in the year. But let's take a stab at making that timeline productive. Chances are good that more of the business is taking vacation too, so, overall, the office should be quieter than the norm.

Day 19 – Pull in your management team or leads and review with them what the purpose of the strategy meetings is going to be – determine how to provide excellent IT services in a manner that supports the business.

Days 18-14 – Prep for meetings with the business

  • Review year to date outages
    • Do you have metrics necessary to segregate the outages by function and reason?
      • If not, you need to create an independent Incident Management team
    • Have the issues that caused the outages been worked through?
      • If not, you need to establish a stronger root cause analysis protocol and create an independent Incident Management team to manage those type of incidents
    • Have either quick-fixes or permanent solutions been implemented?
      • If not, you need to create an independent Incident Management team to manage that process

Note the recurring theme. If sufficient focus is not being given to understanding, shortening and preventing outages, you have zero credibility with the business. That is not a good place for a CIO or IT Director to be.

  • Review incident and problem ticket totals
    • Do you have the metrics necessary to segregate the incidents by business unit, infrastructure component and application?
    • Are issues being resolved within the established Service Level Agreements (SLAs)?
    • Are customers complaining?
    • Does someone in IT have regular meetings with the business to go over problem tickets? If not, why not?
      • An IT team that has regular meetings with the business to HEAR their issues is an informed team.
        • The takeaways from those meetings should be ACTIONABLE. Otherwise, the business will become impatient. The purpose of the meetings should be clearly understood – you tell us what's wrong, we'll present you with solutions.
        • If the complaints are too vague, meet with management to get to the bottom of them.
      • The business needs to feel that they are heard or they will stop listening to the IT team and go elsewhere.
  • Review IT Spend
    • Where did you spend money this year?
      • This should be fresh in everyone's minds as the 2013 budget planning sessions should have been completed just months before.
      • Money follows problems so if the money you are spending or planning to spend is NOT being used to a) understand, shorten and prevent outages and b) correct problem and incident ticket issues – you may have a disconnect that needs to be resolved.
      • Are you spending sufficiently to hire and train IT staff? Are you offering computer-based training (CBT) and online courses? Are you ensuring they have sufficient time within the workday to keep up with evolving technology?
      • Are you providing your IT team the tools they need to efficiently do their jobs in a timely fashion?
    • Are you planning more spend to support or grow your business?
      • Does that coincide with the business leader goals?


 

Now that you have this information at hand, you can begin strategy meetings with the business. Better late than not at all. The best scenario here is for you to already have regular strategy meetings with the business and you can simply review where you and they are and what you can do to make the coming year better for both teams.

Days 13, 11, 9, 7 – Meet with the business

Days 12, 10, 8 and 6 – Meet as a team to review the information provided by the business and brainstorm as to resolutions. Include as many of your IT team as you can. They need to feel engaged and part of the solutions. While it may slow the process a bit, you need to embrace the craziness that is your team. The brightest IT people are not always managers or leads. Make the situation work for your whole team. If staffing is an issue, rotate in and out the IT team members that can attend.

[Separately, the CIO or IT Director should provide feedback to the business on what was heard. This may or may not match up with what the business meant. BE SURE.]

  • No idea should be thrown off the table
  • All ideas should be assigned to someone other than the person proposing them to evaluate the pros and cons
  • This should be a collaborative effort, but because the person proposing an idea has skin in the game, you want the evaluation to be fair.
  • Part of this process should be comparing the results of your prep meetings with the responses from the business and where each business issue fits.
    • In theory, there should be a direct correlation. If there is not, investigate. It is your job to support the business.
    • You won't be effective in the business' eyes if your goals don't coincide with and support the business focuses.
  • Invite trusted vendor partners to review your feedback. Ask what they are seeing in the industry.
  • Meet with peers to ask what they are doing in problem areas.


     

Day 5 – Meet with Managers and Leads again to review progress, status and plans. Now is the time to trim out the proposed solutions that are a) too costly or b) too far removed from your core competencies and environment.

Day 4 – Prioritize your efforts and narrow the scope. If you don't narrow the scope you'll be playing darts with dull points because nothing will actually get accomplished.

Day 3 – Communicate with the business what your plans and takeaways are. Give them proposed timelines, letting them know that the dates could shift.

Day 2 – Call today IT Appreciation Day. Make it an annual event. Invite everyone to nominate someone for appreciation. Invite vendor partners to donate giveaways.

Day 1 – Actually appreciate everyone. IT teams love food. Give them silly awards that they can keep on their desks. Have drawings for the partner giveaways. Give away tickets to the local NFL team's game or the Christmas concert being held in town or "something".

I don't honestly think IT Teams are appreciated enough because no one really understands what they do. So, to all the IT Teams out there, good job.


 

Friday, November 16, 2012

Access Review in light of the Petraeus Scandal

Although I focus on technology risk discussions for the most part, and prefer to avoid politics in a public forum, the discussion about security clearance and access to confidential documents brings to light another aspect of risk management that I believe is highly relevant, even more so with the Jill Kelley involvement. Paula Broadwell and Jill Kelley are two civilians who were evidently granted security clearance based upon the access they wanted to environments and individuals that average citizens don't have. I don't intend to act as a moral compass for either General Petraeus or General Scott's alleged actions. This is simply about access rights.

Let's take Paula Broadwell first.

  • I can understand Broadwell's access being granted to talk to Petraeus based upon their meeting and his feeling comfortable with her as a potential biographer.
  • I can understand Broadwell being allowed to speak with Petraeus' colleagues and staff.
  • I can understand Broadwell being granted a basic visitor badge to public areas within the buildings and locations Petraeus worked in (give her access to the bathroom, the kitchen and the vending machines IF she didn't have to pass through any potentially secure areas to get there).

Questions I have:

  • Was Broadwell provided with any level of security training prior to being granted her access?
  • Was her role defined and boundaries discussed?
  • Was her access reviewed by an independent source with no skin in the game?

Now, Jill Kelley.

  • Mrs. Kelley is a civilian who, because of her social standing, was given the title of "honorary consul general". I don't know about you, but that title is impressive. Take away the "honorary" and I'm really impressed.
  • The State Department and the Department of Defense stated that Mrs. Kelley was a volunteer.
  • As a social liaison to MacDill Air Force Base, Mrs. Kelley had access to a number of individuals she otherwise would never have met, but she had no real paid responsibilities even though she worked with South Korea.


     

Questions I have:

  • Can any volunteer walk into Central Command at will?
  • Were there limitations on where Mrs. Kelley could enter?
  • Was there any type of review as to Mrs. Kelley's creditworthiness to hold that type of clearance?
  • Was Mrs. Kelley's access fitting for her everyday role? Was it on the same level as a janitor's, a cafeteria worker's or that of the person who takes care of the plants?
  • Why did she have any expectation of protection for her role when she called 911 or sent emails to the Mayor of Tampa?

It would seem from the outside that both Mrs. Broadwell and Mrs. Kelley overstepped their boundaries and were allowed to do so because of their connections to senior military and government officials. From what the media has published, it would seem that both women had extraordinary access to resources the average person could only dream of.

I recognize some of these questions may appear naïve. Petraeus was, after all, the head of the CIA. What is disconcerting, though, is the notion that there are people who are above scrutiny. If our national security is important to us, then no one person and no one person's access should be considered above question or reproach. At a minimum, those with security clearance should be reviewed and approved based upon specific criteria.

I'm sure a lot more will come out about the scandal but at the root of it all, people were given access to places and persons that could have endangered the national security of our country. But is it possible that it boils down to access control and review?


 

Monday, November 12, 2012

Is your IT team the weak link?

A recent article in "Bank Info Security", http://www.bankinfosecurity.com/fraudsters-target-bank-employees-a-5269?rf=2012-11-09-eb&elq=7cc1647406704dd7bc60a34c9d54e8b0&elqCampaignId=5063, tells of a breach disclosing hundreds of thousands of customers' records. Experian, the credit reporting group, revealed that the breach was due to lax security in a credit union's IT department. This has to be especially vexing to the customers who now have to monitor their credit and lose sleep over the potential impact to their finances, present and future.

In IT, there has to be a balance between the effort required to do the job and the safeguards that discourage data loss. Where that balance sits depends upon the company's potential liability for a data loss.

While it's difficult to ensure a breach will never occur, there are some fundamental safeguards an IT department can take that will create a layered security approach and deter a breach from having a payoff.

  • Create, follow and periodically test adherence to policies and procedures that support an established risk appetite
    • Separation of Duty
    • Implement Least Privilege Principle
      • Establish a reporting system for privileged account use and ensure reviews are performed on a regular basis (a minimal reporting sketch follows this list)
      • Create multiple logins for system administrators with access to vulnerable environments
      • Require management approval for privileged account creation
      • Work with system vendors to establish granular permission requirements
      • Where possible, limit the scope for system accounts
  • Beginning with development, consistently follow a documented and socialized SDLC (Software Development Lifecycle) – including using either masked data or non-customer data for testing environments
  • Establish periodic security and risk training for IT employees
  • Establish architectural guidelines with management sign-off on any system changes that modify security parameters
    • Include a review by selected Security personnel
    • Include a sign-off from the business to ensure they are aware of security changes to applications
  • Use Role-Based Access Controls
  • Limit data transmissions from end user subnets
  • Secure all data transmissions containing customer data
  • Limit and monitor physical access to sensitive systems
  • Implement an Incident Response Team to ensure a consistent response in the event of a breach.
  • Implement a strong problem management/resolution process. Although this is not specifically security-related, it supports a consistent business approach to issues.
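The reporting system for privileged account use mentioned above does not have to start as an expensive product. Below is a hypothetical sketch in Python: it assumes a CSV export of authentication events with invented column names ('timestamp', 'account', 'host', 'ticket') and simply flags privileged logons that reference no approved change or incident ticket. The account names, file name and ticket format are all placeholders for illustration.

    import csv

    # Hypothetical privileged account names; pull the real list from your directory.
    PRIVILEGED_ACCOUNTS = {"domain-admin-01", "svc-core-banking"}

    def flag_unapproved_use(auth_log_csv, approved_tickets):
        """Flag logons by privileged accounts that reference no approved ticket.

        Assumes a CSV export with 'timestamp', 'account', 'host' and 'ticket'
        columns; adjust the names to whatever your logging tool actually emits.
        """
        findings = []
        with open(auth_log_csv, newline="") as handle:
            for row in csv.DictReader(handle):
                if row["account"] in PRIVILEGED_ACCOUNTS and row.get("ticket") not in approved_tickets:
                    findings.append(row)
        return findings

    if __name__ == "__main__":
        for event in flag_unapproved_use("auth_events.csv", approved_tickets={"CHG-1042"}):
            print("Review:", event["timestamp"], event["account"], event["host"])

Even a simple report like this, reviewed regularly by someone outside the administration team, goes a long way toward satisfying the review requirement.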

For the most part, none of these are expensive requirements. They do require a mature process attitude but there are a lot of positive benefits that come from this attitude including improved system availability, less overtime for IT staff and most definitely an improved technology risk footprint.

These steps will complement and support a rigorous risk attitude in an environment where data loss could be costly. At a minimum, they instill a disciplined approach to managing the IT environment. You never go wrong by emphasizing this type of approach in IT.

Monday, October 29, 2012

Detour, Detour

I don't think many people consciously sit down to create an application or environment thinking, "I'm going to create an insecure and potentially costly-to-remediate system today". I'm also confident that few people intentionally put their company's intellectual property or customer data at risk. That's what I'd prefer to think at least.

It's unfortunate that bad habits permeate the intricacies of technology to the point where I doubt anyone can attest to the actual cost of true security remediation. Potential privileged account misuse is a situation of "running amok" that has to be addressed in companies, regardless of size or regulatory requirements. How many privileged accounts should there be in an environment? Should there be one super user account that is used across the enterprise? Who should have access to the privileged account information? How often should it be changed? Should it be more than 8 characters? Should the userid and password be prevented from matching? Should there be a use-on-demand policy with approval at a senior level? Would any of these help?

Let's talk about Detour. I got a call a few years ago from a former colleague who wanted to vent. Detour was the name of a service account so ingrained in an environment he worked in that it would, according to the development manager, take thousands of man hours to remediate the millions of lines of code Detour had been used in. Why would you want to remediate something like that? I guess I shouldn't have asked that question. The response was that the password was also "detour". Evidently every developer who had worked for the company for the previous ten years knew about Detour, detour. Additionally, it was used across an entire suite of applications for the developers' convenience. There was no supporting documentation; there had been no code review. The account had been set up in Active Directory with instructions that the password should never be changed or it would break multiple revenue-generating applications.

The reason for my former colleague's frustration that particular day? An application support guy was troubleshooting an issue and tried to log on as detour, detour. Unfortunately, he kept mistyping detour when he typed in the password. The Active Directory setting for the number of mistyped passwords before lock-out? 3. The application support guy didn't realize what he had done, but monitoring screens in the Ops area turned red with all of the applications that were suddenly down. The development manager, who also functioned as the application support manager, called screaming bloody murder and threatening someone's job if Ops didn't figure out what was going on and get it corrected ASAP.

While there were a number of issues that needed to be corrected in the above scenario, the aspect that caught my interest the most was that there was a significant single point of failure in the company that appeared to be either undocumented or unrealized. It was obvious I didn't have to start naming the issues with the scenario; my former colleague got it.

  • Same userid used across multiple environments
    • Creating no separation of duty across a suite of products
    • Creating a single point of failure
  • Matching userid and password
    • A 5-character password made up of only letters can be brute-forced almost instantly (see the arithmetic sketch after this list)
  • Development and application roles being shared
  • Developers, and thereby application support, having access to userid and passwords used in both development and production environments
    • Terminated employees had knowledge of this userid and password combination
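To put some numbers behind the claim about five-letter passwords, here is a rough arithmetic sketch. The guess rate is an assumption for illustration only; real offline cracking rigs can be considerably faster.

    # Rough keyspace arithmetic for a 5-character, letters-only password.
    lowercase_only = 26 ** 5      # 11,881,376 possible passwords
    mixed_case = 52 ** 5          # 380,204,032 possible passwords

    # Assumed offline guessing rate; modern GPU crackers can exceed this by
    # orders of magnitude, so treat it as a conservative placeholder.
    guesses_per_second = 1_000_000_000

    for label, keyspace in (("lowercase only", lowercase_only), ("mixed case", mixed_case)):
        seconds = keyspace / guesses_per_second
        print(f"{label}: {keyspace:,} guesses, ~{seconds * 1000:.0f} ms to exhaust")

Either way, the entire keyspace is exhausted in a fraction of a second, which is why a matching five-letter userid and password combination offers essentially no protection.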


       

I'm not a developer so I can't say that the development manager was exaggerating about his estimate of thousands of man hours to remediate the userid and password issue but I am fairly intelligent. I did take a few programming classes in college before I decided I wasn't interested in developing code. I have been an alpha and beta tester for multiple applications so have "some" knowledge of programming. At a minimum, I would like to see a report with the number of times the userid and password appear in the application's code. Then, someone could make an educated decision about the remediation efforts and whether the effort was greater than the potential liability. Perhaps that had already been done in the organization. Perhaps my former colleague was an alarmist who was constantly seeing threats where there were none.


 

A few steps that would help prevent this type of event from occurring include:

  • Service account guidelines including password complexity and lifecycle
  • Separation of duty between development and production environments
  • Separation of duty between development and application support roles
  • Risk documentation and analysis of the potential liability and likelihood of the perceived threat
  • Documented SDLC (Software Development Lifecycle)
  • Peer Reviews
  • Architectural Reviews prior to a system going into production with sign-off and approval by multiple managers (this should include a risk matrix documenting inherent and residual risks)
  • Accountability and consequences for non-compliance (REAL consequences and accountability not management turning a blind eye because someone says it'll take a lot of time)
    • I know red flags are going up for people here. In certain environments, this could be called whistle blowing. NOT if the facts are presented without an agenda other than securing the environment. If so, "it is what it is".
    • There should be assigned roles with responsibilities and timelines with reports to the business and IT executives until such time as the issue has been corrected. This gives the business the opportunity to be aware of the recognized risk.
  • Privileged Use Monitoring (this can get really pricy so the cost would have to be justified by the potential liability)
  • Maybe, a discussion with HR about appropriate behavior for managers with a bad temper.


     

While this company may never suffer a breach due to the lack of security surrounding detour, detour, doesn't it speak to an overall attitude regarding ease of support versus providing secure environments?

Monday, October 8, 2012

Do you know where your data is? Who else does?

A few months ago I visited a Chinese Restaurant for lunch. I was with a client and although I don't normally eat at Chinese Buffets, that was his pick, so we ate there. Critical information? No, except for one aspect of the visit – we were seated behind a group of employees from a local company who spent the hour talking about their "P drive".

I'm not sure what prompted the discussion, but after sitting behind them for about thirty minutes, I can tell you their company leaves a lot to be desired as far as data governance is concerned. I'll provide a few more details about the mystery company. They are a fairly large and well-known company in Jacksonville. The Jacksonville office is their headquarters. They have offices up and down the east coast and in other southern states. They are not in a regulated business, but they do business with regulated companies. How do I know all of this? The answer is far easier than you could imagine. The company employees were wearing logoed shirts. Combine the loose lips with poor data governance and it could be a recipe for disaster if anyone sitting near the employees had been a hacker. I can assure you I did not have to go to a lot of effort to hear the conversation. The client who was with me does use that company's services and was horrified.

From the conversation, I gathered that the company's P drive is a dumping ground for anything that an employee wants to share with another employee. NTFS permissions (which control file and directory security) were inconsistent, and anyone could create a directory off the P drive. Keep in mind, this was idle conversation between a bunch of guys at lunch. It's entirely possible that what was represented was not completely correct – but the gist of the conversation was that they had, at different times, stumbled across data that should have been considered private employee information and/or corporate intellectual property.

What's wrong with this picture?

A lot of companies choose to use specific network drive letters to help end users remember common repositories. For example, "P" for this mystery company stands for Public. Other common drive letters are "H" for Home directories, "U" for User directories and, as already stated, "P" for Public directories. Generally, however, that is the end of the rules for data. Without a documented data governance plan, a company can end up with data being stored in shares, directories and email mailboxes that were never intended for such use.
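A practical first step toward reining in a dumping-ground share is simply knowing what is sitting on it. The sketch below is a hypothetical inventory pass in Python; the drive path and the list of "sensitive looking" extensions are placeholders, and real classification needs content inspection and the data owners' judgment, not just file names.

    from pathlib import Path

    # Hypothetical "sensitive looking" extensions; real classification needs
    # content inspection and the data owners' input, not just file names.
    WATCH_EXTENSIONS = {".xlsx", ".csv", ".docx", ".pdf", ".pst"}

    def inventory_share(root):
        """Count files by extension under a shared drive to size the cleanup effort."""
        counts = {}
        for path in Path(root).rglob("*"):
            if path.is_file():
                ext = path.suffix.lower()
                counts[ext] = counts.get(ext, 0) + 1
        return counts

    if __name__ == "__main__":
        counts = inventory_share("P:/")            # placeholder path to the public share
        for ext in sorted(WATCH_EXTENSIONS):
            if ext in counts:
                print(f"{ext}: {counts[ext]} files to review with the data owners")

The point is not the script itself but the conversation it starts with the data owners about what actually belongs on a public share.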

Problems created:

  • Sensitive employee data can be exposed,
  • Corporate strategies or plans can be delivered to the wrong individuals,
  • Intellectual Property such as trademarked material can be viewed,
  • Customer data can be exposed,
  • eDiscovery and litigation efforts can be exponentially prolonged,
  • It would be difficult to create a true business continuity plan without a full system recovery,
  • It would be difficult to document application workflows,
  • It would be impossible to secure all of the critical data


     

How do you go about creating a successful data governance plan?

  • Define what data can be housed by your corporation,
    • While this may seem like a curious statement, unstructured corporate file servers are ripe for employees to use as storage repositories for music and pictures
  • Define who will own the data,
    • What will be the source of record?
    • Who can access the data?
      • Create a review process to confirm the access accuracy
    • Can the data be copied or shared?
      • If so, who makes that decision?
      • What manner will be used to copy or share the data?
    • The data owners or their designated person should become data governance advocates in order to ensure adherence
  • Define the lifecycle of the data,
    • Confirm the legal requirements for data retention
  • Define the data backup schedule and methodology
    • Confirm that this makes sense for recovery needs
  • Define who makes the governance decisions regarding data access
    • This should NOT be the technology department. The technology department will create the shares and grant the permissions the business requires but should have no part in the decision making process.
  • Educate end users on the criticality of maintaining the data structure once created.


 

What ELSE should be done in the "mystery environment"?

  • Educate end users on information security to include social engineering concerns
  • Include business leaders and technologists in all discussions regarding data governance
    • Data governance HAS to be a top-down initiative
  • It probably wouldn't hurt to talk to employees about avoiding bringing up specific information about their environment while in public (that's what water coolers are for)


Monday, September 17, 2012

Bring Your Own APPROVED Device

BYOD – how to manage the flood without getting swept under by the wave.


 

According to Gartner, 48% of employees choose smartphones without regard for IT support. It's time for security and risk professionals to rethink their approach to enterprise mobility. BYOD HAS to become BYOAD or corporate IT departments will be ceding control and security to hackers. It's possible personal devices in the workplace could become a thing of the past, but not until the new smartphone love affair cools down. There are a lot of legal questions that arise when a company allows employee property to be used at the office.

  • How is the employee reimbursed for the company-used portion of his service?
  • Who handles replacement if the device breaks?
  • How is personal information kept separate from corporate? Is it kept separate?
  • How does the corporation handle device wipes in the event the employee leaves the company?
  • How would the company handle e-discovery?
  • Are texts considered business property?


 

All of these questions and more should be discussed and policies written to support the company's stance. Otherwise, these unknowns could bite an IT group where it hurts the most – credibility and security.


 

MDM – Mobile Device Management – is the latest case of IT departments trying to catch the horse after it has run out of the barn. Frequently, the business is out of the gate, leaving IT in its dust or in hard pursuit, trying to get ahead of the race. (I promise that's the last of the analogies.)


 

It's a fine line that IT departments walk as far as locking down desktops, browsers and devices. Who can remember when desktops were locked down and suddenly people couldn't play Solitaire at lunchtime? In their minds, it wasn't because managers had requested it. "IT" DID IT. In order to be as successful as possible:


 

  • First- understand what the business wants/needs the devices for. Is it for convenience? The cool factor? Ease of doing business?
  • Second – research the market to see what's currently available and what vendors are advertising as their next generation release. Without a Ouija board, that's the best you can do.
  • Third – research MDM vendors to identify which fits your business model. Which companies focus on being the market leader in releasing gadgets versus who focuses on securing and segregating personal data from business? Ask the question, what is your company's risk footprint? How will Mobile Device support impact your IT department? Is containerization critical to you?
  • Fourth – perform a pilot program. Document details such as ease of administration, configuration and use. Include a use scenario for performing remote wipes.


 

Gartner published its Magic Quadrant for Mobile Device Management Software in June 2012, reviewing approximately 20 different vendors offering MDM solutions. In the Leaders quadrant you have the names you would expect to see: Fiberlink, Good Technology and AirWatch. All of these vendors market to large enterprise environments.

  • AirWatch has a strong focus on security and can easily scale. It has a strong administrative interface, according to Gartner.
  • Good Technology (the one I am most familiar with) has strong security capabilities with multi-factor authentication. It comes at a high cost per client because its mobile management component is part of an Enterprise package. Good does not support BlackBerry devices, which may or may not be meaningful to your environment.
  • Fiberlink's product (MaaS360) does not support an in-house solution. They have strong feedback on integration with cloud email services. The only negative that would discourage some shops is that the management approach is device-centric versus user-centric, which means that IT support requirements would be greater than with user-centric products.


     

In the Challengers quadrant, you find SAP and Symantec, neither of which has a strong focus on mobile device support.


 

In the Niche Players quadrant, you have a number of players including McAfee, Trend Micro, LanDesk and Amtel. Of these, McAfee and Trend Micro have strong reputations for security focus. As far as support is concerned, I've used McAfee, Trend Micro and LanDesk and have high regard for their support. For MDM, however, I don't know that I would recommend any of these products. LanDesk's interface is complicated and not user-friendly. McAfee hasn't done much to make its MDM product a priority; rather, it is an offering bundled with other products. Trend Micro is focusing on its current customer base and integrating MDM with the rest of its bundle. While this may not be a relevant point if you are already a Trend Micro shop, it's a consideration.


 

In the Visionaries quadrant, you have IBM and BoxTone. IBM is not a mobile-focused vendor; its solution provides minimal reporting features and only supports native device encryption. On the positive side, if you are an IBM-centric customer, you will receive world-class support. BoxTone really stood out in my estimation. They have a long history of focusing on mobile devices, particularly at enterprise scale. They are strong proponents of a multilayer-defense approach to security and can remediate policy and compliance violations. The only negative that was disconcerting is that application containerization is not used for native apps on the device, such as the Apple email client. It does have a version of NitroDesk's TouchDown app for Android and supports Good.


 

Whichever MDM solution you decide upon, make sure it fits your corporate risk footprint, the IT support model you can afford and the needs of your business. Do your research. If you'd like a copy of Gartner's report, email me at cgarland@cgsolutionsofjax.com and I'll be happy to forward it to you.

Wednesday, August 22, 2012

210 reasons not to outsource IT Operations

There are a lot of good reasons to outsource IT Ops but I've never seen the topic discussed in quite this way. Why WOULDN'T you want to outsource IT?

First of all, let's talk about the reasons Hosting companies tout for moving forward with outsourcing:

  • You don't have to carry as many IT persons on your payroll,
  • You don't have to maintain certifications and training levels for IT personnel,
  • Around the clock eyes on the environment,
  • You don't have to pay for real estate to maintain a server farm environment (nor pay for the air conditioning and electricity to support that environment)

Refuting those points:

  • No you don't have to carry as many IT persons on your payroll, but you have to pay for the Hosting company's IT personnel.
  • No, you don't have to maintain certifications and training levels for IT personnel, but you do have to pay for the Hosting company's personnel to maintain their certification levels.
  • Around the clock monitoring, call-home and self-healing technologies can provide the around the clock eyes on your environment w/o undue load on your IT staff.
  • You will be paying not only for the real estate for the server farm but also for the A/C and electricity to support it. Add in the cost of the additional bandwidth necessary for your employees to work across your pipe to the datacenter and these costs add up. While, in a hosted environment, you can share hardware costs with other tenants, does that meet your regulatory requirements? Would that pass a regulatory audit?

It's critical for companies considering outsourcing to be aware of the terms in their contracts. What are you paying for? Are you paying additional costs associated with upgrades or change management? Are you paying for electricity use above and beyond a specific threshold? Are you paying for hands-on support in the event of a hardware failure? Hosting companies are not in business to be philanthropic. They are making money. How? Virtualization, lights-out operation that cuts electrical consumption during unstaffed hours, automation tools, low-footprint server farms and, yes, server and system consolidation. Are there build-out terms in the event your environment grows beyond your current needs? Is there cage space that can be guaranteed so that you don't have to move your entire environment and incur downtime?

Have you ever read a Service Level Agreement (SLA) or Operating Level Agreement (OLA) for a hosting facility? Average SLAs specify a four-hour response time. It could be that your SLA is for an eight-hour response for less-critical systems, with four hours set up for your mission-critical environments. That's not resolution time, that's response time. Are there resolution-time contract terms? There need to be. You can contract for ASAP response time, but at what cost? Is the sum cost-prohibitive? Find out.

Now let's go back to those 210 reasons. In an in-house supported environment, I've never seen an Ops team wait four hours to respond to an incident. Generally, teams respond as soon as they receive alerts and resolve the issue as quickly as possible. That typically means a 30-minute response time with less than an hour to resolution. 30 minutes versus 4 hours equates to a difference of 210 minutes in response between in-house support and a hosted provider. While I'm not saying every hosted provider is going to wait four hours before responding to an incident, what is their inducement to respond any faster? If it isn't in the contract, it doesn't matter that they have a great facility and really nice engineers. They are, as you should be, looking out for the bottom line.
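The 210-minute figure is simple arithmetic, but it helps to make the cost side explicit. The revenue-per-minute number below is entirely hypothetical; plug in your own figure from finance to see what the gap is really worth.

    # Response-time gap between typical in-house support and a common hosted SLA.
    in_house_response_min = 30            # alert-to-response often seen with in-house Ops
    hosted_sla_response_min = 4 * 60      # a common contractual response time

    gap_min = hosted_sla_response_min - in_house_response_min    # the 210 minutes

    # Hypothetical revenue impact per minute of downtime; substitute your own figure.
    revenue_per_minute = 500.0
    print(f"Response gap: {gap_min} minutes")
    print(f"Potential exposure per incident: ${gap_min * revenue_per_minute:,.2f}")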

In order to move to a hosted solution with support included, a company needs to have a stable environment. Are your applications problematic? Have you had consistent policies and procedures in place that easily translate to a hosted environment? Do you have experience with managing vendor relationships? All of these are questions that need to be addressed prior to signing off on outsourcing Ops.

How much revenue or missed opportunity does that 210 minutes represent to you? If you are in a regulated business, those 210 minutes could be costly in terms of fines. Either way, those 210 minutes are going to be paid for. The question is – are the 210 minutes worth the price?

Tuesday, July 31, 2012

What crucial components do Small Business Owners leave out of their planning that can sink their business? Part 1 of 3

When an individual makes a decision to leap into the role of entrepreneur, there are a lot of resources available to help them. These resources help them figure out everything from how to write a business plan, to where to buy the business license and how to pay employees. All of these are obviously crucial to the smooth running of any small business. Imagine this though, you get your license, you set up shop, you hire employees or initially run your business on your own. You're there! You're a small business owner. You have customers, you're getting paid. Then, one night something happens.

Scenario 1: You're sitting on your sofa, sending a note to a friend about the YouTube video you just watched, and your PC shuts down. You don't panic because it's happened before and you were able to issue that golden command, "REBOOT", and everything came back. You were never sure why it shut down, but hey, as long as you got it back. Right?

Only this time, your PC won't come back up.

What are you going to do? Oh wait, your wife has a PC that you can use tomorrow. Only, your accounting software isn't loaded on her PC. OK, so you won't be able to process customer billing. You can do that by hand if needed. Oh, but your portal to your bank is registered on that PC and it's payday. You always issue your payroll checks first thing in the morning so your employees can run to their banks at lunchtime. Remember thinking you should write that website information down somewhere?

Scenario 2: You're sitting on your sofa, sending a note to a friend about the odd pictures you just got from him via email, and your PC shuts down. You don't panic because it's happened before and you were able to issue that golden command, "REBOOT", and everything came back. You were never sure why it shut down, but hey, as long as you got it back. Right?

Only this time, your PC starts up but lines and lines of stuff roll across the screen, and then your screen is blank except for a single blinking cursor in the upper left-hand corner. This doesn't look promising. Well, not a huge deal – you use this PC to send business communications and customer information, nothing you have to have the next day. Of course, all of your personal information is on the PC because it didn't make sense to buy a PC specifically for business when you could use your personal one. You keep your banking and credit card information on it, and in a special text file that no one would ever guess the name of, you keep your account names and passwords. How else would you remember all of those passwords?

Scenario 3: You're shutting down the shop PC for the night, pulling reports from your online customer system, when a message pops up on your screen informing you that you don't have virus protection for your PC and asking if you want to install it. How nice – "someone" is worried because you don't have virus protection. You know you bought that, though. It should be working. What was going on? Well, you might as well buy this new one from whoever this is. You put in your credit card information – the page says HTTPS so you know it's safe – and wait. And wait… And wait… Maybe you have to reboot. The reboot seems to take FOREVVVVEEEERRRR. Then, nothing.

So, what do you do? You're not a PC expert. You bought your machine at one of the local retail electronics stores; maybe you can take it back there first thing tomorrow morning and they can fix it.

So what does this have to do with Small Business Owners and what they leave out of their plans? 3 out of 5 Small Business Owners forget to include technology and business continuity in their plans.

There are a lot of considerations related to technology and business continuity for which small business owners are not prepared, because they are issues they've never had to deal with before. So the question arises, what SHOULD be included?

  • PC Protection
  • Data Protection
  • Environment Protection
  • Device Protection

Forgetting or underestimating any of the above can hurt a small business owner's reputation and credibility. Many small business owners go into business with excellent plans as far as market research and filling out all the proper forms are concerned, but what about technology? What about security? And what about business continuity?

Part 2 to come next week.

Saturday, July 21, 2012

PCI Compliance for small businesses – Keep It Simple

The PCI-DSS requirements were not pulled out of the air or written specifically to drive merchants crazy. They are all based on security standards that have been around for years, with updates as technology has evolved. On the surface, PCI compliance requirements can be intimidating if you don't have a large tech support team and a rather large bankroll. There are, however, ways to ensure your environment is compliant without breaking the bank.

KEEP YOUR ENVIRONMENT SIMPLE – the simpler the environment, the easier the Compliance Standards are to meet.

  • Purchase authorized PIN and Credit Card devices from your bank. Ensure they are PCI Compliant.
  • Don't store customer data in your environment. This doesn't mean don't have a marketing mailing list. This means don't include any customer financial data.
  • Use commercial products for your POS system that are certified PCI Compliant.
  • Trust your employees, but verify. DO background checks to ensure you're not hiring an individual who shouldn't be trusted with someone else's personal information.
  • Only allow access to customer data to those employees who have a definite business need.
  • Purchase and maintain antivirus and malware software for all PCs (and servers) in the environment.
  • Use Windows Update and apply security fixes. Same for other operating systems. They too get hacked.
  • Don't browse social media sites on your work pc. (This may be considered overkill by some but if you flat don't allow it in the first place, you don't have to potentially worry about a Trojan getting through your virus protection).
  • Use individual logons for all employees. This makes it much easier to trace and troubleshoot potential misuse.
  • Find vendors who will partner with you, regardless of your small size, to help you maintain your environment. Ensure THEY are security-minded and compliant.
  • Write some basic policies and procedures and have employees sign-off that they have read them and understand them. (Core policies and procedures are available that you can fit to your environment).
  • Turn on Windows Firewall.
  • Purchase a warranty on your hardware. (This goes to recovering from a disaster and environment stability).
  • Back up your data. You can purchase an external hard drive from many vendors for under $200 in a lot of cases. Windows has a built-in backup program. You don't have to purchase additional software.
  • Follow basic security rules published by vendors such as Microsoft. They have security baseline documentation that will guide you into creating a more secure environment.
  • Fill out your self-attestation paperwork and provide it to your merchant bank.


 

None of these recommendations are expensive nor should they drive any Mom-and-Pop-sized shop out of business. Best of luck.


 

Saturday, July 14, 2012

Where do you start when you have nothing to start with?

Ever walked into a new management position and thought, "Holy crap, what did I get myself into?" Old hardware, equipment with no warranties, unsupported software and software at or past its support deadlines litter your server room or datacenter. You spend the first week answering calls from Senior Vice Presidents who insist you fix their systems as your first priority. Employees trail into your office with complaints about everything from work hours to irate development managers who don't understand why your team can't keep their applications running. What do you do?

If your response was not "RUN", then keep reading.

First you watch, you listen, and you ask questions. Evaluate the reasons for the environment getting into the dilapidated state in the first place. Listen to the business leaders and your new team. Listen to the vendors that have supported the environment. Of course, everyone will have their own perspective. Take the emotion out of the equation, don't allow blame and finger pointing to color your game plan.

Then, act.

Start with the basics.

  1. Supporting the business
    1. Can the current systems sustain projected business growth?
    2. If not, what is the business willing to invest to support that growth?
    3. What is the business's risk appetite?
    4. Is there a cohesive vision of what the business needs?
  2. Resource Planning
    1. Does your team have clear job descriptions?
    2. Is there a proper staffing model?
    3. Do the technologists have adequate training for the technologies necessary to support the business?
    4. What is the percentage of effort spent on supporting the existing environment versus enabling the business to do more?
  3. Governance
    1. Is there a documented governance framework in place?
    2. Does the culture lend itself to structure and standards?
    3. Is there a cohesive vision of IT's role?


       

  • Understanding the business goals will go a long way toward ensuring your credibility as you present your vision for transforming the services your team provides. Do your best to understand the agendas of those you work for and with.
  • Creating a stable environment for your team will go a long way toward ensuring they believe in you enough not to abandon their posts. They'll at least "go along for the ride".
  • Bring in vendors who can partner with you to help stabilize and then transform your environment.
  • Don't make promises you can't keep.
  • Working with your team, create policies and procedures to ensure everyone is aware of the new rule book. It's unfair to existing employees to change the rules without enlisting their support and buy-in. While some may push back at the new structure and boundaries, most will understand the need in order to move forward.
  • Create a remediation plan with business drivers, costs and estimated effort in time and resources. When you do this, take into consideration that everyone has "day jobs". They may be unwilling to put in the additional effort that would be required to clean up. So, reward them. Compliment those who do well, who perform above and beyond. Give them public appreciation. Give them feedback, both positive and negative. Employees deserve to know whether they are adhering to the "new world order" or are not meeting expectations.

Lastly, don't expect miracles. The environment did not decay into this state overnight. It will take time, concerted effort and focus to get yourself into a positive place. It will also take compromises. What may be seen as the best technical approach may not be the best business approach. After all, isn't an IT department's role to support the business? Best of luck.

Wednesday, June 20, 2012

K.I.S.O.F.F.

You may wonder what KISOFF stands for. No, it isn’t a term of endearment nor is it a response to a nasty comment from someone. It stands for:
Keep
It
Simple
Or
Fail
Fabulously

After all, if you attempt something, shouldn’t you give it the most fabulous attention possible? Every effort should go into making the attempt successful. You’ve used your company’s resources for the effort. You’ve used your team’s efforts. Every project worth doing… We all know the way that sentence ends, “is worth doing fabulously.”

Ok, enough with the light-hearted talk. Now for the realities of IT projects.
A well-conceived project doesn’t begin with a bunch of technologists sitting around a table talking about servers or application performance.  An IT project starts with a business or technology problem or process improvement initiative.

The business may have:
  • A problem that needs corrective action,
  • An idea on a new business direction or improvement.
Technology may have:
  • An area that would benefit from process improvement,
  • An audit finding requiring remediation,
  • Obsolete hardware.

These are the most common reasons for a project implementation. Following an SDLC or COBIT framework can definitely increase your chance of a successful project, but in simple terms, here’s the KISOFF approach to improving the odds.

K – Keep the scope simple. Smaller projects with documented milestones that can be achieved in a short time frame are win/wins for technology and the business. IT has the opportunity to be successful and the business can have working parts of their project developed and in production versus waiting for the entire development cycle.

I – Include the business in each stage of the project. An executive sponsor is key, even if the executive is the CIO. Having the business involved improves the overall state of the project in multiple ways. Don’t just involve the business, though. Involve the Subject Matter Experts (SMEs) for the area you are working on. They will give you more exact requirements than someone not involved in the day-to-day operations.

S – Scope creep can be the death of project managers AND, just as importantly, the project itself. Stick to the original project scope. Have you ever been involved in a project that just wouldn’t conclude? The one where everyone points fingers at each other because they don’t want to accept responsibility for “The project that wouldn’t die”?

O – Outsource where necessary. Make effective use of your partners. If you are buying software, it is important to the software vendor that your project be handled correctly and you are successful. Include knowledge transfer and training in any SLA’s (Service Level Agreements).

F – Factor in additional resources, whether it is staff, time or money, to deal with unforeseen issues. That doesn’t mean to frontload all of your quotes by tens or hundreds of thousands of dollars but adding 10 or 15% for contingency is an established industry standard.

F – Focus. Don’t get distracted by everyday fire drills. I am not saying ignore your day-to-day responsibilities, but create a reasonable project schedule that allows staff resources to focus on the project as a priority, and make it clear from a management standpoint that the project is a priority for your team. If possible, assign different resources to the project than those who are primary on supporting production.

While this month’s blog was definitely written a bit tongue-in-cheek, that shouldn’t diminish the gems contained. By following the KISOFF strategy, you can improve your project success rate. While this strategy may not be as detailed as SDLC or COBIT, it may be all that a small company needs to get to the end game. Best of luck.


Monday, March 5, 2012

It all depends on the environment


Solution recommendations



I was reading a technical discussion board this past week and saw an interesting discussion item title: “What is the most effective Antivirus solution for the corporate sector?” Unfortunately the answer isn’t as simple as Trend Micro, Symantec or McAfee. It depends upon the environment. It depends upon the business requirements. Antivirus software is just one component in a larger strategic risk response, and the components need to work together seamlessly in order to be truly effective.

Questions to ask/answer:

  • How big is the environment?
  • Are you in a regulated industry?
  • Are there multiple geographical locations? Multiple logical environments separated by firewalls?
  • What type of desktops? What type of servers? What OS?
  • Is the intention to use the same product across desktops and servers?
  • Is antivirus the only product in scope? Is malware protection also a requirement?
  • Are there mobile users?
  • Is there a need to provide remote live update functionality?
  • Is there a requirement for self-discovery of machines on the physical LAN that do not have antivirus installed on them?
  • What report types are expected and/or required? Should they be automated? Scheduled?
  • Will training be provided to the IT staff?
  • Are there other solutions, such as Intrusion Detection Systems, in play?
  • Do you need to have a multi-tier notification tree configuration?
  • How will you manage licensing?
  • If a machine does not report in for “x” amount of time, what happens?



Where technology protection is concerned, there are few simple questions and even fewer simple solutions. Do your homework before making an information security-related product decision. It hurts your credibility to make a wrong decision, but it hurts your credibility even more if you’ve asked for the monies to buy an expensive, complicated system, only to find you don’t have the right resources to support the product. Don’t buy a Porsche if your environment only needs a Toyota. The same is true in reverse. There is no one-size-fits-all scenario.

                Once you’ve made a purchase decision, there are configuration decisions that need to be made, tested and documented. Winging it with anti-virus protection, in a corporate environment, could be compared to releasing killer bees in a planetarium filled with preschoolers. Somebody’s gonna get hurt.

                While a large part of your environment should be cookie-cutter, there is the 80/20 rule that says there will be exceptions to the standard configuration. Different types of servers will need specific rules set up to avoid degrading performance. Different environments (firewalled environments) may require additional servers to support them. OR you may wish to punch holes in your firewalls to allow the update server and the individual systems to communicate.

                These are the types of decisions that should not be made in a vacuum. Talk to your frontline engineers who will be supporting the product. Talk to your Information security group. Talk to your Internal Audit team. Talk to other organizations, your size, in your industry, to see how they handle antivirus in their environments.

                Last, but not least, are there written policies and procedures that have to be met or updated? Don’t let this part of your implementation fall by the wayside. If you are audited, these documents are the basis for the audit, so make sure you cover this area. Not only will it protect your environment, it could well protect your job as well.