In today’s world, the moment we think about creating data, we must also think about how to protect its confidentiality, integrity, and availability. When creating data, we need to assume possible worst-case scenarios, including, but not limited to, data leakage, unauthorized access, and complete loss of access to the data. Unauthorized access means losing control over the data. One side effect of data processing is that additional copies are produced, be it in swap memory on a hard drive, in a temporary file, or in the memory of a process we are using to process the data. In the case of a cloud-based solution, another side effect of data processing that must be taken into account is metadata creation. Even simple processing can generate large amounts of metadata. Collecting metadata is a huge business nowadays, which means that metadata also requires some form of protection, especially since it can contain sensitive information about us. Additionally, the place and time of encryption/decryption are crucial from a security standpoint. If encryption takes place in the cloud, one has to provide secure means of sending unencrypted data into it.

Encryption: Possible Pitfalls, Traps and Considerations

When speaking about data leakage or unauthorized access, encryption comes to mind immediately. Since encryption entered the enterprise world from the military universe, it is often misunderstood and described as a single magic solution solving all security matters. Obviously, such an approach is flawed from the very beginning, for a number of reasons, which we discuss below.

Insecure Process

Solving complex problems with complex solutions is not an easy task, and this is exactly the case when you try to solve security matters with cryptography. Security starts with the process of information processing: if you design the process badly, it will be insecure. For example, if you push users to use overly long passwords with high complexity, they will find an insecure way to bypass the requirement somehow; we all know the story of the password on a post-it sticker, for example. The rule here is that the end-user experience is crucial for security: controls should allow users to do their tasks quickly without too many obstacles. The business side of your organization will also love such an approach, since it means maximizing profits in the long term. What does password policy have to do with encryption? While passwords should be long gone from our security landscape, they are still here, still going strong. How do you log into your mobile? Aren’t you using a PIN? And how do you log into your social network account or email account in the cloud? The moral of the story is: if you are using passwords to protect access to your data, the passwords need to be strong and changed periodically. Cloud solutions didn’t make passwords disappear completely.

User Education and Security Awareness

No matter how well you design the process and later implement it, remember the words of Albert Einstein: “Only two things are infinite, the universe and human stupidity, and I’m not sure about the former.” If users are willing to enter 11 one-time passwords into a window created by malware, don’t expect any cryptosystem to stop them from doing so. Users must be aware of threats and of how they must handle controls in order to retain security. Phishing scams are being actively exploited, just like malware attacks. Speaking of malware, it is worth mentioning that, unless the cloud service provider provides malware protection both within its cloud service and on the cloud service customer’s system, there is very little it can do about end-user malware protection, except educating customers about the need to have such an up-to-date solution installed. Scanning files or data being uploaded to a cloud service does not protect the end-user system from being infected by malware. As a consequence, malware can not only infect the system but also steal data from the cloud service customer in unencrypted form and/or steal the credentials needed for cloud service access, compromising the cloud service as well.

Insecure Architecture

Coming back to the cloud service provider side, the architecture of the whole system, including the cryptosystem, is crucial for the final security level. In most cases, it is not feasible to keep data encrypted all the time; the decryption process is therefore crucial in terms of unauthorized access and data leak risk. This leads us to the following best practices in architecture design:

- Deploy multilayer security – encryption can’t be the single layer.
- Encrypt and decrypt data in a single place if possible – this limits the risk of some information not being encrypted properly, for example, and it also makes security audits easier.
- Protect encryption keys and IVs (initialization vectors) if needed.
- Ensure strong random number generators are in place.
- Limit the attack surface by not making all functionality publicly available (where applicable).

Obviously, the above list does not exhaust all best practices. Some best practices will be tightly connected with the type of service and/or the business model of the cloud service.

Weak Protocols and Algorithms

Even the best architecture from a security perspective can still be a pitfall if implemented and deployed badly. A great example is the use of insecure protocols and algorithms like SSHv1 or old implementations of SSL, just to name a few cases. Fortunately, some vendors have already started to drop support for insecure protocols [1]. Another issue arises when you are using a strong protocol with weak keys, due to either a short key length or poor key generation. In general, do not use short keys (often a relic of expired export regulations), as they may be easily broken. If the attacker is lacking computing power at his end, he can always buy a cloud service to gain more of it, enabling him to break short keys quickly.
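To put key length into perspective, here is a minimal Python sketch (the 40-bit figure stands in for old export-grade key sizes and is illustrative) that generates a modern 256-bit key from the OS CSPRNG and compares the sizes of the respective search spaces:

```python
import secrets

# A modern symmetric key: 256 bits straight from the OS CSPRNG.
strong_key = secrets.token_bytes(32)
print(len(strong_key) * 8)  # 256

# Key-space sizes show why export-era short keys are breakable:
keyspace_40 = 2 ** 40    # ~1.1e12 candidates: hours on rented cloud hardware
keyspace_256 = 2 ** 256  # ~1.2e77 candidates: far beyond any feasible search
print(keyspace_256 // keyspace_40)
```

The gap between the two search spaces (a factor of 2^216) is exactly why an attacker renting cloud compute changes nothing for long keys while making short ones trivial.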

Random Numbers Not So Random

Unless a hardware-based solution is used, generating random numbers is not an easy task for a computer. Most programming languages provide a random()-like function, yet those functions are considered insecure and must not be used in cryptosystems. To make a long story short: a weak random number generator makes the whole cryptosystem insecure and easy to break, because it lets the attacker predict the generator’s output and the IVs seeded from it, making cryptanalysis attacks easier. Depending on the circumstances, the whole encryption process can be subverted by the attacker.
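A minimal Python sketch of the difference (illustrative only): a general-purpose PRNG seeded with a guessable value is fully reproducible by an attacker, while a CSPRNG such as the secrets module draws from the operating system’s entropy pool:

```python
import random
import secrets

# A general-purpose PRNG (Mersenne Twister) seeded with a guessable
# value: anyone who recovers the seed reproduces the whole stream.
victim = random.Random(1234)
attacker = random.Random(1234)
assert [victim.random() for _ in range(3)] == [attacker.random() for _ in range(3)]

# A CSPRNG draws from the operating system entropy pool instead;
# use it for keys, IVs, and tokens.
iv = secrets.token_bytes(16)
print(len(iv))  # 16
```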

Strong Algorithms – Vulnerable Implementation

Even when all deployed protocols and algorithms are considered strong from a cryptography perspective, it does not mean that their implementation is secure. Basically, we can have two classes of problems:

- Improper implementation of a secure algorithm or protocol, resulting in weakening its cryptographic properties
- Defects in software or hardware, leading to vulnerabilities that can be exploited by a third party

The long stream of OpenSSL vulnerabilities is a great example of the second class. The first class of problems requires a bit more explanation. Every cryptographic algorithm has a set of properties defining its strength; if one of those properties is changed, it affects the final encryption strength. One already mentioned cause of improper implementation is the use of insecure random number generators. Such a problem is not necessarily the result of a deliberate action by the programmer. For example, he may be using external libraries that provide insecure pseudorandom generators while being described as secure. Such bugs are impossible to find during a simple code review, as part of the quality assurance process, when the source code of the external libraries is not available.

Temporary Files and Swap Memory

If swap memory and temporary files are not encrypted at both ends of the cloud service, they are a perfect avenue for data leakage. Let’s consider two scenarios:

1. Encryption happens at the cloud service provider’s end, and data is sent through a secure channel to the cloud.
2. Encryption happens at the cloud service customer’s endpoint, and encrypted data is sent to the cloud service provider so it can be stored in encrypted form.

In both cases, swap memory and temporary files can contain a copy of the unencrypted data. Even if an attacker can access only incomplete unencrypted data alongside an encrypted copy, it will make a cryptanalysis attack more feasible. So what differs between the above scenarios? The location of the swap memory and temporary files: in the first case, the swap is located in the cloud service provider’s infrastructure; in the second case, it is on the mobile device or workstation of the cloud service customer. Obviously, access to those areas should be restricted, but the implementation differs. In the first case, it is the responsibility of the cloud service provider, and in most cases the cloud service customer has very little knowledge of the implementation details. In fact, in most cases only the contract between customer and provider and the SLA can define the measures taken by the provider. In the second scenario, it is the responsibility of the cloud service customer to protect swap memory and temporary files. The good news is that some operating systems encrypt swap memory by default nowadays, but in the case of mobile devices the situation can differ drastically.

The protection of temporary files is a different matter. Most operating systems secure access to temporary files through file access restrictions and by eventually removing them. Unfortunately, with modern operating systems and the storage devices they support, secure removal of a file is usually impossible. There are also cases in which temporary files aren’t deleted at all; such a situation can result from an application crash, for example, or from the locking of a temporary file by another process. The solution to this problem is an encrypted file system: even if a temporary file isn’t deleted at all or isn’t deleted in a secure manner, access to it is still limited. Why are swap memory and temporary files such a big issue? Imagine losing your mobile device, or a cloud service provider exchanging storage media due to failure. The latter scenario raises another crucial issue: safe disposal of IT equipment by the cloud service provider. Such procedures should be in place and clearly defined in the SLA.
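As a small illustration (a Python sketch assuming a POSIX system), the standard tempfile module removes a temporary file’s directory entry on close, which is exactly the kind of deletion that is not a secure erase:

```python
import os
import tempfile

# NamedTemporaryFile creates the file with owner-only permissions and
# removes its directory entry on close. That removal is not a secure
# erase: the written blocks may survive on the storage medium, which
# is why an encrypted filesystem remains necessary.
with tempfile.NamedTemporaryFile(mode="w+", delete=True) as tmp:
    tmp.write("plaintext working copy")
    tmp.flush()
    path = tmp.name
    assert os.path.exists(path)

# The file entry is gone; the data once written beneath it may not be.
assert not os.path.exists(path)
```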

Incident Handling, Forensics, and Data Recovery

Incidents do happen, unfortunately. When considering data encryption, you also have to take into account the issues that arise in the case of a security incident. Conducting forensic analysis on encrypted files, an encrypted filesystem, or encrypted swap memory may be a complex or even impossible task. In the case of a cloud service, this may be even more complex, because you may not have access to the filesystem, depending on the type of service. In fact, some services have no concept of a filesystem at all, so typical forensic procedures will fail from the very beginning. Again, this should be taken into account when considering encryption and planning your incident handling procedures. Encrypted data can be decrypted only if it retains its integrity. If its integrity has been compromised, whether by a security incident or by random events like hardware failure, it may not be possible to successfully decrypt the data, and data recovery methods will be of little help. This brings up an important consideration for backup strategy, especially when you are using encryption. Some compliance requirements actually leave no choice and mandate encrypted backups as the only option. So, when using a cloud service, it is crucial to check the provider’s backup strategy and encryption policy.
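The integrity requirement can be made concrete with an encrypt-then-MAC check; the following Python sketch (the encryption step itself is omitted, so the ciphertext is just an opaque byte string here) refuses to hand data to decryption when its HMAC-SHA-256 tag no longer matches:

```python
import hashlib
import hmac
import secrets

def protect(key: bytes, ciphertext: bytes) -> bytes:
    # Append an HMAC-SHA-256 tag (encrypt-then-MAC construction).
    tag = hmac.new(key, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag

def verify(key: bytes, blob: bytes) -> bytes:
    # Refuse to pass data to the decryption step if the tag mismatches.
    ciphertext, tag = blob[:-32], blob[-32:]
    expected = hmac.new(key, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed -- do not decrypt")
    return ciphertext

key = secrets.token_bytes(32)
blob = protect(key, b"encrypted backup block")
assert verify(key, blob) == b"encrypted backup block"

# Flip one bit anywhere in the blob and verification fails.
corrupted = bytes([blob[0] ^ 0xFF]) + blob[1:]
try:
    verify(key, corrupted)
except ValueError:
    print("corruption detected before decryption")
```

A backup that fails this kind of check is exactly the case described above: data recovery tools cannot help, only an intact second copy can.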

Encryption as Vendor Lock-In

If your data is encrypted by the cloud service provider (it doesn’t matter where the encryption takes place), you should check at least two things:

- Do you have an unencrypted copy somewhere?
- How can the encrypted data be transferred to another cloud service provider or back to you?

If you have an unencrypted copy of the data uploaded to the cloud, you can easily switch your cloud service provider, assuming you or the new cloud service provider can provide a secure channel to send it over. In reality, this scenario happens quite rarely, due to the problem of synchronizing and keeping data up to date between your system and the cloud. In fact, such an approach could limit taking full advantage of the cloud offering. The second case is more typical, and some vendors may be willing to strengthen their business by not letting you go away. It turns out that cryptography can be the perfect method for doing so. If encrypted data can’t simply be sent from one cloud service provider to another, you have to decrypt the data, upload it, and encrypt it again, for example. With a single small file this operation poses no issues; however, as the number of files and their sizes grow, the complexity of the process and the time required for switching between cloud service providers grow as well. In extreme cases, the time and complexity could result in such high costs that switching cloud service providers is no longer considered an option. If you are encrypting data locally and just using the cloud to store encrypted files, then you are immune to the issues discussed above.

Cryptography Controls at SLA Level

So far we haven’t touched on the terms under which a cloud service is provided. Cryptography, like the rest of the security model offered for a particular cloud service, should be addressed in the service level agreement. The “CSIG Cloud Service Level Agreement Standardization Guidelines” [2] describe the relevant service level objectives, such as the level of protection, brute force resistance, and key access control policy. If you read those SLOs carefully, you will quickly discover that all of them have been covered in our discussion so far:

By providing strong cryptographic algorithms and protocols including strong random number generators, we provide brute force resistance. By providing secure storage of encryption keys, we are also providing key access control policy. Secure key storage and random number generators can be implemented in hardware.

The above points assume a proper, vulnerability-free cryptography implementation, and this assumption extends to a proper cryptosystem architecture. This is the ideal situation, which may not always apply. This makes the SLA crucial in terms of responsibility: if the cloud service customer is processing data in the cloud for third parties and needs to fulfill compliance requirements, an SLA with adequate SLOs is the answer to the problem. Clearly, the cloud service customer cannot audit every statement made by the cloud service provider, but he can ensure that a proper contract is in place. If any cloud service provider statement turns out to be false, pursuing the customer’s interest by legal means is generally far more feasible with a proper contract in place. The key access control policy SLO probably requires some clarification. This particular SLO applies only when the keys are stored within the cloud. This may not be the case; in fact, some best practices advise storing the keys locally, giving the cloud service provider no access to them at all. However, this may not always be possible: for example, a customer using infrastructure as a service (IaaS) may set up remote access between virtual servers through SSH. The SSH authentication can be key-based, so the keys must be stored on the IaaS platform. Obviously, the IaaS provider should not be able to access the customer’s virtual machines, and the same should hold for other IaaS customers of the same provider. However, simple customer isolation at the hypervisor level may not be enough to guarantee system separation comparable to physical separation. If you take a look at the Venom vulnerability [3], CVE-2015-3456 [4], it quickly becomes obvious that similar security issues will be identified in the future as well. Due to a bug in the virtual floppy controller code, arbitrary code execution is possible in the context of the host, not the guest system.
By the way, this is another perfect demonstration of how legacy code can result in a security incident. Further, you usually aren’t using a floppy controller, but how often do you actually disable it on your virtual machines? While we can’t protect against all vulnerabilities, even when limiting the attack surface by disabling unused features (like floppy support) and controlling access to the rest, there is still some risk to be addressed. Secondly, the Venom case shows that unauthorized access may result not only from cloud service provider actions but also from other customers’ actions. Here the SLA contract comes to the rescue. However, the game is not just about having an SLA in place; it is about getting a proper and complete SLA between the parties the first time around. I’ve seen it too often that the customer renegotiates the SLA only after an incident. Had he followed guidelines like the one already mentioned, or hired a proper security consultant, he would have ended up with an SLA that fits his needs and addresses the risks associated with his business. Speaking of terms and conditions, it is important to mention that strong cryptography cannot be legally exported and imported in all markets. This has to be taken into account for a global cloud service offering. Similar regulations exist in the area of data processing and privacy protection, possibly limiting the choice of data centers due to their geographical location. Differences in local regulations can be a real show-stopper for adopting the cloud, even more than cryptography export restrictions. This is why there are a number of initiatives trying to build a more global legal consensus in the area of data processing and privacy protection, along with several certification schemes and standards aiming to foster this process. To recap: having a proper SLA in place must be treated as equal to the technical safeguards deployed at the cloud service provider’s sites.
In addition, organizations such as EuroCloud and the Cloud Security Alliance provide not only guidance for complex SLAs but also lists of safeguards that should be deployed by the cloud service provider. This gives the customer the know-how about what to ask for when selecting a cloud service provider and negotiating both the contract and the SLA.

Recoverable Encryption and Key Recovery

Speaking about terms and conditions, it is crucial for a cloud service customer to understand the security level of encryption provided. Here are the main points:

- Which information is being encrypted, under what conditions, and in which cases?
- Is the encryption process recoverable?
- Is it possible to recover encryption keys?

Obviously, the recoverable encryption option may not be feasible for all customers for a number of reasons, while being highly valuable to others. The option to recover encryption keys should be part of the key access control policy. It is crucial to understand whether duplicates of encryption keys exist and, if so, under what conditions, where, and for how long they are stored. In the case of encryption keys/certificates, a revocation procedure is also crucial: you need to take into account the case in which they are leaked or stolen. The keys’ validity period can be a crucial attribute as well.

Metadata Protection

So far, we’ve been discussing the protection of data without actually touching on the issue of metadata protection. Since most metadata is generated within the cloud service provider’s perimeter, the cloud service customer usually has a very limited selection of options, if any (from a technical point of view). This changes dramatically if the cloud service customer is able to negotiate the terms and conditions of the service, including the service level agreement. As already noted, an SLA is a great place for describing the requested security level and the possible penalties for violating it, whether by the cloud service provider or by a third party. This point applies not only to metadata but to customer data as well. Why is metadata so valuable? It can tell a lot about the customer: for example, login hours and frequency can say much about customer habits. Some parts of the metadata can at times be partially protected by privacy regulations, but that depends on the particular region. Metadata can be generated even by merely visiting a cloud site; for example, it can contain information sent by a web browser, such as the type and version of the operating system and browser. Another example is the storage space occupied by cloud service customers. Such information could be valuable for creating a better offering for customers, but it could also be used by third parties to offer different services or devices with better-suited storage capacity. The examples are endless. The biggest problem is users’ perception of metadata: they see no value in protecting it, while it can reveal a lot about them, clearly violating their privacy.

Not Only Encryption

Up to this point, this article has mostly discussed encryption; however, cryptography offers a lot more functionality. So far, integrity has barely been mentioned but, from a security standpoint, it is also an important service. For example, financial data (a Sarbanes-Oxley requirement, for example) or healthcare data not only require protection from unauthorized access and leaks (encryption); their integrity and authenticity are crucial as well. Cryptographic integrity services can be applied not only to data but also to application binary files; in such cases, they can help detect signs of intrusion (changed executable files) and are part of the forensic process. Furthermore, in some cases integrity logs can be used as evidence in court, provided that appropriate evidence handling took place during the forensic analysis. This raises questions regarding both legislation and the contract between the parties: the mere fact that the cloud service provider checks the integrity of its internal cloud components doesn’t mean you will have access to those checks when an incident happens. Since, in one of its simplest forms, a cloud offering can be treated as a convenient way to store files, integrity can be an important piece of functionality. Another service offered by cryptography is time stamping. This can be useful for a number of more advanced cloud services, but even if one just “deposits” files in the cloud, it can still be used to assure the proper date of upload. Passwords are a nightmare for security experts yet, while insecure, they refuse to disappear from our landscape, as mentioned already. Cryptography is used not only to store passwords in encrypted form (or as one-way hashes, which can be treated as a form of encryption in this case) to protect their real value in case of unauthorized access, but it also makes strong authentication schemes possible. The available authentication options should be taken into consideration when selecting a cloud service offering.
Obviously, cryptography offers a lot more, and we have barely touched the tip of the iceberg; however, further analysis of cryptographic services is far beyond the scope of this article. Last but not least, encryption is often used in conjunction with compression. This is because encrypted data usually expands in size by a rather large percentage (around 30% is a safe assumption for most cases). To address this issue, compression is used; note that it must be applied before encryption, since ciphertext itself is practically incompressible.
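The password-storage point above can be sketched in a few lines of Python (parameter choices such as the iteration count are illustrative assumptions, not taken from the text): passwords are stored as salted one-way PBKDF2 digests and compared in constant time, so a leaked database does not reveal the passwords themselves.

```python
import hashlib
import hmac
import secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A unique random salt defeats precomputed (rainbow-table) attacks;
    # a high iteration count slows down brute-force guessing.
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert check_password("correct horse battery staple", salt, digest)
assert not check_password("wrong guess", salt, digest)
```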

Private Cloud Model

Until this point, we’ve made a quiet assumption that we are talking about a public cloud solution. However, most of the points are valid for private clouds as well; in fact, the differences depend on the particular implementation. If the organization can control all aspects of the private cloud deployment and of remote access by the cloud service provider, it can, at least theoretically, control the security level and the metadata. If the organization receives a form of “black box” delivered onto a virtualization platform, then we can treat it similarly to the public cloud model. Again, proper SLA clauses related to security, metadata, etc., apply. It is important to remember and understand that just because something is running on our infrastructure doesn’t necessarily mean we automatically have more control over it or that it is free from vulnerabilities. This brings up another important issue: the patch management process. It is important to understand how this process works in a particular private cloud implementation and to align it properly with the rest of the organization’s patch management.

Summary

Switching to the cloud model does not mean we can forget about security measures at the endpoint. When implementing or assessing cryptographic controls, it is crucial to fully understand the whole cryptosystem and the deployment model. Last but not least, user education is a crucial success factor for any security-related actions but, at the same time, proper security functionality must be deployed at the cloud service provider. A security awareness program without proper controls is useless, just as cryptography alone can’t solve all security issues. Do not treat this article as the solution to all cloud-related cryptography issues; rather, look at it as an introduction to a complex subject. So far we’ve only scratched the surface. I’ve intentionally skipped areas that concern home users or some small businesses in order to focus on issues regarding corporate solutions. This does not mean that cryptography isn’t suitable for home users. In fact, it is and, as a privacy protection advocate, I strongly believe in providing cryptography solutions to home users in order to protect their data and identity. In the case of small businesses, a cloud-based solution can offer a higher security level at a fraction of the cost of building the same infrastructure on-site (DDoS mitigation solutions are a great example here). However, that is a discussion for a different article.

Bibliography

1. OpenSSH: ssh protocol 1 now disabled at compile time, http://marc.info/?l=openbsd-tech&m=142716088814469&w=2
2. CSIG Cloud Service Level Agreement Standardization Guidelines, https://ec.europa.eu/digital-agenda/en/news/cloud-service-level-agreement-standardisation-guidelines
3. Venom vulnerability site, http://venom.crowdstrike.com/
4. CVE-2015-3456, http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-3456