
Communities

Cloud Native Data Pipelines

Cloud Computing Software Development - Wed, 06/21/2017 - 18:40
Big Data companies (e.g. LinkedIn, Facebook, Google, and Twitter) have historically built custom data pipelines over bare metal in custom-designed data centers. In order to meet strict requirements on data security, fault-tolerance, cost control, job scalability, network topology, and compute and storage placement, they need to closely manage their core technology. In recent years, many companies with Big Data needs have started migrating to one of the public cloud vendors. How does the public cloud change the game? Specifically, how can companies effectively marry cloud best-practices with big data technology in ...

Crank Up Your Cloud Security Knowledge with These Upcoming Webinars

Cloud Security Alliance Blog - Mon, 06/12/2017 - 17:44

By Hillary Barron, Research Analyst and CloudBytes Program Manager, Cloud Security Alliance

Whether you’re trying to make the move to cloud while managing outdated endpoint backup, attempting to figure out how to overcome the challenges of developing and deploying security automation, or determining how and why you should build an insider threat program, CSA has a webinar that can answer your questions and help set you on the right path.

June 13: 4 Lessons IT Pros Have Learned From Managing Outdated Endpoint Backup (Presentation by Aimee Simpson of Code42, Shawn Donovan of F5 Networks, and Kurt Levitan of Harvard University)

In this session, you’ll hear from IT professionals at F5 Networks and Harvard University, as well as a Code42 expert, as they discuss:

  • Why all endpoint backup isn’t created equal.
  • How outdated or insufficient backup solutions leave you with gaps that put user data at risk.
  • What technical capabilities you should look for in your next backup solution.

June 15: Security Automation Strategies for Cloud Services (Presentation by Peleus Uhley of Adobe)

Security automation strategies are a necessity for any cloud-scale enterprise. Challenges arise at each phase of developing and deploying security automation, including identifying the appropriate automation goals, creating an accurate view of the organization, selecting tools, and managing the returned data at scale. This presentation will detail various open-source materials and methods that can be used to address each of those challenges.

June 20: How and Why to Build an Insider Threat Program (Presentation by Jadee Hanson of Code42)

Get a behind-the-scenes look at what it’s really like to run an insider threat program — a program in which you can take steps to prevent employees from leaking, exfiltrating, and exposing company information. This webinar will provide cloud security professionals with insider threat examples (and why you should care), recommendations for how to get buy-in from key stakeholders, and lessons learned from someone who has experienced it firsthand.



Who Touched My Data?

Cloud Security Alliance Blog - Fri, 06/09/2017 - 14:00
You don’t know what you don’t know

By Yael Nishry, Vice President of Business Development, Vaultive, and Arthur van der Wees, Founder and Managing Director, Arthur’s Legal

Ransomware
IT teams generally use encryption to enable better security and data protection. In the hands of malicious parties, however, encryption can be turned into a tool that prevents you from accessing your own files and data. This kind of cyberattack has been known for a long time, but the most recent attack, by the WannaCry ransomware cryptoworm, was extensive, global, and front-page news.

Under any circumstances, a ransomware exploit is terrible for an organization. The immediate impact can include extensive downtime and may put lives and livelihoods at risk. In the latest attack, however, several hospitals, banks, and telecom providers also found their names in the news, suffering damage to their reputations and losing the trust of patients and customers alike. For a thorough summary of the events, we refer you to the many articles, opinions, and other publications about the WannaCry ransomware attacks. This article covers the rarely discussed secondary effects of such attacks.

Data exploits
What should you do if you discover your data has been encrypted by ransomware?

When there is a loss of data control, most IT teams immediately think of avoiding unauthorized data disclosure and ensuring all sensitive materials remain confidential. And indeed, these are sound measures.

However, what if you can retrieve your organization’s data because a third party has made a decryption tool available (experts strongly recommend against paying the ransom)? One might think that business can continue as usual and that the data was neither compromised nor disclosed, right?

Who touched my hamburger?
Unfortunately, if no mechanism was in place beforehand to verify that the retrieved data maintained its integrity during the ransomware timeframe, one simply does not know. It will not be clear whether the data was modified, manipulated, or otherwise altered. Are you still willing to eat that hamburger?

Furthermore, one does not know whether a copy has been made, either in part or in whole. And if a copy was made, IT teams cannot track where it is, or whether it left regulatory data zones such as the European Union or the European Economic Area.
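
One way to close that gap is to record cryptographic checksums of critical files ahead of time and keep them in a separate trust zone, so integrity can be verified after recovery. Here is a minimal sketch in Python; the directory and manifest paths are illustrative assumptions, not a prescribed product or process:

    import hashlib
    import json
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        """Compute the SHA-256 digest of a file, reading in chunks."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def build_manifest(root: Path) -> dict:
        """Map every file under root to its checksum."""
        return {str(p): sha256_of(p) for p in root.rglob("*") if p.is_file()}

    def changed_files(manifest: dict) -> list:
        """Return files that are missing or whose contents no longer match."""
        return [name for name, digest in manifest.items()
                if not Path(name).is_file() or sha256_of(Path(name)) != digest]

    # Before an incident: snapshot checksums and store the manifest
    # somewhere the ransomware cannot reach (offline, or a separate zone).
    manifest = build_manifest(Path("/data/critical"))
    Path("/safe/manifest.json").write_text(json.dumps(manifest))

    # After recovery: anything listed here was altered in the interim.
    stored = json.loads(Path("/safe/manifest.json").read_text())
    print(changed_files(stored))

A checksum manifest answers only the integrity question, not the copying question; it tells you whether the hamburger was touched, not whether someone photographed the recipe.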

Secondary effect of ransomware
The loss of control described above is the secondary effect of a ransomware attack, and it may be even more far-reaching than the original wave. With very little information about what happened to the data during the attack, it is up to the respective data controller or data processor to analyze the long-term impact on the data, the data subjects, and the respective stakeholders.

Under the Dutch Security Breach Notification Act (WMD), established in 2016, data integrity breaches trigger the notification protocols, just as confidentiality and availability breaches do. Under Article 33 of the General Data Protection Regulation (GDPR), loss of control is likewise a trigger to notify the data protection authorities.

In most cases it will be very difficult to demonstrate accurately that the breach has not resulted in a risk to the rights and freedoms of the respective natural persons (or, as set forth in both the GDPR and the WMD, that the breach did not adversely affect the data or the privacy of the data subjects), obligating the data controller to notify the authorities.

Besides notification, what other measures should be put in place to monitor irregular activities, and for how long? The window of liability for any identity theft resulting from the breach will remain open for quite a while, so mitigating that risk should be at the top of the priority list.

Encryption
Encrypting data and maintaining the encryption keys on site would not have spared an organization from falling victim to such an attack, but it would have significantly reduced the exposure. It would allow an organization to convey with confidence that, because the original encryption keys never left its premises, it remained in control of the data even while the attackers held it encrypted under another set of keys.
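
The underlying idea can be sketched in a few lines of Python with the cryptography package's Fernet recipe. This is a simplified illustration; the key handling shown here stands in for what would, in practice, be an on-premises HSM or key-management service:

    from cryptography.fernet import Fernet

    # Generate a data-encryption key and keep it on-premises,
    # never stored alongside the data it protects.
    key = Fernet.generate_key()
    fernet = Fernet(key)

    # Encrypt before the data ever leaves your control,
    # e.g. before uploading it to a cloud service.
    plaintext = b"quarterly customer records"
    ciphertext = fernet.encrypt(plaintext)

    # Even if attackers exfiltrate or re-encrypt the stored blob,
    # it was already opaque to them; only the on-premises key recovers it.
    assert fernet.decrypt(ciphertext) == plaintext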

Accountability
The GDPR aims to give control over data back to the data subjects. Encryption is mentioned four times in the GDPR, which enters into force within a year, on May 25, 2018. It is explicitly mentioned as an example of a security measure that enables data controllers and data processors to meet the appropriate, state-of-the-art level of security set forth in Article 32 of the GDPR. In real-life cases such as WannaCry and similar ransomware attacks, it can also make the difference between control and loss of data, and between keeping and losing trust and reputation.

The GDPR is not about being compliant but about being accountable: ensuring up-to-date levels of protection, continuously, by keeping layers of data protection and security in place to meet the dynamic accountability formula the regulation sets forth.

So encryption can not only spare an organization embarrassing moments and loss of control after ransomware or similar attacks, but it can also help keep data appropriately secure, and the organization accountable.



My Second Attempt at Explaining Blockchain to My Wife

Cloud Security Alliance Blog - Wed, 06/07/2017 - 14:00
I tried explaining blockchain to my wife and here’s what happened…

By Antony Ma, CTO/PowerData2Go, Founding Chairman/CSA Hong Kong and Macau Chapter, and Board Member/CSA Singapore Chapter

I introduced my wife to Python around nine months ago, and now she’s tinkering and has drawn a tortoise on her MacBook. After spending more time on geeky websites, she became more inquisitive, asking me one day, “Can you explain to me what blockchain is and why it is a game changer?” It sounded like a challenge!

With my 15 years of experience in banking, audit, and IT security, I should be able to nail this. I opened my mouth and mentioned some terms I’d read on blogs and news websites—distributed ledger, low transaction cost, no central computer, smart contracts, etc. After 45 minutes and some drawings, she asked, “Why the fuss? Is it like a database with hash?”

It looked like I was able to explain what blockchain is, but I failed to justify why it is ground-breaking. Her question of how a distributed ledger could profoundly transform the Internet went unanswered.

That question struck me too. Despite the many articles on the importance of blockchain and how it could change our digital lives, few explain in layman’s terms how the technology differs from other Internet technologies and why it represents a paradigm shift.

I started reviewing my readings, and here now is my second attempt at explaining blockchain in understandable terms.

The reason for blockchain
It all started in the 1970s, when military research labs invented TCP/IP (Transmission Control Protocol/Internet Protocol), the foundation of the Internet, with a high priority on resilience and recoverability. Researchers could add nodes to or remove them from the system (following certain protocols) without affecting other network components.

Trust (or simply security) was secondary. If your enemy could cripple your network with one strike, protecting the system against espionage or infiltration was irrelevant. Flexibility and resiliency were implemented first, but they came at a cost: the lack of security in the design exposed the network, and the data transmitted on it, to spoofing and wiretapping.

Confidentiality and integrity features were not mandatory in the first version of the Internet. Most of the security features we use today are patches on a design that focused on availability and recoverability. SSL (Secure Sockets Layer), OTP (One-Time Password), and PKI (Public Key Infrastructure) were all adopted after the Internet started proliferating.

Elements of trust such as authenticity, accuracy, and non-reversible records hinge on that non-security-minded design (the first version of the Internet) plus decades of patching. The Internet feels virtual and intangible because the integrity of information is not guaranteed: you don’t know whether you are chatting with a dog. Trust on the Internet relies on the information security controls deployed and on their effectiveness.

A software bug or control lapse may allow anyone with access to a system to make unauthorized changes. For example, a bank employee might exploit a known vulnerability and edit records in a credit score database. Since it has long been proven that no security control is 100-percent effective, trust in cyberspace is built on multiple layers of data protection mechanisms.

We do not trust cyberspace, because information integrity is not guaranteed there. Out of this lack of trust between parties on the Internet, many intermediaries have emerged that use physical-world verification to secure or protect transactions. And since the virtual world is intangible and alterations are sometimes hard to detect, when security controls fail users must fall back on the physical world to fix things, whether by calling a call center or even visiting an office.

Now consider this example from Philipp Schmidt:

In Germany, many carpenters still do an apprenticeship tour that lasts for no less than three years and one day. They carry a small book in which they collect stamps and references from the master carpenters with whom they work along the way. The carpenter’s traditional (and now hipster) outfit, the book of stamps they carry, and (if all goes well) the certificate of acceptance into the carpenter guild are proofs that here is a man or woman you can trust to build your house.

Being in control doesn’t mean it would be easy to lie. Similar to the carpenter’s book of references, it should not be possible to just rip out a few pages without anyone noticing. But being in control means having a way to save credentials, to carry them around with us, and to share them with an employer if we chose to do so.

You may say it is old-fashioned or outdated, but carpenters trust it—even now. Their trust is built on the understanding that the paper cannot be tampered with without leaving a trace. Each page is linked to the next, and alterations are easily detected without relying on a third party. Thanks to the laws of physics, there is no need for an intermediary.

The virtual and physical worlds
Blockchain is the new form of paper in cyberspace, and it breaks the wall between the virtual and physical worlds. Records created using blockchain technology are immutable and do not require other systems or entities for verification. The immutable properties of blockchain are defined by mathematics, much as paper’s are governed by the laws of physics.

An interaction recorded in a blockchain cannot be altered, but you can add a new record that supersedes the previous one. Both the first and the new versions remain part of the chain of records. Blockchain is a technology that defines how that chain of records is maintained. Integrity is an inherent property of a blockchain record.

How does blockchain achieve immutability? The Register has a simple explanation:

In blockchain, a hash is a cryptographic number function which is a result of running a mathematical algorithm against the string of data in a block and results in a number which is entirely dependent on the block contents.

What this means is that if you encounter a block in a chain of blocks and want to read its contents you can’t do it unless you can read the preceding block’s contents because these create the starting data hash (prefix) of the block you are interested in.

And you can’t read that preceding block in the chain unless you can read its preceding block as its starting data item is a hash of its preceding block and so on down the chain. It’s a practical impossibility to break into a block chain and read and then do whatever you want with the data unless you are an authorized reader.
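
To make that chaining concrete, here is a toy sketch in Python in which each block’s hash depends on its predecessor’s hash, so altering any historical record breaks every later link. It illustrates the principle only and is not modeled on any particular blockchain implementation:

    import hashlib

    def block_hash(prev_hash: str, data: str) -> str:
        """A block's hash covers both its own data and the previous hash."""
        return hashlib.sha256(f"{prev_hash}:{data}".encode()).hexdigest()

    def build_chain(records):
        """Link records into a chain; each block stores its predecessor's hash."""
        chain, prev = [], "0" * 64  # the genesis block has no real predecessor
        for data in records:
            h = block_hash(prev, data)
            chain.append({"data": data, "prev_hash": prev, "hash": h})
            prev = h
        return chain

    def is_valid(chain) -> bool:
        """Recompute every hash; tampering anywhere breaks all links after it."""
        prev = "0" * 64
        for block in chain:
            if block["prev_hash"] != prev or block_hash(prev, block["data"]) != block["hash"]:
                return False
            prev = block["hash"]
        return True

    chain = build_chain(["Alice pays Bob 5", "Bob pays Carol 2"])
    print(is_valid(chain))                   # True
    chain[0]["data"] = "Alice pays Bob 500"  # tamper with history
    print(is_valid(chain))                   # False: the alteration is detected

Note that a superseding record is simply appended as a new block; the original stays in the chain, which is exactly the behavior described above.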

Bringing properties of the physical world into the virtual world is why blockchain is ground-breaking.

For my next post, I will write about the physical properties that blockchain creates and how they are related to trust.

Antony Ma received the CSA Ron Knode Service Award in 2013. Follow him on Twitter at https://twitter.com/Antony_PD2G.



Office 365 Deployment: Research Suggests Companies Need to “Think Different”

Cloud Security Alliance Blog - Fri, 06/02/2017 - 14:00
Survey shows what companies expected and what they found out

By Atri Chatterjee, Chief Marketing Officer, Zscaler

It’s been six years since Microsoft introduced Office 365, the cloud version of the most widely used productivity software suite. In those years, Office 365 has earned its place as the fastest-growing cloud-delivered application suite, with more than 85 million users today, according to Gartner. Even so, it’s just getting started. The use of Office 365 represents a fraction — just seven percent — of the Office software in use worldwide, and there is tremendous growth on the horizon. That means there is still plenty of room for enterprises of all sizes to capitalize on the agility benefits of Office 365, but getting the deployment right is the key to success.

Understanding the Office 365 deployment experience
We know that Office 365 brings about considerable changes in IT, so we teamed up with market research firm TechValidate to conduct an independent survey of enterprises that had deployed Office 365 or were in the process of doing so. The results have been illuminating.

We surveyed 205 enterprise IT decision makers from a variety of industries in North America. More than 60 percent of them were managers of IT, 25 percent were at the director or VP level, and 14 percent were C-level. In our questions, we hoped to learn about their experiences in three broad categories:

  1. What they did to prepare for their Office 365 adoption
  2. How the implementation went, given their preparation
  3. What they learned and what they plan to do going forward

Key results

Preparation for Office 365 was “old school” and fell short
A majority of companies surveyed used traditional approaches to prepare for the increased network demands of Office 365. Many increased the bandwidth capacity of their existing hub-and-spoke networks by more than 50 percent in preparation for deployment, and an even greater number (65 percent) upgraded their data center firewall appliances. And while most companies budgeted for big increases in network expenditures, almost 50 percent still ran over cost after deployment.

Fewer than one in three companies implemented a network architecture involving local breakouts to the Internet from branch offices.

Most implementations fell short on user experience due to bandwidth and latency
Even after bandwidth increases, latency was a big problem: 70 percent of respondents reported weekly problems and 33 percent reported daily problems. Firewall upgrades did not help, either; 69 percent of those who upgraded firewalls still had latency problems. Ultimately, these translate into user-experience issues, which almost 70 percent of C-level executives cited as a top concern.

Lessons learned
Seventy percent of respondents are now looking to change their existing network architecture and deploy direct-to-Internet connections to improve performance and user experience.

In addition, 85 percent reported problems with bandwidth control and traffic shaping, and they are now looking for ways to better control network traffic so that business applications like Office 365 are not starved by consumer traffic to Facebook, gaming sites, and streaming media.

More data, insight, and recommendations
The full report provides a lot more data in each of these areas, and it offers key recommendations based on the real-world experiences of over 700 enterprises that have been through the transition to Office 365. You can also check out a summary of the findings in this infographic.

If you are thinking about embarking on such a journey, here are some additional resources to help you plan.



Want To Empower Remote Workers? Focus On Their Data

Cloud Security Alliance Blog - Wed, 05/31/2017 - 14:00

By Jeremy Zoss, Managing Editor, Code42

Here’s a nightmare scenario for IT professionals: your CFO is working from the road on a high-profile, time-sensitive business deal. While he works late on documentation for the deal, a spilled glass of water threatens everything. His laptop is fried; all files are lost. What options does your organization have? How can you get the CFO those critical files back, ASAP, when he’s on the other side of the country?

Remote user downtime has high costs
It’s not just traveling executives who worry IT pros. Three-quarters of the global workforce now regularly works remotely, and one in three works away from the office the majority of the time. Across every sector, highly mobile, on-the-go users play increasingly important roles. When these remote users lose, destroy, or otherwise corrupt a laptop, the consequences can be serious.

  • On-site consultants: Every hour of downtime is lost billable time.
  • Distributed sales teams: Downtime can threaten deals.
  • On-site training and technical support: Downtime interrupts services, which can hurt relationships and reputations.
  • Work-from-home employees: These might not be high-profile users, but downtime brings productivity to a halt—a cost magnified across the growing work-from-home workforce in most organizations.

Maximizing remote productivity starts with protecting remote user data
Businesses clearly recognize the huge potential in empowering remote workers and mobile productivity. That’s why they’re spending time and money on enabling secure, remote access to digital assets. But too many forget about the other end of the spectrum: collecting and protecting the digital assets that remote workers are creating in real time—files and data that haven’t made it back to the office yet. As productivity moves further from the traditional perimeter, organizations can’t let that data slip out of view and beyond backup coverage.

Get six critical tips to empower your mobile users
Read the new white paper and see how endpoint visibility provides a powerful foundation for enabling and supporting anytime-anywhere users.

