Intel has issued an updated advisory for more than 30 fixes addressing vulnerabilities across various products – including a critical flaw in Intel’s converged security and management engine (CSME) that could enable privilege escalation. The bug (CVE-2019-0153) exists in a subsystem of Intel CSME, which powers Intel’s Active Management Technology (AMT) hardware and firmware, used for remote out-of-band management of personal computers. An unauthenticated user could potentially abuse this flaw to enable escalation of privilege via network access, according to the Intel advisory, updated this week. The flaw is a buffer overflow vulnerability with a CVSS score of 9 out of 10, making it critical. CSME versions 12 through 12.0.34 are impacted: “Intel recommends that users of Intel CSME… update to the latest version provided by the system manufacturer that addresses these issues,” according to Intel’s advisory. Overall, the chip giant issued 34 fixes for various vulnerabilities – with seven of those ranking high-severity, 21 ranking medium-severity and five ranking low-severity, in addition to the critical flaw. These latest flaws are separate from Intel’s other advisory last week revealing a new class of speculative-execution vulnerabilities, dubbed Microarchitectural Data Sampling (MDS), which impact all modern Intel CPUs. Those four side-channel attacks – ZombieLoad, Fallout, RIDL (Rogue In-Flight Data Load) and Store-to-Leak Forwarding – allow for siphoning data from impacted systems.

High-Severity Flaws

In addition to the critical vulnerability, Intel released advisories for several high-severity flaws across different products. One such glitch is an insufficient input-validation flaw that exists in the Kernel Mode Driver of Intel i915 Graphics for Linux. This flaw could enable an authenticated user to gain escalated privileges via local access. The vulnerability, CVE-2019-11085, scores 8.8 out of 10 on the CVSS scale.
Intel i915 Graphics for Linux versions before 5 are impacted; Intel recommends users update to version 5 or later. Another high-severity flaw exists in the system firmware of the Intel NUC kit (short for Next Unit of Computing), a mini PC kit that offers processing, memory and storage capabilities for applications like digital signage, media centers and kiosks. This flaw, CVE-2019-11094, which ranks 7.5 out of 10 on the CVSS scale, “may allow an authenticated user to potentially enable escalation of privilege, denial of service and/or information disclosure via local access,” according to Intel. Intel recommends that users of the impacted products update to the latest firmware version. Another high-severity flaw, discovered internally by Intel and disclosed last week, exists in the Unified Extensible Firmware Interface (UEFI), a specification defining a software interface between an operating system and platform firmware (while UEFI is an industry-wide specification, what is specifically impacted is UEFI firmware using the Intel reference code). “Multiple potential security vulnerabilities in Intel Unified Extensible Firmware Interface (UEFI) may allow escalation of privilege and/or denial of service,” according to last week’s advisory. “Intel is releasing firmware updates to mitigate these potential vulnerabilities.” The flaw, CVE-2019-0126, has a CVSS score of 7.2 out of 10, and may allow a privileged user to potentially enable escalation of privilege or denial of service on impacted systems. The vulnerability stems from “insufficient access control in silicon reference firmware” for the Intel Xeon Scalable Processor and Intel Xeon Processor D families, according to Intel. In order to exploit the flaw, an attacker would need local access.
Other high-severity flaws include: an improper data-sanitization vulnerability in a subsystem of Intel Server Platform Services (CVE-2019-0089); an insufficient access control vulnerability in a subsystem of Intel CSME (CVE-2019-0090); an insufficient access control vulnerability (CVE-2019-0086) in the Dynamic Application Loader software (an Intel tool allowing users to run small portions of Java code on Intel CSME); and a buffer overflow flaw in a subsystem of Intel’s Dynamic Application Loader (CVE-2019-0170). Lenovo, for its part, released an advisory with several target dates by which it aims to apply patches for its Intel-impacted products, including various versions of the IdeaPad and ThinkPad.
With businesses continuing their digital migrations to cloud services and applications, IT is finding itself wrestling with how to keep companies’ data safe. The challenge? The cloud has created a next-generation, virtual perimeter. Businesses are using infrastructure-as-a-service (IaaS), cloud storage and software-as-a-service (SaaS) applications housed by third parties, and are connecting to these resources using mobile and fixed devices that are not tied to a company branch office or headquarters. The result is data being housed across a fragmented landscape, where achieving the control and visibility that organizations have traditionally had over their data has become more complex — thus introducing new areas of risk. Threatpost Senior Editor Tara Seals was recently joined on a webinar by Jim Reavis and Sean Cordero of the vendor-neutral Cloud Security Alliance to discuss best practices for locking down data across the cloud-enabled architecture. A lightly edited transcript of the webinar follows. Tara Seals: Thank you for attending today’s Threatpost webinar, entitled “Data Security in the Cloud: How to Lock Down Data When the Traditional Network Perimeter Is No Longer in Place.” I’m Tara Seals, senior editor at Threatpost, and I’ll be your moderator today. I’m excited to welcome our panelists, who will give a pretty comprehensive dive into cloud security, which is a topic that I think is on most people’s minds these days. To that end, let me introduce them. We are going to hear today from Jim Reavis, who is CEO at the Cloud Security Alliance, as well as Sean Cordero, who is VP of Cloud Strategy at Netskope – and he’s here in his capacity as a member of the Cloud Security Alliance today. I wanted to let you guys know that they’re going to run through a presentation, and then after that we’re going to have a panel discussion and a Q&A session with you, our audience members.
You can submit your questions at pretty much any time during the webinar using the control-panel widget on the right-hand side of your screen. If you look, there’s an option for questions. You can click on that to open up a window where you can submit your queries. Speaking of which, I have a couple of housekeeping notes before we begin. First of all, the webinar is being recorded. We’ll be sending out a link where you can listen on demand, so you can share that with your colleagues. We’re also going to eventually have a transcript of the video posted on threatpost.com, so keep an eye out for that. With that, before we get started, I also wanted to just briefly frame our discussion and talk a little bit about why this topic is so timely – or why we think it’s so timely – when businesses are embracing on-demand and software-as-a-service (SaaS) applications at a rapid clip. I think we’re aware that small businesses might have only three or four applications, but Fortune 500 companies might have literally thousands of cloud applications. So this is something that is definitely unavoidable. On top of that, businesses are using infrastructure-as-a-service and cloud storage, expanding their network footprints. They’re connecting to those resources using a vast set of new and different types of devices, both mobile and fixed, that may or may not be located within a company branch or headquarters. And the result is that you have a lot of data flying around. You have both structured and unstructured data that can either rest in some kind of cloud repository or fly back and forth between endpoints and various services. And all of that is spread out across multiple parts of the corporate architecture – some parts of which the business might manage or own themselves, and other parts they might not have a whole lot of oversight over, because they’re hosted in the cloud.
So you end up with a fragmented landscape where a lot of the control and visibility that organizations have traditionally enjoyed over their data has kind of gone away. That in turn introduces risk – and new areas of risk – where the concerns that people should maybe be thinking about aren’t necessarily that well-known. Jim and Sean are going to cover this ground today, and I’m really excited to hear what they have to say. They’re going to give us some ideas and best practices for locking down data across this new cloud-enabled architecture. With that, I’m going to turn it over to these guys. Welcome Jim and Sean. Thank you for joining us. Jim Reavis: Pleasure to be here. Sean Cordero: Thank you for having me. Tara Seals: I’d love it if you guys could introduce yourselves and then tell us a little bit about what you’re bringing to the table today. Jim Reavis: Sure. I’ll go first. Hi, this is Jim Reavis. I started in information security at a bank in 1988, doing some computer security. Obviously the world has changed quite a bit. I’ve always enjoyed being in this industry because it’s a very interesting, thoughtful combination of art and science, where you have the technology, and you also have adversaries – and the psychology of the organizations – to be thinking about. I started the Cloud Security Alliance – started thinking about it in 2007, 2008, when you were starting to see this as a coming trend, with a lot of virtualization, just a very virtualized view of the world. We are now 10 years old and, as a nonprofit, have done a lot of work in terms of vendor-neutral research, best practices and certification for providers as well as individuals. Just happy to be here. I’ll try to share as much of what I have learned over those 31 years as might be relevant to the topic. Tara Seals: Great. Thank you. Sean, what about you? Sean Cordero: Hi. This is Sean Cordero. Thank you again everyone for joining us for today’s conversation.
I’ve been in the IT and security space now going on 21 years, which is longer than I like to admit. I came up as a network engineer and architect, really focusing on trying to solve the risk-management puzzle as it related to international and global risk at the company that I served. One of the key things that led to my engagement with the Cloud Security Alliance was the acknowledgment that there was an inadequate amount of guidance from other organizations. That then led me to the CSA, where I’ve been a contributor to some of their core research, specifically the Cloud Controls Matrix and the Consensus Assessments Initiative Questionnaire. I can’t believe I didn’t stumble saying that. Happy to be here, looking forward to the conversation. I’m hopeful that folks on the phone are able to glean something from it and ask questions as well. Tara Seals: Great. Well, thank you guys. Appreciate it. And with that, Sean, I’m going to turn it over to you. I know you’re going to be running the slides today, and you and Jim are both going to tag-team on this presentation. I’m excited to hear what you guys have to say, and over to you. Sean Cordero: Great. Thank you very much. Jim, can you see this deck? Good to go? Jim Reavis: Yep. Sean Cordero: Excellent. As we’ve already introduced ourselves, I’ll move past this. For the next 25 to 30 minutes or so, there’s going to be a lot of content that we’re going to be sharing. One of the key things that we really want to encourage everyone to do is to please ask questions. We want to keep this interactive. At the same time, if there’s something that you feel you agree or do not agree with, please, let’s have discourse about that. I think that’s how we all get better. For the next 25, 30 minutes or so, I’ll give you an overview of what the core drivers of cloud adoption are, and why it is that it seems to have kind of gotten out of control from an IT risk-management perspective.
We’ll talk about some very specific and troublesome cloud risks that some organizations may or may not know about. Then we’ll provide some high-level recommendations as starting points in terms of trying to get ahead of the inevitable adoption of cloud-based technologies. And then of course, we’ll move forward with the discussion. So in 2012, the Harvard Business Review – in conjunction with, I believe it was Verizon at the time – did a study. And what they found was that this cloud-adoption thing was moving pretty quickly, and much faster than anyone had anticipated. In 2012, what they said is: organizations that are moving towards cloud will have a competitive advantage in terms of competing in the market. Interestingly enough, three years later they came back and did a very similar analysis, and what they found was a bit startling. What they found is that the organizations that had not adopted cloud, or had no plans for cloud adoption, had actually lagged significantly behind and fallen down from a competitive standpoint. So cloud, literally from their analysis, has become table stakes for most business leaders, simply due to the agility and speed and capability that it provides. That also has been echoed by some of the top leaders in the cloud space, where Marc Benioff, the founder and CEO of Salesforce, has said on multiple occasions that this is really the next evolution and revolution in terms of how we work and interact with data, how we interact with process, and ultimately how we empower businesses. When we think about our push to digital transformation and what it means from a security and risk-management standpoint, there are some really tough truths that I think we as the security industry, and even as security practitioners, have had to face directly or indirectly. But now I think cloud and cloud adoption has really forced and exposed a lot of the weaknesses that have existed across the information and cyber landscape.
We all know that breaches keep going up, et cetera, et cetera. One of the things that rarely gets asked is: well, if we’re getting better at security, why does the problem seem to get worse? Part of it is – and I’m speaking simply from my point of view – I don’t actually think security as a practice is something that IT and cyber were really that good at to begin with. And I’m painting with a very broad brush here. Probably due to the fact that some of the things that we all struggle with as cybersecurity pros are really fundamental, basic things that often don’t even fall within the purview of cyber. For example, you have organizations that will spend an inordinate amount of time on managing vulnerabilities – some of them on developed applications, or in other cases just simply getting a patch-management process in place – and that becomes sometimes like a multi-month, multi-year effort, and often it never really gets to where it wants to be. But now, since a lot of that responsibility has kind of been pushed out to the cloud, you still have, as the cyber-professional, a responsibility to ensure that not only do you understand what your provider is providing you, but – really the crux of this discussion – what is it that you can effectuate from a controls perspective? And that’s kind of the crazy part, because what we found is that the great majority of organizations that are saying, “Hey, we’re doing a cloud move or a cloud migration,” they actually may or may not know – or often, I think, they know but kind of avoid that discussion because it’s difficult – that really the great majority of cloud usage is already in their enterprise, and it’s not under the control of anyone in that organization. That creates immediate friction, because as professionals we get stuck.
How is it, then, that we’re going to enable, let’s say, our human resources team that is utilizing a software-as-a-service platform to do payroll – but they didn’t get it set up via IT, thus it’s not set up via single sign-on, and it doesn’t utilize some of the basic controls that cyber might want? How is it, then, that we come in as risk-management professionals and tell them that they can’t use it anymore? And that immediately puts us at odds – it always has – but in the modality of cloud-based access it’s even worse, because there’s nothing we can do to really prohibit it from the get-go, outside of some architectural things. But the risks are really the same. I mean, this is the issue that makes it so complex: we’ve had management and risk-management models and cyber-technical control models that have been in existence for a solid 30 years – and I’ve always kind of questioned the efficacy of those anyway – but now we’re trying to apply them in the cloud context, where all of these components are really completely different. This is where we start looking at the core challenges, of which there are a lot – and Jim will be speaking to the shared responsibilities and the scope in more detail a little further on here – but one of the things that is very troubling lies with the cloud service providers (CSPs). And I understand why from a business standpoint and also from a supportability standpoint, but additional features that might be necessary to provide data protection, or to reduce the management overhead associated with risk management, often require the consumer of the service to pay extra for them. Which is fascinating, because a lot of the CSPs will also, in the same breath, speak about how deep and wide their security capabilities are.
But then we have another challenge. Many years ago – and I believe, Jim, it was the Open Shared API Initiative that the CSA was driving as a research project – one of the ideas that the CSA had brought to the industry was: why don’t we create a uniform set of application programming interfaces (APIs) that can then be leveraged across the entirety of the space? Maybe Jim, you can speak to how well that was received. I can certainly speak to what I’ve seen in terms of the adoption of something like that. Jim Reavis: Sure. I think that there’s an aspect of how we do security, or how we think about IT in general, that is somewhat idealistic – that we can create a massive amount of standards that allow a maximum amount of flexibility. The idea in the open-APIs working-group project was to allow a certain modicum of portability from a consumer perspective – and that could be an enterprise consumer – to be able to securely manage and encrypt information to a variety of different cloud providers. You don’t necessarily see things happen that way. From a cloud-provider perspective, we’ve seen them innovate to compete with each other to provide a lot of unique services. Those unique services could be considered proprietary, and you could say that’s proprietary in a good way. So there’s a continuing back-and-forth that I think we need to manage and understand. It is going to be a complex environment, and we can try to advocate – and the consumers of cloud can try to insist – that their providers adhere to standards that allow you to, for example, bring your own keys to any cloud provider, which is something we advocate quite a bit, and it’s been in our guidance. But it really is a challenge, and it’s probably foolish to think we are going to have such a level of cooperation among all the different cloud providers that it’s easy to move applications and data between them seamlessly.
We can always strive towards that, but we need to understand it’s continued to grow in complexity. Sean Cordero: Yeah, that’s great insight. And to everyone on the phone, Jim made a very critical point there, which is that the cloud consumers – i.e., your enterprises – really need to drive the need for that by requesting it and pushing your CSPs to engage with your other partners. Because what’s happened is that not only are certain security features in some cases behind paywalls, there is no parity around this, which then leads to a really complex problem. Back in the day when single sign-on was first introduced, everyone was like, “This is great. We would love to expand this elsewhere.” And it did create a boom in terms of internal effectiveness and efficiency for IT and security teams. However, in the cloud model that is at a minimum table stakes – having some sort of identity provider. But what isn’t solved for – and it’s tied to the first portion – is the fact that if you need to create security policies, say data-protection policies, on one cloud and data protections on another cloud, you’re forced to log in to each one of these clouds independently and configure them. Now, I grew up as a Windows administrator as well, and I remember how difficult it was just to get the correct folder rights set up for a share. Imagine trying to get the folder rights on a SaaS service, and then ensuring that the SaaS service is configured in a secure manner, on top of which, hopefully, you’re paying for the additional security features that actually enable you to do more control. It’s kind of like this vicious cycle that we’re finding ourselves in, and I think this is where we as practitioners and folks that consume cloud services really need to engage with the CSPs to rethink this, because I don’t believe it’s going to be a model that’s going to do the right thing for organizations.
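The per-cloud configuration pain Sean describes can be sketched in code. This is a minimal, hypothetical example – the provider setting names and shapes below are invented, not real CSP APIs – showing how one common data-protection policy must be hand-translated into each provider's own format in the absence of uniform APIs:

```python
# One policy the security team actually wants, expressed once.
COMMON_POLICY = {
    "block_external_sharing": True,
    "require_sso": True,
    "encrypt_at_rest": True,
}

def to_provider_a(policy):
    # Hypothetical "Provider A" expresses the same controls with its own
    # dotted flag names and inverted semantics.
    return {
        "sharing.external.enabled": not policy["block_external_sharing"],
        "auth.saml.enforced": policy["require_sso"],
        "storage.encryption": "AES256" if policy["encrypt_at_rest"] else "none",
    }

def to_provider_b(policy):
    # Hypothetical "Provider B" uses a different shape again -- a second
    # translation layer that must be written and maintained separately.
    return {
        "externalSharing": "deny" if policy["block_external_sharing"] else "allow",
        "ssoRequired": policy["require_sso"],
        "encryptionAtRest": policy["encrypt_at_rest"],
    }

print(to_provider_a(COMMON_POLICY))
print(to_provider_b(COMMON_POLICY))
```

Every additional CSP means another adapter like these, each logged into and configured independently – the vicious cycle described above.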
And then, because the vendors are limited, it creates a lot of friction for our end users and the folks that really are getting the most benefit from the usage. Some of the key things that lead to very specific cloud risks are tied to the data-protection piece. And some organizations may or may not be aware of this. I know in my other capacity this is a conversation that we talk about a lot. We’ve already discussed the business organizations – those are your lines of business, your sales teams, your marketing teams, your human resources teams. Ironically, our research shows that the top two organizations within almost any enterprise that tend to adopt cloud fastest – and often can create exposure, because maybe they’re not engaging with security – are human resources and marketing, where those two lines of business tend to kind of switch back and forth. Part of it is because they may perceive that the usage of a particular cloud, irrespective of how “secure” it is, isn’t really in the purview of IT, because unlike in the past, where they would have to call somebody and say, “Can I get this deployed? Can I do this? Can I do that?,” that is not a process or workflow that exists in the cloud context. It doesn’t have to exist, because by definition it’s meant to work that way. That leads to a variety of other issues as well. One of the key things is that we kind of ignore the fact that this data is doing this now. A lot of organizations are still trying to go down the path of architecturally home-running everything back down to their on-premises security stack. But what very quickly occurs is that you have this architecture that really wasn’t that effective to begin with, if you really think about it. One of the key things that a lot of organizations are dealing with is the pervasiveness of phishing attacks.
I don’t know if everyone recalls why some of those attacks became so prevalent early on, and why organizations were so susceptible to them. It was because it was a very effective way to bypass all of the traditional network controls, because of the trust model in and of itself – where anything coming out from your network going to the internet is considered to be safe. Phishing-like attacks and command-and-control-type attacks exploit that. Sadly, the technology in place right now can’t really handle that. So where we end up is – this is one of the really scary parts, and something that I work with a lot of organizations on. If we think about this gate as your … I was going to use a firewall as an example. It could also be a proxy. Say we decide, “Hey, we want to prohibit our enterprise from going to bad sites” – those might be non-work-appropriate sites, potentially illegal sites, or even storage services that are not cleared by IT or cyber or risk. Traditionally, the way this gets handled is you’ll put some rules in place on your firewall going outbound, put some rules in place on your proxy going outbound, and then you’ll kind of call it good and leave it to the vendor to say, “Yes, this is a good site. This is a bad site.” But back to the issue that I brought up before: because of the lack of openness in terms of standards for the integration with existing security tools and other new tools that might come into existence, you end up with a situation where, simply by enabling these services outbound through your enterprise, you’re actually making an implicit, and sometimes explicit, acceptance of risk. And that risk acceptance looks like one of your end users, intentionally or unintentionally, taking your data on one of the devices that you are responsible for, and moving it to a different tenant on that same cloud.
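The tenant-blindness described above can be illustrated with a minimal sketch (the SaaS domain and tenant names are hypothetical): a domain-level outbound rule of the kind a firewall or proxy applies cannot distinguish the corporate tenant from a personal tenant on the same approved service.

```python
# Approved CSP domains allowed outbound -- a typical firewall/proxy allowlist.
ALLOWED_DOMAINS = {"example-saas.com"}

def egress_allowed(hostname: str) -> bool:
    # The rule matches only the registered domain, so it has no notion
    # of WHICH tenant of the service is being reached.
    return hostname.split(".", 1)[-1] in ALLOWED_DOMAINS or hostname in ALLOWED_DOMAINS

# The corporate tenant and an employee's personal tenant look identical
# to this control -- both are allowed, so data can move between them.
print(egress_allowed("acme-corp.example-saas.com"))     # corporate tenant
print(egress_allowed("personal-123.example-saas.com"))  # a different tenant, same cloud
```

Blocking the second lookup while permitting the first requires instance awareness – inspecting which tenant is being addressed – which is exactly what these domain-level controls lack.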
What happens then is that your traditional controls do not have any way of prohibiting that, because the only way to traditionally block it would be through some level of acknowledgment that it’s going to a different instance. The way that a lot of these technologies work, they don’t do that. This is one example where the consumer can really drive that discussion, because for me, as a practitioner who’s very passionate about this, that is an unacceptable risk. I could never go to my executive vice president and say, “Hey, just so you know, we’re totally okay with somebody saving all of the sensitive stuff to their home version of Office 365.” But this is where we need to really stand with the CSA and all these other organizations to force that discussion between our security vendors and our cloud service providers, to get us all in a healthier place to address things like this – because right now there is no way to easily address it. When we think about this whole data piece and how it goes, one of the key things that happens quite regularly is, if you think of the kill chain and you say, “Well, how does it actually change in the cloud?,” it gets even a little scarier. It’s unrealistic to say, “Hey, we’re going to just pull back all the cloud because Sean and Jim were talking about this kill chain, and we’re at high risk,” because the problems are still effectively the same. It’s just a question of how you approach it. I’ll give you an example. Let’s say for a minute that you’re utilizing a CRM – your sales team uses a CRM. Insert whichever it may be. It can be one of the leading ones, or it could be a startup that nobody knows about. So let’s call it – I’m going to call it seancrm.com. Now we’ve got a bad actor out there really interested in our customer information. The way that they would’ve tried to poke and prod the infrastructure in the past, they would have to get a foothold internally, which is fairly trivial via phishing.
I’m not saying that it would’ve been any better in the on-prem model. In fact, it’s probably worse. But what they would do in the past is they would sit there and do things that were fairly loud, like port scanning, or they would learn something by gleaning header information. And it was all very rudimentary. But now, with cloud, if I’m a bad actor and I know that your organization is utilizing this particular CRM, all I have to do to start finding ways to potentially attack your instance, your tenant, is figure out what the name of the tenant is – which often is your company’s name. For example, let’s say it was martyscars.seancrm.com. That’s most commonly the naming scheme that’s used across CSPs: your company name, plus their FQDN at the end. Well, if you know that, now all of a sudden you can start doing something very basic, like: hey, let me see if I can figure out how I can log in to these things. With that information in hand, you can then start tailoring very specific attacks. So if you want to do spear-phishing attacks against senior executives or research folks, you can leverage that knowledge to create highly customized, very difficult-to-prevent delivery mechanisms that look completely legitimate. Because of the way that a lot of our technologies have worked, and the fact that our CSPs may or may not be quite where they need to be in terms of their ability to protect against these types of things, you get stuck where now you have another vector where your data can actually slip out. Let me give an example of where this has actually occurred and continues to occur. Let’s say somebody wants to do a very specific spear-phishing campaign. One way that they will get around all of your controls is simply by leveraging the fact that, within our architectures, we are trusting the final destination of the CSP.
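The tenant-guessing step described here can be sketched roughly as follows, reusing the hypothetical seancrm.com example. Given the common <company>.<saas-domain> naming scheme, likely tenant hostnames can be derived from public company names alone; the variant rules below are illustrative assumptions, not a real tool:

```python
def candidate_tenants(company: str, saas_domain: str) -> list:
    """Derive plausible tenant hostnames from a company name.

    Follows the <company>.<saas-domain> scheme; the variants generated
    (bare name, hyphenated name, name + 'corp') are common guesses.
    """
    base = company.lower().replace(" ", "")
    names = {base, company.lower().replace(" ", "-"), base + "corp"}
    return sorted(f"{n}.{saas_domain}" for n in names)

# An attacker who knows only the company name and the CRM in use can
# enumerate candidate login endpoints to probe or to mimic in a phish.
print(candidate_tenants("Martys Cars", "seancrm.com"))
```

This is why tenant names that follow the company name are effectively public information, and why defenders should assume attackers know their cloud login URLs.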
Let’s say for a minute that you have a cloud service provider that you’re engaged with. They’re doing data storage for you. It might be user-level. It might be server-level. But what you end up with is that your machines – I mean your devices, or your mobile devices – do have a kind of trust between that CSP and what you’re doing. Usually, from a risk-management standpoint, organizations sign off and say, “Yeah, of course you can use that.” Well, the attackers know this, and what they do instead is, when they create their phishing campaigns, they leverage the fact that you are further trusting that CSP – so they will provide a link as part of the spear-phish that ties to the same cloud, but it’s not against your tenant. What ends up happening is, when a user gets phished, your SWGs (secure web gateways) and firewalls – all of that – can’t do anything to prevent it, and now your user is exposed. Interestingly enough, one of the things that was identified is that you’re seeing attacks where end users are being compromised – and subsequently the larger part of the enterprise is being compromised – by a combination of drive-by infections, which have always been a thing and continue to be a thing, where users in a browser are accessing some site that’s got malware loaded. They get infected. And then from there, the attackers start feeding them other payloads, leveraging the cloud infrastructure as a repository. Again, because they know that from a detection and control standpoint there’s little that can be done, you end up where not only is it difficult to identify the attack, but in addition, it’s really difficult, if not impossible in some cases, to pull that back once it’s occurred. With that, do we have any questions at the moment from the audience, Tara? Tara Seals: Hi Sean. Yeah, we do actually have a couple of questions. If you wanted to go ahead, we can maybe field those.
We have a question about, I guess, who has a level of oversight over the cloud providers to make sure that they’re compliant, versus just managing risk – I think that is what the person is asking. She says that she gets push-back that she is requiring more than what other customers are asking for. She’s being told that drive encryption is good enough, but with multi-tenancy it doesn’t ensure that data is protected from other customers, especially with shared keys and administrators or service accounts that can access all of them. She wants to know: can anyone measure the right level of controls and requirements within a cloud environment? Sean Cordero: Yeah, that’s a great question. Jim, you want to take that one, and I can tag along after? Jim Reavis: Sure. So just kind of stepping back to who governs the cloud and who manages it: it tends to cut several different ways, where there are sort of national types of standards – you look at something like FedRAMP for the United States, covering the federal government’s procurement of cloud based on NIST standards. That’s something where you tend to see a lot of alignment even in the private sector. So you have that country-based thing. You have maybe industry-based things, like PCI for the payment-card industry, where they’ve tried to adapt some of that. And then you have just regulatory bodies that try to use available standards. Because technology is moving so quickly, it’s hard to use standards that take years and years to develop. That’s where an organization like the Cloud Security Alliance comes in: we move pretty quickly in creating best practices, and we map them to a lot of different global standards that are out there. It ends up that an organization doing risk management in the cloud has to understand what are the applicable laws that they need to be dealing with.
And then they look at a sort of hybrid approach: take something like the Cloud Security Alliance’s Cloud Controls Matrix and our STAR program, look at how they map to these different standards, understand the different governing laws, and bring those all together in a risk management program based on what you’re doing. In terms of getting more specific to the question about encryption and the control over shared secrets: we’ve had this in our best practices for quite a while that, ideally, the appropriate way to manage the data is that the user, the tenant, the owner (in EU parlance, you might say the data controller) should be managing the keys directly and encrypting the information. Ideally you get to a point where the cloud provider is a data processor: they’re managing the systems, but they’re not actually managing your data. That’s very easy to do in infrastructure-as-a-service. In software-as-a-service, it’s very difficult to do as it’s implemented today; the SaaS providers actually need to be able to manipulate the data to make sure it’s correctly backed up and everything else. We’re moving to a point where, I believe, we’re going to have a sort of hybrid, best-of-both-worlds model where you bring your own key. Because we don’t have this perfect world, it becomes very important to look at other, indirect controls and ask, for example: does the cloud provider have very good vetting of their employees, do they have security clearances, do they have proper training, do you have the proper audit trails so that if someone does have physical access to information, we know it’s being governed properly? You have to end up looking at a lot of those different things. 
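The bring-your-own-key model described here, where the tenant rather than the provider holds the key and the provider stores only ciphertext, can be sketched as follows. This is a toy illustration in Python using only the standard library (HMAC-SHA256 as a keystream plus encrypt-then-MAC); it is not production cryptography, and a real deployment would use a vetted AEAD cipher such as AES-GCM from an audited library.

```python
import hashlib
import hmac
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Expand key + nonce into a pseudorandom keystream (HMAC-SHA256 in
    # counter mode). Toy construction, for illustration only.
    out = b""
    counter = 0
    while len(out) < length:
        block = hmac.new(key, nonce + counter.to_bytes(8, "big"),
                         hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Tenant-side encryption: the CSP only ever sees nonce + ciphertext + tag.
    nonce = os.urandom(16)
    ct = bytes(p ^ k for p, k in zip(plaintext,
                                     keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()  # encrypt-then-MAC
    return nonce + ct + tag

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed: blob was altered")
    return bytes(c ^ k for c, k in zip(ct, keystream(key, nonce, len(ct))))
```

Because decryption fails closed on any tampering, a provider (or another tenant) holding only the blob learns nothing about the plaintext and cannot alter it without the customer-held key.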
We would again encourage you to look at their certifications and the audits that they’ve had, and whether they align with things like CSA’s Cloud Controls Matrix and our STAR program. Tara Seals: Okay, great. We do have one more question along the same lines. This person would like to know if there are any independent reports out there that you guys are aware of on the security posture of available cloud providers, infrastructure providers like Amazon or Azure, with recommendations on who has the better security posture. He would also like to know if any ethical hackers have tackled the question, I’m assuming in the sense of hunting for bugs. Jim Reavis: I can take just a quick pass on this and give Sean a chance. Frankly, I wouldn’t trust some report that compared them in a Consumer Reports fashion and said one or the other is better, because it is so complex. What we find is that 80 percent to 90 percent of the security responsibility remains with the customer. But I’ll say this: on an apples-to-apples basis, for the scope of what the major, tier-one cloud providers are responsible for, they are far better than anyone in the world. Maybe there are a few banks and a few defense departments in different nations that are equivalent, but they do a far better job. That’s why cloud can be very secure. But most of the responsibility is on your side. So I would look at how they all answer the different compliance questionnaires, but you have to turn inward and say, “How am I using it? What are the different applications I’m going to be using?” and then decide, from a risk management perspective, whether this is the right solution. But comparing an AWS to Azure to a Google Cloud Platform, if we’re talking about the big US-based ones, they are all an order of magnitude better than what any typical customer would be able to do on their own, apples to apples. Tara Seals: Got it. Thank you. Okay. 
And let’s tackle one more before you get back to the presentation, if that’s okay. We had another question: this person wants to know, how can one ensure that the cloud provider is not commingling your data, that it’s being deleted from backups, and that temporary or redundant copies are being eliminated as requested? Is there any way to keep tabs on that? Jim Reavis: Sean, you want to answer this one? Sean Cordero: Yeah, I can take that one. The answer is the honor system. And that’s kind of where we as an industry are at a crossroads. It’s like the other two questions, because to me these are all interrelated and hitting on the same problem. I’m going to jump ahead to one other slide here really quick, because this is what I was going to chat about; we’ll come back to the other piece, because it’s all interrelated with the last three questions. Jim stated very clearly, and I 100% agree, that the majority of the responsibility in the shared responsibility model still falls on the customer. What I think has been happening is that there has been an over-focus, and I think it’s actually a response to the ineffectiveness of being able to effectively manage risk in the cloud. I.e., Jim mentioned bring your own key. Well, if we think about that, it’s such an obviously necessary thing. But why is it so difficult for the CSPs to support it? It’s because we’ve never coded it that way, and until the market (i.e., us as practitioners) demands that they enable these types of things, it’s always going to be whack-a-mole in terms of the controls that are necessary to really secure your data. One of the things that occurs in this situation is this idea of understanding the controls and the gaps of the cloud service provider. But I don’t see it from the perspective of, hey, AWS doesn’t do this one thing and thus we’re going to move away from them because they don’t meet such and such control. 
Now, in some cases that’s totally appropriate. In other cases, it may not even be that meaningful, because the majority of the risk continues to reside with the consumer. If we think about where a lot of us right now are standing up third-party risk management or cloud governance programs, I actually believe, and no disrespect to anyone that’s doing this, but having done it and seen the end result of it, that it’s kind of a huge waste of time in the long term. The reason is because, if you ultimately come down to a conversation with your CSP where you’ve identified a control deficiency that for whatever reason is critical to you conducting business, your only way of effectuating change there is, first, if it’s a bug that they acknowledge and can fix. Second, if it’s not a bug but just a lack of a feature, it’s whether you can convince them to create and add that feature; it’s got to go into the CI/CD pipeline and get prioritized just like our IT teams’ work, except they’re dealing with a much larger scale, and they’re going to be a little more risk-averse. Why? Because if they make a change like that, it is going to be across the board, and it may have significant negative impacts across all of their clients, against everyone that they’re serving. So they tend to slow-roll some of that stuff. And then third is that you end up asking these questions over and over again: do you have this, do you have that? What they’ve done now, and a lot of this is great because the CSA has been leading this whole effort for some time, is give a contextualized view through reporting, via the CSA’s Cloud Controls Matrix or the Consensus Assessments Initiative Questionnaire, or via STAR certification, that actually gives you a higher level of assurance that they are doing the right thing. 
One of the questions that was asked is where you could get a sense of where the cloud providers reside, where they stand. Jim made a good point that it’s so difficult to assess them from the outside in, so I would say that the best resource you will find is going to be at the Cloud Security Alliance, via its STAR registry. The STAR registry has the same types of questionnaires that our teams are all going out and asking our vendors for, and those are pre-answered in many cases by the leading CSPs. Then you have an initial starting point to assess them. But then again, it comes back to: how much time will you spend as a risk practitioner assessing these CSPs over and over again, when really your only levers are going to be, first, during your negotiation, i.e. when you have a contract; second, post-issue, when something related to a bug or a failure of service gives you a little bit of leverage; and third, the threat of a lawsuit or of you walking away. Those are not very good options for us as security practitioners trying to get capability added. So if you really think about it, it all comes back down to either a ticket that gets submitted, or a contractual/legal conversation of which we as practitioners may or may not be part, because that’s potentially going to go to litigation if something bad happens because of it. That’s why I say, broadly, that we spend so much time trying to assess how well they do it. Meanwhile, I’ve seen organizations that literally have a staff of three or four people doing this full-time across all the vendors, with an overt focus on the CSPs, while they’re not even looking at their own procurement process to understand how somebody in development is going and spinning up a $10,000-a-month AWS instance. How is it getting expensed? Why does that happen? 
These are the things that actually cause the greater exposure and the greater risk, as opposed to focusing just on what the CSP is or isn’t doing, specifically in the context that you really can’t effectuate change there, at least not directly. I mean, perhaps the Fortune Global five might have the ability to pull those types of levers, but you’re talking about potentially hundreds and hundreds of millions of dollars of spend before a CSP would perk up its ears and go, “Okay, let’s talk about this.” I don’t know about you, but at least for me, we don’t have that kind of budget to spend that much with the CSP, so we end up having to go down the contractual route. Now, there was another question that was asked, which was around how do I ensure that this data isn’t being commingled? What I would suggest is have your technical teams, your architecture teams, work with the CSP. Often the way that the CSPs will engage with you, they’ll have a salesperson and they’ll have a pre-sales engineer of some kind; fairly typical in our industry. There’s almost always an additional level of engineering knowledge that exists outside of the in-field teams, so I would suggest engaging your architects who understand cloud architecture. Hopefully you have those; if not, it’s a good opportunity to engage and learn a couple of things. Really have them help you decompose and understand how it is the provider does things. I think what ends up happening is that because of the lack of visibility, cloud being effectively a black box in most cases, we end up in a situation where we say, “Oh, well, that can’t be good, so we shouldn’t do it,” and it may turn out that it’s actually quite great, but perhaps we haven’t asked the right questions. 
So, to find out the answer to whether or not your data is being commingled: if you ask a CSP seller, somebody that is selling you Amazon, and his or her engineer tells you, “Oh yeah, of course we don’t do that,” they may be giving you a legitimate and correct answer. But I don’t know about you; that wouldn’t be good enough for me. I would want to know more, just so I would feel better about it. And that’s why I think it requires partnership and deep engagement with the CSP. Tara Seals: Got it. Okay, great. Thank you. We’re about 10 minutes away from reaching our time limit here. So did you want to quickly run through the rest of your slides, and then maybe we can field one or two more questions before we wrap? Sean Cordero: Sure. Jim, did you want to speak a little bit about the cloud security focus? I know we jumped ahead a little bit there, but I think it was a good discussion. Jim Reavis: Yeah, that’s not a problem. What I want the audience to understand with this slide is how you should strategically view your responsibilities in securing your organization. The National Institute of Standards and Technology (NIST) came up with a cloud definition years ago. What you’ll see on the left there, in that layered model, is that CSA took the NIST definition of cloud (infrastructure, platform and software-as-a-service, plus the different deployment models) and visualized it as a layered model, where software-as-a-service resides on top of platform, which resides on top of infrastructure-as-a-service. What that should mean to you is that most of the applications and providers you’re dealing with actually exist as a mash-up of several different companies and services. It’s important to understand that. And then the idea of the inverted pyramid is that in software-as-a-service you’re going to have a large number, thousands, of SaaS applications residing on just a small number of infrastructure providers. 
We’ve already mentioned a few of who those are. Also, you might be developing your own applications if you are engaging directly with some of those major cloud providers. The thing you should understand about vetting, procuring and managing is that you’re going to have a large number of SaaS applications and a smaller amount of time in which to vet their suitability and their security practices, and you’re going to have a smaller number of infrastructure providers for which you hold the responsibility: because they make those pretty open platforms, it’s your job to actually implement the security controls when you’re using them. So what you should be thinking about, on the top right, is the shared responsibility. If you are engaging directly with infrastructure-as-a-service (the raw compute, the virtualization, the containerization), it’s mostly on the consumer, the tenant, the data controller to actually implement the security program, and as I’ve seen, that’s 80 percent of the security controls. If it’s a software-as-a-service, fully baked business application, it’s mostly going to be the provider implementing the controls, and then your job becomes more of an audit perspective. So: implement technical security if it’s infrastructure, and do the audit and vendor procurement work if it’s SaaS. There’s a little bit of an exception there: you want to have a very strong identity management infrastructure. 
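The split Jim describes, provider-owned controls at the bottom of the stack and customer-owned controls growing as you move from SaaS down to IaaS, can be sketched as a simple lookup. The layer and control names below are illustrative assumptions, not drawn from any NIST or CSA document:

```python
# Hypothetical responsibility matrix; control names are illustrative.
RESPONSIBILITY = {
    "iaas": {
        "physical_security": "provider",
        "hypervisor": "provider",
        "os_patching": "customer",
        "app_security": "customer",
        "data_encryption": "customer",
        "identity": "customer",
    },
    "saas": {
        "physical_security": "provider",
        "hypervisor": "provider",
        "os_patching": "provider",
        "app_security": "provider",
        "data_governance": "customer",
        "identity": "customer",
    },
}

def customer_duties(model: str) -> list:
    # The controls the tenant must implement (IaaS) or audit (SaaS).
    return sorted(c for c, owner in RESPONSIBILITY[model].items()
                  if owner == "customer")
```

Note how identity stays with the customer in both models, matching the point above that strong identity management is the exception that cuts across every layer.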
There might be some things you can do, encrypting the information before it goes into a SaaS provider, but essentially that’s the big thing: understand the layers, understand it’s a mash-up, understand the shared responsibility in those different areas, and then use your resources appropriately inside your security program to very quickly do the assessment and triage on the SaaS applications, and be very careful and implement strong technical controls on the infrastructure side. To really do all this, it’s about thinking very virtually about the world and understanding that your information and this technology can exist in a lot of different dimensions and planes. So think very virtually. I’ll turn it back to Sean now. Sean Cordero: Thanks Jim. With this context that Jim’s provided us, one of the things we should think about when we’re talking about data security in the cloud is, I think we’ve kind of beaten to death this concept that, yes, it’s important to engage with the CSP, you’ve got to force conversations. But again, Jim stated it best when he said the majority of this stuff still falls on us as internal practitioners, irrespective of where the data is and what CSP it’s residing on. One of the things to consider as we work our skill sets as risk management professionals and cyber professionals is that the focus is going to shift away from the traditional “hey, let’s run the scan,” “what’s the status on that open compliance issue,” or “let’s get the IR stuff up and running.” It all has to change. Even something as simple as vulnerability management in the cloud context actually starts looking more like a different version of vendor management and quality management, to ensure that, if something is found, you’re indeed able to hold the provider accountable, should it unfortunately ever lead to a negative outcome for your organization. 
One of the key things Jim was also talking about is the scope of this assurance: it’s really critical, when you’re looking at the data elements across SaaS and IaaS, that it’s well understood where and how the roles and responsibilities really start and end. This is where, with regulations like GDPR, the Cloud Security Alliance’s Code of Conduct, if you’re not familiar with it, is worth a look: a document created by a global team in conjunction with a lot of the leaders in the privacy space, it gives some excellent guidance around a lot of this. One of the challenges is that even if you’re able to tick the boxes from a controls perspective with a CSP, there’s still a lot, in terms of data flow and ownership of data and who can or cannot access it, that falls on us as the practitioners who have responsibility. But understanding this, I think, provides some of the levers that can then be pulled to effect change at these large CSPs and address these things, because the way it’s been done until now isn’t terribly effective, nor efficient. Jim, is there something else here that you’d like to touch upon? Jim Reavis: No, I think you covered it pretty well. 
Sean Cordero: It’s the same thing that was being discussed before: what’s really critical is understanding each one of these components and knowing where the responsibility resides. And this is where it gets really complicated, when you start talking about data and metadata, which traditionally would fall in the realm of the CSP, and you’re not confident that the data is being handled appropriately, or they haven’t disclosed that perhaps that data is being piped elsewhere. I knew of an organization that found out that the cloud service it was consuming was actually, not dissimilar from what Jim said earlier, cobbled together from two or three different cloud services, with a front end created on top. That was only disclosed when the contracting phase came, and in the contract they stated, “By the way, we have three other CSPs that are part of our service, but don’t worry, we’ve got it handled.” I don’t know how comfortable everyone would be with that, but the fact is that it wasn’t mentioned from the get-go. There’s a very well-known company, known for making very high-end cellphones, that has a consumer cloud service used for storage, for photos and everything like that, and some years ago it was determined that their back end, even though from our point of view it looks like it’s all theirs, is actually hosted on Google. We’re talking Fortune 500 companies, and even there that’s happening. It’s not that it’s been done in any way maliciously; it’s simply a modality for them to be able to deliver the high-quality service that they want. But unless we as practitioners understand that, or ask those questions, don’t expect that anyone’s going to disclose it from the get-go. This comes back down to the concept of context, which is really important. 
Of all the standards that exist, PCI has been one of the most ineffective from a cloud perspective. If you even look at the standard as it stands, the first three controls are talking about an on-prem-based architecture that is not really relevant in the cloud modality. What that means is that if you try to take your controls and standards and simply apply them over to cloud, it’s going to fail miserably, because the way that everything gets done is completely different, and those standards don’t have the context to actually include these things. Even ISO 27001, which is still a great international standard, had to spawn offshoot standards, for example ISO 27017, to address the deficiencies its authors knew were in the standard, because it was never intended to address cloud computing; cloud didn’t exist back then. That’s why the CCM is such a critical component of this tapestry of risk management that we have across the industry: it was first to market, and it is still the leading standard and framework both for getting levels of measurement across your CSPs and for looking inward and asking the hard questions. Then, just the last two things I’ll leave you with. Back to my comment that so much focus on what the CSP is or isn’t doing leaves a massive gap in terms of understanding how you apply and work within that model: let’s say that you do have a data breach. If you were to say we had a data breach associated with a misconfigured S3 bucket, would you, as a practitioner and a stakeholder in this process, know what to do? My finding, and this is empirical, speaking from my experience, is that most organizations are not ready for that. The reason is that the detection process is totally different. Usually SaaS- and IaaS-based deployments reside within a line of business (LOB), so you may not have them feeding back into some sort of system that you have access to. 
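The misconfigured-bucket scenario Sean raises is detectable before it becomes a breach. A minimal sketch, assuming Python and an S3-style JSON bucket policy; the function below is an illustrative check, not a complete AWS policy evaluator:

```python
import json

def public_statements(policy_json: str) -> list:
    # Flag any Allow statement whose principal is the wildcard "*",
    # i.e. the bucket is readable or writable by anyone on the internet.
    policy = json.loads(policy_json)
    flagged = []
    for i, stmt in enumerate(policy.get("Statement", [])):
        principal = stmt.get("Principal")
        is_wildcard = principal == "*" or (
            isinstance(principal, dict) and "*" in principal.values())
        if stmt.get("Effect") == "Allow" and is_wildcard:
            flagged.append(stmt.get("Sid", "statement-%d" % i))
    return flagged
```

Running a check like this continuously against every bucket policy is exactly the kind of cloud-native detection that traditional scan-and-report tooling misses.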
But on top of that, you may not even have access to it to pull the information that’s necessary. Worse, in many cases, specifically in the case of IaaS, where you’ve got, let’s say, EC2-based Windows deployments for whatever reason running in this cloud, and let’s say one gets infected or you have a breach or something like that: how will you handle the forensics on that? What’s the process for that? A lot of organizations are getting caught unprepared because they thought, “Oh well, I’m just going to take my imaging software or whatever else I’ve got, image something, and then inspect it offline the way I’ve done it since the ’90s, for the last 20 years.” Well, it doesn’t work that way. Even basic things like being able to image forensically and bring the image down without any loss of integrity are very difficult to do. So in terms of your data protection, it’s really critical that we as practitioners look at this. One of the key inputs for this is the Cloud Security Alliance’s Top Threats, a great document put together by some of the top minds in the cloud security and risk management space that really calls out a lot of the things we already know as practitioners but find a little difficult to articulate, because it all blurs into everything else. It’s a great resource that helps practitioners understand the landscape, and it’s backed by data as well. A really good resource if you haven’t familiarized yourself with it. Jim, was there anything on the Top Threats that you want to point the team to? Jim Reavis: No, I think we might be into overtime. Tara might give us the hook, but it’s a great document. Go check it out. A lot of free resources at CSA. Tara Seals: Great. And yeah, guys, I think we’re going to have to leave it there, unfortunately. This is such an interesting topic, we could probably continue to talk about it for a very long time. 
But I’d like to thank you both very much for participating today, and I’d like to thank our audience members for joining us. I’m sorry that we couldn’t get to all of the questions, but if you want to reach out to me, the email address is there; I can try to get any and all additional questions answered by these guys, or at least point you to the appropriate resources. I’m here to help. Thanks very much again, everybody, for joining us for our latest Threatpost webinar, and thank you very much Sean and Jim. Jim Reavis: Thank you. Sean Cordero: Thank you. Jim Reavis: Thanks everyone. Bye-bye. Want to know more about Identity Management and navigating the shift beyond passwords? Don’t miss our Threatpost webinar on May 29 at 2 p.m. ET. Join Threatpost editor Tom Spring and a panel of experts as they discuss how cloud, mobility and digital transformation are accelerating the adoption of new Identity Management solutions. Experts discuss the impact of millions of new digital devices (and things) requesting access to managed networks and the challenges that follow.
By Uzair Amir The stolen OGUsers database is available on RaidForums for download. On May 12, hackers managed to steal the database of a well-known account-hijacking forum called OGUsers. This forum is used by hackers and online account hijackers, which means that the hackers have now been given a taste of their own medicine. The database contained around […] This is a post from HackRead.com. Read the original post: Hackers hacked: Account hijacking forum OGUsers pwned
Cisco has issued a handful of firmware releases for a high-severity vulnerability in Cisco’s proprietary Secure Boot implementation that impacts millions of its hardware devices, across the scope of its portfolio. The patches are the first in a planned series of firmware updates that will roll out in waves from now through the fall – some products will remain unpatched and vulnerable through November. Secure Boot is the vendor’s trusted hardware root-of-trust, implemented in a wide range of Cisco products in use among enterprise, military and government networks, including routers, switches and firewalls. The bug (CVE-2019-1649) exists in the logic that handles access control to one of the hardware components. It was disclosed last week. The vulnerability could allow an authenticated, local attacker to write a modified firmware image to that component. A successful exploit could either cause the device to become unusable (and require a hardware replacement) or allow tampering with the Secure Boot verification process, according to Cisco’s advisory. “The vulnerability is due to an improper check on the area of code that manages on-premise updates to a Field Programmable Gate Array (FPGA) part of the Secure Boot hardware implementation,” the networking giant explained. Dozens of Cisco products are affected (the full list is here). In Cisco’s updated advisory, the vendor issued fixes for its network and content security devices, as well as some products in the routing gear segment: the Cisco 3000 Series Industrial Security Appliances, Cisco Catalyst 9300 Series Switches, Cisco ASR 1001-HX and 1002-HX Routers, Cisco Catalyst 9500 Series High-Performance Switches, and Cisco Catalyst 9800-40 and 9800-80 Wireless Controllers all now have updates. Other routing and switching gear patches won’t roll out until July and August, with some products slated for even later fixes, in October and November. Voice and video devices will get fixes in September. 
The good news is that an attacker would need to be local and already have access to the device’s OS, with elevated privileges, in order to exploit the issue. An attacker would also need to “develop or have access to a platform-specific exploit,” Cisco noted. “An attacker attempting to exploit this vulnerability across multiple affected platforms would need to research each one of those platforms and then develop a platform-specific exploit. Although the research process could be reused across different platforms, an exploit developed for a given hardware platform is unlikely to work on a different hardware platform.” Also this week, Cisco issued an updated advisory for a medium-severity Cisco FXOS and NX-OS software command injection vulnerability (CVE-2019-1780); it updated the Nexus 3000 Series Switches and Nexus 9000 Series Switches.
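The flawed FPGA-update check is a reminder of what correct pre-write verification looks like: authenticate the entire candidate image against a trusted reference before any byte is flashed, and refuse the write on any mismatch. A minimal sketch in Python; the HMAC construction and names here are illustrative assumptions, since Cisco’s Secure Boot relies on its own hardware-anchored signature chain, not this scheme:

```python
import hashlib
import hmac

def verify_before_flash(image: bytes, signing_key: bytes,
                        expected_tag: str) -> bool:
    # Authenticate the whole candidate firmware image in one pass and
    # compare in constant time; a single flipped bit must fail the check.
    actual = hmac.new(signing_key, image, hashlib.sha256).hexdigest()
    return hmac.compare_digest(actual, expected_tag)
```

The design point is that the verifier covers the full image and the comparison happens before, not after, the write path is opened; the Cisco bug was precisely an improper check in that gating logic.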
IT services provider HCL Technologies has inadvertently exposed passwords, sensitive project reports and other private data of thousands of customers and internal employees on various public HCL subdomains. HCL, an $8 billion conglomerate with more than 100,000 employees, specializes in engineering, software outsourcing and IT outsourcing. As such, it manages an influx of data regarding in-house personnel and customer projects. On May 1, researchers discovered several publicly accessible pages on varying HCL domains, leaving an array of private data out in the open for anyone to look at. That includes personal information and plaintext passwords for new hires, reports on installations of customer infrastructure, and web applications for managing personnel from thousands of HCL customers and employees within the company. The data was secured on May 8. It’s unclear whether malicious actors accessed the data, but researchers stressed that credentials and internal IDs could be used to log into other HCL systems, while other customer and employee data could be used for other nefarious purposes, such as phishing attacks. “The most obviously sensitive data were the freshly minted passwords for new hires,” researchers with UpGuard, who discovered the exposed data, said in a Tuesday post. “But credentials are valuable because they provide access to information, and the detailed and long-running project plans are the kind of information an attacker might abuse credentials to access. Furthermore, the pages accessible here show how identifying information like internal IDs can be used to expand the scope of a breach to collect more information.” Personnel Data Exposed Several subdomains were included in the set of resources with personnel-specific information from HCL, researchers said – including the information for hundreds of new hires and thousands of employees. 
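Cleartext passwords like the ones found here never need to be stored at all: standard practice is to keep only a salted, slow hash and compare at login time. A minimal sketch in Python using the standard library’s PBKDF2; the dollar-delimited storage format is an illustrative assumption:

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 200_000) -> str:
    # Store a salted, deliberately slow hash; never the password itself.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Illustrative record format: algorithm$iterations$salt$digest.
    return "pbkdf2_sha256$%d$%s$%s" % (iterations, salt.hex(), digest.hex())

def verify_password(password: str, stored: str) -> bool:
    _, iters, salt_hex, digest_hex = stored.split("$")
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                    bytes.fromhex(salt_hex), int(iters))
    return hmac.compare_digest(candidate.hex(), digest_hex)
```

Had the exposed dashboard held records in this form, a leak would have revealed salts and hashes rather than credentials immediately usable against other HCL systems.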
One such subdomain, containing pages for various HR administrative tasks, allowed anonymous access to a dashboard for new hires. This included records for 364 personnel, dating from 2013 to 2019 (in fact, 54 of the new-hire records were as recent as May 6). Most critically, the data exposed cleartext passwords for new hires, which could be used to access other HCL systems to which these employees would be given access. Also exposed were candidate IDs, names, mobile numbers, joining dates, recruiter SAP codes, recruiter names and a link to the candidate form. Another personnel management page listed the names and SAP codes for over 2,800 employees. Customer Data Exposed In addition to HCL employees, the company was also accidentally exposing thousands of records for customers. That’s because a reporting interface for HCL’s “SmartManage” reporting system – which facilitates project management for customers – exposed information regarding project statuses, sites, incidents and more to anyone, unprotected by authentication measures. That included internal analysis reports, detailed incident reports, network uptime reports, order shipment reports and more for over 2,000 customers. The internal analysis report page included a “Detailed Incidences Report,” which listed about 5,700 incidents (including 450 recent records for April 2019). This page exposed data about various incidents within customer projects, including the duration, reason and description. Also exposed was the “Service Window Uptime Report,” which detailed service uptime and any service issues having to do with customer projects. Researchers were also able to access weekly customer reports, including 18,000 records in 2019 for tracking system performances. Finally, more than 1,400 records were exposed from 2016 to 2018 as part of the “Reason Analysis Report,” which outlined shipments, customer issues and other data. 
This included names, email addresses and mobile phone numbers for 15 cab hubs and seven bus hubs.

Disclosure

The data was first discovered on May 1, when an UpGuard researcher, who was monitoring for the exposure of sensitive customer information, discovered a publicly accessible file on an HCL domain. After this initial discovery, it took a few more days to ascertain how big the exposure was: “Due to the nature of the exposure, ascertaining its extent required several days of work,” researchers said. “Whereas a typical data exposure involves one collection of data, either in a single storage bucket or database, in this case the data was spread out across multiple subdomains and had to be accessed through a web UI.” On May 6, after reaching a reasonably complete level of analysis of the public pages and data, the researcher notified HCL. On May 8, the data was fully secured. “In addition to taking to heart the risk of data leaks, business leaders should also note the effectiveness of HCL’s response,” researchers said. “HCL has a data protection officer, which not all companies do. The existence of that role is clearly advertised, and an email address for contacting them is easy to find. Though HCL never responded to UpGuard, they took action immediately on notification.” HCL did not respond to a request for comment from Threatpost.

Data Leakage: An Increasing Issue

Inadvertent data exposure continues to plague companies – in fact, according to the recent Verizon Data Breach Investigations Report, insider-initiated incidents account for 34 percent of data breaches, with many of these being accidental exposures as opposed to malicious. In many cases, data may be posted on a software-as-a-service application (such as Trello), or hosted on an infrastructure-as-a-service platform like Amazon Web Services.
For instance, in May, misconfigured cloud databases inadvertently leaked personally identifiable information (PII) in the care of two companies: the Ladders headhunting and job-recruitment site, and the SkyMed medical-evacuation service. And in April, an Elasticsearch database left open to the internet exposed about 4.9 million data points of PII related to individuals seeking treatment at an addiction-treatment facility, Steps to Recovery. In HCL’s case, the publicly accessible data was available for download straight from HCL domains. “A large services provider like HCL necessarily manages lots of data, personnel, and projects,” researchers said. “That management complexity writ large is the root cause of data leaks in general. In this case, pages that appeared like they should require user authentication instead were accessible to anonymous users. The fact that other pages on those same apps did require user authentication speaks to the challenge that causes data leaks: if every page must be configured correctly, eventually a misstep will result in an exposure.” Want to know more about Identity Management and navigating the shift beyond passwords? Don’t miss our Threatpost webinar on May 29 at 2 p.m. ET. Join Threatpost editor Tom Spring and a panel of experts as they discuss how cloud, mobility and digital transformation are accelerating the adoption of new Identity Management solutions. Experts discuss the impact of millions of new digital devices (and things) requesting access to managed networks and the challenges that follow.
Finding cloud databases with sensitive information left open to the internet has become par for the course these days – as a new exposure of millions of sensitive data points for the users of a golf app demonstrates. Millions of golfer records from the Game Golf app – including GPS details from courses played, usernames and passwords, and even Facebook login data – were exposed for anyone with an internet browser to see: a veritable hole-in-one for a cyberattacker looking to build profiles of potential victims for follow-on social-engineering attacks. Security Discovery researcher Bob Diachenko recently ran across an Elastic database that was not password-protected and thus visible in any browser. Further inspection showed that it belonged to Game Golf, a family of apps developed by San Francisco-based Game Your Game Inc. Game Golf comes as a free app, as a paid pro version with coaching tools, and bundled with a wearable. It’s a straightforward analyzer for those who like to hit the links – tracking courses played, GPS data for specific shots, various player stats and so on – plus there’s a messaging and community function, and an optional “caddy” feature. It’s popular, too: it has more than 50,000 installs on Google Play. Unfortunately, Game Golf landed its users in a sand trap of privacy concerns by not securing the database. Security Discovery senior security researcher Jeremiah Fowler said that the bucket included all of the aforementioned analyzer information, plus profile data like usernames and hashed passwords, emails, gender, and Facebook IDs and authorization tokens. In all, the exposure consisted of millions of records, including details on “134 million rounds of golf, 4.9 million user notifications and 19.2 million records in a folder called ‘activity feed,’” Fowler said.
The database also contained network information for the company: IP addresses, ports, pathways and storage info that “cybercriminals could exploit to access deeper into the network,” according to Fowler, writing in a post on Tuesday. There’s no word on whether malicious players took a swing at the data, but the sheer breadth of the information that the app gathers is concerning, Fowler noted. “When combined, this data could theoretically create a more complete profile of the user, adding additional privacy concerns,” he wrote. “This incident once again raises the issue of how applications gather and store user data. A growing concern about tracking and metadata is that users do not see all of this information, how it is used, or what it is used for.” Diachenko said that he sent notices to Game Golf several times about the exposure but didn’t get a reply. Nonetheless, the database was secured about two weeks after his initial notification. Fowler added that Game Your Game could find itself in the rough if it isn’t dealing with the situation. “It is unclear if this data incident was reported to users who may have been affected or the California Attorney General’s Office,” Fowler noted — California law requires a business to notify any California resident whose unencrypted personal information was potentially exposed. Threatpost also reached out to Game Golf, and will update this post with any comment.
Elastic, the company behind the widely used enterprise search engine Elasticsearch and the Elastic Stack, today announced that it has decided to make the core security features of the Elastic Stack free and accessible to all users. The ELK Stack, or Elastic Stack, is a collection of three powerful open source projects—Elasticsearch, Logstash, and Kibana—that many large and small companies use to format, search, analyze, and visualize large amounts of data in real time. In recent months, we have seen how thousands of insecure, poorly configured Elasticsearch and Kibana servers left millions of users’ sensitive data exposed on the Internet. Since the free version of the Elastic Stack by default does not have any authentication or authorization mechanism, many developers and administrators fail to properly implement important security features manually. The core security features—like encrypted communication, role-based access control, and authentication realms—previously required a paid Gold subscription, but versions 6.8.0 and 7.1.0 of the Elastic Stack, released today, offer these features for free so that everyone can run a fully secured cluster without any hassle. Here’s the list of core security features that are now free in the latest Elastic Stack versions as part of the Basic tier:

- TLS (Transport Layer Security) for encrypted communications.
- File and native realm for creating and managing users.
- Role-based access control for controlling users’ access to cluster APIs and indexes; this also allows multi-tenancy for Kibana with security for Kibana Spaces.
These features now make it possible for users to “encrypt network traffic, create and manage users, define roles that protect index and cluster level access, and fully secure Kibana with Spaces.” However, the company clarifies that its advanced security features like single sign-on, Active Directory/LDAP authentication, attribute-based access control, and field-level and document-level security remain available only for paid customers. You can download versions 6.8.0 or 7.1.0 of the Elastic Stack to take advantage of the security features.
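As a rough illustration of what turning these features on involves (a minimal sketch against the 6.8/7.1 releases — verify the exact setting names and certificate paths against Elastic’s documentation for your version), the relevant `elasticsearch.yml` settings look roughly like this:

```yaml
# elasticsearch.yml -- enable the now-free Basic-tier security features

# Turn on authentication, authorization and the file/native realms
xpack.security.enabled: true

# TLS for node-to-node (transport) traffic; the certificate bundle path
# below is an example -- generate your own, e.g. with elasticsearch-certutil
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12
```

After restarting the nodes, running `bin/elasticsearch-setup-passwords interactive` seeds passwords for the built-in users, at which point anonymous requests to the cluster are rejected instead of served.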
High-quality cybersecurity posture is typically regarded as the exclusive domain of large, heavily resourced enterprises – those that can afford a multi-product security stack and a skilled security team to operate it. This implies a grave risk to all organizations that are not part of this group, since the modern threat landscape applies to everyone, regardless of size and vertical. What is less commonly known is that by following basic, well-defined practices and making wise security product choices, any organization can level up its defenses to a much higher standard. “At the end of the day it comes down to strategic planning,” says Eyal Gruner, CEO and co-founder of Cynet. “Rather than thinking in terms of a specific product or need, zoom out and break down the challenge into its logical parts – what do you need to do proactively on an ongoing basis, while you’re under attack, and when you manage a recovery process?” Among the various frameworks of security best practices, the prominent one is the NIST cybersecurity framework, which suggests the following pillars:

Identify – know your environment and proactively search for weak links attackers might target. Such links can include unpatched apps, weak user passwords, misconfigured machines, carelessly used admin accounts, and others.

Protect – security technologies that automatically block attempted malicious activity. The prominent examples here are AV and firewalls. However, because these cannot efficiently confront the more advanced threats, one should always assume that a certain portion of active attacks will bypass them.

Detect – security technologies that address the attacks that successfully evaded prevention and are live within the targeted environment, ideally as early as possible in the attack lifecycle.
Respond – security technologies that take charge from the point an active attack is detected and validated, enabling defenders to understand the attack’s scope and impact and to eliminate malicious presence from all parts of the environment.

Recover – restore all compromised entities as close as possible to their pre-attack state. Achieving this has much to do with proactive steps such as having backups and implementing disaster-recovery workflows in the context of cyberattacks.

At first glance it seems as if adequately addressing all these pillars is complex, requiring at least one security product for each, says Gruner – and unfortunately many organizations try to take that path. Usually, the end result is a patchwork of products that don’t talk to each other and become heavy resource consumers. The Cynet 360 platform radically simplifies working with the NIST guidelines. The various security technologies Cynet natively integrates are easily matched to each step in the NIST framework: vulnerability assessment and asset management to Identify; NGAV and network analytics prevention to Protect; EDR, UBA, and deception to Detect; and a wide array of manual and automated remediation to Respond. “Our goal,” continues Gruner, “was to make cybersecurity easy and manageable – being able to address most needs with one platform is a major part of our vision.” Learn more about how Cynet addresses the NIST cybersecurity framework in their webinar next week, on May 29, 2019, at 1:00 PM EDT – Security for All: How to Get Enterprise-Grade Security for Your Mid-Sized Organization. Register now to secure your place!
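The pillar-to-control mapping described above can be pictured as a simple lookup table (a sketch for illustration only — the category names are descriptive labels taken from the article, not actual product module names):

```python
# Illustrative mapping of NIST CSF functions to the example control
# categories named in the article (labels are descriptive, not
# vendor module names).
NIST_CONTROL_MAP = {
    "Identify": ["vulnerability assessment", "asset management"],
    "Protect":  ["NGAV", "network analytics prevention"],
    "Detect":   ["EDR", "UBA", "deception"],
    "Respond":  ["manual remediation", "automated remediation"],
    "Recover":  ["backups", "disaster recovery workflows"],
}

def controls_for(function: str) -> list:
    """Return the example controls mapped to a NIST CSF function."""
    return NIST_CONTROL_MAP.get(function, [])
```

A gap analysis then reduces to walking the five keys and checking which categories your current stack actually covers.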
By Waqas Google knows “what, when, where & how much” about your online shopping, but claims it does not use the data for advertising. Gmail is home to more than 1.5 billion users, which makes it a lucrative service for Google, whose business model depends on data collected from users’ online searches, web browsing, and other online activities. […] This is a post from HackRead.com. Read the original post: Gmail wittingly storing your online purchase data for years
Most organizations don’t really have a good way of sharing threat-related data outside of their own industry verticals. Sure, there are Information Sharing and Analysis Centers (ISACs) – e.g., the FS-ISAC for the financial-services industry. But the information still tends to stay in industry-specific silos. In this article I’ll be talking about some new ideas for broadening how threat intelligence is shared, and how to make it more useful.

A Tale of Two Companies

It’s just another Tuesday morning in New York City, and a security analyst at a major financial-services firm sifts through intrusion alerts, in hopes of detecting the next wave of attacks from an unknown adversary that’s been pummeling the firm over the past three months. This is her sole focus. The attacks have gotten worse. The techniques have gotten more advanced. Her job is in the spotlight, and she’s hoping that everything she’s learned about the adversary’s tactics, techniques and procedures (TTPs) will help inform her defense when the next wave strikes. Meanwhile, across the country in Los Angeles, an incident-response team at another major firm completes a report on how the latest cyberattack it suffered took place, what the bad guys got and the series of events that got them there. This company is a fast-growing gaming company, and it has invested millions into hiring the best and brightest security professionals. It shares one thing in common with the financial-services firm in New York: the exact same bad guys have attacked both organizations. Understanding precisely what happened as each of the companies’ defenses failed can be just as informative as if they had stopped the attack outright. But unfortunately, the gaming organization hasn’t shared meaningful information about the adversary with the financial-services firm in New York, so they can’t benefit from comparing notes. This is an all-too-common scenario.
As defenders looking to meet the vision of making the internet more secure in a measurable way, it’s important that we find ways to get more people into ISAC sharing organizations — and to get those sharing organizations talking to each other.

Moving Beyond the ISAC

We need to give organizations of all sizes the ability to share threat-intelligence data and apply strategic controls for addressing those threats in a manner that is measurable and repeatable. This can be done with cloud-based security controls that can be applied uniformly across these organizations, with the goal of cooperatively sharing in the gathering of threat-related indicators of compromise (IoCs) and attacker TTPs. With this new grouping of organizations, we would be able to apply controls in a way that is consistent across verticals, environments and geographical regions. Within the current vertical-focused model, we would need to create new classes or levels of risk for delivering threat intelligence. These wouldn’t be based on the type of exploit or the CVSS severity of a given vulnerability; instead, they would be based on the ubiquity of the current attack campaign, the velocity of its growth, or how novel the exploit technique is, as observed across a large swath of internet-based organizations. This would allow for more granular risk identification, based on how likely a threat is to affect a specific organization. It would also allow for a more accurate measurement of what is actually being exploited, and which threats are active and growing; this aids in priority-setting for emergency mitigation.

Achieving Global Visibility into Web-Borne Attacks

When it comes to combating DDoS and other web-based attacks, global visibility is required for more accurate identification of threats, which would require a different sort of cooperation, among those who provide internet networks.
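One way to picture the proposed risk classes is a priority score built from campaign ubiquity, growth velocity and technique novelty rather than from CVE-style severity. The sketch below is hypothetical — the weights and the 0-to-1 scales are invented for illustration, not drawn from any published scheme:

```python
from dataclasses import dataclass

@dataclass
class Campaign:
    name: str
    ubiquity: float   # fraction of observed orgs seeing this campaign (0..1)
    velocity: float   # normalized week-over-week growth in affected orgs (0..1)
    novelty: float    # how new the exploit technique is (0..1)

def campaign_risk(c: Campaign, weights=(0.5, 0.3, 0.2)) -> float:
    """Blend ubiquity, velocity and novelty into a 0..1 priority score.

    The weights are illustrative; a real scheme would tune them against
    observed exploitation data across participating organizations.
    """
    wu, wv, wn = weights
    return wu * c.ubiquity + wv * c.velocity + wn * c.novelty

def prioritize(campaigns):
    """Rank campaigns for emergency mitigation, highest score first."""
    return sorted(campaigns, key=campaign_risk, reverse=True)
```

Under this scoring, a widespread and fast-growing campaign outranks a technically novel but rarely observed one — which is the point: priority follows what is actually being exploited, not the worst-case severity on paper.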
To achieve more granular and actionable inspection of malicious traffic, it’s possible to use a reverse-proxy architecture deployed geographically close to end-user connections. As traffic passes through the internet, different devices and network paths have different viewpoints on what’s taking place on the web. If we could gather threat-related IoCs by looking at the application layer of a web session (instead of just watching IP traffic hop from one network to the next), we might find that behind a “top-talking” IP address there are actually thousands of individual web sessions taking place. With a reverse-proxy architecture, we can identify client web sessions by surveying the session-related context of internet traffic and apply controls more precisely. This would be especially helpful for web-application or API-level threats. And if we could implement inspection points for malicious traffic at locations very close to where end-user (and malicious) traffic originates, we’d have the ability to watch for abusive volumetric traffic before it accumulates into a DDoS flood. Also, using high-level Border Gateway Protocol (BGP)-routed inspection points, we would be able to implement signature-related rule sets to carry out the basic block-and-tackle against scripted, rent-o-bot-based attacks. Although this might seem like a pipe dream, several vendor organizations are in a prime position to pull it off: large global ISPs, global cloud-services providers and the large content delivery network (CDN) platforms. Using their scale and influential relationships with customers, these internet specialists could band together to develop a new structure for sharing this valuable threat-intelligence data – and, more importantly, actually be able to provide a consumable service around each of these areas.
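The application-layer point can be made concrete with a toy aggregator: given request logs, grouping by an application-layer session identifier (e.g., a session cookie) rather than by source IP reveals how many distinct web sessions sit behind one “top-talking” address. The log schema and field names here are hypothetical, chosen only for illustration:

```python
from collections import defaultdict

def sessions_behind_ip(requests):
    """Count distinct application-layer sessions seen behind each source IP.

    `requests` is an iterable of dicts with 'ip' and 'session_id' keys
    (a hypothetical log schema). IP-level monitoring would see one noisy
    address; session-level grouping shows the clients behind it.
    """
    by_ip = defaultdict(set)
    for r in requests:
        by_ip[r["ip"]].add(r["session_id"])
    return {ip: len(sessions) for ip, sessions in by_ip.items()}

# Synthetic log: one NAT'd or proxied address carries several sessions.
logs = [
    {"ip": "198.51.100.7", "session_id": "s1"},
    {"ip": "198.51.100.7", "session_id": "s2"},
    {"ip": "198.51.100.7", "session_id": "s3"},
    {"ip": "203.0.113.9",  "session_id": "s4"},
]
counts = sessions_behind_ip(logs)  # 198.51.100.7 hides three distinct sessions
```

A real deployment would derive the session key from richer context (cookies, TLS session state, auth tokens), but the principle is the same: controls keyed on sessions can block an abusive client without collateral damage to everyone else behind the same IP.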
By developing a uniform way to share and respond to threats at a global scale, maybe we can knock down the barriers between vertical markets (i.e., those hypothetical companies in New York and Los Angeles) and change the way we think about sharing threat intelligence. (Tony Lauro manages the Enterprise Security Architecture team at Akamai Technologies. With over 20 years of information-security industry experience, Tony has worked and consulted in many verticals, including finance, automotive, medical/healthcare, enterprise and mobile applications. He is currently responsible for Akamai’s North America clients, as well as the training of an internal Akamai group whose focus is on web application security and adversarial-resiliency disciplines. Tony’s previous responsibilities include consulting with public-sector/government clients at Akamai, managing security operations for a mobile-payments company, and overseeing security and compliance responsibilities for a global financial software services organization.)
Data Privacy Notice
- All product names, logos, and brands are property of their respective owners.
- The use of these names, logos, and brands is for identification purposes only and does not imply endorsement.
- Content syndication and aggregation of public information is solely for the purpose of identifying information-security trends; all syndicated content contains source links to the content creator’s website. All content is owned by its respective content creators.
- If you are the owner of some content and want it removed, please email firstname.lastname@example.org