The U.S. Federal Election Commission (FEC) said today political campaigns can accept discounted cybersecurity services from companies without running afoul of existing campaign finance laws, provided those companies already do the same for other non-political entities. The decision comes amid much jostling on Capitol Hill over election security at the state level, and fresh warnings from U.S. intelligence agencies about impending cyber attacks targeting candidates in the lead-up to the 2020 election.

Current campaign finance law prohibits corporate contributions to campaigns, and election experts have worried this could give some candidates pause about whether they can legally accept low- to no-cost services from cybersecurity companies. But at an FEC meeting today, the commission issued an advisory opinion (PDF) that such assistance does not constitute an in-kind contribution, as long as the cybersecurity firm already offers discounted solutions to similarly situated non-political organizations, such as small nonprofits.

The FEC’s ruling comes in response to a petition by California-based Area 1 Security, whose core offering focuses on helping clients detect and block phishing attacks. The company said it asked for the FEC’s opinion on the matter after several campaigns that had reached out about teaming up expressed hesitation given the commission’s existing rules. In June, Area 1 petitioned the FEC for clarification, saying it currently offers free and low-cost services to certain clients, capped at $1,337. The FEC responded with a draft opinion indicating such an offering likely would amount to an in-kind contribution that might curry favor among politicians, and urged the company to resubmit its request focusing on the capped-price offering.
Area 1 did so, and at today’s hearing the FEC said “because Area 1 is proposing to charge qualified federal candidates and political committees the same as it charges its qualified non-political clients, the Commission concludes that its proposal is consistent with Area 1’s ordinary business practices and therefore would not result in Area 1 making prohibited in-kind contributions to such federal candidates and political committees.”

POLICY BY PIECEMEAL

The decision is the latest in a string of somewhat narrowly tailored advisories from the FEC related to cybersecurity offerings aimed at federal candidates and political committees. Most recently, the commission ruled that the nonprofit organization Defending Digital Campaigns could provide free cybersecurity services to candidates, but according to The New York Times that decision only applied to nonpartisan, nonprofit groups that offer the same services to all campaigns. Last year, the FEC granted a similar exemption to Microsoft Corp., ruling that the software giant could offer “enhanced online account security services to its election-sensitive customers at no additional cost” because Microsoft would be shoring up defenses for its existing customers and not seeking to win favor among political candidates.

Dan Petalas is a former general counsel at the FEC who represents Area 1 as an attorney at the law firm Garvey Schubert Barer. Petalas praised today’s ruling, but said action by Congress is probably necessary to clarify the matter once and for all. “Congress could take the uncertainty away by amending the law to say security services provided to campaigns do not constitute an in-kind contribution,” Petalas said.
“These candidates are super vulnerable and not well prepared to address cybersecurity threats, and I think that would be a smart thing for Congress to do given the situation we’re in now.”

‘A RECIPE FOR DISASTER’

The FEC’s decision comes as federal authorities are issuing increasingly dire warnings that the Russian phishing attacks, voter database probing, and disinformation campaigns that marked the election cycles in 2016 and 2018 were merely a dry run for what campaigns could expect to face in 2020. In April, FBI Director Christopher Wray warned that Russian election meddling posed an ongoing “significant counterintelligence threat,” and that the shenanigans from 2016 — including the hacking of the Democratic National Committee and the phishing of Hillary Clinton’s campaign chairman and the subsequent mass leak of internal emails — were just “a dress rehearsal for the big show in 2020.”

Adav Noti, a former FEC general counsel who is now senior director of the nonprofit, nonpartisan Campaign Legal Center, said the commission is “incredibly unsuited to the danger that the system is facing,” and that Congress should be taking a more active role. “The FEC is an agency that can’t even do the most basic things properly and timely, and to ask them to solve this problem quickly before the next election in an area where they don’t really have any expertise is a recipe for disaster,” Noti said. “Which is why we see these weird advisory opinions from them with no real legal basis or rationale. They’re sort of making it up as they go along.”

In May, Sen. Ron Wyden (D-Ore.) introduced the Federal Campaign Cybersecurity Assistance Act, which would allow national party committees to provide cybersecurity assistance to state parties, individuals running for office and their campaigns. Sen. Wyden also has joined at least a dozen other senators — including many who are currently running as Democratic candidates in the 2020 presidential race — in introducing the “Protecting American Votes and Elections (PAVE) Act,” which would mandate the use of paper ballots in U.S. elections and ban all internet, Wi-Fi and mobile connections to voting machines in order to limit the potential for cyber interference. As Politico reports, Wyden’s bill also would give the Department of Homeland Security the power to set minimum cybersecurity standards for U.S. voting machines, authorize a one-time $500 million grant program for states to buy ballot-scanning machines to count paper ballots, and require states to conduct risk-limiting audits of all federal elections in order to detect any cyber hacks.

BIPARTISAN BLUES

Earlier this week, FBI Director Wray and Director of National Intelligence Dan Coats briefed lawmakers in the House and Senate on threats to the 2020 election in classified hearings. But so far, action on any legislative measures to change the status quo has been limited. Democrats blame Senate Majority Leader Mitch McConnell for blocking any action on the bipartisan bills to address election security. Prior to meeting with intelligence officials, McConnell took to the Senate floor Wednesday to allege Democrats had “already made up their minds before we hear from the experts today that a brand-new, sweeping Washington, D.C. intervention is just what the doctor ordered.”

“Make no mistake,” McConnell said. “Many of the proposals labeled by Democrats to be ‘election security’ measures are indeed election reform measures that are part of the left’s wish list I’ve called the Democrat Politician Protection Act.”

But as Politico reporter Eric Geller tweeted yesterday, if lawmakers are opposed to requiring states to follow the almost universally agreed-upon best practices for election security, they should just say so.
“Experts have been urging Congress to adopt tougher standards for years,” Geller said. “Suggesting that the jury is still out on what those best practices are is factually inaccurate.” Noti said he had hoped election security would emerge as a rare bipartisan issue in this Congress. After all, no candidate wants to have their campaign hacked or elections tampered with by foreign powers — which could well call into question the results of a race for both sides. These days he’s not so sanguine. “This is a matter of national security, which is one of the core functions of the federal government,” Noti said. “Members of Congress are aware of this issue and there is a desire to do something about it. But right now the prospect of Congress doing something — even if most lawmakers would agree with it — is small.”
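The risk-limiting audits that Wyden’s bill would require rest on a simple statistical idea: hand-counting a modest random sample of paper ballots is usually enough to confirm, with a quantifiable error bound, that the reported winner really won. The toy simulation below illustrates only the sampling intuition; it is not a real audit procedure such as BRAVO, and the vote totals are invented for illustration.

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

# Invented electorate: candidate A reported at 52%, B at 48%.
ballots = ["A"] * 52_000 + ["B"] * 48_000

# Hand-count a random sample of 1,000 paper ballots.
sample = random.sample(ballots, 1_000)
share_a = sample.count("A") / len(sample)

# The sampled share lands close to the true 0.52, which is why a
# full recount is rarely needed to gain confidence in the outcome.
print(f"sampled share for candidate A: {share_a:.3f}")
```

A real risk-limiting audit keeps drawing ballots until the evidence satisfies a preset risk limit (commonly a few percent), and escalates to a full hand count when it does not.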

Source

Google Home smart speakers and the Google Assistant virtual assistant have been caught eavesdropping without permission — capturing and recording highly personal audio of domestic violence, confidential business calls — and even some users asking their smart speakers to play porn on their connected mobile devices. In a Wednesday report, Dutch news outlet VRT NWS said it obtained more than one thousand recordings from a Dutch subcontractor who was hired as a “language reviewer” to transcribe recorded audio collected by Google Home and Google Assistant, and help Google better understand the accents used in the language. Out of those one thousand recordings, 153 of the conversations should never have been recorded, as the wake-up command “OK Google” was clearly not given, the report said. “VRT NWS was able to listen to more than a thousand excerpts recorded via Google Assistant,” according to the report. “In these recordings we could clearly hear addresses and other sensitive information. This made it easy for us to find the people involved and confront them with the audio recordings.” Google for its part on Thursday acknowledged that VRT NWS had obtained authentic recordings, but argued that its language experts only review around 0.2 percent of all audio snippets. “As part of our work to develop speech technology for more languages, we partner with language experts around the world who understand the nuances and accents of a specific language,” according to David Monsees, product manager at Google Search. “These language experts review and transcribe a small set of queries to help us better understand those languages. This is a critical part of the process of building speech technology, and is necessary to creating products like the Google Assistant.” Google did acknowledge that some audio may be recorded by Google Home or Google Assistant without the wake-up word being said, due to error. 
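Errors of this kind come down to how wake-word detection works: the device continuously scores incoming audio against the trigger phrase and starts recording when the score crosses a threshold, so background speech that happens to score above the cutoff triggers a recording no one asked for. A minimal sketch of the idea follows; the threshold and scores are invented for illustration, as real detectors use acoustic models rather than precomputed confidences.

```python
# Toy illustration of a "false accept": a hotword detector fires when its
# confidence that the wake phrase was spoken crosses a threshold, so noise
# that scores too high starts a recording without the wake word being said.

THRESHOLD = 0.80  # assumed confidence cutoff

def hotword_triggered(confidence: float, threshold: float = THRESHOLD) -> bool:
    """Return True when the detector's confidence crosses the threshold."""
    return confidence >= threshold

# Simulated detector scores for a stream of audio frames.
frames = [
    ("user says 'OK Google'", 0.97),      # true accept
    ("TV dialogue in background", 0.83),  # false accept: noise scored too high
    ("silence", 0.05),                    # correctly ignored
]

for label, score in frames:
    if hotword_triggered(score):
        print(f"recording started: {label} (score={score})")
```

Lowering the threshold makes the device more responsive but multiplies false accepts; raising it does the opposite, which is the trade-off vendors tune.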
“Rarely, devices that have the Google Assistant built in may experience what we call a ‘false accept,’” said Monsees. “This means that there was some noise or words in the background that our software interpreted to be the hotword (like ‘OK Google’). We have a number of protections in place to prevent false accepts from occurring in your home.”

Google also argued that audio snippets are not associated with user accounts in the review process. Despite that, VRT NWS said, “it doesn’t take a rocket scientist to recover someone’s identity; you simply have to listen carefully to what is being said.”

While the incident shows that audio is being collected when users expect the devices to be dormant, it also highlights concerns around third-party security and Google’s data retention and sharing policies, given that a subcontractor leaked these recordings to a news outlet. Monsees said the subcontractor is currently under investigation. “We just learned that one of these language reviewers has violated our data security policies by leaking confidential Dutch audio data,” he said. “Our security and privacy response teams have been activated on this issue, are investigating, and we will take action. We are conducting a full review of our safeguards in this space to prevent misconduct like this from happening again.”

Voice Assistant Data Privacy?

The incident comes as voice assistants such as Amazon Alexa and Google Home are coming under increased scrutiny about how much data is being collected, what that data is, how long it’s being retained and who accesses it. In April, Amazon was thrust into the spotlight for a similar reason, after a report revealed the company employs thousands of auditors to listen to Echo users’ voice recordings.
The report found that Amazon reviewers sift through up to 1,000 Alexa audio clips per shift – listening in on everything from mundane conversations to people singing in the shower, and even recordings that are upsetting and potentially criminal, like a child screaming for help or a sexual assault.

Voice assistants are increasingly being criticized for how they handle private data. In July, Amazon came under fire again after acknowledging that it retains the voice recordings and transcripts of customers’ interactions with its Alexa voice assistant indefinitely – raising questions about how long companies should be able to save highly personal data collected from voice-assistant devices. And last year, Amazon inadvertently sent 1,700 audio files containing recordings of a customer’s Alexa interactions to a random person – and later characterized it as a “mishap” that came down to one employee’s mistake.

Security experts for their part said that voice assistants will soon be a big focus for regulators in light of laws like the General Data Protection Regulation (GDPR). “I definitely think that we’re going to have, let’s say within the next 15 months, a GDPR ruling on the data-collection policy of home-automation devices,” Tim Mackey, principal security strategist at the cybersecurity research center at Synopsys, told Threatpost. “Voice assistants will probably be high on the list, as would things like video doorbells. And effectively it’s going to be a case of what was disclosed and how was the information processed.”

Don’t miss our free live Threatpost webinar, “Streamlining Patch Management,” on Wed., July 24, at 2:00 p.m. EDT. Please join Threatpost editor Tom Spring and a panel of patch experts as they discuss the latest trends in Patch Management, how to find the right solution for your business and what the biggest challenges are when it comes to deploying a program. Register and Learn More

Source

By Waqas As surprising as it may sound, Facebook is up but Twitter is down. The online news and social networking site Twitter is down right now after suffering a massive outage. This comes as a surprise since Twitter has a proven track record of maintaining its service even when social media giants like Facebook and Instagram […] This is a post from HackRead.com Read the original post: Twitter is down – Twitter’s website & app suffering outage (Updated)

Source

By Uzair Amir Have you ever received emails from unknown sources claiming to offer insurance, lottery tickets or advertisements? You may have noticed that such emails always have a link that they prompt you to click. What lies on the other side of the link can be any one of many ways to phish users into giving away […] This is a post from HackRead.com Read the original post: 10 ways to keep yourself secure online against cyber attacks

Source

Apple has pushed a silent update to Mac users that removes a hidden web server from Zoom users’ machines. The Zoom web- and video-conferencing service has come under scrutiny for its handling of a zero-day bug (CVE-2019-13450) found by researcher Jonathan Leitschuh, which would allow an attacker to hijack a user’s web camera without their permission.

However, the researcher also flagged a concerning persistence feature in the service: Even if users uninstalled the Zoom client, the service maintained a web-facing connection on computers via a hidden localhost web server. “If you’ve ever installed the Zoom client and then uninstalled it, you still have a localhost web server on your machine that will happily re-install the Zoom client for you, without requiring any user interaction on your behalf besides visiting a webpage,” explained Leitschuh, adding that this deepens the security risk from the vulnerability.

Apple’s update – automatically pushed to users without any need for action on their part – removes the hidden Zoom web server. It’s a move that the Cupertino, Calif.-based giant usually reserves for addressing malware. “We’re happy to have worked with Apple on testing this update,” Zoom said in a media statement. “We appreciate our users’ patience as we continue to work through addressing their concerns.”

Apple’s update is somewhat superfluous (though automatic): Zoom itself released an emergency fix earlier this week that also removes the web server, and the platform now allows users to manually uninstall Zoom completely. The update is the result of media attention in the wake of Leitschuh’s responsible public disclosure of the flaw, which highlighted Zoom’s incomplete fix for the bug and slow action on its part in working with him. On July 12, Zoom will further update the client to address the concern around video being enabled by default.
First-time users who select the “always turn off my video” pop-up box will automatically have their video preference saved, it announced. The Zoom flaw affects about 4 million workers who use Zoom for Mac.
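The leftover localhost server Leitschuh flagged is simple to check for by hand. The port widely reported for Zoom’s hidden web server was 19421; treat that number as an assumption and verify it against current advisories before relying on it. A minimal sketch:

```python
import socket

# Probe localhost for the web server Zoom's Mac client was reported to
# leave behind. Port 19421 is the widely reported value; treat it as an
# assumption and confirm against current advisories.
ZOOM_LOCAL_PORT = 19421

def localhost_port_open(port: int, timeout: float = 0.5) -> bool:
    """Return True if something is listening on 127.0.0.1:<port>."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex(("127.0.0.1", port)) == 0

if localhost_port_open(ZOOM_LOCAL_PORT):
    print("A local web server is listening on port 19421 -- "
          "the Zoom helper may still be installed.")
else:
    print("Nothing is listening on port 19421.")
```

A positive hit only shows that something is listening on that port; confirming it is actually the Zoom helper requires inspecting the listening process (for example with `lsof -i :19421` on macOS).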

Source

Apple has temporarily disabled the Walkie-Talkie feature on the Apple Watch due to a vulnerability that could allow potential attackers to eavesdrop on iPhone calls, a TechCrunch report said. The Apple Watch Walkie-Talkie app allows users to converse with friends in real time, without having to make a phone call, simply by pressing a button on their watches, talking into it, and releasing to listen for the reply. Apple added the feature to the watch in 2018 in its watchOS 5 update.

The bug could allow someone to listen through another customer’s iPhone without consent, Apple told TechCrunch. Further details about the specifics of the vulnerability and how it could be exploited have not yet been made public; however, Apple did confirm to TechCrunch on Thursday that it has disabled the feature on Apple Watch while it works to fix the issue. According to Apple’s statement to the tech outlet, “We were just made aware of a vulnerability related to the Walkie-Talkie app on the Apple Watch and have disabled the function as we quickly fix the issue. We apologize to our customers for the inconvenience and will restore the functionality as soon as possible.” Apple said in its statement it is not aware of any use of the vulnerability against a customer, and that specific conditions and sequences of events are required to exploit it. According to reports, the flaw was discovered and reported through Apple’s vulnerability portal on its website.

The issue is similar to another Apple incident earlier this year, when the phone giant had to make Group FaceTime temporarily unavailable following a major flaw discovered in the feature. The bug – which has since been fixed – allowed anyone with iOS to FaceTime other iOS users and listen in on their private conversations, without the user on the other end rejecting or accepting the call. The bug made use of a new function presented in FaceTime as part of iOS 12.1, called Group FaceTime.
Beyond that issue, Apple has also dealt with an array of vulnerabilities across its products in the past few months – including an iMessage bug last week that could brick iPhones running older versions of the company’s iOS software, and a flaw disclosed in June that allowed hackers to mimic mouse-clicks to enable malicious behavior on macOS Mojave.

Apple did not respond to a request for comment from Threatpost asking for further details about the vulnerability and timeline of the fix.

Source

While bug-bounty programs may seem like a cure-all solution for companies looking to discover vulnerabilities in their systems more efficiently, the fact remains that a program could overwhelm a firm’s internal security team and cause other major headaches if implemented the wrong way.

“You have to realize that the crowd is going to find a lot more vulnerabilities than your typical in-house pen-test team. So oftentimes, there’s this engineering push back, like hold on, we don’t have our internal processes set up,” David Baker, chief security officer at Bugcrowd, told Threatpost.

Threatpost caught up with Baker to discuss the right — and wrong — approaches for implementing a bounty program that can boost companies’ security effectively with minimal operational disruption. For a lightly-edited transcript of the podcast, see below.

Lindsey O’Donnell: This is Lindsey O’Donnell with the Threatpost podcast. And I’m here with David Baker, the chief security officer and VP of operations at Bugcrowd. David, thanks so much for joining us.

David Baker: Thanks for having me.

LO: So since we haven’t talked before, can you give us a brief introduction of yourself and how long you’ve been at Bugcrowd?

DB: Excellent. So I’ve been at Bugcrowd for two and a half years. I was previously the chief security officer with Okta…before that, security architect at WebEx. I [also] ran all of the professional services at IOActive, which is a security research firm in Seattle. So I’ve been around security for about 15 years.

LO: So you’ve really seen it all then.
I wanted to talk a bit about some of the top trends that you’re seeing in the bug-bounty landscape at this point. Is there anything that’s really sticking out to you in terms of how programs are evolving or how bounty hunters are changing?

DB: Yeah, about five years ago was, I think, the [era of the] web application, and so your web platform was sort of the mainstay for how researchers interacted. A lot of companies had a platform, a web application platform, that was accessible via the internet. And so researchers would typically interact that way, to find bugs and report them. What we’re seeing now is, a lot of the companies are moving towards more of an API-based platform, where you have a web application that sits on your cell phone, and that has an API layer that feeds back into a back-end service. And that’s pushing things to be more mobile, [and] it’s a much different target, particularly the API fabric. And the other aspect is, they’re always trying to bring in a device, so with your mobile phone, there’s probably a device that sits on your wireless network in your house or office or something. So the combination of your API layer, some sort of IoT device being involved, this is actually becoming more and more prevalent. So what that’s doing is changing the nature of how your researchers are interacting with something that’s internet-available, but isn’t a web front-end. So it’s a much different style of attack, it’s a much different style of test, and so on and so forth. Devices are typically commoditized, you can pick them up anywhere, researchers have them, are able to get them very easily. And so they’re interacting at a hardware level sometimes. So we see the ability now to expand to new frontiers, with new complexities. We are actually trying to raise awareness and train researchers to start competing in that area as well.

LO: Yeah, I’m curious, you mentioned IoT. Obviously, that’s a huge threat when it comes to security.
What kind of interest level are you seeing with IoT devices? And how is that different in terms of researchers and bounty hunters looking at those types of devices, because obviously, you have hardware and operating systems and different levels of that.

DB: Yeah, it’s interesting, because you had what I would call Gen One of your IoT device. And that Gen One would be your routers, or your switches that you typically have in your office. And nowadays, you have Gen Two (maybe even a generation three), which is your Ring devices, or…Xfinity as a cable provider, they have these devices that plug into your wall, and they help you sort of control the wireless signal in your house. You know, the Alexa devices, so on and so forth. So these interactive devices are very new, they sit in the household. Your watches, your Fitbits, so on and so forth, your drones, all of these are sort of these Gen Two devices that are pretty ubiquitous now. And those are going to be much different than your first gen where it’s pretty easy to tap in, you can actually get access to the firmware pretty easily. Nowadays, the Gen Two devices are much, much more commoditized hardware, but gaining access and the actual target is much different. And it also introduces the security awareness around supply-chain security threats. So there’s a lot of interest around having researchers identify potential areas of supply-chain security issues.

LO: So can you talk a little bit about bug-bounty programs in general from a vendor’s perspective, and some of the challenges that vendors are facing when they launch a bounty program?

DB: Sure, yeah. From an operational perspective, that’s my team that handles the actual operations. So as a vendor, we manage the bug-bounty program for our customers — so [that includes] all the complexities of how to pair researchers (what researchers to bring in), how many researchers, and so on and so forth.
I think oftentimes the initial challenge we see is that…the security team goes out and purchases a solution, but you have to realize that the crowd is going to find a lot more vulnerabilities than your typical in-house pen-test team. So oftentimes, there’s this engineering push back, like hold on, we don’t have our internal processes set up. There is oftentimes confusion with marketing people about “do we want to talk about it.” To talk about it is actually very beneficial, and it also attracts a lot of researchers, because they want to participate. So sometimes we see that the companies may not necessarily be ready to go right away. It’s part of the management piece; we help them get up and running. I think that companies want to start very slowly, they want to dip their toe in the water. And sometimes that can have a detrimental effect on recruiting researchers. When you go into a bounty program, you want to be ready, and to be ready means that you want to attract researchers, you want them to participate. And it’s hard to do that with two or three researchers for a long time.

LO: Yeah, that’s a really good point. And on the heels of that, what would be the very first one or two steps that a company could do to prepare themselves to launch a bug-bounty program?

DB: That’s a great question. You gotta have a good [service level agreement] (SLA) with your security and engineering. So have that SLA established, and if there is a really critical bug identified, how long [does] engineering [have to] fix it? Typically, that’s 24 hours, but [is it] fine if it’s a week? You have to have that [time period] established. And also, if it’s a lower-impact bug, you don’t want to ignore those. You need to be able to have a commitment that engineering can fix those lower-level flaws in a month or six months, right, and have that process laid out.
Because what’s going to happen is, when you have the [program] initialized, you will have these bugs come in and engineering will have to spend time on them, so they have to be part of the conversation. We have the ability to integrate with the engineering life-cycle around how they’re doing software and security development. So you want them to be at the table when this happens; they need to be part of the process. And the next thing is, you really want to understand what your value target is, right? It’s not about saving money, it’s about identifying vulnerabilities. You will naturally save money by having those vulnerabilities identified. And, you can reduce your risk — that’s your ultimate cost savings. But you want to be able to have the capital in the beginning to make sure that researchers are coming to the table. Those are the two things that are really important.

LO: Do you think companies are aware of those at this point? Or do you guys really need to kind of hold their hands to help them understand that? How do you think the perception of bug-bounty programs is at this point?

DB: It’s a good question. It is getting there. So I think, you know, the past five years, it was really just more about really early adopters expanding their programs, and those early adopters have made a name for the bug bounties. Not everyone was coming in and saying, I want to do this. Early on, there was a crowd fear. The early adopters got around this fear of the crowd. Now that’s going away. So now, more companies are wanting to do this. But the idea here is well, I want to just try it, and spend a very little amount of money just to try it out. And while you can start at a lower-cost tier, you really have to have your processes [and] your engineering on board. That’s sort of the next phase for people, not just jumping in.

LO: Another question I had is, where do you see the future landscape going?
When it comes to the operations behind the bug-bounty programs, or when it comes to even the concept of vulnerability disclosure and how that plays into it?

DB: It’s a good question…Oftentimes, it’s very specialized. But what we’re seeing is that you’re really able to approach this as more of a gig-economy solution; we have companies that have very, very specific needs. And so while the bug-bounty program is going to be just a natural layer of your security fabric, or part of the onion, a few layers of the onion, companies will be able to leverage these crowd-sourced platforms to be able to do really targeted gig-economy programs [for] a very specific problem, and having a very specific type of talent for that problem. Connecting them one-to-one is where this next evolution is happening.

LO: It should be interesting to see how that plays out in the future. Well, David, thank you so much for joining us today on the Threatpost podcast.

DB: Thank you, Lindsey.

Source

A vulnerability in GE Healthcare’s Aestiva and Aespire anesthesia devices would allow an unauthenticated cybercriminal on the same network as the device to modify gas composition parameters within the devices’ respirator function, thus changing sensor readings for gas density. According to GE Healthcare, that means that the bug (CVE-2019-10966) could allow an attacker to impair respirator functionality in GE Aestiva and Aespire Versions 7100 and 7900, theoretically changing the composition of aspirated gases – while also silencing alarms and altering time and date records.

That sounds bad on the surface, but GE Healthcare said that cybercriminals wouldn’t be able to actually cause any danger to a patient, given that these devices are never used without human oversight. “Anesthesia devices are qualified as an attended device, and device location is where primary control is maintained by the physician,” it explained in a website posting this week. “While an alarm could potentially be silenced via the insufficiently secured terminal server TCP/IP connection to the GE Healthcare anesthesia device, both audible annunciation of the alarm, and visual signaling of the alarm are presented to the attending clinician at the GE Healthcare anesthesia device interface.”

Deral Heiland, IoT research lead at Rapid7, said that the assessment of no patient danger should not make the find any less alarming. “GE’s response of … determining no risk to patients makes me wonder what level of control can be conducted over the network against the anesthesia and respiratory machines,” he said via email. “My first thought is, if the device can accept commands over the network without authentication, then that would be a critical risk.
Either way, medical facilities should always maintain segmentation of their critical-care networks from exposure, and this will help mitigate many known and unknown risks.” The flaw, reported by Elad Luz of CyberMDX to NCCIC, exists thanks to the configuration exposure of certain terminal server implementations that extend GE Healthcare anesthesia device serial ports to TCP/IP networks. It affects models sold before 2009, which may have employed an external gas monitor. “A vulnerability exists where serial devices are connected via an added unsecured terminal server to a TCP/IP network configuration, which could allow an attacker to remotely modify device configuration and silence alarms,” ICS-CERT said in an advisory posted this week. While there isn’t a patch, GE Healthcare recommended that organizations use secure terminal servers when connecting device serial ports to TCP/IP networks. “Secure terminal servers provide robust security features, including strong encryption, VPN, authentication of users, network controls, logging, audit capability and secure device configuration and management options,” according to the advisory. “One of the best solutions to mitigate potential exposure like this is for medical facilities to segment their critical-care networks from business networks, not allowing the two to communicate with each other, nor allowing internet access from the critical-care networks,” Heiland said. “Following this practice will help reduce the risk and impact of attacks, malware and virus infection within critical-support medical technology.”
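Heiland’s segmentation advice can be spot-checked with ordinary tooling. The sketch below is a hypothetical illustration, not GE or Rapid7 tooling: run from a business-network workstation, it simply tests whether a critical-care host and port (placeholder values) are reachable over TCP; on a properly segmented network, the connection should fail.

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder address/port for a serial-over-TCP terminal server on a
# critical-care subnet; run from the business network, this should be False
# if segmentation is in place:
#   is_reachable("10.20.30.40", 3001)
```

A scheduled run of checks like this against a list of critical-care endpoints is one low-effort way to catch a segmentation regression before an attacker does.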


After facing public outcry over its handling of a zero-day vulnerability in its collaboration client for Mac, the Zoom web- and video-conferencing service has rushed out an emergency patch. The flaw (CVE-2019-13450) allows a malicious website to hijack a user’s web camera without permission, putting at risk the 4 million workers who use Zoom for Mac. Researcher Jonathan Leitschuh explained that an outside adversary would need only to convince a user to visit a malicious website with a specially crafted iFrame embedded, which would automatically launch a Mac user into a Zoom web conference while turning on their camera. As Threatpost previously reported, the issue exists because the default setting for creating a new meeting is the “Participants: On” option. This automatically joins an invited person to the meeting, with webcam enabled, without the person having to give permission beyond clicking the meeting link itself. And, adding insult to injury is a persistence feature in the service. “If you’ve ever installed the Zoom client and then uninstalled it, you still have a localhost web server on your machine that will happily re-install the Zoom client for you, without requiring any user interaction on your behalf besides visiting a webpage,” explained Leitschuh. The company initially deployed only a partial fix and was slow to respond to Leitschuh during the disclosure process, the researcher said. Once the facts around the case became public on Tuesday, however, prompting extensive media coverage, Zoom changed its tune on fully addressing his concerns. “Initially, we did not see the web server or video-on posture as significant risks to our customers and, in fact, felt that these were essential to our seamless join process,” the company said on its blog Tuesday evening. 
“But in hearing the outcry from some of our users and the security community in the past 24 hours, we have decided to make the updates to our service.” The patch, available here, removes the local web server entirely once the Zoom client has been updated. The platform also now allows users to manually uninstall Zoom. “We’re adding a new option to the Zoom menu bar that will allow users to manually and completely uninstall the Zoom client, including the local web server,” the company said. “Once the patch is deployed, a new menu option will appear that says, ‘Uninstall Zoom.’ By clicking that button, Zoom will be completely removed from the user’s device along with the user’s saved settings.” On July 12, the company will further update the client to address the concern around video being enabled by default. First-time users who select the “always turn off my video” pop-up box will automatically have their video preference saved. “The selection will automatically be applied to the user’s Zoom client settings and their video will be off by default for all future meetings,” the company said. “Returning users can update their video preferences and make video off by default at any time through the Zoom client settings.” For his part, Leitschuh posted on his blog that “hopefully this patches the most glaring parts of this vulnerability. The Zoom CEO has also assured us that they will be updating their application to further protect users’ privacy.”


The latest iOS and Android versions of the FinSpy espionage malware have been deployed in the wild, and are capable of collecting a raft of personal information such as contacts, SMS/MMS messages, emails, calendars, GPS location, photos, files in memory, phone call recordings and data – even from the most popular “secure” messaging platforms. FinSpy is a targeted tool sold by European firm Gamma Group to governments and law-enforcement organizations; it has been around since 2011, but Kaspersky researchers have recently seen new instances of it within the firm’s telemetry, including activity recorded in Myanmar last month. According to Kaspersky, several dozen unique mobile devices have been infected over the past year, using revamped implants. “FinSpy…is able to monitor almost all device activities, including recording VoIP calls via external apps such as Skype or WhatsApp,” researchers said in a blog post on Wednesday, adding that targeted applications also include secure messaging platforms such as Threema, Signal and Telegram. “After the deployment process, the implant provides the attacker with almost unlimited monitoring of the device’s activities.” There is a catch, though, for operators going after iOS users: The implant can only be installed on jailbroken devices, and an attacker would need physical access to the device in order to jailbreak it. If a device is already jailbroken, remote infection vectors include malicious SMS messages or emails, and WAP push messaging, which can be sent from the FinSpy Agent operator’s terminal. Also, the latest iPhone/iPad version is compatible with iOS 11 and below; newer versions of the Apple operating system are not confirmed as susceptible, and implants for iOS 12 have not been observed. The Android version, meanwhile, can be installed manually if the attacker has physical access to the device, or remotely using the same three infection vectors as the iOS version. 
Main Functionality
The core implant module for the latest iOS version of FinSpy (“FilePrep”) contains 7,828 functions. It controls all the other modules, and takes care of HTTP and SMS heartbeats and other service functions. Communication between components is implemented using CPDistributedMessagingCenter (a wrapper over the existing messaging facilities in the operating system, which provides server-client communication between different processes using simple messages and dictionaries). It also uses a local HTTP server to receive data requests. Of particular note is a module called “.hdutils,” which is designed to configure the processing of all incoming SMS messages. It parses the text looking for specific content and will hide message notifications from the user. Then it sends relevant texts to the core module. The module “.chext,” meanwhile, targets messenger applications and hooks their functions to exfiltrate almost all accessible data: message content, photos, geolocation, contacts, group names and so on. Targeted platforms include BlackBerry Messenger, Facebook Messenger, InMessage, Signal, Skype, Threema and WeChat. The collected data is submitted to the local server deployed by the main module. The “keys” module has multiple hooks that intercept every typed symbol; Kaspersky said there are also several hooks to intercept passwords during login and the “change password” process. A module dubbed “MediaEnhancer” records calls. “The module starts a local HTTP server instance on port 8889 upon initialization, implementing VoIPHTTPConnection as a custom connection class,” researchers explained. “This class contains a handler for requests to localhost/voip.html that could be made by other components.” And finally, the module “.vpext” implements more than 50 hooks used for VoIP calls processed by external messaging apps, including BlackBerry Messenger, KakaoTalk, LINE, Signal, Skype, Viber, WeChat and WhatsApp. 
“These hooks modify functions that process VoIP calls in order to record them,” according to Kaspersky. “To achieve this, they send a post request with the call’s meta information to the HTTP server previously deployed by the MediaEnhancer component that starts recording.” The Android implant provides access to information such as contacts, SMS/MMS messages, calendars, GPS location, pictures, files in memory and phone call recordings. All the exfiltrated data is transferred to the attacker via SMS messages or via the internet (the C2 server location is stored in the configuration file). “Personal data, including contacts, messages, audios and videos, can be exfiltrated from most popular messengers,” according to Kaspersky. “Each of the targeted messengers has its own unified handling module, which makes it easy to add new handlers if needed.” The Android version adds one capability beyond the above features: gaining root privileges on an unrooted device by abusing the Dirty Cow exploit, which is contained in the malware. “After successful installation, the implant tries to gain root privileges by checking for the presence of known rooting modules SuperSU and Magisk and running them,” researchers explained. “If no utilities are present, the implant decrypts and executes the Dirty Cow exploit, which is located inside the malware; and if it successfully manages to get root access, the implant registers a custom SELinux policy to get full access to the device and maintain root access. If it used SuperSU, the implant modifies SuperSU preferences in order to silence it, disables its expiry and configures it to autorun during boot. It also deletes all possible logs, including SuperSU logs.”
FinSpy Activity Grows
Overall, during Kaspersky’s research, up-to-date versions of the implants used in the wild were detected in almost 20 countries – and given the size of Gamma’s customer base, it’s “likely that the real number of victims is much higher,” the analysts said. 
They also said it was clear that FinSpy operators go after carefully selected targets, tailoring the behavior of each implant for a particular victim. Kaspersky researchers also said that FinSpy’s developers are constantly working on updates to their malware; in fact, Kaspersky has found yet another version of the threat and is now working to analyze it. “Since [a source code] leak in 2014, Gamma Group has recreated significant parts of its implants, extended supported functionality (for example, the list of supported instant messengers has been significantly expanded) and at the same time improved encryption and obfuscation (making it harder to analyze and detect implants), which made it possible to retain its position in the market,” according to Kaspersky.
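The root-escalation flow Kaspersky describes for the Android implant reduces to a short decision chain, sketched below for illustration. The function name and return labels are hypothetical, not FinSpy code, and no exploit logic is included.

```python
def choose_root_strategy(installed_root_managers: list[str]) -> str:
    """Mirror the reported flow: reuse an existing root manager if one
    is installed, otherwise fall back to the bundled Dirty Cow exploit."""
    for manager in ("SuperSU", "Magisk"):
        if manager in installed_root_managers:
            return f"run_{manager.lower()}"   # reuse existing root access
    return "run_dirty_cow_exploit"            # decrypt and run bundled exploit

# Per the report, whichever path succeeds, the implant then registers a
# custom SELinux policy; if SuperSU was used, it is silenced and persisted.
```

The practical takeaway for defenders is that an up-to-date device (Dirty Cow was patched in 2016) with no root manager installed removes both branches of this chain.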
