The CyberPHIx Roundup: Industry News & Trends, 1/16/23

About the Podcast: The CyberPHIx is a regular audio podcast series that reports and presents expert viewpoints on data security strategy for organizations handling patient health or personal information in the delivery of health-related services. These timely programs cover trends and data security management issues such as cybersecurity risk management, HIPAA and OCR compliance strategy and vendor risk management. Meditology Services, the healthcare industry's leading security and compliance firm, moderates the discussions with leaders in healthcare data security.

The CyberPHIx Roundup is your quick source for keeping up with the latest cybersecurity news, trends, and industry-leading practices, specifically for the healthcare industry. 

In this episode, our host Britton Burton highlights the following topics trending in healthcare cybersecurity this week: 

  • New FDA authority granted by December’s omnibus bill is a big step towards better medical device security 
  • HITRUST teases their new CSF v11 release 
  • CommonSpirit Health class action lawsuit 
  • The fallout from the LastPass follow-on breach 
  • The possibly similar situation that might be occurring at Okta  
  • JAMA Health Forum’s outstanding metrics study on ransomware attacks in healthcare from 2016 – 2021 
  • The nefarious use cases of OpenAI’s ChatGPT 
  • Clop ransomware group’s tactics for taking advantage of Telehealth appointments to deploy malware 
  • An apology from LockBit ransomware group for an attack on a children’s hospital (really!) 
  • Healthcare CISOs collaborating through Health3PT to solve the third-party risk problem 
  • A major precedent-setting breach settlement order from FTC against Drizly and its CEO 

PODCAST TRANSCRIPT

Britton: [00:00:15] Hello and welcome to The CyberPHIx Healthcare Security Roundup, your quick source for keeping up with the latest cybersecurity news, trends, and industry-leading practices specifically for the healthcare industry. I am your host, Britton Burton. In addition to this roundup, be sure to check out our Resource Center on the Meditology Services website, which includes our CyberPHIx interviews with leading healthcare security, privacy, and compliance leaders, as well as blogs, webinars, articles, and lots of other educational material. We have a full agenda to cover today, so let's dive into it. 

Britton: [00:00:53] I'm sure you're all aware of the $1.7 trillion omnibus bill that was passed by Congress in late December. But one area that might be of particular interest to healthcare cybersecurity folks is that the bill grants the FDA new authority to establish medical device security requirements for manufacturers. So here's the breakdown of some of the key points included in the law, in case you missed it. It gives the FDA $5 million and the authority to ensure all new medical devices brought to market are designed with security in mind. What that means in the fine print is that all medical device submissions will now be required to include an SBOM, a software bill of materials. They will also be required to include adequate evidence demonstrating that the product can be patched throughout the lifecycle of the device. And submissions must also include a description of security testing and controls. Then finally, the FDA and CISA are ordered to collaborate on medical device security requirements going forward. The FDA will likely publish a date in the future outlining when manufacturers must comply with these new rules. That was not included in the law as of yet, nor was the date by which the FDA and CISA will produce these collaborative requirements. But we're expecting those dates to come soon, and when they do, any new submission for a device with deficiencies might be sent back to resolve them. So look, this is obviously very welcome news for CISOs. 

Britton: [00:02:21] This is a fight that we've been fighting for years. And to the device manufacturers' credit, many of them are interested in this type of regulation as well and know they need it. But if you're on the practitioner side, on the healthcare side, you know that's not necessarily true of all of them. So there are definitely some open questions, such as what will be done for current and recently approved devices. As you all know, legacy medical devices hanging onto a network for many, many years is one of the core security problems we face. And it's unclear how long the FDA will allow the manufacture and sale of these already-approved devices if they do not meet the new rules. It's also unclear how the FDA will handle post-market manufacturer support of current and legacy systems, and whether SBOMs and coordinated disclosure of vulnerabilities will be required for those older devices. There would obviously be a fair amount of upheaval and significant cost to manufacturers and healthcare providers alike if all of these really old devices were deemed unusable. My guess is that this will be a going-forward type of situation from the FDA, or perhaps apply retroactively to some of the more recent approvals at some point, because of that very thing: older devices will phase out naturally over the course of years, unfortunately, many years. 

Britton: [00:03:47] So the FDA can kind of fall back on that without causing too much chaos. As a cybersecurity purist, I'd love to see it apply across the board, but I also understand the tough position they're in with these older legacy devices and not wanting to cause some of that chaos. And, you know, again, any positive momentum here will absolutely reduce risk, and this is unquestionably positive momentum. It gives the FDA some teeth, even if it is just going forward: at least the teeth to reject new submissions and send them back and say, look, you've got to have the SBOM, you've got to have the ability to continually update and patch, and you've got to prove some security control testing went into this. And of course, if you listen at all, you know I'm a pretty big fan of some of the things CISA is doing. We'll be very interested to see how CISA and the FDA partner to produce maybe some very specific guidance, not just guidance, but actual control requirements for medical devices in the coming year or years. For those of you who are HITRUST shops, or who ask your vendors to pursue HITRUST, or who do qualified assessor work for HITRUST, you need to have this one on your radar. HITRUST announced just before Christmas that version 11 of their CSF is coming out in January of 2023. 

Britton: [00:05:04] So, the month we're in now. The goals are to improve mitigations against evolving cyber threats, to broaden the coverage of their authoritative sources, and to streamline the journey to higher levels of assurance. There's not a ton of detail yet on what this change will entail, but the three points they hit in their press release are: number one, to enable the assessment portfolio to leverage adaptive controls appropriate for each level of assurance. Number two, to reduce the effort of achieving the certification levels through improved control mappings and precision. And number three, a change that makes all HITRUST assessments either subsets or supersets of each other, which allows organizations to reuse the work in lower-level HITRUST assessments to progressively achieve higher assurances by sharing common control requirements and inheritance. That last point is of particular interest to me, as I could see immense value in being able to start with HITRUST's lowest-level validated assessment, which I believe is the e1, to get some baseline assurance in place while not having to repeat work to get to the higher levels, or not having to retest controls, for example. I'm not entirely sure that's what this means, but reading between the lines, that seems like a pretty good guess. I'm sure a whole lot more information will come out soon since they put a January due date on all of this, and we will definitely cover it in more detail when that happens. 

Britton: [00:06:29] Moving into some breach and settlement news, CommonSpirit Health is back in the headlines. I'm covering this one not because I want to pick on them at all; I deeply feel for their security teams, who have been dealing with this for months. But it's just the classic please-don't-let-this-happen-to-us story that I think we as security pros all have our antennas up for. And you've also heard how much we cover class action lawsuits on this podcast, and how we really believe they're becoming just the expected norm after large data breaches now. Unfortunately, that's the new headline for CommonSpirit. A complaint was filed December 29th in US District Court for the Northern District of Illinois. Over 600,000 patients were notified in December that their data had been breached in the cyberattack against CommonSpirit, which started in September and caused widespread EHR outages and appointment cancellations. The lawsuit claims that CommonSpirit failed to, quote, implement and follow basic security procedures and follow its own policies to safeguard patients' PII and PHI, leaving them vulnerable to identity theft. The suit asks for damages, restitution, and other forms of monetary relief. We'll see where this goes. Again, this looks like kind of the norm for large data breaches. CommonSpirit so far has not commented publicly on this, but it's obviously one to keep an eye on. Here's another repeat from a podcast a few months ago. 

Britton: [00:07:51] By now, I'm sure you've all seen that LastPass was hit with another security incident. I thought it was important to touch on this one, even though the news is about three weeks old at the time of publishing this podcast, because of how we covered it back in, I believe, the September podcast, stemming from the initial August data breach at LastPass. If you recall, the August incident was a breach of some source code and other technical data from LastPass's development environment. We mentioned it in a quick-hitter kind of way to say, hey, keep your eye on this. It could be nothing. A compromise of a dev environment doesn't automatically mean that something bad is coming later, but the possibility of a follow-on attack is very real, and since LastPass is used in many corporate environments, customers just need to have this on the radar. Well, it looks like it's something after all. LastPass admitted that some of the data stolen back in August was used to target another employee and obtain credentials and keys, which were then used to access and decrypt some storage volumes within the cloud-based storage service. It sounds like many customer vaults were stolen, but the encryption should be intact. For this reason, LastPass still maintains that very little customer information was compromised. But there's also plenty of skepticism from security pros, if you've seen some of the write-ups on this. 

Britton: [00:09:11] So certainly any kind of credential stuffing or brute force attack could pop your vault, and if you don't have two-factor authentication, or if you use a weak master password, or you reuse a master password, you can have a problem. And that's really the main thing, I think, that LastPass is falling back on for why they say, for the most part, customer data wasn't compromised: the client-side decryption it uses. But there's enough skepticism that we need to be careful here. So the current recommendation for private users is to change their master passwords and begin working through changing the passwords of the accounts in their profile, especially starting with the critical ones like email, financial accounts, etc., the type that you would want to triage first. And if you keep any two-factor authentication keys in your vault, that's another change you'd want to make as you work through this. Obviously, this is just a huge pain for any users of this service, not the type of thing anyone wants to do, but probably a necessary step for those of you who may be using it. For corporate users, it's maybe a little bit more of a tangled web. Again, LastPass is saying this isn't a big deal and you're not impacted, except for a small percentage that they have contacted directly. And if you use the LastPass Federated Login Services, they're stating there's no need for action. The way it works is that it uses a hidden master password that is a combination of two or more separately stored, 256-bit (32-character) cryptographically generated random strings that must be specifically combined to use. 

Britton: [00:10:47] And they say the threat actor did not have access to those key fragments, and that those key fragments were not included in the backups that were copied in the breach. So again, I mentioned some skepticism from the security community on all of this. If you do use this in your corporate environment, I would definitely deploy the IR team and work with your identity and access management team to make sure you trust this statement and that you fully understand the architecture. I think what I just ran through probably isn't a deep enough dive for you to go, yeah, I'm good. You really have to understand that architecture and trust what they're saying. And then finally, from all of this, private and corporate users alike should be ready for an onslaught of phishing emails targeting them, discussing the breach and the need to take action. If you have a configurable phishing test solution where you can customize the messaging, that would probably be a great one to run this month, especially if you're a LastPass corporate customer whose users actively store credentials and so on in the corporate offering. 
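The key-fragment scheme Britton describes can be sketched in a few lines. This is an illustrative sketch only: combining fragments by XOR is an assumption for the example, not LastPass's confirmed implementation, and `derive_hidden_key` is a hypothetical name. The point it demonstrates is why a stolen backup that lacks even one fragment doesn't yield the hidden master key.

```python
import secrets

def derive_hidden_key(fragments):
    """Combine separately stored 256-bit key fragments into one key by XOR.

    Sketch only: XOR is assumed here for illustration. With uniformly
    random fragments, any single fragment acts like a one-time pad, so
    an attacker holding only one learns nothing about the combined key.
    """
    key = bytes(32)  # 256 bits of zeros to start
    for frag in fragments:
        if len(frag) != 32:
            raise ValueError("each fragment must be 256 bits (32 bytes)")
        key = bytes(a ^ b for a, b in zip(key, frag))
    return key

# Two separately stored, cryptographically random 256-bit fragments,
# e.g. one held by the identity provider and one by the vault service.
k1 = secrets.token_bytes(32)
k2 = secrets.token_bytes(32)

hidden = derive_hidden_key([k1, k2])
assert derive_hidden_key([k2, k1]) == hidden  # XOR is order-independent
```

The takeaway is the same one to verify with your IAM team: confirm where each fragment lives and that no single stolen data store contained enough of them to reconstruct the key.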

Britton: [00:11:49] In a similar vein, I want to quickly mention Okta's GitHub breach, because it's basically the same thing that happened with LastPass. There was some source code stolen from GitHub. Okta announced this late in December, and it was actually the third different compromise they've experienced in 2022. So just like with LastPass, we say keep your eye on this, because it could be part of a larger attack campaign that will result in something very bad. This one seems maybe even more likely to head that direction, since they've experienced three different events. And if you're an active customer, there are probably some questions you should be asking. Has the platform been completely compromised in a way that threatens our organization? Is there something fundamentally wrong with how Okta is managing its environments, especially its dev environments? What, if anything, can I do right now to reduce risk to my organization? So definitely keep an eye on it. It's very similar to what happened with LastPass, and we've now seen how that turned out. A really interesting study came out right before the New Year from a group of authors in the JAMA Health Forum, not a forum I was familiar with, but I wanted to share it because it covers some amazing statistical data on trends in ransomware attacks on US hospitals, clinics, and other healthcare delivery organizations from 2016 to 2021. In this study, they created the Tracking Healthcare Ransomware Events and Traits, or THREAT, database. 

Britton: [00:13:12] It's a comprehensive accounting of 374 ransomware attacks on US healthcare delivery organizations from 2016 to 2021, as we mentioned. The key questions they were trying to answer are, one, how frequently healthcare delivery organizations experience ransomware attacks, and two, how the characteristics of these ransomware attacks have changed over time. And they turned up some really valuable data for those of you who are fighting the fight to quantify this risk in some way, whether you're doing it formally through the FAIR models of the world or you're just trying to put some numbers to this to help with your estimations on more of an ordinal scale. Here are some really fascinating stats that I'll read through. For the five-year period of the study, there were 374 ransomware attacks on HDOs. In that same period, the authors found that the annual number of ransomware attacks more than doubled, from 43 in 2016 to 91 in 2021. PHI exposure increased more than 11-fold, from approximately 1.3 million records in 2016 to more than 16.5 million in 2021. 84 of these ransomware attacks, or 22.5 percent, had no available information on PHI exposure and did not appear in the OCR database. Of the 290 ransomware attacks reported to HHS, the majority, 203, were reported outside of the legislated reporting window of 60 days following the attack. The authors found that only about one in five healthcare organizations were able to restore data from backups. 
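For anyone folding these figures into their own risk quantification, the headline numbers just cited reduce to a bit of simple arithmetic. This is an illustrative recap using the values as read in the episode, not data pulled directly from the study:

```python
# Headline figures from the JAMA Health Forum THREAT database study,
# as read in the episode (five-year window, 2016-2021).
total_attacks = 374
attacks_2016, attacks_2021 = 43, 91
records_2016, records_2021 = 1_300_000, 16_500_000
reported_to_hhs, reported_late = 290, 203
no_exposure_info = 84

print(f"Attack growth: {attacks_2021 / attacks_2016:.1f}x")           # "more than doubled"
print(f"PHI exposure growth: {records_2021 / records_2016:.1f}x")     # "more than 11-fold"
print(f"Absent from OCR database: {no_exposure_info / total_attacks:.1%}")
print(f"Reported past the 60-day window: {reported_late / reported_to_hhs:.0%}")
print("Restored from backups: roughly 1 in 5 organizations")
```

Run it and the late-reporting figure stands out: about 70 percent of the attacks that were reported to HHS at all were reported past the legislated window.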

Britton: [00:14:48] That is a huge one, because we know that backups are, if not our primary defense, certainly one of our main defenses in a ransomware event: we say we're not paying the ransom, we'll rely on the backups. It looks like out of the attacks they studied, only one in five were actually able to successfully restore. So that's a big one to talk about with your IT shop, your disaster recovery teams, your data center operations, your cloud operations, etc. A couple more: in 59 ransomware attacks, bad actors made some or all of the stolen data public, generally by posting it on dark web forums where the data is advertised for sale. And the last one: clinics of all specialties were the most common targets for ransomware attacks, rather than the large hospital systems. That's one we've talked about a lot, and I'm sure you've seen it a lot yourself. That is definitely the trend. The smaller specialties, the smaller clinics that are just not as mature, and of course those third parties who have access into many, many healthcare organizations, are the trending attack targets, as opposed to the larger systems themselves, which may have more resources and more mature security practices. So continuing through this, the authors of the paper make three recommendations after conducting the study, and I think they're pretty important, so I'd like to go through them with you. Number one, they say we need to improve our existing data collection processes. 

Britton: [00:16:13] They believe that half of the ransomware attacks are not reported within the 60-day window after breach discovery, and one in five ransomware attacks are not reported at all, either because fewer than 500 records were compromised or because of confusion about whether a ransomware attack that only encrypts the data, but doesn't necessarily exfiltrate it, counts as a breach, since that data in theory wasn't stolen. There has been some guidance from HHS that clarifies that, but it is still sort of a judgment call, and the burden of proof is on the victim to show that data was not actually stolen or exfiltrated. And I think the authors are saying that may not be the best way to handle this. So a couple of thoughts here. I completely understand what the authors are saying and where they're coming from, but I also know what it's like to be responding to an event like this, and we have to balance the speed of reporting with the realities that all these teams face of keeping their business operations going, restoring systems, and trying to get back to some semblance of normalcy. When this happens, it's the worst day, week, or month of the year or decade for pretty much everyone involved. Family time, personal lives, and sleep are not happening during this. So I agree that we need to be better at this, but I think we've also got to find a way to do it that relieves the burden on the victim organization. 

Britton: [00:17:43] The victim organization is trying to recover, because that is first and foremost the goal, especially in a healthcare setting, when you've got to get patient care back online and systems back up for clinicians to use, and so on. At the same time, with some of the new legislation we're seeing come around that requires reporting in three days or ten days, 60 days doesn't seem all that hard anymore. So we can definitely do better, and I agree with the recommendation. I just think a measured approach, and thinking about the realities of the situation, are important. Number two, they recommend that we expand data collection, because currently there is no requirement to report what the ransom demands from the attacker were, or whether or not the victim paid. They argue that this lack of data basically makes it impossible to paint the true picture of how bad the problem is, and thus undercuts our collective ability to get the support needed to address it. And this is certainly an interesting topic, too. I'm sure most boardrooms will say absolutely not to this: we are certainly not going to tell anyone what we paid, or even what the demand was. It's kind of a short-term pain for long-term gain situation in my mind. 

Britton: [00:18:57] If we want better hard figures to prove how bad the problem is, this would certainly help. But this feels, even more than number one, like an uphill battle. It would help if there were a federal entity who could support this and do it in a way that reduces compromised entities' fear of punishment and penalty. A lot of the agencies say that's what they're doing, but I think there's still a lack of trust: we don't want this to come back and harm us, so we're a little hesitant to share information with law enforcement. We've got to get past that. We've got to work together with the three-letter and four-letter agencies to get past that and to truly execute on this. I think there's something there, but it's definitely an interesting topic and probably a bit of an uphill battle. And then their third recommendation: align cybersecurity recommendations with the realities of healthcare delivery. They urge policymakers to prioritize evidence-based actions that have been shown to work within the context of healthcare delivery organizations, because the list of recommended best practices for avoiding ransomware attacks is long and expensive. So there is concern that those recommendations are not actionable for the average hospital, or certainly for the smaller clinics, which actually represent the largest slice of the ransomware victim pie, as we've said in a few of the other stories here today. Given existing IT 

Britton: [00:20:23] Budgets and workforce challenges, this is just a major problem. They also mention that carrots and sticks are needed: enforcement already exists, but subsidies and technical assistance are needed as well on the carrot side. So I totally agree with this one. The realities of healthcare make these massive, comprehensive frameworks that we all depend on and execute against really tough to fully implement, especially at the smaller-organization level. However, I don't know that these authors took into account some of the recent developments we've covered extensively, like HHS and CISA's Cybersecurity Performance Goals and OCR-recognized security practices, which actually make this more doable. These are the kinds of things we need to be looking to for practical, implementable steps that entities can take to cover their bases. You've still got to do more than those things, but at least they cover the bases as a starting point. The mention of carrots really stands out to me, too. Again, if you've listened to us at all recently, you've heard me mention there are some hints at Meaningful Use-style incentive programs centered around cybersecurity, and I'm really curious to see where that goes, because I just think that could be a big, big deal for the healthcare industry, to get ahead of this problem in a way that right now we just aren't. All right, it wouldn't be a tech podcast without talking about ChatGPT. 

Britton: [00:21:49] Right. The cool new tool that everyone in the tech community is loving right now has unfortunately already been turned to nefarious purposes by attackers. That didn't take long, and we're not surprised. Some researchers have recently been successful in getting ChatGPT to write phishing emails and malicious code. On the phishing side of things, attackers have already gotten better at writing verbiage that doesn't raise alarm bells, but this takes that to a whole new level. Check Point researchers told ChatGPT to, quote, write a phishing email from a fictional web hosting service, and it did a really good job. Researchers at Abnormal Security took a less direct approach and asked it to write an email that, quote, has a high likelihood of getting the recipient to click on a link. In the first case, the researchers got a warning that this may violate its content policy, but it still produced a result for them. In the second case, it wasn't flagged, since they didn't explicitly ask it to be involved in committing a crime. So I actually want to give OpenAI some credit here. This article came to my attention about two weeks ago, I think that's when it came out, and just a few days later I went and tried the hey-write-a-phishing-email thing in ChatGPT myself. I got the alert about it violating its terms of ethics, and it would not actually perform the operation for me. 

Britton: [00:23:15] It didn't give me a result. So OpenAI has said they are constantly working on ways to combat abuse, and at least in this case, it appears they've done that. So credit to them; a little personal science experiment of my own proved it. But it's also just a super interesting topic to be on the minds of security pros. It is inevitable that this will be abused, not just for security, but for who knows what, right? You could probably go down a rabbit hole thinking of all the bad ways this thing can manifest, and whatever OpenAI does to combat it will by nature be largely reactive. That's the nature of the beast, right? I've seen tons of content about how great it is and how it can make the lives of coders easier. And it's true, 99% of those folks are doing good work for good reasons. But we all know that the 1% exists, and we know what it will do with any tech revolution. I think that's just something we're going to have to be aware of. I certainly don't envy the ethics team at OpenAI. All right, here's a quick hitter in the threat domain that I think healthcare security folks should hear. The Clop ransomware group is taking advantage of the telehealth revolution by weaponizing medical documents and images. One example is that they'll intercept a telehealth referral for imaging or other services, then send a file disguised as the results back to the physician who sent the referral. 

Britton: [00:24:39] So the physician opens it, and it executes malware. There are even examples of them actually signing up for a legitimate telehealth appointment that they don't really need, specifically for the purpose of sending malware-laced files back to the physician after the referral occurs. That second one is a really tough one, because it appears as a legitimate appointment to the healthcare provider. This attack is most prevalent right now in smaller operations like dentist offices and physician practices. The main recommendation would be to make sure those files coming back into your network have to route through a gateway that does the typical things, like AV scanning, sandbox detonations, etc., rather than being delivered directly to an end user. Again, that may be the type of challenge that's really hard for some of these smaller organizations to build the process and technology around, but it's probably table stakes for any larger operation. The element of trust here is just really problematic, in that you can't necessarily rely on your training, no matter how good your phishing training is, for folks like this to be suspicious, because, especially in that second example, they are in a trusted situation. They have really no reason to think that this isn't a safe thing to click on. So this is an interesting one. I wanted to mention it in case you don't have some of these protections in place for how you're receiving files back from either patients or referrals, or if you maybe aren't really sure how that business process works. 
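The gateway recommendation above can be sketched as simple routing logic. Everything here is hypothetical and for illustration only: `gate_inbound_file`, the allowlist, and the `av_scan` and `sandbox_detonate` callables are stand-ins for whatever real security tooling an organization actually runs.

```python
# Illustrative allowlist of file types a referral workflow might expect.
ALLOWED_EXTENSIONS = {".pdf", ".jpg", ".png", ".dcm"}

def gate_inbound_file(filename, av_scan, sandbox_detonate):
    """Decide whether an inbound referral/telehealth file reaches a clinician.

    av_scan and sandbox_detonate are callables standing in for real
    tooling; this is a sketch of the control flow, not a production gateway.
    """
    dot = filename.rfind(".")
    ext = filename[dot:].lower() if dot != -1 else ""
    if ext not in ALLOWED_EXTENSIONS:
        return "quarantine"      # unexpected file type
    if not av_scan(filename):
        return "quarantine"      # known-bad signature
    if not sandbox_detonate(filename):
        return "quarantine"      # suspicious behavior when opened
    return "deliver"             # only now does the clinician see it

# Even from a "legitimate" appointment, a disguised executable is held back
# before any clinician can click on it.
print(gate_inbound_file("results.exe", lambda f: True, lambda f: True))  # quarantine
```

The design point is that the decision happens at the gateway, not at the clinician's desk, which is exactly why it defeats the trusted-appointment scenario that training alone cannot.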

Britton: [00:26:13] That's a great example of needing to understand your business and your processes so that you can build technology around them to protect them. Now, I wanted to share this next one because it's such a different angle from what we're normally used to seeing and covering on the podcast. The LockBit ransomware group has apologized for an attack on Toronto's Hospital for Sick Children and has laid the blame at the feet of one of their, quote, partner organizations. They issued this apology on December 31st on the dark web pages where they post ransoms and data leaks. They also offered the hospital a free decryptor to unlock its data. The hospital has said the attack delayed lab and imaging results, knocked out phone lines, and shut down the staff payroll system, another example of the kind of operational impact we saw in the CommonSpirit news. As of January 2nd, so this is a couple of weeks old now, but I haven't seen an update, over 60% of its priority systems had been brought back online, including many that had contributed to those diagnostic and treatment delays, and they say restoration efforts are progressing well. But they have been under a code gray, a hospital code for system failure, since December 18th. 

Britton: [00:27:24] Again, this is as of January 2nd, so you're looking at two solid weeks, and I'm not completely sure what the status is as of today, January 13th, when I'm recording this. The hospital is apparently assessing that decryptor with some third-party expert help. And as you all know, these decryptors are not a silver bullet; there are plenty of stories of victims paying and getting decryptors only to find out they don't actually work. So here's hoping Toronto's Hospital for Sick Children can recover quickly. This appears to be the first time this notorious ransomware group has issued an apology or offered a free decryptor. So I guess [00:28:00] it's nice to see they have a heart after all. But, you know, this feels a lot like the promises from threat groups back in 2020 not to attack healthcare during COVID, which lasted about two weeks. All right, switching gears to third-party risk: a group of healthcare CISOs has formed a collaborative to solve the third-party cyber risk problem. This council is called the Health 3rd Party Trust Initiative, or Health3PT for short, and is committed to bringing standards, credible assurance models, and automated workflows to solve the third-party risk management problem and advance the mission to safeguard sensitive information. For full disclosure here, our sister company CORL learned about this a few months ago and is supporting the initiative along with HITRUST. From their press release: unfortunately, today's methods to manage these third-party risk exposures are burdensome and inadequate, with each vendor handling their assessments differently and often manually, resulting in blind spots on risks, limited follow-through on remediation of identified risks, complacency regarding continuous monitoring, and insufficient assurance programs to prove that the right security controls are in place. 

Britton: [00:29:06] This is especially true for smaller organizations that have limited resources and are often where many breaches occur. In response, Health3PT is collaborating to overcome these challenges and achieve greater efficiencies throughout the ecosystem. Health3PT will focus first on a series of common practices to effectively manage information security risks associated with vendors and third-party service providers. These include methodologies and tools that address multiple best-practice frameworks, foster standardization and transparent assurance and validation, and address legislative and regulatory requirements. They go on to say that they will publish their first deliverable in Q1 of 2023, which looks like it's going to be research on third-party risk metrics to benchmark the state of the industry. And then some interesting things here: later in 2023, Health3PT will establish some working groups and will host industry-wide events, including a summit for vendors, for healthcare third-party risk management stakeholders, and for assessor organizations. So I'm super interested to see where that goes. All of this is interesting, but I think the best news of all is that this Health3PT council is made up of a mixture of security leaders from health systems and payers, but also from healthcare service organizations and vendors. 

Britton: [00:30:20] That is just critical. Some of the names mentioned on the provider side are CVS, HCA Healthcare, Memorial Sloan Kettering Cancer Center, Premera Blue Cross, and Humana. And then on the vendor side, AmerisourceBergen, Health ICS, and HealthStream. That partnership is so critical because this can't be an adversarial relationship anymore. We've got to work together, vendor to healthcare organization, come together and say there's a better way to do this. And I'm really, really interested to see where this goes. Okay, one last topic for today. This one is actually a little bit dated, and it's not specific to the healthcare industry, but when it came to my attention, I just thought, I have to cover this, because the FTC has set a major, unprecedented accountability standard with a recent action against Drizly and their CEO, James Cory Rellas, for security failures that exposed the data of 2.5 million consumers. So if you haven't seen this, please listen to the next 4 minutes or whatever. This is some mind-boggling stuff; I've just never seen anything like it. The FTC order requires Drizly to destroy unnecessary data, restricts future consumer data collection and retention, and binds the CEO to specific data security requirements. The FTC actually published a press release on their own website on October 24, 2022, covering what happened.

Britton: [00:31:46] So again, a little bit dated, but I just had to cover it because of what's in here. Some of the quotes and statements in this story are absolutely remarkable. So here are a couple of them from the FTC. Quote, Our proposed order against Drizly not only restricts what the company can retain and collect going forward but also ensures the CEO faces consequences for the company's carelessness, said Samuel Levine, director of the FTC's Bureau of Consumer Protection. Again, Samuel Levine speaking. Ceos who take shortcuts on security should take note. So wow, already, right? Well, that's not it. Later on in the press release, quote, Notably, the order applies personally to relist who presided over Drizly lacks data security practices as CEO in the modern economy, corporate executives frequently move from company to company, notwithstanding blemishes on their track record. Recognizing that reality, the commission's proposed order will follow reckless even if he leaves, Drizly specifically relists will be required to implement an information security program at future companies if he moves to a business collecting consumer information from more than 25,000 individuals and where he is a majority owner, CEO or senior officer with information security responsibilities. So I have never seen language like this targeted to an individual. So it really floored me. If you've seen that before and I'm you know, maybe I'm behind the times, I'd love to hear about it and get examples of it. But this was new to me. Now it appears that part of the reason for this direct approach was that Drizly was more than lax about its security practices. 

Britton: [00:33:30] They had several security incidents in the previous four years that the FTC and others investigated, and apparently they never really addressed any of the deficiencies. At the same time, they were making public statements claiming that they used appropriate security practices, despite not making many, if any, of those changes. Some examples the FTC cites include not requiring two-factor authentication for employees using their GitHub instance. Interesting data point, given some of the stories we've talked about today with Okta and LastPass. I don't know that that's what's going on there, but you just keep seeing some themes run through these stories, don't you? Also, not doing anything to limit employee access to customer personal data, not having written policies or procedures, and then not training employees on those unwritten policies and procedures, which, you know, is kind of hard to do if you just don't have them, and then not having a senior executive responsible for security. So if you're not familiar with the FTC, their jurisdiction essentially is that they enforce federal consumer protection laws that prevent fraud, deception, and unfair business practices. They also enforce federal antitrust laws that prohibit anti-competitive mergers and other business practices that could lead to higher prices, fewer choices, or less innovation. They cover virtually every area of commerce, with some exceptions concerning banks, insurance companies, nonprofits, transportation and communications common carriers, air carriers, and some other entities.

Britton: [00:35:01] So I've not come across too much in the way of the FTC in my career, but it's fairly obvious from this scathing press release that Drizly was not doing much in the way of security, so the FTC went after them hard. It's just shocking to see those personal penalties assigned here. You know, I think this is the kind of story that keeps CEOs up at night. I've had conversations with some in my career about the scary kind of personalized penalties that have come down before, so this isn't the first time I've seen that. But this level of stick-to-itiveness, where the penalty follows you wherever you go, is pretty amazing. And as you've also heard in some other podcasts and things we've done, there are rumblings of more FTC rulemaking coming. So this is one to keep in mind, even if you haven't had a lot of interaction with the FTC or dealt with their compliance obligations in the past; that could be something that changes in the future. That's all for this session of The CyberPHIx Healthcare Security Roundup. We hope this was informative for you, and we'd love to hear from you if you want to talk about any of this. Please just reach out to us at CyberPHIx@MeditologyServices.com. That's all for this week. So long, and thanks for everything you do to keep our healthcare organizations safe.