Why Are You Getting This?
You signed up to receive the Tips, or initiated contact to stay in touch with Rebecca and/or Privacy & Security Brainiacs (PSB) and consented to receive the Tips. Please read our Privacy Notice & Communication Info at the bottom of this message for more information. You may unsubscribe from there as well.
Security and Privacy April Fools!
This month we bring you a roundup of interesting news items, answers to reader and listener questions, and tips for protecting your information and technologies from the growing number of April fools, criminals, and scammers throughout the world.
Do you have stories, examples, or concerns about the topics covered in this issue that you would like us to provide feedback on? Send them over! We may discuss them in an upcoming Tips.
We hope you are finding all this information valuable. Let us know! We always welcome your feedback and questions.
Thank you for reading!
April Tips of the Month
- News You May Have Missed
- Privacy & Security Questions and Tips
- Data Security & Privacy Beacons*
- Where to Find the Privacy Professor
We’re sharing interesting security and privacy news that demonstrates these types of risks exist basically anywhere in the world, and that everyone needs awareness. Here is a list of 17 such articles our Privacy & Security Brainiacs team found interesting throughout the past month, in no particular order. Read next month for even more. Do you have interesting, unusual, bizarre or odd stories involving security and privacy? Let us know!
1. A Lincoln, Nebraska, woman exploited a gas pump software coding error to get over $27,000 of free gas. She was charged with one count of theft by unlawful taking, over $5,000. Police said she used the unsecured code in the pump to exploit the coding error and pump free gas for more than six months.
2. Friends of a dead man propped his corpse up in their vehicle and withdrew money from his bank account. The bank had previously allowed the women to withdraw money from his account while he was alive, as long as he accompanied them. He was visible in the car to the bank staff, so they allowed the money to be withdrawn.
3. An engineer bought a clear plastic prison laptop on eBay, then posted on X/Twitter and got help unlocking the secured device, which turned out to be a secure computer used for jail and prison education. Of particular concern was an article on a hacker website that shared the default password for the underlying software that boots the laptop’s operating system, which the Department of Corrections considered a security risk. The department then collected the 1,200 devices assigned to incarcerated students “to provide an immediate system update.”
4. In response to a group of Beverly Hills middle school students creating and circulating AI-generated nude photos of classmates, the district's school board voted to expel five students involved in the scandal.
5. A cyberattack against an Ottawa resident wiped out his cache of Aeroplan loyalty points, enough for a pair of round-the-world flights. Air Canada says the incident occurred because "the member's email account was compromised, and entry was gained that way, not through the Aeroplan platform." Air Canada recommends, but does not require, multi-factor authentication, which likely would have prevented the account takeover and the theft of the points.
6. A retired Army lieutenant colonel who held a Top Secret security clearance at the US Air Force Strategic Command attended classified briefings on the war in Ukraine from February through April 2022. He then sent this classified information to someone who claimed to be a woman living in Ukraine.
7. A Cambridge professor escaped a medieval castle tower toilet using eyeliner and cotton. A great real-life problem-solving example.
8. At cruising altitude, about 30 minutes to an hour after departure, a surprise feline passenger appeared in the cockpit and clawed at the terrified pilot. Unfortunately for the passengers on board, the pilot was forced to make an emergency landing back at Khartoum International Airport after the crew’s attempts to wrangle the cat failed. Clearly there is a significant physical security vulnerability somewhere. It would be a good time to do a full assessment of all types of risks as well.
9. Burned-out cybersecurity professionals dealing with layoffs and stressful working conditions are increasingly turning to another way to earn a buck: cybercrime. One example: a Google engineer is accused of stealing AI secrets, taking more than 500 files containing Google IP that he allegedly sold to two China-based startups he was working with at the same time. This is a huge insider threat that ethical cybersecurity pros need to be aware of.
10. Please think twice before letting an AI-powered app scan your genitals for STIs. The HeHealth app promises its AI service can accurately scan pictures of penises for signs of sexually transmitted infections and cancer. This is raising the ire of healthcare advocates and digital privacy experts, among many other critics. HeHealth claims its services are HIPAA compliant because it utilizes Amazon Web Services (AWS) “to collect, process, maintain, and store” data. YIKES!!! FACT: A business is NOT in compliance with HIPAA simply because one of its contracted vendors is (or claims to be) in compliance. HeHealth, as a separate entity from AWS, must meet all HIPAA requirements within its own business ecosystem if it claims to be compliant, and if it is a HIPAA covered entity or business associate.
11. Romanian mobs are stealing personal information at self-checkout aisles as authorities warn of organized crime. The gangs are doing this by placing skimming devices on the self-checkout aisles that can extract a customer's information. NOTE: Any organization that uses ATM and credit card self-scanning devices needs to assign a person or team to check them regularly for skimmers. Skimmers are increasingly being used, by many different types of criminals beyond the Romanian mobs, to steal the personal data of merchants' customers.
12. The HHS OCR delivered its annual reports to Congress on HIPAA compliance and breaches of unsecured protected health information (PHI). The reports highlight where covered entities and business associates need to focus their HIPAA compliance efforts.
13. On March 19, 2024, the U.S. Environmental Protection Agency and National Security Advisor Jake Sullivan sent a letter to all U.S. governors inviting state environmental, health, and homeland security secretaries to a meeting convened by their deputies to discuss the urgent need to safeguard water sector critical infrastructure against cyber threats.
14. New cars are now ‘the worst’ products when it comes to protecting consumer data. Connected cars (vehicles equipped with internet access) are becoming the norm, sounding the alarm on many privacy risks. Most car manufacturers bury the options to opt out of unnecessary data sharing, since there is money to be made from the sale of that data. A McKinsey report from 2021 predicted that various use cases for car-data monetization could deliver $250 billion to $400 billion in annual revenue for industry players by 2030.
15. A Texas man boarded a Delta flight at Salt Lake City International Airport on March 17 using a photo of another passenger's ticket. He entered the lavatory in the front of the aircraft as others were boarding, then moved to the back lavatory after boarding was complete. After the airplane started to taxi, he left that lavatory; a flight attendant noticed there were no seats available on the plane for him and approached him. The plane returned to the gate, and he was subsequently arrested and charged.
16. There are still many people who use ham radios; I know several of them! Here is an interesting article they should read, “The Most Hackable Handheld Ham Radio Yet.” Ham radios can be controlled through application programming interfaces (APIs) running on Windows and Apple systems, and this hack could take down the entire system.
17. This interesting news suggestion comes from Tips reader Elijah: Handheld Computing Device Security Management Overview by Rebecca Herold. Yes, me! He said he was searching for mobile computing policies and was surprised to find one of my educational talks online, a session I did at an IIA meeting in 2004! Elijah also let me know that he found most of my guidance, suggested security and privacy risks, and associated model policies still applicable today.
Have you run across any surprising, odd or bizarre security and/or privacy news? Please let us know! We may include it in an upcoming issue.
Privacy & Security Questions and Tips
Rebecca answers hot-topic questions from Tips readers
April 2024
We continue to receive a wide variety of questions about security and privacy. We are also receiving more questions than ever about HIPAA and personal health data. Thank you for sending them in! This month, in addition to our Question of the Month, we’ve included seven Quick Hits questions. Are the answers interesting and/or useful to you? Please let us know! Keep your questions coming!
Image by Alexandra Shatornaya
Question of the Month:
Q: If a nurse in our hospital is not directly working with patient B but has access to the system that contains patient B’s health records AND she intentionally goes in to look at that patient’s records, is this a reportable HIPAA breach?
A: This question is actually a mash-up of six different questions from six different individuals in six different organizations throughout the U.S. about a similar situation. Upon additional questioning, we learned the questions originated from discoveries made in their own organizations after recent news about hospital workers in the UK snooping on the health records of Catherine, Princess of Wales.
Over the years I’ve seen many healthcare providers whose entire caregiver staffs have access to patient records, including staff who are not providing care to a given patient and who fall victim to curiosity; that is never a good situation to deal with. Sometimes the records of a famous person, or even a local celebrity or politician, are simply too tempting for some.
Some systems, particularly legacy systems, do not allow for granular access control limitations, so technologically they must allow all nurses, and others who support care in any way, access to all patients’ records. These types of situations are where administrative policies and controls are necessary. Organizations need policies that establish requirements such as the following (a simple log-review sketch follows the list):
1) Each employee, contractor, etc., may access only the records of patients for whom they are directly involved in providing care.
2) All access to patient records will be logged by the system (a technological control).
3) The logs will be reviewed by a staff member assigned that responsibility who is not a caregiver (to avoid conflicts of interest), and no other staff member may have access to the log files.
4) Anyone who violates requirement 1) (accessing the records of patients for whom they are not involved in providing care) will face disciplinary action up to and possibly including termination.
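Here is a minimal sketch of what the log review described in 2) and 3) could look like. The file names and columns (access_log.csv, care_team.csv, timestamp, staff_id, patient_id) are hypothetical examples, not from any particular system; real record systems will differ.

import csv

def flag_snooping(access_log_path: str, care_team_path: str) -> list[dict]:
    # care_team.csv is assumed to have columns: patient_id, staff_id
    with open(care_team_path, newline="") as f:
        care_team = {(row["patient_id"], row["staff_id"]) for row in csv.DictReader(f)}

    # access_log.csv is assumed to have columns: timestamp, staff_id, patient_id
    flagged = []
    with open(access_log_path, newline="") as f:
        for row in csv.DictReader(f):
            if (row["patient_id"], row["staff_id"]) not in care_team:
                flagged.append(row)  # access by someone not assigned to this patient
    return flagged

for event in flag_snooping("access_log.csv", "care_team.csv"):
    print(event["timestamp"], event["staff_id"], "accessed", event["patient_id"])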
If an employee of any type of HIPAA covered entity (CE), such as a healthcare provider, insurer, or clearinghouse, or an employee of a business associate (BA), purposefully accesses the patient records of an individual for whom they are not involved in treatment, payment, or operations (TPO), then yes, it is a breach. A HIPAA privacy breach is any event that results, or could result, in unauthorized use or disclosure of PII/PHI, where persons other than authorized users have access (or potential access) to PII or PHI, or use it for an unauthorized purpose.
This is in contrast with an allowable type of incidental use or disclosure, which is not a HIPAA violation. Incidental uses and disclosures occur as a by-product of another permissible or required use or disclosure. This is as long as the CE has applied reasonable safeguards and implemented the minimum necessary standard, where applicable to the context within which healthcare is being provisioned, with respect to the primary use or disclosure. An incidental use or disclosure is a secondary use or disclosure that cannot reasonably be prevented, is limited in nature, and that occurs as a result of another use or disclosure that is permitted by HIPAA. For example, a healthcare professional may discuss lab test results with a patient or other provider in a joint treatment area, such as in a post-surgery recovery area. Others in the area may overhear some of the discussions as a result. However, an incidental use or disclosure is not permitted if it is a by-product of an underlying use or disclosure which violates HIPAA.
Many such PHI breaches have happened throughout the years. For example, in a recent case several security guards from Yakima Valley Memorial Hospital impermissibly accessed the medical records of 419 individuals over an extended period of time. On June 15, 2023, the HHS OCR fined Yakima Valley Memorial Hospital $240,000 and required it to update its policies and procedures to safeguard PHI and to train its workforce members to prevent this type of snooping behavior in the future.
As OCR Director Melanie Fontes Rainer said at the time of the Yakima Valley settlement, “Data breaches caused by current and former workforce members impermissibly accessing patient records are a recurring issue across the healthcare industry. Health care organizations must ensure that workforce members can only access the patient information needed to do their jobs.”
If a healthcare provider employee is purposefully accessing the PHI of a patient for whom they are not providing treatment, it is a HIPAA breach. HIPAA breaches must be reported as follows (a simple deadline-calculation sketch follows the list):
- To the affected individuals, within 60 days of the discovery of the breach.
- To the media, for PHI breaches involving 500 or more residents of a state or jurisdiction.
- To HHS:
  - For PHI breaches involving 500 or more individuals, notification must be made within 60 days of discovery of the breach.
  - For PHI breaches involving fewer than 500 individuals, the CE must maintain a log or other documentation of such breaches and, no later than 60 days after the end of each calendar year, notify HHS of the breaches discovered during the preceding calendar year, in the manner specified on the HHS website.
  - Given that one person’s PHI is involved here, this breach should be reported to HHS OCR in the associated CE’s annual summary and, based upon the HIPAA requirements and the limited information I have about the situation, likely does not need to be reported to HHS immediately.
- To the CE, when the breach is discovered by a BA. A breach is treated as discovered as of the first day the BA discovers it, when the BA and not the CE is the one who discovers it. BAs should report to the CE as soon as possible, given that the 60-day clock for reporting to the individuals, the media, and HHS starts on that first day of discovery. The CE will be the entity ultimately notifying those other parties, since it has ultimate accountability and responsibility for the security of the PHI, and the direct relationship with the individuals.
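For illustration, here is a minimal sketch of the deadline arithmetic described above, assuming the 60-day windows run from the date of discovery and, for breaches involving fewer than 500 individuals, the HHS report is due 60 days after the end of the calendar year of discovery.

from datetime import date, timedelta

def breach_notification_deadlines(discovered: date, individuals_affected: int) -> dict:
    # Affected individuals must be notified within 60 days of discovery.
    deadlines = {"individuals": discovered + timedelta(days=60)}
    if individuals_affected >= 500:
        # 500+ individuals: the media (for 500+ in a state/jurisdiction) and HHS,
        # also within 60 days of discovery.
        deadlines["media"] = discovered + timedelta(days=60)
        deadlines["HHS"] = discovered + timedelta(days=60)
    else:
        # Fewer than 500: log the breach and report it to HHS no later than
        # 60 days after the end of the calendar year in which it was discovered.
        deadlines["HHS"] = date(discovered.year, 12, 31) + timedelta(days=60)
    return deadlines

# Example: a single patient's record snooped on, discovered April 1, 2024.
print(breach_notification_deadlines(date(2024, 4, 1), individuals_affected=1))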
Appropriate actions should be taken with the employee who inappropriately accessed the PHI, based upon the context of the situation, the intent behind accessing the PHI, any misuse of the PHI, and so on; basically, based upon a determination of the risks created by the situation.
Depending upon the findings from such a breach risk analysis, the associated disciplinary action may be, at a minimum, additional education and perhaps other measures, or it could require termination and/or reporting to law enforcement. It ultimately depends upon the specific situation, the PHI breached, and what the individual did with that information.
For even more guidance and tips about these issues, here are some more of our resources: Visit our webpage; check out our blog; subscribe to our YouTube channel; follow us on LinkedIn.
Quick Hits:
Here are some questions we are answering at a high level. We provide more in-depth information and associated details about these topics in separate blog posts, in videos on our YouTube channel, in infographics and e-books, in posts to our LinkedIn business page, and within our online training and awareness courses.
Q: What is an MITM cyberattack?
A: MITM stands for man-in-the-middle. When a computing device is sending and receiving data with another computing device, such as another computer, an online website server, an internet of things (IoT) product, etc., through a connection that has security vulnerabilities, it is possible for threat actors, such as a hacker or a malicious bot, to inject themselves into that connection path and steal the data passing through it, modify or delete that data, trick the associated users into granting access to corporate networks and servers, or perform an unlimited number of other harmful actions. These are types of MITM cyberattacks.
MITM attacks have been a threat since computers started connecting to other computers many decades ago. A recent real-life example of a MITM attack was against a Tesla automobile, as demonstrated by security researchers. In this case the researchers showed how an MITM attack could occur when registering a new phone key to access and subsequently unlock, steal and take control of a Tesla.
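As one common mitigation, here is a minimal sketch, not taken from the research above, showing how a client can verify a server’s TLS certificate and hostname before sending any data; an attacker in the middle presenting a forged certificate would cause the handshake to fail instead of silently intercepting the traffic. The host name example.com is just a placeholder.

import socket
import ssl

def fetch_https_header(host: str = "example.com") -> str:
    # create_default_context() enables certificate and hostname verification.
    context = ssl.create_default_context()
    with socket.create_connection((host, 443), timeout=10) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            request = f"HEAD / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
            tls_sock.sendall(request.encode())
            return tls_sock.recv(1024).decode(errors="replace")

if __name__ == "__main__":
    print(fetch_https_header())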
Q: I want to be a cybersecurity expert! Help me be one. Tell me, how can I hack a Wi-Fi password?
A: Interesting question! Based solely on the question, it sounds like instead of being a cybersecurity expert, you more specifically want to know how to hack Wi-Fi networks. Let’s start with a couple of facts:
1) Becoming a cybersecurity expert takes a lot of time, experience, and learning. It requires knowing more than just how to hack one specific type of technology; it requires knowledge and capabilities within a wide range of administrative, operational, technical, and physical domains. The very specific area of expertise of Wi-Fi hacking will not make you a cybersecurity expert; it could make you a Wi-Fi hacking expert, though.
2) Hacking Wi-Fi networks that are not your own (that you maintain, pay for, etc.) is a crime in most parts of the world. It is also a huge invasion of privacy for those whose Wi-Fi networks you are hacking. You could be a criminal with expertise in hacking Wi-Fi networks, or you could develop that expertise to support a goal of better protecting Wi-Fi networks, but either way it will not make you a cybersecurity expert; that takes expertise in a very wide range of topics, each with a significant depth of experience and understanding.
With this in mind, instead of telling you how to hack a Wi-Fi network, here’s a high-level list of seven critical actions to secure a Wi-Fi network against criminal Wi-Fi hackers, who look for networks that lack these important protections (a small network-audit sketch follows the list).
1) Change the service set identifier (SSID) of your Wi-Fi network. Most manufacturers still give all their wireless routers a default SSID, so changing this default (which all hackers know) should be done right away.
2) Another default to change is the password; it is also typically the same for all wireless routers of the same model. Make it long and strong: at least 20 characters, with upper- and lower-case letters, numerals, and special characters.
3) Enable encryption. This is turned off by default on wireless routers by most manufacturers. Use WPA3 where available, and WPA2 where WPA3 is not available.
4) Turn off network name broadcasting. This is often turned on by default.
5) Enable automatic software updates. This is usually turned off by default.
6) Enable your wireless router firewall. Most come with it turned off by default. Consider getting an additional firewall; many wireless router firewalls are not strong and lack important basic protections, such as intrusion prevention capabilities.
7) Use a virtual private network (VPN) with your router, as well as on the computing devices connecting to your router.
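To see how your own and nearby networks measure up on items 3) and 4), here is a minimal sketch, assuming a Linux machine with NetworkManager's nmcli tool available; it lists visible Wi-Fi networks and flags any that are not using WPA2 or WPA3.

import subprocess

def audit_nearby_wifi() -> None:
    # -t gives terse, colon-separated output; -f selects only the fields we need.
    result = subprocess.run(
        ["nmcli", "-t", "-f", "SSID,SECURITY", "device", "wifi", "list"],
        capture_output=True, text=True, check=True,
    )
    for line in result.stdout.splitlines():
        ssid, _, security = line.partition(":")
        if not ssid:
            continue  # hidden network (name broadcasting turned off)
        if "WPA3" in security:
            verdict = "good (WPA3)"
        elif "WPA2" in security:
            verdict = "acceptable (WPA2)"
        else:
            verdict = "WEAK: open or outdated encryption"
        print(f"{ssid}: {verdict}")

if __name__ == "__main__":
    audit_nearby_wifi()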
Q: Is it possible for computer viruses to spread through file copying?
A: Yes. If you copy a digital file that is infected by a computer virus, that virus can infect any other computers, systems, or files it is copied into. The specifics of how depend upon the type of virus involved. The virus can also infect computing devices the file is transferred to (as opposed to being copied).
Q: How can healthcare entities ensure their business associate agreements fully address the cybersecurity risks and responsibilities related to PHI protection?
A: The US Department of Health and Human Services (HHS) provides a model business associate agreement (BAA), often also called a business associate contract, that shows the *minimum* components that BAAs must contain. It is a really great model. However, given that it does not include the specific actions that are unique to each covered entity (CE) relationship with each of its business associates (BAs), it is also important to add those details to the BAA to more fully address the cybersecurity risks and responsibilities within the context of each unique CE/BA relationship. The details needed will vary from CE to CE and across each of their associated BAs. However, they should include all the minimum security controls (administrative, technological, and physical) required within the CE’s organization to mitigate the risks created by the specific services and/or products each BA has been contracted to provide.
Make sure to list issues that reflect current risks that have been exploited, as well as the new and emerging tools and practices being used. For example, explicitly include prohibitions against, or requirements and/or permissions to use, the following for supporting the services and products they provide to your organization: online tracking pixels such as Meta Pixels (these often track PHI, usually unbeknownst to the CEs); artificial intelligence (AI) tools (any PHI involved could be violating HIPAA requirements); sharing PHI with others; using internet-of-things (IoT) products (if PHI is involved, breaches and compliance violations could result); working in public areas (using unsecured Wi-Fi networks, or having PHI viewed, overheard, and/or stolen via someone taking images); disposing of computing and storage components without removing data (this happens a lot…and so have subsequent breaches and HIPAA penalties); and using personally owned computing devices to do the work (most are not secured to HIPAA requirements). Add more for your specific situations.
Q: What questions should we never ask our "smart" IoT devices?
A: Smart devices that answer questions are always listening. This is necessary for them to hear the “wake-up” word. There have also been hundreds of situations where recordings were made of what was going on in the surrounding environment when the “wake-up” word was never said. In my own research with one of those devices, such situations have occurred over a dozen times in four years. Everyone with such devices in their environments must expect that anything being said and done there could be recorded. Also realize that those recordings are usually shared with many other entities of all types.
With this in mind, if you are concerned about your, or others’, privacy, here are some questions that should never be asked, whether in business environments or in personal living spaces, because the IoT devices that answer these questions store the recordings in at least one location, and usually in many (including third parties’), putting privacy and information at risk:
- Never ask questions that contain your personal information, including health information, or any other type of information you would not want made public. Even if you don’t ask in a way that explicitly indicates it is your information, your name and/or the names of other household members will be associated with the question and the information.
- Never ask an IoT device what to do in an environmental, physical, health, or other serious emergency. Not only are the answers often incorrect, but you are wasting precious time. Call 911 in such instances.
- Never ask an IoT device for medical advice. These devices are not human; they do not answer with any consideration of the context of the associated situation. They may give general medical information, but if you need to know if you are experiencing a stroke, heart attack, etc., call 911 and/or your doctor, depending upon the situation. Not only for your health, but also to keep those questions out of the multiple repositories and from the thousands of third parties that may be interested in accessing such information about you.
- Don’t ask an IoT device how to commit a crime. It is easy for that type of question to be collected or given to law enforcement, depending upon the question.
- Don’t ask questions that imply you’ve committed or are planning a crime. For example, “Alexa, how can I get blood out of my shirt,” or “Google, where is a good place to get away from everyone and not be found?”
We anticipate you can think of more after reading the previous list. Bottom line, never ask an IoT product a question that could be misinterpreted to mean something completely different than you intended, or that includes your own personal information in some way.
The only way to stop a “smart” IoT device from listening 100% of the time is to unplug it and, if it also uses batteries, take them out.
Q: Are there any security or privacy risks for getting rid of daylight-saving time (DST)?
A: Actually, there are. It is similar to the need to plan ahead to change the computer code of millions of software programs back in the late 1990s to prepare for dates rolling over from beginning with “19” to “20”; in other words, addressing the widespread Y2K software code problems well in advance to prevent harmful impacts. I was part of that work, and at the Fortune 200 financial and health insurance corporation where I worked, thousands of programs had to have portions of code rewritten to prevent program crashes, errors, incorrect calculations, and other problems specific to the purpose of each software program. All of these events could have resulted in security incidents (e.g., loss of confidentiality, data integrity, and access) and privacy breaches (e.g., failed authentications, incorrect changes to personal data, etc.).
It is hard to say how many software programs have code that is run based upon the two DST time changes each year, but it is a sure bet that there are many of them. Eliminating DST would most likely require changes in computer code that performs, and depends upon, time-based and/or date-based calculations, scheduling, and time zone conversions.
The specific programs needing changes would depend upon at least the following considerations:
- Legacy systems: Older software systems require review, and might subsequently require substantial modifications to accommodate the removal of DST-based code.
- Scope of use: Software used in regions and industries that incorporated DST into their business activities, such as finance, transportation, scheduling, and healthcare (including medical devices), would need review.
- Time zone conversions: Programs that perform time zone conversions or time-sensitive transactions across different regions require review and subsequent modification based upon findings.
- Embedded systems and devices: Devices and embedded systems with accurate time-dependent functions, like routers, servers, and IoT devices, may need firmware updates.
- Complex time-related calculations: Software that includes complex time-related operations, such as calendaring, scheduling, event support, time-sensitive data management, and systems management, needs to be reviewed and may require significant adjustments.
Because of the millions of software programs that exist, including hundreds of thousands coded decades ago that are still in use, as well as recent ones that perform activities based upon times, it will be necessary to review all such software code, make the necessary code changes, test thoroughly, and then put the changes into production before any DST elimination actually occurs. A small illustration of DST-dependent time arithmetic follows.
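Here is a minimal sketch, not tied to any particular system, of how ordinary time arithmetic quietly depends on the current DST rules: across the US “spring forward” date, adding one calendar day is not the same as adding 24 elapsed hours. Code that assumes those rules, or hard-codes a UTC offset, would need review if DST were eliminated.

from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

central = ZoneInfo("America/Chicago")
start = datetime(2024, 3, 9, 12, 0, tzinfo=central)  # noon, the day before DST begins

# Wall-clock arithmetic: the same clock time on the next calendar day...
same_clock_time = start + timedelta(days=1)
# ...but only 23 real hours have elapsed, because 2:00 AM jumped to 3:00 AM.
elapsed = same_clock_time.astimezone(timezone.utc) - start.astimezone(timezone.utc)

# Elapsed-time arithmetic: add 24 real hours by working in UTC, landing at 1:00 PM local.
plus_24_real_hours = (start.astimezone(timezone.utc) + timedelta(hours=24)).astimezone(central)

print(same_clock_time)     # 2024-03-10 12:00:00-05:00
print(elapsed)             # 23:00:00
print(plus_24_real_hours)  # 2024-03-10 13:00:00-05:00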
Q: Can receptionists and office personnel read a doctor’s report sent to another doctor without permission, and is it a HIPAA violation?
A: Employees within a HIPAA covered entity (CE), which includes healthcare providers (where doctors and their supporting personnel work), perform a wide range of treatment, payment, and operations (TPO) activities that support the work of the CE. Many of these activities are performed by personnel who never work directly with patients, but whose defined job responsibilities may include the need to view patient data. This is particularly true in small-to-mid-sized organizations where many staff members perform a wide range of different administrative and healthcare support activities. In such situations, those performing receptionist and office support duties may also be supporting the doctors who are treating patients, and in such cases may have responsibilities that include reviewing such reports, for example to prioritize for the doctors, by urgency of response, the order of the reports they need to look at. These types of situations are not HIPAA violations.
If receptionists, office workers, or anyone else without a verifiable, job-related need to read a doctor’s report does read it, that is a HIPAA violation committed by the CE, not just by the individual doing the reading. The CE is ultimately responsible for any of its HIPAA violations.
With these facts in mind, there is not one single yes-or-no answer to your question. The answer depends upon the context of each organization’s operations and the job responsibilities of each of the workers within it. In some situations the answer may be yes, in others it may be no, and in yet others it may be “it depends…” based upon each unique situation. Each organization needs to thoughtfully consider its own business ecosystem and the job responsibilities for each of its employees’ roles to determine whether or not a receptionist or other type of office worker should have access to doctors’ reports to support TPO. The resulting determination needs to be reflected in each CE’s security and privacy policies. The same types of considerations should also take place for the BAs that support CEs.
Data Security & Privacy Beacons*
People and Places Making a Difference
We get many suggestions for beacons from our readers and Rebecca’s podcast/radio show listeners; thank you! We include many of them when the suggestions are for businesses other than the suggester’s own, typically those the suggester feels deserve recognition for noteworthy data security and privacy actions. However, we do not include businesses, organizations, or people trying to promote themselves to get free marketing, and we do not take payments to put organizations or people on this list. We do try to contact as many as possible after publishing our Tips to let them know we put them on our beacons list. If you have someone or an organization to suggest, let us know! We may include them in an upcoming Tips issue.
- Identity Theft Resource Center (ITRC). For maintaining a list of most of the publicly announced breaches that have occurred. It provides the associated industry where the breach occurred, the date the breach was reported, the types of records breached, the actual date of the breach (if it could be determined), the breach geographic location, and a short summary of the associated risks from the breach. Nice work, ITRC!
- ACM. For their “Automated Vehicles” tech brief, which includes some good cybersecurity points.
- FTC. For a couple of their many great free privacy and security related awareness products:
  - Phishing Quiz
  - Tenant background checks guidance: Tenant Background Checks and Your Rights and Disputing Errors on Your Tenant Background Check Report.
- Dan Swanson. For putting together and making available the free privacy book, “Risk and Privacy.” It includes entries from Jim Seaman, Michael Giola, Alan Tang, Andrew Boyarsky, Ulf Mattsson, James Bone, Jessie H Lee, and Sean Lyons.
- EFF. For their article, “How to Figure Out What Your Car Knows About You (and Opt Out of Sharing When You Can).”
- William Webster and Lynda Webster. For their article, “Scams are on the rise, and they’re ruining lives. We can stop it.”
- Mirko Zorz at Help Net Security. For his article, “20 essential open-source cybersecurity tools that save you time.”
- Lucy Handley at Time Magazine. For her article, “What to Do if You’ve Been Scammed.”
*Privacy Beacons do not necessarily indicate that an organization or person is addressing every privacy protection perfectly. They simply highlight noteworthy examples of privacy-aware practices.
Permission to Share
If you would like to share, please forward the Tips message in its entirety. You can share excerpts as well, with the following attribution:
Source: Rebecca Herold. April 2024 Privacy Professor Tips
www.privacysecuritybrainiacs.com.
NOTE: Permission for excerpts does not extend to images.
Privacy Notice & Communication Information
You are receiving this Privacy Professor Tips message as a result of:
1) subscribing through PrivacyGuidance.com or PrivacySecurityBrainiacs.com or
2) making a request directly to Rebecca Herold or
3) connecting with Rebecca Herold on LinkedIn.
When LinkedIn users invite Rebecca Herold to connect with them, she sends a direct message when accepting their invitation. That message states that in the spirit of networking and in support of the communications that are encouraged by LinkedIn, she will send those asking her to link with them her monthly Tips messages. If they do not want to receive the Tips messages, the new LinkedIn connections are invited to let Rebecca know by responding to that LinkedIn message or contacting her at rebeccaherold@rebeccaherold.com.
If you wish to unsubscribe, just click the SafeUnsubscribe link below.