Cyber Crooks Show Their True Colors
 


We've always known cyber crooks and other scammers to be opportunists. But, never before have we seen them stoop quite so low. As people around the planet suffer through health, financial and other stresses related to the coronavirus pandemic, these criminals continue to plague us with an additional layer of dread.

Personally, I received more phishing email attempts in the first 5 days of April than I'd gotten from January through March! This fits the current trend; the FBI says cybercrime reports have quadrupled during the COVID-19 pandemic. While it may not be surprising to see the underbelly of society activate during a chaotic time, it's disheartening all the same.

But, we have some very big protections on our side, and awareness is among the most effective of them. By reading this newsletter and staying up to date with legitimate news sites, you are doing one of the best things you can do to stop COVID-19 scammers in their tracks. You can help others by forwarding this to them, so they are kept up to date, as well.

As you scroll through this month's issue, I hope you'll enjoy the springtime pictures. They offer a little hope that fresh air and renewal are just around the corner!

  
Data Security & Privacy Beacons
People and places making a difference**

Have you seen an organization or individual taking actions to improve privacy? Send me a note to nominate a privacy beacon of your own!

The U.S. Federal Trade Commission (FTC) has really stepped up its efforts to provide warnings about COVID-19 cons. Just a few of the warnings they've released in recent weeks include:


The last of these (the grandparent scam) was visited on me a few weeks ago when I received an email that my 80-year-old aunt was "requesting to follow me on Facebook." However, she and I have been connected on Facebook for many years. The scammers used my aunt's current profile photo and her profile info to create a look-alike account. I can only imagine what kinds of requests "my aunt" would have made of me had we actually connected through this phony profile. It's not difficult to see how people fall for this when it looks so legitimate.

BBB Scam Tracker is another wonderful resource for anyone concerned about communication coming their way via email, text, phone or other means. In addition to providing a repository of reported scams (complete with a heat map), the Scam Tracker gives people a simple, easy way to report a business or offer that sounds like an illegal scheme or fraud.

Red Folder is a downloadable tool that helps users create a plan for their digital identity during a life interruption, be it a natural disaster, illness, accident or memory loss. Have you considered what would happen to your social media pages, or who would take care of all your banking and online passwords, if something happened to you? Right now, Red Folder founders Christopher and Kathy are generously offering this truly thoughtful and comprehensive tool at no charge. Red Folder is a terrific resource for anyone who wants to ensure their life is remembered, and appropriately closed out, the way they want it to be.

**Privacy beacon shout-outs do not necessarily indicate an organization or person is addressing every privacy protection perfectly throughout their organization (no one is). They simply highlight noteworthy examples that are, in most cases, worth emulating.
Virtual conference apps keep work (and cyber attacks) moving   
  

Employees are not the only ones benefiting from the increased use of virtual conferencing. Cyber criminals love these tools, too. Zoom, in particular, has been a favorite target of a very wide range of cyber crime schemes, including credential stuffing. Every type of technology has security and privacy challenges, and virtual conferencing tools are no exception.

Below are some quick tips for securing your meetings and protecting your own privacy, as well as that of your colleagues and -- if you're working out of a home office -- your roommates or family members.

Secure your space before the conference starts. Remove personal or confidential information that could be seen from your webcam, and shut down or log out of external listening and viewing devices, such as Alexa or Siri, security cameras and smart speakers, when they are not in use. Close the door or shut the blinds to prevent those around you from eavesdropping or seeing confidential information.

Secure your network. Require ID and password authentication to get onto your home wireless network; use the strongest level of wireless encryption your router supports (such as WPA3, or WPA2 if WPA3 isn't available), and consider a VPN for an added layer of protection; keep your wireless network, as well as remote and online meeting tools, patched and updated.
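
If you need a strong, hard-to-guess password for your wireless network, or a meeting passcode, and don't have a password manager handy, a short script can generate one for you. Below is a minimal sketch using only Python's standard-library secrets module; the function name and the 20-character length are simply illustrative choices, not a requirement.

# Minimal sketch: generate a strong random passphrase for a Wi-Fi network
# or meeting passcode. Uses only the Python standard library.
import secrets
import string

def generate_passphrase(length=20):
    """Return a random passphrase of letters, digits and a few symbols."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

if __name__ == "__main__":
    print(generate_passphrase())  # paste the output into your router or meeting settings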

Follow your security and privacy policies. Your employer should have these in place. If they don't, please review and download the complimentary set of work-from-home, remote working and mobile computing device security and privacy policies available at Privacy Security Brainiacs. A policy on virtual conferencing tools is included.

For more context and tips on virtual conferencing and remote working security and privacy, check out Staying Protected While Connected - Video Conferencing Best Practices for Businesses and Consumers.

The Dangers of Integrating Robots into Daily Life
As lifelike as it may seem, AI is not human.

Alexa and similar devices have been built to emulate human interaction. They've gotten so close to that ideal that some may believe they have more capabilities than they actually do.

Take the heart-wrenching story of a nursing home resident who passed away from COVID-19. In her last days, she begged her Amazon Echo to help relieve her pain. It's unfortunate proof we've come to rely on artificial intelligence (AI) for more than it's actually able to provide.

AI-powered devices are becoming increasingly intelligent. You may recall I purchased an Amazon Echo to perform some research with throughout this year and report on for Data Privacy Day 2021. That Echo has since learned of my preference for jazz, especially music performed by Ella Fitzgerald, and the device will kindly ask if I'd like to hear similar songs. Convenient, yes. But, this type of interaction presents a bigger issue. For instance, I've been discussing topics in the vicinity of the Echo that I usually wouldn't... topics like race car tires... and now the Echo is playing occasional ads for tires and servicing. Coincidence? Hardly.

Not only can we come to appreciate a device for convenience and entertainment, we can start to feel like it understands us. A line of code that generates a "Good morning" or "Sweet dreams" can lull us into believing we've built a true personal human connection.

It can be easy to form a bond with "someone" who seems to be there for you 24x7 and who responds to (even anticipates) your every beck and call. It might sound a little silly, but a great listener who remembers what we like, has answers to many of our questions and will do most of what we ask, all while employing common courtesy, can feel like a friend. AI has the potential to set us up for a dangerous emotional connection to our technology. Just ask Joaquin Phoenix, who played Theodore in Her, the futuristic love story in which Theodore falls in love with his computer's operating system. Such emotional connections can lead to people revealing a lot more to their Echo-like device than they would to the actual people at Amazon who listen in on a large number of those conversations.

Although these devices are always listening (yes, even when you haven't said the trigger word... they have to be listening to hear that trigger!), we can't depend on them for everything. The fact of the matter is they are voice-enabled databases with the ability to collect, catalog and analyze your information, as well as everything that can be heard in the vicinity of the device. Selecting your music is one thing, but clearly, Alexa, Google Assistant or Siri can't take away your physical pain.

A Right to Privacy: Sharing Personal COVID-19 Test Results
To share or not to share - that is the question

If someone tests positive for COVID-19, who has the right to know? The issue could become both a moral and legal one.

With social distancing measures ramping up and most states on total lockdown, many of us are trying our best to avoid contact with others. We just don't know where the virus could be lurking. Everyone presents a potential threat.

While staying healthy might be easier if we knew exactly which individuals to avoid, that kind of intelligence comes at a massive privacy cost.

And, here's the other complication: Without widespread testing, few people know for certain they have the virus; a large number of the infected are asymptomatic. Therefore, demanding that those who actually do receive a positive test result disclose that information is asking them to carry a disproportionate load of the responsibility. The ramifications for these folks could be devastating and long-lasting. Imagine someone who recovers fully but is then discriminated against when applying for a job.

Currently, there are no set guidelines for disclosing a positive COVID-19 test result. Worse, there's no coordination among doctors, testers and government authorities. And, there are many initiatives beyond COVID-19 to track our health records and the places we've been.

Imagine the wide variety of harms all that data could bring to the associated individuals, especially when the use goes beyond COVID-19 tracking and management.

I'm curious about the changes we may see to HIPAA privacy laws following the pandemic crisis. Right now, the law is the U.S.'s only safeguard against public scrutiny of our health records. There have already been what have been described as temporary allowances from the HHS Office for Civil Rights (OCR) for sharing health data during the pandemic.

There have also been complaints about these expanded data sharing allowances, and there is concern worldwide that surveillance established for COVID-19 control will become the new normal.

Some may feel a moral obligation to share a positive result, especially if they have a high likelihood of infecting others. If it helps others who are at risk, I agree it would be a good thing... so long as the information is shared only with those who actually need to know, and cannot be used for anything beyond managing the spread of COVID-19.

At the end of the day, the entire health record for every person is not necessary to track and control COVID-19. Knowing the number of infected people in each geographic area should be enough for any publicly used statistics. Healthcare workers caring for the infected will still generally be the ones who should have detailed personal information and healthcare records, and they already know, from complying with HIPAA for two decades, how to release de-identified summaries.

Ultimately, the decision to share personal information, along with detailed health data, should remain with the individual, not with the state or federal government. And, as history shows us, that decision can be taken out of individuals' hands. Consider "duty to warn" and other laws enacted around the discovery of AIDS and HIV. Lawmakers may decide that mandating the public disclosure of a positive COVID-19 test result serves a greater good. However, we can't forget the large number of lives damaged, and lost, after HIV and AIDS records outed specific individuals. We should learn from history and not allow that to happen again.
 
Algorithms Making Decisions Formerly Assigned to Humans
 
 
 
 
 
 
 
 
An algorithm determining the fate of individuals on probation is now a reality, and it's one of many making decisions about people's lives in the United States and Europe. Local authorities use so-called predictive algorithms to set police patrols, prison sentences and probation rules. (Thank you to Elizabeth from Norwich University for bringing this to my attention.)

Using an algorithm in this manner is dangerous, to say the least. That's because algorithms are only as objective as the humans who developed them. What's more, very few algorithms are broadly tested to validate their accuracy. And, many studies have shown the inherent bias that exists within most AI algorithms.

In the case of a predictive algorithm that determines the likelihood of a repeat offense, the developer should be obligated to test it against a fully representative section of the population before releasing it for use in the real world.

Predictive algorithms are designed to use historical data to calculate the probability of future events. In the case of the youth scoring system discussed in the New York Times article linked to in the first paragraph of this section, the algorithm pulls data from police reports, social benefits and government records, as well as tallies of crime and housing data and even domestic abuse cases involving the child's parents. All of this information is factored into the offender's "risk score." The idea is to better identify at-risk youth, but as of yet there have been no answers as to how the collection of this data will be turned into action.
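
To make the mechanics concrete, here is a toy sketch of how a typical predictive risk score is computed: each historical feature is multiplied by a learned weight, and a logistic function turns the weighted sum into a probability. The feature names, weights and values below are entirely hypothetical (the real systems' inputs and weights are not public), but the sketch shows why biased historical records flow straight through into the score.

import math

# Toy illustration only: the features, weights and values are hypothetical,
# not those of any real scoring system.
def risk_score(features, weights, bias=0.0):
    """Logistic model: weighted sum of historical features -> probability between 0 and 1."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

example_features = {
    "prior_police_reports": 2,      # counts pulled from historical records
    "household_benefit_flags": 1,
    "neighborhood_crime_rate": 0.7,
}
learned_weights = {
    "prior_police_reports": 0.8,    # if past policing was biased, so is this signal
    "household_benefit_flags": 0.5,
    "neighborhood_crime_rate": 1.2,
}

print(f"risk score: {risk_score(example_features, learned_weights):.2f}")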

I'm sure it won't surprise you to hear I'm concerned, as are others. Opponents of this system say it is not completely transparent, nor does it come with mandatory notification for the subjects identified. The young people and parents are not told they are on the list or what their risk score is.

Couple that with the general lack of transparency around how algorithms are developed, how well they are tested and the controls used to prevent bias, and the concerns begin to pile up. This is especially true in states like Pennsylvania, which is calling for an algorithm to help courts determine sentencing after a conviction.

What other decisions will states attempt to pawn off on algorithms, artificial intelligence (AI) and the robots / automated machines the technologies drive? I'm not blind to the fact that human decision-making is inherently flawed and biased as well, but are we really better off leaving it up to a machine? It is way too early in the design and use of AI not to have responsible human oversight. Perhaps that will always be needed.

Davi Ottenheimer and I covered the good and bad sides of AI on my February radio show. In the episode, we chat not only about the potential benefits of AI, but also the risks to information security, privacy and safety when flawed, biased and even maliciously engineered AI is used. We also touch on boundaries Davi recommends for preventing the development of bad AI.

In upcoming issues of Privacy Professor Tips, I'll delve into even more discussions around the roles and responsibilities of AI in our lives and the implications to our data privacy.

My wife and I have noticed some strange things lately. Things we do on our computers or say in the privacy of our home are apparently becoming known to others at work. Is it possible for someone to hack into our private lives through our cell phones and home computer? If so, please advise how we can stop it.

I'm sorry to hear of your trouble. Short answer: yes, anything you do on an Internet-connected device (including home speakers / digital personal assistants, smart TVs, security systems, etc.) can potentially be captured by a cyber attacker. It all depends on the security and privacy controls in place. Don't assume devices come with such controls enabled by default; generally, they do not.

Here are some protections and additional steps to consider:

On your smartphones
On your computers
On your smart devices
  • Keep unplugged / turned all the way off when not in use.
  • Use the devices' strongest security settings. 
I would also recommend you contact one of the following to share what's going on and see if they are aware of any specific threats / risks to your particular devices. Best of luck to you and your wife!


Fresh Phish: Global Crisis Distracts Victims
Scammers attempt to lure me in 

Our desire for information has always been there, but during a global crisis we're extra afraid of missing a critical update. And, we're distracted. When we check our emails, texts or voicemails, the many other things on our minds may prevent us from spotting a scam we'd otherwise suspect. 

Here are a couple of emails I received recently:

Email #1: Appeared to be from American Express, a trusted source (That's how phishing artists hook you). Take a look at the red flags I've highlighted below the screen capture. 


Red Flags:
  • The sender's address doesn't match the sender name: This email says it's coming from American Express; however, the sender address is [email protected]. (A short script after this list shows one way to check the actual sending domain.)
  • The "To" line doesn't include my name: Check out who it's being sent to - "Recipients." This tells me it was sent to multiple email addresses with a "Reminder: Your payment could not be completed." Odd.
  • This email is a classic attempt to take advantage of financial worries and can be especially tempting when talking about a payment of some sort. As intriguing as it is, an unsolicited email with an attachment should almost always be considered highly suspicious.
  • There was also a PDF attachment. Phishing crooks love to spread malicious code, such as keystroke loggers and ransomware, through PDFs and other types of attachments. Plus, the name of the file didn't look like one American Express would use. In fact, it would be very unusual for American Express to attach a file to an email at all; if they did happen to send one, it wouldn't be unsolicited, and I'd be expecting it.
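
For the technically inclined, here is a minimal sketch of the sender-domain check described above, using only Python's standard-library email module. The raw message and the mismatched domain are made up for illustration.

from email import message_from_string
from email.utils import parseaddr

# Hypothetical raw message: the display name claims a brand, but the domain does not match.
raw_email = """From: American Express <billing@some-unrelated-domain.example>
To: Recipients
Subject: Reminder: Your payment could not be completed

(body omitted)
"""

msg = message_from_string(raw_email)
display_name, address = parseaddr(msg["From"])
domain = address.rsplit("@", 1)[-1].lower() if "@" in address else "(no domain found)"

print("Display name claims:", display_name)
print("Actual sending domain:", domain)
# If the display name claims a brand but the domain isn't that brand's, treat the message as suspect.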
     

Email #2: I also received this email claiming I had "6 pending incoming email(s) in my spam quarantine inbox."



Red Flags:
  • The "To" and "From" lines are suspicious: This shows my own website address, rebeccaherold.com, and the email is addressed to "rebeccaherold.
  • Reading the remainder of the email provides even further evidence of a message that just doesn't make sense and is clearly "phishy."
  • I checked the properties of the email, and it was coming from a completely different domain, one that was registered in Eastern Europe.
  • The underlying link is questionable: As I've highlighted in blue in the second image, if you're being asked to click a link in an email, it's good practice to hover over it to view the underlying link before you click. Does it leave you with questions? Best not to click. (The short sketch after this list shows one way to pull out those underlying links for inspection.)
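
And here is a minimal sketch, again using only the Python standard library, that lists the underlying links in an HTML email body so you can inspect the domains before clicking anything. The email snippet and domain are hypothetical.

from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href value of every <a> tag in an HTML email body."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Hypothetical email body; the domain is made up for illustration.
html_body = '<p>You have 6 pending emails. <a href="http://quarantine-release.example/login">Release them now</a></p>'

extractor = LinkExtractor()
extractor.feed(html_body)
for link in extractor.links:
    print("Underlying link:", link)  # Does the domain match who the email claims to be from?
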
Phishing attempts also come in the form of phone calls. See many examples of phishing phone calls I've gotten at Free Resources/phishing. In May, Privacy Security Brainiacs will also be posting many of my other email and text phishing messages to provide examples of current tactics you need to be on the lookout for.

Have you seen an uptick in messages of a "phishy" nature too? Send us a note, and we may include it in a future Tips issue.

Where to Find the Privacy Professor
  
 

On the air... 

HAVE YOU LISTENED YET? 

Do you have an information security, privacy or other IT expert or luminary you'd like to hear interviewed on the show? Or, a specific topic you'd like to learn more about? Please let me know!

I'd also love for your organization to be a sponsor! Shoot me an email and I'll send you more details.

All episodes are available for on-demand listening on the VoiceAmerica site, as well as iTunes, Mobile Play, Stitcher, TuneIn, CastBox, Player.fm, iHeart Radio and similar apps and sites. 

Some of the many topics we've addressed... 
  • student privacy
  • identity theft
  • medical cannabis patient privacy
  • children's online privacy and safety  
  • applications and systems security
  • cybercrime prosecutions and evidence
  • government surveillance
  • swatting 
  • GDPR
  • career advice for cybersecurity, privacy and IT professions
  • voting / elections security (a series)
Please check out some of my recorded episodes. You can view a complete listing of shows to date, grouped by topic. After you listen, let me know what you think! I truly do use what I hear from listeners.

SPONSORSHIP OPPORTUNITIES: Are you interested in being a sponsor or advertiser for my show? It's growing quickly, with a large number of listeners worldwide. Please get in touch! There are many visual, audio and video possibilities.

We have current sponsorship openings in three of the four weeks' shows each month. If your organization wants to sponsor one show each month, I will cover topics related to your organization's business services and/or products.


In the news... 

Advertising Now Available!

Tips of the Month is now open to sponsors. If you're interested in reaching our readers (maybe you have an exciting new privacy product or service or an annual event just around the corner), the Tips email may be just the thing to help you communicate to more people! 

We have a variety of advertising packages to meet every budget. 


3 Ways to Show Some Love

The Privacy Professor Monthly Tips is a passion of mine and something I've offered readers all over the world since 2007 (Time really flies!). If you love receiving your copy each month, consider taking a few moments to...

1) Tell a friend! The more readers who subscribe, the more awareness we cultivate.

2) Offer a free-will subscription! There are time and hard dollar costs to producing the Tips each month, and every little bit helps.

3) Share the content. All of the info in this email is sharable (I'd just ask that you follow the attribution guidelines in the Permission to Share section below).

 
 
There will always be people waiting to capitalize on misery. They stoke the fires of fear and uncertainty to burn away our natural protections against being deceived. 

Yet, for every bad guy, there are 10 good ones, waiting in the wings to help. 

Data security and privacy experts are working overtime to ensure you have the most accurate and legitimate information to safeguard against the trickiest of the COVID-19 tricksters. We will continue to do so throughout the crisis, and likely long after!

Please stay safe and healthy,

Rebecca
Need Help?


Permission to Share

If you would like to share, please forward the Tips message in its entirety. You can share excerpts, as well, with the following attribution:

Source: Rebecca Herold. May 2020 Privacy Professor Tips. www.privacyprofessor.com.

NOTE: Permission for excerpts does not extend to images.

Privacy Notice & Communication Info

You are receiving this Privacy Professor Tips message as a result of:

1) subscribing through PrivacyGuidance.com;
2) making a request directly to Rebecca Herold; or 
3) connecting with Rebecca Herold on LinkedIn

When LinkedIn users initiate a connection with Rebecca Herold, she sends a direct message when accepting their invitation. That message states that, in the spirit of networking and in support of the communications encouraged by LinkedIn, she will send those asking for LinkedIn connections her Tips message monthly. If they do not want to receive the Tips message, LinkedIn connections are invited to let Rebecca know by responding to that LinkedIn message or contacting her at [email protected].

If you wish to unsubscribe, just click the SafeUnsubscribe link below.
 
 
The Privacy Professor
Rebecca Herold & Associates, LLC
Mobile: 515.491.1564
View our profile on LinkedIn     Follow us on Twitter