It's a good idea to be careful about what you're posting on social media these days. There have just been too many frightening incidents recently, like the time last year when Facebook mistakenly exposed millions of users' photos.
Situations like that are bad enough to make people run away from social media altogether. If you want to stick it out with Facebook, here are five things to change in your account to help protect your privacy.
But hackers and Facebook's shady moves aren't the only things to worry about. Now, some of the things you post on social media could end up impacting your Social Security disability claims.
Social media posts could be used against you
Your mom seeing an embarrassing photo of you on social media is one thing. Having an image used against you to deny disability benefits is a whole other level of frightening.
That might actually happen in the very near future.
The Social Security Administration (SSA) currently uses social media posts to flag fraudulent activity from people who already receive disability benefits. According to its 2020 budget proposal, SSA plans to expand that procedure. The agency wants an additional $10 million for its anti-fraud investigation efforts for next year.
If approved, SSA could start reviewing posts on sites like Facebook and Instagram from people applying for disability benefits. If images suggest an applicant is out and about, participating in activities that someone with a particular disability shouldn't be able to do, that applicant could be denied benefits.
The 2020 SSA budget proposal reads, "We are evaluating how social media could be used by disability adjudicators in assessing the consistency and supportability of evidence in a claimant's case file."
But how would it work? Think about how easily a posted image could be taken out of context, leading to a denied claim. That picture of you hiking may have been taken a few years ago, and it may not reflect your current health condition.
Many things could go wrong here. Investigating potential fraud is important, but it needs to be done the right way.
Facebook's fake-news fact-check fail
What's real? What's fake? With more than 1 billion people using Facebook, the company's artificial intelligence algorithms can't answer these questions, so it hired outside firms to moderate content. These human "content moderators" decide what's fake, real, satire, inappropriate and illegal. No surprise: It's not working. In this Komando on Demand podcast, hear from Brooke Binkowski, former Facebook fact-checker, about the real story, and learn how Facebook's content moderators watch the seedy side of life all day, so you don't have to.