Tag Archives: Facebook

What Was Novel About Social Media Use During Hurricane Sandy?

We saw the usual spikes in Twitter activity and the typical (reactive) launch of crowdsourced crisis maps. We also saw map mashups combining user-generated content with scientific weather data. Facebook was once again used to inform our social networks: “We are ok” became the most common status update on the site. In addition, thousands of pictures were shared on Instagram (600/minute), documenting both the impending danger & resulting impact of Hurricane Sandy. But was there anything really novel about the use of social media during this latest disaster?

I’m asking not because I claim to know the answer but because I’m genuinely interested and curious. One possible “novelty” that caught my eye was this FrankenFlow experiment to “algorithmically curate” pictures shared on social media. Perhaps another “novelty” was the embedding of webcams within a number of crisis maps, such as those below launched by #HurricaneHacker and Team Rubicon respectively.

Another “novelty” that struck me was how much focus there was on debunking false information being circulated during the hurricane—particularly images. The speed of this debunking was also striking. As regular iRevolution readers will know, “information forensics” is a major interest of mine.

This Tumblr post was one of the first to emerge in response to the fake pictures (30+) of the hurricane swirling around the social media whirlwind. Snopes.com also got in on the action with this post. Within hours, The Atlantic Wire followed with this piece entitled “Think Before You Retweet: How to Spot a Fake Storm Photo.” Shortly after, Alexis Madrigal from The Atlantic published this piece on “Sorting the Real Sandy Photos from the Fakes,” like the one below.

These rapid rumor-bashing efforts led BuzzFeed’s John Herrman to claim that Twitter acted as a truth machine: “Twitter’s capacity to spread false information is more than cancelled out by its savage self-correction.” This is not the first time that journalists or researchers have highlighted Twitter’s tendency for self-correction. This peer-reviewed, data-driven study of disaster tweets generated during the 2010 Chile Earthquake reports the same finding.

What other novelties did you come across? Are there other interesting, original and creative uses of social media that ought to be documented for future disaster response efforts? I’d love to hear from you via the comments section below. Thanks!

Behind the Scenes: The Digital Operations Center of the American Red Cross

The Digital Operations Center at the American Red Cross is an important and exciting development. I recently sat down with Wendy Harman to learn more about the initiative and to exchange some lessons learned in this new world of digital humanitarians. One common challenge in emergency response is scaling. The American Red Cross cannot be everywhere at the same time—and that includes being on social media. More than 4,000 tweets reference the Red Cross on an average day, a figure that skyrockets during disasters. And when crises strike, so does Big Data. The Digital Operations Center is one response to this scaling challenge.

Sponsored by Dell, the Center uses customized software produced by Radian 6 to monitor and analyze social media in real-time. The Center itself seats three people who have access to six customized screens that relay relevant information drawn from various social media channels. The first screen below depicts some of the key topical areas that the Red Cross monitors, e.g., references to the American Red Cross, Storms in 2012, and Delivery Services.

Circle sizes in the first screen depict the volume of references related to that topic area. The color coding (red, green and beige) relates to sentiment analysis (beige being neutral). The dashboard with the “speed dials” right underneath the first screen provides more details on the sentiment analysis.

Let’s take a closer look at the circles from the first screen. The dots “orbiting” the central icon relate to the categories of key words that the Radian 6 platform parses. You can click on these orbiting dots to “drill down” and view the individual key words that make up that specific category. This circles screen gets updated in near real-time and draws on data from Twitter, Facebook, YouTube, Flickr and blogs. (Note that the distance between the orbiting dots and the center does not represent anything).
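For readers curious about what this kind of monitoring boils down to under the hood, here is a minimal sketch in Python. Radian 6 is a commercial product and its internals are not public, so the category names, keyword lists and crude sentiment words below are illustrative assumptions of mine rather than the Red Cross’s actual configuration.

```python
# Minimal sketch of keyword-bucket monitoring in the spirit of the dashboard
# described above. Categories, keywords and sentiment word lists are invented
# for illustration; Radian6's real configuration and algorithms are not public.
from collections import Counter

CATEGORIES = {
    "american_red_cross": ["red cross", "redcross", "@redcross"],
    "storms_2012":        ["hurricane", "tornado", "storm surge"],
    "delivery_services":  ["blood drive", "shelter", "donation"],
}
POSITIVE = {"thanks", "safe", "helped", "great"}
NEGATIVE = {"angry", "stranded", "failed", "waiting"}

def classify(post):
    """Return (categories hit, crude sentiment) for one social media post."""
    text = post.lower()
    hits = [name for name, words in CATEGORIES.items()
            if any(w in text for w in words)]
    score = sum(w in text for w in POSITIVE) - sum(w in text for w in NEGATIVE)
    sentiment = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return hits, sentiment

def monitor(stream):
    """Tally volume per category (circle size) and sentiment (color coding)."""
    volume, moods = Counter(), Counter()
    for post in stream:
        hits, sentiment = classify(post)
        volume.update(hits)
        moods.update((cat, sentiment) for cat in hits)
    return volume, moods

if __name__ == "__main__":
    sample = ["Thanks @RedCross for the shelter, we are safe!",
              "Still stranded after the storm surge, waiting for help."]
    print(monitor(sample))
```

The real platform obviously does far more (spam filtering, influence scores, near real-time updates), but volume-per-category and sentiment tallies of this kind are what drive the circle sizes and color coding described above.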

An operations center would of course not be complete without a map, so the Red Cross uses two screens to visualize different data on two heat maps. The one below depicts references made on social media platforms vis-a-vis storms that have occurred during the past 3 days.

The screen below the map highlights the bios of 50 individual Twitter users who have made references to the storms. All this data gets generated from the “Engagement Console” pictured below. The purpose of this web-based tool, which looks a lot like TweetDeck, is to enable the Red Cross to customize the specific types of information they’re looking for, and to respond accordingly.

Let’s look at the Console more closely. In the Workflow section on the left, users decide what types of tags they’re looking for and can also filter by priority level. They can also specify the type of sentiment they’re looking for, e.g., negative feelings vis-a-vis a particular issue. In addition, they can take certain actions in response to each information item. For example, they can reply to a tweet, a Facebook status update, or a blog post; and they can do this directly from the Engagement Console. Based on the license that the Red Cross uses, up to 25 of their team members can access the Console and collaborate in real-time when processing the various tweets and Facebook updates.

The Console also allows users to create customized timelines, charts and word cloud graphics to better understand trends changing over time in the social media space. To fully leverage this social media monitoring platform, Wendy and team are also launching a digital volunteers program. The goal is for these volunteers to eventually become the prime users of the Radian platform and to filter the bulk of relevant information in the social media space. This would considerably lighten the load for existing staff. In other words, the volunteer program would help the American Red Cross scale in the social media world we live in.

Wendy plans to set up a dedicated 2-hour training for individuals who want to volunteer online in support of the Digital Operations Center. These trainings will be carried out via Webex and will also be available to existing Red Cross staff.


As argued in this previous blog post, the launch of this Digital Operations Center is further evidence that the humanitarian space is ready for innovation and that some technology companies are starting to think about how their solutions might be applied for humanitarian purposes. Indeed, it was Dell that first approached the Red Cross with an expressed interest in contributing to the organization’s efforts in disaster response. The initiative also demonstrates that combining automated natural language processing solutions with a digital volunteer network seems to be a winning strategy, at least for now.

After listening to Wendy describe the various tools she and her colleagues use as part of the Operations Center, I began to wonder whether these types of tools will eventually become free and easy enough for one person to be her very own operations center. I suppose only time will tell. Until then, I look forward to following the Center’s progress and hope it inspires other emergency response organizations to adopt similar solutions.

Crowdsourcing Humanitarian Convoys in Libya

Many activists in Egypt donated food and medical supplies to support the Libyan revolution in early 2011. As a result, volunteers set up and coordinated humanitarian convoys from major Egyptian cities to Tripoli. But these convoys faced two major problems. First, volunteers needed to know where the convoys were in order to communicate this to Libyan revolutionaries so they could wait for the fleet at the border and escort them to Tripoli. Second, because these volunteers were headed into a war zone, their friends and family wanted to keep track of them to make sure they were safe. The solution? IntaFeen.com.

Inta feen? means “where are you?” in Arabic and IntaFeen.com is a mobile check-in service like Foursquare but localized for the Arab World. Convoy drivers used IntaFeen to check in at different stops along the way to Tripoli to provide regular updates on the situation. This is how volunteers back in Egypt who coordinated the convoy kept track of their progress and communicated updates in real-time to their Libyan counterparts. Volunteers who went along with the convoys also used IntaFeen and their check-ins would also get posted on Twitter and Facebook, allowing families and friends in Egypt to track their whereabouts.

Al Amain Road is a highway between Alexandria and Tripoli. These tweets and check-ins acted as a DIY fleet management system for volunteers and activists.

The use of IntaFeen combined with Facebook and Twitter also created an interesting side-effect in terms of social media marketing to promote activism. The sharing of these updates within and across various social networks galvanized more Egyptians to volunteer their time and resulted in more convoys.

I wonder whether these activists knew about another crowdsourced volunteer project taking place at exactly the same time in support of the UN’s humanitarian relief operations: Libya Crisis Map. Much of the content added to the map was sourced from social media. Could the #LibyaConvoy project have benefited from the real-time situational awareness provided by the Libya Crisis Map?

Will we see more convergence between volunteer-run crisis maps and volunteer-run humanitarian response in the near future?

Big thanks to Adel Youssef from IntaFeen.com who spoke about this fascinating project (and Ushahidi) at Where 2.0 this week. More information on #LibyaConvoy is available here. See also my earlier blog posts on the use of check-ins for activism and disaster response.

Trails of Trustworthiness in Real-Time Streams

Real-time information channels like Twitter, Facebook and Google have created cascades of information that are becoming increasingly challenging to navigate. “Smart-filters” alone are not the solution since they won’t necessarily help us determine the quality and trustworthiness of the information we receive. I’ve been studying this challenge ever since the idea behind SwiftRiver first emerged several years ago now.

I was thus thrilled to come across a short paper on “Trails of Trustworthiness in Real-Time Streams” which describes a start-up project that aims to provide users with a “system that can maintain trails of trustworthiness propagated through real-time information channels,” which will “enable its educated users to evaluate its provenance, its credibility and the independence of the multiple sources that may provide this information.” The authors, Panagiotis Metaxas and Eni Mustafaraj, kindly cite my paper on “Information Forensics” and also reference SwiftRiver in their conclusion.

The paper argues that studying the tactics that propagandists employ in real life can provide insights and even predict the tricks employed by Web spammers.

“To prove the strength of this relationship between propagandistic and spamming techniques, […] we show that one can, in fact, use anti-propagandistic techniques to discover Web spamming networks. In particular, we demonstrate that when starting from an initial untrustworthy site, backwards propagation of distrust (looking at the graph defined by links pointing to an untrustworthy site) is a successful approach to finding clusters of spamming, untrustworthy sites. This approach was inspired by the social behavior associated with distrust: in society, recognition of an untrustworthy entity (person, institution, idea, etc) is reason to question the trustworthiness of those who recommend it. Other entities that are found to strongly support untrustworthy entities become less trustworthy themselves. As in society, distrust is also propagated backwards on the Web graph.”
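The backwards propagation of distrust described in this passage can be sketched as a simple reverse-link traversal. The toy graph and hop limit below are my own illustrative assumptions; the paper’s actual crawl and thresholds are considerably more involved.

```python
# Rough sketch of "backwards propagation of distrust": starting from a known
# untrustworthy seed site, walk the in-links (sites that point to it) and flag
# the cluster of supporters. The toy graph is invented for illustration only.
from collections import deque

def backward_distrust(in_links, seed, max_hops=2):
    """in_links[site] -> list of sites that link TO `site`."""
    distrusted = {seed}
    frontier = deque([(seed, 0)])
    while frontier:
        site, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for supporter in in_links.get(site, []):
            if supporter not in distrusted:
                distrusted.add(supporter)               # supporting an untrustworthy
                frontier.append((supporter, hops + 1))  # site costs you trust too
    return distrusted

toy_graph = {
    "spam-hub.example": ["mirror1.example", "mirror2.example"],
    "mirror1.example":  ["blog-farm.example"],
}
print(backward_distrust(toy_graph, "spam-hub.example"))
```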

The authors document that today’s Web spammers are using increasingly sophisticated tricks.

“In cases where there are high stakes, Web spammers’ influence may have important consequences for a whole country. For example, in the 2006 Congressional elections, activists using Google bombs orchestrated an effort to game search engines so that they present information in the search results that was unfavorable to 50 targeted candidates. While this was an operation conducted in the open, spammers prefer to work in secrecy so that their actions are not revealed. So, we revealed and documented the first Twitter bomb, which tried to influence the Massachusetts special elections, showing how an Iowa-based political group, hiding its affiliation and profile, was able to serve misinformation a day before the election to more than 60,000 Twitter users that were following the elections. Very recently we saw an increase in political cybersquatting, a phenomenon we reported in [28]. And even more recently, […] we discovered the existence of Pre-fabricated Twitter factories, an effort to provide collaborators pre-compiled tweets that will attack members of the Media while avoiding detection of automatic spam algorithms from Twitter.”

The theoretical foundations for a trustworthiness system:

“Our concept of trustworthiness comes from the epistemology of knowledge. When we believe that some piece of information is trustworthy (e.g., true, or mostly true), we do so for intrinsic and/or extrinsic reasons. Intrinsic reasons are those that we acknowledge because they agree with our own prior experience or belief. Extrinsic reasons are those that we accept because we trust the conveyor of the information. If we have limited information about the conveyor of information, we look for a combination of independent sources that may support the information we receive (e.g., we employ “triangulation” of the information paths). In the design of our system we aim to automatize as much as possible the process of determining the reasons that support the information we receive.”

“We define as trustworthy, information that is deemed reliable enough (i.e., with some probability) to justify action by the receiver in the future. In other words, trustworthiness is observable through actions.”

“The overall trustworthiness of the information we receive is determined by a linear combination of (a) the reputation R_Z of the original sender Z, (b) the credibility we associate with the contents of the message itself C(m), and (c) characteristics of the path that the message used to reach us.”
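To make this linear combination concrete, here is a minimal sketch. The weights and the path score below are placeholder assumptions of mine; the paper does not publish concrete values.

```python
# Minimal sketch of the linear combination quoted above: sender reputation R_Z,
# message credibility C(m), and a path factor. Weights are placeholder guesses.
def trustworthiness(sender_reputation, message_credibility, path_score,
                    weights=(0.4, 0.4, 0.2)):
    """All inputs in [0, 1]; returns a combined score in [0, 1]."""
    a, b, c = weights
    return a * sender_reputation + b * message_credibility + c * path_score

# e.g. a reputable sender, plausible content, but a long and murky retweet path:
print(trustworthiness(0.9, 0.7, 0.3))   # -> 0.7
```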

“To compute the trustworthiness of each message from scratch is clearly a huge task. But the research that has been done so far justifies optimism in creating a semi-automatic, personalized tool that will help its users make sense of the information they receive. Clearly, no such system exists right now, but components of our system do exist in some of the popular [real-time information channels]. For a testing and evaluation of our system we plan to use primarily Twitter, but also real-time Google results and Facebook.”

In order to provide trails of trustworthiness in real-time streams, the authors plan to address the following challenges:

•  “Establishment of new metrics that will help evaluate the trustworthiness of information people receive, especially from real-time sources, which may demand immediate attention and action. […] we show that coverage of a wider range of opinions, along with independence of results’ provenance, can enhance the quality of organic search results. We plan to extend this work in the area of real-time information so that it does not rely on post-processing procedures that evaluate quality, but on real-time algorithms that maintain a trail of trustworthiness for every piece of information the user receives.”

• “Monitor the evolving ways in which information reaches users, in particular citizens near election time.”

•  “Establish a personalizable model that captures the parameters involved in the determination of trustworthiness of information in real-time information channels, such as Twitter, extending the work of measuring quality in more static information channels, and by applying machine learning and data mining algorithms. To implement this task, we will design online algorithms that support the determination of quality via the maintenance of trails of trustworthiness that each piece of information carries with it, either explicitly or implicitly. Of particular importance, is that these algorithms should help maintain privacy for the user’s trusting network.”

• “Design algorithms that can detect attacks on [real-time information channels]. For example we can automatically detect bursts of activity related to a subject, source, or non-independent sources. We have already made progress in this area. Recently, we advised and provided data to a group of researchers at Indiana University to help them implement “truthy”, a site that monitors bursty activity on Twitter. We plan to advance, fine-tune and automate this process. In particular, we will develop algorithms that calculate the trust in an information trail based on a score that is affected by the influence and trustworthiness of the informants.”
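As a toy illustration of the burst detection mentioned in the last bullet, here is a minimal sketch that flags time intervals whose message volume jumps well above the recent average. The window size and threshold are illustrative assumptions only; Truthy and the authors’ own algorithms are far more sophisticated.

```python
# Toy burst detector: flag intervals whose count exceeds `threshold` times the
# mean of the preceding `window` intervals. Parameters are illustrative only.
def detect_bursts(counts, window=6, threshold=3.0):
    """counts: messages per interval (e.g. per minute) for one keyword/source."""
    bursts = []
    for i in range(window, len(counts)):
        baseline = sum(counts[i - window:i]) / window
        if baseline > 0 and counts[i] > threshold * baseline:
            bursts.append(i)
    return bursts

per_minute = [4, 5, 3, 6, 4, 5, 40, 55, 7, 5]   # synthetic example
print(detect_bursts(per_minute))                 # -> [6, 7]
```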

In conclusion, the authors “mention that in a month from this writing, Ushahidi […] plans to release SwiftRiver, a platform that ‘enables the filtering and verification of real-time data from channels like Twitter, SMS, Email and RSS feeds’. Several of the features of Swift River seem similar to what we propose, though a major difference appears to be that our design is personalization at the individual user level.”

Indeed, having been involved in SwiftRiver research since early 2009 and currently testing the private beta, I can confirm there are important similarities and some differences. But one such difference is not personalization; Swift does allow full personalization at the individual user level.

Another is that we’re hoping to go beyond just text-based information with Swift, i.e., we hope to pull in pictures and video footage (in addition to Tweets, RSS feeds, email, SMS, etc) in order to cross-validate information across media, which we expect will make the falsification of crowdsourced information more challenging, as I argue here. In any case, I very much hope that the system being developed by the authors will be free and open source so that integration might be possible.

A copy of the paper is available here (PDF). I hope to meet the authors at the Berkman Center’s “Truth in Digital Media Symposium” and highly recommend the wiki they’ve put together with additional resources. I’ve added the majority of my research on verification of crowdsourced information to that wiki, such as my 20-page study on “Information Forensics: Five Case Studies on How to Verify Crowdsourced Information from Social Media.”

Passing the I’m-Not-Gaddafi Test: Authenticating Identity During Crisis Mapping Operations

I’ve found myself telling this story so often in response to various questions that it really should be a blog post. The story begins with the launch of the Libya Crisis Map a few months ago at the request of the UN. After the first 10 days of deploying the live map, the UN asked us to continue for another two weeks. When I write “us” here, I mean the Standby Volunteer Task Force (SBTF), which is designed for short-term rapid crisis mapping support, not long-term deployments. So we needed to recruit additional volunteers to continue mapping the Libya crisis. And this is where the I’m-not-Gaddafi test comes in.

To do our live crisis mapping work, SBTF volunteers generally need password access to whatever mapping platform we happen to be using. This has typically been the Ushahidi platform. Giving out passwords to several dozen volunteers in almost as many countries requires trust. Password access means one could start sabotaging the platform, e.g., deleting reports, creating fake ones, etc. So when we began recruiting 200+ new volunteers to sustain our crisis mapping efforts in Libya, we needed a way to vet these new recruits, particularly since we were dealing with a political conflict. So we set up an I’m-not-Gaddafi test by using this Google Form:

So we placed the burden of proof on our (very patient) volunteers. Here’s a quick summary of the key items we used in our “grading” to authenticate volunteers’ identity:

Email address: Professional or academic email addresses were preferred and received a more favorable “score”.

Twitter handle: The great thing about Twitter is you can read through weeks’ worth of someone’s Twitter stream. I personally used this feature several times to determine whether any political tweets revealed a pro-Gaddafi attitude.

Facebook page: Given that posing as someone else or a fictitious person on Facebook violates their terms of service, having the link to an applicant’s Facebook page was considered a plus.

LinkedIn profile: This was a particularly useful piece of evidence given that the majority of people on LinkedIn are professionals.

Personal/Professional blog or website: This was also a great way to authenticate an individual’s identity. We also encouraged applicants to share links to anything they had published which was available online.

For every application, we had two or more of us from the core team go through the responses. In order to sign off a new volunteer as vetted, two people had to write down “Yes” with their name. We would give priority to the most complete applications. I would say that 80% of the 200+ applications we received could be signed off without requiring additional information. We followed up via email with the remaining 20%, the majority of whom provided us with extra info that enabled us to validate their identity. One individual even sent us a copy of his official ID. There may have been a handful who didn’t reply to our requests for additional information.
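For those who like to see the process spelled out, here is a rough sketch of what this kind of grading and two-person sign-off could look like in code. The evidence weights and the cut-off are hypothetical; in practice every application was read and judged by two people rather than auto-scored.

```python
# Illustrative sketch of the vetting "grading" described above. Weights and the
# minimum score are hypothetical; the real process relied on human judgment.
EVIDENCE_WEIGHTS = {
    "professional_email": 2,   # academic/professional address preferred
    "twitter_handle":     2,   # weeks of tweets can be read for red flags
    "facebook_page":      1,
    "linkedin_profile":   2,
    "blog_or_website":    1,
}

def score_application(app):
    return sum(w for field, w in EVIDENCE_WEIGHTS.items() if app.get(field))

def vetted(app, reviewers, minimum=4):
    """Vetted only if the evidence clears the bar AND two reviewers said Yes."""
    approvals = [name for name, decision in reviewers if decision == "Yes"]
    return score_application(app) >= minimum and len(approvals) >= 2

applicant = {"professional_email": True, "twitter_handle": True,
             "linkedin_profile": True}
print(vetted(applicant, [("Reviewer A", "Yes"), ("Reviewer B", "Yes")]))  # True
```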

This entire vetting process appears to have worked, but it was extremely laborious and time-consuming. I personally spent hours and hours going through more than 100 applications. We definitely need to come up with a different system in the future. So I’ve been exploring some possible solutions—such as social authentication—with a number of groups and I hope to provide an update next month which will make all our lives a lot easier, not to mention give us more dedicated mapping time. There’s also the need to improve the Ushahidi platform to make it more like Wikipedia, i.e., where contributions can be tracked and logged. I think combining both approaches—identity authentication and tracking—may be the way to go.

The Role of Facebook in Disaster Response

I recently met up with some Facebook colleagues to discuss the role that they and their platform might play in disaster response. So I thought I’d share some thoughts that came up during the conversation, seeing as I’ve been thinking about this topic with a number of other colleagues for a while. I’m also very interested to hear any ideas and suggestions that iRevolution readers may have on this.

There’s no doubt that Facebook can—and already does—play an important role in disaster response. In Haiti, a colleague used Facebook to recruit hundreds of Creole-speaking volunteers to translate tens of thousands of text messages into English as part of our Ushahidi-Haiti crisis mapping efforts. When an earthquake struck New Zealand earlier this year, thousands of students organized their response via a Facebook group and also used the platform’s check-in feature to alert others in their social network that they were alright.

But how else might Facebook be used? The Haiti example demonstrates that the ability to rapidly recruit large numbers of volunteers is really key. So Facebook could create a dedicated landing page when a crisis unfolds, much like Google does. This landing page could then be used to recruit thousands of new volunteers for live crisis mapping operations in support of humanitarian organizations (for example). The landing page could spotlight a number of major projects that new volunteers could join, such as the Standby Volunteer Task Force (SBTF) or perhaps highlight the deployment of an Ushahidi platform for a particular crisis.

The use of Facebook to recruit volunteers presents several advantages, the most important ones being identity and scale. When we recruited hundreds of new volunteers for the Libya Crisis Map in support of the UN’s humanitarian response, we had to vet and verify each and every single one of them twice to ensure they really were who they said they were. This took hours, which wouldn’t be the case using Facebook. If we could set up a way for Facebook users to sign into an Ushahidi platform directly from their Facebook account, this too would save many hours of tedious work—a nice idea that my colleague Jaroslav Valuch suggested. See Facebook Connect, for example.
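To give a sense of what such a sign-in hookup might involve, here is a very rough server-side sketch using OAuth 2.0. The Facebook endpoints follow the Graph API as documented around the time of writing; the Ushahidi side of the integration (create_or_load_user) is a hypothetical placeholder, not an existing Ushahidi function.

```python
# Rough sketch of "sign into Ushahidi with your Facebook account" via OAuth 2.0.
# Endpoints follow Facebook's Graph API docs; the Ushahidi hookup is hypothetical.
import urllib.parse
import requests  # third-party HTTP library

APP_ID, APP_SECRET = "YOUR_APP_ID", "YOUR_APP_SECRET"
REDIRECT_URI = "https://your-crisis-map.example/fb-callback"  # hypothetical URL

def login_url():
    """Step 1: send the volunteer to Facebook to grant access."""
    params = urllib.parse.urlencode({"client_id": APP_ID,
                                     "redirect_uri": REDIRECT_URI})
    return "https://www.facebook.com/dialog/oauth?" + params

def handle_callback(code):
    """Step 2: exchange the code for a token, then fetch the user's profile."""
    resp = requests.get("https://graph.facebook.com/oauth/access_token",
                        params={"client_id": APP_ID, "client_secret": APP_SECRET,
                                "redirect_uri": REDIRECT_URI, "code": code})
    try:
        token = resp.json()["access_token"]
    except ValueError:  # older API versions return a URL-encoded string instead
        token = urllib.parse.parse_qs(resp.text)["access_token"][0]
    profile = requests.get("https://graph.facebook.com/me",
                           params={"access_token": token}).json()
    # create_or_load_user(profile["id"], profile["name"])  # hypothetical Ushahidi side
    return profile
```

The point is less the specific code than the workflow: the volunteer never creates a new password, and the crisis map inherits a real-name identity that Facebook has already vetted to some degree.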

Facebook also operates at a scale of more than half-a-billion people, which has major “Cognitive Surplus” potential. We could leverage Facebook’s ad services as well—a good point made by one Facebook colleague (and also by Jon Gosier in an earlier conversation). That way, Facebook users would receive targeted ads on how they could volunteer based on their existing profiles.

So there’s huge potential, but like much else in the ICT-for-you-name-it space, you first have to focus on people, then process and then the technology. In other words, what we need to do first is establish a relationship with Facebook and decide on the messaging and the process by which volunteers on Facebook would join a volunteer network like the Standby Volunteer Task Force and help out on an Ushahidi map, for example.

Absorbing several hundred or thousands of new volunteers is no easy task but as long as we have a simple and efficient micro-tasking system via Facebook, we should be able to absorb this surge. Perhaps our colleagues at Facebook could take the lead on that, i.e., create a simple interface allowing groups like the Task Force to farm out all kinds of micro-tasks, much like Crowdflower, which already embeds micro-tasks in Facebook. Indeed, we worked with Crowdflower during the floods in Pakistan to create this micro-tasking app for volunteers.

As my colleague Jaroslav also noted, this Mechanical Turk approach would allow these organizations to evaluate the performance of their volunteers on particular tasks. I would add to this some gaming dynamics to provide incentives and rewards for volunteering, as I blogged about here. Having a public score board based on the number of tasks completed by each volunteer would be just one idea. One could add badges, stickers, banners, etc., to your Facebook profile page as you complete tasks. And yes, the next question would be: how do we create the Farmville of disaster response?
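Here is a rough sketch of what a shared micro-task queue with these simple game dynamics could look like. It is entirely illustrative; CrowdFlower and Facebook would of course implement this very differently, and the badge thresholds are made up.

```python
# Illustrative micro-task queue with a per-volunteer tally, a leaderboard and
# badges at arbitrary thresholds. Purely a sketch of the idea described above.
from collections import Counter

BADGES = [(50, "Gold Mapper"), (20, "Silver Mapper"), (5, "Bronze Mapper")]

class TaskBoard:
    def __init__(self, tasks):
        self.todo = list(tasks)        # e.g. "categorize report #42"
        self.completed = Counter()     # volunteer -> number of tasks done

    def claim(self):
        return self.todo.pop(0) if self.todo else None

    def complete(self, volunteer):
        self.completed[volunteer] += 1

    def leaderboard(self):
        return self.completed.most_common()

    def badges(self, volunteer):
        done = self.completed[volunteer]
        return [name for threshold, name in BADGES if done >= threshold]

board = TaskBoard(["categorize report #%d" % i for i in range(100)])
for _ in range(7):
    board.claim()
    board.complete("Volunteer A")
print(board.leaderboard(), board.badges("Volunteer A"))  # earns "Bronze Mapper"
```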

On the Ushahidi end, it would also be good to create a Facebook app for Ushahidi so that users could simply map from their own Facebook page rather than open up another browser to map critical information. As one Facebook colleague also noted, friends could then easily invite others to help map a crisis via Facebook. Indeed, this social effect could be the most powerful reason to develop an Ushahidi Facebook app. As you submit a report on a map, this could be shared as a status update, for example, inviting your friends to join the cause. This could help crisis mapping go viral across your own social network—an effect that was particularly important in launching the Ushahidi-Haiti project.

As a side note, there is an Ushahidi plugin for Facebook that allows content posted on a wall to be directly pushed to the Ushahidi backend for mapping. But perhaps our colleagues at Facebook could help us add more features to this existing plugin to make it even more useful, such as integrating Facebook Connect, as noted earlier.
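For the technically inclined, pushing a wall post into an Ushahidi deployment might look something like the sketch below. The field names follow my recollection of the Ushahidi 2.x reporting API (the “task=report” call); do check your deployment’s API documentation before relying on them, and the example values are invented.

```python
# Hedged sketch of pushing a Facebook wall post into an Ushahidi deployment via
# the Ushahidi 2.x reporting API. Field names are from memory; verify against
# your deployment's API docs. The example values below are invented.
import requests  # third-party HTTP library

def push_to_ushahidi(deployment_url, title, description, lat, lon,
                     date="02/22/2011", hour="11", minute="00", ampm="am",
                     category_id="1"):
    payload = {
        "task": "report",
        "incident_title": title,
        "incident_description": description,
        "incident_date": date,           # mm/dd/yyyy
        "incident_hour": hour,
        "incident_minute": minute,
        "incident_ampm": ampm,
        "incident_category": category_id,
        "latitude": str(lat),
        "longitude": str(lon),
    }
    return requests.post(deployment_url + "/api", data=payload)

# e.g. a wall post reporting a blocked road, mapped to a hypothetical deployment:
# push_to_ushahidi("https://example-crisismap.org", "Road blocked near the port",
#                  "Reported on our Facebook wall", 32.90, 13.18)
```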

In sum, there is some low-hanging fruit and there are quick wins that a few weeks of collaboration with Facebook could yield. These quick wins could make a really significant impact even if they sound (and are) rather simple. For me, the most exciting of these is the development of a Facebook app for Ushahidi.

How to Use Facebook if You Are a Repressive Regime

As it happens, the main country case studies for my dissertation are Egypt and the Sudan. I’ll have to write a whole lot more given the unprecedented events that have taken place in both countries since January 25th. As many iRevolution readers know, my dissertation analyzes how access to new information and communication technologies changes the balance of power between repressive regimes and popular resistance movements. This means I’m paying close attention to how these regimes leverage tools like Facebook.

The purpose of this blog post is not to help repressive regimes use Facebook better, but rather to warn activists about the risks they face when using Facebook. Granted, many activists already know about these risks, but those I’ve been in touch with over the past few weeks simply had no idea. So what follows is a brief account of how repressive regimes in North Africa have recently used Facebook to further their own ends. I also include some specific steps that activists might take to be safer—that said, I’m no expert and would very much welcome feedback so I can pass this on to colleagues.

We’ve seen how Facebook was used in Tunisia, Egypt and the Sudan to schedule and organize the recent protests. What we’ve also seen, however, is sophistication and learning on the part of repressive regimes—this is nothing new and perfectly expected with plenty of precedents. The government in Tunis was able to hack into every single Facebook account before the company intervened. In Egypt, the police used Facebook to track down protesters’ names before rounding them up. Again, this is nothing new and certainly not unprecedented. What is new, however, is how Sudan’s President Bashir leveraged Facebook to crack down on recent protests.

The Sudanese government reportedly set up a Facebook group calling for protests on a given date at a specific place. Thousands of activists promptly subscribed to this group. The government then deliberately changed the time of the protests on the day of to create confusion and stationed police at the rendez-vous point where they promptly arrested several dozen protestors in one swoop. There are also credible reports that many of those arrested were then tortured to reveal their Facebook (and email) passwords.

And that’s not all. Earlier this week, Bashir called on his supporters to use Facebook to push back against his opposition. According to this article from the Sudan Tribune, the state’s official news agency also “cited Bashir as instructing authorities to pay more attention towards extending electricity to the countryside so that the younger citizens can use computers and internet to combat opposition through social networking sites such as Facebook.”

So what are activists to do? If they use false names, they run the risk of getting their accounts shut down without warning. Using a false identity won’t prevent you from falling for the kind of mousetrap that the Bashir government set with their fabricated Facebook page. Using https won’t help either with this kind of trap and I understand that some regimes can block https access anyway. So what to do if you are in a precarious situation with a sophisticated repressive regime on your back and if, like 99% of the world’s population, you are not an expert in computer security?

1. Back up your Facebook account: Account –> Account Settings –> Download your information –> Learn more. Click on the Download button.

2. Remove all sensitive content from your Facebook page including links to activist friends, but keep your real name and profile picture. Why? So if you do get arrested and are forced to give up your password, you actually have something to give to your aggressors and remain credible during the interrogation.

3. Create a new Facebook account with a false name, email address and no picture and minimize incriminating content. Yes, I realize this may get you shut down by Facebook but is that as bad as getting tortured?

4. Create an account on Crabgrass. This social networking platform is reportedly more secure and can be used anonymously. A number of activists have apparently switched from Facebook to Crabgrass.

5. If you can do all of the above while using Tor, more power to you. Tor allows you to browse the web anonymously, and this is really important when doing the above. So I highly recommend taking the time to download and install Tor before you do any of the other steps above.

6. Try to validate the authenticity of a Facebook group that calls for a protest (or any in-person event for that matter) before going to said protest. As the Sudan case shows, governments may increasingly use this tactic to arrest activists and thwart demonstrations.

7. Remember that your activist friends may have had their Facebook accounts compromised. So when you receive a Facebook message or a note on your wall from a friend about meeting up in person, try to validate the account user’s identity before meeting in person.

If you have additional recommendations on how to use Facebook safely, or other examples of how repressive regimes have leveraged Facebook, please do add them in the comments section below for others to read and learn. Thank you.