Blog
User-Generated Content: What Brands Can Learn from the Facebook Files About Monitoring UGC
User-generated content (UGC) is a powerful way to crowdsource creative ideas, deepen emotional ties with your brand, expand the organic reach of marketing initiatives, and strengthen consumer trust. However, as we’ve learned from Facebook, UGC comes with the risk that people will create offensive material that goes against your brand’s core values. That makes it crucial for brands to establish clear guidelines and crisis-management frameworks to ensure that UGC is not only relevant but also appropriate.
The Guardian recently published the Facebook Files, leaked internal documents that explain the company’s policies for monitoring and removing offensive content. These documents highlight the difficulties of flagging content and the ambiguity between acceptable and unacceptable posts.
It goes without saying that each individual has a different comfort level when it comes to violent, sexual, or vulgar material – which is something Facebook is actively working to address. While most companies don’t have to deal with that level of complexity, from a branding perspective, UGC should reflect the brand’s ethos and culture, and simultaneously highlight individuals’ nuances.
With over 4.7 billion posts daily, Facebook has a much larger pool of content than most brands looking to use UGC in their marketing strategies; however, brands can learn valuable lessons from the most used social network on how to monitor content.
1) Clearly state an objective: In Facebook’s case, the brand is dedicated to fostering a supportive, safe, informed, civically engaged, and inclusive community; thus, in an ideal world, every post would exhibit these ideals.
Brands that utilize, rather than host, social platforms typically seek to increase consumer awareness and engagement with UGC campaigns. These campaigns often take the form of a competition, a call to action related to larger cultural conversations, or a means to crowd-source creative. By establishing a clear objective for your UGC campaign, you set the stage for the type of content you want your audience to create and can more easily assess whether posts fit inside – or outside – of that framework.
2) Crowdsource surveillance: Monitoring every post would be extremely time-consuming, and although Facebook and others are developing artificial intelligence to help identify unacceptable content, current technology struggles to pinpoint context and implications. Therefore, it’s critical to enlist consumers as community watchdogs.
To help monitor what’s acceptable, Facebook includes a tab in the top right corner of every post that enables users to report it and explain why they flagged the content. The company is also developing a system that will allow users to set their own standards for offensive content, which will help filter the type of content each person sees and enable customized tolerance levels.
While most posts will not be offensive, asking viewers to mark disturbing content is an excellent way to identify material that goes against your stated objective or brand culture.
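To make the crowdsourced-flagging idea concrete, here is a minimal sketch of how a brand might tally user reports on UGC posts and escalate to human review once reports cross a threshold. The class name, thresholds, and fields are hypothetical illustrations, not Facebook’s actual reporting system:

```python
from collections import defaultdict

# Hypothetical thresholds -- a real system would tune these per content category.
REVIEW_THRESHOLD = 3      # reports before a post is queued for human review
AUTO_HIDE_THRESHOLD = 10  # reports before a post is hidden pending review


class FlagQueue:
    """Tracks user reports on UGC posts and escalates flagged content."""

    def __init__(self):
        self.reports = defaultdict(list)  # post_id -> list of (user_id, reason)
        self.review_queue = []            # posts awaiting human review
        self.hidden = set()               # posts hidden pending review

    def report(self, post_id, user_id, reason):
        # Ignore duplicate reports from the same user.
        if any(u == user_id for u, _ in self.reports[post_id]):
            return
        self.reports[post_id].append((user_id, reason))
        count = len(self.reports[post_id])
        if count == REVIEW_THRESHOLD:
            self.review_queue.append(post_id)
        if count >= AUTO_HIDE_THRESHOLD:
            self.hidden.add(post_id)


queue = FlagQueue()
for uid in ("u1", "u2", "u3"):
    queue.report("post-42", uid, "violent content")
print(queue.review_queue)  # ['post-42']
```

The design choice worth noting is that viewers do the detection while the brand retains the final decision: crossing the review threshold only queues the post, so removal still passes through a human applying the stated objective.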
3) Set standards and protocol: Once content has been flagged – internally or by other users – it’s important to have a clearly established framework for determining what is and isn’t acceptable. This builds transparency and accountability and helps avoid any implication of bias.
As the Facebook Files illustrate, numerous specifications and criteria go into determining acceptability. In general, the social media company seeks to “allow as much speech as possible but draws the line at content that could cause real world harm.” One example is its protocol on violent content: the company removes threats that specifically address an individual, religious group, or other particular target, but allows generic content that explains how to cause harm to someone, or aspirational or conditional expressions of violence.
Ultimately, established frameworks are important for streamlining decisions about what is unacceptable and how to create an egalitarian method of removing content.
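As an illustrative sketch of what an established framework can look like in practice, written standards can be encoded as explicit rules so that every takedown decision traces back to a stated criterion. The rule names, post fields, and logic below are hypothetical examples, not Facebook’s actual policy engine:

```python
# Hypothetical rule framework: each rule pairs a written standard with a
# predicate, so every removal decision cites a specific stated criterion.
RULES = [
    ("credible threat against a specific target",
     lambda post: bool(post.get("violent") and post.get("specific_target"))),
    ("off-brand for campaign objective",
     lambda post: bool(post.get("off_topic"))),
]


def moderate(post):
    """Return ('remove', reason) if any rule matches, else ('allow', None)."""
    for reason, matches in RULES:
        if matches(post):
            return ("remove", reason)
    return ("allow", None)


print(moderate({"violent": True, "specific_target": True}))
# ('remove', 'credible threat against a specific target')
print(moderate({"violent": True, "specific_target": False}))
# ('allow', None)
```

Because each removal returns the rule that triggered it, the same framework that streamlines decisions also documents them, which supports the transparency and accountability goals described above.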
UGC is an increasingly effective way to deepen engagement with your fans and followers, but it can be challenging to ensure creators feel free and safe to generate content while preventing offensive material from tarnishing your brand. That’s why it is crucial to clarify intentions, set standards, and allow users to help monitor material.
Connect with We First!
Twitter: @WeFirstBranding
Facebook: WeFirst
LinkedIn: WeFirst
Youtube: WeFirstTV
Join our mailing list and invite Simon to speak at your next event or meeting.