There's more to context than meets the eye
April 23, 2021
Turns out, some advertisers will walk their talk on behaving ethically, up to a point.
Sure, we’ve been here before, but CEOs have short memories and people are wising up to that apathy; something had to give. Now, companies are being asked, pressured, forced, encouraged, and regulated to take a stand or at least not stand back on everything from data privacy to voting rights, diversity to sustainability.
Nowhere is this sort of corporate do-goodery more on show currently than in advertising.
Responsible spending seems to be more important than ever, if the agency pitches are anything to go by. In February, for example, GroupM joined the Conscious Advertising Network (CAN), a voluntary coalition of more than 70 organizations set up to highlight the ethics that underpin advertising. Of course, there's a chance this could all be window dressing, given how often brand purpose tends to look like propaganda.
Not So Safe
Eighty percent of the more than 3.3 billion pieces of content removed from social media platforms – including Instagram, TikTok, Pinterest and Snapchat – is spam, adult or explicit content, or hate speech, according to a new report from the Global Alliance for Responsible Media (GARM). The data in the report – GARM's first on digital brand safety – was self-reported, Research Live reports. And Ad Age points out there are still gaps in reporting on topics such as how safe the digital platforms are for consumers and advertisers, how effective they are at policing themselves, and how they correct mistakes. GARM began working more closely with all the platforms following the political and social upheaval of 2020 and the brand boycott of Facebook, which was prompted by concerns over hate speech and disinformation.
Nearly 80% of content removed online is spam, hate speech or explicit content
20 April 2021
Nearly 80% of the 3.3 billion pieces of content removed from major content platforms consists of spam, adult and explicit content, and hate speech and acts of aggression. This is according to the WFA Global Alliance for Responsible Media (GARM), which tracked brand-safety performance across seven platforms. Today's report includes self-reported data from Facebook, Instagram, Pinterest, Snap, TikTok, Twitter and YouTube.
The data accumulated by WFA GARM also shows growth in action taken on hate speech and acts of aggression across platforms. "GARM platforms have reported increases in activity and its impact, with significant progress by YouTube in the number of account removals, Facebook in the reduction of prevalence, and Twitter in the removal of pieces of content," the report said.
Four-fifths of content removed from tech platforms falls under three key categories
The vast majority of ejected content comprises spam, adult and explicit content, and hate speech and acts of aggression, a new report reveals.
by Staff