Social Interpretation: an Imperial War Museum blog

Bullying Jack

Public house debate, 1945. An American soldier is amongst the audience listening to the second speaker of the evening, Miss Crooks (not pictured), on the topic of 'America and Britain'. The original caption states that "the few Americans present were unusually tongue-tied, had nothing to say to frank discussion of their qualities".

How do you control what information is online?  In the case of Twitter and Facebook, with difficulty, as Ryan Giggs found out last summer.  But these are huge sites with a lot of organisation behind them, and a fair amount of resources with which to fight legal claims.

So what about your smaller site?  How do you control content?  What about the issues of defamation, data protection, and, with public authorities, freedom of information?   Or just insults, bullying and heated debates getting out of hand?

I talked this over with a friend of mine who used to manage a local museum.  They had a comments box in which visitors were encouraged to share their thoughts on the Museum via the technologically advanced method of writing them on a card and putting them in a box.  The box was checked each week and the best comments put in a case in the Museum.  She said the most common comment was ‘Jack is gay’.  Always the same name (changed here), with Jack clearly the target of a bullying campaign.  Each week, the box would be full of cards with this comment on them, the authors clearly trying to drop them into the box face up so they could be read.

Obviously these cards would be edited out by the staff, but what if the museum had had an online comments box instead?  If you are moderating content, then fine, these things can be removed quite quickly, but otherwise?  Checks can be set up, with certain words and characters in a message triggering the blocking of that message, but this in itself requires some moderation to establish how the site is being used.  And in the case of large museums, covering controversial subjects and with a large number of potential users, moderating sites can take up a large chunk of staff time.
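
To make the idea of automated checks concrete, here is a minimal sketch of the kind of word filter described above, written in Python.  The blocked phrases and the function name are invented for illustration; a real site would maintain its own list and route flagged messages to a human moderator.

```python
# Hypothetical sketch of a word-filter pre-moderation check.
# The blocked patterns below are assumptions for illustration only.
import re

BLOCKED_PATTERNS = [
    r"\bjack is gay\b",   # the bullying campaign described in the post
    r"\bbuy cheap\b",     # placeholder for a typical spam phrase
]

def should_hold_for_review(message: str) -> bool:
    """Return True if the message matches a blocked pattern and
    should be held back for a human moderator to look at."""
    text = message.lower()
    return any(re.search(pattern, text) for pattern in BLOCKED_PATTERNS)
```

Note that a filter like this only catches exact phrases; determined commenters will vary their spelling, which is why the post observes that some human moderation is still needed to see how the site is actually being used.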

We would all like to think comment and discussion sites are there for intelligent and well-thought-out debate, but there is always the potential for comments of the ‘Jack is gay’ variety (especially if the museum is visited by large numbers of schoolchildren), or the usual endless posts demanding the legalisation of cannabis/denying climate change/demanding the return of the Elgin Marbles/enter your museum’s controversial subject here.

So how far do you go to protect people like Jack or stop your site being hijacked without stifling debate and spontaneity?  And how much of a problem is this anyway?  Do you welcome or shy away from controversy?  Striking a balance between freedom to comment and protecting the individual – or your institution – may not always be easy.

Sarah Henning
Museum Archivist
Imperial War Museum

1 comment
  1. Tim Trent says: January 20, 2012 at 6:16 pm

    Online forums are more often used to lay spam link trails than to libel individuals. The duty of moderation is no less important there, though. I would argue that pre-moderation should only be used in the event of a large influx of inappropriate remarks.

    It is definitely worth considering blacklisting repeat offenders, usually by IP address or IP address range. In general one loses very little real comment, if any, by so doing. Sites like Wikipedia where all may edit rely on their own members to patrol for vandalism. Smaller sites cannot achieve this.
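
    [A sketch of the IP blacklisting the comment describes, assuming Python's standard `ipaddress` module; the blocked ranges below are invented documentation-only addresses, not real offenders.]

```python
# Hypothetical sketch of blacklisting commenters by IP address or range,
# using Python's standard ipaddress module. Ranges here are examples only.
import ipaddress

BLOCKED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),    # an entire offending range
    ipaddress.ip_network("198.51.100.17/32"),  # a single repeat offender
]

def is_blocked(ip: str) -> bool:
    """Return True if the commenter's IP falls within any blacklisted range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in network for network in BLOCKED_NETWORKS)
```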

    I volunteer to do the web presence for Dartmouth Museum, a small charitable museum in Dartmouth in South Devon. We decided that we would use Facebook as our blog-like vehicle, and have not, so far, had inappropriate comments there. We know that we will at some point, and we will simply remove them and use whatever blocking tools Facebook supplies.
