It seems a little bit self-regarding to write up a comments policy for a small blog these days. Due to changes in the way people consume media online, and the frequency with which I post, I don’t get nearly as much blog traffic as I did a decade ago. Furthermore, commentary has largely migrated away from the blogosphere towards social media, which for the most part is out of my hands.
Why write a policy confined in scope to an outlet folks won’t use? Why put yourself in a position to be questioned on matters of policy compliance over so little? And what kind of reader would press for the precise exercise of such a policy in such circumstances, anyway?
So this isn’t a policy document. This blog doesn’t have a comments policy. To paraphrase old Art, “I reserve the right to be a capricious bastard…”
Still, what makes for good discussion online is interesting and important, if elusive and ever changing. Instead of delivering edicts limited to an incredibly confined scope, this post serves as a discussion piece, should it be needed, wherever it may be wanted.
***
In the ’90s, if you joined a discussion list or USENET newsgroup where “netiquette” was enforced, the kinds of edicts you’d see invoked often entailed technological concerns: top- vs bottom-posting, cross-posting, HTML vs plain-text content, and so on. If you were new to the conventions – that is, if you hadn’t used email prior to the popular uptake of the World Wide Web – it was a bit like learning the conventions of CB radio for the first time.
In 1995, Intel’s Sally Hambridge wrote a seminal text for the Network Working Group of the Internet Engineering Task Force: Netiquette Guidelines (RFC 1855). Being largely a response to an influx of “Newbies”, and geared towards providing a blueprint for policy makers at the time, the document has dated noticeably. For example…
“Never send chain letters via electronic mail. Chain letters are forbidden on the Internet. Your network privileges will be revoked. Notify your local system administrator if your ever receive one.”
(Hambridge, Netiquette Guidelines, ‘2.1.1 For mail’, 1995)
By today’s standards, just over two decades later, this clause seems overreaching and authoritarian. It’s certainly, insofar as the social media equivalent is concerned, unachievable. If you were to contact your ISP in 2017 to inform them you’d received a chain letter, maybe you’d get a reply from the help desk, but you can pretty much guarantee that the sysadmins wouldn’t be that interested in your query.
What I’d like you, the reader, to consider, though, are the likely concerns behind this rule, and the context surrounding it.
In terms of the concerns: to this day, chain letters and their equivalents degrade the signal-to-noise ratio in Internet discussion. The best-case scenario is some mild entertainment; the worst, especially when such spam is both viral and disinformative, is an effect that undermines democracy.
In terms of the context, discussed elsewhere in Hambridge’s text, there’s the reality that in 1995 people often didn’t have their own Internet connection – they often had an account at work, or on campus, or so on. The implication of this, not expressed so clearly in Hambridge’s text, though more obvious at the time, was that your sysadmin was a flesh-and-blood human being you may very well have mingled with in meatspace; someone you basically had a pact with, rather than someone institutionally removed from you to the umpteenth degree. The illusion that the Internet was a public space, rather than a construct built on privately owned servers, wasn’t nearly as strong then as it is now, either.
There also weren’t the automated bells and whistles sysadmins have today, which meant they may very well have had to get up close and personal with your drama. Algorithms are copping a lot of flak lately, having introduced an array of self-perpetuating biases into democracy itself, but at base, they’ve saved sysadmins an awful lot of work as well.
So back in the day, if you handled yourself courteously – thus potentially saving your sysadmin an array of thankless and resource-draining chores – you got your Internet privileges. On a more manually run Internet, over more obviously private infrastructure, saying chain letters were “forbidden” was a far more reasonable expectation.
***
Without wanting to sound ecclesiastical about it, one of the best ways to kill a comments policy, or any policy regarding discussion, is to use the letter of the law to violate the spirit of the law. You’ll see this in particular whenever someone who’s abusive online engages in a narrow parsing of the rules in order to confect the case that They’re The Victim Here(tm) – that everyone else merely feels they’ve been abused, but that objectively, by the rules, it’s everyone else who’s engaging in wrongdoing.
If you’ve ever argued with a Men’s Rights Activist, or other, similarly querulous sorts, you already know what I’m on about.
I’m no doubt re-inventing the wheel by making this observation, but I strongly suspect that as technology ages, literal rules of communication, heavily grounded in the particulars of a given medium, are bound to act as an anchor upon civil, open discussion. This rapid dating of rules then further compounds the problem of the letter of the law being used to violate its spirit.
It wouldn’t have been out of place, for example, for a 1990s sysadmin to consider what someone did over a different Internet connection, on a different medium, to be outside their purview. However, even late last decade, it wouldn’t have been unreasonable for someone moderating blog comments to take Twitter harassment into account when deciding who is or isn’t allowed to participate.
It’s not that the concerns have changed; it’s that the technology, the specific consequences, and hence the range of feasible implementations of the rules, have. This, I think, is true much more of online discussion than of, say, the conventions of formal meetings in meatspace.
New conventions were needed after Gopher gave way to the World Wide Web, the latter eventually bringing in an influx of “newbies”. Newer rules were needed with the explosion of Web 2.0 in the Aughts. Social media has subsequently thrown the specifics of a lot of this out of the window – supplanting the older technologies all while increasing the size of a user base that largely doesn’t care about how it was all done before they arrived, much less why.
***
So what, then? No rules? Maybe not here, but in general I don’t think it’s all a lost cause.
While the specific acts have changed – SkeezBros don’t ask “ASL?” on Facebook like they did on Yahoo! Chat, and you don’t have to manually accept that dick pic on Facebook the way you had to on IRC – the mentality of abusers has not. Instead of grounding the rules so heavily in tech, then, why not base them on something more persistent, like basic attitudes?
I can think of a few good reasons why this may present problems. It is easier, for example, for both algorithms and humans to target cuss words than it is to run text through something based on the DSM-V. While something based on the DSM-V may provide insights into more far-reaching behaviours than the current tech used to enforce the ToS does, it would also be more expensive.
That is, until Facebook finds a profitable way to sell the results of a DSM-V-based test to the likes of potential employers, insurers and so on. (Assuming they haven’t already.)
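To make that contrast concrete, here’s a minimal sketch – purely illustrative, with a hypothetical block list, and nothing like what any platform actually runs – of why keyword-targeting is so cheap compared with anything that has to judge an attitude over time.

```python
# Purely illustrative: a hypothetical block list, not any platform's real filter.
BLOCKED_WORDS = {"cussword1", "cussword2"}

def flag_by_keyword(comment: str) -> bool:
    """Cheap: flag a comment if it contains any blocked word."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return bool(words & BLOCKED_WORDS)

def flag_by_attitude(comment_history: list[str]) -> bool:
    """Expensive: judging a persistent attitude needs context gathered over
    time and, for now, a human actually reading it - not a set lookup."""
    raise NotImplementedError("this is the part that doesn't come cheap")
```

The first function is the sort of thing an algorithm knocks over in microseconds; the second is the judgement call I’m arguing humans still need to make.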
Ultimately, though, I think that insofar as human involvement in facilitating online discussion is concerned – and at least until AI is more field-proven as democracy-friendly, I think humans should be more involved here – it’d be good for folks to familiarize themselves with a bit of human nature and its implications. (Viewers of Halt and Catch Fire can consider me Team Comet on this front.)
***
What sort of things about human nature? What kind of considerations?
At the risk of appearing to create a set of rules for this blog, here’s a list of a few things that come to mind. It’s by no means an exhaustive list, but I hope it highlights the kind of attitude-based, rather than specific-tech-based approach to moderating online discussion I’m talking about.
This bit will probably blow out the word count, so feel free to skip forward if you get the gist.
Neither a media outlet nor its authors are apps on your computer.
Automated, near-instantaneous electronic gratification may have conditioned you to expect a certain response at the click of a button. But unless shoe-horned into inequitable conditions, humans don’t offer this feature to end-users. If you’re in the habit of being indulged this way, try to grow out of it, and certainly don’t expect it from actual people in discussions of contentious issues.
(Nor, if you do manage to get humans to be largely compliant with such expectations, should you expect quality discussion; if you reduce a human to the role of a bot, don’t expect them to produce output of a higher standard.)
If you still have trouble with this concept, consider taking your technological solipsism to a therapist.
You are not the editor of someone else’s media.
Unless you’ve got a heap of state power behind you, or a contract employing or otherwise positioning you as an editor, you’re not participating in discussion in that capacity. Bloggers etc. get to make their own mistakes in their own space. Think you’ve got legal recourse to change that? See a lawyer, or ask Napoleon The Boar.
There are occasions where a friendly, professional editor may chip in with editorial advice for an emerging writer, but even then, from what I can tell, said editors tend to observe and appreciate the emerging writer’s creative autonomy. Unsolicited editorializing is something I’ve only really seen either from people who aren’t editors at all, or from recently graduated, self-employed editors with massive entitlement biases. (Admittedly, my experience is limited.)
If you try hard enough, maybe you’ll be a shit lawyer.
Lawyers tend not to push judges as far as some trolls try to push admins, because if they did, they’d be turfed for contempt of court. This seems funny to me, because a lot of Internet trolls appropriate the terms of art and dramatized rhetoric of TV lawyers.
Not that I think lawyers are perfect role-models, but I think folks cribbing their lines from Rumpole of the Bailey could at least emulate a little of his self-restraint (such as it is).
You’re not owed affection or affirmation.
Sure, people shouldn’t dehumanize you, but it’s not incumbent upon individuals, as individuals, to tend to your wounds after the fact – even individuals with opinions about the nature of the kind of dehumanization you’ve experienced. There are a lot of ways this matter can play out, politically.
Even if you’ve been dehumanized by an oppressor, conservatives may very well tell you to harden the fuck up. This wouldn’t be my approach. Rather, I’d argue that it’s the responsibility of a progressive state to cater to your psychological health via a universal public health care system. (That, and for the system causing the initial oppression to be overturned.)
I can’t, however, see myself as being personally responsible for providing this kind of health care; for a start, while I’m interested in these kinds of issues, I’m not a qualified practitioner. Nor, incidentally, are most bloggers. You don’t need people like me tinkering around in your brain. Further, there’s a whole load to unpack here concerning the issue of individual action versus collective organizing (and how progressive causes have been undermined by such individualism).
This is where I’d find common ground with a number of conservative bloggers; it’s not our job as individuals. We’re not obliged to love you. We’re not personally obliged to provide care. Suffice it to say that those who aren’t actually oppressed (yes, you, MRAs) can reasonably expect even less sympathy.
“Practitioner of pathological behaviour” is not an oppressed class.
The mentally ill may on occasion exhibit behaviour that is pathological towards other people, but as any number of people affecting social justice concerns have rightly pointed out, pathological behaviour towards others isn’t something mental illness guarantees, nor something mental health prevents. We mentally ill are not incapable of occasionally keeping our shit together.
The corollary that some people seem unwilling to draw, though, is that while the mentally ill may form a disadvantaged class, a predisposition to abusive behaviour does not qualify as membership in this class.
If you’re a clinical narcissist, and that’s the limit of your psychological flaws, then sorry, no, you’re not mentally ill and you’re not being stigmatized/oppressed on that basis. All this means is that you have a particular set of character deficiencies that makes you a pain, and that this may be of diagnostic use for people with an interest in that kind of thing (e.g. employers, prisons etc.). The pathology is in what you do, not in what’s being done to you.
This doesn’t make you a victim of SJWs/2nd Wave Feminists/The Family Court/Reductive Positivists/Psychiatry/[Insert Anyone Else You Wish To Scapegoat]. Nor, back on the matter of this piece, does it make poor behaviour on the Internet magically excusable.
If this is you, people get to exclude you from their spaces on precisely this basis, not in spite of it. No amount of trying to shoehorn yourself into a category where you don’t belong changes this.
Connotations
Intending your words to have different connotations than the ones people attach to them won’t change the connotations attached to them. Sure, it’s not unreasonable to anticipate that some, possibly many people may extend a degree of charity of interpretation to you. Yes, some other folks will vexatiously attribute any connotation to any word you use if it serves their ends.
But if you wind up with dishonest interlocutors, and they’ve not come to you, then a solution is as easy as walking away. Why spend time and effort at a blog, or Facebook page, or IRC channel, or wherever else where you’ll be intentionally misunderstood?
And if you’ve honestly been inept in your use of language, and you haven’t been abused for it, you’ve got an opportunity to learn. Why squander that by letting your ego get in the way?
Why would your very first instinct be to be skeptical of the sincerity of a person attaching different connotations to a word than you do – a skepticism that kicks in before even a consideration of the semantics of the word, let alone the context your interlocutor argues from?
The bots have to pass a Turing Test, and so do you.
The Turing Test, put simply, is a test of whether an artificial intelligence can act so much like a human that it becomes indistinguishable from one. There’s probably no good reason why actual humans should be held to a lower standard online, so it’d also probably be a good idea for people to lift their game to a level more convincing than that of an automated advertisement for penis pills.
Best not to make your contributions boilerplate if you want actual discourse. (Sam Harris fans, I’m looking at you, but not at you in particular).
You get no receipts.
So someone’s blocked/banned you from their own space online. Their reason: …
Get used to it.
***
It’s maybe a bit much to expect everyone with control over an online outlet to exercise judgement in line with a thorough study of human nature. Despite the verbiage, I certainly haven’t reached that benchmark here. Like most people, I haven’t majored in psychology.
To some extent, the success of a healthy place for open discussion is going to rely on the discernment of readers, at least insofar as supporting good hosts of discussion goes. A world where the likes of The Mind Unleashed and The Freethought Project can masquerade as hosts of serious discussion, in front of millions no less, possibly hints at a need for moderated expectations. The mere existence of the Alt-Right as an Internet powerhouse makes this seem all the more daunting.
And not everyone who wants to host discussion in good faith – as is their right – can be a good student of human nature to begin with, let alone put things into practice. The hurdles are many, and it’s not the case that we’d all want to discourage argument in good faith, even if hosted a little ineptly.
As personally confronting as it may be, I’m finding that judging people by their character, rather than moderating them according to the precise letter of their words or the formation of their metadata, is going to be the best way to be fair to the fair-minded. For all the risks, I’m hoping this outlook will prove the more sustainable in the long run, for anyone who takes this approach.
~ Bruce