
Re-post: Social networking requirements

During the December/January slowdown, Geek Feminism is re-publishing some of our highlights from last year. This post originally appeared on July 8, 2011.

I knew that someone had posted on this blog about the requirements a feminist-informed social network would have. Turns out it was me. A year on, and prompted by the discussions around Google+, I think I have some positive requirements. (I recommend reading the old comments thread too.)

Control over identifying information. Name, gender, age, who you are friends with, what you talk about, what events you are in, and what you look like: this is all varyingly sensitive information, and it should be possible to hide any or all of it.

As few restrictions as possible on identity. Allow the use of pseudonyms; don’t assume that everyone has two names, or two ‘important’ names; let gender be specified freely, if it is specified at all. Require as little structured compulsory information as possible: unstructured, free-form, and non-compulsory are the key things here. A sketch of what this looks like in code follows.
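To make these first two requirements concrete, here is a minimal sketch in Python. All of the names are illustrative rather than drawn from any real site: every identity field is optional free-form text, and a field is shown to others only if its owner has explicitly opted in.

    from dataclasses import dataclass, field
    from typing import Optional, Set

    @dataclass
    class Profile:
        # Every field is optional free-form text: no compulsory "real name",
        # no assumption of a two-part name, no gender picked from a fixed list.
        display_name: Optional[str] = None   # any string at all; pseudonyms welcome
        gender: Optional[str] = None         # free text, not an enum
        age: Optional[str] = None            # free text too; "30ish" is fine
        # Only fields the user explicitly lists here are ever shown to others.
        visible_fields: Set[str] = field(default_factory=set)

        def public_view(self) -> dict:
            """Return only the fields this user has chosen to reveal."""
            return {name: getattr(self, name)
                    for name in self.visible_fields
                    if getattr(self, name) is not None}

So Profile(display_name="Mary", visible_fields={"display_name"}).public_view() reveals the pseudonym and nothing else, and an empty visible_fields set reveals nothing at all.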

Accessibility. State-of-the-art accessible design, including testing with screen readers, colour palettes suited to as many variants of vision as possible, collaborative transcription and captioning of images, and no flashing ads or autoplaying video.

You own your space and control entry. This means you should be able to moderate things. Being able to ignore people is good but is not enough: you likely don’t want to subject your friends to the conversation of a person who you dislike enough to ignore.

Rigorous site-level attention to spam and harassment. No one (much) wants spam, enough said. But harassment (continued interactions or attempts to interact after being told to stop, including ban evasion) should be a terms-of-service-level violation, as should any threats (whether or not the person has been told to stop). Use of threats or hate speech in user names, default icons, or other things that appear in directory listings or search results may also need to be considered. All of this requires staffing and a complaints system.

Consistent access control. If you set something private, or it was private by default at the time, it should stay that way, probably to the extent that, if it can’t remain private for technical reasons, it should be deleted or hidden by the site rather than made public.
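One way to state this rule in code is that visibility may only ever ratchet toward more private. Here is a sketch with made-up level names, not any real site’s model:

    # Levels ordered from least to most visible.
    VISIBILITY_ORDER = ["private", "friends", "public"]

    def migrate_visibility(current, supported):
        """Choose a visibility for an item whose old level no longer exists.

        The level may only fall back to something more private; if no
        more-private level is supported, the item is hidden outright.
        It is never silently widened toward public.
        """
        if current in supported:
            return current
        rank = VISIBILITY_ORDER.index(current)
        # Walk from the nearest more-private level downwards.
        for level in reversed(VISIBILITY_ORDER[:rank]):
            if level in supported:
                return level
        return "hidden"

If ‘friends’ disappears as a feature, a friends-only item falls back to ‘private’; if even that can’t be honoured, it is hidden, never exposed.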

Access to your work and the ability to export it. The correct thing to do here is a little tricky (are other people’s comments in your space yours to export and republish, or not? What about co-owned spaces?). The autonomo.us community has had some inconclusive discussions.

Fine-grained access control. I don’t think access control along the lines of what LiveJournal and its forks have had for years, and what Facebook and Google+ have implemented to varying degrees, is strictly required (public blogs have a strong presence in activist discussions), but it’s useful for more universal participation, and some people need it.
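For the curious, the LiveJournal-style model amounts to something like the following Python sketch (all names invented for illustration): each post carries an explicit audience of named groups, and a reader sees the post only if they belong to at least one of them.

    from dataclasses import dataclass
    from typing import Dict, FrozenSet, Set

    @dataclass
    class Post:
        author: str
        body: str
        audience: FrozenSet[str]  # group names allowed to read, e.g. {"close friends"}

    def can_read(post, reader, groups):
        """True if the reader belongs to any group in the post's audience.

        groups maps each group name to its set of members; the author
        can always read their own post.
        """
        if reader == post.author:
            return True
        return any(reader in groups.get(name, set()) for name in post.audience)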

Clear limits on sharing. This is something that Google+ early testers are coming up against again and again: ‘Limited’ posts are or were shareable, and a commenter using someone’s name with the + sign (e.g. ‘+Mary’) does or did actually invite them into private comment threads without the original poster’s input. If you offer access control, the software must make it clear what controls apply to any given space, and whether you have any influence over them, so that you can control your own revelations in that space. Substantial user testing is required to make sure that people understand what your interface is trying to say.
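The invariant the Google+ examples violate can be written in one line: a reshare must never reach anyone the original audience did not already include, unless the original poster explicitly consents. A sketch, again with illustrative names:

    def reshare_allowed(original_audience, reshare_audience):
        """Allow a reshare only if it reveals the post to no one new."""
        if "public" in original_audience:
            return True  # already public: resharing reveals nothing new
        # Anything wider than the original audience needs the original
        # poster's explicit consent instead of going through silently.
        return reshare_audience <= original_audience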

No advertising. I guess it might be possible to show people ads in a way that has neither the problem of offensive or upsetting ads (“lose weight for your wedding today!”) nor the problem of the advertisers doing dodgy malware ads to harvest your info or worse. Maybe.

What else? How do your favourite sites do on these?

4 thoughts on “Re-post: Social networking requirements”

  1. Zack

    I have been doing academic research on Internet censorship — by which I mean the real thing, not the sort of active site moderation that gets OMG CENZOR’D reactions from trolls. However, thinking about that leads me to thinking about the gray area. The Obama administration must have people doing active moderation on their site where anyone can submit a petition, otherwise it would be overrun by the likes of /b/ in seconds. But I, as a citizen, would like to have some sort of way to verify that they were only stomping on the trolls and not deleting petitions that were politically embarrassing.

    I wrote an article about related ethical issues on my own site, and I’d be interested in comments on that from your perspective; but I’m also struggling with the technical issues. How do you deny the trolls a soapbox in a way that allows uninvolved parties to confirm that that is what you are doing? Are there circumstances where you have to make something vanish without auditability, because any possible audit trail would itself reveal the thing that needs to vanish? Etc. (Ignoring legal issues — the boundaries of technical possibility are fuzzy enough to me without adding that can of worms.)

  2. Megpie71

    I agree strongly with the whole “no advertising” one – one of the key points I like about Dreamwidth is that for them, the users of the site are their clients, rather than the advertisers. So they’re more willing to put time and effort into things which will make the site a good place for the people who use it, rather than making it a profitable place to display ads. I prefer being a client to being a product.

  3. Gunnar Tveiten

    Distributed. A centralised system forces you to agree to the rules laid out by the organisation running the site in order to interact with your friends. The site may be in a jurisdiction you’re no fan of, or it may be owned by someone you don’t trust, or have any number of other problems.

    It should be possible to interact fully with people using service-A, without yourself having an account with service-A (or being subject to their rules).

    Diaspora and some other systems fulfill this requirement.

  4. John

    Diaspora and some other systems fulfill this requirement.

    A problem with distributed systems is that it’s harder to meet the ‘Rigorous site-level attention to spam and harassment’ requirement; a classic example is Usenet.

    Another approach that I think is now possible (going by the natural-language ‘comprehension’ shown by IBM’s Watson system), though perhaps as yet beyond the reach of personal machines, is automoderation by software that ‘understands’ what is offensive (or plain irrelevant, including spam) at a much subtler level than keyword matching. That way, we could have distributed social networking systems that ignore trolls on your behalf.
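    To sketch the idea (purely illustrative, nothing Watson-scale: just scikit-learn’s off-the-shelf text classification with a toy training set), the keyword matcher is replaced by a trained model:

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        # A toy training set; a real deployment would need far more data
        # and far more care about bias in what gets labelled "troll".
        train_texts = ["great post, thank you", "I learned a lot from this",
                       "you idiot, shut up", "nobody wants you here"]
        train_labels = ["ok", "ok", "troll", "troll"]

        moderator = make_pipeline(TfidfVectorizer(), MultinomialNB())
        moderator.fit(train_texts, train_labels)

        def should_hide(comment):
            """Hide comments the model flags as troll-like, on the reader's behalf."""
            return moderator.predict([comment])[0] == "troll"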
