Data

Don't fall into the well again

Now that we’ve had a little more time since the initial shock and awe of Elon Musk’s purchase of Twitter, alternative social networks have had a chance to build up interest as potential replacements for Twitter.

We have been given a rare chance, as a culture of social media addicts, to break a cycle we keep repeating and move to something that isn’t just owned by somebody who stands to make billions of dollars from your decision.

Unfortunately, we’re at real risk of falling into the same well as before, convinced that the open choice is somehow too complicated. As a result, two separate social networks, Post and Hive, have drawn the interest generated by Twitter’s failings away from open networks like Mastodon and the broader Fediverse.

Don’t fall into the well again.

I have nothing to hide

I know you are not a terrorist, but you still take privacy for granted in many aspects of your daily physical life. You expect it every time you go to the bathroom and close the door; you expect it when you go to the doctor or confide in a close friend.

[…]

Everyone has things they keep to themselves, things they say only to one special person, things they share with close friends or family, things they discuss only with a doctor, and so on and so forth.

If in real life we have so many layers, why don’t we expect the same level of privacy on the internet? We are giving Facebook, Google, LinkedIn, and the others more information about ourselves than we give to our significant other, or even to ourselves (remember, Google is investing in DNA mapping).

Browse Against the Machine

[T]he web looks more and more like a feudal system, where the geography of the web has been partitioned off by the Frightful Five. Google, Facebook, Microsoft, Apple, and Amazon are our lords and protectors, exacting a royal sum for our online behaviors. We’re the serfs and tenants, paying homage inside their walled fortresses. Noble upstarts are erased or subsumed under their existing order.

Build a Better Monster: Morality, Machine Learning, and Mass Surveillance

We built the commercial internet by mastering techniques of persuasion and surveillance that we’ve extended to billions of people, including essentially the entire population of the Western democracies. But admitting that this tool of social control might be conducive to authoritarianism is not something we’re ready to face. After all, we’re good people. We like freedom. How could we have built tools that subvert it?

[…]

The learning algorithms have no ethics or boundaries. There’s no slot in the algorithm that says “insert moral compass here”, or any way to tell them that certain inferences are forbidden because they would be wrong. In applying them to human beings, we leave ourselves open to unpleasant surprises.

The issue is not just intentional abuse (by trainers feeding skewed data into algorithms to affect the outcome), or unexamined bias that creeps in with our training data, but the fundamental non-humanity of these algorithms.

[…]

So what happens when these tools for maximizing clicks and engagement creep into the political sphere?

This is a delicate question! If you concede that they work just as well for politics as for commerce, you’re inviting government oversight. If you claim they don’t work well at all, you’re telling advertisers they’re wasting their money.

Facebook and Google have tied themselves into pretzels over this. The idea that these mechanisms of persuasion could be politically useful, and especially that they might be more useful to one side than the other, violates cherished beliefs about the “apolitical” tech industry.

[…]

One problem is that any system trying to maximize engagement will try to push users towards the fringes. You can prove this to yourself by opening YouTube in an incognito browser (so that you start with a blank slate), and clicking recommended links on any video with political content. When I tried this experiment last night, within five clicks I went from a news item about demonstrators clashing in Berkeley to a conspiracy site claiming Trump was planning WWIII with North Korea, and another exposing FEMA’s plans for genocide.

This pull to the fringes doesn’t happen if you click on a cute animal story. In that case, you just get more cute animals (an experiment I also recommend trying). But the algorithms have learned that users interested in politics respond more if they’re provoked more, so they provoke. Nobody programmed the behavior into the algorithm; it made a correct observation about human nature and acted on it.

[…]

But even though we’re likely to fail, all we can do is try. Good intentions are not going to make these structural problems go away. Talking about them is not going to fix them.

We have to do something.
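To make the engagement-maximization passage above concrete, here is a toy sketch (in Python, with invented items and click-through rates) of an epsilon-greedy recommender. Note that nothing in it mentions provocation or politics; it simply learns to show whatever users click on most:

```python
import random

# Toy engagement maximizer: an epsilon-greedy bandit that shows whichever
# item has earned the most clicks per impression. The click-through rates
# below are invented for illustration.
ITEMS = {
    "measured news report": 0.05,  # assumed probability of a click
    "heated partisan take": 0.12,
    "outrage conspiracy": 0.20,
}

shows = {item: 0 for item in ITEMS}
clicks = {item: 0 for item in ITEMS}

def recommend(epsilon=0.1):
    """Mostly exploit the best-performing item, occasionally explore."""
    if random.random() < epsilon or not any(shows.values()):
        return random.choice(list(ITEMS))
    return max(ITEMS, key=lambda i: clicks[i] / max(shows[i], 1))

random.seed(1)
for _ in range(10_000):
    item = recommend()
    shows[item] += 1
    if random.random() < ITEMS[item]:  # simulated user reaction
        clicks[item] += 1

for item, n in shows.items():
    print(f"{item}: shown {n} times")
# The conspiracy item dominates the recommendations, purely because it
# was clicked most often. No one programmed the provocation in.
```

Scale that loop up to billions of impressions and a content pool with no floor, and you get the pull to the fringes described above.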

People's Names That Break Websites

The intersection of rushed (or careless) development and unintended consequences:

We’re doing a story about people who have names that websites and computers don’t seem to like - for example, we spoke to a guy named William Test and a woman named Katie Test, both of whom can’t seem to keep a hotel or airplane booking because the name “test” is flagged by internal systems.

We also spoke to a guy named Christopher Null who had the same problem, and a woman named Joan Fread, who can’t use PayPal because her last name is the same as a PHP function.

I’m curious if there’s anyone in the dev community who is thinking about this, and how to deal with it. Is it even considered a problem? Is the population that this affects so small that people don’t even think about it?
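It is absolutely a problem, and it usually arises the same way: internal sentinel strings leak into comparisons against real user data. A minimal sketch of the bug class (the function names and reserved list here are invented for illustration, not taken from any real booking system):

```python
# Hypothetical booking validation illustrating the bug class: user input
# is compared against magic strings the system also uses internally.
RESERVED = {"test", "null", "undefined"}

def is_fake_booking_buggy(last_name: str) -> bool:
    # Wrong: "Test" and "Null" are real surnames, not sentinel values.
    return last_name.strip().lower() in RESERVED

def is_fake_booking(last_name: str | None) -> bool:
    # Better: represent "no value" in the type system, never in the data,
    # and treat any non-empty string as a legitimate name.
    return last_name is None or last_name.strip() == ""

print(is_fake_booking_buggy("Null"))  # True  -- Christopher Null gets rejected
print(is_fake_booking("Null"))        # False -- a name is just a name
```

The fix is the same everywhere: never encode “missing” or “internal” as a value a real person could legitimately type.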