
dragonpoly
A couple of years ago, during a lackadaisical time in my life when I was only running two businesses and wasn't on tour to support a book I'd just coauthored, I sat down with my sweetie Zaiah and we watched all the episodes of the Joss Whedon television show Dollhouse over the course of a week or so.

The premise of the show, which isn't really important to what I want to write about, concerns a technology that allows personalities, identities, and skills to be constructed in a computer (much as one might write a computer program) and then implanted in a person's brain, such that that person takes on that identity and personality and has those skills. The television show followed a company that rented out custom-designed people, constructed in a bespoke fashion for clients' jobs and then erased once those jobs were over. Need a master assassin, a perfect lover, a simulation of your dead wife, a jewel thief? No problem! Rent that exact person by the hour!



Anyway, in Episode 10 of the short-lived series, one of the characters objects to the idea of using personality transplants as a kind of immortality, telling another character, "morality doesn't exist without the fear of death." I cringed when I heard it.

And that's the bit I want to talk about.




The New York Times has an article about research which purports to show that when reminded of their own mortality, people tend to cling to their ethical and moral values tightly. The article hypothesizes,

Researchers see in these findings implications that go far beyond the psychology of moralistic judgments. They propose a sweeping theory that gives the fear of death a central and often unsuspected role in psychological life. The theory holds, for instance, that a culture's very concept of reality, its model of "the good life," and its moral codes are all intended to protect people from the terror of death.


This seems plausible to me. Religious value systems--indeed, religions in general--provide a powerful defense against the fear of death. I remember when I first came nose to nose with the idea of my own mortality back when I was 12 or 13, how the knowledge that one day I would die filled me with stark terror, and how comforting religion was in protecting me from it. Now that I no longer have religious belief, the knowledge of the Void is a regular part of my psychological landscape. Literally not a day goes by that I am not aware of my own mortality.

But the idea that fear of death reminds people of their values, and causes them to cling more tightly to them, doesn't show that there are no values without the fear of death.

As near as I can understand it, the statement "morality doesn't exist without the fear of death" appears to be saying that without fear of punishment, we can't be moral. (I'm inferring here that the fear of death is actually the fear of some kind of divine judgment post-death, which seems plausible given the full context of the statement: "That's the beginning of the end. Life everlasting. It's...it's the ultimate quest. Christianity, most religion, morality....doesn't exist, without the fear of death.") This is a popular idea among some theists, but does it hold water?

The notion that there is no morality without the fear of death seems to me to rest on two foundational premises:

1. Morality is extrinsic, not intrinsic. It is given to us by an outside authority; without that outside authority, no human-derived idea about morality, no human-conceived set of values is any better than any other.

2. We behave in accordance with moral strictures because we fear being punished if we do not.


Premise 1 is a very common one. "There is no morality without God" is a notion those of us who aren't religious hear with tiresome regularity. There are a number of significant problems with this idea (whose God? Which set of moral values? What if those moral values--"thou shalt not suffer a witch to live," say, or "if a man also lie with mankind, as he lieth with a woman, both of them have committed an abomination: they shall surely be put to death," or "whatsoever hath no fins nor scales in the waters, that shall be an abomination unto you"--cause you to behave reprehensibly to other people? What is the purpose of morality, if not to tell us how to be more excellent to one another rather than less?), but its chief difficulty lies in what it says about the nature of humankind.

It says that we are not capable of moral action, or even of recognizing moral values, on our own; we must be given morals from an outside authority, which becomes the definition of morality. I have spoken to self-identified Christians who say that without religion, nothing would prevent them from committing rape and murder at will; it is only the strictures of their religion that prevent them from doing so. I have spoken to self-identified Christians who say if they believed the Bible commanded them to murder children or shoot people from a clock tower, they would do it. (There is, unsurprisingly, considerable overlap between these two sets of self-identified Christians.) If it takes the edict of an outside force to tell you why it's wrong to steal or rape or kill, I am unlikely to trust you with my silverware, much less my life. Folks who say either of these things seldom get invited back to my house.

The notion that the fear of death is a necessary component of moral behavior because without punishment, we will not be moral is, if anything, even more problematic. If the only thing making you behave morally is fear of punishment, I submit you're not actually a moral person at all, no matter which rules of moral behavior you follow.

Morality properly flows from empathy, from compassion, from the recognition that other people are just as real as you are and just as worthy of dignity and respect. Reducing morality to a list of edicts we'll be punished if we disobey means there is no need for empathy, compassion, charity, or respect--we aren't moral people by exercising these traits, we're moral by following the list of rules. If the list of rules tells us to stone gays, then by God, that's what we'll do.

An argument I hear all the time (and in these kinds of conversations, I do mean all the time) is "well, if there's no God and no fear of Hell, who's to say the Nazis were wrong in what they did?" It boggles me every single time I hear it. I cannot rightly apprehend the thought process that would lead to such a statement, in no small part because it seems to betray an inability to let empathy and compassion serve as one's moral signposts.

What it all comes down to, when you get to brass tacks, is internal moral values vs. external moral values. When we can empathize with other human beings, even those who are different from us, and allow ourselves to fully appreciate their essential humanness, treating them ethically becomes easy. When we do not--and often, religious prescriptions on behavior explicitly tell us not to--it becomes impossible. An intrinsic set of moral values is predicated on that foundation of reciprocal recognition of one another's humanness, worth, and dignity.

Those who say without God or without fear of punishment there can be no morality seem blind to that reciprocal recognition of one another's humanness, worth, and dignity. And those folks scare me.

Some thoughts on the Seven Virtues

A while ago, over dinner with my partner Eve, her mom, and some friends of theirs, we started talking about the Seven Deadly Sins.

I am not terribly good at them; in fact, it took a while to remember what they were (greed, envy, sloth, lust, gluttony, pride, and wrath). Of the seven, the only one at which I have any skill is lust; in fact, I've put so many character points into lust I'm still forced to make default rolls for all six others.

I got to thinking about the Seven Deadly Sins, and wondering if there were Seven Virtues to go along with them. Apparently, there are; a few hundred years after the list of vices caught hold, someone decided there should be a similar list of virtues, and made such a list by negating the vices. The virtue Chastity was proposed as the opposite of Lust, for example, and the virtue Humility as the opposite of Pride. (Some of the others don't really make a lot of sense; proposing Kindness as Envy's opposite ignores the fact that people can simultaneously feel envious and behave kindly. But no matter.)

The negative version of the Seven Deadly Sins didn't really seem to catch on, so Catholic doctrine has embraced a different set of virtues: prudence, justice, temperance, courage, faith, hope, and charity.

I look at that list, and find it a bit...underwhelming. We've given Christianity two thousand years to come up with a cardinal list of virtues in human thought and deed, and that's the best it can do? It's almost as disappointing as the list of Ten Commandments, which forbids working on Saturday and being disrespectful to your parents but not, say, slavery or rape, as I talked about here.

Now, don't get me wrong, some of the things on the list of virtues I heartily endorse. Courage, that's a good one. Justice is another good one, though as often as not people have an unfortunate tendency to perpetrate the most horrifying atrocities in its name. (Handy hint for the confused: "justice" and "vengeance" aren't the same thing, and in fact aren't on speaking terms with one another.) Temperance in opposing injustice is not a virtue, hope is that thing at the bottom of Pandora's jar of evils, and faith...well, the Catholic catechism says that faith means "we believe in God and believe all that he has said and revealed to us," and furthermore that we believe all "that Holy Church proposes for our belief." In this sense, to quote Mark Twain, faith is believing what you know ain't so. (On the subject of hope, though, it should be mentioned that Hesiod's epic poem about Pandora says of women, "From her is the race of women and female kind: of her is the deadly race and tribe of women who live amongst mortal men to their great trouble, no helpmates in hateful poverty, but only in wealth." So it is without an exuberance of cynicism that I might suggest there is perhaps a synchronicity between the ancient Greek and modern Catholic thinkings on the subject of the fairer sex.)

In any event, it seems that, once again, the traditional institutions charged with the prescription of human morality have proven insufficient to the task. In my musings on the Ten Commandments, I proposed a set of ten commandments that might, all things considered, prove a better moral guideline than the ten we already have, and it is with the same spirit I'd like to propose a revised set of Seven Cardinal Virtues.

Courage. I quite like this one. In fact, to quote Maya Angelou, "Courage is the most important of all the virtues, because without courage you can’t practice any other virtue consistently. You can practice any virtue erratically, but nothing consistently without courage." So this one stays; in fact, I think it moves to the head of the list.

Prudence is a bit of an odd duck. Most simply, it means something like "foresight," or perhaps "right thinking." The Catholic Education Site defines prudence as "the intellectual virtue which rightly directs particular human acts, through rectitude of the appetite, toward a good end." But that seems a bit tail-recursive to me; a virtue is that which directs you to do good, and doing good means having these virtues...yes, yes, that's fine and all, but what is good? You can't define a thing in terms of a quality a person has and then define that quality in terms of that thing!

So perhaps it might be better to speak of Beneficence, which is the principle of making choices that, first, do no harm to others, and, second, seek to prevent harm to others. The principle of harm reduction seems a better foundation for an ethical framework than the principle of "right action" without any context for the "right" bit. (I'm aware that a great deal of theology attempts to provide context for the virtue of prudence, but I remain unconvinced; I would find it more prudent, for example, to deny belonging to a religion than to be hanged for it, simply on the logic that it is difficult for dead Utopians to build Utopia...)

Justice is another virtue I like, though in implementation it can be a bit tricky. Justice, when it's reduced to the notion of an eye for an eye, becomes mere retribution. If it is to be a virtue, it must be the sort of justice that seeks the elevation of all humankind, rather than a list of rules about which forms of retaliation are endorsed against whom; formal systems of justice, being invented and maintained by corruptible humans, all too easily become corrupt. A system which does not protect the weakest and most vulnerable people is not a just system.

Temperance needs to go. Moderation in the pursuit of virtue is no virtue, and passion in the pursuit of things which improve the lot of people everywhere is no vice. And this virtue too easily becomes a blanket prohibition; the Women's Christian Temperance Union, who were anything but temperate in their zeal to eradicate alcohol, failed to acknowledge that drinking is not necessarily, in and of itself, intemperate; and their intemperance helped create organized crime in the US, a scourge we have still been unable to eradicate.

In its place, I would propose Compassion, and particularly, the variety of compassion that allows us to see the struggles of others, and to treat others with kindness wherever and whenever possible, to the greatest extent we are able. It is a virtue arising from the difficult realization that other people are actually real, and so deserve to be treated the way we would have them treat us.

Faith and Hope seem, to be frank, like poor virtues to me, at least as they are defined by Catholicism. (There is a broader definition of "faith," used by mainline Protestant denominations, that has less to do with accepting the inerrancy of the Church in receiving divine revelation and more to do with an assurance that, even in the face of the unknown, it's possible to believe that one will be okay; this kind of faith, I can get behind.) Indeed, an excess of faith of the dogmatic variety leads to all sorts of nasty problems, as folks who have faith their god wants them to bomb a busy subway might illustrate. And hope (in the Catholic sense of "desiring the kingdom of heaven and eternal life as our happiness, placing our trust in Christ's promises and relying not on our own strength, but on the help of the grace of the Holy Spirit") can lead to inaction in the face of real-world obstacles--if we believe that once we get past the grave, nothing can go wrong, we might be disinclined to pursue happiness or oppose injustice in the here and now.

I would suggest that better virtues might be Integrity and Empathy. Integrity as a virtue means acting in accordance with one's own stated moral precepts; but there's more to it than that. As a virtue, integrity also means acknowledging when others are right; being intellectually rigorous, and mindful of the traps of confirmation bias and anti-intellectualism; and being clear about what we know and what we hope. (When, for example, we state something we want to be true but don't know is true as a fact, we are not behaving with integrity.)

Empathy in this context means, first and foremost, not treating other people as things. It is related to compassion, in that it recognizes the essential humanity of others. As a moral principle, it means acknowledging the agency and rights of others, as we would have them acknowledge our agency and our rights.

Charity is, I think, a consequence arising from the applications of justice, compassion, and empathy, rather than a foundational virtue itself. In its place, I propose Sovereignty, the assumption that the autonomy and self-determination of others is worthy of respect, and must not be infringed insofar as is possible without compromising one's own self.

So bottom line, that gives us the following list of Seven Virtues: Courage, Beneficence, Justice, Compassion, Integrity, Empathy, and Sovereignty. I like this draft better than the one put forth by Catholicism. But coming up with a consistent, coherent framework of moral behavior is hard! What say you, O Interwebs?
"Well, in our country," said Alice, still panting a little, "you'd generally get to somewhere else — if you ran very fast for a long time, as we’ve been doing."
"A slow sort of country!" said the Queen. "Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run twice as fast as that!"
"I'd rather not try, please!" said Alice. "I'm quite content to stay here — only I am so hot and thirsty!"

-- Lewis Carroll, Through the Looking Glass

"When we just saw that man, I think it was [biologist P.Z. Myers], talking about how great scientists were, I was thinking to myself the last time any of my relatives saw scientists telling them what to do they were telling them to go to the showers to get gassed … that was horrifying beyond words, and that’s where science – in my opinion, this is just an opinion – that’s where science leads you."
-- Ben Stein, Trinity Broadcasting System interview, 2008


What do spam emails, AIDS denial, conspiracy theories, fear of GM foods, rejection of global warming, antivaccination crusades, and the public school district of Tucson, Arizona banning Shakespeare's The Tempest have in common?


A typical spam message in my inbox


The answer is anti-intellectualism. Anti-intellectualism--the rejection of scientific study and reason as tools for understanding the physical world, and the derision of people who are perceived as educated or "intellectual"--has deep roots in the soil of American civil discourse. John Cotton, theological leader of the Puritans of Massachusetts Bay, wrote in 1642, "the more learned and witty you bee, the more fit to act for Satan will you bee"--a sentiment many Evangelical Protestants identify with today. (Tammy Faye Bakker, wife of the disgraced former televangelist Jim Bakker, once remarked "it's possible to educate yourself right out of a personal relationship with Jesus Christ.")

It seems weird that such a virulent streak of anti-intellectualism should be present in the world's only remaining superpower, a position the US achieved largely on the merits of its technological and scientific innovation. Our economic, military, and political position in the world was secured almost entirely by our ability to discover, invent, and innovate...and yet there is a broad swath of American society that despises the intellectualism that makes that innovation possible in the first place.

Liberals in the US tend to deride conservatives as ignorant, anti-intellectual hillbillies. It's arguably easy to see why; the conservative political party in the US is actively, openly hostile to science and intellectualism. The Republican Party of Texas has written into the party platform a passage opposing the teaching of critical thinking in public school. Liberals scoff at conservatives who deny the science of climate change, teach that the world and everything in it is six thousand years old, and seek to ban the teaching of evolutionary science...all while claiming that GMO foods are dangerous and vaccines cause autism. Anti-intellectualism is an equal-opportunity phenomenon that cuts across the entire American political landscape. The differences in liberal and conservative rejection of science are merely matters of detail.

So why is it such a pervasive part of American cultural dialog? There are a lot of reasons. Anti-intellectualism is built into the foundation of US culture; the Puritans, whose influence casts a very long shadow over the whole of US society, were famously suspicious of any sort of intellectual pursuit. They came to the New World seeking religious freedom, by which they meant the freedom to execute anyone they didn't like, a practice their European contemporaries were insufficiently appreciative of; and the list of people they didn't like included any unfortunate person suspected of learning or knowledge. That suspicion lingers; we've never succeeded in purging ourselves of it entirely.

Those of a cynical nature like to suggest that anti-intellectualism is politically convenient. It's easier, so the narrative goes, to control a poorly educated populace, especially when that populace lacks even basic reasoning skills. If you've ever watched an evening of Fox News, it's a difficult argument to rebut. One does not need to be all that cynical to suggest a party plank rejecting critical thinking skills is a very convenient thing for a political party that enshrines young-earth Creationism, for instance.

But the historical narrative and the argument from political convenience seem insufficient to explain the breathtaking aggressiveness of anti-intellectualism in the US today, particularly among political progressives and liberals, who are often smugly self-congratulatory about how successfully they have escaped the clutches of tradition and dogma.

I think there's another factor, and that's the Red Queen problem.

In evolutionary biology, the Red Queen hypothesis suggests that organisms in competition with each other must continue to evolve and adapt merely to maintain the status quo. When cheetahs prey on gazelles, the fastest cheetahs will be most successful at catching prey; the fastest gazelles will be most successful at escaping cheetahs. So natural selection favors faster and faster gazelles and cheetahs as each adapts to the other. Parasites evolve and become more efficient at parasitizing their hosts, which develop more efficient defenses against the parasites. I would like to propose that the same hypothesis can help explain anti-intellectualism, at least in part.

As we head into the twenty-first century, the sum total of human knowledge is increasing exponentially. When I was in college in the late 1980s and early 1990s, my neurobiology professors taught me things--adult human brains don't grow new neurons, we're all born with all the brain cells we'll ever have--that we now know not to be true. And that means anyone who wants to be educated needs to keep learning new things all the time, just to stay in one place.

Those who reject science like to say that science is flawed because it changes all the time. How can we trust science, they say, when it keeps changing? In fact, what's flawed is such critics' estimation of how complicated the natural world is, and how much there is to know about it. Science keeps changing because we keep shining lights into previously dark areas of understanding.

But it's really hard to keep up. A person who wants to stay abreast of the state of the art of human understanding has to run faster and faster and faster merely to stay in one place. It's fatiguing, not just because it means constantly learning new things, but because it means constantly examining things you believed you already knew, re-assessing how new discoveries fit into your mental framework of how the world works.

For those without the time, inclination, tools, and habits to keep up with the state of human understanding, scientists look like priests. We must merely accept what they say, because we don't have the tools to fact-check them. Their pronouncements seem arbitrary, and worse, inconsistent; why did they say we never grow new brain cells yesterday, only to say the exact opposite today? If two different scientists say two different things, who do you trust?

If you don't race to keep up with the Red Queen, that's what it is--trust. You must simply trust what someone else says, because trying to wrap your head around what's going on is so goddamn fatiguing. And it's easier to trust people who say the same thing every time than to trust people who say something different today than what they said yesterday. (Or who, worse yet, tell you "I don't know" when you ask a question. "I don't know" is a deeply unsatisfying answer. If a Bronze Age tribesman asks two people "What is the sun?" and one of them gives a fanciful story about a fire-god and a dragon, and the other says "I don't know," the answer about the fire-god and the dragon is far more satisfying, even in complete absence of any evidence that fire-gods or dragons actually exist at all.)

Science is comfortable with the notion that models and frameworks change, and science is comfortable with "I don't know" as an answer. Human beings, rather less so. We don't want to run and run to keep up with the Red Queen. We also don't want to hear "I don't know" as an answer.

So science, then, becomes a kind of trust game, not that much different from the priesthood. We accept the pronouncements of priests and scientists alike when they tell us things we want to hear, and reject them when they don't. Political conservatives don't want to hear that our industrial activity is changing the global climate; liberals don't want to hear that there's nothing wrong with GMO food. Both sides of the political aisle find common ground in one place: running after the Red Queen is just plain too much work.
Among the left-leaning progressives that make up a substantial part of Portland's general population, there is a profound fear of GMO food that's becoming an identity belief--a belief that's held not because it's supported by evidence, but because it helps define membership in a group.

It's frustrating to talk to the anti-GMO crowd, in part because these conversations always involve goalposts whipping around so fast I'm afraid someone will poke my eye out. It generally starts with "I don't like GMOs because food safety," but when you start talking about how evidence to support that position is as thin on the ground as snowmen in the Philippines, the goalposts quickly move to "I don't like GMOs because Monsanto." Monsanto, if you listen to Portland hippies, is a gigantic, evil mega-corporation that controls the government, buys off all the world's scientists, intimidates farmers, and rules supreme over the media.

So I got to thinking, How big is Monsanto? Because it takes quite a lot of money to do the things Monsanto is accused of doing--when they can be done at all, that is.

And I started Googling. The neat thing about publicly-traded corporations is they have to post all their financials. A quick Google search will reveal just how big any public company really is.

I expected to learn that Monsanto was big. I was surprised.

As big companies go, Monsanto is a runt. In terms of gross revenue, it is almost exactly the same size as Whole Foods and Starbucks. It's smaller than The Gap, way smaller than 7-11 and UPS, a tiny fraction of the size of Home Depot, and minuscule compared to Verizon and ExxonMobil. That's it, way down on the left on this graph I made:



You can't shake a stick in the anti-GMO crowd without hearing a dozen conspiracy theories, almost all of them centered around Monsanto. Lefties like to sneer at conservative conspiracy theories about global warming, but when it comes to GMOs, they haven't met a conspiracy theory they don't love to embrace.

Most of these conspiracy theories talk about how Monsanto, that enormous, hulking brute of a megacorporation, has somehow bought off all the world's scientists, creating a conspiracy to tell us GMOs are safe when they're not.

Now, hippie lefties usually aren't scientists. In fact, anyone who's ever been part of academia can tell you a conspiracy of scientists saying something that isn't true is only a little bit more likely than a conspiracy of cats saying tuna is evil. As an essay on Slate put it,

Think of your meanest high school mean girl at her most gleefully, underminingly vicious. Now give her a doctorate in your discipline, and a modicum of power over your future. That’s peer review.


Speaking of conspiracies of scientists, let's get back to conservatives and their "climate change" scientific conspiracy. Look at the left-hand side of the chart up there, then look at the right-hand side. Look at the left side again. Now look at the right side again.

ExxonMobil takes in more than 26 times as much money as Monsanto, and has a higher net profit margin, too. Combined, the country's top 5 oil companies have a gross revenue exceeding $1.3 trillion, more than 87 times Monsanto's revenue, and yet...

...they still can't get the world's scientists to say global warming isn't a thing.

If the oil companies can't buy a conspiracy of scientists, how can a pipsqueak like Monsanto manage it?

I'm planning a more in-depth blog post about GMOs and anti-GMO activism later. But the "Monsanto buys off scientists" conspiracy nuttiness needed addressing on its own, because it's so ridiculous.

It's easy to root for the underdog. One of the cheapest, most manipulative ways to make an argument is to refer to something you don't like as "Big" (Big Oil, Big Pharma, Big SCAM as I like to think of the Supplemental, Complementary, and Alternative Medicine community). We are culturally wired to love the underdog; a great deal of left identity is wrapped up in being the ones who root for the common man against Big Whatever.

So the ideology of Monsanto as the Big Enemy has emotional resonance. We like to think of the small guy standing up against Big Monsanto, when the reality is Whole Foods, so beloved of hippies everywhere, is a big corporation of basically the same size as the oft-hated Monsanto, and both of them are tiny in the shadow of far larger companies like 7-11 and Target.

Now if you'll excuse me, I'm going to head down to Starbucks for a pumpkin spice latte and listen to the hippies rant about how much they hate big corporations like Monsanto.
A nontrivial problem with machine learning is organization of new information and recollection of appropriate information in a given circumstance. Simple storing of information (cats are furry, balls bounce, water is wet) is relatively straightforward, and one common approach to doing this is simply to define the individual pieces of knowledge as objects which contain things (water, cats, balls) and descriptors (water is wet, water flows, water is necessary for life; cats are furry, cats meow, cats are egocentric little psychopaths).

This presents a problem with information storage and retrieval. Some information systems that have a specific function, such as expert systems that diagnose illness or identify animals, solve this problem by representing the information hierarchically as a tree, with the individual units of information at the tree's branches and a series of questions representing paths through the tree. For instance, if an expert system identifies an animal, it might start with the question "is this animal a mammal?" A "yes" starts down one side of the tree, and a "no" starts down the other. At each node in the tree, another question identifies which branch to take—"Is the animal four-legged?" "Does the animal eat meat?" "Does the animal have hooves?" Each path through the tree is a series of questions that leads ultimately to a single leaf.
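The question tree described above can be sketched in a few lines of Python. This is a minimal illustration, not any particular historical system; the questions, animals, and structure are invented for the example.

```python
class Node:
    """An internal node asks a yes/no question; a leaf names an animal."""
    def __init__(self, question=None, yes=None, no=None, leaf=None):
        self.question, self.yes, self.no, self.leaf = question, yes, no, leaf

def identify(node, answers):
    """Walk the tree, consulting answers (question -> bool) at each node."""
    while node.leaf is None:
        node = node.yes if answers[node.question] else node.no
    return node.leaf

# A tiny three-question tree.
tree = Node(
    question="Is the animal a mammal?",
    yes=Node(
        question="Does the animal eat meat?",
        yes=Node(leaf="cheetah"),
        no=Node(
            question="Does the animal have hooves?",
            yes=Node(leaf="gazelle"),
            no=Node(leaf="rabbit"),
        ),
    ),
    no=Node(leaf="penguin"),
)

print(identify(tree, {
    "Is the animal a mammal?": True,
    "Does the animal eat meat?": False,
    "Does the animal have hooves?": True,
}))  # prints "gazelle"
```

Each call to `identify` traces one path of questions from the root to a single leaf, which is exactly the "series of questions leading to one answer" behavior described above.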

This is one of the earliest approaches to expert systems, and it's quite successful for representing hierarchical knowledge and for performing certain tasks like identifying animals. Some of these expert systems are superior to humans at the same tasks. But the domain of cognitive tasks that can be represented by this variety of expert system is limited. Organic brains do not really seem to organize knowledge this way.

Instead, we can think of the organization of information in an organic brain as a series of individual facts that are context dependent. In this view, a "context" represents a particular domain of knowledge—how to build a model, say, or change a diaper. There may be thousands, tens of thousands, or millions of contexts a person can move within, and a particular piece of information might belong to many contexts.

What is a context?

A context might be thought of as a set of pieces of information organized into a domain in which those pieces of information are relevant to each other. Contexts may be procedural (the set of pieces of information organized into necessary steps for baking a loaf of bread), taxonomic (a set of related pieces of information arranged into a hierarchy, such as knowledge of the various birds of North America), hierarchical (the set of information necessary for diagnosing an illness), or simply related to one another experientially (the set of information we associate with "visiting grandmother at the beach").

Contexts overlap and have fuzzy boundaries. In organic brains, even hierarchical or procedural contexts will have extensive overlap with experiential contexts—the context of "how to bake bread" will overlap with the smell of baking bread, our memories of the time we learned to bake bread, and so on. It's probably very, very rare in an organic brain that any particular piece of information belongs to only one context.



In a machine, we might represent this by creating a structure of contexts CX (1,2,3,4,5,…n) where each piece of information is tagged with the contexts it belongs to. For instance, "water" might appear in many contexts: a context called "boating," a context called "drinking," a context called "wet," a context called "transparent," a context called "things that can kill me," a context called "going to the beach," and a context called "diving." In each of these contexts, "water" may be assigned different attributes, whose relevance is assigned different weights based on the context. "Water might cause me to drown" has a low relevance in the context of "drinking" or "making bread," and a high relevance in the context of "swimming."
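One way to picture this in code is a table of facts whose attributes carry a different relevance weight in each context. The contexts, attributes, and weights below are invented for illustration; a real system would learn them rather than hard-code them:

```python
# Sketch: each fact carries attributes whose relevance is
# weighted differently depending on the active context.
knowledge = {
    "water": {
        "drinking":     {"is wet": 0.3, "necessary for life": 0.9, "can drown me": 0.05},
        "swimming":     {"is wet": 0.8, "necessary for life": 0.1, "can drown me": 0.9},
        "making bread": {"is wet": 0.6, "necessary for life": 0.2, "can drown me": 0.01},
    },
}

def relevant_attributes(fact, context, threshold=0.5):
    """Return the attributes of a fact that matter in this context."""
    weights = knowledge[fact][context]
    return [attr for attr, w in weights.items() if w >= threshold]

print(relevant_attributes("water", "swimming"))  # ['is wet', 'can drown me']
print(relevant_attributes("water", "drinking"))  # ['necessary for life']
```

The same fact, "water," surfaces entirely different properties depending on which context is asking, which is the behavior a flat object-plus-descriptors store can't give you.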

In a contextually based information storage system, new knowledge is gained by taking new information and assigning it correctly to relevant contexts, or creating new contexts. Contexts themselves may be arranged as expert systems or not, depending on the nature of the context. A human doctor diagnosing illness might have, for instance, a diagnostic context that behaves in some ways like a diagnostic expert system; she might ask a patient questions about his symptoms, and arrive at her conclusion by following the answers to a single possible diagnosis. This process might be informed by past contexts, though; if she has just seen a dozen patients with norovirus, her knowledge of those past diagnoses, her understanding of how contagious norovirus is, and her observation of the similarity of this new patient's symptoms to those previous patients' symptoms might allow her to bypass a large part of the decision tree. Indeed, it is possible that a great deal of what we call "intuition" is actually the ability to make observations and use heuristics that allow us to bypass parts of an expert system tree and arrive at a leaf very quickly.

But not all types of cognitive tasks can be represented as traditional expert systems. Tasks that require things like creativity, for example, might not be well represented by highly static decision trees.

When we navigate the world around us, we're called on to perform large numbers of cognitive tasks seamlessly and to be able to switch between them effortlessly. A large part of this process might be thought of as context switching. A context represents a domain of knowledge and information—how to drive a car or prepare a meal—and organic brains show a remarkable flexibility in changing contexts. Even in the course of a conversation over a dinner table, we might change contexts dozens of times.

A flexible machine learning system needs to be able to switch contexts easily as well, and deal with context changes resiliently. Consider a dinner conversation that moves from art history to the destruction of Pompeii to a vacation that involved climbing mountains in Hawaii to a grandparent who lived on the beach. Each of these represents a different context, but the changes between contexts aren't arbitrary. If we follow the normal course of conversations, there are usually trains of thought that lead from one subject to the next; and these trains of thought might be represented as information stored in multiple contexts. Art history and Pompeii are two contexts that share specific pieces of information (famous paintings) in common. Pompeii and Hawaii are contexts that share volcanoes in common. Understanding the organization of individual pieces of information into different contexts is vital to understanding the shifts in an ordinary human conversation; where we lack information—for example, if we don't know that Pompeii was destroyed by a volcano—the conversation appears arbitrary and unconnected.
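The dinner-table example can be made concrete: represent each topic as a context, and each natural transition as the information the two contexts share. The context contents below are my own invented stand-ins:

```python
# Sketch: a conversation's topic shifts follow shared items
# between contexts; where no item is shared, the shift feels arbitrary.
contexts = {
    "art history": {"famous paintings", "frescoes", "museums"},
    "Pompeii":     {"frescoes", "volcanoes", "Roman ruins"},
    "Hawaii":      {"volcanoes", "beaches", "hiking"},
    "grandmother": {"beaches", "family", "old photographs"},
}

def bridge(a, b):
    """The shared information that makes a shift from context a to b feel natural."""
    return contexts[a] & contexts[b]

conversation = ["art history", "Pompeii", "Hawaii", "grandmother"]
for a, b in zip(conversation, conversation[1:]):
    print(f"{a} -> {b} via {bridge(a, b) or 'nothing: the shift feels arbitrary'}")
```

Delete "volcanoes" from the Pompeii context, and the jump from Pompeii to Hawaii becomes exactly the unconnected non sequitur described above: the listener who lacks that shared fact can't follow the transition.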

There is a danger in a system being too prone to context shifts; it meanders endlessly, unable to stay on a particular cognitive task. A system that changes contexts only with difficulty, on the other hand, appears rigid, even stubborn. We might represent focus, then, in terms of how strongly (or not) we cling to whatever context we're in. Dustin Hoffman's character in Rain Man possessed a cognitive system that clung very tightly to the context he was in!

Other properties of organic brains and human knowledge might also be represented in terms of information organized into contexts. Creativity is the ability to find connections between pieces of information that normally exist in different contexts, and to find commonalities of contextual overlap between them. Perception is the ability to assign new information to relevant contexts easily.

Representing contexts in a machine learning system is a nontrivial challenge. It is difficult, to begin with, to determine how many contexts might exist. As a machine entity gains new information and learns to perform new cognitive tasks, the number of contexts in which it can operate might increase indefinitely, and the system must be able to assign old information to new contexts as it encounters them. If we think of each new task we might want the machine learning system to be able to perform as a context, we need to devise mechanisms by which old information can be assigned to these new contexts.

Organic brains, of course, don't represent information the way computers do. Organic brains represent information as neural traces—specific activation pathways among collections of neurons.

These pathways become biased toward activation when we are in situations similar to those where they were first formed, or similar to situations in which they have been previously activated. For example, when we talk about Pompeii, if we're aware that it was destroyed by a volcano, other pathways pertaining to our experiences with or understanding of volcanoes become biased toward activation—and so, for example, our vacation climbing the volcanoes in Hawaii comes to mind. When others share these same pieces of information, their pathways similarly become biased toward activation, and so they can follow the transition from talking about Pompeii to talking about Hawaii.

This method of encoding and recalling information makes organic brains very good at tasks like pattern recognition and associating new information with old information. In the process of recalling memories or performing tasks, we also rewrite those memories, so the process of assigning old information to new contexts is transparent and seamless. (A downside of this approach is information reliability; the more often we access a particular memory, the more often we rewrite it, so paradoxically, the memories we recall most often tend to be the least reliable.)

Machine learning systems need a system for tagging individual units of information with contexts. This becomes complex from an implementation perspective when we recall that simply storing a bit of information with descriptors (such as water is wet, water is necessary for life, and so on) is not sufficient; each of those descriptors has a value that changes depending on context. Representing contexts as a simple array CX (1,2,3,4,…n) and assigning individual facts to contexts (water belongs to contexts 2, 17, 43, 156, 287, and 344) is not sufficient. The properties associated with water will have different weights—different relevancies—depending on the context.

Machine learning systems also need a mechanism for recognizing contexts (it would not do for a general purpose machine learning system to respond to a fire alarm by beginning to bake bread) and for following changes in context without becoming confused. Additionally, contexts themselves are hierarchical; if a person is driving a car, that cognitive task will tend to override other cognitive tasks, like preparing notes for a lecture. Attempting to switch contexts in the middle of driving can be problematic. Some contexts, therefore, are more "sticky" than others, more resistant to switching out of.

A context-based machine learning system, then, must be able to recognize context and prioritize contexts. Context recognition is itself a nontrivial problem, based on recognition of input the system is provided with, assignment of that input to contexts, and seeking the most relevant context (which may in most situations be the context with greatest overlap with all the relevant input). Assigning some cognitive tasks, such as diagnosing an illness, to a context is easy; assigning other tasks, such as natural language recognition, processing, and generation in a conversation, to a context is more difficult to do. (We can view engaging in natural conversation as one context, with the topics of the conversation belonging to sub-contexts. This is a different approach than that taken by many machine conversational approaches, such as Markov chains, which can be viewed as memoryless state machines. Each state, which may correspond for example to a word being generated in a sentence, can be represented by S(n), and the transition from S(n) to S(n+1) is completely independent of S(n-1); previous parts of the conversation are not relevant to future parts. This creates limitations, as human conversations do not progress this way; previous parts of a conversation may influence future parts.)

Context seems to be an important part of flexibility in cognitive tasks, and thinking of information in terms not just of object/descriptor or decision trees but also in terms of context may be an important part of the next generation of machine learning systems.

Sex tech: Update on the dildo you can feel

dragonpoly
A few months back, I wrote a blog post about a brain hack that might create a dildo the wearer can actually feel. The idea came to me in the shower. I'd been thinking about the brain's plasticity, and about how it might be possible to trick the brain into internalizing a somatosensory perception that a strap-on dildo is a real part of the body, by using sensors along the dildo connected to tiny electrical stimulation pads worn inside the vagina.

It's an interesting idea, I think. So I blogged about it. I didn't expect the response I got.

I've received a bunch of emails about it, and had a bunch of people tell me "OMG this is the most amazing thing ever! Make it happen!"

So I have, between work on getting the book More Than Two out the door and preparing for the book tour, been chugging away at this idea. Here's an update:

1. I've filed for a patent on the idea. I've received confirmation that the application has been accepted and the process has started.

2. I've talked to an electronics prototyping firm about developing a prototype. Based on feedback from the prototyping firm, I've modified the initial design extensively. The first version I'd thought about was based on the same principle as the Feeldoe; the redesign uses a separate dildo and harness, with an external computer to receive signals from the sensors in the dildo and transmit them to the vaginal insert. The new design looks, and works, something like this. (Apologies for the horrible animated GIF; art isn't really my specialty.)



3. The prototyping firm has outlined a multi-step process to develop a workable, manufacturable device. The process would go something like:

Phase 1: Research and proof of concept. This would include researching designs for the sensors on the dildo and the electrodes on the vaginal insert. It would also include a crude proof-of-concept device that would essentially be nothing more than the vaginal insert connected to a computer programmed to simulate the rest of the device.

The intent at this stage is to see if the idea is even workable. What kind of electrodes could be used? Would they produce the right kind of stimulation? How densely arranged could they be? How small could they be? Would the brain actually be able to interpret sensations produced by the electrodes in a way that would trick the wearer into thinking the dildo was a part of the body? If so, how long would that somatosensory rewiring take?

Phase 2: Assuming the initial research showed the idea to be viable, the next step would be to figure out a sensor design, fabricate a microcontroller to connect the sensors to the electrodes, and experiment with sensor design and fabrication. Would a single sensor provide adequate range of tactile feedback, or would it be necessary to multiplex several sensors (some designed to respond to light touch, others to a heavier touch) together in order to provide a good dynamic range? What mechanical properties would the sensors need to have? How would they be built? (We talked about several potential designs, including piezoelectric, resistive polymer, and fluid-filled devices.) How would the sensors be placed along the dildo?

Phase 3: Once a working prototype is developed, the next step is detail design and engineering. This is essentially the process of taking a working prototype and producing a manufacturable product from it. This includes everything from engineering drawings for fabrication to choosing materials to developing the final version of the software.


So. That's where the project is right now.

The up side? I think this thing could actually work. The down side? It's going to be expensive.

My partner Eve and I have already started investigating ways to make it happen. If we incorporate in Canada, we may be eligible for Canadian financial incentives designed to spur tech research and development.

The fabricating company seems to think the first phase would most likely cost somewhere around $5,000-10,000. Depending on what's learned during that phase, the development of a fully functional prototype might run anywhere from $50,000 to $100,000, a lot of which hinges on design of the sensors, which will likely be the most challenging bit of engineering. They didn't even want to speculate about the cost of going from working prototype to manufacturable product; too many unknowns.

We're discussing the possibility of doing crowdfunding to get from phase 2 to 3, and possibly from phase 1 to 2. It's not likely that crowdfunding is appropriate for the first phase, because we won't have anything tangible to offer backers. Indeed, it's possible that we might spend the initial money and discover the idea isn't workable.

It might be possible to just put the first phase on a credit card or something, though it'd hurt. Neither of us is really in a position to afford it, especially given the money we've spent establishing the publishing house and supporting the book.

Ideally, I'd like to find people who think this idea is worth investigating who can afford to invest in the first phase. If you know anybody who might be interested in this project, let me know!

Also, one of the people at the prototyping company suggested the name "Hapdick." I'm still not sure how I feel about that, but I do have to admit it's clever.

Some thoughts on happiness

dragonpoly
I am a happy person. By some accident of genetics or privileged brain chemistry, my default state is incredibly happy, and it always has been. Seriously, if you could bottle up the way I feel as my normal background state and distribute it among the world, there'd never be war or strife again.

That doesn't mean I'm euphoric 100% of the time, of course. But just as things like depression can be a matter of brain chemistry, so, I think, can general background happiness.

And yet...and yet...

Whenever I see, or hear, conversations about happiness, it seems that many people are taught to profoundly fear and distrust the state of being happy. Contemporary American society teaches us a lot of incredibly destructive myths about happiness, some of which I see over and over again. For example:

Myth #1: If you are happy, you don't accomplish anything.

I am happy...and I have just released my first book. I own two businesses. I am getting set to start a tour across Canada and the US with my coauthor, Eve Rickert, where we will be lecturing and giving workshops on relationships, polyamory, and ethics. I have traveled Eastern and Western Europe. My life is rich and filled with accomplishment. In fact, I have the kind of life some folks pay money to see on the Internet.

Myth #2: Generally happy people don't experience the full range of human emotions.

I hear this one all the time. "I don't want to be happy because it would dull me to pain and suffering, and I couldn't experience the full range of life." "If I were happy all the time, I would be blind to the sadness in the world." "I wouldn't want to be happy, because if I were happy, I couldn't experience pain and suffering."

Emotions are complex, and it is possible to feel more than one at the same time. I am a happy person, but that doesn't mean there are never times when I feel sad, fearful, angry, or other things. It just means those emotions don't stick. (One of my girlfriends says things like anger, frustration, and sadness bounce off me; when I feel them, they are transitory, and don't weigh me down.) My baseline of happiness makes me emotionally resilient.

Myth #3: Happiness and euphoria are the same thing.

There are pills that make people feel euphoric, or intoxicated, but being euphoric isn't the same thing as being happy. Happiness is more a generalized feeling of positive, pleasant satisfaction than it is a rush or a thrill; it's the feeling of being able to live one's life on one's terms and feel that you're flourishing, that every day brings new awe and wonder, that the universe you live in is an amazing place to be and the more you experience of it the more amazing it becomes.

Yet all the time, I hear folks say things like "If I were happy, I'd never get things done." "If I were happy, I would just want to sit on the couch all day." (No, dude, that's not happiness, it's a heroin fix you're thinking of.)

Myth #4: Happiness is the enemy of productivity.

This isn't really quite the same thing as myth #1--it's possible to be productive without accomplishment. (Doing the dishes is productive, but doesn't directly lead to finishing a book.) But they are related, in that it's hard to be accomplished without being productive.

For me, creating things, writing, co-creating with partners, making things that didn't exist until I worked my will on the world and caused them to exist--these are expressions of my happiness. The more I do them, the happier I am...and the happier I am, the more I do them. In fact, depression and unhappiness are much more corrosive to productivity than happiness is...ask anyone who suffers from depression how difficult it is to do anything when you're in its grip!

Myth #5: Happiness is meaningless to a person who is always happy. We can't appreciate happiness without sadness, life without death, joy without sorrow, light without darkness, Albert Einstein without Deepak Chopra, Mozart without Justin Bieber, word processors without cuneiform, blah blah blah.

I realize this notion that you can't enjoy X without its dark and sinister anti-X evil twin is deeply embedded in Western cultural consciousness, but it still makes me scratch my head every single time I hear it. Folks actually appear to believe this is true, and I just don't get it. I appreciate the fact that I can see, yet I've never been blind.

In fact, happiness is exactly what lets me appreciate the awe-inspiring beauty and wonder of the natural universe. You don't have to be sad in order to enjoy and appreciate happiness; being happy is, of and by itself, a happy experience! That's kind of what it says on the tin.

I know this sounds like a radical notion, but I would like to propose that happiness is not something to fear, it's something to embrace, for the simple reason that it makes our lives better. We have inherited our distrust of happiness from our Puritan forefathers, I suspect, but you know what? Fuck them. They said we should sacrifice our happiness in our worldly lives so that we would be happy in the afterlife, with nary a thought to the contradiction inherent in the notion of pursuing happiness by denying happiness.

The idea that we should fear happiness is, I would argue, one of the most significant causes of the many evils bedeviling humankind. And I cannot rightly understand why this fear has such great currency.


Robot sex machines? Yes please!

dragonpoly
Of all the deadly sins, my favorite by far is Lust. In fact, I'm actually a bit rubbish at all the other ones, so great is my fondness for Lust. I am also a huge fan of mixing sex and tech. So when I saw a crowdfunding campaign for a "robotic blowjob machine," as you can probably imagine, I had to get on board with it. Women generally seem to benefit the most from the intersection of sex and technology, so the notion of a sex robot for men had more than passing appeal to me.

The campaign was a success, and I recently received in the mail one "Autoblow 2," the robotic sex machine whose marketing campaign advertises "unlimited blowjobs on demand." (Seriously.)

It's an interesting-looking piece of kit:



Not quite as stylish, perhaps, as the new wave of vibrators from companies like Lelo and JimmyJane, but hey, I'll take it.

This thing has two parts: the base, which contains a motor that moves a pair of spring bands covered with little rollers up and down, and a sleeve that inserts into the base. The sleeves come in several sizes, and are made of this really bizarre soft silicone material that flops about and feels kinda squishy. (Materials science is an avenue of human endeavor that has, until now, rarely been applied to the pursuit of the ultimate orgasm, more's the pity. For hundreds of years, leather, stone, wood, and ivory represented the state of the art for Things To Make You Come, so I'm pleased to see improvements in this area.)

Still, when the time came to put my willie in this thing, I will admit I was a little apprehensive. I looked dubiously at it for a bit, until my sweetie Zaiah said "oh, give me that" and took it away from me. She squirted some lube into the "insert willie here" end and stuck it over my junk.

No robotic blowjob machine would be complete without a speed control, and sure enough, there's a little knob on the bottom that makes it go. She turned it on and it whirred to life, stroking mechanically away.

Now, I've had some amazing blowjobs from some exceptionally talented partners, so honesty compels me to admit this gadget does not really feel like a blowjob. It's a fair approximation, I suppose, considering the formidable engineering challenge that a real blowjob simulator would face, but it isn't quite up to a true blowjob experience. A double-blind face-off between this thing and genuine oral sex would, I suspect, be rather lopsided.

However, even if it doesn't quite capture the true essence of the oral arts, this robotic sex machine does feel good. Really, really good. I was surprised, in fact. I cranked it up to maximum speed and, yeah, it did exactly what it says on the tin.

I am normally multiply orgasmic; it's not uncommon for me to get off half a dozen times or more during sex. But this thing...well, when this thing got me off, it was intense and it got me off for good. I was done when I finally stopped screaming.

At which point I discovered a design flaw. The little control knob on the bottom? It's little. As in, really difficult to find in a hurry when you're gasping and panting and your body's still shaking. I tried to yank it off my junk, but my partner grabbed me by the wrist. "No," she said, and held it there until I found the control.

Which, naturally, brought up a really interesting idea, because I'm a kinky motherfucker and there's no innocent pleasure I can't find a way to corrupt with wicked thoughts.

A lot of women quite like the notion of forced orgasms, and it's pretty easy to do, really--there are entire Web sites dedicated to the high art of the forced orgasm, but when you get down to brass tacks all it really takes is a bit of rope and a Hitachi magic wand. It's more difficult to find ways to do the same thing to a person with an outie rather than an innie...

...at least until now.

This thing feels good on its own, no question about it, but a bit of rope, perhaps a blindfold, a gag if you don't want to wake the neighbors, and this gadget can be so much more. Tie your guy down, set this thing going, and wait. You probably won't have to wait too long. If my brief experience is any indication, the results should be pretty...um, dramatic.

You can find this robot blowjob machine here. (Full disclosure: I liked it enough I signed up as an affiliate.) Get one for yourself, or for that guy in your life you want to tie down and make scream. Give the gift of pleasure! You'll be making the world a happier place and encouraging new high-tech sex toys for men, both of which I think are laudable goals.

Visiting Chrome

dragonpoly
"What do you want to do tonight?" I asked Eve.

"Dunno. What do you want to do?"

"I'm up for anything," I said, in a rare moment of underestimating the true meaning of 'anything.'

"Well," she said, pointing to her laptop screen, "this looks interesting."

And so it was we left this plane of reality and stepped into William Gibson's version of 2014, as seen from the mid-1980s.

It wasn't actually our intention to travel to a dystopian alternate reality, you understand. We were looking for an evening's casual entertainment, and didn't feel like watching Guardians of the Galaxy. So she did a Google search, and found a thing called Richmond Night Market.

If Canada had truth-in-advertising laws, the name "Richmond Night Market" might raise eyebrows at whatever regulatory bodies (tribal meetings of Kurgan warriors? Men in polar bear skins pounding on each other with long decorative spears?) may exist in the bitter frozen wastelands of the North.

"Richmond Night Market." It's what you might call a flea market with unorthodox hours, or perhaps a weekly gathering of fishmongers selling wares straight off the boat to the finest sushi restaurants in downtown Vancouver. "Richmond Night Market." The name conjures wholesome images of open-air commerce, the sort of place where one might go to buy a new china bowl for serving fruit punch in.

One would not expect, from the name, a gigantic rubber duck. Nor a dystopian world of stimrunners and outlawed bioactives, shivs and black docs.

We got there after sunset. The line already wrapped around the fenced perimeter, snaking beneath massive concrete pilings supporting the whining elevated trains. Loudspeakers encouraged us to buy books of passes, which would get us in at a discounted rate. Eve climbed a bit of broken concrete and leaned over the perimeter fence for a picture.



We eventually made our way in, via a quick bit of social engineering to persuade the people in line around us to pool our resources for a passbook ("skip the line!" the cute Asian woman hawking them said. "Save fifty cents!"). Passbook in hand, our ragtag group went to the special entrance, and stepped through the perimeter into...into...

If Ridley Scott decided to do an adaptation of Neuromancer, this is where you'd go to find a Netrunner. If Neal Stephenson were to reimagine Snow Crash as a Canadian made-for-TV series, you might find Raven here, scowling and skulking among the stalls. If I ever run a postcyberpunk RPG, this place will be there, somewhere, a glittering Easter egg of neon and LEDs waiting for the players to find.

On the surface, the Richmond Night Market is an open-air collection of vendors selling wares. But such a simple explanation fails to do justice to it, in the way that describing the Great Pyramid of Cheops as a "big pile of rocks" or the combined works of William Shakespeare as "a bunch of words about people being awful to each other" fails to convey the pure Platonic essence of these things.

Richmond Night Market is an open-air collection of vendors selling wares. But such a place it is, and such wares.

Upon entering the Richmond Night Market through the special, skip-the-line-with-your-magic-passbook gate, one is confronted with a riot of bright lights and busy signs, most in Chinese and English, some in Chinese only. Crowds of people flow like oil through the interstitial spaces between the stalls, while vendors work busily to separate them from their money.

We passed hastily-erected tents offering e-cigarettes ("Vape! Vape! Better than smoking!"), small radio-controlled drones with cameras on them, and long black swords ("buy one, get one free!"). Next to the stall selling smartphone accessories was another selling DNA typing ("put your name on the registry! Find an organ donor!"). A dazzling display of laser lights led to a bored-looking woman with a collection of drop knives and canisters of pepper spray. Across from her, another booth offered stem cell tissue typing ("must be between 18 and 35," the stern-looking woman said). Around the corner, we found small paper buckets of battered squid tentacles, deep-fried Mars bars, and computer services ("Unlock your phone! Run any software! Any software you like!"). Eve accepted a sample of exotic tea in a tiny paper cup that leaked. "They don't seem terribly interested in selling tea," I said. "Probably contraband biologicals in the back."

At one booth, a dour-looking man about the size of Philadelphia stood with his arms folded. A small sign was propped against the table, showing two exuberantly muscled men standing back to back, one holding a sword. "What do--?" I started to ask. He growled. "I'll just keep moving, then," I said.

Signs tied to an enormous rubber ducky with bits of nylon rope promised a Magical Candyland. We wandered around, blinking, until we found it: a low concrete wall with flaking paint, behind which a couple of elderly women sold lollipops from a yellowing plastic bin. I didn't ask what the magic was; I'm still not entirely sure I want to know.

A momentary turbulence in the flow of people disgorged a friend of Eve's. "I found pens!" she said, before the crowd swallowed her again. "Hello Kitty!" Behind her, a man dressed as a panda sold airline tickets to mainland China. "Samsung TV!" said a guy to my right. "True 4K! Only $3,000!"

"Who the hell," I asked Eve, "comes here and drops three thousand bucks on an impulse buy?"

We wandered through the noise and mayhem, feeling a bit like the main character of Zero Theorem at the party. Everyone around us seemed to move with purpose, crowds of people here each with an agenda, and almost none of those agendas involving Hello Kitty pens. Eddies swirled in the crowd, looking random--one in front of the DNA testing tent, another at the place selling drones. "Vape! Vape! Run any software! Tissue typing!" A crowd gathered in front of the booth advertising "The secret knowledge of the Bible, what Jesus REALLY said!" and disappeared just as quickly.

Eventually, the flow of the crowd deposited us near where we'd come in. "So, um," she said, "are you ready to leave? Because this place--"

"Yes," I said. "Yes, I am."

We headed out empty-handed. I was too old for tissue typing, didn't have a spare three thousand dollars for a new TV, and wasn't sure I wanted to start trafficking in restricted biochemical agents just that evening.

Still, I will admit to some nostalgia for the days when we thought dystopia would mean netrunners and celebrities with Zeiss Ikon eyes, rather than the dreary same-old same-old of run-of-the-mill corporate malfeasance and Middle Eastern war we ended up with. We had, for a brief, shining moment, a taste of the more interesting ways society might have run off the rails, and that world seemed so much more fascinating than the dystopia we settled for.

1984: How George Orwell Got it Wrong

dragonpoly
When I was in high school, one of the many books on our required reading list in my AP English class was George Orwell's 1984. As a naive, inexperienced teenager, I was deeply affected by it, in much the same way many other naive, inexperienced teens are deeply affected by Atlas Shrugged. I wrote a glowing book report, which, if memory serves, got me an A+.



1984 was a crude attempt at dystopian fiction, partly because it was more a hysterical anti-Communist screed than a serious effort at literature. Indeed, had it not been written at exactly the point in history it was written, near the dawn of the Cold War and just prior to the rise of McCarthyist anti-communist hysteria, it probably would not have become nearly the cultural touchstone it is now.

From the vantage point of 2014, parts of it seem prescient, particularly the overwhelming government surveillance of every aspect of citizens' lives. 1984 describes a society in which everyone is watched, all the time; there's a minor plot hole (who's watching all these video feeds?), but it escaped my notice back then.

But something happened on the way to dystopia--something Orwell didn't predict. We tend to see surveillance as a tool of oppressive government; in a sense, we have all been trained to see it that way. But it is just as powerful a tool in the hands of the citizens, when they use it to watch the government.




As I write this, the town of Ferguson, Missouri has been wracked for over a week by protests over the killing of an unarmed black teenager at the hands of an aggressive and overzealous police officer. When the people of Ferguson protested, the police escalated, and escalated, and escalated, responding with tear gas, arrests, and curfews.

Being a middle-aged white dude gives me certain advantages. I don't smoke pot, but if I did and a police officer found me with a bag of weed in my pocket, the odds I'd ever go to prison are very, very small. Indeed, the odds I'd even be arrested are small. If I were to jaywalk in front of a police officer, or be seen by a police officer walking at night along a suburban sidewalk, the odds of a violent confrontation are vanishingly tiny. So it's impossible for me, or really for most white dudes, to appreciate or even understand what it's like to be black in the United States.

This is nothing new. The hand of government weighs most heavily on those who are least enfranchised, and it has always been so. All social structures, official and unofficial, slant toward the benefit of those on top, and in the United States, that means the male and pale.

And there's long been a strong connection between casual, systemic racism and the kind of anti-Commie agitprop that made Orwell famous.



It is ironic, though not unexpected, that the Invisible Empire of the Knights of the Ku Klux Klan is raising a "reward" for the police officer who "did his job against the negro criminal".

So far, so normal. This is as it has been since before the founding of this country. But now, something is different...and not in the way Orwell predicted. Surveillance changes things.




What Orwell didn't see, and couldn't have seen, is a time in which nearly every citizen carries a tiny movie camera everywhere. The rise of cell phones has made citizen surveillance nearly universal, with results that empower citizens against abuses of government, rather than the other way around.

Today, it's becoming difficult for police to stop, question, arrest, beat, or shoot someone without cell phone footage ending up on YouTube within hours. And that is, I think, as it should be. Over and over again, police have attempted to prevent people from recording them in public places...and over and over again, the courts have ruled that citizens have the right to record the police.

It's telling that in Ferguson, the protestors, who've been labeled "looters" and "thugs" by police, have been the ones who want video and journalism there...and it's been the police who are trying to keep video recording away. That neatly sums up everything you need to know about the politics of Ferguson, seems to me.

Cell phone technology puts the shoe on the other foot. And, unsurprisingly, when the institutions of authority--the ones who say "if you have nothing to hide, you have nothing to fear from surveillance"--find themselves on the receiving end rather than the recording end of surveillance, they become very uncomfortable. In the past, abuses of power were almost impossible to prosecute; they happened in dark places, away from the disinfecting eye of public scrutiny. But now, that's changing. Now, it's harder and harder to find those dark places where abuse thrives.

In fact, the ACLU has released a smartphone app called Police Tape, which you can start running as soon as you find yourself confronted by police. It silently (and invisibly) records everything that happens, and uploads the file to a remote server.

If those in power truly had nothing to hide, they would welcome surveillance. New measures are being proposed in many jurisdictions that would require police officers to wear cameras wherever they go. The video from these cameras could corroborate officers' accounts of their actions whenever misconduct was alleged, if--and this is the critical part--the officers tell the truth. When I hear people object to such cameras, then, the only conclusion I can draw is they don't want a record of their activities, and I wonder why.

William Gibson, in the dystopian book Neuromancer (published, as fate would have it, in 1984) proposed that the greatest threats to personal liberty come, not from a government, but from corporations that assume de facto control over government. His vision seems more like 2014 than 1984. He was less jaundiced than Orwell, though. In the short story Burning Chrome, Gibson wrote, "The street finds its own uses for things." The explosion of citizen surveillance proves how remarkably apt that sentiment is.

The famous first TV commercial for the Apple Macintosh includes the line "why 1984 won't be like 1984." The success of the iPhone and other camera-equipped smartphones shows how technology can turn the tables on authority.

The police commissioners and state governors and others in the halls of political power haven't quite figured out the implications yet. Technology moves fast, and the machinery of authority moves slowly. But the times, they are a-changin'. Orwell got it exactly wrong; it is the government, not the citizens, that has the most to fear from a surveillance society.

And that is a good thing.
