What If A New Technology Forces Everyone to be Honest?

 

“Magnetic interference with the brain can make it impossible to lie, and polygraphs and ‘truth serums’ will soon be obsolete, say Estonian researchers.

Inga Karton and Talis Bachmann worked with 16 volunteers who submitted to transcranial magnetic stimulation, which can stimulate some parts of the brain and not others.”

International Business Times, September 9, 2011

2024 presidential candidate Bob Glitch is preaching to the Republican Party faithful: “With God as my witness, we’re going to bring morality and family values back to America and we’re going to turn back the homosexual agenda…” Anonymous member Bob Dobbometer joins the cheering throng, raises his fist in the air, and points his magnetic truth ring right at the candidate. Glitch continues. “I have nothing against homosexuals. Jesus said we should love every…” (Glitch pauses, twitches slightly.) “Man, that dude in the muscle shirt is freakin’ hot. I’d like to meet him in a men’s room and…” The mic goes dead.

Of course, the early truth machine will not come in the form of a handy-dandy decoder ring, and anyway the assumption is that it will be used only on “criminals”… and by criminals we of course mean poor criminals without connections and armies of attorneys. But what might it be like, sometime during the coming years of radical technological evolution, if its use becomes generalized and it becomes really hard to tell a lie?

In a piece for H+, I asked whether enhancement seekers really want to know themselves.  This piece could be seen as asking a similar question: do we really want to know what other people are thinking… about us; about treasured beliefs; about anything?… and do we want them to know what we’re really thinking?

One group that would say yes is the practitioners of “radical honesty.” Focused mainly on total truth-telling in interpersonal relationships, its advocates promise to “transform your life.” I wonder.

 

“How’s my hair?” Jill asks, pointing the truth ring at Jack. “It looks terrible,” Jack replies. Jill is unhappy. She runs off to the beauty parlor, gets a new look, and returns. “How’s my hair?” she asks, and points the truth ring again. “I think that’s a trivial, dumb question,” Jack replies. “You should be thinking about the crisis in East Blogostan.” She moves the ring closer. “What do you care about East Blogostan?” she shouts. “You don’t do a fucking thing about East Blogostan!” Jack twitches slightly. He sinks deeper into his real thoughts. “You’re right. I don’t give a fuck about East Blogostan. I’m actually unhappy because my dick is only four inches.” He frowns and now he points his ring at Jill. “So am I!” she screams.

If Jack could have simply said, “Your hair looks beautiful,” Jack and Jill would have each gotten a pleasant little jolt of oxytocin which would have made them both feel good and decreased their stress… and, as you know, stress is a major cause of health problems. Instead, they bitched at each other for several months until they got a divorce. Soon thereafter, Jack died of a heart attack and Jill got kicked out of the health club when she told the owner, a large black man with a gigantic magnetic truth-decoder neck chain who had just asked her to dinner: “Black men like you scare me.”

On the other hand, Radical Honesty advocate Brad Blanton has run for Congress in Virginia, and that seems like the sort of place where my imagined Magnetic Truth Decoder Ring could be quite useful. Of course, politicians would resort to headgear to protect their brains from forced magnetic transparency; or, if that proved too obvious, they would likely opt for surgically implanted firewalls against forced truth-telling. Early adopters of this surgery would have a huge advantage: they would be able to spew political and personal homilies and everyone would assume they were true.

OK. I’m being playful here, but, as with the other column I referenced earlier in this piece, it poses a serious question. Is enhancement simply a matter of more?… more years, more muscle, more copies of the self, more brain power — or is it a matter of depth and complexity? Are we better humans because we have replaceable parts, or because we have been transformed in our thoughts and behaviors by technologies that are challenging and perhaps painful to utilize? I’m not suggesting that the answer is obvious, but I do think it’s a worthy area of discourse.

Semi-autonomous software agents: A personal perspective.

by: The Doctor [412/724/301/703/415]

So, after going on for a good while about software agents, you’re probably wondering why I have such an interest in them. I started experimenting with my own software agents in the fall of 1996, when I first started undergrad. When I went away to college I finally had an actual network connection for the first time in my life (where I grew up, the only access I had was through dialup) and I wanted to abuse it. Not in the way that the rest of my classmates were, but to do things I actually had an interest in. So, the first thing I did was set up my own e-mail server with Qmail and subscribed to a bunch of mailing lists, because that’s where all of the action was at the time. I also rapidly developed a list of websites that I checked once or twice a day because they were often updated with articles that I found interesting.

It was through those communication fora that I discovered the research papers on software agents that I mentioned in earlier posts in this series. I soon discovered that I’d bitten off more than I could chew, especially when some mailing lists went realtime (which is when everybody started replying to one another more or less the second they received a message) and I had to check my e-mail every hour or so to keep from running out of disk space.

Rather than do the smart thing (unsubscribing from a few ‘lists), I decided to work smarter and not harder and see if I could use some of the programming languages I was playing with at the time to help. I’ve found over the years that it’s one thing to study a programming language academically, but to really learn one you need a toy project to learn the ins and outs. So, I wrote some software that would crawl my inbox, scan messages for certain keywords or phrases, move them into a folder so I’d see them immediately, and leave the rest for later. I wrote some shell scripts, and when those weren’t enough I wrote a few Perl scripts (say what you want about Perl, but it was designed first and foremost for efficiently chewing on data). Later, when those weren’t enough, I turned to C to implement some of the tasks I needed Leandra to carry out.
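That inbox-triage idea is simple enough to sketch. This isn’t my original shell/Perl/C code — it’s a minimal modern Python illustration of the same keyword-sorting behavior, with invented keywords and folder names:

```python
# Toy sketch of the inbox-triage agent described above: scan a message
# for priority keywords and decide which folder it belongs in.
# The keyword set and folder names are invented for illustration.

PRIORITY_KEYWORDS = {"security", "exploit", "agent", "cfp"}

def triage(subject: str, body: str,
           keywords: set[str] = PRIORITY_KEYWORDS) -> str:
    """Return the folder a message should be filed into."""
    text = f"{subject} {body}".lower()
    if any(word in text for word in keywords):
        return "read-now"   # surface immediately
    return "read-later"     # leave the rest for later
```

A real version would walk an mbox or Maildir and actually move the messages, but the decision logic is the whole trick.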

Because Netscape Navigator was highly unreliable on my system for reasons I was never quite clear on (it used to throw bus errors all over the place), I wasn’t able to consistently keep up with my favorite websites at the time. While the idea of update feeds dates back to 1995, they didn’t actually exist until the publication of the RSS v0.9 specification in 1999, and Atom didn’t exist until 2003, so I couldn’t just point a feed reader at them. So I wrote a bunch of scripts that used lynx -dump http://www.example.com/ > ~/websites/www.example.com/`date '+%Y%m%d-%H:%M:%S'`.txt and diff to detect changes and tell me what sites to look at when I got back from class.
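For the curious, here is roughly what that lynx-and-diff trick looks like as a Python sketch (not the original scripts): keep a digest of each page’s last-seen text and report which pages changed since the previous run.

```python
# Sketch of snapshot-based change detection, the same idea as the
# lynx -dump / diff pipeline above. URLs and text are illustrative.
import hashlib

def digest(text: str) -> str:
    """Hash a page's text so snapshots stay small."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def changed_sites(snapshots: dict[str, str],
                  fetched: dict[str, str]) -> list[str]:
    """Compare freshly fetched page text against stored digests.

    snapshots maps URL -> last-seen digest; fetched maps URL -> new text.
    Returns the URLs whose content changed (or was never seen).
    """
    return [url for url, text in fetched.items()
            if snapshots.get(url) != digest(text)]
```

Dumping pages with lynx gave plain text, which made this kind of comparison much less noisy than diffing raw HTML.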

That was one of the prettier sequences of commands I had put together, too. This kept going on for quite a few years, and the legion of software agents I had running on my home machine grew out of control. As opportunities presented themselves, I upgraded Leandra as best I could, from an 80486 to an 80586 to a P-III, with corresponding increases in RAM and disk space. As one does.

Around 2005 I discovered the novel Accelerando by Charles Stross and decided to call this system of scripts and daemons my exocortex, after the network of software agents that the protagonist of the first story arc had much of his mind running on throughout the story.

In early 2014 I got tired of maintaining all of the C code (which was, to be frank, terrible; those were my first C projects), shell scripts (lynx and wget+awk+sed+grep+cut+uniq+wc+…), and Perl scripts, keeping them going as I upgraded my hardware and software and staying on top of site redesign after site redesign… so I started rewriting Exocortex as an object-oriented framework in Python, in part as another toy project and in part because I genuinely enjoy writing Python. I worked on this project for a couple of months and put my code on GitHub because I figured that eventually someone might find it helpful. When I had a first cut more or less stable I showed it to a friend at work, who immediately said “Hey, that works just like Huginn!”

I took one look at Huginn, realized that it did everything I wanted plus more by at least three orders of magnitude, and scrapped my codebase in favor of a Huginn install on one of my servers.

Porting my existing legion of software agents over to Huginn took about four hours… it used to take me between two and twelve hours to get a new agent up and running due to the amount of new code I had to write, even if I used an existing agent as a template. Just looking at the effort/payoff tradeoff, there really wasn’t any competition. So, since that time I’ve been a dedicated Huginn user and hacker, and it’s been more or less seamlessly integrated into my day-to-day life as an extension of myself. I find it strange, even somewhat alarming, when I don’t hear from any of them (which usually means that something locked up, which still happens from time to time), but I’ve been working with the core development team to make it more stable. I find it’s a lot more stable than my old system was, simply because it’s an integrated framework and not a constantly mutating patchwork of code in several languages sharing a back-end.

Additionally, my original exocortex network consisted of many very complex agents: one was designed to monitor Slashdot, another GitHub, another a particular IRC network. Huginn offers a large number of agents, each of which carries out a specific task (like providing an interface to an IMAP account or scraping a website). By customizing the configurations of instances of each agent type and wiring them together, you can build an agent network which can carry out very complex tasks that would otherwise require significant amounts of programming time.
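Huginn itself is a Ruby application whose agents are configured through a web interface with JSON options, so the following is only a toy Python sketch of the wiring idea: small single-purpose agents that emit events, with each agent’s output feeding the next one’s input. The agent names and event fields here are invented for illustration.

```python
# Toy illustration of Huginn-style agent wiring: each agent does one
# thing, and events flow from agent to agent down the chain.

class Agent:
    """Base class: an agent forwards the events it emits to receivers."""
    def __init__(self):
        self.receivers = []

    def emit(self, event: dict):
        for receiver in self.receivers:
            receiver.receive(event)

class KeywordFilterAgent(Agent):
    """Pass along only events whose text mentions a keyword."""
    def __init__(self, keyword: str):
        super().__init__()
        self.keyword = keyword

    def receive(self, event: dict):
        if self.keyword in event.get("text", ""):
            self.emit(event)

class CollectorAgent(Agent):
    """Stand-in for a notifier: just records what reaches it."""
    def __init__(self):
        super().__init__()
        self.seen = []

    def receive(self, event: dict):
        self.seen.append(event)

# Wire a two-agent chain and push a couple of events through it.
source = KeywordFilterAgent("huginn")
sink = CollectorAgent()
source.receivers.append(sink)
source.receive({"text": "new huginn release"})
source.receive({"text": "unrelated chatter"})
```

In real Huginn the chain might be a website-scraping agent feeding a content filter feeding an e-mail agent; the point is that the complexity lives in the wiring, not in any single agent.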

Semi-autonomous agents: What are they, exactly?

by: The Doctor [412/724/301/703/415]

This post is intended to be the first in a series of long-form articles (how many, I don’t yet know) on the topic of semi-autonomous software agents, a technology that I’ve been using fairly heavily for just shy of twenty years in my everyday life. My goals are to explain what they are, go over the history of agents as a technology, discuss how I started working with them between 1996 e.v. and 2000 e.v., and explain a little of what I do with them in my everyday life. I will also, near the end of the series, discuss some of the software systems and devices I use in the nebula of software agents that comprises what I now call my Exocortex (which is also the name of the project), make available some of the software agents which help to expand my spheres of influence in everyday life, and talk a little bit about how it’s changed me as a person and what it means to my identity.

So, what are semi-autonomous agents?

One working definition is that they are utility software that acts on behalf of a user or other piece of software to carry out useful tasks, farming out busywork that one would have to do oneself to free up time and energy for more interesting things. A simple example of this might be the pop-up toaster notification in an e-mail client alerting you that you have a new message from someone; if you don’t know what I mean, play around with this page a little bit and it’ll demonstrate what a toaster notification is. Another possible working definition is that agents are software which observes a user-defined environment for changes, which are then reported to a user or message queuing system. An example of this functionality might be Blogtrottr, which you plug the RSS feeds of one or more blogs into, and whenever a new post goes up you get an e-mail containing the article. Software agents may also be said to be utility software that observes a domain of the world and reports interesting things back to its user. A hypothetical software agent may scan the activity on one or more social networks for keywords which a statistically unusual number of users are posting, and send alerts in response. I’ll go out on a limb a bit here and give a more fanciful example of what software agents can be compared to: the six robots from the Infocom game Suspended.

In the game, you the player are unable to act on your own because your body is locked in a cryogenic suspension tank, but the six robots (Auda, Iris, Poet, Sensa, Waldo, and Whiz) carry out the orders given to them, subject to their inherent limitations, and are smart enough to figure out how to interpret those orders (Waldo, for example, doesn’t need to be told exactly how to pick up a microsurgical arm; he just knows how to do it).

So, now that we have some working definitions of software agents, what are some of the characteristics that make them different from other kinds of software? For starters, agents run autonomously after they’re started up. In some ways you can compare them to daemons running on a UNIX system or Windows services, but instead of carrying out system-level tasks (like sending and receiving e-mail) they carry out user-level tasks. Agents may act in a reactive fashion in response to something they encounter (like an e-mail from a particular person) or in a proactive fashion (on a schedule, when certain thresholds are reached, or when they see something that fits a set of programmed parameters). Software agents may be adaptive to new operational conditions if they are designed that way. There are software agents which use statistical analysis to fine-tune their operational parameters, sometimes in conjunction with feedback from their user, perhaps by turning down keywords or flagging certain things as false positives or false negatives. Highly sophisticated software agent systems may incorporate machine learning techniques to operate more effectively for their users over time, such as artificial neural networks, perceptrons, and Bayesian reasoning networks. Software agent networks may under some circumstances be considered implementations of machine learning systems, because they can exhibit the functional architectures and behaviors of machine learning mechanisms. I wouldn’t make this characterization of every semi-autonomous agent system out there, though.
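To make the “turning down keywords” idea concrete, here is a deliberately toy Python sketch (not any production agent system): a filter that weights its keywords and halves a keyword’s weight whenever the user flags a match as a false positive. The keywords and threshold are invented for illustration.

```python
# Toy adaptive keyword filter: user feedback "turns down" keywords
# that trigger false positives, as described above.

class AdaptiveFilter:
    def __init__(self, keywords: list[str], threshold: float = 1.0):
        # Every keyword starts with full weight.
        self.weights = {k: 1.0 for k in keywords}
        self.threshold = threshold

    def score(self, text: str) -> float:
        """Sum the weights of all keywords present in the text."""
        lowered = text.lower()
        return sum(w for k, w in self.weights.items() if k in lowered)

    def interesting(self, text: str) -> bool:
        return self.score(text) >= self.threshold

    def flag_false_positive(self, text: str) -> None:
        """User feedback: halve the weight of every keyword that matched."""
        lowered = text.lower()
        for k in self.weights:
            if k in lowered:
                self.weights[k] *= 0.5
```

A statistical agent would do this with real probabilities rather than a crude halving rule, but the feedback loop is the same shape.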

Exocortices: A Definition of a Technology

By The Doctor

A common theme of science fiction in the transhumanist vein, and less commonly in applied (read: practical) transhumanist circles, is the concept of having an exocortex either installed within oneself or interfaced in some way with one’s brain to augment one’s intelligence.  To paint a picture with a fairly broad brush, an exocortex was a system postulated by J.C.R. Licklider in the research paper Man-Computer Symbiosis which would implement a new lobe of the human brain situated outside of the organism (though some components of it might be internal).  An exocortex would be a symbiotic device that would provide additional cognitive capacity or new capabilities that the organism previously did not possess, such as:

  • Identifying and executing cognitively intensive tasks (such as searching for and mining data for a project) on behalf of the organic brain, in effect freeing up CPU time for the wetware.
  • Adding additional density to existing neuronal networks to more rapidly and efficiently process information.  Thinking harder as well as faster.
  • Providing databases of experiential knowledge (synthetic memories) for the being to “remember” and act upon.  Skillsofts, basically.
  • Adding additional “execution threads” to one’s thinking processes.  Cognitive multitasking.
  • Modifying the parameters of one’s consciousness, for example, modulating emotions to suppress anxiety and/or stimulate interest, stimulating a hyperfocus state to enhance concentration, or artificially inducing zen states of consciousness.
  • Expanding short-term memory beyond baseline parameters.  For example, mechanisms that translate short-term memory into long-term memory significantly more efficiently.
  • Adding I/O interfaces to the organic brain to facilitate connection to external networks, processing devices, and other tools.

What I consider early implementations of such a cognitive prosthetic exist now and can be constructed using off-the-shelf hardware and open source software.  One might carefully state that, since technologies build on top of one another as they advance in sophistication, these early implementations may pave the way for future sci-fi implementations.  While a certain amount of systems administration and engineering know-how is required at this time to construct a personal exocortex, system automation for at-scale deployments can be used to set up and maintain a significant amount of exocortex infrastructure.  Personal interface devices – smartphones, tablets, smart watches, and other wearable devices – are highly useful I/O devices for exocortices, and probably will be until such time as direct brain interfaces are widely available and affordable.  There are also several business models inherent in exocortex technology, but it should be stressed that potential compromises of privacy, issues of trust, and legal matters (particularly in the United States and European Union) are also inherent.  This part of the problem space is insufficiently explored, and thus expert assistance is required.

Here are some of the tools that I used to build my own exocortex:

Ultimately, computers are required to implement such a prosthesis.  Lots of them.  I maintain multiple virtual machines at a number of hosting providers around the world, all running various parts of the infrastructure of my exocortex.  I also maintain multiple physical servers to carry out tasks which I don’t trust to hardware that I don’t personally control.  Running my own servers also means that I can build storage arrays large enough for my needs without spending more money than I have available at any one time.  For example, fifty terabytes of disk space on a SAN in a data center might cost hundreds of dollars per month, while the one-time, up-front cost of a RAID-5 array with at least one hot-spare drive for resiliency was roughly the same amount.  Additionally, I can verify and validate that the storage arrays are suitably encrypted and sufficiently monitored.

Back end databases are required to store much of the information my exocortex collects and processes.  They not only hold data for all of the applications that I use (and are used by my software), they serve as the memory fields for the various software agents I use.  The exact databases I use are largely irrelevant to this article because they all do the same thing, just with slightly different network protocols and dialects of SQL.  The databases are relational databases right now because the software I use requires them, but I have schemes for document and graph databases for use with future applications as necessary.  I also make use of several file storage and retrieval mechanisms for data that parts of me collect: Files of many kinds, copies of web pages, notes, annotations, and local copies of data primarily stored with other services.  I realize that it sounds somewhat stereotypical for a being such as myself to hoard information, but as often as not I find myself needing to refer to a copy of a whitepaper (such as Licklider’s, referenced earlier in this article) and having one at hand with a single search.  Future projects involve maintaining local mirrors of certain kinds of data for preferential use due to privacy issues and risks of censorship or even wholesale data destruction due to legislative fiat or political pressure.

Arguably, implementing tasks is the most difficult part.  A nontrivial amount of programming is required, as well as in-depth knowledge of interfacing with public services, authentication, security features… thankfully there are now software frameworks that abstract much of this detail away.  After many years of building and rebuilding and fighting to maintain an army of software agents, I ported much of my infrastructure over to Andrew Cantino’s Huginn.  Huginn is a software framework which implements several dozen classes of semi-autonomous software agents, each of which is designed to implement one kind of task, like sending an HTTP request to a web server, filtering events by content, emitting events based upon some external occurrence, or sending events to other services.  The basic concept behind Huginn is the same as the UNIX philosophy: every functional component does one thing, and does it very well.  Events generated by one agent can be ingested by other agents for processing.  The end result is greater than the sum of the results achieved by each individual agent.  To be sure, I’ve written lots of additional software that plugs into Huginn because there are some things it’s not particularly good at, mainly very long-running tasks that require minimal user input on a random basis but result in a great deal of user output.

Storing large volumes of information requires the use of search mechanisms to find anything.  It can be a point of self-discipline to carefully name each file, sort them into directories, and tag them, but when you get right down to it that isn’t a sustainable effort.  My record is eleven years before giving up and letting search software do the work for me, and much more efficiently at that.  Primarily I use the YaCy search engine to index websites, documents, and archives on my servers because it works better than any search engine I’ve yet tried to write, has a reasonable API to interface with, and can be run as an isolated system (i.e., not participating in the global YaCy network, which is essential for preserving privacy).  When searching the public Net I use several personal instances of Searx, an open source meta-search engine that is highly configurable, hackable, and also presents a very reasonable API.
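As an aside, querying a personal Searx instance programmatically is straightforward because of that API. The sketch below is illustrative, not my actual tooling: it assumes a Searx instance whose settings have the JSON output format enabled, and the hostname is invented.

```python
# Hedged sketch of querying a personal Searx instance via its JSON API.
# Assumes the instance allows format=json (a settings option); the
# hostname used in the example is invented.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def search_url(base: str, query: str) -> str:
    """Build the search URL for a Searx instance."""
    return f"{base}/search?" + urlencode({"q": query, "format": "json"})

def search(base: str, query: str) -> list[dict]:
    """Run a query and return the list of result dicts."""
    with urlopen(search_url(base, query)) as resp:
        return json.load(resp).get("results", [])

# Example (requires a reachable instance):
# results = search("https://searx.example.net", "exocortex")
```

Running the instance privately, as mentioned above, keeps the queries out of anyone else’s logs.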

I make a point of periodically backing up as much of my exocortex as possible.  One of the first automated jobs that I set up on a new server or when installing new software is running automatic backups of the application as well as its datastores several times every day.  In addition, those backups are mirrored to multiple locations on a regular basis, and those multiple locations copy everything to cold storage once a day.  To conserve mass storage space I have a one month rotation period for those backups; copies older than 31 days are automatically deleted to reclaim space.
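The 31-day retention rule is simple to automate. This is a minimal sketch of that idea rather than my actual backup job, with the “what is expired” decision pulled out into a pure function; the directory layout is assumed to be one backup file per run.

```python
# Sketch of a 31-day backup retention policy: anything older than the
# window is deleted to reclaim space. Directory layout is assumed.
import os
import time

RETENTION_SECONDS = 31 * 24 * 60 * 60  # 31 days

def expired(backups: dict[str, float], now: float) -> list[str]:
    """Return paths whose mtime falls outside the retention window."""
    return [path for path, mtime in backups.items()
            if now - mtime > RETENTION_SECONDS]

def prune(directory: str) -> None:
    """Delete backup files in a directory that are older than 31 days."""
    now = time.time()
    mtimes = {}
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        mtimes[path] = os.path.getmtime(path)
    for path in expired(mtimes, now):
        os.remove(path)
```

Splitting the decision from the deletion makes the policy easy to test without touching real backups.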

In future articles, I’ll talk about how I built my own exocortex, what I do with it, and what benefits and drawbacks it has in my everyday life.  I will, of course, also discuss some of the security implications and a partial threat model for exocortices.  I will also write about what I call “soft transhumanism”: the personal training techniques that form the bedrock of my technologically implemented augmentations.

After a talk given at HOPE XI.

This is the first in a series of articles by The Doctor

The Doctor is a security practitioner working for a large Silicon Valley software-as-a-service company on next-generation technologies to bring reality and virtuality closer together.  His professional background includes security research, red team penetration testing, open source intelligence analysis, and wireless security.  When not reading hex dumps, auditing code, and peeking through sensors scattered across the globe he travels through time and space inside a funny blue box, contributes designs and code to a number of open-source hardware and software projects, and hacks on his exocortex, a software ecosystem and distributed cognitive prosthesis that augments his cognitive capabilities and assists him in day-to-day life by collecting information and farming out personally relevant tasks.  His primary point of presence is the blog Antarctica Starts Here.

 

Steal This Singularity Part 3: Bean Counters in Paradise

 

It was 2008 — maybe a week or two into my first experience working with “official” “organized” (as if) transhumanism as editor of h+ magazine. I was being driven down from Marin County to San Jose to listen to a talk by a scientist long associated with various transhumanoid obsessions, among them nanotechnology, encryption and cryonics. As we made the two-hour trip, the conversation drifted to notions of an evolved humanity; a different sort of species — maybe disembodied or maybe not — but decidedly post-Darwinian and in control of its instincts. I suggested that a gloomy aspect of these projections was that sex would likely disappear, since those desires and pleasures arose from more primitive aspects of the human psyche. My driver told me that he didn’t like sex because it was a distraction — a waste of brain power… not to mention sloppy.

I arrived at a Pizza Hut in an obscure part of the city. This gathering of about 15–20 transhumanoids would take place over cheap pizza in the back room that was reserved for the event. There was even a projector and a screen. The speaker — a pear-shaped fellow clad in dress pants held up by a belt pulled up above his stomach — started his rap. As I recall, he predicted major nanotechnology breakthroughs (real nanotechnology, i.e., molecular machines capable of making copies of themselves and making just about anything that nature allows, extremely cheaply) within our extended lifetimes, allowing us, among other things, to stay healthy indefinitely and finally migrate into space.

I recall him presenting a scenario in which all of us — or many of us — could own some pretty prime real estate; that is, chunks of this galaxy, at the very least, that we could populate with our very own advanced progeny (mind children, perhaps). I’m a bit sketchy on the details from so long ago, but it was a very far out vision of us united with advanced intelligences many times greater than our own, either never dying or arising from the frozen dead and, yes, each one getting this gigantic chunk of space real estate to populate. (That these unlivable areas can be made livable, either by changing them or ourselves or both with technology, is the assumption here.)

Once the speaker had laid out the amazing future as scientifically plausible, he confessed that he was mainly there to make a pitch. Alcor — the cryonics company that he was involved with — needed more customers. As he delineated how inexpensively one could buy an insurance policy to be frozen for an eventual return performance, he began to emphasize the importance of a person in cryonics not being considered legally dead… because that person could then accrue interest on a savings account or otherwise have his or her value increase in a stock market that was — by all nanocalculations — destined to explode into unthinkable numbers (a bigger boom).

For the bulk of his talk, the speaker dwelt on the importance of returning decades or maybe even a century or so hence to a handsome bank account. It was one of those “I can’t emphasize this enough” sorts of talks that parents used to give their 20-something kids about 401(k)s.

The Invention of Reality Hackers – A “Mutazine” (1988)

Something was starting to surface. Several small subcultures were drifting together, and some of these esoteric groupings included those who were creating the next economy. Clearly, we were positioned to become the magazine of a slow baking gestalt.

 

From Freaks In The Machine: MONDO 2000 in Late 20th Century Tech Culture

by R.U. Sirius

Some time in 1988, we made a rash decision. Despite High Frontiers’ relatively successful rise within the ’zine scene (where 18,000 in sales was solid), we decided to change the name of the magazine itself to Reality Hackers.

It was my idea.

We’d been hipped to cyberpunk SF, and I’d read Gibson’s Neuromancer and Sterling’s Mirrorshades collection. Sterling’s famous introduction to that book, describing what cyberpunk was doing in fiction, seemed to express precisely what a truly contemporary transmutational magazine should be about. Here are some parts of it:

“The term (cyberpunk) captures something crucial to the work of these writers, something crucial to the decade as a whole: a new kind of integration. The overlapping of worlds that were formerly separate: the realm of high tech, and the modern pop underground.

“This integration has become our decade’s crucial source of cultural energy. The work of the cyberpunks is paralleled throughout the Eighties pop culture: in rock video; in the hacker underground; in the jarring street tech of hip hop and scratch music; in the synthesizer rock of London and Tokyo. This phenomenon, this dynamic, has a global range; cyberpunk is its literary incarnation…

“An unholy alliance of the technical world and the world of organized dissent — the underground world of pop culture, visionary fluidity, and street-level anarchy…

“For the cyberpunks… technology is visceral. It is not the bottled genie of remote Big Science boffins; it is pervasive, utterly intimate. Not outside us, but next to us. Under our skin; often, inside our minds.

“Certain central themes spring up repeatedly in cyberpunk. The theme of body invasion: prosthetic limbs, implanted circuitry, cosmetic surgery, genetic alteration. The even more powerful theme of mind invasion: brain-computer interfaces, artificial intelligence, neurochemistry — techniques radically redefining the nature of humanity, the nature of the self.”

Some Comments About The Transhumanist Project (2014)

by R.U. Sirius

One problem is the underlying philosophical assumption that enhancement is always enhancement, or is just enhancement. And I always think of Marshall McLuhan’s dictum that our extensions come with amputations.

 

 

These are some comments that I wrote in response to some questions from Peter Rothman on the h+ website in 2014.

 

Transhumanism as an ism — or a belief system — is probably about the right of individuals and, possibly, the human species as a whole (or large groups thereof) to self-enhance and to engage in an experiment in self-directed evolution, in a literal sense. In other words, not that we merely have glasses and cell phones, but that we might become something other, in a biological and/or perceptual sense.

 

I don’t think it’s necessarily optimistic and I don’t think it’s necessarily rationalist (particularly when we’re talking about people who think they’re pretty darn rational, who can only really be responded to with satire). I do think rationality and technology — stuff that actually works — are the fundamental tools for attaining an increasingly transhuman or posthuman condition. But tools are not, in and of themselves, paradigms. So individual transhumanists may feel like rationalism is a fine tool for living well but not the essential factor in actually living, or even in apprehending what life is about… to the degree that can even be done, or in having social relationships.

 

My ongoing support for the idea of transhumanism is partly a rare acquiescence to foolish consistency. I’d like to see if the project of a positive radical mutation of the human condition suggested by people like Timothy Leary can somehow win the day; whether, with the engineers and scientists in the vanguard of making it possible, we alternatively minded mutant types can pull a few aces from the bottom of the deck and actually somehow transform this pinched, mean, surveilled, existentially barren and risky 21st-century civilization into something that feels like liberation, generosity and heightened awareness. At this moment, the tools that could be applied to such a state of affairs are gathering, but the memetic and environmental thrusts lean towards epic failure.
