By The Doctor
A common theme of science fiction in the transhumanist vein, and less commonly in applied (read: practical) transhumanist circles, is the concept of having an exocortex either installed within oneself or interfaced in some way with one's brain to augment one's intelligence. To paint a picture with a fairly broad brush, an exocortex is a system, postulated by J.C.R. Licklider in the research paper Man-Computer Symbiosis, that would implement a new lobe of the human brain situated outside of the organism (though some components of it might be internal). An exocortex would be a symbiotic device providing additional cognitive capacity or new capabilities that the organism did not previously possess, such as:
- Identifying and executing cognitively intensive tasks (such as searching for and mining data for a project) on behalf of the organic brain, in effect freeing up CPU time for the wetware.
- Adding additional density to existing neuronal networks to more rapidly and efficiently process information. Thinking harder as well as faster.
- Providing databases of experiential knowledge (synthetic memories) for the being to “remember” and act upon. Skillsofts, basically.
- Adding additional “execution threads” to one’s thinking processes. Cognitive multitasking.
- Modifying the parameters of one’s consciousness, for example, modulating emotions to suppress anxiety and/or stimulate interest, stimulating a hyperfocus state to enhance concentration, or artificially inducing zen states of consciousness.
- Expanding short-term memory beyond baseline parameters, for example with mechanisms that consolidate short-term memory into long-term memory significantly more efficiently.
- Adding I/O interfaces to the organic brain to facilitate connection to external networks, processing devices, and other tools.
What I consider early implementations of such a cognitive prosthetic exist now and can be constructed using off-the-shelf hardware and open source software. Observing that technologies build on top of one another as they advance in sophistication, one might carefully suggest that these early implementations could pave the way for the sci-fi implementations of the future. While a certain amount of systems administration and engineering know-how is required at this time to construct a personal exocortex, the same sort of system automation used for at-scale deployments can set up and maintain a significant amount of exocortex infrastructure. Personal interface devices – smartphones, tablets, smart watches, and other wearable devices – are highly useful I/O devices for exocortices, and probably will be until direct brain interfaces are widely available and affordable. There are also several business models inherent in exocortex technology, but it should be stressed that potential compromises of privacy, issues of trust, and legal matters (particularly in the United States and European Union) come with it as well. This part of the problem space is insufficiently explored, so expert assistance is required.
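To make that concrete, here is a deliberately trivial sketch of what such automation can look like: a Python script that pushes one idempotent setup command to a list of nodes over SSH. A real deployment would use a proper configuration management tool, and the hostnames and command here are hypothetical.

```python
#!/usr/bin/env python3
"""Push one setup command to every exocortex node over SSH.

A trivial stand-in for real configuration management (Ansible,
Puppet, and the like). Hostnames and the command are hypothetical;
assumes SSH key access to each host.
"""

import subprocess

HOSTS = ["node1.example.net", "node2.example.net"]
COMMAND = "sudo apt-get install -y postgresql"  # idempotent setup step

for host in HOSTS:
    print("==>", host)
    # check=True aborts the run if any host fails, so a broken node
    # is noticed immediately rather than silently skipped.
    subprocess.run(["ssh", host, COMMAND], check=True)
```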
Here are some of the tools that I used to build my own exocortex:
Ultimately, computers are required to implement such a prosthesis. Lots of them. I maintain multiple virtual machines at a number of hosting providers around the world, all running various parts of my exocortex's infrastructure. I also maintain multiple physical servers to carry out tasks that I don't trust to hardware I don't personally control. Running my own servers also means that I can build storage arrays large enough for my needs without spending more money than I have available at any one time. For example, fifty terabytes of disk space on a SAN in a data center might cost hundreds of dollars per month, while the one-time, up-front cost of a RAID-5 array with at least one hot spare drive for resiliency was roughly the same amount. Additionally, I can verify and validate that the storage arrays are suitably encrypted and sufficiently monitored.
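As an illustration of the sort of monitoring I mean, here is a minimal sketch that checks software RAID health. It assumes Linux mdadm-style arrays, whose state the kernel exposes through /proc/mdstat, and leaves the actual alerting as a stub.

```python
#!/usr/bin/env python3
"""Minimal RAID health check: warn if any md array is degraded.

A sketch assuming Linux software RAID (mdadm), which exposes array
state in /proc/mdstat; hardware RAID needs vendor tools instead.
"""

import re
import sys

def degraded_arrays(mdstat_text):
    """Return the names of arrays whose status shows a failed member.

    /proc/mdstat status lines end with something like [UU] for a
    healthy two-disk array; an underscore ([U_]) marks a missing or
    failed member.
    """
    bad = []
    current = None
    for line in mdstat_text.splitlines():
        m = re.match(r"^(md\d+)\s*:", line)
        if m:
            current = m.group(1)       # start of an array's block
        elif current and "[" in line:
            status = re.search(r"\[([U_]+)\]", line)
            if status and "_" in status.group(1):
                bad.append(current)
            current = None             # status line handled; reset
    return bad

if __name__ == "__main__":
    with open("/proc/mdstat") as f:
        failing = degraded_arrays(f.read())
    if failing:
        # Hook real alerting (email, XMPP, a Huginn webhook) in here.
        print("DEGRADED:", ", ".join(failing), file=sys.stderr)
        sys.exit(1)
    print("all arrays healthy")
```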
Back end databases are required to store much of the information my exocortex collects and processes. Not only do they hold data for all of the applications I use (and those used by my software), they serve as the memory fields for my various software agents. The exact databases I use are largely irrelevant to this article because they all do the same thing, just with slightly different network protocols and dialects of SQL. The databases are relational right now because the software I use requires them, but I have schemas planned for document and graph databases for future applications as necessary. I also make use of several file storage and retrieval mechanisms for the data that parts of me collect: files of many kinds, copies of web pages, notes, annotations, and local copies of data primarily stored with other services. I realize that it sounds somewhat stereotypical for a being such as myself to hoard information, but as often as not I find myself needing to refer to a copy of a whitepaper (such as Licklider's, referenced earlier in this article) and having one at hand with a single search. Future projects involve maintaining local mirrors of certain kinds of data for preferential use, because of privacy issues and the risk of censorship or even wholesale data destruction through legislative fiat or political pressure.
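The schema details genuinely don't matter much. As a toy illustration of such a memory field, here is a sketch using Python's built-in sqlite3 as a stand-in for whatever relational database you prefer; the table and column names are invented for this example.

```python
import sqlite3

# A toy memory field: one table of captured documents, one of tags.
# Names are invented for illustration; any relational database would
# serve the same purpose.
db = sqlite3.connect("exocortex.db")
db.executescript("""
CREATE TABLE IF NOT EXISTS documents (
    id        INTEGER PRIMARY KEY,
    url       TEXT,                       -- source of the copy, if any
    title     TEXT NOT NULL,
    body      TEXT NOT NULL,              -- the local copy itself
    captured  TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS tags (
    document_id INTEGER REFERENCES documents(id),
    tag         TEXT NOT NULL
);
""")

def remember(url, title, body, tags=()):
    """File a local copy of a document and tag it for later retrieval."""
    cur = db.execute(
        "INSERT INTO documents (url, title, body) VALUES (?, ?, ?)",
        (url, title, body))
    db.executemany(
        "INSERT INTO tags (document_id, tag) VALUES (?, ?)",
        [(cur.lastrowid, t) for t in tags])
    db.commit()

remember("https://example.com/licklider.html",
         "Man-Computer Symbiosis",
         "...full text of the local copy...",
         tags=("whitepaper", "exocortex"))
```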
Arguably, implementing tasks is the most difficult part. A nontrivial amount of programming is required, as well as in-depth knowledge of interfacing with public services, authentication, security features… thankfully, there are now software frameworks that abstract much of this detail away. After many years of building, rebuilding, and fighting to maintain an army of software agents, I ported much of my infrastructure over to Andrew Cantino's Huginn. Huginn is a software framework that implements several dozen classes of semi-autonomous software agents, each designed to carry out one kind of task: sending an HTTP request to a web server, filtering events by content, emitting events based upon some external occurrence, or sending events to other services. The basic concept behind Huginn is the same as the UNIX philosophy: every functional component does one thing, and does it very well. Events generated by one agent can be ingested by other agents for processing, and the end result is greater than the sum of the results achieved by each individual agent. To be sure, I've written lots of additional software that plugs into Huginn, because there are some things it's not particularly good at, mainly very long-running tasks that require minimal user input on a random basis but produce a great deal of user output.
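To illustrate the pattern (and only the pattern: Huginn itself is a Ruby application, and this is not its API), here is a toy Python sketch of agents that each do one small job and hand their events downstream.

```python
"""A toy illustration of the agent/event pattern Huginn implements:
each agent does one small job and passes events to the next. This is
not Huginn code; Huginn is a Ruby application."""

import json
import urllib.request

class Agent:
    """Base class: agents receive events and emit new ones downstream."""
    def __init__(self, *receivers):
        self.receivers = receivers

    def emit(self, event):
        for r in self.receivers:
            r.receive(event)

    def receive(self, event):
        raise NotImplementedError

class WebsiteAgent(Agent):
    """Fetch a URL and emit its JSON payload as an event."""
    def check(self, url):
        with urllib.request.urlopen(url) as resp:
            self.emit(json.load(resp))

class TriggerAgent(Agent):
    """Pass along only events whose field satisfies a predicate."""
    def __init__(self, field, predicate, *receivers):
        super().__init__(*receivers)
        self.field, self.predicate = field, predicate

    def receive(self, event):
        if self.predicate(event.get(self.field)):
            self.emit(event)

class NotificationAgent(Agent):
    """Terminal agent: act on the event (here, just print it)."""
    def receive(self, event):
        print("ALERT:", event)

# Wire the pipeline back to front, then kick it off.
notify = NotificationAgent()
trigger = TriggerAgent("temperature", lambda t: t is not None and t > 30,
                       notify)
watcher = WebsiteAgent(trigger)
# watcher.check("https://example.com/weather.json")  # hypothetical endpoint
```

Each class stays ignorant of the others; swapping the notification step for an email or a database write touches exactly one agent, which is the point of the design.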
Storing large volumes of information requires search mechanisms to find anything again. It can be a point of self-discipline to carefully name files, sort them into directories, and tag them, but when you get right down to it, that isn't a sustainable effort. My record is eleven years before giving up and letting search software do the work for me, and much more efficiently at that. Primarily I use the YaCy search engine to index websites, documents, and archives on my servers, because it works better than any search engine I've yet tried to write, has a reasonable API to interface with, and can be run as an isolated system (i.e., not participating in the global YaCy network, which is essential for preserving privacy). When searching the public Net I use several personal instances of Searx, an open source meta-search engine that is highly configurable, hackable, and also presents a very reasonable API.
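As a taste of how simple such an API is to drive, here is a minimal sketch that queries a personal Searx instance. It assumes an instance listening on localhost port 8888 with the JSON output format enabled in its settings.yml, which is not always on by default.

```python
import json
import urllib.parse
import urllib.request

# Assumes a personal Searx instance at this address with the JSON
# output format enabled in its settings.yml; adjust to taste.
SEARX = "http://localhost:8888/search"

def search(query):
    """Run a query against a personal Searx instance, return results."""
    url = SEARX + "?" + urllib.parse.urlencode(
        {"q": query, "format": "json"})
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["results"]

for hit in search("Man-Computer Symbiosis")[:5]:
    print(hit["title"], "-", hit["url"])
```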
I make a point of periodically backing up as much of my exocortex as possible. One of the first automated jobs I set up on a new server, or whenever I install new software, is a recurring backup of the application and its datastores several times a day. In addition, those backups are mirrored to multiple locations on a regular basis, and those locations copy everything to cold storage once a day. To conserve mass storage space I keep a one-month rotation period for those backups; copies older than 31 days are automatically deleted to reclaim space.
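The rotation step is simple enough to sketch. This minimal version assumes backup archives collected under a single (hypothetical) directory, and treats a file's modification time as the time the backup was taken.

```python
#!/usr/bin/env python3
"""Prune backups older than the rotation period.

A minimal sketch: assumes backup archives live under one directory
and that file modification time reflects when the backup was taken.
"""

import time
from pathlib import Path

BACKUP_DIR = Path("/srv/backups")   # hypothetical location
ROTATION_DAYS = 31

cutoff = time.time() - ROTATION_DAYS * 24 * 60 * 60
for archive in BACKUP_DIR.glob("*.tar.gz"):
    if archive.stat().st_mtime < cutoff:
        archive.unlink()
        print("pruned", archive)
```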
In future articles, I'll talk about how I built my own exocortex, what I do with it, and what benefits and drawbacks it has in my everyday life. I will also, of course, discuss some of the security implications and a partial threat model for exocortices. I will also write about what I call “soft transhumanism” in the future: personal training techniques that form the bedrock of my technologically implemented augmentations.
After a talk given at HOPE XI.
This is the first in a series of articles by The Doctor.
The Doctor is a security practitioner working for a large Silicon Valley software-as-a-service company on next-generation technologies to bring reality and virtuality closer together. His professional background includes security research, red team penetration testing, open source intelligence analysis, and wireless security. When not reading hex dumps, auditing code, and peeking through sensors scattered across the globe, he travels through time and space inside a funny blue box, contributes designs and code to a number of open-source hardware and software projects, and hacks on his exocortex, a software ecosystem and distributed cognitive prosthesis that augments his cognitive capabilities and assists him in day-to-day life by collecting information and farming out personally relevant tasks. His primary point of presence is the blog Antarctica Starts Here.