Head in the clouds, boots on the ground.

Self-hosted infrastructure is the first step toward voluntary apotheosis.

–Unknown

When people think of The Cloud(tm), they think of ubiquitous computing: whatever you need, whenever you need it, there from the convenience of your mobile, from search engines to storage to chat.  However, as the latest Amazon and Cloudflare outages have demonstrated, all it takes is a single glitch to knock out half the Internet as we know it.

This is, as they say, utter bollocks.  Much of the modern world spent a perfectly good day (one that could have been spent procrastinating, shitposting, and occasionally doing something productive) bereft of Slack, Twitter, Autodesk, Roku, and phone service through Vonage.  While thinking about this fragile state of affairs in the shower this morning, I realized that, for the somewhat technically inclined and their respective cohorts, there are ways to mitigate the risks of letting other people run the stuff you need every day.  Let us consider the humble single board computer: a computing device the size of two decks of cards at most, and of a €1 coin at the very least.  While this probably won't help you keep earning a paycheque, it would help you worry less about the next time Amazon decides to fall on its face.


Money for vulns but 0-days ain’t free.

by: Renketsu Link

An interesting point was raised in a comment on an earlier article published at Mondo: To help level the playing field, why don't companies offer cash rewards for vulnerabilities so they can be patched prior to publication?  This is not a new idea.  Such operations are called bug bounty programs and have been around since 1995.  The very first bug bounty program was instituted by Netscape (remember that?) as a way of improving their flagship product, Netscape Navigator.  Other companies you may have heard of started doing the same thing about ten years later, in a much higher profile way.  Github has one, Google and Facebook have them, some banks have them… I could go on and on, but there's little point.  There are even companies that specialize in running bug bounty programs, in effect outsourcing the administration, bounty payments, and legal hassles.

Here’s the thing: They’re pointless.

Bug bounty programs are a feel-good way for wannabe white hats to find vulns and make a little pocket money on the side, with reduced risk of being sued or arrested.  There are, of course, always exceptions to this rule, because you can't trust anybody or anything out to turn a profit.  Typically, bug bounty programs require that an NDA be signed when submitting proof, which can mean that the hacker is forbidden from publishing their work on their own, and sometimes even from talking about it.  If you're looking to build your rep and maybe pad your resume a little, unless you get the go-ahead to publish afterward you've wasted your time.  There is also only the say-so of the company running the program that any bugs found will actually be fixed, which, if the internet of shit community is any indication, is worth more than a wish and less than a fart.  Even governments and militaries have gotten into the cash-for-vulns business.

It is common for bug bounty programs to declare anything really interesting or sensitive off limits.  Of course, this means that anything with a security classification, personnel records, schedules, medical records… the things they actually need to worry about protecting don't get probed by anyone who doesn't have a vested interest in taking them for all they've got and staying out of sight.  While there is some value in probing only public facing services, as we learned from Gary McKinnon it's off the reservation where the really interesting things are hiding.  As if that weren't enough, a lot of bug bounties pay a pittance at best.  For the longest time Yahoo (for example) would pay at most $12.50us for a vulnerability found anywhere in their infrastructure, and that was after people got fed up with their crappy t-shirts that only lasted two washes.

Of course, if you just want to earn some decent money for your work (fame being one of the enemies of the hacker at large, of course) you can sell weaponized 0-days on the black market for several tens of thousands of dollars, if you don't want to go into business for yourself building and running botnets or extortion rackets.  Governments are sometimes willing, depending on the severity and overall utility of the vuln, to pay in the low hundreds of thousands of dollars for 0-days that can be used offensively.  It is rumored that the United States government paid $100,000us for a working copy of the infamous MS08-067 vulnerability almost a year before Microsoft found out about it.  I haven't been able to confirm this (mostly because the person who originally found it politely turned down the job I offered him).  Apple's iOS, which is widely touted as the most secure mobile OS, definitely has at least one true 0-day in each release (otherwise there wouldn't be any such thing as jailbreaking), but why in the world would you sell your work to Apple when you could sell it to a vulnerability reseller and make well over a million dollars for a couple of days of dicking around?

A couple of hundred bucks at most compared to more significant digits than most people will ever see in their whole lives.  The math’s not hard.

Renketsu Link is one of the senior otaku of the Tanpa Supai Kai, an industrial espionage contractor headquartered in Hokkaido, Nihon. Beginning as a lowly copy protection cracker, Link swiftly rose to the position of chief network infiltration specialist with a concentration on data exfiltration. Link has pioneered multiple strategies and techniques for making CISOs commit seppuku and BOFHs go on shooting rampages in the NOC before swallowing thermite grenades.

From Anonymous To Pursuance — Barrett Brown in Conversation with Steve Phillips

Conversation with Steve Phillips, lead developer and Project Manager of The Pursuance Project, edited by The Doctor. Also, thanks to Barrett Brown for reviewing the transcript for errors and to Lisa Rein for final proofs.

This article and its sequels constitute a somewhat edited and condensed transcript of a discussion between Barrett Brown and Steve Phillips at the San Francisco Aaron Swartz Day Hackathon in 2017.  We’ve tried to trade off informality for clarity, and refer the curious reader to the original source material in the spirit of transparency.  Any emphasis and hyperlink references are ours.

In 2012, Barrett Brown was arrested for his role in the Anonymous hack of Stratfor, a global intelligence company that serves corporate and government interests around the world. The hack revealed an apparent insider trading relationship between the company and Goldman Sachs, among other dubious business activities.  Brown won the National Magazine Award for his work while in solitary confinement.  While in prison, Barrett had the idea for a software project: the democracy-building platform called Pursuance.

From the Pursuance homepage

“Pursuance exists to amplify the efforts of activists, journalists, and non-profits by (1) creating open source collaboration software and (2) building a powerful network of talented, reasonable individuals around it…

“Our free, open source, and secure Pursuance System software enables participants to: create action-oriented groups called “pursuances”, discuss how best to achieve their mission, rapidly record exciting strategies and ideas in an actionable form… receive social recognition for their contributions, and to delegate tasks to other pursuances in this ecosystem in order to harness its collective intelligence, passion, and expertise.”



Steve Phillips

STEVE PHILLIPS: I read a Wired Magazine article that said that Barrett Brown was out of jail and doing awesome stuff again: building an encrypted environment where you can collaborate with other activists and journalists and people who are trying to change the world. I thought that sounded ultra-compelling. So, I immediately reached out to him in several different ways (snail mail and Twitter and email, all in parallel), got through, and then jumped on a plane and flew to Texas to meet Barrett.

[Barrett Brown], let’s talk a little bit about your background. You won a National Magazine Award, but let’s go back to 2010. You had the sense that we should be using the Internet for activist-type things. Even if it’s just a small percentage of people who really care about issues, there are billions of people online so that could be quite the force. Take me back to some of your thinking around that time, right before you started doing work with Anonymous.


Semi-autonomous software agents: A personal perspective.

by: The Doctor [412/724/301/703/415]

So, after going on for a good while about software agents, you're probably wondering why I have such an interest in them. I started experimenting with my own software agents in the fall of 1996, when I first started undergrad. When I went away to college I finally had an actual network connection for the first time in my life (where I grew up, the only access I had was through dialup) and I wanted to abuse it. Not in the way that the rest of my classmates were, but to do things I actually had an interest in. So, the first thing I did was set up my own e-mail server with Qmail and subscribe to a bunch of mailing lists, because that's where all of the action was at the time. I also rapidly developed a list of websites that I checked once or twice a day because they were often updated with articles that I found interesting. It was through those communication fora that I discovered the research papers on software agents that I mentioned in earlier posts in this series.

I soon discovered that I'd bitten off more than I could chew, especially when some mailing lists went realtime (which is when everybody starts replying to one another more or less the second they receive a message) and I had to check my e-mail every hour or so to keep from running out of disk space. Rather than do the smart thing (unsubscribing from a few 'lists) I decided to work smarter, not harder, and see if I could use some of the programming languages I was playing with at the time to help. I've found over the years that it's one thing to study a programming language academically, but to really learn one you need a toy project to learn the ins and outs. So, I wrote some software that would crawl my inbox, scan messages for certain keywords or phrases, move the matches into a folder so I'd see them immediately, and leave the rest for later. I wrote some shell scripts, and when those weren't enough I wrote a few Perl scripts (say what you want about Perl, but it was designed first and foremost for efficiently chewing on data). Later, when that wasn't enough, I turned to C to implement some of the tasks I needed Leandra to carry out.
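That first inbox-triage agent is simple enough to sketch. Here is a minimal reconstruction of the idea in Python (the originals were shell, Perl, and C; the watch list below is made up for illustration):

```python
# A minimal sketch of the inbox-triage agent: scan each message for
# keywords and split the inbox into "see immediately" and "read later"
# piles. Illustrative reconstruction, not the original code.

KEYWORDS = ("security", "exploit", "urgent")  # assumed watch list


def triage(messages):
    """Split messages into (see_now, read_later) based on keyword hits."""
    see_now, read_later = [], []
    for msg in messages:
        text = (msg["subject"] + " " + msg["body"]).lower()
        if any(word in text for word in KEYWORDS):
            see_now.append(msg)
        else:
            read_later.append(msg)
    return see_now, read_later


if __name__ == "__main__":
    inbox = [
        {"subject": "New exploit in qmail?", "body": "Check the list today."},
        {"subject": "Lunch on Friday", "body": "Pizza again?"},
    ]
    hot, later = triage(inbox)
    print(len(hot), "message(s) need attention now")
```

A real version would pull the messages out of an mbox or maildir, but the sorting logic is the whole trick.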

Because Netscape Navigator was highly unreliable on my system for reasons I was never quite clear on (it used to throw bus errors all over the place), I wasn't able to consistently keep up with my favorite websites at the time. While the idea of update feeds existed as far back as 1995, feeds didn't actually exist in practice until the publication of the RSS v0.9 specification in 1999, and Atom didn't exist until 2003, so I couldn't just point a feed reader at them. So I wrote a bunch of scripts that used lynx -dump http://www.example.com/ > ~/websites/www.example.com/`date '+%Y%m%d-%H:%M:%S'`.txt and diff to detect changes and tell me what sites to look at when I got back from class.
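The same lynx-and-diff trick translates directly into a few lines of any scripting language. Here's a sketch in Python, with difflib standing in for diff(1) and the snapshot naming mirroring the date invocation above (an illustration of the idea, not the original scripts):

```python
# A minimal sketch of the lynx-and-diff trick: keep timestamped text
# snapshots of a page and report which lines appeared since last time.
# (Hypothetical reconstruction; the original used lynx -dump and diff(1).)

import difflib
from datetime import datetime


def snapshot_name(site, when):
    """Build a timestamped snapshot filename, matching the shell scheme
    `date '+%Y%m%d-%H:%M:%S'` used above."""
    return f"{site}/{when.strftime('%Y%m%d-%H:%M:%S')}.txt"


def changed_lines(old_text, new_text):
    """Return the lines added since the previous snapshot, diff(1)-style."""
    diff = difflib.unified_diff(
        old_text.splitlines(), new_text.splitlines(), lineterm=""
    )
    return [line[1:] for line in diff
            if line.startswith("+") and not line.startswith("+++")]


if __name__ == "__main__":
    old = "Welcome to example.com\nNews: nothing yet"
    new = "Welcome to example.com\nNews: nothing yet\nNews: something happened"
    for line in changed_lines(old, new):
        print("changed:", line)
```

Run from cron a couple of times a day against `lynx -dump` output, this tells you exactly which sites are worth a look.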

That was one of the prettier sequences of commands I had put together, too. This kept going for quite a few years, and the legion of software agents I had running on my home machine grew out of control. As opportunities presented themselves, I upgraded Leandra as best I could, from an 80486 to an 80586 to a P-III, with corresponding increases in RAM and disk space. As one does.

Around 2005 I discovered the novel Accelerando by Charles Stross and decided to call this system of scripts and daemons my exocortex, after the network of software agents that the protagonist of the first story arc had much of his mind running on throughout the story. In early 2014 I got tired of maintaining all of the C code (which was, to be frank, terrible, because these were my first C projects), the shell scripts (lynx and wget+awk+sed+grep+cut+uniq+wc+…), and the Perl scripts, keeping them going as I upgraded my hardware and software and staying on top of site redesign after site redesign… so I started rewriting Exocortex as an object-oriented framework in Python, in part as another toy project and in part because I genuinely enjoy working in Python. I worked on this project for a couple of months and put my code on Github because I figured that eventually someone might find it helpful. When I had a first cut more or less stable I showed it to a friend at work, who immediately said "Hey, that works just like Huginn!"

I took one look at Huginn, realized that it did everything I wanted plus more by at least three orders of magnitude, and scrapped my codebase in favor of a Huginn install on one of my servers.

Porting my existing legion of software agents over to Huginn took about four hours; it used to take me between two and twelve hours to get a new agent up and running, due to the amount of new code I had to write, even if I used an existing agent as a template. Just looking at the effort/payoff tradeoff, there really wasn't any competition. So, since that time I've been a dedicated Huginn user and hacker, and it's been more or less seamlessly integrated into my day to day life as an extension of myself. I find it strange, even somewhat alarming, when I don't hear from any of my agents (which usually means that something locked up, which still happens from time to time), but I've been working with the core development team to make Huginn more stable. I find it's a lot more stable than my old system was, simply because it's an integrated framework and not a constantly mutating patchwork of code in several languages sharing a back-end.

Additionally, my original exocortex network consisted of many very complex agents: one was designed to monitor Slashdot, another Github, another a particular IRC network. Huginn instead offers a large number of agent types, each of which carries out one specific task (like providing an interface to an IMAP account or scraping a website). By customizing the configurations of instances of each agent type and wiring them together, you can build an agent network which can carry out very complex tasks that would otherwise require significant amounts of programming time.
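The wiring-agents-together idea is essentially a dataflow graph: each agent consumes events, optionally transforms or filters them, and passes the result downstream. A toy sketch of that structure in Python (this illustrates the concept only; it is not how Huginn itself is implemented):

```python
# Toy dataflow: each agent does one small job and forwards events to
# whatever agents are wired downstream of it. A handler returning None
# drops the event.

class Agent:
    def __init__(self, handle):
        self.handle = handle
        self.receivers = []

    def link(self, downstream):
        """Wire this agent's output to another agent's input."""
        self.receivers.append(downstream)
        return downstream  # allows chaining: a.link(b).link(c)

    def emit(self, event):
        result = self.handle(event)
        if result is not None:
            for receiver in self.receivers:
                receiver.emit(result)


alerts = []

# Source -> keyword filter -> notifier, each a trivial single-purpose agent.
scraper = Agent(lambda e: e)
keyword = Agent(lambda e: e if "outage" in e["text"] else None)
# append() returns None, so the notifier forwards nothing further.
notify = Agent(lambda e: alerts.append("ALERT: " + e["text"]))

scraper.link(keyword).link(notify)
scraper.emit({"text": "routine maintenance tonight"})
scraper.emit({"text": "major outage at the data center"})
# alerts now contains only the event that matched the keyword filter
```

The point of the design is that complexity lives in the wiring, not in any single agent, which is why swapping one piece out is cheap.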

Semi-autonomous agents: What are they, exactly?

by: The Doctor [412/724/301/703/415]

This post is intended to be the first in a series of long form articles (how many, I don’t yet know) on the topic of semi-autonomous software agents, a technology that I’ve been using fairly heavily for just shy of twenty years in my everyday life. My goals are to explain what they are, go over the history of agents as a technology, discuss how I started working with them between 1996e.v. and 2000e.v., and explain a little of what I do with them in my everyday life. I will also, near the end of the series, discuss some of the software systems and devices I use in the nebula of software agents that comprises what I now call my Exocortex (which is also the name of the project), make available some of the software agents which help to expand my spheres of influence in everyday life, and talk a little bit about how it’s changed me as a person and what it means to my identity.

So, what are semi-autonomous agents?

One working definition is that they are utility software that acts on behalf of a user or another piece of software to carry out useful tasks, farming out busywork that one would otherwise have to do oneself, freeing up time and energy for more interesting things. A simple example of this might be the pop-up toaster notification in an e-mail client alerting you that you have a new message from someone; if you don't know what I mean, play around with this page a little bit and it'll demonstrate what a toaster notification is. Another possible working definition is that agents are software which observes a user-defined environment for changes, which are then reported to a user or message queuing system. An example of this functionality might be Blogtrottr, which you plug the RSS feeds of one or more blogs into, and whenever a new post goes up you get an e-mail containing the article. Software agents may also be said to be utility software that observes a domain of the world and reports interesting things back to its user. A hypothetical software agent may scan the activity on one or more social networks for keywords which a statistically unusual number of users are posting and send alerts in response. I'll go out on a limb a bit here and give a more fanciful example of what software agents can be compared to: the six robots from the Infocom game Suspended.

In the game, you the player are unable to act on your own because your body is locked in a cryogenic suspension tank, but the six robots (Auda, Iris, Poet, Sensa, Waldo, and Whiz) carry out the orders given them, subject to their inherent limitations, and are smart enough to figure out how to interpret those orders (Waldo, for example, doesn't need to be told exactly how to pick up a microsurgical arm, he just knows how to do it).

So, now that we have some working definitions of software agents, what are some of the characteristics that make them different from other kinds of software? For starters, agents run autonomously after they're started up. In some ways you can compare them to daemons running on a UNIX system or Windows services, but instead of carrying out system level tasks (like sending and receiving e-mail) they carry out user level tasks. Agents may act in a reactive fashion in response to something they encounter (like an e-mail from a particular person) or in a proactive fashion (on a schedule, when certain thresholds are reached, or when they see something that fits a set of programmed parameters). Software agents may be adaptive to new operational conditions if they are designed that way. There are software agents which use statistical analysis to fine tune their operational parameters, sometimes in conjunction with feedback from their user, perhaps by turning down keywords or flagging certain things as false positives or false negatives. Highly sophisticated software agent systems may incorporate machine learning techniques, such as artificial neural networks, perceptrons, and Bayesian reasoning networks, to operate more effectively for their users over time. Software agent networks may under some circumstances be considered implementations of machine learning systems, because they can exhibit the functional architectures and behaviors of machine learning mechanisms. I wouldn't make this characterization of every semi-autonomous agent system out there, though.

How did we let it get so bad? (Everybody is Pwned)

By: Renketsu Link

You’ve undoubtedly been trying to figure out what to do about what might have been the worst data breach this year, the compromise of the multinational credit bureau Equifax.  As it stands now, the credit histories of millions of people in the United States and other parts of the world are now in the wild and undoubtedly being sold on the black market at cost (nowadays, rather less than $20 per dossier) and if you’re reading this you’ve probably got free credit monitoring until the year 2030.  But that doesn’t answer the key question here: How the fuck did things get so bad?


The answer is a complex one and involves significant amounts of suck and fail at every level of complexity.  Let’s start at the top of the stack.

Instituting a security program requires funding from the company itself as well as buy-in from upper management.  Without money, the system administrators at the company can only cobble things together in their spare time – hardening, monitoring, patching, reporting, and deploying the occasional passive security measure.  In some companies to this day, they have to do this on the down low because management there is actively hostile toward security and will force the admins to remove “those useless things.”  Some measures require the purchase of additional hardware and license keys, and not every budget has a couple of thousand dollars to spare on a few new boxes.  If you don’t think places like that exist, they do – I’ve worked at a few, and they make our work much, much easier.  Even if they have money, the C-levels (Chief * Officers), V-levels (Vice *), and D-levels (Directors) need to make it public that they support the security program, will abide by it, and officially order everyone who works there to abide by it, too.  All it takes is one C-level who doesn’t give a fuck to cut the nuts off of the entire thing.

Second, let's talk about so-called security practitioners.  Probably 80% of the "cybersecurity experts" I've butted heads with are barely able to turn on a computer, let alone actually put up a fight.  Most of the certifications the security industry pimps, like the CISSP or the Certified Ethical Hacker sheepskin, don't actually teach anything useful about security in any way, shape, or form.  The Wikipedia pages talk a good game by throwing around words like "provable," "experts," "ethical," and "cyber," but if you actually read any of the training texts (which, of course, are published by those certification bodies and cost as much as your average college textbook) usable information is pretty scarce.  Let's take the CEH: If you look at what it actually teaches you (things like not running telnet, setting up firewalls, installing patches, and not running all your shit as root) it reflects the publicly known state of security in the late 1990s.  You'd be hard pressed to find a Linux or real UNIX that even includes in.telnetd these days, and the curriculum doesn't mention anything about the sorts of vulnerabilities one finds today (like process injection or memory hardening evasion techniques).

As for the CISSP, it tells you up front that the Common Body of Knowledge is a mile wide and an inch deep (or, more recently, "at the thirty-thousand foot view"), but in the same breath they'll also tell you that when you actually sit for the exam all you have to do is pick the least wrong answer.  If you actually know anything about security, about 74% (if I did my math right (hey, the book's a thousand pages, cut me some slack)) of the questions have only incorrect answers, and if you actually did what you were taught… well, you know how I make my living, so by all means, keep doing exactly what you were taught.

attrition.org's Charlatans page would actually be the size of Wikipedia if Jericho was still updating it.

Practically every company out there has some legal or industry-specific guidelines that they have to at least make an attempt to comply with, and there's no shortage of them: PCI-DSS, NIST SP 800-53, NSA IA, HIPAA, ISO 27001… I could go on and on, but all you need to know is that they all say basically the same thing: Google "how do I harden <insert operating system or appliance here>," follow the instructions if the link isn't from a perfectly legitimate Russian or Chinese business conglomerate, patch your shit every couple of days, read your logs and respond to what you see, and generally don't be a dumbass.  In practice, however, they get treated as lists of checkmarks or cells in a spreadsheet.  A couple of meetings are scheduled and suffered through by everybody who bothered to show up (of course, at least one Android phone that now belongs to someone like me is on the table) and roadmaps are drawn up that are supposed to act as a timeline for security measures that need to be instituted.  Sometimes, once or twice a year, a security assessment is held; more rarely, an outside security company is hired to do the work.

Then, and here's the fun part, the remediation loophole kicks in.  It goes like this: Every security program has a requirement built into it that basically says, "You now have x months to fix the findings from this assessment, after which time we'll run another assessment."  You probably see where this is headed.  Nothing happens to fix the vulnerabilities found, the next assessment happens, nothing has changed (usually things have gotten worse in the meantime), and during the burndown meeting someone says "Okay, you now have x months to fix the findings from this assessment, after which time we'll run another assessment."  Over and over and over again.

Of course, there are a few out there who actually have a clue.  They're the ones who don't last very long, because they eventually get tired of being ignored, quit, and occasionally go into business as hired guns with their inside knowledge (come on in, the water's fine!).  They run their scans, tell the sysadmins to patch their shit and harden SQL Server, and read their logs.  They're also the ones who get told that installing patches adds bugs instead of fixing them, get told that complex passwords are unnecessary, get bitched out at all-hands meetings for trying to institute multifactor authentication because it adds an extra step to normal work (meaning that your Battle.net account probably has stronger authentication than your bank), and watch in horror as C-levels plug flash drives they found in the parking lot into workstations where the user's logged in as the local admin.  The lot in life of a real security professional is a sad one that often results in functional alcoholism, endless bitching at hacker cons (attended under the pretext of "vacation," because actually hanging out with hackers can cost someone one of those expensive certifications if anyone finds out), and often early retirement to a log cabin in Appalachia.  That's if they don't get fired for actually doing their jobs; nobody ever likes being shown that their security program doesn't actually work, and the messenger always gets shot.

Next in line are the sysadmins.  As with any group, there is a subset who actually know what they're doing, and who go as far as they need to so they can do their jobs (which, if they know what they're doing, consists of automating everything in the first month, fucking off the rest of the time, and having a boss key set up so they can look busy whenever somebody wearing a tie walks by).  The rest are content to stand up a Windows or Linux box, throw an app or two on it, and let it go at that.  Some don't bother patching anything, either because it'd be too much work or because the developers won't let them ("If you patch that you'll break our production app!").  Most have a patch cycle that's entirely too long (weeks to months), which leaves them vulnerable for extremely long periods of time.  Also, operating system ecosystems are becoming more security hostile in very subtle ways (you're welcome).  There is no shortage of Windows APIs that let a creative user turn off or evade security policy entirely, and systemd has been a godsend to hackers the world over.

Last and certainly not least are the end users, who may as well be on our payroll because they make it all possible.  They're the ones who use Password_1 as their password because password complexity guidelines don't let them use strings like qu;;o5Eey9aiV-ai3FexiC<a7cu2hGhi|g}e (okay, so that's not entirely their fault, but I'm not above a cheap shot now and then (trolled people are people who make exploitable mistakes)), open every document sent to them from a vaguely official looking e-mail address, and are trained from an early age to click on buttons that make error messages go away.  Let's not forget those wonderful people who forward our trojaned documents to entire teams and make it rain shells.

App developers are the ones who demand that sysadmins not lock their shit down because they don't know how to write robust code (you'd be amazed at the e-mail threads where a stupid bug made by a dev was blamed on a security patch) and who say that many different classes of RCE are theoretical and thus a waste of time to mitigate (protip: getting caught selling 0-days in your own code is a career limiting move).  Analysts who spend more time at work surfing porn than looking at system logs or vuln reports are always fun, too; plus, if you pop their laptops they occasionally have security reports that make life easier for us in the short term.

Exocortices: A Definition of a Technology

By The Doctor

A common theme of science fiction in the transhumanist vein, and less commonly in applied (read: practical) transhumanist circles, is the concept of having an exocortex either installed within oneself or interfaced in some way with one's brain to augment one's intelligence.  To paint a picture with a fairly broad brush, an exocortex is a system, postulated by JCR Licklider in the research paper Man-Computer Symbiosis, which would implement a new lobe of the human brain situated outside of the organism (though some components of it might be internal).  An exocortex would be a symbiotic device providing additional cognitive capacity or new capabilities that the organism previously did not possess, such as:

  • Identifying and executing cognitively intensive tasks (such as searching for and mining data for a project) on behalf of the organic brain, in effect freeing up CPU time for the wetware.
  • Adding additional density to existing neuronal networks to more rapidly and efficiently process information.  Thinking harder as well as faster.
  • Providing databases of experiential knowledge (synthetic memories) for the being to “remember” and act upon.  Skillsofts, basically.
  • Adding additional “execution threads” to one’s thinking processes.  Cognitive multitasking.
  • Modifying the parameters of one’s consciousness, for example, modulating emotions to suppress anxiety and/or stimulate interest, stimulating a hyperfocus state to enhance concentration, or artificially inducing zen states of consciousness.
  • Expanding short-term memory beyond baseline parameters.  For example, mechanisms that translate short-term memory into long-term memory significantly more efficiently.
  • Adding I/O interfaces to the organic brain to facilitate connection to external networks, processing devices, and other tools.

What I consider early implementations of such a cognitive prosthetic exist now and can be constructed using off-the-shelf hardware and open source software.  Observing that technologies build on top of one another to advance in sophistication, one might carefully state that these early implementations may pave the way for the sci-fi implementations of the future.  While a certain amount of systems administration and engineering know-how is required at this time to construct a personal exocortex, the same system automation used for at-scale deployments can be used to set up and maintain a significant amount of exocortex infrastructure.  Personal interface devices (smartphones, tablets, smart watches, and other wearable devices) are highly useful I/O devices for exocortices, and probably will be until such time as direct brain interfaces are widely available and affordable.  There are also several business models inherent in exocortex technology, but it should be stressed that potential compromises of privacy, issues of trust, and legal matters (in particular in the United States and European Union) are also inherent.  This part of the problem space is insufficiently explored, and thus expert assistance is required.

Here are some of the tools that I used to build my own exocortex:

Ultimately, computers are required to implement such a prosthesis.  Lots of them.  I maintain multiple virtual machines at a number of hosting providers around the world, all running various parts of the infrastructure of my exocortex.  I also maintain multiple physical servers to carry out tasks which I don’t trust to hardware that I don’t personally control.  Running my own servers also means that I can build storage arrays large enough for my needs without spending more money than I have available at any one time.  For example, fifty terabytes of disk space on a SAN in a data center might cost hundreds of dollars per month, while the one-time cost of a RAID-5 array with at least one hot spare drive for resiliency was roughly the same.  Additionally, I can verify and validate that the storage arrays are suitably encrypted and sufficiently monitored.
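The capacity arithmetic behind that comparison is easy to sketch.  The drive counts and sizes below are illustrative assumptions, not a description of my actual array:

```python
def raid5_usable_tb(num_drives: int, drive_tb: float, hot_spares: int = 1) -> float:
    """Usable capacity of a RAID-5 array: one drive's worth of space
    goes to parity, and hot spares sit idle until a member fails."""
    active = num_drives - hot_spares
    if active < 3:
        raise ValueError("RAID-5 needs at least three active drives")
    return (active - 1) * drive_tb

# Example: eight 8 TB drives with one hot spare leaves 48 TB usable.
usable = raid5_usable_tb(num_drives=8, drive_tb=8.0)
```

A RAID-5 array tolerates a single drive failure; the hot spare buys time to rebuild before a second failure can destroy the array.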

Back end databases are required to store much of the information my exocortex collects and processes.  They not only hold data for all of the applications that I use (and are used by my software), they serve as the memory fields for the various software agents I use.  The exact databases I use are largely irrelevant to this article because they all do the same thing, just with slightly different network protocols and dialects of SQL.  The databases are relational databases right now because the software I use requires them, but I have schemes for document and graph databases for use with future applications as necessary.  I also make use of several file storage and retrieval mechanisms for data that parts of me collect: Files of many kinds, copies of web pages, notes, annotations, and local copies of data primarily stored with other services.  I realize that it sounds somewhat stereotypical for a being such as myself to hoard information, but as often as not I find myself needing to refer to a copy of a whitepaper (such as Licklider’s, referenced earlier in this article) and having one at hand with a single search.  Future projects involve maintaining local mirrors of certain kinds of data for preferential use due to privacy issues and risks of censorship or even wholesale data destruction due to legislative fiat or political pressure.

Arguably, implementing tasks is the most difficult part.  A nontrivial amount of programming is required, as well as in-depth knowledge of interfacing with public services, authentication, security features… thankfully there are now software frameworks that abstract much of this detail away.  After many years of building, rebuilding, and fighting to maintain an army of software agents, I ported much of my infrastructure over to Andrew Cantino’s Huginn.  Huginn is a software framework which implements several dozen classes of semi-autonomous software agents, each of which is designed to carry out one kind of task: sending an HTTP request to a web server, filtering events by content, emitting events based upon some external occurrence, or sending events to other services.  The basic concept behind Huginn is the same as the UNIX philosophy: every functional component does one thing, and does it very well.  Events generated by one agent can be ingested by other agents for processing.  The end result is greater than the sum of the results achieved by each individual agent.  To be sure, I’ve written lots of additional software that plugs into Huginn because there are some things it’s not particularly good at, mainly very long-running tasks that require minimal user input at random intervals but produce a great deal of output.
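Huginn itself is a Ruby on Rails application configured through its web interface, but the core idea (each agent does one job and hands its events downstream) can be illustrated with a toy pipeline.  The sketch below is Python and purely conceptual; the agent names echo Huginn’s WebsiteAgent and TriggerAgent, but this is not Huginn’s actual API:

```python
from typing import Callable, Iterable

Event = dict
Agent = Callable[[Iterable[Event]], Iterable[Event]]

def website_agent(events: Iterable[Event]) -> Iterable[Event]:
    # Stand-in for a WebsiteAgent, which would fetch a page; here we
    # emit a canned event instead of touching the network.
    yield {"url": "https://status.example.com", "text": "service DEGRADED at 03:12"}

def trigger_agent(events: Iterable[Event]) -> Iterable[Event]:
    # Like a TriggerAgent: pass through only events whose payload
    # matches a condition, here a simple keyword.
    for event in events:
        if "DEGRADED" in event["text"]:
            yield event

def notification_agent(events: Iterable[Event]) -> Iterable[Event]:
    # A real deployment might send email or a push notification;
    # here we just format a message.
    for event in events:
        yield {"message": f"ALERT: {event['url']} reports a problem"}

def run_pipeline(*agents: Agent) -> list:
    # Chain the agents so each consumes the previous one's events.
    events: Iterable[Event] = []
    for agent in agents:
        events = agent(events)
    return list(events)

alerts = run_pipeline(website_agent, trigger_agent, notification_agent)
```

The point is the composition: each function is trivial on its own, and the useful behavior emerges from wiring them together, exactly the UNIX-pipeline philosophy Huginn borrows.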

Storing large volumes of information requires the use of search mechanisms to find anything.  It can be a point of self-discipline to carefully name each file, sort them into directories, and tag them, but when you get right down to it that isn’t a sustainable effort.  My record is eleven years before giving up and letting search software do the work for me, and much more efficiently at that.  Primarily I use the YaCy search engine to index websites, documents, and archives on my servers because it works better than any search engine I’ve yet tried to write, has a reasonable API to interface with, and can be run as an isolated system (i.e., not participating in the global YaCy network, which is essential for preserving privacy).  When searching the public Net I use several personal instances of Searx, an open source meta-search engine that is highly configurable, hackable, and also presents a very reasonable API.
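As a concrete example, Searx can return its results as JSON when a search request includes the format=json parameter.  The instance hostname below is a placeholder for one’s own deployment, and some instances disable the JSON format, so treat this as a sketch:

```python
import json
import urllib.parse
import urllib.request

def searx_query_url(base_url: str, query: str) -> str:
    # Build a search URL asking the instance for JSON output via
    # Searx's format=json parameter.
    params = urllib.parse.urlencode({"q": query, "format": "json"})
    return f"{base_url.rstrip('/')}/search?{params}"

def search(base_url: str, query: str) -> list:
    # Fetch and decode results; each entry is a dict with keys such
    # as 'title' and 'url'.  Only point this at an instance you run.
    with urllib.request.urlopen(searx_query_url(base_url, query)) as response:
        return json.load(response).get("results", [])

url = searx_query_url("https://searx.example.net", "man-computer symbiosis")
```

A software agent can call an endpoint like this and feed the result list into whatever filtering or archiving pipeline comes next.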

I make a point of periodically backing up as much of my exocortex as possible.  One of the first automated jobs I set up on a new server, or when installing new software, is an automatic backup of the application and its datastores several times a day.  In addition, those backups are mirrored to multiple locations on a regular basis, and those locations copy everything to cold storage once a day.  To conserve mass storage space I keep a one-month rotation period for these backups; copies older than 31 days are automatically deleted to reclaim space.
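That rotation policy is simple enough to sketch.  The function below assumes a flat directory of backup files; a real deployment would run something like it from cron after the mirroring step, and the path passed in would be your own:

```python
import time
from pathlib import Path

def prune_backups(backup_dir: str, max_age_days: int = 31) -> list:
    # Delete backup files older than the cutoff and report what was
    # removed, mirroring the 31-day rotation described above.
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for entry in Path(backup_dir).iterdir():
        if entry.is_file() and entry.stat().st_mtime < cutoff:
            entry.unlink()
            removed.append(entry.name)
    return sorted(removed)
```

Run daily, this keeps the retention window at a steady 31 days without any manual housekeeping.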

In future articles, I’ll talk about how I built my own exocortex, what I do with it, and what benefits and drawbacks it has in my everyday life.  I will also, of course, discuss some of the security implications and a partial threat model for exocortices.  I will also write about what I call “soft transhumanism” in the future, personal training techniques that form the bedrock of my technologically implemented augmentations.

After a talk given at HOPE XI.

This is the first in a series of articles by The Doctor.

The Doctor is a security practitioner working for a large Silicon Valley software-as-a-service company on next-generation technologies to bring reality and virtuality closer together.  His professional background includes security research, red team penetration testing, open source intelligence analysis, and wireless security.  When not reading hex dumps, auditing code, and peeking through sensors scattered across the globe he travels through time and space inside a funny blue box, contributes designs and code to a number of open-source hardware and software projects, and hacks on his exocortex, a software ecosystem and distributed cognitive prosthesis that augments his cognitive capabilities and assists him in day-to-day life by collecting information and farming out personally relevant tasks.  His primary point of presence is the blog Antarctica Starts Here.


Old and busted: Learned helplessness. New hotness: Nihilism as survival trait.


by: Pariah McCree

They didn’t censor the gunshots, or the people getting hurt, or the blood… they blurred out a guy in the crowd standing up and flipping off the shooter. I guess we know where their priorities are.

Last night I finally had the opportunity to spend some quality time with a partner I don’t get to see very often, because our lives reside in vastly different orbits these days.  This quality time consisted of sitting on their couch sipping bourbon, shooting the shit, and occasionally glancing over at the television showing a queue full of episodes of Rick and Morty.  I’m not particularly interested in television but I can certainly appreciate the antics of a sarcastic, hedonistic, substance-using-and-abusing mad scientist.  At some point last night, said partner’s phone began making noises as if it were about to explode, or possibly perish of a combination stroke and heart attack.  Knowing them as long as I have and being a product of my time, I immediately extracted my smartphone and began scanning social media.  Did the Oompa-Loompa in Chief finally start World War III?

“Bah.  Another mass shooting, this one at the hotel I stayed at this summer.  Whatever.”

I dropped both phone and drink, committing one of the few sins I actually care about (alcohol abuse).  “What?”

“Another mass shooting, this one at a festival in Las Vegas.  Initial reports are forty fatalities and at least two hundred injured.  Data is still being compiled.”  They sipped their drink and tossed their own phone onto the coffee table as Rick prattled on about the racial epithets of alien species.

“No,” I said.  “You just blew it off.  This isn’t like you.  How much did you have?”

“Just the one,” they said.  “It’s just another fucking mass shooting.  They’ll happen more and more often as people get more and more crazy.  After your ninth or tenth you stop rising to take the bait and flow with it.”


I don’t mind saying that I spent the rest of the evening drinking quietly and staring at my long-time friend, co-conspirator, and lover. This is a person who sends flowers when someone’s cat dies, and wept upon discovering that a hamster’s disappearance was due to the creature hiding in its cage to expire quietly. And they’re not even breaking a sweat upon discovering that several hundred people (at last count, more than 500 injuries and almost 60 deaths) were on the wrong end of a jackass with a room full of guns and a week’s worth of ammunition in downtown Las Vegas? Some days I’m not sure how human they are, but last night took the taco. I stayed as far away from the coat closet as I could, lest a cyborg facehugger spring from the shadows and shove its ovipositor down my throat.

In the shower this morning, where I always do my best thinking (don’t you?), I rolled the events of the previous night around in my head. By my count, there have been about 115 mass shootings since the year 2000. While the numbers bounce around a little bit they’re steadily creeping upward.  Add to that the sheer insanity of the past year and… this is our new normal. The aftermath of the Las Vegas massacre is probably going to unfold just like all the others.

The shooter was a wealthy, retired white guy, so there goes the narrative of “brown people who are also Muslim killing good Christian ‘muricans (fuck yeah!).” Nobody could possibly have predicted that this poor, sweet man was going through such a bad time, he was mentally ill and acting on his own, he wasn’t radicalized at all… blah blah fucking blah. If he hadn’t offed himself before the SWAT team blew his door they’d have escorted him safely down to the basement garage of the hotel and whisked him away before sending a talking head to give a statement to the press. Once again, nothing substantial is going to happen, or at least nothing good. The usual talking heads are all whining that they don’t know how such a thing could have happened; they may as well re-run the interviews from the last nine shootings involving white guys and be done with it. The usual “abolish all gun laws,” “armed people don’t get gunned down,” and “Second Amendment uber alles!” crowd is running its mouth, and that rattling sound you hear is the NRA shuttling money around back channels.  People who absolutely cannot wrap their heads around the fact that a retiree might decide to open fire on a crowd for no good reason at all are blaming everything from an Illuminati human sacrifice ritual to forgotten MK-ULTRA deep-cover agents getting the go-code, and they’re shitting up the Internet with their wild-ass speculations. The Oompa-Loompa in Chief is barely responding, per usual. He was too busy golfing to pay attention until somebody suggested that it would make him look more presidential to say something sympathetic in front of a camera. The Onion, possibly the last bastion of sarcasm-as-uncomfortable-observation, has re-run its “mass shooting in America” article once more, again changing only the dates and location.

As much as it makes my cold, black heart ache, I find myself agreeing with my partner-in-crime. We’re rapidly approaching a state of being in which the possibility of being gunned down at any moment by some rando is the new normal. Ironically — and this is the part that really fucks with me — some of the news media felt a need to censor some of the social media footage before airing it. They didn’t censor the gunshots, or the people getting hurt, or the blood… they blurred out a guy in the crowd standing up and flipping off the shooter. I guess we know where their priorities are. Conservative mouthpiece Bill O’Reilly even went so far today as to say that the shooting in Las Vegas last night was simply “the price of freedom.”

I now think I understand why my partner was so unfazed last night.  When you live in a world in which the worth of a single life pales in comparison to the value of being able to take a life with ease, one’s self-worth also diminishes. The math scarily balances: one life equals one death. It almost doesn’t even seem worth taking basic precautions to protect one’s safety, does it? The possibility that one may die a violent death at any moment is so real that acceptance and preparation seem like the most obvious course of action, because there really is no solution. Lies are the only things that matter, and evidence stating anything else is ignored or mocked. Preconceptions born of bad television mean more than answers from real-life experts.

The takeaway from last night’s bourbon-and-cartoon marathon? The bit that stuck with me, aside from wondering if somebody I care about finally gave up?

“Nobody exists on purpose. Nobody belongs anywhere. Everybody’s gonna die.  Come watch TV.”

Goddamn.