Semi-autonomous software agents: A personal perspective.

by: The Doctor [412/724/301/703/415]

So, after going on for a good while about software agents, you're probably wondering why I have such an interest in them. I started experimenting with my own software agents in the fall of 1996, when I first started undergrad. When I went away to college I finally had an actual network connection for the first time in my life (where I grew up, the only access I had was through dialup) and I wanted to abuse it. Not in the way the rest of my classmates were abusing theirs, but to do things I actually had an interest in. So, the first thing I did was set up my own e-mail server with Qmail and subscribe to a bunch of mailing lists, because that's where all of the action was at the time. I also rapidly developed a list of websites that I checked once or twice a day because they were often updated with articles I found interesting. It was through those communication fora that I discovered the research papers on software agents that I mentioned in earlier posts in this series. I soon discovered that I'd bitten off more than I could chew, especially when some mailing lists went realtime (which is when everybody starts replying to one another more or less the second they receive a message) and I had to check my e-mail every hour or so to keep from running out of disk space.

Rather than do the smart thing (unsubscribing from a few 'lists), I decided to work smarter, not harder, and see if I could use some of the programming languages I was playing with at the time to help. I've found over the years that it's one thing to study a programming language academically, but to really learn one you need a toy project to work out its ins and outs. So, I wrote some software that would crawl my inbox, scan messages for certain keywords or phrases, move the matches into a folder where I'd see them immediately, and leave the rest for later. I wrote some shell scripts, and when those weren't enough I wrote a few Perl scripts (say what you want about Perl, but it was designed first and foremost for efficiently chewing on data). Later, when even that wasn't enough, I turned to C to implement some of the tasks I needed Leandra (my home machine at the time) to carry out.
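
None of those original scripts survive, but the core idea was simple enough that a rough modern sketch in Python gets it across. The Maildir paths and the keyword list below are placeholders, not what I actually used back then:

    #!/usr/bin/env python3
    """Rough modern sketch of the old inbox-scanning scripts (originally shell
    and Perl).  The paths and keywords are placeholders, not the originals."""

    import mailbox
    import os

    KEYWORDS = ["software agent", "neural network", "exocortex"]  # hypothetical watch list

    inbox = mailbox.Maildir(os.path.expanduser("~/Maildir"), create=False)
    urgent = mailbox.Maildir(os.path.expanduser("~/Maildir/.urgent"), create=True)

    def interesting(message):
        """Return True if the subject or any text/plain part mentions a keyword."""
        text = message.get("Subject", "") or ""
        for part in message.walk():
            if part.get_content_type() == "text/plain":
                payload = part.get_payload(decode=True) or b""
                text += payload.decode("utf-8", errors="replace")
        text = text.lower()
        return any(keyword in text for keyword in KEYWORDS)

    for key in list(inbox.keys()):
        message = inbox[key]
        if interesting(message):
            urgent.add(message)   # file it where I'll see it immediately
            inbox.remove(key)     # ...and take it out of the main inbox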

Because Netscape Navigator was highly unreliable on my system for reasons I was never quite clear on (it used to throw bus errors all over the place), I wasn't able to consistently keep up with my favorite websites at the time. While the idea of update feeds had been kicking around as far back as 1995, they didn't exist in practice until the RSS 0.9 specification was published in 1999, and Atom didn't appear until 2003, so I couldn't just point a feed reader at the sites. So I wrote a bunch of scripts that used lynx -dump http://www.example.com/ > ~/websites/www.example.com/`date '+%Y%m%d-%H:%M:%S'`.txt and diff to detect changes and tell me which sites to look at when I got back from class.
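
A functionally equivalent sketch in Python looks something like this: lynx still does the rendering, difflib stands in for diff, and the site list and archive directory are placeholders rather than my real configuration.

    #!/usr/bin/env python3
    """Sketch of the old "lynx -dump plus diff" site watcher.  The URLs and the
    archive directory are placeholders."""

    import difflib
    import pathlib
    import subprocess
    import time

    SITES = ["http://www.example.com/"]          # hypothetical watch list
    ARCHIVE = pathlib.Path.home() / "websites"   # one subdirectory per site

    for url in SITES:
        name = url.split("//", 1)[-1].strip("/").replace("/", "_")
        site_dir = ARCHIVE / name
        site_dir.mkdir(parents=True, exist_ok=True)

        # lynx -dump renders the page to plain text, just like the old one-liner.
        text = subprocess.run(["lynx", "-dump", url], capture_output=True,
                              text=True, check=True).stdout

        snapshots = sorted(site_dir.glob("*.txt"))
        stamp = time.strftime("%Y%m%d-%H:%M:%S")
        (site_dir / (stamp + ".txt")).write_text(text)

        if snapshots:
            previous = snapshots[-1].read_text()
            changes = list(difflib.unified_diff(previous.splitlines(),
                                                text.splitlines(), lineterm=""))
            if changes:
                print(url, "changed since the last check:", len(changes), "diff lines")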

That was one of the prettier sequences of commands I put together, too. This kept going for quite a few years, and the legion of software agents I had running on my home machine grew out of control. As opportunities presented themselves, I upgraded Leandra as best I could, from an 80486 to an 80586 to a P-III, with corresponding increases in RAM and disk space. As one does. Around 2005 I discovered the novel Accelerando by Charles Stross and decided to call this system of scripts and daemons my exocortex, after the network of software agents that the protagonist of the first story arc had much of his mind running on throughout the story.

In early 2014 I got tired of maintaining all of the C code (which was, to be frank, terrible, because those were my first C projects), shell scripts (lynx and wget+awk+sed+grep+cut+uniq+wc+…), and Perl scripts to keep everything going as I upgraded my hardware and software and had to stay on top of site redesign after site redesign… so I started rewriting Exocortex as an object-oriented framework in Python, in part as another toy project and in part because I genuinely enjoy writing Python. I worked on the project for a couple of months and put my code on GitHub because I figured that eventually someone might find it helpful. When I had a first cut more or less stable I showed it to a friend at work, who immediately said "Hey, that works just like Huginn!"

I took one look at Huginn, realized that it did everything I wanted plus more by at least three orders of magnitude, and scrapped my codebase in favor of a Huginn install on one of my servers.

Porting my existing legion of software agents over to Huginn took about four hours; it used to take me between two and twelve hours to get a new agent up and running, due to the amount of new code I had to write, even if I used an existing agent as a template. Just looking at the effort/payoff tradeoff, there really wasn't any competition. So, since that time I've been a dedicated Huginn user and hacker, and it's been more or less seamlessly integrated into my day to day life as an extension of myself. I find it strange, even somewhat alarming, when I don't hear from any of my agents (which usually means that something locked up, which still happens from time to time), but I've been working with the core development team to make Huginn more stable. I find it's a lot more stable than my old system was, simply because it's an integrated framework and not a constantly mutating patchwork of code in several languages sharing a back-end. Additionally, my original exocortex network was made up of many very complex agents: one was designed to monitor Slashdot, another GitHub, another a particular IRC network. Huginn instead offers a large number of agent types, each of which carries out a specific task (like providing an interface to an IMAP account or scraping a website). By customizing the configuration of each agent instance and wiring the instances together, you can build an agent network that carries out very complex tasks which would otherwise require significant amounts of programming time.

So, because it’s been my considered experience that Huginn is the hottest thing since sliced bread, here are some of its capabilities:

  • Huginn supports the concept of scenarios – groups of arbitrary agents put together under a common name to help organize them. I group mine together by functional category and give them names that I find appropriate, which I’ll go into later. Scenarios can also be exported into files to back them up or share them with other people.
  • Visualization of scenarios as flow charts (sort of) with Graphviz. Not only do they make pretty pictures, but I find that having flowcharts for my more complex agent networks is essential for debugging. And to think I resented having to draw all those flow charts as a kid…
  • The Liquid templating language is used to format events and assemble messages as they pass between agents, and just before output is sent to the user. I find Liquid mostly easy to work with, but it's not terribly good at dicing up and recombining strings, and when it really comes down to that I have to write extra code outside of Huginn.
  • Huginn maintains a collection of credentials separate from agent code that can be substituted in with templating tags to prevent hardcoding passwords or API keys in agents. This means that it’s relatively safe to share exported agent networks.
  • Huginn’s internal scheduler can run agent jobs anywhere from once a minute to every couple of hours to specific times of day. If you incorporate a Scheduler Agent instance into an agent network, you are basically implementing cron with just three lines of code. This gives you much greater flexibility in when some of your agents will run.
  • Huginn's agents are configured in a simple domain-specific language based on JSON. To be more accurate, if you look in the codebase there is only one copy of each kind of agent; the scheduler runs that code with many different sets of configuration options, and each configuration behaves like a distinct agent within the same unified framework. It's a little hard to explain in prose, so the sketch below illustrates the idea.
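
The idea is easier to show than to describe. The toy Python sketch below is emphatically not Huginn's actual code (Huginn is written in Ruby, and its real agent options are richer than this), but it captures the shape of the thing: one class per agent type, plus a pile of stored JSON configuration blobs, each of which acts like a separate agent when the scheduler runs it.

    # Conceptual sketch only: Huginn is written in Ruby and this is not its code.
    # The point is that one agent class plus many stored configurations yields
    # many logical agents.
    import json

    class WebsiteWatcher:
        """One copy of the agent code, shared by every configured instance."""

        def __init__(self, options):
            self.options = options  # the per-instance JSON configuration

        def check(self):
            print("checking", self.options["url"], "every", self.options["every"])

    # Each JSON blob behaves like a separate agent when it's run, even though
    # there is only one WebsiteWatcher class.
    CONFIGS = [
        '{"url": "https://slashdot.org/", "every": "30m"}',
        '{"url": "https://github.com/", "every": "1h"}',
    ]

    for blob in CONFIGS:
        WebsiteWatcher(json.loads(blob)).check()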

So, the ultimate question seems to be “What do I use it for?”

The plain and simple answer is: a great many things. I use the latest iteration of my Exocortex to carry out several hundred tasks at any one time, and several times that number when I explicitly ask particular software agents to run something for me. Here's a very partial list, which I may or may not elaborate upon later:

  • Monitor the RSS and Atom feeds of several score blogs, filtering for specific keywords and phrases pertaining to things I'm interested in (a stripped-down sketch of this kind of filter appears after this list).
  • Monitor the RSS and Atom feeds of several score news sites around the Net, both mainstream and not. Statistical and sentiment analysis are carried out alongside keyword and keyphrase matching to find breaking events that are potentially of interest to me.
  • Monitor weather forecasts in several locations around the world that I’m likely to travel to or have family at.
  • Monitor the number of people listening to various police scanners hooked into the Net. If the number of listeners spikes significantly, send a high priority alert with a direct link to that scanner’s audio feed.
  • Data mine several cryptocurrency networks and exchanges to detect transaction patterns and observe unusual network dynamics.
  • Monitor information security advisory feeds, mailing lists, and sites for published vulnerabilities and exploits for signs that software I use may be in danger.
  • Watch the stock prices and trade volumes of the 20 largest defense contractors and petrochemical companies on the planet for unusual changes. For many years, I’ve found that this is a very good way of forecasting the geopolitical weather.
  • Carry out general secretarial duties pertaining to my various e-mail and voicemail boxes.
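
To give a flavor of the simplest case, the feed-watching agents boil down to something like the Python sketch below. In practice this lives inside Huginn as a small network of agents rather than a standalone script, and the feed URLs and keywords here are placeholders rather than my real watch lists.

    #!/usr/bin/env python3
    """Standalone sketch of the simplest kind of feed-watching agent.  The feed
    URLs and keywords are placeholders."""

    import feedparser  # third-party library: pip install feedparser

    FEEDS = ["https://www.example.com/feed.xml"]   # hypothetical feed list
    KEYWORDS = ["exocortex", "software agent"]     # hypothetical watch list

    for url in FEEDS:
        feed = feedparser.parse(url)
        for entry in feed.entries:
            text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
            if any(keyword in text for keyword in KEYWORDS):
                print(feed.feed.get("title", url), "::", entry.get("title"))
                print("   ", entry.get("link"))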

Over the years I've worked out a hierarchy of priorities that different kinds of events map to. Low priority alerts are sent to my primary mobile device as e-mail digests every couple of hours, which I browse or skip as I have time and compute cycles for. Moderate priority alerts are sent as individual e-mails to my primary mobile device, where I glance at them once an hour or so. High priority alerts used to be sent to one of the XMPP accounts that I log into from my phone, but lately I've been growing more and more frustrated with XMPP4r, the XMPP protocol implementation Huginn uses, and I've been experimenting with other ways of getting those alerts to my attention in a timely manner. When I've worked out something that works well and have running code, I'll publish more about it. Crash priority alerts take the form of telephone calls, placed through a voice-over-IP service and generated by a speech synthesizer I put together some time ago.
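
Stripped of the actual delivery plumbing (SMTP, XMPP, VoIP, speech synthesis), the routing logic is nothing more exotic than a lookup along these lines. The channel stubs in this sketch are placeholders, not my real configuration:

    # Illustrative sketch of the priority-to-channel routing described above.
    # The channel functions are stubs; the real delivery plumbing is omitted.
    from enum import Enum

    class Priority(Enum):
        LOW = 1       # batched into periodic e-mail digests
        MODERATE = 2  # individual e-mails, glanced at roughly hourly
        HIGH = 3      # instant message to my phone
        CRASH = 4     # synthesized phone call over VoIP

    def deliver(alert, priority):
        if priority is Priority.LOW:
            print("[digest queue]", alert)
        elif priority is Priority.MODERATE:
            print("[e-mail]", alert)
        elif priority is Priority.HIGH:
            print("[instant message]", alert)
        else:
            print("[phone call]", alert)

    deliver("Listener count on a police scanner feed just spiked", Priority.HIGH)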

This article was originally published at Antarctica Starts Here.
