track all lines seen and all lines said by Markov. every 30 seconds,
if there have been more than 20 such lines, and Markov is responsible
for roughly half of them, then shut up for 30 seconds, because the
bot probably got stuck talking to another bot.
this should mean that such an infinite reply loop can't last for
more than a minute.
i'm not entirely sure on the 30 sec/20 lines ratio. this may need
tuning.
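roughly the shape of it in python (a sketch; the class, counter, and
threshold names here are made up, not the actual code):

    import time

    class LoopGuard:
        """mute the bot when it seems stuck talking to another bot."""

        CHECK_INTERVAL = 30   # seconds between checks (and mute length)
        LINE_THRESHOLD = 20   # lines that must accumulate before we care

        def __init__(self):
            self.lines_seen = 0   # every line in the channel
            self.lines_said = 0   # lines the bot itself produced
            self.muted_until = 0

        def on_line(self, from_bot):
            self.lines_seen += 1
            if from_bot:
                self.lines_said += 1

        def check(self):
            # called every CHECK_INTERVAL seconds
            if (self.lines_seen > self.LINE_THRESHOLD and
                    self.lines_said >= self.lines_seen / 2):
                self.muted_until = time.time() + self.CHECK_INTERVAL
            self.lines_seen = 0
            self.lines_said = 0

        def should_speak(self):
            return time.time() >= self.muted_until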
when finding a key for (__start1,__start2), instead of fetching all
of them (which can be a lot, in chatty channels and/or over time),
get the max ID in the table, pick a random ID between 1 and max, and
use the first ID >= it. just as random, nowhere near as intensive.
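something like this, assuming a sqlite-ish table with an integer id
column (table and column names are guesses):

    import random

    def random_start_value(conn):
        # one cheap aggregate instead of fetching every start row
        max_id = conn.execute("SELECT MAX(id) FROM markov").fetchone()[0]
        if not max_id:
            return None
        target = random.randint(1, max_id)
        # ids may be sparse, so take the first start row at or past
        # the random target (a caller could retry on None)
        row = conn.execute(
            "SELECT v FROM markov "
            "WHERE k1 = '__start1' AND k2 = '__start2' AND id >= ? "
            "ORDER BY id LIMIT 1", (target,)).fetchone()
        return row[0] if row else None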
a context is a meta-classification ('banter', 'secrets', whatever)
based on targets (channels or nicknames). when a line is being
learned from a known target, the chains are placed in that context.
this is for allowing one brain to have multiple personalities, in
a sense, for large networks or cases where there may be a more
sanitized set of channels and a couple channels where everyone lets
it rip. a later enhancement would have sentence creation choose from
context-less chains (and contexts matching the current target), but
i need to go back to the drawing board on that one a bit.
ramble ramble ramble
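a sketch of the learn side of it, with a made-up context map and
table layout:

    # hypothetical mapping of targets (channels/nicks) to contexts
    CONTEXT_MAP = {
        '#superfriends': 'banter',
        '#ops': 'secrets',
    }

    def learn_line(conn, target, words):
        # chains learned from a known target get that target's
        # context; everything else is context-less (NULL)
        context = CONTEXT_MAP.get(target)
        keys = ['__start1', '__start2'] + words + ['__stop']
        for k1, k2, v in zip(keys, keys[1:], keys[2:]):
            conn.execute(
                "INSERT INTO markov (context, k1, k2, v) "
                "VALUES (?, ?, ?, ?)", (context, k1, k2, v))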
e.g. if i say 'dr_botzo: hello dude', he only learns 'hello dude'.
this is mainly being done because the bot's name being in the brain
so many times was getting kind of silly, especially in channels that
have lots of conversations with the bot.
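the strip is basically a one-liner (a sketch; the real pattern may
differ):

    import re

    def strip_address(line, botname='dr_botzo'):
        # 'dr_botzo: hello dude' -> 'hello dude'
        return re.sub(r'^' + re.escape(botname) + r'[:,]\s*', '', line)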
somehow a chain led us down a path where there are no values for
the keys in the chain. if that happens, just abort.
i'm not quite sure how this could happen.
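the guard amounts to something like this (hypothetical table names
again):

    import random

    def next_word(conn, k1, k2):
        rows = conn.execute(
            "SELECT v FROM markov WHERE k1 = ? AND k2 = ?",
            (k1, k2)).fetchall()
        if not rows:
            return None   # dead end, caller aborts the sentence
        return random.choice(rows)[0]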
the motivation here is that doing '(foo)++' would match \S+ first,
adding '(foo)' to the karma database (rather than 'foo', which is
probably what the user meant).
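a toy version of the reordered pattern, trying the parenthesized form
before the bare word (not the module's actual regex):

    import re

    KARMA_RE = re.compile(r'(?:\(([^)]+)\)|(\S+?))(\+\+|--)')

    def parse_karma(text):
        m = KARMA_RE.search(text)
        if not m:
            return None
        key = m.group(1) or m.group(2)
        return key, 1 if m.group(3) == '++' else -1

    # parse_karma('(foo)++') -> ('foo', 1), not ('(foo)', 1)
    # parse_karma('(foo bar)--') -> ('foo bar', -1)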
this eliminates the expensive database hit on every request for a line.
the cache is loaded when the module loads, and learning new lines should
add the appropriate word to the list. seemed like a pretty good compromise.
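the shape of the compromise, with made-up names:

    import random

    class StartWordCache:
        def __init__(self, conn):
            # one expensive select at module load, instead of one per
            # requested line
            self.words = [row[0] for row in conn.execute(
                "SELECT v FROM markov "
                "WHERE k1 = '__start1' AND k2 = '__start2'")]

        def add(self, word):
            # called whenever a new line is learned
            self.words.append(word)

        def random_start(self):
            return random.choice(self.words) if self.words else None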
this keeps us from having the entire markov chain in memory and
having to do the pickling and so on. in many ways, this is a good
thing.
in one way, this is a bad thing. each line on irc will create a
__start1,__start2 item in the database, which means starting a
chain will be an expensive process (approx 3 seconds, from irc
logs of roughly 600,000 lines). subsequent selects run much faster,
but the first one is dog slow. a later commit should hopefully fix this.
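for reference, a guess at the on-disk shape (the real schema may well
differ):

    import sqlite3

    conn = sqlite3.connect('dr_botzo.db')
    # every learned line inserts a (__start1, __start2) -> first-word
    # row, which is why that one key grows huge
    conn.execute("""CREATE TABLE IF NOT EXISTS markov (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        k1 TEXT NOT NULL,
        k2 TEXT NOT NULL,
        v TEXT NOT NULL)""")
    conn.execute(
        "CREATE INDEX IF NOT EXISTS markov_key ON markov (k1, k2)")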
this just sends a privmsg to the specified target on the specified
connection. pretty straightforward. also, update the modules that
need it to use it.
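the helper is basically just this (hypothetical name;
connection.privmsg is the underlying irclib call):

    def sendmsg(self, connection, target, text):
        # one shared place to do this, instead of every module
        # touching the irc library directly
        if text is not None:
            connection.privmsg(target, text)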
more of a move of the code, actually: it now exists in (an overridden)
_handle_event, so that recursions happen against irc events directly,
rather than against an already partially interpreted object.
with this change, modules don't need to implement do() nor do we have a
need for the internal_bus, which was doing an additional walk of the
modules after the irc event was already handled and turned into text. now
the core event handler does the recursion scans.
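a guess at the new shape; module.handle() and the bracket syntax for
nesting are illustrative, not necessarily the real code:

    import re

    NESTED = re.compile(r'\[([^\[\]]+)\]')   # innermost [command] first

    class Bot:
        def __init__(self, modules):
            self.modules = modules

        def _handle_event(self, connection, event, text, replypath):
            # resolve nested commands innermost-first, against the irc
            # event itself, before the final module walk
            match = NESTED.search(text)
            while match:
                inner = self._run_modules(connection, event,
                                          match.group(1), None)
                text = (text[:match.start()] + (inner or '') +
                        text[match.end():])
                match = NESTED.search(text)
            # final pass: replypath is set, so a match speaks on irc
            self._run_modules(connection, event, text, replypath)

        def _run_modules(self, connection, event, text, replypath):
            for module in self.modules:
                reply = module.handle(connection, event, text, replypath)
                if reply is not None:
                    return reply
            return None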
to support this, we bring back the old replypath trick and use it again,
so we know when to send a privmsg reply and when to return text so that
it may be chained in recursion. this feels old hat by now, but if you
haven't been following along, you should really look at the diff.
that's the meat of the change. the rest is updating modules to use
self.reply() and reimplementing (un)register_handlers where appropriate.
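the replypath trick, boiled down (names illustrative):

    class Module:
        def reply(self, connection, replypath, text):
            # nested inside another command: return the text so the
            # core handler can chain it through recursion
            if replypath is None:
                return text
            # top of the recursion: just say it on irc
            connection.privmsg(replypath, text)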