track all lines seen and all lines said by Markov. every 30 seconds,
if there have been more than 20 such lines, and Markov is responsible
for roughly half of them, then shut up for 30 seconds, because the
bot probably got stuck talking to another bot.
this should mean that an infinite reply loop can't last for more
than a minute.
i'm not entirely sure about the 30 sec/20 lines ratio; it may need
tuning.
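
a minimal sketch of the idea, with illustrative names (not the
actual code):

    import time

    class LoopDetector:
        """shut up for a while if we seem to be stuck talking to another bot."""

        def __init__(self, window=30, threshold=20):
            self.window = window        # seconds between checks
            self.threshold = threshold  # lines per window that count as chatty
            self.seen = 0               # lines seen this window
            self.said = 0               # lines markov said this window
            self.quiet_until = 0

        def record(self, said_by_me):
            self.seen += 1
            if said_by_me:
                self.said += 1

        def check(self):
            """call every `window` seconds; mutes the bot if it looks stuck."""
            stuck = self.seen > self.threshold and self.said * 2 >= self.seen
            self.seen = self.said = 0
            if stuck:
                self.quiet_until = time.time() + self.window
            return stuck

        def muted(self):
            return time.time() < self.quiet_until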
when finding a key for (__start1,__start2), instead of fetching all
of them (which can be a lot, in chatty channels and/or over time), get
the max ID in the table, pick a random ID between 1 and max, and use
the first id >= it. just as random, nowhere near as intensive.
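
a rough sketch, assuming a sqlite table (the table and column names
here are guesses, as is the wrap-around fallback):

    import random

    def random_start(conn):
        """pick a random (__start1, __start2) row without fetching them all."""
        max_id = conn.execute('SELECT MAX(id) FROM markov_chains').fetchone()[0]
        target = random.randint(1, max_id)
        row = conn.execute(
            "SELECT value FROM markov_chains "
            "WHERE k1 = '__start1' AND k2 = '__start2' AND id >= ? "
            "ORDER BY id LIMIT 1", (target,)).fetchone()
        if row is None:
            # nothing at or past target; wrap around to the first start row
            row = conn.execute(
                "SELECT value FROM markov_chains "
                "WHERE k1 = '__start1' AND k2 = '__start2' "
                "ORDER BY id LIMIT 1").fetchone()
        return row[0] if row else None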
a context is a meta-classification ('banter, 'secrets', whatever)
based on targets (channels or nicknames). when a line is being
learned from a known target, the chains are placed in that context.
this is for allowing one brain to have multiple personalities, in
a sense, for large networks or cases where there may be a more
sanitized set of channels and a couple channels where everyone lets
it rip. a later enhancement would have sentence creation choose from
context-less chains (and contexts matching the current target), but
i need to go back to the drawing board on that one a bit.
ramble ramble ramble
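
a sketch of what learning into a context could look like (the
target-to-context mapping and the context column are assumptions):

    # targets (channels or nicknames) mapped to contexts; this would
    # come from configuration
    CONTEXTS = {
        '#worksafe': 'sanitized',
        '#letitrip': 'banter',
    }

    def learn(conn, target, words):
        """store chains, tagged with the context for this target (if any)."""
        context = CONTEXTS.get(target)  # unknown target -> context-less chain
        for k1, k2, value in zip(words, words[1:], words[2:]):
            conn.execute('INSERT INTO markov_chains (k1, k2, value, context) '
                         'VALUES (?, ?, ?, ?)', (k1, k2, value, context))
        conn.commit()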
e.g. if i say 'dr_botzo: hello dude', he only learns 'hello dude'.
this is mainly being done because the bot's name being in the brain
so many times was getting kind of silly, especially in channels that
have lots of conversations with the bot.
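
something like this, assuming we know the bot's current nick:

    import re

    def strip_address(nick, line):
        """turn 'dr_botzo: hello dude' into 'hello dude' before learning."""
        return re.sub(r'^' + re.escape(nick) + r'[:,]\s*', '', line)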
somehow a chain led us down a path where there are no values for
the keys in the chain. if that happens, just abort. i'm not quite
sure how this could happen.
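
the fix is just a guard; roughly (names are illustrative):

    import random

    def next_word(conn, k1, k2):
        rows = conn.execute('SELECT value FROM markov_chains '
                            'WHERE k1 = ? AND k2 = ?', (k1, k2)).fetchall()
        if not rows:
            # no values for this key; shouldn't happen, but abort if it does
            return None
        return random.choice(rows)[0]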
the motivation here is that doing (foo)++ would match \S+ first,
adding (foo) to the karma database (rather than foo, which is
probably what the user meant).
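
the fix amounts to trying the parenthesized pattern before the greedy
one; a sketch:

    import re

    def karma_key(text):
        """prefer the parenthesized form, so (foo)++ bumps 'foo'
        rather than '(foo)'."""
        match = re.search(r'\((.+?)\)\+\+', text)
        if match:
            return match.group(1)
        match = re.search(r'(\S+)\+\+', text)
        if match:
            return match.group(1)
        return None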
this eliminates the expensive database hit on every request for a line.
the cache is loaded when the module loads and learning new lines should
add the appropriate word to the list. seemed like a pretty good compromise.
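
the shape of it, roughly (names are illustrative):

    class WordCache:
        """keep the known-word list in memory so requesting a line doesn't
        have to hit the database every time."""

        def __init__(self, conn):
            self.conn = conn
            # loaded once, when the module loads
            self.words = set(row[0] for row in
                             conn.execute('SELECT DISTINCT k1 FROM markov_chains'))

        def learn(self, k1, k2, value):
            self.conn.execute('INSERT INTO markov_chains (k1, k2, value) '
                              'VALUES (?, ?, ?)', (k1, k2, value))
            self.words.add(k1)  # keep the cache in sync with new lines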
this keeps us from having the entire markov chain in memory and
having to do the pickling and so on. in many ways, this is a good
thing.
in one way, this is a bad thing. each line on irc will create a
__start1,__start2 item in the database, which means starting a
chain will be an expensive process (approx 3 seconds, from irc
logs of 600,000 lines). following selects run much faster, but
the first one is dog slow. a later commit should hopefully fix this.
this just sends a privmsg to the specified target on the specified
connection. pretty straightforward. also, update the modules that
need this to use it.
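
it's about as small as helpers get; a sketch (the name sendmsg is a
guess):

    class Module:
        def sendmsg(self, connection, target, text):
            """send a privmsg to the given target on the given connection."""
            if target and text:
                connection.privmsg(target, text)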
more of a code move, actually: it now exists in (an overridden)
_handle_event, so that recursions happen against irc events directly,
rather than an already partially interpreted object.
with this change, modules don't need to implement do() nor do we have a
need for the internal_bus, which was doing an additional walk of the
modules after the irc event was already handled and turned into text. now
the core event handler does the recursion scans.
to support this, we bring back the old replypath trick and use it again,
so we know when to send a privmsg reply and when to return text so that
it may be chained in recursion. this feels old hat by now, but if you
haven't been following along, you should really look at the diff.
that's the meat of the change. the rest is updating modules to use
self.reply() and reimplementing (un)register_handlers where appropriate.
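
a sketch of the replypath trick (self.reply() is named above; the
rest is assumption):

    class Module:
        def reply(self, event, replystr):
            """privmsg the reply when the event came straight off irc;
            otherwise return the text so recursion can chain it."""
            if getattr(event, 'replypath', None):
                self.connection.privmsg(event.replypath, replystr)
            else:
                return replystr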
if the end of a chain has been reached via __end, but min_size
has not been satisfied, discard the last couple elements in the
chain and try again. use min_search_tries so we don't do this
forever.
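
in sketch form (min_size and min_search_tries come from the text;
next_word stands in for the real lookup):

    def build_chain(next_word, start, min_size, min_search_tries):
        """next_word(k1, k2) returns a random follower, possibly '__end'."""
        chain = list(start)
        tries = 0
        while True:
            word = next_word(chain[-2], chain[-1])
            if word != '__end':
                chain.append(word)
                continue
            if len(chain) >= min_size or tries >= min_search_tries:
                break  # long enough, or we've retried too many times
            # hit __end too early: drop the last couple elements and retry
            chain = chain[:-2] if len(chain) >= len(start) + 2 else list(start)
            tries += 1
        return chain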
needs authentication. this adds a sqlite database, to track a couple
settings. one, since_id, tracks the last successful time this poll
happened, so it's pretty important you don't muck around with it.
the default value is 0, so the first time this poll occurs, it may
be a bit spammy.
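
the settings bit is tiny; roughly (everything but since_id is an
assumption):

    import sqlite3

    conn = sqlite3.connect('dr_botzo.db')  # path is illustrative
    conn.execute('CREATE TABLE IF NOT EXISTS settings '
                 '(name TEXT PRIMARY KEY, value TEXT)')
    # default of 0 is why the first poll may be spammy: everything is "new"
    conn.execute("INSERT OR IGNORE INTO settings VALUES ('since_id', '0')")
    conn.commit()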
note that this isn't guaranteed: if the chain is such that the
current tuple has nowhere to go but to the end of the line, then
it will follow it; it doesn't try to go back and rebuild a different
chain or anything.
yeah, we have MegaHAL, but i can't find a good implementation in
python that actually works and is stable, so we'll implement a
simple thing ourselves. works pretty much like MegaHAL does, but
without the string corruption.
original code provided by ape, care of mike bloy
apparently at 3 AM i forgot to implement important features, because
this is pretty critical to the game actually being playable. let
the assignee, if the game is still open, get the text of the line
they are to reply to.
also display it, rather than the add line command, where appropriate.
i'd originally intended to use strings, too, but never decided on
whether there should be a game name, or the commands should search
something, or what, so i'll just quit waffling and remove it. numbers
only for now.
this module implements a game where players write a line in a story,
probably a nonsensical one, a couple lines at a time. once the player
who started the story has written something, the last line is
passed along to someone else in the game, who continues the story ---
or disregards the small bit of context entirely and writes their own
thing.
eventually you get a story like this:
line 1 by user 1
line 2 by user 1
line 3 by user 2 (who only read line 2)
line 4 by user 2
line 5 by user 3 (who only read line 4)
...
conceptually, that's the idea of the game. the code itself is still
a bit rough around the edges, but i can bang through a game by
myself. it needs some robustification, but it's fairly well
documented and the module does try to provide some clues over IRC while
you're playing.
config option explanations, more such options, etc. to come. critically
important is a way to get completed stories out of the bot, of course.
more to come, i'll shut up now and commit.